Artificial Intelligence is, after distributed ledger technology, the new frontier for legal scholars, and many are working to define how significant its future development will be and how it is going to shape our legislation, affect our judiciary, and transform our societies. Many are striving to outline new legal definitions of AI, propose novel legal subjectivity and liability for AI's defects or damage, or reframe the ethical principles that AI must follow, once we finally create it and release it to the world.
Although answers to these questions are surely important, the focus of this review is on something rather different. In our opinion, before we start solving the complex questions that define the legal status of AI, we should first provide creators with enough means and liberty to realize their ideas. Hence, we are confident that in the short term, the legal community should focus primarily on the following issues.
1. Plurality of Subjects Liable for AI
First and foremost, we see a real challenge in limiting the plurality of subjects responsible for AI and its potentially harmful consequences. Where responsibility and liability for AI and its behavior are not clear, inventors and early commercial users might, among other things, be wary of deploying potentially ground-breaking AI simply because of the risks associated with its introduction. Moreover, clarifying liability will allow for simpler and cheaper insurance, as insurers will be able to easily identify liable subjects and offer competitive coverage. Skipping this rather important step will result in slower progress and higher costs in AI research and development.
2. Wide Use of Data is Crucial
The quality of data that AI uses is often presented as the most important issue to be addressed. Processing higher-quality data (whatever the definition of "higher quality" might be) will surely result in better outcomes. If we agree that this premise is true, we have a strong interest in providing AI with the best data available. This can often be limited or made impossible by strict data protection rules, especially where the relevant data is personal. Within the European Union's robust data protection legislation, we should push for clearer (and wider) boundaries on the use of data by AI while, at the same time, protecting such data through wide use of anonymization and/or pseudonymization regimes. Preventing AI from learning from certain types or areas of data will result in skewed, poor-quality outcomes.
3. Sandboxes Everywhere
Releasing AI to the world without proper testing would be incredibly risky and, above all, unpredictable. European legislators should therefore focus on providing inventors and early users with open regulatory sandboxes that allow for public supervision of all forms of AI before they are approved for public use. These sandboxes should be mandatory, and AI should be exposed to rigorous and complex testing by competent individuals. Naturally, the public authorities involved are bound to store huge amounts of data as a by-product of AI testing and will be able to use this data to further improve their tests and define best practices.
4. Point to Safe AI
Connected to our previous point, state authorities should not be afraid to award certifications to AI that passes this rigorous testing. Such certification could, in certain cases, also limit liability and give inventors another reason to have their AI tested and, consequently, certified. Certification also gives the public a clear way to see which AI was tested, how it fared, and which authority is confident that the AI is safe enough for public use. From the use of certifications in other areas, we already know that this practice benefits all interested parties.
These points might not be as exciting as defining what AI is for all perceivable purposes, but they are, in our view, truly more important. Instead of trying to construct these rather difficult definitions (which will keep us busy for years to come), let us focus on creating an environment that promotes AI innovation that is safe, publicly available, and exceptional.
By Michal Matejka, Partner, and Milos Pupik, Associate, PRK Partners
This Article was originally published in Issue 6.8 of the CEE Legal Matters Magazine. If you would like to receive a hard copy of the magazine, you can subscribe on the CEE Legal Matters website.