The White Paper on Artificial Intelligence, published on February 19th by the European Commission, presents important building-block policy options to enable the trustworthy and secure development of artificial intelligence in the EU, fully respecting the prevailing values and the fundamental rights of its citizens. The enormous volume of data that has already been generated, and that yet to be generated, constitutes an opportunity for Europe to position itself at the forefront of global AI policy. The use of AI brings both fears and uncertainties: on the one hand, citizens fear being left powerless against the information asymmetries of algorithmic decision-making; on the other, companies are genuinely concerned about legal uncertainty.
The aim of a clear European regulatory framework must be to build trust among consumers and AI businesses, and thereby accelerate the uptake of the technology. Developers of AI are already subject to European and national legislation on fundamental rights (such as data protection, privacy, and non-discrimination), consumer protection, and product safety and liability rules. Although consumers expect the same level of safety and respect for their rights whether or not a product or a system relies on AI, some specific, inherent features of AI (such as its lack of transparency) can make the application and enforcement of this legislation more difficult. Member States, according to the White Paper, are pointing to the current absence of a common European framework.
AI – most specifically, machine learning models – reveals the ability to track and analyze the daily habits of people. AI can be used, in breach of EU data protection and other rules, by state authorities or other entities for mass surveillance and by employers to observe employee behavior. By analyzing large volumes of data and identifying relations among them, AI can also be used to retrace and de-anonymize data about persons, creating new personal data protection risks even for datasets that do not themselves include personal data. AI is also used by online intermediaries to prioritize information for their users and to perform content moderation. The data processed, the way applications are designed, and the scope for human intervention can all affect the rights to free expression, personal data protection, and privacy, as well as political freedoms.
Article 6 of the EU’s General Data Protection Regulation outlines the conditions under which personal data can be lawfully processed, one of which is that the data subject has given their explicit consent. However, there are exemptions to the rule for public security purposes, under which AI recognition technologies may lawfully be used to identify persons automatically.
When the Covid-19 pandemic began to spread, AI-powered systems for temperature detection proliferated in workplaces and even in airports, allowing numerous cameras to be monitored at once and alerts to be sent automatically to data controllers. Moreover, a thermal camera, although it does not pick up skin color, can record the facial image of anyone who registers a fever.
To this end, Recital 46 of the EU General Data Protection Regulation specifically mentions epidemics: “The processing of personal data should also be regarded as lawful where it is necessary to protect an interest which is essential for the life of the data subject or that of another natural person. Processing of personal data based on the vital interest of another natural person should in principle take place only where the processing cannot be manifestly based on another legal basis. Some types of processing may serve both important grounds of public interest ... for instance when processing is necessary ... for monitoring epidemics and their spread.” For EU companies, data processing must always comply with Articles 6 and 9 of the GDPR. In Greece, on March 18th the Data Protection Authority published guidelines on the processing of personal data in the context of Covid-19 protection measures. The guidelines state that the right to the protection of personal data is not absolute; it must be balanced against other fundamental rights, in line with the proportionality principle, in favor of society’s public good and interest.
In addition, earlier this month the European Data Protection Board emphasized that an emergency is a legal condition that may legitimize restrictions of freedoms, provided those restrictions are proportionate and limited to the emergency period. When processing is necessary for reasons of substantial public interest in the area of public health, there is no need to rely on individual consent.
The balance between the public benefit and individual privacy concerns must be reconsidered in the absence of a common European framework. Defending data protection rights during this pandemic has to take into account the length of the emergency period and the proportionality of the authorities’ actions. Those actions, along with their clear, regularly updated definition and public communication, must be the benchmarks for drafting secure legal frameworks on these issues in the coming months.
By Ioanna Michalopoulou, Managing Partner, Michalopoulou & Associates Lawgroup
This Article was originally published in Issue 7.3 of the CEE Legal Matters Magazine. If you would like to receive a hard copy of the magazine, you can subscribe here.