As of 2 February 2025, the provisions of Chapters 1 and 2 of EU Regulation no. 2024/1689 laying down harmonised rules on artificial intelligence (the “EU AI Act”) concerning AI literacy and the ban on AI systems that pose unacceptable risks have become applicable.
The EU AI Act follows a phased approach to application, with different parts of the act becoming applicable 6, 12, 24 and 36 months after its entry into force on 1 August 2024. The first of these milestones was reached on 2 February 2025, which marks the date of application of Chapter 1 (“General provisions”) and Chapter 2 (“Prohibited AI Practices”) of the EU AI Act.
1. AI LITERACY OBLIGATIONS
As of 2 February 2025, providers of AI systems (meaning entities which develop AI systems, place AI systems on the market or put AI systems into service under their own name or trademark), as well as deployers of AI systems (meaning entities using AI systems under their authority in the course of professional activities), are required to take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and of other persons dealing with the operation and use of AI systems on their behalf. The EU AI Act defines AI literacy as the skills, knowledge and understanding required to make an informed deployment of AI systems and to gain awareness about the opportunities and risks of AI and the possible harm it can cause.
At the moment, no specific sanctions are provided for breaches of the AI literacy obligations. However, it can reasonably be expected that the performance or non-performance of these obligations may be regarded by the competent authorities as a factor when determining the appropriate sanction to be applied to a provider or deployer for other breaches of the EU AI Act.
2. PROHIBITED AI SYSTEMS
Also as of 2 February 2025, AI systems posing so-called “unacceptable risks” to fundamental rights, the rule of law and the values of the European Union are prohibited under Chapter 2 of the EU AI Act. The prohibited AI practices are as follows:
- Manipulation and deceptive techniques: placing on the market, putting into service or use of AI systems employing subliminal, purposefully manipulative or deceptive techniques with the objective or effect of materially distorting the behaviour of a person or group of persons by appreciably impairing their ability to make an informed decision.
- Exploitation of vulnerabilities: placing on the market, putting into service or use of AI systems that exploit any vulnerabilities of natural persons arising from age, disability, or socio-economic conditions with the objective or the effect of materially distorting their behaviour in a manner that causes or is likely to cause significant harm.
- Social scoring: placing on the market, putting into service or use of AI systems for social scoring (i.e. the evaluation or classification of people) based on behavioural or personal attributes, leading to detrimental or unfavourable treatment either in contexts unrelated to those in which the data was originally gathered, or that is unjustified or disproportionate to their social behaviour or its gravity.
- Individual criminal offence risk assessment and prediction: placing on the market, putting into service or use of AI systems to assess or predict the risk of a person committing a criminal offence, based solely on profiling.
- Unauthorised facial recognition databases: placing on the market, putting into service or use of AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
- Emotion recognition in work and education: placing on the market, putting into service or use of AI systems to infer the emotions of individuals in workplace or educational settings, except where justified for medical or safety purposes.
- Biometric categorisation: placing on the market, putting into service or use of AI systems categorising individuals based on their biometric data to deduce sensitive information, such as race, political opinions, religion, trade union membership or sexual orientation.
- Use by law enforcement of real-time biometric identification: the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, except in narrowly defined circumstances.
Sanctions for non-compliance with the above prohibitions have not yet entered into force, their application being delayed until 2 August 2025 under the EU AI Act’s phased approach. Once applicable, they provide for administrative fines of up to EUR 35,000,000 or 7% of the total worldwide annual turnover of the entity in breach, whichever is higher.
3. DOES THIS APPLY TO YOU?
The EU AI Act’s regulatory scope is broad, encompassing a wide range of AI activities and stakeholders. It applies to providers placing AI systems or models on the EU market, irrespective of their country of establishment, as well as to deployers of AI systems within the EU, and even extends to entities established in third countries whose AI system outputs are used within the EU.
Organisations involved in importing, distributing or manufacturing products which integrate AI systems are also subject to the EU AI Act’s requirements. Further, the EU AI Act also applies to affected persons who are located in the EU.
By Monica Statescu, Partner, and Eduard Maxim, Associate, Filip & Company