New European Rules for Artificial Intelligence


In March 2024, MEPs approved new rules for artificial intelligence (AI), the so-called Artificial Intelligence Act, which the European Commission first presented in April 2021. It is the world's first comprehensive piece of AI legislation. The Act aims to improve the functioning of the internal market by setting a single legal framework for the development, launch and use of AI. The AI Act also seeks to reduce the administrative and financial burden on businesses, especially small and medium-sized enterprises.

Main goals

The European Commission has set out four main goals for the regulatory framework:

  • to ensure that AI systems to be launched on the market and used within the EU are safe and comply with existing EU legislation on fundamental rights and values,
  • to provide legal certainty to facilitate investment and innovation in AI,
  • to improve the administration and effective enforcement of existing legislation governing the fundamental rights and security requirements applicable to AI systems; and
  • to facilitate the development of a single market for legal, safe and trustworthy AI applications and avoid market fragmentation.

The Act sets out a unified definition of AI, which includes machine learning as well as expert systems and statistical models. The Act defines AI as "software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with".

It also harmonises the rules for the development, launch and use of AI systems across the EU, based on a balanced risk-assessment approach.

Levels of risks

The Act follows a risk-assessment approach that distinguishes between the AI used, creating (i) unacceptable risk, (ii) high risk, (iii) limited risk and (iv) minimal risk.

  1. Unacceptable risk

AI systems that are a threat to humans are considered an unacceptable risk. This includes, in particular, systems that:

  • manipulate people's opinions, behaviour and decision-making in a way that could result in physical or psychological harm,
  • exploit the vulnerability of a particular group of people (e.g. children or mentally disabled people) in order to manipulate their behaviour in such a way as to cause physical or psychological harm,
  • classify people based on their social behaviour (so-called social scoring), and
  • perform real-time remote biometric identification in publicly accessible locations.

Such systems will be banned. However, in certain cases they may be exempted.

  2. High risk

AI systems that have a negative impact on safety or fundamental rights will be considered high risk. These will be divided into two main categories. The first category will be systems used in products covered by EU product safety legislation, including toys, cars, medical devices, lifts and aviation products.

The second category includes AI systems from eight specific areas that will have to be registered in the EU database, including:

  • biometric identification and categorisation of persons,
  • management and operation of critical infrastructure,
  • education and professional training,
  • employment, workforce management and access to self-employment,
  • access to and use of essential private and public services and benefits,
  • law enforcement,
  • migration, asylum and border management, and
  • the administration of justice and democratic processes.

High risk AI systems must comply with strict quality rules and with disclosure, control and monitoring requirements. The Act deals with these high risk systems in detail, and the specific regulatory requirements fall in particular on their providers. Providers will have to, among other things, maintain a risk management system throughout the system's life cycle, ensure data governance, create technical documentation demonstrating compliance, draw up user instructions, implement a quality management system, etc.

However, obligations apply not only to providers but also to others in the “chain”, such as importers, distributors or users of such systems. The role of importers and distributors is primarily to verify that the high risk system complies with the applicable regulations: the importer must verify the conformity of the system by checking its documentation, while the distributor must verify that the system bears the CE marking (conformité européenne). The CE marking demonstrates that a product complies with EU safety, health and environmental requirements.

The challenge for users is to use the high risk AI system in accordance with the instructions provided by the provider. In the future, there will inevitably be situations where a user, i.e. a company or its customers, suffers damage as a result of using the system; whether the AI system was used in accordance with or in violation of the rules for its use will then have to be proven in litigation. Other obligations include, for example, human supervision of the system, monitoring of input data and system operation, and retention of automated records for at least six months.

  3. Limited risk

For AI systems with limited risk, meeting minimum transparency requirements will be sufficient under the new regulatory framework. Their users will have to be informed that they are interacting with AI and will have to be able to decide whether or not they want to continue using the system.

This category will mainly include AI systems that generate or manipulate image, audio or video content.

  4. Minimal risk

The Act allows for essentially free use of AI with minimal risk. This group includes applications such as video games or spam filters.

GPAI

GPAI stands for "general purpose AI". It refers to systems such as ChatGPT. These are among the most widely used types of AI by the public today. GPAI can be classified as high-risk AI systems if they can be directly used for at least one purpose that is classified as high-risk. Special rules apply to providers of all GPAI models and they must meet the following requirements: 

  • to create technical documentation of the GPAI model and provide it to the AI Office or the locally competent supervisory authority upon request,
  • to create documentation for third parties using the GPAI model to build their own AI systems,
  • to implement a policy of respecting EU copyright law, specifically the opt-out principle for text and data mining, and
  • to make available a summary of the content used to train the GPAI model.

EUROPEAN AI OFFICE

The European AI Office ("the Office") will be established within the European Commission as part of the Directorate-General for Communications Networks, Content and Technology. The Office forms the foundation for a unified European AI governance system.

In particular, it will play a key role in the implementation of the Act by supporting national administrations. Primarily, it will enforce the rules for GPAI systems, will have powers to require information from AI system providers and will also have the power to impose sanctions for breaches of the Act.

The Office will work with both Member States and the wider expert community through dedicated forums and expert groups. Specifically, the Office will be tasked with supporting the Act and enforcing the rules set out therein, contributing to the application of the Act among Member States, including the establishment of various advisory bodies at EU level, developing tools and methodologies for assessing the capabilities and impact of AI systems, preparing guidelines and directives to support the effective implementation of the Act, etc.

SANCTIONS

The Act also contains significant sanctions for breaches of the rules contained therein, in particular sanctions for breaches of the absolute prohibition on the use of certain AI systems and sanctions for operators for breaches of their obligations.

There are two types of sanctions. They are either fixed penalties or penalties expressed as a percentage of the company's annual turnover, with the higher option being applied. In the case of operators, a fine of up to EUR 15 million or 3% of annual turnover may be imposed. For breaches of the absolute prohibition, the fine can be up to EUR 35 million or 7% of the company's annual turnover.
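The "higher of" rule described above can be sketched in a few lines. This is an illustrative calculation only, assuming the two ceilings mentioned in this article (EUR 15 million / 3% for operator breaches, EUR 35 million / 7% for breaches of the absolute prohibition); the Act itself determines which tier applies to a given breach.

```python
# Illustrative sketch of the AI Act's two-tier fine ceilings:
# the applicable maximum is the HIGHER of a fixed amount and a
# percentage of the company's annual worldwide turnover.

def max_fine(annual_turnover_eur: float, prohibited_practice: bool) -> float:
    """Return the maximum possible fine under the assumed two-tier scheme."""
    if prohibited_practice:
        fixed, pct = 35_000_000, 0.07  # absolute prohibition breached
    else:
        fixed, pct = 15_000_000, 0.03  # operator obligations breached
    return max(fixed, pct * annual_turnover_eur)

# A company with EUR 1 billion turnover breaching an operator obligation:
print(max_fine(1_000_000_000, prohibited_practice=False))  # 30000000.0
```

As the example shows, for large companies the percentage-based ceiling quickly exceeds the fixed amount, which is the point of the "higher of" construction.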

When imposing a fine, the supervisory authority will also take into account:

  • the nature, seriousness and duration of the breach,
  • whether the entity has previously been fined for the same offence, and
  • the size of the entity that committed the breach and its market share.

IMPLEMENTATION

As already indicated above, an AI Office will be established within the Commission to monitor the effective implementation and compliance of GPAI model providers.

The Act's rules will become applicable in stages, along four main timelines running from its entry into force: 

  • 6 months to prohibit the use of AI systems posing an unacceptable risk,
  • 12 months for rules on GPAI,
  • 24 months for rules on high-risk AI systems listed in Annex III, and
  • 36 months for rules relating to high-risk AI systems listed in Annex II.
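The staggered periods above can be turned into concrete application dates once the date of entry into force is known. The sketch below assumes, for illustration only, an entry into force on 1 August 2024; substitute the actual date as appropriate.

```python
# Illustrative sketch: computing when each set of AI Act rules starts
# to apply, given the staggered periods and an ASSUMED entry-into-force
# date (1 August 2024, used here purely for illustration).
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # assumption for illustration

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day-of-month preserved)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

TIMELINES = {
    "prohibitions (unacceptable risk)": 6,
    "GPAI rules": 12,
    "high-risk systems (Annex III)": 24,
    "high-risk systems (Annex II)": 36,
}

for rules, months in TIMELINES.items():
    print(f"{rules}: apply from {add_months(ENTRY_INTO_FORCE, months)}")
```

Under the assumed date, the prohibitions would apply from February 2025 and the last tranche of high-risk rules from August 2027.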

CONCLUSION

Given the growing importance of AI in almost all industries across the society, it is certainly advisable to set some rules for its use. The AI Act is a major step towards the creation of a coherent and harmonised legal framework for AI in the European Union that emphasises safety, transparency and respect for fundamental rights. The question remains how the new regulations will work in practice and what influence they will have on the development of other similar regulations outside the EU.

By Jaroslav Tajbr, Partner, and Jan Cermak, Junior Associate, Eversheds Sutherland