Estonia: 5 Practical Steps to Prepare for the Artificial Intelligence Act


On August 1, 2024, the Artificial Intelligence Act (Regulation (EU) 2024/1689, AI Act) entered into force, establishing a set of rules for artificial intelligence systems within the European Union. Although the regulation is already in force, its requirements will take effect gradually: the first deadlines fall in February and August 2025, while most requirements will apply starting August 2, 2026.

Mapping Artificial Intelligence Systems

The first step is to create a comprehensive overview of the AI systems you use – whether developed in-house or merely AI-based tools used in daily operations. This will help you assess which systems fall under the regulation’s risk categories: unacceptable, high, limited, or minimal risk.

When mapping the systems, consider your role, as obligations differ somewhat according to roles. You need to clarify for each AI system whether you are, for example, a provider, deployer, importer, product manufacturer, or in some other role. An overview of the systems will provide a clear picture of their compliance with the regulations. And even if the AI Act does not apply, there may still be a need to comply with other legal requirements.
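For organizations that prefer a structured inventory, the mapping step described above can be sketched as one record per AI system. This is only an illustrative aid: the field names and example entries below are our own, and the AI Act does not prescribe any inventory format.

```python
from dataclasses import dataclass

# Illustrative inventory record for one AI system.
# Field names and values are hypothetical, not mandated by the AI Act.
@dataclass
class AISystemRecord:
    name: str
    role: str            # e.g. "provider", "deployer", "importer", "product manufacturer"
    risk_category: str   # "unacceptable", "high", "limited", or "minimal"
    in_house: bool       # developed internally vs. third-party tool

inventory = [
    AISystemRecord("CV screening tool", "deployer", "high", in_house=False),
    AISystemRecord("Spam filter", "deployer", "minimal", in_house=False),
]

# Systems in the "unacceptable" category must be addressed first,
# since prohibitions apply from February 2, 2025.
prohibited = [s.name for s in inventory if s.risk_category == "unacceptable"]
```

Even a simple table like this makes it easier to see, per system, which role you occupy and which compliance track applies.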

Assessing Compliance

After mapping, you can begin to analyze what risk level your systems correspond to. First, you must determine whether the AI systems you are using are allowed at all, as some systems will be prohibited starting February 2, 2025. The AI Act deems systems such as workplace emotion-recognition applications and manipulative systems incompatible with the values and fundamental rights of the European Union.

Next, examine whether the AI system is intended to be used as a safety component of a product, or is itself a product (e.g., industrial machines, toys). Additionally, the regulation lists fields that may involve high risk (e.g., biometrics, systems used in critical infrastructure, education, or employment).

In contrast, minimal-risk systems, such as spam filters, pose a relatively small threat to people’s rights. The regulation neither restricts their use nor imposes additional requirements.

Clarifying Requirements

Next, clarify what the specific requirements are and what needs to be done in the context of each AI system. If you have determined that you have a high-risk AI system, you must, among other things: (a) create a risk management system, (b) organize data management, (c) prepare technical documentation, and (d) ensure transparency.
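As a rough planning aid, the four high-risk obligations listed above can be tracked per system. This is a sketch, not a compliance tool; the labels are our own shorthand for the obligations, and the AI Act sets out their detailed content.

```python
# Illustrative checklist of the high-risk obligations named in the text.
# The labels are shorthand; the AI Act defines the actual requirements.
HIGH_RISK_OBLIGATIONS = [
    "risk management system",
    "data management",
    "technical documentation",
    "transparency",
]

def open_items(done: set[str]) -> list[str]:
    """Return the obligations not yet addressed for one high-risk system."""
    return [o for o in HIGH_RISK_OBLIGATIONS if o not in done]

# Example: documentation and transparency are still outstanding.
remaining = open_items({"risk management system", "data management"})
```

Running the same checklist across all mapped high-risk systems gives a simple gap list from which realistic internal deadlines can be set.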

Ensuring Compliance with Other Legal Regulations

In the rush to implement the AI Act, do not forget to comply with other closely related legal acts, such as the General Data Protection Regulation. If you use personal data for the development and further training of AI systems, it is essential to ensure that such activities are purposeful and based on appropriate legal grounds. Transparency must also be ensured, meaning that necessary information must be provided regarding data processing related to the AI system and the rights of data subjects established by law must be respected.

Copyright regulations are also important, as it must be ensured that the use of content protected by copyright for training and utilizing AI complies with the relevant rights. Verify that the training data and the outputs of the AI system do not infringe on anyone’s copyrights and that there is permission for content usage.

Ensuring Cybersecurity

Even though Estonia has not yet established specific guidelines, adopted implementing legislation, or designated responsible authorities under the AI Act, it remains a digital innovation leader within the European Union. The country is home to many companies that rely heavily on advanced AI systems. To support the development plans of Estonian organizations, the Estonian Information System Authority has commissioned an analysis of AI technology risks and mitigation measures. This is an essential read for anyone planning to adopt artificial intelligence, or who has already adopted it without considering the associated risks.

The analysis offers an overview of the current state of AI systems, their deployment models, and the associated cybersecurity risks, along with practical measures to address these challenges. As part of this initiative, a quick guide and worksheet have been created for AI system implementers, providing tools to assess deployment models and systematically evaluate risks and compliance needs. These resources are also recommended for effective control of system risks and for selecting appropriate mitigation measures.

Although there is still time to comply with the AI regulation, it is crucial to act now. Build a team that includes specialists who can cover both the technological and legal requirements. If there are many AI systems with different risk levels, compliance should be approached systematically and gradually, establishing realistic internal deadlines – starting with prohibited systems, then moving to high-risk systems.

By Egon Talur, Partner, Priit Pold, Senior Associate, and Liina Jents, Specialist Counsel, Cobalt

This article was originally published in Issue 11.11 of the CEE Legal Matters Magazine. If you would like to receive a hard copy of the magazine, you can subscribe here.