Artificial intelligence (AI) encompasses a range of technologies, including machine-learning techniques, robotics and automated decision-making systems, used to improve prediction, optimize operations and resource allocation, and personalize services.
Business adoption of AI is increasing, yet there are currently no clear rules in the EU governing liability for damage caused by AI. Recognizing this, the European Commission has proposed a Directive on adapting non-contractual civil liability rules to artificial intelligence (AI Directive). The aim is to address the challenges of attributing liability for damage caused by AI technology and to promote the adoption of AI by businesses across the EU.
The AI Directive seeks to provide clarity and predictability on liability for damage caused by AI technologies. Clear liability rules should foster trust in AI and increase its adoption, benefiting both individuals and businesses, who can then take advantage of AI technologies without worrying about the legal implications. The AI Directive also seeks to tackle the legal uncertainty and inconsistency arising from differences in national regulations. A unified approach to AI liability in the EU can help create a level playing field for all companies and encourage the development and adoption of AI technologies. The Directive recognizes the challenges posed by complex AI systems and puts forward rules to address them.
One of the challenges of regulating AI is the difficulty of attributing liability for damage to a specific person or entity. The complexity and autonomy of AI systems make it hard to prove who is at fault when damage occurs. The AI Directive addresses these challenges by establishing rules on the disclosure of evidence and a rebuttable presumption of a causal link between the defendant's fault and the output produced by the AI system. To invoke this presumption, the claimant must first provide sufficient evidence that the defendant was at fault through non-compliance with a duty of care under national or EU law. Next, it must be shown that it is "reasonably likely" that this fault influenced the output, or the failure to produce an output; in other words, there must be a plausible connection between the defendant's fault and the output of the AI system that caused the harm or damage. Finally, the claimant must demonstrate that the output in question gave rise to the damage suffered, that is, that the harm was caused by the AI system's output rather than by some other factor.
Overall, the AI Directive's causality rules aim to ensure that victims of harm or damage caused by AI systems are able to establish liability and seek compensation. They provide a framework for determining fault and causation in cases involving AI systems, helping clarify the legal responsibilities of the parties that develop, deploy and use them. However, the presumption of causality between the defendant's fault and the output produced by AI technology applies only to high-risk AI systems as defined in the AI Act: AI systems used as a safety component of products subject to third-party ex-ante conformity assessment, and other stand-alone AI systems with mainly fundamental-rights implications, for example biometric identification and categorization of natural persons, management and operation of critical infrastructure, and law enforcement. Other AI technologies that fall outside this definition may not benefit from the same presumption of causality.
Furthermore, despite the AI Directive's efforts to help claimants overcome AI-specific obstacles when claiming damages, the claimant still bears the burden of proving the liable person's fault or breach of a duty of care, as well as the damage itself. This can be difficult and costly, especially for individuals with limited financial resources. The AI Directive requires relevant evidence to be disclosed to claimants under certain conditions, but not all claimants will have the technical expertise to understand and interpret that evidence, which could put them at a disadvantage in court.
The AI Directive is a step in the right direction, but it covers only non-contractual civil liability rules for AI-related damage. It does not address other legal aspects of AI, such as intellectual property rights or data protection. Furthermore, while the AI Directive aims to promote the adoption of AI technologies by providing a common framework of liability rules, as a directive it must be transposed into national law, and divergent implementation across member states could create additional legal complexity and uncertainty for companies and individuals operating in the EU. It will therefore be important to ensure sufficient coordination and cooperation between member states when implementing the AI Directive, to minimize inconsistencies and promote a more cohesive legal framework for AI-related liability in the EU.
By Gjorgji Georgievski, Partner, and Dimitar Trombevski, Junior Associate, ODI Law