Legal Liability for AI Decision Making

With the most recent technological developments, Artificial Intelligence [“AI”] and related technologies are being deployed by governments and businesses alike across a wide spectrum of sectors. As applications of AI increase exponentially in nearly every aspect of society, they carry an accompanying risk that is difficult to measure. In this article, we focus on the possible legal ramifications and liability risks associated with AI decision-making.

While there is no widespread agreement on a set definition of AI, it can most simply be defined as “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings”. Today, AI programs are utilised in a plethora of areas to varying degrees. Given enough data and an accurate initial program, an AI system can be used to identify faces, predict future criminality, or carry out seemingly more innocuous tasks like driving a car or providing the results of your internet search.

Risks Associated with AI Usage and Liability

Given its widespread usage in nearly every aspect of our society, AI, especially data-driven decision-making and profiling algorithms, poses a real and serious risk to the exercise of legally recognised rights. Hostile and malicious applications of AI technologies raise concerns about freedom of expression, the right to privacy and the prohibition of discrimination, while unintentional errors in everyday AI decisions may cause serious bodily or monetary harm in areas such as autonomous vehicles and medical devices.

Taking into account that AI decisions are not made in an informational vacuum, but exist in relation to many other factual and technological phenomena, the question of legal liability for AI actions becomes increasingly difficult to answer. In the world of AI and big data, there is an endless stream of data exchange and an infinitely complex chain of relationships between machines and humans, which in turn makes it that much harder to identify the party responsible for any given risk that materialises. While specific regulation on AI usage is certain to be adopted in the near future, the essential ethical and legal questions about our method of liability allocation should be asked and answered before taking legislative action.

As scholar Peter Cane noted: 

“Responsibility is not just a function of the quality of will manifested in conduct, nor the quality of that conduct. It is also concerned with the interest we all share in security of person and property, and with the way resources and risks are distributed in society. Responsibility is a relational phenomenon.”

Therefore, it is impossible to answer the question of liability allocation in a straightforward and standardised way, as the answer will inevitably be tied to each individual case at hand. With this in mind, as with other, more conventional conduct that is penalised in society, it would be a mistake to expect a single method of liability allocation to apply to each and every risk associated with the usage of AI and related technologies.

Can AI Systems be Held Liable for Their Own Decisions?

The first question that comes to mind for many of us about the responsibility arising from the use of artificial intelligence technologies is whether these systems, which seem to make decisions on their own without any human influence, should be held accountable for the consequences of their own behaviour.

Persons that can be held legally responsible, or “legal subjects”, primarily consist of natural persons, that is, human beings. However, as a result of legal developments since the Middle Ages, certain structures that are not actual people can also be recognised as legal subjects. According to some scholars, the recognition of public and private institutions as “legal entities” is one of the cornerstones of the development of European law.

Today, all around the world, legal entities as well as natural persons are treated as legal subjects. However, these legal entities are in fact the sum of people who act consciously for a certain purpose and can also explain the purposes of those actions. By contrast, the artificial intelligence systems used today generally fall under the category of “narrow artificial intelligence”, that is, systems equipped with the ability to make decisions in a very specific area. Since it is not possible to say that these systems act consciously and independently, they can only be legal objects, not legal subjects. While the discussion may arise again if and when a conscious, general artificial intelligence system is produced, it does not seem possible today to hold artificial intelligence systems responsible for their own decisions.

What is the Legal Nature of an AI?

Now that we know artificial intelligence systems cannot be held responsible for their own actions and decisions and can only be qualified as legal objects, the question of who will be held liable for possible damages arises. An important aspect of the question lies in the fact that artificial intelligence systems pass through the hands of many people, such as software developers, producers and data providers, before they come into use.

In seeking an answer to the question of which of these people will be held liable for possible damages, the legal nature of artificial intelligence as a legal object should first be addressed, and responsibility should then be assigned according to the circumstances in which the damage occurs.

Today, while the legal nature of artificial intelligence is a point of disagreement in European law, the dominant view is that artificial intelligence systems in general use are in fact a “product” offered to end users and other manufacturers. Although there are contrary opinions, artificial intelligence systems can be regarded as products within the meaning of Article 2 of Product Liability Directive 85/374, adopted by the Council of the European Communities [the “Directive”], and can therefore be assessed under the “product liability” regime of the Directive.

Amid the discussions on the applicability of the Directive to artificial intelligence, the European Union published a Regulation Proposal in April 2021 to harmonise the rules on artificial intelligence. The Proposal addresses many problems related to artificial intelligence but does not contain any rules on liability. A further regulation on the question of liability arising from artificial intelligence and related technologies is expected at the end of 2021 or the beginning of 2022.

In Turkish law, the concepts of “products” and “producers” are not fully regulated under the Turkish Code of Obligations. “Intangible goods”, which arguably include AI technologies, were first included as “goods” through an amendment to the Consumer Protection Act No. 4077 and the Regulation on Liability for Damages Caused by Defective Goods in 2003. After the repeal of this law and a long legislative gap, the Product Safety and Technical Regulations Act No. 7223 [the “PSTRA”] entered into force on 12 March 2021. Under this act, intangible goods, and by extension AI systems, are classified as “products”.

Who Can be Held Liable for AI Decisions? 

AI is classified as a product under both the Directive and the newly adopted PSTRA in Turkish law. Therefore, liability for possible damages should be assessed in accordance with this classification.

The Directive and the PSTRA are compatible in many respects. Most importantly, both regulations stipulate that the manufacturer and the importer, together with the distributor as a secondarily responsible party, are jointly liable for damage caused by a defect in the product, whether that damage is suffered by the user or by non-user third parties. Another important feature of both regulations is that manufacturers, importers and distributors are held liable on the basis of strict liability. For strict liability to arise, the manufacturer does not need to be at fault and can be held liable regardless of their conscious decisions or their awareness of the defect.

Since the Directive and the PSTRA adopt the principle of strict liability, it is sufficient to prove that the product is defective, that damage occurred and that there is a causal link between the defect in the product and the damage in order for the manufacturer to be held responsible. In Turkish law, it is argued that the burden of proof regarding the defectiveness of the product should be interpreted as narrowly as possible. Even in decisions adopted prior to the PSTRA, the Court of Cassation ruled that, “Due to the complex nature of manufacturing, it will be impossible for the injured person to prove some issues, therefore the presumption of defect should be accepted as proof.” In other words, the damage caused by the product was itself accepted as proof that the product was defective. Especially since artificial intelligence systems consist of very complex software, algorithms and databases, it may be almost impossible for the injured person to prove that the artificial intelligence causing the damage is defective. While the PSTRA has not yet been applied broadly due to its recency, this line of reasoning would make for a sounder application of the act.

By Zahide Altunbas Sancak, Partner, Guleryuz & Partners