The history of artificial intelligence can be traced back to the 1950s, when researchers and computer scientists first explored the possibility of creating machines that could accomplish tasks typically performed by humans. Since then, AI has undergone several significant phases of development, each marked by notable milestones. In 1997, IBM’s Deep Blue computer famously defeated world chess champion Garry Kasparov. In recent years, AI has seen the development of deep learning and neural networks capable of processing large amounts of data and making complex predictions.
AI solutions have been around for a while, even if most of us are unaware of them. Google, Samsung Research, and Tesla’s cars have been using AI, while we have been talking to Siri and Alexa for quite some time. However, there was no real “conversation” with any of them, as they could not pick up on context, let alone read between the lines.
That’s exactly what ChatGPT can do — understand you and generate accurate content and well-structured answers. This is why ChatGPT represents such a breakthrough: it shows signs of “human-like understanding” on an entirely new level, especially compared to Siri or Alexa. Nevertheless, we do not yet know where this “new level” ends, as every new version of ChatGPT makes fresh breakthroughs and opens new possibilities. For instance, on the GPT-4 release page, OpenAI claims that ChatGPT can pass the bar exam with a score in the top 10%, whereas the previous version (3.5) scored in the bottom 10%.
When asked to define artificial intelligence (“AI”), ChatGPT answered: “Artificial Intelligence (AI) refers to the simulation of human intelligence in machines designed to think and act like humans.”
In a nutshell, AI is a set of algorithms that quickly process vast amounts of data, simulating human intelligence. Although the ultimate objective is to improve our lives, AI carries many risks and challenges, and it is up to humanity to deal with them.
The emergence of these technologies raises important questions that must be addressed, both by and for legal professionals. ChatGPT could affect the laws and rules we have been applying for so long.
Who is entitled to the copyright if I ask ChatGPT to write a song for me? Me? ChatGPT? Microsoft? Is using AI in any manner considered plagiarism? Who would be liable for damages caused by AI?
Since students were quick to start (ab)using ChatGPT, universities reacted by banning AI tools. Interestingly, they now use AI to detect whether AI was used in student work. Businesses have also jumped at the powers of AI, with CNET being the most widely discussed case.
In addition, AI systems rely on collecting and analyzing vast amounts of data, including potentially sensitive personal information, in performing tasks and making decisions, which may lead to privacy violations if not adequately secured.
Another major concern with AI is that it may cause massive layoffs and increase unemployment, as many jobs are at risk of becoming obsolete. In accounting, for example, AI could automate most tasks, such as data entry, financial reporting, and tax compliance, eliminating the need for human involvement or at least reducing the time required to complete such processes. Similarly, AI may affect logistics, manufacturing, healthcare, IT, finance, and other industries.
New technology will require new rules. Specific legislation is still under development, and we have yet to determine what impact improved AI will have on everyday life. Still, we are already familiar with some of the proposed bills.
The EU AI Act, proposed in 2021, focuses on protecting human rights. The regulation divides AI applications into four risk categories. The first covers applications posing an unacceptable risk, such as social scoring, which are banned outright. The second covers high-risk applications, such as AI used in education, law enforcement, biometric identification, and employment (e.g., CV-scanning tools), which are subject to specific legal requirements. The third covers limited-risk applications, which are subject to certain transparency obligations (e.g., chatbots will need to inform users that they are speaking to a machine). Lastly, the fourth category covers minimal-risk applications, which face few restrictions. The proposal aims to set harmonized rules for the development, marketing, and use of AI systems in the Union, along with transparency obligations for specific AI systems.
In addition, the United States AI Bill of Rights lays out five principles, each accompanied by technical guidance, for responsible AI implementation. These principles include ensuring safe and effective systems, protecting against algorithmic discrimination, guaranteeing data privacy, providing notice and explanation of AI usage, and offering alternative options for those who wish to opt out.
In general, the guiding principles for AI regulation in the Western world aim to balance promoting innovation and growth in AI with protecting human rights and addressing ethical considerations. This includes ensuring that the development and deployment of AI do not violate fundamental human rights, holding individuals and organizations accountable for the impacts of AI systems, and requiring AI systems to be transparent in their decision-making processes. Legislators are also becoming increasingly interested in addressing liability issues connected to AI, ensuring that protection from harmful effects is at the same level as with any other technology.
On the other hand, China introduced the Internet Information Service Algorithmic Recommendation Management Provisions, which regulate how businesses can use AI-driven algorithms to make recommendations. The provisions require service providers to inform users about the nature of the algorithmic recommendation services they offer and to give users the option to opt out of being targeted based on their personal characteristics.
Serbia has also addressed AI through its 2020–2025 AI Development Strategy. The strategy focuses on improving Serbia’s overall economic situation through AI development, but also recognizes the benefits of implementing artificial intelligence in public administration, medicine, and healthcare, as well as in traffic and road infrastructure. As Serbia’s legal framework follows EU standards, the strategy incorporates European principles such as non-discrimination, transparency, security, and human oversight.
Given the numerous initiatives underway to regulate AI globally, it’s too soon to predict the precise form these regulations will take, particularly with the rapidly evolving nature of technology.
These are exciting and challenging times for all stakeholders. Addressing AI concerns through regulatory solutions requires a multi-disciplinary approach involving technology developers, policymakers, researchers, and society at large. The complexity of the task is immense: creating equally novel legal frameworks first requires a thorough comprehension of all the potential uses of the new technology.
By Nemanja Sladakovic, Senior Associate, Gecic Law