Will Artificial Intelligence Kill Copyright?


Artificial intelligence (AI) is set to change how we perceive intellectual property.  With the recent surge of AI, people are discovering that AI can generate works of art.  This raises important questions, such as the attribution of authorship for these creations and the protection of intellectual property (IP) rights.

We explore some critical issues regarding AI and its connection with IP here and offer potential solutions.  We also provide insight into famous cases and trends that may be crucial for the further development of AI in IP rights.

The Owner of the Copyright

So, who is the author of intellectual property created by an AI?  Is it the individual giving instructions and prompts?  The creator of AI?  The owner of the data used for training the AI?  The AI itself?  Is there an author?

Even though using AI to create IP is not explicitly regulated, existing legislation still answers some of these questions.  For example, copyright laws in the USA, Spain, Germany, and Serbia recognize only human beings as authors, so AI cannot claim authorship in these countries.  Similarly, patent offices in the UK, USA, Europe, South Korea, Taiwan, and Germany have refused to register an AI as the inventor of a patent (the DABUS case).  When we asked ChatGPT whether it can hold copyright, this is the response we got: “As an AI language model, I am a computer program, and I do not have legal personality, so I cannot hold copyright over anything.  The responses I generate are generated from patterns and associations in the data I was trained on and are not owned by me or any individual or organization.  However, the data I was trained on may be subject to copyright owned by the original sources or organizations that provided it.”  So, AI cannot be the author or the inventor.

But who can?

To enjoy copyright protection, a work must be an original creation of its author.  This poses the question: could an AI creation be considered original?  Even though it is not directly prohibited, current case law weighs heavily against it.  Both the Court of Justice of the European Union and the US Supreme Court have held that copyright protection applies only to original works involving some degree of intellectual creation, the creative powers of the mind, and a reflection of the author’s personality (in the Infopaq and Feist Publications cases, respectively).  Since the element of originality is missing when an AI is the sole creator, the author is missing as well.

Based on this, there are currently two possible solutions to this issue.  First, there is no author, and the IP created by AI falls into the public domain.  Alternatively, the work could be considered a derivative of all the materials the AI used for training.

However, there is an obvious problem: proving that something was AI-generated.  Recognizing whether a work was AI-generated is nearly impossible.  The problem is compounded by the fact that giving an AI the same prompt can lead to different results, depending on the prompts it received earlier.  Even AI detection software can fail at this task, especially if the AI’s code is unavailable to the public, which will most often be the case.  One way to avoid this issue is to have artists record the entire creation process.  This is obviously impractical, but for now it is the only way to remove any doubt about the origin of the IP.

The UK, however, takes a different approach.  Under the UK Copyright, Designs and Patents Act, the author of a computer-generated work (one generated by a computer with no human author) is the person who made the arrangements necessary for its creation.  This rule applies to copyright, music production, and design but not to patents.  While this provides some clarity, it does not eliminate the uncertainty surrounding the authorship question.  Depending on how complex the prompts and algorithms are, the person who made the necessary arrangements could be either the developer of the AI or the user who gave the prompt.  This, however, only covers works the AI created on its own.  What if the AI didn’t create the IP but merely helped you create it?  Could AI be considered an ordinary tool, just like any other software?  Such cases are more complex and need to be resolved individually.

We already have an existing case that may indicate a way to solve this issue in the future.  Zarya of the Dawn, a graphic novel created with the assistance of AI, has been registered in the United States.  It is important to note that the AI generated only the images.  A human writer conceptualized and structured the story, devised each page’s layout, and made artistic decisions about arranging all the components.

While the US Copyright Office is not a court, and courts may take a different stance on this issue, it is safe to assume that if there is significant human involvement in creating IP, AI could still be considered a tool like any other.

The Input-Output Issue

Ownership is the primary concern when it comes to using AI.  However, it is not the only one.  Another critical question needs to be addressed: Does the use of AI threaten the rights of other authors?

To understand this issue, we first need to know how AI works.  AI systems require data to generate an output.  This data comes from many sources, such as articles, books, pictures, social media, web pages, and databases.  The AI system then processes and analyzes the data using techniques such as machine learning; this is called training.  Training the AI on copyrighted content will likely be considered fair use.  But what about the output?
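The training/output relationship described above can be illustrated with a deliberately tiny sketch.  The bigram model below is a loose analogy, not how large AI models actually work internally: the “model” is nothing more than statistics extracted from its input data, which is precisely why the provenance of that data matters.

```python
import random
from collections import defaultdict

# Toy illustration of "training": the input data (here, a tiny text corpus).
corpus = "the cat sat on the mat the cat saw the dog".split()

# "Training": record which word follows which.  These counts are the
# learned patterns -- the model contains nothing but statistics of the input.
model = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    model[current_word].append(next_word)

# "Generation": produce new output by sampling from the learned patterns.
# Everything generated is recombined from the training data.
random.seed(0)
word = "the"
output = [word]
for _ in range(5):
    word = random.choice(model.get(word, corpus))
    output.append(word)

print(" ".join(output))
```

The generated sentence never existed in the corpus, yet every transition in it was learned from someone’s input text, which mirrors the legal question the next paragraphs raise about output derived from copyrighted training material.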

This leads us to the core of this issue.  AI collects vast amounts of data, which may contain copyright-protected material, such as images, paintings, or text.  To create IP, AI usually reviews or uses reproductions of other people’s work.  AI-generated content may reproduce entire copyrighted works, such as articles, songs, or images, without permission or citation.  As a result, the owners of AI technology may risk violating someone’s copyright.

However, several arguments contradict the idea of AI violating copyright.  First, the input data the AI uses is massive.  There is so much data that many argue no individual copyright can be infringed.  Despite that, a prompt could make the AI focus on a narrow slice of its data, harming the rights of only a select handful of artists and coming dangerously close to plagiarism.  Even then, it would be unclear who committed the infringement: the person who gave the prompt or the developer of the AI.

Second, AI learns from the work of other authors, just like humans.  Learning and experiencing other people’s work is the backbone of all creation.  How could we forbid learning?  However, unlike AI, humans can understand and appreciate art on multiple levels, from surface-level aesthetics to the deeper meanings and themes that underpin the work.  Humans can also develop their unique styles and perspectives over time, which can be difficult for AI to replicate.

Undoubtedly, this issue will significantly impact the companies developing AI, the entire AI industry, and the owners of the data used for training.  This is why we need to look at several cases that may indicate the future development of AI regulation regarding IP.

  1. Microsoft, OpenAI, GitHub

The first is a proposed class action lawsuit targeting Microsoft, its subsidiary GitHub, and business partner OpenAI, alleging that their creation of GitHub Copilot, an AI-powered coding assistant, relies on “software piracy on an unprecedented scale.”  The lawsuit claims that Copilot is trained on code scraped from public repositories, some of which is published under licenses that require anyone reusing the code to credit its creator, which Copilot does not do.  The plaintiffs claim that Microsoft and its collaborators violated the legal rights of millions of programmers who spent years writing the original code.

According to Microsoft and OpenAI, the plaintiffs lacked standing to bring the lawsuit because they failed to show that they suffered specific injuries from the companies’ actions.  The companies also argued that the complaint did not identify the copyrighted works that were allegedly misused or the contracts that were breached.

Microsoft also argued that the copyright allegations would conflict with the doctrine of fair use, which permits using copyrighted works without a license in certain circumstances.  They cited a 2021 decision by the US Supreme Court, which ruled that Google’s copying of Oracle’s Java API code to develop its Android operating system was transformative fair use.

While the case is still in its early stages, it could have significant implications for the AI industry.

  2. Stability AI

Stability AI, the maker of Stable Diffusion, has been the subject of two lawsuits: a class action lawsuit and a lawsuit by Getty Images.

Getty Images sued Stability AI for allegedly infringing its copyrighted images on a massive scale.  Getty claimed that Stability AI marketed its Stable Diffusion and DreamStudio interface to consumers seeking creative imagery and that its success was partly due to the infringement of Getty Images’ content.

Getty Images also claimed in the lawsuit that Stability AI removed Getty’s copyright management information, provided false copyright management information of its own, and violated Getty’s trademarks by replicating Getty’s watermark on specific images.

This case might be somewhat specific because Stability AI did not directly collect the training data or train the models behind the software.  Instead, the models were developed at a German university (LMU Munich).  It might therefore be more difficult to prove copyright infringement in this case, since using AI for educational purposes would fall under fair use.

Regardless, as this case develops, we will need to keep a close eye on it since it could significantly impact AI regulation in the future.

  3. Nova Productions v Mazooma Games

This case might be important when it comes to UK law.  Specifically, the court ruled that a person playing a computer game cannot be considered the author of screenshots taken while playing the game, because the player had not been involved in any of the arrangements necessary for creating the images.  The court instead determined that the game developers were the ones who made those arrangements.  Applied to the question of copyright in AI-generated works, this may imply that the developer of the AI (more precisely, the designer of the learning algorithm) owns the copyright over AI-created work.


Things will become even more complex as AI becomes more widespread and advanced.  That is why new AI regulations are needed.  Lawmakers will then need to ask who, or what, society wishes to reward.  Should it be the owners of an AI system?

Regarding copyright protection, the approach that seems to provide the most economic incentives grants authorship to the person who made the AI possible.  This approach will ensure that companies invest in AI and technology, knowing they will get returns on their investment.

Regarding patents, on the other hand, flexible IP regulation could be very beneficial.  Since inventions made with AI can be easier to develop than conventional ones, lawmakers could decide to protect AI-made patents for a shorter period than standard patents.  This would balance the inventor’s rights, economic incentives, and social welfare.  Instead of granting the patent to a single inventor, lawmakers could share the rewards of an AI-generated invention among the AI developer, the person directing the AI, and the owner of the data used to train it.

Existing copyright laws and regulations may not be enough to address the challenges and opportunities of this new technology.  To ensure that AI-generated art is created and shared in a way that is both ethical and legally sound, future regulations will need to consider the unique characteristics of AI-generated works, the challenges of enforcing copyright in a digital age, and the need to balance the interests of creators, consumers, and society.  Only by approaching this issue with a thoughtful and nuanced perspective can we ensure that AI-generated art continues to push the boundaries of creativity while respecting all stakeholders’ rights and interests.  Luckily, a new AI Act is already in the making.

By Miloš Petaković, Senior Associate, and Bojan Tutić, Associate, Gecic Law

Gecic Law at a Glance

Committed to redefining a law firm's role in an emerging regional market, Gecić Law is a full-service law firm that advises international and local clients from the public and private sectors in navigating the complex legal landscape of the region across multiple practice areas. Members of the Gecić Law team have graduated from leading universities in the US and Europe. They have extensive local and international experience, with a particular focus on EU regulatory frameworks and international trade and a proven track record in providing innovative and practical solutions in the most complex of matters.

Gecić Law is an exclusive member of two leading global alliances, TerraLex and TAGLaw, extending its international footprint. The firm and its lawyers have continuously been recognized in several practice areas by elite global directories, including The Legal 500, Chambers and Partners and Benchmark Litigation. Gecić Law was named Law Firm of the Year: South Eastern Europe 2021 and Law Firm of the Year: Eastern Europe and the Balkans 2020 at The Lawyer European Awards and was repeatedly nominated in other practice areas.

For more details, please visit geciclaw.com.