The EU’s AI Act, the world’s first comprehensive legislation on artificial intelligence, imposes the bulk of its due diligence obligations on companies that sell AI systems that are particularly risky for people’s health or fundamental rights.
Where this hefty regulatory burden will land in practice might largely depend on the concept of “intended purpose,” which, according to the AI Act, means “the use for which an AI system is intended by the provider, including the specific context and conditions of use.”
The notion comes from EU product safety law, of which the AI Act is a prominent example, but it doesn’t sit quite right with so-called general-purpose AI systems, or GPAI, such as OpenAI’s ChatGPT or Microsoft’s Copilot, which have no specific purpose but can carry out various tasks.
What is the intended purpose of a system designed for an infinite number of uses? Who should be responsible for due diligence over a GPAI system used for a high-risk application? What is the regulatory risk for users going beyond a system’s intended purpose? These and other questions are bound to dog the industry.
Risk minimization
One of the first questions GPAI system providers such as Microsoft and OpenAI will have to answer is whether allowing customers to use their systems for use cases that the AI Act deems high-risk would automatically make those systems high-risk.
For Dessislava Savova, a partner at law firm Clifford Chance, GPAI system providers could credibly argue that their system was not designed specifically for a given high-risk purpose: not designing for a use is different from merely failing to prevent it. That argument would, of course, fall away if the documentation the provider puts in place directly refers to high-risk use cases for the relevant GPAI system.
In this reading, since GPAI systems are by definition not built for any specific intended use, the crux is the difference between intention and mere possibility, meaning that the bulk of the responsibilities should shift from the provider to the user, which the AI Act calls the “deployer.”
Conversely, regulators might argue that including a high-risk application in the acceptable-use policy amounts to the provider intentionally acknowledging it as an inherent function of the GPAI system.
This view might be reinforced by the AI Act’s article on how market surveillance authorities should cooperate on the supervision of GPAI systems, including when they have sufficient reason to consider as non-compliant a GPAI system that can be used “for at least one purpose that is classified as high-risk.”
Without clear guidance, the argument could go both ways. Thus, depending on their risk appetite, some GPAI system providers are already writing into their licensing terms that their systems cannot be used for high-risk use cases.
“Many organizations are responding pragmatically to new regulatory obligations, which includes reviewing their terms and ensuring that those terms comply with new laws,” Barry Scannell, a partner at the law firm William Fry, told MLex.
Market dynamic
As AI companies factor “intended purpose” into their corporate strategy, the incentive to minimize one’s regulatory risk might create a market dynamic in which the acceptable-use policy becomes a competitive element.
Clifford Chance’s Savova notes that the most advanced AI adopters are already looking at the acceptable-use policy when negotiating the licensing terms with GPAI system providers.
“The AI Act’s whole philosophy is based on use cases, so when multiple uses are possible, it’s important to work on the intended purpose. But the law says very little about the contractual relationship between providers and deployers,” Savova said.
In other words, once the AI Act’s high-risk regime kicks in, the considerations for choosing one GPAI system over its competitors might not relate just to which system performs best but also to what providers allow it to be used for.
“I could see a market emerging where GPAI system providers could advertise their AI products based on their limited or general intended uses plus performance as well as other aspects,” Victoria Hordern, a partner at the law firm Taylor Wessing, told MLex.
GPAI system providers might face specific questions, however, when building “intended purpose” into their offerings. Suppose OpenAI doesn’t allow ChatGPT to be used to screen CVs, except for a few high-value clients. Would that make the whole of ChatGPT high-risk? Or high-risk only insofar as it is used by the high-value clients?
An argument could be made here that these are essentially different GPAI systems since they have different intended purposes, and it would seem disproportionate for the other customers to have to comply with the obligations of high-risk system users. But again, until there is relevant regulatory guidance, companies cannot be sure.
A time bomb
For this market dynamic to develop fully, there would need to be a widespread understanding of the regulatory risk, and that understanding varies massively across the industry, at least in these early days. The liability risks, meanwhile, might be extremely significant.
The AI Act provides that anyone who modifies the intended purpose of an AI system, including a GPAI system, in such a way as to make the system high-risk, automatically becomes a high-risk system provider.
For William Fry’s Scannell, these provisions have the potential to be nothing short of a “massive time bomb,” as companies might become providers of high-risk AI systems without even realizing it.
Suppose that, ahead of the AI Act’s high-risk regime entering into application in August 2026, Microsoft’s acceptable-use policy forbids the use of Copilot for filtering job applications.
If someone in a company then starts using Copilot to screen CVs, that could be considered a fundamental change to the AI system’s intended purpose for that company. Could misuse by a single employee turn the company into a high-risk system provider?
If so, the company would then have to comply with the AI Act’s relevant requirements, such as risk management and data governance. But the company's management might not even realize the misuse is happening.
An unsuccessful job applicant who decides to investigate might file a data subject access request under EU data protection law and discover that the recruitment decision was based on an AI system used without the proper due diligence.
The regulatory risk of such a not-so-remote possibility is far from negligible. And the stakes are high: Non-compliance with high-risk providers’ obligations can result in a fine of up to 3 percent of the company’s worldwide annual turnover or 15 million euros, whichever is higher.
“Companies are of course aware that employees may use AI tools for uses that the company has not permitted,” Taylor Wessing’s Hordern said. “We’re seeing companies try to anticipate this activity through policies and training.”
Much like in the early days of the General Data Protection Regulation, when employees were unaware they were processing personal data, companies might indeed have a hard time keeping tabs on what AI systems are being used across the board and for what purposes.
“These [value chain] provisions are untidy and too broad. There are so many ways they could be triggered. It’s a big risk that always comes up when discussing AI policies with companies to monitor what employees are doing,” Scannell said.
By Luca Bertuzzi, Senior Correspondent, MLex