With artificial intelligence dominating tech conversations over the last year and a draft AI Act under consideration in the EU, CMS Partners Dora Petranyi, Gabriela Staber, Klaus Pateter, and Olga Belyakova look at where AI is today and how European legislation might shape its future.
CEELM: To establish context, what are we talking about when we say “AI-driven innovation?”
Belyakova: Technically speaking, these are new solutions built on AI algorithms. However, it's not as straightforward as it may seem. AI is now a topic of widespread discussion, yet it involves several essential components. Above all, for AI to progress, it requires access to data. Looking at ChatGPT's journey from its initial release to its current state, it is clear that the more data it gathered, the smarter it became. At the same time, as the volume of data grows, so does the demand for AI to be reliable and safe – and that is where AI regulation comes into play. AI is still in its early stages, too young to command the public's immediate trust – it will have to earn it going forward.
Petranyi: Lawyers specializing in TMT have been closely monitoring the use of big data for years. However, events at the end of last year really stood out – AI became remarkably user-friendly and comprehensible for a broader audience. It generated enough interest that related articles now appear not only on the front pages of tech magazines but also in mainstream media. Another important factor is that it is not AI acting in isolation – it is the collaborative effort of humans and AI that produces the outcomes; there is a close connection between what humans do and what AI generates. Add to that the pilot projects and the considerable noise surrounding AI, and the need for regulation becomes clear.
CEELM: Speaking of regulations, how does the relevant regulatory/legislative landscape look nowadays?
Petranyi: At the EU level, there is a draft AI Act currently under consideration, but notably, the very definition of AI remains largely contested. It is critical, however, to start with definitions and to incorporate elements of human involvement into them. One contentious issue, for example, is whether the act should carve out an exception for AI developed for strategic defense purposes, particularly given the growing prevalence of AI-driven military operations. Once definitions are settled, another challenge is the categorization of AI. The EU plans to classify AI systems into various categories – including high-risk, low-to-medium-risk, and low-risk – each carrying specific regulatory obligations and responsibilities.
Staber: It's important to emphasize that companies are responsible for self-categorizing, while the regulatory authorities oversee these classifications. When it comes to high-risk AI, companies will be obligated to undergo a certification process. This becomes particularly relevant for sectors like life sciences, which rely heavily on AI in medical devices deemed high-risk. In such cases, companies will need to comply with two sets of requirements – one for medical devices and one for AI. Many industry experts express concerns about the substantial impact these regulations may bring, including the introduction of new liability rules for AI. These rules will establish specific disclosure requirements for AI systems and lower the burden of proof for claimants in civil litigation over damages where the provider has failed to comply with regulatory requirements.
Pateter: A notable addition in the latest draft of the AI Act concerns the definition of AI itself – the ability of AI to function, at least partially, autonomously. While this doesn't provide complete clarity, it is a step forward.
Moreover, the most recent draft introduces a new type of AI, the foundation model. This is a response to the widespread adoption of systems like ChatGPT, which extend beyond web applications to APIs. These foundation models have found applications in various sectors, including healthcare and the legal industry.
CEELM: Taking into account the ongoing progress, what are your expectations for the evolution of AI in the region?
Pateter: As with other regulatory measures, the draft AI Act will likely foster innovation and enable its mass adoption, especially among conservative industries. At present, decision-makers are navigating a legal grey area. Well-defined, clear regulations and legal frameworks, on the other hand, will likely give them a sense of security in what they do.
Staber: Interestingly, there is a concern that the EU's position as the first to enact regulations could have adverse effects on innovation within Europe. Much discussion has revolved around the EU's competitiveness in this field, given the broader global dynamics involving the US and China – factors that cannot be disregarded.
Belyakova: On the other hand, looking at the GDPR, the EU has established itself as a trendsetter, with much of the rest of the world subsequently adopting its standards. GDPR-style rules have already surfaced in the US, the Middle East, and various other regions, all treating the EU's model as a precedent. I suspect the same pattern will emerge with AI. While the exact rules are yet to be defined, the EU stands as a flagship in this regard, and its regulations are unlikely to be overlooked in the global landscape.
Petranyi: Indeed, the EU has proudly protected the rights of individuals, especially compared to many counterparts that often prioritize profit. Nevertheless, the regulator's rush to introduce rules, however well-intentioned, can still be a concern. As for implementation of the AI regulation, a two-year timeframe is in sight. I anticipate that innovation won't stagnate during those two years, and we may encounter numerous unexpected developments. Care is needed not to overregulate based solely on current observations, as things can evolve very quickly in agile areas such as AI.
CEELM: To what extent is CEE at the forefront of such innovation? Or is it simply following the lead of other jurisdictions?
Belyakova: Undoubtedly, CEE is at the forefront of technology innovation. Wherever technology succeeds, countries try to claim it as their own. Prominent companies like Grammarly, ReSpeecher, and others were born in CEE and maintain substantial offices and R&D divisions in the region. Even non-CEE companies are tapping into CEE talent pools to advance their AI initiatives, as CEE possesses immense talent and an entrepreneurial spirit. The difference is that companies from our region may be too modest to shout about their achievements to the world.
Staber: In Austria, there are a few remarkable, but lesser-known innovative companies. In the biotech sector, we have a wealth of universities and startups that use AI, particularly in the domains of precision and personalized medicine. They explore various treatment options for different ailments, experimenting with existing medications when conventional approaches yield no results. The outcomes have been highly promising.
Pateter: From the startup perspective, financing is consistently a primary concern. The US remains a strong competitor because venture investment is far easier to access there than in Europe. Here, we face a more intricate and diverse market, but with the ongoing process of integration, I believe the EU and CEE markets will eventually catch up. In due time, we will become even more competitive, including in IT and AI and their use in specialized domains, particularly within the industrial sector.