Artificial Intelligence Act: EU agrees on the Regulation of Artificial Intelligence
The EU has reached an agreement on a new regulation for the use of Artificial Intelligence, the Artificial Intelligence Act (AI Act). Will it manage to strike a balance between regulation and the competitiveness of European companies? On the one hand, the AI Act sets clear guidelines for the use of AI; on the other, it confronts companies with numerous challenges.
Citizen Trust and Competitiveness as EU Goals
Already in April 2021, long before ChatGPT made AI a hot topic, the EU Commission presented a proposal on how Artificial Intelligence should be used in Europe. With the proposed AI regulation, the EU aimed to strengthen citizens' trust in what is likely a groundbreaking new technology and to create a legal framework for its competitive use within the EU.
The negotiations on the AI Act were overtaken by the release of the chatbot ChatGPT, which uses generative AI to produce text, images, or source code in real time. At the end of 2023, the trilogue between the EU Commission, the EU Parliament, and the Council of the European Union resulted in an agreement. The final text is expected to be formally adopted in March 2024, and the AI regulation is set to enter into force shortly thereafter. However, most of its provisions will only apply after a transition period of two years, likely from summer 2026. For companies planning to deploy AI applications, this is no reason to sit back and wait.
Risk-Based Approach in the Classification of AI Systems
The AI Act first of all creates a uniform framework that classifies AI systems by risk. The law distinguishes between AI systems with unacceptable, high, low, and minimal risk. The effect can be summarized briefly: the higher the risk, the stricter the requirements for the respective AI system, up to and including an outright ban on AI systems posing an unacceptable risk.
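To make this tiered logic concrete, the following minimal sketch maps the four risk classes to the core consequences described in this article. It is purely illustrative; the names and one-line summaries are simplifications, not wording from the regulation:

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative simplification of the AI Act's four risk tiers."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict obligations apply
    LOW = "low"                    # mainly transparency duties
    MINIMAL = "minimal"            # outside the AI Act's scope

# Rough mapping of tier to core legal consequence, as summarized above.
CONSEQUENCES = {
    RiskTier.UNACCEPTABLE: "Use of the system is prohibited.",
    RiskTier.HIGH: "Risk analysis, data quality, human oversight, record-keeping.",
    RiskTier.LOW: "Inform end-users that they are interacting with AI.",
    RiskTier.MINIMAL: "No obligations under the AI Act.",
}

print(CONSEQUENCES[RiskTier.HIGH])
```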
The EU considers the risk unacceptable in all AI applications where AI is used to influence people's behavior in a way that inflicts harm on them or on third parties. The Act also prohibits practices aimed at exploiting or influencing vulnerable groups (by age, disability, or social situation) or at using social scoring to the detriment of those concerned. The use of real-time remote biometric identification systems in public spaces for law enforcement is generally prohibited, subject to a few narrow exceptions.
The key area of regulation is high-risk AI applications. These are systems that pose a significant risk to health, safety, or fundamental rights. They are subject to strict requirements regarding transparency, data accuracy, and human oversight. High-risk systems include AI applications in the field of autonomous driving or medical technology, but a wide range of other systems fall into this category. These include AI systems in critical infrastructures, education, employment, and law enforcement.
For systems that pose only a low risk, the AI Act provides a simplified catalog of obligations, with transparency at the forefront: end-users must be able to recognize that they are interacting with an AI system.
AI systems with minimal risk, by contrast, are not covered by the AI Act and can therefore be used without restrictions. The EU had simple AI systems in mind here, such as spam filters or automated components of firewalls. With the increasing spread of generative AI, however, more and more AI systems are likely to fall within the scope of the AI Act in the future.
High Requirements for High-Risk Systems
Companies that develop, distribute, or want to use high-risk AI systems must comply with a multitude of requirements. In addition to the general transparency obligations that the AI Act introduces for the regulated classes, numerous further obligations must be implemented. First, a comprehensive risk analysis must be carried out and, comparable to a data protection impact assessment, measures must be taken to minimize the identified risks. Furthermore, it must be ensured that the AI used in the system has been trained only on reliable, high-quality data.
This is meant to avoid bias and inaccurate results. The systems must also be particularly well secured against manipulative interference, such as cyber-attacks. In addition, providers must ensure that human control of the AI remains possible: a human must be able to intervene correctively in the operation of the high-risk system or stop it altogether. A typical example is driver intervention in a semi-autonomous vehicle.
If the system processes personal data, additional data protection requirements must be observed, and providers must keep detailed records of the development, training, deployment, and use of high-risk AI systems to ensure traceability and accountability.
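At the application level, the human-oversight and record-keeping duties could, for example, be approximated by a thin wrapper around the AI component. The following sketch is a hypothetical illustration under those assumptions; the class and function names are invented, and the AI Act does not prescribe any particular technical design:

```python
import logging
from datetime import datetime, timezone

# Hypothetical sketch: log every decision for traceability, and let a
# human operator halt the system at any time (human oversight).
logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_act.audit")

class HighRiskSystemWrapper:
    def __init__(self, model):
        self.model = model    # the underlying AI component (any callable)
        self.halted = False   # set to True by a human operator

    def stop(self, operator: str) -> None:
        """Human intervention: halt the system and record who did it."""
        self.halted = True
        audit_log.info("System halted by %s at %s", operator,
                       datetime.now(timezone.utc).isoformat())

    def decide(self, inputs):
        if self.halted:
            raise RuntimeError("System halted by human operator.")
        decision = self.model(inputs)
        # Record inputs and outputs to support traceability and accountability.
        audit_log.info("inputs=%r decision=%r", inputs, decision)
        return decision

# Usage with a stand-in model:
wrapper = HighRiskSystemWrapper(model=lambda x: "approve" if x > 0.5 else "reject")
print(wrapper.decide(0.7))
wrapper.stop(operator="jane.doe")
```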
With these high requirements, the EU intends to create a framework that harnesses the benefits of AI while minimizing risks and protecting fundamental values and rights. Companies that develop, offer, or use such systems must fully comply with the requirements. Otherwise, they face fines of up to EUR 35 million or 7% of global annual group turnover, whichever is higher.
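Because the ceiling is the higher of the two amounts, the turnover-based limit dominates for large groups. A quick illustrative calculation (the turnover figure is made up):

```python
def max_fine(global_annual_turnover_eur: float) -> float:
    """Upper fine limit for the most serious AI Act violations:
    EUR 35 million or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# Hypothetical group with EUR 2 billion turnover: 7% is EUR 140 million,
# which exceeds the EUR 35 million floor and therefore sets the ceiling.
print(max_fine(2_000_000_000))  # 140000000.0
```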
Risk-Independent Requirements for "General Purpose AI"
Since language models like ChatGPT only spread after the first draft of the AI Act, the legislator felt compelled to impose additional rules on AI systems with a broad, general-purpose field of application. These rules apply regardless of the risk classification described above and thus also cover systems with minimal risk.
All providers of "General Purpose AI" must implement comprehensive transparency obligations. This is especially true concerning the use of such language models for generating or manipulating texts and images.
Additional requirements apply if the systems are particularly powerful and may pose systemic risks. Their providers must comply with further obligations, such as monitoring serious incidents or conducting model evaluations. The rights of authors are also strengthened, making it easier for them to object to the use of copyrighted works for training or processing.
Since the currently most widely used large language models, such as ChatGPT and Gemini, do not originate from providers in the EU, the EU also had those providers in mind when drafting the AI Act: it imposes its numerous obligations not only on German providers but on all those who distribute their products in the EU or use data from the EU.
Do Not Delay Implementing the Requirements
Even though most of the AI Act's provisions will not fully take effect until mid-2026, companies that want to integrate Artificial Intelligence into their products and services are well advised to familiarize themselves with the Act's requirements early on. In particular, the obligations that already apply to the training and development of such systems should be implemented promptly to avoid a rude awakening once the responsible supervisory authorities begin monitoring and verifying compliance with the AI Act.
Note: In early June 2024, face-to-face events on the topic of "Artificial Intelligence" in small and medium-sized enterprises will take place in Hamburg, Cologne, and Stuttgart as part of the "Focus Law" event series. Sven Körner, author, founder, and AI researcher, will give a keynote speech. Subsequently, the IT law experts from RSM Ebner Stolz will present the legal framework in the context of implementing innovative AI projects. Further information on these events and registration options will be available here shortly.