The planned legal framework for artificial intelligence is intended to set uniform standards for the protection of safety and fundamental rights throughout Europe. At the same time, the EU wants to use it to promote acceptance of and investment in AI.
Proposal of the EU Commission for the regulation of AI
In April 2021, the EU Commission presented a first draft for the regulation of artificial intelligence in the EU. The regulation is to apply not only to providers, i.e. the developers of AI systems, but also to their users, provided they use AI for professional activities. This means that the entire corporate use of AI is covered, especially if the AI is integrated into a company's own products.
The EU Commission's proposal follows a risk-based approach - comparable to the GDPR. Depending on the intended use, AI applications are assigned to one of four risk categories (unacceptable risk, high risk, limited risk, minimal risk).
For example, AI systems that perform biometric identification or categorization, or that are used in law enforcement or in critical infrastructure such as transportation, are to be classified as high-risk AI. In contrast, the free use of AI-powered spam filters falls into the minimal-risk category.
Risk groups
Unacceptable Risk ➤ Use of AI is prohibited
AI systems that are considered a threat to people will be prohibited.
They include:
- Cognitive behavioral manipulation of individuals or certain vulnerable groups
- Social scoring: classifying people based on behavior, socioeconomic status, and personal characteristics
- Real-time remote biometric identification systems
High Risk ➤ Use of AI is continuously assessed
AI systems that pose a high risk to the health and safety or fundamental rights of natural persons fall into two groups:
- AI systems in products covered by EU product safety regulations
- AI in eight specific areas to be registered in an EU database:
  - Biometric identification and categorization of natural persons
  - Management and operation of critical infrastructure
  - Education and training
  - Employment, workforce management, and access to self-employment
  - Access to and use of essential private and public services and benefits
  - Law enforcement
  - Management of migration, asylum, and border control
  - Assisting in the interpretation and application of laws
These AI systems must be assessed and verified before they are placed on the market and throughout their lifecycle.
Generative AI: Additional transparency requirements
Generative AI, such as ChatGPT, which produces content based on prompts and specifications, must meet additional transparency requirements, such as disclosing that the content was generated by AI.
Limited-risk: Low transparency requirements
Limited-risk AI systems are subject to minimal transparency requirements, which are intended to allow users to make informed decisions.
Different obligations and requirements should apply depending on the categorization of the AI system in question. Systems with unacceptable risk, i.e. those that contradict the EU's ethical principles, are to be banned. According to the EU Commission, this should apply, for example, to social scoring systems.
High-risk AI is to be the most heavily regulated. According to the Commission's draft, providers and users of such systems are to be subject, among others, to the following obligations:
- Ensuring high data quality
- Information obligations toward end users
- Human oversight measures to minimize risk
- Record-keeping and documentation obligations
- Implementation of risk assessment and mitigation systems
Providers and commercial users of limited-risk AI systems, on the other hand, are primarily required to comply with certain transparency requirements.
According to the EU Commission, the vast majority of AI systems currently deployed in the EU fall into the minimal-risk category. Such systems can be developed and deployed without additional legal obligations.
Note: In addition to the AI Regulation, the EU Commission has presented a draft for an AI Liability Directive, which is intended to regulate the liability consequences of damage caused by AI systems.
Negotiating position of the EU Parliament
On June 14, 2023, the EU Parliament published its final position on the Commission's draft, making some further changes to the AI Regulation and introducing it into the legislative process.
Among other things, the already broad definition of AI systems was expanded further. Accordingly, AI is now defined as "a machine-based system that is designed to operate with varying degrees of autonomy and that can generate results, such as predictions, recommendations, or decisions, for explicit or implicit goals that affect the physical or virtual environment".
Particularly noteworthy is the EU Parliament's addition of so-called generative AI, which includes tools such as ChatGPT, to the scope of the regulation. In addition to transparency obligations, providers of such models are to ensure that their systems do not produce illegal content and are to publish detailed summaries of the copyrighted data they have used for training purposes.
To accompany this, the EU Parliament wants to lower the level of fines for violations of the rules, with only a few exceptions. In addition, exemptions for research activities and for AI components made available under open-source licenses are intended to encourage AI innovation.
Further procedural steps
Following the adoption of the EU Parliament's position, the final negotiations in the trilogue procedure can now begin. As part of this coordination between the EU Parliament, the Council of Ministers, and the EU Commission, the final draft of the AI Regulation is to be drawn up and agreed upon. This process is expected to be completed by the end of 2023. If successful, the regulation would enter into force this year, and the companies concerned would then have to implement the majority of its provisions within a period of 24 months.
Note: Due to its design as a regulation, the AI Regulation is directly applicable. Prior transposition into national law is not required.