
LAW ON ARTIFICIAL INTELLIGENCE: EUROPEAN COUNCIL AND PARLIAMENT REACH AGREEMENT ON FIRST RULES FOR AI IN THE WORLD

The negotiators of the European Institutions upon reaching the agreement / Source: European Council

After three days of “marathon” talks, the Council presidency and the European Parliament’s negotiators reached a provisional agreement on Monday, December 11, 2023, on the proposal for harmonized rules on artificial intelligence (AI), the so-called artificial intelligence law.

The draft regulation aims to ensure that AI systems placed on the European market and used in the EU are safe and respect fundamental rights and EU values. This landmark proposal also aims to stimulate investment and innovation in AI in Europe.

The AI Act is a flagship legislative initiative with the potential to foster the development and adoption of safe and trusted AI across the EU single market by public and private actors. The main idea is to regulate AI based on its ability to cause harm to society following a ‘risk-based’ approach: the higher the risk, the stricter the rules.

As the first legislative proposal of its kind in the world, it may set a global standard for AI regulation in other jurisdictions, just as the GDPR (General Data Protection Regulation) has done, thus promoting the European approach to technology regulation on the world stage.

MAIN ELEMENTS OF THE AGREEMENT

Compared to the Commission’s initial proposal, the main new elements of the interim agreement can be summarized as follows:

* Rules on high-impact general purpose AI models that may cause systemic risk in the future, as well as on high-risk AI systems.

* A revised governance system with some enforcement powers at the EU level.

* Expansion of the list of prohibitions, but with the possibility of using remote biometric identification by law enforcement authorities in public spaces, subject to safeguards.

* Better protection of rights by requiring those implementing high-risk AI systems to conduct a fundamental rights impact assessment before putting an AI system into use.

DEFINITIONS AND SCOPE

To ensure that the definition of an AI system provides sufficiently clear criteria to distinguish AI from simpler software systems, the compromise agreement aligns the definition with the approach proposed by the OECD.

The interim agreement also clarifies that the regulation does not apply to areas outside the scope of EU law and should not, in any case, affect the competences of member states in the area of national security or any entity entrusted with tasks in this area.

In addition, the AI law will not apply to systems used exclusively for military or defense purposes. Similarly, the agreement states that the regulation would not apply to AI systems used for the sole purpose of research and innovation, nor to persons using AI for non-professional reasons.

CLASSIFICATION OF AI SYSTEMS AS PROHIBITED AND HIGH-RISK PRACTICES

The compromise agreement provides for a horizontal layer of protection, including a high-risk classification, to ensure that AI systems that are unlikely to cause serious violations of fundamental rights or other significant risks are not captured.

AI systems that present only limited risk would be subject to very light transparency obligations, e.g., disclosing that content was generated by AI so that users can make informed decisions about its further use.

A wide range of high-risk AI systems would be authorized, but subject to a number of requirements and obligations to access the EU market. The co-legislators have clarified and adjusted these requirements to make them more technically feasible and less burdensome for stakeholders to comply with, for example as regards data quality, or in relation to the technical documentation that SMEs must produce to demonstrate that their high-risk AI systems meet the requirements.

Since AI systems are developed and distributed through complex value chains, the compromise agreement includes changes that clarify the allocation of responsibilities and roles of the various actors in those chains, in particular suppliers and users of AI systems.

It also clarifies the relationship between responsibilities under the AI Act and responsibilities that already exist under other legislation, such as relevant EU sectoral or data protection legislation.

For some uses of AI, the risk is considered unacceptable and, therefore, these systems will be banned in the EU.

The interim agreement prohibits, for example, cognitive behavioral manipulation, the untargeted scraping of facial images from the internet or CCTV footage, emotion recognition in the workplace and in educational institutions, social scoring, biometric categorization to infer sensitive data such as sexual orientation or religious beliefs, and some cases of predictive policing for individuals.

LAW ENFORCEMENT EXCEPTIONS

Taking into account the specificities of law enforcement authorities and the need to preserve their ability to use AI in their vital work, several changes were agreed to the Commission’s proposal regarding the use of AI systems for law enforcement purposes.

Subject to appropriate safeguards, these changes are intended to reflect the need to respect the confidentiality of sensitive operational data in connection with their activities.

For example, an emergency procedure was introduced that allows law enforcement agencies, in urgent cases, to deploy a high-risk AI tool that has not passed the conformity assessment procedure. However, a specific mechanism has also been introduced to ensure that fundamental rights are sufficiently protected against possible misuse of AI systems.

In addition, as regards the use of real-time remote biometric identification systems in publicly accessible spaces, the interim agreement clarifies the objectives for which such use is strictly necessary for law enforcement purposes and for which law enforcement authorities should therefore exceptionally be allowed to use such systems.

The compromise agreement provides for additional safeguards and limits these exceptions to cases of victims of certain crimes, the prevention of genuine, present or foreseeable threats, such as terrorist attacks, and the search for persons suspected of the most serious crimes.

GENERAL-PURPOSE AI SYSTEMS AND FOUNDATION MODELS

New provisions have been added to account for situations where AI systems can be used for many different purposes (general purpose AI) and where general purpose AI technology is subsequently integrated into another high-risk system.

The interim agreement also addresses the specific cases of general purpose AI (GPAI) systems.

Specific rules have also been agreed upon for foundation models: large systems capable of competently performing a wide range of distinctive tasks, such as generating video, text and images, conversing in natural language, computing, or generating computer code.

The interim agreement provides that foundation models must comply with specific transparency obligations before they are placed on the market.

A stricter regime was introduced for “high-impact” foundation models. These are foundation models trained on large amounts of data, with advanced complexity, capabilities and performance well above average, which can spread systemic risks along the value chain.

A NEW GOVERNANCE ARCHITECTURE

Following the new rules on GPAI models and the clear need for their enforcement at EU level, an AI Office is created within the Commission tasked with overseeing these most advanced AI models, helping to promote standards and testing practices, and enforcing common standards across member states.

A scientific panel of independent experts will advise the AI Office on GPAI models, contributing to the development of methodologies for assessing the capabilities of foundation models, advising on the designation and emergence of high-impact foundation models, and monitoring potential material safety risks related to foundation models.

The AI Board, which would be composed of member state representatives, will remain a coordination platform and advisory body to the Commission, and will give an important role to member states in the implementation of the regulation, including the design of codes of practice for foundation models.

Finally, a consultative forum for stakeholders, such as representatives of industry, SMEs, start-ups, civil society and academia, will be established to provide expertise to the AI Board.

PENALTIES

Fines for violations of the AI law were set as a percentage of the offending company’s global annual turnover in the previous financial year or a predetermined amount, whichever is higher.

This would amount to €35 million or 7% for violations of prohibited AI applications, €15 million or 3% for violations of AI law obligations and €7.5 million or 1.5% for the provision of incorrect information.
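
To make the “whichever is higher” rule concrete, here is a minimal sketch in Python of how the maximum fine for each tier would be computed. The tier amounts and percentages come from the figures above; the function name, tier labels and sample turnover are illustrative assumptions, not anything specified in the law’s text.

```python
# Illustrative sketch of the "whichever is higher" fine rule described above.
# The tiers mirror the article's figures; names and the sample turnover are
# hypothetical, for illustration only.

TIERS = {
    "prohibited_ai": (35_000_000, 0.07),          # €35 million or 7% of turnover
    "obligation_breach": (15_000_000, 0.03),      # €15 million or 3%
    "incorrect_information": (7_500_000, 0.015),  # €7.5 million or 1.5%
}

def maximum_fine(violation: str, global_annual_turnover_eur: float) -> float:
    """Return the higher of the fixed amount and the turnover-based amount."""
    fixed_amount, share_of_turnover = TIERS[violation]
    return max(fixed_amount, share_of_turnover * global_annual_turnover_eur)

# Example: a company with €1 billion in global annual turnover that violates
# the prohibited-practices rules faces up to €70 million, since 7% of
# €1 billion exceeds the €35 million fixed amount.
print(maximum_fine("prohibited_ai", 1_000_000_000))  # 70000000.0
```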

However, the provisional agreement sets more proportionate limits on administrative fines for SMEs and start-ups for violations of the provisions of the AI Law.

The compromise agreement also makes it clear that a natural or legal person may file a complaint with the relevant market surveillance authority concerning non-compliance with the AI law, and may expect such a complaint to be handled in accordance with the dedicated procedures of that authority.

TRANSPARENCY AND PROTECTION OF FUNDAMENTAL RIGHTS

The interim agreement provides for a fundamental rights impact assessment to be carried out before a high-risk AI system is placed on the market by those deploying it.

The interim agreement also provides for greater transparency with respect to the use of high-risk artificial intelligence systems.

In particular, some provisions of the Commission’s proposal have been amended to indicate that users of a high-risk AI system that are public entities will also be required to register in the EU database for high-risk AI systems.

In addition, the newly added provisions emphasize the obligation for users of an emotion recognition system to inform natural persons when they are exposed to such a system.

MEASURES TO SUPPORT INNOVATION

With a view to creating a more innovation-friendly legal framework and promoting evidence-based regulatory learning, the provisions on innovation support measures have been substantially modified compared to the Commission’s proposal.

In particular, it has been clarified that AI regulatory sandboxes, which are supposed to establish a controlled environment for the development, testing and validation of innovative AI systems, should also allow for the testing of innovative AI systems in real-world conditions.

In addition, new provisions have been added to allow testing AI systems in real-world conditions, under specific conditions and safeguards.

To ease the administrative burden for smaller companies, the interim agreement includes a list of actions to be taken to support such operators and provides for some limited and clearly specified derogations.

ENTRY INTO FORCE

The interim agreement provides that the AI law should apply two years after its entry into force, with some exceptions for specific provisions.

Following today’s provisional agreement, work will continue at the technical level in the coming weeks to finalize the details of the new regulation.

The Presidency will submit the compromise text to the representatives of the Member States (Coreper) for approval once this work has been completed.

The full text will have to be confirmed by both institutions and undergo a legal-linguistic review before formal adoption by the co-legislators.

 

This article was originally published in Aquí Europa.
