The EU Artificial Intelligence Act

The EU Artificial Intelligence Act (EU AI Act) is a European Union regulation on artificial intelligence (AI). It establishes a common regulatory and legal framework for AI within the European Union (EU). It entered into force on 1 August 2024, with provisions coming into operation gradually over the following 6 to 36 months.

Provisions

Risk categories

The Act defines different risk categories depending on the type of application, with a dedicated category for general-purpose AI:

  • Unacceptable risk – AI applications in this category are banned, except for specific exemptions. Unless an exemption applies, this category includes AI applications that manipulate human behaviour, those that use real-time remote biometric identification (such as facial recognition) in public spaces, and those used for social scoring (ranking individuals based on their personal characteristics, socio-economic status or behaviour).
  • High-risk – AI applications that are expected to pose significant threats to health, safety, or the fundamental rights of persons, notably AI systems used in health, education, recruitment, critical infrastructure management, law enforcement or justice. They are subject to quality, transparency, human oversight and safety obligations, and in some cases require a “Fundamental Rights Impact Assessment” before deployment. They must be evaluated both before they are placed on the market and throughout their life cycle. The list of high-risk applications can be expanded over time, without the need to modify the AI Act itself.
  • General-purpose AI – Added in 2023, this category includes in particular foundation models such as those underlying ChatGPT. These models are subject to transparency requirements. High-impact general-purpose AI models that could pose systemic risks (notably those trained using a cumulative amount of computation exceeding 10^25 floating-point operations, FLOP) must also undergo a thorough evaluation process; an illustrative compute check follows this list.
  • Limited risk – AI systems in this category have transparency obligations, ensuring users are informed that they are interacting with an AI system and allowing them to make informed choices. This category includes, for example, AI applications that make it possible to generate or manipulate images, sound, or videos (such as deepfakes). Free and open-source models in this category, i.e. those whose parameters are publicly available, are not regulated, with some exceptions.
  • Minimal risk – This category includes, for example, AI systems used for video games or spam filters. Most AI applications are expected to fall into this category. These systems are not regulated, and Member States cannot impose additional regulations due to maximum harmonisation rules. Existing national laws regarding the design or use of such systems are overridden. However, a voluntary code of conduct is suggested.
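To make the compute criterion concrete, the following Python sketch estimates a model's training compute and compares it with the 10^25 FLOP threshold above which a general-purpose AI model is presumed to pose systemic risk. The 6 × parameters × tokens estimate is a common heuristic for dense transformer training drawn from the scaling-law literature, not something the Act prescribes, and the model sizes in the example are hypothetical.

    # Illustrative sketch: the 6 * N * D training-compute estimate is an
    # assumption (a common heuristic for dense transformers), not a rule
    # taken from the AI Act itself.

    SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # the Act's presumption threshold

    def estimated_training_flop(n_parameters: float, n_tokens: float) -> float:
        """Rough training compute: ~6 FLOP per parameter per training token."""
        return 6.0 * n_parameters * n_tokens

    def presumed_systemic_risk(n_parameters: float, n_tokens: float) -> bool:
        """True if estimated training compute exceeds 10^25 FLOP."""
        return estimated_training_flop(n_parameters, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOP

    # Hypothetical examples:
    print(presumed_systemic_risk(70e9, 15e12))   # ~6.3e24 FLOP -> False
    print(presumed_systemic_risk(400e9, 15e12))  # ~3.6e25 FLOP -> True

Under this heuristic, a hypothetical 70-billion-parameter model trained on 15 trillion tokens falls below the threshold, while a 400-billion-parameter model trained on the same data would be presumed to have high-impact capabilities.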

Exemptions

Articles 2(3) and 2(6) exempt AI systems used for military or national security purposes, or for pure scientific research and development, from the AI Act.

Article 5(2) bans algorithmic video surveillance only if it is conducted in real time. Exceptions allowing real-time algorithmic video surveillance are limited to specific policing aims, including “a real and present or real and foreseeable threat of terrorist attack”.

Recital 31 of the act states that it aims to prohibit “AI systems providing social scoring of natural persons by public or private actors”, but allows for “lawful evaluation practices of natural persons that are carried out for a specific purpose in accordance with Union and national law.” La Quadrature du Net interprets this exemption as permitting sector-specific social scoring systems, such as the suspicion score used by the French family payments agency Caisse d’allocations familiales.

Governance

The AI Act establishes various new bodies in Article 64 and the following articles. These bodies are tasked with implementing and enforcing the Act. The approach combines EU-level coordination with national implementation, involving both public authorities and private sector participation.

The following new bodies will be established:

  1. AI Office: attached to the European Commission, this authority will coordinate the implementation of the AI Act in all Member States and oversee the compliance of general-purpose AI providers.
  2. European Artificial Intelligence Board: composed of one representative from each Member State, the Board will advise and assist the Commission and Member States to facilitate the consistent and effective application of the AI Act. Its tasks include gathering and sharing technical and regulatory expertise, providing recommendations, written opinions, and other advice.
  3. Advisory Forum: established to advise and provide technical expertise to the Board and the Commission, this forum will represent a balanced selection of stakeholders, including industry, start-ups, small and medium-sized enterprises, civil society, and academia, ensuring that a broad spectrum of opinions is represented during the implementation and application process.
  4. Scientific Panel of Independent Experts: this panel will provide technical advice and input to the AI Office and national authorities, contribute to the enforcement of rules for general-purpose AI models (notably by launching qualified alerts of possible risks to the AI Office), and ensure that the rules and implementations of the AI Act correspond to the latest scientific findings.

While the establishment of new bodies is planned at the EU level, Member States will have to designate “national competent authorities”. These authorities will be responsible for ensuring the application and implementation of the AI Act, and for conducting “market surveillance”. They will verify that AI systems comply with the regulations, notably by checking the proper performance of conformity assessments and by appointing third parties to carry out external conformity assessments.

Enforcement

The Act regulates the entry to the EU internal market using the New Legislative Framework. It contains essential requirements that all AI systems must meet to access the EU market. These essential requirements are passed on to European Standardisation Organisations, which develop technical standards that further detail these requirements.

The Act mandates that member states establish their own notifying authorities, which designate and monitor the conformity assessment bodies (“notified bodies”) that may perform third-party assessments. Conformity assessments are conducted to verify whether AI systems comply with the standards set out in the AI Act. This assessment can be done in two ways: either through self-assessment, where the AI system provider checks conformity itself, or through third-party conformity assessment, where a notified body conducts the assessment. Notified bodies also have the authority to carry out audits to ensure proper conformity assessments.

Criticism has arisen because many high-risk AI systems do not require third-party conformity assessments. Some commentators argue that independent third-party assessments are necessary for high-risk AI systems to ensure safety before deployment. Legal scholars have suggested that AI systems capable of generating deepfakes for political misinformation, or of creating non-consensual intimate imagery, should be classified as high-risk and subjected to stricter regulation.

