The European Union has responded to artificial intelligence’s unprecedented development and scale by introducing a new regulation. The Artificial Intelligence Act came into force at the beginning of August. Find out what it has in store below.
On 13 June 2024, the European Union adopted Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (the "AI Act"), which entered into force on 1 August 2024.
The AI Act aims to ensure the safe use of AI for all users. To that end, it classifies AI systems by the level of risk they pose, dividing them into four categories:
- Minimal risk: No additional obligations are foreseen for these types of systems, though organisations may choose to voluntarily adopt appropriate codes of conduct. This category includes, for example, spam filters or AI-based video games.
- Specific transparency risk or limited-risk systems: This category includes e.g. chatbots (such as ChatGPT). These systems are subject to the requirement to clearly disclose to users that they are interacting with a machine, allowing each individual to decide whether or not to continue the communication. In addition, users will need to be able to recognise content that has been generated by artificial intelligence: AI-generated content that is made public will have to be labelled as such (this includes so-called "deepfake" audio and video content).
- High-risk systems: This includes e.g. systems for the administration of justice and democratic processes (artificial intelligence used to search court decisions) or for recruitment, workforce management and access to self-employment (software that sorts CVs as part of the recruitment process). These systems will have to comply with strict requirements, including a risk mitigation system, human oversight, high-quality datasets, and automatic logging of activities over the system's lifetime to ensure traceability.
- Systems deemed a clear threat to fundamental human rights: Such systems will be banned outright. This includes e.g. voice-assisted toys that encourage dangerous behaviour and the social scoring of individuals by state authorities (such as the Social Credit System in China). The ban will therefore cover, among others, the following systems:
- those that use subliminal techniques beyond a person’s conscious awareness;
- those that use deliberately manipulative or deceptive techniques, impairing a person’s ability to make an informed decision; or
- those that exploit vulnerabilities related to age, disability, or specific social or economic circumstances.
Each EU Member State will also be required to designate or establish a notifying authority. This authority will be responsible for setting up and carrying out the procedures needed for the assessment, designation and notification of the bodies responsible for assessing compliance with the AI Act. These notified bodies will conduct conformity assessments and issue certificates for the relevant systems.
The European Artificial Intelligence Office, established within the Commission in February 2024, is responsible for supervising the enforcement and implementation of the AI Act. Beyond supervision, the Office is also tasked with promoting global AI governance and ensuring that the European Union spearheads the ethical and sustainable development of AI systems.
The AI Act will only become fully applicable two years after its entry into force, allowing for a gradual transition period to adapt to the obligations it lays down. Prohibitions will apply after six months, the rules on general-purpose AI models after 12 months, and the obligations for AI embedded in regulated products after 36 months.
Given the extended timeline for the AI Act's full implementation, the European Artificial Intelligence Office launched the AI Pact specifically to facilitate this adaptation. The AI Pact is a voluntary commitment that encourages organisations to comply with the AI Act's obligations ahead of the legal deadlines, with the Office overseeing the participating organisations. The initiative enables participants to share best practices and practical insights on implementing the Act, enhance the visibility and credibility of their safeguards, and build trust in AI technologies.
Considering the current pace of development, AI appears set to become an integral part of society. With the necessary safeguards in place to protect users, we can harness it to our advantage.
Author: Tina Marciuš Ravnikar, Attorney at Law