
EU passes legislation regulating AI

By: Francesca Scott

December 19, 2023

(Source: Reuters Connect)

In a first-of-its-kind piece of legislation, the European Union (EU) has agreed on its long-awaited Artificial Intelligence (AI) Act, which regulates the use of AI. The rules include transparency requirements and aim to “mitigate harm in areas where using AI poses the biggest risk to fundamental rights,” according to the MIT Technology Review. By potentially making it harder to spread misinformation in the form of AI-generated content, the legislation will affect all generative AI companies and any industry that employs the technology, including automotive, healthcare, tech, finance, security, and surveillance.

The EU is the first body globally to enact such a law. While the initial framework was developed in 2021, a provisional agreement was reached on December 9 this year, and the act is expected to come into force by 2025.

After ChatGPT’s release in November 2022, the following summer saw a statement signed by policymakers and industry leaders declaring, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

A tiered approach focuses on power and risk. As quoted by the Financial Times, EU Commissioner Thierry Breton said the act prioritized “transparency requirements for all general-purpose AI models (such as ChatGPT)” and “stronger requirements for powerful models with systemic impacts.”

Set by members of the European Parliament, the legislation aims to ensure that “AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes.”

Breaches of the act will result in fines of up to €35 million for the most serious infringements and up to €15 million for lesser violations. Breaches include the dissemination of misinformation, the manipulation of political discourse, privacy violations, and bias and discrimination. Failure to adequately disclose the use of AI would also be considered a breach.

Legislation delayed by AI advancement

Initially, the act ranked AI systems by risk level, aiming to ban those capable of cognitive behavioral manipulation and biometric identification. When new generative AI models like ChatGPT emerged, the existing framework no longer covered the technology. Unlike traditional AI, which completes tasks by following predefined rules, foundation models like ChatGPT generate new material from the data they are trained on, giving more dynamic responses.

The EU was therefore forced to create additional rules to encompass the technology. The act now covers labeling AI-generated content, ensuring model designs do not allow the production of illegal content, and transparency about the data a model is trained on. These rules make clear when AI is being used and how it works.

However, the legislation is not without criticism. The process of labeling high-risk AI systems that can affect human rights appears to contain a loophole that lets developers classify their own systems. Amnesty International has voiced concerns that “The same company that would be subject to the law is given the power to unilaterally decide whether or not it should apply to them.” Further caveats allow biometric identification to be used in cases of “urgency” or national security, and emotion recognition to be used in workplaces for reasons of “safety.”
