The AI regulation: from transparency obligations to banning AI with unacceptable risks

ChatGPT, Bard and Gemini: artificial intelligence (“AI”) is increasingly being used in contemporary life, but new innovations also bring new risks. For example, there is often a lack of transparency about how AI output is created, and there is also the question of whether privacy and the protection of personal data are adequately safeguarded. This means that rules are needed to ensure that AI does not infringe on the rights of European citizens. As part of the digital transformation, the European Union wants to regulate the use of AI through, among other things, the AI Act. On Feb. 2, 2024, the member states unanimously approved this regulation.

In this article, we discuss the outline of this new regulation, and how it will impact you and your business.

Who is this regulation intended for?

The AI Act applies, inter alia, to providers and users of AI systems within the EU. A provider of AI systems is a (legal) person or other organization that develops an AI system or offers an AI system (with or without payment) on the market.

A user of an AI system is a (legal) person or other organization that, under its own responsibility, uses an AI system in the context of its professional activity, for example, a large web shop that uses AI to determine what advertising is shown to a user.

Finally, the AI Act also applies to importers and distributors of AI. For providers and users of AI located outside the European Union, the AI Act applies as soon as the output (the results generated by the AI system) is used within the European Union. The regulation does not, however, apply to AI systems developed or used exclusively for military purposes.

Level of risk

The AI Act takes a risk-based approach and classifies AI systems into the following four categories:

  1. Unacceptable risk
  2. High risk
  3. Low risk
  4. Minimal risk

Unacceptable risk

AI systems that fall into this category are in principle prohibited. These are AI systems that pose a clear threat to the safety and fundamental rights of individuals.

This is the case, for example, when manipulation is carried out through subliminal techniques (techniques whose effects are not consciously perceived) or when the vulnerabilities of specific groups, such as children, persons with disabilities or persons belonging to a socially or economically vulnerable group, are exploited. In addition, the use of biometric recognition is prohibited insofar as it is aimed at identifying special categories of personal data (such as political or religious affiliation, sexual orientation, and so on).

The regulation prohibits “social scoring” if it results in adverse treatment of individuals or groups that is unrelated to the context in which the data were collected, or if those adverse effects are disproportionate to the social behavior exhibited. Furthermore, the use of real-time remote biometric identification for law enforcement purposes is not allowed unless it is necessary for a strictly defined set of purposes listed in the regulation, such as cases of kidnapping or human trafficking.

Finally, the following AI systems also fall into this category: AI that predicts whether someone will engage in criminal behavior, facial recognition software based on scraping photos from the internet, and AI that infers emotions in work environments and schools.

High risk

This category covers AI systems that pose risks to the health and safety or fundamental rights of natural persons. This includes AI that is used for profiling. Other examples include AI used in critical infrastructure, in education (e.g., in reviewing exams), algorithms that assess job applicants, credit scoring by banks, and AI used in the medical sector or in law enforcement. AI used as a safety component of a product that is regulated by law may also fall into the high-risk category.

However, these AI systems are not considered high-risk if the output is only of ancillary importance and does not pose a significant risk to the health and safety or fundamental rights of natural persons. Consider, for example, AI that analyzes or structures incoming data, or AI that cleans up texts or makes text suggestions.

AI systems in this category are allowed if they comply with mandatory requirements and a conformity assessment has been carried out beforehand by the provider of the AI. The assessment examines whether the AI complies with the AI Act and pays particular attention to risk management and the protection of fundamental rights.

AI that falls under this high-risk category must meet strict requirements before the AI system can be used or marketed:

  • adequate risk assessment and mitigation systems;
  • high-quality datasets feeding the system, to minimize risks and discriminatory results;
  • logging of activities to ensure traceability of results;
  • detailed documentation with information about the AI system and its purpose, so that authorities can assess compliance with the AI Act;
  • clear and adequate information for the end user of the AI;
  • appropriate human oversight measures to minimize risks;
  • a high level of robustness, safety and accuracy.

Low risk

Low-risk systems include chatbots such as ChatGPT and Bard. These AI systems are mainly subject to transparency requirements. It is important that users are aware that they are interacting with a machine, so that they remain critical of the AI’s input and output. For example, the output is not always reliable, and the input may be used by the chatbot for future responses. It is therefore important that users know not to send confidential information to the chatbot. For this reason, among others, a specific level of transparency is required.

Minimal risk

The regulation does not define what counts as minimal risk, but one might think of spam filters or AI-powered video games. Because no specific rules are attached to this category in the AI Act, no special legal requirements apply to this type of AI technology. It must, of course, still comply with general safety standards.

What does this mean for your business?

This new regulation means that AI systems can no longer be used without further thought. If you use an AI system or bring one to market, it is essential to assess which category your system falls into, so that you can meet the corresponding obligations in time.

The AI regulation is expected to be officially published in March. After that, the AI Act will take effect in stages. Six months after publication, the ban on AI with unacceptable risk will apply. Two years after publication, the entire AI regulation will apply, with the exception of specific provisions for which manufacturers need more time to adapt their products.