EU sets out rules for AI

“On Artificial Intelligence, trust is a must, not a nice to have,” says digital VP Margrethe Vestager (pictured). “With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted. By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way. Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.”

The rules differentiate between high-risk, limited-risk and minimal-risk AI applications.

AI systems identified as high-risk include AI technology used in:

  • Critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;
  • Educational or vocational training, that may determine the access to education and professional course of someone’s life (e.g. scoring of exams);
  • Safety components of products (e.g. AI application in robot-assisted surgery);
  • Employment, workers management and access to self-employment (e.g. CV-sorting software for recruitment procedures);
  • Essential private and public services (e.g. credit scoring denying citizens opportunity to obtain a loan);
  • Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence);
  • Migration, asylum and border control management (e.g. verification of authenticity of travel documents);
  • Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).

High-risk AI systems will be subject to strict obligations before they can be put on the market:

  • Adequate risk assessment and mitigation systems;
  • High quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
  • Logging of activity to ensure traceability of results;
  • Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
  • Clear and adequate information to the user;
  • Appropriate human oversight measures to minimise risk;
  • High level of robustness, security and accuracy.

In particular, all remote biometric identification systems are considered high risk and subject to strict requirements. Their live use in publicly accessible spaces for law enforcement purposes is prohibited in principle.

Narrow exceptions are strictly defined and regulated (such as where strictly necessary to search for a missing child, to prevent a specific and imminent terrorist threat or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence).

Such use is subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and the databases searched.

Limited-risk AI systems will have specific transparency obligations: for example, when using AI systems such as chatbots, users should be aware that they are interacting with a machine so they can make an informed decision to continue or step back.

Minimal-risk AI systems, such as AI-enabled video games or spam filters, require no regulation. The vast majority of AI systems fall into this category.

The European Artificial Intelligence Board will facilitate the regulations’ implementation, as well as drive the development of standards for AI. 

Fines of up to 6% of revenue are foreseen for companies that don’t comply with bans or data requirements.

Smaller fines are foreseen for companies that don’t comply with other requirements spelled out in the new rules.

The rules will apply both to developers and users of high-risk AI systems.

Providers of high-risk AI systems must subject them to a conformity assessment before deployment.

Other obligations for high-risk AI include the use of high-quality datasets, ensuring traceability of results, and human oversight to minimise risk.

The criteria for ‘high-risk’ applications include intended purpose, the number of potentially affected people, and the irreversibility of harm.

The rules require approval by the European Parliament and member states before becoming law – a process which can take several years.
