European Union squares the circle on the world’s first AI rulebook
After a 36-hour negotiating marathon, EU policymakers reached a political agreement on what is set to become the global benchmark for regulating Artificial Intelligence.
The AI Act is a landmark bill to regulate Artificial Intelligence based on its capacity to cause harm. The file crossed the finish line of the legislative process as the European Commission, Council, and Parliament settled their differences in a so-called trilogue on Friday (8 December).
At the political meeting, which set a new record for interinstitutional negotiations, the main EU institutions had to work through a daunting list of 21 open issues. As Euractiv reported, the first part of the trilogue closed the parts on open source, foundation models and governance.
However, the exhausted EU officials called for a recess 22 hours in, after it became clear that a proposal from the Spanish EU Council presidency on the sensitive law enforcement chapter was unacceptable to centre-left lawmakers. The discussions picked up again on Friday morning and only ended late at night.
EU countries, led by France, insisted on a broad exemption for any AI system used for military or defence purposes, even by an external contractor. The text's preamble will state that this exemption is in line with the EU treaties.
The AI Act includes a list of banned applications that pose an unacceptable risk, such as manipulative techniques, systems exploiting vulnerabilities, and social scoring. MEPs added facial recognition databases built on the bulk scraping of facial images, like Clearview AI.
Parliamentarians obtained a ban on emotion recognition in the workplace and educational institutions, with a carve-out for safety reasons, for instance, systems meant to detect whether a driver is falling asleep.
Parliamentarians also introduced a ban on predictive policing software used to assess an individual's risk of committing future crimes based on personal traits.
Moreover, parliamentarians wanted to forbid the use of AI systems that categorise persons based on sensitive traits like race, political opinions or religious beliefs.
Upon insistence from European governments, Parliament dropped the ban on using real-time remote biometric identification in exchange for some narrow law enforcement exceptions, namely to prevent terrorist attacks or locate the victims or suspects of a pre-defined list of serious crimes.
Ex-post use of this technology will see a similar regime but with less strict requirements. MEPs pushed to make these exceptions apply only as strictly necessary based on national legislation and prior authorisation of an independent authority. The Commission is to oversee potential abuses.
Parliamentarians insisted that the bans should not apply only to systems used within the Union but also prevent EU-based companies from selling these prohibited applications abroad. However, this export ban was not maintained because it was considered not to have a sufficient legal basis.
High-risk use cases
The AI regulation includes a list of use cases deemed to pose a significant risk of harm to people's safety and fundamental rights. The co-legislators included a series of filtering conditions meant to capture only genuine high-risk applications.
The sensitive areas include education, employment, critical infrastructure, public services, law enforcement, border control and administration of justice.
MEPs also proposed including the recommender systems of social media deemed ‘systemic’ under the Digital Services Act, but this idea did not make it into the agreement.
Parliament managed to introduce new use cases, such as AI systems used to predict migration trends and conduct border surveillance.
Law enforcement exemptions
The Council introduced several exemptions for law enforcement agencies, notably a derogation from the four-eyes principle when national law deems it disproportionate, and the exclusion of sensitive operational data from transparency requirements.
Providers and public bodies using high-risk systems must register them in an EU database. For police and migration control agencies, there will be a dedicated non-public section accessible only to an independent supervisory authority.
In exceptional circumstances related to public security, law enforcement authorities may deploy a high-risk system that has not passed the conformity assessment procedure, provided they request judicial authorisation in parallel.
Fundamental rights impact assessment
Centre-left MEPs introduced an obligation for public bodies and private entities providing services of general interest, such as hospitals, schools, banks and insurance companies, deploying high-risk systems to conduct a fundamental rights impact assessment.
Responsibility along the supply chain
Providers of general-purpose AI systems like ChatGPT must supply all the information necessary to comply with the AI law's obligations to downstream economic operators that build an application falling into the high-risk category.
In addition, providers of components integrated by an SME or start-up into a high-risk AI system are prevented from unilaterally imposing unfair contractual terms.
The administrative fines are set as a fixed sum or a percentage of the company's annual global turnover, whichever is higher.
For the most severe violations of the prohibited applications, fines can be up to 6.5% or €35 million, 3% or €15 million for violations of obligations for system and model providers, and 1.5% or half a million euros for failing to provide accurate information.
The AI Act will apply two years after it enters into force, shortened to six months for the bans. Requirements for high-risk AI systems, powerful AI models, the conformity assessment bodies, and the governance chapter will start applying one year earlier.