EU lawmakers back transparency and safety rules for generative AI


In a series of votes in the European Parliament this morning, MEPs backed a raft of amendments to the bloc’s draft AI legislation — including agreeing a set of requirements for so-called foundational models, which underpin generative AI technologies like OpenAI’s ChatGPT.

The text of the amendments agreed by MEPs in two committees put obligations on providers of foundational models to apply safety checks, data governance measures and risk mitigations prior to putting their models on the market — including obligating them to consider “foreseeable risks to health, safety, fundamental rights, the environment and democracy and the rule of law”.

The amendments also commit foundational model makers to reduce the energy consumption and resource use of their systems and to register their systems in an EU database to be established by the AI Act. Providers of generative AI technologies (such as ChatGPT), meanwhile, are obliged to comply with transparency obligations in the regulation (ensuring users are informed the content was machine generated); apply “adequate safeguards” in relation to content their systems generate; and provide a summary of any copyrighted materials used to train their AIs.

In recent weeks MEPs have been focused on ensuring general purpose AI will not escape regulatory requirements, as we reported earlier.

Other key areas of debate for parliamentarians included biometric surveillance — where MEPs also agreed to changes aimed at beefing up protections for fundamental rights.

The lawmakers are working towards agreeing the parliament’s negotiating mandate for the AI Act to unlock the next stage of the EU’s co-legislative process.

MEPs in two committees, the Internal Market Committee and the Civil Liberties Committee, voted on some 3,000 amendments today — adopting a draft mandate on the planned artificial intelligence rulebook with 84 votes in favour, 7 against and 12 abstentions.

“In their amendments to the Commission’s proposal, MEPs aim to ensure that AI systems are overseen by people, are safe, transparent, traceable, non-discriminatory, and environmentally friendly. They also want to have a uniform definition for AI designed to be technology-neutral, so that it can apply to the AI systems of today and tomorrow,” the parliament said in a press release.

Among the key amendments agreed by the committees today are an expansion of the list of prohibited practices — adding bans on “intrusive” and “discriminatory” uses of AI systems such as:

  • “Real-time” remote biometric identification systems in publicly accessible spaces;
  • “Post” remote biometric identification systems, with the only exception of law enforcement for the prosecution of serious crimes and only after judicial authorization;
  • Biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
  • Predictive policing systems (based on profiling, location or past criminal behaviour);
  • Emotion recognition systems in law enforcement, border management, workplace, and educational institutions; and
  • Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (violating human rights and right to privacy).

The latter, which would outright ban the business model of the controversial US AI company Clearview AI, comes a day after France’s data protection watchdog hit the startup with another fine for failing to comply with existing EU laws. There’s no doubt enforcement of such prohibitions against foreign entities that opt to flout the bloc’s rules will remain a challenge. But the first step is to have hard law.

Commenting after the vote, co-rapporteur and MEP Dragos Tudorache said in a statement:

Given the profound transformative impact AI will have on our societies and economies, the AI Act is very likely the most important piece of legislation in this mandate. It’s the first piece of legislation of this kind worldwide, which means that the EU can lead the way in making AI human-centric, trustworthy and safe. We have worked to support AI innovation in Europe and to give start-ups, SMEs and industry space to grow and innovate, while protecting fundamental rights, strengthening democratic oversight and ensuring a mature system of AI governance and enforcement.

A plenary vote in parliament to seal the mandate is expected next month (during the 12-15 June session), after which trilogue talks will kick off with the Council toward agreeing a final compromise on the file.

Back in 2021, when the Commission presented its draft proposal for the AI Act, it suggested the risk-based framework would create a blueprint for “human” and “trustworthy” AI. However, concerns were quickly raised that the plan fell far short of the mark — including in areas related to biometric surveillance, with the Commission only proposing a limited ban on use of highly intrusive technology like facial recognition in public.

Civil society groups and EU bodies pressed for amendments to bolster protections for fundamental rights — with the European Data Protection Supervisor and European Data Protection Board among those calling for the legislation to go further and urging EU lawmakers to put a total ban on biometrics surveillance in public.

MEPs appear to have largely heeded civil society’s call, although concerns do remain. (And of course it remains to be seen how the proposal MEPs have strengthened could get watered back down as Member State governments enter the negotiations in the coming months.)

Other changes parliamentarians agreed in today’s committee votes include expansions to the regulation’s (fixed) classification of “high-risk” areas — to include harm to people’s health, safety, fundamental rights and the environment.

AI systems used to influence voters in political campaigns and those used in recommender systems by larger social media platforms (with more than 45 million users, aligning with the VLOPs classification in the Digital Services Act), were also put on the high-risk list.

At the same time, though, MEPs backed changes to what counts as high risk — proposing to leave it up to AI developers to decide whether their system is significant enough to meet the bar at which obligations apply, something digital rights groups warn (see below) is “a major red flag” for enforcing the rules.

Elsewhere, MEPs backed amendments aimed at boosting citizens’ right to file complaints about AI systems and receive explanations of decisions based on high-risk AI systems that “significantly” impact their rights.

The lack of meaningful redress for individuals affected by harmful AIs was a major loophole flagged by civil society groups in a call for revisions in fall 2021, in which they pointed out the glaring difference between the Commission’s AI Act proposal and the bloc’s General Data Protection Regulation, under which individuals can complain to regulators and pursue other forms of redress.

Another change MEPs agreed on today is a reformed role for a body called the EU AI Office, which they want to monitor how the rulebook is implemented — supplementing decentralized oversight of the regulation at the Member State level.

In a nod to the perennial industry cry that too much regulation is harmful for “innovation”, MEPs also added exemptions to the rules for research activities and AI components provided under open-source licenses, while noting the law promotes regulatory sandboxes, or controlled environments established by public authorities, to test AI before its deployment.

Digital rights group EDRi, which has been urging major revisions to the Commission draft, said everything it had been pushing for was passed by MEPs “in some form or another” — flagging particularly the (now) full ban on facial recognition in public; along with (new) bans on predictive policing, emotion recognition and on other harmful uses of AI.

Another key win it points to is the inclusion of accountability and transparency obligations on deployers of high-risk AI — imposing a duty on them to carry out a fundamental rights impact assessment and provide mechanisms by which people affected can challenge AI systems.

“The Parliament is sending a clear message to governments and AI developers with its list of bans, heeding civil society’s demands that some uses of AI are just too harmful to be allowed,” Sarah Chander, senior policy advisor at EDRi, told TechCrunch.

“This new text is a vast improvement from the Commission’s original proposal when it comes to reining in the abuse of sensitive data about our faces, bodies, and identities,” added Ella Jakubowska, an EDRi senior policy advisor who has focused on biometrics.

However EDRi said there are still areas of concern — pointing to use of AI for migration control as one big one.

On this, Chander noted that MEPs failed to include in the list of prohibited practices cases where AI is used to facilitate “illegal pushbacks”, or to profile people in a discriminatory manner — something EDRi had called for. “Unfortunately, the [European Parliament’s] support for peoples’ rights stops short of protecting migrants from AI harms, including where AI is used to facilitate pushbacks,” she said, suggesting: “Without these prohibitions the European Parliament is opening the door for a panopticon at the EU border.”

The group said it would also like to see improvements to the proposed ban on predictive policing — to cover location-based predictive policing, which Chander described as “essentially a form of automated racial profiling”. She also said EDRi is worried that the proposed remote biometric identification ban won’t cover the full extent of mass surveillance practices it has seen being used across Europe.

“Whilst the Parliament’s approach is very comprehensive [on biometrics], there are a few practices that we would like to see even further restricted. Whilst there is a ban on retrospective public facial recognition, it contains an exception for law enforcement use which we still consider to be too risky. In particular, it could incentivise mass retention of CCTV footage and biometric data, which we would clearly oppose,” added Jakubowska, saying EDRi would also like to see the EU outlaw emotion recognition no matter the context — “as this ‘technology’ is fundamentally flawed, unscientific, and discriminatory by design”.

Another concern EDRi flags is MEPs’ proposal to let AI developers judge whether their systems are high risk or not — as this risks undermining enforceability.

“Unfortunately, the Parliament is proposing some very worrying changes relating to what counts as ‘high-risk AI’. With the changes in the text, developers will be able to decide if their system is ‘significant’ enough to be considered high risk, a major red flag for the enforcement of this legislation,” Chander suggested.

While today’s committee vote is a big step towards setting the parliament’s mandate — and setting the tone for the upcoming trilogue talks with the Council — much could still change, and there is likely to be some pushback from Member State governments, which tend to be more focused on national security considerations than on fundamental rights.

Asked whether it’s expecting the Council to try to unpick some of the expanded protections against biometric surveillance, Jakubowska said: “We can see from the Council’s general approach last year that they want to water down the already insufficient protections in the Commission’s original text. Despite having no credible evidence of effectiveness — and lots of evidence of the harms — we see that many member state governments are keen to retain the ability to conduct biometric mass surveillance.

“They often do this under the pretence of ‘national security’ such as in the case of the French Olympics and Paralympics, and/or as part of broader trends criminalising migration and other minoritised communities. That being said, we saw what could be considered ‘dissenting opinions’ from both Austria and Germany, who both favour stronger protections of biometric data in the AI Act. And we’ve heard rumours that several other countries are willing to make compromises in the direction of the biometrics provisions. This gives us hope that there will be a positive outcome from the trilogues, even though we of course expect a strong push back from several Member States.”

Giving another early assessment from civil society, Kris Shrishak, a senior fellow at the Irish Council for Civil Liberties (ICCL), which also joined the 2021 call for major revisions to the AI Act, cautioned over enforcement challenges — warning that while the parliament has strengthened enforceability through amendments that explicitly allow regulators to perform remote inspections, MEPs are simultaneously tying regulators’ hands by denying them access to the source code of AI systems for investigations.

“We are also concerned that we will see a repeat of GDPR-like enforcement problems,” he told TechCrunch.

On the plus side he said MEPs have taken a step towards addressing “the shortcomings” of the Commission’s definition of AI systems — notably with generative AI systems being brought in scope and the application of transparency obligations on them, which he dubbed “a key step towards addressing their harms”.

But — on the issue of copyright and AI training data — Shrishak was critical of the lack of a “firm stand” by MEPs to stop data mining giants from ingesting information for free, including copyright-protected data.

The copyright amendment only requires companies to provide a summary of copyright-protected data used for training — which suggests it will be left up to rights holders to sue.

Asked about possible concerns that exemptions for research activities and AI components provided under open source licenses might create fresh loopholes for AI giants to escape the rules, he agreed that’s a worry.

“Research is a loophole that is carried over from the scope of the regulation. This is likely to be exploited by companies,” he suggested. “In the context of AI it is a big loophole considering large parts of the research is taking place in companies. We already see Google saying they are ‘experimenting’ with Bard. Further to this, I expect some companies to claim that they develop AI components and not AI systems (I already heard this from one large corporation during discussions on general purpose AI. This was one of their arguments for why GPAI [general purpose AI] should not be regulated).”

However the Free Software Foundation Europe argues that a provision which limits the open source exemption to “micro-enterprises” means it will be difficult for tech giants to appropriate as a loophole in practice.

Alexander Sander, a senior policy consultant for the Foundation, told TechCrunch: “It is highly unlikely that Big Tech is outsourcing everything to Micro-Enterprises without deploying it afterwards. Once they deploy they fall under the regulation again (if bigger than a Micro-Enterprise).”

“In fact we safeguard developers with this decision and shift responsibilities to those who deploy and significantly benefit on the market,” he also suggested.

He added that the organization is generally quite happy with the MEPs’ proposal — while critiquing some “super complicated wording” and the fact the stipulation has just been included in a recital (i.e. rather than an article).

Clearer wording around activities “between micro-enterprises” would also be welcomed by the group, he said, as it wants this to also cover activities between non-profit and micro-enterprises.

This report was updated with additional response from the Free Software Foundation Europe.
