Europe’s AI Act falls far short on protecting fundamental rights, civil society groups warn

Civil society has been poring over the detail of the European Commission’s proposal for a risk-based framework for regulating applications of artificial intelligence, which the EU’s executive presented back in April.

The verdict of over a hundred civil society organizations is that the draft legislation falls far short of protecting fundamental rights from AI-fuelled harms like scaled discrimination and blackbox bias — and they’ve published a call for major revisions.

“We specifically recognise that AI systems exacerbate structural imbalances of power, with harms often falling on the most marginalised in society. As such, this collective statement sets out the call of 115 civil society organisations towards an Artificial Intelligence Act that foregrounds fundamental rights,” they write, going on to identify nine “goals” (each with a variety of suggested revisions) in the full statement of recommendations.

The Commission, which drafted the legislation, billed the AI regulation as a framework for “trustworthy”, “human-centric” artificial intelligence. However, it risks veering rather closer to an enabling framework for data-driven abuse, per the civil society groups’ analysis — given the lack of essential checks and balances to actually prevent automated harms.

Today’s statement was drafted by European Digital Rights (EDRi), Access Now, Panoptykon Foundation, epicenter.works, AlgorithmWatch, European Disability Forum (EDF), Bits of Freedom, Fair Trials, PICUM, and ANEC — and has been signed by a full 115 not-for-profits from across Europe and beyond.

The advocacy groups are hoping their recommendations will be picked up by the European Parliament and Council as the co-legislators continue debating — and amending — the Artificial Intelligence Act (AIA) proposal ahead of any final text being adopted and applied across the EU.

Key suggestions from the civil society organizations include the need for the regulation to be amended to have a flexible, future-proofed approach to assessing AI-fuelled risks — meaning it would allow for updates to the list of use-cases that are considered unacceptable (and therefore prohibited) and those that the regulation merely limits, as well as the ability to expand the (currently fixed) list of so-called “high risk” uses.

The Commission’s approach to categorizing AI risks is too “rigid” and poorly designed (the groups’ statement literally calls it “dysfunctional”) to keep pace with fast-developing, iterating AI technologies and changing use-cases for data-driven technologies, in the NGOs’ view.

“This approach of ex ante designating AI systems to different risk categories does not consider that the level of risk also depends on the context in which a system is deployed and cannot be fully determined in advance,” they write. “Further, whilst the AIA includes a mechanism by which the list of ‘high-risk’ AI systems can be updated, it provides no scope for updating ‘unacceptable’ (Art. 5) and limited risk (Art. 52) lists.

“In addition, although Annex III can be updated to add new systems to the list of high-risk AI systems, systems can only be added within the scope of the existing eight area headings. Those headings cannot currently be modified within the framework of the AIA. These rigid aspects of the framework undermine the lasting relevance of the AIA, and in particular its capacity to respond to future developments and emerging risks for fundamental rights.”

They have also called out the Commission for a lack of ambition in framing prohibited use-cases of AI — urging a “full ban” on all social scoring systems; on all remote biometric identification in publicly accessible spaces (not just narrow limits on how law enforcement can use the tech); on all emotion recognition systems; on all discriminatory biometric categorisation; on all AI physiognomy; on all systems used to predict future criminal activity; and on all systems to profile and risk-assess in a migration context — arguing for prohibitions “on all AI systems posing an unacceptable risk to fundamental rights”.

On this the groups’ recommendations echo earlier calls for the regulation to go further and fully prohibit remote biometric surveillance — including from the EU’s data protection supervisor.

The civil society groups also want regulatory obligations to apply to users of high risk AI systems, not just providers (developers) — calling for a mandatory obligation on users to conduct and publish a fundamental rights impact assessment to ensure accountability around risks cannot be circumvented by the regulation’s predominant focus on providers.

After all, an AI technology that’s developed for one ostensible purpose could be applied for a different use-case that raises distinct rights risks.

Hence they want explicit obligations on users of “high risk” AIs to publish impact assessments — which they say should cover potential impacts on people, fundamental rights, the environment and the broader public interest.

“While some of the risk posed by the systems listed in Annex III comes from how they are designed, significant risks stem from how they are used. This means that providers cannot comprehensively assess the full potential impact of a high-risk AI system during the conformity assessment, and therefore that users must have obligations to uphold fundamental rights as well,” they urge.

They also argue for transparency requirements to be extended to users of high-risk systems — suggesting they should have to register the specific use of an AI system in the public database the regulation proposes to establish for providers of such systems.

“The EU database for stand-alone high-risk AI systems (Art. 60) provides a promising opportunity for increasing the transparency of AI systems vis-à-vis impacted individuals and civil society, and could greatly facilitate public interest research. However, the database currently only contains information on high-risk systems registered by providers, without information on the context of use,” they write, warning: “This loophole undermines the purpose of the database, as it will prevent the public from finding out where, by whom and for what purpose(s) high-risk AI systems are actually used.”

Another recommendation addresses a key civil society criticism of the proposed framework — that it does not offer individuals rights and avenues for redress when they are negatively impacted by AI.

This marks a striking departure from existing EU data protection law — which confers a suite of rights on people attached to their personal data and — at least on paper — allows them to seek redress for breaches, as well as for third parties to seek redress on individuals’ behalf. (Moreover, the General Data Protection Regulation includes provisions related to automated processing of personal data; with Article 22 giving people who are subject to solely automated decisions with a legal or similarly significant effect a right to information about the processing, and to request a human review or challenge the decision.)

The lack of “meaningful rights and redress” for people impacted by AI systems represents a gaping hole in the framework’s ability to guard against high risk automation scaling harms, the groups argue.

“The AIA currently does not confer individual rights to people impacted by AI systems, nor does it contain any provision for individual or collective redress, or a mechanism by which people or civil society can participate in the investigatory process of high-risk AI systems. As such, the AIA does not fully address the myriad harms that arise from the opacity, complexity, scale and power imbalance in which AI systems are deployed,” they warn.

They are recommending the legislation is amended to include two individual rights as a basis for judicial remedies — namely:

  • (a) The right not to be subject to AI systems that pose an unacceptable risk or do not comply with the Act; and
  • (b) The right to be provided with a clear and intelligible explanation, in a manner that is accessible for persons with disabilities, for decisions taken with the assistance of systems within the scope of the AIA;

They also suggest a right to an “effective remedy” for those whose rights are infringed “as a result of the putting into service of an AI system”. And, as you might expect, the civil society organizations want a mechanism for public interest groups such as themselves to be able to lodge a complaint with national supervisory authorities for a breach or in relation to AI systems that undermine fundamental rights or the public interest — which they specify should trigger an investigation. (GDPR complaints simply being ignored by oversight bodies is a major problem with effective enforcement of that regime.)

Other recommendations in the groups’ statement include the need for accessibility to be considered throughout the AI system’s lifecycle, and they call out the lack of accessibility requirements in the regulation — warning that this risks leading to the development and use of AI with “further barriers for persons with disabilities”; they also want explicit limits to ensure that harmonized product safety standards, which the regulation proposes to delegate to private standards bodies, only cover “genuinely technical” aspects of high-risk AI systems (so that political and fundamental rights decisions “remain firmly within the democratic scrutiny of EU legislators”, as they put it); and they want requirements on AI system users and providers to apply not only when the outputs are applied within the EU but also elsewhere — “to avoid risk of discrimination, surveillance, and abuse through technologies developed in the EU”.

Sustainability and environmental protection has also been overlooked, per the groups’ assessment.

On that they’re calling for “horizontal, public-facing transparency requirements on the resource consumption and greenhouse gas emission impacts of AI systems” — regardless of risk level, and covering AI system design, data management and training, application, and underlying infrastructures (hardware, data centres, etc.).

The European Commission frequently justifies its aim of encouraging the uptake of AI by touting automation as a key technology for enabling the bloc’s sought-for transition to a “climate-neutral” continent by 2050 — yet AI’s own energy and resource consumption is a much overlooked component of these so-called ‘smart’ systems. Without robust environmental auditing requirements also applying to AI, it’s simply PR to claim that AI will provide the answer to climate change.

The Commission has been contacted for a response to the civil society recommendations.

Last month, MEPs in the European Parliament voted to back a total ban on remote biometric surveillance technologies such as facial recognition, a ban on the use of private facial recognition databases and a ban on predictive policing based on behavioural data.

They also voted for a ban on social scoring systems which seek to rate the trustworthiness of citizens based on their behaviour or personality, and for a ban on AI assisting judicial decisions — another highly controversial area where automation is already being applied.

So MEPs are likely to take careful note of the civil society recommendations as they work on amendments to the AI Act.

In parallel the Council is in the process of determining its negotiating mandate on the regulation — and current proposals are pushing for a ban on social scoring by private companies but seeking carve outs for R&D and national security uses of AI.

Discussions between the Commission, Parliament and Council will determine the final shape of the regulation, although the parliament must also approve the final text of the regulation in a plenary vote — so MEPs’ views will play a key role.
