4 questions to ask when evaluating AI prototypes for bias

Veronica Torres

Contributor

Veronica Torres is the worldwide privacy and regulatory counsel for Jumio, where she provides strategic legal counsel regarding business processes, applications and technologies to ensure compliance with privacy laws.

It’s true there has been progress around data protection in the U.S. thanks to the passing of several laws, such as the California Consumer Privacy Act (CCPA), and nonbinding documents, such as the Blueprint for an AI Bill of Rights. Yet, there currently aren’t any standard regulations that dictate how technology companies should mitigate AI bias and discrimination.

As a result, many companies are falling behind in building ethical, privacy-first tools. Nearly 80% of data scientists in the U.S. are male and 66% are white — a lack of diversity and demographic representation in the development of automated decision-making tools that often leads to skewed results.

Significant improvements in design review processes are needed to ensure technology companies take all people into account when creating and modifying their products. Otherwise, organizations risk losing customers to competitors, tarnishing their reputation and facing serious lawsuits. According to IBM, about 85% of IT professionals believe consumers select companies that are transparent about how their AI algorithms are created, managed and used. We can expect this number to increase as more users continue taking a stand against harmful and biased technology.

So, what do companies need to keep in mind when analyzing their prototypes? Here are four questions development teams should ask themselves:

Have we ruled out all types of bias in our prototype?

To build effective, bias-free technology, AI teams should develop a list of questions to ask during the review process that can help them identify potential issues in their models.

There are many methodologies AI teams can use to assess their models, but before they do that, it’s critical to evaluate the end goal and whether any groups may be disproportionately affected by the AI’s outcomes.

For example, AI teams should take into consideration that the use of facial recognition technologies may inadvertently discriminate against people of color — something that occurs far too often in AI algorithms. Research conducted by the American Civil Liberties Union in 2018 showed that Amazon’s Rekognition face recognition tool incorrectly matched 28 members of the U.S. Congress with mugshots. A staggering 40% of the incorrect matches were people of color, even though people of color make up only about 20% of Congress.
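One way to make this kind of finding actionable in a review process is to break a model’s error rate out by demographic group and flag large disparities before release. A minimal sketch — the audit records, group labels and disparity threshold below are hypothetical, not drawn from any real evaluation:

```python
# Hypothetical bias audit: compare false-match rates across groups.
from collections import defaultdict

def false_match_rate_by_group(records):
    """records: list of (group, was_false_match) pairs from a model audit."""
    counts = defaultdict(lambda: [0, 0])  # group -> [false matches, total]
    for group, was_false_match in records:
        counts[group][0] += int(was_false_match)
        counts[group][1] += 1
    return {g: fm / total for g, (fm, total) in counts.items()}

# Illustrative audit results for two demographic groups, "A" and "B".
audit = [("A", True), ("A", False), ("A", False), ("A", False),
         ("B", True), ("B", True), ("B", False), ("B", False)]
rates = false_match_rate_by_group(audit)

# Flag the prototype for deeper review if one group's error rate is
# much higher than another's (the 1.5x threshold is an assumption).
disparity = max(rates.values()) / min(rates.values())
needs_review = disparity > 1.5
```

Here group B’s false-match rate is double group A’s, so the check would send the prototype back for review — exactly the kind of disproportionate outcome the ACLU test surfaced.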

By asking challenging questions, AI teams can find new ways to improve their models and strive to prevent these scenarios from occurring. For instance, a close examination can help them determine whether they need to look at more data or if they will need a third party, such as a privacy expert, to review their product.

Plot4AI is a great resource for those looking to start.

Have we enlisted a designated privacy professional or champion?

Due to the nature of their job, privacy professionals have been traditionally viewed as barriers to innovation, especially when they need to review every product, document and procedure. Rather than viewing a privacy department as an obstacle, organizations should instead see it as a critical enabler for innovation.

Enterprises must make it a priority to hire privacy experts and incorporate them into the design review process so that they can ensure their products work for everyone, including underserved populations, in a way that’s safe, compliant with regulations and free of bias.

While the process for integrating privacy professionals will vary according to the nature and scope of the organization, there are some key ways to ensure the privacy team has a seat at the table. Companies should start small by establishing a simple set of procedures to identify new processing activities involving personal information, as well as changes to existing ones.

The key to success with these procedures is to socialize the process with executives, as well as product managers and engineers, and ensure they are aligned with the organization’s definition of personal information. For example, while many organizations generally accept IP addresses and mobile device identifiers as personal information, outdated models and standards may categorize these as “anonymous.” Enterprises must be clear about what types of information qualify as personal information.
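Once an organization agrees on that definition, it can be encoded so that every proposed processing activity is screened against the same list. A minimal sketch — the field names and the screening function below are hypothetical examples, not a standard taxonomy:

```python
# Hypothetical screen: an organization's agreed definition of personal
# information, encoded so new processing activities are checked consistently.
# Note that IP addresses and device identifiers are included, even though
# outdated models might label them "anonymous".
PERSONAL_INFO_FIELDS = {
    "email", "full_name", "date_of_birth", "ssn",
    "ip_address", "device_id",
}

def flag_personal_info(schema_fields):
    """Return the fields in a proposed schema that need privacy review."""
    return sorted(f for f in schema_fields if f in PERSONAL_INFO_FIELDS)

# A new analytics pipeline proposes collecting these fields.
new_activity = ["ip_address", "page_views", "device_id", "session_length"]
needs_review = flag_personal_info(new_activity)
```

A shared list like this gives executives, product managers and engineers one source of truth for what triggers a privacy review, rather than each team applying its own definition.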

Furthermore, organizations may believe that personal information used in their products and services poses the greatest risk and should be the priority for reviews, but they must take into account that other departments, such as human resources and marketing, also process large amounts of personal information.

If an organization doesn’t have the bandwidth to hire a privacy professional for every department, they should consider designating a privacy champion or advocate who can spot issues and escalate them to the privacy team if needed.

Is our people and culture department involved?

Privacy teams shouldn’t be the only ones responsible for privacy within an organization. Every employee who has access to personal information or has an impact on the processing of personal information is responsible.

Expanding recruitment efforts to include candidates from different demographic groups and various regions can bring diverse voices and perspectives to the table. Hiring diverse employees shouldn’t stop at entry- and mid-level roles, either. A diverse leadership team and board of directors are essential to serve as representatives for those who cannot make it into the room.

Companywide training programs on ethics, privacy and AI can further support an inclusive culture while raising awareness of the importance of diversity, equity and inclusion (DEI) efforts. Only 32% of organizations require some form of DEI training for their employees, emphasizing how much improvement is needed in this area.

Does our prototype align with the AI Bill of Rights Blueprint?

The Biden administration issued a Blueprint for an AI Bill of Rights in October 2022, which outlines key principles, with detailed steps and recommendations for developing responsible AI and minimizing discrimination in algorithms.

The guidelines include five protections:

  1. Safe and effective systems.
  2. Algorithmic discrimination protections.
  3. Data privacy.
  4. Notice and explanation.
  5. Human alternatives, consideration and fallback.
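The five protections above lend themselves to a simple self-audit checklist that a team can run against each prototype. A minimal sketch — the review statuses and the `outstanding` helper are hypothetical, not part of the Blueprint itself:

```python
# Hypothetical self-audit against the Blueprint's five protections.
BLUEPRINT_PROTECTIONS = [
    "Safe and effective systems",
    "Algorithmic discrimination protections",
    "Data privacy",
    "Notice and explanation",
    "Human alternatives, consideration and fallback",
]

def outstanding(review):
    """Return the protections a prototype review has not yet addressed."""
    return [p for p in BLUEPRINT_PROTECTIONS if not review.get(p, False)]

# Illustrative state of one prototype's review: two items signed off so far.
review = {"Data privacy": True, "Notice and explanation": True}
gaps = outstanding(review)
```

Tracking review state this way keeps the open items visible until every protection has been explicitly considered, rather than assumed.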

While the AI Bill of Rights doesn’t enforce any metrics or impose specific regulations around AI, organizations should look to it as a baseline for their own development practices. The framework can be used as a strategic resource for companies looking to learn more about ethical AI, mitigating bias and giving consumers control over their data.

The road to privacy-first AI

Technology has the ability to revolutionize society as we know it, but it will ultimately fail if it doesn’t benefit everyone in the same way. As AI teams bring new products to life or modify their current tools, it’s critical that they apply the necessary steps and ask themselves the right questions to ensure they have ruled out all types of bias.

Building ethical, privacy-first tools will always be a work in progress, but the above considerations can help companies take steps in the right direction.
