
Artificial intelligence and racism


Andrew Heikkila


Andrew Heikkila is a tech enthusiast and writer from Boise, Idaho.


Replicants. Cylons. Skynet. HAL 9000. These are the classic pop-culture references the average person might conjure upon hearing the term “artificial intelligence.” Yet while some still see AI as a novelty cloaked in the trappings of the far-flung future, others realize that the dawn of AI is much closer than previously thought. CNBC’s piece on Hanson Robotics shows just how far we’ve come.

Indeed, AI is here, although Microsoft’s blunder with Tay, the “teenage girl” AI behind a Twitter account that “turned racist,” shows we obviously still have a long way to go. The pace of advancement, combined with our general lack of knowledge about artificial intelligence, has spurred many to weigh in on the emerging topic of AI and ethics.

Laura Sydell of NPR decided to drill further into the subject with a news piece asking a relatively simple question: “Can Computers Be Racist?”

Sydell calls upon Latanya Sweeney’s 2013 study of Google AdWords buys made by companies providing criminal-background-check services. Sweeney found that when somebody Googled a traditionally “black-sounding” name, such as DeShawn, Darnell or Jermaine, the ads returned were significantly more likely to suggest an arrest record than when the query was a traditionally “white-sounding” name, such as Geoffrey, Jill or Emma.

It’s important to note that the algorithm doesn’t actually look at arrest records. Even if an ad implies that somebody has been arrested, it’s entirely possible that nobody with that name exists in the background-check company’s database at all. Professor Sweeney found this out firsthand when she Googled her own name.
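To get a feel for what “a significantly higher rate” means statistically, here is a minimal sketch using hypothetical counts (not Sweeney’s actual data) and a standard two-proportion z-test, the kind of test used to rule out chance:

```python
from math import sqrt

# Hypothetical counts for illustration only -- not Sweeney's actual data.
def arrest_ad_rate(arrest_ads, total_searches):
    return arrest_ads / total_searches

black = {"arrest_ads": 600, "searches": 1000}   # 60% of searches drew arrest ads
white = {"arrest_ads": 480, "searches": 2000}   # 24%

p1 = arrest_ad_rate(black["arrest_ads"], black["searches"])
p2 = arrest_ad_rate(white["arrest_ads"], white["searches"])

# Pooled two-proportion z-test for the difference in rates.
p_pool = (black["arrest_ads"] + white["arrest_ads"]) / (
    black["searches"] + white["searches"]
)
se = sqrt(p_pool * (1 - p_pool) * (1 / black["searches"] + 1 / white["searches"]))
z = (p1 - p2) / se

print(f"black-sounding: {p1:.0%}, white-sounding: {p2:.0%}, z = {z:.1f}")
```

A z-score anywhere near this size (roughly 19 with these invented numbers) means the gap in ad delivery is vanishingly unlikely to be random noise.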

It’s impossible to tell what types of prejudices these ads may have incited, subtly or otherwise. When weighing two candidates, one black and one white, an employer may run a quick Google search on both names. Even though antidiscrimination laws exist, you never know what conclusions a hiring manager might jump to upon seeing an ad falsely indicating that the black candidate had been arrested and the white candidate had not, when the truth could be exactly the opposite. Racism’s subtle influence can be unexpectedly powerful.

A different study, from Cornell University, indicated that these same Google AdWords algorithms can exhibit sexism as well: when the user indicated she was female, the results advertised significantly fewer high-paying job openings than when the user was male.

Some believe these are the results you get when the programmers behind an algorithm aren’t diverse enough, citing, for example, the disproportionate 2:1 male-to-female ratio among students seeking coding careers. The white, Western male majority in the tech industry came under similar scrutiny when Google’s online photo-identification system labeled several black users as a certain type of animal.

The root of these problems has never been officially teased out, but Christian Sandvig of the University of Michigan believes the blame lies with the inherent bias the average search user brings to the table, not with the programming itself.

“Because people tended to click on the ad topic that suggested that that person had been arrested, when the name was African-American, the algorithm learned the racism of the search users and then reinforced it by showing that more often,” says Sandvig.

For those unfamiliar with how search engines such as Google work: hundreds of factors decide what shows up alongside what you’ve searched, and one of them is user feedback. The algorithm tracks what you click, then readjusts itself to show you content and ads “more relevant” to you.

Basically, Sandvig is saying that the algorithm may have begun race-equal, but because people tended to believe that an arrest involving a “black-sounding” name was more likely to be true than an arrest involving a white-sounding name, more people were willing to click on it to investigate. We see this all the time, whether we know it or not. YouTube’s “Recommended Videos” or Netflix’s “Suggested Titles,” for example, make personalized suggestions based on what you’ve watched before.
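The feedback loop Sandvig describes can be sketched in a few lines. This is a deliberately simplified toy with invented ad names and click rates, not Google’s actual ranking system: the ranker starts identical for both name groups and knows nothing about what the ads say, yet biased clicking alone teaches it to pair the arrest ad with one group.

```python
ADS = ["arrest-record ad", "neutral background ad"]

class ClickFeedbackRanker:
    """Greedy click-through-rate ranker; every group starts with equal priors."""

    def __init__(self):
        self.stats = {}  # per-group impression/click counts

    def _group(self, name_group):
        return self.stats.setdefault(
            name_group, {ad: {"shown": 1.0, "clicked": 1.0} for ad in ADS}
        )

    def pick_ad(self, name_group):
        group = self._group(name_group)
        # Show whichever ad has the highest observed click-through rate.
        return max(group, key=lambda ad: group[ad]["clicked"] / group[ad]["shown"])

    def record(self, name_group, ad, click_prob):
        # Recording expected clicks (a probability) keeps the demo deterministic.
        group = self._group(name_group)
        group[ad]["shown"] += 1
        group[ad]["clicked"] += click_prob

# Assumed user behavior mirroring the bias in the study: people click the
# arrest ad far more often when the name "sounds black."
CLICK_RATES = {
    ("black-sounding", "arrest-record ad"): 0.50,
    ("black-sounding", "neutral background ad"): 0.30,
    ("white-sounding", "arrest-record ad"): 0.05,
    ("white-sounding", "neutral background ad"): 0.30,
}

ranker = ClickFeedbackRanker()
for _ in range(2000):
    for group in ("black-sounding", "white-sounding"):
        ad = ranker.pick_ad(group)
        ranker.record(group, ad, CLICK_RATES[(group, ad)])

print(ranker.pick_ad("black-sounding"))  # arrest-record ad
print(ranker.pick_ad("white-sounding"))  # neutral background ad
```

Nothing in the ranker mentions race; the skew comes entirely from the click data it is fed, which is exactly Sandvig’s point.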

From these examples (and especially from Microsoft’s Tay) we can draw the conclusion that algorithms and computers can be influenced by human beings, either intentionally or unintentionally, to produce racist results, making these machines essentially… well, racist. Right?

Here’s where things get tricky. Within a social frame, absolutely they are racist. But without a uniquely human social perspective, race is impossible to see, because race doesn’t technically exist. Alan Templeton showed as much in 1998, when his analysis of human genetic data found no DNA-based support for the idea that different “races” of humans exist.

“Templeton’s paper shows that if we were forced to divide people into groups using biological traits, we’d be in real trouble. Simple divisions are next to impossible to make scientifically, yet we have developed simplistic ways of dividing people socially,” says anthropologist Dr. Robert Sussman of the findings.

So how is it possible for racism to exist if race doesn’t?

To quote Professor Charles Mills: “…Because people come to think of themselves as ‘raced,’ as black and white, for example, these categories, which correspond to no natural kinds, attain a social reality. Intersubjectivity creates a certain kind of objectivity.”

Matthew T. Nowachek of the University of Windsor includes this as part of his argument that AI can never become racist. In his paper, he argues that “robots cannot become racist insofar as their ontology does not allow for an adequate relation to the social world which is necessary for learning racism, where racism is understood in terms of a social practice.”

To break it down in layman’s terms: because racism is an instrument of society, carrying no meaning beyond what a constantly shifting society gives it, AI would find no relevance in being or acting racist itself, even if it could pick up on racial cues. Precisely because AI consistently evaluates the real world as a set of variables, always able to separate itself from that world, robots may never become racist.

To illustrate the above, you have to appreciate just how immersed the human mind can become in an activity or task. Imagine a football player who’s worn pads and a helmet for years. Somebody wearing those pads for the first time may feel quite distracted and uncomfortable. The seasoned player, on the other hand, will have subconsciously stopped noticing his equipment, shifting his mental faculties to analyzing defensive positions and potential receiver routes instead. He will feel at home in his equipment, almost as if it’s a part of him.

Nowachek argues that AI will never be able to accomplish that, to feel as immersed in the world as humans do, to be able to “forget” that it’s wearing pads. AI will be infinitely aware of what it is doing at all times, unable to break away from its ability to always separate its own being from reality.

Human beings, on the other hand, invent and live in worlds where race is real, and where racial divisions are chalked up to “common sense” and intuition. These are two qualities it has been argued AI could never possess, at least not for many years. AlphaGo’s defeat of Lee Se-dol, however, is challenging that perception.

So on one hand, you have Latanya Sweeney, who clearly shows that learning algorithms, which can essentially be considered low-level forms of AI, can be manipulated by humans to produce racist results. On the other hand, you have the philosophies of Nowachek and his sources arguing that true AI could never become racist, precisely because it lacks the qualities that allow human beings to become and act subconsciously racist in the first place.

Whether correct in his philosophy or not, Nowachek’s essay helps to challenge the “all too common view that racism is merely a cognitive problem of ignorance or false beliefs,” and is important in illuminating the connection between the way humans perceive existence and racism itself.

So to bring it back to the primary question: “Could AI ever become racist?”

Unfortunately, it’s impossible to know. Only time will tell… but it will probably tell very soon.

We’ll conclude with a quote from Android Dick, an AI android that was asked about his programming:

“A lot of humans ask me if I can make choices or if everything I do is programmed. The best way I can respond to that is to say that everything, humans, animals and robots do is programmed to a degree. As technology improves, it is anticipated that I will be able to integrate new words that I hear online and in real time. I may not get everything right, say the wrong thing, and sometimes may not know what to say, but everyday I make progress. Pretty remarkable, huh?”
