Machine Learning And Human Bias: An Uneasy Pair

Image Credits: Lightspring / Shutterstock

Jason Baldridge

Contributor
Jason Baldridge is co-founder of People Pattern and associate professor in the Department of Linguistics at the University of Texas at Austin. His primary specialization is computational linguistics, and his core research interests are formal and computational models of syntax, probabilistic models of both syntax and discourse structure, and machine learning for natural language tasks in general.

“We’re watching you.” This was the warning that the Chicago Police Department gave to more than 400 people on its “Heat List.” The list, an attempt to identify the people most likely to commit violent crime in the city, was created with a predictive algorithm that focused on factors including, per the Chicago Tribune, “his or her acquaintances and their arrest histories – and whether any of those associates have been shot in the past.”

Algorithms like this obviously raise some uncomfortable questions. Who is on this list and why? Does it take race, gender, education and other personal factors into account? Given that America’s prison population is overwhelmingly made up of Black and Latino men, would an algorithm based on relationships disproportionately target young men of color?

There are many reasons why such algorithms are of interest, but the rewards are inseparable from the risks. Humans are biased, and the biases we encode into machines are then scaled and automated. This is not inherently bad (or good), but it raises a question: how do we operate in a world increasingly consumed by “personal analytics” that can predict race, religion, gender, age, sexual orientation, health status and much more?

I’d wager that most readers feel a little uneasy about how the Chicago PD Heat List was implemented – even if they agree that the intention behind the algorithm was good. To use machine learning and public data responsibly, we need to have an uncomfortable discussion about what we teach machines and how we use the output.

What We Teach Machines

Most people have an intuitive understanding of categories concerning race, religion and gender, yet when asked to define them precisely, they quickly find themselves hard-pressed. Human beings can’t agree objectively on what race a given person is. As Sen and Wasow (2014) argue, race is a social construct based on a mixture of both mutable and immutable traits including skin color, religion, location and diet.

As a result, the definition of who falls into which racial category varies over time (e.g. Italians were once considered to be black in the American South), and a given individual may identify with one race at one time and with another race a decade later. This inability to precisely define a concept such as race represents a risk for personal analytics.

Any program designed to predict, manipulate and display racial categories must operationalize them both for internal processing and for human consumption. Machine learning is one of the most effective frameworks for doing so because machine learning programs learn from human-provided examples rather than explicit rules and heuristics.
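To make that distinction concrete, here is a minimal, hypothetical sketch (the data, labels and function name are all invented for illustration): the program below contains no classification rules at all, only human-labeled examples, and its predictions simply echo the judgments, and therefore the biases, baked into those labels.

```python
# Toy training data: the program never sees explicit rules or heuristics,
# only human-labeled examples. Whatever biases the annotators hold are
# encoded directly in these labels.
examples = [
    ("free prize click now", "spam"),
    ("win money fast", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch tomorrow", "ham"),
]

def predict(text):
    """Nearest-neighbor by word overlap: the prediction is whatever label
    the most similar human-provided example carried."""
    words = set(text.lower().split())
    best = max(examples, key=lambda ex: len(words & set(ex[0].split())))
    return best[1]

print(predict("click to win a free prize"))  # follows the annotators' labels
```

Swap in a different set of annotators and the identical input can receive a different label; the learning machinery is unchanged, but the operationalized categories shift with the humans who supplied the examples.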

So let’s say a programmer builds an algorithm that makes perfect racial predictions based on the categories known to an average American — what is called a “common-knowledge test.” Yet many of its outputs will be strange from other perspectives: many Brazilians who are considered white in their home country would be categorized as black in the United States.

Biracial Americans and individuals from places such as India, Turkey and Israel often challenge racial categorization, at least as Americans understand it. The algorithm will thus necessarily operationalize the biases of its creators, and these biases will conflict with those of others.

The result is a machine learning program that treats race as its creators do — not necessarily as the individuals see themselves or as the users of the program conceive of race. This may be relatively unproblematic in use cases like marketing and social science research, but with the Chicago PD Heat List, ‘No Fly Lists’ and other public safety applications, biases and misperceptions could have severe ramifications at scale.

How We Use The Data

On an individual scale, any algorithm for personal analytics will make errors. A person is multi-faceted and complex, and rarely do we fit neatly into clearly delineated groups. Nonetheless, when individual-level predictions are aggregated, they can support better understanding of groups of people at scale, help us identify disparities, and inform better decisions about how to transform our society for the better.
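A small simulation can make this aggregation point concrete. The numbers below (a 30 percent group-membership rate and a predictor that errs 25 percent of the time) are purely hypothetical, but they show how individual predictions that are often wrong can still yield a stable, correctable group-level estimate:

```python
import random

random.seed(0)

# Hypothetical population: 30% belong to some group, and a noisy
# per-person predictor flips the truth 25% of the time.
true_rate, error_rate, n = 0.30, 0.25, 100_000

predictions = []
for _ in range(n):
    truth = random.random() < true_rate
    predicted = truth if random.random() > error_rate else not truth
    predictions.append(predicted)

# Any single prediction is unreliable, but the aggregate rate is stable,
# and the known error rate lets us correct the raw estimate back toward
# the true group proportion.
raw = sum(predictions) / n                      # pulled toward 50% by noise
corrected = (raw - error_rate) / (1 - 2 * error_rate)
print(round(raw, 3), round(corrected, 3))
```

The raw aggregate sits near 40 percent in this setup, yet the corrected estimate lands close to the true 30 percent, which is why population-level questions can tolerate individual-level errors that would be unacceptable when targeting a single person.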

So if knocking on the doors of potential criminals seems wrong, do we have alternatives?

With the Chicago PD’s algorithm, one option is to generate a ‘Heat Map’ based on the locations of high-risk populations and activities. Los Angeles, Atlanta, Santa Cruz and many other police jurisdictions already do something similar using a predictive policing tool called PredPol. It allows police departments to increase their presence in crime-prone areas, at the right times, without using any personal data. It strictly looks at type, place and time of crimes.
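The actual PredPol model is proprietary, but the kind of person-free signal described above can be sketched as a simple count over crime type, place and time (every incident below is invented for illustration):

```python
from collections import Counter

# Hypothetical incident log: (crime type, grid cell, hour of day).
# No names, no associates, no personal data of any kind.
incidents = [
    ("burglary", (3, 7), 22), ("burglary", (3, 7), 23),
    ("assault",  (3, 7), 23), ("theft",    (1, 2), 14),
    ("burglary", (3, 8), 21), ("theft",    (1, 2), 15),
]

def hot_spots(log, top=2):
    """Count incidents per (grid cell, time band) and surface the hottest
    slots, so patrols can be placed by place and time rather than person."""
    bands = Counter((cell, "night" if hour >= 18 else "day")
                    for _, cell, hour in log)
    return bands.most_common(top)

print(hot_spots(incidents))
```

Here cell (3, 7) at night surfaces as the top slot, which is exactly the kind of output a department could act on without ever touching an individual’s record.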

But is profiling by location another form of discrimination? Would police inevitably stop and ticket more people in heat map areas? If I can only afford to live in an economically depressed area, will I be stopped and questioned by police more often than individuals living in a wealthy area? Could a targeted, predictable police presence drive crime into locations where police are unprepared, and thus expand the geography of crime in a city?

Perhaps there is a net good, instead. With police strategically located and working with communities, there is an opportunity to reduce crime and create greater opportunity for residents. An algorithm has the potential to discriminate less than human analysts. PredPol reports double-digit crime reduction in cities that implement the software. The Chicago PD hasn’t released any data on the Heat List’s effectiveness yet.

The Chicago PD and PredPol models are important reminders that personal analytics aren’t the only option. Before we operationalize identity – and certainly before we target individuals and knock on doors – we have to consider the ethics of our approach, not just the elegance of the solution.

Taboo, But Necessary

Talking about bias is uncomfortable, but we can’t afford to ignore this conversation in the machine learning space. To avoid scaling stereotypes or infringing on personal rights, we have to talk about this as it applies to each machine learning algorithm that aims to identify and categorize people.

Transparency in the inputs to such algorithms and how their outputs are used is likely to be an important component of such efforts. Ethical considerations like these have recently been recognized as important problems by the academic community: new courses are being created and meetings like FAT-ML are providing venues for papers and discussions on the topic.

It’s easy to imagine how the Chicago PD Heat List could be used in a responsible way. It’s also easy to imagine worst-case scenarios: What if Senator Joe McCarthy had access to personal analytics during the communist witch hunts of the early 1950s? Today, what if countries with anti-gay and anti-transgender laws used this technology to identify and harm LGBT individuals?

These are troubling scenarios, but not sufficient reasons to bury this technology. There is a huge opportunity to help rather than harm people. Using machine learning, scholars and policymakers alike can ask important questions and use the results to inform decisions that have significant impact at the individual or societal scale.

Like so many technologies, machine learning itself is value neutral, but the final applications will reflect the problems, preferences and worldviews of the creators.
