Is your startup using AI responsibly?

Ganes Kesari

Ganes Kesari is a co-founder and head of analytics at Gramener. He helps transform organizations through advisory in building data science teams and adopting insights as data stories.

Ever since they began deploying the technology, tech companies have faced numerous accusations of using artificial intelligence unethically.

One example comes from Alphabet’s Google, which created a hate speech-detection algorithm that assigned higher “toxicity scores” to the speech of African Americans than to that of their white counterparts. Researchers at the University of Washington analyzed databases of thousands of tweets deemed “offensive” or “hateful” by the algorithm and found that black-aligned English was more likely to be labeled as hate speech.

This is one of countless instances of bias emerging from AI algorithms. Understandably, these issues have generated a lot of attention. Conversations on ethics and bias have been one of the top themes in AI in the recent past.

Organizations and actors across industries are engaging in research to eliminate bias through fairness, accountability, transparency and ethics (FATE). Yet, research that is solely focused on model architecture and engineering is bound to yield limited results. So, how can you address this?

Resolving misconceptions on fighting AI bias

Fixing the model is insufficient, as that’s not where the root cause lies. To find out which measures can yield better results, we must first understand the real reasons. We can then look at potential solutions by studying what we do in the real world to tackle such biases.

AI models learn by studying patterns and identifying insights from historical data. But human history (and our present) is far from perfect. So, it’s no surprise that these models end up mimicking and amplifying the biases that lie in the data used to train them.

This is fairly clear to all of us. But, how do we handle such inherent bias in our world?

We inject bias to fight bias. When we feel that a community or segment of the population could be disadvantaged, we avoid basing our conclusions solely on past instances. At times, we go a step further and take affirmative steps to provide opportunities to those segments. This is a small step toward reversing the trend.

This is the very step that we must take while teaching models. So, how do we inject human bias to fight the inherent “learned” bias of models? Here are some steps to achieve that.
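One concrete way to “inject” a corrective bias is to reweight training examples so that group membership and outcome are statistically decoupled in the training set. The sketch below loosely follows the classic “reweighing” idea from the fairness literature; the function name and toy data are illustrative assumptions, not something described in this article:

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Give each (group, label) pair the weight expected_count / observed_count,
    so that group membership and outcome become independent in the weighted
    training set. Over-represented pairs are down-weighted, rare ones boosted."""
    n = len(groups)
    group_counts = Counter(groups)          # how often each group appears
    label_counts = Counter(labels)          # how often each label appears
    pair_counts = Counter(zip(groups, labels))  # observed joint counts
    weights = []
    for g, y in zip(groups, labels):
        expected = group_counts[g] * label_counts[y] / n  # count if independent
        weights.append(expected / pair_counts[(g, y)])
    return weights

# Toy data: group "a" gets the positive label more often than group "b".
groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 1, 0, 1, 0, 0]
w = reweighing_weights(groups, labels)
# Over-represented pairs like ("a", 1) receive weights below 1;
# under-represented pairs like ("a", 0) receive weights above 1.
```

Most training libraries accept per-sample weights (e.g., a `sample_weight` argument), so these values can be fed directly into model fitting.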

Add diverse roles to your data science team

You need your team to look beyond technology when it comes to building responsible AI models. While it’s important to educate data science professionals on data privacy, this will have limited benefits. By bringing in people from the social sciences and humanities, you can gain access to skill sets and expertise that will help you mitigate potential biases within AI models.

People from these kinds of backgrounds will better understand users and ethical considerations and provide a human perspective on the insights generated. Anthropologists and sociologists can spot stereotypes in the models that might have slipped past the data scientist who created them and can correct the data for underlying bias. Behavioral psychologists in data science teams can help bridge the gap between users and technology and ensure fairness in the model outcomes.

Beyond valuing diverse skill sets, it’s vital for data science teams to bring in more members of a different gender, race or nationality than the norm. A diverse team offers fresh perspectives, questions age-old norms that may be outdated and prevents the team from slipping into the trap of groupthink.

For example, the Google team that built the iOS YouTube app didn’t consider left-handed users when it added in mobile uploads, as all of the people on the team were right-handed. This meant that videos that were recorded in a left-handed person’s view appeared upside-down. All it would have taken was a few left-handed people to make the app significantly more user-friendly for 10% of the global population.

Build humans into the loop

No matter how sophisticated your model may be, it needs to be designed with room for human intervention. Humans can be part of a fail-safe mechanism, or better yet, data science teams can incorporate human judgment on a continuous basis and enrich the model progressively.
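In practice, one common human-in-the-loop pattern is to act on a model’s output only when it is confident, and route everything in the grey zone to a reviewer. The thresholds and names below are illustrative assumptions, not something prescribed in this article:

```python
def route_prediction(score, accept_above=0.9, reject_below=0.1):
    """Route a model's probability score to an action.

    Only clear-cut cases are automated; anything between the two
    thresholds is escalated to a human reviewer, who stays in charge
    of the ambiguous (and often bias-prone) decisions."""
    if score >= accept_above:
        return "auto_accept"
    if score <= reject_below:
        return "auto_reject"
    return "human_review"

# Example: a batch of model scores, most of which the model is unsure about.
decisions = [route_prediction(s) for s in (0.97, 0.55, 0.32, 0.04)]
```

Reviewer decisions on the escalated cases can then be fed back as fresh labels, which is one way to enrich the model progressively as the paragraph above describes.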

With AI being used in critical areas that involve human health or lives, it should be considered mission-critical with zero tolerance — not just for downtime or errors, but also for bias. The only way to achieve this is by keeping humans in charge to ensure that AI bias is avoided.

Doctors at a hospital in Jupiter, Fla. showed why AI algorithms should never be trusted blindly: they dismissed IBM Watson’s suggestions for cancer treatments that could have had fatal consequences. Data science teams are constantly challenged to build better models and foolproof systems. However, this doesn’t mean that humans can be removed from the loop. Things do go wrong, even if only in a minority of cases, and we must design humans into the decision-making process to take control in every such instance.

Hold the data science team accountable

Most business data science teams’ primary targets center on delivering more revenue, engineering the most accurate models and automating processes for maximum efficiency. However, these goals ignore that someone (or multiple people) must ensure that the “right” thing is being done.

Data science teams must be held accountable for outcomes and must ensure that the business problem they are solving is not achieved at the expense of an ethical code. Ask yourself: Is your data science team incentivized by revenue and timeline alone? Or do they consider responsible product use and fair outcomes as integral success criteria for the project? If the former is true, you need to rethink the goals that drive your team.

In order to ensure responsible AI use, you must elevate the moral fiber of your data science team. This requires active conversation and continuous education. Additionally, you must plan for senior roles such as a chief ethics officer or an ethics committee that will become the moral watchdog of your product. However, this doesn’t remove the need to place responsibility on everyone else involved: you need accountability at all levels. For example, Paula Goldman became Salesforce’s first-ever chief ethical and humane use officer.

By having continuous conversations about the quality of AI solutions and their impact on society at large, you can instill a sense of responsibility from the top and ensure a trickle-down effect onto the rest of your team. There are also best practices and guidelines available, like these from Google.

While Big Tech has had its fair share of AI ethics blunders, we have also seen some steps in the right direction. Both Microsoft and IBM have been vocal in their commitment to tackling bias in their own programs as well as third-party ones.

Let your users know AI is not perfect

Many individuals within companies and in the wider consumer community place too much trust in AI without understanding its capabilities and flaws. Business leaders often overlook this, and we can’t expect all consumers to understand the true state of today’s AI on their own.

People assume that AI can work wonders, but without the right data, talent and processes, it is doomed to fail. Educating team members and consumers that AI is still in its early stages — and should be treated as such — is vital to avoid the blind trust that leads to disastrous outcomes and bigger disappointments. Expectations of AI algorithms’ capabilities must remain realistic. Just as you might be reluctant to let your car drive itself down a crowded street, people need to understand today’s limitations and see AI solutions the same way: they are there to inform, not dictate.

Build a culture that promotes curiosity, willingness to question beliefs and flexibility to change

Ultimately, in order to achieve responsible use of AI, you need to embed certain attributes at your organization’s core. Culture takes shape over years and is extremely difficult to change, so organizations that aspire to be data-driven should sow these seeds while the startup is in its early stages, rather than years later.

So, what are these crucial attributes that are so essential to responsible and ethical use of AI?

Curiosity. Ask yourself: Are all team members willing to experiment and take the necessary steps to find the answers they need? Or are they merely comfortable executing prescribed steps and processes? A curious team will find ways to make AI deliver the desired outcomes.

The next is willingness to question beliefs: Does your company have a healthy environment for teams to question established practices? Do higher-ups listen and encourage challenging feedback? Teams must have an open culture to speak up when they see that AI is being implemented in a way that is not in line with the organization’s ideals.

Finally, company culture must promote flexibility to change. Working with technologies and data science naturally involves a lot of change — both in creating solutions and adopting them. Are teams willing to adapt based on what they have discovered through curiosity and by questioning set processes?

Having the right company culture lays the foundation for ethical AI use. Facebook is known to promote a culture of “move fast and break things.” Given the countless scandals over user privacy and abuse of data the tech giant has faced, this mantra has clearly not resulted in sound AI use.

Responsible use of AI is not something that happens overnight, nor is it something that can be assured simply by making a single adjustment to the model and expecting miraculous results. As we have seen in the wave of accusations of “ethics washing” that is currently sweeping the tech industry, simply stating that you’re going to combat AI bias with little action to back up the claim just won’t cut it.

Avoiding AI bias can only come by adding human input at various stages, and in a number of ways. By diversifying data science teams, building humans into the process, holding the team accountable, setting realistic expectations of AI’s capabilities and, finally — and perhaps most crucially — building the right company culture, you can pave the way for ethical use of AI in your organization.
