Sponsored Content by Amazon Web Services

Building more responsible AI


Dr. Nashlie Sephus, Tech Evangelist, tells us how it’s done

Implementing fair, responsible, and accurate AI is a top management priority for most companies, a recent report by BCG and MIT’s Sloan School has found. But only 52% of companies report having a responsible AI program in place, and of those, 79% say it falls short of the fully implemented program they want.

How can a company move the needle on building a responsible AI program? For many companies, a key piece of the puzzle is education. The more you know, the better you can build. 

“Our customers are committed to getting responsible AI right, just as we are,” says Dr. Nashlie Sephus, the former startup CTO and AI specialist whose current role as Tech Evangelist for AWS is, in her words, “all about education and raising awareness.” Together with a group of experts from engineering, science, product, legal, and policy backgrounds, she helped develop the Responsible Use of Machine Learning Guide to help companies responsibly develop and use machine learning (ML) systems across the three major phases of their machine learning lifecycle: design and development, deployment, and ongoing use.

If you ask her to define the responsible use of AI, Dr. Sephus will describe it as an umbrella that covers fairness and bias, explainability, privacy and security, robustness, governance, and transparency. “All of these factors can play a huge role in whether or not a technology can do good or can do harm,” she explains. And while many companies tackle one piece at a time, often starting with privacy and security, Dr. Sephus is quick to point out that fairness and transparency can be just as powerful in building customer trust as keeping customer data secure.

Dr. Sephus joined AWS through the acquisition of her startup Partpic, whose visual recognition technology let users photograph specific industrial parts, like bolts, screws, nuts, and fasteners, with a phone camera, then order the correct parts from a catalog. Her expertise with AI across the data lifecycle, combined with her lived experience as a Black woman from the South, gives her a deep and nuanced perspective. Her focus on fairness and bias detection is something she has honed over years in product development.

At an internship she held before beginning her graduate work, Dr. Sephus worked on a team developing applications for car radios enabled by Bluetooth. “We would always use my voice to test it out, and it would never get the number six right because of my Southern accent.” That was one of her first clues that developing AI for everyone’s benefit can be a complicated process. 

Then there was the AWS engineering team Dr. Sephus worked with to help test and audit some of AWS’s AI and ML services, like Amazon Rekognition, the AI service that enables face recognition. The team took a holistic approach to building AWS services, testing for potential bias and measuring accuracy so as to build products that benefit everyone. The process made her feel “extremely valued as a tech developer and as a consumer, as someone who understood the intricacies of AI as well as culture.”
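
The heart of an audit like that is disaggregated evaluation: measuring a model’s accuracy separately for each group of people it serves, rather than as one aggregate number that can hide uneven performance. Below is a minimal sketch of the idea in Python; the records, group labels, and flagging margin are all hypothetical, and it illustrates the general technique rather than AWS’s actual audit methodology.

```python
# A minimal sketch of a disaggregated accuracy audit (illustrative only).
# Assumes a labeled evaluation set where each record carries a demographic
# "group" attribute alongside the true label and the model's prediction.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of dicts with 'group', 'label', 'prediction' keys."""
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        correct[r["group"]] += int(r["prediction"] == r["label"])
    return {g: correct[g] / total[g] for g in total}

# Hypothetical evaluation records.
eval_set = [
    {"group": "A", "label": 1, "prediction": 1},
    {"group": "A", "label": 0, "prediction": 0},
    {"group": "B", "label": 1, "prediction": 0},
    {"group": "B", "label": 1, "prediction": 1},
]
scores = accuracy_by_group(eval_set)
gap = max(scores.values()) - min(scores.values())
# Flag the model if any group trails the best-served group by a chosen margin.
print(scores, f"accuracy gap = {gap:.2f}")
```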

“Working on that project,” Dr. Sephus says, “that was when I found my purpose.”


Today, Dr. Sephus’ mission is to make it easier for AWS customers to learn about fairness and bias detection in AI and to help guide Amazon’s approach to the responsible use of ML and AI now and in the future. Because bias can creep into product development at every point of the data lifecycle, Dr. Sephus recommends applying a fairness and bias lens every step of the way. Here are four questions she suggests development teams continually ask as they build new AI-powered products: 

  1. Is your team diverse? A diverse team brings more perspectives to the table, helping promote fairness and mitigate bias when designing and developing AI. It can take extra time up front to find or develop diverse talent if a company hasn’t already invested in it, but Dr. Sephus advises that it’s well worth the effort. “The future of tech is inclusive,” she says, “and there’s a very special group of people that can help contribute to the solutions. Bring them to the table.”
  2. Are your annotators trained on the biases that may exist, and do you have enough representation on your annotator team? Multiple sets of annotators are better than one, and clear parameters are a must. Say annotators from different cultures are labeling wedding dresses: what counts as appropriate wedding apparel to one might not to another. Cultural upbringing, assumptions, and preferences that contribute to unconscious bias can swing even the most innocuous-seeming decisions, which is why training annotators on label definitions and getting proper representation on annotator teams matters. One quantitative guardrail is measuring inter-annotator agreement, as in the first sketch after this list.
  3. Is your model data still relevant? “The lifecycle of machine learning is always moving, like a flywheel,” Dr. Sephus says. “More often than not, the historical data on which you trained your model will evolve and grow. Sometimes the environment in which the model is deployed changes. You have to continually ask: ‘Is this data still relevant over time?’” A simple drift check, as in the second sketch after this list, can put a number on that question.
  4. When you review your model’s behavior, can you explain how it arrived at its predictions? Say an autonomous driving model classified red cars as slower moving because there were proportionally fewer red cars in the dataset and they happened to drive more slowly. “It is important to be able to explain how your model made the prediction that it made,” Dr. Sephus advises, “and always ask: is it reasonable?” A permutation-importance check, as in the third sketch after this list, is one way to surface that kind of spurious reliance.
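
On annotator representation and training (question 2), a standard consistency signal is inter-annotator agreement. The sketch below computes Cohen’s kappa, agreement between two annotators corrected for chance, on invented labels for the wedding-apparel example; a low score suggests the label definitions need clearer parameters.

```python
# A minimal sketch of an inter-annotator agreement check using Cohen's kappa.
# The labels are hypothetical; a real pipeline would pull per-item annotations
# from your labeling tool.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Agreement between two annotators, corrected for chance agreement."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    classes = set(labels_a) | set(labels_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in classes)
    return (observed - expected) / (1 - expected)

# Two annotators labeling the same ten images as wedding attire or not.
a = ["wedding", "wedding", "other", "wedding", "other",
     "wedding", "other", "other", "wedding", "wedding"]
b = ["wedding", "other", "other", "wedding", "other",
     "wedding", "wedding", "other", "wedding", "wedding"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # low kappa -> tighten label definitions
```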
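
For data relevance (question 3), one common operationalization is a drift check comparing the distribution a model was trained on against what it now sees in production. The sketch below uses the population stability index (PSI) on synthetic data standing in for a feature whose environment has shifted; the bin count and the 0.2 rule of thumb are conventional choices, not anything the article prescribes.

```python
# A minimal sketch of a drift check using the population stability index (PSI).
# Synthetic Gaussians stand in for one feature at training time vs. today.
import math
import random

def psi(train_vals, live_vals, bins=10):
    """PSI between two samples of one numeric feature."""
    lo, hi = min(train_vals), max(train_vals)
    width = (hi - lo) / bins or 1.0
    def bucket_probs(vals):
        counts = [0] * bins
        for v in vals:
            i = min(int((v - lo) / width), bins - 1)
            counts[max(i, 0)] += 1          # clamp out-of-range live values
        return [(c + 1e-6) / len(vals) for c in counts]  # smooth empty bins
    p, q = bucket_probs(train_vals), bucket_probs(live_vals)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

random.seed(0)
train = [random.gauss(0.0, 1.0) for _ in range(5000)]   # training distribution
live = [random.gauss(0.4, 1.2) for _ in range(5000)]    # shifted environment
print(f"PSI = {psi(train, live):.3f}")  # rule of thumb: > 0.2 suggests retraining
```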
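
And for explainability (question 4), a lightweight probe is permutation importance: shuffle one input feature and watch how far accuracy falls. Applied to the red-car scenario, a large drop for the color feature would expose the model’s reliance on a spurious correlation, which is exactly where to ask “is it reasonable?” The toy model and data below are hypothetical.

```python
# A minimal sketch of permutation importance on a hypothetical "speed class"
# model that (wrongly) keys on car color, echoing the red-car example above.
import random

def permutation_importance(predict, X, y, feature_idx, n_repeats=10, seed=0):
    """Average drop in accuracy when one feature column is shuffled."""
    rng = random.Random(seed)
    def accuracy(rows):
        return sum(predict(r) == label for r, label in zip(rows, y)) / len(y)
    base = accuracy(X)
    drops = []
    for _ in range(n_repeats):
        col = [row[feature_idx] for row in X]
        rng.shuffle(col)
        shuffled = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                    for row, v in zip(X, col)]
        drops.append(base - accuracy(shuffled))
    return sum(drops) / n_repeats

def toy_model(row):
    color, _length = row
    return "slow" if color == "red" else "fast"

X = [["red", 4.2], ["blue", 4.5], ["red", 4.8], ["blue", 4.1]] * 25
y = ["slow", "fast", "slow", "fast"] * 25
# A large drop means the model leans heavily on color -- is that reasonable?
print(f"importance of color: {permutation_importance(toy_model, X, y, 0):.2f}")
```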

Companies that are already building these questions and checkpoints into their ML model development journeys are setting themselves up for a competitive advantage, Dr. Sephus says, especially as regulations holding companies to responsible AI standards come into force around the world.

“For startups, especially, if you want to have a competitive edge, you need to think about what sets you apart from the next company,” Dr. Sephus points out. “Building ML systems that are more fair and equitable, and being transparent, puts you so much farther ahead than the companies that are not. Why not start now?”

About Dr. Nashlie H. Sephus

Dr. Nashlie H. Sephus is a Principal Tech Evangelist with AWS. Formerly, she led the AWS Visual Search team in Atlanta. She received her B.S. in Computer Engineering from Mississippi State University and her Ph.D. from the School of Electrical and Computer Engineering at the Georgia Institute of Technology. In addition to her work at AWS, she is the founder and CEO of The Bean Path, a non-profit organization based in her hometown of Jackson, Mississippi, that provides individuals with technical expertise and guidance to help close the tech gap in their communities.
