Women in AI: Sarah Kreps, professor of government at Cornell


To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Sarah Kreps is a political scientist, U.S. Air Force veteran and analyst who focuses on U.S. foreign and defense policy. She’s a professor of government at Cornell University, adjunct professor of law at Cornell Law School and an adjunct scholar at West Point’s Modern War Institute.

Kreps’ recent research explores both the potential and the risks of AI tech such as OpenAI’s GPT-4, specifically in the political sphere. In an opinion column for The Guardian last year, she wrote that as more money pours into AI, the AI arms race, not just among companies but among countries, will intensify, and the AI policy challenge will become harder.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

I had my start in the area of emerging technologies with national security implications. I had been an Air Force officer at the time the Predator drone was deployed, and had been involved in advanced radar and satellite systems. I had spent four years working in this space, so it was natural that, as a PhD student, I would be interested in studying the national security implications of emerging technologies. I first wrote about drones, and the debate around drones was moving toward questions of autonomy, which of course implicates artificial intelligence.

In 2018, I was at an artificial intelligence workshop at a D.C. think tank and OpenAI gave a presentation about this new GPT-2 capability they had developed. We had just gone through the 2016 election and foreign election interference, which had been relatively easy to spot because of little things like grammatical errors of non-native English speakers — the kind of errors that were not surprising given that the interference had come from the Russian-backed Internet Research Agency. As OpenAI gave this presentation, I was immediately preoccupied with the possibility of generating credible disinformation at scale and then, through microtargeting, manipulating the psychology of American voters in far more effective ways than had been possible when these individuals were trying to write content by hand, where scale was always going to be a problem.

I reached out to OpenAI and became one of the early academic collaborators in their staged release strategy. My particular research was aimed at investigating the possible misuse case — whether GPT-2 and later GPT-3 were credible as political content generators. In a series of experiments, I evaluated whether the public would see this content as credible, but I also conducted a large field experiment in which I generated “constituency letters” and randomized them with actual constituency letters to see whether legislators would respond at the same rates. In other words: could they be fooled, and could malicious actors shape the legislative agenda with a large-scale letter-writing campaign?

These questions struck at the heart of what it means to be a sovereign democracy, and I concluded unequivocally that these new technologies did represent new threats to our democracy.

What work are you most proud of (in the AI field)?

I’m very proud of the field experiment I conducted. No one had done anything remotely similar and we were the first to show the disruptive potential in a legislative agenda context.

But I’m also proud of tools that, unfortunately, I never brought to market. I worked with several computer science students at Cornell to develop an application that would process legislators’ inbound emails and help their offices respond to constituents in meaningful ways. We were working on this before ChatGPT, using AI to digest the large volume of emails and provide an AI assist for time-pressed staffers communicating with people in their district or state. I thought these tools were important because of constituents’ disaffection with politics but also the increasing demands on legislators’ time. Developing AI in these publicly interested ways seemed like a valuable contribution and interesting interdisciplinary work for political scientists and computer scientists. We conducted a number of experiments to assess the behavioral questions of how people would feel about an AI assist responding to them and concluded that maybe society was not ready for something like this. But then, a few months after we pulled the plug, ChatGPT came on the scene, and AI is now so ubiquitous that I almost wonder how we ever worried about whether this was ethically dubious or legitimate. But I still feel like it’s right that we asked the hard ethical questions about the legitimate use case.

How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?

As a researcher, I have not felt those challenges terribly acutely. I was just out in the Bay Area, and it was all dudes literally giving their elevator pitches in the hotel elevator — a cliché that I could see being intimidating. I would recommend that women entering the field find mentors (male and female), develop skills and let those skills speak for themselves, take on challenges and stay resilient.

What advice would you give to women seeking to enter the AI field?

I think there are a lot of opportunities for women — they need to develop skills and have confidence and they’ll thrive.

What are some of the most pressing issues facing AI as it evolves?

I worry that the AI community has developed so many research initiatives focused on things like “superalignment” that they obscure the deeper — or actually, the right — questions about whose values, or what values, we are trying to align AI with. Google Gemini’s problematic rollout showed the caricature that can arise from aligning with a narrow set of developers’ values, in ways that led to (almost) laughable historical inaccuracies in its outputs. I think those developers acted in good faith, but the episode revealed that these large language models are being programmed with a particular set of values that will shape how people think about politics, social relationships and a variety of sensitive topics. Those issues aren’t of the existential risk variety, but they do create the fabric of society and confer considerable power on the big firms (e.g., OpenAI, Google, Meta and so on) that are responsible for those models.

What are some issues AI users should be aware of?

As AI becomes ubiquitous, I think we’ve entered a “trust but verify” world. It’s nihilistic not to believe anything but there’s a lot of AI-generated content and users really need to be circumspect in terms of what they instinctively trust. It’s good to look for alternative sources to verify the authenticity before just assuming that everything is accurate. But I think we already learned that with social media and misinformation.

What is the best way to responsibly build AI?

I recently wrote a piece for the Bulletin of the Atomic Scientists, which started out covering nuclear weapons but has adapted to address disruptive technologies like AI. I had been thinking about how scientists could be better public stewards and wanted to connect some of the historical cases I had been looking at for a book project. I not only outline a set of steps I would endorse for responsible development but also speak to why some of the questions that AI developers are asking are wrong, incomplete or misguided.
