
Women in AI: Sarah Kreps, professor of government at Cornell



To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Sarah Kreps is a political scientist, U.S. Air Force veteran and analyst who focuses on U.S. foreign and defense policy. She’s a professor of government at Cornell University, adjunct professor of law at Cornell Law School and an adjunct scholar at West Point’s Modern War Institute.

Kreps’ recent research explores both the potential and the risks of AI technologies such as OpenAI’s GPT-4, particularly in the political sphere. In an opinion column for The Guardian last year, she wrote that, as more money pours into AI, the AI arms race, not just among companies but among countries, will intensify, while the AI policy challenge will become harder.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

I had my start in the area of emerging technologies with national security implications. I had been an Air Force officer when the Predator drone was deployed, and I had been involved in advanced radar and satellite systems. I had spent four years working in this space, so it was natural that, as a PhD student, I would be interested in studying the national security implications of emerging technologies. I first wrote about drones, and the debate around drones was moving toward questions of autonomy, which of course implicates artificial intelligence.

In 2018, I was at an artificial intelligence workshop at a D.C. think tank and OpenAI gave a presentation about this new GPT-2 capability they had developed. We had just gone through the 2016 election and foreign election interference, which had been relatively easy to spot because of little things like grammatical errors of non-native English speakers — the kind of errors that were not surprising given that the interference had come from the Russian-backed Internet Research Agency. As OpenAI gave this presentation, I was immediately preoccupied with the possibility of generating credible disinformation at scale and then, through microtargeting, manipulating the psychology of American voters in far more effective ways than had been possible when these individuals were trying to write content by hand, where scale was always going to be a problem.

I reached out to OpenAI and became one of the early academic collaborators in their staged release strategy. My particular research was aimed at investigating the possible misuse case: whether GPT-2, and later GPT-3, was credible as a political content generator. In a series of experiments, I evaluated whether the public would see this content as credible, but I also conducted a large field experiment in which I generated “constituency letters” and randomized them with actual constituency letters to see whether legislators would respond at the same rates, which would reveal whether they could be fooled and whether malicious actors could shape the legislative agenda with a large-scale letter-writing campaign.

These questions struck at the heart of what it means to be a sovereign democracy, and I concluded unequivocally that these new technologies did represent new threats to our democracy.

What work are you most proud of (in the AI field)?

I’m very proud of the field experiment I conducted. No one had done anything remotely similar and we were the first to show the disruptive potential in a legislative agenda context.

But I’m also proud of tools that, unfortunately, I never brought to market. I worked with several computer science students at Cornell to develop an application that would process inbound legislative emails and help congressional offices respond to constituents in meaningful ways. We were working on this before ChatGPT, using AI to digest the large volume of emails and provide an AI assist for time-pressed staffers communicating with people in their district or state. I thought these tools were important because of constituents’ disaffection with politics and the increasing demands on legislators’ time. Developing AI in these publicly interested ways seemed like a valuable contribution and interesting interdisciplinary work for political scientists and computer scientists. We conducted a number of experiments to assess the behavioral questions of how people would feel about an AI assist responding to them, and we concluded that maybe society was not ready for something like this. But a few months after we pulled the plug, ChatGPT came on the scene, and AI is now so ubiquitous that I almost wonder why we ever worried about whether this was ethically dubious or legitimate. But I still feel it was right that we asked the hard ethical questions about the legitimate use case.

How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?

As a researcher, I have not felt those challenges terribly acutely. I was just out in the Bay Area, and it was all dudes literally giving their elevator pitches in the hotel elevator, a cliché that I could see being intimidating. I would recommend that women entering the field find mentors (male and female), develop skills and let those skills speak for themselves, take on challenges and stay resilient.

What advice would you give to women seeking to enter the AI field?

I think there are a lot of opportunities for women — they need to develop skills and have confidence and they’ll thrive.

What are some of the most pressing issues facing AI as it evolves?

I worry that the AI community has developed so many research initiatives focused on things like “superalignment” that it obscures the deeper, and actually the right, questions about whose values, or what values, we are trying to align AI with. Google Gemini’s problematic rollout showed the caricature that can arise from aligning with a narrow set of developers’ values, in ways that led to (almost) laughable historical inaccuracies in its outputs. I think those developers’ values were held in good faith, but the episode revealed that these large language models are being programmed with a particular set of values that will shape how people think about politics, social relationships and a variety of sensitive topics. Those issues aren’t of the existential risk variety, but they do create the fabric of society and confer considerable power on the big firms (e.g., OpenAI, Google, Meta and so on) that are responsible for those models.

What are some issues AI users should be aware of?

As AI becomes ubiquitous, I think we’ve entered a “trust but verify” world. It’s nihilistic not to believe anything, but there’s a lot of AI-generated content, and users really need to be circumspect about what they instinctively trust. It’s good to look for alternative sources to verify authenticity before assuming that everything is accurate. But I think we already learned that with social media and misinformation.

What is the best way to responsibly build AI?

I recently wrote a piece for the Bulletin of the Atomic Scientists, which started out covering nuclear weapons but has adapted to address disruptive technologies like AI. I had been thinking about how scientists could be better public stewards and wanted to connect some of the historical cases I had been looking at for a book project. I not only outline a set of steps I would endorse for responsible development but also speak to why some of the questions that AI developers are asking are wrong, incomplete or misguided.
