Women in AI: Sarah Kreps, professor of government at Cornell


To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Sarah Kreps is a political scientist, U.S. Air Force veteran and analyst who focuses on U.S. foreign and defense policy. She’s a professor of government at Cornell University, adjunct professor of law at Cornell Law School and an adjunct scholar at West Point’s Modern War Institute.

Kreps’ recent research explores both the potential and the risks of AI technologies such as OpenAI’s GPT-4, particularly in the political sphere. In an opinion column for The Guardian last year, she wrote that as more money pours into AI, the arms race will intensify not just across companies but across countries, and the AI policy challenge will grow harder.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

I had my start in the area of emerging technologies with national security implications. I had been an Air Force officer at the time the Predator drone was deployed, and had been involved in advanced radar and satellite systems. I had spent four years working in this space, so it was natural that, as a PhD researcher, I would be interested in studying the national security implications of emerging technologies. I first wrote about drones, and the debate around drones was moving toward questions of autonomy, which of course implicates artificial intelligence.

In 2018, I was at an artificial intelligence workshop at a D.C. think tank, and OpenAI gave a presentation about this new GPT-2 capability they had developed. We had just gone through the 2016 election and foreign election interference, which had been relatively easy to spot because of little things like grammatical errors from non-native English speakers, the kind of errors that were not surprising given that the interference had come from the Russian-backed Internet Research Agency. As OpenAI gave this presentation, I was immediately preoccupied with the possibility of generating credible disinformation at scale and then, through microtargeting, manipulating the psychology of American voters in far more effective ways than had been possible when these actors were trying to write content by hand, where scale was always going to be a problem.

I reached out to OpenAI and became one of the early academic collaborators in their staged release strategy. My particular research was aimed at investigating the possible misuse case: whether GPT-2, and later GPT-3, were credible as political content generators. In a series of experiments, I evaluated whether the public would see this content as credible, and then conducted a large field experiment in which I generated “constituency letters” and randomized them with actual constituency letters to see whether legislators would respond at the same rates. The question was whether legislators could be fooled, and whether malicious actors could shape the legislative agenda with a large-scale letter-writing campaign.

These questions struck at the heart of what it means to be a sovereign democracy, and I concluded unequivocally that these new technologies did represent new threats to our democracy.

What work are you most proud of (in the AI field)?

I’m very proud of the field experiment I conducted. No one had done anything remotely similar, and we were the first to show the disruptive potential in a legislative agenda context.

But I’m also proud of tools that, unfortunately, I never brought to market. I worked with several computer science students at Cornell to develop an application that would process inbound legislative emails and help offices respond to constituents in meaningful ways. We were working on this before ChatGPT, using AI to digest the large volume of emails and provide an AI assist for time-pressed staffers communicating with people in their district or state. I thought these tools were important because of constituents’ disaffection with politics, but also because of the increasing demands on legislators’ time. Developing AI in these publicly interested ways seemed like a valuable contribution and interesting interdisciplinary work for political scientists and computer scientists. We conducted a number of experiments to assess the behavioral question of how people would feel about an AI assist responding to them, and concluded that maybe society was not ready for something like this. But then, a few months after we pulled the plug, ChatGPT came on the scene, and AI is now so ubiquitous that I almost wonder how we ever worried about whether this was ethically dubious or legitimate. But I still feel it’s right that we asked the hard ethical questions about the legitimate use case.

How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?

As a researcher, I have not felt those challenges terribly acutely. I was just out in the Bay Area, and it was all dudes literally giving their elevator pitches in the hotel elevator, a cliché that I could see being intimidating. I would recommend that women find mentors (male and female), develop skills and let those skills speak for themselves, take on challenges, and stay resilient.

What advice would you give to women seeking to enter the AI field?

I think there are a lot of opportunities for women. They need to develop skills and have confidence, and they’ll thrive.

What are some of the most pressing issues facing AI as it evolves?

I worry that the AI community has developed so many research initiatives focused on things like “superalignment” that they obscure the deeper, or really the right, questions about whose values, or which values, we are trying to align AI with. Google Gemini’s problematic rollout showed the caricature that can result from aligning with a narrow set of developers’ values, in ways that led to (almost) laughable historical inaccuracies in its outputs. I think those developers acted in good faith, but the episode revealed that these large language models are being programmed with a particular set of values that will shape how people think about politics, social relationships, and a variety of sensitive topics. Those issues aren’t of the existential-risk variety, but they do create the fabric of society and confer considerable power on the big firms (e.g., OpenAI, Google, Meta and so on) that are responsible for those models.

What are some issues AI users should be aware of?

As AI becomes ubiquitous, I think we’ve entered a “trust but verify” world. It’s nihilistic not to believe anything, but there’s a lot of AI-generated content out there, and users really need to be circumspect about what they instinctively trust. It’s good to look for alternative sources to verify authenticity before assuming that everything is accurate. But I think we already learned that with social media and misinformation.

What is the best way to responsibly build AI?

I recently wrote a piece for the Bulletin of the Atomic Scientists, which started out covering nuclear weapons but has adapted to address disruptive technologies like AI. I had been thinking about how scientists could be better public stewards and wanted to connect that to some of the historical cases I had been looking at for a book project. In the piece, I not only outline a set of steps I would endorse for responsible development but also speak to why some of the questions that AI developers are asking are wrong, incomplete or misguided.
