Karine Perset helps governments understand AI

Karine Perset, AI Expert, OECD Division for Digital Economy Policy
Image Credits: Karine Perset

To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Karine Perset works for the Organization for Economic Co-operation and Development (OECD), where she runs its AI unit and oversees the OECD.AI Policy Observatory and the OECD.AI Networks of Experts within the Division for Digital Economy Policy.

Perset specializes in AI and public policy. She previously worked as an adviser to the Governmental Advisory Committee of the Internet Corporation for Assigned Names and Numbers (ICANN) and as Counsellor to the OECD’s Director for Science, Technology, and Industry.

What work are you most proud of in the AI field?

I am extremely proud of the work we do at OECD.AI. Over the last few years, the demand for policy resources and guidance on trustworthy AI has really increased, both from OECD member countries and from AI ecosystem actors.

When we started this work around 2016, there were only a handful of countries that had national AI initiatives. Fast-forward to today, and the OECD.AI Policy Observatory — a one-stop shop for AI data and trends — documents over 1,000 AI initiatives across nearly 70 jurisdictions.

Globally, all governments are facing the same questions on AI governance. We are all keenly aware of the need to strike a balance between enabling the innovation and opportunities AI has to offer and mitigating the risks related to the misuse of the technology. I think the rise of generative AI in late 2022 has really put a spotlight on this.

The 10 OECD AI Principles from 2019 were quite prescient in the sense that they foresaw many key issues still salient today — five years later and with AI technology advancing considerably. For governments elaborating their AI policies, the Principles serve as a guiding compass towards trustworthy AI that benefits people and the planet. They place people at the center of AI development and deployment, which I think is something we can’t afford to lose sight of, no matter how advanced, impressive, and exciting AI capabilities become.

To track progress on implementing the OECD AI Principles, we developed the OECD.AI Policy Observatory, a central hub for real-time or quasi-real-time AI data, analysis, and reports, which have become authoritative resources for many policymakers globally. But the OECD can’t do it alone, and multi-stakeholder collaboration has always been our approach. We created the OECD.AI Network of Experts — a network of more than 350 of the leading AI experts globally — to help tap their collective intelligence to inform policy analysis. The network is organized into six thematic expert groups, examining issues including AI risk and accountability, AI incidents, and the future of AI.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

When we look at the data, unfortunately, we still see a gender gap regarding who has the skills and resources to effectively leverage AI. In many countries, women still have less access to training, skills, and infrastructure for digital technologies. They are still underrepresented in AI R&D, while stereotypes and biases embedded in algorithms can prompt gender discrimination and limit women’s economic potential. In OECD countries, more than twice as many young men as women aged 16 to 24 can program, an essential skill for AI development. We clearly have more work to do to attract women to the AI field.

However, while the private sector AI technology world is highly male-dominated, I’d say that the AI policy world is a bit more balanced. For instance, my team at the OECD is close to gender parity. Many of the AI experts we work with are truly inspiring women, such as Elham Tabassi from the U.S. National Institute of Standards and Technology (NIST); Francesca Rossi at IBM; Rebecca Finlay and Stephanie Ifayemi from the Partnership on AI; Lucilla Sioli, Irina Orssich, Tatjana Evas and Emilia Gómez from the European Commission; Clara Neppel from the IEEE; Nozha Boujemaa from Decathlon; Dunja Mladenic at the Slovenian JSI AI lab; and of course my own amazing boss and mentor Audrey Plonk, just to name a few, and there are so many more.

We need women and diverse groups represented in the technology sector, academia, and civil society to bring rich and diverse perspectives. Unfortunately, in 2022, only one in four researchers publishing on AI worldwide was a woman. While the number of publications co-authored by at least one woman is increasing, women contribute to only about half as many AI publications as men, and the gap widens as the number of publications increases. All this to say, we need more representation from women and diverse groups in these spaces.

So to answer your question, how do I navigate the challenges of the male-dominated technology industry? I show up. I am very grateful that my position allows me to meet with experts, government officials, and corporate representatives and speak in international forums on AI governance. It allows me to engage in discussions, share my point of view, and challenge assumptions. And, of course, I let the data speak for itself.

What advice would you give to women seeking to enter the AI field?

Speaking from my experience in the AI policy world, I would say not to be afraid to speak up and share your perspective. We need more diverse voices around the table when we develop AI policies and AI models. We all have our unique stories and something different to bring to the conversation.

To develop safer, more inclusive, and trustworthy AI, we must look at AI models and data input from different angles, asking ourselves: What are we missing? If you don’t speak up, your team might miss out on a really important insight. Chances are that, because you have a different perspective, you’ll see things that others do not, and as a global community, we can be greater than the sum of our parts if everyone contributes.

I would also emphasize that there are many roles and paths in the AI field. A degree in computer science is not a prerequisite to work in AI. We already see jurists, economists, social scientists, and many more profiles bringing their perspectives to the table. As we move forward, true innovation will increasingly come from blending domain knowledge with AI literacy and technical competencies to come up with effective AI applications in specific domains. We already see universities offering AI courses beyond computer science departments. I truly believe interdisciplinarity will be key for AI careers. So, I would encourage women from all fields to consider what they can do with AI, and not to shy away for fear of being less competent than men.

What are some of the most pressing issues facing AI as it evolves?

I think the most pressing issues facing AI can be divided into three buckets.

First, I think we need to bridge the gap between policymakers and technologists. In late 2022, generative AI advances took many by surprise, despite some researchers anticipating such developments. Understandably, each discipline looks at AI from its own angle. But these issues are complex; collaboration and interdisciplinarity among policymakers, AI developers, and researchers are key to understanding them holistically, keeping pace with AI progress, and closing knowledge gaps.

Second, the international interoperability of AI rules is mission-critical to AI governance. Many large economies have started regulating AI. For instance, the European Union just agreed on its AI Act, the U.S. has adopted an executive order for the safe, secure, and trustworthy development and use of AI, and Brazil and Canada have introduced bills to regulate the development and deployment of AI. What’s challenging here is to strike the right balance between protecting citizens and enabling business innovations. AI knows no borders, and many of these economies have different approaches to regulation and protection; it will be crucial to enable interoperability between jurisdictions.

Third, there is the question of tracking AI incidents, which have increased rapidly with the rise of generative AI. Failure to address the risks associated with AI incidents could exacerbate the lack of trust in our societies. Importantly, data about past incidents can help us prevent similar incidents from happening in the future. Last year, we launched the AI Incidents Monitor. This tool uses global news sources to track AI incidents around the world and better understand the harms they cause. It provides real-time evidence to support policy and regulatory decisions about AI, especially for real risks such as bias, discrimination, and social disruption, and the types of AI systems that cause them.

What are some issues AI users should be aware of?

Something that policymakers globally are grappling with is how to protect citizens from AI-generated mis- and disinformation — such as synthetic media like deepfakes. Of course, mis- and disinformation has existed for some time, but what is different here is the scale, quality, and low cost of AI-generated synthetic outputs.

Governments are well aware of the issue and are looking at ways to help citizens identify AI-generated content and assess the veracity of the information they are consuming, but this is still an emerging field, and there is still no consensus on how to tackle such issues.

Our AI Incidents Monitor can help track global trends and keep people informed about major cases of deepfakes and disinformation. But in the end, with the increasing volume of AI-generated content, people need to develop information literacy, sharpening their skills, reflexes, and ability to check reputable sources to assess information accuracy.

What is the best way to responsibly build AI?

Many of us in the AI policy community are diligently working to find ways to build AI responsibly, acknowledging that determining the best approach often hinges on the specific context in which an AI system is deployed. Nonetheless, building AI responsibly necessitates careful consideration of ethical, social, and safety implications throughout the AI system life cycle.

One of the OECD AI Principles refers to the accountability that AI actors bear for the proper functioning of the AI systems they develop and use. This means that AI actors must take measures to ensure that the AI systems they build are trustworthy. By this, I mean that they should benefit people and the planet, respect human rights, be fair, transparent, and explainable, and meet appropriate levels of robustness, security, and safety. To achieve this, actors must govern and manage risks throughout their AI systems’ life cycle — from planning, design, and data collection and processing to model building, validation and deployment, operation, and monitoring.

Last year, we published a report on “Advancing Accountability in AI,” which provides an overview of integrating risk management frameworks and the AI system life cycle to develop trustworthy AI. The report explores processes and technical attributes that can facilitate the implementation of values-based principles for trustworthy AI and identifies tools and mechanisms to define, assess, treat, and govern risks at each stage of the AI system life cycle.

How can investors better push for responsible AI?

By advocating for responsible business conduct in the companies they invest in. Investors play a crucial role in shaping the development and deployment of AI technologies, and they should not underestimate their power to influence internal practices with the financial support they provide.

For example, the private sector can support developing and adopting responsible guidelines and standards for AI through initiatives such as the OECD’s Responsible Business Conduct (RBC) guidelines, which we are currently tailoring specifically for AI. These guidelines will notably facilitate international compliance for AI companies selling their products and services across borders and enable transparency throughout the AI value chain — from suppliers to deployers to end users. The RBC guidelines for AI will also provide a non-judicial enforcement mechanism — in the form of national contact points tasked by national governments to mediate disputes — allowing users and affected stakeholders to seek remedies for AI-related harms.

By guiding companies to implement standards and guidelines for AI — like RBC — private sector partners can play a vital role in promoting trustworthy AI development and shaping the future of AI technologies in a way that benefits society as a whole.
