
Here’s how The White House wants the U.S. to approach AI R&D


Image Credits: Kheng Guan Toh / Shutterstock

Since 1956, when computer science researchers gathered at Dartmouth College in the small town of Hanover, N.H., to talk about the field’s nascent investigations into artificial intelligence, both government and industry in the U.S. have grappled with how to structure a systematic approach to research and development in the newly important field.

From the government’s perspective, this is increasingly important. With both federal research institutions and private companies pursuing artificial intelligence breakthroughs at breakneck speed, the federal government is frankly having a bit of an existential crisis about its role in research efforts and the priorities it has for what AI research should look like.

To wit, in 2015 government spending on unclassified research and development in AI-related technologies was around $1.1 billion, according to one of the twin reports released today. But in the last five years alone, spending on mergers and acquisitions among private companies vying for dominance in the AI market has far outstripped that figure, according to data from CB Insights.

Google’s acquisition of DeepMind reportedly cost $600 million, and that’s one of over one hundred acquisitions made by companies like Facebook, Google, Apple, and Twitter since 2011.

So the White House has released a new pair of reports, offering a framework for how government-backed research into artificial intelligence should be approached and what those research initiatives should look like (basically, the government wants to avoid a Skynet scenario).

The main paper, entitled “Preparing for the Future of Artificial Intelligence,” focuses on the general state of AI and the challenges the field faces, both of which have a constant presence on our front page.

Justice for AI

Google, for instance, addressed the possibility of bias tainting the results of AI systems, which know not what they do. This is merely irritating when it’s an image recognition error, but what if it’s for “predictive policing”?

AI needs good data. If the data is incomplete or biased, AI can exacerbate problems of bias. It is important that anyone using AI in the criminal justice context is aware of the limitations of current data.

The use of AI to make consequential decisions about people, often replacing decisions made by human actors and institutions, leads to concerns about how to ensure justice, fairness, and accountability.

Transparency concerns focused not only on the data and algorithms used, but also on the potential to have some form of explanation for any AI-based determination… Ethical training should be augmented with technical tools and methods for putting good intentions into practice by doing the technical work needed to prevent unacceptable outcomes.

Google’s solution, at least for now, is what it calls the “equality of opportunity” method, which ensures a system doesn’t accidentally discriminate based on sensitive but non-relevant attributes, such as race or religion, when predicting something not directly related to them. As for understanding the models created by machine learning — that’s a bigger problem.
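The reports stay at the level of principle, but the gist of equality of opportunity can be sketched in a few lines of code: instead of one global score cutoff, each group gets the threshold that yields the same true positive rate, so equally qualified people are accepted at equal rates regardless of group. The function names and toy data below are illustrative, not Google’s actual implementation.

```python
def true_positive_rate(scores, labels, threshold):
    """Fraction of actual positives (label == 1) accepted at this threshold."""
    positives = [s for s, y in zip(scores, labels) if y == 1]
    if not positives:
        return 0.0
    return sum(s >= threshold for s in positives) / len(positives)

def equal_opportunity_thresholds(groups, target_tpr, candidates):
    """For each group, pick the strictest threshold whose TPR still meets
    the target, so qualified people in every group pass at the same rate."""
    chosen = {}
    for name, (scores, labels) in groups.items():
        chosen[name] = min(candidates)  # fallback: most permissive cutoff
        for t in sorted(candidates, reverse=True):
            if true_positive_rate(scores, labels, t) >= target_tpr:
                chosen[name] = t
                break
    return chosen

# Toy data: group B's scores run systematically lower for the same labels,
# so a single global cutoff would reject qualified members of B.
groups = {
    "A": ([0.9, 0.8, 0.7, 0.4, 0.3], [1, 1, 1, 0, 0]),
    "B": ([0.7, 0.6, 0.5, 0.2, 0.1], [1, 1, 1, 0, 0]),
}
thresholds = equal_opportunity_thresholds(
    groups, target_tpr=1.0, candidates=[k / 10 for k in range(1, 10)]
)
```

With this toy data, group A ends up with a higher cutoff than group B, yet both groups accept 100 percent of their qualified members, which is the equality the method is after.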

Share and share a lot

As AI and AI-like systems proliferate, they begin to overlap with highly regulated areas, as we’ve seen with autonomous vehicles and drones. This creates a sort of wild west compared with the traditional sides of those industries, and things like reporting and risk management aren’t anywhere near formalized.

How detailed should Google’s self-driving car accident reports be? Can NTSB officials inspect Autopilot code? Where do federal and state authorities interface?

To make informed decisions, the White House suggests more and better data is required:

Commercial aviation has mechanisms for sharing incident and safety data across the industry. No comparable system currently exists for the automotive industry… The lack of consistently reported incident or near-miss data increases the number of miles or hours of operation necessary to establish system safety, presenting an obstacle to certain AI approaches that require extensive testing for validation.

Federal actors should focus in the near-term on developing increasingly rich sets of data, consistent with consumer privacy, that can better inform policy-making as these technologies mature.

Furthermore, as AI systems infiltrate our infrastructure, the cowboys of private AI research should look to old school civil engineers for help, however little they might like the idea:

Adapting gracefully to unforeseen situations is difficult yet necessary for safe operation. Experience in building other types of safety-critical systems and infrastructure, such as aircraft, power plants, bridges, and vehicles, has much to teach AI practitioners about verification and validation, how to build a safety case for a technology, how to manage risk, and how to communicate with stakeholders about risk.

AI ABCs

You’ve got to get them while they’re young, according to the White House. And we agree, of course: STEM education should start early — with an emphasis on the T, in this case.

An AI-enabled world demands a data-literate citizenry that is able to read, use, interpret, and communicate about data, and participate in policy debates about matters affected by AI. Data science education as early as primary or secondary school can help to improve nationwide data literacy, while also preparing students for more advanced data science concepts and coursework after high school.

Of course, a data-literate citizenry implies a literate citizenry, and the ethics of all this stuff won’t be learned in CS class, so we can’t neglect the humanities, either.

The report also calls for pushes for diversity, highlighting comments solicited from experts regarding “the importance of AI being produced by and for diverse populations.”

Doing so helps to avoid the negative consequences of narrowly focused AI development, including the risk of biases in developing algorithms, by taking advantage of a broader spectrum of experience, backgrounds, and opinions.

From goals to guidelines

The goal with both papers is to establish what an effective approach to artificial intelligence looks like from a government perspective. There’s an understanding that corporate interests will pursue corporate interests, but a range of issues exist in the development of artificial intelligence technologies that businesses are not necessarily equipped to deal with. Nor do they have much incentive to grapple with some of these issues anyway.

The report on the government’s strategic investment plan states:

The Federal government is the primary source of funding for long-term, high-risk research initiatives, as well as near-term developmental work to achieve department- or agency-specific requirements or to address important societal issues that private industry does not pursue. The Federal government should therefore emphasize AI investments in areas of strong societal importance that are not aimed at consumer markets—areas such as AI for public health, urban systems and smart communities, social welfare, criminal justice, environmental sustainability, and national security, as well as long-term research that accelerates the production of AI knowledge and technologies.

Alongside this emphasis on artificial intelligence for the public good is an acknowledgement that these innovations could lead to job insecurity as the robots take over. That’s why one of the main thrusts of the government’s research is in how to make artificial intelligence work with humans rather than exclusively work for humans, or work instead of humans.

The meat of the government’s strategy, outlined in the bullet points below, deals with the human cost of artificial intelligence.

  • Strategy 1: Make long-term investments in AI research. Prioritize investments in the next generation of AI that will drive discovery and insight and enable the United States to remain a world leader in AI R&D.
  • Strategy 2: Develop effective methods for human-AI collaboration. Rather than replace humans, most AI systems will collaborate with humans to achieve joint optimal system performance and benefit. Research is needed to create effective interactions between humans and AI systems.
  • Strategy 3: Understand and address the ethical, legal, and societal implications of AI. We expect AI technologies to behave according to the formal and informal norms to which we hold our fellow humans. Research is needed to understand the ethical, legal, and social implications of AI, and to develop methods for designing AI systems that align with ethical, legal, and societal goals.
  • Strategy 4: Ensure the safety and security of AI systems. Before AI systems are in widespread use, assurance is needed that the systems will operate safely and securely, in a controlled, well-defined, and well-understood manner. Further progress in research is needed to address this challenge of creating AI systems that are reliable, dependable, and trustworthy.
  • Strategy 5: Develop shared public datasets and environments for AI training and testing. The depth, quality, and accuracy of training datasets and resources significantly affect AI performance. Researchers need to develop high quality datasets and environments and enable responsible access to high-quality datasets as well as to testing and training resources.
  • Strategy 6: Measure and evaluate AI technologies through standards and benchmarks. Essential to advancements in AI are standards, benchmarks, testbeds, and community engagement that guide and evaluate progress in AI. Additional research is needed to develop a broad spectrum of evaluative techniques.
  • Strategy 7: Better understand the national AI R&D workforce needs. Advances in AI will require a strong community of AI researchers. An improved understanding of current and future R&D workforce demands in AI is needed to help ensure that sufficient AI experts are available to address the strategic R&D areas outlined in this plan.

It’s also worth mentioning that these reports aren’t the last word (or even the first word) on the U.S. approach to artificial intelligence. There are at least seven other (probably very long) research and development strategic plans that deal with aspects of the government’s approach to AI research.

That’s a good thing, too, because, as the White House report acknowledges, the U.S. is no longer necessarily the leader in the field. Research from China has outstripped the U.S. (at least in terms of papers published on the subject).


Now’s the time for a more invigorated policy, which perhaps these papers will help charge.
