
How confidential computing could secure generative AI adoption

Ayal Yogev

Contributor

Ayal Yogev is the co-founder and CEO of Anjuna, a multi-cloud confidential computing platform.

Generative AI has the potential to change everything. It can give rise to new products, companies, industries, and even economies. But what makes it different from, and better than, “traditional” AI could also make it dangerous.

Its unique ability to create has opened up an entirely new set of security and privacy concerns.

Enterprises are suddenly having to ask themselves new questions: Do I have the rights to the training data? To the model? To the outputs? Does the system itself have rights to data that’s created in the future? How are rights to that system protected? How do I govern data privacy in a model using generative AI? The list goes on.

It’s no surprise that many enterprises are treading lightly. Blatant security and privacy vulnerabilities coupled with a hesitancy to rely on existing Band-Aid solutions have pushed many to ban these tools entirely. But there is hope.

Confidential computing — a new approach to data security that protects data while in use and ensures code integrity — is the answer to the more complex and serious security concerns of large language models (LLMs). It’s poised to help enterprises embrace the full power of generative AI without compromising on safety. Before I explain, let’s first take a look at what makes generative AI uniquely vulnerable.

Generative AI has the capacity to ingest an entire company’s data, or even a knowledge-rich subset, into a queryable intelligent model that provides brand-new ideas on tap. This has massive appeal, but it also makes it extremely difficult for enterprises to maintain control over their proprietary data and stay compliant with evolving regulatory requirements.

Without adequate data security and trust controls, this concentration of knowledge, and the generative outcomes it produces, could inadvertently turn generative AI into a tool for abuse, theft, and illicit use.

Indeed, employees are increasingly feeding confidential business documents, client data, source code, and other pieces of regulated information into LLMs. Since these models are partly trained on new inputs, this could lead to major leaks of intellectual property in the event of a breach. And if the models themselves are compromised, any content that a company has been legally or contractually obligated to protect might also be leaked. In a worst-case scenario, theft of a model and its data would allow a competitor or nation-state actor to duplicate everything and steal that data.

These are high stakes. Gartner recently found that 41% of organizations have experienced an AI privacy breach or security incident — and over half are the result of a data compromise by an internal party. The advent of generative AI is bound to grow these numbers.

Separately, enterprises also need to keep up with evolving privacy regulations when they invest in generative AI. Across industries, there’s a deep responsibility and incentive to stay compliant with data requirements. In healthcare, for example, AI-powered personalized medicine has huge potential when it comes to improving patient outcomes and overall efficiency. But providers and researchers will need to access and work with large amounts of sensitive patient data while still staying compliant, presenting a new quandary.

To address these challenges, and the rest that will inevitably arise, generative AI needs a new security foundation. Protecting training data and models must be the top priority; it’s no longer sufficient to encrypt fields in databases or rows on a form.

In scenarios where generative AI outcomes are used for important decisions, evidence of the integrity of the code and data — and the trust it conveys — will be absolutely critical, both for compliance and for managing potential legal liability. There must be a way to provide airtight protection for the entire computation and the state in which it runs.

The advent of “confidential” generative AI

Confidential computing offers a simple yet hugely powerful way out of what would otherwise seem to be an intractable problem. With confidential computing, data and IP are completely isolated from infrastructure owners and made accessible only to trusted applications running on trusted CPUs. Data stays encrypted even during execution, so privacy is preserved while it is in use.

Data security and privacy become intrinsic properties of cloud computing: even if a malicious attacker breaches the infrastructure, the data, IP, and code remain completely invisible to that bad actor. This makes it a natural fit for generative AI, mitigating its security, privacy, and attack risks.
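To make that isolation concrete, here is a minimal sketch of the attestation-gated key release pattern it rests on: a key-release check hands out the key protecting model data only to an enclave whose reported code measurement matches an approved value. The measurement values, key, and function names below are illustrative assumptions, and a real deployment would verify a signed hardware attestation report from the CPU vendor rather than a bare hash.

```python
import hashlib
import hmac

# Hypothetical sketch of attestation-gated key release. Measurement values,
# the key, and helper names are illustrative assumptions, not a vendor API.

# Measurement (hash) of the enclave image we built and approved to run the model.
EXPECTED_MEASUREMENT = hashlib.sha384(b"approved-llm-serving-image").hexdigest()

# Key that protects the proprietary training data and model weights at rest.
DATA_KEY = b"\x00" * 32  # placeholder; a real key would live in a KMS or HSM

def release_key(reported_measurement: str) -> bytes | None:
    """Release the data key only to an enclave that proves it runs the exact
    approved code. Anything else, including the cloud operator's own tooling,
    gets nothing."""
    if hmac.compare_digest(reported_measurement, EXPECTED_MEASUREMENT):
        return DATA_KEY   # trusted enclave: data can be decrypted inside it
    return None           # unknown or modified code: data stays sealed

# A tampered image produces a different measurement and is refused the key.
tampered = hashlib.sha384(b"approved-llm-serving-image-with-backdoor").hexdigest()
assert release_key(tampered) is None
assert release_key(EXPECTED_MEASUREMENT) == DATA_KEY
```

Under this model, the infrastructure owner never holds the plaintext key and the data, so a breach of the surrounding cloud environment does not expose the model or its training corpus.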

Confidential computing has been increasingly gaining traction as a security game-changer. Every major cloud provider and chip maker is investing in it, with leaders at Azure, AWS, and GCP all proclaiming its efficacy. Now, the same technology that’s converting even the most steadfast cloud holdouts could be the solution that helps generative AI take off securely. Leaders must begin to take it seriously and understand its profound impacts.

With confidential computing, enterprises gain assurance that generative AI models learn only from the data they intend to use, and nothing else. Training on private datasets drawn from a network of trusted sources across clouds gives enterprises full control and peace of mind. All information, whether an input or an output, remains completely protected within a company’s own four walls.

On top of that, confidential computing delivers proof of processing, providing hard evidence of a model’s authenticity and integrity. Trust in the outcomes comes from trust in the inputs and generative data, so immutable evidence of processing will be a critical requirement to prove when and where data was generated.
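As a rough illustration of what such proof of processing might look like, the sketch below builds a receipt that binds the attested code measurement to digests of the inputs and outputs of a single generative run. The structure and field names are assumptions made for illustration; in practice the receipt would be signed inside the enclave with a hardware-backed attestation key so it cannot be forged.

```python
import hashlib
import json
import time

# Illustrative "proof of processing" receipt. Field names are assumptions;
# a real confidential-computing deployment would sign this record inside
# the enclave with a hardware-backed key.

def processing_receipt(code_measurement: str, inputs: bytes, outputs: bytes) -> dict:
    return {
        "code_measurement": code_measurement,                    # what code ran
        "input_digest": hashlib.sha256(inputs).hexdigest(),      # what went in
        "output_digest": hashlib.sha256(outputs).hexdigest(),    # what came out
        "generated_at": time.time(),                             # when it ran
    }

# Example: a receipt for one generative query, using a made-up measurement.
measurement = hashlib.sha384(b"approved-llm-serving-image").hexdigest()
receipt = processing_receipt(
    code_measurement=measurement,
    inputs=b"prompt plus retrieved internal documents",
    outputs=b"generated answer",
)
print(json.dumps(receipt, indent=2))
```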

This is particularly important when it comes to data privacy regulations such as GDPR, CPRA, and new U.S. privacy laws coming online this year. Confidential computing ensures privacy over code and data processing by default, going beyond just the data. While organizations must still collect data responsibly, confidential computing provides far higher levels of privacy and isolation for running code and data, so that insiders, IT staff, and the cloud provider have no access.

This is an ideal capability for even the most sensitive industries, such as healthcare, life sciences, and financial services. When data and code are protected and isolated by hardware controls, all processing happens privately in the processor without the possibility of data leakage. Authorized users can see the results of their queries, but they remain isolated in hardware from the underlying data and processing. Confidential computing thus protects us from ourselves in a powerful, risk-preventative way.

Crucially, the confidential computing security model is uniquely able to preemptively minimize new and emerging risks. First, one of the attack vectors for AI is the query interface itself. To mitigate this vulnerability, confidential computing can provide hardware-based guarantees that only trusted and approved applications can connect and engage.

This restricts rogue applications and locks generative AI connectivity down to strict enterprise policies and approved code, while also containing outputs within trusted and secure infrastructure.
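One way to picture that lockdown: the serving enclave keeps an allowlist of attested client application measurements and refuses any connection that cannot present an approved one. The measurements, helper function, and check below are hypothetical; a real system would verify a signed attestation report rather than compare a plain string.

```python
import hashlib

# Hypothetical sketch of attestation-gated access to the query interface.
# Measurements and the check itself are illustrative, not a real API.

# Only these client applications are approved to talk to the model.
APPROVED_CLIENTS = {
    hashlib.sha384(b"internal-chat-frontend-v3").hexdigest(),
    hashlib.sha384(b"analytics-batch-pipeline-v1").hexdigest(),
}

def handle_query(client_measurement: str, prompt: str) -> str:
    """Serve the query only if the caller attests to running approved code."""
    if client_measurement not in APPROVED_CLIENTS:
        raise PermissionError("unattested or unapproved client; connection refused")
    return f"model response to: {prompt}"  # placeholder for real inference

# A rogue script with an unknown measurement is rejected before it can query.
rogue = hashlib.sha384(b"shadow-it-scraper").hexdigest()
try:
    handle_query(rogue, "dump the customer list")
except PermissionError as err:
    print(err)
```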

Second, as enterprises start to scale their generative AI use cases, the limited availability of GPUs will push them toward GPU grid services, which no doubt come with their own privacy and security outsourcing risks.

The use of general GPU grids will require a confidential computing approach for “burstable” supercomputing wherever and whenever processing is needed, while preserving privacy over models and data. Emerging confidential GPUs will help address this, especially if they can be used easily and with complete privacy. In effect, this creates a confidential supercomputing capability on tap.

Last, confidential computing controls the path data takes on its way into a product, admitting it only into a secure enclave and enabling secure rights management and consumption of derived products. Confidential computing hardware can prove that the AI and training code run on a trusted confidential CPU, and that they are exactly the code and data we expect, with zero changes.

This immutable proof of trust is incredibly powerful, and simply not possible without confidential computing. Provable machine and code identity solves a massive workload trust problem that is critical to generative AI integrity and to secure rights management over derived products. In effect, this is zero trust for code and data.

When we look at the big picture, securing generative AI must span the following:

  • Trust in the infrastructure it is running on: to anchor confidentiality and integrity over the entire supply chain from build to run.
  • Control over what data is used for training: to guarantee that data shared with partners for training, or data acquired, can be trusted to achieve the most accurate outcomes without inadvertent compliance risks.
  • Privacy over processing during execution: to limit attacks, manipulation, and insider threats with immutable hardware isolation.
  • Privacy over computation and query: to limit new threats and to meet state-of-the-art compliance requirements.

Fortunately, confidential computing is ready to meet many of these challenges and build a new foundation for trust and private generative AI processing.

Tilting the scales of the generative AI cost-benefit analysis

Generative AI is unlike anything enterprises have seen before. But for all its potential, it carries new and unprecedented risks. Fortunately, being risk-averse doesn’t have to mean avoiding the technology entirely. Confidential computing solves the generative AI cost-benefit equation for enterprises, ensuring that they can use LLMs without compromising on security, privacy, control, or compliance.

Going forward, scaling LLMs will eventually go hand in hand with confidential computing. When vast models and vast datasets are a given, confidential computing will become the only feasible route for enterprises to safely take the AI journey and, ultimately, to embrace the power of private supercomputing for all that it enables.

If investments in confidential computing continue — and I believe they will — more enterprises will be able to adopt it without fear, and innovate without bounds.
