
Microsoft looks to free itself from GPU shackles by designing custom AI chips


Microsoft’s campus in Germany.
Image Credits: Fink Avenue / Getty Images

Most companies developing AI models, particularly generative AI models like ChatGPT, GPT-4 Turbo and Stable Diffusion, rely heavily on GPUs. GPUs’ ability to perform many computations in parallel makes them well-suited to training — and running — today’s most capable AI.

But there simply aren’t enough GPUs to go around.

Nvidia’s best-performing AI cards are reportedly sold out until 2024. The CEO of chipmaker TSMC was less optimistic recently, suggesting that the shortage of AI GPUs from Nvidia — as well as chips from Nvidia’s rivals — could extend into 2025.

So Microsoft’s going its own way.

Today at its 2023 Ignite conference, Microsoft unveiled two custom-designed, in-house and data center-bound AI chips: the Azure Maia 100 AI Accelerator and the Azure Cobalt 100 CPU. Maia 100 can be used to train and run AI models, while Cobalt 100 is designed to run general purpose workloads.

Image Credits: Microsoft

“Microsoft is building the infrastructure to support AI innovation, and we are reimagining every aspect of our data centers to meet the needs of our customers,” Scott Guthrie, Microsoft cloud and AI group EVP, was quoted as saying in a press release provided to TechCrunch earlier this week. “At the scale we operate, it’s important for us to optimize and integrate every layer of the infrastructure stack to maximize performance, diversify our supply chain and give customers infrastructure choice.”

Both Maia 100 and Cobalt 100 will start to roll out early next year to Azure data centers, Microsoft says — initially powering Microsoft AI services like Copilot, Microsoft’s family of generative AI products, and Azure OpenAI Service, the company’s fully managed offering for OpenAI models. It might be early days, but Microsoft assures that the chips aren’t one-offs. Second-generation Maia and Cobalt hardware is already in the works.

Built from the ground up

That Microsoft created custom AI chips doesn’t come as a surprise, exactly. The wheels were set in motion some time ago — and publicized.

In April, The Information reported that Microsoft had been working on AI chips in secret since 2019 as part of a project code-named Athena. And further back, in 2020, Bloomberg revealed that Microsoft had designed a range of chips based on the Arm architecture for data centers and other devices, including consumer hardware (think the Surface Pro).

But the announcement at Ignite gives the most thorough look yet at Microsoft’s semiconductor efforts.

First up is Maia 100.

Microsoft says that Maia 100 — a 5-nanometer chip containing 105 billion transistors — was engineered “specifically for the Azure hardware stack” and to “achieve the absolute maximum utilization of the hardware.” The company promises that Maia 100 will “power some of the largest internal AI [and generative AI] workloads running on Microsoft Azure,” inclusive of workloads for Bing, Microsoft 365 and Azure OpenAI Service (but not public cloud customers — yet).

Maia 100
Image Credits: Microsoft

That’s a lot of jargon, though. What’s it all mean? Well, to be quite honest, it’s not totally obvious to this reporter — at least not from the details Microsoft’s provided in its press materials. In fact, it’s not even clear what sort of chip Maia 100 is; Microsoft’s chosen to keep the architecture under wraps, at least for the time being.

In another disappointing development, Microsoft didn’t submit Maia 100 to public benchmark suites like MLCommons’ MLPerf, so there’s no comparing the chip’s performance to that of other AI training chips out there, such as Google’s TPU, Amazon’s Trainium and Meta’s MTIA. Now that the cat’s out of the bag, here’s hoping that’ll change in short order.

One interesting factoid that Microsoft was willing to disclose is that its close AI partner and investment target, OpenAI, provided feedback on Maia 100’s design.

It’s an evolution of the two companies’ compute infrastructure tie-ups.

In 2020, OpenAI worked with Microsoft to co-design an Azure-hosted “AI supercomputer” — a cluster containing over 285,000 processor cores and 10,000 graphics cards. Subsequently, OpenAI and Microsoft built multiple supercomputing systems powered by Azure — which OpenAI exclusively uses for its research, API and products — to train OpenAI’s models.

“Since first partnering with Microsoft, we’ve collaborated to co-design Azure’s AI infrastructure at every layer for our models and unprecedented training needs,” OpenAI CEO Sam Altman said in a canned statement. “We were excited when Microsoft first shared their designs for the Maia chip, and we’ve worked together to refine and test it with our models. Azure’s end-to-end AI architecture, now optimized down to the silicon with Maia, paves the way for training more capable models and making those models cheaper for our customers.”

I asked Microsoft for clarification, and a spokesperson had this to say: “As OpenAI’s exclusive cloud provider, we work closely together to ensure our infrastructure meets their requirements today and in the future. They have provided valuable testing and feedback on Maia, and we will continue to consult their roadmap in the development of our Microsoft first-party AI silicon generations.”

We also know that Maia 100’s physical package is larger than a typical GPU’s.

Microsoft says that it had to build from scratch the data center server racks that house Maia 100 chips, with the goal of accommodating both the chips and the necessary power and networking cables. Maia 100 also required a unique liquid-based cooling solution since the chips consume a higher-than-average amount of power and Microsoft’s data centers weren’t designed for large liquid chillers.

Image Credits: Microsoft

“Cold liquid flows from [a ‘sidekick’] to cold plates that are attached to the surface of Maia 100 chips,” explains a Microsoft-authored post. “Each plate has channels through which liquid is circulated to absorb and transport heat. That flows to the sidekick, which removes heat from the liquid and sends it back to the rack to absorb more heat, and so on.”

As with Maia 100, Microsoft kept most of Cobalt 100’s technical details vague in its Ignite unveiling, save that Cobalt 100’s an energy-efficient, 128-core chip built on an Arm Neoverse CSS architecture and “optimized to deliver greater efficiency and performance in cloud native offerings.”

Cobalt 100
Image Credits: Microsoft

Arm-based data center chips are something of a trend — a trend that Microsoft’s now perpetuating. Amazon’s latest Arm-based server chip, Graviton3E (which complements Inferentia, the company’s dedicated inference chip), targets machine learning and other demanding workloads. Google, meanwhile, is reportedly preparing custom Arm server chips of its own.

“The architecture and implementation is designed with power efficiency in mind,” Wes McCullough, CVP of hardware product development, said of Cobalt in a statement. “We’re making the most efficient use of the transistors on the silicon. Multiply those efficiency gains in servers across all our datacenters, it adds up to a pretty big number.”

A Microsoft spokesperson said that Cobalt 100 will power new virtual machines for customers in the coming year.

But why?

So Microsoft’s made AI chips. But why? What’s the motivation?

Well, there’s the company line — “optimizing every layer of [the Azure] technology stack,” one of the Microsoft blog posts published today reads. But the subtext is that Microsoft’s vying to remain competitive — and cost-conscious — in the relentless race for AI dominance.

The scarcity and indispensability of GPUs have left companies in the AI space large and small, including Microsoft, beholden to chip vendors. In May, Nvidia reached a market value of more than $1 trillion on AI chip and related revenue ($13.5 billion in its most recent fiscal quarter), becoming only the sixth tech company in history to do so. Even with a fraction of the installed base, Nvidia’s chief rival, AMD, expects its GPU data center revenue alone to eclipse $2 billion in 2024.

Microsoft is no doubt dissatisfied with this arrangement. OpenAI certainly is — and it’s OpenAI’s tech that drives many of Microsoft’s flagship AI products, apps and services today.

In a private meeting with developers this summer, Altman admitted that GPU shortages and costs were hindering OpenAI’s progress; the company just this week was forced to pause sign-ups for ChatGPT due to capacity issues. Underlining the point, Altman said in an interview this week with the Financial Times that he “hoped” Microsoft, which has invested over $10 billion in OpenAI over the past four years, would increase its investment to help pay for “huge” imminent model training costs.

Microsoft itself warned shareholders earlier this year of potential Azure AI service disruptions if it can’t get enough chips for its data centers. The company’s been forced to take drastic measures in the interim, like incentivizing Azure customers with unused GPU reservations to give up those reservations in exchange for refunds and pledging billions of dollars to third-party cloud GPU providers like CoreWeave.

Should OpenAI design its own AI chips as rumored, it could put the two parties at odds. But Microsoft likely sees the potential cost savings arising from in-house hardware — and competitiveness in the cloud market — as worth the risk of preempting its ally.

One of Microsoft’s premier AI products, the code-generating GitHub Copilot, has reportedly been costing the company up to $80 per user per month, due in part to model inferencing costs. If the situation doesn’t turn around, investment firm UBS sees Microsoft struggling to generate AI revenue streams next year.
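The back-of-the-envelope math here is stark. As a rough sketch — assuming, hypothetically, the $10/month individual Copilot subscription price as the revenue side against the reported serving cost — the per-user margin looks like this:

```python
# Rough per-user economics sketch based on the reported figures above.
# The $10/month subscription price is an assumption for illustration;
# actual pricing tiers and serving costs vary.
def monthly_margin(price_per_user: float, cost_per_user: float) -> float:
    """Monthly revenue minus serving cost for a single user."""
    return price_per_user - cost_per_user

# Reported: up to $80/user/month to serve vs. a hypothetical $10/month price.
loss = monthly_margin(10.0, 80.0)
print(f"per-user monthly margin: ${loss:.2f}")  # per-user monthly margin: $-70.00
```

At those numbers, every additional user deepens the loss — which is exactly the kind of arithmetic that makes cheaper in-house inference silicon attractive.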

Of course, hardware is hard, and there’s no guarantee that Microsoft will succeed in launching AI chips where others failed.

Meta’s early custom AI chip efforts were beset with problems, leading the company to scrap some of its experimental hardware. Elsewhere, Google hasn’t been able to keep pace with demand for its TPUs, Wired reports — and ran into design issues with its newest generation of the chip.

Microsoft’s giving it the old college try, though. And it’s oozing with confidence.

“Microsoft innovation is going further down in the stack with this silicon work to ensure the future of our customers’ workloads on Azure, prioritizing performance, power efficiency and cost,” Pat Stemen, a partner program manager on Microsoft’s Azure hardware systems and infrastructure team, said in a blog post today. “We chose this innovation intentionally so that our customers are going to get the best experience they can have with Azure today and in the future … We’re trying to provide the best set of options for [customers], whether it’s for performance or cost or any other dimension they care about.”
