The Big Tech giant Microsoft announced the launch of a new artificial intelligence (AI) chip, the Microsoft Azure Maia AI Accelerator, according to a blog post on Nov. 15.
The chip is designed for AI tasks and generative AI and debuted alongside the Microsoft Azure Cobalt CPU, which is designed to run general-purpose compute workloads on the Microsoft Cloud. Microsoft called the two chips the “last puzzle piece” for its infrastructure systems.
According to the announcement, the chips will arrive in early 2024, first in Microsoft’s data centers, where they will help power its Copilot and Azure OpenAI Service.
Scott Guthrie, executive vice president of Microsoft’s Cloud + AI Group, commented on the integration of the chips into the company’s data centers:
“At the scale we operate, it’s important for us to optimize and integrate every layer of the infrastructure stack to maximize performance, diversify our supply chain and give customers infrastructure choice.”
The AI company OpenAI, which is backed by Microsoft, is said to have provided feedback on the new Maia 100 AI Accelerator and how its own workloads run on top of the new infrastructure.
Sam Altman, OpenAI’s CEO, said the new chips will help make the company’s AI models more “capable” and “cheaper” for users.
Alongside the new chips, Microsoft also announced expanded partnerships with two of the world’s major chip manufacturers, Nvidia and AMD, and plans to integrate some of their high-performance chips into its operations.
This news comes as many major companies in the tech and AI industry are ramping up production of semiconductor chips.
In October, Samsung revealed that it is developing AI chips and intellectual property for data centers with the Canadian startup Tenstorrent. Shortly after, reports emerged that OpenAI was considering making AI chips in-house.
Most recently, on Oct. 22, the global tech company IBM unveiled its new AI chip, which it claims offers a 22x speedup and is reported to be more energy-efficient than any chip currently available.