
Lisa Su shows an AMD Instinct MI300 chip as she delivers a keynote address at CES 2023 at The Venetian Las Vegas on January 4, 2023, in Las Vegas, Nevada.
David Baker | Getty Images
AMD said on Tuesday that its most advanced GPU for artificial intelligence, the MI300X, will start shipping to some customers later this year.
AMD’s announcement represents the strongest challenge yet to Nvidia, which currently dominates the AI chip market with more than 80% share, according to analysts.
Graphics Processing Units (GPUs) are chips that companies like OpenAI use to build sophisticated AI software like ChatGPT.
If AMD’s AI chips, which it calls “accelerators,” are adopted by developers and server makers as alternatives to Nvidia’s products, they could represent a large untapped market for the chipmaker, which is best known for its traditional PC processors.
AMD CEO Lisa Su told investors and analysts in San Francisco on Tuesday that artificial intelligence is the company’s “biggest and most strategic long-term growth opportunity.”
“We see the data center AI accelerator (market) growing from $30 billion this year, at a compound annual growth rate of more than 50%, to more than $150 billion in 2027,” Su said.
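Su’s figures are internally consistent: compounding $30 billion at a 50% annual rate over the four years to 2027 lands just above $150 billion. A quick back-of-envelope check of that arithmetic:

```python
# Sanity check of Su's forecast: $30B compounding at a >50% CAGR
# for the four years from 2023 to 2027.
base_2023 = 30e9      # market size this year, per Su
cagr = 0.50           # "more than 50%" -- 50% taken as the lower bound
years = 2027 - 2023   # four compounding periods

projection_2027 = base_2023 * (1 + cagr) ** years
print(f"Projected 2027 market: ${projection_2027 / 1e9:.0f}B")
# -> Projected 2027 market: $152B, consistent with "more than $150 billion"
```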
Although AMD did not disclose a price, the move could put pricing pressure on Nvidia’s GPUs, such as the H100, which can cost $30,000 or more. Lower GPU prices may help bring down the high cost of serving generative AI applications.
AI chips are one of the bright spots in the semiconductor industry at a time when sales of PCs, the traditional driver of processor demand, have declined.
Last month, Su said on an earnings call that while the MI300X will be available for sampling this fall, it would start shipping in larger quantities next year. She shared more details about the chip during her presentation on Tuesday.
“I love this chip,” Su said.
MI300X
AMD said the new MI300X chip and its CDNA architecture are designed for large language models and other high-end AI models.
“At the heart of this are GPUs. GPUs enable generative AI,” Su said.
The MI300X can use up to 192GB of memory, which means it can fit larger AI models than other chips. The rival Nvidia H100 supports only 120GB of memory, for example.
Large language models for generative AI applications use a lot of memory because they run a growing number of calculations. AMD demonstrated the MI300X running a 40 billion parameter model called Falcon. OpenAI’s GPT-3 model, by comparison, has 175 billion parameters.
“The model sizes are getting bigger, and you really need many GPUs to run the latest large language models,” Su said, noting that with the added memory on AMD chips, developers won’t need as many GPUs.
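Those figures imply simple arithmetic: at 16-bit precision each parameter occupies two bytes, so a model’s weights alone consume roughly twice its parameter count in gigabytes. A rough sketch of that math for the models mentioned above (it ignores activations, the KV cache, and other runtime overhead, which push real requirements higher):

```python
# Back-of-envelope memory math for the models named in the article.
# Assumes 16-bit (2-byte) weights; runtime overhead is ignored.
BYTES_PER_PARAM = 2  # fp16/bf16

def weight_gb(params: float) -> float:
    """Approximate gigabytes needed just to hold a model's weights."""
    return params * BYTES_PER_PARAM / 1e9

for name, params in [("Falcon-40B", 40e9), ("GPT-3 (175B)", 175e9)]:
    gb = weight_gb(params)
    print(f"{name}: ~{gb:.0f} GB of weights; "
          f"fits on one 192GB MI300X: {gb <= 192}")
# Falcon-40B: ~80 GB  -> fits on a single MI300X
# GPT-3:      ~350 GB -> must be split across multiple GPUs
```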
AMD also said that it will introduce the Infinity Architecture, which combines eight MI300X accelerators into a single system. Nvidia and Google have developed similar systems that combine eight or more GPUs into a single box for AI applications.
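Pooling eight 192GB accelerators gives one such system roughly 1.5TB of GPU memory, enough to hold the weights of a GPT-3-scale model in a single box. A short sketch of that capacity math, reusing the assumptions above (a back-of-envelope estimate, not AMD’s published specification):

```python
# Rough capacity math for an eight-accelerator system, using the
# article's figures; an estimate, not a published specification.
GPUS_PER_NODE = 8
GB_PER_GPU = 192

node_memory = GPUS_PER_NODE * GB_PER_GPU      # 1536 GB per system
gpt3_weights_gb = 175e9 * 2 / 1e9             # ~350 GB at fp16
min_gpus = -(-gpt3_weights_gb // GB_PER_GPU)  # ceiling division

print(f"System memory: {node_memory} GB")
print(f"MI300X GPUs needed for GPT-3-scale weights alone: {int(min_gpus)}")
# -> 1536 GB per system; 2 GPUs hold the weights, leaving headroom
#    for activations, the KV cache, and larger models
```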
One reason AI developers have historically preferred Nvidia chips is its well-developed software package, CUDA, which lets them access the chips’ core hardware features.
AMD said on Tuesday that it has its own software for its AI chips, called ROCm.
“Now while this is a journey, we’ve made really great progress in building a robust software suite that works with the open ecosystem of models, libraries, frameworks, and tools,” said Victor Peng, AMD’s president.
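One concrete way ROCm narrows the CUDA gap: AMD’s ROCm builds of PyTorch expose the same torch.cuda device API that Nvidia users write against, so much existing GPU code runs unchanged on AMD hardware. A minimal, illustrative sketch (not AMD sample code; it assumes a PyTorch install, either a CUDA or a ROCm build):

```python
# Portability sketch: PyTorch's ROCm builds reuse the torch.cuda API,
# so the same code targets Nvidia or AMD GPUs without modification.
import torch

# On a ROCm build, torch.version.hip is a version string; on a CUDA
# build it is None (getattr guards older releases).
backend = "ROCm/HIP" if getattr(torch.version, "hip", None) else "CUDA"
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Backend: {backend}, device: {device}")

# Identical tensor code runs on either vendor's accelerator.
x = torch.randn(1024, 1024, device=device)
y = x @ x.T  # matrix multiply, on the GPU if one is present
print(y.shape)
```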