Nvidia Unveils Faster Chip Aimed at Cementing AI Dominance

(Bloomberg) — Nvidia Corp. announced an updated AI processor that gives a jolt to the chip’s capacity and speed, seeking to cement the company’s dominance in a burgeoning market.

The Grace Hopper Superchip, a combination graphics chip and processor, will get a boost from a new type of memory, Nvidia said Tuesday at the Siggraph conference in Los Angeles. The product relies on HBM3e, a faster version of high-bandwidth memory that can feed data to the chip at 5 terabytes per second.

The Superchip, known as GH200, will go into production in the second quarter of 2024, Nvidia said. It’s part of a new lineup of hardware and software that was announced at the event, a computer-graphics expo where Chief Executive Officer Jensen Huang is speaking.

Nvidia has built an early lead in the market for so-called AI accelerators, chips that excel at crunching data in the process of developing artificial intelligence software. That’s helped propel the company’s valuation past $1 trillion this year, making it the world’s most valuable chipmaker. The latest processor signals that Nvidia aims to make it hard for competitors like Advanced Micro Devices Inc. and Intel Corp. to catch up.

AMD’s rival offerings, two versions of its MI300 design, will be in the hands of customers in the fourth quarter of this year. One version is a graphics chip, and one is a combination product like Nvidia’s Superchip. AMD’s components will work with another version of HBM3 memory.

In the age of AI, Huang sees his technology as a replacement for traditional data center gear. He said that a $100 million facility built with older equipment can be replaced with an $8 million investment in his new technology, which would also consume one-twentieth the power.

“This is the reason why the world’s data centers are rapidly transitioning to accelerated computing,” he told the audience. “The more you buy, the more you save.”


Huang has turned a 30-year-old graphics chips business into the top seller of equipment for training AI models, a process that involves sifting through massive amounts of data. Now that AI tools like ChatGPT and Google Bard are catching on with consumers and businesses, companies are racing to add Nvidia technology to handle the workload.

Nvidia shares have more than tripled this year, bringing its valuation to about $1.1 trillion. That’s the biggest gain in 2023 for companies in the closely watched Philadelphia Stock Exchange Semiconductor Index, by a long shot. Though Nvidia was down about 2% on Tuesday, it briefly pared the losses after announcing the GH200 chip.

The Superchip will serve as the heart of a new server computer design that can handle a greater amount of information and access it more quickly — a key advantage given the tsunami of data flowing through AI models. Artificial intelligence training gets a boost if the chip can load a model in one go and update it without having to offload parts to slower forms of memory. That saves power and speeds the whole process up.

In servers, two of the chips can be deployed together, offering more than 3.5 times the capacity of an existing model, Nvidia said. That will let customers deploy fewer machines or get work done far quicker.

The latest Nvidia products are designed to spread generative AI — and its underlying hardware — to even more industries by making the technology simpler to use. A new version of the company’s AI Enterprise software will ease the process of training the models, which can then generate text, images and even video based on simple prompts. 

The lineup also includes new chips for workstations, computers designed for heavy workloads. New AI Workbench software, meanwhile, helps users switch their work on AI models between different types of computers.

With the Workbench tool, users can move their models and training work from PCs to workstations to data centers and even public cloud services. The software handles the process of adjusting the AI software to fit the current platform. The idea is to help spur further demand for Nvidia’s processors, regardless of what system they’re running on.

The Santa Clara, California-based company also announced a partnership with Hugging Face, a popular developer of AI models and data sets. Hugging Face will add a training service to its website that uses Nvidia DGX Cloud, allowing users to tap the chipmaker’s servers to handle their workloads.

In addition, Nvidia is adding generative AI to its Omniverse offerings, a platform designed to support metaverse-style virtual worlds. The company is using the technology to help corporate clients create online versions of real-world things, such as factories and vehicles.

To encourage others to use the technology, Nvidia has endorsed a standard called Universal Scene Description that was originally developed by Walt Disney Co.’s Pixar. The chipmaker has formed an alliance with Pixar, Autodesk Inc., Adobe Inc. and Apple Inc. to try to speed up adoption.

On the hardware front, Nvidia is releasing three new RTX graphics cards for workstations. The RTX 5000 is available now for $4,000 and will more than double the speed of generative AI and the rendering of images, Nvidia said. The chipmaker is also introducing new servers based on the L40S graphics chip and a design for top-of-the-line workstations that use four of its RTX 6000 cards.

(Updates with details of AMD announcement in fourth paragraph.)


©2023 Bloomberg L.P.