Nvidia launches new chip for accelerated computing and generative AI

The new platform uses the Grace Hopper Superchip, which can be connected with additional Superchips by Nvidia NVLink

Published By: Pranav Sawant | Published: Aug 09, 2023, 08:49 PM (IST)

Highlights

  • Nvidia has launched a new chip for accelerated computing.
  • The new chip will also be able to handle complex AI workloads.
  • The AI chip is based on GH200 Grace Hopper Superchip platform.

Chip maker Nvidia has unveiled a new AI chip built for accelerated computing and designed to handle the world’s most complex generative AI workloads, spanning large language models, recommender systems, and vector databases.

The next-generation GH200 Grace Hopper platform is based on a new Grace Hopper Superchip featuring the world’s first HBM3e processor, and it will be available in a wide range of configurations, the company said.

“The new GH200 Grace Hopper Superchip platform delivers this with exceptional memory technology and bandwidth to improve throughput, the ability to connect GPUs to aggregate performance without compromise, and a server design that can be easily deployed across the entire data centre,” Jensen Huang, founder and CEO of Nvidia, said in a statement.

The new platform uses the Grace Hopper Superchip, which can be connected with additional Superchips by Nvidia NVLink, allowing them to work together to deploy the giant models used for generative AI.

This high-speed, coherent interconnect gives the GPU full access to the CPU memory, providing a combined 1.2TB of fast memory in the dual configuration, according to the company. “HBM3e memory, which is 50 per cent faster than current HBM3, delivers a total of 10TB/sec of combined bandwidth, allowing the new platform to run models 3.5x larger than the previous version, while improving performance with 3x faster memory bandwidth,” Nvidia said.

Leading system manufacturers are expected to deliver systems based on the platform in Q2 of calendar year 2024.

IANS