At GTC 2024, Nvidia announced its latest Blackwell-series graphics processing unit (GPU), which will power the next generation of artificial intelligence (AI) accelerators. The new Blackwell GPU is the company's first multi-chip product and contains 208 billion transistors, significantly exceeding the 80 billion of the previous flagship, the GH100, AnandTech reports.
Blackwell pairs two identical dies connected by the high-performance NV-High Bandwidth Interface (NV-HBI), which transfers data at 10 TB per second. The package also carries eight HBM3e memory stacks totaling 192 GB, attached over an 8192-bit bus that delivers 8 TB per second of throughput.
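The quoted memory figures can be sanity-checked with simple arithmetic. The sketch below assumes an HBM3e per-pin data rate of 8 Gbit/s, a figure not stated in the article but typical for the memory class; with the stated 8192-bit bus, it lands on roughly the 8 TB/s the article reports.

```python
# Sanity check on Blackwell's quoted HBM3e bandwidth.
# Assumption (not from the article): each memory pin runs at 8 Gbit/s,
# a plausible data rate for HBM3e.

BUS_WIDTH_BITS = 8192   # total memory interface width (from the article)
PIN_RATE_GBPS = 8       # assumed per-pin data rate, Gbit/s

total_gbits_per_s = BUS_WIDTH_BITS * PIN_RATE_GBPS  # aggregate bit rate
total_tb_per_s = total_gbits_per_s / 8 / 1000       # bits -> bytes -> TB

# About 8.2 TB/s, consistent with the article's rounded 8 TB/s figure.
print(f"{total_tb_per_s:.3f} TB/s")
```

The same arithmetic with eight stacks implies a 1024-bit interface per stack, the standard HBM stack width.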
The GPU is fabricated on TSMC's 4-nanometer-class 4NP process. Nvidia claims Blackwell is 2.5 to 5 times faster than its predecessor, the GH100, depending on the operating mode; FP32 performance has not been disclosed, which may reflect the chip's focus on AI computing.
At the same time, Nvidia introduced the GB200 Grace Blackwell Superchip, which combines two next-generation GPUs with a 72-core Arm Neoverse V2 processor and is capable of reaching 40 petaflops of performance in FP4 mode.
Major manufacturers, including ASRock Rack, ASUS, Foxconn, Gigabyte and others, will soon offer accelerator systems based on the new Nvidia chip.
It was previously reported that Nvidia will not increase the memory of its mainstream gaming graphics card.
Source: Gazeta
Jackson Ruhl is a tech and sci-fi expert who writes for “Social Bites”. He brings his readers the latest news and developments from the world of technology and science fiction.