News
CRN rounds up seven new, cutting-edge AI chips from Nvidia and rivals such as Google and AMD that have been released recently ...
Today's news follows CoreWeave's early access period during which H100 PCIe instances launched in December and HGX H100s launched in February, further strengthening CoreWeave’s commitment to ...
Built on the NVIDIA HGX H100 eight-GPU platform with BlueField-3 DPUs, Spectrum-X networking, and Marvell interconnects, NVIDIA Israel-1 was developed to power AI applications with high efficiency.
At the GPU Technology Conference (GTC) 2025 last week, Astera Labs did two things. First, it demonstrated the interoperability of its “Scorpio” P-Series PCI-Express 6.0 fabric switches and “Aries” ...
Astera linked its switch to an unspecified CPU, an Nvidia H100 GPU, and two Micron PCIe 6.x E3.S SSDs. The demo used Nvidia's Magnum IO GPUDirect Storage technology to establish a direct data path ...
In a partnership with Astera Labs, Micron paired two PCIe 6.0 SSDs with an Nvidia H100 GPU and Astera's PCIe 6.0 fabric switch.
With double the bandwidth of PCIe 5.0, PCIe 6.x delivers up to 256GB/s of bidirectional throughput on an x16 lane configuration, significantly reducing bottlenecks in AI training and inference tasks.
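The 256GB/s figure follows from simple per-lane arithmetic. The sketch below is a back-of-the-envelope calculation (function and table names are illustrative, not from any cited source) that ignores FLIT and encoding overhead, so real achievable throughput is slightly lower:

```python
# Raw per-lane signaling rates in GT/s, per direction, by PCIe generation.
# Assumption: 1 bit per transfer; protocol/encoding overhead is ignored.
PCIE_RATES_GT_S = {"3.0": 8, "4.0": 16, "5.0": 32, "6.0": 64}

def bandwidth_gb_s(gen: str, lanes: int = 16, bidirectional: bool = True) -> float:
    """Raw link bandwidth in GB/s (decimal) for a given generation and width."""
    gb_per_lane = PCIE_RATES_GT_S[gen] / 8  # 8 bits per byte
    total = gb_per_lane * lanes
    return total * 2 if bidirectional else total

print(bandwidth_gb_s("6.0"))  # 256.0 GB/s bidirectional on x16
print(bandwidth_gb_s("5.0"))  # 128.0 GB/s, i.e. half of PCIe 6.0
```

Each generation doubles the per-lane rate, which is why PCIe 6.0 at x16 lands at exactly twice the PCIe 5.0 figure.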
Nvidia does the math a little more favorably, breaking B200 performance down to per-GPU figures not specified in the MLPerf results and comparing them with the older 80 GB H100 (submission 4.1.-0043).
Tom's Hardware (on MSN): AMD MI300X performance compared with the Nvidia H100, with low-level benchmarks testing cache, latency, inference, and more showing strong results for single GPUs. AMD's MI300X was tested by Chips and Cheese, looking at many low-level performance metrics and comparing the chip with rival ...