Preview: Hot Chips Returns to Stanford for HC35

I will be at Stanford at the end of this month for the in-person return of Hot Chips. As always, the 35th edition (HC35) will have plenty of deep technical content, with AI/ML unsurprisingly getting lots of attention. I'm particularly interested in a set of talks exploring interconnects and networking for AI, HPC, and beyond.

Day 2 (Tuesday, August 29) features an ML-Training session with talks from Google and Cerebras. Norm Jouppi, the technical lead for TPUs at Google, will expand on the paper presented at ISCA 2023 describing the TPUv4 supercomputer. That paper revealed Google's use of optical circuit switches (OCSs) in its TPUv4 cluster, following prior disclosures about OCS deployments in its data-center spine layer.
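As a refresher on why OCSs are interesting: a circuit switch just maps input ports to output ports, so the cluster topology can be changed (or routed around a failed block) by rewriting that mapping, with no packet processing at all. Here's a toy sketch of the idea in Python; it's my own illustration, not Google's design.

```python
# Toy model of an optical circuit switch: it simply maps each input port to an
# output port, so "reconfiguring the topology" means installing a new mapping.
class OpticalCircuitSwitch:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.circuits: dict[int, int] = {}

    def configure(self, mapping: dict[int, int]) -> None:
        """Install a new set of circuits (e.g., to route around a failed block)."""
        assert len(set(mapping.values())) == len(mapping), "output ports must be unique"
        self.circuits = dict(mapping)

    def forward(self, in_port: int) -> int:
        """Light entering in_port exits the configured output port, untouched."""
        return self.circuits[in_port]

ocs = OpticalCircuitSwitch(num_ports=8)
ocs.configure({0: 4, 1: 5, 2: 6, 3: 7})   # initial topology
ocs.configure({0: 5, 1: 4, 2: 6, 3: 7})   # swap two links without touching any packets
print(ocs.forward(0))  # -> 5
```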

Sean Lie, cofounder and chief hardware architect at Cerebras, will deliver a talk on the company's cluster architecture built around the CS-2 system and WSE-2 wafer-scale engine. This talk will explore how the MemoryX external-memory system and SwarmX fabric interconnect stream data across up to 192 CS-2 systems.
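Cerebras has publicly described this approach as weight streaming: MemoryX holds the model weights off-wafer, and SwarmX broadcasts each layer's weights to every CS-2 while reducing the gradients flowing back. Here's a minimal sketch of that broadcast/reduce data flow; it's illustrative Python of my own, not anything resembling Cerebras software.

```python
import numpy as np

def swarmx_step(layer_weights, per_system_batches, grad_fn):
    """Broadcast one layer's weights to every system, then reduce their gradients."""
    grads = [grad_fn(layer_weights, batch) for batch in per_system_batches]  # broadcast + local compute
    return np.mean(grads, axis=0)                                            # reduce on the way back

# Toy usage: 4 "CS-2 systems", least-squares gradient for a single linear layer.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 4))
batches = [(rng.normal(size=(16, 8)), rng.normal(size=(16, 4))) for _ in range(4)]
grad_fn = lambda w, b: 2 * b[0].T @ (b[0] @ w - b[1]) / len(b[0])
print(swarmx_step(w, batches, grad_fn).shape)  # (8, 4) -- same shape as the weights
```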

The Interconnects session, also on Day 2, includes talks from NVIDIA, Lightelligence, and Intel. An NVIDIA software architect, Omer Shabtai, will present a Resource-Fungible Network Processing ASIC, building on a vision presentation at NSDI 2022. The talk will discuss NIC programmability using match/action languages such as P4.
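The match/action abstraction that P4 captures is simple at its core: a table maps packet-header fields to actions that rewrite the header and pick an egress. The sketch below is plain Python of my own (with made-up table and field names), not P4 and certainly not NVIDIA's pipeline, just to show the shape of the model.

```python
from dataclasses import dataclass

@dataclass
class Header:
    dst_ip: str
    ttl: int
    egress_port: int = -1   # -1 means "drop"

def drop(h: Header) -> None:
    h.egress_port = -1

def forward(port: int):
    def action(h: Header) -> None:
        h.ttl -= 1            # rewrite a header field...
        h.egress_port = port  # ...and pick an output port
    return action

# An exact-match table on destination IP; unmatched packets are dropped.
ipv4_exact = {
    "10.0.0.1": forward(1),
    "10.0.0.2": forward(2),
}

def apply_table(h: Header) -> None:
    ipv4_exact.get(h.dst_ip, drop)(h)

pkt = Header(dst_ip="10.0.0.2", ttl=64)
apply_table(pkt)
print(pkt.egress_port, pkt.ttl)  # 2 63
```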

Maurice Steinman, VP of engineering at silicon-photonics startup Lightelligence, will disclose new details of the recently announced Hummingbird AI accelerator. This 3D chip stacks a compute die with 64 SIMD cores on top of a silicon-photonic die that provides an optical network-on-chip (NoC).
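One way to see why an optical NoC is appealing: on a conventional 2D electrical mesh, a corner-to-corner message crosses many routers, whereas an optical layer can, in principle, reach every core in a single hop. The numbers below are my own illustration with assumed parameters, not Lightelligence's figures.

```python
# Rough hop-count comparison; the 8x8 arrangement and single-hop optical reach
# are assumptions for illustration, not disclosed Hummingbird details.
mesh_dim = 8                                 # treat the 64 cores as an 8x8 grid
worst_case_mesh_hops = 2 * (mesh_dim - 1)    # corner-to-corner Manhattan distance
optical_hops = 1                             # if the photonic die reaches every core directly
print(worst_case_mesh_hops, optical_hops)    # 14 vs. 1
```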

Moving into the research realm, Joshua Fryman, a Fellow and system architect at Intel, will talk about a mesh-to-mesh photonic fabric developed under the DARPA HIVE program. The talk will reveal that Intel's PIUMA graph-computation ASIC implements 1TB/s of copackaged optics to create a glueless, low-latency interconnect between ASIC sockets.
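To put 1TB/s in rough perspective, here's a back-of-the-envelope calculation with an assumed (not Intel-provided) data size:

```python
# Back-of-the-envelope only; the 4 GiB partition size is an assumption for illustration.
bandwidth_bytes_per_s = 1e12                 # 1 TB/s of copackaged optical I/O per socket
partition_bytes = 4 * 2**30                  # a hypothetical 4 GiB slice of graph data
print(f"{partition_bytes / bandwidth_bytes_per_s * 1e3:.1f} ms")  # ~4.3 ms socket-to-socket
```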

There will be many more talks on AI/ML hardware, including the Day 1 keynote from Google and the Day 2 keynote from NVIDIA, but I'm excited that interconnects are back in the spotlight!
