
Showing posts with the label CPO

Hot Conferences Feature Cool Optics

Source: Hot Chips

The accelerated life cycles that AI is driving woke up a normally sleepy August. Held virtually, Hot Interconnects (HotI 2025) spanned three days with a mix of invited talks, sponsor talks, and tutorials. Some of the brief sponsor talks merely previewed larger disclosures at Hot Chips, which was held at Stanford University the following week. This year, Hot Chips' agenda included an Optical session that featured three startups plus Nvidia. It also included Networking and Machine Learning (ML) sessions with talks from leading vendors. Although Nvidia was the marquee name in Hot Chips' Optical session, Gilad Shainer's talk on co-packaged silicon photonics lacked any new technical details on the company's CPO switches. Instead, the company used the event to announce Spectrum-XGS, which extends its Spectrum-X Ethernet solution across data centers. Nvidia calls this "scale-across" networking because it primarily targets data center clusters, but it is...

Broadcom Pitches Ethernet for AI Scale Up

Tomahawk 6 is First to 102.4T

Through relentless execution, Broadcom has been first to market generation after generation in data-center switching. The company just announced sampling of Tomahawk 6 (TH6), its 102.4T Ethernet switch ASIC. This generation actually consists of two switch chips, TH6-200G with 512x200G SerDes and TH6-100G with 1,024x100G SerDes, both of which are sampling now. A version with fully co-packaged optics, TH6-Davisson, will follow on a to-be-announced schedule. Whereas Tomahawk 5 (TH5) is a monolithic 5nm chip, TH6 comprises a core die and separate chiplets for the two SerDes options, all of which use 3nm technology.

Source: Broadcom

For AI scale-out networks, TH6 enables a 128K-XPU network using only two switch tiers. Fewer tiers mean lower latency, simpler load balancing and congestion control, and fewer optics. The new chip is the first to handle 1.6T Ethernet ports, but it also handles up to 512x200GbE ports for maximum radix. Beyond sheer port density, TH...
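The 128K figure follows from the radix. As a back-of-the-envelope check (a minimal sketch, not Broadcom's sizing method), the snippet below assumes a standard non-blocking two-tier leaf-spine fabric in which each radix-512 leaf splits its ports evenly between XPU downlinks and spine uplinks; the function and variable names are illustrative.

```python
# Back-of-the-envelope sizing of a two-tier leaf-spine (folded Clos) fabric.
# Assumes a non-blocking design: each leaf dedicates half its ports to XPU
# downlinks and half to spine uplinks. Names here are illustrative only.

def max_endpoints_two_tier(radix: int) -> int:
    """Maximum endpoints a two-tier fabric of radix-`radix` switches can host."""
    downlinks_per_leaf = radix // 2          # half the leaf ports face the XPUs
    num_leaves = radix                       # each spine port connects to one leaf
    return downlinks_per_leaf * num_leaves   # equals radix**2 / 2

if __name__ == "__main__":
    # TH6-200G configured as 512 x 200GbE gives a switch radix of 512.
    print(max_endpoints_two_tier(512))       # 131072, i.e. the ~128K-XPU figure
```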

AI Unsurprisingly Dominates Hot Chips 2024

This year's edition of the annual Hot Chips conference represented the peak of the generative-AI hype cycle. Consistent with the theme, OpenAI's Trevor Cai made the bull case for AI compute in his keynote. At a conference known for technical disclosures, however, the presentations from merchant chip vendors were disappointing; despite a great lineup of talks, few new details emerged. Nvidia's Blackwell presentation mostly rehashed previously disclosed information. In a picture-is-worth-a-thousand-words moment, however, one slide included the photo of the GB200 NVL36 rack shown below.

GB200 NVL36 rack (Source: Nvidia)

Many customers prefer the NVL36 over the power-hungry NVL72 configuration, which requires a massive 120kW per rack. The key difference for our readers is that the NVLink switch trays shown in the middle of the rack have front-panel cages, whereas the "non-scalable" NVLink switch tray used in the NVL72 has only back-panel connectors for the NVLink spin...