Cisco Systems Unveils Router Chips to Link AI Data Centers
By Reuters | 08 Oct, 2025
To take advantage of available power sources, data centers are being built far apart from one another, and they must be linked so they can tackle intensive AI training as a single computing unit.
Cisco Systems launched on Wednesday a new networking chip designed to connect artificial intelligence data centers, with the cloud computing units of Microsoft and Alibaba signing on as customers.
The P200 chip, as Cisco calls it, will compete against rival offerings from Broadcom. It will sit at the heart of a new routing device, also rolled out on Wednesday, that is designed to link the sprawling data centers, spread over vast distances, that train AI systems.
Inside those data centers, companies such as Nvidia are connecting tens of thousands and eventually hundreds of thousands of powerful computing chips together to act as one brain to handle AI tasks.
The purpose of the new Cisco chip and router is to connect multiple data centers together to act as one massive computer.
"Now we're saying, 'the training job is so large, I need multiple data centers to connect together,'" Martin Lund, executive vice president of Cisco's common hardware group, told Reuters in an interview. "And they can be 1,000 miles apart."
The reason for those big distances is that data centers consume huge amounts of electricity, which has driven firms such as Oracle and OpenAI to Texas and Meta Platforms to Louisiana in search of gigawatts. AI firms are putting data centers "wherever you can get power," Lund said.
He did not disclose Cisco's investment in building the chip and router or sales expectations from them.
Cisco said the P200 consolidates what used to take 92 separate chips into one, and the resulting router uses 65% less power than comparable machines.
One of the key challenges is keeping data in sync across multiple data centers without losing any of it, which requires a technology called buffering that Cisco has worked on for decades.
“The increasing scale of the cloud and AI requires faster networks with more buffering to absorb bursts" of data, Dave Maltz, corporate vice president of Azure Networking at Microsoft, said in a statement. "We’re pleased to see the P200 providing innovation and more options in this space."
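The role of buffering that Maltz describes can be illustrated with a toy simulation (an illustration only, not related to the P200's actual design): a FIFO buffer in front of a fixed-rate link absorbs bursts of packets, and a deeper buffer drops fewer packets when traffic arrives faster than the link can drain it.

```python
from collections import deque

def simulate(bursts, buffer_size, drain_rate):
    """Toy model of a router buffer: each time step a burst of packets
    arrives, the buffer absorbs up to its capacity, and the outgoing
    link drains a fixed number of packets. Returns (delivered, dropped)."""
    buf = deque()
    delivered = dropped = 0
    for burst in bursts:
        for _ in range(burst):
            if len(buf) < buffer_size:
                buf.append(1)
            else:
                dropped += 1          # buffer full: packet is lost
        for _ in range(min(drain_rate, len(buf))):
            buf.popleft()
            delivered += 1            # packet sent on the link
    while buf:                        # drain whatever remains queued
        buf.popleft()
        delivered += 1
    return delivered, dropped

# Bursty arrivals: 10 packets at once, then idle steps; link drains 4/step.
bursts = [10, 0, 0, 10, 0, 0]
small = simulate(bursts, buffer_size=4, drain_rate=4)    # shallow buffer
large = simulate(bursts, buffer_size=16, drain_rate=4)   # deep buffer
print(small, large)  # → (8, 12) (20, 0)
```

The average arrival rate here is well below the link rate, yet the shallow buffer still drops packets because the traffic arrives in bursts; the deeper buffer rides out the bursts and loses nothing.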
(Reporting by Stephen Nellis in San Francisco; Editing by Muralikumar Anantharaman)