The Silent Infrastructure Behind AI

Why Silicon Photonics Is Becoming the Next Data Center Battleground
Artificial intelligence has become synonymous with raw computing power. Headlines focus on the massive GPUs produced by companies such as NVIDIA and AMD, the chips that train and run the world’s largest AI models. Yet beneath that layer of compute lies another critical challenge—one that is rapidly becoming the next bottleneck in the AI revolution.
Training modern AI models requires tens of thousands of processors working together in tightly coordinated clusters. These chips must constantly exchange enormous volumes of data across servers and racks. The larger the models grow, the more intense this data exchange becomes.
Traditional copper interconnects are increasingly unable to keep up. As bandwidth rises, copper links consume more power and generate more heat, making them inefficient for the massive scale of hyperscale AI data centers. To keep pace with the exponential growth of AI workloads, the industry is turning to a different medium: light.
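To get a feel for why energy per bit matters at this scale, here is a back-of-envelope sketch. All figures are illustrative assumptions chosen for round numbers, not measurements from any vendor or from this article:

```python
# Back-of-envelope interconnect power at data-center scale.
# Every figure below is an illustrative assumption.

def link_power_watts(gbps: float, pj_per_bit: float) -> float:
    """Power of one link: bandwidth (Gb/s) times energy per bit (pJ/bit)."""
    return gbps * 1e9 * pj_per_bit * 1e-12  # watts

links = 100_000   # assumed number of high-speed links in a large AI cluster
gbps = 800        # assumed per-link bandwidth (the emerging 800G era)

# Compare a few assumed energy-per-bit points, from power-hungry
# electrical/DSP-heavy links down to tighter optical interconnects.
for pj in (30.0, 15.0, 5.0):
    total_mw = links * link_power_watts(gbps, pj) / 1e6
    print(f"{pj:>5.1f} pJ/bit -> {total_mw:.1f} MW just to move bits")
```

At 800G per link, every picojoule per bit shaved off translates into hundreds of kilowatts across a large cluster, which is the core of the case for moving from electrons to photons.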
“The massive surge in demand is driven by AI. In silicon photonics we build all the optical components needed to meet this extreme, AI-driven demand.”
Dr. Marco Racanelli
President, Tower Semiconductor
Source: EE Times, Nov. 2024
The shift toward optical communication inside data centers is not entirely new. Fiber optics have long been used to transmit data across continents and between facilities. What is changing now is the scale and proximity at which optical technology must operate. Increasingly, photons are replacing electrons not just across long distances, but within racks, servers and eventually even inside the packages of computing chips themselves.
This transition is driven by a technology known as Silicon Photonics—a field that combines optical components with traditional semiconductor manufacturing. By guiding light through microscopic waveguides on silicon chips, silicon photonics allows data to travel faster and more efficiently than through electrical connections.
The market potential is enormous. According to analysts at Precedence Research, the global silicon photonics market is expected to grow from roughly $2.86 billion in 2025 to nearly $29 billion by 2034. That surge reflects the infrastructure demands of hyperscale cloud platforms and AI supercomputing clusters operated by companies such as Microsoft, Meta, Amazon and Google.
The Laser Problem
Despite its promise, silicon photonics has faced one persistent challenge: the laser.
Optical communication requires a light source, but silicon itself is not a good material for generating light. For years, this meant that lasers had to be manufactured separately and then precisely attached to photonics chips during assembly. The process added cost, complexity and manufacturing challenges, limiting scalability.
Many engineers pursued a solution: integrating alternative materials that can generate light directly on the silicon wafer. One of the most promising is indium phosphide (InP), a compound semiconductor widely used in high-performance lasers.
This is where a relatively young but technologically significant company enters the picture: OpenLight.
“A major challenge for silicon photonics has been the high cost of discrete lasers and the complexity of assembling them. By integrating InP materials directly on the wafer, we reduce both the cost and the time required for mass production.”
OpenLight Executive Team
Source: PR Newswire
OpenLight focuses on integrating InP-based lasers directly into silicon photonics chips. Instead of attaching separate laser components after fabrication, the company’s process embeds the light source into the chip itself. The result is a more compact, reliable and scalable architecture.
An Ecosystem Approach
OpenLight does not operate in isolation. The company emerged from technology developed by Juniper Networks and is closely tied to the manufacturing capabilities of Tower Semiconductor. In this partnership, OpenLight provides the photonics platform and design ecosystem, while Tower Semiconductor manufactures the chips in its advanced foundries.
This model mirrors the broader semiconductor industry, where design and manufacturing are often separated. It also allows a wide range of companies to build custom photonics solutions using a shared technology platform.
“Providing an open silicon photonics platform with integrated lasers will help customers innovate and enable the next generation of designs at scale.”
Dr. Marco Racanelli
SVP and GM Analog Business Unit, Tower Semiconductor
The approach is particularly attractive for networking companies seeking to build specialized optical engines without developing the underlying photonics technology from scratch.
From 800G to 1.6 Terabit
The timing of these developments is critical. Data center networking is currently transitioning to a new generation of optical bandwidth.
Over the past decade, link speeds have progressed in steady increments: from 100 gigabits per second (100G) to the 400G connections that dominate many modern hyperscale facilities. Today, however, the industry is entering the 800G era, with 1.6-terabit (1.6T) connections already appearing on technology roadmaps.
These bandwidth levels are essential for the massive GPU clusters used to train advanced AI systems. Thousands of processors must exchange model parameters and training data continuously, making network performance almost as important as compute power itself.
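A rough calculation shows why these bandwidth levels matter. The sketch below estimates the traffic generated by one gradient synchronization during distributed training, using a standard ring all-reduce traffic formula; the model size, cluster size, and per-GPU link speed are all assumed for illustration:

```python
# Rough per-step gradient traffic when training a very large model.
# Assumptions (illustrative only): 1-trillion-parameter model,
# fp16 gradients, 10,000 GPUs, ring all-reduce, one 800G link per GPU.

params = 1_000_000_000_000   # assumed model size: 1T parameters
bytes_per_value = 2          # fp16 gradients
gpus = 10_000                # assumed cluster size

grad_bytes = params * bytes_per_value
# Ring all-reduce: each GPU sends and receives roughly
# 2 * (n - 1) / n of the full gradient buffer per synchronization.
per_gpu_traffic = 2 * (gpus - 1) / gpus * grad_bytes

link_gbps = 800  # assumed: a single 800G link per GPU
seconds = per_gpu_traffic * 8 / (link_gbps * 1e9)
print(f"~{per_gpu_traffic / 1e12:.1f} TB moved per GPU per sync, "
      f"~{seconds:.1f} s on a single 800G link")
```

Even under these simplified assumptions, each synchronization moves terabytes per GPU, which is why training runs rely on many parallel links per accelerator and why the jump to 800G and 1.6T is not optional.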
During a May 2025 earnings call, Tower Semiconductor confirmed that the company was already in a production ramp for 1.6T photonics components, highlighting how quickly demand from AI infrastructure providers is accelerating.
AI Data Centers Redefine Networking
For hyperscale cloud providers, the shift toward optical networking is no longer just about speed. It is also about energy efficiency.
AI data centers consume enormous amounts of electricity, and networking infrastructure represents a growing share of that power usage. More efficient optical interconnects can significantly reduce energy consumption while enabling higher bandwidth.
“As AI models grow larger, networks must deliver consistent high speeds. 1.6T solutions are critical for demanding GPU interconnects where bandwidth and reliability are essential.”
Jason Barrette
VP of Sales and Operations, ENET
Source: March 2026 statement
This growing demand has turned optical interconnects into one of the fastest-expanding segments of the semiconductor industry.
Major chip companies including Broadcom, Marvell Technology and Intel are all investing heavily in photonics technologies aimed at the data center market.
The Next Frontier: Bringing Light Closer to the Chip
Even today’s advanced pluggable optical modules are only an intermediate step. Engineers are already working on the next architectural shift in data center networking.
Two concepts dominate current research and development efforts.
The first is Linear Drive Pluggable Optics (LPO), which removes the power-hungry digital signal processor (DSP) from the optical module and relies on the host switch or accelerator silicon to condition the signal instead. By simplifying the module's electronics, LPO designs can significantly lower both power consumption and latency.
The second is Co-Packaged Optics (CPO), an architecture that brings optical connections directly into the same package as the processor or networking chip. Instead of sitting at the edge of a server, the optical interface is placed only millimeters from the compute silicon.
Analysts expect conventional pluggable optics to remain dominant in the near term due to their flexibility and compatibility with existing infrastructure. However, CPO could become essential for the largest AI supercomputers where electrical interconnects simply cannot scale further.
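The power argument behind these architectures can be sketched with a crude per-port model. The component-level wattage figures below are assumptions chosen only to illustrate the relative savings of dropping the DSP (LPO) and shortening the electrical path (CPO), not published specifications:

```python
# Crude per-port power comparison of optical interconnect architectures.
# All per-component wattages are assumptions for illustration only.

# Rough watts per 800G port, split into: DSP/retimer, electrical
# trace drive, and the optics themselves (laser, modulator, receiver).
architectures = {
    "pluggable (DSP)":  (8.0, 3.0, 5.0),  # full DSP, long host trace
    "LPO (no DSP)":     (0.0, 3.0, 5.0),  # DSP removed, trace unchanged
    "CPO (in-package)": (0.0, 0.5, 5.0),  # optics millimeters from the die
}

baseline = sum(architectures["pluggable (DSP)"])
for name, parts in architectures.items():
    total = sum(parts)
    print(f"{name:<17} {total:5.1f} W/port  ({total / baseline:.0%} of pluggable)")
```

The pattern, not the exact numbers, is the point: LPO saves by deleting the DSP, while CPO additionally collapses the electrical distance, which is why CPO is the candidate for systems where electrical interconnects can no longer scale.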
A Quiet Enabler of the AI Era
For most observers of the AI boom, the spotlight remains firmly on the processors themselves. Yet the future of AI infrastructure depends on far more than just computing power.
Without the ability to move vast amounts of data between chips efficiently, even the most advanced GPUs would quickly become constrained by network bottlenecks.
Companies like OpenLight are therefore playing a quieter—but equally crucial—role in the evolution of AI infrastructure. By enabling lasers to be integrated directly into silicon photonics chips and manufactured at scale, they are helping build the optical backbone required for the next generation of AI systems.
In that sense, the future of artificial intelligence may not only be shaped by faster processors, but also by something far less visible: the photons racing through microscopic waveguides inside tomorrow’s data centers.
Artificial intelligence is not just about software and algorithms. It also depends on a vast physical infrastructure of chips, photonics and data centers.
Explore the series: https://altairmedia.us/the-ai-infrastructure-stack/
Photo credit
AI-generated illustration (OpenAI)
Caption
Conceptual rendering of silicon photonics technology, in which integrated lasers send optical signals through microscopic waveguides on a chip—an emerging solution for high-bandwidth AI data center networks.
This shift—from electrons to photons—is part of a deeper transformation in how we understand intelligence: not just as software, but as something rooted in physics, energy and infrastructure.
I explore this idea further in my ebook The Age of Light — Meaning, Machines and the Physics of Intelligence, about how photonics and physical computing architectures are reshaping AI and global power.
Available worldwide on Amazon (Kindle):
https://www.amazon.com/dp/B0GMXLX56T
