The AI race is laser-focused on solving for power and compute. The more advanced AI models become, the greater their appetite for energy and processing capability, and much of that work happens inside the data center. By the end of the year, the top five hyperscalers are expected to have poured a combined $600 billion into AI-ready infrastructure. In the U.S. alone, there are now more than 3,000 operational facilities, with another 1,500 in development. But like a city without roads, no amount of infrastructure matters if the networks connecting it can’t keep up.
The real AI bottleneck is between data centers, not inside them
AI depends on data exchange between training clusters, cloud platforms, and increasingly, edge environments. As inference moves closer to users and machine-to-machine traffic surges, the volume of data flowing between systems is growing at a rate traditional networks weren’t designed to handle. The bottleneck isn’t inside the data centers; it’s between them, and the pressure is building.
This is already showing up in the market. AI chips, now among the most valuable assets in the modern world, are depreciating rapidly. In some cases, they lose up to 90% of their value within 48 months, as the pace of innovation overtakes them. That creates pressure to extract value quickly, and that value depends entirely on data movement. When the underlying network can’t keep up, performance drops, latency creeps in, and costly compute resources are left waiting for data instead of processing it.
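To put that depreciation figure in perspective, a quick sketch shows the monthly erosion it implies. The 90%-over-48-months number comes from the text above; the constant-rate exponential-decay model is an assumption for illustration, not a claim about how any particular chip actually loses value.

```python
# Illustrative sketch: the implied monthly depreciation if an AI accelerator
# loses 90% of its value over 48 months (figures from the article; the
# constant-rate decay model is an assumption for illustration).

def implied_monthly_retention(total_loss: float, months: int) -> float:
    """Return the constant monthly retention factor r such that
    r ** months == 1 - total_loss."""
    return (1 - total_loss) ** (1 / months)

r = implied_monthly_retention(0.90, 48)
print(f"monthly retention factor: {r:.4f}")          # ~0.9532
print(f"implied value lost per month: {(1 - r):.1%}")  # ~4.7%

# Under this model, roughly 44% of the asset's value is gone
# after just the first year:
print(f"value remaining after 12 months: {r ** 12:.0%}")
```

Under that simple model, nearly half the asset’s value evaporates in the first year, which is why every idle hour of compute waiting on the network is so expensive.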
Fiber alone won’t close the gap
The challenge isn’t just a lack of infrastructure, but the speed at which it can be extended. Fiber remains the backbone of global connectivity and will continue to play a vital role, but it suited a different era of growth, one in which connectivity demand could be forecast, planned, and built out on timetables spanning years. AI demand is anything but predictable: it scales quickly, shifts without warning, and puts pressure on parts of the network that weren’t designed to handle it.
As of 2025, the U.S. has deployed more than 159 million miles of fiber, yet according to the Fiber Broadband Association, an additional 213 million miles are needed to support the performance and scalability requirements of AI-driven workloads. Demand is moving faster than the infrastructure designed to support it, and simply building more of the same isn’t enough to close the gap. Closing it will take more than investment and looser permitting requirements; it needs another layer of connectivity that can bootstrap sites while data centers are under construction and complement an already overburdened system.
The future of connectivity won’t be buried underground.
Lighting a new path forward
Now, a different approach to connectivity is starting to gain traction. Wireless optical communication (WOC) uses narrow, invisible beams of light to transmit data between fixed points, creating direct, high-capacity links without the need for physical cables or licensed spectrum. Instead of broadcasting broad, scattered signals, these systems establish precise, point-to-point connections, capable of carrying large volumes of data over long distances at fiber-like speeds with minimal latency. Rather than building networks solely through trenching and physical expansion, capacity can be added quickly and precisely – in reconfigurable ways – linking data centers, extending networks into hard-to-reach areas, or increasing density and resilience around concentrated demand. Deployments that would traditionally take months or years can be completed in hours.
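The “minimal latency” claim has a physical basis worth making concrete: light through open air travels faster than light through glass, and a line-of-sight beam takes the direct path while buried fiber follows roads and rights-of-way. The sketch below uses standard rough values (silica fiber group index of about 1.47 and a 1.3× route-length detour factor) as assumptions for illustration; neither figure comes from the article.

```python
# Back-of-the-envelope comparison of one-way propagation delay for a
# point-to-point optical link: free-space (WOC) vs. buried silica fiber.
# The refractive indices and the 1.3x route-detour factor are standard
# rough assumptions for illustration, not figures from the article.

C = 299_792_458      # speed of light in vacuum, m/s
N_AIR = 1.0003       # refractive index of air (approx.)
N_FIBER = 1.468      # group index of a silica fiber core (approx.)
ROUTE_FACTOR = 1.3   # buried fiber rarely runs in a straight line

def one_way_delay_us(distance_km: float, index: float, detour: float = 1.0) -> float:
    """Propagation delay in microseconds over distance_km at speed c/index,
    with the physical path stretched by the detour factor."""
    path_m = distance_km * 1000 * detour
    return path_m / (C / index) * 1e6

for km in (10, 50):
    woc = one_way_delay_us(km, N_AIR)
    fib = one_way_delay_us(km, N_FIBER, ROUTE_FACTOR)
    print(f"{km:>3} km link: WOC ~{woc:.1f} us, fiber ~{fib:.1f} us")
```

On a 10 km link, the free-space path comes out at roughly 33 µs one way versus roughly 64 µs for the detoured fiber route under these assumptions: not a revolution for a single hop, but a meaningful margin when multiplied across the constant east-west traffic between AI clusters.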
This changes the economics as much as the technology. AI is pushing infrastructure into a new phase where connectivity is the make-or-break factor. The industry can continue to invest in compute and power at scale, but without also addressing how data moves between systems, those investments alone won’t be enough to “win” AI.