
Intel’s Integrated Photonics Solutions Group attended the 2024 Optical Fiber Communication Conference, where it debuted its Optical Compute Interconnect (OCI). The fully integrated OCI chiplet utilized an Intel CPU to demonstrate the new technology’s ability to transmit 32 Gbps of data on each of 64 channels. Intel’s Integrated Photonics Solutions Group says the OCI can span up to 100 meters of optical fiber while drawing less power than existing solutions. OCI is being developed for AI applications where the scalability of CPU/GPU clusters, data transmission rates, and low power consumption are priorities.
“The ever-increasing movement of data from server to server is straining the capabilities of today’s data center infrastructure, and current solutions are rapidly approaching the practical limits of electrical I/O performance. However, Intel’s groundbreaking achievement empowers customers to seamlessly integrate co-packaged silicon photonics interconnect solutions into next-generation compute systems. Our OCI chiplet boosts bandwidth, reduces power consumption and increases reach, enabling ML workload acceleration that promises to revolutionize high-performance AI infrastructure.”
–Thomas Liljeberg, senior director, Product Management and Strategy, Integrated Photonics Solutions Group
Press Release Excerpts:
“AI-based applications are increasingly deployed globally, and recent developments in large language models (LLM) and generative AI are accelerating that trend. Larger and more efficient machine learning (ML) models will play a key role in addressing the emerging requirements of AI acceleration workloads. The need to scale future computing platforms for AI is driving exponential growth in I/O bandwidth and longer reach to support larger processing unit (CPU/GPU/IPU) clusters and architectures with more efficient resource utilization, such as xPU disaggregation and memory pooling.”
“Electrical I/O (i.e., copper trace connectivity) supports high bandwidth density and low power, but only offers short reaches of about one meter or less. Pluggable optical transceiver modules used in data centers and early AI clusters can increase reach at cost and power levels that are not sustainable with the scaling requirements of AI workloads. A co-packaged xPU optical I/O solution can support higher bandwidths with improved power efficiency, low latency and longer reach – exactly what AI/ML infrastructure scaling requires.”


As explained by Intel’s Integrated Photonics Solutions Group, the first OCI implementation achieved a staggering 4 terabits per second (Tbps) bidirectional data transfer rate and was compatible with PCIe Gen5.
Press Release Excerpt:
“The live optical link demonstration showcases a transmitter (Tx) and receiver (Rx) connection between two CPU platforms over a single-mode fiber (SMF) patch cord. The CPUs generated and measured the optical Bit Error Rate (BER), and the demo showcases the Tx optical spectrum with 8 wavelengths at 200 gigahertz (GHz) spacing on a single fiber, along with a 32 Gbps Tx eye diagram illustrating strong signal quality.”
OCI’s next version supports 64 channels, each capable of up to 32 Gbps of data transmission in each direction, over fiber runs of up to 100 meters. Intel states that, depending on application design, practical transmission lengths may be limited to tens of meters. Intel adds that OCI operates at roughly a third of the power needed for pluggable optical transceiver modules.
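As a back-of-the-envelope check (mine, not Intel’s), the quoted channel counts and per-channel rates line up with the 4 Tbps bidirectional figure cited for the first implementation:

```python
# Sanity-check the OCI link figures quoted above:
# 8 fiber pairs, each carrying 8 DWDM wavelengths, at 32 Gbps per channel.
fiber_pairs = 8
wavelengths_per_fiber = 8
per_channel_gbps = 32

channels = fiber_pairs * wavelengths_per_fiber          # 64 channels
per_direction_tbps = channels * per_channel_gbps / 1000  # 2.048 Tbps each way
bidirectional_tbps = 2 * per_direction_tbps              # ~4 Tbps total

print(channels, per_direction_tbps, bidirectional_tbps)
```

The 64 channels thus come from the fiber-pair and wavelength counts in Intel’s description, and the bidirectional total matches the roughly 4 Tbps the first demo achieved.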
Press Release Excerpt:
“The current chiplet supports 64 channels of 32 Gbps data in each direction up to 100 meters (though practical applications may be limited to tens of meters due to time-of-flight latency), utilizing eight fiber pairs, each carrying eight dense wavelength division multiplexing (DWDM) wavelengths. The co-packaged solution is also remarkably energy efficient, consuming only 5 pico-Joules (pJ) per bit compared to pluggable optical transceiver modules at about 15 pJ/bit. This level of hyper-efficiency is critical for data centers and high-performance computing environments and could help address AI’s unsustainable power requirements.”
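The quoted pJ/bit figures translate into concrete wattage, and the tens-of-meters caveat follows from simple time-of-flight arithmetic. A rough sketch (the 1.47 group index for single-mode fiber is my assumption, not an Intel figure):

```python
# Power implied by the quoted energy-per-bit figures at full link rate.
aggregate_bps = 64 * 32e9 * 2            # 64 channels x 32 Gbps, both directions
oci_pj_per_bit = 5.0                     # quoted for the OCI chiplet
pluggable_pj_per_bit = 15.0              # quoted for pluggable transceivers

oci_watts = aggregate_bps * oci_pj_per_bit * 1e-12            # ~20.5 W
pluggable_watts = aggregate_bps * pluggable_pj_per_bit * 1e-12  # ~61.4 W

# One-way time of flight over 100 m of fiber (assumed group index ~1.47).
c = 299_792_458.0                        # speed of light in vacuum, m/s
tof_ns = 100 * 1.47 / c * 1e9            # roughly half a microsecond

print(oci_watts, pluggable_watts, tof_ns)
```

At full rate the 3x energy advantage is the difference between roughly 20 W and 60 W per link, and each 100 m hop adds on the order of 500 ns of latency each way, which is why latency-sensitive designs may stay within tens of meters.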
Intel’s Integrated Photonics Solutions Group further explains that its hybrid laser-on-wafer technology, combined with direct integration, enables higher yields at lower cost. Intel is already developing the next generation of OCI, which will use 200G/lane PICs to support emerging 800 Gbps and 1.6 Tbps applications.
Press Release Excerpts:
“This unique approach enables Intel to deliver superior performance while maintaining efficiency. Intel’s robust, high-volume platform boasts shipping over 8 million PICs with over 32 million integrated on-chip lasers, showing a laser failures-in-time (FIT) rate of less than 0.1, a widely utilized measure of reliability that represents failure rates and how many failures occur.”
“These PICs were packaged in pluggable transceiver modules, deployed in large data center networks at major hyperscale cloud service providers for 100, 200, and 400 Gbps applications. Next generation, 200G/lane PICs to support emerging 800 Gbps and 1.6 Tbps applications are under development.”
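To put the quoted reliability figure in perspective: FIT (failures in time) counts expected failures per billion device-hours. A quick illustration of what a FIT rate of 0.1 would mean across Intel’s stated fleet of on-chip lasers (my arithmetic, not a figure from the press release):

```python
# What a FIT rate of 0.1 implies for 32 million deployed lasers.
# FIT = expected failures per 1e9 cumulative device-hours.
fit_rate = 0.1
lasers = 32_000_000
hours_per_year = 24 * 365

device_hours_per_year = lasers * hours_per_year               # ~2.8e11
expected_failures_per_year = device_hours_per_year * fit_rate / 1e9

print(expected_failures_per_year)
```

In other words, even with tens of millions of lasers running continuously, a sub-0.1 FIT rate corresponds to only a few dozen laser failures per year across the entire fleet.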