Leveraging silicon photonics for scalable and sustainable AI hardware

Researchers have developed a hardware platform for AI accelerators based on photonic integrated circuits on a silicon chip. Credit: Bassem Tossoun from IEEE JSTQE

The emergence of AI has profoundly transformed numerous industries. Driven by deep learning and big data, AI models require significant processing power for training. While existing AI infrastructure relies on graphics processing units (GPUs), the substantial processing demands and energy costs of operating them remain key challenges. A more efficient and sustainable AI infrastructure would pave the way for future AI development.

A recent study published in the IEEE Journal of Selected Topics in Quantum Electronics demonstrates a novel AI acceleration platform based on photonic integrated circuits (PICs), which offer superior scalability and energy efficiency compared to conventional GPU-based architectures.

The study, led by Dr. Bassem Tossoun, a Senior Research Scientist at Hewlett Packard Labs, demonstrates how PICs leveraging III-V compound semiconductors can efficiently execute AI workloads. Unlike traditional AI hardware that runs deep neural networks (DNNs) on electronic circuits, photonic AI accelerators utilize optical neural networks (ONNs), which operate at the speed of light with minimal energy loss.
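To illustrate the idea (this is a conceptual sketch, not the design in the paper), an ONN layer encodes a weight matrix in a mesh of interferometers so that the matrix-vector product is performed as light propagates through the chip. The short Python example below emulates that numerically; the matrix sizes and values are hypothetical.

```python
import numpy as np

# Conceptual sketch of a single optical neural network (ONN) layer.
# A weight matrix W is factored as W = U * diag(S) * Vt (an SVD); the unitary
# factors U and Vt map onto meshes of Mach-Zehnder interferometers, and the
# diagonal S onto per-channel attenuators/amplifiers. Light propagating
# through those meshes performs the matrix-vector product passively.

rng = np.random.default_rng(seed=0)
W = rng.normal(size=(4, 4))      # hypothetical layer weights
x = rng.normal(size=4)           # input encoded as optical field amplitudes

U, S, Vt = np.linalg.svd(W)      # U, Vt -> interferometer meshes; S -> gain stage

y_photonic = U @ (np.diag(S) @ (Vt @ x))   # what the photonic mesh would compute
y_reference = W @ x                        # ordinary electronic matrix-vector product

assert np.allclose(y_photonic, y_reference)
print(y_photonic)
```

The point of the sketch is that the multiply-accumulate work, which dominates DNN inference energy on electronic hardware, is carried out passively by the optics.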

“While silicon photonics are easy to manufacture, they are difficult to scale for complex integrated circuits. Our device platform can be used as the building blocks for photonic accelerators with far greater energy efficiency and scalability than the current state-of-the-art,” explains Dr. Tossoun.

The team used a heterogeneous integration approach to fabricate the hardware, combining silicon photonics with III-V compound semiconductors so that lasers and optical amplifiers could be integrated directly on-chip, reducing optical losses and improving scalability.

III-V semiconductors facilitate the creation of PICs with greater density and complexity. PICs utilizing these semiconductors can run all operations required for supporting neural networks, making them prime candidates for next-generation AI accelerator hardware.

Photographs of a wafer (a) after III/V material has been transferred to silicon and (b) after the completion of all fabrication processes. Credit: IEEE Journal of Selected Topics in Quantum Electronics (2025). DOI: 10.1109/JSTQE.2025.3527904

The fabrication started with silicon-on-insulator (SOI) wafers that have a 400-nm-thick silicon layer. Lithography and dry etching were followed by doping for metal-oxide-semiconductor capacitor (MOSCAP) devices and avalanche photodiodes (APDs).

Next, selective growth of silicon and germanium was performed to form absorption, charge, and multiplication layers of the APD. III-V compound semiconductors (such as InP or GaAs) were then integrated onto the silicon platform using die-to-wafer bonding. A thin gate oxide layer (Al₂O₃ or HfO₂) was added to improve device efficiency, and finally a thick dielectric layer was deposited for encapsulation and thermal stability.
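For readers who prefer the process flow at a glance, the steps described above are restated below as a simple ordered structure. This is a reading aid only; the descriptions paraphrase the article text, and any details beyond it are omitted.

```python
# The fabrication flow described above, restated as a simple ordered structure.
# Descriptions paraphrase the article text; no additional process details are implied.
process_flow = [
    ("substrate",     "silicon-on-insulator (SOI) wafer with a 400 nm silicon layer"),
    ("patterning",    "lithography and dry etching"),
    ("doping",        "implants for MOSCAP devices and avalanche photodiodes (APDs)"),
    ("epitaxy",       "selective Si/Ge growth for APD absorption, charge, and multiplication layers"),
    ("bonding",       "die-to-wafer bonding of III-V material (e.g., InP or GaAs)"),
    ("gate oxide",    "thin Al2O3 or HfO2 layer to improve device efficiency"),
    ("encapsulation", "thick dielectric layer for encapsulation and thermal stability"),
]

for step, description in process_flow:
    print(f"{step:>14}: {description}")
```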

“The heterogeneous III/V-on-SOI platform provides all essential components required to develop photonic and optoelectronic computing architectures for AI/ML acceleration. This is particularly relevant for analog ML photonic accelerators, which use continuous analog values for data representation,” Dr. Tossoun notes.

This photonic platform can achieve wafer-scale integration of all the devices required to build an optical neural network on a single photonic chip, including active devices such as on-chip lasers and amplifiers, high-speed photodetectors, energy-efficient modulators, and nonvolatile phase shifters. This enables tensorized optical neural network (TONN) accelerators with a footprint-energy efficiency 2.9 × 10² times greater than other photonic platforms and 1.4 × 10² times greater than the most advanced digital electronics.
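To put those multipliers in perspective, the snippet below simply applies them to a placeholder baseline; only the factors 2.9 × 10² and 1.4 × 10² come from the study, and the baseline figure of merit is hypothetical.

```python
# Illustrative arithmetic only: the two multipliers are from the study;
# the baseline figure of merit is a placeholder, not a measured value.
baseline_digital = 1.0                    # hypothetical footprint-energy efficiency of digital electronics
this_platform = baseline_digital * 1.4e2  # reported: 1.4 x 10^2 times the best digital electronics
other_photonics = this_platform / 2.9e2   # implied: 2.9 x 10^2 times below this platform

print(f"this platform vs. digital electronics:      {this_platform / baseline_digital:.0f}x")
print(f"this platform vs. other photonic platforms: {this_platform / other_photonics:.0f}x")
```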

This marks a breakthrough for AI/ML acceleration, reducing energy costs, improving computational efficiency, and enabling future AI-driven applications across fields. Going forward, the technology will enable data centers to accommodate more AI workloads and help solve several optimization problems.

By addressing today's computational and energy challenges, the platform paves the way for robust and sustainable AI accelerator hardware.

More information:
Bassem Tossoun et al, Large-Scale Integrated Photonic Device Platform for Energy-Efficient AI/ML Accelerators, IEEE Journal of Selected Topics in Quantum Electronics (2025). DOI: 10.1109/JSTQE.2025.3527904

Provided by
Institute of Electrical and Electronics Engineers

Citation:
Leveraging silicon photonics for scalable and sustainable AI hardware (2025, April 10)
retrieved 10 April 2025
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.



