Laser light computing could significantly reduce AI energy consumption. (Credit: Summit Art Creations on Shutterstock)

The future of computing has arrived in a flash, literally.

In A Nutshell

  • Researchers created a computer that performs complex AI calculations by sending laser light through optical elements once, completing in nanoseconds what traditional chips need multiple steps to accomplish.
  • The system achieved over 94% accuracy running actual neural networks designed for GPUs, demonstrating it can handle real-world AI tasks without modification.
  • By encoding data into light and using physics to do the math, the technology could dramatically reduce the energy consumption and data movement that limits today’s AI hardware.
  • While still a laboratory prototype, the approach shows theoretical advantages of multiple orders of magnitude over current optical computing methods.

Researchers have built a computer that performs complex AI calculations in a single pass of light, completing what today’s fastest AI chips need multiple steps to accomplish. The breakthrough promises substantial gains in parallelism and energy efficiency for AI computations.

The system, called parallel optical matrix-matrix multiplication, or POMMM, performs complex mathematical operations by encoding data into laser beams and letting physics do the work. Published in Nature Photonics, the technology executes an entire matrix multiplication (the core calculation in AI neural networks) through a single propagation of coherent light. No waiting for sequential processing. Just light passing through optical elements, minimizing data movement during the core computation.

Why Light Beats Electronics for AI Calculations

Traditional computer chips process AI calculations like an assembly line. They fetch numbers from memory, multiply them, add up the results, then store everything back in memory. Repeat millions of times. Each step burns time and energy.
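For the curious, that assembly line is the familiar triple loop below, a minimal Python sketch of how a conventional processor grinds through a matrix multiply one element at a time:

```python
# A conventional matrix multiply: every entry of the result needs its own
# chain of memory fetches, multiplies, adds, and stores, repeated in turn.
def matmul_sequential(A, B):
    rows, inner, cols = len(A), len(B), len(B[0])
    C = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            for k in range(inner):
                C[i][j] += A[i][k] * B[k][j]  # fetch, multiply, add, store
    return C
```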

The research team from Shanghai Jiao Tong University, Aalto University and the Chinese Academy of Sciences took a different approach. POMMM collapses that entire sequence into a single instant.

Their system encodes one set of numbers into a laser beam’s brightness and position, adds special patterns to organize the data, then uses shaped lenses to let the light waves naturally combine and separate the calculations. Everything happens at once as the light passes through.
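A toy numerical analogue of that one-pass idea, not the authors' actual optical model, can be written in a few lines of NumPy: one matrix rides on the beam, the other acts as the modulation pattern, and summing along one axis plays the role of the lens focusing the light:

```python
import numpy as np

# Toy analogue of a one-pass optical multiply (not the authors' model):
# element-wise modulation of a light field, then a lens summing one axis.
A = np.random.rand(4, 6)                 # data encoded in beam amplitude
B = np.random.rand(6, 3)                 # pattern applied by the modulator

field = A[:, :, None] * B[None, :, :]    # modulation happens all at once
C = field.sum(axis=1)                    # "lens" focusing performs the sums

assert np.allclose(C, A @ B)             # matches the electronic answer
```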

When they tested it against conventional computing, the results matched closely across calculations of different sizes. The math happened during a single pass of light through the system.

While humans and classical computers must perform tensor operations step by step, light can do them all at once. (Credit: Photonics group / Aalto University)

Optical Hardware Runs Real Neural Networks

The real test came when the team ran actual AI programs designed for graphics chips. Their prototype correctly identified handwritten digits 94% of the time and recognized clothing items 84% of the time. These weren’t simplified demos. The researchers took neural networks trained on regular computer hardware (GPUs) and ran them directly on the light-based system.
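Conceptually, dropping such a network onto the optics amounts to swapping out the multiply routine while keeping the GPU-trained weights untouched. The sketch below is illustrative: optical_matmul is a hypothetical stand-in for the hardware call, and the noise term is our assumption, not a measured figure:

```python
import numpy as np

def optical_matmul(x, W, sigma=0.01):
    """Hypothetical stand-in for the optical hardware: the same product
    the photonics computes, plus an assumed dash of readout noise."""
    y = x @ W
    return y + sigma * np.random.randn(*y.shape)

# GPU-trained weights are reused unchanged; only the multiply moves to
# the optics. A two-layer classifier sketch:
def classify(x, W1, b1, W2, b2):
    h = np.maximum(optical_matmul(x, W1) + b1, 0.0)      # ReLU hidden layer
    return np.argmax(optical_matmul(h, W2) + b2, axis=-1)
```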

The trick relies on wave physics that scientists have understood for over a century but never combined quite this way. Light waves have a useful property: you can shift them around in space without changing their essential character. POMMM exploits this by encoding different chunks of data with different wave patterns, then using the natural behavior of light to sort everything into the right places where a camera captures the answer.
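That "useful property" is, in essence, the Fourier shift theorem: tilt a wavefront with a linear phase ramp and its focal spot moves to a new position without being distorted. A lens performs the Fourier transform optically; the snippet below reproduces the sorting numerically (the carrier frequencies are illustrative):

```python
import numpy as np

# Fourier shift theorem in miniature: a linear phase ramp (a tilted
# wavefront) relocates a signal's focal spot without distorting it.
N = 64
x = np.arange(N)
envelope = np.exp(-((x - N / 2) ** 2) / 18.0)     # one "chunk" of data

for carrier in (0, 8, 16):                        # illustrative frequencies
    ramp = np.exp(2j * np.pi * carrier * x / N)   # the added wave pattern
    spot = np.argmax(np.abs(np.fft.fft(envelope * ramp)))
    print(f"carrier {carrier:2d} -> lands at position {spot}")
```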

Inside the Speed-of-Light Computer

The experimental prototype uses spatial light modulators to encode input matrices onto a 532-nanometer laser beam, cylindrical lens assemblies to perform the parallel optical transforms and a high-resolution quantitative CMOS camera to record results. The core calculation happens during a single pass of light through the optical elements. The speed of the modulators and camera determines how fast the system can run.

Taking this further, the team also showed they could use multiple colors of laser light at once. By putting different parts of a calculation on different laser wavelengths (540 and 550 nanometers, which are slightly different shades of green), they processed even more complex data in parallel. This points toward handling the multidimensional data that modern AI systems regularly work with.
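Under the assumption that each color carries one independent slice of a batched product (the sizes and wavelength labels below are illustrative), the multiplexing idea looks like this; the loop runs serially here but simultaneously in the optics:

```python
import numpy as np

# Wavelength multiplexing sketch (illustrative sizes and labels): each
# color carries an independent slice of a batched matrix product.
rng = np.random.default_rng(0)
slices = {540: (rng.random((8, 8)), rng.random((8, 8))),   # nm
          550: (rng.random((8, 8)), rng.random((8, 8)))}

results = {wl: A @ B for wl, (A, B) in slices.items()}     # one pass each
tensor = np.stack(list(results.values()))    # a 3-D tensor of outputs
```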

Energy Efficiency and Future Potential

According to the team’s calculations, purpose-built versions of this technology could vastly outperform current optical computing methods in both speed and energy use. The key is that it needs only passive optical elements (think lenses and mirrors) to do the math, once you’ve loaded the data in.

Today’s AI chips face a major problem: they spend enormous amounts of time and energy moving data back and forth between the processor and memory.

“Imagine you’re a customs officer who must inspect every parcel through multiple machines with different functions and then sort them into the right bins,” says lead author Dr. Yufeng Zhang, from the Photonics Group at Aalto University’s Department of Electronics and Nanoengineering, in a statement. “Normally, you’d process each parcel one by one. Our optical computing method merges all parcels and all machines together — we create multiple ‘optical hooks’ that connect each input to its correct output. With just one operation, one pass of light, all inspections and sorting happen instantly and in parallel.”

The challenges aren’t trivial. Building deep AI networks would require stacking multiple optical layers together, and everything needs precise alignment. The researchers found that training AI models to expect the specific quirks of the optical system helps compensate for small imperfections, but building reliable hardware still takes careful engineering.
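One standard way to realize that compensation, a general hardware-aware training technique rather than necessarily the authors' exact procedure, is to fold a model of the optics' quirks into the forward pass during training:

```python
import numpy as np

def hardware_aware_matmul(x, W, gain_error=0.02, noise=0.01,
                          rng=np.random.default_rng()):
    """Training-time forward pass that mimics assumed optical quirks:
    small per-element gain errors plus additive readout noise. Weights
    trained through this model learn to tolerate the real system."""
    gains = 1.0 + gain_error * rng.standard_normal(W.shape)
    y = x @ (W * gains)
    return y + noise * rng.standard_normal(y.shape)
```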

Still, the approach works with multiple wavelengths and could scale up significantly. In computer simulations, the team successfully tested calculations with over two million individual operations, well beyond what the physical prototype currently handles.

This is early-stage lab research, not a product you can buy. But it demonstrates a fundamentally different way to do the calculations that power modern AI. As artificial intelligence demands grow, approaches like this could offer a path forward that doesn’t just make things incrementally faster. It reimagines how the computing happens in the first place.

Paper Summary

Limitations

The study notes several limitations in the current POMMM implementation. Experimental accuracy depends on factors including spectral leakage from discrete periodicity effects, aperture limitations of optical components and diffraction constraints. Cascading multiple POMMM units for deep neural networks introduces engineering complexity not present in single-layer demonstrations. The paradigm's requirement for additional phase modulation to enable real-valued operations increases deployment complexity compared to vision-focused diffractive computing approaches. Physical prototypes require precise component alignment and calibration to maintain accuracy across repeated operations. The research used relatively small matrix dimensions in physical experiments (up to 50×50), though simulations explored larger scales.

Funding and Disclosures

The work was supported by research agencies in China and Finland, including the National Key Research and Development Program of China, Natural Science Foundation of China, Research Council of Finland, and Shanghai Science and Technology programs. The authors declared no competing interests. Open Access funding was provided by Aalto University.

Publication Details

Yufeng Zhang, Xiaobing Liu, Chenguang Yang, Jinlong Xiang, Hao Yan, Tianjiao Fu, Kaizhi Wang, Yikai Su, Zhipei Sun and Xuhan Guo. "Direct tensor processing with coherent light." Nature Photonics (November 14, 2025). DOI: 10.1038/s41566-025-01799-7. Affiliations include School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University; Department of Electronics and Nanoengineering, Aalto University, Finland; State Key Laboratory of Advanced Optical Communication Systems and Networks, Shanghai Jiao Tong University; Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences; University of Chinese Academy of Sciences; and Yiwu Zhiyuan Research Center of Electronic Technology.
