Optical Computing Core Prototype Can Accelerate Neural Networks Used in AI

Artificial intelligence and machine learning are already an integral part of our everyday lives online. For example, search engines such as Google use intelligent ranking algorithms and video streaming services such as Netflix use machine learning to personalize movie recommendations.

As the demands for AI online continue to grow, so does the need to speed up AI performance and find ways to reduce its energy consumption.

Now a University of Washington-led team has come up with a system that could help: an optical computing core prototype that uses phase-change material.

This system is fast, energy efficient and capable of accelerating the neural networks used in AI and machine learning. The technology is also scalable and directly applicable to cloud computing.

The team published these findings Jan. 4 in Nature Communications.

"The hardware we developed is optimized to run algorithms of an artificial neural network, which is really a backbone algorithm for AI and machine learning," said senior author Mo Li, a UW associate professor of both electrical and computer engineering and physics. "This research advance will make AI centers and cloud computing more energy efficient and run much faster."

The team is among the first in the world to use phase-change material in optical computing to enable image recognition by an artificial neural network. Recognizing an image in a photo is something that is easy for humans to do, but it is computationally demanding for AI.

Because image recognition is computation-heavy, it is considered a benchmark test of a neural network's computing speed and precision. The team demonstrated that their optical computing core, running an artificial neural network, could easily pass this test.
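For context, the sketch below shows what such an image-recognition benchmark typically involves: a small fully connected neural network is trained to classify handwritten digits (MNIST) and its accuracy is then measured. The framework (PyTorch), dataset, and architecture here are illustrative assumptions, not the hardware, model, or task reported in the UW team's paper; the dense matrix multiplications inside the network's layers are the kind of operation an optical accelerator aims to speed up.

```python
# Minimal sketch of an image-recognition benchmark for a neural network.
# Illustrative only: PyTorch, MNIST, and this architecture are assumptions,
# not details from the UW optical computing paper.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Load MNIST and flatten each 28x28 image into a 784-element vector.
transform = transforms.Compose(
    [transforms.ToTensor(), transforms.Lambda(lambda x: x.view(-1))]
)
train_data = datasets.MNIST(root="data", train=True, download=True, transform=transform)
test_data = datasets.MNIST(root="data", train=False, download=True, transform=transform)
train_loader = DataLoader(train_data, batch_size=128, shuffle=True)
test_loader = DataLoader(test_data, batch_size=256)

# A small two-layer network: the dense matrix-vector products inside
# nn.Linear are the workload an optical accelerator targets.
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Train briefly, then report test accuracy -- the "precision" half of the benchmark.
for epoch in range(2):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

correct = 0
with torch.no_grad():
    for images, labels in test_loader:
        correct += (model(images).argmax(dim=1) == labels).sum().item()
print(f"Test accuracy: {correct / len(test_data):.3f}")
```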

"Optical computing first appeared as a concept in the 1980s, but then it faded in the shadow of microelectronics," said lead author Changming Wu, a UW electrical and computer engineering graduate student. "Now, because of the end of Moore's law, advances in integrated photonics and the demands of AI computing, it has been revamped. That's very exciting."

[Note: Moore's law is the observation that the number of transistors in a dense integrated circuit doubles about every two years.]

Other co-authors are Seokhyeong Lee and Ruoming Peng at the UW, and Heshan Yu and Ichiro Takeuchi at the University of Maryland.

This research was funded by the Office of Naval Research Multidisciplinary University Research Initiative. Some of this work was conducted at the Washington Nanofabrication Facility/Molecular Analysis Facility at the UW.
