Sep 10 2020
Stanford University researchers have designed a new kind of X-ray vision without the proverbial X-rays.
The team worked with hardware analogous to what allows self-driving cars to “see” the world around them, then paired it with a highly efficient algorithm that can reconstruct hidden 3D scenes from the movement of individual photons, or particles of light.
In tests, the researchers’ system reconstructed shapes hidden behind 1-inch-thick foam; to the human eye, that is akin to seeing through walls. The tests were described in Nature Communications on September 9th, 2020.
A lot of imaging techniques make images look a little bit better, a little bit less noisy, but this is really something where we make the invisible visible. This is really pushing the frontier of what may be possible with any kind of sensing system. It’s like superhuman vision.
Gordon Wetzstein, Study Senior Author and Assistant Professor of Electrical Engineering, Stanford University
The new method complements other kinds of vision systems that can see through obstacles at the microscopic scale, such as those used in medicine, because it targets large-scale settings: navigating autonomous cars through heavy rain or fog, or satellite imaging of the surfaces of planets, including Earth, through hazy atmospheres.
Supersight from Scattered Light
To see through surroundings that scatter light in every direction, the vision system pairs a laser with a super-sensitive photon detector that records every bit of laser light that strikes it.
When the laser scans an obstruction, such as a foam wall, an occasional photon will manage to pass through the foam, strike the objects concealed behind it, and travel back through the foam wall to reach the detector.
The algorithm then uses those few photons—along with information about where and when they arrived at the detector—to reconstruct the hidden objects in 3D.
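The time-of-flight principle underlying such reconstructions can be illustrated with a toy calculation (a simplified sketch, not the authors’ confocal diffuse tomography algorithm): a photon’s round-trip travel time maps to a depth via the speed of light.

```python
# Toy time-of-flight depth estimate (NOT the paper's reconstruction
# method): a photon travels out to a reflector and back, so the
# one-way depth is c * t / 2.
C = 299_792_458.0  # speed of light in vacuum, m/s

def depth_from_round_trip(t_seconds: float) -> float:
    """Distance to a reflector given the photon's round-trip time."""
    return C * t_seconds / 2.0

# Example: photons returning after a few nanoseconds correspond to
# reflectors a fraction of a meter to about 1.5 m away.
for t_ns in (2.0, 6.67, 10.0):
    d = depth_from_round_trip(t_ns * 1e-9)
    print(f"round trip {t_ns:5.2f} ns -> depth {d:.3f} m")
```

Real systems such as the one in the study must additionally untangle the extra, randomized path length each photon picks up while scattering through the foam, which is where the reconstruction algorithm does its work.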
While this is not the first system able to reveal hidden objects through scattering surroundings, it avoids restrictions that limit other methods. For instance, some techniques require advance knowledge of how far away the target object is.
Many such systems also use data only from ballistic photons—photons that travel to the concealed object and back through the scattering field without actually scattering along the way.
We were interested in being able to image through scattering media without these assumptions and to collect all the photons that have been scattered to reconstruct the image. This makes our system especially useful for large-scale applications, where there would be very few ballistic photons.
David Lindell, Study Lead Author and Graduate Student in Electrical Engineering, Stanford University
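Why ballistic photons become vanishingly rare at large scales can be sketched with the Beer–Lambert law: the fraction of photons that cross a scattering medium without a single scattering event falls off exponentially with thickness. This is a simplified illustration, and the mean free path below is a hypothetical value, not a number from the study.

```python
import math

# Simplified Beer-Lambert illustration: the fraction of "ballistic"
# photons -- those crossing a scattering slab without scattering even
# once -- decays exponentially with thickness. The mean free path is
# a made-up illustrative value, not a measurement from the study.
MEAN_FREE_PATH_MM = 1.0  # hypothetical scattering mean free path

def ballistic_fraction(thickness_mm: float) -> float:
    """Probability a photon traverses the slab with no scattering."""
    return math.exp(-thickness_mm / MEAN_FREE_PATH_MM)

for thickness in (1.0, 5.0, 25.4):  # 25.4 mm = a 1-inch foam wall
    frac = ballistic_fraction(thickness)
    print(f"{thickness:5.1f} mm slab -> ballistic fraction {frac:.2e}")
```

Under these assumptions, a 1-inch slab transmits roughly one unscattered photon in a hundred billion, which is why methods that harvest the scattered photons too, as this one does, matter at large scales.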
To make their algorithm capable of handling the complexities of scattering, the team had to closely co-develop their software and hardware, even though the hardware components they used are only slightly more sophisticated than what is currently found in self-driving cars.
Depending on the brightness of the concealed objects, scanning in the researchers’ tests took anywhere from one minute to one hour. The algorithm, however, reconstructed the hidden scene in real time and could likely be run on a laptop.
“You couldn’t see through the foam with your own eyes, and even just looking at the photon measurements from the detector, you really don’t see anything,” Lindell added. “But, with just a handful of photons, the reconstruction algorithm can expose these objects—and you can see not only what they look like, but where they are in 3D space.”
Space and Fog
In the future, a successor of this vision system might be sent aboard spacecraft to moons and other planets to help observe deeper layers and surfaces through icy clouds. The researchers also want to experiment with different scattering surroundings to replicate other circumstances where the new technology could prove handy.
We’re excited to push this further with other types of scattering geometries. So, not just objects hidden behind a thick slab of material but objects that are embedded in densely scattering material, which would be like seeing an object that’s surrounded by fog.
David Lindell, Study Lead Author and Graduate Student in Electrical Engineering, Stanford University
Both Lindell and Wetzstein are also excited by how this latest study represents a deeply interdisciplinary intersection of science and engineering.
“These sensing systems are devices with lasers, detectors and advanced algorithms, which puts them in an interdisciplinary research area between hardware and physics and applied math. All of those are critical, core fields in this work and that’s what’s the most exciting for me,” Wetzstein concluded.
Wetzstein is also the director of the Stanford Computational Imaging Lab and a member of Stanford Bio-X and the Wu Tsai Neurosciences Institute.
The study was financially supported by a Stanford Graduate Fellowship in Science and Engineering; the National Science Foundation; a Sloan Fellowship; Defense Advanced Research Projects Agency (DARPA); the King Abdullah University of Science and Technology (KAUST); and the Army Research Office (ARO), an element of the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory.
Journal Reference:
Lindell, D. B. & Wetzstein, G. (2020) Three-dimensional imaging through scattering media based on confocal diffuse tomography. Nature Communications. https://doi.org/10.1038/s41467-020-18346-3