A research team led by Professor Hui Qiao from the Institute for Brain and Cognitive Sciences and the Department of Automation at Tsinghua University has introduced a compact meta-imaging camera, together with an analytical framework that uses the Cramér–Rao lower bound to quantify the theoretical precision limit of monocular depth sensing.
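The role of the Cramér–Rao lower bound in this setting can be illustrated with a toy one-dimensional model. The Gaussian defocus blur, the additive-noise assumption, and every parameter below are hypothetical illustrations, not the paper's actual imaging model: for a measurement whose mean depends on depth z under Gaussian noise, the Fisher information is I(z) = Σᵢ (∂μᵢ/∂z)² / σ², and 1/I(z) lower-bounds the variance of any unbiased depth estimator.

```python
import numpy as np

def defocus_psf(z, x, z_focus=1.0, k=0.5, sigma0=0.05):
    """Toy 1-D defocus model: a Gaussian blur whose width grows with
    distance from the focal plane (all parameters hypothetical)."""
    sigma = sigma0 + k * abs(z - z_focus)
    psf = np.exp(-x**2 / (2 * sigma**2))
    return psf / psf.sum()

def crlb_depth(z, x, photon_signal=1e4, noise_sigma=5.0, dz=1e-4):
    """Cramér–Rao lower bound on depth variance for additive Gaussian
    noise: Fisher information I(z) = sum_i (d mu_i / dz)^2 / sigma^2."""
    mu_plus = photon_signal * defocus_psf(z + dz, x)
    mu_minus = photon_signal * defocus_psf(z - dz, x)
    dmu_dz = (mu_plus - mu_minus) / (2 * dz)   # central difference
    fisher = np.sum(dmu_dz**2) / noise_sigma**2
    return 1.0 / fisher                        # lower bound on Var(z_hat)

x = np.linspace(-1, 1, 201)  # 1-D sensor coordinates
for z in (0.8, 1.1, 1.3):    # depths away from the focal plane
    print(f"z = {z:.1f}: depth CRLB = {crlb_depth(z, x):.3e}")
```

Evaluating the bound across depths in this way is what lets one compare imaging architectures: a design whose measurements change more sharply with depth has higher Fisher information, hence a lower bound on achievable depth variance.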
Depth sensing is crucial in applications like robotics, augmented reality, and autonomous driving. Monocular passive depth sensing techniques have become popular due to their compact design and cost-effectiveness, offering an alternative to bulky and costly active depth sensors and stereo vision systems.
While light-field cameras can address the defocus ambiguity of traditional 2D cameras, enabling unambiguous depth perception, they often compromise spatial resolution and are affected by optical aberrations. These drawbacks make achieving accurate and robust monocular depth sensing a challenging task.
Quantitative evaluations revealed that the meta-imaging camera achieved not only higher precision across a wider depth range compared to the light-field camera but also greater robustness to variations in the signal-to-background ratio. Additionally, both simulation and experimental results confirmed that the meta-imaging camera reliably provided accurate depth information even in the presence of optical aberrations.
With promising compatibility with other point-spread-function engineering techniques, the researchers anticipated that this meta-imaging camera could significantly advance monocular passive depth sensing across various applications.
The meta-imaging camera integrates a main lens, microlens array, CMOS sensor, and piezo stage. Through a built-in scanning mechanism, the camera overcomes the typical trade-off between spatial and angular resolution, enabling multisite aberration correction via digital adaptive optics (DAO) techniques. As a result, it can optically capture depth information accurately and reliably, even in the presence of aberrations.
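The idea of using a scanning stage to escape the spatial–angular trade-off can be sketched in one dimension: each exposure samples the scene on a coarse grid, the piezo stage shifts the sampling grid by a sub-pitch offset between exposures, and interleaving the shifted captures yields a finer-grained measurement. This is a minimal sketch of sub-pixel scanning in general; the scene, grid sizes, and shift pattern are illustrative assumptions, not the camera's actual reconstruction pipeline.

```python
import numpy as np

def scene(x):
    """Hypothetical 1-D scene with detail beyond a single capture's sampling."""
    return np.sin(2 * np.pi * 3 * x) + 0.5 * np.sin(2 * np.pi * 7 * x)

n_coarse = 16        # samples in a single (unshifted) capture
n_shifts = 4         # piezo shift positions, each offset by pitch / n_shifts
pitch = 1.0 / n_coarse

# Each capture samples the scene on a grid offset by a fraction of the pitch,
# mimicking the piezo stage shifting the optics between exposures.
captures = []
for s in range(n_shifts):
    grid = np.arange(n_coarse) * pitch + s * pitch / n_shifts
    captures.append(scene(grid))

# Interleave the shifted captures into one finer-grained measurement.
fine = np.empty(n_coarse * n_shifts)
for s in range(n_shifts):
    fine[s::n_shifts] = captures[s]

# Compare against directly sampling the scene on the fine grid.
dense = scene(np.arange(n_coarse * n_shifts) * pitch / n_shifts)
print("max interleaving error:", np.abs(fine - dense).max())
```

The interleaved result matches a dense sampling of the scene (up to floating-point error), showing how multiple coarse, shifted exposures can trade acquisition time for spatial resolution without sacrificing the angular information each capture carries.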
The scientists summarize their approach and findings as follows:
“We present a compact meta-imaging camera and an analytical framework for the quantification of monocular depth sensing precision. Our results reveal that the meta-imaging camera outperforms the traditional light-field camera, exhibiting superior depth sensing capabilities and enhanced robustness against changes in signal-to-background ratio. Simulation and experimental depth estimation results further confirm the robustness and high precision of meta-imaging cameras in challenging conditions caused by optical aberrations.”
“The meta-imaging camera complements rather than contradicts stereo vision. It can enhance the depth sensing performance when replacing 2D cameras with meta-imaging cameras in current stereo vision systems,” they noted.
“This technique could significantly expand the utility of passive depth sensing in challenging scenarios such as autonomous driving, unmanned drones, and robotics, where accurate and robust depth sensing is crucial. Additionally, this breakthrough opens new avenues for future advancements in long-range passive depth sensing, overcoming the limitations previously imposed by optical aberrations,” they concluded.
Journal Reference:
Cao, Z., et al. (2024). Aberration-robust monocular passive depth sensing using a meta-imaging camera. Light: Science & Applications. https://doi.org/10.1038/s41377-024-01609-9.