
New Artificial Neural Network-Based Method Overcomes Limitations of Holographic 3D Imaging

Digital holographic microscopy is an imaging modality that can digitally reconstruct images of a 3D sample from a single hologram by numerically refocusing it through the entire sample volume.
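Digital refocusing of a hologram to an arbitrary depth is commonly performed with free-space propagation methods such as the angular spectrum approach. The following NumPy sketch illustrates that general idea only; the function name and parameters are illustrative and are not taken from the paper.

```python
import numpy as np

def angular_spectrum_refocus(field, wavelength, dx, z):
    """Propagate a complex optical field by a distance z using the
    angular spectrum method, i.e. digitally refocus a hologram to a
    chosen depth. 'dx' is the pixel size; all lengths share one unit."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    # Free-space transfer function in the spatial-frequency domain.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    arg = np.clip(arg, 0.0, None)  # drop evanescent components
    H = np.exp(1j * 2.0 * np.pi * (z / wavelength) * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(field) * H)
```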

An overview of the Bright-field Holography concept. (Image credit: Changchun Institute of Optics, Fine Mechanics and Physics)

In contrast, scanning through a sample volume with a traditional light microscope requires a mechanical stage to move the sample and capture multiple images at different depths, which limits the attainable imaging speed and throughput. Furthermore, holographic imaging can be performed at a fraction of the size and cost of a traditional bright-field microscope, while also covering a much wider field of view.

This has enabled numerous holography-based hand-held devices for biomedical diagnostics and environmental sensing applications. Despite these benefits, the images produced by a holographic microscope typically suffer from light interference-related spatial artifacts, which can limit the contrast attainable in the reconstructed image.

Scientists at UCLA have created a new artificial neural network-based technique to overcome these limitations of holographic 3D imaging. The new technique, known as Bright-field Holography, combines the best of both worlds: the image contrast advantage of bright-field microscopy and the snapshot volumetric imaging capability of holography. In Bright-field Holography, co-registered pairs of digitally refocused holograms and their corresponding bright-field microscope images are used to train a deep neural network to learn the statistical image transformation between the two microscopy modalities.
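The paper's actual network architecture and training details are not reproduced here. As a rough illustration of supervised training on such co-registered pairs, a minimal PyTorch sketch might look like the following; the small encoder-decoder, the two-channel input layout (real and imaginary parts of the refocused hologram), and the L1 loss are all assumptions made for illustration, not the authors' design.

```python
import torch
import torch.nn as nn

class HologramToBrightfield(nn.Module):
    """Tiny stand-in for an image-to-image translation network that maps a
    refocused hologram to a bright-field-like image (illustrative only)."""
    def __init__(self, in_ch=2, out_ch=3):
        # in_ch=2: real and imaginary parts of the refocused field (assumption)
        # out_ch=3: RGB bright-field-like output (assumption)
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def train_step(model, optimizer, refocused_holograms, brightfield_targets):
    """One supervised step on a batch of co-registered image pairs."""
    optimizer.zero_grad()
    prediction = model(refocused_holograms)
    loss = nn.functional.l1_loss(prediction, brightfield_targets)  # pixel-wise fidelity
    loss.backward()
    optimizer.step()
    return loss.item()
```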

Once trained, the deep neural network takes a digitally refocused hologram corresponding to a given depth within the sample volume and transforms it into an image equivalent to a bright-field microscope image acquired at the same depth, matching the spatial and color contrast as well as the optical sectioning capability of a bright-field microscope. Training such a neural network takes approximately 40 hours; after training, however, the network remains fixed and can rapidly produce its output image, within about one second for a hologram containing millions of pixels.
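At inference time, the two sketches above would simply be chained: refocus the hologram to the depth of interest, then pass the result through the trained network. A hypothetical usage example, with made-up file name, wavelength, pixel size, and depth:

```python
import numpy as np
import torch

# Hypothetical input: a complex-valued hologram field saved to disk.
hologram_field = np.load("hologram_field.npy")
refocused = angular_spectrum_refocus(hologram_field, wavelength=530e-9, dx=1.1e-6, z=50e-6)

# Stack real and imaginary parts as channels, matching the training sketch above.
x = torch.from_numpy(np.stack([refocused.real, refocused.imag])).float().unsqueeze(0)

model = HologramToBrightfield()
model.eval()
with torch.no_grad():
    brightfield_like = model(x)  # image analogous to a bright-field capture at depth z
```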

This study has been reported in Light: Science & Applications, an open-access journal of Springer Nature. The study was led by Dr Aydogan Ozcan, the Chancellor's Professor of Electrical and Computer Engineering at the UCLA Henry Samueli School of Engineering and Applied Science and an associate director of the California NanoSystems Institute (CNSI), together with graduate student Yichen Wu and Dr Yair Rivenson, an adjunct professor of electrical and computer engineering at UCLA.

Bright-field Holography bridges the contrast gap between the classical hologram reconstruction methods and a high-end bright-field microscope, while also eliminating the need to use complex hardware and mechanical scanning to rapidly image sample volumes.

Dr Aydogan Ozcan, Chancellor’s Professor of Electrical and Computer Engineering, Henry Samueli School of Engineering and Applied Science, UCLA.

Rapid volumetric imaging of dynamic events within large sample volumes is one application that will immediately benefit from this technology, paving a new pathway toward high-throughput imaging of liquid samples using deep learning.
