Holographic imaging has long grappled with unpredictable distortions in dynamic environments, which make high-quality reconstruction a formidable challenge.
Traditional deep learning approaches, which rely heavily on the specific conditions of their training data, often stumble when confronted with scenarios outside those conditions.
To address this issue, a team of researchers from Zhejiang University explored the intersection of optics and deep learning, revealing the pivotal role of physical priors in effectively aligning training data with pre-trained models.
They investigated how spatial coherence and turbulence affect holographic imaging and proposed an inventive technique, TWC-Swin, to restore high-quality holographic images in the presence of these disruptions.
This groundbreaking research is published in the Gold Open Access journal Advanced Photonics.
Spatial coherence measures how orderly the light waves are: when the waves become disordered, the resulting holographic images turn blurry and noisy and carry less information. Maintaining spatial coherence is therefore vital for clear, sharp holographic imaging.
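For reference, here is the textbook way of quantifying this (a standard definition, not drawn from the paper itself): the spatial coherence between two points $\mathbf{r}_1$ and $\mathbf{r}_2$ is commonly described by the equal-time complex degree of coherence,

\[
\mu(\mathbf{r}_1,\mathbf{r}_2) \;=\; \frac{\langle E(\mathbf{r}_1,t)\,E^{*}(\mathbf{r}_2,t)\rangle}{\sqrt{\langle |E(\mathbf{r}_1,t)|^{2}\rangle\,\langle |E(\mathbf{r}_2,t)|^{2}\rangle}}, \qquad 0 \le |\mu| \le 1,
\]

where $E$ is the optical field and the angle brackets denote a time or ensemble average; $|\mu| = 1$ corresponds to fully correlated (coherent) light and $|\mu| = 0$ to completely uncorrelated light.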
Dynamic environments, such as those characterized by oceanic or atmospheric turbulence, introduce fluctuations in the refractive index of the medium. This disrupts the phase correlation of light waves and distorts spatial coherence. Consequently, holographic images may become blurred, distorted, or even lost.
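To make this degradation mechanism concrete, here is a minimal numerical sketch (not the authors' optical system or code): a plane wave passes through a random phase screen standing in for turbulence, and the ensemble-averaged correlation between the field at two separated points, a proxy for spatial coherence, falls as the phase fluctuations grow. All parameter names and values below are illustrative assumptions.

```python
# Toy phase-screen model of turbulence: stronger refractive-index
# fluctuations -> stronger random phase -> lower spatial coherence
# between two fixed points. Purely illustrative; not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

def gaussian_phase_screen(n, corr_px, strength):
    """Random phase screen: white noise low-pass filtered to a correlation
    length of roughly corr_px pixels, scaled to an RMS phase of `strength` rad."""
    noise = rng.standard_normal((n, n))
    k = np.fft.fftfreq(n)
    kxx, kyy = np.meshgrid(k, k, indexing="ij")
    lowpass = np.exp(-2 * (np.pi * corr_px) ** 2 * (kxx ** 2 + kyy ** 2))
    screen = np.fft.ifft2(np.fft.fft2(noise) * lowpass).real
    screen *= strength / (screen.std() + 1e-12)
    return screen

def coherence_between(p1, p2, strength, trials=200, n=64):
    """|<E(p1) E*(p2)>| over an ensemble of screens, for a unit plane wave."""
    acc = 0.0 + 0.0j
    for _ in range(trials):
        phase = gaussian_phase_screen(n, corr_px=4, strength=strength)
        field = np.exp(1j * phase)
        acc += field[p1] * np.conj(field[p2])
    return abs(acc / trials)

for rms_phase in [0.0, 0.5, 1.0, 2.0]:
    mu = coherence_between((20, 20), (20, 30), strength=rms_phase)
    print(f"RMS phase {rms_phase:.1f} rad -> |degree of coherence| ~ {mu:.2f}")
```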
The Zhejiang University researchers devised the TWC-Swin method to address these challenges. TWC-Swin, short for “train-with-coherence swin transformer,” harnesses spatial coherence as a physical prior to guide the training of a deep neural network. This network, built on the Swin transformer architecture, excels at capturing both local and global image features.
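The paper's actual implementation is not reproduced here, but a minimal conceptual sketch of the train-with-coherence idea might look as follows: degraded holograms recorded at several spatial-coherence levels all share one clean target, so the coherence level organizes the training pairs, and a small convolutional block stands in for the Swin-transformer restoration backbone. All names, shapes, and the noise-based degradation model are hypothetical.

```python
# Conceptual sketch only: coherence level as a physical prior that
# structures the training pairs of a restoration network.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

class TinyRestorer(nn.Module):
    """Stand-in for the Swin-based image-restoration backbone."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.body(x)

# Synthetic placeholder data: 8 clean "holograms", each degraded at three
# coherence levels (here modelled crudely as noise that grows as coherence drops).
clean = torch.rand(8, 1, 32, 32)
coherence_levels = [1.0, 0.6, 0.3]
inputs, targets = [], []
for c in coherence_levels:
    inputs.append(clean + (1.0 - c) * 0.3 * torch.randn_like(clean))
    targets.append(clean)
loader = DataLoader(TensorDataset(torch.cat(inputs), torch.cat(targets)),
                    batch_size=4, shuffle=True)

model = TinyRestorer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

for epoch in range(2):  # toy-sized training loop
    for degraded, clean_target in loader:
        opt.zero_grad()
        loss = loss_fn(model(degraded), clean_target)
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: L1 loss {loss.item():.4f}")
```

In the actual method, a Swin transformer replaces the stand-in network and the training pairs come from holograms recorded under varying coherence and turbulence conditions, as described below.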
To evaluate their method, the authors developed a light processing system that generated holographic images under varying spatial coherence and turbulence conditions. These holograms featured natural objects and served as training and testing data for the neural network.
The results demonstrate that TWC-Swin effectively restores holographic images even when spatial coherence is low and turbulence is arbitrary, outperforming traditional methods based on convolutional networks.
Furthermore, the method generalizes well, extending its applicability to scenes not included in the training dataset.
This research marks a significant breakthrough in addressing image degradation in holographic imaging across diverse scenarios. By integrating physical principles into deep learning, this study reveals a successful synergy between optics and computer science.
The current research sets the stage for enhanced holographic imaging, granting the ability to perceive clearly through turbulence.
Journal Reference:
Tong, X., et al. (2023). Harnessing the magic of light: spatial coherence instructed swin transformer for universal holographic imaging. Advanced Photonics, 5(6), 066003. https://doi.org/10.1117/1.AP.5.6.066003