
Advancing Machine Vision: The Dual-Mode Charge-Coupled Phototransistor

A recent article in Advanced Materials introduces a new charge-coupled phototransistor capable of simultaneously capturing static grayscale information and dynamic events. This innovation addresses a long-standing limitation in traditional image sensors, which typically process motion and intensity data separately.

With its dual-mode functionality, the device aims to significantly enhance machine vision systems, particularly in fields like autonomous vehicles and robotics, where real-time visual accuracy is critical.

A technician uses tweezers to assemble components on a printed circuit board, illustrating the precision engineering behind modern sensor technology.

Image Credit: Poppy Pix/Shutterstock.com

Why Machine Vision Matters

Advancements in machine vision are key to improving the performance and reliability of technologies such as self-driving cars and robots. These systems require sensors that can effectively capture both static and dynamic visual information.

However, existing solutions often fall short: active pixel sensors (APS) provide high-quality grayscale images but lack real-time responsiveness, while dynamic vision sensors (DVS) excel at detecting temporal changes but miss grayscale details.

Hybrid devices like dynamic and active pixel vision sensors (DAVIS) attempt to combine both functionalities but require complex designs, packing 15 to 50 transistors into each pixel. This complexity leads to higher power consumption and synchronization challenges.

In response, researchers developed a charge-coupled phototransistor that integrates dual photosensitive capacitors, enabling the concurrent capture of dynamic and static visual data within a compact, energy-efficient structure.

Inside the New Charge-Coupled Phototransistor

The newly presented device uses dual photosensitive capacitors to deliver gate voltages to a single transistor channel, enabling simultaneous static and dynamic capture. Compared with conventional DAVIS systems, which pack many transistors into each pixel, this architecture delivers comparable performance with far lower complexity and power consumption.

The design features a dual-gate field-effect transistor (FET) integrated with two silicon-based photosensitive capacitors, separated by dielectric layers of different thicknesses. When illuminated, the top gate collects photogenerated electrons, inducing stable current changes ideal for static image detection. Meanwhile, the bottom gate enables electron tunneling, producing short-lived current pulses for dynamic event capture.

Key metrics include a dynamic range exceeding 120 dB, a rapid response time of 15 microseconds, and ultra-low power consumption of just 10 picowatts per pixel. Researchers selected molybdenum disulfide (MoS₂) for the transistor channel and graphite for the electrodes, using exfoliation, heterostructure stacking, and electron-beam lithography to achieve precise material placement. This compact design successfully merges frame-based and event-driven detection, advancing low-power, high-performance machine vision systems.
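For context, a sensor's dynamic range in decibels reflects the ratio between the largest and smallest detectable signals. The short calculation below (ours, not from the paper) shows why 120 dB corresponds to roughly a million-to-one intensity ratio, using the 20·log₁₀ convention common for image sensors:

```python
import math

def dynamic_range_db(max_signal: float, min_signal: float) -> float:
    """Dynamic range in dB, using the 20*log10 convention common for image sensors."""
    return 20 * math.log10(max_signal / min_signal)

# A brightest-to-dimmest signal ratio of one million gives 120 dB:
print(dynamic_range_db(1e6, 1.0))  # -> 120.0
```

In other words, a single pixel can meaningfully respond to scenes whose brightest regions are about a million times more intense than their dimmest.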

Performance Highlights

The new phototransistor achieved a dynamic range of over 120 dB and a response time of 15 µs, matching the performance of traditional DAVIS pixels while reducing power consumption to just 10 pW per pixel. This efficiency arises from its dual-gate design, which allows precise timing synchronization between static grayscale and dynamic event detection.

The device operates on the charge-coupling effect, wherein photogenerated electrons interact with dielectric layers of different thicknesses. Thicker dielectrics block electrons, supporting stable current shifts suited for static detection, while thinner layers allow electron tunneling, generating brief current spikes for dynamic detection. This design enables real-time responsiveness under diverse lighting conditions.
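As an illustration only (a toy sketch, not the study's model), the complementary behavior of the two dielectric paths can be mimicked in a simple readout loop: one channel tracks the instantaneous light level, while the other emits a brief pulse only when the level changes:

```python
def dual_mode_readout(light_levels, event_threshold=0.1):
    """Toy sketch of the dual-mode behavior described above (illustrative only).

    static: a stable signal tracking each light level
            (thick dielectric -> grayscale detection).
    events: a brief pulse when the level changes beyond a threshold
            (thin dielectric -> tunneling-driven event detection).
    """
    static, events = [], []
    prev = light_levels[0]
    for level in light_levels:
        static.append(level)  # stable current shift encodes grayscale
        delta = level - prev
        events.append(delta if abs(delta) > event_threshold else 0.0)
        prev = level
    return static, events

# A scene that brightens abruptly at the fourth sample:
static, events = dual_mode_readout([0.25, 0.25, 0.25, 0.75, 0.75])
print(static)  # [0.25, 0.25, 0.25, 0.75, 0.75]  grayscale preserved every frame
print(events)  # [0.0, 0.0, 0.0, 0.5, 0.0]       pulse only at the change
```

The point of the sketch is the division of labor: the static channel never loses grayscale detail, while the event channel stays silent except at the moment of change.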

Built around a dual-gate MoS₂ FET with two silicon-based capacitors, the device exhibited ultra-low noise current, allowing for the detection of weak light signals. Tests confirmed its stability and durability, showing negligible photoresponse decay after 30,000 switching cycles. To explore scalability, the researchers simulated a 128 × 128 transistor array, demonstrating the potential of the device for high-density integration.
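A back-of-envelope estimate (ours, assuming the reported 10 pW/pixel figure scales linearly across the simulated array) shows how modest the resulting power budget would be:

```python
pixels = 128 * 128                # simulated array size from the study
power_per_pixel_pw = 10.0         # reported per-pixel consumption, in picowatts
total_power_nw = pixels * power_per_pixel_pw / 1000.0

print(pixels, total_power_nw)  # 16384 pixels -> ~163.84 nW for the whole array
```

Even a full 128 × 128 array would draw well under a microwatt, consistent with the article's emphasis on low-power operation.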

Broader Applications and Impact

The ability to simultaneously capture dynamic events and static images makes this phototransistor a strong candidate for real-time, high-resolution sensing applications. In autonomous vehicles, for example, it could enhance navigation and obstacle detection by simultaneously tracking moving objects and recognizing static road features.

In robotics, the technology offers improvements in object recognition and interaction, boosting overall system efficiency and reliability. Thanks to its low power usage, compact design, and scalability, the device also holds promise for portable electronics, wearable technology, and advanced surveillance systems where dynamic scene monitoring is critical.


Conclusion and Future Directions

This new device marks a significant step forward for optics and machine vision technology. By integrating the capabilities of APS and DVS systems into a single, energy-efficient sensor, it addresses long-standing challenges around synchronization, power consumption, and integration density.

Future work should focus on miniaturizing the device and integrating it with silicon-based semiconductor processes to develop high-density photodetector arrays. Such advances could yield smarter, more responsive imaging systems and support next-generation intelligent visual perception and optical sensing technologies.

Journal Reference

Feng, S., et al. (2025). A Charge-Coupled Phototransistor Enabling Synchronous Dynamic and Static Image Detection. Advanced Materials. DOI: 10.1002/adma.202417675, https://advanced.onlinelibrary.wiley.com/doi/10.1002/adma.202417675

Disclaimer: The views expressed here are those of the author, expressed in their private capacity, and do not necessarily represent the views of AZoM.com Limited T/A AZoNetwork, the owner and operator of this website. This disclaimer forms part of the Terms and conditions of use of this website.

Written by

Muhammad Osama

Muhammad Osama is a full-time data analytics consultant and freelance technical writer based in Delhi, India. He specializes in transforming complex technical concepts into accessible content. He has a Bachelor of Technology in Mechanical Engineering with specialization in AI & Robotics from Galgotias University, India, and he has extensive experience in technical content writing, data science and analytics, and artificial intelligence.

Citations

Please use the following format to cite this article in your essay, paper or report:

  • APA

    Osama, Muhammad. (2025, April 28). Advancing Machine Vision: The Dual-Mode Charge-Coupled Phototransistor. AZoOptics. Retrieved on April 28, 2025 from https://www.azooptics.com/News.aspx?newsID=30311.

