AMI-EV Revolutionizes Image Capture for Robotics and Beyond

Scientists from the University of Maryland have developed a camera system that improves how robots see and respond to their environment. The technology is inspired by the human eye, imitating the tiny involuntary movements the eye makes to maintain clear, stable vision over time. The researchers built and tested the Artificial Microsaccade-Enhanced Event Camera (AMI-EV) and published their results in the journal Science Robotics.

Event cameras are a relatively new technology better at tracking moving objects than traditional cameras, but today’s event cameras struggle to capture sharp, blur-free images when there is a lot of motion involved. It is a big problem because robots and many other technologies such as self-driving cars rely on accurate and timely images to react correctly to a changing environment. So, we asked ourselves: How do humans and animals make sure their vision stays focused on a moving object?

Botao He, Study Lead Author and Ph.D. Student, Department of Computer Science, University of Maryland

The team's solution drew on microsaccades: brief, rapid eye movements that occur involuntarily when a person tries to fix their gaze. Thanks to these tiny but constant motions, the human eye can keep an object and its visual textures, including color, depth, and shadowing, in sharp focus over time.

He said, “We figured that just like how our eyes need those tiny movements to stay focused, a camera could use a similar principle to capture clear and accurate images without motion-caused blurring.”

The team replicated microsaccades by placing a rotating prism inside the AMI-EV to redirect the light beams captured by the lens. The prism's continuous rotation mimicked the eye's natural movements, allowing the camera to stabilize an object's textures just as a human eye would.
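A rotating prism of this kind, assuming a wedge that deflects the optical axis by a fixed angle (a common design for this type of beam steering), sweeps the projected image around a small circle on the sensor, much as a microsaccade sweeps the image across the retina. A minimal sketch of that geometry in Python, where the rotation rate and deflection amplitude are illustrative placeholders rather than values reported in the paper:

```python
import numpy as np

def prism_offset(t, rotation_hz=50.0, deflection_px=10.0):
    """Image-plane shift induced by a rotating wedge prism at time t (seconds).

    The prism deflects the optical axis by a fixed angle, so as it spins the
    projected image traces a circle of radius `deflection_px` on the sensor.
    Both parameters are hypothetical values for illustration only.
    """
    theta = 2.0 * np.pi * rotation_hz * t  # prism rotation angle at time t
    return deflection_px * np.cos(theta), deflection_px * np.sin(theta)
```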

The group then developed software that compensates for the prism's movement within the AMI-EV, consolidating the shifting light into stable images.
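Because the camera itself drives the prism, the shift it introduces at any timestamp is known exactly, so stabilization can be reduced to subtracting that shift from each event's pixel coordinates. A conceptual sketch of this compensation step, using the same illustrative parameters as above and not the authors' implementation:

```python
import numpy as np

def stabilize_events(xs, ys, ts, rotation_hz=50.0, deflection_px=10.0):
    """Undo the known prism-induced shift for a stream of events.

    xs, ys, ts are arrays of event pixel coordinates and timestamps (s).
    The prism's rotation is controlled by the camera, so its offset at any
    timestamp is known and can simply be subtracted. Conceptual sketch only;
    the rotation parameters are illustrative, not from the paper.
    """
    theta = 2.0 * np.pi * rotation_hz * ts          # prism angle per event
    xs_stab = xs - deflection_px * np.cos(theta)    # remove horizontal sweep
    ys_stab = ys - deflection_px * np.sin(theta)    # remove vertical sweep
    return xs_stab, ys_stab
```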

Yiannis Aloimonos, Professor of Computer Science at UMD and study co-author, sees the team's creation as a significant advancement in robotic vision.

Our eyes take pictures of the world around us and those pictures are sent to our brain, where the images are analyzed. Perception happens through that process and that is how we understand the world. When you are working with robots, replace the eyes with a camera and the brain with a computer. Better cameras mean better perception and reactions for robots.

Yiannis Aloimonos, Study Co-Author and Director, Computer Vision Laboratory, University of Maryland Institute for Advanced Computer Studies

The researchers believe their invention could have important ramifications well beyond robotics and national security. AMI-EV could become a go-to solution for scientists in industries that depend on precise image capture and shape detection, who are continually looking for ways to improve their cameras.

With their unique features, event sensors and AMI-EV are poised to take center stage in the realm of smart wearables. They have distinct advantages over classical cameras—such as superior performance in extreme lighting conditions, low latency, and low power consumption. These features are ideal for virtual reality applications, for example, where a seamless experience and the rapid computations of head and body movements are necessary.

Cornelia Fermüller, Study Senior Author and Research Scientist, University of Maryland

In early testing, AMI-EV accurately captured and displayed movement in a variety of scenarios, from identifying rapidly moving shapes to detecting human pulses.

The researchers also found that AMI-EV could capture motion at tens of thousands of frames per second, far exceeding the 30 to 1,000 frames per second typical of most commercial cameras.
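For context, an event stream has no fixed frame rate: frames are synthesized by binning events in time, so the achievable rate is limited mainly by the sensor's timestamp resolution. A short sketch of binning events into frames at 10,000 frames per second, where the resolution and rate are illustrative assumptions rather than AMI-EV specifications:

```python
import numpy as np

def events_to_frames(xs, ys, ts, ps, fps=10_000, shape=(260, 346)):
    """Accumulate an event stream into frames at an arbitrary rate.

    Each frame sums the polarities (+1/-1) of events falling in its time
    bin. The 346x260 resolution is typical of research event sensors and,
    like the frame rate, is an illustrative assumption.
    """
    t0 = ts.min()
    n_frames = int((ts.max() - t0) * fps) + 1
    frames = np.zeros((n_frames, *shape), dtype=np.int32)
    bins = ((ts - t0) * fps).astype(int)            # frame index per event
    np.add.at(frames, (bins, ys.astype(int), xs.astype(int)), ps)
    return frames
```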

This more realistic and fluid portrayal of motion may prove essential for improving everything from security monitoring to more engaging augmented reality experiences, as well as for enhancing the way astronomers take pictures in space.

Aloimonos said, “Our novel camera system can solve many specific problems, like helping a self-driving car figure out what on the road is a human and what is not. As a result, it has many applications that much of the general public already interacts with, like autonomous driving systems or even smartphone cameras. We believe that our novel camera system is paving the way for more advanced and capable systems to come.”

Journal Reference:

He, B., et al. (2024) Microsaccade-inspired event camera for robotics. Science Robotics. https://doi.org/10.1126/scirobotics.adj8124
