
Novel 3-D Camera with Specially Designed Image Sensors

Photography used to be limited to flat, two-dimensional images. Today, the art and science of image capture is spurred by optics, computation and electronics, and Cornell engineers are working at the cutting edge of 3-D imaging.

Image: A prototype angle-sensitive pixel camera (left); recorded data processed to recover a high-resolution light field (center); details recovered from a single camera image (right). Credit: Suren Jayasuriya

Suren Jayasuriya, a graduate student in the lab of Alyosha Molnar, associate professor of electrical and computer engineering, is developing a 3-D camera with specially designed image sensors that could lead to previously unimagined applications, from smart cars to medical imaging to visually stunning computer graphics.

The sensors, which are made of pixels that can detect both the intensity and incident angle of light, can digitally refocus a photograph after an image is taken, get different perspective views of a scene from a single shot, and compute an image depth map.
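The refocusing step is conceptually simple once a 4-D light field is in hand. Below is a minimal shift-and-add sketch in Python/NumPy, assuming the angular and spatial samples have already been recovered into an array; the array layout, the `refocus` function and the focal parameter `alpha` are illustrative assumptions, not the Cornell pipeline itself.

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-add refocusing of a 4-D light field.

    light_field: array of shape (U, V, S, T), where (U, V) index the
    angular samples and (S, T) the spatial samples. alpha sets the
    virtual focal plane (alpha = 1 keeps the original focus). These
    names and shapes are illustrative, not the researchers' design.
    """
    U, V, S, T = light_field.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Shift each angular view in proportion to its offset from
            # the aperture center, then average the shifted views.
            du = int(round((1 - 1 / alpha) * (u - U // 2)))
            dv = int(round((1 - 1 / alpha) * (v - V // 2)))
            out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
```

With alpha = 1 the views are summed without shifting, which reproduces a conventional photograph; sweeping alpha moves the virtual focal plane through the scene after capture.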

In support of the work, Jayasuriya recently received a $100,000 Qualcomm Innovation Fellowship for his joint proposal with Achuta Kadambi, a doctoral student in Ramesh Raskar’s MIT Media Lab Camera Culture group. Their proposal is called “Nanophotography: Computational CMOS Sensor Design for 3-D Imaging.”

“What’s exciting about angle-sensitive pixels is that it’s innovating on the detector side, to help motivate new applications in computer graphics and vision, where we’re giving more dimensionality to our data, at a cost of computation,” Jayasuriya said. “But the way things are scaling, with Moore’s Law and [graphics processing units] and parallel computing, computation is becoming less and less of a problem. The age of big data is here. Now, it’s more like, what data do we present to these algorithms to make them smarter?”

In other words, image capture is no longer just about taking a picture. It's about capturing an image and then using machine learning and computation to post-process it, all in the blink of an eye.

For the Qualcomm project, they're working on a depth sensor based on an imaging technique called "time of flight," which is increasingly popular and used, notably, in Microsoft Kinect cameras. Time-of-flight imaging measures how long emitted light takes to travel to objects in a scene and reflect back to the sensor, which directly yields depth. The researchers are adding time-of-flight coding to enable their imaging system to visualize light as it travels through a scene, and even to see around corners. By capturing light in flight, they can build a camera that performs, effectively, at 1 billion frames per second through post-processing computation.
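As a back-of-the-envelope illustration of the time-of-flight principle (a sketch of the basic physics, not the researchers' coded system), depth follows from half the round-trip travel time of light:

```python
# Illustrative only: depth from round-trip time in a time-of-flight
# camera. Light travels to the object and back, so depth is half the
# total distance covered at the speed of light.
C = 299_792_458.0  # speed of light in vacuum, m/s

def depth_from_round_trip(t_seconds: float) -> float:
    """Return scene depth in meters for a measured round-trip time."""
    return C * t_seconds / 2.0

# A round trip of ~6.67 nanoseconds corresponds to about 1 meter,
# which is why time-of-flight sensors need picosecond-scale timing
# precision to resolve fine depth differences.
print(depth_from_round_trip(6.67e-9))  # ~1.0 m
```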

The angle-sensitive pixel image sensors are fabricated in a complementary metal-oxide-semiconductor (CMOS) process, a well-established chip-making technique. That's one of the advantages Jayasuriya brings to the project: his adviser, Molnar, has many years of experience designing CMOS-based chips for imaging, biomedical and radio-frequency applications.

Jayasuriya and Kadambi's proposal was one of eight Qualcomm Fellowship winners selected from 146 applicants; the two will share the $100,000 prize.
