MIT researchers develop a new technique for speeding up lensless camera technology

posted Tuesday, April 4, 2017 at 6:00 AM EDT

Compressed-sensing imaging systems rely on a relatively new computational technique for extracting a large amount of information from a small number of measurements. In 2006, for example, researchers at Rice University built a camera that could produce 2-D images using a single light sensor. The downside was that it required thousands of individual exposures to produce a clear image. Researchers from the MIT Media Lab have now announced a new compressed-sensing technique that is 50 times as efficient.
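To make the idea concrete, below is a minimal sketch, in Python with NumPy, of the single-pixel measurement model that compressed sensing exploits: each exposure applies one random mask pattern, the lone sensor records one number, and a sparse-recovery solver rebuilds the image from far fewer exposures than pixels. The 16×16 test scene, the ±1 mask patterns, the exposure count, and the `omp` solver are illustrative assumptions, not details of the Rice or MIT systems.

```python
# Illustrative sketch of single-pixel compressed sensing (not the Rice/MIT code).
import numpy as np

rng = np.random.default_rng(0)

n_side = 16
n_pixels = n_side * n_side      # 256 unknowns to recover
n_exposures = 100               # measurements, far fewer than pixels
sparsity = 8                    # bright points in the toy scene

# Sparse test scene: a few bright points on a dark background.
scene = np.zeros(n_pixels)
scene[rng.choice(n_pixels, size=sparsity, replace=False)] = 1.0

# Each exposure applies a random +/-1 mask and records one number on the single
# sensor (in hardware, +/-1 masks would be approximated with a DMD and
# differential measurements; that is a simplification made for this sketch).
masks = rng.choice([-1.0, 1.0], size=(n_exposures, n_pixels))
measurements = masks @ scene

def omp(A, y, k):
    """Greedy orthogonal matching pursuit: recover a k-sparse signal from y = A x."""
    residual = y.copy()
    support, coef = [], np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # column most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

recovered = omp(masks, measurements, sparsity)
print("true bright pixels:     ", np.flatnonzero(scene))
print("recovered bright pixels:", np.flatnonzero(recovered > 0.5))
```

In a real camera the scene is sparse in a transform basis such as wavelets rather than in the pixel basis itself; the toy point-source scene above sidesteps that detail, but it shows why a handful of coded exposures can stand in for a full raster scan.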

Compressed-sensing imaging systems do not require lenses, which makes them potentially well-suited for capturing images in very harsh environments or at wavelengths outside the visible spectrum. Doing away with the lens also opens up many new design possibilities. Guy Satat, a graduate student at the MIT Media Lab, says: “Formerly, imaging required a lens, and the lens would map pixels in space to sensors in an array, with everything precisely structured and engineered. With computational imaging, we began to ask: Is a lens necessary? Does the sensor have to be a structured array? How many pixels should the sensor have? Is a single pixel sufficient? These questions essentially break down the fundamental idea of what a camera is. The fact that only a single pixel is required and a lens is no longer necessary relaxes major design constraints, and enables the development of novel imaging systems. Using ultrafast sensing makes the measurement significantly more efficient.”

“Researchers from the MIT Media Lab developed a new technique that makes image acquisition using compressed sensing 50 times as efficient. In the case of the single-pixel camera, it could get the number of exposures down from thousands to dozens. Examples of this compressive ultrafast imaging technique are shown on the bottom rows.” Image courtesy of the researchers.

The new technique uses time-of-flight imaging, in which short bursts of light are projected into a scene and ultrafast sensors in the camera measure how long the light takes to reflect back to the sensor.
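As a rough sketch of that principle (not the MIT implementation), depth follows from halving the measured round-trip time of the light pulse; the 10-picosecond time bins and the toy photon histogram below are assumed values for illustration, not the researchers' hardware specifications.

```python
# Illustrative time-of-flight depth calculation (assumed bin width, toy data).
import numpy as np

SPEED_OF_LIGHT = 2.998e8      # metres per second
BIN_WIDTH_S = 10e-12          # assumed 10 ps time-bin resolution of the sensor

def depth_from_bin(bin_index: int) -> float:
    """Convert a time-bin index into a depth in metres (round trip halved)."""
    round_trip_time = bin_index * BIN_WIDTH_S
    return SPEED_OF_LIGHT * round_trip_time / 2.0

# Toy example: all the returning light lands in time bin 400.
histogram = np.zeros(1000)
histogram[400] = 1.0
peak_bin = int(np.argmax(histogram))
print(f"estimated depth: {depth_from_bin(peak_bin):.3f} m")   # ~0.600 m
```

Because each ultrafast measurement resolves the returning light in time, a single coded exposure carries more information than a single summed intensity; this is what Satat means when he says ultrafast sensing “makes the measurement significantly more efficient.”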

For more information on the new technique developed at MIT, see here.

(Via Photoxels)