Lensless cameras and HDR sensors: Computational photography may change everything

posted Tuesday, January 5, 2016 at 12:57 PM EDT


Although camera technology has continuously improved since the first camera was designed, its main components have remained constant: a lens, an aperture, a dark box, and a light-recording material. However, as discussed in a recent New York Times article, computational photography may alter this basic setup in numerous ways.

Promising changes to resolution, size, and energy efficiency, computational photography “stems from the idea that if you can capture visual data instead of a true image, then the picture can be reconstructed with software.” It means that even a lens might not be required; instead, visual data can be gathered through “microscopic grating or diffracting it through a glass sphere.” This technology has even found its way into modern smartphones.

Instead of using a camera to capture the visual image that you see, computational photography can record image data that you cannot actually see. Gigapixel cameras, for example, can use five sensors to capture light passed through a glass sphere. Because the exact way light travels through the sphere and strikes each sensor is known, an algorithm can take the data from the five sensors and reconstruct a single image, one unlike anything we could see directly.

Gigapixel camera prototype. Image credit: Columbia Vision Laboratory at Columbia University
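To make that reconstruction step concrete, here is a minimal Python sketch of recovering an image from raw readings when the measurement process is known. The matrix A below is a random stand-in for a calibrated optical model, and the sensor and reading counts are illustrative assumptions; nothing here reflects the actual gigapixel camera's design.

import numpy as np

rng = np.random.default_rng(0)

n_scene = 64                # unknown scene values (a tiny 8x8 "image")
n_readings = 5 * 32         # say, five sensors with 32 readings each

# In a real system this matrix would come from calibrating the optics
# (how each scene point contributes to each sensor reading); here it is
# just a random, well-conditioned stand-in.
A = rng.normal(size=(n_readings, n_scene))

scene = rng.uniform(0.0, 1.0, size=n_scene)                     # ground truth
readings = A @ scene + rng.normal(scale=1e-3, size=n_readings)  # noisy data

# The raw readings look nothing like a picture, but because A is known,
# ordinary least squares inverts the measurement process.
reconstruction, *_ = np.linalg.lstsq(A, readings, rcond=None)

print(np.allclose(reconstruction, scene, atol=1e-2))  # should print True
image = reconstruction.reshape(8, 8)                  # back to picture form

The point is that the data recorded by the sensors is meaningless to the eye; it becomes an image only once software applies the known model of the optics.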

A proponent of computational photography, engineer Shree K. Nayar of Columbia University’s Vision Laboratory has built high dynamic range (HDR) sensors using the technique. Using a similar principle but a different design, Sony has employed HDR sensors in its Xperia smartphones.

With Mr. Nayar’s HDR sensor, not all pixels are created equal, as they are in a traditional sensor. Instead, its pixels react to light with differing sensitivity, so if some pixels are overexposed, their less sensitive neighbors can recover the lost detail.
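As a rough illustration of the idea only (Nayar's actual sensor design is more sophisticated), here is a toy Python sketch of spatially varying exposure: each 2x2 block of pixels sees the same scene patch at four assumed sensitivities, and the unsaturated samples are averaged to recover both bright and dark regions.

import numpy as np

def capture_sve(radiance, exposures, full_well=1.0):
    """Simulate one shot from a sensor with a repeating 2x2 exposure pattern."""
    h, w = radiance.shape
    gain = np.tile(exposures, (h // 2, w // 2))      # per-pixel sensitivity
    return np.clip(radiance * gain, 0.0, full_well)  # bright pixels saturate

def reconstruct_hdr(image, exposures, full_well=1.0):
    """Estimate scene radiance: each 2x2 block saw the same patch at four
    sensitivities, so average the unsaturated samples after removing gain."""
    h, w = image.shape
    gain = np.tile(exposures, (h // 2, w // 2))
    valid = image < full_well * 0.99                 # discard clipped pixels
    est = np.where(valid, image / gain, 0.0)
    blocks = est.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))
    counts = valid.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))
    return blocks / np.maximum(counts, 1)

exposures = np.array([[1.0, 0.25], [0.0625, 0.015625]])  # four sensitivities
scene = np.full((4, 4), 50.0)   # a bright scene...
scene[:2, :2] = 0.5             # ...with one dark region (100:1 range)
raw = capture_sve(scene, exposures)
print(reconstruct_hdr(raw, exposures))  # recovers both regions

In a conventional sensor every pixel would clip at the same level and the bright region would be lost entirely; here the least sensitive pixel in each block survives saturation and carries the detail.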

Eternal Camera. Image credit: Columbia Vision Laboratory at Columbia University

The Columbia Vision Laboratory has also designed a self-sustaining camera. Named the Eternal Camera, it “powers itself by the light of the image itself, because the sensors are made of the same basic electric parts used in solar panels.” By capturing a photo every second in a well-lit room, the Eternal Camera can run indefinitely. This sort of technology could eventually improve the battery life of smartphones when taking pictures, or extend the runtime of remote cameras.

To read more about the future of digital photography, including a group at Rambus Labs that has made a camera less than a millimeter thick by using a microscopic grating and removing the lens altogether, see the full New York Times article here.

(Seen via New York Times)