
Stanford researchers working on 3D imager
(Thursday, February 21, 2008 - 17:11 EST)

"Underexposed," a CNET blog authored by journalist Stephen Shankland, has reported on interesting research taking place at Stanford University.

The CNET article describes a prototype image sensor capable of recording the distance of its subjects. The "multi-aperture" imager was described by its creators at the recent International Solid State Circuits Conference in San Francisco, in a presentation entitled "A 3MPixel Multi-Aperture Image Sensor with 0.7μm Pixels in 0.11μm CMOS". The technology is the result of research by Stanford student Keith Fife and Stanford Department of Electrical Engineering professors Abbas El Gamal and H.-S. Philip Wong, all of the El Gamal Research Group.

The new image sensor breaks the surface of the imager up into subarrays of 16 x 16 pixels, with each subarray sharing one microlens between all 256 pixels that lie beneath it. Some clever maths is then used to locate specific image elements as seen by multiple subarrays, and the positioning differences can then be used to calculate each element's distance from the camera.
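The article doesn't give the actual reconstruction math, but the underlying principle is the same parallax triangulation used in stereo imaging: a feature seen through two neighbouring microlenses appears shifted by an amount (the disparity) that shrinks as the subject gets farther away. The following Python sketch illustrates that idea under assumed parameters; the patch-matching scheme, microlens spacing and focal length are hypothetical illustrations, not details taken from the Stanford design.

```python
import numpy as np

def depth_from_subarrays(sub_a, sub_b, baseline_um, focal_um, pixel_um,
                         patch=3, max_disp=8):
    """Crude depth map from two neighbouring subarray images.

    sub_a, sub_b : 2D arrays (e.g. 16 x 16) captured under adjacent microlenses.
    baseline_um  : centre-to-centre spacing of the two microlenses (microns).
    focal_um     : effective focal length of each microlens (microns).
    pixel_um     : pixel pitch (microns).
    All parameter values here are illustrative assumptions, not Stanford's figures.
    """
    h, w = sub_a.shape
    r = patch // 2
    depth = np.zeros((h, w))
    for y in range(r, h - r):
        for x in range(r, w - r):
            ref = sub_a[y - r:y + r + 1, x - r:x + r + 1].astype(float)
            best_d, best_err = 0, np.inf
            # Search horizontally for the best-matching patch in the
            # neighbouring subarray (this is where subject texture matters).
            for d in range(1, max_disp):
                if x - d - r < 0:
                    break
                cand = sub_b[y - r:y + r + 1, x - d - r:x - d + r + 1].astype(float)
                err = np.abs(ref - cand).sum()
                if err < best_err:
                    best_err, best_d = err, d
            if best_d:
                # Standard triangulation: distance = focal length * baseline / disparity.
                depth[y, x] = focal_um * baseline_um / (best_d * pixel_um)
    return depth
```

Fed two 16 x 16 crops and plausible numbers (with 0.7μm pixels, adjacent microlenses would sit roughly 11.2μm apart), this returns a coarse per-pixel distance estimate; the real sensor combines many subarrays with far more sophisticated processing to produce its image and depth map.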

The technique has potential not only for 3D imaging, but also for reducing lens complexity, by allowing some of the work currently done by elements inside the camera's lens to be performed at the sensor instead. The approach could also offer reduced noise, albeit at greatly reduced resolution, since each subarray shares a single microlens. On the flip side, the processing required to generate a final image with a depth map is said to be increased tenfold over that of traditional digital cameras, and the technique requires a subject with texture or detail from which to determine the positioning differences seen by each subarray.

More info and some pictures (but no samples) can be found in the CNET blog item.
