
The Invention of Simultaneous Bracketing

By Mike Pasini, Editor
Imaging Resource Newsletter


"Why didn't I think of that?" we can hear you groaning. An echo of our own anguish as we read about Shree Nayar's proposal to enhance the dynamic range of your garden variety 8-bit CCD.

We all know perfectly well that a CCD is just a collection of sensors that are sensitive to brightness. The only way a CCD sensor can see color is to have a red, green, or blue filter slapped in front of its nose. By putting a matrix of filters over the CCD in our cameras -- in a regular pattern of two greens for every red and blue, usually -- our digicams can capture color, each sensor recording, say, the blue while guessing at the missing red and green values from the adjacent sensors. Interpolating, that is.
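For the curious, here's roughly what that interpolation looks like in a few lines of Python. Fair warning: the RGGB layout and the plain neighbor-averaging below are our own bare-bones sketch of the idea, not any camera maker's actual recipe.

```python
import numpy as np

def demosaic_bilinear(raw):
    """raw: 2-D array of readings from an RGGB filter mosaic.
    Returns an H x W x 3 image with the two missing colors at each
    site averaged in from whichever neighbors actually measured them."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    sites = np.zeros((h, w, 3), dtype=bool)
    sites[0::2, 0::2, 0] = True      # red-filtered sensors
    sites[0::2, 1::2, 1] = True      # green-filtered sensors, even rows
    sites[1::2, 0::2, 1] = True      # green-filtered sensors, odd rows
    sites[1::2, 1::2, 2] = True      # blue-filtered sensors

    for c in range(3):
        measured = np.where(sites[..., c], raw, 0.0)
        counts = sites[..., c].astype(float)
        total = np.zeros((h, w))
        n = np.zeros((h, w))
        # Sum each pixel's 3x3 neighborhood, counting only the sensors
        # that actually saw this color (np.roll wraps at the edges --
        # fine for a sketch, not for a product).
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                total += np.roll(np.roll(measured, dy, axis=0), dx, axis=1)
                n += np.roll(np.roll(counts, dy, axis=0), dx, axis=1)
        rgb[..., c] = total / np.maximum(n, 1)
    return rgb
```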

Well, Nayar suggests, "Why not interpolate brightness itself?"

"Nayar suggests, 'Why not interpolate brightness itself?'"

It's actually more than a suggestion. The New York Times (http://www.nytimes.com/) published a brief article on Nayar's research in the Sept. 7 issue of Circuits. But Nayar's article describing the trick is available as a PDF on the Web at http://www.cs.columbia.edu/~srinivas/sve_cvpr_00.pdf (and it includes fascinating color image samples).

Nayar is a professor of computer science at Columbia University (which is why he succumbs to calling this Spatially Varying Pixel Exposures, or SVE for short), but he drinks Mountain Dew just like the rest of us.

So he isn't smoking anything when he says he can take an 8-bit sensor that can register 256 levels of brightness and turn it into a 12-bit behemoth that can detect 4,096 levels. And he can do it for a song, too.

By simply doing what you do when you bracket exposures. Except the bracketing happens across adjacent sensors rather than in separate shots of the whole CCD, so it can happen simultaneously. It's done with masks, in fact, just like color filters.
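Here's a toy sketch of that, again in Python and again with numbers we made up: a repeating 2x2 tile of neutral-density attenuations (the factors 1, 1/2, 1/4 and 1/16 are ours for illustration; Nayar's actual mask values may differ). The arithmetic is the point: if the strongest mask passes one-sixteenth of the light, a clear pixel and its masked neighbor together span roughly 256 x 16 = 4,096 distinguishable levels -- the 8-bit-to-12-bit jump.

```python
import numpy as np

# Transmission of the (made-up) mask over each pixel in a 2x2 tile.
EXPOSURES = np.array([[1.0,  0.5],
                      [0.25, 1/16]])

def expose(scene, full_well=255):
    """Simulate an 8-bit CCD behind a spatially varying exposure mask.
    scene: 2-D array of scene radiance in arbitrary linear units.
    Returns the clipped 8-bit readings and the per-pixel mask."""
    h, w = scene.shape
    mask = np.tile(EXPOSURES, (h // 2 + 1, w // 2 + 1))[:h, :w]
    readings = np.clip(np.round(scene * mask), 0, full_well)
    # The clear pixel saturates at 255; its 1/16-masked neighbor keeps
    # measuring until the scene is 16x brighter -- about 4,096 levels
    # between the two of them.
    return readings, mask
```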


So if one clear sensor is registering full-blown white, an adjacent masked sensor may still pick up some detail (as if you'd intentionally underexposed). Likewise, if one masked sensor can't pick up any light, an adjacent clearer one might (as if you'd overexposed). You pick up more of reality.

Of course a lot of magic happens in software, behind the scenes (or CCD), so to speak. And a little resolution is lost along the way, but only in highlights that would be blown out anyway or shadows that would otherwise suffer artifacts.
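If you'd like to peek behind that curtain, here's a crude Python sketch of the kind of bookkeeping involved: scale each usable reading back to a common brightness scale, throw away pixels that are blown out or lost in the dark, and fill the holes from their neighbors. The thresholds and the simple averaging are our guesses for illustration, not the interpolation Nayar's paper actually uses.

```python
import numpy as np

def reconstruct(readings, mask, low=4, high=251):
    """readings, mask: 8-bit values and per-pixel transmission,
    e.g. from the expose() sketch above."""
    radiance = readings / mask                      # back to scene units
    valid = (readings > low) & (readings < high)    # keep well-exposed pixels
    kept = np.where(valid, radiance, 0.0)
    weight = valid.astype(float)
    # Fill each discarded pixel with the average of its valid 3x3 neighbors
    # (np.roll wraps at the borders -- acceptable for a sketch).
    total = np.zeros_like(kept)
    count = np.zeros_like(weight)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            total += np.roll(np.roll(kept, dy, axis=0), dx, axis=1)
            count += np.roll(np.roll(weight, dy, axis=0), dx, axis=1)
    filled = total / np.maximum(count, 1)
    return np.where(valid, radiance, filled)
```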

Take a look at the sample images (OK, it's Vitamin C.C. Lemon, not Mountain Dew) and see for yourself. You can see the mask there, too.

We were particularly impressed by the indoor/outdoor scene that captured full daylight and a shadowy interior equally well by interpolating the SVE image (which looks to us like anything you might see through a screen door).

Nayar describes three ways the SVE mask might be deployed (one of which even works for film cameras), but since it also depends on software to interpolate the screen-door mask's results, we think it will only see the light in cameras yet to be built. Which makes it yet another reason to procrastinate.