Kodak: New sensor tech promises improved sensitivity
(Thursday, June 14, 2007 - 05:02 EDT)

Eastman Kodak Co. has today announced a new technology for image sensors that it says will offer a 2x to 4x increase in sensitivity over existing Bayer-filtered types, equivalent to a one to two stop improvement.

Almost all digital cameras to date use a single image sensor capable of distinguishing only one color per pixel location, the specific color being selected by a dye overlaying the photoreceptor that blocks light unless it falls within the desired wavelengths. Most current digicams use what's known as the Bayer color filter array, in which successive rows of pixels alternate between repeating blue and green pixels, and repeating green and red pixels (a diagram illustrating this can be seen further below in this news item). Kodak itself is the source of the Bayer filter technology, which was developed in 1976 by Dr. Bryce E. Bayer, a scientist with the company.
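For readers who want to see the layout concretely, here is a minimal Python sketch of the Bayer mosaic just described. The GRBG tile orientation used here is one common choice and an assumption on our part - the orientation of the 2x2 tile varies between sensors.

```python
import numpy as np

# Minimal sketch of the Bayer color filter array: each pixel samples only
# one color, in a repeating 2x2 tile (half green, a quarter each red and
# blue). The GRBG orientation below is an assumed, common arrangement.
def bayer_cfa(height, width):
    cfa = np.empty((height, width), dtype="<U1")
    cfa[0::2, 0::2] = "G"   # even rows alternate green / red
    cfa[0::2, 1::2] = "R"
    cfa[1::2, 0::2] = "B"   # odd rows alternate blue / green
    cfa[1::2, 1::2] = "G"
    return cfa

print(bayer_cfa(4, 4))
# [['G' 'R' 'G' 'R']
#  ['B' 'G' 'B' 'G']
#  ['G' 'R' 'G' 'R']
#  ['B' 'G' 'B' 'G']]
```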

Now, Kodak has announced a new concept it hopes will replace the Bayer filter in many applications. There is not one specific pattern, but rather a family of color filter arrays that share one trait: in addition to the red, green, and blue pixels achieved via dyes on top of the image sensor, Kodak has added what it calls "panchromatic" pixels - essentially pixels capable of capturing light of all visible wavelengths. The panchromatic pixels can be used to capture a relatively high-resolution monochromatic (black and white) image, while the smaller number of color pixels capture a lower-resolution color image. The two can then be combined to create the final image. The technique relies on the fact that the human eye is more sensitive to luminance (brightness) information than it is to chrominance (color). The same trait is already taken advantage of in other areas - for example, the JPEG image compression used by most digital cameras relies on it to offer good compression with minimal loss of perceived image quality.
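As a rough illustration of the idea - and only an illustration, since Kodak's actual reconstruction algorithms are proprietary and not described in the announcement - the sketch below merges a hypothetical full-resolution panchromatic luminance plane with an upsampled low-resolution color image, by rescaling the color image so its luminance matches the pan plane:

```python
import numpy as np

# Illustrative sketch only: combine a high-resolution panchromatic
# (luminance) plane with lower-resolution color information, along the
# lines Kodak describes. Both inputs are hypothetical, already-interpolated
# planes on the same pixel grid; Kodak's real algorithms are not public.
def merge_pan_and_color(pan_luma, rgb_lowres):
    """pan_luma: HxW luminance from pan pixels (0..1).
    rgb_lowres: HxWx3 color image upsampled to the same grid (0..1)."""
    # Luminance of the color image (Rec. 601 weights, a common choice).
    luma_of_color = (0.299 * rgb_lowres[..., 0]
                     + 0.587 * rgb_lowres[..., 1]
                     + 0.114 * rgb_lowres[..., 2])
    # Scale the color image so its brightness follows the pan plane while
    # keeping the chrominance (color ratios) from the RGB pixels.
    scale = pan_luma / np.clip(luma_of_color, 1e-6, None)
    return np.clip(rgb_lowres * scale[..., None], 0.0, 1.0)
```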

The advantage of the new technology is that since the panchromatic pixels have no color filter, they capture all of the visible light landing at their locations, whereas traditional Bayer sensors discard up to two thirds of the light arriving at each pixel. More light captured means a better signal-to-noise ratio and hence higher sensitivity - so in theory the new tech should be able to deliver reduced noise, increased shutter speeds, smaller pixels, or some combination of the above. One potential problem arises from the fact that the panchromatic pixels capture more light than the color-filtered pixels: the greater number of photons they collect will fill the capacity of their wells well before the filtered pixels. Since the pan pixels would have to be read out before overflowing to yield a usable image, the filtered pixels would at that point still be mostly empty, leading to increased noise in the lower-resolution chrominance image. This is alleviated somewhat by the fact that multiple pixels of the same color are combined to make the lower-resolution image from which the final image derives its color information. In talking with Kodak prior to the announcement we were also told that there are tricks which can be applied in reading out the sensor to lessen the impact of this issue, but at press time it wasn't immediately clear what these were.
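To see why discarding less light helps, here is a quick back-of-envelope calculation - our own illustrative numbers, not Kodak's - using the standard shot-noise model in which SNR scales with the square root of the photon count:

```python
import math

# Back-of-envelope illustration (not Kodak figures): photon shot noise goes
# as sqrt(N), so SNR ~ sqrt(N). If a color-filtered pixel passes roughly a
# third of the incident light while a panchromatic pixel passes nearly all
# of it, the pan pixel's SNR advantage is about sqrt(3).
photons_incident = 900            # hypothetical photons arriving at a pixel
filtered = photons_incident / 3   # roughly two thirds discarded by the filter
pan = photons_incident            # no filter: (nearly) all light collected

snr_filtered = math.sqrt(filtered)
snr_pan = math.sqrt(pan)
print(f"filtered SNR ~ {snr_filtered:.1f}, pan SNR ~ {snr_pan:.1f}, "
      f"gain ~ {snr_pan / snr_filtered:.2f}x")
# The same 3x difference also shows why the pan wells fill sooner at a
# given exposure, the overflow issue discussed above.
```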

Kodak is describing the new tech as "additive", in that the only change necessary to implement it in an existing sensor design is to alter the pattern of dyes in the color filter array (omitting the dyes above some pixels altogether). Obviously, there would also need to be changes to a camera's processing pipeline to account for the change in filtering. While the technology is applicable to both CCD and CMOS image sensors, CMOS has the potential to offer advantages since it allows for on-chip image processing: a CMOS chip with panchromatic pixels could, for example, process the pan and filtered images on-sensor and output the final RGB image directly.

It should be noted that, as mentioned earlier, Kodak is not announcing a single filter pattern for the technology. Instead there will be several different patterns available, each offering specific advantages and disadvantages in terms of sensitivity, color resolution, the processing power required to create the final image, the ease of decimating the pattern to Bayer RGB for cameras capable of capturing video, and more. Press materials we received showed three possibilities (described simply as patterns A, B and C, pictured below); potentially there could be others as well.


(Diagrams: Pattern A, Pattern B, Pattern C)
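To illustrate what "decimating to Bayer" might look like in practice, here is a toy example. The pattern used below is purely hypothetical - Kodak's actual patterns were shown only as diagrams - with panchromatic pixels placed in alternate columns so that dropping them leaves a standard half-width Bayer mosaic a conventional video pipeline could consume:

```python
import numpy as np

# Hypothetical illustration of decimating a pan-plus-RGB pattern to Bayer.
# This is NOT one of Kodak's actual patterns; panchromatic pixels (P) are
# simply assumed to fill the odd columns here.
tile = np.array([["G", "P", "R", "P"],
                 ["B", "P", "G", "P"]])

bayer_view = tile[:, 0::2]   # keep only the color-filtered columns
print(bayer_view)
# [['G' 'R']
#  ['B' 'G']]
```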

Kodak is currently planning to offer the first image sensors using this technology to camera manufacturers for sampling in the first quarter of 2008. The first such sensors are expected to be targeted at consumer products. More information can be found in Kodak's Fact Sheet and FAQ documents, which we've replicated below, as well as in the official press release. Finally, following below are three sample images demonstrating the potential of the new technology. Kodak tells us that none of the three images is a simulation; rather, they are real-world samples designed to highlight the possible improvements. All three were captured using a laptop-based engineering prototype with an identical image sensor / pixel design but a different color filter array, plus changes to the image-reconstruction software (obviously necessary to account for the different array). The first image shows the potential for improved sensitivity, with markedly decreased image noise at the same sensitivity and shutter speed. The second demonstrates the potential for reduced blurring from camera shake, courtesy of increased sensitivity and shutter speed at similar noise levels (it was shot tripod-mounted, with the tripod deliberately banged as the photo was taken to produce comparable amounts of camera shake). Finally, the third shot demonstrates the potential for reduced blurring from subject motion, again achieved by raising the sensitivity and shutter speed while keeping roughly the same level of image noise.

 


Fact Sheet:

Current Technology
Today, almost all color image sensors are designed using the “Bayer Pattern,” an arrangement of red, green, and blue (RGB) pixels first developed by Kodak scientist Dr. Bryce E. Bayer in 1976. A Bayer filter mosaic is a color filter array (CFA) that arranges RGB color filters on a square grid of photosensors, and it is the arrangement used in most single-chip digital image sensors to create a color image.

In this design, half of the pixels on the sensor are used to collect green light, with the remaining pixels evenly split between sensitivity to red and blue light. After exposure, software is used to reconstruct a full RGB image at each pixel in the final image. This design is currently the de facto standard for generating color images with a single image sensor, and is widely used throughout the industry.
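As an illustration of the reconstruction step described above, here is a minimal bilinear demosaicing sketch in Python (using scipy for the neighbourhood averaging). Real camera pipelines use far more sophisticated, edge-aware algorithms, and the GRBG layout assumed here is just one common arrangement:

```python
import numpy as np
from scipy.ndimage import convolve

# Minimal sketch of bilinear Bayer demosaicing: for each channel, keep the
# measured samples and fill the missing locations with the average of the
# known neighbours. `raw` is an HxW mosaic assumed to use a GRBG layout.
def demosaic_bilinear(raw):
    h, w = raw.shape
    raw = raw.astype(float)
    rgb = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3))
    masks[0::2, 1::2, 0] = 1          # red sample sites
    masks[0::2, 0::2, 1] = 1          # green sites (even rows)
    masks[1::2, 1::2, 1] = 1          # green sites (odd rows)
    masks[1::2, 0::2, 2] = 1          # blue sample sites
    kernel = np.ones((3, 3))
    for c in range(3):
        samples = raw * masks[..., c]
        num = convolve(samples, kernel, mode="mirror")
        den = convolve(masks[..., c], kernel, mode="mirror")
        filled = num / np.maximum(den, 1e-6)
        # Keep the original measurement where this channel was sampled.
        rgb[..., c] = np.where(masks[..., c] > 0, raw, filled)
    return rgb
```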

New Technology
The new approach builds upon the standard Bayer pattern by adding panchromatic pixels – pixels that are sensitive to all visible wavelengths – to the RGB pixels present on the sensor. Since no wavelengths of visible light are excluded, these panchromatic pixels allow a (black and white) image to be detected with high sensitivity. The remaining RGB pixels present on the sensor are then used to collect color information, which is combined with the information from the pan pixels to generate the final image.

Note that this is not one single pattern, but a concept – the use of panchromatic pixels to increase the overall sensitivity of the sensor. Depending on the application, different patterns may be more appropriate for use. For example, one natural trade-off is the balance between the sensor’s overall sensitivity (via the pan pixels) and how well the sensor collects color information (via the RGB pixels). The highest sensitivity would come from a sensor composed only of pan pixels, but such a sensor would provide no color information. By changing the ratio of pan to RGB pixels, applications with different sensitivity and color needs can be best accommodated. Other considerations might be the ease of image reconstruction (i.e., patterns optimized for applications where reduced processing power is available), or backward compatibility with video subsystems (where the raw data from the sensor easily decimates to a standard Bayer RGB pattern for input into video processors).
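A quick back-of-envelope example of that sensitivity trade-off, using illustrative assumptions only (a color filter is taken to pass roughly a third of the incident light, a pan pixel all of it):

```python
# Illustrative arithmetic, not Kodak data: the sensor's average light
# collection scales with the fraction of panchromatic pixels, at the cost
# of fewer pixels carrying color information.
for pan_fraction in (0.0, 0.25, 0.5, 0.75):
    avg_collected = pan_fraction * 1.0 + (1 - pan_fraction) * (1 / 3)
    gain_vs_bayer = avg_collected / (1 / 3)
    print(f"pan fraction {pan_fraction:.2f}: ~{gain_vs_bayer:.1f}x light vs Bayer")
```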

This technology increases the overall sensitivity of the sensor, as more of the photons striking the sensor are collected and used to generate the final image. This provides an increase in the photographic speed of the sensor, which can be used to improve performance when imaging under low light, enable faster shutter speeds (to reduce motion blur when imaging moving subjects), or enable the design of smaller pixels (leading to higher resolutions in a given optical format) while retaining performance.
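Expressed in photographic stops, each stop is a doubling of light, so the claimed 2x to 4x sensitivity gain works out to one to two stops:

```python
import math

# Photographic stops are a base-2 measure of exposure: stops = log2(gain).
for gain in (2.0, 3.0, 4.0):
    print(f"{gain:.0f}x gain = {math.log2(gain):.1f} stops")
```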


FAQ:

General

  • What is being announced?
    Kodak has developed a new approach for determining color information on image sensors that provides a 2x to 4x improvement (from one to two photographic stops) in the sensitivity of the sensor compared to current designs. By deploying these new arrangements of color pixels on the image sensor, cameras will be able to offer improved performance and higher image quality under demanding photographic conditions such as image capture under low light, or stop-action photography of quickly moving subjects.

  • How is this accomplished?
Today, almost all color image sensors are designed using the “Bayer Pattern,” an arrangement of red, green, and blue pixels first developed by Kodak scientist Dr. Bryce Bayer in 1976. In this design, half of the pixels on the sensor are used to collect green light, with the remaining pixels evenly split between sensitivity to red and blue light. After exposure, software is used to reconstruct a full RGB image at each pixel in the final image. This design, the de facto standard for generating color images using a single image sensor, is widely used throughout the industry.

    The new approach builds upon the standard Bayer pattern by adding panchromatic pixels – pixels that are sensitive to all visible wavelengths – to the RGB pixels present on the sensor. Since all wavelengths of visible light are detected, these panchromatic pixels allow a black and white image to be detected with high sensitivity. The remaining RGB pixels present on the sensor are then used to collect color information, which is combined with the information from the panchromatic pixels to generate the final image.

  • What is the science behind how this works?
    The key is that human vision is much more sensitive to luminance information (the black and white detail of an image) than chrominance information (the color components of an image). Imaging systems routinely take advantage of this difference – for example, analog television signals are split into luma and chroma channels, with more bandwidth typically allocated to the luma channel (because of its importance in the final image). The impact of this luma / chroma split on perceived final image quality was in fact recognized in the original Bayer patent, where the green pixels were identified as being “luminance-sensitive” and the red and blue pixels as “chrominance-sensitive.”

    By using more sensitive panchromatic pixels to act as the luminance channel of the final image, image sensors using these new patterns and software demonstrate greater overall sensitivity while providing color information in the final image.
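For a concrete sense of how imaging systems exploit that luma/chroma asymmetry, the sketch below mimics JPEG-style 4:2:0 chroma subsampling: luminance is kept at full resolution while each chrominance plane is stored at half resolution in each dimension (by simple decimation here, where real encoders typically average 2x2 blocks):

```python
import numpy as np

# Luminance kept at full resolution, chrominance at quarter area - little
# visible loss, because the eye resolves brightness detail far better than
# color detail. Simple decimation is used for brevity.
def subsample_420(ycbcr):
    """ycbcr: HxWx3 array holding Y, Cb, Cr planes (H and W even)."""
    y = ycbcr[..., 0]             # full-resolution luma
    cb = ycbcr[0::2, 0::2, 1]     # half-resolution chroma in each dimension
    cr = ycbcr[0::2, 0::2, 2]
    return y, cb, cr

img = np.random.rand(8, 8, 3)
y, cb, cr = subsample_420(img)
print(y.shape, cb.shape, cr.shape)   # (8, 8) (4, 4) (4, 4)
```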

  • How many different designs are there – is there only one pattern, or are there multiple options?
    This is not one single pattern, but a concept – the use of panchromatic pixels to increase the overall sensitivity of the sensor. Depending on the application, different patterns may be more appropriate for use.

  • Is there a simple example of when one pattern might be more appropriate for use in an application than another?
    One natural trade-off is the balance between the sensor’s overall sensitivity (via the panchromatic pixels) and how well the sensor collects color information (via the RGB pixels). The highest sensitivity would come from a sensor composed only of pan pixels, but would provide no color information. By changing the ratio of pan to RGB pixels, applications with different sensitivity and color needs can be best accommodated.

    Other considerations might be the ease of image reconstruction (i.e., patterns optimized for applications where reduced processing power is available), or backward compatibility with video subsystems (where the raw, subsampled data from the sensor can be sent directly to standard processing chips for video signals).

  • When will Kodak image sensors using this technology be available?
    Kodak expects the first image sensor using this technology to be sampling in Q1 2008.

  • When will Kodak cameras be available that use this technology?
    This technology offers an exciting opportunity to improve the performance of digital cameras, and is planned for use in future Kodak digital cameras. However, Kodak does not comment on the specifics of new products before their announcement.

  • Is this technology targeted to only one market, or is it appropriate to several?
    While our initial plans are to deploy this technology into image sensors for consumer applications such as digital still cameras (DSCs) and camera phones, this technology can be broadly applied to a number of markets.

  • Where can I go to get more information on the science behind this new technology?
    Kodak has posted a blog entry on this new technology, which is available at http://1000nerds.kodak.com

For Consumers

  • Why should a consumer care about new image sensor technology?
    Image sensors act as the “eye” of a digital camera or camera phone by converting light into electric charge to begin the capture process. Improvements in image sensors directly impact the image quality and performance available from digital cameras and camera phones.

  • How will this new technology benefit consumers?
    By improving the sensitivity of the image sensor, cameras will now be able to take better pictures under low light, or use faster shutter speeds to stop action or reduce the image blur that can come with hand-held exposures.

  • When will cameras be available that use these new sensors?
    Kodak expects the first image sensors using this technology to be sampling in Q1 2008. As with any new image sensor, additional time will be required by camera manufacturers to design, develop and manufacture a consumer product using this component.

  • When will Kodak cameras be available that use this new technology?
    This technology offers an exciting opportunity to improve the performance of digital cameras, and is planned for use in future Kodak digital cameras. However, Kodak does not comment on the specifics of new products before their announcement.

For Design Partners

  • Does this technology apply to CCD sensors, CMOS sensors, or both? Are there advantages/disadvantages of deploying this technology on one sensor platform over another?
    This technology is appropriate to both CMOS and CCD sensors. While there is no significant difference in the underlying performance of this technology across these two platforms, CMOS does provide some other advantages for the deployment of this technology, such as the ability to include SOC components to process the data directly on the sensor.

  • What exactly will Kodak sell – image sensors with these new patterns, image sensors plus software to process the raw data, or CMOS image sensors that include processing software on the sensor itself?
    Initially, Kodak will offer sensors that deploy these new patterns, plus software to process the data. Longer term, Kodak plans to also include these software algorithms directly onto CMOS image sensors, allowing the sensor to output a full-color RGB image.

  • Are you working with any technology partners to enable the use of these new sensors?
    Kodak is in conversation with technology partners to provide chip-level support for the algorithms needed to process images from image sensors using these new patterns. At this time, no specific announcements are available.

Original Source Press Release:

New KODAK Image Sensor Technology Redefines Digital Image Capture

Next Generation Color Filter Patterns Deliver Higher Quality Photos Under Low-Light Conditions

ROCHESTER, N.Y., June 14, 2007 – Eastman Kodak Company (NYSE:EK) today introduced a groundbreaking advancement in image sensor technology that will help make dark, blurry digital photos a thing of the past.

Kodak’s new sensor technology provides a significant increase in sensitivity to light when compared to current sensor designs. With this new technology, users will realize a 2x to 4x increase in sensitivity (from one to two photographic stops), which will improve performance when taking pictures under low light and reduce motion blur when imaging moving subjects. In addition, this technology enables the design of smaller pixels (leading to higher resolutions in a given optical format) while retaining imaging performance.

This breakthrough advances an existing Kodak technology that has become a standard in digital imaging. Today, the design of almost all color image sensors is based on the “Bayer Pattern,” an arrangement of red, green, and blue pixels that was first developed by Kodak Scientist Dr. Bryce Bayer in 1976. In this design, half of the pixels on the sensor are used to collect green light, with the remaining pixels split evenly between sensitivity to red and blue light. After exposure, software reconstructs a full color signal for each pixel in the final image.

Kodak’s new proprietary technology builds on the existing Bayer Pattern by adding panchromatic, or “clear” pixels to the red, green, and blue pixels already on the sensor. Since these pixels are sensitive to all wavelengths of visible light, they collect a significantly higher proportion of the light striking the sensor. The remaining red, green, and blue pixels are then used to record the color information of the scene.

To reconstruct a full color image, Kodak has also developed new software algorithms specifically designed to work with the raw data generated from these new image sensors. These sophisticated algorithms use the more sensitive panchromatic pixels to act as the luminance channel of the final image, and derive chrominance information from the color pixels on the sensor. Leveraging over 30 years of Kodak image science, these new algorithms support the increased sensitivity provided by these new pixel patterns, while retaining the overall image quality and color fidelity required by customers.

“This represents a new generation of image sensor technology and addresses one of the great challenges facing our industry – how to capture crisp, clear digital images in a poorly lit environment,” said Chris McNiffe, General Manager of Kodak’s Image Sensor Solutions group. “This is a truly innovative approach to improving digital photography in all forms, and it highlights Kodak’s unique ability to deliver advanced digital technologies that really make a difference to the consumer.”

Kodak is beginning to work with a number of leading companies to implement this new technology in system-wide solutions and to streamline the design-in process.

Initially, Kodak expects to develop CMOS sensors using this new technology for consumer markets such as digital still cameras and camera phones. As the technology is appropriate for use with both CCD and CMOS image sensors, however, its use can be expanded across Kodak’s full portfolio of image sensors, including products targeted to applied imaging markets such as industrial and scientific imaging. The first Kodak sensor to use this technology is expected to be available for sampling in the first quarter of 2008.

For additional information regarding this technology, please contact Image Sensor Solutions, Eastman Kodak Company at (585) 722-4385 or by email at [email protected]. For more information on Kodak’s entire image sensor product line, please visit www.kodak.com/go/imagers.

Editor’s Note: For more information, including media b-roll and a blog posting on this new technology, go to:

http://www.kodak.com/go/mediabroll (Media B-Roll)
http://www.kodak.com/go/media_events (Podcast/Product Photography/Fact Sheets)
http://1000nerds.kodak.com (Blog)

About Eastman Kodak Company
Kodak is the world’s foremost imaging innovator. With sales of $10.7 billion in 2006, the company is committed to a digitally oriented growth strategy focused on helping people better use meaningful images and information in their life and work. Consumers use Kodak’s system of digital and traditional products and services to take, print and share their pictures anytime, anywhere; Businesses effectively communicate with customers worldwide using Kodak solutions for prepress, conventional and digital printing and document imaging; and Creative Professionals rely on Kodak technology to uniquely tell their story through moving or still images.

More information about Kodak (NYSE: EK) is available at www.kodak.com.

###

Kodak is a trademark of Eastman Kodak Company.
