!! EARLY DRAFT !!
MIT Media Laboratory TVOT Group
November 18, 1998
This is a project exploring color constancy. Color constancy is the perceptual mechanism that provides humans with color vision that is relatively independent of the spectral content of the illumination of a scene. It may also be referred to as the computation of perceived surface color. Color constancy has two features: spectral normalization and spatial decomposition. Spectral normalization refers to our ability to correct for temporal changes in the spectral content of the scene illumination. Spatial decomposition, on the other hand, refers to our ability to ignore changes in the illuminant across the scene.
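To make the spectral normalization problem concrete, the following sketch (illustrative three-band values of my own choosing, not measurements from this report) models a sensor response as the per-band product of surface reflectance and illuminant power; the same gray patch yields different RGB values under a flat versus a reddish illuminant, which is the change a constancy mechanism must correct for:

```python
# Toy three-band model: sensor response = reflectance * illuminant power,
# band by band. All values are illustrative, not measured data.

def sensed_rgb(reflectance, illuminant):
    """Per-band product of surface reflectance and illuminant power."""
    return [r * e for r, e in zip(reflectance, illuminant)]

gray_patch = [0.5, 0.5, 0.5]   # spectrally flat (gray) surface
flat_light = [1.0, 1.0, 1.0]   # equal-energy illuminant
warm_light = [1.3, 1.0, 0.6]   # tungsten-like, red-heavy illuminant

# The same surface produces different sensor responses under the two lights:
print(sensed_rgb(gray_patch, flat_light))   # [0.5, 0.5, 0.5]
print(sensed_rgb(gray_patch, warm_light))   # [0.65, 0.5, 0.3]
```

A system without spectral normalization would report the second response as a different surface color, even though only the illuminant changed.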
Ideally, a color constancy theory should also explain a closely related perceptual experience, color contrast. Color contrast refers to the change in perceived color of a colored patch as its local surround (within 2 degrees of visual field [BB88]) is changed. Color constancy, on the other hand, refers to the lack of change in the perceived color of a colored patch as the global illumination changes. The Human Visual System does not exhibit perfect color constancy [Hel38] [Jud40] [BW92]. It also performs poorly under lighting with abnormal spectral content (e.g. sodium arc lamps, or early fluorescents).
When inputting video into a computer system, many of the same problems encountered by the human visual system arise. Existing video systems handle the color constancy problem using several techniques. Video cameras are typically calibrated using a matte white surface illuminated by the same lights as the scene. In addition, automatic extension of the sensor dynamic range is usually provided through Automatic Iris and Automatic Gain Control (AGC) circuits. The camera ``white balance'' calibration process (a von Kries adaptation) is unwieldy, infrequently performed, and not supported on cheaper cameras. The dynamic range techniques confound simpler vision algorithms by producing video in which objects may change color as the overall luminance of the scene changes, and so are typically disabled. The addition of a ``color constancy'' algorithm to the digital image processing pipeline is proposed to alleviate these problems, working in conjunction with the camera's automatic dynamic range (AGC) circuits to provide input video that may be treated as lit by a standard illuminant.
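The white balance step mentioned above is, at its core, a diagonal (von Kries) transform: each channel is divided by the response the sensor recorded for the matte white reference under the scene illuminant. A minimal sketch, with the function name and sample values being illustrative assumptions rather than anything from this report:

```python
# Von Kries adaptation as a diagonal transform: divide each channel by the
# sensor's response to a known white reference under the scene illuminant.

def von_kries_correct(pixel, white_reference):
    """Scale each channel so the white reference maps to (1, 1, 1)."""
    return [p / w for p, w in zip(pixel, white_reference)]

# White card as measured under a reddish scene illuminant:
white_measured = [1.3, 1.0, 0.6]

# A mid-gray surface measured under that same illuminant:
gray_measured = [0.65, 0.5, 0.3]

corrected = von_kries_correct(gray_measured, white_measured)
# corrected is approximately [0.5, 0.5, 0.5]: the gray surface reads as gray again.
```

Because the correction depends entirely on the measured white reference, it must be redone whenever the illuminant changes, which is why the manual calibration procedure is described above as unwieldy.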
This report begins with a whirlwind tour of previous work in the area. The majority of computational models proposed for color vision may be classified into one of four groups: lightness algorithms, spectral basis algorithms, specularity algorithms, and segmentation algorithms. (I recommend Chapter 9 of Wandell [Wan95] for a better introduction.) I will then describe a series of experiments performed in an attempt to develop efficient color constancy algorithms for use in our object-oriented video work. This project has no real conclusion; instead, a summary of the work done is presented, along with directions for future work.