Progress in color night vision

Optical Engineering / Volume 51 / Issue 1 / Review Papers
Alexander Toet and Maarten A. Hogervorst


TNO, Kampweg 5, 3769 DE Soesterberg, The Netherlands
(Received Sep 22, 2011; accepted Nov 17, 2011; published online Feb 06, 2012)

Introduction
The increasing availability and deployment of imaging sensors operating in multiple spectral bands,1-9 in combination with lenses that cover a broad spectral range10 and dedicated image fusion hardware,11-15 has led to a requirement for new presentation techniques that convey the information from these sensors to the human operator in an effective and ergonomic way. Effective combinations of complementary and partially redundant multispectral imagery can provide information that is not directly evident from the individual input images. Some potential benefits of image fusion are wider spatial and temporal coverage, decreased uncertainty, improved reliability, and increased system robustness. Image fusion has important applications in the areas of defense and security such as situational awareness,16 surveillance,17 target tracking,18 intelligence gathering,19 concealed weapon detection,20-25 detection of abandoned packages26 and buried explosives,27 and face recognition.28,29 In the context of several allied soldier modernization programs,12 image fusion has recently gained significant importance, particularly for application in head-borne systems. Other important image fusion applications are found in industry,30-32 art analysis,33 agriculture,34 remote sensing,35-38 and medicine39-42 (for a survey of
different applications of image fusion techniques, see Ref. 43). The current study concerns the fusion of multiband imagery for observational tasks that benefit from realistic color rendering such as navigation and surveillance. Image fusion for human inspection should (1) combine information from two or more images of a scene into a single composite image that is more informative than each of the input images alone and should (2) present the fused imagery in a format that maximizes recognition speed by minimizing cognitive workload. The fusion process should therefore maximize the amount of relevant information and minimize the amount of irrelevant details (clutter), uncertainty, and redundancy in the output. Thus, image fusion should preserve task-relevant information from the source images, prevent the occurrence of artifacts or inconsistencies in the fused image, and suppress irrelevant features (e.g., noise) from the source images.
In addition, the representation of fused imagery should optimally agree with human cognition44 so that humans can quickly grasp the gist and
meaning of the displayed information. For instance, the representation of spatial details should effortlessly elicit the recognition of familiar objects, and the color schemes used should be intuitive. Fused imagery has traditionally been represented in gray tones. However, the increasing availability of fused and multiband vision systems has led to the development of numerous color fusion schemes.25,45-67 In principle, color imagery has several benefits over monochrome imagery for human inspection. While the human eye can only distinguish about 100 shades of gray at any instant, it can discriminate several thousands of colors. As a result, color may improve feature contrast and reduce visual clutter, thereby enabling better scene segmentation, object detection, and depth perception. Color imagery may therefore yield a more complete mental representation of the perceived scene, resulting in better situational awareness. For realistic and diagnostically colored scenes, scene understanding and recognition, reaction time, and object identification are indeed faster and more accurate with color imagery than with monochrome imagery68-75 (for an overview, see Ref. 55). Also, observers are able to selectively attend to task-relevant color targets and to ignore nontargets with a task-irrelevant color.76-78 As a result, simply producing a fused image by mapping multiple spectral bands into a 3-D color space may serve to increase the dynamic range of a sensor system79 and may thus provide immediate benefits such as improved detection probability, reduced false alarm rates, reduced search times, and increased capability of detecting camouflaged targets and discriminating targets from decoys.50,80

This is illustrated in Fig. 1, the first row of which shows an example of a target that is only represented in one of the bands of a hypothetical two-band imaging system [Fig. 1(a)]. Simply fusing both bands [Figs. 1(a) and 1(b)] in an RG false color representation [Fig. 1(c)] already yields a representation in which the target is clearly visible due to its color contrast. Note that the hue of the target indicates from which band the signal originates. By comparison, the target is less distinct in a grayscale representation of Fig. 1(c) [see Fig. 1(d)]. The second row shows a similar example, but now the target is only represented by the second band. The third row shows a textured target in a noisy background. Although the target is represented with both positive contrast in the first band and negative contrast in the second band, it cannot be distinguished in either of the bands individually, since its internal structure (which is similar to that of the background) has a camouflaging effect. However, in an RG false color representation [Fig. 1(k)] of both bands [Figs. 1(i) and 1(j)] the target clearly pops out due to its color contrast. Notice that the target completely disappears in the grayscale-fused representation [Fig. 1(l)] of Fig. 1(k).
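As an illustrative sketch of this kind of two-band false color mapping (not code from any of the systems described in this paper), the listing below maps two co-registered bands to the R and G channels of an RGB image; the scaling of the inputs to [0, 1] and the plain channel average used for the grayscale comparison are assumptions.

```python
import numpy as np

def rg_false_color(band1, band2):
    """Map two co-registered bands (floats in [0, 1]) to the R and G channels
    of an RGB false color image; the B channel is left at zero."""
    r = np.clip(band1, 0.0, 1.0)
    g = np.clip(band2, 0.0, 1.0)
    return np.dstack([r, g, np.zeros_like(r)])

def grayscale_version(rgb):
    """Grayscale rendering used for comparison (a simple channel average,
    chosen here purely for illustration)."""
    return rgb.mean(axis=2)
```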

Fig. 1
[(a), (e), and (i)] The first and [(b), (f), and (j)] second columns represent the first and second channels of a hypothetical two-band imaging system, respectively. [(c), (g), and (k)] The third and [(d), (h), and (l)] fourth columns represent the RG false color and grayscale-fused representation of the two bands, respectively.

In general, the color mapping should be chosen with care and should be adapted to the task at hand.81 Although general design rules can be applied to ensure that the information available in the sensor image is optimally conveyed to the observer,37 it is not trivial to derive a mapping from the various sensor bands to the three independent color channels. In practice, many tasks may benefit from a representation that renders fused imagery in realistic colors. Realistic colors facilitate object recognition by allowing access to stored color knowledge.82 Experimental evidence indicates that object recognition depends on stored knowledge of the object's chromatic characteristics.82 In natural scene recognition, optimal reaction times and accuracy are typically obtained for realistically (or diagnostically) colored images, followed by their grayscale version, and lastly by their (nondiagnostically) false colored version.68,69,71,75,83

When sensors operate outside the visible waveband, artificial color mappings inherently yield false color images whose chromatic characteristics do not correspond in any intuitive or obvious way to those of a scene viewed under realistic photopic illumination.84 As a result, this type of false color imagery may disrupt the recognition process by denying access to stored knowledge. In that case, observers need to rely on color contrast to segment a scene and recognize the objects therein. This may lead to a performance that is even worse than single band imagery alone.85-87 Experiments have indeed demonstrated that a false color rendering of fused night-time imagery that resembles realistic color imagery significantly improves observer performance and reaction times in tasks that involve scene segmentation and classification88-95 and that the simulation of color depth cues by varying saturation can restore depth perception,88,89,96 whereas color mappings that produce counterintuitive (unrealistic looking) results are detrimental to human performance. One of the reasons often cited for inconsistent color mapping is a lack of physical color constancy.89 Thus, the challenge is to give night vision imagery an intuitively meaningful (realistic) color appearance that is also stable for camera motion and changes in scene composition and lighting conditions. A realistic and stable color representation serves to improve the viewer's scene comprehension and enhance object recognition and discrimination.54 Several different techniques have been proposed to render night-time imagery in color.48,49,97-100 Simply mapping the signals from different night-time sensors (sensitive in different spectral wavebands) to the individual channels of a standard RGB color display or to the individual components of a perceptually decorrelated color space, sometimes preceded by a principal component transform or followed by a linear transformation of the color pixels to enhance color contrast, usually results in imagery with an unrealistic color appearance.96,100-103 More intuitive color schemes may be obtained by opponent processing through feedforward center-surround shunting neural networks similar to those found in vertebrate color vision.104-112 Although this approach produces fused night-time images with appreciable color contrast, the resulting color schemes remain rather arbitrary and are usually not strictly related to the actual day-time color scheme of the scene that is registered. We therefore suggest giving fused multiband night-time imagery a realistic color appearance by using statistical methods to transfer the color distribution of realistic daylight images.98 This approach has recently received considerable attention45,48,49,60,62-66,97,113-120 and has successfully been applied to colorize fused intensified visual and thermal imagery,45,48,49,60,62-66,114,115,118,119,121,122 forward-looking infrared (FLIR) imagery,97 synthetic aperture and FLIR imagery,113 remote sensing imagery,120 and polarization imagery.117 However, color transfer methods based on global or semilocal (regional) image statistics typically do not achieve color constancy and are computationally expensive.

To alleviate these drawbacks, we recently introduced a look-up-table transform-based color mapping to give fused multiband night-time imagery a realistic color appearance.123-125 The transform can either be defined by applying a statistical transform to the color table of an indexed false color night vision image or by establishing a color mapping between a set of corresponding samples taken from a day-time color reference image and a multiband night-time image. Once the mapping has been defined, it can be implemented as a color look-up-table transform. As a result, the color transform is extremely simple and fast and can easily be applied in real time with standard hardware. Moreover, it yields fused images with a realistic color appearance and provides object color constancy, since the relation between sensor output and colors is fixed. The sample-based mapping is highly specific for different types of materials in the scene and can therefore easily be adapted for the task at hand such as optimizing the visibility of camouflaged targets. In the next section, we present an overview of the historical development and the current state-of-the-art color-mapping techniques that can be deployed to give multiband night vision imagery a realistic or intuitive color appearance. First, we present a simple pixel-based false color-mapping scheme that was inspired by previously developed color opponent fusing schemes. This scheme yields fused false color images with large color contrast and preserves the identity of the input signals (thus making the images perceptually intuitive126). As a result, it has successfully been deployed in different areas of research. However, the resulting color representation is not strictly realistic. Therefore, we developed a statistical extension of this coloring method that has the capability to produce colorized fused multiband imagery with a realistic color appearance.98 This mapping transfers the first order statistics of a given color distribution (typically the color distribution of a reference color image) to a multiband target image, thereby giving the target (night vision) image a similar color appearance as the reference (color photograph) image. Although the statistical mapping approach yields a realistic color rendering, it achieves no color constancy since the mapping depends on the relative amounts of the different materials in the scene (and therefore depends on the momentary scene content). Also, in its original form, this method is computationally expensive. However, we achieved both color constancy and computational simplicity (enabling real-time implementation) by applying the statistical mapping approach in a look-up-table framework.123-125 In contrast to the statistical color-mapping method, this sample-based color-transfer method127 is specific for different types of materials in the scene and can easily be adapted for the task at hand, such as the detection of camouflaged objects. After explaining how the sample-based look-up-table color transformation can be derived from the combination of a multiband sensor image and a corresponding day-time reference image, we discuss how it can be deployed at night and implemented in real time.

Color Fusion Methods

Center-Surround Opponent-Color Fusion

Opponent color image fusion was originally developed at the MIT Lincoln Laboratory46,111,128-131 and is derived from biological models of color vision132-135 and fusion of visible light and infrared (IR) radiation.136,137

In the case of color vision in monkeys and man, retinal cone sensitivities are broad and overlapping, but the images are contrast enhanced within bands by spatial opponent processing (via cone-horizontal-bipolar cell interactions), creating both ON and OFF center-surround response channels.138,139 These signals are then contrast enhanced between bands via interactions among bipolar, sustained amacrine, and single-opponent-color ganglion cells, all within the retina.135 Further color processing in the form of double-opponent-color cells is found in the primary visual cortex of primates (and the retinas of some fish). Opponent processing interactions form the basis of such percepts as color opponency, color constancy, and color contrast, although the exact mechanisms are not fully understood. Double-opponent-color processing has been successfully applied to multispectral IR target enhancement.130,140

A fusion of visible and thermal imagery has been observed in several classes of neurons in the optic tectum (the evolutionary progenitor of the superior colliculus) of rattlesnakes (pit vipers) and pythons (boid snakes).136,137 These neurons display interactions in which one modality (e.g., IR) can enhance or depress the response to the other sensing modality (e.g., visible) in a strongly nonlinear fashion. Such interactions resemble opponent processing between bands as observed in the primate retina. For opaque surfaces in thermodynamic equilibrium, spectral reflectivity and emissivity are linearly related at each wavelength: ε(λ) = 1 - ρ(λ), where ρ denotes reflectivity and ε emissivity. This provides a rationale for the use of both on-center and off-center channels when treating IR imagery as characterized by thermal emissivity.16

In the opponent-color image fusion methodology, the individual input images are first enhanced by filtering them with a feed-forward center-surround shunting neural network.141 This operation serves (1) to enhance spatial contrast in the individual visible and IR bands, (2) to create both positive and negative polarity IR contrast images, and (3) to create two types of single-opponent-color contrast images. The resulting single-opponent-color contrast images represent grayscale-fused images that are analogous to the IR-depressed and -enhanced visual cells of the rattlesnake.136,137
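The shunting network itself is not spelled out here; the sketch below is a rough, hypothetical rendering of a feed-forward center-surround shunting operator in its steady state (Gaussian center and surround with divisive normalization). The Gaussian widths and the constants A, B, and D are illustrative assumptions, not values from the cited work.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def shunting_center_surround(img, sigma_c=1.0, sigma_s=4.0, A=1.0, B=1.0, D=1.0):
    """Steady-state response of a feed-forward center-surround shunting
    operator: excitatory Gaussian center, inhibitory Gaussian surround,
    normalized by the local activity (illustrative parameter values)."""
    center = gaussian_filter(img.astype(float), sigma_c)
    surround = gaussian_filter(img.astype(float), sigma_s)
    return (B * center - D * surround) / (A + center + surround)

def single_opponent(center_band, surround_band, sigma_c=1.0, sigma_s=4.0):
    """Single-opponent contrast image: center drawn from one band and the
    surround from the other (e.g., visible-center / IR-surround), in the
    spirit of the between-band interactions described in the text."""
    c = gaussian_filter(center_band.astype(float), sigma_c)
    s = gaussian_filter(surround_band.astype(float), sigma_s)
    return (c - s) / (1.0 + c + s)
```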

The opponent color fusion technique was tested in several field trials using commercially available components to fuse intensified visual and thermal (long-wave [LW] IR) imagery107,109 as well as visual, LWIR, short-wave IR, and midwave IR imagery106 in real time. The opponent-fused imagery generally had a realistic color appearance.106 In various scenarios (e.g., driving and surveillance, tracking, and camouflage detection),107,109 users performed better with the opponent-color-fused imagery relative to each of the individual image modalities. In particular, it was found that opponent-color-fused imagery provided enhanced depth perception and target pop-out capabilities.109

Pixel-Based Opponent-Color Fusion


Inspired by the opponent-color fusion approach,104,111,128,129,131 we derived a simplified, pixel-based version of this method, which fused visible and thermal images into false color images with a relatively realistic or intuitive appearance. Let I1 and I2 be two input images with the same spatial resolution and dynamic range. The common component of both signals is computed as the morphological intersection:

I1 ∩ I2 = min(I1, I2). (1)

The unique or characteristic component Ik* of each image modality remains after subtraction of the common component:

I1* = I1 - (I1 ∩ I2), I2* = I2 - (I1 ∩ I2). (2)

The characteristic components are emphasized in the fused image by subtracting them from the opposite image modalities. The color-fused image is then obtained by mapping these differences to the red (R) and green (G) bands, respectively, of an RGB false color image. The characteristic components of both image modalities can be further emphasized by mapping their difference to the blue (B) band of the fused false color image so that the final mapping is given by126

R = I2 - I1*, G = I1 - I2*, B = I2* - I1*. (3)

In case of visual and thermal input images, I1 = Vis and I2 = IR. Because the method is computationally simple, it can be implemented in hardware or even be applied in real time using standard processing equipment.107,109,131 The resulting color rendering enhances the visibility of certain details and preserves the specificity of the sensor information. In addition, it has a fairly intuitive color appearance (see Figs. 2 and 3). The resulting images agree with our intuitive associations of warm (R) and cool (B). To further enhance the appearance of the fused results, the R, G, and B channels can be input to a color remapping stage in which, following conversion to hue, saturation, and value color space, hues can be remapped to alternative more realistic hues and colors can be desaturated and then reconverted back to R, G, and B signals to drive a color display.
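The mapping of Eqs. (1) to (3), as reconstructed above, reduces to a few array operations. The sketch below assumes co-registered visual and thermal images scaled to [0, 1]; the final clipping to the displayable range is an assumption on our part.

```python
import numpy as np

def pixel_based_color_fusion(vis, ir):
    """Pixel-based opponent-color fusion of Eqs. (1)-(3) with I1 = Vis and
    I2 = IR; inputs are co-registered arrays scaled to [0, 1]."""
    common = np.minimum(vis, ir)      # Eq. (1): morphological intersection
    vis_unique = vis - common         # Eq. (2): characteristic components
    ir_unique = ir - common
    r = ir - vis_unique               # Eq. (3): thermal-dominant details -> red
    g = vis - ir_unique               #          visual-dominant details  -> green
    b = ir_unique - vis_unique        #          difference of unique parts -> blue
    rgb = np.dstack([r, g, b])
    return np.clip(rgb, 0.0, 1.0)     # clip to displayable range (assumption)
```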

Fig. 2
The scene represents a person walking along the outside of a fence around a building. (a) Visual and (b) thermal input images. (c) Result of mapping (a) and (b) to the G and R channels of an RGB display, respectively. (d) Result of the fusion of (a) and (b) by Eq. (3).

Fig. 3
Left to right: successive frames of a time sequence. Top-down: (a) to (c) video, (d) to (f) thermal, (g) to (i) grayscale-fused (Laplacian pyramid), and (j) to (l) color-fused images resulting from Eq. (3).

Independent evaluation studies have established the benefits of the pixel-based color-fusion method defined by Eq. (3) in several widely different application domains, such as military surveillance, in-flight information displays, and medical imaging. Experiments investigating the human visual search and detection capability in surveillance scenarios using visual and thermal image modalities have shown that observers can determine the position of targets in briefly presented dynamic scenes with significantly higher accuracy and greater confidence when they perform with images that are fused in color according to Eq. (3) [e.g., Figs. 2(d) and 3(j)-3(l)] as opposed to each of the individual image modalities.16 This indicates that the color-fused images probably represent all relevant details at a sufficiently large perceptual contrast to allow rapid visual identification of the contents of a scene and its spatial layout. Simulated flight tests using IR and synthetic imagery that was fused in color according to Eq. (3) showed that this color fusion technique preserves all features that are relevant for obstacle avoidance and significantly improves detection distances for all simulated visibility conditions.142,143
In ophthalmology, visual fundus images are often used in combination with fluorescein angiogram images. Visual images of the retina clearly represent hard exudates. Fluorescein angiogram images represent the macula, the arteries, and veins at high contrast, thus allowing the detection of occluded and leaking capillaries, microaneurysms, macular edema, and neovascularization. Both images are typically acquired with different devices over large periods of time. As a result, direct visual comparison of both images is not possible, and they first need to be warped, scaled, interpolated, and aligned before they can actually be compared.144-147 To be useful for mass diagnostic screening (e.g., of diabetic retinopathy), fundus and angiogram images should be automatically represented in a fused format. Fusing visual fundus images with fluorescein angiogram images using Eq. (3) provides better color contrast rendering than other opponent-color fusion methods, thus enhancing diagnostic performance and reducing visual workload.146,147 It appears that this mapping clearly represents neovessels and depicts the macula at high contrast.148 Figure 4 shows two examples of the fusion of grayscale visual fundus images [Figs. 4(a) and 4(d)] with corresponding fluorescein angiogram images [Figs. 4(b) and 4(e)]. The fused images [Figs. 4(c) and 4(f)] represent interesting details like the vascular network (purple veins) and the exudates (yellow [Y] lesions) with large color contrast. Also, when using Eq. (3) to fuse thermal and autofluorescent images of the retina,145 the resulting false color images provide higher contrast for the hyperfluent areas of the autofluorescent images (which are symptoms for glaucoma in its early stages) and clearly represent the position of the optic nerve head from the IR image.

Fig. 4
(a) and (d) Photographs, (b) and (e) fluorescein angiogram images, and (c) and (f) their color-fused representation resulting from Eq. (3).

Statistical Color Mapping


Although the overall color appearance of images produced with the opponent-color fusion scheme is fairly intuitive, some details may still be depicted with unrealistic colors. In this section, we present a method that gives multiband nighttime images the appearance of regular daylight color images by transferring the first order color statistics from full color daylight imagery to the false color multiband night vision imagery. The method is based on a technique that was originally developed to enhance the color representation of synthetic imagery.149 The outline of the method is as follows. As input, the method requires a false color RGB image. This can be produced by mapping the 2 or 3 individual bands (or the first two or three principal components when the sensor system provides more than three bands) of a multiband night vision system to the respective channels of an RGB image. Next, the false color RGB night vision image and a regular full color daylight reference image are both transformed into a perceptually decorrelated color space. Then, the mean and standard deviation of each of the three color channels of the multiband night vision image are set equal to those of the reference image. Finally, the multiband night vision image is transformed back to RGB space for display. The result is a full color representation of the multiband night vision image with a color appearance that closely resembles the color appearance of the daylight reference image. The daylight reference image should display a scene that is similar (but not necessarily identical) to the one displayed by the multiband night vision image.

The color transfer method will now be discussed in detail. First, the 2 or 3 individual bands of a multiband night vision system are assigned to the R, G, and B channels of an RGB image (in any arbitrary order). In case of two sensor bands, one of the RGB channels can be set to zero. The order of the mapping is irrelevant since the following procedure effectively rotates the color coordinate axes of the false color multiband night vision images such that these will be aligned with the axes of the reference daylight color image in the final result.98 Next, the RGB tristimulus values are converted to device-independent XYZ tristimulus values [Eq. (4)]. This conversion depends on the characteristics of the display on which the image was originally intended to be displayed. Because that information is rarely available, it is common practice to use a device-independent conversion that maps white in the chromaticity diagram to white in RGB space and vice versa.150 The device-independent XYZ values are then converted to LMS space [Eq. (5)]. Combination of Eqs. (4) and (5) results in a single linear transform from RGB to LMS [Eq. (6)]. The data in this color space show a great deal of skew, which is largely eliminated by taking a logarithmic transform:

L = log L, M = log M, S = log S. (7)

The inverse transform from LMS cone space back to RGB space is as follows. First, the LMS pixel values are raised to the power ten to go back to linear LMS space. Then, the data are converted from LMS to RGB using the inverse transform of Eq. (6) [Eq. (8)]. Ruderman et al.151 presented the lαβ color space, which effectively minimizes the correlation between the LMS axes. This result was derived from a principal component transform to the logarithmic LMS cone space representation of a large ensemble of hyperspectral images that represented a good cross-section of realistic scenes. The principal axes encode fluctuations along an achromatic direction (l), a Y-B opponent direction (α), and an R-G opponent direction (β). The resulting data representation is compact and symmetrical and provides automatic decorrelation to higher than the second order. Ruderman et al.151 presented the following simple transform to decorrelate the axes in the (logarithmic) LMS space:

l = (L + M + S)/√3, α = (L + M - 2S)/√6, β = (L - M)/√2. (9)

If we think of the L channel as R, the M as G, and S as B, we see that this is a variant of a color opponent model in which l is an achromatic sum, α a Y-B difference, and β an R-G difference [Eq. (10)]. Thus the l axis represents an achromatic channel, while the α and β channels are chromatic Y-B and R-G opponent channels. After processing the color signals in the lαβ space, the inverse transform of Eq. (9) can be used to return to the LMS space [Eq. (11)]. First, the mean is subtracted from the source and target data points:

l* = l - ⟨l⟩, α* = α - ⟨α⟩, β* = β - ⟨β⟩, (12)

where ⟨·⟩ denotes the mean over all pixels of an image. Then, the source data points are scaled with the ratio of the standard deviations of the target and source images, respectively:

l' = (σ_t^l / σ_s^l) l*, α' = (σ_t^α / σ_s^α) α*, β' = (σ_t^β / σ_s^β) β*, (13)

where σ_s and σ_t denote the standard deviations of the source and target data.

After this transformation, the pixels comprising the multiband source image have standard deviations that
conform to the target daylight color image. Finally, in reconstructing the lαβ transform of the multiband source image, instead of adding the previously subtracted averages, the averages computed for the target daylight color image are added. The result is transformed back to RGB space via log LMS, LMS, and XYZ color space using Eqs. (11) and (8). Figure 5 illustrates that the statistical color transform defined by Eqs. (4) to (13) effectively transfers the color appearance of a given target image to a multiband source image. This example shows that even when the reference color distribution is not fully characteristic for the scene that is displayed, the addition of color still has an ergonomic value by enhancing the perceptual distinctness of the different materials in the scene.
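The whole pipeline of Eqs. (4) to (13) can be sketched as follows. The RGB-to-LMS matrix and the base-10 logarithm are those commonly quoted for this family of color transfer methods (Reinhard et al.); they, together with the function names, should be read as assumptions rather than the exact values used by the authors, and the inverse transforms are computed numerically rather than hard-coded.

```python
import numpy as np

# RGB -> LMS matrix commonly used in Reinhard-style color transfer (assumption).
RGB2LMS = np.array([[0.3811, 0.5783, 0.0402],
                    [0.1967, 0.7244, 0.0782],
                    [0.0241, 0.1288, 0.8444]])
# log10 LMS -> l-alpha-beta decorrelating transform (Ruderman et al.).
LMS2LAB = np.array([[1, 1, 1],
                    [1, 1, -2],
                    [1, -1, 0]]) * np.array([[1 / np.sqrt(3)],
                                             [1 / np.sqrt(6)],
                                             [1 / np.sqrt(2)]])

def rgb_to_lab(rgb, eps=1e-6):
    """RGB (floats in [0, 1]) -> l-alpha-beta via log10 LMS space."""
    lms = rgb.reshape(-1, 3) @ RGB2LMS.T
    log_lms = np.log10(lms + eps)               # Eq. (7); eps avoids log(0)
    return (log_lms @ LMS2LAB.T).reshape(rgb.shape)

def lab_to_rgb(lab):
    """Inverse path: l-alpha-beta -> log10 LMS -> LMS -> RGB."""
    log_lms = lab.reshape(-1, 3) @ np.linalg.inv(LMS2LAB).T
    lms = 10.0 ** log_lms                        # back to linear LMS
    rgb = lms @ np.linalg.inv(RGB2LMS).T         # inverse of the RGB->LMS step
    return np.clip(rgb.reshape(lab.shape), 0.0, 1.0)

def statistical_color_transfer(source_rgb, reference_rgb):
    """Transfer the first order l-alpha-beta statistics (per-channel mean and
    standard deviation) of the reference image to the source image, as in
    Eqs. (12) and (13)."""
    src = rgb_to_lab(source_rgb)
    ref = rgb_to_lab(reference_rgb)
    axes = (0, 1)
    out = (src - src.mean(axis=axes)) * (ref.std(axis=axes) / src.std(axis=axes))
    out = out + ref.mean(axis=axes)
    return lab_to_rgb(out)
```

In this form the transform is applied to every pixel; the look-up-table version discussed next applies it only once, to the table of all possible sensor value pairs.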

Fig. 5
Illustration of the statistical color transfer method and the use of different reference images. Top: False color image obtained by mapping respectively the visible and NIR images to the R and G channels of an RGB color image [see Figs. 7(b) and 7(c)]. Left column: Different reference images (a color photograph of the same scene, blossom trees with a blue sky, a rural landscape, and a synthetic urban scene). Right column: Results of transfer of the first order color statistics from the reference images on the left to the false color target image on top. Notice the resemblance between the color characteristics of the reference images and the corresponding recolored target images.

The statistical color transform defined by Eqs. (4) to (13) is computationally expensive and therefore not suitable for real-time implementation. Moreover, although it can give multiband nighttime imagery a realistic daylight color appearance, it cannot achieve color constancy for dynamic imagery65 because the actual mapping depends on the relative amounts of different materials in (i.e., the composition or statistics of) the scene. Hence, large objects in the scene will dominate the color mapping. As a result, the color of objects and materials may change over time when the sensor system pans over (or zooms in on) a given scene. We therefore developed a fixed look-up-table-based version of this statistical color mapping that (1) is computationally efficient so that it can easily be deployed in real time and (2) yields constant object colors. This new look-up-table-based statistical color transfer method is illustrated in Fig. 6 for a multiband image consisting of two channels. First, the two sensor images are mapped on two of the three channels of an RGB image. In this particular example, the visual band [Fig. 6(a)] is (arbitrarily) mapped to the R channel and the near-IR (NIR) band [Fig. 6(b)] is mapped to the G channel. The result is an R-G false color representation of the multiband image [Fig. 6(c)]. The statistical color transform defined by Eqs. (4) to (13) can then be derived from the first order statistics of (a) Fig. 6(c) and (b) a given daylight color reference image, like the one shown in Fig. 6(d). The application of this statistical color transform to an input table of 2-tuples representing all possible sensor output values yields an output table containing all possible color values that
can occur in the colorized nighttime image. The input and output table pair defines the statistical color mapping and can therefore be deployed in a color look-up-table transform procedure. The square inset in Fig. 6(c) represents the table of all possible two-band signal values as different shades of R, G, and Y. Application of the statistical color transform to the inset in Fig. 6(c) yields the inset shown in Fig. 6(e). In a look-up-table paradigm, the insets in Figs. 6(c) and 6(e) together define the statistical color mapping. For any color in the false color representation of Fig. 6(c), the corresponding color after application of the statistical color transform can easily be found by (1) locating the color in the inset of Fig. 6(c) and (2) finding the color at the corresponding location in the transformed inset in Fig. 6(e). For instance, a pixel representing a high response in the visual channel and a low response in the NIR channel is represented with an R color (high value of R, low value of G) in the inset in Fig. 6(c). In the inset of Fig. 6(e), the same pixel appears in a brownish color. The color transformation can thus be implemented by using the inset pair of Figs. 6(c) and 6(e) as color look-up tables. Then, the false color image in Fig. 6(c) can be transformed into an indexed image using the R-G color look-up table [the inset of Fig. 6(c)]. Replacing the color look-up table of the indexed image Fig. 6(c) by the transformed color look-up table [the inset of Fig. 6(e)] then transforms Fig. 6(c) into Fig. 6(e). Note that the color-mapping scheme is fully defined by the two color look-up tables. When all possible combinations of an 8 bit multiband system are represented, these color look-up tables contain 256 × 256 entries. When a color look-up table with fewer entries is used (e.g., only 256), the color mapping can be achieved by determining the closest match of the table entries to the observed multiband sensor values.
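A sketch of how such a table could be generated, reusing the hypothetical statistical_color_transfer helper from the previous listing; the 8-bit quantization, the choice of which band feeds which channel, and the requirement that the reference image be given as floats in [0, 1] are assumptions.

```python
import numpy as np

def build_statistical_lut(reference_rgb, levels=256):
    """Apply the statistical color transform once to an input table that
    enumerates every possible pair of sensor values, yielding a
    levels x levels table of output colors (the look-up-table version of
    Eqs. (4) to (13))."""
    a, b = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
    # False color input table: band 1 -> R, band 2 -> G, B = 0
    # (the channel assignment is arbitrary, as noted in the text).
    table = np.dstack([a, b, np.zeros_like(a)]).astype(float) / (levels - 1)
    return statistical_color_transfer(table, reference_rgb)
```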

Fig. 6
(a) Visible and (b) NIR images. (c) False color image obtained by assigning (a) to the G and (b) to the R channel of an RGB false color image (the B channel is set to zero). The inset in this figure represents all possible combinations of signal values that can occur in the two sensor bands (upper left: both sensors give zero output; lower right: both sensors give maximal output). (d) Arbitrary reference daylight color image. (e) Result of mapping the first order statistics of the reference image (d) to the false color nighttime image (c) using Eqs. (4) to (13). The inset in this figure represents the result of applying the same transform to the inset of (c) and shows all possible colors that can occur in the recolored sensor image (i.e., after applying the color mapping).

Once the color mapping has been derived from a multiband night-time image and its corresponding reference image and once it has been implemented as a look-up-table transform, it can be applied to different and dynamic multiband images. The advantage of this method is that the color of objects only depends on the multiband sensor values and is independent of the image content. Therefore, objects keep the same color over time when registered with a moving camera. Another advantage of this implementation is that it requires minimal computing power. Once the color transformation has been derived and the pair of color look-up tables that defines the mapping has been created, the new color look-up-table transform can be used in a
(real-time) application.
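Applying the mapping to live imagery then amounts to a single table look-up per pixel; a minimal sketch, assuming 8-bit co-registered frames and a 256 × 256 table like the one built above.

```python
import numpy as np

def colorize_frame(band1, band2, lut):
    """Per-frame color mapping: the two 8-bit sensor frames index the
    precomputed color table directly, so each output pixel costs one
    table look-up."""
    return lut[band1.astype(np.intp), band2.astype(np.intp)]
```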

Sample-Based Color Mapping

In spite of all the afore-mentioned advantages of the look-up-table-based statistical color transform, there is still room for improvement. For instance, in a mapping paradigm based on the global color characteristics of the depicted scene, there is no strict relationship between sensor output values and object color. It has been suggested to use semilocal or region-based methods to alleviate this problem.46,60,63-66 However, these approaches are only partly successful in achieving color constancy, and they typically require computationally expensive techniques like nonlinear diffusion, histogram matching, segmentation, and region merging, which diminishes their practical value. In this section, we describe an alternative look-up-table-based method for applying realistic colors to multiband images, which alleviates this problem. The color transformation is derived from a corresponding set of samples for which both the multiband sensor values and the corresponding realistic color (RGB value) are known.127 We show that this method results in rendered multiband images with colors that match the day-time colors more closely than the result of the statistical approach. In contrast to the statistical method, the derivation of the color mapping requires a registered image pair consisting of a multiband image and a daytime reference image of the same scene, since corresponding sets of pixels are used to define color transform pairs in this approach. Once the color mapping has been derived, it can be applied to different multiband night-time images. Again, we implement the color transformation using a color look-up-table transform, thus enabling real-time implementation. The method is as follows. Given a set of samples (pixels) for which both the multiband sensor output and the corresponding day-time colors are known, the problem of deriving the optimal color transformation is finding a transformation that optimally maps the N-dimensional (in our examples, N = 2) multiband sensor output vectors (one for each sample) to the 3-D vectors corresponding to the daytime colors (RGB). The mapping should minimize the difference between the modeled colors and the measured colors. In addition, the transformation should predict the mapping of untrained samples. Several methods exist to derive a suitable mapping, such as neural networks and support vector machines. What constitutes a suitable mapping is determined by the function that is minimized. Also, the statement that the difference between the modeled colors and the measured colors is minimized should be formalized. We minimize the average perceptual color difference between the modeled color and the measured color. More precisely, we minimize the average squared distance between the perceptual color vectors lαβ (for details, see Ref. 151). We describe a (relatively) simple and intuitive implementation that is not focused toward finding the theoretical optimum mapping, but instead leads straight to robust and good results. We now describe our new method for deriving a realistic color transformation using the example shown in Fig. 7. Figure 7(a) depicts the full color day-time reference image, which in this case is a color photograph taken with a standard digital camera. Figures 7(b) and 7(c) show a visible and NIR image of the same scene, respectively. Figure 7(f) shows the result of applying day-time colors to the two-band night-time sensor image using our new color-mapping technique.


Fig. 7
(a) Realistic daylight color reference image. (b) Visible and (c) NIR images of the same scene. (d) Combined RG false color representation of (b) and (c), obtained by assigning (b) to the G and (c) to the R channel of an RGB color image (the B channel is set to zero). (e) The color mapping derived from corresponding pixel pairs in (a) and (d). (f) Result of the application of the mapping scheme in (e) to the two-band false color image in (d).

The method works as follows. First, the multiband sensor image is transformed to a false color image by taking the individual visual and NIR bands [Figs. 7(b) and 7(c), respectively] as input to the R and G channels (and B when the sensor contains three bands), referred to as the RG image [Fig. 7(d)]. In practice, any other combination of two channels can also be used (one could just as well use the combinations R and B or B and R). Mapping the two bands to a false color RGB image allows us to use standard image conversion techniques such as indexing.152 In the next step, the resulting false color (RG) image, Fig. 7(d), is converted to an indexed image. Each pixel in such an image contains a single index. The index refers to an RGB value in a color look-up table (the number of entries can be chosen by the user). In the present example of a sensor image consisting of two bands [R and G; Fig. 7(d)], the color look-up table contains various combinations of R and G values (the B values are zero when the sensor or sensor pair provides only two bands). For each index representing a given R-G combination (a given false color), the corresponding realistic color equivalent is obtained by locating the pixels in the target image with this index and finding the corresponding pixels in the (realistic color) reference image [Fig. 7(a)]. First, the RGB values are converted to perceptually decorrelated lαβ values (for details, see Ref. 151). Next, the average lαβ vector is calculated over this ensemble of pixels. This ensures that the computed average color reflects the perceptual average color. Averaging automatically takes the distribution of the pixels into account: colors that appear more frequently are attributed a greater weight. For instance, let us assume that we would like to derive the realistic color associated with color index i. In that case, we locate all pixels in the (indexed) false color multiband target image with color index i. We then collect all corresponding pixels (i.e., pixels with the same image coordinates) in the reference daytime color image, convert these to lαβ, and calculate the average lαβ value of this set. Next, we transform the resulting average lαβ value back to RGB. Finally, we assign this RGB value to index i of the new color look-up table. These steps are successively carried out for all color indices. This process yields a new color look-up table containing the realistic colors associated with the various multiband combinations in the false color (RG) color look-up table. Replacing the RG color look-up table [left side of Fig. 7(e)] with the realistic color look-up table [right side of Fig. 7(e)] yields an image with a realistic color appearance, in which the colors are optimized for this particular sample set [Fig. 7(f)].
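A compact sketch of this sample-based derivation, in which every pair of sensor values is treated as its own index and the daytime colors of the corresponding reference pixels are averaged in lαβ space. The rgb_to_lab and lab_to_rgb helpers are the hypothetical ones from the statistical-transfer sketch; the 256-level quantization, the 8-bit inputs, and the zero placeholder for unobserved entries are assumptions.

```python
import numpy as np

def derive_sample_based_lut(band1, band2, reference_rgb, levels=256):
    """Build a realistic-color look-up table from a registered pair:
    a two-band night-time image (band1, band2 as 8-bit arrays) and an
    8-bit daytime reference photograph of the same scene."""
    ref_lab = rgb_to_lab(reference_rgb.astype(float) / 255.0).reshape(-1, 3)
    idx = (band1.astype(np.intp) * levels + band2.astype(np.intp)).ravel()

    # Average the reference colors (in perceptual l-alpha-beta space) of all
    # pixels that share the same pair of sensor values.
    counts = np.bincount(idx, minlength=levels * levels).astype(float)
    sums = np.zeros((levels * levels, 3))
    for c in range(3):
        sums[:, c] = np.bincount(idx, weights=ref_lab[:, c],
                                 minlength=levels * levels)
    seen = counts > 0
    mean_lab = np.zeros_like(sums)
    mean_lab[seen] = sums[seen] / counts[seen, None]

    lut = lab_to_rgb(mean_lab.reshape(levels, levels, 3))
    lut[~seen.reshape(levels, levels)] = 0.0   # unobserved entries: placeholder
    return lut
```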
Figure 8 illustrates the difference between the statistical and the look-up-table-based color transforms. In this example, we determine a color-mapping look-up table from a pair of images consisting of the original version of a full-color, day-light photograph [Fig. 8(a)] and the same image from which the B channel has been
removed [B = 0; Figs. 8(b)]. Note that the sample-based color remapping [Figs. 8(c)], using the color look-up table (inset) determined from the image pair Figs. 8(a) and Figs. 8(b), nicely restores most of the B hues in the scene, while the statistical color remapping procedure [Figs. 8(d)] is not capable of restoring the missing information.

Fig. 8
(a) Full-color RGB image. (b) Image from (a) after removal of the B band (B = 0). (c) Result of the statistical color remapping procedure. (d) Result of sample-based color remapping, using the color look-up table (inset) determined from the image pair (a, b).

Because the sample-based color mapping is highly specific, it can effectively represent image details that may go unnoticed when using a global statistical color transform. Camouflage patterns usually consist of small irregular segments with colors matching those of the details in the background. A statistical color mapping based on the overall color distribution of an entire scene may render a camouflaged target therein less visible by eliminating the color contrast between the individual elements of the camouflage pattern on the one hand and between the pattern and the background on the other hand. In contrast, the sample-based color mapping may preserve all these color nuances. Figure 9 shows the result of the statistical [Figs. 9(e)] and sample-based [Figs. 9(f)] color transform on the false color image Figs. 9(d) that was composed by placing the intensified visual image of Figs. 9(a) and the NIR image of Figs. 9(b) in the G and R channels of an RGB composite, respectively. This example shows that the target closely resembles the road and vegetation after the statistical transform (Figs. 9(e)) but has distinct colored segments after the sample-based transform [Figs. 9(f)], which closely resembles a corresponding day-light color image [Figs. 9(g)]. Hence, color mapping may be useful for applications like surveillance and navigation
87,153 since these tasks require a
correct perception of the background (terrain) for situational awareness in combination with optimal detection of targets and obstacles. When deploying the color transfer method at night, an appropriate color-mapping scheme should be selected to give the night-time images a realistic color appearance. The color transformation consists of (1) creating a false color image [e.g., an RG image, see Figs. 7(d)], (2) converting this image into an index image using the RG color table [Figs. 7(e), left], and (3) replacing this table with its day-time RGB equivalent [Figs. 7(e), right]. The whole transformation is defined by the two color look-up tables (the RG and RGB color table pair). The software implementation can be very fast. Different environments require different color-mapping schemes to obtain a realistic color representation. Dedicated color-mapping schemes can be derived that are optimally tuned for different environments. The sample images used to derive the color-mapping scheme should reflect the statistics of the environment in which the mapping will be deployed.

Fig. 9

(a) Visual and (b) NIR image of a YPR vehicle adorned with camouflage material. (c) NVG image of the scene. (d) False color image produced by assigning (a) to the G and (b) to the R channel of an RGB image. (e) Result of transferring the global color statistics of the scene to image (d). (f) Result of a sample-based color transform of image (d) using samples from the image pair (d) and (g). (g) Daylight color image of the scene.

In practice, the ratio between the sensor outputs is characteristic for different materials. This becomes apparent when one inspects the color maps [e.g., Figs. 7(e), right side] corresponding to the optimal color mapping of different reference and test images. In those color maps, the hue varies little along straight lines through the top-left corner (lines for which the ratio between the two sensor outputs has a constant value). This feature can be used to interpolate missing values when deriving a color mapping from a limited number of samples. Also, the color mapping [e.g., the one represented by Figs. 7(e), right side] is typically smooth; i.e., from point to point, the color variations will be gradual. When a smooth color-mapping scheme is used, more subtle differences between sensor outputs will become visible.
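One way the constant-ratio observation could be used to fill table entries for which no samples were available is sketched below; this nearest-ratio rule is purely an illustration of the idea, not the authors' interpolation procedure, and it assumes at least one observed entry.

```python
import numpy as np

def fill_missing_entries(lut, seen):
    """Fill unobserved (band1, band2) table entries with the color of the
    nearest observed entry in terms of band ratio, exploiting the
    observation that hue varies little along constant-ratio lines.
    Simple O(missing x observed) loop; adequate for an offline sketch."""
    levels = lut.shape[0]
    a, b = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
    ratio = np.log((a + 1.0) / (b + 1.0)).ravel()   # +1 avoids division by zero
    flat = lut.reshape(-1, 3).copy()
    obs = np.flatnonzero(seen.ravel())
    for i in np.flatnonzero(~seen.ravel()):
        j = obs[np.argmin(np.abs(ratio[obs] - ratio[i]))]
        flat[i] = flat[j]
    return flat.reshape(levels, levels, 3)
```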

Real-Time Implementations


In the next sections, we describe three multiband night vision systems that deploy the previously described look-up-table color-transform method to provide real-time color-fused imagery. The two-band Gecko system provides an intensified night vision image with a realistic color appearance by remapping the RG color table of fused intensified visual and NIR images to a realistic RGB distribution. The two-band Viper system provides an intuitive target-enhancing color representation of the fused signals of an image intensifier and an LWIR thermal camera. The tri-band TRICLOBS system combines the benefits of the two previous systems and provides a night-time color image that has both a realistic color appearance and an enhanced target representation. This image is obtained by remapping the RGB false color table of fused intensified visual, NIR, and LWIR images to an RGB color map that enhances hot targets while representing the background in realistic colors.

The Gecko System: Color-Fused Visual and NIR Imagery

The Gecko system (named after nocturnal geckos that still have color vision at very dim light levels154) provides co-axially registered visual and NIR images that are fused and represented in color in real time using a standard notebook computer (for details, see Ref. 155). The system is portable and has been used to assess the benefits of color fusion in realistic target-detection scenarios.156

The Gecko sensor system separates the incoming light by wavelengths below and above 700 nm. Since chlorophyll shows a steep rise in reflectance around 700 nm, this dual-band system is especially suited for discriminating materials containing chlorophyll from materials containing no chlorophyll. We therefore
developed a color mapping in which details that contain chlorophyll (e.g., plants) have a greenish appearance and objects without chlorophyll (e.g., targets and roads) have a reddish look, to emphasize the distinction between artificial (manmade) details and a typical realistic background. To further increase the realistic appearance of the resulting color-fused image, elements with high output in both channels are displayed in white. Figure 10(d) shows the Gecko image of a park scene after the application of our new color remapping technique [in this case, swapping the color table of Fig. 10(c) for that of Fig. 10(d)]. This multiband night vision image closely resembles the corresponding day-time photograph [Fig. 10(f)]. For comparison, Fig. 10(e) shows a grayscale-fused version (a Laplacian pyramid with maximum selection rule) of Figs. 10(a) and 10(b). Note that it is much easier to distinguish different materials and objects in Fig. 10(d) compared to each of the individual bands [Figs. 10(a) and 10(b)], the grayscale-fused image [Fig. 10(e)], and the RG false color-fused image [Fig. 10(c)].

Fig. 10
(a) Visual (wavelengths below 700 nm) and (b) NIR (wavelengths above 700 nm) images of a park scene. (c) False color combination with the visual image (a) in the R and the NIR image (b) in the G channel of an RGB color image. (d) The result of our color remapping technique. (e) Grayscale-fused version (Laplacian pyramid) of (a) and (b). (f) A daytime color photograph of the same scene. The square insets in images (c) and (d) represent their corresponding color tables.

Figure 11(d) shows the Gecko image of a road scene after the application of our new color remapping technique [in this case, swapping the color table of Fig. 11(c) for that of Fig. 11(d)]. This multiband night vision image closely resembles the corresponding day-time photograph [Fig. 11(f)]. For comparison, we also show the standard intensified (NVG) image in Fig. 11(e). Note that it is much easier to distinguish the borders of the road in the Gecko image [Fig. 11(d)] compared to a standard NVG image [Fig. 11(e)]. Thus, the Gecko image provides obvious benefits in night-time driving scenarios.

Fig. 11
(a) Visual (wavelengths below 700 nm) and (b) NIR (wavelengths above 700 nm) images of a road scene. (c) False color combination with the visual image (a) in the R and the NIR image (b) in the G channel of an RGB color image. (d) The result of our color remapping technique. (e) Standard NVG image combining (a) and (b). (f) A daytime color photograph of the same scene. The square insets in images (c) and (d) represent their corresponding color tables.


The Viper System: Color-Fused Visual and LWIR Imagery

The portable Viper system (named after a species of snake that fuses in its optic tectum the visual images from its eyes with thermal images from IR-sensitive organs that function like pinhole cameras137) provides co-axially registered visual and LWIR thermal images that are fused and represented in color in real time using a standard notebook computer (for details, see Ref. 155). Figure 12(f) shows the Viper image of a battle scene with a smoke curtain after the application of our new color remapping technique [in this case, swapping the color table of Fig. 12(e) for that of Fig. 12(f)]. Note that both the soldier and the smoke cloud are clearly visible and represented in their true (realistic) colors in the Viper image [Fig. 12(f)], while none of the individual bands represents all of these details by itself: the obscurant is only represented in the visual band [Fig. 12(a)] and the soldier is only visible in the thermal band [Fig. 12(b)]. For comparison, Fig. 12(c) shows the result of a standard grayscale fusion technique (Laplacian pyramid with maximum contrast selection rule) applied to Figs. 12(a) and 12(b). Note that there is no color contrast between the grass, the trees in the background, and the smoke cloud in the grayscale-fused image. In contrast, the color-fused Viper image [Fig. 12(f)] represents all these details in their true colors, making it much easier to recognize the contents of the scene. The color-rendered imagery of the Viper system enhances the user's situational awareness, not only by providing the capability to see through the obscurant, but also by providing information about the presence and the type of the obscurant itself.
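For reference, the grayscale baseline used here and in Figs. 3, 10, and 13 (a Laplacian pyramid with a maximum selection rule) can be sketched as follows; the number of pyramid levels, the averaging of the low-pass residuals, the assumption of single-band float inputs in [0, 1], and the use of OpenCV's pyrDown/pyrUp are our choices, not a specification from the paper.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Build a Laplacian pyramid: band-pass levels plus a low-pass residual."""
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)        # band-pass detail at this scale
        cur = down
    pyr.append(cur)                 # low-pass residual
    return pyr

def fuse_laplacian(img_a, img_b, levels=4):
    """Grayscale fusion: per level, keep the coefficient with the larger
    magnitude (maximum selection rule); average the low-pass residuals."""
    pa, pb = laplacian_pyramid(img_a, levels), laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))
    out = fused[-1]
    for band in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=(band.shape[1], band.shape[0])) + band
    return np.clip(out, 0.0, 1.0)
```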

Fig. 12
(a) Visual intensified and (b) LWIR (8 to 12 μm) images of a battle scene. (c) Grayscale-fused image (Laplacian pyramid) of (a) and (b). (d) A daytime color photograph of the same scene. (e) False color combination with the visual image (a) in the R and the LWIR image (b) in the G channel of an RGB color image. (f) The result of our color remapping technique. The square insets in images (e) and (f) represent their corresponding color tables.

The TRICLOBS System: Color-Fused Visual, NIR, and LWIR Imagery

The TRICLOBS (TRI-band color low-light observation) system provides co-axially registered visual, NIR, and LWIR (thermal) images that are fused and represented in color in real time using a standard notebook computer.157 Figure 13 shows an intensified visual [Fig. 13(a)], an NIR [Fig. 13(b)], and an LWIR [Fig. 13(c)] image of a village square surrounded by low walls, a vehicle, and two armed soldiers. The image was registered at night, in conditions with a partly overcast sky and some moonlight. The soldiers are wearing camouflage uniforms,
which make them hard to distinguish against the dark background in the visual and NIR images. They are represented at higher contrast in the LWIR image because of their temperature contrast with their surroundings. Notice that the low vertical wall surrounding the village square is highly distinct in the LWIR image, whereas it cannot be distinguished from the pavement of the square in the visual and NIR bands. Also, the texture of the pavement is well articulated in the LWIR image but not represented in the other bands. The fused and recolored image in Figs. 13(f) has a quite realistic color appearance and clearly represents the low vertical wall around the square, the armed soldiers, the texture of the pavement, and the vegetation in the background. Note that it is much easier to distinguish these different details in the color-fused representation in Figs. 13(f) than in the grayscale-fused version (Laplacian pyramid with maximum contrast selection rule) shown in Figs. 13(d).

Fig. 13
Scene representing a village square, surrounded by low walls, a jeep, and two armed soldiers. (a) Visual, (b) NIR, and (c) LWIR signals. (d) Grayscale-fused image (Laplacian pyramid) of (a), (b), and (c). (e) False color image obtained by mapping (a) to (c) to the R, G, and B channels, respectively, of an RGB image. (f) Image (e) after color remapping.
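For reference, the grayscale comparisons in Figs. 12(c), 13(d), 14(d), and 15(d) use a Laplacian pyramid with a maximum-contrast selection rule. A minimal sketch of that family of schemes is given below (assuming OpenCV's pyrDown/pyrUp for the pyramid and a simple average for the low-pass residual; this is a generic illustration, not the exact implementation used for the figures):

```python
import numpy as np
import cv2  # OpenCV, assumed available

def laplacian_pyramid(img, levels=4):
    """Decompose a grayscale image into 'levels' band-pass levels plus a low-pass residual."""
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)   # band-pass (detail) level
        cur = down
    pyr.append(cur)            # low-pass residual
    return pyr

def fuse_max_contrast(images, levels=4):
    """Grayscale fusion: at each detail-level position keep the coefficient with the
    largest absolute value (maximum local contrast) over all input bands."""
    pyrs = [laplacian_pyramid(im, levels) for im in images]
    fused_pyr = []
    for i, lvl in enumerate(zip(*pyrs)):
        stack = np.stack(lvl)                          # (n_images, h, w) at this level
        if i < levels:                                 # detail levels: max-contrast rule
            pick = np.abs(stack).argmax(axis=0)        # which band wins per pixel
            fused_pyr.append(np.take_along_axis(stack, pick[None], axis=0)[0])
        else:                                          # low-pass residual: plain average
            fused_pyr.append(stack.mean(axis=0).astype(np.float32))
    out = fused_pyr[-1]
    for detail in reversed(fused_pyr[:-1]):            # reconstruct coarse to fine
        out = cv2.pyrUp(out, dstsize=(detail.shape[1], detail.shape[0])) + detail
    return np.clip(out, 0, 255).astype(np.uint8)
```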

Figure 14 shows a scene with barbed wire and a billboard in the foreground and a soldier running along some buildings in the background. Notice that the barbed wire is highly distinct in the color-fused image representation [Fig. 14(f)]; it is represented at high contrast in the LWIR image [Fig. 14(c)], whereas it can hardly be distinguished in the individual visual [Fig. 14(a)] and NIR [Fig. 14(b)] image bands. Again, different details like vegetation, bricks, sky, and clouds are easy to distinguish in the color-fused image representation [Fig. 14(f)] since they are represented in different colors, whereas they are much harder to resolve in the grayscale-fused representation [Fig. 14(d)]. Figure 15 illustrates that fusing LWIR with visual and NIR imagery provides realistic color imagery with the additional capability to see through smoke. The visual [Fig. 15(a)] and NIR [Fig. 15(b)] bands clearly represent the smoke cloud, which obscures the armed soldier crawling in the scene. The thermal image [Fig. 15(c)] clearly represents the soldier but shows no signs of the smoke cloud. Fused in false colors [Fig. 15(e)], the image shows both the smoke cloud and the soldier hiding behind it, while realistic color remapping [Fig. 15(f)] makes it easy to distinguish the smoke cloud from the green grass and the blue sky. Hence, the realistic color representation clearly provides enhanced situational information. In contrast, in the grayscale-fused image [Fig. 15(d)], the smoke cloud is very hard to distinguish.

Fig. 14
As Fig. 13, for a scene with barbed wire and a billboard in the foreground and a running soldier in the background.


Fig. 15
As Fig. 13, for a scene showing an armed soldier crawling behind a smoke curtain between two houses.

The TRICLOBS system has been successfully deployed on several occasions during nocturnal field trials (Ref. 192).

Conclusions

In this paper, we presented a brief overview of our recent progress and the current state of the art in color image fusion for night vision applications. Starting from a simple pixel-based false color-mapping scheme (Ref. 126) that was inspired by previously developed color-opponent fusion schemes, we developed a statistical mapping approach (Ref. 98) with the capability to produce color-fused imagery with a realistic color appearance (Refs. 123 to 125 and 127). We ultimately achieved both color constancy and computational simplicity by applying this statistical mapping approach in a sample-based color look-up-table framework. The resulting mapping is specific to the different types of materials in the scene and can easily be adapted to the task at hand, such as the detection of camouflaged objects. The only a priori information needed to select an appropriate color mapping (i.e., to produce a realistic color setting of the fused multiband night vision imagery) is a general classification of the type of operating theater in which the system will be deployed (e.g., urban, woodland, desert, maritime). Since the color remapping procedure gives fused multiband night vision imagery the general appearance of the reference imagery from which the color scheme has been derived (Refs. 98 and 123), general color schemes can be derived and deployed for a set of typical operating theaters. The only requirement is that the imagery from which the color schemes are derived contains the same kind of materials that also occur in the intended operating theater (Ref. 98).
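The following is a minimal sketch of how such a sample-based color look-up table could be derived, assuming a multiband night-vision image with a co-registered daytime color photograph of the same scene as the training pair (the array shapes, the averaging rule, and the table resolution are illustrative assumptions):

```python
import numpy as np

def derive_color_lut(bands, daytime_rgb, levels=8):
    """Derive a sample-based color look-up table from a registered example pair.

    bands:       (H, W, n_bands) uint8 multiband night-vision image.
    daytime_rgb: (H, W, 3) uint8 daytime color photograph of the same scene.
    Returns an array of shape (levels,)*n_bands + (3,) holding, for each quantized
    combination of sensor values, the average daytime color of the pixels showing
    that combination (zeros where a combination never occurs in the training pair)."""
    n_bands = bands.shape[-1]
    idx = (bands.astype(np.uint16) * levels) // 256           # quantize each band
    flat = np.ravel_multi_index(
        tuple(idx[..., b].ravel() for b in range(n_bands)),
        (levels,) * n_bands)                                   # one bin per combination
    rgb = daytime_rgb.reshape(-1, 3).astype(float)
    n_bins = levels ** n_bands
    counts = np.bincount(flat, minlength=n_bins)
    sums = np.zeros((n_bins, 3))
    for c in range(3):                                         # accumulate daytime colors per bin
        sums[:, c] = np.bincount(flat, weights=rgb[:, c], minlength=n_bins)
    lut = np.where(counts[:, None] > 0, sums / np.maximum(counts, 1)[:, None], 0)
    return lut.reshape((levels,) * n_bands + (3,)).astype(np.uint8)
```

Applying the table is then the same indexing operation as in the earlier two-band sketch; table entries for sensor-value combinations that never occur in the training pair could be filled by interpolating from neighboring entries.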

Although the color setting of the resulting fused multiband night vision images may not be identical to their actual daytime colors (e.g., when there is a large temporal offset between the derivation of the color transform and its application, a forest in spring may be depicted in autumn colors or vice versa), the color-fused images will still have a realistic appearance, making it easy to grasp the gist of the scene and perceive its content.

The benefits of color image fusion for scene recognition, target detection, camouflage breaking, and situational awareness (Refs. 68, 69, 123, 156, and 158 to 161) have already been convincingly demonstrated in a wide range of different experimental conditions. Because the number of available image fusion techniques and systems is steadily increasing, there is a growing demand for metrics to assess, compare, and optimize the quality of fused imagery.

Clearly, the ultimate image fusion scheme should provide intuitive and semantically meaningful image representations and should use fusion rules that give higher priority (weights) to regions of greater semantic importance to the operator. Generally, the ideal fused (reference) image is not available. In applications where fused images are intended for human observation, the performance of fusion algorithms can be measured in terms of the improvement in user performance in tasks like detection, recognition, tracking, or classification. This approach requires a well-defined task that allows quantification of human performance (Refs. 16, 55, 85, 89, 91, 159, and 162 to 164). However, it usually implies time-consuming and often expensive experiments involving large numbers of human subjects. Therefore, numerous objective computational image fusion quality assessment metrics have been proposed (Refs. 165 to 181; for recent overviews, see Refs. 10, 12, 26, 182, and 183). Some of these quality metrics apply to color-fused imagery (Refs. 46 and 184 to 189). Metrics that accurately describe human performance are of great value, since they can be used to optimize image fusion systems. Although some of the currently available metrics correlate with human visual perception (Ref. 190), most of them do not predict observer performance for arbitrary scenes and scenarios. Currently, most fusion metrics only provide a relative assessment of how well information from the input images is transferred to the fused result. As a result, the choice of a fusion metric still depends on the application requirements. Reliable, universal, human-performance-related fusion quality metrics are currently still not available (Ref. 191).
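To illustrate what such a relative, information-transfer assessment can look like, the sketch below scores a fused image by how well the edge structure of each input band is preserved. This reflects the general idea behind several gradient-based metrics, but it is a deliberately simplified illustration rather than any of the published metrics cited above:

```python
import numpy as np

def gradient_magnitude(img):
    """Edge strength of a grayscale image (float array)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def edge_transfer_score(sources, fused):
    """Crude objective fusion score: mean Pearson correlation between the edge map
    of each source band and the edge map of the fused image.

    sources: list of 2-D arrays (input bands); fused: 2-D array (fused result).
    Returns a value in [-1, 1]; higher means more of the source edge structure is
    retained in the fused image. (Constant inputs yield an undefined correlation.)"""
    gf = gradient_magnitude(fused).ravel()
    scores = []
    for s in sources:
        gs = gradient_magnitude(s).ravel()
        scores.append(np.corrcoef(gs, gf)[0, 1])
    return float(np.mean(scores))
```

Like the metrics discussed above, such a score only compares fused results against their own inputs; it does not by itself predict how well a human observer would perform with the fused imagery.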

Acknowledgments
Efforts sponsored by the Air Force Office of Scientific Research, Air Force Materiel Command, USAF, under grant number FA8655-11-1-3015. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.

Vitae
Alexander Toet received his PhD in physics from the University of Utrecht in the Netherlands in 1987, where he worked on visual spatial localization (hyperacuity) and image processing. He is currently a senior research scientist at TNO (Soesterberg, The Netherlands), where he investigates multimodal image fusion, image quality, computational models of human visual search and detection, and the quantification of visual target distinctness. Recently, he started investigating cross-modal perceptual interactions between the visual, auditory, olfactory, and tactile senses, with the aim of giving users of serious gaming programs (training and simulation) a more compelling experience. He has published extensively in peer-reviewed journals and is a Fellow of SPIE and a Senior Member of IEEE.

Maarten A. Hogervorst received his PhD in physics from the University of Utrecht in the Netherlands in 1996 for his work on visual perception. From 1996 to 1999, he worked as a research assistant at Oxford University in the United Kingdom, where he continued his work in this area. Currently, he is employed at TNO as a research scientist. He has worked on perception of depth, tracking, human combination of multiple sources of visual information, object detection and recognition, workload assessment, and low vision. His current research interests include visual information processing, electro-optical system performance, search and target acquisition modeling, image quality assessment, image enhancement, information fusion, color imaging, EEG, and human factors of camera surveillance systems. He has published numerous papers on various topics and is a Senior Member of SPIE.

1.

Cohen, N.et al., Integrated HBT/QWIP structure for dual color imaging, Infrared Phys. Technol. 47(12), 4352 (2005). [Inspec] | Vandersmissen, R., Night-vision camera combines thermal and low-light level images, Photonik Int 2, 24 (2008). | Bandara, S. V.et al., Four-band quantum well infrared photodetector array, Infrared Phys. Technol. 44(56), 369375 (2003). | Cho, E.et al., Development of a QWIP dual-color FPA for mine detection applications, Proc. SPIE 5074, 685695 (2003). | Goldberg, A. C.et al., Development of a dual-band LWIR/LWIR QWIP focal plane array for detection of buried land mines, Proc. SPIE 4721, 184195 (2002). | Goldberg, A. C., P. Uppal and M. Winn, Detection of buried land mines using a dual-band LWIR/LWIR QWIP focal plane array, Infrared Phys. Technol. 44(56), 427437 (2003). [ISI] |

2.

3.

4.

5.

6.

7.

Goldberg, A. C., A. C. Stann and N. Gupta, Multispectral, hyperspectral, and three-dimensional imaging research at the Army, U. S. Research Laboratory, Proc. Sixth Int. Conf. on Inform. Fusion (FUSION 2003), ISIF, Fairborn, OH, USA, pp. 499506 (2003). | Goldberg, A. C., T. Fischer and Z. I. Derzko, Application of dual-band infrared focal plane arrays to tactical and strategic military problems, Proc. SPIE 4796, 500514 (2003). | Kriesel, J. M. and N. Gat, True-color night vision (TCNV) fusion system using a VNIR EMCCD and a LWIR microbolometer camera, Proc. SPIE 7697, 76970Z (2010). | Palmer, T. A., C. C. Alexay and S. Vogel, Somewhere under the rainbow: the visible to far infrared imaging lens, Proc. SPIE 8012, 801223 (2011). | Wolff, L. B., D. A. Socolinsky and C. K. Eveland, Advances in low-power visible/thermal IR video image fusion hardware, Proc. SPIE 5782, 5458 (2005). | Angel, H., C. Ste-Croix and E. Kittel, Review of fusion systems and contributing technologies for SIHS, Report DRDC Toronto CR 2007-066, Humansystems, Incorporated, Guelph, Ontario, Canada (2007). |

8.

9.

10.

11.

12.

13.

Dwyer, D. J.et al., Real-time implementation of image alignment and fusion, Proc. SPIE 5612, 8593 (2004). | Dwyer, D. J.et al., Real time implementation of image alignment and fusion on a police helicopter, Proc. SPIE 6226, 622607 (2006). |

14.

15.

McDaniel, R. V.et al., Image fusion for tactical applications, in Proc SPIE 3436, 685695 (1998). |

16.

Toet, A.et al., Fusion of visible and thermal imagery improves situational awareness, Displays 18(2), 8595 (1997). [Inspec] [ISI] | Riley, P. and M. Smith, Image fusion technology for security and surveillance applications, in Proc. SPIE 6402, 640204 (2006). | Zou, X. and B. Bhanu, Tracking humans using multi-modal fusion, in 2nd Joint IEEE Int. Workshop on Object Tracking and Classification in and Beyond the Visible Spectrum (OTCBVS'05), IEEE Press,Los Alamitos, CA, USA, pp. W01-30-1W01-30-8 (2005). | O'Brien, M. A. and J. M. Irvine, Information fusion for feature extraction and the development of geospatial information, in Proc. 7th Int. Conf. on Inform. Fusion (FUSION 2004), ISIF, Mountain View, CA, pp. 976982 (2004). | Toet, A., Color image fusion for concealed weapon detection, Proc. SPIE 5071, 372379 (2003). |

17.

18.

19.

20.

21.

Xue, Z. and R. S. Blum, Concealed weapon detection using color image fusion, in Proc. Sixth Int.l Conf. on Inform. Fusion (FUSION 2003), ISIF, Fairborn, OH, USA, pp. 622627 (2003). |

22.

Xue, Z., R. S. Blum and Y. Li, Fusion of visual and IR images for concealed weapon detection, in Proc. of the Fifth Int. Conf. on Inform. Fusion, ISIF, Sunnyvale, CA, pp. 11981205 (2002). |

23.

Bhatnagar, G. and Q. M. J. Wu, Human visual system based framework for concealed weapon detection, in Proc. 2011 Canadian Conf. on Computer and Robot Vision (CRV), IEEE Computer Society, Washington DC, USA, pp. 250256 (2011). | Liu, Z.et al., Concealed weapon detection and visualization in a synthesized image, Patt. Analys. Appl.s 8(4), 375389 (2006). | Yajie, W. and L. Mowu, Image fusion based concealed weapon detection, in Proc. Int. Conf. on Comp. Intell. and Software Eng. 2009 (CiSE 2009), IEEE Press, Los Alamitos, CA, USA, pp. 14 (2009). |

24.

25.

26.

Beyan, C., A. Yigit and A. Temizel, Fusion of thermal- and visible-band video for abandoned object detection, J. Electron. Imag. 20(3), 033001 (2011). | Lepley, J. J. and M. T. Averill, Detection of buried mines and explosive objects using dual-band thermal imagery, Proc. SPIE 8017, 80171V (2011). | Kong, S. G.et al., Multiscale fusion of visible and thermal IR images for illumination-invariant face recognition, Int. J. of Comp. Vis. 71(2), 215233 (2007). | Liu, Z. and C. Liu, Fusion of color, local spatial and global frequency information for face recognition, Pattern Recogn. 43(8), 28822890 (2010). [Inspec] | Estrera, J. P.et al., Modern night vision goggles for advanced infantry applications, Proc. SPIE 5079, 196207 (2003). | Estrera, J. P., Digital image fusion systems: color imaging and low-light targets, Proc. SPIE 7298, 72981E (2009). |

27.

28.

29.

30.

31.

32.

Frim, J., L. Bossi and D. Tack, Human factors considerations of IR sensors for the Canadian Integrated Soldier System Project (ISSP), Proc. SPIE 7298. 72981H (2009). | Zitov, B., M. Bene and J. Blaek, Image fusion for art analysis, Proc. SPIE 7869, 786908 (2011). |

33.

34.

Bulanona, D. M., T. F. Burks and V. Alchanatis, Image fusion of visible and thermal images for fruit detection, Biosys. Eng. 103(1), 1222 (2009). | Ghassemian, H., A retina based multi-resolution image-fusion, in Proc.IEEE Int. Geosci. Rem. Sens. Symp. IGRSS2001, IEEE Press, Los Alamitos, CA, USA, pp. 709711, (2001). | Jiang, D.et al., Survey of multispectral image fusion techniques in remote sensing applications, in Image fusion and its applications, In Tech Open, Rijeka, Croatia, pp. 122 (2011). |

35.

36.

37.

Jacobson, N. P. and M. R. Gupta, Design goals and solutions for display of hyperspectral images, IEEE Trans. Geosci. and Rem. Sens. 43(11), 26842692 (2005). | Jacobson, N. P., M. R. Gupta and J. B. Cole, Linear fusion of image sets for display, IEEE Trans. Geosci. and Rem. Sens. 45(10), 32773288 (2007). | Daneshvar, S. and H. Ghassemian, MRI and PET image fusion by combining IHS and retina-inspired models, Inform. Fusion 11(2), 114123 (2010). | Yongqiang, Z., Z. Lei and P. Quan, Spectropolarimetric imaging for pathological analysis of skin, Appl. Opt. 48(10), D236D246 (2009). [MEDLINE] | Zaidi, M., M. L. Montandon and A. Alav, The clinical role of fusion imaging using PET, CT, and MR imaging, PET Clinics 3(3), 275291 (2009). | Rojas, G. M.et al., Image fusion in neuroradiology: three clinical examples including MRI of Parkinson disease, Comp. Med. Imag. Graphics 31(1), 1727 (2006). [Inspec] | Blum, R. S. and Z. Liu, Multi-sensor image fusion and its applications, CRC Press, Taylor & Francis Group, Boca Raton, Florida, USA (2006). | Smith, M. I. and J. P. Heather, Review of image fusion technology in 2005, Proc. of SPIE 5782, 2945 (2005). | Li, G. and K. Wang, Applying daytime colors to nighttime imagery with an efficient color transfer method, Proc. SPIE 6559, 65590L (2007). | Shi, J.et al., Objective evaluation of color fusion of visual and IR imagery by measuring image contrast, Proc. SPIE 5640, 594601 (2005). | Shi, J. S., W. Q. Jin and L. X. Wang, Study on perceptual evaluation of fused image quality for color night vision, J. Infrared Millimeter Waves 24(3), 236240 (2005). | Tsagaris, V. and V. Anastassopoulos, Fusion of visible and infrared imagery for night color vision, Displays 26(45), 191196 (2005). | Zheng, Y.et al., Coloring night-vision imagery with statistical properties of natural colors by using image segmentation and histogram matching, Proc. SPIE 5667, 107117 (2005). | Horn, S.et al., Monolithic multispectral FPA, in Proc. Int. Military Sens. Symp., Paris France, pp.

38.

39.

40.

41.

42.

43.

44.

45.

46.

47.

48.

49.

50.

118pp. (2002). | 51. Hossny, M., S. Nahavandi and D. Creighton, Color map-based image fusion, in Proc. IEEE Int. Conf. Ind. Info. 2008 (INDIN 2008), IEEE Press, Los Alamitos, CA, USA, pp. 5256 (2008). |

52.

Kong, W., Y. Lei and X. Ni, Fusion technique for grey-scale visible light and infrared images based on non-subsampled contourlet transform and intensity-hue-saturation transform, IET Sig. Proc. 5(1), pp. 7580 (2011). [MEDLINE] | Scribner, D.et al., Infrared color vision: an approach to sensor fusion, Opt. Photon. News 9(8), 2732 (1998). | Scribner, D., P. Warren and J. Schuler, Extending color vision methods to bands beyond the visible, in

53.

54.

Proc. IEEE Workshop on Comp. Vis. Beyond Vis. Spec.: Methods and Applications, IEEE, pp. 3340
(1999). | 55. Stuart, G. W. and P. K. Hughes, Towards understanding the role of colour information in scene perception using night vision devices, Report DSTO-RR-0345, DSTO Defence Science and Technology Organisation, Fishermans Bend, Victoria, Australia, (2009). | Sun, F., S. Li and B. Yang, A new color image fusion method for visible and infrared images, in Proc. IEEE Int. Conf. on Robotics Biomim., IEEE Press, Los Alamos, CA, USA, pp. 20432048 (2007). |

56.

57.

Tsagaris, V. and V. Anastassopoulos, Multispectral image fusion for improved RGB representation based on perceptual attributes, Int. J. of Rem. Sens. 26(15), 32413254 (2005). |

58.

Su, Y.et al., Approach to maximize increased details and minimize color distortion for IKONOS and QuickBird image fusion, Opt. Eng. 43(12), 30293037 (2004). [ISI] | Tu, T. M.et al., Adjustable intensity-hue-saturation and Brovey transform fusion technique for IKONOS/QuickBird imagery, Opt. Eng. 44(11), 116201(2005). | Yang, B., F. Sun and S. Li, Region-based color fusion method for visible and IR image sequences, in Proc. Chinese Conf. Patt. Recog. 2008 (CCPR '08), IEEE Press, Los Alamitos, CA, USA, pp. 16 (2008). | Yin, S. F.et al., Color contrast enhancement method to improve target detectability in night vision fusion, J. Infrared Millim. Waves 28(4), 281284 (2009). | Yin, S.et al., One color contrast enhanced infrared and visible image fusion method, Infrared Phys. Technol. 53(2), 146150 (2010). [Inspec] | Zaveri, T.et al., An optimized region-based color transfer method for night vision application, in Proc. 3rd IEEE Int. Conf. Sig. Imag. Process. (ICSIP 2010), IEEE Press, Los Alamitos, CA, USA, pp. 96101 (2010). |

59.

60.

61.

62.

63.

64.

Zhang, J.et al., Region-based fusion for infrared and LLL images, in Image Fusion, INTECH, http://intechweb.org, pp. 285302 (2011). | Zheng, Y. and E. A. Essock, A local-coloring method for night-vision colorization utilizing image analysis and fusion, Inform. Fusion 9(2), 186199 (2008). | Zheng, Y., An exploration of color fusion with multispectral images for night vision enhancement, in

65.

66.

Image fusion and its applications, Y. Zheng, Ed., InTech Open, Rijeka, Croatia, pp. 3554 (2011). |

67.

Li, G. and K. Wang, Merging infrared and color visible images with an contrast enhanced fusion method, Proc. SPIE 6571, 657108 (2007). | Goffaux, V.et al., Diagnostic colours contribute to the early stages of scene categorization: behavioural and neurophysiological evidence, Vis. Cog. 12(6), 878892 (2005). | Rousselet, G. A., O. R. Joubert and M. Fabre-Thorpe, How long to get the gist of real-world natural scenes?, Vis. Cog. 12(6), 852877 (2005). | Cavanillas, J. A., The role of color and false color in object recognition with degraded and non-degraded images, Naval Postgraduate School, Monterey, CA (1999). | Oliva, A. and P. G. Schyns, Diagnostic colors mediate scene recognition, Cognit. Psychol. 41, 176210 (2000). [ISI] [MEDLINE] | Sampson, M. T., An assessment of the impact of fused monochrome and fused color night vision displays on reaction time and accuracy in target detection, Report AD-A321226, Naval Postgraduate School, Monterey, CA (1996). | Spence, I.et al., How color enhances visual memory for natural scenes, Psychol. Sci. 17(1), 16 (2006). | Gegenfurtner, K. R. and J. Rieger, Sensory and cognitive contributions of color to the recognition of natural scenes, Curr. Bio. 10(13), 805808 (2000). [ISI] [MEDLINE] | Wichmann, F. A., L. T. Sharpe and K. R. Gegenfurtner, The contributions of color to recognition memory for natural scenes, J. Exp. Psych.: Learn. Mem. Cog. 28(3), 509520 (2002). [ISI] [MEDLINE] |

68.

69.

70.

71.

72.

73.

74.

75.

76.

Ansorge, U., G. Horstmann and E. Carbone, Top-down contingent capture by color: evidence from RT distribution analyses in a manual choice reaction task, Acta Psychologica 120(3), 243266 (2005). |

77.

Green, B. F. and L. K. Anderson, Colour coding in a visual search task, J. Exp. Psychol. 51(1), 1924 (1956). [MEDLINE] | Folk, C. L. and R. Remington, Selectivity in distraction by irrelevant featural singletons: evidence for two forms of attentional capture, J. Exp. Psych.: Human Percep. and Perf. 24(3), 847858 (1998). |

78.

79.

Driggers, R. G.et al., Target detection threshold in noisy color imagery, in Proc. SPIE 4372, 162169 (2001). | Lanir, J., M. Maltz and S. R. Rotman, Comparing multispectral image fusion methods for a target detection task, Opt. Eng. 46(6), 066402 (2007). | Martinsen, G. L., J. S. Hosket and A. R. Pinkus, Correlating military operators' visual demands with multi-spectral image fusion, in Proc. SPIE 6968, 69681S 2008). | Joseph, J. E. and D. R. Proffitt, Semantic versus perceptual influences of color in object recognition, J. Exp. Psych.: Learn. Mem. Cogn. 22(2), 407429 (1996). | Oliva, A., Gist of a scene, in Neurobiology of Attention, L. Itti, G. Rees and J. K. Tsotsos, Eds., Academic Press, pp. 251256 (2005). |

80.

81.

82.

83.

84.

, Colouring the near-infrared, in Proc. IS&T/SID 16th Color Imag. Conf., pp. 176182 The Society for Imaging Science and Technology, Springfield, VA, USA (2008). | Sinai, M. J., J. S. McCarley and W. K. Krebs, Scene recognition with infra-red, low-light, and sensor fused imagery, in Proc. IRIS Spec. Groups Pass. Sens., IRIS, Monterey, CA, pp. 19 (1999). |

85.

86.

Krebs, W. K. and M. J. Sinai, Psychophysical assessments of image-sensor fused imagery, Human Fact. 44(2), 257271 (2002). [Inspec] [MEDLINE] | McCarley, J. S. and W. K. Krebs, Visibility of road hazards in thermal, visible, and sensor-fused night-time imagery, Appl. Erg. 31(5), 523530 (2000). [MEDLINE] | Toet, A. and J. K. IJspeert, Perceptual evaluation of different image fusion schemes, Proc. SPIE 4380, 436441 (2001). | Vargo, J. T., Evaluation of operator performance using true color and artificial color in natural scene perception, Report AD-A363036, Naval Postgraduate School, Monterey, CA (1999). |

87.

88.

89.

90.

Toet, A.et al., Fusion of visible and thermal imagery improves situational awareness, Proc. SPIE 3088, 177188 (1997). | Sinai, M. J.et al., Psychophysical comparisons of single- and dual-band fused imagery, Proc. SPIE 3691, 176183 (1999). | Essock, E. A.et al., Perceptual ability with real-world nighttime scenes: image-intensified, infrared, and fused-color imagery, Human Fact. 41(3), 438452 (1999). [ISI] [MEDLINE] | White, B. L., Evaluation of the impact of multispectral image fusion on human performance in global scene processing, Postgraduate School, Monterey, CA (1998). | Essock, E. A.et al., Human perceptual performance with nonliteral imagery: region recognition and texture-based segmentation, J. Exp. Psych.: Appl. 10(2), 97110 (2004). | Gu, X., S. Sun and J. Fang, Coloring night vision imagery for depth perception, Chinese Opt. Lett. 7(5), 396399 (2009). | Krebs, W. K.et al., Beyond third generation: a sensor-fusion targeting FLIR pod for the F/A-18, Proc. SPIE 3376, 129140 (1998). | Sun, S.et al., Color fusion of SAR and FLIR images using a natural color transfer technique, Chinese Opt. Lett. 3(4), 202204 (2005). | Toet, A., Natural colour mapping for multiband nightvision imagery, Info. Fus. 4(3), 155166 (2003). |

91.

92.

93.

94.

95.

96.

97.

98.

99.

Wang, L.et al., Color fusion schemes for low-light CCD and infrared images of different properties, Proc. SPIE 4925, 459466 (2002). | Li, J.et al., Color based grayscale-fused image enhancement algorithm for video surveillance, in Proc. Third Int. Conf. Imag. and Graph. (ICIG'04), IEEE Press, Los Alamitos, CA, USA, pp. 4750 (2004). |

100.

101.

Howard, J. G.et al., Real-time color fusion of E/O sensors with PC-based COTS hardware Proc. SPIE 4029, 4148 (2000). |

102.

Scribner, D.et al., Sensor and image fusion, in Encyclopedia of Optical Engineering, R. G. Driggers, Ed., Marcel Dekker Inc., New York, USA, pp. 25772582 (2003). | Schuler, J.et al., Multiband E/O color fusion with consideration of noise and registration, Proc. SPIE 4029, 3240 (2000). | Waxman, A. M.et al., Color night vision: fusion of intensified visible and thermal IR imagery, in Proc. SPIE 2463, 5868 (1995). | Warren, P.et al., Real-time, PC-based color fusion displays, Report A073093, Naval Research Laboratory, Washington DC (1999). | Fay, D. A.et al., Fusion of multi-sensor imagery for night vision: color visualization, target learning and search, in Proc. 3rd Int. Conf. Inform. Fusion, ONERA, Paris, France, pp. TuD3-3TuD3-10 (2000). |

103.

104.

105.

106.

107.

Aguilar, M.et al., Real-time fusion of low-light CCD and uncooled IR imagery for color night vision, Proc. SPIE 3364, 124135 (1998). | Waxman, A. M.et al., Opponent-color fusion of multi-sensor imagery: visible, IR and SAR, in Proceedings of the 1998 Conference of the IRIS Specialty Group on Passive Sensors, Environmental Research Institute of Michigan (ERIM), Ann Arbor, MI, USA, pp. 4361(1998). | Aguilar, M.et al., Field evaluations of dual-band fusion for color night vision, Proc. SPIE 3691, 168175 (1999). | Fay, D. A.et al., Fusion of 2- /3- /4-sensor imagery for visualization, target learning, and search, in Proc. SPIE 4023, pp. 106115, (2000). | Waxman, A. M.et al., Color night vision: opponent processing in the fusion of visible and IR imagery, Neural Net 10(1), 16 (1997). [Inspec] [ISI] [MEDLINE] | Huang, G., G. Ni and B. Zhang, Visual and infrared dual-band false color image fusion method motivated by Land's experiment, Opt. Eng. 46(2), 027001 (2007). | Li, Z., Z. Jing and X. Yang, Color transfer based remote sensing image fusion using non-separable wavelet frame transform, Pattern Recogn. Lett. 26(13), 20062014 (2005). | Li, G., S. Xu and X. Zhao, Fast color-transfer-based image fusion method for merging infrared and visible images, in Proc. SPIE 7710, 77100S (2010). | Li, G., S. Xu and X. Zhao, An efficient color transfer algorithm for recoloring multiband night vision imagery, in Proc. SPIE 7689, 76890A (2010). | Li, G., Image fusion based on color transfer technique, in Image fusion and its applications, Y. Zheng, Ed., InTech Open, Rijeka, Croatia, pp. 5572 (2011). | Shen, H. and P. Zhou, Near natural color polarization imagery fusion approach, in Proceedings of the 2010 3rd International Congress on Image and Signal Processing (CISP 2010), IEEE Press, Los Alamitos, CA, USA, pp. 28022805 (2010). | Sun, S.et al., Transfer color to night vision images, Chinese Opt. Lett. 3(8), 448450 (2005). |

108.

109.

110.

111.

112.

113.

114.

115.

116.

117.

118.

119.

Sun, S. and H. Zhao, Perceptual evaluation of color night vision image quality, in Spec. Sess. Imag. Fusion, Vid. Fusion Assess. Proc.10th International Conference on Information Fusion, S. Nikolov and

A. Toet, Eds., ISIF, Mountain View, CA, pp. 17 (2007). | 120. Sun, S. and H. Zhao, Natural color mapping for FLIR images, in Proc. 1st Int. Cong. Imag. Sig. Process. CISP 2008, IEEE Press, Los Alamitos, CA, USA, pp. 4448 (2008). | Wang, L.et al., Color transfer based on steerable pyramid and hot contrast for visible and infrared images, in Proc. SPIE 6833, 68331D (2008). | Wang, L.et al., Real-time color transfer system for low-light level visible and infrared images in YUV color space, Proc. SPIE 6567, 65671G (2007). | Hogervorst, M. A. and A. Toet, Fast natural color mapping for night-time imagery, Inform. Fusion 11(2), 6977 (2010). | Hogervorst, M. A. and A. Toet, Presenting nighttime imagery in daytime colours, in Proc.11th Int. Conf. Inform. Fusion, ISIF, Cologne, Germany, pp. 706713 (2008). | Hogervorst, M. A. and A. Toet, Method for applying daytime colors to nighttime imagery in realtime, Proc. SPIE 6974, 697403 (2008). | Toet, A. and J. Walraven, New false colour mapping for image fusion, Opt. Eng. 35(3), 650658 (1996). [ISI] | Hogervorst, M. A., A. Toet and F. L. Kooi, TNO Defense Security and Safety. Method and system for converting at least one first-spectrum image into a second-spectrum image, Patent Number PCT/NL2007050392, Application Number 0780855.5-2202 (2006). | Waxman, A. M.et al., Electronic imaging aids for night driving: low-light CCD, thermal IR, and color fused visible/IR, Proc. SPIE 2902, 6273 (1996). | Waxman, A. M.et al., Progress on color night vision: visible/IR fusion, perception and search, and low-light CCD imaging, Proc. SPIE 2736, 96107 (1996). | Gove, A. N., R. K. Cunningham and A. M. Waxman, Opponent-color visual processing applied to multispectral infrared imagery, in Proc.1996 Meet.IRIS Spec.Group Pass. Sens., Infrared Information Analysis Center, ERIM, Ann Arbor, MI, pp. 247262 (1996). | Waxman, A. M.et al., Solid-state color night vision: fusion of low-light visible and thermal infrared imagery, MIT Linc. Lab. J. 11, 4160 (1999). | Schiller, P. H., Central connections of the retinal ON and OFF pathways, Nature 297(5867), 580583 (1982). [Inspec] [ISI] [MEDLINE] | Schiller, P. H., The connections of the retinal on and off pathways to the lateral geniculate nucleus of the monkey, Vis. Res. 24(9), 923932 (1984). [Inspec] | Schiller, P. H., J. H. Sandell and J. H. Maunsell, Functions of the ON and OFF channels of the visual system, Nature 322(6082), 824825 (1986). [Inspec] [ISI] [MEDLINE] | Schiller, P. H., The ON and OFF channels of the visual system, Trend. Neurosci. 15(3), 8692 (1992). [ISI] [MEDLINE] | Newman, E. A. and P. H. Hartline, Integration of visual and infrared information in bimodal neurons of the rattlesnake optic tectum, Science 213, 789791 (1981). [Inspec] [MEDLINE] |

121.

122.

123.

124.

125.

126.

127.

128.

129.

130.

131.

132.

133.

134.

135.

136.

137.

Newman, E. A. and P. H. Hartline, The infrared vision of snakes, Sci. Amer. 246(3), 116127 (1982). |

138.

Gouras, P., Color vision, in Principles of Neural Science, 3rd ed, E. R. Kandel, J. H. Schwartz and T. M. Jessel, Eds., Elsevier, Oxford, UK, pp. 467480 (1991). | Schiller, P. H. and N. K. Logothetis, The color-opponent and broad-band channels of the primate visual system, Trend. Neurosci. 13(10), 392398 (1990). [ISI] [MEDLINE] | Waxman, A. M.et al., Neural processing of targets in visible, multispectral IR and SAR imagery, Neur. Net. 8(7/8), 10291051 (1995). [Inspec] | Grossberg, S., Neural networks and natural intelligence, MIT Press, Cambridge, MA (1988). |

139.

140.

141.

142.

Simard, P., N. K. Link and R. V. Kruk, Evaluation of algorithms for fusing infrared and synthetic imagery, Proc. SPIE 4023, 127138 (2000). | Simard, P., N. K. Link and R. V. Kruk, Feature detection performance with fused synthetic and sensor images, in Proc. 43rd Ann. Meet. Human Fact.Erg. Soc., Human Factors and Ergonomics Soc., pp. 11081112 (1999). | Gagnon, L., F. Lalibert and M. Lalonde, R&D Activities in Retinal Image Analysis at CRIM in Proc.28th Canadian Med. Bio. Eng. Soc. Conf. (CMBEC28), Quebec City, Quebec, Canada, pp. 14, 2004). | Kolar, R., L. Kubecka and J. Jan, Registration and fusion of the autofluorescent and infrared retinal images, Int. J. Biomed. Imag. 2008(1513478), 111 (2008). | Lalibert, F., L. Gagnon and Y. Sheng, Registration and fusion of retinal images - an evaluation study, IEEE Transac. Med. Imag. 22(5), 661673 (2003). [Inspec] [ISI] [MEDLINE] | Lalibert, F., L. Gagnon and Y. Sheng, Registration and fusion of retinal images: a comparative study, in Int. Conf. Patt. Recog. 2002, IEEE Computer Society, Washington DC, USA, pp. 715718 (2002). |

143.

144.

145.

146.

147.

148.

Lalibert, F. and L. Gagnon, Studies on registration and fusion of retinal images, in Multi-Sensor Image Fusion and its Applications R. S. Blum and Z. Liu, Eds., USA: Taylor & Francis CRC Press, Boca Raton, Florida, pp. 57106 (2006). | Reinhard, E.et al., Color transfer between images, IEEE Comp. Graph. Appl. 21(5), 3441 (2001). |

149.

150.

Fairchild, M. D., Color Appearance Models, Addison Wesley Longman, Inc., Reading, MA (1998). |

151.

Ruderman, D. L., T. W. Cronin and C. C. Chiao, Statistics of cone responses to natural images: implications for visual coding, J. Opt. Soc. of Am. A 15(8), 20362045 (1998). [ISI] |

152.

Heckbert, P., Color image quantization for frame buffer display, Comp. Graph. 19(3), 297307 (1982). [Inspec] | Das, S., Y. L. Zhang and W. K. Krebs, Color night vision for navigation and surveillance, in Proc.Ann. Meet. Transp. Res. Board No. 79, J. Sutton and S. C. Kak, Eds., National Research Council, Washington DC, pp. 4046 (2000). |

153.

154.

Roth, L. S. V. and A. Kelber, Nocturnal colour vision in geckos, Proc. Roy. Soc. London B: Bio. Sci. 271(Bio. Lett. Supp. 6), pp. S485S487 (2006). | Toet, A. and M. A. Hogervorst, Portable real-time color night vision, Proc. SPIE 6974, 697402 (2008). |

155.

156.

Hogervorst, M. A. and A. Toet, Evaluation of a color fused dual-band NVG-sensor Proc. SPIE 7345, 734502 (2009). | Toet, A. and M. A. Hogervorst, TRICLOBS portable triband lowlight color observation system, Proc. SPIE 7345, 734503 (2009). | Nijboer, T. C.et al., Recognising the forest, but not the trees: an effect of colour on scene perception and recognition, Consci. Cog. 17(3), 741752 (2008). | Toet, A. and E. M. Franken, Perceptual evaluation of different image fusion schemes, Displays 24(1), 2537 (2003). [Inspec] | Toet, A., Fusion of images from different electro-optical sensing modalities for surveillance and navigation tasks. in Multi-Sensor Image Fusion and Its Application, R. S. Blum and Z. Liu, Eds., Taylor & Francis CRC Press, Boca Raton, Florida, USA, pp. 237264 (2006). | Toet, A., Cognitive image fusion and assessment, in Image Fusion, O. Ukimura, Ed., INTECH, http://intechweb.org, pp. 303340 (2011). | Dixon, T. D.et al., Selection of image fusion quality measures: objective, subjective, and metric assessment, J. Opt. Soc. Am. A 24(12), B125B135 (2007). | Neriani, K. E., A. R. Pinkus and D. W. Dommett, An investigation of image fusion algorithms using a visual performance-based image evaluation methodology, Proc. SPIE 6968, 696819 (2008). |

157.

158.

159.

160.

161.

162.

163.

164.

Steele, P. M. and P. Perconti, Part task investigation of multispectral image fusion using gray scale and synthetic color night vision sensor imagery for helicopter pilotage, Proc. SPIE 3062, 88100 (1997). |

165.

Blum, R. S., On multisensor image fusion performance limits from an estimation theory perspective, Inform. Fusion 7(3), 250263 (2006). | Corsini, G.et al., Enhancement of sight effectiveness by dual infrared system: evaluation of image fusion strategies, in Proc. 5th Inter. Conf. Tech. Autom. (ICTA'05), pp. 376381 (2006). |

166.

167.

Tsagaris, V. and V. Anastassopoulos, Information measure for assessing pixel-level fusion methods, Proc. SPIE 5573, 6471 (2004). | Piella, G. and H. J. A. M. Heijmans, A new quality metric for image fusion, in Proceedings of the IEEE International Conference on Image Processing, IEEE Press, Los Alamitos, CA, USA, III-209III-212 (2003). | Toet, A. and M. A. Hogervorst, Performance comparison of different graylevel image fusion schemes through a universal image quality index, Proc. SPIE 5096, 552561 (2003). | Chen, H. and P. K. Varshney, A human perception inspired quality metric for image fusion based on regional information Inform. Fusion 8(2), 193207 (2007). | Yang, C.et al., A novel similarity based quality metric for image fusion, Inform.Fusion 9(2), 156160

168.

169.

170.

171.

(2007). | 172. Zheng, Y.et al., A new metric based on extended spatial frequency and its application to DWT based fusion algorithms, Inform. Fusion 8(2), 177192 (2007). | Wang, Q. and Y. Shen, Performance assessment of image fusion, in Advances in Image and Video Technology, Springer Verlag, Heidelberg/Berlin, Germany, pp. 373382 (2006). |

173.

174.

Chari, S. K.et al., LWIR and MWIR fusion algorithm comparison using image metrics, Proc. SPIE 5784, 1626 (2005). | Cvejic, N.et al., A novel metric for performance evaluation of image fusion algorithms, Trans. Eng. Comp. Tech., V7, 8085 (2005). | Ulug, M. E. and L. Claire, A quantitative metric for comparison of night vision fusion algorithms, Proc. SPIE 4051, 8088 (2000). | Xydeas, C. S. and V. S. Petrovic, Objective pixel-level image fusion performance measure, Proc. SPIE 4051, 8998 (2000). | Cvejic, N.et al., A similarity metric for assessment of image fusion algorithms, Int. J. Sig. Proces. 2(2), 178182 (2005). | Chen, Y. and R. S. Blum, A new automated quality assessment algorithm for image fusion, Imag. Vis. Comp. 27(10), 14211432 (2009). [Inspec] | Tsagaris, V. and V. Anastassopoulos, Global measure for assessing image fusion methods, Opt. Eng. 45(2), 026201 (2006). | Han, Y.et al., A new image fusion performance metric based on visual information fidelity, Inform. Fusion, in press (2011). | Chen, Y., Z. Xue and R. S. Blum, Theoretical analysis of an information-based quality measure for image fusion, Inform. Fusion 9(2), 161175 (2008). | Haghighata, M. B. A., A. Aghagolzadeh and H. Seyedarabi, A non-reference image fusion metric based on mutual information of image features, Comp. Elec. Eng., Online First (2011). |

175.

176.

177.

178.

179.

180.

181.

182.

183.

184.

Tsagaris, V., Objective evaluation of color image fusion methods, Opt. Eng. 48(6), 066201 (2009). |

185.

Tsagaris, V., N. Fragoulis and C. Theoharatos, Performance evaluation of image fusion methods, in

Image Fusion, O. Ukimura, Ed., InTech Education and Publishing, Vienna, Austria, pp. 7188 (2011). |

186.

Zhu, X. and Y. Jia, A method based on IHS cylindrical transform model for quality assessment of image fusion, Proc. SPIE 6044, 607615 (2005). | Yuan, Y.et al., Objective quality evaluation of visible and infrared color fusion image, Opt. Eng. 50(3), 033202 (2011). | Zhang, X., A novel quality metric for image fusion based on color and structural similarity, in 2009 Int. Conf. Sig. Proces. Sys., IEEE Press, Los Aalamitos, CA, pp. 258262 (2009). |

187.

188.

189.

Mitianoudis, N. and T. Stathaki, Optimal contrast for color image fusion using ICA bases, in Proc. 11th Int. Conf. Inform. Fusion, FUSION 2008, Article number 4632419 - ISIF, Cologne, Germany, 17pp. (2008). | Dixon, T. D.et al., Methods for the assessment of fused images, ACM Trans. Appl. Percep. 3(3), 309332 (2006). | Liu, Z.et al., Objective assessment of multiresolution fusion algorithms for context enhancement in night vision: a comparative study, IEEE Trans. Patt. Analy.Mach. Intel. PAMI, 34(1), 94109 (2011). [Inspec] |

190.

191.

192.

Toet, A.. See www.scivee.tv/node/29094, for a video presentation, and www.scivee.tv/node/29095, for a video documentary showing the system in action. 2011 |

© 2012 Society of Photo-Optical Instrumentation Engineers
