
Signal Detection Theory Basics

by John Michael Williams


jmmwill@comcast.net jwill@BasicISP.net

2012-08-02
An elementary presentation of signal detection theory and of the modulation transfer function (MTF).

Keywords: signal detection, signal detection theory, psychophysics, likelihood ratio, forced-choice experiment, experimental psychology, receiver operating characteristic, ROC curve, jnd, sensitivity, criterion, isosensitivity curve, Poisson, absolute threshold, normality assumption, modulation transfer function, MTF.

Copyright (c) 2012 by John Michael Williams. All rights reserved.

Signal Detection Terminology [1]


In classical psychophysics, a threshold is determined by finding the lowest stimulus strength at which the subject's (S's) response changes as a function of the stimulus applied. In signal-related work, the stimulus, of course, is a signal. The change in response to a signal at threshold is referred to as a just-noticeable-difference (jnd). For a sufficiently weak stimulus, the response by S will be the same as that for no stimulus at all; for a sufficiently strong stimulus, S ideally always will emit at least a jnd response.

In any case, a response to noise (irrelevant sensory input) always is considered to be possible; noise always provides a background against which the true stimulus, a signal, may be present. Thus, presentation of the stimulus is treated as equivalent to presentation of signal-plus-noise. When no signal is being emitted, all stimulation may be assumed to be because of noise alone.

In signal detection theory, the classical approach above is elaborated by considering the jnd to represent an intervening variable called sensitivity. The problem to which signal detection theory is addressed, then, is that of separating the effects of sensitivity from those of a different intervening variable, called criterion. Intuitively, criterion may be identified with S's tendency (or likelihood) to emit a jnd. S is viewed as an organism capable of inhibiting a jnd independent of signal strength; if the tendency to inhibit is high, criterion is said to be high.

Signal detection theory, as of the present time, most typically refers to two-alternative, forced-choice (2AFC) experimental tasks. This means that one of only two possible stimulus conditions is present on any given task, and that the experimental subject, S, is counted as either responding (jnd) or not responding on each task. Each new task is called a "trial".
In such an experiment, a jnd will not always be emitted in the presence of a stimulus which is near S's sensory threshold; this is of crucial interest to the experimenter. There are four possible results on any given 2AFC trial:

1. If the response is emitted on a trial in which a stimulus (signal) in fact was present, the jnd may be referred to as a "hit", or "correct detection".
2. If a response is emitted with no signal present, the jnd is referred to as a "false alarm" or "false positive".
3. If no response is emitted but the signal is presented, the trial is referred to as a "miss".
4. If no response is emitted and no signal is presented, the trial is scored as a "correct rejection".
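The fourfold classification above can be sketched directly; the trial records below are hypothetical, purely for illustration:

```python
from collections import Counter

def score_trials(trials):
    """Classify (signal_present, responded) pairs into the four 2AFC outcomes."""
    names = {
        (True, True): "hit",
        (False, True): "false alarm",
        (True, False): "miss",
        (False, False): "correct rejection",
    }
    return Counter(names[t] for t in trials)

# Hypothetical record of eight trials: (signal presented?, jnd emitted?)
trials = [(True, True), (True, False), (False, True), (False, False),
          (True, True), (False, False), (True, True), (False, True)]
counts = score_trials(trials)

# Hit rate = hits / signal trials; false-alarm rate = false alarms / noise trials
hit_rate = counts["hit"] / sum(1 for s, _ in trials if s)
fa_rate = counts["false alarm"] / sum(1 for s, _ in trials if not s)
```

The (hit rate, false-alarm rate) pair computed this way is exactly the coordinate pair that one criterion setting contributes to an ROC curve.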
==================== 1. This paper is based in part on work by the author in 1979 at a seminar presented at Southern Illinois University, Carbondale, under direction of Professor Alfred Lit. The references may require library access. However, the subject matter is current, because the science, excepting minor terminology and style differences, has remained unchanged over the past thirty or more years.

Criterion is considered to be a variable which is subject to experimental manipulation; it especially is considered to be a function of the instructions given to the experimental subject, and of the immediate past-history of any conditioning (training) given previous to, or during, the experiment (see Galanter, 1962, pp. 94 - 114). By contrast, sensitivity usually is not considered to vary (except parametrically) and may be treated as an inherent, relatively stable characteristic of the sensory system(s) and central nervous system of the organism under study. An underlying simile here is that the organism resembles a radio receiver and, like such a receiver, may be viewed as having a Receiver Operating Characteristic (ROC) curve.

Figure 1: Organization of a typical ROC curve. Pd = probability of detection; PFA = probability of false alarm.

An ROC is a curve typically plotted against a 45° line drawn ascending from left to right inside a square region from (0,0) to (1,1). An example is sketched in Figure 1. The ROC registers any reception by bulging upward and to the left; the larger this bulge, the more salient the received signal. Such a curve would depend in part upon a parameter conventionally represented as d', which gives the constant underlying sensitivity of the organism regardless either of criterion or of the particular stimulus being presented on an experimental trial. Relevant references to the standard works on signal detection theory may be found in Luce (1963).

Likelihoods and Criteria


Let s represent the signal and n the noise. Following Luce (1963), then, stimuli may be considered to have certain effects x, x = (x1, x2, . . . , xk) being a vector in R^k, such that subjective probabilities p_subj[anything/(s+n)] and p_subj[anything/n] are estimated by S and control the response on each trial in the given experimental situation. When event x occurs, S is inferred to have based a decision upon the likelihood ratio

L(x) = p_subj[x/(s+n)] / p_subj[x/n] .   (1)

If the individual components of x are considered separately, of course, p_subj[x/(s+n)] may be defined as the joint probability p_subj[x1/(s+n)] p_subj[x2/(s+n)] . . . p_subj[xk/(s+n)], which may be written more compactly as

p_subj[x/(s+n)] = ∏_{i=1}^{k} p_subj[xi/(s+n)] .   (2)

In this analysis, if L(x) in (1) above is large, S is assumed to have decided that (s+n) has been presented and acts accordingly. If L(x) is small, S is assumed to have decided that n has occurred. The decision is assumed to have been based on a constant criterion value of L(x), which, if exceeded, will result in a jnd.

As mentioned in Clark (1978), a criterion may be treated (a) as a fixed value of log[L(x)], as above, or (b) as a sort of "intensity" C(x) of sensory experience, in subjective or objective standard-deviation units. An objective test of the difference between Clark's two alternatives is not simple: It is possible to generate an ROC curve by varying the experimental conditions in such a way that S changes criterion under known conditions. Because the ROC curve is monotonic, L(x) defined as a ratio of probability densities (ordinates) is identical to C(x) defined, for fixed d', as a point on an abscissa. The technicalities here have been somewhat simplified; the reader should seek further information and quantitative details in Luce (1963), and in Galanter (1962, esp. p. 102).
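A minimal sketch of this decision rule, assuming (purely for illustration) an equal-variance normal model with d' = 1.5 and a unit criterion on L(x):

```python
from statistics import NormalDist

# Hypothetical equal-variance model: noise ~ N(0, 1), signal+noise ~ N(1.5, 1)
noise = NormalDist(0.0, 1.0)
signal_plus_noise = NormalDist(1.5, 1.0)

def likelihood_ratio(x):
    """L(x): ratio of the s+n density ordinate to the n density ordinate at effect x."""
    return signal_plus_noise.pdf(x) / noise.pdf(x)

def respond(x, criterion=1.0):
    """Emit a jnd ("yes") only when L(x) exceeds the constant criterion value."""
    return likelihood_ratio(x) > criterion

# L(x) is monotone in x for this model, so a criterion on L(x) is equivalent
# to a cutoff on x itself: L(x) > 1  <=>  x > d'/2 = 0.75.
```

The monotonicity noted in the comment is the same property invoked in the text to equate a criterion on L(x) with a point C(x) on an abscissa.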

The ROC and likelihood. Once the ROC curve has been determined as above for a given experiment, likelihood ratios for any given criterion, hit rate, or false-alarm rate may be calculated. These likelihood ratios are just slopes of the ROC curve (see Clark, 1978). The ROC curve also may be estimated by computing likelihood ratios at various criterion values. Such ratios give the slope of the ROC curve at the coordinates of the hits or false alarms determined by each distinct criterion value chosen. Line segments drawn to represent the likelihood ratios, if numerous enough, will bound from above a specific area approximately equal to that bounded by the ROC curve being estimated. Furthermore, given various assumptions as to randomness and the distributions of response errors, this same area may be used, in addition, to derive a value for d'.

Normality assumptions. In a 2AFC design, signal detection theory depends upon the assumption that both the signal + noise and the noise alone produce sensory effects random in magnitude but different in probability distribution. Most often, s + n and n, in their effects, are assumed to be normally distributed and of equal variance. These assumptions make d', the sensitivity measure of the theory, equal to the value of Student's t statistic, should a significance test be performed between the s + n and n mean responses.

It should be mentioned that the ideas of "signal" and "noise" derive from communications circumstances in which a genuine signal (or code) is sent from one human being to another; under such circumstances, the transmitted and received forms of a message, as reported or acted upon, can be compared directly for accuracy. By contrast, in empirical research such as Clark's 1978 study of pain, the stimulus applied may be known, but the response can be measured only as finally emitted. Under these conditions, whether one part of S's body may be considered as "signalling" some other part in any meaningful way becomes something of a question. This question usually, however, may be treated as one of pragmatic importance, only: If the approach works, then it is tenable -- even if it implies underlying factors not amenable to further study.
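Under the equal-variance normal assumption just stated, d' can be recovered from a single (hit rate, false-alarm rate) pair as a difference of z-scores. A minimal sketch using Python's standard library; the rates below are hypothetical:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Equal-variance Gaussian sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf   # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

def criterion_c(hit_rate, fa_rate):
    """Criterion location relative to the two means: c = -(z(hit) + z(fa)) / 2."""
    z = NormalDist().inv_cdf
    return -(z(hit_rate) + z(fa_rate)) / 2

# Hypothetical rates: 84.13% hits and 15.87% false alarms put the two
# z-scores at about +1 and -1, so d' is about 2 with an unbiased criterion.
```

Changing only the criterion moves the (hit, false-alarm) point along one isosensitivity curve; d_prime returns (approximately) the same value at every such point.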

Multiple Alternatives
Signal detection theory is not limited to 2AFC designs. For example, the likelihood of any particular category in an n-alternative FC (nAFC) design may be computed under a variety of criterion conditions, thus allowing an ROC curve to be plotted for each "yes" alternative allowed; the remaining stimulus alternatives in each case may be lumped together as "noise". Furthermore, there is no good reason why any two yesses might not be considered a single yes in the context of the combined signal-plus-noise of the two, the remaining alternatives again being lumped as "noise". However, because a likelihood ratio by definition is a ratio of two likelihoods, the application of signal detection theory must involve, at some point, a transformation of all likelihoods to a well-ordered set of ordinal values, so that a unique likelihood ratio may be made to correspond to each criterion value (see Luce, 1963, pp. 111 - 113).

Poisson Example
We now proceed to some detailed calculations, a worked example of a problem in signal detection theory, based on a 2AFC problem result given in summary form by Van Trees (1968, pp. 29 - 30, 41 - 44). An ROC curve relevant to the solution may be found in Van Trees (Figure 2.11); here, we merely work the detailed calculations necessary to quantify the solution.

First, we state a few general equations common to all related solutions: The general Poisson relationship in this context, for Pr representing probability and E representing expectation, may be written as

Pr(X = n) = (λi^n / n!) e^{-λi} ,  n = 0, 1, . . . ;  i = 0, 1 ,   (3)

with E(n) = λi by definition. For Poisson random variables defined on the same counts n, we then may set up expressions for two hypotheses, H0 being the null hypothesis, by writing

H1: Pr(X = n) = (λ1^n / n!) e^{-λ1} ,  n = 0, 1, 2, . . . ; and,   (4)

H0: Pr(X = n) = (λ0^n / n!) e^{-λ0} ,  n = 0, 1, 2, . . . .   (5)

For these hypotheses, a likelihood-ratio test may be based upon some decision number η by writing

H1: L(n) = (λ1/λ0)^n exp(λ0 - λ1) > η ;
H0: L(n) = (λ1/λ0)^n exp(λ0 - λ1) < η .   (6)

If we allow λ1 > λ0, then an equivalent test with value γ may be based upon

H1: γ = [ln η + λ1 - λ0] / [ln λ1 - ln λ0] < n ;
H0: γ = [ln η + λ1 - λ0] / [ln λ1 - ln λ0] > n .   (7)

Using (4) above, we now may write the probability of correct detection P_D as

P_D = e^{-λ1} ∑_{n=γ+1}^{∞} λ1^n / n! = 1 - e^{-λ1} ∑_{n=0}^{γ} λ1^n / n! ,  γ = 0, 1, 2, . . . .   (8)

Using (5) above, the probability of a false alarm P_F then may be written as

P_F = e^{-λ0} ∑_{n=γ+1}^{∞} λ0^n / n! = 1 - e^{-λ0} ∑_{n=0}^{γ} λ0^n / n! ,  γ = 0, 1, 2, . . . .   (9)

Example 1. If we assume that λ1 = 4 and λ0 = 2, here is how we may obtain the resulting ROC curve: Given these λi and substituting values of γ in equations (8) and (9) above, we obtain

γ     P_D      P_F
1     .908     .594
2     .762     .323
3     .567     .143
4     .371     .053
5     .215     .017
6     .1107    .0045
7     .0511    .0011

Table 1: Calculated probabilities of detection (P_D) and of false alarm (P_F) for the specific λ1 and λ0 assumed for this example.

An ROC for such values is graphed in Van Trees (1968, Figure 2.11); or, an equivalent ROC may be plotted by hand, using the values in Table 1. In this problem, larger values of γ correspond to detection at higher criteria, lowering the probability of responding "yes" (P_D).
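Table 1 can be reproduced from equations (8) and (9) with a few lines of Python:

```python
from math import exp, factorial

def poisson_tail(lam, gamma):
    """P(X > gamma) for X ~ Poisson(lam): 1 - e^-lam * sum_{n=0}^{gamma} lam^n / n!."""
    return 1.0 - exp(-lam) * sum(lam ** n / factorial(n) for n in range(gamma + 1))

lam1, lam0 = 4.0, 2.0   # signal-plus-noise and noise-alone means from Example 1

# One (P_D, P_F) pair per criterion value gamma, as in Table 1
table1 = {g: (poisson_tail(lam1, g), poisson_tail(lam0, g)) for g in range(1, 8)}
```

Plotting the seven (P_F, P_D) pairs traces the ROC curve sketched by Van Trees.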

Example 2. Determine the likelihood ratio L(n) for the condition λ1 = 4 and λ0 = 2: Using equation (6) above, L(n) = (λ1/λ0)^n exp(λ0 - λ1) = 2^n e^{-2}, and we obtain the values shown in Table 2.

n     L(n)       ln L(n)
1     .271       -1.307
2     .541       -0.614
3     1.083      0.079
4     2.165      0.773
5     4.331      1.466
6     8.661      2.159
7     17.323     2.852
8     34.646     3.545
9     69.292     4.238
10    138.583    4.931

Table 2: Likelihood ratio (see text).

This result demonstrates that, as the n obtained on a trial increases, so does the likelihood that the sample (= trial) was obtained from the distribution with mean E(n) = λ1 rather than from the distribution with mean λ0.
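Table 2 follows directly from equation (6); for λ1 = 4 and λ0 = 2, L(n) reduces to 2^n e^{-2}:

```python
from math import exp, log

lam1, lam0 = 4.0, 2.0

def likelihood_ratio(n):
    """Equation (6): L(n) = (lam1/lam0)^n * exp(lam0 - lam1); here, 2^n * e^-2."""
    return (lam1 / lam0) ** n * exp(lam0 - lam1)

# Rows of Table 2: (n, L(n), ln L(n)) for n = 1..10
table2 = [(n, likelihood_ratio(n), log(likelihood_ratio(n))) for n in range(1, 11)]
```

Note that ln L(n) is linear in n, which is what makes the estimation scheme of Example 3 below workable.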

Example 3. Suppose that the data in Table 1 above were obtained in an experiment for which we can let γ = n. Assuming this, estimate the values of λ1 and λ0 which must have been in force to obtain those values. We proceed in four distinct steps:

Step 1: We begin by using the preceding Table 1 data to estimate L(n) as defined in equation (6) above. Recalling that L(n) is the slope of the ROC curve, we may use the tabled points pairwise to calculate differences, thus obtaining, at all coordinates less one, equations for the log ratios of λ1/λ0:

ln[(.908 - .762)/(.594 - .323)] ≈ 2 ln(λ1/λ0) + λ0 - λ1   (10)
ln[(.762 - .567)/(.323 - .143)] ≈ 3 ln(λ1/λ0) + λ0 - λ1   (11)
ln[(.567 - .371)/(.143 - .053)] ≈ 4 ln(λ1/λ0) + λ0 - λ1   (12)
ln[(.371 - .215)/(.053 - .017)] ≈ 5 ln(λ1/λ0) + λ0 - λ1   (13)
ln[(.215 - .1107)/(.017 - .0045)] ≈ 6 ln(λ1/λ0) + λ0 - λ1   (14)
ln[(.1107 - .0511)/(.0045 - .0011)] ≈ 7 ln(λ1/λ0) + λ0 - λ1   (15)

Step 2: Next, we evaluate the left-hand sides and combine the six preceding equations pairwise ((10) with (11), (12) with (13), and (14) with (15)) in order to eliminate the terms in ln(λ1/λ0):

1.0078 = -(1/2)(λ0 - λ1) ;
0.4934 = -(1/4)(λ0 - λ1) ;   (16)
0.4346 = -(1/6)(λ0 - λ1) .

At this point, averaging the three estimates of λ1 - λ0 implied by (16), namely 2.0156, 1.9736, and 2.6076, produces the result

λ1 ≈ λ0 + 2.1988 .   (17)

Step 3: We now can substitute result (17) into (10) - (15) above:

-0.6185 = 2 ln[(λ0 + 2.1988)/λ0] - 2.1988
0.0800 = 3 ln[(λ0 + 2.1988)/λ0] - 2.1988
0.7783 = 4 ln[(λ0 + 2.1988)/λ0] - 2.1988
1.4663 = 5 ln[(λ0 + 2.1988)/λ0] - 2.1988   (18)
2.0823 = 6 ln[(λ0 + 2.1988)/λ0] - 2.1988
2.8639 = 7 ln[(λ0 + 2.1988)/λ0] - 2.1988 .

Step 4: Finally, we now can solve each line of (18) for λ0 and average the six results, yielding

λ0 = 1.9911 ≈ 2 ; and, from (17), λ1 = 4.1899 ≈ 4 ,   (19)

completing the solution.
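The same recovery can be sketched as a least-squares variant of Steps 1 - 4: since the log of each ROC secant slope is approximately n ln(λ1/λ0) + (λ0 - λ1), the data fall on a line in n whose slope and intercept determine both parameters. (The first P_D entry is taken as .908, its value when recomputed from equation (8).)

```python
from math import exp, log

# Table 1 values (P_D, P_F) for gamma = 1..7
pd = [0.908, 0.762, 0.567, 0.371, 0.215, 0.1107, 0.0511]
pf = [0.594, 0.323, 0.143, 0.053, 0.017, 0.0045, 0.0011]

# Step 1: log of each secant slope; the slope between points gamma-1 and gamma
# estimates L(n) at n = gamma, for n = 2..7
ns = list(range(2, 8))
a = [log((pd[i - 1] - pd[i]) / (pf[i - 1] - pf[i])) for i in range(1, 7)]

# Steps 2-4, collapsed into one least-squares fit of a_n = n*m + b,
# where m = ln(lam1/lam0) and b = lam0 - lam1
n_mean = sum(ns) / len(ns)
a_mean = sum(a) / len(a)
m = (sum((n - n_mean) * (ai - a_mean) for n, ai in zip(ns, a))
     / sum((n - n_mean) ** 2 for n in ns))
b = a_mean - m * n_mean

lam0 = -b / (exp(m) - 1)   # from lam1 - lam0 = -b and lam1/lam0 = e^m
lam1 = lam0 * exp(m)
```

The fit recovers values close to the true λ0 = 2 and λ1 = 4, agreeing with the paper's pairwise-elimination result (19).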

Signal Detection at Absolute Threshold


The value of d' increases rapidly from zero as signal intensity rises above absolute threshold. For all signal intensities much below absolute threshold, d' uniformly will be equal to zero.

Using the 1942 data of Hecht, Shlaer and Pirenne and the 1954 data of Denton and Pirenne, Barlow (1956) makes the point that a single quantum seems to be enough to excite a retinal rod, whereas more than one quantum per signal seems to be required to explain various effects of excitation area and duration of exposure. The results of Hecht, et al were that five to eight quanta were required as a reliable lower bound on retinal sensitivity adequate for detection at absolute threshold under their very favorable conditions. In terms of signal detection, this lower bound was one determined by retinal sensitivity and by criterion. If one assumes that sensitivity in fact was constant under each of the two cited stimulus conditions, then Barlow's (1977, p. 342) result that the number of quanta per trial is just 6 follows at once. This means that s + n versus n distributions much closer together than about 6 quanta/trial would yield few yes (detected) decisions; s + n versus n distributions separated by considerably more than 6 quanta/trial would yield many or all yesses.

As Barlow (1977, pp. 341 ff.) points out, the Hecht, et al approach does not consider the existence or effect of retinal noise. Barlow estimates the threshold-with-noise to be about three times the size of the noiseless threshold. This suggests that the retinal noise intensity is twice the signal intensity when the probability of correct detection is 0.5. Stated quantitatively, including retinal noise, this means that the average Hecht, Shlaer, and Pirenne criterion of 6 may be written as

∑_{n=6}^{∞} e^{-a} a^n / n! = 0.5 ;   (20)

or, equivalently,

1 - ∑_{n=0}^{5} e^{-a} a^n / n! = 0.5 .   (21)
Solving (21), one obtains a ≈ 5.65 for the mean of the putative underlying Poisson distribution of signal + noise events at the retina. On each such trial, then, the average signal intensity will have been 5.65/3 ≈ 1.88 events, whereas the average noise intensity will have been 2(5.65)/3 ≈ 3.77 events.
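Equation (21) can be solved numerically by bisection; the root comes out near the a ≈ 5.65 quoted above:

```python
from math import exp, factorial

def p_at_least_6(a):
    """P(X >= 6) for X ~ Poisson(a): 1 - sum_{n=0}^{5} e^-a a^n / n!, as in (21)."""
    return 1.0 - sum(exp(-a) * a ** n / factorial(n) for n in range(6))

# P(X >= 6) increases monotonically in a, so bisect on equation (21)
lo, hi = 1.0, 10.0
for _ in range(60):
    mid = (lo + hi) / 2
    if p_at_least_6(mid) < 0.5:
        lo = mid
    else:
        hi = mid
a = (lo + hi) / 2

# Split a into signal and noise parts per the text: signal = a/3, noise = 2a/3
signal_events = a / 3
noise_events = 2 * a / 3
```

The signal and noise means computed this way are the 1.88 and 3.77 events per trial used in Table III below.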

The formulas given here, and for the worked example above, now may be used to compute the following table which, with Figures 2 and 3 below, describes the sought retinal signal and noise characteristics:

n     Pn(n); λ0 = 3.77    Ps+n(n); λ1 = 5.65    L(n)      ln[L(n)]    P_FA     P_D
0     .0231               .0035                 .1536     -1.8800     --       --
1     .0869               .0199                 .2287     -1.4754     .8900    .9766
2     .1638               .0561                 .3427     -1.0708     .7262    .9205
3     .2059               .1057                 .5136     -0.6663     .5204    .8147
4     .1940               .1493                 .7698     -0.2617     .3263    .6654
5     .1463               .1688                 1.1536    0.1429      .1800    .4966
6     .0919               .1589                 1.7289    0.5475      .0881    .3377
7     .0496               .1283                 2.5911    0.9521      .0386    .2094
8     .0233               .0906                 3.8831    1.3566      .0153    .1188
9     .0098               .0569                 5.8196    1.7612      .0055    .0619
10    .0037               .0321                 8.7216    2.1658      .0018    .0298

Table III: Estimated probabilities of false alarm and of detection for the equation (21) data. L(n) = Ps+n(n)/Pn(n); P_FA and P_D are the tail probabilities P(X > n) under λ0 and λ1, respectively.

Figure 2: Poisson distributions based on Barlow (1977) for retinal signal detection. Data from Hecht, et al (1942).

Figure 3: ROC for the Poisson distributions plotted in Figure 2 above. d = detection; FA = false alarm.

Photopigments and Thermal Noise


Barlow (1957) suggests that the major retinal noise source is Planckian thermal radiation from the tissues within the eye itself. The Planckian radiant energy (heat) decomposes the photopigments of the eye in a way indistinguishable from the decomposition caused by incident photons in the range of visible light. Using an approach he attributes to Stiles, Barlow (1957) gives the expression

ΔI = exp[ -(hc) / (2 λ k T) ] ,   (22)

in arbitrary energy units, for the noise intensity determining the threshold for a photopigment with peak absorption at wavelength λ. In this formula, T represents the temperature of the eye in kelvins (about 310 K), and k is Boltzmann's constant.

Granting Barlow's assumptions, the Hecht, Shlaer, and Pirenne data mentioned above suggest a retinal noise intensity of about 3.77 quantal-equivalent events per trial at a wavelength of 510 nm, which is not far from the 507 nm wavelength at which the human retinal scotopic rod receptors have their maximum sensitivity. Thus, the noise comes to a total, in equivalent 510 nm radiation, of

I_510 ≈ (hc/λ)(3.77) = (6.63×10^-34 J s)(3×10^8 m/s)(3.77) / (510×10^-9 m) ≈ 1.5×10^-18 joule/trial .   (23)

Barlow's (1957) formula

I_1/I_0 = exp[ (hc)(1/λ_0 - 1/λ_1) / (2 k T) ]   (24)

then yields a noise intensity of about 48 I_510 ≈ 7×10^-17 joule/trial at the photopic peak of V_λ at 555 nm, this being a wavelength about equal to that of greatest human photopic visual sensitivity. Thus, the optimum absolute threshold for photopic cone receptor vision predicted at 555 nm would be approximately

I_signal = I_noise/2 = (7×10^-17)/2 = n E = n hc/(555 nm) .   (25)

Solving for n, we obtain n ≈ 100 quanta at 555 nm. This approach would put photopic threshold at about 1.4 log10 units above scotopic threshold, in energy units. Examining our result, data of Hecht and Hsia (1945) republished in Graham (1965, Figure 4.6) suggest that under some conditions, at least, the 507/555 nm photochromatic difference is found to be about 1.55 log10 units, in fairly good agreement with the above computations. Our line of reasoning suggests that the central contribution to absolute threshold can be expected to be small, on the order of 0.2 log10 units. However, a negligible central contribution was assumed by Barlow in the first place.
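The arithmetic of (23) - (25) can be checked directly, taking the text's factor of about 48 from equation (24) as given:

```python
h = 6.63e-34   # Planck's constant, J s (value used in the text)
c = 3.0e8      # speed of light, m/s (value used in the text)

# Equation (23): noise at 510 nm, 3.77 quantal-equivalent events per trial
I_510 = (h * c / 510e-9) * 3.77

# Equation (24) supplies a factor of about 48 scaling this to the 555 nm photopic peak
I_noise_555 = 48 * I_510

# Equation (25): threshold signal is half the noise; divide by one 555 nm photon energy
E_photon_555 = h * c / 555e-9
n_quanta = (I_noise_555 / 2) / E_photon_555
```

The result lands at roughly 100 quanta, matching the n ≈ 100 quoted above.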

More on the MTF and Noise


As shown by the Francon (1963, ch. 7) derivation for the image of a point source, an object-point on the optic axis (x, y) will yield an image amplitude U at an off-axis location (x', y') given by

U(x', y') = ∫∫_W exp[ jK(α'x' + β'y') ] dα' dβ' .   (26)

For these first four paragraphs, we shall use z to represent the distance between the wave surface and the image plane; consistent with this, we shall use the two orthogonal planes (x, y) and (α, β) to represent the image plane and wave surface, respectively. If the amplitude and phase on the wave-surface W differ from unity by a factor g(α', β'), then U(x', y') is given by the Fourier transform of g, which makes U and g Fourier-transform pairs related as follows:

U(x', y') = ∫∫_W g(α', β') exp[ jK(α'x' + β'y') ] dα' dβ' ;   (27)

g(α', β') = ∫∫_I U(x', y') exp[ -jK(α'x' + β'y') ] dx' dy' .   (28)

Here, I denotes the image region over which the integration runs, centered on the corresponding image point. In section 7.2 of his 1963 text, Francon, using slightly different (x, y, z) coordinates, shows that these equations may be used to prove, under quite general conditions, that the Fourier transform of the image of an incoherent object equals the product of the Fourier transform of that object with the Fourier transform of the image of an isolated point such as the on-axis point introduced to obtain (26) above. In these representations, the object and image are described in terms of luminous fluxes or intensities, because the vector addition of amplitudes is replaced by a scalar addition in all small local regions of the image and object.

Thus, in effect, an optical system consisting, for simplicity, of a single simple lens performs two Fourier transforms in succession in order to process an image: First, the object amplitude o(x, y) is transformed to g(α', β') at the lens; then, the latter is transformed to U(x', y') in the image plane. Finally, if we can assume a linear system (as in paraxial optics), the Modulation Transfer Function (MTF) of a system is given by the ratio of the Fourier transform of the output amplitude to the Fourier transform of the input amplitude:

MTF(s, t) = G(s, t) / F(s, t) ,   (29)

in which F is the transform of the object and G is the transform of the image, as explained just above. The transfer function in general is a complex function of the transform variables (spatial frequencies); its modulus gives the amplitude attenuation, and its phase angle gives the phase shift.

The MTF and signal detection. To show how the preceding derivations can be used in an actual application, suppose that we have a received signal r(y') such that

r(y') = m(y') + n(y') ,   (30)

in which m is the signal, n is random noise, and y' represents spatial location. In such a case, the MTF may be used to calculate the power spectrum of the signal and the noise. To begin, let us consider only one spatial dimension of the objects and images concerned. The one-dimensional Fourier-transform pairs corresponding to (27) and (28) above therefore may be written as

U(y') = ∫_Ω g(α') exp[ jKα'y' ] dα' ,   (31)

in which Ω is a segment of a great circle on W passing through the center of the lens; and,

g(α') = ∫_{Ω_I} U(y') exp[ -jKα'y' ] dy' ,   (32)

in which Ω_I is a line passing through the center of the image. These two transforms are relevant in that, in general, a power spectrum may be obtained for problems such as ours by computing the Fourier transform of the waveform autocorrelation function. To show an actual calculation, suppose that we have a system MTF as shown in Figure 4 immediately below; this is not unlike the MTF one would expect for a typical simple lens.

Figure 4: Simulated real part of our system MTF, with MTF(s) = |G(s)| / |F(s)|.

Returning to our problem of (30) above, let us suppose further that the source spatial amplitudes are given by

m(y) = cos(y) + (1/3) cos(3y) ; and,   (33)

n(y) = 1/2 , for all y.   (34)

The total signal power then would be

∫ [m(y)]^2 dy = ∫ cos^2(y) dy + (1/9) ∫ cos^2(3y) dy ,

the cross term integrating to zero; in the relative units used here, this gives

power = 1 at s = 1 ;  power = 1/9 at s = 3 .   (35)

The received power would be

∫_s [MTF(s)]^2 power(s) ds ;   (36)

or, using Figure 4 above, we have, for m, about

(.8)^2 (1) + (.5)^2 (1/9) = .67 ;   (37)

and, for n, about

[ (.1)^2 (90) + (.5)^2 (10) ] (1/4) = .85 .   (38)
MTF selective filtering. If we assume that the receiver input is designed to pass spatial frequencies s = 1 and s = 3 and to exclude all others, we obtain an MTF more or less as shown in Figure 5:

Figure 5: Sketch of MTF for a selectively-filtered receiver which accepts only spatial frequencies of s = 1 and s = 3.

Given this MTF, we very simply can continue the calculation to estimate the noise power; we arrive at a result of

(.8)^2 (1/2) + (.5)^2 (1/2) = about .45 ,   (39)

and the received signal power again will be about .67, as in (37) above.

Van Trees (1968, sect. 4.2) provides a statistical test for examples such as ours and yields

d' = (2 signal power / noise power)^{1/2} ;   (40)

d' = (2 (.67 / .45))^{1/2} = about 1.73 ,   (41)

from which an ROC may be plotted to a fairly good approximation. The unfiltered case in (37) and (38) above yields a d' of only about 1.26. This concludes our presentation on these topics.

References
Barlow, H. B. Retinal noise and absolute threshold. Journal of the Optical Society of America, 1956, 46, 634 - 639.

Barlow, H. B. Retinal and Central Factors in Human Vision Limited by Noise. In H. B. Barlow and P. Fatt (Eds.), Vertebrate Photoreception. New York: Academic Press, 1977.

Clark, W. C. Signal detection theory and pain. In F. W. L. Kerr and K. L. Casey (Eds.), Pain (= NRPB, 1978, vol. 16(1), 14 - 27).

Denton, E. J. and Pirenne, M. H. The absolute sensitivity and functional stability of the human eye. Journal of Physiology, 1954, 123, 417 - 442.

Francon, M. Modern Applications of Physical Optics. New York: John Wiley and Sons, 1963.

Galanter, E. Contemporary psychophysics. In T. M. Newcomb (Fwd.), New Directions in Psychology I. New York: Holt, Rinehart and Winston, 1962, pp. 94 - 114.

Graham, C. Some Fundamental Data. Ch. 4 of Vision and Visual Perception, C. H. Graham (Ed.). New York: John Wiley and Sons, 1965. The referenced Figure 4.6 was reproduced from Hecht, S. and Hsia, Y., Dark adaptation following light adaptation to red and white lights, Journal of the Optical Society of America, 1945, 35, 261 - 267.

Hecht, S., Shlaer, S. and Pirenne, M. H. Energy, quanta and vision. Journal of General Physiology, 1942, 25, 819 - 840.

Luce, R. D. Detection and Recognition. Chapter 3, pp. 103 - 189, in R. D. Luce, R. R. Bush and E. Galanter (Eds.), Handbook of Mathematical Psychology, Vol. 1. New York: John Wiley and Sons, 1963.

Van Trees, H. L. Detection, Estimation and Modulation Theory. New York: John Wiley and Sons, 1968.
