Safety
- Auto-off on overheating
Detailed Functionality
- Temperature matching with specification for the different ranges that can be set using the regulator (e.g., woolen, silk, cotton, etc.)
Performance
- Time required to reach the desired temperature when the range is changed using the regulator
The story is not over: the list above is the set of tests for ONLY the electrical parameters. Similarly, there will be a full list of tests for mechanical parameters, such as the maximum height from which the iron can be dropped on a tiled floor without its plastic parts breaking.
The number of tests performed depends on the time allocated, which in turn is decided by the target price of the product. Increasing the number of tests leads to more test time, more sophisticated equipment, more working hours of experts, etc., thereby adding to the cost of the product. In the case of the iron, if it is a cheap one (perhaps locally made) the only test may be to verify that it heats, while if it is a branded one, many more tests are done to assure the quality, safety and performance of the product.
The simplest test to verify the proper functionality of the NAND gate would comprise subjecting the gate to the inputs listed in Table 1 and checking whether the outputs match those listed in the table.
Table 1. Test set for NAND gate

v1   v2   o1
0    0    1
0    1    1
1    0    1
1    1    0
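As a sketch, the functional test of Table 1 can be written as an exhaustive check of all four input combinations (Python here is purely illustrative; the gate is modeled as a function):

```python
def nand(v1, v2):
    """2-input NAND: output o1 is 0 only when both inputs are 1."""
    return 0 if (v1 and v2) else 1

# (v1, v2) -> expected o1, exactly the rows of Table 1.
test_set = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def passes_functional_test(gate):
    """Apply every pattern of the test set and compare outputs."""
    return all(gate(v1, v2) == o1 for (v1, v2), o1 in test_set.items())

print(passes_functional_test(nand))            # a good gate passes
print(passes_functional_test(lambda a, b: 1))  # an output stuck at 1 fails
```

Note that a gate whose output is stuck at 1 passes three of the four patterns; only the pattern v1=1, v2=1 exposes it.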
Just like the example of the electric iron, this test for the NAND gate is just the starting point. A more detailed test set can be enumerated as follows.
Detailed tests for the NAND gate
Digital Functionality
- Verify that the outputs match Table 1 for all input combinations.
Timing
- Time taken by o1 to change from 0 to 1 after each of the following input changes:
  - v1=1, v2=1 changed to v1=1, v2=0
  - v1=1, v2=1 changed to v1=0, v2=1
  - v1=1, v2=1 changed to v1=0, v2=0
- Time taken by o1 to change from 1 to 0 after each of the following input changes:
  - v1=0, v2=0 changed to v1=1, v2=1
  - v1=1, v2=0 changed to v1=1, v2=1
  - v1=0, v2=1 changed to v1=1, v2=1
Fan-out capability
Power
- Static power: measurement of power when the output of the gate is not switching; this power is consumed because of leakage current.
- Dynamic power: measurement of power when the output of the gate switches from 0 to 1 and from 1 to 0.
Threshold Level
Noise
- Noise generated when the NAND gate switches from 0 to 1 and from 1 to 0.
Test at extreme conditions
- Performing the tests at temperatures (low and high extremes) as claimed in the specification document.
These tests are for the logic level [1,2] implementation of the NAND gate.
Figure 2 shows the detailed CMOS-level implementation of the gate. Tests for the NAND gate at transistor level would be manyfold more than the ones discussed above [3]. They would comprise verifying all parameters of each transistor. Some tests for the NMOS transistor (T3) are enumerated below.
Output characteristics
- A set of I_DS vs. V_DS curves for different constant values of the gate-source voltage V_GS.
Transfer characteristics
- A set of I_DS vs. V_GS curves for different values of the substrate-source voltage V_BS, at constant V_DS.
Now, Figure 3 shows the layout of the gate on a silicon die. One can easily estimate the number of tests by looking at the increase in complexity from logic-gate level details to silicon-level details.
about a million samples to be tested. So the time for complete testing of the ICs would run into years. Thus, the test set for the NAND gate should be such that the results are accurate (say, above 99%) yet the time for testing is low (less than a millisecond). Under this requirement, experience over a large number of digital chips has shown that nothing more than validating the logical functionality (Table 1 for the NAND gate) at proper time (i.e., a timing test) can be accomplished. Later we will see that not even the full logic functionality of a typical circuit can be tested within practical time limits.
Now we define DIGITAL TESTING. DIGITAL TESTING is not the testing of everything in a digital circuit (comprised of logic gates); as discussed, all possible tests for a digital circuit are not applied in practical cases. DIGITAL TESTING is defined as testing a digital circuit to verify that it performs the specified logic functions in proper time.
From the discussion so far, it may appear that testing digital VLSI circuits and other systems like the electric iron are similar. However, there are some fundamental differences that make VLSI circuit testing a more important step for assuring quality, compared to classical systems like electric irons, fans, etc.
The intention to make single-chip implementations of complex systems has reached a point where effort is made to put millions of transistors on a single chip and to increase the operating speed to more than a GHz. This has initiated a race into deep sub-micron technology, which also increases the possibility of faults in the fabricated devices. Just when a technology matures and faults tend to decrease, a new technology based on smaller sub-micron devices evolves, thereby always keeping testing issues dominant. Figure 4 shows the transistor count (which rises as the manufacturing technology scales down) versus years. This is unlike traditional systems, where the basic technology is mature and well tested, thereby resulting in very few faults.
When a fault is detected in a traditional system, the system is diagnosed and repaired. However, in the case of VLSI circuits, on detection of a fault the chip is binned as defective and scrapped (i.e., not repaired). In other words, in VLSI testing chips are binned as normal/faulty so that only fault-free chips are shipped and no repairing is required for the faulty ones.
design. The physical layout is converted into photomasks that are used in the fabrication process. Fabrication consists of processing silicon wafers through a series of steps involving photoresist, exposure through masks, etching, ion implantation, etc. Backtracking from an intermediate stage of the design-and-test flow may be required if the design constraints are not satisfied. It is unlikely that all fabricated chips will satisfy the desired specifications. Impurities and defects in materials, equipment malfunctions, etc. are some causes of the mismatches. The role of testing is to detect the mismatches, if any. As shown in Figure 5, the test phase starts in the form of planning even before logic synthesis, and hardware-based testing is performed after fabrication. Depending on the type of circuit and the nature of testing required, some additional circuitry, pin-outs, etc. need to be added to the original circuit so that hardware testing becomes efficient in terms of fault coverage, test time, etc.; this is called design for testability (DFT). After logic synthesis, (binary) test patterns are generated that are to be applied to the circuit after it is manufactured. Also, the expected responses (golden responses) for these test patterns are computed and matched against the responses obtained from the manufactured circuit.
Attribute of the testing method            Terminology
When tested?
  1. Once after manufacture                1. Manufacturing test
  2. Once before startup of the circuit    2. Built-in self test (BIST)
  3. Concurrently with normal operation    3. On-line testing (OLT)
Where is the source of test patterns?
  1. An external tester                    1. Automatic Test Equipment (ATE) based testing
  2. Within the chip                       2. BIST
4. Test Economics
The basic essence of the economics of a product is minimum investment and maximum returns. To understand test economics, the investments (price paid) and returns (gains) of a VLSI testing process are to be enumerated.
Investments
1. Man-hours for test plan development: Expert test engineers are required to make elaborate test plans.
2. CAD tools for Automatic Test Pattern Generation: Given a circuit, the binary input patterns required for testing are automatically generated by commercial CAD tools.
3. Cost of ATE: An ATE is a multimillion-dollar instrument. So the cost of testing a chip on an ATE depends on
   - the time for which a chip is tested,
   - the number of input/output pins,
   - the frequency at which the test patterns are to be applied.
4. DFT or BIST circuitry: Additional circuitry kept on-chip to help in testing results in an increase in chip area, thereby increasing the unit price (because of more use of silicon). Further, the power consumption of the chip may also rise due to the extra circuits. Also, it may be noted that if individual chips are larger in area, then more of the fabricated chips are found faulty (i.e., yield is lower). To cope with lower yield, the cost of an individual unit is increased.
At-speed testing is more effective than lowered-speed testing. However, ATEs that can perform at-speed testing for the latest chips are extremely expensive. It may be noted that generating and capturing high-speed patterns using on-chip circuitry is manifold simpler and cheaper than doing so using an external tester (ATE). So additional DFT/BIST circuitry is required to apply patterns and capture responses at high speed. Once the responses are captured, they can be downloaded by the ATE at a lower speed for analysis. The BIST/DFT circuitry will reduce the yield of the VLSI chip, thus increasing its cost. As this cost increase is offset by the cost reduction in the ATE, the additional BIST/DFT circuitry is economically beneficial.
Returns
a) Proper binning of chips: The more perfect a testing process is, the fewer the errors in binning chips as normal or faulty. In VLSI testing, it is not of much concern how many normal chips are binned as faulty; what matters is how many faulty chips are binned as normal. Faulty chips binned as normal are shipped, and that compromises the quality of the test solution and the brand name of the company. So, the economic return from VLSI testing is the accuracy in shipping functionally perfect chips.
From the next lecture onward, we will go into the details of Digital VLSI Testing. The tentative breakup of the lectures is the following:
1. Introduction
In the last lecture we learnt that Digital VLSI testing verifies whether the logic functionality is as per the specifications. For example, for the 2-input NAND gate the test would comprise only the 4 input combinations and verifying the output. The time required to verify anything more than logic functionality is beyond practical limits in terms of ATE time, man-hours, cost of the chip, etc. Also, the quality of the test solution (i.e., proper binning of normal and faulty chips) is acceptable for almost all classes of circuits. This is called Functional Testing.
Now let us consider a digital circuit with 25 inputs which does bit-wise ANDing of the inputs. The black-box view of the circuit is shown in Figure 1. The complete functional test is given in Table 1.
Test Pattern No.   Input (I1 ... I25)            Output
1                  0000000000000000000000000     0
2                  0000000000000000000000001     0
...                ...                           ...
2^25               1111111111111111111111111     1
We need to apply 2^25 test patterns to complete the test. If we apply 1,000,000 patterns per second (a megahertz tester), the time required is about 34 seconds per chip. In a typical scenario about 1 million chips are to be tested in a run, thereby taking about 34,000,000 seconds, roughly 9,300 hours or 390 days, i.e., more than a year of tester time. One can then imagine the situation when a circuit has 100+ inputs. So for a typical circuit even Functional Testing cannot be performed, due to the extremely high testing time.
To solve this issue we perform Structural Testing, which takes manyfold less time compared to Functional Testing while maintaining the quality of the test solution. Structural testing, introduced by Eldred, verifies the correctness of the specific structure of the circuit in terms of gates and interconnects. In other words, structural testing does not check the functionality of the entire circuit; rather, it verifies that all the structural units (gates) are fault free. So structural testing is a kind of functional testing at the unit (gate) level. In the next section we elaborate the gain in test time for structural testing using the example of the 25-input ANDing circuit. Also, the cost that needs to be paid is illustrated using the same example.
2. Structural Testing: An Example Of 25 Input Bit Wise ANDing Circuit
To perform structural testing, a white-box view of the circuit (i.e., its gate-level implementation) is required. Functional testing is then performed on the individual gates. Figure 2 shows a gate-level implementation of the 25-input bit-wise ANDing circuit (of Figure 1). Structural testing of the circuit would comprise the patterns discussed below:
1. Testing of the 5-input AND gate G1 with the input patterns given in Table 2.

Table 2. Test patterns for testing of gate G1

Test Pattern No.   Input (I1 ... I5)   Output
1                  00000               0
2                  00001               0
...                ...                 ...
32                 11111               1
2. Repeat similar test patterns for all the other 5-input AND gates G2, G3, G4, G5 and G6.
Each individual gate is thus tested; the integration, however, is not. From the history of several thousand cases of chips being fabricated and tested, it was observed that the quality of the test solution given by structural testing is acceptable.
To test the individual gates, controlling and observing the values of intermediate nets in the circuit becomes mandatory, which calls for extra pins and hardware. For example, to test gate G1 (in the circuit of Figure 2) we need to apply signals at pins I1 through I5, which are primary inputs and brought out of the chip as pins. But we also need to observe the output at net OG1, which is difficult as it is not a primary output and is internal to the chip. So, for structural testing involving testing of gate G1 individually, OG1 is to be brought out of the chip as a special output (test) pin. Similarly, for individual testing of the gates G2 through G5, lines OG2 through OG5 are to be made observable by additional pin-outs. So for structural testing some internal nets are to be made observable by extra pin-outs.
Now for testing G6, the problem is different. The output of gate G6 can be observed, as it is a primary output. However, to test G6, inputs are to be applied through the internal nets OG1 through OG5. It may be noted that these nets are outputs of other AND gates, and to drive them directly to a required value, not only are they to be brought out of the chip as pins but they are also to be decoupled from the corresponding AND gates. So for structural testing some internal nets are to be made controllable by extra pin-outs and circuitry. Controllability is achieved by adding extra 2-1 multiplexers on these nets. This is illustrated in Figure 3 (as bold boxes). During normal operation of the circuit the Test Mode signal (connected to the select lines of the 2-1 multiplexers) is made 1; the outputs of the AND gates (G1-G5) are passed to the inputs of the AND gate G6. When G6 is to be tested, the Test Mode signal is made 0; the inputs of gate G6 are decoupled from gates G1 through G5 and can now be driven directly by the additional input pins TI1 through TI5. These additional circuits (multiplexers and pins) that help in structural testing are called Design for Testability (DFT) hardware.
Figure 3. Extra pins and hardware for structural testing of circuit of Figure 2.
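A minimal behavioral sketch of this DFT scheme, keeping the text's convention that Test Mode = 1 selects normal operation (the function names are illustrative):

```python
def mux2(sel, normal_in, test_in):
    """2-1 multiplexer: sel=1 passes the normal path, sel=0 the test pin."""
    return normal_in if sel == 1 else test_in

def and5(*ins):
    return int(all(ins))

def circuit_output(inputs25, test_mode=1, ti=(0, 0, 0, 0, 0)):
    """G1..G5 each AND five inputs; their outputs reach G6 through
    the DFT multiplexers, which can substitute test pins TI1..TI5."""
    og = [and5(*inputs25[5 * k:5 * k + 5]) for k in range(5)]   # G1..G5
    g6_inputs = [mux2(test_mode, og[k], ti[k]) for k in range(5)]
    return and5(*g6_inputs)                                      # G6

# Normal mode: output is the AND of all 25 inputs.
print(circuit_output([1] * 25))                                   # 1
# Test mode: G6 is driven directly from pins TI1..TI5,
# regardless of the primary inputs.
print(circuit_output([0] * 25, test_mode=0, ti=(1, 1, 1, 1, 1)))  # 1
```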
It is to be noted that a circuit with about a million internal lines would need a million 2-1 multiplexers and the same number of extra pins. This requirement is infeasible. So let us see, in steps, how these problems can be minimized while maintaining the quality of the test solution. In the next section we illustrate, using an example, how internal memory can reduce the extra pin-outs needed for DFT.
load and) shift register, where the carry-output bit of the (i-1)th adder is connected to the ith input of the output shift register, 1 <= i <= 31. Once the values of all the carry bits are latched in the register, which is done in parallel during test, they are shifted out sequentially. In this case a full adder is tested functionally, and structural information is used at the cascade level.
Now let us see the gains and the price paid for the DFT circuitry (shift registers) for structural testing.
Gains
- Instead of 2^32 test patterns for functional testing, only 2^3 patterns are enough for structural testing.
- The number of extra pins for structural testing is only 3. It may be noted that if the shift register is not used, then 64 extra pins are required, as the carry-output bits of the full adders are to be brought out for observation and the carry-in bits of the full adders are to be driven with the required test signals (controllability).
Price
- 2-1 multiplexers and registers are required for each internal net to be controlled.
- Registers are required for each internal net to be observed.
- 3 extra pin-outs.
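The arithmetic behind these gains can be tallied as follows (a rough check using the figures quoted above; the per-adder pattern count assumes a full adder's three inputs a, b and carry-in):

```python
# Pattern and pin counts for structural testing of the 32-bit adder.

functional_patterns = 2 ** 32     # exhaustive functional test of the adder
structural_patterns = 2 ** 3      # exhaustive test of one full adder
                                  # (3 inputs), applied to all adders in parallel
pins_with_shift_register = 3      # as stated in the text
pins_without_shift_register = 64  # carry bits brought out (figure from the text)

print(functional_patterns // structural_patterns)              # 2**29 reduction
print(pins_without_shift_register - pins_with_shift_register)  # 61 pins saved
```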
From Sections 2 and 3 we note that, by the use of internal registers, the problem of a huge number of extra pins could be solved, but it added the requirement of huge shift registers (equal in size to the number of internal nets). In a typical circuit there are tens of thousands of internal lines, making the on-chip register size and the number of 2-1 multiplexers extremely high. So the addition to the cost of a chip by such a DFT is unacceptable.
So our next target for achieving efficient structural testing is one with fewer on-chip components that still maintains the quality of the test solution. Structural testing with Fault Models is the answer to this requirement. In the next section we will study fault models and then see why the DFT requirement is low when structural testing is done with fault models.
Before going to the next section, to summarize: "structural testing is functional testing at a level lower than the basic input-output functionality of the system". For the example of the bit-wise ANDing circuit, the unit for structural testing was gates, and for the 32-bit adder it was full adders. In general, for digital circuits, structural testing is "functional testing at the level of gates and flip-flops". Henceforth in this course, the basic unit of a circuit for structural testing will be logic gates and flip-flops.
4. Structural Testing with Fault Models
Structural testing with fault models involves verifying that each unit (gate and flip-flop) is free from the faults of the fault model.
4.1 What is a Fault Model?
A model is an abstract representation of a system. The abstraction is such that modeling reduces the complexity of the representation but captures all the properties of the original system required for the application in question.
So, a fault model is an abstraction of the real defects in the silicon such that
- the faults of the model are easy to represent, and
- verifying that no faults of the model are present in the circuit ensures that the quality of the test solution is maintained.
In the perspective of the present discussion, the following definitions are important.
Defect: A defect in a circuit is the unintended difference between the implemented hardware in silicon and its intended design.
Error: An error is the effect of some defect.
Next we elaborate on the stuck-at fault model, which is the most widely accepted because of its simplicity and the quality of its test solution.
4.3 Single Stuck-at Fault Model
In the single stuck-at fault model it is assumed that a circuit is an interconnection (called a netlist) of Boolean gates. A stuck-at fault is assumed to affect only the interconnecting nets between gates. Each net can be in one of three states: normal, stuck-at-1 or stuck-at-0. When a net has a stuck-at-1 (stuck-at-0) fault it always carries logic 1 (0), irrespective of the correct logic output of the gate driving it. A circuit with n nets can have 2n possible stuck-at faults, as under the single stuck-at fault model it is assumed that only one location can have a stuck-at-0 or stuck-at-1 fault at a time. The locations of stuck-at faults in the AND gate G1 (in the circuit of Figure 2) are shown in Figure 6 with black circles. G1 is a 5-input AND gate, having 5 input nets and an output net. So there are 12 possible stuck-at faults in the gate.
one for the gate output (the stem of the fanout) and the others for the inputs of the gates that are driven (the branches of the fanout). When the stuck-at fault is at the output of the gate (OG1, in this case) then all the nets in the fanout have the value corresponding to the fault. However, if the fault is on one branch, then only the corresponding gate driven by that branch of the fanout is affected by the stuck-at fault; the other branches are not affected.
Now the question remains: if a fanout stem along with all its branches is a single electrical net, why does a fault on one branch not affect the others? The answer again comes from the history of testing chips with the single stuck-at fault model; a stuck-at fault is a model of underlying defects, not necessarily a net that is physically stuck. When we perform structural testing with the single stuck-at fault model, we verify that none of the sites has any stuck-at fault. Assuring this ensures with about 99.9% accuracy that the circuit has no defect. Considering the different branches of a fanout as independent creates more locations for stuck-at faults, and testing history on stuck-at faults has shown that this increased number of locations is required to ensure the 99.9% accuracy.
To summarize, the single stuck-at fault model is characterized by three assumptions:
1. Only one net is faulty at a time.
2. The faulty net is permanently set to either 0 or 1.
3. The branches of a fanout net are independent with respect to the locations and effects of a stuck-at fault.
In general, several stuck-at faults can be simultaneously present in a circuit. A circuit with n lines can have 3^n - 1 possible stuck-line combinations, since each net can be s-a-1, s-a-0, or fault-free, and the one combination having all nets normal is not counted as a fault. So handling multiple stuck-at faults in a typical circuit with some hundreds of thousands of nets is infeasible. As the single stuck-at fault model is manageable in number and also provides acceptable quality of test solution, it is the most accepted fault model.
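The two counts can be compared directly; the gap between linear and exponential growth is what makes the multiple-fault model unmanageable:

```python
# Single vs. multiple stuck-at fault counts for a circuit with n nets.

def single_faults(n):
    return 2 * n                # each net: s-a-0 or s-a-1, one at a time

def multiple_fault_combinations(n):
    return 3 ** n - 1           # each net: s-a-0, s-a-1, or fault-free;
                                # exclude the all-normal combination

for n in (6, 31, 100):
    print(n, single_faults(n), multiple_fault_combinations(n))
```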
Now we illustrate structural testing with single stuck-at fault model.
4.4 Structural Testing with Stuck-at Fault Model
We will illustrate structural testing for stuck-at faults of the circuit in Figure 2.
First let us consider s-a-0 on net I1, as shown in Figure 8. As net I1 is stuck at 0, we need to drive it to 1 to verify the presence/absence of the fault. To test whether a net is stuck-at-0 (stuck-at-1), it obviously has to be driven to the opposite logic value 1 (0). Now all other inputs (I2 through I5) of G1 are made 1, which propagates the effect of the fault to OG1; if the fault is present then OG1 is 0, else 1. To propagate the effect of the fault to O, in a similar way, all inputs of G2 through G5 are made 1. Now, if the fault is present, then O is 0, else 1. So I1=1, I2=1, ..., I25=1 is a test pattern for the stuck-at-0 fault at net I1.
Now let us consider s-a-1 on an internal net (the output of G1), as shown in Figure 10. As the output of G1 is stuck at 1, we need to drive it to 0 to verify the presence/absence of the fault. So at least one input of G1 is to be made 0; at G1, I1=0 and I2 through I5 are 1. To propagate the effect of the fault to O, all inputs of G2 through G5 are made 1. Now, if the fault is present, then O is 1, else 0. So I1=0, I2=1, ..., I25=1 is a test pattern for the stuck-at-1 fault at the output net of G1. It is interesting to note that the same test pattern I1=0, I2=1, ..., I25=1 also tests s-a-1 at net I1. In other words, in structural testing with the stuck-at fault model, one test pattern can test more than one fault.
Figure 10. s-a-1 fault in net OG1 with input test pattern
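The two cases above can be checked by simulating the circuit with and without an injected fault. This is only a sketch; the net names (I1..I25, OG1..OG5, O) follow the text, and the fault-injection helper is hypothetical:

```python
def and_gate(ins):
    return int(all(ins))

def simulate(inputs25, fault=None):
    """Evaluate the 25-input ANDing circuit of Figure 2; `fault` is a
    (net, value) pair forcing a net to a stuck value, e.g. ('I1', 0)."""
    def val(name, v):
        return fault[1] if fault and fault[0] == name else v
    ins = [val(f'I{k + 1}', b) for k, b in enumerate(inputs25)]
    og = [val(f'OG{g + 1}', and_gate(ins[5 * g:5 * g + 5])) for g in range(5)]
    return val('O', and_gate(og))

all_ones = [1] * 25
# s-a-0 at I1: the all-ones pattern gives O=1 fault-free, O=0 if faulty.
print(simulate(all_ones), simulate(all_ones, fault=('I1', 0)))   # 1 0
# s-a-1 at OG1: pattern I1=0, rest 1 gives O=0 fault-free, O=1 if faulty.
pat = [0] + [1] * 24
print(simulate(pat), simulate(pat, fault=('OG1', 1)))            # 0 1
```

In both cases the fault-free and faulty outputs differ, which is exactly what makes each pattern a valid test for its fault.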
Now let us enumerate the gains and the price paid for structural testing with the stuck-at fault model.
Gains
- No extra pin-outs or DFT circuitry (like 2-1 multiplexers and shift registers) for controlling and observing internal nets.
- Low test time, as one test pattern can test multiple stuck-at faults.
Price
- Functionality is not tested, even for the units (gates and flip-flops). However, testing history reveals that even with this price paid, the quality of the test solution is maintained.
To conclude, Table 2 compares structural and functional testing.
Table 2. Comparison of structural and functional testing
Functional testing: manually generated design verification test patterns.
Structural testing: automatic test pattern generation (ATPG).
1. Introduction
In the last lecture we learnt that structural testing with the stuck-at fault model helps reduce the number of test patterns and also removes the requirement of DFT hardware. If there are n nets in a circuit then there can be 2n stuck-at faults, and one test pattern can verify the presence/absence of a fault. So the number of test patterns is linear in the number of nets in the circuit. Then we saw that one pattern can test multiple stuck-at faults, implying that the total number of test patterns required is much lower than 2n. In this lecture we will see whether the fault list itself can be reduced, utilizing the fact that one pattern can test multiple faults and that retaining any one of such faults would suffice. Let us consider the example of an AND gate in Figure 1, with all possible stuck-at-0 faults. To test the fault at I1, the input pattern is I1=1, I2=1; if the output is 0, the s-a-0 fault at I1 is present, else it is absent. Now, for the s-a-0 fault at net I2 also, the pattern is I1=1, I2=1. The same pattern tests the s-a-0 fault at the output net O. So, although there are three s-a-0 faults, one pattern tests them all. In other words, keeping any one fault among these three would suffice; these faults are equivalent.
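A quick check of this equivalence (illustrative Python; the fault sites are named I1, I2 and O as in the text):

```python
def and2(a, b, fault=None):
    """2-input AND with an optional stuck-at fault (net, value) injected."""
    nets = {'I1': a, 'I2': b}
    if fault and fault[0] in nets:
        nets[fault[0]] = fault[1]
    out = int(nets['I1'] and nets['I2'])
    if fault and fault[0] == 'O':
        out = fault[1]
    return out

faults = [('I1', 0), ('I2', 0), ('O', 0)]
# Faulty truth tables over inputs (0,0), (0,1), (1,0), (1,1):
tables = {tuple(and2(a, b, f) for a in (0, 1) for b in (0, 1)) for f in faults}
print(tables)   # a single faulty function: all three s-a-0 faults collapse
# The pattern I1=1, I2=1 distinguishes the good gate from every fault.
print(and2(1, 1), [and2(1, 1, f) for f in faults])   # 1 vs [0, 0, 0]
```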
However, an interesting case occurs for fanout nets: the faults on the stem and on the branches are not equivalent. This is explained as follows. Figure 3 shows the stuck-at faults in a fanout which drives two nets, along with the corresponding test patterns.
Figure 4. Circuit without fanout and step-wise collapsing of equivalent faults.
Step 1: Collapse all faults at the level-1 gates (G1 and G2). In the AND gate G1 (and G2) the s-a-0 faults at the output and one input (the first) are collapsed; the s-a-0 fault is retained only at one input (the second).
Step 2: Collapse all faults at the level-2 gate (G3). In the OR gate G3 the s-a-1 faults at the output and one input (from G1) are collapsed; the s-a-1 fault is retained only at one input (from G2).
Now we consider a circuit with fanout. Figure 5 gives an example of a circuit with a fanout driving two nets and illustrates the step-wise collapsing of faults.
Step 1: Collapse all faults at the level-1 gate (G1). In the AND gate G1 the s-a-0 faults at the output and one input (the first) are collapsed; the s-a-0 fault is retained only at one input (the second).
Step 2: Collapse all faults at the level-2 gate (G2). In the OR gate G2 the s-a-1 faults at the output and one input (from G1) are collapsed; the s-a-1 fault is retained only at one input (from the fanout net).
Figure 5. Circuit with fanout and step-wise collapsing of equivalent faults.
Is collapsing by fault equivalence the only way to reduce the number of faults? The answer is no. In the next section we discuss another paradigm to reduce stuck-at faults: fault dominance.
If all tests of a stuck-at fault f1 detect fault f2, then f2 dominates f1. If f2 dominates f1, then f2 can be removed and only f1 retained.
Now we consider the circuits illustrated in Figure 4 and Figure 5 and see the further reduction in the number of faults by collapsing using fault dominance. Figure 8 illustrates fault collapsing using dominance for the circuit without fanout. As in the case of equivalence, collapsing using dominance is done level-wise, as discussed below.
Step 1: Collapse all faults at the level-1 gates (G1 and G2). In the AND gate G1 (and G2) the s-a-1 fault at the output is collapsed; s-a-1 faults are retained only at the inputs.
Step 2: Collapse all faults at the level-2 gate (G3). In the OR gate G3 the s-a-0 fault at the output is collapsed and the s-a-0 faults at the inputs are retained. It may be noted that the s-a-0 faults at the inputs of G3 are covered indirectly by the s-a-0 faults at the inputs of G1 and G2 (using fault equivalence).
Figure 8. Circuit without fanout and step wise collapsing of faults using dominance.
Now we consider the circuit with fanout given in Figure 5 and see the reduction in the number of faults by collapsing using fault dominance (Figure 9). The steps are as follows.
Step 1: Collapse all faults at the level-1 gate (G1). No faults can be collapsed.
Step 2: Collapse all faults at the level-2 gate (G2). In the OR gate G2 the s-a-0 fault at the output is collapsed and the s-a-0 faults at the inputs are retained; the s-a-0 fault on the fanout branch is retained explicitly, while the one at the output of gate G1 is covered indirectly by the s-a-0 faults at the inputs of G1 (using fault equivalence).
Figure 9. Circuit with fanout and step wise collapsing of faults using dominance.
The following can be observed after faults are collapsed using equivalence and dominance:
1. For a circuit with no fanouts, s-a-0 and s-a-1 faults are to be considered only at the primary inputs (Figure 8(c)). So in a fanout-free circuit the number of test patterns is 2 x (number of primary inputs).
2. For a circuit with fanout, the checkpoints are the primary inputs and the fanout branches. Faults are to be kept only at the checkpoints (Figure 9(c)). So a test pattern set that detects all single stuck-at faults at the checkpoints detects all single stuck-at faults in the circuit.
Points 1 and 2 are together termed the checkpoint theorem.
Question 1. For what class of circuits is maximum benefit achieved from fault collapsing, and when is the benefit less? What is the typical number of test patterns required to test these classes of circuits?
Answer 1. For a circuit with no fanouts, maximum benefit is obtained, because the faults at the primary inputs cover all other internal faults. So the total number of test vectors is 2 x (number of primary inputs). In a circuit with many fanout branches, minimum benefit is obtained, as faults need to be considered at all primary inputs and fanout branches. So the total number of test vectors is 2 x (number of primary inputs + number of fanout branches).
Question 2. What faults can be collapsed by equivalence in the case of an XOR gate?
Answer 2. From the figure given below it may be noted that each single stuck-at fault at the inputs and output of a 2-input XOR gate results in a different faulty output function. So fault collapsing cannot be done for the XOR gate using fault equivalence.
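This answer can be verified by enumerating the faulty functions (a sketch; the net names a, b and out are illustrative):

```python
def xor2(a, b, fault=None):
    """2-input XOR with an optional stuck-at fault (net, value) injected."""
    nets = {'a': a, 'b': b}
    if fault and fault[0] in nets:
        nets[fault[0]] = fault[1]
    out = nets['a'] ^ nets['b']
    if fault and fault[0] == 'out':
        out = fault[1]
    return out

faults = [(n, v) for n in ('a', 'b', 'out') for v in (0, 1)]
# Faulty truth tables over inputs (0,0), (0,1), (1,0), (1,1):
tables = [tuple(xor2(a, b, f) for a in (0, 1) for b in (0, 1)) for f in faults]
print(len(tables), len(set(tables)))  # 6 faults, 6 distinct faulty functions
```

The six faulty functions turn out to be b, NOT b, a, NOT a, constant 0 and constant 1, all mutually distinct, so no two faults are equivalent.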
1. Introduction
In the last lecture we learnt how to determine a minimal set of faults under the stuck-at fault model for a circuit. Following that, a test pattern is to be generated for each fault which can determine the presence/absence of the fault in question. The procedure to generate a test pattern for a given fault is called Test Pattern Generation (TPG). Generally the TPG procedure is fully automated and called Automatic TPG (ATPG). Let us revisit a circuit discussed in the last lecture and briefly note the steps involved in generating a test pattern for a fault. Consider the circuit in Figure 1 and the s-a-1 fault at the output of gate G1. Three steps are involved in generating a test pattern:
1. Fault sensitization: drive the fault site to the value opposite to the stuck-at value.
2. Fault propagation: propagate the effect of the fault to a primary output.
3. Line justification: justify the required internal values from the primary inputs.
Test Pattern No.   Test Pattern (I1 I2 I3 I4 I5, I6 ... I25)   Output
1                  0 0 0 0 0, 11111111111111111111            1 if fault, 0 if no fault
2                  0 0 0 0 1, 11111111111111111111            1 if fault, 0 if no fault
...                ...                                        ...
31                 1 1 1 1 0, 11111111111111111111            1 if fault, 0 if no fault
Now the question is: do we require these three steps for every fault? If so, then TPG would take a significant amount of time. However, we know that one test pattern can test multiple faults. In this lecture we will see how TPG time can be reduced using the fact that one test pattern can test multiple faults.
Let us understand the basic idea of TPG time reduction with the example of Figure 1. Let us not try
to generate a test pattern for a fault; rather, let us apply a random pattern and see what faults are covered.
Table 2 shows two random patterns and the faults they test. For example, random pattern No.
1 generates a 0 at the output of G1, 1 at the outputs of gates G2 through G5 and 0 at the output of
G6. It can be easily verified that this pattern can test the s-a-1 fault at two locations:
- output of G1
- output of G6
In case of either fault, the output at O is 1, while under normal conditions it is 0.
However, the second random pattern (No. 2) can detect a lot more s-a faults in the circuit. It can test
s-a-0 faults in all nets of the circuit. For example, let us take the s-a-0 fault at the output of G3. It can be
easily verified that if this pattern is applied, O will be 0 if the fault is present and 1 otherwise.
Similarly, we can verify that all possible s-a-0 faults in the circuit (31 in number) would be
tested.
Table 2. Random patterns and the faults they detect

Pattern No. | Random Pattern (I1 I2 I3 I4 I5 | I6 ... I25)   | Faults Detected
1           | 1 0 0 0 1 | 11111111111111111111              | s-a-1 at outputs of G1 and G6
2           | 1 1 1 1 1 | 11111111111111111111              | s-a-0 at all 31 nets
So we can see that by using 2 random patterns we have generated test patterns for 33 faults. On
the other hand, if we had gone by the sensitize-propagate-justify approach, these three
steps would have been repeated 33 times. Now let us enumerate the steps used to generate the
random test patterns and determine the list of faults detected (called faults covered).
1. Generate a random pattern.
2. Determine the output of the circuit for that random pattern as input.
3. Take a fault from the fault list and modify the Boolean functionality of the gate whose
input has the fault. For example, in the circuit in Figure 1, the s-a-1 fault at the output of gate G1
modifies the Boolean functionality of gate G6 to 1 AND I2 AND I3 AND I4 AND I5 (which is
equivalent to I2 AND I3 AND I4 AND I5).
4. Determine the output of the circuit with the fault for that random pattern as input.
5. If the output of the normal circuit differs from the one with the fault, then the random pattern
detects the fault under consideration.
6. If the fault is detected, it is removed from the fault list.
7. Steps 3 to 6 are repeated for another fault in the list. This continues till all faults are
considered.
8. Steps 1 to 7 are repeated for another random pattern. This continues till all faults are
detected.
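The eight steps above can be sketched as follows. The 3-input circuit and net names are hypothetical (the Figure 1 circuit is not reproduced here), and a fault is modeled by forcing the faulty net to its stuck value during evaluation:

```python
# A sketch of the 8-step random-pattern procedure, on a small hypothetical
# circuit O = (I1 AND I2) OR I3 (not the Figure 1 circuit).
import random

NETS = ["I1", "I2", "I3", "A", "O"]     # A is the internal AND output

def simulate(inputs, fault=None):
    """Steps 2 and 4: evaluate the circuit, optionally with one stuck-at fault."""
    v = dict(inputs)
    def settle(net):                     # step 3: override the faulty net
        if fault and fault[0] == net:
            v[net] = fault[1]
    for net in ("I1", "I2", "I3"):
        settle(net)
    v["A"] = v["I1"] & v["I2"]
    settle("A")
    v["O"] = v["A"] | v["I3"]
    settle("O")
    return v["O"]

fault_list = {(n, s) for n in NETS for s in (0, 1)}   # full stuck-at fault list
random.seed(0)
patterns_used = []
while fault_list:                        # step 8: new patterns till all covered
    p = {n: random.randint(0, 1) for n in ("I1", "I2", "I3")}  # step 1
    good = simulate(p)
    # steps 3-7: a fault is detected if the faulty output differs (step 5)
    detected = {f for f in fault_list if simulate(p, f) != good}
    if detected:
        fault_list -= detected           # step 6: drop detected faults
        patterns_used.append(p)

print(len(patterns_used), "random patterns covered all the faults")
```

For this small circuit every stuck-at fault is detectable, so the loop is guaranteed to terminate; a real implementation would also bound the number of patterns tried.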
In simple words, new random patterns are generated and all faults detected by a pattern are
dropped. This continues till all faults are detected by some (random) pattern. Now the question
arises: how many random patterns are required for a typical circuit? The answer is: a very large number.
The situation is identical to the game of balloon blasting with an air gun. If there are a large
number of balloons in front, no aim is required and almost all shots (even without aim) will
make a successful hit. However, as the balloons become sparse, blind hits would rarely be successful.
So the balloons left after blind shooting are to be aimed at and shot. The same is true for testing.
For the initial few random patterns the number of faults covered will be very high. However, as the number of
random patterns increases, the number of new faults being covered decreases sharply; this
phenomenon can be seen from the graph of Figure 2. It may be noted that typically beyond 90%
fault coverage, it is difficult to find a random pattern that can test a new fault. So, for the
remaining 10% of faults it is better to use the sensitize-propagate-justify approach. These
remaining 10% of faults are called difficult-to-test faults.
In summary:
1. Use random patterns till a newly added pattern detects a reasonable number of new faults.
2. For the remaining difficult-to-test faults, use the sensitize-propagate-justify approach.
Figure 3(B). C code for the circuit of Figure 3(A) required for compiled code simulation
Compiled code simulation is one of the simplest of all circuit simulation techniques. However,
for every change in input the complete code is executed. Generally, in digital circuits, only 1-10% of signals are found to change at any time. For example, let us consider the circuit given in
Figure 3(A) where input I1 changes from 1 to 0; this is illustrated in Figure 4. We can easily see
that the change in input modifies only 3 out of 8 lines. On the other hand, a compiled code version
would require re-evaluation of all the 8 variables. Later we will discuss event-driven simulators,
which may not require evaluating all gates when an input changes.
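A minimal sketch of this point, assuming a hypothetical 3-gate circuit in place of Figure 3(A): the compiled, straight-line code re-evaluates every gate even when a single input change affects only part of the circuit:

```python
# A minimal compiled-code simulator for a hypothetical 3-gate circuit
# (Figure 3(A) itself is not reproduced): the whole circuit is one
# straight-line function, so every gate runs on every input change.
evaluations = 0

def gate(fn, *args):
    global evaluations
    evaluations += 1                        # count every gate evaluation
    return fn(*args)

def compiled_circuit(i1, i2, i3):
    a = gate(lambda x, y: x & y, i1, i2)    # G1 = I1 AND I2
    b = gate(lambda x: 1 - x, i3)           # G2 = NOT I3
    return gate(lambda x, y: x | y, a, b)   # G3 = G1 OR G2

compiled_circuit(1, 1, 0)   # first pattern: 3 gate evaluations
compiled_circuit(0, 1, 0)   # only I1 changed, but all 3 gates run again
print(evaluations)          # 6 evaluations, though the change affects only G1 and G3
```

An event-driven simulator would re-evaluate only G1 and G3 on the second pattern, since the input of G2 did not change.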
The procedure is simple, but too costly in terms of time required. Broadly speaking, the time
required is of the order of (number of faults) × (number of random patterns) × (time for one
simulation of the circuit).
Now we discuss algorithms which attempt to reduce this time. Basically such algorithms are
based on two factors:
1. Determine more than one fault that is detected by a random pattern during one simulation
run.
2. Perform minimal computation when the input pattern changes; the motivation is similar to that of
event-driven simulation over compiled code simulation.
3.1 Serial Fault Simulation
This is the simplest fault simulation algorithm. The circuit is first simulated (using an event-driven
simulator) without any fault for a random pattern, and the primary output values are
saved in a file. Then, faults are introduced one by one into the circuit and simulated for
the same input pattern. This is done by modifying the circuit description for a target fault
and then using the event-driven simulator. As the simulation proceeds, the output values (at
different primary outputs) of the faulty circuit are dynamically compared with the saved
true responses. The simulation of a faulty circuit halts when the output value at any primary
output differs from the corresponding normal circuit response. All faults detected are
dropped and the procedure repeats for a new random pattern. This procedure is illustrated
in Figure 7. The circuit has two inputs and two primary outputs. For the random input
pattern I1=1, I2=1 the output is O1=0, O2=1 under normal conditions. Now let us consider
a s-a-0 fault at I2. Event-driven simulation (with scheduled events and activity list) of
the circuit for input pattern I1=1, I2=1 and s-a-0 fault at I2 is shown in Table 3. It may be
noted that event-driven simulation for this circuit requires 4 steps (t=0 to t=3).
However, at step t=2, the value of O2 is determined as 0, while under normal
conditions O2 is 1. So the s-a-0 fault at I2 can be detected by input pattern
I1=1, I2=1 when O2 is 0. In other words, for input pattern I1=1, I2=1 the s-a-0 fault at I2
is manifested at primary output O2. As the fault is manifested at at least one primary
output line, we need not evaluate other outputs (for that input pattern and fault). In
this example, for the s-a-0 fault at I2 (after the t=3 step of simulation), input pattern I1=1, I2=1
gives O1=0, O2=0; so the fault can be detected ONLY at O2. However, we need not do this
computation: if even one primary output line differs for an input pattern under normal
and fault conditions, that input pattern can test the fault.
So, the s-a-0 fault at I2 is dropped (i.e., determined to be tested by pattern I1=1, I2=1) after
t=2 steps of the event-driven simulation of the faulty circuit. To summarize, detecting
stuck-at faults in a circuit using event-driven fault simulation may save computation time,
as in many cases all the steps need not be carried out.
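Serial fault simulation with fault dropping can be sketched as below. The gate-level structure used for the Figure 7 circuit (OG1 = I1 AND I2, OG2 = NOT I2, O1 = OG1 AND OG2, O2 = OG1) is a reconstruction consistent with the tables and arrays quoted later, not the original figure:

```python
# Serial fault simulation: simulate fault-free once, then one fault per pass;
# a fault is dropped as soon as ANY primary output differs. Circuit structure
# is a guess consistent with the Figure 7 tables, not the actual figure.
def simulate(i1, i2, fault=None):
    def f(net, val):  # apply the stuck-at value if this net carries the fault
        return fault[1] if fault and fault[0] == net else val
    i2_g1 = f("I2(G1)", f("I2", i2))    # fanout branch of stem I2
    i2_g2 = f("I2(G2)", f("I2", i2))    # other fanout branch
    og1 = f("OG1", i1 & i2_g1)          # G1 = AND
    og2 = f("OG2", 1 - i2_g2)           # G2 = NOT
    return {"O1": og1 & og2, "O2": og1}

good = simulate(1, 1)                   # saved true responses: O1=0, O2=1
faults = [("I2", 0), ("OG2", 1), ("I2(G1)", 0)]
detected = []
for flt in faults:
    bad = simulate(1, 1, flt)
    # the fault is detected (and dropped) if any output differs from the
    # saved fault-free response
    if any(bad[o] != good[o] for o in ("O1", "O2")):
        detected.append(flt)
print(detected)  # all three faults are detected by pattern I1=1, I2=1
```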
Table 3. Event-driven simulation for the circuit of Figure 7 for input pattern I1=1, I2=1 and s-a-0 fault at I2

Time | Scheduled Event     | Activity List
t=0  | I1=1, I2=1          | I2(G1), OG1, I2(G2), O2
t=1  | I2(G1)=0, I2(G2)=0  | OG1, OG2, O2
t=2  |                     | O1
t=3  |                     |
Now we discuss the cases for the other faults shown in Figure 7.
Event-driven simulation of the circuit (of Figure 7) for input pattern I1=1, I2=1 and the s-a-1 fault
at OG2 is shown in Table 4. It may be noted that the s-a-1 fault at OG2 is detected by input
pattern I1=1, I2=1 at output O1; in case of the fault O1=1, whereas in the normal circuit O1=0.
Also, all four steps of the event-driven simulation are required to detect the fault. So, detecting s-a faults using event-driven fault simulation may not always save computation time, as in the
worst case all the steps may be needed.
Table 4. Event-driven simulation for the circuit of Figure 7 for input pattern I1=1, I2=1 and s-a-1 fault at OG2

Time | Scheduled Event     | Activity List
t=0  | I1=1, I2=1          | I2(G1), OG1, I2(G2), O2
t=1  | I2(G1)=0, I2(G2)=0  | OG1, OG2, O2
t=2  |                     | O1
t=3  | O1=1                |
Event-driven simulation of the circuit (of Figure 7) for input pattern I1=1, I2=1 and the s-a-0 fault
at I2(G1) is shown in Table 5. Like the s-a-0 fault at I2, the s-a-0 fault at I2(G1) is detected by I1=1,
I2=1 at O2, and the fault simulator requires up to t=2 steps.
Table 5. Event-driven simulation for the circuit of Figure 7 for input pattern I1=1, I2=1 and s-a-0 fault at I2(G1)

Time | Scheduled Event     | Activity List
t=0  | I1=1, I2=1          | I2(G1), OG1, I2(G2), O2
t=1  | I2(G1)=0, I2(G2)=0  | OG1, OG2, O2
t=2  |                     | O1
t=3  |                     |
Serial fault simulation is simple; however, as discussed earlier, for n faults the computing time is
O(n) times the time for fault-free simulation of the circuit. With fault dropping, this time can be
significantly lower, especially if many faults are detected (and dropped) by the random patterns used
earlier. Next we will see advanced algorithms that reduce the complexity of fault simulation, mainly
using two points:
1. Determine in one simulation more than one fault, if they can be detected by a given
random pattern.
2. Use information generated during simulation of one random pattern for the next set of
patterns.
Before we go to the advanced algorithms, a small question remains. When we talk of
event-driven simulation, there is no notion of nets being stuck at 0 or 1. So how can we use a
standard event-driven simulator to simulate circuits having stuck-at faults? Let us look at Table 3,
t=1. We may note that I2(G1)=0 and I2(G2)=0 because of the s-a-0 fault at I2; under normal
conditions I2(G1)=1 and I2(G2)=1. It is easy to see how the values are obtained in the faulty
case. However, we need to see how we can use an event-driven simulator to simulate circuits with
stuck-at faults. It is to be noted that we will not modify the simulator algorithm to handle faults;
instead we will modify the circuit. Modifying the circuit for simulating stuck-at faults is
extremely simple. If a net, say I, has a s-a-0 fault, we insert a 2-input AND gate with one input of
the gate fixed to 0 and the other input driven by I. The gate(s) previously driven by
I are now driven by the output of the added AND gate. Similarly, for a s-a-1 fault, a 2-input OR gate
with one input of the gate fixed to 1 is added.
Figure 8 illustrates insertion of stuck-at faults in the circuit shown in Figure 7. The s-a-0 fault at
I2 is inserted as follows: a 2-input AND gate with one input connected to 0 and the other to I2 is
added. The newly added AND gate now drives G1, which was earlier driven by I2.
Similarly, a 2-input OR gate with one input connected to 1 and the other to OG2 inserts the s-a-1
fault at OG2. The newly added OR gate now drives G3, which was earlier driven by OG2.
Figure 8. Insertion of faults in the circuit of Figure 7 for fault simulation in an event-driven
simulator
To summarize, with simple modifications of the circuit, an event-driven simulator can determine
the output of a circuit with s-a-0 and s-a-1 faults. Henceforth, in our lectures we will not modify
circuits by inserting gates for fault simulation of stuck-at faults; we will only mark the faults
(as before, by circles) and assume that the circuit is modified appropriately.
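The circuit-modification idea can be sketched on a small netlist. The three-gate structure and the `inject` helper below are assumptions for illustration, not the exact Figure 8 netlist:

```python
# Fault insertion as a netlist modification, so that an unmodified simulator
# models the fault: s-a-0 on net n becomes AND(n, 0); s-a-1 becomes OR(n, 1).
NETLIST = [                               # (output_net, gate_type, input_nets)
    ("OG1", "AND", ["I1", "I2"]),
    ("OG2", "NOT", ["I2"]),
    ("O1",  "AND", ["OG1", "OG2"]),       # stands in for gate G3
]

def inject(netlist, net, stuck):
    """Rewire every reader of `net` to a new gate forcing the stuck value."""
    fnet = net + "_f"
    fgate = (fnet, "AND", [net, "ZERO"]) if stuck == 0 else (fnet, "OR", [net, "ONE"])
    out = []
    if not any(o == net for o, _, _ in netlist):
        out.append(fgate)                 # net is a primary input: force it first
    for o, g, ins in netlist:
        out.append((o, g, [fnet if i == net else i for i in ins]))
        if o == net:
            out.append(fgate)             # insert right after the driving gate
    return out

def simulate(netlist, inputs):
    v = dict(inputs, ZERO=0, ONE=1)       # constants for the inserted gates
    for o, g, ins in netlist:             # netlist is topologically ordered
        a = [v[i] for i in ins]
        v[o] = a[0] & a[1] if g == "AND" else a[0] | a[1] if g == "OR" else 1 - a[0]
    return v

good = simulate(NETLIST, {"I1": 1, "I2": 1})
bad = simulate(inject(NETLIST, "OG2", 1), {"I1": 1, "I2": 1})
print(good["O1"], bad["O1"])  # the inserted OR gate flips O1 from 0 to 1
```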
3.2 Parallel Fault Simulation
As discussed in the last section, serial fault simulation processes one fault per iteration.
Parallel fault simulation, as the name suggests, can process more than one fault in one pass of
circuit simulation. Parallel fault simulation uses the bit-parallelism of a computer. For example,
in a 16-bit computer (where a word comprises 16 bits) a logical operation (AND, OR etc.)
on two words operates on all 16 pairs of corresponding bits in parallel. This allows
parallel simulation of 16 circuits with the same structure (gates and connectivity) but different
signal values.
In a parallel fault simulator, each net of the circuit is assigned a one-dimensional array of width w,
where w is the word size of the computer on which the simulation is performed. Each bit of the
array for a net, say I, corresponds to the signal at I for one condition (normal, or a fault at some point)
of the circuit. Generally, the first bit is for the normal condition and the other bits correspond to
w-1 stuck-at faults at various locations in the circuit.
In parallel simulation, the input lines of any gate carry binary words of length w (instead
of single bits) and the output is also a binary word of length w. The output word is determined by
a simple logical operation (corresponding to the gate) on the individual bits of the input words.
Figure 9 explains this concept with the example of an AND gate and an OR gate, where w is taken to
be 3. In the example, in gate G1 the input word at I1 is 110 and that at I2 is 010. The output word 010 is
obtained by bitwise ANDing of the input words at I1 and I2; this is similar to simulating the gate
for three input patterns at a time, namely (i) I1=1, I2=0 (ii) I1=1, I2=1 and (iii) I1=0, I2=0.
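The bit-parallelism can be seen directly with machine words. The snippet below redoes the Figure 9 numbers (w = 3, words written with the first simulated circuit in the most significant bit) using Python integers as words:

```python
# One bitwise machine instruction simulates the same gate in w copies of the
# circuit at once. Figure 9's numbers with w = 3, words written MSB-first.
W = 3
MASK = (1 << W) - 1

i1 = 0b110                   # signal at I1 across the three simulated circuits
i2 = 0b010                   # signal at I2
og1 = i1 & i2                # AND gate G1: all three circuits at once
print(format(og1, "03b"))    # 010, as in Figure 9

og_or = (i1 | i2) & MASK     # an OR gate uses the bitwise OR the same way
print(format(og_or, "03b"))  # 110
```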
It may be noted that each net of the circuit has a 4-bit array, where the first bit corresponds to the
normal circuit and the remaining three bits to the three faults under consideration (s-a-0 at I2(G1),
s-a-1 at OG2 and s-a-0 at I2). To fill the values of the arrays at the branches of a fanout stem when
the stem array is known, there are two steps:
1. Copy the values of the array of the stem to the arrays of the branches.
2. If there is a s-a-0 (or s-a-1) fault at a branch, change the corresponding bit of the
array to 0 (or 1).
The array at OG1 is obtained by a parallel logic AND operation on the words at the input lines I1
and I2(G1). The array at OG2 is obtained by a parallel logic NOT operation on the word at the
input line I2(G2), followed by setting the third bit to 1 (as it corresponds to the s-a-1 fault at OG2).
So, to fill the values of the array at the output of a gate when the values at the inputs are known,
there are two steps:
1. Obtain the values of the array by a parallel logic operation on the bits of the input words.
2. If there is a s-a-0 (or s-a-1) fault at the output of the gate, change the corresponding bit of
the array to 0 (or 1).
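The two gate-output steps, together with the stem-to-branch copying described above, can be sketched as follows. The gate structure assumed for Figure 7 is a reconstruction consistent with the arrays quoted in the text:

```python
# Word-per-net parallel fault simulation with w = 4: bit 0 is the fault-free
# machine; bits 1, 2, 3 carry s-a-0 at I2(G1), s-a-1 at OG2 and s-a-0 at I2.
# The structure (OG1 = I1 AND I2, OG2 = NOT I2, O1 = OG1 AND OG2, O2 = OG1)
# is a guess consistent with the arrays in the text, not the actual Figure 7.
W = 4

def word(bit):                  # replicate one signal across all w machines
    return [bit] * W

def force(wd, pos, val):        # step 2: overwrite the bit of the faulty machine
    wd = list(wd)
    wd[pos] = val
    return wd

AND = lambda a, b: [x & y for x, y in zip(a, b)]   # step 1: parallel gate op
NOT = lambda a: [1 - x for x in a]

i1 = word(1)                                # input pattern I1=1, I2=1
i2 = force(word(1), 3, 0)                   # stem I2: s-a-0 in machine 3
i2_g1 = force(i2, 1, 0)                     # branch I2(G1): s-a-0 in machine 1
i2_g2 = list(i2)                            # branch I2(G2): plain copy of stem
og1 = AND(i1, i2_g1)
og2 = force(NOT(i2_g2), 2, 1)               # s-a-1 at OG2 in machine 2
o1, o2 = AND(og1, og2), og1
print(o1, o2)  # [0, 0, 1, 0] and [1, 0, 1, 0]: the arrays 0010 and 1010
```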
In a similar way the whole example can be explained. The array at O1 is 0010. It implies that for
the input I1=1 and I2=1:
- O1 is 0 under normal conditions,
- O1 is 0 under the s-a-0 fault at I2(G1),
- O1 is 1 under the s-a-1 fault at OG2,
- O1 is 0 under the s-a-0 fault at I2.
It can be seen that only the s-a-1 fault at OG2 causes a measurable difference at primary output O1
between normal and faulty conditions for input pattern I1=1, I2=1. So pattern I1=1, I2=1 can detect
only the s-a-1 fault at OG2 (at output O1), but cannot detect the s-a-0 fault at I2(G1) or the s-a-0 fault at I2.
The array at O2 is 1010. It implies that for the input I1=1 and I2=1:
- O2 is 1 under normal conditions,
- O2 is 0 under the s-a-0 fault at I2(G1),
- O2 is 1 under the s-a-1 fault at OG2,
- O2 is 0 under the s-a-0 fault at I2.
It can be seen that the s-a-0 fault at I2(G1) and the s-a-0 fault at I2 cause a measurable difference at
primary output O2 between normal and faulty conditions for input pattern I1=1, I2=1. So pattern
I1=1, I2=1 at output O2 can detect the s-a-0 faults at I2(G1) and I2, but cannot detect the s-a-1
fault at OG2. However, I1=1, I2=1 at output O1 detects the s-a-1 fault at OG2. So all three
faults are detected by I1=1, I2=1.
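Reading the detected faults off the two output arrays is then a bitwise comparison against bit 0 (the fault-free machine). The arrays and fault ordering below are the ones quoted above:

```python
# A fault is detected if its bit differs from bit 0 at ANY primary output.
FAULTS = [None, "s-a-0 at I2(G1)", "s-a-1 at OG2", "s-a-0 at I2"]  # bit 0 = normal
O1 = [0, 0, 1, 0]      # the array 0010 at output O1
O2 = [1, 0, 1, 0]      # the array 1010 at output O2

detected = {FAULTS[i]
            for out in (O1, O2)
            for i in range(1, len(out))
            if out[i] != out[0]}
print(sorted(detected))  # all three faults are detected by I1=1, I2=1
```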
Thus, in one scan of the circuit, information about three faults for a random pattern is
discovered. It may be noted from Figure 7 that three scans of the circuit were required to find
the same fact. So parallel fault simulation speeds up the serial fault simulation scheme by w-1 times.
After an iteration of parallel fault simulation, the next set of w-1 faults is considered and the
procedure repeated. After all the faults are considered (i.e., (total number of faults)/(w-1)
iterations), the faults detected by the random pattern are dropped. Then another random pattern is
taken and a new set of iterations is started.
So parallel fault simulation speeds up serial fault simulation w-1 times, but for a random pattern
more than one iteration is required. In the next section we will see a scheme where, in one
iteration, information about ALL faults for a random pattern can be generated.
3.3 Deductive Fault Simulation
From the discussion in the last section it can be noted that parallel fault simulation can speed up
the procedure only by a factor that depends on the bit width of the computer being used. In this
section we will discuss deductive fault simulation, a procedure which can determine, in a single
iteration, the detectability or undetectability of all faults by a given random pattern. In the
deductive method, first the fault-free circuit is simulated with a random pattern and all nets are
assigned the corresponding signal values. Then, as the name suggests, all faults detectable at the
nets are deduced using the structure of the circuit and the signal values of the nets.
Since the circuit structure remains the same for all faulty circuits, all deductions are carried out
simultaneously. Thus, a deductive fault simulator processes all faults in a single pass of
simulation augmented with the deductive procedures. Once the detectability of all faults for a
random pattern is determined, the same procedure is repeated for the next random pattern after
eliminating the covered faults.
We will explain the procedure by a simple example circuit given in Figure 11.
Next we will see the fault deductions at the various nets if I1=1. This situation is illustrated in
Figure 12.
So, now the question remains: what are the rules for fault deduction at a 2-input AND gate with
one input as 1 and the other as 0? The rules are explained using another example in Figure 13.
Figure 13. Example of deductive fault simulation with input I1=0 (Inputs of the AND gate are 1
and 0)
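The deduction rules for an AND gate can be sketched as set operations on fault lists. These are the standard rules for the deductive method (Figure 13 itself is not reproduced), and the example fault lists La and Lb are hypothetical:

```python
# Deductive rules for a 2-input AND gate. L_x is the fault list at net x: the
# set of faults whose effect is visible at x for the current pattern.
def and_gate_fault_list(va, La, vb, Lb, out_net):
    """va, vb: fault-free input values; La, Lb: fault lists at the inputs."""
    if va == 1 and vb == 1:            # no controlling input: output is 1
        Lout = La | Lb                 # any single visible input fault propagates
        Lout |= {(out_net, 0)}         # and s-a-0 at the output is visible
    else:                              # at least one input at the controlling 0
        ctrl = [L for v, L in ((va, La), (vb, Lb)) if v == 0]
        nonctrl = [L for v, L in ((va, La), (vb, Lb)) if v == 1]
        inter = set.intersection(*ctrl)
        union_non = set().union(*nonctrl) if nonctrl else set()
        # every controlling input must look faulty, and no non-controlling
        # input may look faulty, for the effect to reach the output
        Lout = inter - union_non
        Lout |= {(out_net, 1)}         # output is 0, so s-a-1 is visible
    return Lout

# Figure 13's setting: inputs 1 and 0, with hypothetical input fault lists.
La = {("I1", 0)}                       # s-a-0 visible at the 1-valued input
Lb = {("I2", 1)}                       # s-a-1 visible at the 0-valued input
result = and_gate_fault_list(1, La, 0, Lb, "O")
# the list at O contains ('I2', 1) and ('O', 1): only faults that flip the
# controlling input (plus the output's own fault) reach O
```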