
ABSTRACT

A new low-power (LP) scan-based built-in self-test (BIST) technique is proposed, based on weighted pseudorandom test pattern generation and reseeding. A new LP scan architecture is proposed that supports both pseudorandom testing and deterministic BIST. During the pseudorandom testing phase, an LP weighted random test pattern generation scheme is applied by disabling a part of the scan chains. During the deterministic BIST phase, the design-for-testability architecture is modified slightly while the linear-feedback shift register is kept short. In both cases, only a small number of scan chains are activated in any single cycle.

The proposed methodology uses 3-weight codes and a Walsh encoding technique to improve coverage for structural faults. The BIST methodology uses an automatic test pattern generation (ATPG) tool to generate constrained test patterns that effectively test the combinational fundamental intellectual properties used in the processor. The effectiveness of the proposed methodology is demonstrated by the achieved fault coverage and test program size; the improved PRPG compression attains high quality, accuracy and run-time efficiency. Sufficient experimental results are presented to demonstrate the performance of the proposed LP BIST approach, and they show that the proposed ATPG method can achieve good structural fault coverage.

CHAPTER I
INTRODUCTION


In general, combinational circuits are not pseudo-exhaustively testable, so deterministic test sets have to be applied if the circuit is not allowed to be segmented by test points for timing or area reasons. Earlier DFT (Design for Testability) techniques concentrated on fault coverage, test length, test application time and test quality. Today's ATPG (Automatic Test Pattern Generator) tends to produce fewer test patterns and cannot cover most of the test scenarios; in addition, these test patterns cause a large number of internal nodes to switch.

Fig. 1.1: BIST Configuration.


As the complexity of VLSI circuits constantly increases, there is a need for a built-in test mechanism. Built-in self-test enables the chip to test itself and to evaluate the circuit's response. Many BIST design methods have been proposed. In most of the state-of-the-art methods, some kind of pseudorandom pattern generator (PRPG) is used to produce vectors to test the circuit. These vectors are applied to the circuit either as they are, or the vectors are modified by some additional circuitry in order to obtain better functional coverage. Patterns generated by simple LFSRs or cellular automata (CA) often do not provide satisfactory functional coverage.

Thus, these patterns have to be modified somehow. One of the best-known approaches is weighted random pattern testing. Here the LFSR code words are modified by a weighting logic to produce a test with given probabilities of occurrence of 0's and 1's at the particular circuit under test (CUT) inputs. As digital systems become more complex, they become much harder and more expensive to test. One solution to this problem is to add extra logic to the IC so that it can test itself; this is referred to as Built-In Self-Test (BIST). The BIST approach is beneficial in many ways. First, it can reduce dependency on external, costly Automatic Test Equipment (ATE). In addition, BIST can provide at-speed, in-system testing of the circuit under test.
Low power consumption has become increasingly important in hand-held communication systems and battery-operated equipment, such as laptop computers, audio- and video-based multimedia products, and cellular phones. For this new class of battery-powered devices, the energy consumption is a critical design concern since it determines the lifetime of the batteries. In addition, the capabilities of advanced submicron CMOS technology, which allows putting millions of transistors on a chip and clocking them at hundreds of MHz, have compounded the problem of power and energy consumption. A strong push towards reducing power consumption is also coming from producers of high-end systems. The cost associated with packaging and cooling such devices is huge and the technological constraints are severe: unless power consumption is reduced, the resulting heat limits system performance.
Although over the next years the primary objective of manufacturing test will remain essentially the same, namely to ensure reliable and high-quality semiconductor products, the conditions, and consequently also the test solutions, may undergo a significant evolution. The semiconductor technology, design characteristics, and the design process are among the key factors that will impact this evolution. With new types of defects that one will have to consider to provide the desired test quality for the next technology nodes, such as 3-D, it is appropriate to ask what matching design-for-test (DFT) methods will need to be deployed.
The most important role of the test pattern generator is to deliver test patterns to the circuit under test (CUT). The output response analyzer evaluates the resultant output patterns, as shown in Fig. 1.1. Ideally, a BIST scheme is effortless to implement and must provide high fault coverage. When a test pattern is applied to the CUT, a number of nodes switch, and the correlation between two consecutive test vectors is very low. The test generation module that has been widely used in BIST design is the linear feedback shift register (LFSR). Linear feedback shift registers are also extensively used in BCH encoders and CRC operations.

However, an LFSR is a linear system, leading to fairly easy cryptanalysis, and a sequential LFSR circuit cannot meet the speed requirement when high-speed data transmission is required. The new test pattern generation method therefore uses the processing units of a DSP (digital signal processor). The proposed method utilizes GBMAC (multiply and accumulate) units, which consist of a multiplier and several accumulators, in the DSP to generate test patterns. The multiplier takes a seed value and performs a Galois-based multiplication.

In order to generate pseudorandom test patterns, the addition of two products is done in the accumulator. Consequently, a block can be analyzed using a mixture of test-pattern combinations, which helps to decrease the power consumption, area overhead and delay associated with sequential circuits.
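
To make this concrete, the following is a minimal VHDL sketch of a Galois-based multiply-accumulate pattern generator of the kind described above. It assumes a 4-bit datapath over GF(2^4) with the primitive polynomial x^4 + x + 1 and a free-running counter as the second multiplicand; the entity name, width and polynomial are illustrative choices, not taken from the original design.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity gbmac_tpg is
    port ( clk, rst : in  std_logic;
           seed     : in  std_logic_vector(3 downto 0);
           pattern  : out std_logic_vector(3 downto 0) );
end entity;

architecture rtl of gbmac_tpg is
    -- Bit-serial multiplication in GF(2^4) modulo x^4 + x + 1.
    function gf_mul(a, b : std_logic_vector(3 downto 0))
        return std_logic_vector is
        variable p  : std_logic_vector(3 downto 0) := (others => '0');
        variable aa : std_logic_vector(3 downto 0) := a;
    begin
        for i in 0 to 3 loop
            if b(i) = '1' then
                p := p xor aa;               -- conditionally add shifted operand
            end if;
            if aa(3) = '1' then              -- multiply aa by x and reduce
                aa := (aa(2 downto 0) & '0') xor "0011";
            else
                aa := aa(2 downto 0) & '0';
            end if;
        end loop;
        return p;
    end function;

    signal cnt : unsigned(3 downto 0) := (others => '0');
    signal acc : std_logic_vector(3 downto 0) := (others => '0');
begin
    process (clk)
    begin
        if rising_edge(clk) then
            if rst = '1' then
                cnt <= (others => '0');
                acc <= (others => '0');
            else
                cnt <= cnt + 1;
                -- accumulate the Galois product (addition over GF(2) is XOR)
                acc <= acc xor gf_mul(seed, std_logic_vector(cnt));
            end if;
        end if;
    end process;
    pattern <= acc;
end architecture;

Each clock cycle the accumulator XORs in a fresh Galois product, so successive values of pattern form a pseudorandom sequence determined by the seed.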

1.1 OBJECTIVES

 To improve PRPG compression effectively while maintaining good fault coverage.
 To attain high quality, accuracy and run-time efficiency.
 To reduce the memory size occupied by the instructions.

1.2 ADVANTAGES

• To improve PRPG compression effectively while maintaining good fault coverage.
• To attain high quality, accuracy and run-time efficiency.
• To reduce the memory size occupied by the instructions.

1.3 APPLICATIONS

• Communication applications
• Testing of modern microprocessors
• Testing of VLSI circuits

CHAPTER II
EXISTING SYSTEM

Test compression, introduced a decade ago, has quickly become the mainstream DFT methodology. However, it is unclear whether test compression will be capable of coping with the rapid rate of technological changes over the next decade. Interestingly, logic built-in self-test (LBIST), originally developed for board, system, and in-field test, is now gaining acceptance for production test, as it provides very robust DFT and is used increasingly often with test compression. This hybrid approach seems to be the next logical evolutionary step in DFT. It has the potential for improved test quality, it may augment the ability to run at-speed power-aware tests, and it can reduce the cost of manufacturing test while preserving all LBIST and scan compression advantages.
Attempts to overcome the bottleneck of test data bandwidth between the
tester and the chip have made the concept of combining LBIST and test data
compression a vital research and development area. In particular, several hybrid
BIST schemes store deterministic top-up patterns (used to detect random pattern
resistant faults) on the tester in a compressed form, and then use the existing BIST
hardware to decompress these test patterns.
Some solutions embed deterministic stimuli by using compressed weights or by perturbing pseudorandom vectors in various fashions. If BIST logic is used to deliver compressed test data, then the underlying encoding schemes typically take advantage of low fill rates, as originally proposed in LFSR coding, which subsequently evolved first into static LFSR reseeding, and then into dynamic LFSR reseeding.
Thorough surveys of relevant test compression techniques can be found in the literature. As with conventional scan-based test, hybrid schemes, due to the high data activity associated with scan-based test operations, may consume much more power than the circuit under test was designed to function under, overstressing devices beyond the mission mode. Reductions in the operating power of ICs in test mode have therefore been of concern for years.
Full-toggle scan patterns may draw several times the typical functional-mode power, and this trend continues to grow, particularly over the mission mode's peak power. This power-induced over-test may result in thermal issues, voltage noise, power droop, or excessive peak power over multiple cycles which, in turn, cause yield loss due to instant device damage, a severe decrease in chip reliability, shorter product lifetime, or device malfunction because of timing failures following a significant circuit delay increase.
The first method is exhaustive testing, in which all feasible input patterns are applied to the CUT; that is, for an n-input combinational circuit, all possible 2^n patterns need to be applied. The advantage of this method is that all non-redundant faults can be detected, although a bridging fault cannot be detected.

The disadvantage of this method is that when n is large, the test application time becomes excessive, even with high clock speeds. Thus, exhaustive testing is suitable only for circuits with a small number of inputs. The second method is a slight modification of exhaustive testing, called pseudo-exhaustive testing (Bo Ye., Tian-wang Li., July 2010). It has the same advantages as exhaustive testing while considerably reducing the number of test patterns to be applied.

The basic idea in pseudo-exhaustive testing is to partition the circuit under test into several sub-circuits such that each sub-circuit has few enough inputs for exhaustive testing to be feasible for it. This concept has been used in the autonomous design verification procedure.

The test pattern compression method is basically based on finding the best overlap of test patterns pre-generated by an ATPG. The test patterns are serially shifted into the scan chain. This idea was first described in the literature. The algorithm generally tries to find contiguous and consecutive test patterns having the maximum overlap. Deterministic test patterns are generated by an ATPG and compacted. Patterns in the scan chain are checked for whether they match one or more test patterns that have not yet been employed in the sequence. The pattern overlapping problem has also been converted into a Traveling Salesman Problem (TSP), for which different heuristics have been presented.

A new test pattern compression algorithm (SAT-Compress), based on a modification of SAT-based ATPG, has been presented. This algorithm utilizes a CNF (Conjunctive Normal Form) implicit representation of test patterns and tries to compress the test patterns by overlapping. In contrast to competitive state-of-the-art test compression techniques, the proposed algorithm does not rely on a pre-generated test set; the most suitable test patterns are generated on the fly.

PROPOSED SYSTEM

We propose a new LP scan-based BIST architecture, which supports LP pseudorandom testing, LP deterministic BIST and LP reseeding. The major contributions of this paper are as follows. 1) A new LP weighted pseudorandom test pattern generator using weighted test-enable signals is proposed, based on a new clock-disabling scheme. The design-for-testability (DFT) architecture to implement the LP BIST scheme is presented. Our method generates a series of degraded subcircuits. The new LP BIST scheme selects weights for the test-enable signals of all scan chains in each of the degraded subcircuits, which are activated to maximize testability.
2) A new LP deterministic BIST scheme is proposed to encode the deterministic test patterns for random-pattern-resistant faults. Only a part of the flip-flops are activated in each cycle of the whole deterministic BIST process. A new procedure is proposed to select a primitive polynomial and the number of extra variables injected into the linear-feedback shift register (LFSR) that encode all deterministic patterns. The new LP reseeding scheme can cover a number of vectors with fewer care bits, which allows a small part of the flip-flops to be activated in any clock cycle.

3) We propose a new weighted PRPG for the new LP BIST approach. The new design is significantly different from previous ones, mainly because the proposed LP design uses a gating technique to disable most of the scan chains, where the pseudo primary inputs (PPIs) of the disabled scan chains are set to constant values. All scan chains in the same scan tree are selected into the same subset of scan chains, which are driven by the same clock signal.

Our method selects weights for each scan chain in the degraded subcircuits. Let the scan chains be partitioned into k subsets, where only one subset of scan chains is activated in any clock cycle. Our method selects optimal weights for all scan chains in one subset in each round, so it requires k separate rounds to determine optimal weights for all scan chains.

Weighted Pseudorandom Test Pattern Generation

Our method generates the degraded subcircuits for all subsets of scan chains in the following way. All PPIs related to the disabled scan chains are randomly assigned specified values (1 and 0). Note that all scan flip-flops at the same level of the same scan tree share the same PPI. For any gate, the gate is removed if its output is specified; an input can be removed from a NAND, NOR, AND, or OR gate if the input is assigned a noncontrolling value and the gate has at least three inputs. For a two-input AND or OR gate, the gate is removed if one of its inputs is assigned a noncontrolling value.
For a NOR or NAND gate, the gate degrades to an inverter if one of its inputs is assigned a noncontrolling value. For an XOR or NXOR gate with more than two inputs, an input is simply removed from the circuit if it is assigned value 0; if an input is assigned value 1, the input is removed, an XOR gate changes to an NXOR gate, and an NXOR gate changes to an XOR gate. For a two-input XOR gate with one of its inputs assigned value 0, the gate is deleted from the circuit.
For a two-input NXOR gate, the gate degrades to an inverter if one of its inputs is assigned value 0. If one of its inputs is assigned value 1, a two-input XOR gate degrades to an inverter, while a two-input NXOR gate can be removed from the circuit. We first propose a new procedure to generate the weights of the test-enable signals for all scan chains in the LP DFT circuit after the degraded subcircuits for each subset of scan chains, which are driven by a single clock signal, have been produced. The i-controllability C_i(l) (i ∈ {0, 1}) of a node l is defined as the probability that a randomly selected input vector sets l to the value i. The observability O(l) is defined as the probability that a randomly selected input vector propagates the value of l to a primary output. The signal probability of a node is defined in the same manner as its 1-controllability measure.
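
As a worked illustration of these measures (a standard COP-style calculation, not taken from this paper), consider a two-input AND gate z = a·b with independent inputs:

C_1(z) = C_1(a) · C_1(b),    C_0(z) = 1 − C_1(a) · C_1(b).

With unbiased inputs, C_1(a) = C_1(b) = 0.5, this gives C_1(z) = 0.25; it is precisely this kind of skew that the weights chosen for the test-enable signals are meant to compensate.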

Low-Power Deterministic BIST and Reseeding

An effective seed encoding scheme is used here to reduce the storage requirements for the deterministic test patterns of the random-pattern-resistant faults. The encoded seed is shifted into the LFSR first. A deterministic test vector is shifted into the scan trees that are activated by the gating logic, where each scan-in signal drives a number of scan trees, and only one of the scan trees driven by the same scan-in signal is activated. The extra variables are injected into the LFSR when the seed is shifted into the activated scan trees. The gating logic partitions the scan trees into multiple groups. The first group of scan trees is disabled after they have received the test data.
The second group of scan trees is activated simultaneously, and all other
scan trees are disabled. The seed can be stored in an extra shadow register, which
is reloaded to the LFSR in a single clock cycle. The scan shift operations are
repeated when the extra variables are injected into the LFSR. This process
continues until all scan trees have received test data.
The outputs of all scan chains that are driven by the same clock signal are connected to the same response compactor during the deterministic BIST phase. This offers additional flexibility for test encoding. The test responses of the previous test vector can be shifted out with only a few clock cycles (corresponding to the depth of the scan trees in the pseudorandom testing phase). For a conventional scan chain architecture, the number of clock cycles needed to shift out the test responses of the previous deterministic test vector is much larger.
The proposed LP tree-based architecture makes the reseeding scheme much easier to implement. Let us describe the details of constructing the scan forest. Assume that the number of scan flip-flops at each level in the same scan tree is l and the depth of the scan forest is d. For a given scan-in pin, l scan flip-flops are selected among all scan flip-flops for the first level of the scan tree.

The routing overhead is minimized when constructing the scan trees, and it can be easily estimated using tools; the experimental results reported in this paper were obtained using the Astro tool. All scan flip-flops at the same level in the same scan tree meet the following condition: each pair of scan flip-flops has no common combinational successor in the circuit.

Each scan flip-flop p at the first level of the scan tree is connected to a scan flip-flop f at the second level that has the minimum distance from p among all scan flip-flops that can be placed at the second level of the scan tree, where all scan flip-flops at the second level of the same scan tree have no common combinational successor. The above process is repeated until the scan trees have been constructed. It is not necessary for scan flip-flops at the same level of the same scan tree to be in the same neighborhood.
We propose an LP deterministic BIST scheme with reseeding. The deterministic test vectors for the random-pattern-resistant faults are ordered according to the number of care bits. Our method partitions all scan trees into multiple subsets, while only one subset of scan trees is activated at any clock cycle. The gating logic controls the whole test application process. The first deterministic test vector is shifted into all scan trees as follows.
The seed is first shifted into the LFSR. The extra variables with calculated
values are injected into the LFSR when the seed is applied to the first subset of
activated scan trees. The same values on the extra inputs are delivered after the
same seed is loaded to the LFSR again for the second subset of activated scan
trees. This process continues until all scan trees have received the test vector. The
capture process starts after the LP shift period.
The first subset of scan trees captures test responses while all other scan trees are disabled. The seed is loaded into the LFSR, and the extra variables with the calculated values are injected again simultaneously in order to refill the deterministic test vector. The captured test responses are shifted into the MISR while the test vector is being refilled. After the activated subset of scan trees has been refilled with the test vector, the second activated subset of scan trees captures test responses.
This process continues until all scan trees have captured test responses. The captured test responses of the last subset of scan trees are shifted out when the second test vector is shifted into this subset of scan trees. Our method then turns to the reseeding process. The final values in the LFSR remain unchanged. The activated subset of scan trees performs d shift cycles while the extra variables with the same values are injected. The second subset of activated scan trees then performs d shift cycles while the same values of the extra variables are injected. This process continues until the values of the extra variables have been shifted into all scan trees. Our method then checks the values of the scan trees to see whether they are compatible with any remaining deterministic test vector.

If so, the test vector is deleted from the ordered test sequence, and another LP capture period is applied from this state as stated earlier. If the values kept in the scan chains are not compatible with any remaining deterministic vector, our method continues the response-capturing process. Assume that the initial values kept in the LFSR are stored in the shadow register. The first subset of scan trees is activated and captures the test responses. The values kept in the shadow register are reloaded into the LFSR. The values of the extra variables are injected again while the activated scan trees are filled. The above process continues until all scan trees have captured test responses.

CHAPTER III
LITERATURE SURVEY
3.1 BISD: Scan-Based Built-In Self-Diagnosis – P.H. Bardell (Nov 1987)
Built-In Self-Test (BIST) is less often applied to random logic than to embedded memories for the following reasons. Firstly, for satisfactory fault coverage it may be necessary to apply additional deterministic patterns, which cause additional hardware costs. Secondly, the BIST signature reveals only poor diagnostic information. Recently, the first issue has been addressed successfully. The paper at hand proposes a viable, effective and cost-efficient solution to the second problem. The paper presents a new method for Built-In Self-Diagnosis (BISD). The core of the method is an extreme response compaction architecture, which for the first time enables an autonomous on-chip evaluation of test responses with negligible hardware overhead.
The key advantage of this architecture is that all data, which is relevant for a
subsequent diagnosis, is gathered during just one test session. The BISD method
comprises a hardware scheme, a test pattern generation approach and a diagnosis
algorithm. Experiments conducted with industrial designs substantiate that the
additional hardware overhead introduced by the BISD method is on average about
15% of the BIST area, and the same diagnostic resolution can be obtained as for
external testing.

3.2 An Efficient Test Data Reduction Technique Through Dynamic Pattern Mixing Across Multiple Fault Models – J. Torik (Nov 2011)
ATPG-tool-generated patterns are a major component of test data for large SOCs. With increasing chip sizes, higher integration involving IP cores, and the need for patterns targeting multiple fault models for better defect coverage in newer technologies, the issues of adequate coverage and reasonable test data volume and application time dominate the economics of test. We address the problem of generating a compact set of test patterns across multiple fault models. Traditional approaches use separate ATPG for each fault model and minimize patterns either during pattern generation through static or dynamic compaction, or after pattern generation by simulating all patterns over all fault models for static compaction.
We propose a novel ATPG technique where all fault models of interest are concurrently targeted in a single ATPG run. Patterns are generated in small intervals, each consisting of 16, 32 or 64 patterns. In each interval, fault-model-specific ATPG setups generate separate pattern sets for their respective fault models. An effectiveness criterion then selects exactly one of those pattern sets: the selected set covers untargeted faults that would have required the most additional patterns. Pattern generation intervals are repeated until the required coverage for faults of all models of interest is achieved.

The sum total of all selected interval pattern sets is the overall test set for the DUT. Experiments on industrial circuits show pattern count reductions of 21% to 68%. The technique is independent of any special ATPG tool or scan compression technique and requires no change or additional support in an existing ATPG system.
CHAPTER IV
PROJECT DESCRIPTION
TEST PATTERN METHODOLOGY

Several techniques are used to generate test sequences that achieve high fault coverage at low computational complexity. The overall test generation process is shown in Fig. 5.3. Static test compaction, where an input vector is held for an optimal number of hold cycles, together with input vector perturbation and the identification of subsequences, is useful in extending the test sequence. However, because the test generation method does not use deterministic test generation steps such as implication or branch-and-bound, it does not identify undetectable faults. Restoration-based compaction procedures reduce the length of the test sequence for a circuit, but at the cost of reduced fault coverage. Enhanced-scan and skewed-load techniques produce tests that are unrealizable in normal operation, and they proved futile owing to their negative impact on fault coverage.

Fig. 5.3 Overall Test Generation Process

So our work mainly concentrates on improving fault coverage and reducing the test sequence length. The upcoming sections describe the LT-RTPG and its architecture, the concept of 3-weight WRBIST, and the proposed TPG architecture.

5.8 LT-RTPG

The combinational block of a sequential circuit can be considered as a collection of output cones, where an output cone is composed of all logic and inputs that feed the output. A pair of inputs is said to be compatible if there exists no cone of the circuit to which they both belong; correlation between their values will not reduce the fault coverage for any test length or any fault. Let S_j be the distance between the first and last flip-flops in the scan chain that drive cone j, as shown in Fig. 5.4: if the αth and βth flip-flops of the scan chain drive the first and last flip-flops of cone j, then S_j = β − α + 1. The span S of the circuit is defined as the maximum of the spans of all its cones. For the above type of faults, it is sufficient to apply all possible patterns to each set of S consecutive flip-flops of the scan chain to guarantee coverage of all faults.

Fig.5.4. Low Transition RTPG

5.8.1 Architecture of LT-RTPG

The LT-RTPG reduces switching activity (SA) during BIST by reducing transitions at scan flip-flops during scan shift operations. The LT-RTPG comprises an r-stage LFSR, a k-input AND gate, and a T flip-flop; hence, it can be implemented with very little hardware. We assume that every LFSR stage D_i, where i = 1, 2, ..., r, has a normal as well as an inverted output, Q_i and Q̄_i respectively. The normal (inverted) output Q_i (Q̄_i) of a stage D_i of the LFSR is said to have inversion parity 0 (1). Each of the k inputs of the AND gate is connected to either a normal or an inverted output of an LFSR stage. If an output of an LFSR stage is connected to an input of the AND gate, the stage is said to be tapped. The tap configuration of the TPG, TC = {Q_1*, Q_2*, ..., Q_k*}, where Q_x* = Q_x or Q̄_x for x = 1, 2, ..., k, denotes the set of tapped stages of the LFSR and the inversion parities of the outputs of the tapped stages.

The output of the AND gate is connected to the T flip-flop, and finally the output of the T flip-flop is connected to the scan chain. If k is large, large sets of neighbouring state inputs will be assigned identical values in most test patterns, resulting in a decrease in fault coverage or an increase in test sequence length. Hence, LT-RTPGs with only k = 2 or 3 are used. Since a T flip-flop holds its previous value until its input is assigned a 1, the same value v, where v ∈ {0, 1}, is repeatedly scanned into the scan chain until the value at the output of the AND gate becomes 1. The scheme is shown in Fig. 5.4.
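
The following VHDL sketch shows one way the LT-RTPG structure just described could look. It assumes an 8-stage LFSR with the primitive feedback polynomial x^8 + x^6 + x^5 + x^4 + 1 and k = 2 taps (one normal, one inverted output); the polynomial, tap positions and entity name are illustrative assumptions, not taken from the report.

library ieee;
use ieee.std_logic_1164.all;

entity lt_rtpg is
    port ( clk, rst : in  std_logic;
           scan_in  : out std_logic );   -- low-transition scan-in stream
end entity;

architecture rtl of lt_rtpg is
    signal lfsr : std_logic_vector(7 downto 0) := x"01";  -- nonzero seed
    signal tff  : std_logic := '0';
    signal tap  : std_logic;
begin
    -- k = 2 taps: one normal and one inverted LFSR output
    tap <= lfsr(3) and (not lfsr(6));

    process (clk)
    begin
        if rising_edge(clk) then
            if rst = '1' then
                lfsr <= x"01";
                tff  <= '0';
            else
                -- Fibonacci LFSR, feedback for x^8 + x^6 + x^5 + x^4 + 1
                lfsr <= lfsr(6 downto 0) &
                        (lfsr(7) xor lfsr(5) xor lfsr(4) xor lfsr(3));
                -- the T flip-flop toggles only when the AND of the taps is 1,
                -- so long runs of identical values are scanned in
                if tap = '1' then
                    tff <= not tff;
                end if;
            end if;
        end if;
    end process;

    scan_in <= tff;
end architecture;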

5.9 3-WEIGHTED WRBIST

In 3-weight WRBIST, detection probabilities of RPRFs are improved by fixing part of the inputs of the CUT to binary values specified in test cubes for the targeted RPRFs. A test cube for a fault is a test that has unspecified inputs, and the detection probability of a fault is defined as the probability that a randomly generated vector detects the fault.

A generator, or a weight set, is a vector that conveys information on which inputs are to be fixed and the values to which these inputs are to be fixed during 3-weight weighted random BIST. Let C = {c^0, c^1, ..., c^(d−1)} be the set of test cubes for the RPRFs in a CUT, where c^j = (c_0^j, c_1^j, ..., c_(m−1)^j) is an m-bit test cube, m is the number of inputs of the CUT, and c_k^j ∈ {0, 1, X}, where X is a don't care. The generator of the BIST, gen(C) = (g_0, g_1, ..., g_(m−1)), is an m-bit tuple where g_k ∈ {0, 1, X, U} (k = 0, 1, ..., m−1) and g_k is defined as

g_k = 1, if c_k^j = 1 or X in all c^j ∈ C and at least one c_k^j = 1,
      0, if c_k^j = 0 or X in all c^j ∈ C and at least one c_k^j = 0,
      U, if c_k^a = 1 and c_k^b = 0 for some c^a, c^b ∈ C,
      X, otherwise.                                            (5.14)

When an input p_k is assigned a 1 (0) in the generator, fixing it to a 1 (0) improves the detection probabilities of faults that require a 1 (0) at input p_k by a factor of 2. On the other hand, fixing inputs that are assigned a U in the generator to a binary value 0 or 1 may make some faults undetectable, since those inputs are assigned 1 in some test cubes and 0 in other test cubes. If a circuit contains a large number of RPRFs, then the test cubes for the RPRFs may be assigned opposite values at many inputs, resulting in a generator where most inputs are assigned U's. Only a few inputs can be fixed in such generators without making some faults undetectable. Hence, if a circuit has a large number of RPRFs, then multiple generators, each of which is calculated from the test cubes for a subset of the RPRFs in the circuit, may be required to achieve high fault coverage.
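
As a small worked example of (5.14) (constructed here for illustration, not taken from the report), let m = 4 and take two test cubes

c^0 = (1, X, 0, 1),    c^1 = (1, 0, X, 0).

Position 0 is 1 in both cubes, so g_0 = 1. Position 1 is X in c^0 and 0 in c^1, so g_1 = 0. Position 2 is 0 or X everywhere with at least one 0, so g_2 = 0. Position 3 is 1 in c^0 but 0 in c^1, so g_3 = U. Hence gen(C) = (1, 0, 0, U), and only the first three inputs may be fixed without making some targeted fault undetectable.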

5.9.1 Architecture of 3-weight WRBIST

Two different scan-based 3-weight WRBIST schemes are serial-fixing and parallel-fixing 3-weight WRBIST. In this paper, the serial-fixing 3-weight WRBIST is exclusively used because it has the property of reducing transitions at scan inputs during scan shift operations; this low-transition property comes from the implementation of the serial-fixing 3-weight WRBIST for the generator. The scan counter is a modulo-(m+1) counter, where m is the number of scan elements in the scan chain. When the content of the scan counter is k, the value for input p_k is scanned into the scan chain.

The generator counter selects the appropriate generator; if the content of the generator counter is i, generator gen(C_i) is selected to generate T_i 3-weight WRBIST patterns. The pseudorandom pattern sequence generated by the LFSR is fixed by controlling the AND and OR gates with overriding signals s0 and s1: fixing a random value to a 0 is achieved by setting s0 to a 1 and s1 to a 0, and fixing a random value to a 1 is achieved by setting s1 to a 1. Since a random value can be fixed to a 1 by setting s1 to a 1 independent of the state of s0, the state of s0 is a don't-care when fixing a random value to a 1.
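
A minimal VHDL sketch of this fixing logic follows, under the reading that the AND gate kills the random bit when s0 = 1 and the OR gate forces it when s1 = 1; the signal and entity names are illustrative.

library ieee;
use ieee.std_logic_1164.all;

entity weight_fix is
    port ( lfsr_bit : in  std_logic;   -- pseudorandom bit from the LFSR
           s0, s1   : in  std_logic;   -- overriding signals from the T flip-flops
           scan_in  : out std_logic ); -- weighted bit shifted into the scan chain
end entity;

architecture rtl of weight_fix is
begin
    -- s0 = 1, s1 = 0 : fix to 0;  s1 = 1 : fix to 1 (s0 is then a don't-care);
    -- s0 = 0, s1 = 0 : pass the random value through unchanged
    scan_in <= (lfsr_bit and not s0) or s1;
end architecture;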

The outputs of the decoding logic, D0 and D1, are generated from the outputs of the scan counter and the generator counter. T flip-flop TF0 (TF1) toggles its state when the output of the decoding logic D0 (D1) is assigned a 1. In consequence, the on-set of the function for the decoding logic lists the contents of the generator and scan counters at the test cycles when TF0 and/or TF1 need to toggle. The scan counter is required by all scan-based BIST schemes and is not particular to the proposed BIST scheme. All BIST controllers also need a vector counter that counts the number of test patterns applied. The architecture is shown in Fig. 5.5.

Fig. 5.5 Architecture of 3-weight WRBIST

The generator counter can be implemented with the ⌈log2 m⌉ MSB stages of the existing vector counter, where m is the number of generators; hence, no additional hardware is required for the generator counter either. The hardware overhead for implementing 3-weight WRBIST is thus incurred only by the decoding logic and the fixing logic, which comprise the two T flip-flops and the AND and OR gates. In order to minimize the hardware overhead of the decoding logic, the number of minterms in the on-set (or off-set) of its function is minimized. This is achieved by ordering the scan chains such that the number of toggles at TF0 and TF1 required to scan in the values specified in the generators is minimized. To this end, inputs that are assigned the same values in most generators are placed as neighbours in the same scan chain by the scan ordering procedure, which minimizes the toggles required at the T flip-flops.

5.10 ANALYSIS

As a first attempt to examine the combined effect of different noise sources, we recently performed a dynamic-simulation-based study to establish the importance of considering gate-leakage-induced loading noise while performing signal integrity analysis for nanoscale CMOS designs. The study identifies the nets that cause logic violations at the fan-out stage under a pattern-dependent dynamic environment.

The proposed algorithm retains the completeness of the solution in the sense that, given enough time and space, it will find the pattern pair that causes the maximal noise condition on a given interconnect net and will evaluate the existence of a sensitized path from the fault site to an observation point to propagate the fault effect. Therefore, given a set of coupled nets for a given circuit, the proposed technique identifies a subset of failing nets and their respective tests.

4.5 TEST DATA COMPRESSION

As Fig. 1.4 illustrates, test data compression involves adding some additional on-chip hardware before and after the scan chains. This additional hardware decompresses the test stimulus coming from the tester; it also compacts the response after the scan chains and before it goes to the tester. This permits storing the test data in a compressed form on the tester. With test data compression, the tester still applies a precise deterministic (ATPG-generated) test set to the circuit under test (CUT).
This process differs from that of hybrid BIST, which applies a large number of patterns, including both pseudorandom and deterministic data. Although hybrid BIST can reduce the amount of test data on the tester more than test data compression can, hybrid BIST generally requires longer test application time because more patterns must be applied to the CUT than with test data compression (in essence, hybrid BIST trades more test application time for less tester storage). The advantage of test data compression is that the complete set of patterns applied to the CUT is generated with ATPG, and this set of test patterns is optimizable with respect to the desired fault coverage. Test data compression is also easier to adopt in industry because it is compatible with the conventional design rules and test generation flows for scan testing.
Test data compression provides two benefits. First, it reduces the amount of data stored on the tester, which can extend the life of older testers that have limited memory. Second, and this is the more important benefit, which applies even for testers with plenty of memory, it can reduce the test time for a given test data bandwidth. Doing so typically involves having the decompressor expand the data from n tester channels to fill more than n scan chains. Increasing the number of scan chains shortens each scan chain, in turn reducing the number of clock cycles needed to shift in each test vector.
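
A quick back-of-the-envelope illustration (with numbers chosen for this example, not taken from the text): suppose a design has 20,000 scan flip-flops and the tester provides n = 8 channels. Driving 8 external chains directly requires 20,000 / 8 = 2,500 shift cycles per vector; a decompressor expanding those 8 channels into 200 internal chains needs only 20,000 / 200 = 100 shift cycles per vector, a 25x reduction in shift time for the same tester bandwidth, bounded in practice by the care-bit density the decompressor can encode.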
Test data compression must compress the test vectors losslessly (that is, it must reproduce all the care bits after decompression) to preserve fault coverage. The output response, on the other hand, can use lossy compaction (which does not reproduce all data and loses information) with negligible impact on fault coverage. Ideally, the output response could be compacted using just a multiple-input signature register (MISR). However, any unknown (nondeterministic) values in the output response would corrupt the final signature.

Researchers have developed several schemes to address the problem of unknown values in the output response, including eliminating the source of the unknown values, selectively masking the unknown values in the output stream, or using an output compaction scheme that can tolerate the unknown values. Output compaction is an entire subject in itself, and I will not discuss it further in this article.

Fig. 1.4 Test Data Compression

4.6 LINEAR FEEDBACK SHIFT REGISTER (LFSR)

Linear feedback shift registers (LFSRs) are commonly used in data-compression circuits implementing a signature analysis technique called cyclic redundancy check (CRC). Autonomous LFSRs are used in applications requiring pseudo-random binary numbers. For example, an autonomous LFSR can be a random pattern generator providing stimulus patterns to a circuit.

The response to these patterns can be compared to the circuit's expected response and thereby reveal the presence of an internal fault. The autonomous LFSR shown in Fig. 1.5 has binary tap coefficients C1, ..., CN that determine whether Y[N] is connected directly to the input of the leftmost stage.

In general, if CN−j+1 = 1, then the input to stage j is formed as the exclusive-or of Y[j − 1] and Y[N], for j = 2, ..., N. Otherwise, the input to stage j is the output of stage j − 1: Y[j] <= Y[j − 1]. The vector of tap coefficients determines the coefficients of the characteristic polynomial of the LFSR, which characterizes its cyclic nature. The characteristic polynomial determines the period of the register (the number of cycles before a pattern repeats).
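
For concreteness, here is a small self-contained VHDL LFSR of the kind described above. It is a 4-bit Fibonacci-style register with characteristic polynomial x^4 + x^3 + 1, which is primitive and therefore gives the maximal period of 2^4 − 1 = 15 states; the width and polynomial are illustrative choices, not taken from the report.

library ieee;
use ieee.std_logic_1164.all;

entity lfsr4 is
    port ( clk, rst : in  std_logic;
           q        : out std_logic_vector(3 downto 0) );
end entity;

architecture rtl of lfsr4 is
    signal r : std_logic_vector(3 downto 0) := "0001";  -- any nonzero seed
begin
    process (clk)
    begin
        if rising_edge(clk) then
            if rst = '1' then
                r <= "0001";
            else
                -- feedback taps for x^4 + x^3 + 1: stages 4 and 3
                r <= r(2 downto 0) & (r(3) xor r(2));
            end if;
        end if;
    end process;
    q <= r;
end architecture;

Because the all-zero state has no successor other than itself, the register must be seeded with a nonzero value, after which it cycles through the remaining 15 states.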

Fig. 1.5 Pin Description of Linear Feedback Shift Register

The use of linear feedback shift registers (LFSRs) has been studied extensively by engineers, designers and researchers working in testing, design for testability and built-in self-test environments. LFSRs are rather attractive structures for use in these environments for some of the following reasons:
1) LFSRs have a simple and fairly regular structure,
2) their shift property is easily integrable in the scan design environment,
3) they are capable of generating exhaustive and/or random vectors, and
4) their error correction and error detection properties make them prime candidates for signature analysis applications.
In built-in self-test (BIST) techniques, storing all the circuit outputs on chip is not possible, but the circuit output can be compressed to form a signature, which is later compared to the golden signature (of the good circuit) to detect faults. Since this compression is lossy, there is always a probability that a faulty output generates the same signature as the golden signature and the faults cannot be detected; this condition is called error masking or aliasing. The compression is accomplished by using a multiple-input signature register (MISR or MSR), which is a type of LFSR. A standard LFSR has a single XOR or XNOR gate where the input of the gate is connected to several "taps" and the output is connected to the input of the first flip-flop.
A MISR has the same structure; however, the input to every flip-flop is fed through an XOR/XNOR gate. For example, a four-bit MISR has a four-bit parallel output and a four-bit parallel input. The input of the first flip-flop is XORed with parallel input bit zero and the "taps." Every other flip-flop input is XORed with the preceding flip-flop output and the corresponding parallel input bit. Consequently, the next state of the MISR depends on the last several states as opposed to just the current state. Therefore, a MISR will always generate the same golden signature given that the input sequence is the same every time.
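
The following VHDL sketch is a minimal four-bit MISR of exactly this shape, reusing the x^4 + x^3 + 1 feedback of the earlier LFSR example and XORing one parallel response bit into every stage; as before, the width and polynomial are illustrative assumptions.

library ieee;
use ieee.std_logic_1164.all;

entity misr4 is
    port ( clk, rst : in  std_logic;
           d        : in  std_logic_vector(3 downto 0);   -- parallel test responses
           sig      : out std_logic_vector(3 downto 0) ); -- accumulated signature
end entity;

architecture rtl of misr4 is
    signal r : std_logic_vector(3 downto 0) := (others => '0');
begin
    process (clk)
    begin
        if rising_edge(clk) then
            if rst = '1' then
                r <= (others => '0');
            else
                -- stage 0 takes the feedback (taps of x^4 + x^3 + 1) xor d(0);
                -- every other stage takes its predecessor xor its own input bit
                r(0) <= r(3) xor r(2) xor d(0);
                r(1) <= r(0) xor d(1);
                r(2) <= r(1) xor d(2);
                r(3) <= r(2) xor d(3);
            end if;
        end if;
    end process;
    sig <= r;
end architecture;

Feeding the same response stream into this register always reproduces the same signature, which is the property the golden-signature comparison relies on.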

4.7 Basic Architecture


The main challenges in VLSI are performance, cost, testing, area, reliability and power. The demand for portable computing devices and communication systems is increasing rapidly. These applications require low power dissipation in VLSI circuits. The power dissipation during test mode is 200% more than in normal mode.
Hence it is an important aspect to optimize power during testing, and power optimization is one of the main challenges. There are various factors that affect the cost of a chip, such as packaging, application and testing. In VLSI, according to a rule of thumb, roughly 50% of the total integrated circuit cost is due to testing.

During testing, two key challenges are:
• the cost of testing, which cannot be scaled, and
• the engineering effort for generating test vectors.
Fig. 1.6 Low-Power Linear Feedback Shift Register (LP-LFSR)

There are two main sources of power dissipation in digital circuits: static and dynamic power dissipation. Static power dissipation is mainly due to leakage current, and its contribution to total power dissipation is very small. Testing of integrated circuits (ICs) is of crucial importance to ensure a high level of quality in product functionality, in both commercially and privately produced products. The impact of testing affects areas of manufacturing as well as those involved in design. Given this range of design involvement, how best to achieve a high level of confidence in IC operation is a major concern.
This desire to attain a high quality level must be tempered with the cost and
time involved in this process. These two design considerations are at constant
odds. It is with both goals in mind (effectiveness vs. cost/time) that Built-In Self-Test (BIST) has become a major design consideration in Design-For-Testability (DFT) methods.
The self-testing using MISR and parallel SRSG (STUMPS) architecture is shown in the figure below. The STUMPS architecture was introduced by Bardell and McAnney in 1982 and 1984. It was originally applied at the board level, and subsequently at the chip level. It has the following attributes:
• a centralized and separated BIST architecture;
• multiple scan paths;
• no boundary scan.
The scan paths are driven in parallel by a PRPG, and the signature is generated in parallel from each scan path using a MISR. At the board level, each scan path corresponds to the scan path in a separate chip; at the chip level, each scan path is just one segment of the entire scan path of the chip.

The use of multiple scan paths leads to a significant reduction in test time. Since the scan paths may be of different lengths, the PRPG is run for K clock cycles to load up the scan paths, where K is the length of the longest scan path. For short scan paths, some of the data generated by the PRPG flows over into the MISR. When this approach is applied at the board level to chips designed with a scan path, the PRPG and the MISR can be combined into a special-purpose test chip, which must be added to the board.

4.9.3 WALSH ENCODER


In mathematical analysis, the set of Walsh functions forms an orthogonal basis of the square-integrable functions on the unit interval. The functions take the values −1 and +1 only, on sub-intervals defined by dyadic fractions. They are useful in electronics and other engineering applications.

The orthogonal Walsh functions are used to perform the Hadamard transform, which is very similar to the way the orthogonal sinusoids are used to perform the Fourier transform.

The Walsh functions are related to the Haar functions; both form a complete orthogonal system. The Haar function system may on the one hand be preferable because of its wavelet properties (e.g. localization); on the other hand, the Walsh functions are bounded (in fact of modulus 1 everywhere).

The order of the function is 2^s, where s is an integer, meaning that there are 2^s (time-) intervals in which the value is −1 or 1.

A list of the 2^s Walsh functions makes a Hadamard matrix. One way to define Walsh functions is to use the binary digit representations of reals and integers. For an integer k, consider the binary digit representation

k = k_0 + k_1·2 + ... + k_m·2^m, (2.1)

for some integer m, with each k_i equal to 0 or 1. Then, if k is the Gray code transform of j − 1, the j-th Walsh function at a point x, with 0 ≤ x < 1, is

wal_j(x) = (−1)^(k_0·x_0 + ... + k_m·x_m), (2.2)

if

x = x_0/2 + x_1/2^2 + x_2/2^3 + ...,

where again each x_i is 0 or 1 (and x_i is 1 only finitely often if x is a dyadic number).
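
As a concrete illustration (standard material, not specific to this report): for s = 2, the four Walsh functions, sampled on the four dyadic sub-intervals of the unit interval, form the 4 × 4 Hadamard matrix

H_4 = [ 1  1  1  1
        1 −1  1 −1
        1  1 −1 −1
        1 −1 −1  1 ],

whose rows are mutually orthogonal; this orthogonality is exactly the property the Walsh encoder exploits when spreading test patterns.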

Walsh functions can be interpreted as the characters of (Z_2)^N, the group of sequences over Z_2; using this viewpoint, several generalizations have been defined. Applications (in mathematics) can be found wherever digit representations are used, e.g. in the analysis of digital quasi-Monte Carlo methods.

The Walsh–Hadamard code is an example of a linear code over a binary alphabet that maps messages of length n to code words of length 2^n. It is a unique code in that each non-zero code word has a Hamming weight of exactly 2^(n−1). The generator matrix for the Walsh–Hadamard code of dimension n is given by

G = ( g_1  g_2  ...  g_(2^n) ), (2.3)

where g_i is the column vector corresponding to the binary representation of i − 1. In other words, the columns of G list all vectors of {0,1}^n in some lexicographic order. For example, the generator matrix for the Walsh–Hadamard code of dimension 3 is

G = [ 0 0 0 0 1 1 1 1
      0 0 1 1 0 0 1 1
      0 1 0 1 0 1 0 1 ]. (2.4)

As is possible for any linear code generated by a generator matrix, we encode a message x ∈ {0,1}^n, viewed as a row vector, by computing its code word using the vector-matrix product in the vector space over the finite field GF(2):

c = x · G.

Fig. 1.8 ISCAS-85 benchmark C17 model

This way, the matrix G defines a linear operator and we can write WH(x) = x · G. A more explicit, equivalent definition of WH uses the scalar product over GF(2): for any two strings x, y ∈ {0,1}^n,

⟨x, y⟩ = x_1·y_1 + ... + x_n·y_n (mod 2). (2.5)

Then the Walsh–Hadamard code is the function WH : {0,1}^n → {0,1}^(2^n) that maps every string x ∈ {0,1}^n into the string WH(x) satisfying WH(x)_y = ⟨x, y⟩ for every y ∈ {0,1}^n (where WH(x)_y denotes the y-th coordinate of WH(x), identifying {0,1}^n with {1, ..., 2^n} in some way).

Consider a set of m aggressors {A_1, A_2, ..., A_m} coupled with a victim V and a set of n fan-outs {F_1, F_2, ..., F_n} associated with them. The variable representing the victim V at time slot i is denoted X_V^i. For the crosstalk pulse problem, we assume the victim to be static at either logic 1 or 0. This is based on the fact that the effect of the usual sources of signal noise will be insignificant in the context of the worst-case noise produced by the combined effect of crosstalk and gate oxide loading.

The following conditions represent the constraints.

Constraint 1: The victim is static at its logic state for any two consecutive time slots i and i−1:

X_V^i − X_V^{i−1} = 0, ∀ i = S + 1, ..., T. (2.6)
Constraints for Maximal Crosstalk Noise: Consider an aggressor A_k that makes a transition (either 0→1 or 1→0) at time slot j within a time window of 2 with respect to the victim's current time slot i. Toward computing the cumulative coupling noise, we define a variable µ(A_k^{t_j}) such that

µ(A_k^{t_j}) = X_{A_k}^j ⊕ X_{A_k}^{j−1}, ∀ i, j : |j − i| ≤ 2, (2.7)

where X_{A_k}^j denotes the variable representing the aggressor A_k at time slot j. We also define a variable λ(A_k^j; V^i) to represent the condition that the final value of the aggressor A_k at time slot j and the victim V at time slot i are opposite:

λ(A_k^j; V^i) = X_{A_k}^j ⊕ X_V^i, ∀ i, j : |j − i| ≤ 2. (2.8)

To determine whether a given aggressor transition acts toward contributing to or compensating the cumulative coupling noise, we propose the following two constraints.

Constraint 2: If a given aggressor A_k switches at time slot j such that the final logic value of the aggressor at time slot j and the victim at time slot i are different, the aggressor is said to contribute to the cumulative coupling noise. This constraint is expressed with the aid of the variable φ(A_k^j; V^i) in the following way:

φ(A_k^j; V^i) = µ(A_k^{t_j}) · λ(A_k^j; V^i). (2.9)

Constraint 3: If a given aggressor A_k switches at time slot j such that the final logic value of the aggressor at time slot j and the victim at time slot i are the same, the aggressor is said to act toward compensating the cumulative coupling noise. This constraint is expressed with the aid of the complemented variable λ̄ in the following way:

Ψ(A_k^j; V^i) = µ(A_k^{t_j}) · λ̄(A_k^j; V^i). (2.10)

Formation of the Objective Function for the Combined Signal Noise: The cumulative noise on a given victim net V, due to capacitive cross-coupling with neighbouring aggressor nets as well as gate leakage loading from its fan-out gates, at a given time slot t_c is expressed as

i_N(V^{t_c}) = i_C(V^{t_c}) + i_GL(V^{t_c}). (2.11)

Therefore, the objective function is to maximize the cumulative noise over all the time slots S to T during which the victim V is active:

Maximize Obj = Σ_{t_c} i_N(V^{t_c}), ∀ t_c = S, ..., T.

Constraints for Fault Effect Propagation: To ensure the propagation of the fault effect from the output of the victim net to a primary output, we create a duplicate copy of the output logic cone of the victim V, which represents the "faulty" value of any given gate K in the output logic cone of the victim at time slot i. The D value of a gate K at time slot i is the XOR of its "good" value and its "faulty" value; a D value of 1 means that the fault effect propagates from the victim V through the gate K in its output logic cone:

D_K^i = X_K^{g,i} ⊕ X_K^{f,i}. (2.12)

Constraint 4: A D value at a gate output implies that at least one of the gate inputs in the output logic cone of the victim net V has a D value. Therefore, for a gate K at time slot i with inputs K_1 at time slot i_1 and K_2 at time slot i_2, the following implication formally expresses the previous constraint.
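
The implication itself does not survive in this copy of the text; from the surrounding prose it would take roughly the following form (a reconstruction, not the original equation):

D_K^i = 1 ⟹ (D_{K_1}^{i_1} = 1) ∨ (D_{K_2}^{i_2} = 1).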

CHAPTER V

SOFTWARE DESCRIPTION

5.1.1 VERY-LARGE-SCALE INTEGRATION

Very-large-scale integration (VLSI) is the process of creating integrated circuits by combining thousands of transistors into a single chip. VLSI began in the 1970s when complex semiconductor and communication technologies were being developed. The microprocessor is a VLSI device. The term is no longer as common as it once was, as chips have increased in complexity to billions of transistors.

The first semiconductor chips held two transistors each. Subsequent advances added more and more transistors, and, as a consequence, more individual
functions or systems were integrated over time. The first integrated circuits held
only a few devices, perhaps as many as ten diodes, transistors, resistors and
capacitors, making it possible to fabricate one or more logic gates on a single
device. Now known retrospectively as small-scale integration (SSI), improvements
in technique led to devices with hundreds of logic gates, known as medium-scale
integration (MSI). Further improvements led to large-scale integration (LSI), i.e.
systems with at least a thousand logic gates. Current technology has moved far past
this mark and today's microprocessors have many millions of gates and billions of
individual transistors.

5.1.2 VHDL (VHSIC Hardware Description Language):

VHDL (VHSIC hardware description language) is a hardware description language used in electronic design automation to describe digital and mixed-signal systems such as field-programmable gate arrays and integrated circuits.

VHDL was originally developed at the behest of the U.S. Department of Defense in order to document the behavior of the ASICs that supplier companies were including in equipment; that is to say, VHDL was developed as an alternative to huge, complex manuals which were subject to implementation-specific details.

The idea of being able to simulate this documentation was so obviously attractive that logic simulators were developed that could read the VHDL files. The next step was the development of logic synthesis tools that read the VHDL and output a definition of the physical implementation of the circuit.

Because the Department of Defense required as much of the syntax as possible to be based on Ada, in order to avoid re-inventing concepts that had already been thoroughly tested in the development of Ada, VHDL borrows heavily from the Ada programming language in both concepts and syntax.

The initial version of VHDL, designed to IEEE standard 1076-1987, included a wide range of data types, including numerical (integer and real), logical (bit and boolean), character and time, plus arrays of bit called bit_vector and of character called string.
A problem not solved by this edition, however, was "multi-valued logic", where a signal's drive strength (none, weak or strong) and unknown values are also considered. This required IEEE standard 1164, which defined the 9-value logic types: the scalar std_ulogic and its vector version std_ulogic_vector.
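
As a small illustration of the IEEE 1164 types (a generic idiom, not code from this project), the nine std_ulogic values include 'U' (uninitialized), 'X' (unknown), '0', '1', 'Z' (high impedance), 'W', 'L', 'H' and '-'. The 'Z' value enables tri-state modelling, which the two-valued bit type of VHDL-87 cannot express:

library ieee;
use ieee.std_logic_1164.all;

entity tristate_demo is
    port ( en       : in  std_logic;
           d        : in  std_logic;
           bus_line : out std_logic );  -- resolved type: multiple drivers allowed
end entity;

architecture rtl of tristate_demo is
begin
    -- drive the bus only when enabled; release it to 'Z' otherwise
    bus_line <= d when en = '1' else 'Z';
end architecture;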

5.1.3 DESIGN

VHDL is commonly used to write text models that describe a logic circuit. Such a model is processed by a synthesis program only if it is part of the logic design. A simulation program is used to test the logic design, using simulation models to represent the logic circuits that interface to the design. This collection of simulation models is commonly called a test bench.

VHDL has constructs to handle the parallelism inherent in hardware designs, but these constructs (processes) differ in syntax from the parallel constructs in Ada (tasks). Like Ada, VHDL is strongly typed and is not case sensitive. In order to directly represent operations which are common in hardware, there are many features of VHDL which are not found in Ada, such as an extended set of Boolean operators including nand and nor.

VHDL also allows arrays to be indexed in either ascending or descending direction; both conventions are used in hardware, whereas in Ada and most programming languages only ascending indexing is available.

VHDL has file input and output capabilities, and can be used as a general-purpose language for text processing, but files are more commonly used by a simulation test bench for stimulus or verification data. There are some VHDL compilers which build executable binaries. In this case, it might be possible to use VHDL to write a test bench to verify the functionality of the design using files on the host computer to define stimuli, to interact with the user, and to compare results with those expected. However, most designers leave this job to the simulator.

It is relatively easy for an inexperienced developer to produce code that simulates successfully but that cannot be synthesized into a real device, or is too large to be practical. One particular pitfall is the accidental production of transparent latches rather than D-type flip-flops as storage elements.

One can design hardware in a VHDL IDE (for FPGA implementation, such as Xilinx ISE, Altera Quartus, Synopsys Synplify or Mentor Graphics HDL Designer) to produce the RTL schematic of the desired circuit. After that, the generated schematic can be verified using simulation software which shows the waveforms of the inputs and outputs of the circuit after generating the appropriate test bench. To generate an appropriate test bench for a particular circuit or VHDL code, the inputs have to be defined correctly. For example, for a clock input, a loop process or an iterative statement is required.
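
A typical clock-generation process of the kind referred to above looks like the following sketch (a generic test bench idiom, with a 10 ns period chosen arbitrarily):

library ieee;
use ieee.std_logic_1164.all;

entity tb_clock is
end entity;

architecture sim of tb_clock is
    signal clk : std_logic := '0';
begin
    -- free-running clock: toggle forever with a 10 ns period
    clk_gen : process
    begin
        clk <= '0';
        wait for 5 ns;
        clk <= '1';
        wait for 5 ns;
    end process;

    -- the device under test would be instantiated here and driven by clk
end architecture;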

5.1.4 Xilinx

Xilinx ISE (Integrated Synthesis Environment) is a software tool produced by Xilinx for the synthesis and analysis of HDL designs, enabling the developer to synthesize ("compile") their designs, perform timing analysis, examine RTL diagrams, simulate a design's reaction to different stimuli, and configure the target device with the programmer.

The Xilinx ISE is a design environment for FPGA products from Xilinx; it is tightly coupled to the architecture of such chips and cannot be used with FPGA products from other vendors. The Xilinx ISE is primarily used for circuit synthesis and design, while the ModelSim logic simulator is used for system-level testing. Other components shipped with the Xilinx ISE include the Embedded Development Kit (EDK), a Software Development Kit (SDK) and ChipScope Pro.

5.1.5 User Interface

The primary user interface of the ISE is the Project Navigator, which
includes the design hierarchy (Sources), a source code editor (Workplace), an
output console (Transcript), and a processes tree (Processes).

The design hierarchy consists of design files (modules), whose dependencies are
interpreted by the ISE and displayed as a tree structure. For single-chip designs
there may be one main module, with other modules included by the main module,
similar to the main() subroutine in C++ programs. Design constraints are specified
in constraint files (such as the user constraints file, UCF), which include pin
configuration and mapping.
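
As a generic sketch of such a hierarchy (module and signal names here are
hypothetical), a top-level module instantiates a lower-level module much as a
C++ main() calls a subroutine:

library ieee;
use ieee.std_logic_1164.all;

entity top is
  port (clk : in  std_logic;
        q   : out std_logic);
end entity;

architecture rtl of top is
  -- Declaration of the lower-level module's interface
  component prpg_core is
    port (clk : in  std_logic;
          q   : out std_logic_vector(3 downto 0));
  end component;
  signal bits : std_logic_vector(3 downto 0);
begin
  -- The ISE reads this instantiation and places prpg_core
  -- under top in the Sources tree.
  u1 : prpg_core port map (clk => clk, q => bits);
  q <= bits(0);
end architecture;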

5.1.6 Simulation

System-level testing may be performed with the ModelSim logic simulator,
and such test programs must also be written in an HDL. Test bench
programs may include simulated input signal waveforms, or monitors which
observe and verify the outputs of the device under test.

ModelSim may be used to perform the following types of simulations:

• Logical verification, to ensure the module produces expected results
• Behavioural verification, to verify logical and timing issues
• Post-place & route simulation, to verify behaviour after placement of the
module within the reconfigurable logic of the FPGA

5.1.7 Synthesis

Xilinx's patented synthesis algorithms allow designs to run up to 30%
faster than competing programs and allow greater logic density, which reduces
project costs. Also, due to the increasing complexity of FPGA fabric, including
memory blocks and I/O blocks, more complex synthesis algorithms were
developed that separate unrelated modules into slices, reducing post-placement
errors.

IP cores are offered by Xilinx and other third-party vendors to implement
system-level functions such as digital signal processing (DSP), bus interfaces,
networking protocols, image processing, embedded processors, and peripherals.
Xilinx has been instrumental in shifting designs from ASIC-based implementation
to FPGA-based implementation.

CHAPTER VI
RESULTS AND DISCUSSION

6.1.1 3-WEIGHTED CODE RTL DIAGRAM

Fig. 1.9 shows the LFSR RTL diagram.

6.1.2 ALGORITHM

Register-transfer logic portrays the design as a three-state logic
description in which each signal takes the value 0, 1, or high impedance. From
this logic, automatic test pattern generation produces patterns under the
constraints discussed in the previous chapters.
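
As a minimal sketch of the kind of LFSR used for pattern generation, the 4-bit
register below shifts with feedback taken from a primitive polynomial
(x^4 + x^3 + 1); the width, taps, and seed are illustrative and not the
project's actual configuration.

library ieee;
use ieee.std_logic_1164.all;

entity lfsr4 is
  port (clk, rst : in  std_logic;
        q        : out std_logic_vector(3 downto 0));
end entity;

architecture rtl of lfsr4 is
  signal r : std_logic_vector(3 downto 0) := "0001";  -- non-zero seed
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        r <= "0001";  -- reseed to a known state
      else
        -- Shift left; feedback is the xor of taps 4 and 3,
        -- giving a maximal-length (15-state) sequence
        r <= r(2 downto 0) & (r(3) xor r(2));
      end if;
    end if;
  end process;
  q <= r;
end architecture;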

Fig. 2.0 shows the LFSR output.

6.1.3 RTL SCHEMATIC

Fig. 2.1 shows the RTL schematic.

Fig. 2.2 shows the PlanAhead diagram.

6.1.4 POWER ANALYZER OUTPUT

Fig. 2.3 shows the power analyzer output.

6.1.5 WALSH ENCODER OUTPUT

The Walsh transform matrix is formed as the dot product of a vector and the data.
The vector consists of 8 patterns and the data of 12 patterns, each pattern being 2 bits
wide. This gives a total of 12 × 8 = 96 patterns and 96 × 2 = 192 bits, and these 192
bits are generated as test patterns to reduce the crosstalk effect.
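
As a sketch of how an 8-chip Walsh code set can be derived (assuming the
standard Sylvester/Hadamard construction; this is not the project's encoder,
whose details appear only in the figures), entry (i, j) of the matrix is the
parity of the bitwise AND of i and j:

library ieee;
use ieee.std_logic_1164.all;

entity walsh8 is
  port (row  : in  natural range 0 to 7;   -- which Walsh code to emit
        code : out std_logic_vector(7 downto 0));
end entity;

architecture rtl of walsh8 is
  type matrix_t is array (0 to 7) of std_logic_vector(7 downto 0);

  -- Build H(8): bit (i, j) is '1' when the parity of (i AND j) is odd,
  -- corresponding to a -1 entry of the +/-1 Hadamard matrix.
  function hadamard return matrix_t is
    variable m : matrix_t;
    variable p : natural;
  begin
    for i in 0 to 7 loop
      for j in 0 to 7 loop
        p := 0;
        for b in 0 to 2 loop
          if (i / 2**b) mod 2 = 1 and (j / 2**b) mod 2 = 1 then
            p := p + 1;
          end if;
        end loop;
        if p mod 2 = 0 then
          m(i)(j) := '0';
        else
          m(i)(j) := '1';
        end if;
      end loop;
    end loop;
    return m;
  end function;

  constant H : matrix_t := hadamard;  -- evaluated once at elaboration
begin
  code <= H(row);
end architecture;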

Fig. 2.4 shows the Walsh encoder RTL diagram.

Fig. 2.5 shows the RTL diagram.

Fig. 2.6 shows the Walsh encoder.

Fig. 2.7 shows the power analyzer output.

6.1.6 SYNTHESIS REPORT


=========================================================================
HDL Synthesis Report

Macro Statistics
# Adders/Subtractors         : 3
  3-bit adder                : 1
  4-bit adder                : 1
  6-bit adder                : 1
# Registers                  : 7
  1-bit register             : 4
  3-bit register             : 1
  4-bit register             : 1
  6-bit register             : 1
# Multiplexers               : 1
  5-bit 2-to-1 multiplexer   : 1

=========================================================================
Advanced HDL Synthesis Report

Macro Statistics
# Counters                   : 3
  3-bit up counter           : 1
  4-bit up counter           : 1
  6-bit up counter           : 1
# Registers                  : 285
  Flip-Flops                 : 285
# Multiplexers               : 1
  5-bit 2-to-1 multiplexer   : 1

=========================================================================
*                         Low Level Synthesis                           *
=========================================================================
Final Register Report

Macro Statistics
# Registers                  : 298
  Flip-Flops                 : 298

=========================================================================
*                           Partition Report                            *
=========================================================================
Partition Implementation Status
-------------------------------
No Partitions were found in this design.
-------------------------------

=========================================================================
*                            Design Summary                             *
=========================================================================
Top Level Output File Name : testcube_cw.ngc

Primitive and Black Box Usage:
------------------------------
# BELS                               : 122
#   GND                              : 1
#   INV                              : 100
#   LUT2                             : 7
#   LUT3                             : 6
#   LUT4                             : 3
#   LUT5                             : 1
#   LUT6                             : 3
#   VCC                              : 1
# FlipFlops/Latches                  : 298
#   FDE                              : 126
#   FDRE                             : 162
#   FDSE                             : 10
# Shift Registers                    : 5
#   SRL16E                           : 5
# Clock Buffers                      : 1
#   BUFGP                            : 1
# IO Buffers                         : 209
#   IBUF                             : 1
#   OBUF                             : 208
# Others                             : 3
#   rs_encdr_v7_0_8dab21d957c3a16b   : 1
#   TIMESPEC                         : 1
#   xlpersistentdff                  : 1

Device utilization summary:
---------------------------
Selected Device : 6vsx315tff1156-3

Slice Logic Utilization:
  Number of Slice Registers:            298 out of 393600   0%
  Number of Slice LUTs:                 125 out of 196800   0%
    Number used as Logic:               120 out of 196800   0%
    Number used as Memory:                5 out of  81440   0%
      Number used as SRL:                 5

Slice Logic Distribution:
  Number of LUT Flip Flop pairs used:   301
    Number with an unused Flip Flop:      3 out of    301   0%
    Number with an unused LUT:          176 out of    301  58%
    Number of fully used LUT-FF pairs:  122 out of    301  40%
  Number of unique control sets:         11

IO Utilization:
  Number of IOs:                        211
  Number of bonded IOBs:                210 out of    600  35%

Specific Feature Utilization:
  Number of BUFG/BUFGCTRLs:               1 out of     32   3%

---------------------------
Partition Resource Summary:
---------------------------
No Partitions were found in this design.
---------------------------

=========================================================================
Timing Report

NOTE: THESE TIMING NUMBERS ARE ONLY A SYNTHESIS ESTIMATE.
      FOR ACCURATE TIMING INFORMATION PLEASE REFER TO THE TRACE REPORT
      GENERATED AFTER PLACE-and-ROUTE.

Clock Information:
------------------
-----------------------------------+------------------------+-------+
Clock Signal                       | Clock buffer(FF name)  | Load  |
-----------------------------------+------------------------+-------+
clk                                | BUFGP                  | 303   |
-----------------------------------+------------------------+-------+

Asynchronous Control Signals Information:
-----------------------------------------
No asynchronous control signals found in this design

Timing Summary:
---------------
Speed Grade: -3
Minimum period: 1.177ns (Maximum Frequency: 849.618MHz)
Minimum input arrival time before clock: 0.463ns
Maximum output required time after clock: 0.917ns
Maximum combinational path delay: 0.428ns

CHAPTER VII
CONCLUSION

A new LP BIST method has been proposed using weighted test-enable-signal-based
pseudorandom test pattern generation and LP deterministic BIST with reseeding. The new
method consists of two separate phases: LP weighted pseudorandom pattern generation and LP
deterministic BIST with reseeding. The first phase selects weights for the test-enable signals of
the scan chains in the activated subcircuits. A new procedure has been proposed to select the
primitive polynomial and the number of extra inputs injected at the LFSR. A new LP reseeding
scheme, which guarantees LP operation in all clock cycles, has been proposed to further reduce
the test data kept on-chip. Experimental results have demonstrated the performance of the
proposed method by comparison with a recent LP BIST method. The LP reseeding technique is
somewhat more complex to implement. This work can be extended to latch-on-capture transition
fault testing and small delay defect testing.

Experimental results demonstrate that the proposed ATPG method can achieve
good structural fault coverage with compact test programs on modern processors. The
proposed hybrid solution efficiently combines test compression with logic BIST, where
both techniques work synergistically to deliver high-quality test. It is therefore a very
attractive LP test scheme that allows test coverage, pattern counts, and toggling rates to
be traded off in a very flexible manner.
