
Exploiting a Workstation Network for Automatic Generation of Test Patterns for Digital Circuits

F. CORNO, P. PRINETTO, M. REBAUDENGO, M. SONZA REORDA


Politecnico di Torino, Dipartimento di Automatica e Informatica

A.R. MEO, E. VEILUVA


Centro Supercalcolo Piemonte, Torino

Abstract
This paper deals with the problem of generating test pattern sequences for large synchronous sequential circuits. A workstation network is exploited in order to tame the growth in CPU time requirements caused by the increase in circuit size and complexity. An approach based on Genetic Algorithms is adopted. Experimental results show that the CPU time required to reach a given fault coverage can be greatly reduced with respect to previous methods, and that the approach provides a good speed-up with respect to the mono-processor version.

1. Introduction
A significant amount of research activity has been devoted to automatic test pattern generation for digital circuits: efficient algorithms have been proposed for combinational circuits, while for synchronous sequential ones the problem is still open.

Generating test sequences for synchronous sequential circuits is a challenging problem for several reasons, which concur to make it harder than the corresponding problem for combinational circuits:

This work has been partially supported by CNR through Progetto Finalizzato Sistemi Informatici e Calcolo Parallelo.

- each fault must first be excited; this goal is generally reached when a given value is present not only on the Primary Inputs, but also on the Flip-Flop outputs (Pseudo-Primary Inputs); a certain number of clock cycles may thus be devoted to forcing the Flip-Flop outputs to the required value (Excitation Sequence);

- each fault must then be propagated to the Primary Outputs: this goal can seldom be reached in the same time frame which excites it; more often, the fault is first propagated to the Flip-Flop inputs (Pseudo-Primary Outputs), and a given number of clock cycles is required to propagate the fault effects from the Flip-Flops to the Primary Outputs (Propagation Sequence);

- the length of both the Excitation and the Propagation Sequence is not known a priori, nor can any meaningful upper bound be found for it;

- untestable faults require a large amount of computation to be recognised.

Several approaches for solving the problem have been proposed in the literature:

- the topological approach, based on extending the algorithms for combinational circuits to sequential ones, adopting the Huffman model [ScAu89] [GDNe91];

- the simulation-based approach, in which the results coming from a fault simulator are exploited [ACAgr88] [CCPS92] [HHIH92]; methods based on Genetic Algorithms [RHSP94] can be seen as an evolution of this approach;

- the symbolic approach [CHSo93], based on the knowledge of the boolean function implemented by the output and next-state logic of the circuit.

Several works described in the literature try to exploit the characteristics of parallel and distributed systems to solve the ATPG problem: in [RaBa92] a portable code is described, which implements a parallel/distributed version of a topological ATPG; however, the largest circuits are not considered there.

To the best of our knowledge, no method is known which is able to successfully handle the complete set of real-world circuits a test engineer has to face: some methods are completely unable to work with large circuits and cannot produce any test pattern, while others can only produce sequences with very low fault coverage figures. Commercial products are even worse, as they are often able to solve the problem only by introducing some modification in the circuit, e.g., transforming it according to the partial scan strategy. Therefore, circuits composed of a large number of gates and Flip-Flops still remain an open problem, unless some Design for Testability methodology is accepted.

In this paper we propose a new method based on Genetic Algorithms (GAs), which is able to exploit the computational power of a workstation network. When compared with the other approaches, the method appears attractive especially because it can handle circuits of any size and complexity. When the largest circuits are considered, experimental results show that the method has small memory requirements and is able to reach the same fault coverage figures with CPU times one order of magnitude lower than those of previous methods. No special hardware is required, as workstation networks nowadays exist in many design centers. Moreover, Genetic Algorithms allow the user to easily trade off result accuracy against CPU time. Finally, the method is quite flexible, as it can be extended to multi-level circuit descriptions and to fault models different from the stuck-at one.

The structure of the paper is the following: Section 2 briefly outlines what Genetic Algorithms are; Section 3 is devoted to the description of our mono-processor algorithm, while the distributed one is presented in Section 4. Section 5 reports the experimental results we gathered and Section 6 draws some conclusions.

2. Genetic Algorithms
Genetic Algorithms [Holl75] have been extensively investigated in the last two decades as a possible approach to many search and optimisation problems.

GAs belong to the class of evolutionary algorithms; they are based on a mechanism which mimics the way nature improves the characteristics of living beings. Each solution of the problem to be solved (individual) is represented as a string (chromosome) of elements (genes). Each gene assumes a value known as allele. A fitness value is associated to each individual, based on the value returned by an evaluation function. The fitness value measures how close the individual is to the optimum solution. A set of individuals composes a population; the population evolves from one generation to the following through the insertion of new individuals and the deletion of some old ones. The process starts with an initial population of individuals created in some way, e.g., randomly.

New individuals are generated through reproduction, which is based on the following two mechanisms:

- cross-over: two existing individuals are selected and their chromosomes are combined in order to obtain a new individual;

- mutation: the characteristics of a selected individual are randomly changed.

Individual selection for the cross-over operator is generally performed through techniques (such as the roulette-wheel technique) which ensure that the selection probability of each individual is proportional to its fitness.
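The roulette-wheel selection mentioned above can be sketched as follows (an illustrative fragment, not the paper's actual code; the function name and the caller-supplied `spin` value, drawn uniformly in [0, total fitness), are our own assumptions):

```c
/* Roulette-wheel selection: individual i is chosen with probability
   proportional to fitness[i].  'spin' must be a random value drawn
   uniformly in [0, sum of all fitness values). */
int roulette_select(const double *fitness, int n, double spin)
{
    double acc = 0.0;
    for (int i = 0; i < n; i++) {
        acc += fitness[i];       /* cumulative fitness up to i */
        if (spin < acc)
            return i;
    }
    return n - 1;                /* guard against rounding errors */
}
```

With fitness values {1, 2, 3}, for instance, the three individuals are selected with probability 1/6, 2/6 and 3/6 respectively.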

Reproduction causes the initial population to evolve and improve: the new individuals are inserted in the population in place of some other individuals, e.g., the ones with the lowest fitness (elitism). In other cases the new population completely replaces the old one (regeneration).

The process is stopped when either the desired result is reached or a maximum CPU time has elapsed.

3. Our ATPG Algorithm


3.1. The Overall Algorithm
The test generation process is composed of three phases: in the first, a fault is selected among the undetected ones; in the second, a test sequence for the fault is generated; in the third, the test sequence is fault simulated, looking for additionally detected faults. The three steps are repeated until all the faults have been processed, or for a predefined maximum number of times M. As the fault simulation problem can be solved through well-known algorithms, we will concentrate on the first and second phases. The latter can be seen as a search process in the space of the input sequences and can be solved by resorting to a Genetic Algorithm. The pseudo-code describing the behaviour of the whole algorithm is shown in Fig. 1.
for (i = 0; i < M; i++) {
    1. select the target fault;
    2. generate a test sequence;
    3. fault simulate the sequence;
}

Fig. 1: Pseudo-code for the whole algorithm.

3.2. The Genetic Algorithm for ATPG


Generating a sequence able to detect a fault can be seen as a search problem in the space of all the sequences for the circuit. The problem can be solved through a Genetic Algorithm, provided that a suitable encoding for the generic solution, and an effective evaluation function are found. In the following the solutions we adopted for the two points are described.

As far as the encoding is concerned, an individual corresponds to a sequence composed of a variable number of input vectors following a reset command. A population is a set of individuals, each corresponding to a sequence.

Finding an effective evaluation function is a much more complex task: while it is easy to check whether a given sequence is a solution of the problem (i.e., it detects the fault), understanding how far it is from the goal is much harder. A heuristic parameter can be used, corresponding to the number of Flip-Flops having a different value in the good and faulty circuits: increasing this number is normally the right way towards detecting the fault, provided that the fault has already been excited and propagated to the Pseudo-Primary Outputs.

An evaluation function can thus be defined, which is based on the parameter above, and is able to rank different sequences according to their distance from being testing sequences.

Note that no parameter has been identified which says how far a sequence is from exciting the fault and propagating its effects to the Pseudo-Primary Outputs. For this reason, a fault can be elected as the target of the ATPG process only when at least one sequence is known which is able to reach this goal.

3.2.1. Phase 1: Choosing the Target Fault

The main goal of this phase is to determine the target fault: a number of sequences are randomly generated, until the effects of some fault are propagated to the Flip-Flops. The sequences are randomly generated in groups of N, and are all composed of L vectors. Each sequence is fault simulated for all the not yet detected faults. For each input vector vjk belonging to a sequence sj and for each fault fi the following function is computed, saying how far the fault is from being detected:

h(vjk, fi) = nFlip-Flops(vjk, fi)

where nFlip-Flops is the heuristic parameter introduced before, i.e., the number of Flip-Flops whose value is different in the good and faulty circuits for fault fi.
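As an illustration (our own sketch, not the authors' code), if the Flip-Flop values of the good and faulty circuits are packed one bit per Flip-Flop into a machine word, h reduces to a population count of their bitwise difference:

```c
#include <stdint.h>

/* h(v, f): number of Flip-Flops whose value differs between the good
   and the faulty circuit, for states packed one bit per Flip-Flop. */
int h_distance(uint64_t good_state, uint64_t faulty_state)
{
    uint64_t diff = good_state ^ faulty_state;  /* differing positions */
    int count = 0;
    while (diff) {
        diff &= diff - 1;   /* clear the lowest set bit */
        count++;
    }
    return count;
}
```

For example, states 1010 and 0110 differ on two Flip-Flops, so h = 2.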

The target fault is the fault scoring the maximum value of the function h for some vector vjk, provided that h is greater than a given threshold T; if no fault exceeds this threshold, a new set of N random sequences is generated with an increased length L. All the sequences are fault simulated, so that if some fault is detected during this phase, it is dropped from the fault list, and the corresponding sequence is inserted in the final set of test sequences.
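The target-selection rule above can be sketched as follows (an illustrative fragment; the function name, the precomputed per-fault best scores, and the -1 "retry with longer sequences" convention are our assumptions):

```c
/* Select the target fault: best_h[i] is the highest h score observed
   for fault i over all generated vectors.  Return the index of the
   fault with the maximum score, or -1 if no fault exceeds the
   threshold T (the caller then generates a new set of longer random
   sequences and tries again). */
int select_target(const int *best_h, int n_faults, int threshold)
{
    int target = -1;
    int best = threshold;
    for (int i = 0; i < n_faults; i++) {
        if (best_h[i] > best) {
            best = best_h[i];
            target = i;
        }
    }
    return target;
}
```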

Whenever the first phase is activated, the length of the test sequence generated in the last execution of phase 2 is taken as the starting value for L.

3.2.2. Phase 2: Generating a Test Sequence for the Target Fault

The second phase is based on a Genetic Algorithm: each individual is a sequence, and N sequences constitute a population. The initial population is the set of sequences coming from the first phase, and includes the sequence which excites the target fault. To each sequence sj a fitness function H(sj) is associated, which corresponds to the maximum value of the same function h used in the first phase, computed for the target fault ft on all the Lv vectors belonging to the sequence:

H(sj) = max { h(vjk, ft) : vector vjk belonging to sj }
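Given the per-vector scores h(vjk, ft) returned by the fault simulator, the fitness H(sj) is simply their maximum; a minimal sketch (function and parameter names are ours):

```c
/* H(sj): maximum of the precomputed h(vjk, ft) values over the
   n_vectors vectors of sequence sj. */
int sequence_fitness(const int *h_values, int n_vectors)
{
    int best = 0;
    for (int k = 0; k < n_vectors; k++)
        if (h_values[k] > best)
            best = h_values[k];
    return best;
}
```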

From each population, a new one is computed through the reproduction process, which is based on the cross-over and mutation operators:

- the cross-over operator (Fig. 2) selects two parent individuals (i.e., test sequences) from the current population, randomly generates two numbers x1 and x2, each belonging to the range 1 to mi (where mi is the length of the i-th parent sequence), and builds a new individual composed of the first x1 vectors of the first sequence and the last x2 vectors of the second sequence;

- the mutation operator randomly selects a newly generated test sequence and complements a single bit in one of its vectors. This operator is activated with probability pm.
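The two operators can be sketched as follows (an illustrative fragment under our own assumptions: a fixed maximum sequence length, input vectors packed into 32-bit words, and cut points chosen by the caller rather than drawn at random):

```c
#include <stdint.h>
#include <string.h>

#define MAX_LEN 64

/* A sequence: a variable number of input vectors, each packed into a
   32-bit word (one bit per primary input). */
typedef struct { int len; uint32_t vec[MAX_LEN]; } Sequence;

/* Cross-over: the child takes the first x1 vectors of p1 and the last
   x2 vectors of p2 (1 <= x1 <= p1->len, 1 <= x2 <= p2->len), so its
   length x1 + x2 generally differs from both parents' lengths. */
Sequence crossover(const Sequence *p1, const Sequence *p2, int x1, int x2)
{
    Sequence child;
    child.len = x1 + x2;
    memcpy(child.vec, p1->vec, x1 * sizeof(uint32_t));
    memcpy(child.vec + x1, p2->vec + (p2->len - x2), x2 * sizeof(uint32_t));
    return child;
}

/* Mutation: complement a single bit of one vector of the sequence. */
void mutate(Sequence *s, int vec_idx, int bit)
{
    s->vec[vec_idx] ^= (uint32_t)1 << bit;
}
```

Note that, because x1 and x2 vary, cross-over lets the sequence length itself evolve, which matters since the required length is not known a priori.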

Each time an individual must be selected by the cross-over operator, a procedure is activated which returns an individual from the current population; the probability of each individual being selected is proportional to its fitness, so that the best sequences are more likely than the worst ones to provide vectors for the new individuals.

A given number R of new individuals is created at each generation: these are inserted in the new population in place of the worst R individuals of the old one. The best individuals are therefore guaranteed to survive from one generation to the next.

[Figure omitted in this version: two parent sequences are cut at points x1 and x2, and the new individual is built from the first x1 vectors of Parent 1 followed by the last x2 vectors of Parent 2.]

Fig. 2: The Cross-over Operator.

Once a new population has been generated, the fitness function is computed again for the target fault and for each sequence. The process repeats until one of the following conditions is met:

- the target fault is detected; the corresponding sequence is then inserted in the final set of test sequences;

- a given maximum number max_gen of generations has been produced without detecting the fault: in this case the fault is marked as aborted. Aborted faults are no longer eligible as target faults.

The pseudo-code for the second phase is reported in Fig. 3.

1.  create an initial population P0 = {x1^0, ..., xN^0}
2.  for each individual in P0 compute the fitness function F(xk^0)
3.  i = 0
4.  A = Pi
5.  for s = 1 to R:
        select 2 individuals in Pi;
        apply the cross-over operator, producing a new individual xs^(i+1)
6.  A = A U {xs^(i+1)}
7.  for s = 1 to T:
        select one individual in A;
        apply the mutation operator
8.  for each individual in A compute the fitness function F(xk^(i+1))
9.  Pi+1 is composed of the best N individuals of A
10. i = i + 1
11. if (the target fault has been detected) or (i = max_gen) then stop; else goto 4

Fig. 3: Pseudo-Code for the second phase.

4. The Distributed Algorithm


The mono-processor version of our algorithm spends most of its time on the fault simulation of the generated sequences. An effective approach for exploiting the large computational power provided by a workstation network is therefore to distribute the fault simulation task among the available machines. A master process is in charge of executing the kernel of the algorithm, while a slave process can be activated on a remote workstation each time the fault simulation of a sequence is required. Several fault simulation processes thus work in parallel in many phases of the algorithm, while communication and synchronization overheads are kept very low.

The master process performs the following tasks:

- it performs all the I/O operations, both towards the user and towards the file system, for netlist and fault list reading and for writing the generated test sequences;

- it spawns several slave processes on the available machines, to which a copy of the internal format of the netlist and fault list is initially distributed;

- it executes the algorithm, cycling through the three phases: as soon as a sequence has to be fault simulated, a slave process is activated through a proper message.

Fault simulation can be required for several reasons:

- in phase 1, it is necessary to evaluate whether each untested fault has been detected and, if not, to how many Flip-Flops its effects have been propagated;

- in phase 2, just the target faults must be considered, and for each one the fitness function must be computed;

- in phase 3, all the untested faults must be considered, but only the information about their possible detection must be computed.

The master stores all the global data structures, such as the ones containing the current population and the fitness of each individual. The latter is updated with the data coming from the slaves and is accessed each time new individuals must be created through the cross-over operator. A critical problem in phases 1 and 3 is how to update the local fault lists with the detection information coming from other slaves: continuous updating is expensive, but it can save time, as simulating faults already detected by other slaves is useless work. A trade-off has been reached by transmitting to each slave, together with the sequence to be simulated, the list of faults detected by the other slaves since the last activation of the slave itself.

Several synchronization points are placed within the algorithm: at the end of phase 1; before starting the creation of a new generation in phase 2; at the end of phase 3. When the master reaches these points, it does not proceed until all the slave processes have finished their work, thus guaranteeing the global correctness of the whole algorithm.

Load balancing among the slaves has been achieved through a dynamic distribution of work: when a set of sequences is ready to be simulated, the master distributes them among the slaves and starts waiting for their answers. As soon as an answer comes, the master gets the information, updates its data structures and sends a new sequence to the slave, which is thus idle only for a short time. As will be shown in the following Section, the mechanism works well up to a given number of slaves, beyond which the master becomes a bottleneck and the speed-up does not increase any longer.

5. Experimental Results
We implemented the distributed algorithm in the C language, using the PVM [GBDJ93] library for message passing and process spawning. As a consequence, our tool is independent of the communication system used by the workstation network, and is portable over a wide range of architectures, not limited to distributed systems but including also several parallel machines. For the experiments whose results are reported here, the code has been run on a workstation network composed of 8 DEC 3000/500 machines. The standard set of synchronous sequential circuits proposed at the ISCAS89 conference [BBKo89] has been considered. Tab. 1 reports the values of the parameters introduced during the algorithm's description; their optimal values have been found experimentally.

The cost of our algorithm is mainly due to the fault simulation task: it is therefore essential to use an efficient fault simulation algorithm. The single-fault-propagation, parallel-fault technique [NTPa92] represents the state-of-the-art approach to this problem, and it has been adopted in our implementation. To fully exploit the power of the parallel-fault approach in the second phase, where only the target fault must be simulated, we modified our algorithm so that 32 faults (instead of just one) are selected at the end of the first phase. They are then used as target faults in the second phase, whose goal is thus to generate a set of test sequences able to detect them.
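The idea behind the 32-fault grouping can be illustrated as follows (our own sketch of the parallel-fault principle, not the simulator's actual code): each bit position of a 32-bit machine word carries the signal value seen by one faulty machine, so a single word operation evaluates a gate for 32 faults at once.

```c
#include <stdint.h>

/* Bit-parallel evaluation of a 2-input AND gate: bit m of each word
   holds the signal value seen by faulty machine m. */
uint32_t and_gate(uint32_t a, uint32_t b) { return a & b; }

/* Inject a stuck-at fault on a line, for faulty machine m only:
   stuck-at-1 forces bit m to 1, stuck-at-0 forces it to 0. */
uint32_t inject_sa1(uint32_t line, int m) { return line |  ((uint32_t)1 << m); }
uint32_t inject_sa0(uint32_t line, int m) { return line & ~((uint32_t)1 << m); }
```

After simulation, comparing each word against the good-machine value reveals, per bit position, which of the 32 faults produced a visible difference.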


Tab. 2 and 3 show the results obtained. Only the largest ISCAS89 circuits have been considered, for which other methods, like VERITAS [CHSo93] and STEED [GDNe91], are not always able to produce any test sequence. The following points must be noted:

- changing M allows the user to easily trade off fault coverage against the required CPU time; the value of M has been chosen so that the CPU time remains acceptable;

- memory occupation is very small, as it mainly corresponds to the amount required by fault simulation;

- the CPU times are significantly lower than those of the other methods able to handle the biggest circuits, like [RHSP94], [HHIH92] and [ACAgr88];

- the method shows good scalability (as can be seen in Fig. 4), provided that the number of slaves is less than 10; for greater values, the master becomes a bottleneck and the slaves are too often idle.

Parameter   Description                                             Value
N           Population Size                                         20-30
R           Number of new individuals for each generation           0.8 N
Lin         Initial Sequence Length                                 4
max_gen     Maximum Number of Generations before aborting a fault   20
M           Number of Iterations of the whole process               10-20
pm          Activation Probability of the Mutation Operator         0.03

Tab. 1: Parameter Values.

Circuit      # Faults                        Fault Coverage
             Total    Detected   Aborted         %
S1196         1238      1146        73         92.57
S1238         1326      1219        66         91.93
S1423         1322       837       326         63.31
S1488         1471       978         6         66.49
S1494         1481       978         6         66.04
S5378         4241      2914       168         68.71
S9234         5968       370       320          6.20
S13207        8471      1800       288         21.25
S15850       11383       652       469          5.73

Tab. 2: Fault Coverage Results.


Circuit        # Processors
               2       4       6       8
S1196        123      55      31      25
S1238         71      25      19      17
S1423        369     141     101      82
S1488         86      41      25      21
S1494         89      43      24      21
S5378        666     240     162     124
S9234       2451     918     618     421
S13207      2743     987     706     477
S15850      1730     794     403     350

Tab. 3: CPU times [s].

Fig. 4: Speed-up.

6. Conclusions
Existing methods for automatic generation of test sequences for synchronous sequential circuits often fail when dealing with large circuits, due to the unacceptable memory size and CPU time they require. The paper presents a new algorithm, based on Genetic Algorithms, which is especially suited for such large circuits and is able to run on a workstation network. The problem of generating a test sequence is seen as a search problem in the space of all the possible test sequences; state-of-the-art high performance fault simulation techniques are exploited.


The experimental results show that the method is able to produce test sequences with a high fault coverage for all the largest ISCAS89 circuits; the requirements in terms of memory and CPU time are very small when compared with those of the other methods. The user can trade off result accuracy against CPU time by simply changing the value of a parameter. The approach is flexible, as it can easily handle mixed-level circuit descriptions and fault models different from the stuck-at one, provided that a suitable fault simulator is available and an appropriate fitness function is used. The computational power of the workstation network is well exploited, and a good speed-up is obtained with respect to the mono-processor version.

7. Acknowledgements
The authors wish to thank Prof. J.L. DeKeyser of the Université des Sciences et Technologies de Lille (USTL), France, for giving us access to the workstation network of his institution.

8. References
[ACAgr88] V.D. Agrawal, K.-T. Cheng, P. Agrawal: "CONTEST: A Concurrent Test Generator for Sequential Circuits," IEEE/ACM Design Automation Conference, Anaheim (CA), USA, June 1988, pp. 84-89

[BBKo89] F. Brglez, D. Bryant, K. Kozminski: "Combinational profiles of sequential benchmark circuits," ISCAS-89: IEEE Int. Symposium on Circuits And Systems, Portland (OR), USA, May 1989, pp. 1929-1934

[CCPS92] P. Camurati, F. Corno, P. Prinetto, M. Sonza Reorda: "A simulation-based approach to test pattern generation for synchronous circuits," VTS'92: 10th IEEE VLSI Test Symposium, Atlantic City (NJ), USA, April 1992, pp. 263-267

[CHSo93] H. Cho, G.D. Hachtel, F. Somenzi: "Redundancy Identification/Removal and Test Generation for Sequential Circuits Using Implicit State Enumeration," IEEE Transactions on Computer-Aided Design, Vol. 12, No. 7, July 1993, pp. 935-945

[GBDJ93] A. Geist, A. Beguelin, J. Dongarra, W. Jiang, R. Manchek, V. Sunderam: "PVM 3 User's Guide and Reference Manual," Oak Ridge National Laboratory, Internal Report ORNL/TM-12187, May 1993

[GDNe91] A. Ghosh, S. Devadas, A.R. Newton: "Test Generation and Verification for Highly Sequential Circuits," IEEE Transactions on Computer-Aided Design, Vol. 10, No. 5, May 1991, pp. 652-667

[HHIH92] K. Hatayama, K. Hikone, M. Ikeda, T. Hayashi: "Sequential Test Generation based on Real-Valued Logic Simulation," ITC'92: IEEE International Test Conference, Baltimore (MD), USA, September 1992, pp. 41-48

[Holl75] J.H. Holland: "Adaptation in Natural and Artificial Systems," University of Michigan Press, Ann Arbor, 1975

[NTPa92] T.M. Niermann, W.-T. Cheng, J.H. Patel: "PROOFS: A Fast, Memory-Efficient Sequential Circuit Fault Simulator," IEEE Transactions on Computer-Aided Design, Vol. 11, No. 2, February 1992, pp. 198-207

[RaBa92] B. Ramkumar, P. Banerjee: "Portable Parallel Test Generation for Sequential Circuits," ICCAD'92: IEEE International Conference on Computer-Aided Design, Santa Clara (CA), USA, November 1992, pp. 220-223

[RHSP94] E.M. Rudnick, J.G. Holm, D.G. Saab, J.H. Patel: "Application of Simple Genetic Algorithms to Sequential Circuit Test Generation," IEEE European Design & Test Conference, Paris (F), February 1994, pp. 40-45

[ScAu89] M.H. Schulz, E. Auth: "ESSENTIAL: an efficient self-learning test pattern generation algorithm for sequential circuits," ITC'89: IEEE International Test Conference, Washington (DC), USA, September 1989, pp. 28-37

