
A Tutorial on STDF Fail Datalog Standard

Ajay Khoche1, Phil Burlison1, John Rowe2, Glenn Plowman3
1 - Verigy Pte Ltd., Cupertino, CA, USA; 2 - Teradyne Inc.; 3 - Qualcomm
Abstract
Advances in technology are making it imperative to collect detailed structural IC fail data during manufacturing test to improve yield. However, there is currently no standard format for communicating and storing such structural fail data efficiently. This leads to ad-hoc, tool-specific solutions, which do not offer the interoperability required in a typical multi-tool, multi-vendor customer environment. These ad-hoc solutions result in unnecessary investment in point-to-point interface solutions that are ultimately still not integrated with the traditional data collection into a unified yield analysis data format. Expanding an established datalogging standard to accommodate the new requirements solves these issues. Standard Test Data Format (STDF) is the predominant format used today for traditional failure datalog storage, but in its current form it falls far short in handling the new high-volume structural failures for yield learning. A group of more than 20 companies from the ATE, EDA, semiconductor and yield management segments has been working on a new enhanced STDF standard that addresses the new requirements. This paper provides an overview of the new enhanced standard.

Introduction

Manufacturing yield is a very important factor in the production of a semiconductor product. It is important in all phases of production: first silicon, volume ramp and normal production. Historically the yield fallout was mainly caused by random defects and process problems, so traditional yield improvement strategies included clean room improvements and process improvements. Correspondingly, the data collection from ATE for yield monitoring was focused on parametric and gross pass/fail information. However, advances in technology have created a situation where yield loss is now dominated by systematic design and process interactions, which are hard to understand before the silicon implementation. Understanding these interactions requires fail data collection in volume manufacturing. Moreover, the design- and pattern-dependent nature of these issues requires fail data to be collected on the internal nodes using structural test techniques. Therefore the industry is moving toward a Volume Diagnostics flow, where fail data on internal nodes is collected during volume manufacturing and processed by the downstream diagnostic tools to identify the failing structures. The information on the failing structures is statistically analyzed to identify the yield improvement opportunities.

Figure 1 shows a flow for the volume diagnostics mentioned above. In this flow the test patterns generated by ATPG tools from EDA vendors are applied to the production device by the ATE of choice. The ATE then collects the fail information in failure files.

Figure 1: Volume Diagnostic Flow

Figure 2: Multi-Vendor Multi-tool Environment

The failure information in these failure files is then used by the diagnosis tools to identify the failing structure, e.g., failing gate, failing via, failing net, etc. Currently the format in which these failure files are written depends on the ATE at hand. Each ATE vendor writes this information in a custom format. To complicate matters further, each diagnosis tool also has its own custom input format. So a multi-ATE, multi-diagnosis-tool environment for a customer looks like the one shown in Figure 2. A customer in this situation has to create an IT infrastructure to store information in multiple formats, one for each ATE, and then translate this information into the input format of each diagnosis tool. The responsibility for developing and supporting the translation tools can be with the EDA

Adv.Ind.Prac. 2.1
INTERNATIONAL TEST CONFERENCE
1-4244-4203-0/08/$20.00 © 2008 IEEE

vendors or with the ATE vendors or the end users. The situation in the EDA and the ATE cases for a multi-tool, multi-vendor environment is shown in Figure 3 and Figure 4, respectively. In either case it is an extra overhead for the party responsible for developing and maintaining the conversion tools, as well as for the end user who has to deal with the conversion tools mesh (or, better put, mess).

Figure 3: Mess with EDA tool translation of ATE formats

Figure 4: Mess with EDA tool-specific format translation by ATE

The overhead associated with the development and support of conversion tools, as well as with the IT infrastructures, can be eliminated if all the ATE could create failure files in a standard format and all the diagnosis tools could read the same standard format, as shown in Figure 5.

Figure 5: Dataflow with a standard format

In recognition of the need for the standard mentioned above, a working group was formed to standardize the fail datalogs by extending STDF V4 to enable efficient volume diagnostics. STDF V4 was selected as the base format due to its widespread use in the conventional (functional and parametric) datalogs.

In addition, any format that is used as the standard must meet the following requirements to create the streamlined environment mentioned above and to enable an efficient volume diagnostics flow:

* Data Sufficiency: Meet the needs of downstream tools
* Data Efficiency: Low data volume and no-/low-throughput impact
* Data Integrity: Provide data consistency checking
* Data Integration: Must integrate well with other existing datalog formats

The rest of the paper is organized as follows: Section 2: overview of the STDF V4 structure; Section 3: gaps in the current format to support volume diagnosis applications for yield learning; Section 4: scope and objectives of the standardization effort; Section 5: data model for the new standard; Section 6: details of the Scan fail data formats; Section 7: details of the Memory fail data formats; Section 8: the validation tool and strategies for readers and writers for the new standard; Section 9: status and future plans for the standardization groups; Section 10: conclusion.

Overview of STDF

STDF provides a common platform to allow tester, database management systems and data analysis software to store and communicate test data in a general and flexible format. STDF is a simple, binary format where the data is organized in the form of a partially ordered set of records. Each record consists of a header section and an information section. The header section contains the size of the record, the record type (which indicates the general category of the information) and the record sub-type (which indicates the specific information in that category). The record types are organized along the manufacturing entities, e.g., lot, wafer, part, test, test execution, etc. In addition, the format also supports two other types of data: the STDF file information data and the generic type of data. The records in the STDF file information category contain the information on the STDF file itself and any updates being made to it. The generic type records store any general information. The maximum size of any record is 64 KB

and the actual size could be less than the maximum, depending on the data being stored in the record. The actual size of a record is indicated in the size field of the header. Table 1 shows the categories of data and the record types that are supported in STDF V4.
Table 1: STDF Record Types and Subtypes

Rec. Type  Rec. Sub-type  Mnemonic  Description
0                                   Information about the STDF file
0          10             FAR       File attribute record
0          20             ATR       Audit trail record
1                                   Data collected on a per lot basis
1          10             MIR       Master information record
1          20             MRR       Master results record
1          30             PCR       Part count record
1          40             HBR       Hardware BIN record
1          50             SBR       Software BIN record
1          60             PMR       Pin map record
1          62             PGR       Pin group record
1          63             PLR       Pin list record
1          70             RDR       Retest data record
1          80             SDR       Site description record
2                                   Data collected per wafer
2          10             WIR       Wafer information record
2          20             WRR       Wafer result record
2          30             WCR       Wafer configuration record
5                                   Data collected on a per part basis
5          10             PIR       Part information record
5          20             PRR       Part result record
10                                  Data collected per test in the test program
10         30             TSR       Test synopsis record
15                                  Data collected per test execution
15         10             PTR       Parametric test record
15         15             MPR       Multiple-result parametric record
15         20             FTR       Functional test record
20                                  Data collected per program segment
20         10             BPS       Begin program section record
20         20             EPS       End program section record
50                                  Generic data
50         10             GDR       Generic data record
50         30             DTR       Datalog text record

STDF also supports various data types for describing data in each record. Table 2 shows the data types supported in STDF.

Table 2: Data Type Codes in STDF

Code           Description
C*n            Character string of length n
U*1, U*2, U*4  One-, two- and four-byte unsigned integer
I*1, I*2, I*4  One-, two- and four-byte signed integer
R*4, R*8       Four- and eight-byte floating point number
V*n            Variable data type field; data type specified in the first byte (max 255)
B*n            Variable-length bit-encoded field; first byte denotes the count; first data bit in LSB of second byte
D*n            Variable-length bit-encoded field; first two bytes specify the count
N*1            Unsigned integer stored in a nibble

STDF V4 and Gaps

As shown in the previous section, the STDF records are classified according to the manufacturing entities, e.g., lot, wafer, part, test, test execution, etc. Also note that the data gathered per test execution is limited to functional and parametric tests only. Thus no predefined records exist in the standard for scan and memory fail datalogs. In addition, there are no records to bring the design information forward from design into manufacturing that can be carried around the design-to-test-to-design loop for yield learning through volume diagnosis. The data types of STDF V4 also fall short of some of the requirements of volume diagnosis; e.g., cycle numbers supported by ATE could exceed the maximum value that can be represented by the existing U*4 data type.

STDF Fail Datalog Standardization

The standard described in this paper addresses these gaps and enhances STDF to meet the volume diagnosis requirements efficiently. The new standard is called "STDF V4-2007" to indicate the enhancements made to V4 in year 2007. The new standard is designed to support:

* Scan and Memory (embedded and standalone) fail datalog for volume diagnosis
* Fail logging at wafer as well as package test
* Capture of millions of failures (what one would expect in year 2010)
* STDF V4 as the base format (the new standard can co-exist with STDF V4)
* The design-test-design flow

The standard is developed as two separate efforts/groups, for scan and memory respectively. Both the efforts/groups


are organized as open industry work groups where anyone can join and contribute. The working groups have representatives from companies across the various segments of the industry:

* ATE: Verigy, Teradyne, Advantest, Credence, LTX (now part of Credence)
* EDA: Mentor, Cadence, Synopsys
* YMS: PDF, Yield Dynamics (now MKS)
* IDM/Fabless: Qualcomm, TI, IBM, Infineon, Freescale, ST Microelectronics
* Others: Nano ISI, Soto Technologies, Open Source Consortium

5.1 Data Model
A data model for both the scan and memory formats was derived from the analysis of the application flow shown in Figure 6. The conceptual dataflow diagram shows the dataflow in the volume diagnostic application as well as the transformations that the data may undergo. As the data flows through the loop, it is important to keep track of the changes to the extent that the design/diagnosis tools can correctly relate the fail datalog to the original files from which the test patterns were generated, to perform correct diagnosis. Any changes to the test data in this loop must be communicated to the analysis tools. In addition to supporting the test data and its corresponding fail data, it is also important to collect information on the conditions and equipment used to perform the test when the fail data was collected, to enable volume fail analysis over wafers/lots.

Figure 6: Application information flow

Figure 7 depicts the data model for the new standard. It shows the various classes of data that need to be supported. These information objects have been mapped onto STDF records (existing and new). Section 6 will provide the details of the STDF format for Scan and Section 7 for the Memory format.

Figure 7: Data model for the standard

5.2 Data Type Enhancement
The applications were analyzed for their data type requirements, and two new data types were added to the standard to support potentially larger field values than can be accommodated by the existing data types. The U*8 (8-byte unsigned integer) was added to ensure that any potential cycle counts that may exceed the existing 4-byte unsigned integer can be accommodated. To maintain optimum data sizes, however, use of the U*8 data type is optional when used in data arrays (specifically in the STR record).

The existing limit on string sizes of 255 characters may be insufficient to list the full net name path of scan cells. The "S*n" variable-length string has been added to resolve this. This data type uses 2 bytes to specify the number of characters.

Scan datalog format

As mentioned above, the scan datalog format consists of new and existing records. Table 3 shows the new records that have been added in various categories. The rest of this section will describe the details of these records as well as the existing records that have been leveraged.

Table 3: STDF Records for Scan Datalog

Rec. Type  Rec. Sub-type  Mnemonic  Description
0                                   Information about the STDF file
0          30             VUR       Version update record
1                                   Data collected on a per lot basis
1          90             PSR       Pattern sequence record
1          91             NMR       Name map record
1          92             CNR       Cell name record
1          93             SSR       Scan structure record
1          94             SCR       Scan chain record
15                                  Data collected per test execution
15         30             STR       Scan test record
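The two new data types introduced in Section 5.2 can be illustrated with a small packing sketch. The exact on-disk layout shown here (little-endian, with a 2-byte character count prefix for S*n) is an assumption for illustration only:

```python
import struct

def pack_sn(text):
    # S*n: 2-byte character count followed by the characters,
    # lifting the 255-character limit of the older C*n type.
    data = text.encode("ascii")
    return struct.pack("<H", len(data)) + data

def pack_u8(cycle_count):
    # U*8: 8-byte unsigned integer, for cycle counts that can
    # exceed the 4-byte U*4 maximum of 4,294,967,295.
    return struct.pack("<Q", cycle_count)
```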


6.1 VUR
The Version Update Record (VUR) is used to identify the updates over version V4. Presence of this record indicates that the file may contain records defined by the new standard. This record contains only one string-type field, which is supposed to contain "V4-2007" to indicate that the format is the standard described in this paper.
6.2 STR - Logging Scan Failures
The Scan Test Record (STR) is a new STDF record that has been added in STDF V4-2007. It is the primary data vehicle for logging scan failures. While the STR record is too complex to describe in complete detail in this paper, the following describes its general attributes and usage.

The key objectives that guided the creation of the STR record specification were:

* Flexibility: can facilitate different levels of information content
* Data compaction: can compact a large volume of failures into minimum-size data structures
* Expandability: can expand to accommodate larger data sets

The flexible features of the record allow the logging of scan failures in various formats, including:

* Conventional cycle/pin format
* Pattern/chain/bit format
* Direct scan cell reference

In addition, the record can be used to log additional information:

* Data capture logs (not failures, but captured per-cycle DUT output data states)
* Pattern modification records

This level of flexibility was achieved by including a wide assortment (twelve) of optional data arrays in the record. An STR record may contain any combination of these arrays, while a minimum of two is required to log scan failures. This minimum set is:

CYC_OFST: An array containing the cycle number of each failure. The sizes of the entries in the array may be specified as either a 4- or 8-byte value.
PMR_INDX: An array containing the PMR index (pin ID) of each failure (2 bytes each).

Note that each array entry identifies a unique failure on a single pin. For example, if on cycle #679 there were failures on pins 21, 33 and 77, this would be logged as 3 entries in each array, e.g.:

Array Index:  n    n+1  n+2  n+3  n+4
CYC_OFST:     679  679  679  812  923
PMR_INDX:     21   33   77   33   44

If it is desired to also log the expect data of each failure, then the following array can be added to the record:

EXP_DATA: This array may be specified (by the DATA_BIT field) as 1, 2, 4, or 8 bits per failure. The DATA_CHR field maps these bits to the appropriate ASCII character.

To illustrate the usage of this array, assume the following values in the following fields (each EXP_DATA value is a byte):

DATA_BIT = 2
DATA_CHR = "LHZX"
EXP_DATA = 17, 81, 84, 5, 80, 5, 16, 17

One would specify the following expect data sequence:

HLHLHLHHLHHHHHLLLLHHHHLLLLHLHLHL

To log failures in the Pattern # / Chain # / Bit Position format, the following arrays would be used:

PAT_NUM: Contains the pattern number of each failure
CHN_NUM: Contains the chain number of each failure
BIT_POS: Contains the failing scan cell's bit position in the chain

To further expand the information provided in the previous example, any number of Scan Cell Name Records (CNR) can be added to the STDF file. Each CNR record contains a scan cell name with its chain number and bit position.

The STR record can also be used to log information other than failure information. In applications where non-fail data needs to be logged (e.g., scan dumps), the CAP_DATA array can be used. Like the EXP_DATA array, data can be compacted into this array as 1, 2, 4, or 8 bits per data state, mapped by the DATA_CHR field. Another application of the STR record is to record modifications from the original ATPG pattern file that may have been made within the test program. This may be critical information for downstream tools that are using the original ATPG files as a reference. To perform this function the NEW_DATA array is used in conjunction with the CYC_OFST and PMR_INDX arrays (or PAT_NUM, BIT_POS and CHN_NUM if the pattern/chain/bit format is used). Like the EXP_DATA and CAP_DATA arrays, the NEW_DATA array can be specified as 1, 2, 4, or 8 bits per cycle. To provide further expandability, there are three user-defined generic arrays available. Each of these arrays may be defined as employing 1-, 2-, or 4-byte values. In addition, there is also a fixed-length string array available, USER_TXT, in which the lengths of the entries are specified by the TXT_LEN field.

6.2.1 Adding the Test Conditions
Other than just the failing locations, the test environment under which the failures occurred may be critical information for downstream yield diagnosis. Since each STR (or set of concatenated STRs) constitutes the results of a single test, the ability to add an arbitrary number of "test conditions" to the record is provided. Example usages might be the VCC level and capture frequency for AC tests. Test conditions are facilitated by the COND_NAM (condition name) and COND_VAL (condition value) string arrays.
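To make the EXP_DATA packing concrete, the example given earlier (DATA_BIT = 2, DATA_CHR = "LHZX") can be decoded with a short sketch. The least-significant-bits-first ordering used here is an assumption, chosen because it reproduces the sequence shown in the text:

```python
def decode_states(data_bytes, data_bit, data_chr):
    # Unpack data_bit-wide state codes from each byte (assumed LSB first)
    # and map each code through data_chr to its ASCII character.
    mask = (1 << data_bit) - 1
    states = []
    for byte in data_bytes:
        for shift in range(0, 8, data_bit):
            states.append(data_chr[(byte >> shift) & mask])
    return "".join(states)

decode_states([17, 81, 84, 5, 80, 5, 16, 17], 2, "LHZX")
# → "HLHLHLHHLHHHHHLLLLHHHHLLLLHLHLHL"
```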
6.2.2 Solving the STDF Record Size Constraint
The STR record structure facilitates the inclusion of relatively large volumes of data. To circumvent the STDF constraint of a 64 KB maximum record size, the ability to "concatenate" several STR records together is provided (this includes breaking up the individual arrays across multiple concatenated records). This is facilitated by adding "x" of "y" fields plus "local" counts for each array. Figure 8 depicts an example of record concatenation where we are logging the two test condition arrays and 60,000 failures in the three data arrays CYC_OFST, PMR_INDX, and EXP_DATA (with 4 bits per failure).
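The concatenation scheme can be sketched as follows. The dictionary keys standing in for the "x of y" and local-count fields are hypothetical names for illustration, not the standard's field names:

```python
def concatenate_records(cyc_ofst, max_bytes=65535, entry_size=4):
    """Split a large CYC_OFST array across multiple STR-style records.

    Each piece carries an "x of y" continuation pair (rec_idx of rec_cnt)
    plus a local count, mirroring the concatenation mechanism in the text.
    """
    per_rec = max_bytes // entry_size  # entries that fit in one record
    chunks = [cyc_ofst[i:i + per_rec] for i in range(0, len(cyc_ofst), per_rec)]
    total = len(chunks)
    return [
        {"rec_idx": i + 1, "rec_cnt": total, "local_cnt": len(c), "cyc_ofst": c}
        for i, c in enumerate(chunks)
    ]
```

A reader reassembles the full array by concatenating the pieces in rec_idx order and checking that the local counts sum to the overall count.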
6.3 PSR Record - Synchronizing to ATPG Files
An STR record (or set of concatenated records) represents the results of a single test execution, and the test number is dutifully included in the record. However, this information generally provides no help at all to downstream diagnosis tools as to where the failures came from. In a majority of applications, a single test will be created from the translation of multiple ATPG pattern files (e.g., STIL or WGL files) into a single ATE pattern. The diagnosis tool will generally need to relate failures back to the original ATPG file(s) to perform proper diagnosis. To provide all of the necessary information to the diagnosis tool, the STR record includes a "PSR Index" field which references a Pattern Sequence Record (PSR), another new record type that has been added to STDF V4-2007. The PSR record specifies information about the specific ATPG files that were used to create the test, plus where these patterns reside (cycle number) in the test. This information is provided in seven arrays in the PSR record (3 required, 4 optional):

* PAT_BGN: Cycle # the ATPG patterns begin on
* PAT_END: Cycle # the ATPG patterns end on
* PAT_FILE: ATPG pattern file name
* FILE_UID: Unique file identifier code (optional)
* ATPG_DSC: General ATPG information (optional)
* SRC_ID: Other ATPG information (optional)
* PAT_LBL: Symbolic pattern label (optional)

While creating the PSR records may place an additional burden on the test program generation process to extract this information during the ATPG file translation, it is a critical and absolutely required function.
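Using the three required arrays, a downstream tool can map a failing cycle back to its originating ATPG file. A sketch, assuming the pattern ranges are sorted by begin cycle and non-overlapping:

```python
import bisect

def locate_failure(cycle, pat_bgn, pat_end, pat_file):
    """Map a failing cycle number to (ATPG file name, offset within
    that file's cycle range) using PSR-style PAT_BGN/PAT_END/PAT_FILE
    arrays. Returns None if the cycle falls outside every range."""
    i = bisect.bisect_right(pat_bgn, cycle) - 1
    if i < 0 or cycle > pat_end[i]:
        return None
    return pat_file[i], cycle - pat_bgn[i]
```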
Figure 8: Concatenating data arrays

Note that in the Figure 8 example we are completing the contents of one array (e.g., CYC_OFST) before starting another (e.g., PMR_INDX). However, this is an arbitrary STDF writer implementation choice; there are no specific rules concerning how the arrays may be distributed across the concatenated records.

6.4 NMR - Getting the Signal Name Right
In a traditional STDF file, Pin Map Records (PMR) are used to relate a PMR numeric index in the other records to the appropriate ATE/DUT pin. The diagnosis tools must be able to relate the failure to the signal name used in the ATPG pattern file. The simple notion would seem to be to just ensure that the PMR contains the ATPG-created name. However, this may not always be feasible due to unique naming conventions required by the ATE and/or legacy usage of the PMR records. To address this issue, the Name Map Record (NMR) has been added to STDF V4-2007. An NMR record contains a numeric array of PMR indexes and an associated string array of the ATPG signal names.

6.5 CNR
As mentioned in Section 6, Scan Cell Name Records (CNR) contain a scan cell name with its chain number and bit position. This record has been added to the format to enable storing the failing scan cell name in the datalog. In the actual datalog a reference to the corresponding CNR is added with the chain number and bit position that

is used to reference the CNR that contains the scan cell name.

6.6 SSR & SCR Records - Even More Scan Information
The STR, PSR, and NMR records that have been described provide the entire infrastructure required to datalog scan failures. Two additional records have been added to STDF V4-2007: the Scan Structure Record (SSR) and the Scan Chain Record (SCR). The STIL files generated by ATPG tools generally contain ScanStructure blocks which describe in detail the makeup of the scan chains, including the complete net name path of each scan cell (flip-flop) in the design. While generally not required, this information can be passed on in the STDF file by these two records. A single SSR record will list the ScanStructure name with a numeric array of SCR record indexes. Each SCR record will contain the information for a single scan chain, including the scan in/out ports, clock signals, chain length, and a string array containing the full net name path of each flip-flop in the chain. An SCR record will generally be quite large, thus the concatenation features described for the STR records are deployed here as well. While the SSR and SCR records may be useful in specialized applications, they are not generally recommended as they can represent significant data volume.

Memory Fail Datalog Format

The memory fail datalog format addresses both embedded and standalone memories. The memory datalog format has the same data model objects as the scan datalog format. However, the contents of the classes are different for objects other than the device identification and FEH identification. In addition, the contents of these classes are different for the embedded and the standalone memory cases. The rest of this section will describe the contents of each of these categories. Please note that the working group is in the process of mapping these objects onto STDF records, so the details of the STDF records are not included in this paper.

7.1 Design Information
The design information objects contain the information about the design that needs to be passed around the design-to-test-design loop to enable proper diagnosis and data translations. In particular, for the embedded memory case, as shown in Figure 9, it contains information on the controller, the instances that each controller controls, the memory details in terms of row/column/bank and its port information, and the orientation. It also contains a link to the layout file to enable logical-to-physical mapping for diagnosis. For the standalone case the design information only contains the information on the memory model and orientation.

Figure 9: Design object

7.2 Test Identification
The test identification object contains the information on the test facility, test stage (wafer/package), test flow, test suite and the pattern itself. Both standard March as well as user-defined algorithms are allowed. In the case of a user-defined algorithm being used for a test, the object contains a pointer to the file containing the description.

7.3 Environment Specification
The environment specification object contains the information about the test conditions that were set when a test was executed during fail data collection. The standard supports specification of temperature by default. It also supports the specification of user-defined environment conditions as (attribute, value) pairs.

7.4 Format Specification
The format specification object contains the information on the type of the fail data that is supported. The following format types are supported in the standard:

7.4.1 Standalone
* (pin, cycle no): In this format it is assumed that the outputs of the memory are directly connected to the pins, and the failures are directly observed by the ATE, which can record the cycle number where the fail was observed.
* (pin, address)/(row, col): This is similar to the format mentioned above, with the difference that the log contains (pin, addr) or (row, col) information instead of (pin, cycle no). It is assumed that the conversion from the (pin, cycle no) to the (pin, addr) or (row, col) will be done by the ATE.

* Bit stream (compressed/uncompressed): This format allows the raw bit stream to be stored in the log for debug purposes. The data could be uncompressed or compressed. In the case of a compressed datalog, the bits are compressed using a standard compression algorithm, e.g., zlib. For the uncompressed bit stream, the data volume is reduced by storing only the words that have any fails in them. The log contains the starting address of the words, the count of words that failed starting from that address, and then the word bits, as shown in Figure 10. Please note that the word bits 0/1 could indicate either the measured 0/1, or could be used to indicate fails (0 - no fail, 1 - fail). A flag in the format specification is used to indicate the meaning of the bits.
* Fail count: The fail count format allows only the fail count, instead of the actual fails, to be stored in the log. This is useful in volume production for monitoring and statistical analysis. Four levels of abstraction for the fail count are supported:
  o Total fails: This field is mandatory and indicates the total fails during the test
  o Block fail count: This field contains the number of blocks/banks that failed
  o Row fail count: This field contains the number of rows that failed
  o Column fail count: This field contains the number of columns that failed

Figure 10: Frame structure for Bitstream format

7.4.2 Embedded
* Frame based: In the case of embedded memories, the fails are communicated to the ATE by the BIST controller using frames and a protocol. This field allows the capture of the frame data from the BIST controller inside the log. The frame-based format contains two sections: the frame format specification section and the frames themselves. The frame format specification is added in the datalog to enable interoperability among the custom BIST frame formats, such that a reader can understand the format of the fail log and process the fail frames. The frame format follows these rules:
  o Fixed order for the field types: the field types allow the specification of design entities, e.g., controllers, memory instances, ports, etc.
  o Each frame field specification follows this format/syntax: (FieldType, FieldName, Width, Default Value) tuples
  o Multiple fields per type are allowed
  o Mask capability to remove fields for certain portions of the log; redefinition of the mask is possible
  Figure 11 shows an example of the format specification and the frames.
* Bit stream (compressed/uncompressed): This is the same as the one for standalone memories, except that the bit stream data is sent to the ATE by the BIST controllers.
* Fail count: This is similar to the standalone case as well. However, in case the counts are not directly generated by the BIST controllers, the ATE will have to compute them from the fail frames that are sent out by the BIST controller.

Figure 11: Example of frame format
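A reader that honors such a frame format specification can be sketched as follows. The field names and widths below are hypothetical, modeled loosely on the kind of (FieldType, FieldName, Width) tuples the specification section carries:

```python
def parse_frame(bits, spec):
    """Decode one fail frame from a bit string, given an ordered frame
    format specification of (field_type, field_name, width_in_bits)
    entries. Field names here are illustrative, not from the standard."""
    out, pos = {}, 0
    for _field_type, name, width in spec:
        out[name] = int(bits[pos:pos + width], 2)  # fixed-width slice
        pos += width
    return out

# Hypothetical spec: a 2-bit memory ID, a 4-bit address, 4 bits of fail data.
spec = [("MemID", "Mem1", 2), ("Addr", "Addr1", 4), ("FailedData", "Data1", 4)]
parse_frame("0100100011", spec)
# → {"Mem1": 1, "Addr1": 2, "Data1": 3}
```

Because the specification travels inside the datalog, the same reader can process frames from any BIST controller whose format is declared this way.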


7.5 Validation and Synchronization
The validation and synchronization object contains the information to perform data consistency checks across the loop during fail data collection. The standard provides the following fields for validation and synchronization:

* TotalFails: Total fails observed by the ATE
* TotalLoggedFails: Total fails logged in the STDF file. This could be smaller than the total fails observed by the ATE
* TotalFrames: Total frames sent by the BIST controller
* TotalFramesLogged: Total frames that are logged in the STDF file
* FileIncomplete: In case the fails were split into multiple STDF files, this flag would be set
* TestProgramSignature: This field allows the synchronization of the log to a test program. It is a unique signature derived from the test program ID.

7.6 Fail Data
This object actually contains the fails. The fails are stored in the format that is specified in the format specification section.

Figure 12: Proposed STDF reader validation flow
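A reader-side consistency check over these fields might look like the following sketch; the dictionary keys mirror the field names listed in Section 7.5:

```python
def check_sync(fields):
    # Cross-check the validation/synchronization counters: logged
    # fails/frames can never exceed what the ATE actually observed.
    if fields["TotalLoggedFails"] > fields["TotalFails"]:
        return False
    if "TotalFrames" in fields:
        if fields.get("TotalFramesLogged", 0) > fields["TotalFrames"]:
            return False
    return True
```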

Validation Approach
4.............
r

As ATE companies and diagnosis tool providers implement support for the new STDF V4-2007 capabilities, it would be of enormous benefit if a standardized validation flow could be provided. This document proposes a validation approach for both STDF readers (downstream diagnostic tools) and STDF writers (ATE).
STDF Navigator is a specialized Windows-based program that has been developed to support this process. Among the program's functions:
* Reads and validates the structure of STDF files
* Creates STDF files using a Datalog Scripting Language (DSL)
* Parses and extracts the ScanStructure data from STIL files
* Simulates scan cell faults and creates Fault Files using DSL commands
* Creates STIL pattern files with forced failures

Figure 13: Proposed STDF writer validation flow

Fault Files are standardized syntax text files that specify the STIL file name, pattern number, chain number, scan cell bit position, and optionally the expected data of each failure.
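A minimal Fault File parser might look like the sketch below. The paper specifies only the fields a Fault File carries; the whitespace-separated line syntax and the `#` comment convention used here are assumptions, not the tool's documented format.

```python
# Assumed line format: stil_file pattern chain bit [expect]
def parse_fault_file(text):
    """Parse Fault File text into a list of fault dicts."""
    faults = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip comments (assumption)
        if not line:
            continue
        parts = line.split()
        faults.append({
            "stil_file": parts[0],
            "pattern": int(parts[1]),
            "chain": int(parts[2]),
            "bit": int(parts[3]),
            # Expected data is optional per the paper.
            "expect": parts[4] if len(parts) > 4 else None,
        })
    return faults
```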


8.1 STDF Reader Validation Approach

The proposed approach for the validation of downstream diagnosis tools that implement STDF readers for STDF V4-2007 is described here and depicted in Figure 12. The premise requires the diagnosis tool to process the STDF file created by STDF Navigator and create a resultant "Read Fault File". This file is then compared to the "Source Fault File" that was created and/or used by STDF Navigator to initially create the STDF file. Note that in this process no actual fault simulation by the diagnosis tool is necessary, but rather just determination of the pattern number, chain number, and bit position of the failures. If appropriate, expect data can be included in the comparison as well. As depicted in Figure 12, the Source Fault File can either be created externally or generated by STDF Navigator's fault generator option.

8.2 STDF Writer Validation Approach

The approach for the validation of STDF writers on ATE systems may be a little more complicated. The process depicted in Figure 13 provides a very robust approach. Again the STDF Navigator program is used. In this flow no actual testing of an IC is required. Using a Fault File that was either created externally or automatically generated, STDF Navigator creates a set of STIL files that have the faults specified in the Fault File inserted into a set of scan patterns. This process is performed using multiple STIL files to validate the PSR record creation as well. The STIL waveform specifications for the scan-out pins are set up to drive the expect data on passing cycles/pins, and to drive the opposite state on failing cycles/pins. Figure 14 depicts an example STIL timing block that would accomplish this. The "L" and "H" pattern alias characters invoke a pass state, while the "f" and "F" characters invoke a fail state. This technique means that no actual IC or special connections for these pins are required on the ATE.

An ATE program is created from these STIL files and executed. The captured failures are written into the appropriate records in an STDF V4-2007 file. The STDF Navigator program then reads the resultant STDF file, extracts the failures, and compares them to the original Fault File.
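The final comparison step in both flows reduces to a set comparison on the keys the paper says must be recovered. A sketch, assuming each fault is represented as a dict with `pattern`, `chain`, and `bit` keys (an illustrative representation, not STDF Navigator's actual data model):

```python
def compare_fault_sets(source, read_back):
    """Compare Source Fault File entries to Read Fault File entries on
    (pattern, chain, bit). Returns (missing, extra) sorted tuples:
    faults the reader failed to recover, and faults it invented."""
    key = lambda f: (f["pattern"], f["chain"], f["bit"])
    src = {key(f) for f in source}
    rdb = {key(f) for f in read_back}
    return sorted(src - rdb), sorted(rdb - src)
```

A validation run passes when both returned lists are empty.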
Timing "ForceFails" {
    WaveformTable "Wft_LHfF" {
        Period '50ns';
        Waveforms {
            SOut {L {'0ns' D; '25ns' L;}}  // pass
            SOut {H {'0ns' U; '25ns' H;}}  // pass
            SOut {f {'0ns' U; '25ns' L;}}  // fail
            SOut {F {'0ns' D; '25ns' H;}}  // fail
        }
    }
}
9 Status and Future Plan

The format specification for the scan fail datalog has been completed with a successful ballot. The first version of the validation tool for the scan datalog has been completed and is undergoing testing at the time of writing. The group is currently discussing deployment plans and is planning a beta-site evaluation once the vendors' prototypes are ready. Discussions are underway with the vendors to develop prototypes and to add the support to their roadmaps.

The memory fail datalog working group has finished the data model discussions and is working towards the next milestone of releasing version 1.0 by October 2008. The group is currently working on mapping the data model presented in this paper into STDF records. The memory datalog group will follow a procedure similar to the scan group's for review, ballot, validation and deployment. The validation tool mentioned in section 8 will be expanded to cover the memory fail data format as well.

10 Conclusion
Yield analysis and improvement in advanced technologies require structural fail information to be collected during volume manufacturing. There has been no standard format to store this information, which leads to a complex environment, with significant overhead, for its collection, storage and processing. The new standard format presented in this paper has been developed, with the support of a wide spectrum of companies, to meet the requirements of volume diagnosis for yield enhancement in advanced technologies.

11 References

[1] Rehani et al., "ATE Data Collection - A comprehensive requirements proposal to maximize ROI of test", ITC 2004.
[2] Standard Test Data Format Specification, Version 4.0, Teradyne Inc.
[3] A. Constantino, "BITMAP Test Data in STDF File", EMTC 2006.
[4] A. Leininger et al., "The Next Step in Volume Scan Diagnosis: Standard Fail Data Format", ATS 2006.
[5] A. Khoche, "STDF Fail datalog standardization: Streamlining the dataflow for volume diagnosis", EMTC 2007.


Acknowledgements:
The work presented in this paper is a report on the standardization working group that consists of representatives from several companies. We would like to acknowledge their contributions to this work, for without them this work would not have been possible.

Figure 14: STIL timing block for forced fails


