Contents
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Typographic and Syntax Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Encounter Test Documentation Roadmap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Getting Help for Encounter Test and Diagnostics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Extended Message Help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Contacting Customer Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Encounter Test And Diagnostics Licenses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Using Encounter Test Contrib Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
What We Changed for This Edition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Revisions for Version 15.12 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Revisions for Version 15.11 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Revisions for Version 15.10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1 LBIST Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
LBIST Flows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Top-Down Test Synthesis Flow with Insertion of JTAG-Driven LBIST Logic . . . . . . . 16
Top-Down Test Synthesis Flow with Insertion of JTAG-Driven LBIST and 1500 Logic 21
Top-Down Test Synthesis Flow with Insertion of Direct-Access LBIST Logic . . . . . . 26
Encounter Test Flow for JTAG-Driven LBIST . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Encounter Test Flow for Direct-Access LBIST . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Example of Encounter Test JTAG-Driven LBIST Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Build Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Build Parent (JTAG) Testmode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Report Parent (JTAG) Test Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Build Child (LBIST) Testmode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Verify Child (LBIST) Test Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Build Faultmodel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Read LBIST Test Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Create LBIST Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
2 OPCG Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Processing OPCG Logic Designs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Processing Standard, Cadence Inserted OPCG Logic Designs . . . . . . . . . . . . . . . . 65
Processing Custom OPCG Logic Designs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Unique Encounter Test Tasks for OPCG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Creating OPCG Testmode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Creating an OPCG Pin Assignment File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Building Test Mode Initialization Sequence Input File . . . . . . . . . . . . . . . . . . . . . . . . 74
OPCG Test Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3 Low Power Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Managing Power Consumption During Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Preparing a Netlist for Low Power Test Generation . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Encounter Test Low Power Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Building the Low Power Logic Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Building a Low Power Test Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Analyzing Low Power Fault Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Generating and Analyzing Low Power Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
4 RAM Sequential Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Command Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Use model flow selecting faults on the perimeter of all memories on the design . . . 101
Selecting specific memory modules for RAM sequential test by module name . . . . 102
Selecting specific memory modules for RAM sequential test by instance name . . . 102
5 Hierarchical Test Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Core Processing Methodology Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Chip Processing Methodology Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Example of Out-of-Context Core Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Create Tests for Core . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Prepare for Core Test Data Migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Chip Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Requirements and Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
6 On-Product XOR Compression Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
XOR Compression Macro . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Modes of Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
XOR Compression Design Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
XOR Compression Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
7 SmartScan Compression Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Compression Serial and Parallel Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
SmartScan Testmodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Performing ATPG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Converting Parallel Interface Patterns to Serialized Patterns . . . . . . . . . . . . . . . . . . 137
Compression with Serial Only Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
Debugging Miscompares in SmartScan Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
SmartScan Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Using OPCG with SmartScan Compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
8 Generating IEEE 1687 (IJTAG) Compliant Macro Tests . . . . . . . . . . . . . . . . . . . . 157
IJTAG IEEE 1687 Macro Test Generation Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
Building Encounter Test Model and Testmode(s) . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
Reading ICL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
Migrating PDL Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
Processing Tester Controlled Clocks Asynchronous to TCK . . . . . . . . . . . . . . . . . . 188
Processing Tester Controlled Clocks Correlated to TCK . . . . . . . . . . . . . . . . . . . . . 190
Handling Scan Chains Spread Across Multiple Macros . . . . . . . . . . . . . . . . . . . . . . 191
Assumptions and Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
The Encounter Test documentation set includes a Preface, a Getting Started volume (Overview and New User Quickstart), and guides covering Models, Testmodes, Test Structures, Faults, ATPG, Test Vectors, and Diagnostics.
Click the Help or ? buttons on Encounter Test forms to navigate to help for the form and its
related topics.
Refer to the following in the Encounter Test: Reference: GUI for additional details:
Help Pull-down describes the Help selections for the Encounter Test main window.
View Schematic Help Pull-down describes the Help selections for the Encounter Test
View Schematic window.
Display interactive extended help information for a message by entering one of the following
commands, either directly on the command line or in the GUI Command Input field:
msgHelp <message_prefix-error_number1> <message_prefix-error_number2> ...
For example,
msgHelp TSV-001 TSV-314
displays interactive help information for messages TSV-001 and TSV-314.
help <message_prefix-error_number1> displays interactive help for the
specified message.
The GUI Session Log is also available to view message text and extended help. Refer to
Using the Session Log to View Message Help in the Encounter Test: Reference: GUI for
details.
1 LBIST Flow
Introduction
Logic built-in self-test (LBIST) is inserted into a design to generate patterns for self-testing.
LBIST allows for field/system testing without the need for automated test equipment (ATE)
and at times it is used during wafer/burn-in testing. Figure 1-1 shows a typical ASIC with
LBIST logic (in yellow) and other test components. RTL Compiler provides an automated way
to insert LBIST logic, while Encounter Test provides support to generate the patterns and
observe the responses.
The LBIST solution that is supported (shown in Figure 1-2) is based on a STUMPS (Self-Test
Using MISR and Parallel SRSG) architecture and (optionally) supports run-time programming
via JTAG. The inserted LBIST logic uses:
A pseudo-random pattern generator (PRPG), also referred to as Shift Register
Sequence Generator (SRSG), to generate input patterns that are applied to the scan
channels.
A multiple input signature register (MISR) to obtain the response to these test input
patterns. An incorrect MISR output indicates a defect in the chip.
For more information on the architecture and features of the current LBIST solution inserted
by RTL Compiler, refer to the chapter Inserting Logic Built-In-Self-Test Logic in the
Design For Test in Encounter RTL Compiler Guide.
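The STUMPS data path described above can be sketched in Python. This is a conceptual illustration only: a small Fibonacci LFSR stands in for the PRPG, and the same feedback structure, XORed with captured response bits, stands in for the MISR. The 8-bit width, tap positions, and toy "circuit" are assumptions made for the sketch, not the actual polynomials or logic inserted by RTL Compiler.

```python
# Conceptual STUMPS sketch: the PRPG feeds pseudo-random patterns,
# the MISR compacts captured responses into a signature.

def lfsr_step(state, taps=(7, 5, 4, 3), width=8):
    """Advance a Fibonacci LFSR one cycle; feedback is the XOR of the tap bits."""
    fb = 0
    for t in taps:
        fb ^= (state >> t) & 1
    return ((state << 1) | fb) & ((1 << width) - 1)

def misr_step(state, response_bit):
    """Advance a MISR: the LFSR feedback XORed with one captured response bit."""
    return lfsr_step(state) ^ (response_bit & 1)

def run_lbist(num_patterns, circuit, prpg_seed=1):
    """Apply PRPG patterns to 'circuit' and compact its responses into a signature."""
    prpg, misr = prpg_seed, 0
    for _ in range(num_patterns):
        misr = misr_step(misr, circuit(prpg))  # capture the response bit
        prpg = lfsr_step(prpg)                 # next pseudo-random pattern
    return misr                                # final signature

# A defect that changes any captured response perturbs the signature,
# which is how an incorrect MISR output indicates a defect in the chip.
parity = lambda p: bin(p).count("1") & 1       # toy fault-free circuit
good_signature = run_lbist(32, parity)
```

Comparing `good_signature` against the signature produced by a faulty run flags the defect; as with real MISRs, there is a small aliasing probability that a faulty response sequence maps back onto the fault-free signature.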
LBIST Flows
The following LBIST flows are currently supported for RC DFT and Encounter Test:
JTAG-Driven LBIST (includes OPCG and testing of MBIST logic)
JTAG-Driven LBIST with 1500 Logic
Direct Access Controlled LBIST (no OPCG support)
Figure 1-3 Top-Down Test Synthesis Flow with JTAG-Driven LBIST Insertion
Recommended Flow
1. Read Libraries, Design, and SDC Constraints
c. Read in an SDC file to define the timing constraints for the functional design.
2. Define DFT Control Signals - Specify the DFT setup to define the (full scan) test signals,
test clocks, and mark the objects that do not need to be mapped to scan.
a. synthesize -to_generic
4. Insert Boundary Scan Logic.
a. Enable the insertion of OPCG domain blocking logic for inter-domain paths:
- set_attribute dft_opcg_domain_blocking true /
a. check_dft_rules -advanced
7. Fix DFT Rule Violations - If there are any X-source violations, they must be fixed.
a. fix_dft_violations
8. Insert MBIST logic (Optional)
a. synthesize -to_mapped
Note: Only required if you started from RTL.
10. Add ATPG-Related Testability Logic (Optional) - Insert shadow logic for blackboxes and
RRFA test points for improved test coverage.
a. Connect the fullscan scan chains and generate the full scan chain reports.
- connect_scan_chains [-preview] ...
- report dft_chains
b. (Only if OPCG) Connect the OPCG macro segments into the full scan chains, and
build the OPCG side-scan chains. Report the full scan and side scan chains.
- connect_opcg_segments [-preview] ...
- report dft_chains [-opcg_side_scan]
c. (Only if OPCG) If you enabled OPCG domain blocking, insert toggle muxes to
increase ATPG effectiveness.
- set_opcg_equivalent ...
- replace_opcg_scan -edge_mode ...
Note: For more information, see Inserting On-Product Clock Generation Logic in
Design For Test in Encounter RTL Compiler Guide.
12. Compress Scan Chains -- Insert the scan chain compression logic and generate the
compression chain report.
b. report dft_chains
13. Insert LBIST
b. write_et_bsv -library
c. write_et_atpg -library
Note: Refer to Generating Files for LBIST Pattern Generation and Simulation in
Design For Test in Encounter RTL Compiler Guide for more information.
Figure 1-4 Top-Down Test Synthesis Flow with JTAG-Driven LBIST and 1500 Insertion
Recommended Flow
1. Read Libraries, Design, and SDC Constraints
c. Read in an SDC file to define the timing constraints for the functional design.
2. Define DFT Control Signals - Specify the DFT setup to define the (full scan) test signals,
test clocks, and mark the objects that do not need to be mapped to scan.
a. synthesize -to_generic
4. Insert JTAG Macro
Note: If OPCG logic is to be inserted, you also define the OPCGLOAD instruction.
- define_dft jtag_instruction -name OPCGLOAD -opcode 011
-register OPCGLOAD -length 1
a. Enable the insertion of OPCG domain blocking logic for inter-domain paths:
- set_attribute dft_opcg_domain_blocking true /
a. check_dft_rules -advanced
7. Fix DFT Rule Violations - If there are any X-source violations, they must be fixed.
a. fix_dft_violations
8. Insert MBIST logic (Optional)
a. synthesize -to_mapped
Note: Only required if you started from RTL.
10. Insert 1500 or Isolation Logic
Note: For more information, see Inserting Core-Wrapper Logic in Design For Test
in Encounter RTL Compiler Guide.
11. Add ATPG-Related Testability Logic (Optional) - Insert shadow logic for blackboxes and
RRFA test points for improved test coverage.
a. Connect the fullscan scan chains and generate the full scan chain reports.
- connect_scan_chains [-preview] ...
- report dft_chains
b. (Only if OPCG) Connect the OPCG macro segments into the full scan chains, and
build the OPCG side-scan chains. Report the full scan and side scan chains.
- connect_opcg_segments [-preview] ...
- report dft_chains [-opcg_side_scan]
c. (Only if OPCG) If you enabled OPCG domain blocking, insert toggle muxes to
increase ATPG effectiveness.
- set_opcg_equivalent ...
- replace_opcg_scan -edge_mode ...
Note: For more information, see Inserting On-Product Clock Generation Logic in
Design For Test in Encounter RTL Compiler Guide
13. Compress Scan Chains -- Insert the scan chain compression logic and generate the
compression chain report.
b. report dft_chains
14. Insert LBIST
b. write_et_bsv -library
c. write_et_atpg -library
Note: Refer to Generating Files for LBIST Pattern Generation and Simulation
in Design For Test in Encounter RTL Compiler Guide for more information.
Recommended Flow
1. Read Libraries, Design, and SDC Constraints
c. Read in an SDC file to define the timing constraints for the functional design.
2. Define DFT Control Signals - Specify the DFT setup to define the (full scan) test signals,
test clocks, and mark the objects that do not need to be mapped to scan.
a. synthesize -to_generic
4. Run Advanced DFT Rule Checker - to find DFT rule violations and x-source violations.
a. check_dft_rules -advanced
5. Fix DFT Rule Violations - If there are any X-source violations, they must be fixed.
a. fix_dft_violations
6. Insert MBIST logic (Optional)
a. synthesize -to_mapped
Note: Only required if you started from RTL.
8. Insert 1500 or Isolation Logic
b. report dft_chains
11. Compress Scan Chains -- Insert the scan chain compression logic and generate the
compression chain report.
b. report dft_chains
12. Insert LBIST
If the design had LBIST inserted by RTL Compiler, the last step of the process generates a
set of directories and scripts that automate the Encounter Test flow. The flows for running
BSV (1149 boundary scan verification), ATPG, and LBIST are contained in three separate
directories with a run script for each flow that starts with Build Model. This simplifies the
process, and you can simply run the scripts in the following order to complete the Encounter
Test processing:
1. atpg_lbist_jtag_workdir/runet.atpg (creates scanchain and logic test vectors)
2. bsv_lbist_jtag_workdir/runet.bsv (generates patterns to verify the 1149.1 TAP,
instructions, and test data registers)
3. lbist_jtag_workdir/run_lbist_RUNBIST or run_lbist_SETBIST (simulates
the self test and generates signatures)
If working with a large design, you may want to combine scripts to avoid duplication of steps
and, if you are using LBIST for manufacturing test, take advantage of cross-mode fault
markoff (as shown in Figure 1-6).
Note: If your design has JTAG-Driven LBIST that is not inserted by RTL Compiler, it must
meet the same requirements as the LBIST inserted with RC (see Inserting LBIST Logic for
more information).
Tip
The LBIST RAK on the Customer Online Support web site (http://
support.cadence.com) includes a Lab on JTAG-Driven LBIST. If this methodology is
new to you it is highly recommended that you try out the RAK. The RAK uses the
RC-DFT methodology for Encounter Test rather than the consolidated one, but the
steps are basically the same.
Figure 1-6 Encounter Test JTAG-Driven Logic Built-In Self Test Processing Flow
Report Parent (JTAG) Test Structures - Fix any problems with JTAG structures
before continuing.
Build ATPG Testmodes (FULLSCAN, COMPRESSION, COMPRESSION_DECOMP) - If you are
creating your own script, these testmodes and the child (LBIST) testmode can be
built simultaneously.
Verify ATPG Testmodes Test Structures - Analyze and fix any severe warnings
before continuing.
Build Faultmodel
Recommended Flow
1. Build an Encounter Test model.
- build_model cell=top_module_name
designsource=<verilog_netlist_location>
techlibs=<technology_library_location> blackbox=yes
blackboxoutputs=z
Setting blackboxoutputs to z keeps them from being X-sources.
Refer to Performing Build Model in the Encounter Test: Guide 1: Models for more
information.
2. Build BSV Testmode and Verify 1149.1 Boundary Logic
- build_testmode testmode=1149 bsdlinput=<bsdlname>
bsdlpkgpath=<location of 2001 bsdl package files>
assignfile=<assignfilename>
- verify_11491_boundary testmode=1149 bsdlinput=<bsdlname>
bsdlpkgpath=<location of 2001 bsdl package files>
Refer to Verify 1149.1 Boundary Scan in Encounter Test: Guide 3: Test Structures
for additional information.
3. Write Test Vectors for 1149 and TB_EXTEST_CAP_UPDT Testmodes - writes out
Verilog test vectors for Verilog simulation.
- write_vectors testmode=1149 inexperiment=11491expt
scanformat=serial
- write_vectors testmode=TB_EXTEST_CAP_UPDT
inexperiment=11491expt scanformat=serial
Note: TB_EXTEST_CAP_UPDT testmode is created automatically by
verify_11491_boundary for the iopinmapping checks.
Refer to Writing Verilog in Encounter Test: Guide 6: Test Vectors for more
information.
4. Build Parent (JTAG) Testmode.
- build_testmode testmode=MODE_JTAG_RUNBIST
modedefpath=<location of mode definition file>
seqdef=<location of file with mode initialization sequence>
assignfile=<assignfilename>
Note:
If you are not using RC-DFT, the testmode name may be something different but it
should follow the example provided by RC-DFT.
The RC-DFT testmode name when using SETBIST is MODE_JTAG_SETBIST.
All input files associated with RUNBIST processing contain the string RUNBIST.
All input files associated with SETBIST processing contain the string SETBIST.
RC-DFT puts a mode definition file in the WORKDIR with the same name as the
testmode. If the name of the mode definition file is different than the testmode, then
the modedef keyword also must be specified to identify the mode definition file.
RC-DFT puts a sequence definition file with the mode initialization sequence in the
WORKDIR with the name TBDseqPatt.JTAG_RUNBIST or
TBDseqPatt.JTAG_SETBIST.
RC-DFT puts an assignfile in the WORKDIR with the name
assignfile.JTAG.RUNBIST or assignfile.JTAG.SETBIST. The assignfile
defines the JTAG pin functions (TMS, TRST, TCK, TDI, TDO) and the clocks (PI and
OPCG) to be used for LBIST.
Refer to Multiple Test Modes in Encounter Test: Guide 2: Testmodes for additional
information.
5. Report Parent (JTAG) Test Structures
- report_test_structures testmode=MODE_JTAG_RUNBIST
reportscanchain=all
The scanchain from TDI to TDO should be both controllable and observable.
6. Build Child (LBIST) Testmode
- build_testmode testmode=MODE_LBIST_RUNBIST modedef=MODE_LBIST
modedefpath=<location of mode definition file>
seqdef=<location of file with mode initialization sequence>
assignfile=<assignfilename>
Note:
If you are not using RC-DFT, the testmode name may be something different but it
should follow the example provided by RC-DFT.
The RC-DFT testmode name when using SETBIST is MODE_LBIST_SETBIST.
RC-DFT puts the mode definition file in the WORKDIR with the name shown in the
sample command. If you are using SETBIST, the name of the mode definition file is
the same; there is no difference in the mode definition file between RUNBIST and
SETBIST for this testmode.
RC-DFT puts a sequence definition file with the mode initialization sequence in the
WORKDIR with the name TBDseqPatt.LBIST_RUNBIST or
TBDseqPatt.LBIST_SETBIST. Note that the Begin_Test_Mode statement in
the mode initialization sequence for this testmode must point to the correct name for
the parent testmode.
RC-DFT puts an assignfile in the WORKDIR with the name
assignfile.LBIST.RUNBIST or assignfile.LBIST.SETBIST. The
assignfile defines the clocks and identifies the PRPG and MISR.
7. Verify Child (LBIST) Test Structures
- verify_test_structures testmode=MODE_LBIST_RUNBIST
For SETBIST the testmode=MODE_LBIST_SETBIST.
Nonconformance to these guidelines may result in poor test coverage or invalid test
data. If you receive any severe warnings, analyze them to understand the condition and
fix the issue. It is especially important to fix any X-source issues. Refer to Verify Test
Structures in the Encounter Test: Guide 3: Test Structures.
8. Build ATPG test modes
- build_testmode testmode=FULLSCAN
- build_testmode testmode=COMPRESSION
- build_testmode testmode=COMPRESSION_DECOMP
9. Verify ATPG Testmodes Test Structures
- verify_test_structures testmode=FULLSCAN
- verify_test_structures testmode=COMPRESSION
- verify_test_structures testmode=COMPRESSION_DECOMP
Refer to Performing Verify Test Structures in the Encounter Test: Guide 3: Test
Structures for more information.
10. Build a Fault model
- build_faultmodel
This step is not required unless you are planning to fault grade the LBIST sequences or
run ATPG.
Refer to Building a Fault Model in the Encounter Test: Guide 4: Faults for more
information.
11. Read LBIST Test Sequences
- read_sequence_definition testmode=MODE_LBIST_RUNBIST
importfile=<file containing the test sequence>
LBIST requires a sequence to be read in and simulated.
RC-DFT generates the sequence in file TestSequence.seq.
If you are writing your own test sequence, the Universal Test Sequence below can be
used as a template; see Coding Test Sequences in Encounter Test: Guide 5: ATPG
for more information.
The following example of the RC-DFT generated sequence uses OPCG (PPIs) for the
LBIST sequence:
TBDpatt_Format (mode=node, model_entity_form=name);
[Define_Sequence Universal_Test (test);
[ Pattern ; # Set Test Constraints to post-scan value
Event Stim_PPI ():
"int_SE"=0
"int_capture"=1 ; ] Pattern ;
[ Pattern ; Event Pulse_PPI ():"scancaptck"=+; ] Pattern ;
[ Pattern ; Event Channel_Scan (); ] Pattern ;
] Define_Sequence Universal_Test;
Run this step if you are using LBIST for manufacturing tests; otherwise commit_tests
is not done.
Note: commit_tests is not part of the flow in the scripts generated by RC-DFT.
The master fault status is updated so faults marked off from the LBIST simulation will not
be tested in subsequent test generation runs.
For complete information, refer to Utilities and Test Vector Data in Encounter Test:
Guide 6: Test Vectors.
15. Create Tests, Write Vectors, and Commit Tests for ATPG Testmodes
- create_scanchain_tests testmode=COMPRESSION
experiment=chip_compression
- create_logic_tests testmode=COMPRESSION
experiment=chip_compression append=yes
- write_vectors testmode=COMPRESSION
inexperiment=chip_compression
- commit_tests testmode=COMPRESSION
inexperiment=chip_compression
- create_logic_tests testmode=COMPRESSION_DECOMP
experiment=chip_compression
- write_vectors testmode=COMPRESSION_DECOMP
inexperiment=chip_compression
- commit_tests testmode=COMPRESSION_DECOMP
inexperiment=chip_compression
- create_logic_tests testmode=FULLSCAN
experiment=chip_compression
- write_vectors testmode=FULLSCAN inexperiment=chip_compression
The commit_tests for each testmode updates the master fault status so the test
generation for the next testmode starts at that fault status (faults already tested in one
testmode will not be re-tested in the next testmode).
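The cross-mode fault markoff described above amounts to simple set bookkeeping. The Python sketch below is illustrative only: the fault names and per-mode detection sets are invented for the example, and the real master fault status carries much richer per-fault state than a membership flag.

```python
# Illustrative cross-mode fault markoff: commit_tests updates the master
# fault status so the next testmode's ATPG targets only untested faults.

def run_atpg(untested, detected_by_mode):
    """Faults this testmode newly detects, restricted to still-untested ones."""
    return untested & detected_by_mode

master_untested = {"f1", "f2", "f3", "f4", "f5"}   # hypothetical fault list

# COMPRESSION testmode runs first and detects f1..f3;
# commit_tests marks them off in the master fault status.
newly_detected = run_atpg(master_untested, {"f1", "f2", "f3"})
master_untested -= newly_detected

# FULLSCAN could detect f3..f5, but only f4 and f5 remain to be targeted,
# so f3 is not re-tested in the next testmode.
remaining_targets = run_atpg(master_untested, {"f3", "f4", "f5"})
```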
16. Simulate Vectors with ncsim or other Verilog Simulator
- ncverilog +TESTFILE1=<verilog_file_from write_vectors>
This is done for each set of vectors that was written during this flow.
Figure 1-7 Encounter Test Direct-Access Logic Built-In Self Test Processing Flow
Build ATPG Testmodes (FULLSCAN, COMPRESSION, COMPRESSION_DECOMP) - If you are
creating your own script, these testmodes and the child (LBIST_DIRECT) testmode
can be built simultaneously.
Verify ATPG Testmodes Test Structures - Analyze and fix any severe warnings
before continuing.
Build Faultmodel
Recommended Flow
1. Build Encounter Test model.
- build_model cell=top_module_name
designsource=<verilog_netlist_location>
techlibs=<technology_library_location> blackbox=yes
blackboxoutputs=z
Setting blackboxoutputs to z keeps them from being X-sources.
Refer to Performing Build Model in the Encounter Test: Guide 1: Models for more
information.
2. Build (LBIST_DIRECT) Testmode.
- build_testmode TESTMODE=MODE_LBIST_DIRECT
assignfile=assignfile.MODE_LBIST_DIRECT seqdef=<workdir>/
TBDseqPatt.MODE_LBIST_DIRECT modedef=MODE_LBIST
modedefpath=<workdir>
Since direct-access LBIST does not read out a signature, there is no need to have a
parent testmode where the values can be scanned out. Therefore, no parent testmode
is defined.
3. Verify (LBIST_DIRECT) Test Structures
- verify_test_structures testmode=MODE_LBIST_DIRECT
Refer to Verify Test Structures in the Encounter Test: Guide 3: Test Structures.
4. Build ATPG Testmodes (optional)
- build_testmode testmode=FULLSCAN
- build_testmode testmode=COMPRESSION
- build_testmode testmode=COMPRESSION_DECOMP
5. Verify ATPG Testmodes Test Structures
- verify_test_structures testmode=FULLSCAN
- verify_test_structures testmode=COMPRESSION
- verify_test_structures testmode=COMPRESSION_DECOMP
Refer to Performing Verify Test Structures in the Encounter Test: Guide 3: Test
Structures for more information.
6. Build a fault model for the design.
- build_faultmodel
Refer to Building a Fault Model in the Encounter Test: Guide 4: Faults for more
information.
7. Read LBIST Test Sequences
- read_sequence_definition testmode=MODE_LBIST_DIRECT
importfile=<workdir>/TestSequence.seq
To use user-defined clock sequences, read the test sequence definitions.
See Coding Test Sequences in Encounter Test: Guide 5: ATPG for an explanation of
how to manually create test (clock) sequences.
8. Create LBIST Tests
- create_lbist_tests testmode=MODE_LBIST_DIRECT
experiment=lbist_test_direct testsequence=Universal_Test
prpginitchannel=yes forceparallelsim=yes . . .
If you are using LBIST for manufacturing tests, it is recommended that you use fault
simulation, which is the default for this command. Otherwise, you may want to specify
keyword gmonly=yes to do good machine simulation instead.
Refer to Create Logic Built-in Self Test (LBIST) Tests in Encounter Test: Guide 5:
ATPG for complete information.
9. Commit Tests (optional)
This task is run only if you are using LBIST for Manufacturing tests.
- commit_tests testmode=MODE_LBIST_DIRECT
inexperiment=lbist_test_direct
10. Identify Signature Comparison Value
Use report_vectors to generate the ASCII format of the vectors and find the
signature. There is no official command to find this data and re-format it to the required
format for updating the Verilog. However, there is a contrib script,
IdentifyMISRCompareValue.pl, that can be used for this purpose. Contrib scripts
can be used directly or copied to your own space and modified for customized
requirements. To use this contrib script:
- report_vectors testmode=MODE_LBIST_DIRECT
experiment=lbist_test_direct outputfile=STDOUT
| IdentifyMISRCompareValue.pl > <workdir>/MISR_RESULTS.log
For complete information, refer to Utilities and Test Vector Data in Encounter Test:
Guide 6: Test Vectors.
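If the contrib script is not available, the same filtering can be sketched in Python. This is a hypothetical stand-in for IdentifyMISRCompareValue.pl, not its actual implementation: the exact "Final Signature" line wording is an assumption here, so check the format in your own report_vectors output before relying on it.

```python
import re

def find_final_signature(report_text):
    """Return the last 'Final Signature' value seen in the report, or None.

    Assumes lines shaped like 'Final Signature: <value>'; the real
    report_vectors output may word this differently.
    """
    sig = None
    for line in report_text.splitlines():
        m = re.search(r"Final Signature\s*[:=]\s*(\S+)", line)
        if m:
            sig = m.group(1)   # keep the last match, matching the
    return sig                 # "last occurrence" convention of step 11
```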
11. Edit Verilog to Include Signature
Use your favorite editor to edit the input netlist. You will insert the signature generated
from create_lbist_tests into the netlist so the signature is stored in the design
when it is simulated by ncverilog.
- Find the last occurrence of .misr_compare in the file.
- Comment out the existing definition for .misr_compare
- Include the .misr_compare value, shown as the Final Signature value in the
MISR_RESULTS.log file, in place of the definition you commented out.
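The three edits above can be scripted rather than done by hand. This sketch assumes the definition appears on a single line as `.misr_compare(<value>)` and that `//` comments are acceptable at that point in the netlist; real netlists may wrap the port connection across lines, so treat it as a starting point, not a drop-in tool.

```python
import re

def patch_misr_compare(netlist_text, final_signature):
    """Comment out the last .misr_compare definition and insert the new value.

    Assumes the definition sits on one line as .misr_compare(<value>);
    raises ValueError if no .misr_compare line exists.
    """
    lines = netlist_text.splitlines()
    # Find the last occurrence of .misr_compare in the file.
    idx = max(i for i, l in enumerate(lines) if ".misr_compare" in l)
    old = lines[idx]
    new = re.sub(r"\(.*\)", "(" + final_signature + ")", old)
    lines[idx] = "// " + old        # comment out the existing definition
    lines.insert(idx + 1, new)      # insert the Final Signature value
    return "\n".join(lines)
```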
12. Build NCSIM Testmode
- build_testmode testmode=NCSIM modedef=FULLSCAN
assignfile=<workdir>/assignfile.NCSIM
This is a testmode used to allow RC-generated patterns to be converted to the Verilog
format required for functional simulation.
13. Read Vectors for NCSIM
- read_vectors testmode=NCSIM importfile=TBDpatt.NCSIM
experiment=read
These patterns created by RC DFT allow verification of the signature you edited into the
netlist in the previous step.
14. Write Vectors for NCSIM
- write_vectors testmode=NCSIM inexperiment=read
includemodeinit=no
This writes out the patterns in Verilog format. The testmode initialization for NCSIM does
not matter for these patterns so there is no need to write it out.
For complete information, see Writing and Reporting Test Data in Encounter Test:
Guide 6: Test Vectors.
15. Create Tests, Write Vectors, and Commit Tests for ATPG Testmodes
- create_scanchain_tests testmode=COMPRESSION
experiment=chip_compression
- create_logic_tests testmode=COMPRESSION
experiment=chip_compression append=yes
- write_vectors testmode=COMPRESSION
inexperiment=chip_compression
- commit_tests testmode=COMPRESSION
inexperiment=chip_compression
- create_logic_tests testmode=COMPRESSION_DECOMP
experiment=chip_compression
- write_vectors testmode=COMPRESSION_DECOMP
inexperiment=chip_compression
- commit_tests testmode=COMPRESSION_DECOMP
inexperiment=chip_compression
- create_logic_tests testmode=FULLSCAN
experiment=chip_compression
- write_vectors testmode=FULLSCAN inexperiment=chip_compression
The commit_tests for each testmode updates the master fault status so the test
generation for the next testmode starts at that fault status (faults already tested in one
testmode will not be re-tested in the next testmode).
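The cross-mode markoff can be pictured as simple set bookkeeping. The fault names below are invented; the sketch only illustrates how committing one testmode's results keeps the next testmode from re-targeting the same faults:

```python
# Hypothetical fault names; illustrates how the committed master fault
# status keeps each successive testmode from re-targeting tested faults.
all_faults = {"f1", "f2", "f3", "f4", "f5"}
detected = set()          # master fault status, updated by each commit

def run_testmode(detectable):
    """Target only still-untested faults, then commit the results."""
    targets = all_faults - detected      # start from the committed status
    caught = detectable & targets        # faults this testmode tests
    detected.update(caught)              # commit_tests-style markoff
    return targets, caught

_, caught = run_testmode({"f1", "f2", "f3"})   # e.g. first testmode
print(sorted(caught))                          # ['f1', 'f2', 'f3']
_, caught = run_testmode({"f2", "f4"})         # e.g. next testmode
print(sorted(caught))                          # ['f4'], f2 already marked off
```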
16. Simulate Vectors with ncsim or other Verilog Simulator
- ncverilog +TESTFILE1=<verilog_file_from_write_vectors>
This is done for each set of vectors that was written during this flow.
Build Model
There is nothing unique about building the model for LBIST. The LBIST structures are
represented in the netlist and technology libraries. You only need to run the command and
ensure that the log reflects that the model was built successfully.
Command:
build_model cell=DLX_TOP blackbox=yes blackboxoutputs=z industrycompatible=yes
designsource=./DLX_TOP.et_netlist.v.gating_pgmclk_shiftdr_exit1_v2 techlib=./
techlibs/include_libraries.v,home_rcap_nightly_lib_sim/
tsmc25.v,home_rcap_nightly_lib_sim/tpz013g3.v,home_rcap_nightly_lib_sim/tsmc13.v
teiperiod=__rcETdft_
This command is the default generated by RC DFT and does the following:
- Allows blackboxes to be included in the model and sets their outputs to z rather than x.
- Requests the model to be built to allow for an industry-compatible faultmodel. See
Example 11: Improving Fault Model Compatibility with Other ATPG Tools in Encounter
Test: Guide 1: Models and Build Fault Model Examples for Cell Boundary Fault Model
in Encounter Test: Guide 4: Faults for more information on industry-compatible, cell
boundary, faultmodels.
- Points to the location of the design source, the Verilog netlist output from RC-DFT.
- Points to the technology libraries used to define the test view of each library cell.
- Identifies a unique string to use in place of a period in a name. Encounter Test uses
periods to delimit hierarchy. If the name of a module includes a period (for example,
abc.def), then a character string is used to represent the period so that it is not confused
with a level of hierarchy by the applications. The default character string is _p_, but RC-DFT
uses __rcETdft_ to ensure it will not conflict with any other string in the model.
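The period-escaping convention is easy to check with a round trip; a minimal sketch using the documented strings:

```python
# Encounter Test delimits hierarchy with periods, so a period inside a
# module name must be escaped. "_p_" is the documented default; RC-DFT
# uses the longer string "__rcETdft_" to avoid collisions.
ESCAPE = "__rcETdft_"

def encode(name):
    """Replace literal periods in a module name with the escape string."""
    return name.replace(".", ESCAPE)

def decode(name):
    """Recover the original module name."""
    return name.replace(ESCAPE, ".")

print(encode("abc.def"))                    # abc__rcETdft_def
print(decode(encode("abc.def")) == "abc.def")  # True
```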
Result
When the run completes, ensure you see the "Circuit Statistics" in the log. Since the LBIST
structures (PRPG, MISR, and channels) are all composed of flops/latches, there is no
indication of these structures in this report; but you should see that the number of flops/latches
is large enough to include these structures.
Ensure the end of the log, above the message summary, shows Flat Model Build Completed.
If you do not see this message, or if it shows as failed, look for ERROR or Severe WARNING
messages to determine the problem.
Look at the message summary at the end of the log. If there are any WARNING messages
that you do not understand, look up the message help and ensure there is no problem that
needs to be corrected.
Command
build_testmode testmode=MODE_JTAG_RUNBIST modedef=MODE_JTAG_RUNBIST modedefpath=.
assignfile=./assignfile.JTAG.RUNBIST seqdef=./TBDseqPatt.JTAG_RUNBIST
When you code your own mode initialization sequence for an 1149.1 testmode, you also
must code a custom scan protocol as the application cannot determine the correct scan
sequence for you. The scan protocol for this testmode will include two scan sections, one
each for LBIST and OPCG. The one for LBIST will be used to scan unload the MISR in
the child testmode. The one for OPCG will be used to program the OPCG. Refer to
Encounter Test: Reference: Test Pattern Formats for complete information on the
syntax for coding these sequences.
Because a default tester description rule is used, there is no need to include the tdrpath.
Result
You should see that there are two scan chains identified as "controllable and observable" in
the TTM-357 message. In this example, the OPCG scan chain was 14 bits and the LBIST
scan chain was 456 bits. If you do not see a TTM-357 message, or if the message only reports
one scan chain, there is a problem. See previous messages in the log. If you do not see any
problem, move on to the next step.
Command
report_test_structures testmode=MODE_JTAG_RUNBIST reportscanchain=all
This command will report the details of the scan chains. You should have one complete scan
chain from TDI to TDO for each of the two scan sections. Therefore, you will see two complete
scan chains. If the scanchain is not complete, you may want to run
verify_test_structures testmode=MODE_JTAG_RUNBIST to enable the interactive
analysis of the broken scan chain(s).
Command
build_testmode testmode=MODE_LBIST_RUNBIST modedef=MODE_LBIST_RUNBIST
modedefpath=. assignfile=./assignfile.LBIST.RUNBIST seqdef=./
TBDseqPatt.LBIST_RUNBIST
The seqdef keyword specifies the name of the file containing the testmode initialization
and custom scan protocol sequences. A sample of the mode initialization sequence is shown in Figure 1-12. Note
that the mode initialization sequence for this mode starts with the initialization of the
parent testmode. A sample of the custom scan sequence is shown in Figure 1-13. Refer
to Encounter Test: Reference: Test Pattern Formats for additional information on
sequence definition statements and syntax.
A default tester description rule is used so tdrpath need not be included.
Tester_Description_Rule = dummy.tdr;
scan type = gsd /* Standard scan design */
boundary=internal /* Internal boundary; reduced pin count test using JTAG */
in = on_board /* input signals are from on_board PRPG */
out = on_board; /* output signals are to on_board MISR */
/* Only logic signature tests or scan chain tests are allowed in this testmode */
/* The tests may be either static or dynamic, but dynamic tests are intended */
test_types dynamic logic signatures only shift_register;
Result
You should see that your STUMPs channels are all identified as "controllable and observable"
in the TTM-357 message. If you do not see a TTM-357 message, or if the message does not
report the right number of scan chains, there is a problem. Check previous messages in the
log. If you do not see any problem, move on to the next step, which provides additional
information about issues with the test structures.
Command
report_test_structures testmode=MODE_LBIST_RUNBIST reportscanchain=all
reportprpgmisr=all
This command reports all the bits in the PRPGs and MISRs and all the bits in each scan chain
(the STUMPs channels). The report starts with a summary of the number of PRPGs, MISRs
and Scan Chains.
Command
verify_test_structures testmode=MODE_LBIST_RUNBIST
This command requests the default checks for the specified testmode; in this case, the default
checks for LBIST. These include the checks for clocking, scan chains, and X-sources. If the
results from build_testmode did not look correct, the messages from
verify_test_structures will usually give you more information about the problem.
Use interactive analysis to analyze any messages you do not understand. See Analyzing Test
Structure Problems in the Design in Encounter Test: Guide 3: Test Structures for
information on analyzing TSV messages that are generated by
verify_test_structures.
Results
Once you have corrected any problems and have a good testmode, you are ready to continue
to the next steps.
Build Faultmodel
If you are using LBIST for manufacturing test, or want to fault grade the LBIST for another
reason, you need to build the faultmodel. If you just want to simulate the LBIST and generate
the signatures without fault grading, then this step is not needed for LBIST.
Command
build_faultmodel
Results
At the end of the log, you see the global fault statistics and the fault statistics for each
testmode that is defined. At this point, there is no fault coverage since no test generation or
fault simulation has been done.
The global statistics are repeated to the right of the testmode statistics for each testmode;
these were omitted from the example shown in Figure 1-14 below.
See Encounter Test: Guide 4: Faults for more information about faults, fault coverage, and
cross-mode markoff.
Global Statistics
Command
read_sequence_definition testmode=MODE_LBIST_RUNBIST importfile=./
TestSequence.seq
importfile is the name of the file that contains the sequence to be processed. Figure 1-15
shows an example of the test sequence generated by RC-DFT that can be used as a template
if you are coding your own sequences. Notice that this is a dynamic (delay test) sequence
(see Pattern 1.2).
Command
create_lbist_tests testmode=MODE_LBIST_RUNBIST experiment=fault_par testsr=yes
testsequence=Universal_Test testvectorformat=dynamic simdynamic=yes
prpginitchannel=yes gmonly=no detectthresholdstatic=0 detectthresholddynamic=0
detectInterval=32 signatureInterval=32 maxseqpatterns=6400 maxpatterns=6400
forceparallelsim=yes reportmisrmastersignatures=yes
reportprpgmastersignatures=yes
There are several reports that can be printed in the log. We chose to report the MISR and
PRPG signatures. The initial values in the MISR and PRPG (cycle 0) and the signatures every
detect interval (every 32 cycles) are printed.
Results
At the end of the log, you see the final LBIST statistics as shown in Figure 1-16. You can see
that in this experiment the scan chain test was simulated (32 cycles) and the static and dynamic
coverage is shown. The logic tests (using the Static_Test sequence) were simulated (6400
cycles) and the resulting static and dynamic coverage is shown. These results are totaled to
show that 6432 patterns were generated; of those, 1999 were effective (resulted in faults
being tested) and the final coverage is 96.78% static and 85.43% dynamic. The number of
faults detected is shown in the parentheses.
Throughout the log you see indications of the signature after the scan at the end of every 32
cycles. These look like what is shown in Figure 1-17.
-----------------------------------------------------------------------
LBIST Statistics
-----------------------------------------------------------------------
Result Summary
Total patterns simulated : 6432
Total effective patterns : 1999
Static fault coverage (DC) : 96.7848%
Dynamic fault coverage (AC) : 85.4297%
Command
write_vectors testmode=MODE_LBIST_RUNBIST inexperiment=fault_par
combinesections=all
Results
The resulting verilog vectors are stored in the testresults/verilog directory. For this
experiment, there were three files:
- VER.MODE_LBIST_RUNBIST.fault_par.mainsim.v - the mainsim file, containing
structural information and the task definitions.
- VER.MODE_LBIST_RUNBIST.fault_par.data.verilog - the vector file for
the LBIST tests.
- cycleMap.MODE_LBIST_RUNBIST.fault_par - the cycle map file (see Creating
Cycle Map for Output Vectors in Encounter Test: Guide 6: Test Vectors for a
description of the content of this file).
See Verilog Pattern Data Format in Encounter Test: Reference: Test Pattern Formats for
complete information about the Verilog output. Note, in particular, the information in section
LBIST Test Types.
Command
commit_tests testmode=MODE_LBIST_RUNBIST inexperiment=fault_par
If you have Severe Warnings from verify_test_structures that you are ignoring
because you know the operation of the design is correct, you will need to specify force=yes
on the commit_tests command line to have the tests committed.
Results
Commit concatenates the vectors from the experiment to the end of the master vectors for
the testmode; if there are no master vectors yet, it creates the master from this experiment.
If the vectors have fault status associated with them, as they do in this example, it marks the
master faultStatus with the results from this experiment.
In the log, commit_tests prints the fault statistics before and after the patterns are
committed so you can see the effect of committing this set of patterns on the fault coverage.
It reports the global and testmode statistics for Total Static and Total Dynamic in the same
format as shown in the log from build_faultmodel (see Figure 1-18).
The log also reports the statistics for the master test vectors after the experiment has been
committed. The statistics from this example are shown in Figure 1-18. Note that only one
experiment was committed in this example. The two test sections are the one for the scan
chain test and the one for the logic test. The init sequence is the mode initialization sequence
from build_testmode that is included at the beginning of each set of tests. The setup
sequence is where the MISRs were blocked and the channels were initialized from the PRPGs;
this is done before each test (scan and logic).
The tester loops and test procedures are constructs in the patterns; there is a tester loop for
each test section and there is a test procedure for each sequence (init and test). See
Encounter Test: Reference: Test Pattern Formats for more information about the Encounter
Test vector format (TBDpatt).
Figure 1-18 Sample LBIST Committed Master Test Vector File Statistics
Encounter Test's Verify Test Structures tool is designed to identify many such design
problems so they can be eliminated before proceeding to test generation. Even so, it is
advisable to use your logic simulator of choice to simulate the LBIST operation on your design
for at least a few test iterations (patterns) and compare the resulting signature with the
signature produced by Encounter Test's Logic Built-In Self Test generation tool for the same
number of test iterations. This simulation, along with the checking offered by Encounter Test
tools, provides high confidence that the signature is correct and that the test coverage
obtained from Encounter Test's fault simulator (if used) is valid.
When the signatures from a functional logic simulator and Encounter Test's LBIST tool do not
match, the reason will not be apparent. It can be tedious and technically challenging to
identify the corrective action required. The problem may be in the LBIST logic, its
interconnection with the user logic, or in the Encounter Test controls. The purpose of this
section is to explain the use of signature debug features provided with Encounter Test's Logic
Built-In Self Test generation tool.
Signature Mismatch
It is not necessary to run the full number of test iterations to attain a high confidence that your
LBIST design is implemented properly and Encounter Test is processing it correctly. In fact,
the functional logic simulation run, against which you will compare Encounter Test's
signature, might be prohibitively expensive if you were to compare the final signatures after
several thousand test iterations. It is recommended that you run a few hundred or a few
thousand test iterations, or whatever amount is feasible with your functional logic simulator.
Submit a Logic Built-In Self Test generation run, specifying the chosen number of test
iterations (called patterns in the control parameters for the tool). You will need to obtain the
MISR signatures; this can be done in any of three ways:
1. Request scope data from the test generation run: simulation=gp
watchpatterns=range watchnets=misrnetlist where range is one of the valid
watchpatterns options and misrnetlist is any valid watchnets option that
includes all the MISR positions.
In the first method, you will use View Vectors to look at the test generation results as signal
waveforms. Refer to Test Data Display in the Encounter Test: Reference: GUI for details
on viewing signal waveforms. This may seem the most natural if you are used to this common
technique for debugging logic. However, you may find it more convenient to have the MISR
states in the form of bit strings when comparing the results with your functional logic
simulator.
In both cases, MISR signatures are produced at every detection interval. Signatures
are printed in hexadecimal, and are read from left to right. The leftmost bit in the signature is
the state of MISR register position 1. (The direction of the MISR shift is from low- to high-
numbered bits with feedback from the high-numbered bit(s) to the low-numbered bits.)
Signatures are padded on the right with zeroes to a four-byte boundary, so there are trailing
zeroes in most signatures; these should be ignored.
The MISR latch values found in the signatures are manually compared with the results of the
functional logic simulator, often by reading a timing chart.
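Reading a signature by hand follows directly from the rules above (MISR position 1 is the leftmost bit, right-padded with zeroes to a four-byte boundary); a minimal sketch that builds the hex string from a list of MISR bit values:

```python
def misr_to_hex(bits):
    """bits[0] is MISR position 1; returns the padded hex signature string.

    Per the documented format, the signature is read left to right and is
    padded on the right with zeroes to a four-byte (32-bit) boundary.
    """
    padded = list(bits) + [0] * (-len(bits) % 32)  # pad right to 32 bits
    value = 0
    for b in padded:
        value = (value << 1) | b                   # position 1 ends leftmost
    return format(value, "0%dX" % (len(padded) // 4))

# A 20-bit MISR: positions 1-4 hold 1, the rest 0.
print(misr_to_hex([1, 1, 1, 1] + [0] * 16))        # F0000000
```

The trailing zeroes in the result come from the padding and should be ignored when comparing against a functional simulation.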
Syntax: [reportlatches=<integerRange>]
There is no default.
2
OPCG Flow
Introduction
On Product Clock Generation (OPCG) refers to complex logic that generates or modifies
clock signals internal to the product. This logic generally cannot be accurately modeled
using the gate-level primitives that are required for test generation and fault simulation.
The goal for Encounter Test processing is to identify the internal clock signals and hide the
clock generation logic. The nets at the output of the clock generation logic are symbolically
cut so the logic feeding them becomes inactive. The "cut" nets are called cutpoints. The
cutpoints are connected to inputs that the test generators/simulators can control called
pseudo primary inputs (PPIs). Multiple cutpoints can be connected to a single PPI if the
behavior of those internal signals is the same or simply out of phase with one another. See
additional information in the TestMode section.
Figure 2-1 depicts the basic concept. The oscillator and go signal are defined for the tester
and the PLL and clock generation logic on the design generate the clock signal on the output
of the clock generator. The cutpoint removes all that logic from consideration by Encounter
Test test generation and simulation. A pseudo primary input (PPI) is defined as a clock and
connected to the cutpoint net. Test generation and simulation will treat the PPI as a primary
input to test the downstream logic.
The on-product clock generation (OPCG) feature in Encounter Test allows you to generate at-
speed tests using the OPCG circuitry built into the design. This is required where the tester
cannot generate the clocks at the desired speeds. Encounter Test implements OPCG by
defining cutpoints and assigning pseudo primary inputs (PPIs) to the internal clock domains.
The test generator then uses those internal PPIs as the launch and capture clocks in the
design. Encounter Test supports custom OPCG logic that you define and for which you must
provide all the information on how the clocking sequences are produced and their definition.
It also supports a standard set of OPCG logic that can be inserted by Encounter RTL
Compiler and for which sequences can be automatically generated.
RTL Compiler creates the pin assign file that defines the cutpoints and the OPCG logic to
Encounter Test; you still need to create the mode initialization sequence to correctly program
any PLLs that will be used for OPCG testing.
The RTL Compiler also generates a run script that automates the various steps of Encounter
Test to produce the test vectors.
When using the OPCG logic inserted by RTL Compiler, you can provide the mode
initialization sequence as input to the write_et_atpg command in RC. This generates an
RC run script named runet.atpg, which automates the various steps of Encounter Test to
produce the test patterns. The script processes the OPCG and non-OPCG test modes. For
the OPCG test modes, it runs the prepare_opcg_test_sequences command that
automatically generates intradomain delay tests to be used by ATPG. It can also generate
static ATPG tests, if desired. You can modify the script to generate inter-domain tests if you
have included delay counters in the OPCG logic.
The following figure depicts the tasks required to create the test patterns using the RC run
script:
- Code PLL mode initialization sequence using RC template: This initializes the PLLs and
starts the reference oscillators to be used for the test.
- Use RC command define_dft opcg_mode to define the OPCG Mode: This specifies the
PLL Mode Initialization sequence.
- Optionally modify the run script, runet.atpg, to customize it for the desired output:
For example, if you inserted delay counters in OPCG domains and want to apply interdomain
tests, add interdomain=yes to the invocation of the prepare_opcg_test_sequences
command line in the script.
The following figure depicts the tasks required to generate test sequences using Cadence
inserted, standard OPCG logic without using the run script generated by RTL Compiler:
Figure 2-3 Creating True-time Patterns using RTL Compiler Inserted OPCG Logic
The following figure shows the tasks required to use custom OPCG logic within a design.
OPCG logic usually requires a special initialization sequence and sometimes requires special
test sequences for issuing functional capture clocks during application of the ATPG patterns.
This example creates special test sequences for initializing the chip and for launching the
functional capture clock.
Note that as each design is unique, customized designs require different settings and
sequences.
Two functional clock pulses are issued from the OPCG logic. The first clock launches the
transition and the second clock captures the logic output in a downstream flip-flop.
In addition, there should be an OPCG statement in either the mode definition file or the pin
assign file. The OPCG statement block allows specifying the PLLs to be used, the reference
clocks that are used to drive them, and the programming registers that are available to
program them. It also allows specifying each OPCG clock domain, the PLL output that drives
the domain, and programming registers that are available for the domain. Refer to OPCG in
the Encounter Test: Guide 2: Testmodes for more information on the OPCG statement
syntax.
The initialization sequence defines the launch-capture timing within the OPCG logic and waits
10,000 cycles for the PLL to lock.
The test application sequence defines the sequence of events required to get the OPCG logic
to issue the desired launch and capture clock pulses.
When you define cut points, Encounter Test treats those nets as though they were primary
inputs. Thus, as far as Encounter Test programs are concerned, the logic that actually feeds
cut point nets is inactive. While Encounter Test treats this logic as inactive and unobservable,
it is, in truth, observable. This logic is therefore placed in a category called OPC logic (for
On-Product Clock or Control). OPC logic is defined as any inactive logic that is in the back
trace from a cut point, plus any TG constraint logic that is fed only by primary inputs that fall
within the classification of OPC logic. The relationship of OPC logic and inactive logic is
shown in Figure 2-5 on page 72.
Figure 2-5 OPC and Inactive Logic in a Simple Design with a Cut Point
In Figure 2-5, the net connecting blocks K and N is identified as a cut point. There are two
constraint blocks which by their nature are left dangling. So the nodes labeled A, B, C, F, H,
I, and M are treated as inactive by Encounter Test. Nodes A, B, C, F, H, and I are identified
as OPC logic. Note that if the cut point did not exist, then only the constraint blocks H and M
would have been inactive, and there would be no OPC logic.
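The back trace that defines OPC logic is a transitive-fanin walk from the cut point. A minimal sketch; the connectivity below is invented for illustration and is not the netlist of Figure 2-5:

```python
# Invented connectivity: fanin maps each node to the nodes that drive it.
fanin = {
    "K": ["C", "F"], "C": ["A", "B"], "F": ["B"],
    "N": ["K"], "Q": ["N"],
}

def back_trace(start):
    """All logic in the transitive fanin of the cut point, including it."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(fanin.get(node, []))   # walk backward through drivers
    return seen

# The driver side of the cut net becomes inactive OPC logic.
print(sorted(back_trace("K")))              # ['A', 'B', 'C', 'F', 'K']
```

The full classification also sweeps in TG constraint logic fed only by primary inputs that already fall inside the OPC set; that refinement is omitted here for brevity.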
Use the following procedure to configure an OPCG test mode and perform ATPG.
1. Identify cut point locations and how they are grouped into PPIs.
2. Specify the Go signals, which are required for OPCG logic. Go signals can be
specified on PIs and PPIs.
Note: The Go signal represents the external or internal event that starts the OPCG
clocking.
3. Create a customized mode initialization sequence to ensure that PLLs are properly
initialized and locked on the input oscillators. Refer to Mode Initialization Sequences
(Advanced) in Encounter Test: Guide 2: Testmodes for more information.
4. Define test sequences, each with an associated setup sequence. The setup sequence
must include a Load_OPCG_Controls event that specifies values to be loaded into any
or all of the defined OPCG registers.
Note: Step #4 is optional for PLL registers.
In the OPCG pin assignment file, you will see the scan control and clock information:
-SC A system clock that is inactive in the logic 0 state
+TI An input signal that must always be at logic 1
+TC An input signal that must be at logic 1 when the functional capture clock is applied.
In other words, when creating test patterns, ATPG must assume that this input is always
at logic 1.
-SE A scan enable signal that is 0 during scan shift mode
SIx and SOx The scan in and scan out ports
-ES A clock used for scan only
.......
For this OPCG example, you will also see the following:
cutpoint - The clock output of the OPCG logic. This cutpoint has been named
PLL_CLK and the "+" sign means there is no inversion.
PPI - The cutpoint is now treated as a real input pin. You can assign any test function to
it. For the sample design it is a clock with a safe state of 0 (-SC).
PLL_IN - If the OPCG logic has an internal PLL, there is almost always a reference
input oscillator. The sample OPCG has an internal PLL and the input reference signal is
flagged as an oscillator here. Logic 0 is its safe state.
PLL_EN - If the design has an enable signal for the PLL, you can flag this input with the
GO test function. A "+" sign designates that the PLL is enabled at logic 1. Note that
leaving this pin unflagged is not an error, but if you do not specify how to control it,
Encounter Test generates errors.
When AT_SPEED=0 as in the example, the scan shift clock (scan_clk) coming from the tester
is selected and is therefore flagged with -SE. For at-speed purposes the functional launch and
capture clocks come from the OPCG, therefore, AT_SPEED=1 is required. That is why
AT_SPEED is also flagged with +TC.
In the sample design the OD0 and OD1 pins enable selecting four different launch-capture
times. The design may or may not allow configuring different clock speeds.
cutpoints DTMF_INST.CLK_GEN_I.AT_SPEED_CLK +PLL_CLK;
assign PPI=PLL_CLK test_function= -SC;
assign pin=AT_SPEED test_function= -SE +TC;
assign pin=test_mode test_function= +TI;
assign pin=OD0 test_function= +TI;
assign pin=OD1 test_function= -TI;
assign pin=reset test_function= -SC;
assign pin=spi_fs test_function= -SC;
assign pin=scan_en test_function= +SE;
assign pin=scan_clk test_function= -ES;
assign pin=scan_in[0] test_function= SI0;
assign pin=scan_out[0] test_function= SO0;
assign pin=scan_in[1] test_function= SI1;
assign pin=scan_out[1] test_function= SO1;
assign pin=PLL_IN test_function= -OSC;
assign pin=PLL_EN test_function= +GO;
] Pattern 2.1;
] Define_Sequence Mode_Initialization_Sequence 1;
For test mode initialization, this example sets the inputs and waits 10,000 cycles for the PLL
to lock, as shown in Figure 2-6 on page 75.
More information on inserting OPCG logic is available in Inserting On-Product Clock
Generation Logic in Design For Test in Encounter RTL Compiler.
The following example illustrates a special sequence to get OPCG logic to issue the true-time
delay test launch and capture clocks.
The following figure represents a scan shift when scan_en=1 and AT_SPEED=0.
Note: This is a broadside load Verilog simulation and, therefore, there is only one scan shift
clock (scan_clk) to load the data. No additional application sequence is required to enter or
exit scan shift mode. All necessary data for this was provided in the pin assignment file.
The following figure shows a special sequence to have OPCG issue at-speed delay test
clocks. For the sample design, the OPCG logic will issue the launch and capture pulse when
PLL_EN goes to 0. Note that AT_SPEED=1 and this selects the OPCG output as the clock
source.
Figure 2-8 Sequence with OPCG Issuing At-speed Delay Test Clocks
3
Low Power Flow
Introduction
The following terms are commonly used in reference to low power:
Power Mode - The static state of the design established by the power status
(on or off) of each power domain.
Power Domain - The collection of logic blocks that are connected to the same
unique power supply.
CPF - Common Power Format. Refer to Low Power in Encounter
RTL Compiler and the RTL Compiler Common Power
Format Language Reference for additional information.
UPF - Unified Power Format. Refer to Low Power in Encounter RTL
Compiler and the RTL Compiler Common Power Format
Language Reference for additional information.
- Optional preparation of a low power component fault subset. The test is generated
against this subset, targeting only the low power components; the
create_scanchain_tests and create_logic_tests commands generate low
power tests. These test results are typically used for analysis and verification.
- Analysis of test patterns by using the write_toggle_gram command
- Generation of Retention Tests. Refer to Creating Retention Tests in Encounter Test:
Guide 5: ATPG for details.
[Flow diagram: an RTL netlist plus CPF or UPF/1801 power intent is processed by RTL
Compiler-DFT into a synthesized netlist; Encounter Test takes the structural Verilog and
produces test vectors and a flop-based toggle count file; these feed Power Meter (estimated
static/dynamic power: switching and leakage) and Voltage Storm (dynamic IR drop plots,
electromigration).]
The result is a verifiable flow for test patterns to eliminate tester failure due to excessive power
consumption.
Use the DFT features of RTL Compiler to prepare the netlist for maximum flexibility during
test. Process an RTL level netlist by using the Common Power Format (CPF) or Unified Power
Format (UPF) to insert Power Test Access Mechanism (PTAM) logic and Power Aware Scan
Chain capability. Use the prepared netlist as input to the Encounter Test flow to build the test
model on which ATPG runs.
Tip
Inserting PTAM logic is highly recommended for increased low power test flexibility.
Refer to the following topics in Design for Test in Encounter RTL Compiler for additional
information:
Inserting Boundary Scan Logic
Inserting Memory Built-In-Self-Test Logic
Inserting Flop Gates to Reduce Power
Inserting Power Test Access Mechanism (PTAM) Logic
Controlling Scan Configuration
Figure 3-3 [Flow diagram: build the model from the synthesized netlist (structural Verilog);
run Prepare CPF Data or Read_Power_Intent (or the contrib dir script) on the CPF or
UPF/1801 file; optionally create ATPG power component (PC) tests with create_lp_tests;
report PC fault statistics and analyze; when PC coverage is acceptable, write vectors;
otherwise delete and regenerate sequences or patterns, or delete a sequence range, and
resimulate.]
The Encounter Test low power methodology is integrated into the standard ATPG use model
flow; however, it utilizes power definitions contained in the CPF or UPF/1801 file. The
following descriptions highlight the low power related tasks within the ATPG use model.
As shown in Figure 3-3 on page 86, the prepare_cpf_data command reads the CPF
information and the read_power_intent command reads the UPF/1801 information. Each
reads both library and design definitions, parses the CPF or UPF/1801 file, and populates the
Encounter Test CPF or UPF/1801 database. Encounter Test applications that run later
automatically use this database when needed, without the requirement to explicitly specify a
CPF or UPF/1801 file.
The CPF or UPF/1801 data is first used in the Build Test Modes task. One approach to
solving the low power issue during test is to partition the design from a power perspective by
using the functional power modes defined in the CPF or UPF/1801 file. To leverage this
approach, selected power modes are mapped to a test mode. The following criterion is used
to select the power modes:
The power modes must have a representation of each switchable power domain that is
in an on and off state across the selected set of test modes.
The processing of the power modes as test modes must start with the test mode that contains
the least amount of powered on circuitry, and then move to the next, until all selected test
modes are processed. This helps address faults in those locations in the design where power
is turned on early. As a result, when a different powered domain is powered on in subsequent
test modes, ATPG does not need to focus on this power domain if it has been tested
previously.
Encounter Test can determine that a power mode is targeted by a test mode by the
specification of the power_mode keyword in the pin assign file. During build_testmode,
the power_mode keyword is set to the CPF or UPF/1801 defined power mode name to set
the power mode state within that test mode.
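For illustration, if the CPF or UPF/1801 file defines a power mode named PM_SLEEP (a hypothetical name), the pin assign file statement binding it to the test mode might look like the following sketch, modeled on the assign file statement style shown elsewhere in this guide; refer to the pin assign file documentation for the exact syntax:

power_mode=PM_SLEEP ;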
The first decision block in Figure 3-3 on page 86 determines whether to generate test vectors
that specifically target the power components identified in the netlist through the CPF or UPF/
1801 library definitions. The Encounter Test installation's contrib directory includes the scripts
create_lp_tests (for static faults) and create_lp_delay_tests (for dynamic faults)
that produce low power test vectors. These scripts can be run before or after the standard
ATPG runs. This step is optional for production purposes because the faults associated with
low power components are also targeted by running the create_scanchain_tests and
create_logic_tests commands. Refer to Creating Retention Tests in Encounter Test:
Guide 5: ATPG for additional information.
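As an unsupported sketch, a low power ATPG run with the contrib script might look like the following, where the testmode and experiment names are hypothetical and the keywords accepted by the script may differ in your installation:

create_lp_tests testmode=FULLSCAN experiment=lp_static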
Important
The create_lp_tests.pl and create_lp_delay_tests.pl scripts in the
contrib directory are not formally supported.
The Analysis blocks in Figure 3-3 on page 86 list Report Power Component (PC) Fault Statistics as an analysis task. This task consists of producing and analyzing low power fault reports and low power fault statistics reports. These reports are useful in analyzing the ATPG results. Refer to Creating Retention Tests in Encounter Test: Guide 5: ATPG for additional information.
Write Toggle Gram is an additional Analysis step. While the Report PC Fault Statistics task
focuses on the faults in the design, the write_toggle_gram command focuses on the
patterns created to test those faults. This analysis can be done on any pattern set created
with Encounter Test to determine if the toggling of the flops during the test is at an acceptable
level. Although the keyword maxscanswitching may be used to control fill of the patterns
to a balanced level, escapes may still be possible. For example, an escape can occur when over-compaction of the patterns causes the toggle activity to exceed the specified limit.
When this occurs, either discard the experiment that contains these violations or delete those
particular sequences from the pattern set. If deleting specific sequences, the patterns must
be resimulated to determine the test coverage impact resulting from the deletion. These
patterns must also undergo additional analysis with the write_toggle_gram command to
ensure that the deletion of the sequences did not transfer the problem to another location in
the patterns. Refer to Calculating Switching Activity for Generated Test Vectors in Encounter
Test: Guide 6: Test Vectors for additional information.
This process can be repeated for each of the selected low power test modes to provide a
complete low power test pattern set for a quality design.
During the analysis performed using the write_toggle_gram command, a Toggle Count
Format (TCF) file can be written for the patterns being analyzed. The TCF file can be used
with VoltageStorm to determine an estimated average and peak power consumption when
the patterns are applied to the chip at the tester.
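A hypothetical invocation of this analysis might look like the following; the testmode and experiment names are illustrative, and the exact keywords accepted by write_toggle_gram are documented in Encounter Test: Guide 6: Test Vectors:

write_toggle_gram testmode=FULLSCAN inexperiment=logic_tests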
[Flow diagram: Build Test Model from the synthesized netlist (structural Verilog), with the CPF or UPF/1801 file read by Prepare CPF Data or Read_Power_Intent]
Structural Verilog - library models are required to build a test model. These technology library models are imported so that the Encounter Test applications (ATPG, fault simulation, and so on) can understand the behavior of the design and develop accurate fault models.
The prepare_cpf_data command reads the CPF file and subsequent Encounter Test
applications requiring the CPF information can extract it from the Encounter Test low power
database; no other application is required to read in the CPF file. Refer to Prepare CPF Data
on page 89 for details.
Similarly, the read_power_intent command reads the UPF/1801 file and subsequent
Encounter Test applications requiring the UPF information can extract it from the Encounter
Test low power database; no other application is required to read in the UPF file.
Refer to Building a Logic Model in the Encounter Test: Guide 1: Models for additional
information.
CPF data may be introduced into the Encounter Test low power flow with the
prepare_cpf_data command after completion of build_model. Refer to Figure 3-1 on
page 83. The prepare_cpf_data command executes the following:
Accepts a CPF file and compares the signals in the CPF file to the Encounter Test model
to verify the signals exist in the model. The following is the syntax to specify an input CPF
file with the cpffile keyword:
prepare_cpf_data cpffile=file
Stores the CPF data in the Encounter Test database for use by downstream commands
Accepts a name mapping file that contains the mapping between objects in the Common
Power Format (CPF) file and their corresponding names in the netlist. This is required
since object names could get modified during synthesis and the name mapping file
allows tracking of these changes while continuing to use the golden CPF file. The
following is the syntax to specify an input name mapping file with the
namemappingfile keyword:
prepare_cpf_data namemappingfile=file
Example 3-1 shows a sample output log for prepare_cpf_data. Refer to Sample CPF
Input File in Encounter Test: Guide 2: Testmodes for a sample of a CPF file specified with
the cpffile keyword.
INFO (TLP-601): The CPF information has been saved in the Encounter Test database.
[end TLP_601]
INFO (TLP-605): Getting Power Component instances in the design. [end TLP_605]
INFO (TLP-606): Found 45 instances of Power Component type(s) SRPG in the design.
[end TLP_606]
INFO (TLP-606): Found 0 instances of Power Component type(s) LEVELSHIFTER in the
design. [end TLP_606]
INFO (TLP-606): Found 0 instances of Power Component type(s) ISO in the design.
[end TLP_606]
INFO (TDA-001): System Resource Statistics. Maximum Storage used during the run
and Cumulative Time in hours:minutes:seconds:
*******************************************************************************
* Message Summary *
*******************************************************************************
Count Number First Instance of Message Text
------- ------ ------------------------------
INFO Messages...
1 INFO (TDA-001): System Resource Statistics. Maximum Storage used during the run
1 INFO (TLP-600): Processing Common Power Format (CPF) file pf_test_clean.cpf.
1 INFO (TLP-601): The CPF information has been saved in the Encounter Test database.
1 INFO (TLP-605): Getting Power Component instances in the design.
3 INFO (TLP-606): Found 45 instances of Power Component type(s) SRPG in the
design.
*******************************************************************************
UPF/1801 data may be introduced into the Encounter Test low power flow with the
read_power_intent command after completion of build_model. Refer to Figure 3-1 on
page 83. The read_power_intent command:
Accepts a UPF/1801 file and compares the signals in the file to the Encounter Test model
to verify the signals exist in the model. The following is the syntax to specify an input UPF
file with the upffile keyword:
read_power_intent upffile=file
Stores the UPF/1801 data in the Encounter Test database for use by downstream
commands.
The build_testmode command will automatically load and use a low power database
created by the prepare_cpf_data or read_power_intent command.
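Putting these steps together, a minimal CPF-based command sequence might look like the following sketch; all file, mode, and library names are hypothetical:

build_model designsource=top.v techlib=tech_cells.v
prepare_cpf_data cpffile=top.cpf
build_testmode testmode=FULLSCAN assignfile=top.pinassign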
Figure 3-5 depicts the flow to produce a low power test mode.
Typically, when selecting power modes to be built as test modes, it is recommended that, at a minimum, the selected power modes represent each switchable power domain in both an on and an off state. Though this is not a hard requirement, it provides a higher quality test for the design. More or fewer power modes can be mapped to test modes if desired.
To ensure that the design is fully tested, each logic block must be included in at least one test mode in which that logic block is powered on. Adhere to the following guidelines when selecting test modes:
1. If MBIST is inserted, one of the selected power modes must have every power domain
in which MBIST is inserted in a powered on state.
2. At least one instance of each power domain must be at an on condition, preferably
across multiple power modes.
3. At least one instance of a power domain is preferred to be off across power modes.
Memory is a significant source of power consumption during test. Memory testing is typically managed by two methods:
The degree of parallelism, that is, how many targeted memories run at the same time
The frequency at which the targeted memories are tested
The inputs for the build_testmode command are a pin assign file and a mode initialization file. The mode initialization file is typically an optional input for most test modes. Refer to
Mode Initialization Sequences in the Encounter Test: Guide 2: Testmodes for additional
information.
The Power_Mode definition statement links the power mode to the generated test mode. The power mode's configuration is assumed and forced when the test mode is constructed. If the statement is not included, either as a mode definition statement or in an assign file, no specific power mode is assumed and all power domains are assumed active. Refer to Power_Mode in the Encounter Test: Guide 2: Testmodes for the mode definition syntax.
Important
Low power test flows using CPF 1.0 extended and CPF 1.1 are currently not
supported via GUI.
Refer to Building a Test Mode in the Encounter Test: Guide 1: Models for additional
information.
A series of default tests can be run with the verify_test_structures command. These
tests are in areas such as analyzing scan clocks, scan flops, tri-states, feedback, clock
choppers, fix value flops and clock race conditions.
Advanced tests are available that deal with compression and with X sources that are the
result of powered-off logic. X-source identification ensures the isolation logic is valid. These
options must be explicitly selected.
Use interactive message analysis to analyze error or warning messages in the logs. Refer to
Analyzing Test Structure Problems in the Design in the Encounter Test: Guide 3: Test
Structures for additional information.
Note: Boundary Scan Verification will be needed if the insert_dft boundary_scan
command was issued as part of the RTL Compiler portion of the flow.
Use the build_faultmodel command to create a fault model that includes low power components.
Refer to Build Fault Model in Encounter Test: Guide 4: Faults for more information. Also
refer to Reporting Low Power Faults and Reporting Low Power Fault Statistics in Encounter
Test: Guide 4: Faults for information on generating fault reports and fault statistics reports
for power components.
4
RAM Sequential Tests
RAM Sequential test is the process of testing dynamic faults on the perimeter of a memory
element (RAM or ROM). The faults on the perimeter of the memory element are precisely
identified so that the only way these faults can be tested is through exercising a memory
operation. These faults are recommended to be created as part of a cell boundary fault
model. See Build Fault Model Examples for Cell Boundary Fault Model in Encounter Test:
Guide 4: Faults.
The memory models are required and need to be included in the build_model step; they
cannot be black boxed. The source for the memory models can come from Encounter Test
build_memory_model or can be migrated to the Encounter Test memory model format
from another tool. See Building Memory Models for ATPG in Encounter Test: Guide 1:
Models.
Use Model
The following figure represents the flow for performing RAM sequential tests.
Note:
1. To run the RAM sequential flow with the true_time use model script, specify
DELAYTESTTHRUMEMORIES=yes or DELAYTESTTHRUMEMORIES=only in the setup
file. If DELAYTESTTHRUMEMORIES=only is specified, the RAM sequential use model
flow is the only type of ATPG that is performed.
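For example, to make RAM sequential ATPG the only type of ATPG performed, the setup file would contain:

DELAYTESTTHRUMEMORIES=only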
2. The value of the memories keyword on the prepare_fault_subset command can
be yes (to select all memories in the design) or a comma separated list of names of the
specific memories whose faults are to be included in the subset. The specific memory
names may be listed by one of the following (all names in the list must be one or the other;
module names and instance names should not be mixed):
cell name (Verilog module name): to include all instances of the specific modules
instance names: to include only selected instances of the memory modules
Command Examples
Use model flow selecting faults on the perimeter of all memories on the
design
The following command lines show the default flow when running the true_time use model
script. These commands prepare the faults around the perimeter of each of the memories on
the design, and then the tests are generated on these faults and simulated against the full
fault model.
Note: Tests are generated with multiple scan loads in the sequences that yield the highest
test coverage. If your tester cannot support multiple scan load test sequences, specify
singleload=yes on the create_logic_delay_tests command.
prepare_fault_subset workdir=. testmode=FULLSCAN_DELAY experiment=ramseq1
memories=yes
commit_tests inexperiment=fullsim_ram1_i1
commit_tests inexperiment=fullsim_ram1_i2_3
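Between the prepare_fault_subset and commit_tests steps above, the use model script generates delay tests on the prepared fault subset. A hypothetical sketch of that step for a tester that cannot support multiple scan loads (the testmode and experiment names are illustrative):

create_logic_delay_tests testmode=FULLSCAN_DELAY experiment=ramseq1 singleload=yes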
5
Hierarchical Test Flow
Introduction
Encounter Test supports the Hierarchical Test methodology for processing large designs
composed of multiple instances of the same cores. Tests are generated for each core
out-of-context. When the core is instanced on a chip, the existing out-of-context core tests are
migrated to be applied at the chip I/O. The remaining chip faults around the cores are the only
ones tested at the chip level. See Figure 5-1 for a high-level depiction of the process.
The first step, shown in Figure 5-2, is to wrap the core and process it as a standalone entity
(called the out-of-context core) using the normal test process.
RC-DFT inserts IEEE 1500 style wrappers and ties the ability to control the modes of the core into the WIR control register. This includes the ability to control INTEST and
EXTEST compression, and the ability to have the INTEST compression modes be either
Active or Inactive where the inactive mode ensures the core does not interfere with the
top-level compression of Active core outputs.
Encounter Test is used to create tests for the core. Tests may include:
OPCG
XOR Compression
OPMISR+ Compression
Static and/or Dynamic Faults
The second step, shown in Figure 5-2, is to build the core migration model and prepare the
data required for migration of the tests at the chip (SoC) level.
[Flow diagram, chip (SoC) level: starting from the netlist with unwrapped cores, Migrate Core Tests and Commit for each test (migrating tests for instances of one core at a time), then Create Logic Tests for Chip Top and Commit (with all cores in Bypass mode) to test interconnections and top-level chip logic]
read_netlist ../DLX_CORE_CLOCK_GATED.v
set te [define_dft test_mode -active high te]
set se [define_dft shift_enable -active high se]
check_dft_rules
##################################################
# Define and Insert Internal and Wrapper Channels
##################################################
insert_dft wrapper_instruction_register
set wint [find / -pin DLX_CORE_3_wir_inst/INTEST]
set wext [find / -pin DLX_CORE_3_wir_inst/EXTEST]
# Insert scan
define_dft shift_enable WSEN_in -active high -create_port
define_dft shift_enable WSEN_out -active high -create_port
################################################
check_dft_rules
report dft_core_wrapper
##################################################
# Define test_bus_ports
##################################################
# Data ports
edit_netlist new_port_bus -input -name CPI -left_bit 7 -right_bit 0 /designs/
DLX_CORE
edit_netlist new_port_bus -output -name CPO -left_bit 7 -right_bit 0 /designs/
DLX_CORE
rm dft/scan_chains/DFT_ICHAIN*
rm dft/scan_chains/DFT_WCHAIN*
Build Model
The only unique aspect of processing the core for hierarchical test is identifying it as an out-of-context core whose tests are intended to be migrated when it is instanced on a chip (SoC). This identification is done by including core=yes on the command line:
build_model cell=DLX_CORE core=yes blackbox=yes blackboxoutputs=z \
industrycompatible=yes teiperiod=__rcETdft_ \
designsource=$WORKDIR/DLX_CORE.et_netlist.v \
techlib=/techlib/regs/../sim/tsmc13.v
Note: If you do not specify core=yes, it will not impact the top-level fault processing but the
generated fault model will not have PI/PO faults.
Build Testmodes
There will be many testmodes built for this methodology: INTEST, EXTEST, and BYPASS testmodes. For this example, the following testmodes were built.
The command line is not unique for hierarchical test; the unique features are in the modedef, seqdef, and assignfile. The following is an example of the command line generated by RC-DFT for this design.
build_testmode \
testmode=FULLSCAN_INTEST \
assignfile=$WORKDIR/DLX_CORE.FULLSCAN_INTEST.pinassign \
seqdef=$WORKDIR/DLX_CORE.FULLSCAN_INTEST.seqdef \
modedef=FULLSCAN_INTEST \
allowflushedmeasures=yes
The assignfile sets the test function pins required for each state. It may also include
statements for other types of logic such as OPCG or OPMISR. There are no unique
statements for hierarchical test in the core processing.
The seqdef defines the mode initialization sequence required to initialize the testmode.
Note: You may build additional testmodes that are not INTEST, EXTEST, or BYPASS, but
they will not be considered in the hierarchical test methodology. The only tests that are
migrated are those created for INTEST testmodes. The active logic in the EXTEST and
BYPASS testmodes is included in the core migration model.
The build_faultmodel command is run once to create the set of faults for the design and
identify which faults are active in each existing testmode. There are no unique keywords for
hierarchical test.
build_faultmodel
For hierarchical test, you want to create tests for each of the INTEST testmodes as those are
the tests that will be migrated. You want to create a form of the scanchain tests that does not
include explicit shifts and create logic tests.
Note:
You may create tests for other testmodes, but the only tests that are migratable are those
for testmodes that have boundary=migrate in their mode definition file (INTEST).
The only valid tests for migration are scanchain and logic tests; no other types of tests
may be included in the set of tests to be migrated.
This discussion and example are centered on static test, however delay test is also
supported and the same commentary applies to create_logic_delay_tests as
well as create_scanchain_delay_tests.
Commit Tests
Tests that are to be migrated for hierarchical test must be committed. The processing of the
tests for migration is done for all tests that are committed for the testmode. When there are tests for more than one INTEST mode, commit the tests for one mode before creating tests for the next mode. This avoids targeting faults in the second mode that were already tested by tests generated for a prior test mode. There are no unique keywords for commit_tests;
the following is an example command line:
commit_tests testmode=FULLSCAN_INTEST inexperiment=DLX_CORE_compression
Other Commands
There are no other commands required, however, you may choose to run reports or write
vectors for analysis or for testing the core out-of-context of the chip. There are no restrictions
on running other commands except as noted in the preceding Create Logic Tests section.
Note: It is recommended to use parallel Verilog simulation of all out-of-context tests; only
serial Verilog simulation will be available once the tests are migrated to an SoC. The ability to
run parallel Verilog simulation on migrated tests is planned for a future release. Until then, it
is recommended to use serial Verilog simulation of a few migrated tests for each core
instance.
This step extracts all the necessary logic from your EXTEST and BYPASS testmodes. The
goal is to create the smallest possible model that still includes all logic necessary to allow
testing, on the chip (SoC), of logic above or outside the cores; and allows the core to be
bypassed when migrating tests for another core on the chip (SoC). The testmodes that have
been created for the core and their characteristics are already known to the application from
data in the workdir of the core. Therefore, the only input required for this command is the
location of the workdir and the location where the output data is to be written. The following
is a sample command:
build_core_migration_model coremigrationdir=./et/DLX_CORE_DIR
The location of the workdir (as in the previous command examples) is obtained from the
WORKDIR variable set in the environment or is assumed to be the directory you are in when
you execute the command. You may explicitly set the workdir on the command line with
WORKDIR=<name of workdir>.
coremigrationdir is the location for the core migration data. The core migration model
will be written into a sub-directory named by the module name of the top level. So, for this
example, since the top level module is named DLX_CORE, the data will be written to ./et/
DLX_CORE_DIR/DLX_CORE. This is done so you can specify the same coremigrationdir for
several cores and the data will be kept per core; this is useful when you build the model for
chip (SoC) processing.
This step must be run for each testmode for which you intend to migrate tests; in this case, all the INTEST testmodes. It extracts the information about the faults in the testmode and their
status and stores information that allows these faults to be accounted for in the chip
faultmodel (even though the INTEST logic will not exist in that model). The status information
is used to give credit for the tested faults when the tests are migrated.
This step must be run for each testmode for which you intend to migrate tests and for each of
the core bypass modes that will be referenced at the chip level. The data it produces is used
by build_core_migration_testmode when processing the chip (SoC). The following is
a sample command:
prepare_core_migration_info coremigrationdir=./et/DLX_CORE_DIR \
testmode=FULLSCAN_INTEST
This step must be run for each testmode for which you intend to migrate tests. The data it
produces is used by migrate_core_tests when processing the chip (SoC). During this
process, the tests are translated into a form that does not require internal logic; so, for
example, stimulus on flops/latches is translated to the appropriate stimulus on the scan-in(s)
and pulses of the scan clock(s). The following is a sample command:
prepare_core_migration_tests coremigrationdir=./et/DLX_CORE_DIR \
testmode=FULLSCAN_INTEST
Chip Processing
There is no tool support for this step at this time. You need to concatenate all WIR scan paths and connect them, as well as other core control pins, to the ports of the chip (SoC).
Build Model
This step builds the model for the chip (SoC) using the core migration models created for each
core. The location of the core migration models is included in the techlib or designsource
specification. If the data for all the cores included on the chip (SoC) used the same
coremigrationdir, then you just need to include that directory in your specification; you do not
need to mention the module name in the specification. The following is a sample command:
build_model designsource=SOC_top.v techlib=./et/DLX_CORE_DIR,/techlib/regs/../
sim/tsmc13.v
This step builds the testmode data required for migration of core tests. You run this for each
core (or set of core instances) that is to have its tests migrated. You must point to the location
of the output data from prepare_core_migration_info. All keywords from
build_testmode are also available on this command. The following is a sample command.
build_core_migration_testmode testmode=FULLSCAN \
assignfile=$WORKDIR/SOC_top.FULLSCAN.pinassign \
seqdef=$WORKDIR/SOC_top.FULLSCAN.seqdef \
COREMIGRATIONPATH=./et/DLX_CORE_DIR
The unique requirement for this step is that the assignfile must contain an indication of which
instances of a single core are to be targeted for test migration and which cores are to be
bypassed. An example of the statements in the assignfile is given below. This indicates that
instance DLX_CORE_1 will be in the state set up by core testmode FULLSCAN_INTEST and
DLX_CORE_2 will be in the state set up by core testmode FULLSCAN_BYPASS. From the
information in the core migration directory, the application knows that DLX_CORE_1 should
have tests migrated for it and DLX_CORE_2 is to be bypassed.
coreinstance=DLX_CORE_1 testmode=FULLSCAN_INTEST ;
coreinstance=DLX_CORE_2 testmode=FULLSCAN_BYPASS ;
This step is used to build the testmodes that are required to test the logic around the cores.
There is nothing unique about this step, it is the same as in any Encounter Test processing
flow.
This step is used to verify the test structures in the testmodes that are required to test the
logic around the cores (from the previous step). There is nothing unique about this step, it is
the same as in any Encounter Test processing flow.
This step builds the faultmodel for the chip (SoC) and includes information gathered by
prepare_core_migration_faults that was run on the core. The only unique input to
this command is the identification of the core migration directory that contains the results of
prepare_core_migration_faults. The following is a sample command:
build_faultmodel coremigrationpath=./et/DLX_CORE_DIR
This step migrates the tests that were prepared for migration during core processing. The
step is run on each core migration testmode. The following is a sample command line:
migrate_core_tests testmode=FULLSCAN experiment=migrated_tg1 \
coremigrationpath=../ATPG_CORE_TEST/patt_migrate
When the migration is complete, the tests are committed. Once they are committed, the
global fault coverage will reflect the status of the faults that were in the original INTEST
testmode even though that logic is not included in the chip (SoC).
commit_tests testmode=FULLSCAN inexperiment=migrated_tg1
This step is used to create tests for the other testmodes on the chip (ones that are not used
for migrating core data). The tests are created and committed as for any other Encounter Test
processing.
Requirements
Cores are expected to have been wrapped using RTL Compiler (RC)-DFT. In cases of custom
design where wrappers are not inserted by RC, the custom wrappers must conform to the
requirements of ET processing.
Core Test migration to the chip is supported only for well isolated cores.
Only Encounter Test supported test compression logic structures can have tests
migrated.
Core test migration requires that the cores for which patterns will be migrated have been
processed through Encounter Test Core Processing Flow.
Limitations
The following is a list of limitations on the current support for hierarchical test:
No Encounter Test automatic sequence generation for chip level test modes intended for migration of core test patterns. Encounter Test does not understand how to properly initialize the core configuration registers or any PLLs that must be set up and started by the chip modeinit sequence. Also, chip modes used for migrating core patterns have no scan sequences of their own, as the scan sequences are all defined by the cores whose patterns are to be migrated.
No support for LSSD style scan requiring skewed loading or unloading.
No support for use of partition files when defining core or chip test modes for which tests
are to be migrated.
No "assumed scan" support for core or chip test modes that intend to follow the core test
migration flow.
No "scan type none" allowed for cores or chip test modes associated with core test
migration.
No support for 1149.1 scan modes for cores, and if used on a chip, it can be only for
initializing the chip mode for test migration by use of a parent test mode. This also means
no support to read BSDL for a core, although that may still be of use for chip level 1149.1
processing.
Only compression structures natively supported by Encounter Test are allowed within
cores and for top level compression logic.
No ability to read in STIL patterns for learning the sequences for a core or a chip.
No support for PRPG save and restore registers in LBIST modes for cores or at the chip
level.
No support for TB_STABILITY latches for cores.
No support for FLH latches.
No support initially for scanfill sequences.
No support to read in migratable or migrated patterns for simulation within Encounter
Test, including in the GUI analysis. Patterns that have been prepared for migration
cannot be brought back in for analysis.
Full core gate level models used in the chip are not allowed for core test migration; only
the core migration model may be used. A full gate level model of the core should be used
only when running ATPG at the chip level for that core.
No support initially to compute overall chip level switching activity from migrated test
patterns.
No support for creating chip wide core internal IDDQ test patterns. The ability to create
core specific IDDQ tests that could be migrated is a future support item.
No support for embedded macro test, including P1687 if it needs to access a net inside
a boundary model at the chip level.
No support to perform any kind of integrity checking, for example, to validate that the
current core definition matches the version used when the boundary model and core
patterns were generated, other than standard UNLEV attribute processing.
No support for concatenated core INTEST chains at the chip level.
No support for migration of patterns to a core through resistors between the core pins
and chip pins, specifically, if the resistors feed the core pins directly. However, the
migration is possible if there is a buffer inserted between the resistor and the core.
6
On-Product XOR Compression Flow
Introduction
Test Synthesis adds specialized circuitry, called a Compression Macro, which allows for
reduction in scan chain lengths and leads to reduced test time and test data volume. XOR
compression is an on-product test data compression method based on the use of
combinational XOR structures. An XOR-tree compactor is used for test response
compression (that is, scan-outs) and an optional XOR based test input (that is, scan-ins)
spreader can be used for test input data decompression.
The following subsections describe this on-product compression structure in more detail. Figure 6-1 on page 122 shows a high-level diagram
of the on-product XOR compression architecture. A stream of compressed test data from the
tester is fed to the N scan input pins of the chip under test. A space expander based on an
XOR based spreader network internally distributes the test data to a large number of internal
scan channels which is a multiple (for example, M) of the number of scan input pins N. The
input side test data spreader therefore feeds M*N scan channels.
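The spreading idea above can be sketched in a few lines. This is a minimal illustrative model, not the Encounter Test implementation: the tap lists (which scan-in pins are XORed to drive each internal channel) are hypothetical.

```python
# Illustrative sketch of an XOR spreader: N compressed scan-in bits are
# expanded to M*N internal channel bits, each channel bit being the XOR
# of a fixed subset (tap list) of the scan-in pins. Tap lists here are
# invented for the example; real networks are defined by the macro.
def xor_spread(scan_in_bits, taps):
    out = []
    for tap_list in taps:
        bit = 0
        for i in tap_list:
            bit ^= scan_in_bits[i]  # XOR the selected scan-in pins
        out.append(bit)
    return out

# N = 2 scan-in pins expanded to M*N = 4 internal channels (M = 2).
taps = [[0], [1], [0, 1], [0]]
print(xor_spread([1, 0], taps))  # -> [1, 0, 1, 1]
```

The spreader is purely combinational, which is why the same compressed stream can feed many more channels than there are scan-in pins.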
On the output side, the test data response is compressed by a space compactor to create an
N-wide output test response data stream. The space compactor is based on a combinational
XOR-tree. As with OPMISR and OPMISR+, optional X-masking logic can be added between
the scan channel tails and the inputs to the XOR-based space compactor. The masking
logic is optional for XOR-based compression because an XOR tree tolerates captured
X-states better than a MISR; however, masking is still highly recommended, since it is
difficult to predict how many X-states a clean design will capture once it is fully implemented
and run on a tester.
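The compaction and masking behavior can be sketched as follows. This is an illustrative model only (channel grouping and masking policy are simplified assumptions, not the actual macro logic): an unmasked X-state poisons the XOR output it feeds, while a masked tail contributes a known 0.

```python
# Illustrative sketch of an XOR-tree space compactor: M*N channel-tail
# bits are folded down to N outputs. Optional masking gates off tails
# known to capture X so the XOR result stays deterministic.
def xor_compact(tails, n_out, mask=None):
    outs = []
    m = len(tails) // n_out           # tails feeding each output
    for o in range(n_out):
        bit = 0
        for c in range(o * m, (o + 1) * m):
            v = tails[c]
            if mask and c in mask:
                v = 0                 # masked tail contributes a known 0
            if v == 'X':
                bit = 'X'             # an unmasked X poisons this output
                break
            bit ^= v
        outs.append(bit)
    return outs

tails = [1, 0, 'X', 1]
print(xor_compact(tails, 2))            # -> [1, 'X']
print(xor_compact(tails, 2, mask={2}))  # -> [1, 1]
```

The second call shows why masking is recommended even though the XOR tree tolerates X-states better than a MISR: without it, each X still costs an observation point.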
While the XOR-based space compactor is needed on the output side, a simpler input-side
decompressor based on scan fanout can also be used in this architecture. The four
compression options are summarized in Figure 6-2. On the input side, the space expander
can be based on either scan fanout or an XOR spreader. On the output side, the space
compactor can be based on either a MISR or an XOR tree. Encounter Test ATPG and
Diagnostics support all four combinations shown in Figure 6-2. Test Synthesis does not
currently support the combination of an XOR spreader with a MISR space compactor.
Table 6-2 describes the additional pins required for X-tolerance using the Channel masking
method described in OPMISR Test Modes in Encounter Test: Guide 2: Testmodes. These
pins and their purpose are identical to that of the OPMISR and OPMISR+ masking logic.
Figure 6-3 on page 126 shows an internal conceptual view of the XOR-Compression Macro.
It comprises four conceptual blocks: the XOR-Spreader, the XOR-Compactor, the Scan
Multiplexing Logic, and the optional X-Masking logic.
Modes of Operation
Figure 6-4 on page 127 shows an external view of the XOR Compression Macro. The XOR
compression macro has the following modes of test operation.
1. Compression Mode with both XOR-spreader and XOR-compactor active (see Figure 6-5
on page 127). This is set up when both SPREAD and SCOMP are active.
2. Compression Mode with scan fanout spreader and XOR-compactor (see Figure 6-6 on
page 128). This is set up when SPREAD is active and SCOMP is inactive. Alternatively,
the XOR-Compression Macro can be configured without the SPREAD pin, in which case
this is the only available Compression Mode. In this case, Scan Input pin X feeds scan
channels X, X+M, X+2M, X+3M, and so on, where M is the scan fanout.
3. Full-scan mode (see Figure 6-7 on page 128) is established when SCOMP is inactive and
SPREAD is inactive or absent. In this case, multiple internal scan channels are
concatenated to form full scan chains. Scan chain X, for example, is affiliated with
SCANIN(X), RSI_SI(X), scan channels X, X+M, X+2M, X+3M, and so on, and finally
DSO_SO(X) and SCANOUT(X). Each internal scan channel (i) is affiliated with the pins
SWBOX_SI(i) and SWBOX_SO(i).
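The full-scan concatenation rule can be sketched as a one-liner. This is an illustrative sketch only; the function name is hypothetical, and it simply follows the X, X+M, X+2M rule stated in the text.

```python
# Illustrative sketch of the full-scan concatenation rule: chain X is
# built from internal scan channels X, X+M, X+2M, ... up to the total
# channel count.
def fullscan_chain(x, m, num_channels):
    return list(range(x, num_channels, m))

# With M = 2 and 8 internal channels, chain 0 concatenates 0, 2, 4, 6
# and chain 1 concatenates 1, 3, 5, 7.
print(fullscan_chain(0, 2, 8))  # -> [0, 2, 4, 6]
print(fullscan_chain(1, 2, 8))  # -> [1, 3, 5, 7]
```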
Figure 6-4 XOR Compression Macro Connection to I/O Pins and Scan Channels of
Design
Figure 6-5 Compression Mode with Both Spreader and Compactor Active
Figure 6-6 Compression Mode with Scan Fanout and Compactor Active
When the numchannels option is not an integer multiple of the numchains option, the
scan chains in FULLSCAN mode may not be balanced since some of the FULLSCAN
chains will contain one more scan channel than the others.
When masking is used, the specified value for numchannels must be at least twice as
large as the specified value for numchains.
There is no (reasonable) upper limit on numchains or numchannels; however, if the
numchannels/numchains multiple is too large, the ability to diagnose the resulting
data will be affected. A warning is issued when this condition occurs.
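The balance rule above can be made concrete with a small sketch. This is an illustrative computation (the function name is hypothetical): when numchannels is not a multiple of numchains, the remainder channels are distributed one extra per chain.

```python
# Illustrative sketch of FULLSCAN chain balance: divide numchannels
# across numchains; the remainder chains each get one extra channel.
def chain_lengths(num_channels, num_chains):
    base, extra = divmod(num_channels, num_chains)
    return [base + 1 if i < extra else base for i in range(num_chains)]

print(chain_lengths(8, 4))   # -> [2, 2, 2, 2]  (balanced)
print(chain_lengths(10, 4))  # -> [3, 3, 2, 2]  (unbalanced)
```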
7
SmartScan Compression Flow
Introduction
Related Topics
convert_smartscan_failures in the Encounter Test: Reference: Commands.
SmartScan Compression in Design For Test in Encounter RTL Compiler Guide
Converting SmartScan Serialized Tester Fail Data to CPP in Encounter Test: Guide
7: Diagnostics
SmartScan is a low-pin-count compression solution that supports as few as one scanin and
one scanout pin while still allowing a reasonable amount of compression and diagnostics.
This is useful for designs where limited pins are available for testing. For example, when
performing multi-site or system-level testing, the number of contacted test pins can be very
limited. An efficient way to meet this requirement is to reduce the number of scanin and
scanout pins on the design.
In the SmartScan compression architecture, each scanin feeds an N-bit serial shift register
(also known as Deserializer) and each scanout is similarly fed by an N-bit serial shift register
(also known as Serializer). This is typically known as the Serial interface. To load data into
each channel, the Deserializer first needs to be completely loaded with the test data that
would normally be applied to the Decompressor directly from multiple scanin pins. After the
Deserializer is loaded, the clock to the internal scan chains starts and the test data is shifted
into a channel. On the output side, all the bits within the Serializer simultaneously capture
data from the last flops of the channels and then serially shift it out through a single scanout
pin. The Deserializer and Serializer operations are overlapped such that while new data is
shifted in through the SERIAL_SCAN_IN pin, the response data loaded into the Serializer
(from the scan chains) is simultaneously shifted out through the SERIAL_SCAN_OUT pin.
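The deserializer loading described above can be sketched as follows. This is a simplified illustrative model (the function name and shift direction are assumptions, not the macro's actual register ordering): one bit enters the N-bit deserializer per serial cycle, and after every N cycles the internal channels receive one parallel load.

```python
# Illustrative sketch of the SmartScan serial interface: each serial
# cycle shifts one bit into the N-bit deserializer; once it is full
# (every Nth cycle), its contents are applied to the scan channels.
def serial_shift(stream, n):
    deser = [0] * n
    channel_loads = []
    for i, bit in enumerate(stream):
        deser = [bit] + deser[:-1]    # shift new bit in, oldest bit along
        if (i + 1) % n == 0:          # deserializer full: pulse channels
            channel_loads.append(list(deser))
    return channel_loads

# A 3-bit deserializer consuming a 6-bit compressed stream produces
# two parallel loads for the channels.
print(serial_shift([1, 0, 1, 1, 1, 0], 3))  # -> [[1, 0, 1], [0, 1, 1]]
```

The serializer works symmetrically, which is why the load and unload operations can be overlapped through the single SERIAL_SCAN_IN and SERIAL_SCAN_OUT pins.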
RTL Compiler supports generation and insertion of the SmartScan compression macro. This
includes the Deserializer and Serializer registers, the clock control logic, and the optional
mask registers. RC also generates the necessary interface files required in the Encounter
Test design flow. Refer to Figure 7-2 on page 134 and Figure 7-4 on page 145 for more
information on these files.
Refer to the Inserting Scan Chain Compression Logic chapter in Design for Test in RTL
Compiler Guide for more information on SmartScan compression architecture and insertion
of SmartScan compression macro.
Encounter Test supports the SmartScan compression inserted by RC for both serial and
parallel interfaces.
Parallel Interface is where several scanin and scanout pins are available and these are
directly connected to the XOR compression network.
Serial only Interface has only a few serial scanin and scanout pins that connect to the
Deserializer and Serializer registers, which in turn are connected to the XOR
compression network.
Figure 7-2 Design Flow for SmartScan Compression with Parallel and Serial Interface
(Flow: Build Model → Convert to Serialized Patterns (Serial Interface Patterns) → Write
Vectors)
In this scenario, the Verilog netlist generated by RC contains several scanin and scanout pins
(parallel interface) for parallel access to the XOR compression network. One or more of these
pins is also shared with the serial interface.
This scenario allows the patterns to be applied either via the Parallel or the Serial interface.
For example, the parallel interface can be used during manufacturing test, while the Serial
interface can be used to apply patterns during system test.
SmartScan Testmodes
With SmartScan compression, RC generates two SmartScan test modes,
compression_smartscan and compression_decomp_smartscan for test generation.
If there is no XOR spreader on the input side, then only the compression_smartscan
testmode is generated.
Two control signals are required for SmartScan operation. These can be PI controlled or can
be internally generated test signals:
SMARTSCAN_ENABLE
Will be at Active High value in SmartScan Testmodes
Inactive value will cause SmartScan flops to be included within the scan chains
SMARTSCAN_PARALLEL_ACCESS
Active High value will select Parallel interface; inactive value selects Serial interface
Performing ATPG
Pattern generation is done using the Parallel interface; the patterns are then post-processed
so that they can also be applied through the Serial interface.
The following constraints must be applied during ATPG in SmartScan testmodes. Faults
untestable due to these constraints will be targeted in non-SmartScan testmodes.
ATPG will not stim Scanins or measure Scanouts during capture cycles
Increases complexity of serialized patterns
No assumption is made as to whether all the scan pins will be contacted when
applying the serialized patterns at the tester or on the board.
Scan Enable must be inactive during capture cycles within Logic Tests
When Scan Enable is Active, SmartScan Controller allows clock to scan chains only
every Nth pulse of the top level clock (CLK)
Will allow Scan Enable being active for first (launch) Pulse in LOS tests
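The shift clocking rule above (the chains see only every Nth pulse of the top-level clock while Scan Enable is active, with N equal to the deserializer width) can be sketched as a trivial filter. The function name is hypothetical and for illustration only.

```python
# Illustrative sketch of SmartScan shift clocking: while Scan Enable is
# active, only every Nth top-level clock pulse reaches the scan chains.
def chain_clock_pulses(total_pulses, n):
    return [p for p in range(1, total_pulses + 1) if p % n == 0]

# With a 4-bit deserializer, 12 top-level pulses yield 3 chain shifts.
print(chain_clock_pulses(12, 4))  # -> [4, 8, 12]
```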
Linehold File
ATPG uses the linehold file generated by RC for SmartScan testmodes. This file must list the
parallel Scan in pins (real or pseudo), the channel mask enable pin (if present, to its inactive
value) and the scan enable pin (to its inactive value) for create_logic_tests.
Ignoremeasures File
ATPG uses the ignoremeasure file generated by RC for SmartScan testmodes. This file lists
the parallel Scan out pins (real or pseudo) and is used to prevent test generation from
measuring data on SO pins during capture.
The scan chain tests generated with this keyword do not use any masking on the first Test
Sequence. This provides a method to debug when chains are not working properly, without
getting the masking logic involved.
Note:
The create_scanchain_tests command will generate multiple test sequences;
only the first test sequence will not have masking and the following ones will have
mask events in them.
In a flow with the SmartScan Serial-only interface, create_scanchain_tests
will default to format=simplified. You can choose to override this by explicitly
specifying format=normal, in which case, the scan patterns can be simulated in
NCSim only after converting them through convert_vectors_to_smartscan.
The following figure shows the changes to the test pattern made by
convert_vectors_to_smartscan:
Compressed Input/Output Stream is replaced with Load_SR / Unload_SR, which contain the
serialized data. Loading of the mask registers also is done using the Deserializer.
Note: The Use_Channel_Masks event is removed and the data within it is combined with
the Load_SR of the next Test Sequence. Hence, the converted patterns cannot be reordered,
as the CME data for a Test Sequence is present in the sequence following it.
The pattern generation is done only once, using the many parallel scanin and scanout pins
(called parallel interface). Once these patterns are converted to use the few serial scanin and
scanout pins (called serial interface), the user has the flexibility of using either set of patterns.
Note:
convert_vectors_to_smartscan can convert the scan chain and logic tests
separately, or convert a single experiment where these test sections are appended
together.
convert_vectors_to_smartscan supports conversion of logic tests that were
generated with testreset=yes on the ATPG command line.
Note that this sequence will replace the mode initialization sequence of the SmartScan
testmode. Therefore, this sequence must perform the same operations as the testmode
modeinit sequence, with the exception of setting the SMARTSCAN_PARALLEL_ACCESS,
Scan Enable, and Channel Mask Enable signals to their inactive values.
The initialization sequence must have the attribute smartscan_modeinit and the name of
this sequence must be specified for convert_vectors_to_smartscan through the
testsequence keyword. The following is a sample SmartScan init sequence supplied to
convert_vectors_to_smartscan:
TBDpatt_Format (mode=node, model_entity_form=name);
[ Define_Sequence smartscan_initseq 1 (smartscan_modeinit);
[ Pattern 1.1 (pattern_type = static);
Event 1.1.1 Stim_PI ():
"DFT_compression_enable"=1
.......
] Define_Sequence smartscan_initseq 1;
A SmartScan Description file contains information on the SmartScan structures present in the
netlist, that is, mapping of serializer and deserializer flops to the corresponding primary
inputs/outputs. Each bit (flop) in the Deserializer will map to a primary input pin (or pseudo
primary input pin added by edit-model) with test function SI, CME or CMI. Similarly, each bit
of the Serializer will map to a primary output pin or pseudo primary output pin. The file also
provides the mapping between the deserializer/serializer bits and the scan-in/scan-out used
to serially shift data into the deserializer/from the serializer.
The write_et command in RC generates the SmartScan description file in the ASCII
format. When using write_compression_macro to generate the SmartScan macro, this
file must be created manually.
An Update register can also be optionally present between the deserializer and the
decompressor. The update register is also a shift register and is of the same length as the
deserializer register. The SmartScan description file will provide the mapping of flops/bits in
update register with the corresponding primary inputs pins.
Line comments
//
--
Block comments
Block comments can span multiple lines. They start with "/*" and end with "*/", for
example: /*block of comments*/
Header
The SmartScan description file can optionally include a header containing comments that
describe the file's contents. The parser treats the complete header as a comment; it must
be enclosed using the line comment syntax shown above.
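Before parsing, the comment forms listed above (// and -- line comments, /* */ block comments) must be stripped. The sketch below is an illustrative pre-processing step, not part of Encounter Test; the function name and regular expressions are assumptions.

```python
import re

# Illustrative sketch: strip the comment forms the SmartScan description
# file allows before parsing its statements.
def strip_comments(text):
    text = re.sub(r'/\*.*?\*/', ' ', text, flags=re.S)  # /* block */ comments
    text = re.sub(r'(//|--).*', '', text)               # // and -- line comments
    return text

src = '// header\nSMARTSCAN_MACRO_VERSION=1.0; /* macro rev */\n'
print(strip_comments(src).split())  # -> ['SMARTSCAN_MACRO_VERSION=1.0;']
```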
The SmartScan description file should specify the version of the SmartScan macro that was
used to generate the file. The syntax is as follows:
SMARTSCAN_MACRO_VERSION=<version_number>;
Example:
SMARTSCAN_MACRO_VERSION=1.0;
The SmartScan macro version starts at 1.0 and increments whenever RC changes the
inserted SmartScan hardware (serializer, deserializer, clock controller, and so on). This
allows Encounter Test to identify which version of the hardware it is dealing with when there
are incremental hardware updates.
Statement Syntax
A statement defines a serializer, deserializer, or update register and maps its flops to the
corresponding primary input (or pseudo primary input) pins. It has the following syntax:
SMARTSCAN_REG = <Reg_type> {
[Serial_Primary_PIN = <serial_pin_name>;]
REG_BIT_CORRESPONDENCE = (
<Hierarchical_flop_name>, BIT_INDEX = <bit_index>, PIN = <primary_pin_name>;
<Hierarchical_flop_name>, BIT_INDEX = <bit_index>, PIN = <primary_pin_name>;
.
.
)
};
Here:
Serial_Primary_PIN: Specifies the primary (or pseudo) input or output pin. It is not
specified for Reg_type=DESERIALIZER_UPDATE_REG. It can be one of the following:
SERIAL_SCAN_IN
SERIAL_SCAN_OUT
The flopname can also be enclosed in double quotes (""), to support names having special
characters or escaped names. The use of double quotes is not mandatory for simple names
which do not have any special characters.
For the deserializer and update register: in serial mode, the flop closest to the SSI pin has
bit_index 1, the next closest has bit_index 2, and so on.
For the serializer: in serial mode, the flop closest to the SSO pin has bit_index 1, the next
closest flop has bit_index 2, and so on.
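The bit_index convention above can be sketched directly. This is an illustrative helper with hypothetical flop names; the real indices come from the netlist order relative to the serial pin.

```python
# Illustrative sketch of the bit_index convention: flops listed from the
# serial pin (SSI or SSO) inward are numbered 1, 2, 3, ...
def assign_bit_indices(flops_closest_first):
    return {name: i + 1 for i, name in enumerate(flops_closest_first)}

# Hypothetical deserializer flops listed from the SSI pin inward.
print(assign_bit_indices(["deser_ff0", "deser_ff1", "deser_ff2"]))
# -> {'deser_ff0': 1, 'deser_ff1': 2, 'deser_ff2': 3}
```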
Statements
The language is case insensitive; although the syntax above shows the statement elements
in uppercase, they can be entered in any case. Note that the names of cells, instances,
pins, and so on must be in the same case as they appear in the model.
Each statement must end with a semicolon (;).
Use of braces ({ }) and parentheses (( )) is mandatory as shown above.
Use of '=' is mandatory, where needed, as shown in the syntax above.
Use of comma (,) is mandatory, where needed, as shown in the syntax above.
Comments may appear anywhere white space may appear; white space is either a blank or
a newline character.
When the SmartScan Parallel Access signal is a Primary Input, the tester directly controls the
signal and automated conversion is typically sufficient. For cases where a custom SmartScan
initialization sequence is necessary, use the sequencefile keyword to define a custom
mode initialization sequence of type smartscan_modeinit to be used during the
conversion process.
Example 1
Example 2
Example 3
SmartScan Parallel Access signal is a PPI named 'smartscan_parallel_access' and is tied off
internally within the design to the inactive state as is common for Serial-Only SmartScan
configurations. In this case, the keywords sequencefile and testsequence are not
required. The signal is hard-wired on-chip to the correct state for serialized SmartScan
vectors.
Example 4
In the above examples, if the SmartScan Parallel Access PI or PPI name does not have the
characters PAR followed by the characters ACCESS, then the testmode mode initialization
sequence will not be automatically converted. In this case, the keyword sequencefile is
required for convert_vectors_to_smartscan to specify this custom sequence of type
smartscan_modeinit to be used during the conversion process.
Figure 7-4 Design Flow for SmartScan Compression with Serial Only Interface
(Flow: Convert to Serialized Patterns → Write Vectors)
In this scenario, the RC-generated Verilog netlist contains only a few scan pins for the Serial
interface.
While building the model for Encounter Test, the editfile generated by RC is used to add
pseudo SI/SO pins to the model to create a dummy Parallel interface to facilitate test
generation. Build Testmode and test generation assume the presence of N scanin and N
scanout pins (Parallel interface), but only a few of those pins actually exist in the hardware.
The pseudo pins are added by specifying the editfile keyword for build_model.
In this case, only the serial interface can be used when applying the patterns, either at the
tester or on the board.
This section assumes that FULLSCAN patterns are passing Verilog simulation without any
miscompares. Otherwise, debugging needs to start with the FULLSCAN miscompares. Also,
verify that the SmartScan description file content matches the netlist, especially with pipelines
present, as there is very limited verification support in convert_vectors_to_smartscan
for such scenarios.
Below is the recommended flow and areas to investigate when faced with Verilog simulation
miscompares of the SmartScan converted patterns.
1. Verify that ATPG patterns in the SmartScan testmode(s) are passing parallel Verilog
simulation. If there are failures during parallel Verilog simulation, it is likely due to some
problems either with the generated ATPG patterns or the functional logic and its modeling
in NCsim. These miscompares should be debugged using conventional pattern debug
methods.
2. If the design has a real parallel interface, then first run serial simulation of the ATPG
patterns in the SmartScan testmode(s) (using the real parallel interface). This will ensure
that the compression logic, including masking, is functioning correctly. If these patterns
pass serial simulation, then the debug of the converted patterns can focus primarily on
verifying the operation of the SmartScan logic.
3. Check whether the converted scan chain tests are passing serial simulation. The
recommendation is to create the scan chain tests with format=simplified and then
convert these, as these tests do not contain any explicit scan shifts. The simplified scan
chain tests contain a test sequence where no masking is applied (only contains load and
unload of channel data) and additional test sequences that also include masking events.
4. If the scan chain test without masking fails, then the problem is likely with the SmartScan
initialization sequence, scan shift operation, or pipelines in the design. The failures
should be unrelated to masking or OPCG logic and their converted data.
a. If the scan chain test without masking fails, verify that the SmartScan initialization
sequence (supplied to convert_vectors_to_smartscan) is as expected. The
SmartScan init sequence should match the testmode modeinit sequence, with the
following exceptions:
The SmartScan parallel access signal must be at the opposite of its value in the
testmode modeinit sequence.
The Scan Enable and Channel Mask Load Enable signals must be at their
inactive state at the end of the SmartScan initialization sequence. Ideally, they
should also be at that value at the end of the testmode modeinit sequence.
b. Verify the clocking of the scan flops during simulation. The scan flops must be
clocked once every N cycles, where N is the width of the deserializer register.
c. Similarly, verify the clocking to the pipeline registers on the scan path. Pipelines
between the deserializer (or serializer) and the channels must be clocked similarly
to the scan flops.
d. External pipelines must be clocked during every cycle of the top-level test clock or
of the OPCG load clock (when present). External pipelines must not change state
during the launch-capture cycles.
5. If the scan chain test without masking passes but the chain tests with masking fail, then
the issue is likely with the mask data and/or its loading through the deserializer registers.
a. Convert the patterns with and without the command line option
scanenablereset=yes/no to check if both of these pattern sets pass serial
simulation. Patterns converted with scanenablereset=yes (default) will cause
the SmartScan controller to be reset after loading the mask registers, whereas there
is no such reset when using scanenablereset=no.
b. Verify the clocking of the mask registers during simulation. Similar to the scan flops,
the mask bits should be clocked once every N cycles, where N is the width of the
deserializer register. Similarly, verify the clocking to the pipeline registers on the
mask load path.
6. If all the scan chain tests pass but the logic tests fail, then check the following in the logic
tests.
a. The scan enable signal must be at its inactive state during the launch-capture
cycles. This must have been accomplished either by providing a linehold file during
create_*_tests or specifying a Test Constraint (+/-TC) on the scan enable pin
when building the testmode.
b. Verify that linehold and ignoremeasure files had been provided during
create_*_tests. The linehold file must hold all the scanin pins (except CME) to
X and the Scan Enable and Channel Mask Load Enable signals to their inactive
values. The ignoremeasure file must contain all the scanout pins. If the design has
bi-directional scan pins, then the scanin pins must also be added to the
ignoremeasure file. The bi-directional scanin and scanout pins must be stimmed to
Z in the linehold file. The intent of these files is to ensure the ATPG patterns can be
converted successfully and there is no loss of pattern quality when applied at the
tester.
c. Determine whether all the logic tests are failing or only the ones with masking events
in them.
d. When present, verify the clocking of the OPCG registers during simulation. The
OPCG side-scan flops and pipelines on the OPCG load path must be clocked during
every cycle of the top-level OPCG load clock.
SmartScan Limitations
wide2 masking is not supported for the SmartScan architecture.
Encounter Test supports SmartScan compression inserted with OPCG by RTL Compiler for
both serial and parallel interfaces.
In this scenario, external pipelines are present on the serial scan pins only. These pipelines
participate during the Deserializer and Serializer shifting (i.e., when
smartscan_parallel_access = 0) and also during the pattern generation using the
parallel interface (smartscan_parallel_access = 1).
If these pipelines need to be bypassed during pattern generation, they can be bypassed in
the parallel mode by using the smartscan_parallel_access as the select signal for the
bypass Mux around the pipeline.
SmartScan with serial and parallel interface
Figure 7-7 Serial and Parallel Interface SmartScan with External Pipelines
In this scenario, external pipelines are present on the serial scan pins as well as the other
parallel interface pins. All external pipelines will be visible to ATPG if they are not bypassed
in the parallel interface. Only the external pipelines on the path to/from the Deserializer/
Serializer will participate during serial scan shifting.
(Encounter Test task flow: Build Testmode → Run create_scanchain_tests
format=simplified → Run create_logic_*_tests → Run convert_vectors_to_smartscan →
Write Vectors)
Build Testmode
Add pipeline information to the Assign file using the pipeline_depth option. The
pipeline_depth value specified in the pinassign file to build_testmode must
include external pipelines that are visible to ATPG. Here is a sample syntax based on
Figure 7-6:
assign pin=PSI1 test_function=SI, CMI, pipeline_depth=3;
assign pin=PSO1 test_function=SO, pipeline_depth=3;
assign pin=PSI2 test_function=SI, CMI, pipeline_depth=1;
assign pin=PSO2 test_function=SO, pipeline_depth=2;
There may be cases where the external pipelines are bypassed in the parallel mode
(smartscan_parallel_access=1) and only participate during the Deserializer and
Serializer shifting in the serial mode (smartscan_parallel_access=0). In such
cases, such external serial pipelines will not be visible in the testmode or to the test
generation process. To facilitate the successful conversion of the ATPG patterns, the
testmode must be supplied with information about these external serial pipelines so that
the patterns are generated to account for the additional cycles needed to load through
these serial pipelines.
For the scenario mentioned above, use the new keyword
smartscanMaxSerialPipeDepth=<integer> in the Assign file to specify the
maximum pipeline depth on the serial path to the SmartScan registers. This is the
maximum depth between the input and output side.
For example, consider a design where there are four external serial pipelines on the path
from SSI to the Deserializer and six external serial pipelines on the path from Serializer
to the SSO. These pipelines are bypassed in the testmode (that is, where
smartscan_parallel_access=1) but participate in the serial mode of operation
during the shifting of the Deserializer and Serializer registers. In this case, the Assign file
must contain the statement smartscanMaxSerialPipeDepth=6.
For scenarios where the external pipelines also participate in the parallel mode (that is,
visible in the testmode), there is no need to specify this keyword in the Assign file. It
should be sufficient to describe these pipelines in the SmartScan Description file using
the syntax described later (SERIAL_PIPE_DEPTH,
PRE_DESERIALIZER_PIPE_DEPTH, POST_SERIALIZER_PIPE_DEPTH).
Run create_scanchain_tests format=simplified
Run this command to test the scan chain integrity because, when external pipelines are
present, the scan chain test patterns cannot be converted with explicit shifting.
Run create_logic_*_tests
Run this command to generate test patterns. The generated patterns will contain padding
data for overscan cycles. Refer to Encounter Test: Reference: Commands for more
information on test generation commands.
Run convert_vectors_to_smartscan
Run this command to convert the parallel patterns produced by ATPG for compression parts
to serial patterns required by the SmartScan architecture. Use the following keywords in the
SmartScan Description File to specify external pipelines:
Specifying external serial pipelines that participate in De/Serializer shift operation -
These pipelines exist on the path from SSI to Deserializer and/or from Serializer to
SSO during the SmartScan serial mode (smartscan_parallel_access=0).
Specify the optional keyword SERIAL_PIPE_DEPTH = <integer>; in
SMARTSCAN_REG statement.
Specifying External pipelines that are visible to Test Generation (pipelines before
Deserializer) - These pipelines exist between the Parallel Scanin pins and the
Deserializer
Specify the keyword PRE_DESERIALIZER_PIPE_DEPTH=<integer> on the
correspondence pin that has these pipelines.
The pipeline depth must NOT include internal pipelines (i.e., between Deserializer
& channels)
Specifying External pipelines that are visible to Test Generation (pipelines after the
Serializer) - These pipelines exist between the Serializer and the Parallel Scanout
pins.
Specify the keyword POST_SERIALIZER_PIPE_DEPTH=<integer> on the
correspondence pin that has these pipelines.
The pipeline depth must NOT include internal pipelines (i.e., between channels &
Serializer)
Here is the sample description file syntax for the scenario depicted in Figure 7-7.
SMARTSCAN_REG = DESERIALIZER_SHIFT_REG {
SERIAL_SCAN_IN = PSI1;
SERIAL_PIPE_DEPTH = 2;
REG_BIT_CORRESPONDENCE = (
<Deserializer_flop_name>, BIT_INDEX = 1, PIN= PSI1,
PRE_DESERIALIZER_PIPE_DEPTH = 2 ;
<Deserializer_flop_name>, BIT_INDEX = 2, PIN= PSI2,
PRE_DESERIALIZER_PIPE_DEPTH = 2;
)
};
SMARTSCAN_REG = DESERIALIZER_UPDATE_REG {
REG_BIT_CORRESPONDENCE = (
<Update_flop_name>, BIT_INDEX = 1, PIN= PSI1;
<Update_flop_name>, BIT_INDEX = 2, PIN= PSI2;
)
};
SMARTSCAN_REG = SERIALIZER_SHIFT_REG {
SERIAL_SCAN_OUT = PSO1;
SERIAL_PIPE_DEPTH = 1;
REG_BIT_CORRESPONDENCE = (
<Serializer_flop_name>, BIT_INDEX = 1, PIN= PSO1,
POST_SERIALIZER_PIPE_DEPTH = 1;
<Serializer_flop_name>, BIT_INDEX = 2, PIN= PSO2,
POST_SERIALIZER_PIPE_DEPTH = 1;
)
};
Here is the sample description file syntax for the scenario depicted in Figure 7-6 where
the pipes are bypassed during the parallel mode. Therefore, there are no
pre_deserializer_pipe_depth or post_serializer_pipe_depth keywords.
SMARTSCAN_REG = DESERIALIZER_SHIFT_REG {
SERIAL_SCAN_IN = PSI1;
SERIAL_PIPE_DEPTH = 2;
REG_BIT_CORRESPONDENCE = (
<Deserializer_flop_name>, BIT_INDEX = 1, PIN= PSI1;
<Deserializer_flop_name>, BIT_INDEX = 2, PIN= PSI2;
)
};
SMARTSCAN_REG = DESERIALIZER_UPDATE_REG {
REG_BIT_CORRESPONDENCE = (
<Update_flop_name>, BIT_INDEX = 1, PIN= PSI1;
<Update_flop_name>, BIT_INDEX = 2, PIN= PSI2;
)
};
SMARTSCAN_REG = SERIALIZER_SHIFT_REG {
SERIAL_SCAN_OUT = PSO1;
SERIAL_PIPE_DEPTH = 1;
REG_BIT_CORRESPONDENCE = (
<Serializer_flop_name>, BIT_INDEX = 1, PIN= PSO1;
<Serializer_flop_name>, BIT_INDEX = 2, PIN= PSO2;
)
};
8
Generating IEEE 1687 (IJTAG) Compliant
Macro Tests
The process flow for IEEE 1687 (IJTAG) compliant macro test generation is shown in the
following figure:
Refer to Performing Build Model in the Encounter Test: Guide 1: Models for more
information.
For building the testmode, the mode initialization (modeinit) sequence, which is provided by
the user in the mode initialization file, starts the reference oscillator(s), initializes fixed value
registers and sets other such constraints that stay constant for the testmode that is
generated.
Refer to Performing Build Test Mode in Encounter Test: Guide 2: Testmodes for
additional information.
Note: For TAP-based designs, the modeinit sequence should end in the Run-Test-Idle TAP
state. It is not required that the ScanRegisters in the ICL be defined as scan chains in these
testmodes. The access and operations of these ScanRegisters are inferred from the ICL files.
Refer to Correlation between ICL, PDL, MIPD and IJTAG Description Files on page 180 for
more information.
Reading ICL
The read_icl command parses the input ICL files and generates the Macro Isolation
Database (MIPD) files.
Note: Not all ICL constructs and keywords listed in the 1687/v1.71 standard are supported in
the current release. Refer to Assumptions and Limitations on page 193 for the list of
constructs that are not currently supported.
If you specify multiple ICL files (as a comma-separated list) through the iclfile keyword,
each file is parsed individually and then processed by read_icl to generate a single MIPD
file.
Refer to read_icl -H or man read_icl for information on command syntax and supported
options.
The output MIPD file is generated in the tbdata directory and is named as follows:
mipd.<testModeName> if testmode is specified
mipd if testmode is not specified
The key steps in the ICL parsing and analysis done by the read_icl command are as
follows:
Perform syntax checks on the ICL files.
Ensure ICL complies with the semantic rules specified in the 1687 specification
document.
Identify the macros/instruments in the ICL that will participate in the PDL retargeting.
All modules defined in the ICL are assumed to be macros, and information about
these macros is saved in the MIPD. A macro instance can belong to multiple
ALGORITHMs.
Gather all ScanInterfaces defined in the ICL. Each chip-level ScanInterface is a means
to access internal registers. ScanInterfaces are required to be defined and act as the
starting point for ICL processing for generating operations.
The scanInterface must be defined explicitly in the ICL file. Implicit scanInterfaces are not
supported in the current release.
Parse the AccessLink statement and associate BSDL instruction names with
ScanInterfaces.
Extract correspondence for different port types on the macro instances. Correspondence
can be only to a chip-level IO.
Establish data correspondence for ports of type DataInPort and DataOutPort.
While in traditional Macro Test, the sequence to operate a scan chain is defined or derived as
part of building the testmode, in IJTAG, the sequences to operate ICL scanRegisters are
derived from analysis of the ICL file. For each scanInterface consisting of one or more
scanRegisters, its scan sequence contains the following steps:
Scan Preconditioning Sequence that sets up access to the scanRegister and puts it in
shift mode of operation. For example, this may involve loading the TAP with an instruction
to select the scanRegister and then moving to the Shift-DR state. For non-TAP designs,
this may be simply setting the shift enable signal to its active value.
Scan Sequence that performs an overlapped load/unload of the data for the register.
Scan Exit Sequence that returns to a stability state. For example, this may involve
moving the TAP back to Run-Test-Idle. For non-TAP designs, the shift enable would be
set to its stability value.
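As an illustration only (not Encounter Test code), the three-step sequence above for a TAP-based design can be sketched as data; the event and state names below are hypothetical placeholders:

```python
def build_scanop_sequence(instruction, scan_data):
    """Sketch of the three-part scanop sequence for a TAP-based design.

    Returns an ordered list of (event, argument) tuples: preconditioning
    selects the scanRegister and enters shift mode, the scan step performs
    the overlapped load/unload, and the exit step returns to stability.
    """
    precondition = [
        ("load_instruction", instruction),   # select the scanRegister
        ("goto_tap_state", "Shift-DR"),      # put the register in shift mode
    ]
    scan = [("shift", bit) for bit in scan_data]   # overlapped load/unload
    exit_seq = [("goto_tap_state", "Run-Test-Idle")]  # back to stability
    return precondition + scan + exit_seq
```

For a non-TAP design, the preconditioning and exit steps would instead toggle the shift enable signal between its active and stability values.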
SCANLENGTH = Length;
}
[CHAIN {
}]
)
SCANEXIT = (
Entity = Value ;
[ ; Entity = Value ;]
)
Caution
Support for multiple CHAIN keywords has not been tested for the current
release.
SCANEXIT :
This provides the ScanExit sequence that takes the register out of the shift state and
returns it to the stability state. The syntax is similar to a Scanpreconditioning
statement.
All the ports specified with ENTITY are retargeted ports. Encounter Test finds the
appropriate chip-level ports for the macro pins specified in the ScanInterface statements
in ICL and writes the same in MIPD.
Entries in square brackets [ and ] are optional.
Use of semicolons and brackets, as shown, is mandatory. All keywords are case
insensitive.
Handling AccessLinks
The Scanpreconditioning section Entity contains the special keyword AccessLink with
the following syntax:
AccessLink.<EntityName>.<InstructionName>.<ScanInterfaceName>.<ActiveSignalName>
The InstructionName will specify the TAP instruction name that, when loaded in the TAP,
will make the specified ActiveSignal true. Note that < and > are explicit and mandatory
delimiters in the syntax of the above statement.
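Assuming the field values themselves contain no dots, splitting such an entity string into its fields can be sketched with the following hypothetical helper (not part of Encounter Test):

```python
def parse_accesslink_entity(entity):
    """Split an AccessLink entity of the form
    AccessLink.<EntityName>.<InstructionName>.<ScanInterfaceName>.<ActiveSignalName>
    into named fields. The angle brackets are literal delimiters in the
    syntax, so they are stripped from each field here.
    """
    parts = [p.strip("<>") for p in entity.split(".")]
    if len(parts) != 5 or parts[0] != "AccessLink":
        raise ValueError("not an AccessLink entity: " + entity)
    keys = ("entity_name", "instruction", "scan_interface", "active_signal")
    return dict(zip(keys, parts[1:]))
```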
Specify verify=yes on read_icl command line to run the following 1687 verification
checks that detect any issues with the input ICL and ensure that the input correlates with the
netlist.
For each module in ICL, there is a corresponding module in the netlist.
Require that the ICL instance name matches the netlist instance name (including
hierarchy).
After processing the ICL, the tool constructs the full hierarchical instance name as
specified in ICL for each of the macro instances. It then accesses the Encounter Test
model for each of these instances and matches the name with the netlist. If the tool does
not find a matching name, a warning is issued and no MIPD is generated for the specific
instance. This results in an appropriate error/warning message being issued from
migrate_pdl_tests if you try to read/write these macro pins via the PDL.
For each DataInPort and DataOutPort in the ICL, there is a corresponding pin on the
corresponding module in the netlist.
The full hierarchical names of the DataInPort and DataOutPort, as constructed from ICL,
are matched with the Encounter Test model.
If the specified name of the port is not found in the model, the tool issues a warning
message and the specified ports shall be removed from the portGroup for the
<module_name>_IO operation of the corresponding module. This implies that the
PDL cannot read/write data at these ports and an appropriate error/warning is issued if
you try to do so.
For the path from a chip IO to an instrument in the ICL, there must be a sensitized path
from the same IO to the same pin on the corresponding netlist instance.
After generating the correspondence information for each of the DataInPorts and
DataOutPorts defined in ICL, the tool verifies the correspondence by simulating the
design in Encounter Test. High Speed Scan simulator is used to set up the modeinit state
from the testmode and apply any preconditioning, if available, for the operation. The tool
then simulates a value of 0/1 at the top-level chip pin and checks for the corresponding
value at the corresponding macro pin. If the values do not match, a warning message is
issued and the macro pin is removed from the correspondence statement for the specific
operation. Subsequently, you will not able to read/write to these pins via PDL; an error is
generated for migrate_pdl_tests command if you try to do so.
Note:
Currently, the tool only checks whether a pin really corresponds to a top-level chip
pin or not. In case of warnings, you need to debug the issue manually using the
Encounter Test GUI: open the GUI, set up the testmode after simulating the
modeinit, and simulate a value of 0/1 at the top-level pin. Then manually trace
back the path in the GUI for the logic cone feeding the specific pin and check what
is preventing the pin from corresponding to the top-level pin.
Currently, the verification check is done only for DataInPort and DataOutPort. The
scan related ports and TCKPort are assumed to be verified using BSV.
The attribute REQUIRE_HI can be specified in ICL at the chip IOs or an instance pin level
to identify pins that must be at a constant value 1 in the testmode. This check verifies
whether the specified macro pin is at a constant high value at the test mode stability
state.
The syntax for this attribute is:
Attribute REQUIRE_HI = "YES";
If a macro pin is specified with this attribute, the pin will not be processed for
correspondence generation for the specified macro instance and will not be written in the
mipd for the <moduleName>_IO operation. You will not be able to read/write to this
macro pin via PDL.
The attribute REQUIRE_LO can be specified in ICL to identify pins that must be at a
constant value 0 in the testmode. The support for this check is similar to the check for the
REQUIRE_HI attribute discussed above. The only difference is that the pin will be
checked for a value of 0 instead of 1 at the testmode stability state.
Syntax for this attribute is:
Attribute REQUIRE_LO = "YES";
Refer to PDL file for more information on the syntax of the PDL file and the supported PDL
functions.
The command also takes the MIPD file (generated by read_icl) and the IJTAG description
file as input.
The IJTAG description file, provided through the descfile keyword, contains the BSDL
opcodes for each JTAG instruction referenced by AccessLink, the TAP port information, and
Algorithm information. This avoids the need to have a BSDL file available at the time of pattern
retargeting. Refer to IJTAG Description File for information on the syntax of the IJTAG file.
This command computes and maintains the effective scope as each PDL statement is
processed. This ensures that iCalls are executed with respect to the current scope
and also facilitates instance-specific Pingroup naming.
Parallel simulation of the generated patterns is not possible. Refer to Format of Migrated
TBDbin Patterns for information on the format of the migrated patterns.
The entry-level scope is always the chip and for each ALGORITHM statement, the
process restarts from the modeinit and a new tester loop is generated for each of the
algorithms.
TAP Instruction Opcode - This identifies the opcode for the valid TAP instruction
names specified in the AccessLink statement in the ICL file.
Syntax:
TAP_INSTRUCTION_OPCODE {
<INSTRUCTION_NAME> = <OPCODE>;
<INSTRUCTION_NAME> = <OPCODE>;
}
OPCODE : The opcode that, when loaded in the TAP, enables the specified
instruction. This is a binary value, and its length must be the same for all the opcodes
specified and equal to the length of the instruction register.
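The opcode rules above are mechanically checkable; the following is a sketch of a hypothetical validator (not an Encounter Test utility) for an opcode table:

```python
def check_opcodes(opcodes, ir_length):
    """Verify that every opcode in the table is a binary string of exactly
    ir_length bits, mirroring the rule that all opcodes must share the
    instruction register's length.

    opcodes: dict mapping instruction name -> opcode string.
    """
    for name, opcode in opcodes.items():
        if set(opcode) - {"0", "1"}:
            raise ValueError(f"{name}: opcode {opcode!r} is not binary")
        if len(opcode) != ir_length:
            raise ValueError(f"{name}: expected {ir_length} bits, got {len(opcode)}")
    return True
```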
TAP Port Identification - The TAP port identification statements define the TAP ports
of the device.
Syntax:
TAP_PORTS {
TAP_SCAN_IN = <TDI port name>;
TAP_SCAN_OUT = <TDO port name>;
TAP_SCAN_MODE = <TMS port name>;
TAP_SCAN_CLK = <TCK port name>;
[TAP_SCAN_RESET = <TRST port name>;]
}
Comment Syntax
Line comments:
//
--
#
Block comments:
Block comments can span multiple lines. They start with /* and end with */, for example:
/* block of comments */
TAP_PORTS {
TAP_SCAN_IN = JTAG_TDI;
TAP_SCAN_OUT = JTAG_TDO;
TAP_SCAN_MODE = JTAG_TMS;
TAP_SCAN_CLK = JTAG_TCK;
TAP_SCAN_RESET = JTAG_TRST;
}
PDL file
A PDL file contains procedures to apply test patterns for the macro. The pattern retargeting
engine reads this data and migrates these test patterns at the SoC level.
You can specify one or more PDL files (as a comma-separated list) as input to
migrate_pdl_tests through the pdlfile keyword. If you specify multiple PDL files, each
of those files is parsed individually and then the iProc name specified with the algorithm
(the entry-level iProc) is called.
The iProcsForModule statement carries over the scope from one PDL file to another PDL
file. It is, therefore, recommended to specify one iProcsForModule at the top of every PDL
file. It is also recommended to first specify any PDL file carrying global variables that are
referenced by other PDL files.
PDL files also support the source keyword of TCL. Using this keyword, a PDL file can include
the code from another PDL file. For example:
source b.pdl
This command sources all the code written in the b.pdl file into the current PDL file.
However, if the source keyword is used, it is not possible to specify individual time stamps
separately for the sourced files. This can only be specified in the files that are provided
separately to migrate_pdl_tests.
Encounter Test supports the following PDL commands in the current release.
iApply
This command applies the values previously defined by either iWrite and/or iRead
commands to the hardware.
Syntax:
iApply [-group operationName]
-group: Optional. Specifies the name of the operation.
operationName: The name of the operation as specified in the MIPD file.
If verbose=yes is specified, the command output prints the name of the operation that has been
executed for each command instance. This helps identify the operation that is currently
executed when there are multiple matching operations.
If you do not specify the -group option, the set of macro ports that are read or written to
before the iApply command are identified and matched to an available operation. The first
operation that matches is executed to generate retargeted patterns for the specified ports.
If the tool is unable to match the ports with any of the available operations, it issues an
appropriate error message and you can try using the -group option to explicitly provide the
operation name. An example:
iApply -group Chip_IO;
Note:
If multiple operations match a set of ports, there will be no optimization and the first
matched operation will be executed.
The iApply command cannot be used for clock operations. Use the PDL command
iRunLoop to generate pulses on functional or test clocks.
An empty iApply command without any preceding unprocessed iRead or iWrite
commands will generate appropriate warning messages.
Currently, reading/writing to ports of multiple macros within a single iApply (Operation)
is not supported. You need to add iApply statements after reading and writing to
individual macro ports.
Reading and writing macro I/O ports and scan registers cannot be combined within a
single iApply, as there are different operations for I/O and scan. You need to provide
them as part of separate iApply commands.
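The first-match behavior described above can be sketched as follows; this is a hypothetical helper, with operation names and port sets standing in for what read_icl writes into the MIPD:

```python
def match_operation(operations, touched_ports):
    """Return the name of the first operation whose pinGroups cover every
    port read or written since the last iApply, or None if no operation
    matches (the tool would then ask for an explicit -group).

    operations: ordered dict mapping operation name -> set of port names.
    touched_ports: set of ports referenced by iRead/iWrite commands.
    """
    for name, ports in operations.items():
        if touched_ports <= ports:   # all touched ports belong to this op
            return name              # first match wins; no optimization
    return None
```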
iCall
This command provides a mechanism to invoke an iProc from within another iProc. While an
argument can be passed to the iProc, there is no way to return an argument.
Syntax:
iCall [instanceName.]procName (arguments)*
instanceName: Proper name for the macro instance, for which the PDL commands in the
called iProc should be executed. The macro instance name must exist in the Encounter Test
model.
If a macro instance name is not specified and no current scope is available, the commands
within the called iProc will be executed at the chip-level scope. The default scope is assumed
to be of chip-level macro.
Example:
iCall myproc 10 # calling iProc with name myproc and
# argument 10.
iCall shorty # calling iProc with the name shorty
iCall srt.bdabistgrp12_13.shorty # calling shorty only for
#macro instance
#srt.bdabistgrp12_13
iClock
This command specifies that a system clock is to be running and verifies that the clock port
has a valid controlled source.
Syntax:
iClock <clk_port_name>
Here, clk_port_name is the name of the port at the macro level. The macro-level name for
the port comes via the current scoping. Use the PDL commands iProcsForModule and
iCall to specify the correct scoping in PDL for this command.
Example:
ICL:
ClockPort MySclk {
Source ClockIn;
}
PDL:
iClock MySclk
In this case, the iClock command verifies that the read_icl command has resolved the
macro clock port to its chip-level clock port and has generated a SCK operation that contains
the specified macro clock port. If there is no such operation, the tool will issue a warning
message specifying that the macro clock cannot be pulsed in PDL as its correspondence is
not resolved.
Only clocks that successfully pass this iClock check can be referenced by the -sck
option in the iRunLoop command.
iDefault
This command resets the previously stored value for the pins. This resets the internal tables
of stimuli to allow the full set of primary input and latch patterns to be generated.
Syntax:
iDefault
Example:
iDefault # This calls MTGResetStims() in PDL
iNote
This command passes free-form information to the runtime environment. The information is
stored as keyed data in the generated patterns.
Syntax:
iNote [tbdlevel] [keydata] text;
keydata: Optional. Can be a string of characters; defaults to IJTAG_NOTE if not
specified.
text: Required. Can be a string of characters; it should be enclosed within quotes if it contains
whitespace or special characters.
Note: Specify either only the one required value (that is, text) or all values to the command.
Example:
# Will add keyed data of ALGORITHM_TYPE=PLLLIB on the Tester_Loop
# level of vector data
iNote "TESTERLOOP" "ALGORITHM_TYPE" "PLLLIB";
# Will add keyed data of IJTAG_NOTE=iApply for Write to memory on the
# Test_Sequence level.
iProcsForModule
This command is used to specify the ICL module with which the subsequent iProc
statements are associated. Include this command at the top of the file defining the iProc
statements for a given instrument.
Syntax:
iProcsForModule moduleName
moduleName: The module name as defined in Verilog; it should be present in the
Encounter Test model.
Note: For the current release, specifying the namespace before the module name, as
mentioned in the IEEE 1687 v1.71 standard, is not supported.
Example:
iProcsForModule MbistModule; # MbistModule is the name of the module
iProc
This command identifies the name of the procedure and, optionally, lists any arguments
included as variables in the procedure. The iProc names should be unique for the targeted
module/instrument; if they are not, only the last definition is kept.
Syntax:
iProc procName '{' arguments* '}' '{'commands+'}'
arguments: Space separated ordered list. A pair of arguments enclosed within braces will
constitute an argument and the associated default. Arguments without a default value must
be listed before those with a default value.
Example:
iProc myproc {arg1 arg2 { arg3 24 } { arg4 0x32 } { arg5 1024 }} {
. . .
}
iProc myproc2 { } {
. . .
}
The myproc procedure has five arguments: arg1, arg2, arg3, arg4, and arg5. The last
three arguments have defaults of 24, 0x32, and 1024, respectively.
Each argument defined for an iProc can have an optional default value; however, once you
define an argument with a default, it is mandatory to define defaults for all the subsequent
arguments.
For iCall invocations, arguments are passed in the order they appear in the iProc
command. If the arguments include a default, they can be omitted from an iCall statement.
Once an iCall command omits an argument, all of the remaining arguments should be
omitted as well.
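The default/omission rules above behave like positional parameters with trailing defaults. A sketch of the binding logic, as a hypothetical helper rather than the tool's implementation:

```python
def bind_iproc_args(params, args):
    """Bind iCall arguments to iProc parameters, left to right.

    params: ordered list of (name, default); default None means required.
    Omitted trailing arguments take their defaults; a required parameter
    left unbound is an error, mirroring the iProc/iCall rules.
    """
    bound = {}
    for i, (name, default) in enumerate(params):
        if i < len(args):
            bound[name] = args[i]          # explicit argument wins
        elif default is not None:
            bound[name] = default          # omitted: fall back to default
        else:
            raise TypeError(f"missing required argument: {name}")
    return bound
```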
iPutMsg
This command issues a message during the PDL execution and checks the severity code to
determine whether processing should be terminated for a macro or a macro group.
Syntax:
iPutMsg [messageNumber] [severityCode] text;
messageNumber: Optional. Default is 1. Specify a number less than 1000. The number is
appended to the prefix PDL to create a standard format Encounter Test message number (for
example, PDL-001).
severityCode: Optional. Specify one of the following for the severity code: I for
informational messages, W for warning messages, and S for messages that indicate the
processing for the current macro or group should be stopped. Default is I (Informational).
text: Required. Specify a quoted character string for the message text.
Note: Specify either only one (that is, text) or all values to the command.
Example:
# Will print to log file WARNING (PDL-002): Expect only one macro.
# [end PDL_002].
iPutMsg 2 W "Expect only one macro."
# Will print to log file INFO (PDL-001): Expect only one macro.
# [end PDL_001].
iPutMsg "Expect only one macro."
iRead
This command defines data to be observed and shifted out of the macro during a subsequent
iApply command. Multiple iRead commands can be entered prior to an iApply command.
However, if those commands refer to the same pinGroup, the expected values of the previous
commands will be overwritten.
Note: iReads specified between two consecutive iApplies should belong to the same
operation.
Syntax:
iRead reg_or_port_name value
reg_or_port_name: A valid pinGroup name as specified in the MIPD file, optionally
prefixed with its instance or block name. The name should match the name of a pin or
scanRegister in the ICL. You must specify the entire bus or register and not a partial bit
range. For example:
iRead INSTR3.TDR3 0101
value: A string value, either in binary, hex, or integer format, which specifies the data for
each pin in the pinGroup. The following prefix will define how the value will be interpreted:
Binary Value Prefix: 0b, b, or Lb # L is the length of the value string
Hex Value Prefix : 0h, 0x, h, or Lh # L is the length of the value string
NoPrefix: The default format for value is assumed to be integer
If the binary equivalent of the value string is narrower than the number of pins in the
pinGroup, the rest of the bits are filled automatically, as described below:
Under-sizing an unsized value will result in the assignment of the specified bit values to
the LSBs of the associated pinGroup, with the unspecified most significant bits being
assigned either a 0 or x depending on the most significant bit of the assigned value. If
the MSB is x, then the unspecified bits will be assigned x; otherwise, they will be
assigned 0.
Over-specifying a value or mismatch of the size will result in an error.
Note: The default format for value is integer. However, if the specified string for value has the
same length as the number of pins in the specified pinGroup, then as an exception the format
is assumed as binary and an INFO message for the same is printed.
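The prefix and fill rules above can be modeled with the following Python sketch. The helper name is hypothetical (not part of Encounter Test), and the Lb/Lh length-prefixed forms are omitted for brevity:

```python
def parse_pingroup_value(value, width):
    """Interpret an iRead/iWrite value string for a pinGroup of `width` pins.

    Sketch of the documented rules: 0b/b -> binary, 0h/0x/h -> hex,
    no prefix -> integer, except that an unprefixed string whose length
    equals `width` is treated as binary. Returns a bit string of exactly
    `width` characters (may contain 'x').
    """
    v = value.lower()
    if v.startswith(("0b", "b")):
        bits = v[2:] if v.startswith("0b") else v[1:]
    elif v.startswith(("0x", "0h", "h")):
        hexdigits = v[2:] if v[0] == "0" else v[1:]
        bits = bin(int(hexdigits, 16))[2:].zfill(4 * len(hexdigits))
    elif len(v) == width and set(v) <= set("01x"):
        bits = v   # unprefixed but same length as the pinGroup: binary
    else:
        bits = bin(int(v))[2:]   # no prefix: integer
    if len(bits) > width:
        raise ValueError("over-specified value for pinGroup")
    # Under-sized values fill the unspecified MSBs with 'x' if the
    # value's own MSB is 'x', otherwise with '0'.
    fill = "x" if bits[0] == "x" else "0"
    return bits.rjust(width, fill)
```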
Example:
iRunLoop
This command runs a loop the specified number of times, generating a pulse on the clock pin
specified in the pingroups of the specific operation.
Syntax:
iRunLoop <cycleCount>['-tck'|'-sck' port][-group operationName]
Examples:
iRunLoop 20 // Pulse TCKPort of the macro 20 times
iRunLoop 10 -sck MySclk // Pulse the top-level pin corresponding to macro clock pin
MySclk 10 times
iRunLoop 10 -group counter_TCK // Explicit call to operation counter_TCK for the
macro, to pulse 10 times
Note:
1. The -group option is used to identify an operation name, which should be defined as a
normal operation, with its associated pingroups, correspondence, and preconditioning, in
the MIPD file.
Example:
OPERATION = pulseClk;
PINGROUPS = PULSE_CLK;
CORRESPONDENCE = (
"c4_dmi_refck_p" = "c4_dmi_refck_p", INVERSION = 0;
)
PRECONDITIONING = ( # needed for PULSE_CLK
"c4_mb0_clk_p(0)"=1;
)
2. When the functional clocks are pulsed via the iRunLoop command, TCK will be in its off
state and patterns will not pulse TCK during this time. This is accomplished by keeping
separate operations for TCK and SCK clocks.
3. The functional clocks must be described in the PDL via the iClock command before
using them in the iRunLoop command.
iWrite
This command defines new data for the pins specified in the pinGroup, which will be
controlled through the scan path or primary inputs during a subsequent iApply command.
Multiple iWrite commands can be specified prior to an iApply command. However, if
those commands refer to the same pinGroup, the expected values of the previous commands
will be overwritten.
Note: iWrite specified between two consecutive iApplies should belong to the same
operation.
Syntax:
iWrite reg_or_port_name value
reg_or_port_name: A valid pinGroup name as specified in the MIPD file, optionally
prefixed with its instance or block name. The name should match the name of a pin or
scanRegister in the ICL. You must specify the entire bus or register and not a partial bit
range. For example:
iWrite INSTR1.TDR1 0011
iWrite INSTR2.TDR2 1100
value: A string value, either in binary, hex or integer format, which specifies the data for
each pin in the pinGroup. The following prefix defines how the value will be interpreted:
Binary Value Prefix: 0b, b or Lb # L is the length of the value string
Hex Value Prefix : 0h, 0x, h, or Lh # L is the length of the value string
NoPrefix: The default format for value is assumed to be integer
If the binary equivalent of the value string is narrower than the number of pins in the
pinGroup, the rest of the bits are filled automatically, as described below:
Under-sizing an unsized value will result in the assignment of the specified bit values to
the LSBs of the associated pinGroup, with the unspecified most significant bits being
assigned either a 0 or x depending on the most significant bit of the assigned value. If
the MSB is x, then the unspecified bits will be assigned x; otherwise, they will be
assigned 0.
Over-specifying a value or mismatch of the size will result in an error.
Note: The default format for value is integer. However, if the specified string for value has the
same length as the number of pins in the specified pinGroup, then as an exception the format
is assumed as binary and an INFO message for the same is printed.
Example:
Operations are extracted from the ICL, written into the MIPD, and utilized during pattern
retargeting to identify how PDL commands should be executed on the target design. Each
root level PDL procedure should be included in the IJTAG description file as an Algorithm,
which may be applied to one or more modules in the ICL (macros in the MIPD). Each
Algorithm in turn, can apply one or more operations. The set of operations for a module is
generated as follows:
One operation is generated for each scan interface of a module. This operation is used
for reading and writing the scan registers that are accessed by that scan interface. The
operation includes all the necessary information about the scan preconditioning
sequence, the scan sequence, and the scan exit sequence in the MIPD. Among the
included information are the scan enable toggling, the scan clock pulsing, the scan
input(s) and output(s), and the scan length. The generated operations are named after
the corresponding scan interface (<scaninterface_name>), and this is how they
should be referenced in the PDL, if needed (iApply [-group <scaninterface_name>]).
As mentioned before, the above operations are automatically extracted from the ICL
description of a design, and the supplied PDL has to comply with them. That is, PDL
commands referencing scan registers, primary I/Os, and clocks cannot be mixed in the PDL;
a separate iApply that can be matched to each of their corresponding operations
has to be issued in sequence.
The following section provides the ICL, PDL, and IJTAG description files (flow inputs) and the
generated MIPD (flow intermediate output) for a sample design, highlighting the dependencies
between them.
ICL
Module chip {
TCKPort chip_tck;
ShiftEnPort chip_se;
SelectPort chip_sel;
ScanInPort chip_si;
ScanOutPort chip_so {
Source inst.so;
}
DataInPort chip_inp;
DataOutPort chip_outp {
Source inst.outp;
}
Instance inst Of core {
InputPort tck = chip_tck;
InputPort se = chip_se;
InputPort sel = chip_sel;
InputPort si = chip_si;
InputPort inp = chip_inp;
}
}
Module core {
TCKPort tck;
ShiftEnPort se;
SelectPort sel;
ScanInPort si;
ScanOutPort so {
Source reg[0];
}
DataInPort inp;
DataOutPort outp;
ScanInterface scan {
Port si;
Port so;
Port tck;
Port se;
Port sel;
}
ScanRegister reg[3:0] {
ScanInSource si;
}
}
PDL
iProcsForModule core;
iProc Test{} {
iWrite inp 0;
iApply;
iWrite inp 1;
iRead outp 0;
iApply;
iRead outp 1;
iApply;
}
MIPD
SCANPRECONDITIONING = (
chip.chip_sel = 1;
chip.chip_se = 1;
)
SCANSEQUENCE = (
CLK_PORT = chip.chip_tck;
CHAIN {
SCANLENGTH = 4;
SI_PORT = chip.chip_si;
SO_PORT = chip.chip_so;
}
)
SCANEXIT = (
chip.chip_se = 0;
)
In addition to the proprietary IJTAG description file described in the previous sections, the tool
also supports migrating PDL patterns by reading in a BSDL file, which is automatically
generated by the RTL compiler. This file contains the TAP port information and instruction
opcodes for generating the sequence to control the TAP.
When providing the BSDL file, use the pdlentryfunction keyword for migrate_pdl_tests
to specify the PDL entry function names.
Also, specify the input BSDL file name and its path using the bsdlinput and bsdlpath
keywords.
One of the key aspects of the PDL retargeting is that the patterns after migration are
converted into serial events at the chip I/Os. For example, read/write of scanRegisters may
first manipulate the TAP pins to set up access to the scanRegister, followed by loading of the
register through the TAP interface. There will not be any Scan_Load() / Scan_Unload() events
in the TBDbin that reference flops within the design. This also eliminates the need to
represent the scanRegisters as scan chains in the testmode, which may even be infeasible
for some of the scanRegister configurations described in ICL.
As mentioned earlier, for each scanInterface consisting of one or more scanRegisters, its
scan operation contains the Scan Preconditioning Sequence, Scan Sequence, Scan Exit
Sequence steps.
These steps comprise a scanop sequence for operating the ICL scanInterface. For TAP-
based access methods, each scanInterface is expected to be associated with a unique TAP
instruction; hence there will effectively be one scanop sequence for each TAP instruction in
the ICL. An exception would be when there are multiple scanRegisters and a single TAP
instruction that selects an offline-SIB which determines the scanRegister to be loaded.
Note: Currently, IJTAG does not support either inline or offline SIBs.
The following figure shows the building blocks that comprise a scanop sequence.
Scan operations are represented in the migrated patterns using Encounter Test structure
neutral events, Load_SR and Unload_SR. The Load_SR and Unload_SR events contain
the scan data values and the scanin and scanout pins at which to apply or measure the data,
without any reference to actual flops in the design.
Refer to Encounter Test: Reference: Test Pattern Formats and Encounter Test: Guide
6: Test Vectors for syntax of these events.
The advantage of using structure neutral events is they allow the test data to be represented
without requiring the scan register configuration to be restricted to those supported by an
Encounter Test testmode. These events also allow for a concise representation of the test
patterns as the scan protocol can be described separately from the actual usage of these
events, similar to the Scan_Load/Scan_Unload events.
Each Load_SR (Unload_SR) event specifies a stim (measure) register and the test data to
be applied to that stim (measure) register. The stim (measure) register definition points to a
scanop that describes how to operate that register, the scanin (scanout) pin that is used by
that register, and the length of the register. Note that there may be multiple stim/measure
registers that share the same scanop sequence. In case of parallel scan chains, there will be
one Load/Unload_SR for each SI/SO pair. Since all the parallel scan chains shift
simultaneously, one scanop is sufficient to describe the operation of all the registers. The
following figure shows an exemplary definition for a stim and measure register pair that are
used for scanRegisters named PARALLEL_TDR and GLOBAL_STATUS_TDR in the ICL.
The following figure shows the stim and measure register definitions that link the scanop
defined in Figure 8-2 on page 185.
The following figure puts it all together and shows the overall structure within the TBDbin file.
The Experiment contains the scan sequence definitions followed by the stim and measure
register definitions. This is then followed by the patterns themselves, where the Load_SR
event specifies the test data to be loaded into stim_register #1, which is essentially loading
PARALLEL_TDR. Similarly, the iRead of GLOBAL_STATUS_TDR is represented by the
Unload_SR event, which uses measure_register #2 for this purpose.
write_vectors converts the migrated patterns in TBDBin format into WGL, Verilog, TDL,
or STIL format. Refer to Writing and Reporting Test Data in Encounter Test: Guide 6: Test
Vectors for information on converting patterns into tester formats.
You can specify multiple ClockPort constructs at the module level in the ICL file. These
ports specify the functional clock that needs to be pulsed through PDL. The tool also supports
the DifferentialInvOf construct, which is used in case the functional clocks differ in
polarity. An example is given below:
ClockPort PCK;
ClockPort NCK { DifferentialInvOf PCK; }
Here, the ClockPort statement specifies the port name for the functional clock PCK, and the
DifferentialInvOf construct specifies that the other functional clock NCK of the module
shares the same source as PCK but is phase inverted relative to it.
The DifferentialInvOf construct sets the inversion flag, as shown in the example above,
if the corresponding top-level chip clock pin is common.
The migrate_pdl_tests command pulses the functional clocks in the generated patterns
whenever you pulse them through PDL by specifying the -sck option of iRunLoop. If the
functional clock is a free-running oscillator, the tool generates the Wait_Osc event for the
specified number of clock cycles.
PDL:
iRunLoop 1 -sck MySclk
Output TBDpatt:
[Pattern 1.1.1.2.11.1 (pattern_type = static);
Event 1.1.1.2.11.1.1 Pulse ():
"chip_clk"=+;
]Pattern 1.1.1.2.11.1;
PDL:
iRunLoop 5 -sck MyOscClk
TBDpatt:
Event 1.1.1.1.1.11.2 Wait_Osc (cycles=5):
"P1_SYSCLOCK_1";
Whether a functional clock is a free-running oscillator is derived from the testmode, as
free-running oscillators are defined using the +/- OSC test function. The testmode
modeinit starts the free-running oscillator clocks using the Start_Osc event, as shown
below:
Example Modeinit:
Event 1.1.2.1.1.10.1 Start_Osc (up 4.000000 ns, down 4.000000 ns,
pulses_per_cycle=8):
"P1_SYSCLOCK_1"=+;
Use the following syntax to specify multiple TCKPort statements at the module level in the
ICL file:
Module counter
{
.
TCKPort <clk 1>;
TCKPort <clk 2>;
.
}
These TCKPorts should have a connection to the respective top-level TCKPorts. Also, the
additional top-level TCKPorts need to be correlated to the 1149 TCK in the pinassign file
using the correlate statement as in the following example:
correlate chip_corr_tck +chip_tck;
The multiple TCK port information is passed through the MIPD file to migrate_pdl_tests,
as shown below:
MACRO = "Macro_Instance_Name" [, ,"Macro_Instance_Name"];
ALGORITHM = Algorithm_Name;
[GROUP = GROUP_NUMBER;]
....
"CLK_port" = "Entity"[,Entity]*;
The CLK_port statement accepts multiple comma-separated clock pin names, specified
above as Entity. These pin names are the resolved pin names at the top-level block
corresponding to the macro TCK ports.
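As an illustrative sketch, a filled-in MIPD entry for a counter macro with two TCK ports might look like the following. The instance name, algorithm name, and pin names here are hypothetical, not taken from a real design:

```
MACRO = "top.u_counter";
ALGORITHM = counter_pdl;
"TCK" = "chip_tck1","chip_tck2";
```

Here, chip_tck1 and chip_tck2 stand for the resolved top-level pin names that correspond to the macro's two TCK ports.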
In addition, the TCK operation <module_name>_TCK contains all the TCK ports for the
module and their corresponding ports at the top-level block (an example is shown below).
This operation is executed whenever you invoke the iRunLoop PDL command.
OPERATION = counter_TCK;
PINGROUPS = clk1, clk2;
CORRESPONDENCE = (
"clk1" = "Pin.f.l.chip.nl.chip_clk1", INVERSION=0;
"clk2" = "Pin.f.l.chip.nl.chip_clk2", INVERSION=0;
)
Based on the above data in the MIPD, migrate_pdl_tests pulses these clock pins
simultaneously every time TCK is pulsed by Encounter Test, that is, in the preconditioning
for the TAP and shift operations, and also every time you explicitly pulse them using the
iRunLoop command.
The following figure depicts a simple series connection for three instruments.
To support such a scenario, ICL provides the scanInterface statement that defines the list of
ports which comprise a scan interface. A scan interface consists of a ScanInPort and a
ScanOutPort with related control signals (SelectPort). A sample is shown below:
Module WrappedInstr_A {
ScanInPort SI;
ScanOutPort SO {Source TDR [0] ;}
ShiftEnPort SE;
CaptureEnPort CE;
UpdateEnPort UE;
SelectPort SEL;
ResetPort RST;
TCKPort TCK;
ScanInterface scan_client {Port SI; Port SO; Port SEL ;}
ScanRegister TDR [8:0] {ScanInSource SI ;}
}
If a module statement in ICL defines a ScanInPort, a ScanOutPort, and a SelectPort, but does
not define a ScanInterface, an implicit ScanInterface comprising these ports is assumed by
default. The name of the default implicit ScanInterface is <module_name>_scan.
If there are multiple ports of either type and an explicit ScanInterface is not defined, the tool
does not assume an implicit ScanInterface and instead issues a warning or error. The
implicit ScanInterface does not apply to a top-level module, where an explicit ScanInterface
is required.
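As a sketch, the following hypothetical module defines exactly one ScanInPort, one ScanOutPort, and one SelectPort without an explicit ScanInterface; the tool would assume an implicit scan interface named WrappedInstr_B_scan over these three ports:

```
Module WrappedInstr_B {
    ScanInPort SI;
    ScanOutPort SO { Source TDR[0]; }
    SelectPort SEL;
    ScanRegister TDR[7:0] { ScanInSource SI; }
}
```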
Note that the iApply statement is specified only after all the values are applied to the target
TDR(s) in the scan chain. The iApply can be issued from the chip level or from the scope of
any of the instances associated with the TDRs where the last value is written. The tool
automatically determines the super operation that needs to be invoked.
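As an illustrative PDL fragment (the instance and register names are hypothetical), values are first written to the TDRs in the chain, and a single iApply then commits them in one scan operation:

```
iWrite instA.TDR 0b000000001;
iWrite instB.TDR 0b111111111;
iApply;
```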
If the PDL file does not contain user data for one or more of the scan registers that are part
of the chain, the scan registers without data are loaded with the values previously written to
them.
If no value was provided earlier, the default value, if any, is loaded. If no default is assigned
to the scan register, 0 is assumed as the default value.
You can use the migrate_pdl_tests keyword setdefaultvalue=0/1 to set this default
value to 0 or 1.
- While the read_icl command does not require the ET model (build_model) to
  process the ICL and extract the structures and sequences, it is recommended that a
  production environment follow the documented flow.
- AccessLink support is restricted to 1149.1 TAP controllers; any other scan interfaces
  are not supported in this release.
- If a TAP is present, the connection in ICL for a scan register ends at the TAP scan-in
  port (TDI) on the input side and at the TDO port on the output side.
- The select signal of a ScanMux can only connect to one of the following sources:
  - A SelectPort at the top level, or a SelectPort in a ScanInterface defined at the top
    level.
  - An Active Signal statement, or a SelectPort in the ScanInterface specified in the
    AccessLink statement.
- The update stage of a ScanRegister for a non-TAP-based design is not supported in
  this release.
- The following ICL constructs/features are not supported in this release. Note that this
  is not an exhaustive list but is intended to highlight only some key constructs:
  - Inline or offline SIBs
  - LogicSignals
  - ClockMux
  - OneHotDataGroup
  - OneHotScanGroup
  - Broadcast to multiple registers in ICL
  - DataMuxes and DataRegisters
- Partial read/write to a scan register or a vectored port is not supported in this release.
- The FreqMultiplier attribute in ICL is not supported for the iClock command;
  hence, the cumulative multiplication factor or division ratio along the clock path is not
  calculated.
Index

C
  customer service, contacting 9
H
  help, accessing 9
O
  OPC logic 19
T
  test mode
    OPC logic 19
U
  using Encounter Test
    online help 9