

Early Formal Verification of Conditional Coverage Points to Identify Intrinsically Hard-to-Verify Logic
C. Richard Ho, Michael Theobald, Martin M. Deneroff, Ron O. Dror, Joseph Gagliardo and David E. Shaw*
D. E. Shaw Research, New York, NY {richardh,theobald,deneroff,dror,gagliard,shaw}@DEShawResearch.com*

* Correspondence to shaw@DEShawResearch.com. David E. Shaw is also with the Center for Computational Biology and Bioinformatics, Columbia University, New York, NY 10032.

ABSTRACT
Design verification of complex digital circuits typically starts only after the register-transfer level (RTL) description is complete. This frequently makes verification more difficult than necessary because logic that is intrinsically hard to verify, such as memories, counters and deep first-in, first-out (FIFO) structures, becomes immutable in the design. This paper proposes a new approach that exploits formal verification of conditional coverage points with the goal of early identification of hard-to-verify logic. We use the difficulty of formal verification problems as an early estimator of the verification complexity of a design. While traditional verification methods consider conditional coverage only in the design verification phase, we describe an approach that uses conditional coverage at a much earlier stage: the design phase, during which changes to the RTL code are still possible. The method is illustrated using real examples from the verification of an ASIC designed for a specialized supercomputer.


Categories and Subject Descriptors


B.5.2 [Hardware]: Design Aids. J.6: Computer-Aided Engineering.

General Terms
Algorithms, Design, Verification.

Keywords
Formal verification, conditional coverage, code coverage, coverage hole, verifiability, inconclusive results.

1. INTRODUCTION
Modern verification methodologies for large, complex digital circuits, such as microprocessors, graphics chips and application-specific integrated circuits (ASICs), commonly include both dynamic simulation and functional formal verification. Dynamic simulation is normally used as part of a coverage-driven verification (CDV) methodology where event monitors (coverage points) within the RTL description provide feedback on the effectiveness of the simulation vectors applied. The coverage points simply indicate whether dynamic simulation has exercised a particular event, which may be as simple as execution of a line of RTL code (code coverage) or as complex as a functional behavior of the design (functional coverage). The goal of CDV is to utilize coverage metrics to guide stimulus creation, using a testbench environment, to exercise the RTL code through as much of its behavior as possible, thereby exposing design errors. Functional formal verification (FV) is usually a parallel verification effort that utilizes formal analysis techniques targeted at assertions within the RTL description or at end-to-end assertions that describe some function of the design. The goal of FV is to obtain proofs that assertions describing functionality are true or to find design errors, manifested as counterexamples to assertions.

The vexing problem for verification teams is that both techniques suffer from a form of the Pareto principle, where the last 20% of verification needed to achieve closure takes 80% of the time and effort. For CDV, this is manifested as an asymptotic approach to 100% coverage on the set of coverage points. For FV, it comes in the form of assertions that can neither be proven true nor falsified with a counterexample.

In this paper, we propose a method to identify the logic areas that are the root causes of the Pareto principle. Our method uses a set of implied properties of the RTL code, instead of user-defined assertions, in an unconstrained static formal analysis with a model checker. In a departure from traditional methods, we ignore proofs and counterexamples and focus solely on inconclusive assertions, which provide a roadmap to the parts of the design that are inherently more difficult to verify. An important advantage of this approach is that it can be applied very early in the design process, before the RTL description is complete, before a simulation environment is ready, and even before assertions have been placed in the design. This avoids the usual prerequisites that normally limit the use of FV to the latter part of the verification cycle. With such foreknowledge of where verification bottlenecks will arise, it becomes possible to make early modifications to the RTL code that enhance its verifiability, reducing the long tail of verification time and effort.

1.1 Related Work


Previous works have combined formal analysis, coverage and simulation techniques in a number of ways. Formal analysis of control state machines was used to generate simulation vectors in [1]. Cunningham et al. [2] used model checking to verify reachability of expression coverage points. Similarly, Dill [3] proposed using formal verification tools to exercise coverage points not witnessed in dynamic simulation. Ghosh and Prasad [4] proposed a method for estimating the difficulty of formal verification problems and hence determining which ones are worth expending effort on. Other methods have been proposed for measuring the coverage of formal analysis [5]. Some of these metrics could potentially provide a measure of verifiability, but they rely on analysis of assertions with input constraints and would only be applicable later in the verification process. This work differs from these previous works mainly in its focus on inconclusive model checking results.


1.2 Definitions
Proof: A determination by a model checker that an assertion is universally true.

Falsification: A determination by a model checker that an assertion is false, usually illustrated with a counterexample.

Inconclusive Result: An outcome indicating that the model-checking tool is unable to prove or disprove an assertion. In this case, the target assertion is called an inconclusive.

Constraint: In the context of formal verification, a constraint is a property that is used as an assumption during verification. When a FV tool finds a counterexample to an assertion, the tool must respect all properties declared as constraints.

Conditional coverage point: Conditional statements in an RTL description (i.e., if, case and the ternary operator) reference a set of Boolean variables. Conditional coverage points are the 2^n possible assignments to the n Boolean variables in a conditional expression. In the following snippet of RTL code, for example, there are 2^3 = 8 conditional coverage points for all combinations of values of varA, varB and varC:

  if (varA || varB || varC)
    foo <= data;
  else
    foo <= data + 1'b1;
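
To make the definition concrete, the sketch below writes out two of those eight coverage points as assertions in the negated form used later in Section 2.1. The enclosing module, clock and signal widths are ours and purely illustrative; they are not part of the original design or tool flow.

  // Sketch only: the conditional statement above, with two of its 2^3 = 8
  // generated coverage-point assertions. A counterexample to either assertion
  // is a trace that reaches that assignment of (varA, varB, varC).
  module cond_cov_example (
    input  logic       clk,
    input  logic       varA, varB, varC,
    input  logic [7:0] data,
    output logic [7:0] foo
  );
    always_ff @(posedge clk)
      if (varA || varB || varC) foo <= data;
      else                      foo <= data + 1'b1;

    CCP_000: assert property (@(posedge clk)
               !((varA==1'b0) && (varB==1'b0) && (varC==1'b0)));
    CCP_111: assert property (@(posedge clk)
               !((varA==1'b1) && (varB==1'b1) && (varC==1'b1)));
  endmodule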


2. IDENTIFYING INTRINSICALLY HARD-TO-VERIFY LOGIC


Our approach to identifying intrinsically hard-to-verify parts of an RTL design is based on the following observations:

1. Conditional coverage points are free. That is, conditional coverage items are directly implied by the RTL code and can be extracted automatically (explained below). This contrasts with functional coverage points and assertions.

2. Model checking does not need to be delayed until a full set of input constraints is available. Useful information can be obtained from formulating a problem with incomplete constraints or even without any constraints.

3. Assertions for conditional coverage points that are found to be inconclusive can pinpoint the cause of hard-to-verify parts of the RTL design and correlate with later difficulties in closing coverage in dynamic simulation or producing conclusive results with FV.

2.1 Conditional Coverage Points Are Free


Code coverage points, and in particular conditional coverage points, are artifacts of the RTL description of a circuit. They are implied by the syntax and execution model of the hardware description language (HDL), for example SystemVerilog. Our interest in these code coverage metrics stems from the fact that, by construction, they touch all parts of the design. Although there have been previous attempts to automatically create functional assertions from RTL code [6], none has been able to create fully correct assertions that cover the full range of behaviors of a design. In contrast, most modern HDL simulators include an automatic method of extracting all code coverage points, including line, branch and conditional coverage points.

Conditional coverage points can be mechanically translated to assertions by capturing the full conditional path to any line of RTL code. For example:

  module blkA (input clk, input rst, ...);
    always @(posedge clk) begin
      if (!gate) begin
        if (en_a || intr_b) begin
          case (st)
            3'b010: if (do_it)        // L1
                      data <= new_data;

For the conditional expression on the line marked L1 above, the following assertions would be generated:

  C0: assert property (@(posedge clk)
        !((!gate) && (en_a||intr_b) && (st==3'b010) && (do_it==1'b0)));
  C1: assert property (@(posedge clk)
        !((!gate) && (en_a||intr_b) && (st==3'b010) && (do_it==1'b1)));

The assertions C0 and C1 capture the full set of (possibly nested) conditional expressions that must hold for simulation to execute the line marked L1. They can be generated using a simple parse tree traversal, picking up only the conditional statements. For SystemVerilog, the conditional statements are: if/then/else; case; and the ternary operator, foo = cond ? t_path : f_path.

2.2 Early Unconstrained Model Checking

Once conditional coverage points are converted to assertions, a model checker can be used to check reachability. Note the negation of the conditional paths in the assertions above. This creates assertions that are violated when the conditional coverage point is reached. In other words, a counterexample to C0 represents a path to the condition (do_it==1'b0) at L1. Conversely, a proof of C0 means that (do_it==1'b0) at L1 can never be reached.

Note that if an assertion is proven with no restrictions on input stimuli, then the proof remains true if the inputs are constrained. The converse, however, is not true: a counterexample generated with no constraints is likely to change or disappear when constraints are applied. When working with functional assertions, the accuracy of input constraints is critical for achieving credible FV results. When using formal verification to determine coverage reachability, however, we argue that the accuracy of input constraints is less critical. We observed that counterexample traces for coverage points are frequently usable by engineers even if the counterexamples contain some illegal stimulus. In some cases, more accurate constraints must be added to the model-checking environment, but often an unconstrained counterexample is sufficient to point the way to uncovered coverage points. It is this reduced reliance on accurate constraints that makes model checking of assertions generated from conditional coverage points suitable for adoption early in the design cycle.
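
As an illustration of what "adding a constraint" means here, the sketch below shows a hypothetical assumption file for blkA. The mutual-exclusion rule is invented for the example and would come from the block's interface specification, not from our method; without such a file, C0 and C1 are simply checked unconstrained.

  // Sketch only: a separately bound file of assumptions for blkA. The rule
  // that en_a and intr_b are never asserted together is a hypothetical
  // interface constraint used purely for illustration.
  module blkA_constraints (input logic clk, input logic en_a, input logic intr_b);
    ASM_no_both: assume property (@(posedge clk) !(en_a && intr_b));
  endmodule

  // Typically attached to every instance of blkA with a bind directive:
  // bind blkA blkA_constraints u_blkA_constraints (.*);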

2.3 Inconclusive Results Have Value


In practice, FV on any nontrivial set of assertions will produce proofs, counterexamples and inconclusive results. In our verification work, we observed that there is a correlation between inconclusive results for code coverage items (in particular conditional coverage points) and closure problems for both simulation and FV of functional assertions. In other words, an inconclusive result when targeting code coverage assertions is a leading indicator of difficulties that will be encountered with other verification tasks. This observation has an intuitive appeal: if formal analysis cannot penetrate the RTL description to the extent that all conditional paths can be explored, then there is a high likelihood that functional analysis in that region of code will also be inconclusive. In this situation, it can be expected that a long and complex sequence of input stimuli will be required to reach uncovered coverage points in simulation. Such sequences may be difficult or impossible to generate in a CDV testbench.


2.4 Methodology
Given the observations above, we propose the following methodology to find intrinsically hard-to-verify logic:

1. Automatically generate conditional coverage assertions. This can be done on RTL code at any stage of completion and at any level of hierarchy.

2. Run unconstrained model checking. This can be done before any part of the verification environment is ready.

3. Analyze inconclusive assertions. Ignore counterexamples at this juncture. Focus on inconclusive results and identify RTL constructs that can be modified to make verification simpler. Examine proofs for unintended unreachable coverage points.

4. Modify the design or verification environment so that it is simpler to verify.

5. Rerun unconstrained model checking to confirm that inconclusive results have been eliminated.

2.5 Experimental Results


The observations and data presented in this paper come from ongoing or recently completed verification work performed on a large ASIC for the Anton machine [7]. The ASIC is implemented in a 90 nm process and contains approximately 33 million gates, with clocks running at 400 MHz and 800 MHz. We performed steps 1 and 2 of our methodology on several blocks of this ASIC. Commercially available model checkers were run for multiple hours before an assertion was declared inconclusive. Table 1 shows the number of automatically generated conditional coverage points for several blocks in our ASIC, together with the result of FV on those coverage points. We found that the number of inconclusive FV results was less than 1% of the number of RTL code lines in each block. Although the number of conditional coverage points in a design is highly design dependent, we believe this empirical data provides an order-of-magnitude estimate of the number of inconclusive results that can be expected when using our proposed methodology.

Table 1: Generated Coverage Points and FV Results

  Block         RTL    Conditional      Inconclusive  Conclusive FV Results   Uncovered Coverage
                Lines  Coverage Points  FV Results    Reached / Unreachable   Points in Simulation
  mem_cp        3512   3551             11            3442 / 98               169
  mem_dpctl     3896   2938             29            2863 / 46               81
  rtr (in/out)  3718   421              0             421 / 0                 11
  racetr        2346   1540             0             998 / 542               35 (in system test)
  ppim          9815   3015             67            2922 / 26               70

We next examined the relationship of FV results to constraints. Table 2 shows FV results on functional assertions for a number of blocks, both with and without constraints. Note that the constraints for rtr and racetr were refined to remove all counterexamples. The mem_dpctl and ppim blocks contain assertions that monitor preconditions of important inconclusive functional assertions, and these are shown as failing assertions with constraints.

Table 2: Comparison of FV Results without/with Constraints

  Block      Functional         FV Results without Constraints  FV Results with Constraints
             Assertion Targets  Proven / False / Incon          Proven / False / Incon
  mem_dpctl  478                348 / 56 / 74                   365 / 21 / 92
  rtr        825                547 / 220 / 58                  547 / 0 / 278
  racetr     34                 22 / 12 / 0                     34 / 0 / 0
  ppim       181                104 / 51 / 26                   138 / 13 / 30

Notice the special case of racetr, which has no inconclusive coverage points in Table 1. This carries over to no inconclusive assertion targets in Table 2, providing anecdotal evidence that the absence of hard-to-verify logic when model checking coverage points implies easier model checking on functional assertion targets. In the case study of Section 3.2 we show a different anecdotal case, where an inconclusive FV result on a conditional coverage point is directly related to an inconclusive result on a functional assertion.

For the other blocks in Table 2, the data shows that when constraints are added, the set of assertions proven increases, the set of falsifications decreases and the set of inconclusive results typically increases. It is easy to understand that more accurate constraints, which narrow the range of legal stimuli, lead to more proven assertions and fewer falsifications. It is not obvious, however, why the set of inconclusive results should increase. A closer examination of the changes to results of individual assertions in Table 2 showed that when constraints are added to prevent illegal stimuli, many counterexamples are converted into inconclusive results rather than into proofs. Hence, we believe the addition of constraints makes model checking more difficult (for current tools) than model checking without constraints. Conversely, it can be said that an assertion that is difficult to model check without constraints will not become simpler when constraints are added. This is the basis for using inconclusive unconstrained model checking as an indicator of later verification difficulties. Note that some inconclusive results from FV cannot be found using early analysis, but can only be pinpointed in later verification. Hence, the set of inconclusive assertions found without constraints is typically a subset of the inconclusive assertions found using constraints.

In summary, the data suggests that unconstrained FV on assertions generated from conditional coverage points is a leading indicator of subsequent coverage holes in simulation as well as future inconclusive results from model checking.


3. ENHANCING VERIFIABILITY
The following case studies extracted from our project give examples of how a design can be made more verifiable.

3.1 Removing Memory Value Dependency

A widely used design structure that can also be found in our ASIC is a least-recently-used (LRU) table implementing a cache line replacement policy in a memory controller (Figure 1). The LRU table stores tag bits that indicate the state of each cache line and, for each line, a number that indicates how recently it was accessed.

Figure 1: LRU Structure (a Tag RAM current-state store and current-state register hold tag_way{0..3}, valid, and 4 x 2-bit LRU fields; 4 x HIT comparators and combinational logic produce HIT and SetSelect from the memory address)

One of the inconclusive coverage points in the mem_cp block shown in Table 1 was found to be:

  LRU_cp: assert property (@(posedge clk)
            !(!(^tag_way0) & !(^tag_way1) & (^tag_way2) & !(^tag_way3)));

The expressions within the coverage point quickly led us to examine the LRU structure. An important feature of this particular memory is that the LRU tags are reset to a predetermined initial state. This is normally regarded as good design practice, but it keeps verification within the relatively small portion of the possible LRU tag states that is reachable from the initial state in a limited number of cycles. A more thorough examination of the verification needs of this structure found that the logic surrounding the LRU table was designed to operate correctly regardless of the state of the LRU tags. This important property of the design allowed us to modify our FV environment so that the tag bits are treated as unconstrained inputs. With this modification, subsequent FV analysis converted all previously inconclusive assertions into proofs or falsifications.

3.2 Removing Counter Dependency

Another of the inconclusive coverage points in the mem_cp block shown in Table 1 is:

  cnter_cp: assert property (@(posedge clk)
              !((any_read) & (mmr_rd_req) & (xact_fp_count <= 6'b000010)));

Whereas memories cause state space explosion for formal analysis, counters create a long sequence of states (sequential depth) that must be traversed and analyzed before other design behaviors can be reached. One common use of counters, and the way it was used in this case study, is to count the number of outstanding transactions at an interface. In our initial verification, the assertion targets on the interface were inconclusive. This correlated with the inconclusive results from targeting conditional coverage points with FV in the surrounding logic. This test case provided our initial insight into using inconclusive results of FV on conditional coverage points as a leading indicator of inconclusive results on functional assertions. We made this logic more verifiable using the well-known method of introducing a parameter for the maximum number of outstanding transactions. We subsequently reduced this parameter and obtained proofs on the assertion targets as well as conclusive results for analysis of the conditional coverage points.
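
As an illustration of this parameterization (not the actual mem_cp code; all names and widths below are ours), a transaction counter might be written so that the maximum number of outstanding transactions is a module parameter that can be shrunk for formal runs:

  // Sketch only: an outstanding-transaction counter whose depth is a parameter.
  // Synthesis uses the full-size value; formal analysis can override it with a
  // small value (e.g., 4) so the counter no longer adds large sequential depth.
  module xact_counter #(
    parameter int MAX_OUTSTANDING = 64
  ) (
    input  logic clk,
    input  logic rst,
    input  logic req,     // a new transaction is issued
    input  logic done,    // an outstanding transaction retires
    output logic full
  );
    localparam int W = $clog2(MAX_OUTSTANDING + 1);
    localparam logic [W-1:0] MAX_CNT = MAX_OUTSTANDING;
    logic [W-1:0] count;

    always_ff @(posedge clk) begin
      if (rst)
        count <= '0;
      else
        case ({req && !full, done && (count != '0)})
          2'b10:   count <= count + 1'b1;   // issue only
          2'b01:   count <= count - 1'b1;   // retire only
          default: count <= count;          // both or neither: no net change
        endcase
    end

    assign full = (count == MAX_CNT);
  endmodule

With MAX_OUTSTANDING reduced for FV, the model checker only has to traverse a handful of counter states to reach the behaviors guarded by the count.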

3.3 Removing FIFO Depth Dependency

FIFO structures cause both state space explosion and sequential depth problems for verification. In the ppim block shown in Table 1, inconclusive functional assertions were found in a cluster around a particular FIFO structure that queued data from a grid of processing elements to be used for data combination in a pipeline (Figure 2).

Figure 2: FIFO Overflow Logic (a grid of processing elements feeds a FIFO built from a dual-port SRAM; data is written to the SRAM at the top of the FIFO and read from the bottom into the data combination pipeline, and a write and a read may not occur to the same address on the same cycle)

Analysis of the logic determined that assertions related to overflow of the FIFO would require analysis of all the processing elements in the grid to determine when the FIFO could be filled. In this case, the depth of the FIFO is a function of the number of processing elements and could not be converted into a parameter. With some assistance from the designer, we found that if we added logic to the FIFO that would reject push operations when it was full and flag an error instead, the functional assertion would be decoupled from the processing element grid and become trivially provable.
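
A minimal sketch of that change is shown below (signal names and widths are ours, not the ppim RTL): a push is simply dropped and an error flag raised when the FIFO is full, so no reachable input sequence can overflow the storage.

  // Sketch only: a FIFO front end that rejects pushes when full instead of
  // overflowing. With this guard, "the FIFO never overflows" no longer depends
  // on how quickly the processing-element grid can generate pushes.
  module guarded_fifo_push #(
    parameter int DEPTH = 512,
    parameter int W     = 32
  ) (
    input  logic         clk,
    input  logic         rst,
    input  logic         push,
    input  logic [W-1:0] push_data,
    input  logic         pop,
    output logic         full,
    output logic         push_error   // sticky flag: a push was rejected
  );
    localparam int AW = $clog2(DEPTH);
    localparam logic [AW:0] DEPTH_CNT = DEPTH;  // occupancy ranges over 0..DEPTH
    logic [AW:0] count;

    assign full = (count == DEPTH_CNT);

    always_ff @(posedge clk) begin
      if (rst) begin
        count      <= '0;
        push_error <= 1'b0;
      end else begin
        // Accept a push only when not full; otherwise flag the error.
        case ({push && !full, pop && (count != '0)})
          2'b10:   count <= count + 1'b1;
          2'b01:   count <= count - 1'b1;
          default: count <= count;
        endcase
        if (push && full)
          push_error <= 1'b1;
      end
    end
    // The SRAM write itself (not shown) would be gated with (push && !full).
  endmodule

The error flag can then be checked by a simple local assertion instead of an end-to-end property over the whole processing-element grid.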

The three case studies presented demonstrate different techniques for improving verifiability. In one case, the verification environment could be modified to add more controllability of internal state. In a second case, the logic could be parameterized so that a simpler version could be verified. In the third case, logic could be added to the design to decouple sequentially deep paths through large state spaces.

4. CONCLUSIONS
This work is the result of observations made during the verification of a large and complex ASIC that deployed many best-known practices in the field, including extensive use of assertions and coverage points for both CDV and FV. In this paper, we have proposed a methodology that can be applied early in the design process and that uses unconstrained model checking of assertions automatically generated from conditional coverage points in an RTL description to identify hard-to-verify logic areas. We found that these logic areas are both directly and indirectly responsible for CDV coverage holes and for inconclusive FV results on functional assertions.


5. REFERENCES
[1] Ho, R., Yang, C. H., Horowitz, M. A., and Dill, D. L. Architecture Validation for Processors. In Proc. of the Int'l Symp. on Computer Architecture, 1995.
[2] Cunningham, G. D., et al. Expression Coverability Analysis: Improving Code Coverage with Model Checking. In Proc. of the Design & Verification Conf., 2004.
[3] Dill, D. L. What's Between Simulation and Formal Verification? (Extended Abstract). In Proc. of the Design Automation Conf., 1998.
[4] Ghosh, I., and Prasad, M. R. A Technique for Estimating the Difficulty of a Formal Verification Problem. In Int'l Symp. on Quality Electronic Design, 2006.
[5] Große, D., et al. Estimating Functional Coverage in Bounded Model Checking. In Proc. of Design, Automation & Test in Europe, 2007.
[6] Ly, T. A., et al. Method for Automatically Generating Checkers for Finding Functional Defects in a Description of a Circuit. U.S. Patent 6,175,946, 2001.
[7] Shaw, D. E., et al. Anton, a Special-Purpose Machine for Molecular Dynamics Simulation. In Proc. of the Int'l Symp. on Computer Architecture, 2007.

