Early Formal Verification of Conditional Coverage Points to Identify Intrinsically Hard-to-Verify Logic
C. Richard Ho, Michael Theobald, Martin M. Deneroff, Ron O. Dror, Joseph Gagliardo and David E. Shaw*
D. E. Shaw Research, New York, NY {richardh,theobald,deneroff,dror,gagliard,shaw}@DEShawResearch.com
ABSTRACT
Design verification of complex digital circuits typically starts only after the register-transfer level (RTL) description is complete. This frequently makes verification more difficult than necessary, because logic that is intrinsically hard to verify, such as memories, counters and deep first-in, first-out (FIFO) structures, becomes immutable in the design. This paper proposes a new approach that exploits formal verification of conditional coverage points with the goal of identifying hard-to-verify logic early. We use the difficulty of formal verification problems as an early estimator of the verification complexity of a design. While traditional verification methods consider conditional coverage only in the design verification phase, we describe an approach that uses conditional coverage at a much earlier stage: the design phase, during which changes to the RTL code are still possible. The method is illustrated using real examples from the verification of an ASIC designed for a specialized supercomputer.
General Terms
Algorithms, Design, Verification.

Keywords
Formal verification, conditional coverage, code coverage, coverage hole, verifiability, inconclusive results.

1. INTRODUCTION
Modern verification methodologies for large, complex digital circuits, such as microprocessors, graphics chips and application-specific integrated circuits (ASICs), commonly include both dynamic simulation and functional formal verification. Dynamic simulation is normally used as part of a coverage-driven verification (CDV) methodology in which event monitors (coverage points) within the RTL description provide feedback on the effectiveness of the simulation vectors applied. The coverage points simply indicate whether dynamic simulation has exercised a particular event, which may be as simple as execution of a line of RTL code (code coverage) or as complex as a functional behavior of the design (functional coverage). The goal of CDV is to use coverage metrics to guide stimulus creation, in a testbench environment, so that the RTL code is exercised through as much of its behavior as possible, thereby exposing design errors. Functional formal verification (FV) is usually a parallel verification effort that applies formal analysis techniques to assertions within the RTL description or to end-to-end assertions that describe some function of the design. The goal of FV is to obtain proofs that assertions describing functionality are true, or to find design errors, manifested as counterexamples to assertions.

The vexing problem for verification teams is that both techniques suffer from a form of the Pareto principle, in which the last 20% of verification needed to achieve closure takes 80% of the time and effort. For CDV, this is manifested as an asymptotic approach to 100% coverage on the set of coverage points. For FV, it comes in the form of assertions that can neither be proven true nor falsified with a counterexample.

In this paper, we propose a method to identify the logic areas that are the root causes of this Pareto effect. Our method uses a set of implied properties of the RTL code, instead of user-defined assertions, in an unconstrained static formal analysis with a model checker. In a departure from traditional methods, we ignore proofs and counterexamples and focus solely on inconclusive assertions, which provide a roadmap to the parts of the design that are inherently more difficult to verify. An important advantage of this approach is that it can be applied very early in the design process: before the RTL description is complete, before a simulation environment is ready, and even before assertions have been placed in the design. This avoids all the usual prerequisites that normally limit the use of FV to the latter part of the verification cycle.

With such foreknowledge of where verification bottlenecks will arise, it becomes possible to make early modifications to the RTL code that enhance its verifiability, reducing the long tail of verification time and effort.

* Correspondence to shaw@DEShawResearch.com. David E. Shaw is also with the Center for Computational Biology and Bioinformatics, Columbia University, New York, NY 10032.
Ghosh and Prasad [4] proposed a method for estimating the difficulty of formal verification problems, and hence for determining which ones are worth expending effort on. Other methods have been proposed for measuring the coverage of formal analysis [5]. Some of these metrics could potentially provide a measure of verifiability, but they rely on analysis of assertions with input constraints and would therefore be applicable only later in the verification process. This work differs from these previous works mainly in its focus on inconclusive model-checking results.
behaviors of a design. In contrast, most modern HDL simulators include an automatic method of extracting all code coverage points, including line, branch and conditional coverage points. Conditional coverage points can be mechanically translated to assertions by capturing the full conditional path to any line of RTL code. For example:

module blkA (input clk, input rst, ...);
  always @(posedge clk) begin
    if (!gate) begin
      if (en_a || intr_b) begin
        case (st)
          3'b010: if (do_it)        // L1
                    data <= new_data;

For the conditional expression on the line marked L1 above, the following assertions would be generated:

C0: assert property (@(posedge clk)
      !((!gate) && (en_a || intr_b) && (st==3'b010) && (do_it==1'b0)));
C1: assert property (@(posedge clk)
      !((!gate) && (en_a || intr_b) && (st==3'b010) && (do_it==1'b1)));

The assertions C0 and C1 capture the full set of (possibly nested) conditional expressions that must hold for simulation to execute the line marked L1. They can be generated using a simple parse-tree traversal that picks up only the conditional statements. For SystemVerilog, the conditional statements are: if/else; case; and the ternary operator (foo = cond ? t_path : f_path).
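The translation just described can be sketched in a few lines. This is a minimal illustration, not a real RTL tool: it assumes the nested guard expressions have already been collected from a parse tree into an ordered list, whereas a production implementation would walk a full SystemVerilog AST.

```python
# Sketch: generate negated-condition assertions (as in C0/C1 above)
# from the chain of guards enclosing a target line of RTL.
# Assumption: path_guards is the ordered list of guard expressions
# extracted from the parse tree; leaf_var is the variable tested on
# the target line itself.

def coverage_assertions(path_guards, leaf_var):
    """Return one assertion per value of the leaf condition (C0, C1)."""
    prefix = " && ".join(f"({g})" for g in path_guards)
    asserts = []
    for i, value in enumerate(("1'b0", "1'b1")):
        cond = f"{prefix} && ({leaf_var}=={value})"
        asserts.append(
            f"C{i}: assert property (@(posedge clk) !({cond}));")
    return asserts

# Reproduce the example above for the line marked L1:
for a in coverage_assertions(
        ["!gate", "en_a || intr_b", "st==3'b010"], "do_it"):
    print(a)
```

Because each assertion claims the conditional path is never taken, a counterexample from the model checker is exactly a trace that reaches the coverage point.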
1.2 Definitions
Proof: A determination by a model checker that an assertion is universally true.

Falsification: A determination by a model checker that an assertion is false, usually illustrated with a counterexample.

Inconclusive result: An outcome indicating that the model-checking tool is unable to prove or disprove an assertion. In this case, the target assertion is called an inconclusive.

Constraint: In the context of formal verification, a constraint is a property that is used as an assumption during verification. When an FV tool finds a counterexample to an assertion, the tool must respect all properties declared as constraints.

Conditional coverage point: Conditional statements in an RTL description (i.e., if, case and the ternary operator) reference a set of Boolean variables. Conditional coverage points are the 2^n possible assignments to the n Boolean variables in a conditional expression. In the following snippet of RTL code, for example, there are 2^3 = 8 conditional coverage points, one for each combination of values of varA, varB and varC:

if (varA || varB || varC)
  foo <= data;
else
  foo <= data + 1'b1;
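The 2^n enumeration in the definition above is simply the Cartesian product {0,1}^n over the referenced variables; a short sketch, using the variable names from the snippet:

```python
# Sketch: enumerate the 2^n conditional coverage points for the n
# Boolean variables referenced by a conditional expression.
from itertools import product

def conditional_coverage_points(variables):
    """Return every assignment of 0/1 values to the given variables."""
    return [dict(zip(variables, bits))
            for bits in product((0, 1), repeat=len(variables))]

points = conditional_coverage_points(["varA", "varB", "varC"])
print(len(points))   # 2^3 = 8 coverage points
print(points[0])     # {'varA': 0, 'varB': 0, 'varC': 0}
```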
conditional coverage points) and closure problems for both simulation and FV of functional assertions. In other words, an inconclusive result when targeting code coverage assertions is a leading indicator of difficulties that will be encountered with other verification tasks. This observation has an intuitive appeal: if formal analysis cannot penetrate the RTL description to the extent that all conditional paths can be explored, then there is a high likelihood that functional analysis in that region of code will also be inconclusive. In this situation, it can be expected that a long and complex sequence of input stimuli will be required to reach uncovered coverage points in simulation. Such sequences may be difficult or impossible to generate in a CDV testbench.
2.4 Methodology
Given the observations above, we propose the following methodology to find intrinsically hard-to-verify logic:
1. Automatically generate conditional coverage assertions. This can be done on RTL code at any stage of completion and at any level of hierarchy.
2. Run unconstrained model checking. This can be done before any part of the verification environment is ready.
3. Analyze inconclusive assertions. Ignore counterexamples at this juncture. Focus on inconclusive results and identify RTL constructs that can be modified to make verification simpler.
4. Examine proofs for unintended unreachable coverage points.
5. Modify the design or verification environment so that it is simpler to verify.
6. Rerun unconstrained model checking to confirm that inconclusive results have been eliminated.
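The triage at the heart of steps 2 and 3 can be sketched as follows. The result records are hypothetical stand-ins for whatever a real model checker, run without constraints, would report; only the status labels matter.

```python
# Sketch of the triage step: bucket model-checking results and surface
# only the inconclusive assertions as hard-to-verify candidates.
from collections import defaultdict

def triage(results):
    """results: iterable of (assertion_name, status) pairs, where
    status is 'proven', 'falsified', or 'inconclusive'."""
    buckets = defaultdict(list)
    for name, status in results:
        buckets[status].append(name)
    # Proofs and counterexamples are ignored at this stage; only the
    # inconclusives point at intrinsically hard-to-verify logic.
    return sorted(buckets["inconclusive"])

# Illustrative (invented) tool output:
raw = [("C0_lru", "inconclusive"), ("C1_lru", "proven"),
       ("C0_fifo", "falsified"), ("C1_fifo", "inconclusive")]
print(triage(raw))   # ['C0_lru', 'C1_fifo']
```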
We next examined the relationship of FV results to constraints. Table 2 shows FV results on functional assertions for a number of blocks, both with and without constraints. Note that the constraints for rtr and racetr were refined to remove all counterexamples. The mem_dpctl and ppim blocks contain assertions that monitor preconditions of important inconclusive functional assertions, and these are shown as failing assertions with constraints. Notice the special case of racetr, which has no inconclusive coverage points in Table 1. This carries over to no inconclusive assertion targets in Table 2, providing anecdotal evidence that the absence of hard-to-verify logic when model checking coverage points implies easier model checking on functional assertion targets. In the case study of Section 3.2, we show a different anecdotal case in which an inconclusive FV result on a conditional coverage point is directly related to an inconclusive result on a functional assertion.

For the other blocks in Table 2, the data show that when constraints are added, the set of assertions proven increases, the set of falsifications decreases and the set of inconclusive results typically increases. It is easy to understand that more accurate constraints, which narrow the range of legal stimuli, lead to more proven assertions and fewer falsifications. It is not obvious, however, why the set of inconclusive results should increase. A closer examination of the changes in individual assertion results in Table 2 showed that when constraints are added to prevent illegal stimuli, many counterexamples are converted into inconclusive results rather than into proofs. Hence, we believe the addition of constraints makes model checking more difficult (for current tools) than model checking without constraints. Conversely, an assertion that is difficult to model check without constraints will not become simpler when constraints are added.
This is the basis for using inconclusive unconstrained model checking as an indicator of later verification difficulties. Note that some inconclusive results from FV cannot be found using early analysis, but can only be pinpointed in later verification. Hence, the set of inconclusive assertions found without constraints is typically a subset of the inconclusive assertions found using constraints. In summary, the data suggest that unconstrained FV on assertions generated from conditional coverage points is a leading indicator of subsequent coverage holes in simulation as well as of future inconclusive results from model checking.

Table 1: Generated Coverage Points and FV Results
Conditional       Inconclusive   Conclusive FV Results     Uncovered Coverage
Coverage Points   FV Results     Reached    Unreachable    Points in Simulation
3551              98             3442       11             169
2938              46             2863       29             81
421               0              421        0              11
1540              542            998        0              35 (in system test)
3015              26             2922       67             70
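The per-assertion comparison of constrained and unconstrained runs described above amounts to counting status transitions. A minimal sketch, with invented assertion names and statuses standing in for real tool output:

```python
# Sketch: given each assertion's status with and without constraints,
# count how results changed between the two runs.
from collections import Counter

def result_transitions(unconstrained, constrained):
    """Both arguments map assertion name -> status
    ('proven' | 'falsified' | 'inconclusive')."""
    return Counter((unconstrained[name], constrained[name])
                   for name in unconstrained)

# Illustrative data: adding constraints removes two counterexamples,
# but one becomes inconclusive rather than proven.
before = {"A1": "falsified", "A2": "falsified", "A3": "inconclusive"}
after  = {"A1": "inconclusive", "A2": "proven", "A3": "inconclusive"}

print(result_transitions(before, after)[("falsified", "inconclusive")])  # 1
```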
3. ENHANCING VERIFIABILITY
The following case studies extracted from our project give examples of how a design can be made more verifiable.
parameter and obtained proofs on the assertion targets as well as conclusive results for analysis of the conditional coverage points.
A widely used design structure that can also be found in our ASIC is a least-recently-used (LRU) table that implements a cache-line replacement policy in a memory controller (Figure 1). The LRU table stores tag bits that indicate the state of the cache lines and, for each line, a number indicating how recently it was accessed.

[Figure 1 diagram omitted: a memory address indexes a dual-port SRAM holding 4 x 2-bit LRU entries and tag_way{0..3} tags with valid bits; 4 HIT comparators, combinational logic and a current-state register produce the HIT and SetSelect outputs.]
Figure 1: LRU Structure

One of the inconclusive coverage points in the mem_cp block shown in Table 1 was found to be:

LRU_cp: assert property (@(posedge clk)
    !(!(^tag_way0) & !(^tag_way1) & (^tag_way2) & !(^tag_way3)));

The expressions within the coverage point quickly led us to examine the LRU structure. An important feature of this particular memory is that the LRU tags are reset to a predetermined initial state. This is normally regarded as good design practice, but it confines verification to the relatively small portion of the possible LRU tag states that is reachable from the initial state in a limited number of cycles. A more thorough examination of the verification needs of this structure found that the logic surrounding the LRU table was designed to operate correctly regardless of the state of the LRU tags. This important property of the design allowed us to modify our FV environment so that the tag bits are treated as unconstrained inputs. With this modification, subsequent FV analysis converted all previously inconclusive assertions into proofs or falsifications.
[Figure 2 diagram omitted: data is read from the SRAM at the bottom of the FIFO into the data combination pipeline; a write and a read may not occur to the same address on the same cycle.]
Analysis of the logic determined that assertions related to overflow of the FIFO would require analysis of all the processing elements in the grid to determine when the FIFO could be filled. In this case, the depth of the FIFO is a function of the number of processing elements and could not be converted into a parameter.
Figure 2: FIFO Overflow Logic

With some assistance from the designer, we found that if we added logic to the FIFO that would reject push operations when it was full and flag an error instead, the functional assertion would be decoupled from the processing element grid and become trivially provable.

The three case studies presented demonstrate different techniques for improving verifiability. In one case, the verification environment could be modified to add more controllability of internal state. In a second case, logic could be designed with parameterization so that a simpler version could be verified. In the third case study, logic could be added to the design that would decouple sequentially deep paths through large state spaces.
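The FIFO guard from the third case study can be illustrated with an abstract model: once a push to a full FIFO is rejected and an error is flagged, the no-overflow property holds locally and no longer depends on the surrounding processing-element grid. The class name and depth below are illustrative, not taken from the design.

```python
# Sketch: abstract model of a FIFO that rejects pushes when full and
# raises an error flag, making the no-overflow invariant local.

class GuardedFifo:
    def __init__(self, depth):
        self.depth = depth
        self.items = []
        self.overflow_error = False

    def push(self, item):
        """Reject the push and flag an error when the FIFO is full."""
        if len(self.items) == self.depth:
            self.overflow_error = True
            return False
        self.items.append(item)
        return True

    def pop(self):
        return self.items.pop(0) if self.items else None

f = GuardedFifo(depth=2)
assert f.push(1) and f.push(2)
assert not f.push(3)        # rejected: FIFO is full
assert f.overflow_error     # error flagged instead of overflowing
assert len(f.items) == 2    # the invariant len(items) <= depth holds
```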
4. CONCLUSIONS
This work is the result of observations made during the verification of a large and complex ASIC that deployed many best-known practices in the field including extensive use of assertions and coverage points for both CDV and FV. In this paper, we have proposed a methodology that can be applied early in the design process and that uses unconstrained model checking of automatic assertions generated from conditional coverage points in an RTL description to identify hard-to-verify logic areas. We found that these logic areas are indirectly and directly responsible for CDV coverage holes and inconclusive functional assertion FV results.
5. REFERENCES
[1] Ho, R., Yang, C. H., Horowitz, M. A., Dill, D. L. Architecture Validation for Processors. In Proc. of the Int'l Symp. on Computer Architecture, 1995.
[2] Cunningham, G. D., et al. Expression Coverability Analysis: Improving Code Coverage with Model Checking. In Proc. of Design & Verification Conf., 2004.
[3] Dill, D. L. What's Between Simulation and Formal Verification? (Extended Abstract). In Proc. of Design Automation Conf., 1998.
[4] Ghosh, I., Prasad, M. R. A Technique for Estimating the Difficulty of a Formal Verification Problem. In Int'l Symp. on Quality Electronic Design, 2006.
[5] Große, D., et al. Estimating Functional Coverage in Bounded Model Checking. In Proc. of Design, Automation & Test in Europe, 2007.
[6] Ly, T. A., et al. Method for Automatically Generating Checkers for Finding Functional Defects in a Description of a Circuit. U.S. Patent 6,175,946, 2001.
[7] Shaw, D. E., et al. Anton, a Special-Purpose Machine for Molecular Dynamics Simulation. In Proc. of the Int'l Symp. on Computer Architecture, 2007.