
Gerald Musgrave Editor

Computer-Aided Design
of digital electronic circuits and systems

North-Holland
for the Commission of the European Communities

COMPUTER-AIDED DESIGN of digital electronic circuits and systems

organized by

The Commission of the European Communities Directorate-General for Internal Market and Industrial Affairs

NORTH-HOLLAND PUBLISHING COMPANY-AMSTERDAM NEW YORK OXFORD

COMPUTER-AIDED DESIGN
of digital electronic circuits and systems Proceedings of a Symposium Brussels, November 1978

edited by

Gerald MUSGRAVE
Brunel University, Uxbridge, Middlesex, U.K.

1979 NORTH-HOLLAND PUBLISHING COMPANY -AMSTERDAM NEW YORK OXFORD

© ECSC, EEC, EAEC, Brussels and Luxembourg, 1979. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the copyright owner.

ISBN: 0444 85374

Published by NORTH-HOLLAND PUBLISHING COMPANY AMSTERDAM NEW YORK OXFORD Sole distributors for the U.S.A. and Canada ELSEVIER NORTH-HOLLAND INC. 52 VANDERBILT AVENUE NEW YORK, N.Y. 10017

for The Commission of the European Communities, Directorate-General for Scientific and Technical Information and Information Management, Luxembourg. EUR 6379. LEGAL NOTICE: Neither the Commission of the European Communities nor any person acting on behalf of the Commission is responsible for the use which might be made of the following information.

PRINTED IN THE NETHERLANDS

FOREWORD

"What we have to learn to do we learn by doing" Aristotle

With the rapid change in technology providing the ever increasing complexity of digital systems, it is essential to utilise the products of that technology in order to cope with the evolution. Computer aided design of digital electronic circuits and systems is essential to the ongoing development of any electronics and associated data processing industry. The European Communities recognised its importance in July 1974 when they initiated a programme of studies in the D.P. field. One study, the CAD Electronics Study, commenced in June 1977 as a feasibility project with the following objectives:

a. Assessment of the current state-of-the-art of CAD of logic design, its cost benefits, user requirements, problem areas and the impact of technology evolution.

b. Time projection of designers' opportunities and requirements within an extrapolated electronics and computer evolution in the 1979-82 period.

c. Investigation of the opportunity in terms of strategic scientific, industrial and economic benefit.

d. Recommendations for further Community work, if appropriate, with detailed justification.

To match these objectives a two-phase project structure was used. First, a worldwide survey of CAD techniques applied to digital electronics was undertaken, calling for information from users, non-users and suppliers, and encompassing the product ranges of computers, communications, military systems etc. The second phase was an analysis of this data with respect to the implications for CAD development in Europe and the technology impact over the next quinquennium. One of the important conclusions of this work was the appalling ignorance of CAD techniques, even among those who purported to be using them. Hence the organisation of a three-day symposium in November 1978, where a state-of-the-art set of lectures was given, followed by important papers from leading authorities on the problem areas. In these presentations a balance was retained between software suppliers' and users' views. There was also the opportunity to present the structure, results and general conclusions of the EEC CAD Electronics Study and to give delegates the opportunity to discuss the subject matter.

This book is a record of the lectures, papers and discussions at the symposium and covers the subject of CAD of electronic circuits and systems, from the conceptual specification through synthesis, simulation, testing and implementation, from printed circuit boards (PCBs) to very large scale integrated (VLSI) chips. The management aspects of future trends and economic viability are also covered, which affords the reader a wide spectrum of information.

This volume is clearly the work of many willing and cooperative authors whom I wish to acknowledge. It has only been possible by the foresight of the European Commission and the dedication and forbearance of Mr. Bir and his staff of the Joint D.P. Project Bureau of the Commission.

GERALD MUSGRAVE
BRUNEL UNIVERSITY

CONTENTS

INTRODUCTORY SESSION

OPENING ADDRESS
E. Davignon

KEYNOTE ADDRESS
K. Teer

TECHNICAL SESSION I

INTEGRATED COMPUTER-AIDED DESIGN - AN INDUSTRIAL VIEW
R.W. McGuffin

PRODUCT SPECIFICATION AND SYNTHESIS
D. Lewin

SIMULATION OF DIGITAL SYSTEMS: WHERE WE ARE AND WHERE WE MAY BE
S.A. Szygenda

TECHNICAL SESSION II

NEW CONCEPTS IN AUTOMATED TESTING OF DIGITAL CIRCUITS
M.A. Breuer

LSI DEVICE CAD VERSUS PCB DIGITAL SYSTEM CAD: ARE REQUIREMENTS CONVERGING?
H. De Man

CURRENT TRENDS IN THE DESIGN OF DIGITAL CIRCUITS
H.M. Lipp

CAD IN THE JAPANESE ELECTRONICS INDUSTRY
K. Kani, A. Yamada, M. Teramoto

TECHNICAL SESSION III

ASPECTS OF A LARGE, INTEGRATED CAD SYSTEM
F. Hembrough, R. Pabich

LARGE SCALE CAD USER EXPERIENCE
F. Klaschka

COMPUTER AIDED DESIGN OF DIGITAL COMPUTER SYSTEMS
L.C. Abel

TECHNICAL SESSION IV

VERIFICATION OF LSI DIGITAL CIRCUIT DESIGN
J.-C. Rault, J.-P. Avenier, J. Michard, J. Mutel

COMPUTER AIDED DESIGN: THE PROBLEM OF THE 80'S MICROPROCESSOR DESIGN
B. Lattin

USER EXPERIENCE: IN SIMULATION AND TESTING
C. Gaskin

DEVELOPMENT OF A DIGITAL TEST GENERATION SYSTEM
P.E. Roberts, K.T. Wolski

AN APPROACH TO A TESTING SYSTEM FOR LSI
H.E. Jones, R.F. Schauer

TECHNICAL SESSION V

AN ENGINEERING COMPONENTS DATA BASE
M. Tomljanovich, R. Colangelo

CUSTOM LSI DESIGN ECONOMICS
J.G.M. Klomp

AUTOMATIC GATE ALLOCATION, PLACEMENT AND ROUTING
S.C. Hoffman

INTEGRATED CAD FOR LSI
K. Loosemore

E.E.C. PROJECT SESSION

EUROPEAN COMMUNITIES STUDY ON CAD OF DIGITAL CIRCUITS AND SYSTEMS
Introduction: A. De Mari
Organisational Aspects: W. Quillin
Technical Perspective: G. Musgrave
Survey in USA and Canada: A. Carter

TECHNICAL FORUM

TECHNICAL FORUM I
Chairman: Jakob Vlietstra

TECHNICAL FORUM II
Chairman: Jakob Vlietstra

FINAL SESSION

EUROPEAN ECONOMIC COMMUNITY PERSPECTIVE
Chairman: S. Bir

INDEX OF AUTHORS

INTRODUCTORY SESSION

Chairman: C. GARRIC, European Communities


OPENING ADDRESS

E. DAVIGNON
EUROPEAN COMMUNITIES

It is a pleasure for me to welcome this gathering, which includes many of the world's most distinguished specialists in that key tool of advanced technology, computer-aided design. We look forward to hearing contributions from leaders in the field not only from Europe, but from the United States, Japan, and even the Soviet Union. Your numbers and quality augur well for the conference. I would like to start it off by placing it in the political and economic context of the Commission's objectives for industrial policy.

Not only Europe, but the developed world as a whole is in the throes of fundamental industrial change, due not only to the ending of a long period of sustained economic growth, but to deep shifts in its industrial structure. As the developing nations of the world acquire competence and capability in many of the older industries, from textiles and shipbuilding to steel and cars, a process which we must welcome as offering them the chance to live and even thrive, the traditional industrial regions such as Europe must look increasingly to the newer technologies and industries as the main source of future economic growth, employment and social development. Europe has to become a high technology workshop for the world.

Of these new technologies, by far the most important is the complex of electronic industries associated with the processing and communicating of information. The parts of this complex still often go by separate names - the computer industry, telecommunications, electronic components. But I do not need to tell this assembly that they increasingly are one, as the French have recognised in their new word "télématique". This is both the nervous system and the key base technology for a modern industrial or indeed post-industrial society.

The facts speak for themselves. In mid-recession the market for computing in Europe is still growing at something approaching 20% per year in fixed money terms, while each year the value for that money in terms of computing power is multiplied several times. Despite the many jobs it displaces, we already expect the number of people employed in the direct use or manufacture of computing power in Europe to double, from some 1 million in 1975 to 2 million by the mid 1980s. Badly handled, the information revolution can indeed lead to a new crisis of unemployment. Imaginatively handled, it can lead to a vast new range of employment opportunities. It is not too much to say that the competitiveness of the large majority of European industry and services will depend on the speed and competence with which it applies the new electronic technology to its products and processes and to the services it offers during the next ten years.

For these reasons, the Community has recognised that both the application of data processing throughout the economy and the industry itself deserve vigorous public support at Community level, both to help create a receptive homogeneous market and to match the immense public resources which are put behind the industry in other advanced regions of the world, such as the US and Japan. The first political recognition of this need was the Resolution of the Council of Ministers of the Community of July 1976, which called for a Community policy for data processing. That Resolution stressed in particular the need to promote collaboration in data-processing applications, the need for users to be brought together so that the power of computing could be more effectively applied.
Since then, the Council has adopted a number of priority studies, exploring the needs

and feasibility of action in certain specific fields of user applications. Among these were two on Computer Aided Design: a CAD study in the building and construction field, and the other in Digital Circuit Design, the subject of discussion today. The Council is now approaching a more critical political test, a decision on a four-year programme for informatics which would provide more systematic and greater support for a wider spectrum of user applications. CAD will be one important element in this programme. It will be the framework in which practical proposals emerging from the work of this conference can be implemented.

Why does the Commission attach importance to CAD as a tool of economic development? Major sectors of industry, particularly those using advanced technology, such as aerospace, electronics and the automotive industry, are already forging ahead and making massive investments on their own account in this field. These investments can acquire larger importance for the Community when the technology is purposefully transferred to other sectors of industry. For example, the large investments made by the aerospace industry to develop three-dimensional systems enable the techniques, when proven, to be carried over and used in the shoe manufacturing, plastics, glass, mould and die industries. It is now recognised, moreover, that in the future CAD will become an integrated part of the production process, combining the design processes with automated manufacture (as is already the case with an aircraft wing). Modern CAD is therefore a tool which European industry has to have. And when it has it, it will have to take account of the massive social implications of its introduction. These wider aspects of CAD are being studied systematically by the Commission in preparation for the four-year programme.

CAD is clearly basic to electronics, our topic of discussion today. The designer must wrestle with the challenge of ever-growing complexity, which only computer aids and tools can enable him to master. These tools need to be available to a vast range of medium-sized and small firms, in addition to the great. The study sponsored by the Community, which you will be discussing over the next few days, is designed to identify the state-of-the-art computer aids potentially available and to suggest what the Community might do to improve them and make them more accessible. In this, as in so many other fields of computing, ready access to data, effective standards and education in the use of new techniques will be essential, as a complement to innovating technology. We look forward to receiving your advice and hearing your views on what needs to be done.

Ladies and Gentlemen, in every industrial period certain industries play a key part in the development of society. Today, the key industry is the complex of industries covering the processing and communication of information and using electronic technology. A strong capability in these related industries is essential to Europe's future because: the character of our society will depend on our skill in using these technologies; most industries and many services will become dependent on these technologies; and the remarkable growth rate of the market for these industries will continue to represent an increasing element of European and world production and wealth.

This vast complex of technologies, with its unprecedented challenge to human skill and endeavour, requires resources and investments which no single European nation could justifiably or possibly undertake on its own. It is my belief that the European Community should and can make a greater contribution to the development of this world technology than it has done so far, so that the potential economic and social benefits can be harnessed to benefit mankind. In this effort, the leading role will always fall to industry, to those who develop and apply the new techniques. Moreover, national Governments can and will continue to play an essential supporting role. They have, in particular, an immense educational responsibility in this new age, for we cannot accept the paradox of a Europe with many millions of unemployed, whose economic development is held up by an acute shortage of critical software and engineering skills in the most advanced fields, and whose citizens have only the barest understanding of the potential implications for them of the new technology.

There are, however, at least three vital tasks which only the Community can fulfil and which are wider than the modest programmes for data processing which I described earlier. One is to ensure that the powerful broadband communications infrastructure needed in the electronic age is developed on a European scale. The second is to support the development of the key electronic technologies of the future which will permit Europe to become more than the follower which it has been in the past. And the third is to develop the activities in the fields of standardisation and procurement which alone can generate a true European market. In social terms, moreover, Europe has a vocation to ensure that in a European information society these formidable tools are in the hands of the citizen, in his workplace, school or home, and not solely in the hands of centralised power, whether management, Government or anyone else. I hope that spirit will inform your discussions too.

The Community must also be open to mutually beneficial co-operation with organisations outside; when there is so much work to be done, we cannot afford to reinvent the wheel. It is for this reason that the Commission chose to share with you the results of this study. I hope you will both contribute and benefit from your participation in this Symposium, to which I wish all success.


KEYNOTE ADDRESS

K. Teer
Philips Research Laboratories
Eindhoven, The Netherlands

The following are the basic notes which Dr. Teer used to outline his main points.

1. The electronics industry is a relatively young and dynamic industry with potentially large growth figures, due to a very wide area of application and a high rate of innovation. Growth figures of the last decade materialized as substantially higher than those of industry as a whole or the Gross National Product. Notwithstanding this, the electronics industry is also subject to the industrial saturation phenomena of recent years. Especially electronic components present an investment and mass-production picture that could easily lead to overproduction.

2. There is a standard partitioning of the electronics field into telecommunication (telephony, telegraphy), radio (radio, television, radar, navigation), data processing (computers, data transmission) and instrumentation (measurement, control, registration); the regular market view follows similar lines. One of the striking trends, however, is that these divisions tend to merge in many ways. In particular, data processing penetrates almost every field. For the future it might be much more relevant to order the classification in terms of social categories: traffic systems, education systems, health care systems, distribution systems, office systems, production systems, home systems etc.

3. Few will dispute that the "push" in the electronics field in the past and for the future is dominated by semiconductor technology, binary processing and satellite technology. The first two are especially related to the issue of this symposium. It is easy to present amazing figures about the progress in semiconductor technology (in terms of bits and gates per square mm or per chip) and about the penetration of binary processing (in terms of traditional computer use as well as new applications). These facts are widely known and are assumed to be common knowledge at this symposium.

4. It is relevant to notice that, apart from the pure electronic technology, optical technology is now emerging with special power in the transmission and recording of information. This certainly should not be seen as competing with, but as complementary to, the pure electronic hardware. It is beyond any doubt that "micro-optics" will give an enormous extra momentum to the electronic field.

5. With the tools of microelectronics and micro-optics available, there is a remarkable situation growing where central issues are on the move. Very schematically we can say that the question is no longer 'how to make it' but 'how to use it', and the question is no longer 'how to reduce production cost' but 'how to reduce design cost'.


6. How to use it? Most present day stories about microprocessors in newspapers and magazines start with a hard fact, namely the number of transistors per square mm, but then jump into vagueness and threat. The reader is left unsure about what the message actually is, but with an uneasy feeling that things might go extremely wrong, in particular concerning privacy, employment and human dignity. A cool, careful analysis is seldom available, which has much to do with our inability to foresee the use of modern electronics. A first step here is to order things in various levels, so that a distinction is made between:
- better function of existing products
- new products
- new functions
- new organisation
- new social categories.

It is true that our government bureaux, our offices, our banks, our education, our health care, our homes will all change, but how? To know better we should transfer an experimental attitude, well trained in achieving new technologies, to the domain of using new technology.

7. Notwithstanding the uncertainties about applications, a few new subsystems can already be identified now without too much science fiction:
- audio and visual facilities in the home for information acquisition, giving do-it-yourself education and active entertainment;
- the electronic file with powerful data and document retrieval as a comfort for almost all environments in a broad range of sizes;
- the intelligent manipulator which can be instructed by craftsmen on the working floor for flexible automation;
- the electronic inspector and recognizer (of pictures, sounds and other inputs) to improve failure diagnosis of objects and human beings;
- the intelligent controller optimizing the function of non-electronic equipment towards minimum energy, minimum pollution, maximum efficiency or maximum security;
- the speech addressable equipment leading to hands-free use, a lower user threshold and often to faster reaction of the equipment;
- the picture generator as a tool to explain (in instruction), to analyse (during design), to amuse (in entertainment) and to express oneself (in free time creativity).

8. How to reduce cost in design? In fact the question is somewhat broader, namely how to simplify the design process so that speed, cost and clarity all benefit. This is the focal point of this symposium and will be discussed by a number of speakers much more able than the author of this contribution. Indeed it is of the utmost importance that the physical parameters of the devices, the equivalent network, the logic concept, the layout and the photomasks can be achieved with the aid of automatic means. However, this is not sufficient:
- the computerized process should be standardized
- the standard should be easily accessible

- the action should extend to higher levels than modules: the multichip domain, the level where computers are a basic building block.

9. As we are gathered at this symposium as a European community, it is good to realize what the position of European industry is in its social-economic context. European industry is confronted with:
- increased competition
- fragmentation of the market into national states
- growing intervention of social forces in industrial activities
- saturation phenomena in some product ranges
- indistinct and unbalanced relations between automation, productivity, employment, the need for work, dislike of work, the need for leisure, the need for education and the demand for education.

To attack these difficulties it is necessary to respond with an enthusiastic and original approach. In that approach new forms of cooperation are a necessity: cooperation of industries in the same line of business, cooperation of complementary industries, cooperation of industry and government, cooperation of governments. Regrettably this may sound rhetorical, pathetic and illusion-like. But the validity of this observation cannot be denied. Next to these 'musts' for European industry as a whole, there is an additional point for the electronics industry. That is the challenge to cooperate with categories of users in much closer coupling than the customer-supplier relation, in order to explore the possible answers to the question: "how to use?".

TECHNICAL SESSION I

Chairman: J. BOREL, Centre Nucléaire, France


INTEGRATED COMPUTER-AIDED DESIGN - AN INDUSTRIAL VIEW

R.W. McGUFFIN
INTERNATIONAL COMPUTERS LIMITED
MANCHESTER, ENGLAND

The underlying reasons for the growth of CAD systems are examined. The various aspects of an integrated, total technology CAD system are presented and discussed. Also, an orthogonal view is taken of CAD system design and, from that, some pointers to the future are examined.


1. INTRODUCTION


A general industrial overview presupposes that there is a common understanding in industry of what Computer Aided Design is, why it is used and what its benefits are. Nothing could be further from the truth. It is fair to say that the industry which has done most to understand the nature of, develop and exploit CAD is the computer industry. Superficially, it could be argued that this happened because there was in it a surfeit of cheap computer power and people who could understand (program) them. In reality, these factors, combined with an emerging, fiercely competitive industry, trying desperately to reduce exceedingly long timescales and high costs, caused it to become a leader in development and exploitation. This view is confirmed by the nature of the product which ranges from integrated circuit chips through printed circuit boards and operating system software to mechanical frames and piece parts. It is not claimed that one CAD system will handle this diversity of design disciplines, however, a vast amount of experience has been gained on where CAD may be used cost-effectively. 1.1 Scope and Definition of Terms

The phrase 'computer-aided design' has, from a purist point of view, become devalued, since it now embraces activities which relate to the design process but are not necessarily in the design loop. Engineering design is a process of decision making in order to produce information to enable correct manufacture. Thus CAD embraces the use of computer systems to improve decision taking, communication and information flow. For the purposes of this paper it is worthwhile subdividing CAD into two categories. These are not definitions, but merely a convenience to aid the understanding of an integrated approach to the subject.

CAD - This is defined as the interaction of a designer with a computer in order to aid design decision making. The interaction need not be real time; in fact there are many good reasons why the interaction should be via batch job turn-round. The essence is that the computer is processing information supplied by the designer and yielding results that enable the best design decision to be made. Examples of this are as follows:

(a) Simulation - here, a designer is performing experiments on a model of the system he is designing. The simulated results of these experiments will cause the designer to modify the parameters under his control and hence obtain an adequate design compromise.

(b) Component Placement - a designer may wish to minimise the total length of copper track on a printed circuit board. The position or placement of the components is the prime parameter, but this in turn is modified by track density profiles, technological rules etc. However, the designer can interact with trial placements in order to optimise around these parameters (a simple placement-improvement loop is sketched after these definitions).

DA - Design Automation - this is defined as automatic design translation or pushbutton design. The design algorithm is embedded in a program rather than in the mind of the designer. However, this is only achieved by sets of rules, codes of practice and compromises which make the design translation amenable to automation. The consequence of this is a restriction on design freedom, however, it does balance ease of design with ease of production. Given a product structured for automation, then DA acts as an amplifier.
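To make example (b) concrete, the following is a minimal, hypothetical sketch (in present-day Python, not any 1970s tool) of an interactive placement-improvement loop: a trial placement is repeatedly refined by pairwise swaps whenever a swap reduces the total Manhattan track length. The component names, board slots and net list are invented for illustration.

```python
# Illustrative sketch only: greedy pairwise-swap placement improvement,
# minimising total Manhattan track length of two-pin nets.
from itertools import combinations

slots = {"C1": (0, 0), "C2": (0, 1), "C3": (1, 0), "C4": (1, 1)}   # component -> board slot
nets = [("C1", "C3"), ("C1", "C4"), ("C2", "C4"), ("C3", "C4")]    # two-pin nets (invented)

def wirelength(placement):
    """Total Manhattan length of all nets for a given placement."""
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1]) for a, b in nets)

def improve(placement):
    """Accept any pairwise swap that shortens the total track; repeat until no swap helps."""
    best = wirelength(placement)
    improved = True
    while improved:
        improved = False
        for a, b in combinations(sorted(placement), 2):
            placement[a], placement[b] = placement[b], placement[a]   # trial swap
            trial = wirelength(placement)
            if trial < best:
                best, improved = trial, True                          # keep the swap
            else:
                placement[a], placement[b] = placement[b], placement[a]  # undo the swap
    return placement, best

final, length = improve(dict(slots))
print(final, length)
```

In a real CAD tool the designer would steer such a loop interactively, fixing some components and letting the program rework the rest against the technology rules.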


Instinctively, the world has linked CAD with hardware; indeed, this is the principal area of application. However, software, especially computer operating systems, networks etc., is in many ways a more deserving candidate for the attention of CAD. How many hardware projects consume 200 to 600 man years? When design management is considered later in this paper, this problem will be examined more thoroughly.

2. HISTORICAL DEVELOPMENT OF CAD

The milestones in the development of CAD have been well documented elsewhere and there is little point in reviewing them here; but it is instructive to examine the underlying reasons why there has been an acceleration in the growth of CAD systems. CAD hardware (computer power, graphics etc.) and software have grown with the size of the problem to be solved. Although some leap-frogging has taken place, in general both have kept pace.

In the late 1950s and early 60s, in the computer world, machines were relatively simple to understand (and hence, design!); consequently, the degree of assistance required in their design and manufacture was minimal. By the mid 60s to early 70s, with the advent of TTL small scale integrated circuits and the multi-layer printed circuit board, the complexity of the product increased by an order of magnitude. Hitherto, CAD had been the sole preserve of small isolated teams of programmers and dedicated hardware. This period saw the growth of CAD/DA 'systems'. Here, attempts were made to rationalise the product design requirements with the CAD tools available. Further, as the complexity increased, the production problems mushroomed and there was an increasing demand to provide numerically controlled machines with output from the design data files.

Up to this point, CAD was developed 'in house' by large manufacturers with large problems. Smaller concerns had started to see the advantages of CAD but did not have the necessary expertise or computer power to develop their own systems. Also, computer graphics under the guise of CAD systems had been sold by over-energetic salesmen and were proving to be 'white elephants' - fun to play with but of little relevance to design problems. It was in this atmosphere that the 'turnkey' CAD system started to evolve. Hardware and software engineering experts started to combine to form small companies. They tackled a limited range of problems (predominantly integrated circuit design), tailored the hardware and software to the problem, and made a lot of money. This, in many ways, was the salvation of the small company since, for a modest outlay, they could enjoy the advantages of CAD without the birth pains.

However, to quote from Ecclesiastes, 'He that increaseth knowledge, increaseth sorrow'. Manufacturers which are in the 'system' business must seek total systems solutions and, with VLSI accelerating upon us, erstwhile semiconductor manufacturers are rapidly becoming acquainted with the problems they bring. System problems are many-faceted. One of crucial importance is design integrity. It is not sufficient to be able to store, manipulate and delineate integrated circuit patterns onto a mask. Design integrity demands that the system concept be faithfully translated, perhaps through many levels of design decision, into the designed product. It is in this world that the integrated CAD system finds its living.

3. THE INTEGRATED CAD SYSTEM


In this section I am going to draw upon the ICL experience with its CAD systems. I believe it can be justly described as an 'integrated system' although, as with most industrial 'in house' products, it has evolved during its life. This evolution has, for the most part, been controlled but, as will become clear, rolling evolution is a direct result of an unclear view of the future at any point in time.

The current ICL Design Automation system is called DA4, ie it is the fourth generation of a CAD system. As the title implies, the primary concern is with design automation (translation) since, when it was conceived (1974), it was considered that this provided the most cost-effective solution to ICL's design problems. The overall structure is shown in figure 1. Since the service was provided on the ICL 1900 range of computers under George 3, the operating system was used to control the housekeeping of the file store. The primary design task in ICL is to design logic which will be physically realised to make computers to make money, and DA4 was tailored to this task. The overall CAD task is to balance ease of design with ease of production. Standardisation leads to dramatic savings in both design and production problems. Because of the company structure and the willingness of computer designers, technologists, test-gear designers etc. to co-operate with the DA team, the DA4 system (database and tools) became the unifying influence in design and production. As may be seen from figure 1, DA4 provides a 'total technology' outlook:

* High level system design language - here the computer is considered at the architectural level in terms of structure and behaviour. Simulation may be performed to confirm that the machine will obey, for example, the basic order code. Further, the design may be expanded, in an orderly top down fashion, and pattern comparison performed to ensure safe design decisions.

* Compressed logic data capture and automatic expansion to the level of detail required for implementation. When the system level description has reached a level low enough to be translated into detailed logic diagrams, engineers sketch the designs on gridded paper. These rough diagrams are coded by technicians and entered into the design database. This task is tedious and error-prone. Techniques such as multistrings (highways, buses etc.) and macrosymbols reduce the drawing and data entry problems and hence save time, reduce errors and show better logical flow.

* Microprogram assembly - there is a continuing debate on whether microprograms are true software or hardware conveniences. From the DA4 viewpoint, where the output from the microprogram assembler is often burnt into Proms and the flow diagrams go to the field engineer etc., they constitute a vital part of the total technology and as such must be supported. Further, they provide a useful source of test patterns for simulation.

* Logic simulation - an interactive tool used by many computer design projects. As distinct from high level simulation, this is concerned with complex logic elements, nominal and worst case delays, timing race and hazard analysis etc. The model library contains around 600 descriptions of the elements currently used in ICL computers. The tool itself has been optimised for interactive usage.

* Logic data file - the logic content of the computer is stored as 'pages' of logic. Conceptually, the whole computer can be thought of as an enormously large logic diagram. This diagram is cut into manageable portions (say 1000 gates) and called a page.

Fig 1. ICL's integrated CAD system
(Block diagram: microprogram assemblers, high level system design and detailed logic capture, supported by libraries and tables and by simulation and group checks, feed the logic design database; a 'map into physical' step produces the assembly file, from which drawings and documentation, placement and tracking, production control, artwork and tests are generated for PROMs, PCBs, back planes, chips and cables.)


The page is also a convenient drawing unit. ICL, like many major computer manufacturers, is diagram-based, ie it is the basic medium for communication. The logic page contains all the information necessary for the design and field engineer alike, for example logic cross-references and physical placement. Two basic forms of output are used: line printer - fast and inexpensive; dot matrix printer - high reproducible quality.

* Assembly Extract - as previously described, the data file contains 'pages of logic'. This basically logical description of the machine is used to construct the building blocks - integrated circuit chips, printed circuit boards, multi-layer back planes (platters) and cables. The process of mapping logical into physical is called 'Assembly Extract' and is performed with the aid of an engineer-generated 'flyfile' which describes which pages, or parts of pages, are to be mapped into a physical assembly. This process creates an assembly file upon which a variety of different tools act.

* Production Output - the variety of output is large and the data expansion up to three orders of magnitude:
  * Automatic tracking of printed circuit boards. Typical boards contain up to 150 dual in-line integrated circuits.
  * Automatic tracking of integrated circuits, typically 300 - 400 ECL gates on an uncommitted logic array.
  * Automatic technology-rule-obeying placement of components.
  * Fully validated manual placement.
  * Photographic artwork.
  * Drill tapes for a wide variety of machines on many sites.
  * Production control documentation.
  * Assembly drawings, silk screens for component insertion.
  * Control documentation for manual modification of boards.
  * Version control.

* Testing - this is an entire subject in its own right but, in brief, from the assembly file the following may be performed:
  * Automatic functional test pattern generation. To save computer power (money) some rules are applied to logic design. The benefit of these rules is that many thousands of board and IC chip types may have test patterns generated automatically. Typically, boards containing 30 1K RAMs, 1500 logic gates and 10 256-bit Proms will require 12 million bits of test data.
  * Verification of manually produced test tapes. Using fault simulation, the quality of the tapes may be assessed and the diagnostic resolution determined (a toy illustration of this fault-grading idea is sketched after this list).
  * Base Board Test - the tracks on printed circuit boards, before component insertion, may be checked for unwanted open and short circuits by means of computer controlled probes. The information to control this equipment is generated from the assembly file.

  * Probe Test - functional tests may be applied to each component, or group of components, on a board by probes (reading and writing) at preselected points. This is achieved by selectively powering up the components to be tested. The allocation of probe points and generation of tests is performed automatically.

* Group Checks - as previously described, the logic pages which describe the computer are partitioned into chips and boards. However, it makes sense to perform checks, physical and logical, on that group of boards which comprises a logical group. These checks include physical loading rule and timing path checks. The latter, in many respects, is much more economic than simulation.
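The fault-simulation step used to grade test tapes can be illustrated with a toy example. The sketch below is hypothetical and minimal (it is not the ICL tool): each single stuck-at fault is injected in turn into a two-gate network, and test patterns are kept only if they distinguish some not-yet-covered fault from the good circuit. The circuit, fault list and patterns are invented for the example.

```python
# Minimal, hypothetical sketch of fault simulation for grading test patterns.
# Invented circuit: z = (a AND b) OR c, with nets 'a', 'b', 'c', 'g1' (AND output), 'z'.
from itertools import product

NETS = ["a", "b", "c", "g1", "z"]

def simulate(a, b, c, stuck=None):
    """Evaluate the circuit; 'stuck' is an optional (net, value) single stuck-at fault."""
    def v(net, val):
        return stuck[1] if stuck and stuck[0] == net else val
    a, b, c = v("a", a), v("b", b), v("c", c)
    g1 = v("g1", a & b)
    return v("z", g1 | c)

faults = [(net, val) for net in NETS for val in (0, 1)]   # all single stuck-at faults
patterns = list(product((0, 1), repeat=3))                # exhaustive for three inputs

covered, tests = set(), []
for p in patterns:                                        # greedy pattern selection
    new = {f for f in faults if f not in covered
           and simulate(*p, stuck=f) != simulate(*p)}
    if new:
        tests.append(p)
        covered |= new

print("selected tests:", tests)
print("fault coverage: %d/%d" % (len(covered), len(faults)))
```

A production fault simulator works on the real netlist and the engineer's test tape rather than exhaustive patterns, but the grading principle, comparing faulty against good responses and accumulating coverage, is the same.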

This is only a cross-section of the ICL design automation scheme, but it does give a flavour of the 'total technology' outlook.

4. SYSTEM CAD

In this section I will try to take a more orthogonal view of CAD systems (capitalisation, company organisation etc.) and, from it, try to indicate some of the shortcomings of today's approaches and where the future lies.

4.1 Design

Not enough is known about the nature of design. Alternatively, if we fully understood the design decision making process, then we would be able to provide methodologies and tools that would provide a safe (error free) design evolution. Given that this is not the case, the task of the CAD system designer is to provide a framework in which design may take place in a controlled manner, and tools to assist this process. This framework will have many attributes but the most important is probably the man-machine interface. Whether this interface is a natural design language or graphical is of second order importance. The prime requirement is that the design engineer should be in harmony with his tools and that they should provide fast response on the implications of design decisions. Communication, or the lack of it, is a continuing problem. The framework (design database) should be capable of creating lines of communication between related design activities such that duplication, incompatibilities etc. can be avoided.

4.2 Conflicting Pulls

The overriding benefit of CAD is cost reduction. This may be translated into:
* Labour reduction, use of less skilled labour.
* Timescale reductions.
* Error reductions/design integrity.
* Effective use of manpower - ease of design/manufacture.
* Managerial control.
* Timeliness.

This list, although incomplete, does indicate the conflicting pulls in CAD.


At the one end, we have management requiring the tools to control the design of complex products such that they feel that they are in command. At the other end, we have production demanding a product which they can make at a price marketing will accept. In the middle is the design engineer, struggling against impossible timescales to produce an error-free design. In no way can all these requirements be satisfied without compromise. Unfortunately, in many respects, the area of compromise is in the freedom of the design engineer. For example, an engineer may design an integrated circuit to all the conventional constraints - minimum silicon area, correct power consumption etc. - but if it is non-testable his work has been wasted. These conflicting pulls upon the CAD system have not been satisfactorily met to date. As the complexity of the product increases, the freedom of the designer will be continually eroded unless a total systems view of the CAD system is taken.

4.3 Capitalisation

In this section I will discuss the computer hardware required to support CAD systems. Obviously, the size of computer and variety of peripherals is dependent upon the types of application, the number of users and the volume of job throughput. However, at the outset, the most important point is that undercapitalised CAD is a recipe for failure. Users of CAD systems expect to derive tangible benefits from their 'conversion' to CAD. If the service provided in terms of reliability, resilience and response is poor, they will very quickly become disillusioned and will justifiably claim that using a computer, despite the benefits, is actually slowing them down. To generalise, the use of CAD must provide at least three quantifiable benefits to the user. For example, an automatic printed circuit board tracking program provides:
* Labour reduction.
* Reduced timescale compared to the manual method.
* Reproducible results.

Unless benefits such as these are identifiable, then one should rethink whether CAD is appropriate to the solution of the problem.

Large mainframe computers have been the traditional workhorses of CAD. However, over the last five years, minicomputers have made inroads into the mainframe business. I do not want to get into the mini -v- mainframe arguments; however, at ICL I believe we have achieved the partition of computing activities between mini and mainframe which suits our activities. The configuration is shown in figure 2. The rationale underlying this arrangement is:

(a) A considerable proportion of the CAD service work is of a data processing nature - logic group error reports, drill tape production etc. - and is best suited to the background batch type of job.

(b) Some jobs - simulation, automatic tracking etc. - require fast response but demand considerable computer power.

(c) With some four hundred users of the service, a considerable amount of file management is required.


Fig 2. Minicomputer configuration
(Block diagram: a minicomputer with its own disk store, graph plotter, magnetic tape, digitiser, dot matrix printer, tablets and terminals for utilities and interactive work, linked to the mainframe and its design database.)

(d) Other activities are of an interactive graphic nature - interactive logic diagram modification, LSI circuit design - and these require 'instant response', which can best be supplied by a minicomputer. However, they are 'front ends' to the CAD system and thus cannot be considered in isolation from the mainframe/design database.

Project activities (a) to (c) are provided on the mainframe; for (d), various graphic work stations have been provided on different geographic sites within the company. In general these are used for LSI circuit design, interactive logic design and overall drawings for technical publications. Typical peripherals are:
* Graph plotters and dot matrix printers.
* Editing tablets and digitisers.
* Interactive storage screen terminals.
* Magnetic tape back-up.

In ICL we have found that, at present, this is a useful configuration of hardware and partitioning of design activities. However, as more powerful minis are being produced, the partition must be constantly reviewed.

4.4 Company Organisation

It is impossible to generalise on this subject. However, the position and status of the CAD system generators within the company is of vital importance. To take an extreme example, if they are located in the research department on a site remote from their customers, their impact on the company's products will be less than significant. In ICL the teams which provide hardware and software (operating systems) CAD are under the same manager. The combination, called a segment, enjoys equal status with:
* Technology - printed circuit board technologies, component evaluation, semiconductor device fabrication.
* Test Engineering - design of in-house test equipment, evaluation of OEM testers.
* Project Teams - computer design projects.
* Design Services - drawing office, technical authors etc.
* The operating systems segment.

and has a favourable relationship with the manufacturing division. This 'equal status' relationship ensures that, in the areas where CAD/DA is cost-effective, the magnitude and timeliness of support meets the product requirements.

4.5 Software CAD

Every programming manager responsible for large or medium scale projects must have felt that the problems associated with software production were inherent in the very nature of the software design process. Many of these problems, however, sprang from the way in which software design was viewed more as an art form than a science. By ignoring the techniques which are standard practice in hardware engineering, software planning and costing exercises tended to remain a rather hit-and-miss affair.


Then there was the seemingly inevitable gulf between design and implementation. The traditional approach, with an analyst specifying the design in a natural language like English for subsequent translation into machine-executable form by a different group of implementors, led to many misunderstandings and inefficiencies. This was aggravated by the imprecisions caused by using a natural language in the design, whereas there is no reason why the language used for expressing high level design concepts should be any less exact than the language chosen to express that design to the machine.

To control projects involving a large volume of code and many people there was also a need to have automatic facilities for progress monitoring and control as part of the philosophy underlying the design structure; but whether 20, 50 or 200 people are involved in a project, it has to be recognised that there is a considerable price to be paid to ensure effective design co-ordination and communication. An automated design technology must also take into account the problems of test and validation and of release, enhancement and maintenance of final products. This is particularly true for computer manufacturers where, for example, pieces of code the size of George 3 or OS360 cost a lot of money to design and code, more to build and debug, and a great deal more to enhance and maintain.

Over six years ago, on the threshold of embarking on a major programme of software development, ICL decided to overcome these traditional software production problems by developing an in-house technology of software design. This technology took a unified view of the formalisation of the design process, the use of a formal design definition language, the use of computer-aided design and design automation techniques, code documentation and standards, and project control procedures. This approach to software design and control (Cades) has been thoroughly aired and documented elsewhere. However, the lessons learned are equally applicable to hardware projects, particularly structural preservation. In large projects there is a requirement to be able to identify and preserve the overall structure - when design decisions are delegated, often the net result is that the nature of the product changes, since the effect of the decision is not reflected upwards. This is structural decay.

Hardware CAD excels in the range of tools available to create, manipulate and produce designs. The management of design has been less emphasised in the past but, in the next generation of CAD systems, this cannot be ignored and the software experience should be utilised.

5. ADVANCING THE STATE OF THE ART

In reviewing the history of CAD, it was observed that the computer aids kept pace with the complexity of the problems. In hardware design, the product complexity has remained within tolerable proportions, ie within the intellectual scope of man. However, with VLSI we can see a complexity barrier approaching, the early symptoms of which are a creeping paralysis in design. The initial design time for integrated circuit chips is lengthening at a predictable rate; however, the time and the number of iterations required to 'get it right' are escalating. Also, the testing of these devices is starting to become unmanageable, in that the goal of 100% testing is steadily retreating. The more cynical among the system manufacturers could claim that the semiconductor companies are starting to experience the problems they have lived with for years. This is partially true; however, using conventional SSI and MSI packaged devices on printed circuit boards does not present all the problems of integrated circuit design, the principal difference being that PCBs are a 'modifiable' technology, ie design or production errors may be fixed with the aid of a soldering iron.


This is not so with integrated circuits, and the onus is on the designer to minimise the number of design iterations and to produce a testable product. Of course, hardware design problems are only one facet of the total complexity barrier - it is not sufficient for turnkey CAD system vendors to claim that they have the tools for VLSI; the roots of the problems lie much deeper. 'Systems' are a combination of hardware and software - they cannot be considered independently. So, in conclusion, the key points for the future:

* Total design capture/coming together of disciplines - the next generation of CAD systems must support both hardware and software design and production. There should be as natural a relationship between the two in the CAD systems as exists in the product. Facets of this are the design language and the simulation of the interactions of hardware and software at various levels of design.

* Design and manufacture - a keystone of successful industry is the effective use of design manpower. This may be translated into the effective use of design automation to free designers, not from their responsibilities for production, but from detailed concern.

* Testing - it is difficult to determine how the test patterns for large integrated circuits will be determined; however, three points are clear:
  (1) Design for testability. There is little point in designing something which cannot be tested with contemporary methods and equipment and in a time period which reflects the complexity.
  (2) Fault model. The conventional model (nodes stuck high/stuck low), it could be argued, is not accurate enough to detect/diagnose all possible faults. On the other hand, with networks comprising many millions of nodes, the use of this model may be uneconomic (a rough estimate of this cost is sketched after this list).
  (3) Top down test pattern generation. The concept here is the abstraction of test patterns from the functional description of the system (hardware or software) used for design.
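Point (2) can be given a rough, back-of-envelope form. The sketch below is a hypothetical illustration only: it assumes two single stuck-at faults per node and a crude serial fault-simulation cost of one network re-simulation per fault per test pattern. All figures are invented to show the scaling, not measurements from any tool.

```python
# Hypothetical back-of-envelope sketch: cost of the single stuck-at model as networks grow.
def fault_sim_work(nodes, patterns, events_per_node=1.0):
    faults = 2 * nodes                              # each node stuck-at-0 and stuck-at-1
    # crude serial fault simulation: re-simulate the whole network for every fault/pattern
    events = faults * patterns * nodes * events_per_node
    return faults, events

for nodes in (1_000, 100_000, 10_000_000):          # PCB, LSI chip, hypothetical VLSI network
    faults, events = fault_sim_work(nodes, patterns=1_000)
    print(f"{nodes:>10} nodes: {faults:>10} stuck-at faults, ~{events:.1e} simulation events")
```

Even with generous assumptions the workload grows far faster than the network itself, which is the economic argument behind points (1) and (3).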

The convergence of design disciplines will remain a problem through to the 1980s but, as CAD systems evolve to give us a key to the solution of this problem, we can look forward to VLSI with genuine confidence.


PRODUCT SPECIFICATION AND SYNTHESIS

Douglas Lewin
Department of Electrical Engineering and Electronics, Brunel University
Uxbridge, Middlesex

The specification and evaluation of computer systems at the initial user requirement level is one of the most important and critical aspects of digital systems design. A formal specification of the system not only ensures that the user requirements are correctly translated into an acceptable design but also provides the essential basis for contractual and design documentation. A critical survey of existing methods of system specification, including such techniques as directed graphs, FSM theory, hardware description languages, simulation languages etc., is presented, followed by a brief review of the current state of synthesis methods. In so doing it is concluded that no suitable specification and design system is available at the present time, and possible reasons are given why this situation exists.

1. INTRODUCTION

Current digital and computer systems have now reached such a high degree of sophistication that conventional design methods are rapidly becoming inadequate. The major problem is the sheer complexity of the systems which are now feasible using LSI and VLSI subsystem modules such as micro-processors, micro-computers, ROM's, RAM's, PLA's, etc. In order to control and manage this complexity (in both software and hardware realisations) it has become necessary to enlist the aid of computers, and computer aided design techniques are now becoming accepted as essential design tools. Unfortunately CAD, though successfully used at the logistics and manufacturing levels, has not as yet realised its full potential when applied to system specification and the conceptual design stages. This is evidenced by the singular lack of success in attempting to develop realistic specification and evaluation languages, synthesis techniques and design methods for secure and reliable systems. At the present time, as shown by a recent EEC feasibility study (1), there is no viable specification and design scheme available for digital systems, and industry, just managing to cope with current technology, could well be faced with a major dilemma in the near future.

The objective of this paper is to review the current "state of the art" in product specification and synthesis. In so doing, the basic principles and similarities of the techniques which have emerged so far will be described, followed by an attempt to define the fundamental problem areas and future requirements.

2. PRODUCT SPECIFICATION

The most important property of any CAD scheme is the ability to accurately specify the system under consideration using a suitable representation. It is essential that the specification language should be able to provide an unambiguous and concise description of the system and be capable of serving as a means of communication between users, designers and implementers. Note also that a formal specification of the system not only ensures that the user requirements are correctly translated into a viable design but also provides the essential basis for contractual and design documentation.


In order to handle complex digital structures a specification language must be able to describe the system at several levels, that is on a hierarchical basis. At the top level is the behavioral (information flow) description, which treats the system as an interconnection of functional modules specified by their required input/output characteristics. The next level down is the functional (data flow) description; this partitions the system into subsystem components and details the logical algorithms (micro-programs) to be performed by the components with their corresponding highway transfers. At this level it should be possible to represent the algorithms in a variety of ways, for example in terms of Boolean equations, state tables, timing diagrams, flow-charts etc. (a toy register transfer style description at this level is sketched after the list below). Finally, at the lowest level, is the structural (implementation) representation, which describes in detail the actual gates, bistables, LSI and MSI chips, software data representation etc. used to physically realise the subsystem functions.

An ideal specification language should have the following characteristics:
a) capable of representing logical processes independent of any eventual system realisation;
b) facility to formally represent and evaluate the information flow in large variable systems at the behavioral level and also to analyse data flow at the functional level;
c) ability to handle concurrent processes and to provide insight into alternative partitions of the system;
d) act as a means of communication between users, designers and implementers;
e) able to proceed directly from system description to physical realisation using either software or hardware processes.
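As a flavour of the functional (register transfer) level, the following is a minimal, hypothetical sketch in present-day Python rather than a hardware description language such as CDL: registers are declared as data, and an ordered micro-program of register transfers describes the fetch and execution of a LOAD-like instruction. The register names, instruction format and micro-sequence are invented for illustration.

```python
# Hypothetical sketch of a functional-level (register transfer) description of LOAD.
regs = {"PC": 0, "MAR": 0, "MBR": 0, "IR": 0, "ACC": 0}
memory = {0: 0x1005, 5: 42}      # address 0 holds an invented 'LOAD 5' opcode, address 5 holds data

# Each micro-order is a labelled register transfer, mimicking the flavour of an RTL.
def t0(r, m): r["MAR"] = r["PC"]                     # MAR <- PC
def t1(r, m): r["MBR"] = m[r["MAR"]]; r["PC"] += 1   # MBR <- M[MAR]; PC <- PC + 1
def t2(r, m): r["IR"] = r["MBR"]                     # IR <- MBR
def t3(r, m): r["MAR"] = r["IR"] & 0x0FFF            # MAR <- address field of IR
def t4(r, m): r["MBR"] = m[r["MAR"]]                 # MBR <- M[MAR]
def t5(r, m): r["ACC"] = r["MBR"]                    # ACC <- MBR  (completes LOAD)

microprogram = [("t0", t0), ("t1", t1), ("t2", t2), ("t3", t3), ("t4", t4), ("t5", t5)]

for label, step in microprogram:                     # simulate one fetch/execute cycle
    step(regs, memory)
    print(label, regs)
```

The point of the sketch is the separation the text describes: the dictionary of registers is the data structure, and the ordered micro-operations are the control structure; a real RTL adds conditional labels, timing and concurrency on top of this skeleton.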

Numerous methods have been described in the literature for the description and design of digital systems; these techniques may be generally classified into three basic approaches, as follows:

i) Functional descriptive programming languages, such as hardware description languages (including register transfer languages), simulation languages and some general purpose high level languages such as APL.

ii) Finite State Machine (FSM) techniques, such as state-tables, regular expressions and flow charts, including the algorithmic state machine (ASM) approach, etc.

iii) Graph-theoretic methods, employing transition graphs, Petri nets, occurrence graphs, etc.

The principle methods will now be considered in more detail in the following sections. 2. I Register Transfer Languages ' The intuitive design procedures used in digital and computer systems engineering are normally centred around a predefined register configuration. The execution of a required system function (for example, a machine-code instruction) is then interpreted in terms of micro-order sequences (called a control or micro-program) which govern the necessary transfers and data processing operations between registers. Register transfer languages are based on this heuristic design procedure and allow the declaration of register configurations (the data structure) and the specification of the required data flow operations (the control structure). Thus the declarative section of the language in essence forms a linguistic description of the block diagram of a machine, with RTL operational procedures being used to specify the control programs. Note that the RTL procedures can be used for documentation and simulation purposes; it is also possible to generate Boolean design equations directly from the RTL descriptions. A typical register transfer language description (Chu's CDL) for the LOAD Instruction of a computer is shown in Table I. The first register transfer language was proposed by Reed (4) and was non-

PRODUCT SPECIFICATION AND SYNTHESIS

27

procedural in nature with a small vocabulary directly related to hardware elements; it was used essentially as an algorithmic language for defining microprograms. Due to the nonprocedural character of the language it was necessary to prefix each statement with a conditional label (either a clock pulse or flag value) detailing the conditions for executing the operations defined by the RTL state ments; thus the notation could be used to represent both synchronous and asynchronous systems. The Reed language however was very primitive, having no facilities for block structures or adequate means of handling branching operations such as test and jump instructions. Schorr (5) extended the Reed language by including timing pulses as an integral part of the conditional statements, and a form of GOTO statement. For example, it is possible In the language to write statements of the form: |t,? 3 | |tjS 3 | : D; : A + A I; I * tg I t2

where, i f 'S", = I and t| = I the operation ) is performed and the next state ment to be executed occurs in ^ 5 , that is a jump to \* takes place; i f S3 = I the alternative operation takes place. Note that A, and D are registers and S3 a flag bistable or register stage. Schorr's language not only provided a more practical means of documenting microprograms but also had the distinct advantage of being fully implemented using a syntaxdirected compiler based on ALGOL 60. Moreover the language had facilities for performing logic synthesis, and analysis, with microprogram statements being directly translated into the Boolean Input equations for the bistable registers. Reed's language was also used as the model for the LDT (logic design translator) language developed by Gorman and Anderson (6) and Proctor(7), LDT was a formally defined procedural language and included highlevel A LGOL type operators such as IF, THEN, ELSE, GOTO etc. More important however was the introduction of subroutine facilities which allowed system modules, such as counters, adders, etc., to be declared as high level blocks, thus enabling a hierarchal descrip tion to be employed. The main function of LDT was the derivation of the bistable equations, suitably optimised, directly from the RTL description. LDT also enabled a timing analysis to be performed, using a sequence chart approach (8) which enabled the individual register transfer operations to be displayed against time. Another ALGOL based language (though nonprocedural) was described by Chu and called CDL (9). This language had the advantage of being able to describe special operators (such as count up/down), predetermined sequences, branching and conditional transfers as well as the basic RTL operations. Unfortunately CDL had the major disadvantages of functioning in a synchronous mode, no facilities for block structures and the inability to describe independent concurrent operations. CDL was used primarily for the specification and simulation of digital systems and is still widely used in teaching. Though not originally conceived as a register transfer language APL (10) has been extensively used for algorithm definition and the description of computer architectures; In particular the language has found acclaim in the teaching of digital systems (II). A PL has also been used by IBM as the basis of the ALERT (automatic logic design generation) system (12) with modifications to allow the expression of control and timing functions and the representation of block structures and parallel processes. A LERT was basically a conventional RTL system with provision for translating the microprogram description into a minimised set of logic design equation for the registers and control logic. A LERT was implemented on the IBM 7094 machine and used to reproduce the design for an IBM 1800 computer; though the resulting design was logically correct it was found to be highly redundant in terms of hardware. ISP (instructionsetprocessor) was Initially developed to describe primitives at the programming level of design in the PMS and ISP descriptive system due to

28

D. LEWIN

Bell and Newel I(13). ISP is similarin characteristics to other register transfer languages but with facilities for handling block structures and concurrancy and the simple sequencing of processes. However ISP has been implemented and used to describe and simulate computer architectures (14). In particular it has successfully been used to perform comparison studies of computers for military use (15), and is being seriously considered as a standard hardware description language by U.S. government agencies. Though some of the languages described above have the ability to describe subsystem blocks, none of them have facilities for representing a partitioned system consisting of interconnected autonomous modules. In the digital design language (DDL) described by Duley and Dietmeyer (15) a system Is viewed as a collection of several subsystems or automatons, each possessing "private" facilities and having access to "public" facilities (common busses) which are used for intercommunication between automatons. In DDL a system Is specified using a block structured description, where the outermost block defines the whole system in terms of subsystem blocks (automata), global variables, input-output requirements etc., and the inner blocks specify the automata in terms of their state and I/O behaviour. The description itself is in Reed-like statements and contains the usual register transfers ana operators, including special operators and declarative statements for the system level description - Table 2 shows examples of the more usual operators and declarations. DDL is a non-procedural language and uses the concept of finite state machines to control operations - for example, by storing the state of the system in registers which can be tested and modified using special operators. As well as being able to describe digital systems the DDL specification can also be translated into Boolean and next-state equations to describe a hardware realisation. Other system design languages have been described in the literature. One such language is CASSANDRE, proposed by Mermet and Lustman (17) which was based on ALGOL and uses the block structures of that language to achieve system partitioning. A CASSANDRE description consists of defined units and their Interconnections; each unit may itself comprise a network of units. The language has been implemented on an IBM 360/67 machine but only used for logic level simulation and micro-program evaluation. A similar language is the CRISMAS system (18) which also uses a hierarchical block-structured definition language; no implementation of this language has as yet been reported. The CASD language (19) (computer aided system design) encompassed high level system descriptions, simulation at both systems and logic levels and automatic translation to detailed hardware. CASD was based on PL/I and used its block structuring facilities to develop the hierarchal specification. CASD,bas i cal ly a feasibility study, was never implemented. The LALD (language for automatic logic design) system (20) allows a multilevel system description in terms of Interconnected sub-system components. The control and data structures must be specified separately and the control structure can be Implemented using either hardware or software. LALD compilers have been reported for the CDC 6400 (using SNOBAL) and in PL/I for the IBM model 91. 
Though it would appear that considerable effort has been expended on the development of register transfer and hardware description languages very few have been adopted for use in a real engineering situation, and a viable cost-effective system still remains to be developed. Moreover, many of the systems described above have been outdated by the rapid progress In microelectronics. Problem orientated programming languages suffer from the inherent disadvantage that they have no formal mathematical structure. Consequently, system behaviour must be interpreted indirectly from program performance whilst operating on certain specified data types. Hardware description languages usually describe a digital system In terms of simulated components and their interconnections. In order to evaluate logic networks modelled this way It is necessary to perform a physical step-by-step examination of all the relevant input-output conditions. It will be obvious that this s a time consuming process and that large amounts of storage would be required to represent the circuit model. In addition since the system is described in terms of a topological model, rather than by formally

PRODUCT SPECIFICATION AND SYNTHESIS specified system functions, the description s of limited value for general communication purposes. Moreover, since register transfer languages are constrained to operate on well defined data types, they are normally restricted to hardware representation. The use of formal methods, such as FSM and graph theory, for system description would appear to have considerable potential - these techniques are described in the following sections.

29

2.2 Finite State Machine Techniques Finite state machine theory, using for example state-table representation, though theoretically capable of describing any digital system is not viable In practice owing to the considerable practical difficulties involved in expressing large variable problems and the inordinate amount of computation required to manipulate the resulting structures. This Is undoubtedly true, particularly if both control and data structures are represented In the same state-table. However large systems must inevitably be partitioned by the designer into sub-system components in order to comprehend their complexity, and If the concept of separately defining data and control structures is used state-tables can still be a useful aid in design. This is borne out by the algorithmic state machine (ASM) approach to design (21) which uses a flow-chart to specify the control logic for a system, the implementation of which draws heavily on FSM theory. The method was successfully used by Hewlett Packard for the design of calculators etc., but currently no computer implementation is available. A formal approach to system description based on FSM theory was originally described by Keene (22), who showed that any finite state, deterministic, synchronous automaton can be described by a regular expression, and that, inversely, every regular expression can be realised as a finite state machine(23). Thus regular expressions constitute a formal language which can be used to characterise the external (input-output) behaviour of sequential circuits (combinational circuits being treated as a special case). Later work by Brzozowskl (24), using a derivative of a regular expression described an easy-touse and systematic method of transforming a regular expression to a state-table. Regular expressions are used to describe the required set of input sequences (in terms of algebraic operations on sequences of O's and I's) to a FSM in order to generate an output. Thus the behavioural description for a FSM can be reduced to an algebraic formula. Though regular expressions would appear to have many of the characteristics required by a specification language, for example, a formal structure capable of analysis, direct implementation etc., there are considerable disadvantages in practice. Contrary to what has been written the method Is not easy to use and design engineers find the formalism very difficult to apply, encountering considerable difficulties in converting from a verbal description to the algebraic formulation. Another basic disadvantage is that the language is really only suitable for FSM's with a single output terminal. Consequently with multipleoutput circuits it is necessary to derive separate regular expressions for each output terminal. It will also be obvious that the method automatically specifies both the control and data structures and hence would certainly lead to computational difficulties with large variable circuits. Finite state machine methods, as well as having practical drawbacks, also suffer from a more fundamental disadvantage. In general the FSM accepts a serial input (or Inputs) and progresses from state to state producing an output sequence (or sequences) in the process. Due to its finite memory limitation (that is, the number of internal states) the FSM is bast suited to describing systems where the amount of memory required to record past events (that is the effect of earlier inputs) is both smaI I and finite. 
For example, serial systems (such as pattern detectors) where the computation can proceed as a step-by-step operation on the input, and the amount of Information required to be 'remembered' Is very small. However some processes, such as serial multiplication, require to have al I the Input data available before the computation can proceed. Moreover large amounts

30

D. LEWIN

of information could need to be stored during the course of the operation, (for example, the accumulation of partial sums in the case of multiplication). Thus it follows that the FSM has the inherent disadvantage that it is impossible to specify a machine which requires the manipulation of arbitrarily large pairs of numbers. Note also that the FSM lacks the ability to refer back to earlier in puts unless the entire Input sequence Is initially stored; this implies that the Input sequence of interest must be of known finite length. These limitations can of course be overcome by using an Infinite machine model, such as the Turing machine (25), where the available memory Is theoretically uni Imi ted. 2.3 Directed Graph Methods One mathematical tool which is finding increasing application in computer systems design and analysis is graph theory (26) and many of the more successful specification methods are couched in graph theoretic terms. A directed graph is a mathematical model of a system showing the relationships that exist between members of its constituent set. The elements of the set are normally cal led vertices or nodes, with the relationships between them being indicated by arcs or edges. An example of a directed graph is shown in Figure la where the set of nodes is given by N {n|,2,3,4,5} and the set of edges by E = {e|, e2,63,64,es,e&,} . Graphs may be classified into various types depending on their properties. For example a net shown in Figure lb is a directed graph consisting of a finite nonempty set of nodes and a finite set of edges; note that a net may have parallel edges, that is two nodes connected by two different edges but both acting in the same direction. A gain, a net which does not contain parallel edges but with assigned values to its edges is called a network as shown in Figure Ic. Directed graphs have been used, for instance to represent information flow In control and data structures, parallel computation schemata, diagnostic pro cedures in logic systems etc. The major advantage of using graph theory, apart from the obvious visual convenience, Is that formal methods exist for the manipulation of graph structures, which can be represented by matrices for computer processing. The Transition graph is a simple example of a directed graph used to represent automata. It consists of a set of labelled vertices connected by directed arcs and in every graph there is at least one starting vertex and at least one terminal vertex. Each directed arc is labelled with symbols from the input alphabet of the machine (I {0,1} in the case of a binary system). A sequence of directed arcs through the graph is referred to as a path and describes the input sequence consisting of the symbols assigned to the arcs In the path. A n Input sequence is said to be accepted by the graph if a path exists between a starting and terminal vertex. Transition graphs have the advantage over statediagrams (which are a special case) in that it is only necessary to define the input sequences of direct interest, alternative input transitions being omitted. Thus the transition graph Is nondetermnistlc in the sense that, unlike statediagrams, it is incompletely specified. The transition graph also provides a convenient shorthand for representing deterministic machines, since it is always possible to convert a transition graph into an equivalent statediagram (27). However in general it is difficult to derive a transition graph which faithfully represents a required machine specification. 
Another directed graph approach which has found considerable application in the description and analysis of digital systems is the Petri net (28)(29). The Petri net Is an abstract, formal graph model of information flow In a system consisting of two types of node, places drawn as circles and transitions drawn as bars, connected by directed arcs. Each arc connects a place to a transition or vice versa; in the former case the place is called an input place and In the latter an output place of the transition. The places correspond to system conditions which must be satisfied in order for a transition to occur; Figure 2 shows a

PRODUCT SPECIFICATION AND SYNTHESIS

31

typical Petri net. In addition to representing the static conditions of a system the dynamic behaviour may be visualised by moving markers (called tokens) from place to place round the net. It is usual to represent the presence of tokens by a black dot inside the place circle; a Petri net with tokens is called a marked net. A Petri net marking is a particular assignment of tokens to places in the net and defines a state of the system; for example, In figure 2a the marking of places and C defines the state where the conditions and C hold and no others. Progress through the net from one marking to another, corresponding to state changes, is determined by the firing of transitions according to the rules; a transition is enabled If all of its input places hold a token any enabled transition may be fired a transition is fired by transferring tokens from input places to output places; thus firing means that instantaneously the transition inputs are emptied and a I I of its outputs filled. (Note that transitions cannot fire simultaneously, thus only one transition can occur at a time). This is illustrated in figure 2, where 2a shows the original marked net and 2b the state of the net after firing transition a; note that the Petri net is able to depict concurrent operations. After two further firings the net would arrive at the marking shown in figure 2c, here the net is said to be in confi iet since firing either of the transitions d or e would cause the other transition to be disabled. In general a conflict will arise when two transitions share at least one input place; Petri net models are normally constrained to be confIict free. Another limitation imposed on the model is that a place must not contain more than one token at the same time: this condition leads to a safe Petri net. A I I ve Petri net is defined as one in which it is possible to fire any transition of the net by some firing sequence, irrespective of the marking that has been reached. Note that a live net would still remain live after firing. The Petri net model described above may be extended into a Generalised theory by allowing multiple arcs between transitions and places, thereby allowing a place to contribute <or receive) more than one token. Further extensions to the basic model, such as including inhibiting arcs have also been suggested. It is also possible to define sub-classes of Petri net; of particular interest is the statemachine, which restricts a Petri net such that each transition has exactly one input and one output; note that this model is directly equivalent to a finitestate machine. The Petri net is considerably more powerful than the FSM model in that It can represent concurrent operations and permit indeterminate specification, hence It can provide a more faithful representation of complex system behaviour. Moreover, it has been shown that any generalised extension of the Petri net is equivalent to a Turing machine, thus the modelling power of the Petri net can be considered to be slightly below that of the Turing machine. An essential property of any model Is that it must be possible by analysis to obtain precise information about its characteristics. The FSM model for example, since it has a finite number of states, can theoretically provide the answer to any question concerning Its behaviour. However the Turing machine, because of its unbounded memory, is very difficult to analyse if a definite answer to a behavioural question Is required. 
Thus we have a fundamental difficulty that the more powerful a model the more difficult It is to algorithmically determine its properties. Petri nets have been extensively used to model and evaluate the control structures of logical systems in both software and hardware design. In addition it has been shown (30) that it is possible to replace the individual elements of a Petri net by hardware components, thus providing a direct realisation of the control circuits. In software design Petri nets have been used to model the properties of operating systems such as resource allocation and deadlock situations (related to the liveness of a net).(31) Petri nets can also be used to model a) b) c)

32

D. LEWIN

hierarchal structures, since an entire net may be replaced by a single place or transition at a higher level. The major advantage of the directed graph approach is that it is amenable to mathematical analysis and many authors (32)(33) have described algorithmic methods for their analysis. In the main the techniques apply to the control graph function only, known as an uninterpreted analysis, and no allowance is made for operations performed in conjunction with the data structure. Though Petri nets have many of the properties required for the specification and design of digital systems to date there has been only one example of its use In a CAD system, and that on an experimental basis. Project LOGOS (34)(35) conceived at Case Western University was based on Petri net principles and had the objective of providing a graphical design aid which would enable complex parallel systems to be defined (on a hierarchal basis) evaluated at any level and then finally implemented in either hardware or software. The LOGOS system employed two directed graphs, one for data flow (the data-graph, DG) and one for control flow (the control-graph, CG) to define a process leal led an activity). Though it was found possible to realise the control operators In the CG the problems of transforming the DG components was never fully resolved. In addition the computational problems encountered n attempting to perform an interpreted analysis (involving both CG and DG structures) of an activity was found to be extremely difficult. Though the LOGOS system was the most ambitious attempt to date to develop an Integrated CAD system, it nevertheless still requires considerable further development before it can become a viable design tool. 3. SYNTHESIS OF DIGITAL SYSTEMS ( 3 6 )

An essential prerequisite to any synthesis package is a suitable specification and evaluation language, otherwise the problem is reduced to one of minimisation and implementation of design equations. As we have seen, many of the specification techniques described above incorporate some method of hardware realisation, for example, RTL's, LOGOS, ASM charts, etc., but these in the main rely heavily on conventional switching theory. Purpose designed synthesis systems such as CALD (37) and MINI (38) employ a tabular or cubic notation to input Boolean design equations and then use heuristic techniques to obtain a near minimal solution; the resulting circuits being realised In terms of basic gates and bistable elements. Though these procedures have some application In the design of MSI sub-system components, for example in reducing the required surface area of the chip, their usefulness in system design Is strictly limited. The CALD and MINI systems, together with numerous other examples of synthesis techniques (39), all rely heavily on classical switching theory. Unfortunately semiconductor technology has progressed to such an extent that the use of minimisation methods and Implementation in terms of NOR/NAND logic Is no longer relevant. Current exceptions to this are in the use of PLA's, which utilise multiple output SOP's terms, and implementing FSM's using ROM's (40) where minimising the states will reduce the number of words required In the memory. Notwithstanding, theory has been (and is still being) outpaced by technology and a major and severe problem now exists due to the lack of a suitable design theory at the sub-systems level; for example, algorithmic techniques for the realisation of systems using ROM's, PLA's, etc. The situation is becoming even more critical now that programable electronics such as microprocessors and micro-computers are being used as sub-system components. The specific question of logic circuit synthesis has become subsumed by the general problem of computer systems engineering, including the vital topic of specification and evaluation. Moreover, it is essential that a "top-down" approach to design be adopted to allow the system to be partitioned into viable and compatible hardware and software processes. Thus at the systems level It is no longer possible to divorce hardware and software techniques, and It is essential that any synthesis procedure should take Into account the design of software, as an alternative to hardware, in system realisation. Specific hardware design techniques are still required at the LSI component level, though even here conventional techniques are of little use and certainly

PRODUCT SPECIFICATION AND SYNTHESIS they will not be able to cope with future VLSI circuits.. It will be apparent that the whole question difficult problems remain to be solved, and maintained. It Is vital that the dichotomy software engineering is obviated, since the by adopting a general systems approach. 4. DISCUSSION

33

of synthesis is wide open and may must be solved, if progress is to be that now exists between hardware and synthesis problem can only be solved

There are many difficult problems to be solved before a viable specification and design language for digital systems engineering can be developed. Register transfer languages are adequate for the design of register structured systems, but they are specifically hardware orientated, and since formal methods are not possible, evaluation must be performed using simulation techniques. Another disadvantage is that the languages tend to generate very simple constructs. This is due to the languages providing only simple elements and the users perpetuate the situation by designing at a low level. Another problem occurs in the generation and use of library routines for components used to represent complex MSI and LSI circuits and other data structures. There are two distinct cases when subsystem blocks are required: a) to represent a component or sub-routine which will be used by the system many times over, but not actually implemented each time; for example, an arithmetic unit or any complex data-processing structure; the insertion of a standard hardware component (analogous to a software macro) such as a multiplexer unit, which needs to be implemented as such in various places in the system.

b)

The major difficulty comes in isolating identical functions and, if necessary, merging them together. It is this fact which accounts for much of the redundancy encountered in RTL implementation schemes. The problem is also relevant when considering the implementation of Petri net schema, as for example in LOGOS, and generally for all systems which separate the control and data functions. It has already been suggested that the FSM model has severe limitations when used to specify complex systems. These limitations can of course be overcome by using Infinite or unbounded models such as the Turing machine or Petri net. Using this type of model the designer is unconstrained in his thinking, allowing general logical processes to be specified without reference to a particular Implementation. Unfortunately the transformation from a conceptually unbounded model to a practical realisation can, and does, present serious difficulties. Another fundamental problem is encountered In the analysis of large systems. It would appear inevitable that, if a detailed analysis of a logic algorithm is required (say in seeking the answer to a specific question) there is no other choice but to examine all possible alternatives in an iterative manner. In general, particularly if an unbounded model is adopted, it is necessary in order to determine the system operation to constrain the analysis to a restricted set of input and state conditions. This means, for example, that only particular paths through a Petri net are allowed, and the technique results in a loss of information and affects the accuracy of the model. If exact information about a system is sought the Petri net must be examined (ideally in an interpreted mode) for all possible firing sequences. In general a detailed analysis of the modelled system proves to be prohibitive In computer time. The specification techniques described above have included both special purpose languages and graphical methods. It will be obvious that from the users (and designers) point of view the use of formal graph theory could present an intellectual problem. Though graphical techniques have a visual advantage it would appear that a language approach based on formal methods would be preferable.

34
5. CONCLUDING COMMENTS

D. LEWIN

It would appear that at the present time there is no ideal specification, evaluation and synthesis scheme available for digital systems design. Directed graph techniques appear to hold the most promise as the basis for specification and analysis, but there are nevertheless many fundamental problems remaining to be solved before a viable CAD system can be evolved. The urgent need for developing a design automation scheme is indisputable, but is there any hope of finding a solution to the basic problems? In the short term much might be accomplished by jettisoning the philosophy of attempting to develop a general design language which can serve all purposes - from user level specification, through evaluation down to system realisation - and concentrate on a specific language and methodology for each function with well defined transformations from one to the other. Unfortunately the complex systems which will be required in the not-too-distant future require a fundamental reappraisal of the available theory. The need for a system level theory capable of handling the Interconnection of LSI modules working in a concurrent mode has already been stressed. Sutherland and Mead (41) go further in suggesting that a new theoretical basis for computer science Is required, based on spatial distribution and communication paths, rather than the theoretical analysis of sequential algorithms. There is also another, perhaps even more fundamental, aspect to the problem. Computer science today is concerned primarily with the derivation of efficient (and correct) algorithms for sequential machines based on a deterministic binary model. Thus the scientist searches for a solution which enables system behaviour to be explicitly described by some mapping or transformation of the Input variable onto the output. With the complexity of today's systems It is naive to expect to be able to derive design algorithms which ensure that a known and correct output will occur for a given set of input conditions. This would be possible of course for a closed system with a small bounded set of input and state variables. Systems today, however, are more characteristic of an open system since in practice they possess an unbounded set of Input and state conditions which corresponds to an interaction with the total environment. For example, digital systems with user Interaction (say a sophisticated real-time system) have the characteristic that the fundamental structure (or algorithm) can perform many different functions depending on how the basic input set is modified by the user, even to the extent of reconfiguring its original structure. This property is typical of an open system, and in the case of a digital system should be recognised as such. It follows directly from this assumption that it may be impossible to predict exactly what the response of a system will be to any given stimuli, and consequently a probabilistic approach, or similar alternative, must be employed. Thus to continue looking for design methods which will ensure complete and absolute correctness may well be a futile search and at variance with all we hope to achieve in the future.

PRODUCT SPECIFICATION AND SYNTHESIS TABLE I CDL Description for LOAD Instructions Register, R A C D F (0 (0 (0 (0 (I 17) 17) II) II) 18)

35

G H (I Subregister,

5)

Buffer register memory M Arithmetic Register Address register for memory Program register Buffer register for control memory CM Stop/Start control register Address register memory CM Operation code part of register R Address part of register R Address part of register F Main Memory Control register Power switch Start switch Stop switch Three phase clock R - (C) H t R (OP) , C If (G) THEN (F C R(ADDR) , D + count up D CM(H) ) ELSE ( + 0, 0, D * 0, R > 0)

R(OP) R(0 5) R (A DDR) = R(6 17) F (A DDR) = F(l 5) M (C) = M(0 4095,0 17) CM (H) = CM(0 31, I 18) Power (ON) Start (ON) Stop (ON) (I 3)

Memory, Switch,

Clock

F (6) F (7) F (8) END

Pd)
(2) (3)

TABLE 2 Declarations and Operators in the DDL Language a) Declaration Type MEmory or REglster TErmlnal BOolean OPerator ELement STate Automaton SEgment SYstem I Dent I f 1er Time Hardware n or onedimensional arrays of bistabies n dimensional set of wires, terminals or buses logic network defined by Boolean equations combinational circuitry shared among facilities InputOutput terminals of standard module defines states of an automaton defines an automation composed of FSM and facilities defines portion of the automaton which contains the declaration defines a system with K automata and the system's pubi le faci IItles assigns identifiers to previously defined operands periodic clock or signal generator

36

D. LEWIN

b)

Operators

Activation Connection

A VID (CSOP) ID = BE

CSOP is a set of operations that effect automaton A VID The terminals ID are connected to the network defined by Boolean expression Memory elements ID are loaded from network defined by Boolean expression Execute a transition to state SID (in the same block) Execute a transition to state NID in segment SEG and return to state RID upon execution of a return operation Return to the state specified by a transition type 2

Transfer Transition type 1 Transition type 2

ID BE * SID =* SEGID(NID,*RID)

Return transition
li

THEN THEN ELSE

I BE | I BEI

CSOP, CSOP,; C S O P Q

If BE=I, execute CSOP| If BE=I, execute CSOP,, If BE=0 execute CSOP

REFERENCES CAD Electronic Study Report on feasibility study commissioned by EEC Hardware Description Languages IEEE Computer 7(12) (1974) (Special issue). M.R. Barbacci: A Comparison of Register Transfer Languages for Describing Computers and Digital Systems. IEEE Trans. Computers C24 (1975) 137150. I.S. Reed: Symbolic Synthesis of Digital Computers. Proc. ACM Sept. 1952 9094.

October 1978.

H. Schorr: ComputerA ided Digital Systems Design and Analysis using a Register Transfer Language. IEEE Trans. Electronic Computers ECI3 (1964) 730737. D.F. Gorman and J.P. Anderson: AFI PS FJCC 22 (1962) 251261. A Logic Design Translator.

6.

R.M. Proctor: A Logic Design Translator Experiment Demonstrating Relationships of Language to Systems and Logic Design. IEEE Trans. Elee. Computers ECI3 (1964) 422430. J.P. Roth: Systematic Design of Automata AFI PS FJCC 27 (1965) 10931100. Y. Chu: A n ALGOLlike Computer Design Language. Comm.ACM 8 (1965) 607615. 10. II. K.E. Iverson: A Programming Language. John Wiley New York 1962.

F.J. Hill and G.R. Peterson: Digital Systems : Hardware Organisation and Design. John Wiley New York 1973.

PRODUCT SPECIFICATION AND SYNTHESIS

37

12. 13. 14. 15. 16. 17.

T.D. Friedman and S.C. Yang: Methods used In an Automatic Logic Design Generator (A LERT). IEEE Trans. Computers CI8 (1969) 593614. C.G. Bell and A. Newell: The PMS and ISP Descriptive System for Computer Structures. A FIPS SJCC 36 (1970) 351374. M.R. Barbacci: The ISPL Compiler and Simulator User's Manual. Computer Science Dept. Tech. Report. CarnegieMelon University M.R. Barbacci and A. Parker: Using Emulation to Verify Formal Architecture Descriptions. IEEE Computer 11(5) (1978) 5156. J.R. Duley and D.L. Dietmeyer: A Digital System Design Language (DDL). IEEE Trans. Computers CI7 (1968) 850861. J. Mermet and F. Lustman: CASSANDRE : Un Language de Description Machines Digitales. Rev.Fr.Inf.Rech.Operationel le No. I5B3I3F (1968) 335. F.Leraillez, A.Sarre and .Waterlot: CRISMASS : A Tool for Conception Realisation and Simulation of Sequential Synchronous Circuits. IEE Conf. CAD IEE Pub. No. 51 (1969) 5971. E.D. Crockett, D. Copp, J. Frandeen, C. Isberg, P. Bryant, W. Dickinson and M. Paige: Computer Aided System Design. AFIPS SJCC 36 (1970) 287296. M.B. Baray and S.Y.H. Su: A Digital System Modelling Philosophy and Design Language. Proc. 8th Annual Design A utomation Workshop (1971) 122. C.R. Clare: McGraw Hill Designing Logic Systems using State Machines. New York 1973. A ug.1976.

18.

19.

20. 21. 22.

S.C. Kleene: Representation of Events in Nerve Nets and Finite Automata. Automata Studies Annals of Math. Studies No. 34 (1956) 341. Princeton University Press. J.A . Brzozowskl: A Survey of Regular Expressions and their Applications. IRE Trans. Electron Computers E C U (1962) 324335. J.A . Brzozowskl: A Derivative of Regular Expressions. J. Assoc. Comp. Mach. Il (1964) 481494. M.L. Minsky: Computation Finite and Infinite Machines Prentice Hall Englewood CI I ffs NJ 1967. Chapter 6

23. 24. 25. 26. 27.

P.O. Stigall and 0. Tasar: A Review of Directed Graphs as applied to Computers. IEEE Computer 7(10) (1974) 3947. G.H. Ott and N.H. Feinsteln: Design of Sequential Machines from their Regular Expressions. J. Assoc. Comp. Mach. 8 (1961) 585600. C.A . Petri: Communication with Automata. Schriften des RhelnischWesterlIschen Institutes fur Instrumentelle Mathematik an der Universitt Bonn 2 (1962). J.L. Peterson: Petri Nets. A CM Computing Surveys 9(3) (1977) 223252.

28.

29. 30.

S.S. Patii: On Structured Digital Systems. Proc. Int. Sym. on Computer Hardware Description Languages and their Applications. New York (1975) 16. J.B. Dennis: Concurrency in Software Systems. Computation Structures Group Memo 651 Project Mac MIT June (1972) 118. F. Commoner, A.W. Holt, S. Even and A. Pneu I i : Marked Directed Graphs. J. Comp. System Sci. 5 (1971) 511523.

31. 32.

38

D. LEWIN

33.

R.M. Karp and R.E Mi I 1er: Properties of a Model for Parallel Computation : Determinacy, Terminations, Queuing. J. Appi. Math. 14 (1966) 1300-1411. F.G. Heath: The LOGOS System. IEE Conf. on CAD IEE Pub. 86 (1972) 225-230.

34. 35. 36. 37.

C.W. Rose: LOGOS and the Software Engineer. AFIPS FJCC 41(1) (1972) 311-323. D. Lewin: Computer Aided Design of Digital Systems. Crane-Russak New York 1977. D. Lewin, E. Purslow and R.G. Bennetts: Computer Assisted Logic Design - the CALD System. IEE Conf. CAD IEE Pub. 86 (1972) 343-351. S.J. Hong, R.G. Cain and D.L. Ostapko: MINI - A Heuristic Approach for Logic Minimisation. IBM J. Res. Dev. 18 (1974) 443-58. W.M. Van Cleemput: Computer Aided Design of Digital Systems - A Bibliograph. Computer Science Press Inc. Woodland Hi I Is Calif. (1976) 13-93. H.A. SholI and S.C. Yang: Design of Asynchronous Sequential Networks using Read-OnIy Memory. IEEE Trans. Computers C24 (1975) 195-206. I.E. Sutherland and C.A. Mead: Microelectronics and Computer Science. Scientific American 237(3) 1977 210-228.

38.

39. 40. 41.

PRODUCT SPECIFICATION AND SYNTHESIS

39

Figure 1 - DIRECTED GRAPHS


Pi Parallel Ec Edges

a)Directed graph

b)Net

c ) Network

40

D.

LEWIN

Figure 2- PETRI
Token
x

NETS
D

... Transition,

a) Marked net

b) Net

after

firing

c) Conflict situation

od digital elejittonlc cAaUt and iyitenu Nonth-Holland Publliliing Company


ECSC, EEC, EAEC, Staiteli S Luxembourg, 1979

SIMULATION OF DIGITAL SYSTEMS: WHERE WE ARE AND WHERE WE MAY BE HEADED*

S.A. Szygenda Electrical Engineering Department EMS 517 The University of Texas Austin, Texas 78712 I. INTRODUCTION As LSI technology gives way to VLSI's technology, the constraining point in future development is going to be the ability to verify and test such systems. Manual design and test set verification, for such systems, will be impossible. Prototyping, which has been a commonly used method, will also become impossible for these large systems. This is particularly true when one is considering accurate timing analysis. The reason for this is that prototyping of a system often occurs in a technology other than the one that is eventually used for the system. Therefore, when the prototype is established to be working correctly, the only thing that has been verified is the logical correctness of the device, and not its timing properties. The third method, which can be used for logical verification, timing analysis, and test set verification, is digital logic simulation. The problem is that the state of the art of digital simulation today is only adequate to process 5000 to 20,000 elements, in a reasonable manner, and these elements are normally low level Boolean gates or flip-flops. Ono way to increase the capability of simulation, in order to be able to handle VLSI, is to deal with digital logic at a more abstract level, i.e., modular or functional level simulation. The way this is normally accomplished is by making a tradeoff between accuracy, or detailed analysis, and the level at which one simulates the network. The more accuracy and detail that is sacrificed the higher the level at which one can simulate the network. The problem with this approach is that as the integrated circuit density becomes larger the requirements for accuracy actually increase, not decrease. The reason for this is that problems with the timing become more critical and correction of these timing problems after fabrication becomes more costly. The objectives of our work have been to increase the capabilities of simulation-both simulation of a fault-free network for logic verification and timing analysis, and simulation of faculty networks for the verification of test sets. This includes the capability of simulating, and generating tests, for large networks in a cost effective manner, while maintaining the level of accuracy required. The following sections of this paper present some concepts which we have been developing, in an attempt at satisfying the objectives stated above. Results of this work are also presented. Section II of this paper will consider the state of the art for simulation, as well as a discussion of the evolution of this capability toward functional simulation. Storage and run time results for some existing functional elements are * This work was supported in part by Comprehensive Computing Systems & Services Inc., Austin, TX.

4 1

42 also given in this section.

S.A. SZYGENDA

In Section III we discuss automatic partitioning of a digital network into small combinational units which can be simulated at a higher level, with similar accuracy to that achievable at the gate level. Automatic generation of element models to be used in the simulation, is also discussed. Section IV, considers the development of algorithms and data structures to support very accurate modeling of functional units for non-fault and fault simulation. This work utilized concurrent simulation concepts, and has demonstrated both advantages and disadvantages of this technique. Section V, considers a general diagnostic test generation system that would interact with a fault simulator. In this section, topological concepts are considered, including depth of sequentialness and degree of sequential ness. II. EXISTING SIMULATOR CAPABILITIES

The measure of a good simulator is the accuracy and efficiency with which it does its job. Early simulation systems unfortunately possessed neither of these qualities. They were extremely expensive, difficult to use, and plagued with model inaccuracies. They did, however, demonstrate that digital logic simulation was feasible and necessary for design verification and fault diagnosis. Before proceeding with this discussion,a distinction should be made between the terms design verification and logic verification. Design verification is accepted by most to mean that the logic correctly performs the function that the designer intended, including detection of races and hazards, within the limits of the simulator. Occasionally a subset of the design verification problem is considered, where timing, race, and hazard analysis are not performed; this is called logic verification. Early simulation systems were characterized by zero delay models for the elements, primitive single output Boolean element types, and two value models; namely, zero and one representations for signal values. The executable code was of a compiled variety, making timing analysis impossible. Hence, their use was restricted to logic verification only. These systems only considered classical stuck-at-one and stuck-at-zero faults as their fault models. It is generally accepted that these early simulators suffered from a lack of accuracy and high cost of use. It is also quite clear that exact simulation and modeling of a sizable physical system is impossible, because of the infinite number of possible fabrication errors, etc. However, it is essential that the gap between the simulation model and the physical system be reduced, as much as possible. This objective has given rise to present day simulation systems. Present day systems are capable of doing detailed timing analysis (using multivalued simulation philosophies) with spike, hazard and race analysis. They also permit different types of fault models. In addition to the familar classical stuck-at-zero, stuck-at-one models, they can handle shorted faults, complex transformation faults, shorted input diodes, multiple faults and, to some degree, even intermittent faults. Furthermore, these present day systems can simulate rather large number of elements, they can perform oscillation control and they have numerous other user options; for various design verification and fault simulation applications. Functional Simulation Functional simulation can be considered as part of the present day simulation capability; although a number of questions still remain with respect to increased accuracy and efficiency of functional level simulators. We will consider some

SIMULATION OF DIGITAL SYSTEMS

43

results that are presently achievable with functional level simulators in this section. In a later section of this paper, we will discuss some of the more recent research that is underway in this area. It must be remembered that the objective of function simulation is to lose a minimum amount of accuracy while making considerable gains in speed and storage. Therefore, the following questions are typical of those that must be answered before true functional level simulation can be achieved. - How do we achieve accurate modeling of the functions, including propagational delays? - How are faults internal to the functional models handled? It can be easily demonstrated that numerous internal faults do not manifest themselves as input-output pin faults; therefore, the philosophy of modeling only input-output pin faults does not provide for complete coverage of internal faults. - How is timing through the functional models handled? - How are illegal inputs to these modules handled, and what output is produced when illegal inputs exist? - How are multiple values propagated through functional modules? - How are faults propagated through the functional modules? These are not I/O or internal faults to the module, they are faults that may occur upstream of the particular functional module, and they must be propagated through the module. - What are the most efficient implementation techniques for functional simulation? - How does one provide nested functional simulation capability with the accuracy desired? While many of these questions still remained to be answered, in the most complete sense, some functional simulation capability does exist today, where these questions have been answered in a limited sense. A functional system that exists today is the CC-TEGAS3 1 ' , i system. The results for a few selected functional elements, from this system, are provided in Table 1. These results clearly demonstrate that significant gains can be achieved in both storage and run time, utilizing functional simulation capability. However, we should remember that true functional simulation should in fact sacrifice a minimum amount of accuracy. If we sacrifice too much accuracy, we have, in fact, achieved nothing. Therefore, considerable additional work lies ahead in the area of functional simulation, for both design verification and fault simulation. This will be discussed in more detail in later sections of this paper. III. MODULAR SIMULATION CAPABILITIES Even with the advent of functional simulation capabilities, many applications still require an accuracy only achievable at the gate level. However, the size of these problems is rapidly exceeding present simulator capabilities, due to either storage or efficiency limitations. This situation has prompted our work in modular simulation. The objective of this work is to partition the net in a manner permitting simulation at an accuracy similar to gate level, with reduced storage and run time requirements. Since the number of gates, in the networks being considered, are usually in excess of ten thousand, the partitioning of the gate level network into modules must be automated.

44

S.A. SZYGENDA TABLE 1 EXAMPLE FUNCTIONAL SIMULATION RESULTS*

ELEMENT BCD DECODERS - Nominal Delay - Ambiguity Delay - Fault Simulation MULTIPLEXOR - Nominal Delay - Ambiguity Delay - Fault Simulation SERIAL SHIFT REGISTER (32 bit example) - Nominal Delay - Ambiguity Delay Fault Simulation

STORAGE (% of that Used in Gate Level)

RUN TIME (% of that Used in Gate Level)

31%
37.: 31 !

32. 4o::

so.
13' 27% 42'

16 14' 16 7% 7% 7'

8%
O"

8.5%

It should be noted that the savings of storage and run time becomes more spectacular as the complexity of the device increases. Obviously, this condition is extremely desirable. Once these modules are formed, before they can be simulated, routines to evaluate them must be written. As the number of modules is not expected to more than an order of magnitude less than the number of gates, the writing of these evaluation routines must also be automated. The remainder of this section will consist of a discussion of the development, implementation, and evaluation of algorithms for automatically partitioning networks into prefix strings, and then automatically generating evaluation routines from these strings. The justification for performing the partitioning is based on Theorems which proved the validity of moving the output delay of a gate to the gate's inouts and then combining these input delays with the output delays of the gate's fan-ins. This process can be performed repeatedly, thus, gradually removing delays from gates and forcing them towards the networks inputs. This process is demonstrated in Fig. 1.
d + d

2
G

"1
b

+ d

d, + d~

^3
+ d

d.

3 Fig. 1

SIMULATION OF DIGITAL SYSTEMS

45

There are of course some restrictions in the delay moving process. One such restriction arises in the case of reconvergent fanout. In this case it is pos sible for two different delay values to arrive at the output of a gate (Fig. 2 ) . This of course would present an ambiguous situation, therefore, delay moving must cease at a point of reconvergent fanout. This point will then serve as the input for the module currently being formed and the output of another module. These modules will be interconnected in the modular level network, d,

Fig. 2 Another restriction in the delay moving process occurs in the case of feedback (Fig. 3 ) . In this case, the input of a gate is tied to the output of a gate fur ther downstream. A ttempting to move the delay past this point pushes the delay towards the networks outputs, instead of the inputs and would result in the delay being propagated infinitely since the feedback forms a loop. With this background we can consider the actual partitioning algorithm.

+ d, 2
G

+ d

^^^

"2

Fig. 3 Partitioning Algorithm The basic steps in the algorithm are as follows: 1. Starting at a primary output follow back each path until a point of multiple fanout is encountered. This point will then serve as an input to the module being formed and an output of another module. 2. All gates encountered along the way are included in the module being formed. 3. As each gate is included in the module, the delay which has been propagated to the output of the gate, by previous operations, is added to the actual delay of the gate and then propagated to the outputs of the fanins. Also, as the gates are encountered the logical description of the collection of gates being included in the module is built. The process is repeated until all paths ending at the primary output chosen, have been traced to a point of termination. 6. Next, each gate, which formed a termination point, is traced back until all paths ending at this gate reach a termination point. This process is repeated until the primary inputs have been reached. 7. The entire process is repeated for each primary output. 8. After the last primary output has been traced, a collection of modules will have been formed which include all gates in the network, along with a logical

46

S.A. SZYGENDA

description of each module and the necessary input delay information. 9. It should be noted that a separate module must be created for each primary input and primary output. Implementation of Partitioning As mentioned in the algorithm description, all gates must be traced backbegin ning at the primary outputs and finally ending at the primary inputs. This tracing back of fanins essentially constitutes the traversal of a graph and can be done either breadth first or depth first. For the breadth first search a hierarchy of queues (or other similar data struc tures) must be used to perform the search and a tree must be built to store the logical description. The tree is necessary since it is not possible to form a , logical string or equation for the gates being traced directly during a breadth first search. Once the tree is built it can then be traversed in some predeter mined order, to form a logical string. For the depth first search no hierarchy of queues is needed. It is necessary only to keep a stack to perform the search. Also, it is possible to build a logical string directly during the search, since a depth first search is equivalent to a preordered traversal of a tree. The only drawback of a depth first search is that it must be possible to recognize a termination point as soon as it is encountered, or else the search would continue far past the point, and possibly end at some other point. The termination point, which was passed, would not be discovered until a search along another path encountered the point again, at which time major "odifications would have to be made. If the termination conditions are limited to those used in this work (terminating at any point of multiple fanout, reconvergent fanout, or feedback) then the depth first search seems to be the best choice. However, due to the fact that It was not known what the best set of termination conditions was, the breadth first search technique' was chosen. Using the techniques described, a program was written and examples were run. It was found that, in a general circuit, one can expect between 2 and 4 gates/module. This would mean that the size of the original network would be cut by 50 to 75%. The 1. 2. 3. limitations of the present system are as follows: The network which can be modularized is limited to single output gates. The modules formed are limited to single output modules. The timing equivalence of the modular level network can only be guaranteed in the case of nominal delay. For more detailed delays the modular level network formed may be more pessimistic than the original gate level network.

In spite of the present limitations, these new techniques show great promise for increasing the size of networks which can be simulated. Further research is being performed in an attempt to extend the partitioning capabilities in order to eliminate the limitations stated above.

Automatic Generation of Element Routines Produced by the Partitioning Algorithm

Since parallel fault simulation techniques are the most widely tested and utilized, the decision was made to use an equation method of evaluation which would be compatible with parallel fault simulation. Additionally it was assumed that three valued logic would be used: 0, 1 and X (unknown). This necessitated the use of two words to represent each signal, referred to as the CV and the CV2 words.


A representation for the logic values was chosen which resulted in a minimal number of terms in the CV and CV2 equations for the "and", "or", and "not" operations.

Code Generation Algorithm

Due to the choice of the bit representation for the three logic values, there existed a simple isomorphism between the set of Boolean equations for two valued logic and the set of CV and CV2 equations for the three valued logic. For example, the Boolean equations
  B1: C = A.B,   B2: C = A + B,   B3: C = A'
mapped onto
  T1: CV(C) = CV(A).CV(B),  CV2(C) = CV2(A) + CV2(B)
  T2: CV(C) = CV(A) + CV(B), CV2(C) = CV2(A).CV2(B)
  T3: CV(C) = CV2(A),        CV2(C) = CV(A).
Because of this isomorphism the algorithm developed consisted of a lexical scanner to verify both the syntax and semantics of the input equation and to output FORTRAN statements which implemented the corresponding CV and CV2 equations. The remainder of the algorithm generated a subroutine heading, formatted the equations output by the scanner, and generated a subroutine trailer. Figure 4 shows the general algorithm, while a more detailed version of the lexical scanner routine is shown in Figure 5. This outputs the equations in terms of logical operators rather than logic operations.

Implementation of Code Generation

The code generation algorithm was implemented on a CDC 6600 and was written in PASCAL. PASCAL was chosen as the implementation language because it is well structured and the available compiler gives excellent run time diagnostics. Additionally, PASCAL supports recursion; by implementing the lexical scanner recursively it was not necessary to maintain a stack or construct "pop" and "push" routines.

This work indicated that automation of simulation evaluation routines, for logic functions, is feasible and efficient. In addition such a capability seems to have additional possible uses as a tool for designers to model their own defined functions. The same types of extensions, as described for partitioning, are being considered for automatic code generation.
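As an illustration of this isomorphism, the following Python sketch shows how CV/CV2 word pairs support the three-valued operations with ordinary machine-word logic, which is what makes the generated statements compatible with parallel fault simulation. The particular bit encoding used here (logic 1 = CV bit set, logic 0 = CV2 bit set, X = neither) is an assumption made for the example, not necessarily the representation used in the original system.

    # Each signal is a pair of machine words (cv, cv2); bit i carries the value
    # seen by fault machine i.  Assumed encoding for this sketch:
    #   logic 1 -> cv=1, cv2=0;  logic 0 -> cv=0, cv2=1;  unknown X -> cv=0, cv2=0.

    def and3(a, b):
        (cva, cv2a), (cvb, cv2b) = a, b
        return (cva & cvb, cv2a | cv2b)   # T1: CV = CV(A)CV(B), CV2 = CV2(A)+CV2(B)

    def or3(a, b):
        (cva, cv2a), (cvb, cv2b) = a, b
        return (cva | cvb, cv2a & cv2b)   # T2: CV = CV(A)+CV(B), CV2 = CV2(A)CV2(B)

    def not3(a):
        cva, cv2a = a
        return (cv2a, cva)                # T3: CV = CV2(A), CV2 = CV(A)

    # four parallel machines: A = 1,0,X,1 and B = 1,1,0,X (bit 0 is machine 0)
    A = (0b1001, 0b0010)
    B = (0b0011, 0b0100)
    print(and3(A, B))   # -> (0b0001, 0b0110), i.e. 1,0,0,X, as expected for A AND B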


PROGRAM GENERATE (INPUT-FILE, STATUS-FILE, OUTPUT-FILE);
begin
  open input-file; open status-file; open output-file;
  write header for status of subroutines to status-file;
  initialize count of subroutines successfully generated;
  initialize count of subroutines attempted;
  initialize count of subroutines lacking trailers;
  repeat
    set error flag to indicate that no error yet in present equation;
    perform check of header for errors and obtain module type id and
      increment count of subroutines attempted;
    if no error in header then
      write header for subroutine to output-file;
      process equation and generate subroutine body and write it to output-file;
      write trailer for subroutine;
      if no error in equation then
        if equation has trailer then
          increment count of subroutines generated successfully;
        else
          set error flag to indicate trailer missing;
          increment count of subroutines lacking trailers;
      end if;
    case error of
      error:         write error message to status-file, including module type;
      no error:      write message that subroutine was successfully generated,
                     including module type;
      trailer error: write message that subroutine was generated but trailer
                     was missing, to status-file;
    end case;
  until end of file encountered on input-file;
  write number of subroutines attempted to status-file;
  write number of subroutines generated to status-file;
  write number of subroutines generated for which trailer was missing to status-file;
end program.

Pseudo Code Program to Generate Evaluation Routines for Logic Modules. Fig. 4

begin
  clear stack;
  push 0 onto stack;
  while not end of file do
    get ch;
    if ch = + then
      write "OR(";
      push 2 onto stack;
    else if ch = * then
      write "AND(";
      push 2 onto stack;
    else if ch is a variable then
      write variable;
      count := pop stack;
      if count = 2 then
        write ",";
        push 1 onto stack;
      else
        while count = 1 do
          write ")";
          count := pop stack;
        enddo;
        if count = 2 then
          write ",";
          push 1 onto stack
        end if
      end if
    end if
  enddo
end;

Procedure for Generating Evaluation Routines Using Statement Functions from a Prefix Equation. Fig. 5

IV. ACCURATE FUNCTIONAL SIMULATION CAPABILITIES

This experimental work centered around the development of concurrent simulation techniques for simulating at the functional level with the minimal amount of sacrifice in accuracy. The work concentrated on being able to simulate functional modules utilizing minimum/maximum delays and, hence, propagating ambiguity areas through the network. The algorithms and data structure developed and implemented support multiple input/output, memory/no memory elements as well as gates. The simulator was designed to handle any number of signal values; the number of signal values comes into play when modeling an element or device. Since, in concurrent simulation, a fault is handled in the same manner as the good element, accurate timing analysis can be performed to detect spikes caused by the presence of faults. Also, the accuracy of simulating a fault is the same as the accuracy of simulating the non-fault model. The simulator structure supports modeling of any user defined non-classical faults such as: (i) functional behavior faults, (ii) technology dependent shorted signal faults, (iii) timing faults, i.e., faults that affect propagation delays


and operational timing parameters like setup/hold times, pulse widths, etc. However, in the experimental version of the simulator only stuck-at faults and timing faults were implemented.

Functional elements are modeled independent of the device's internal gate configuration, and simulate the device's functional behavior. The J-K flip-flop is a very basic functional element when modeled by its functional behavior rather than its internal gate circuitry. The flip-flop model uses different propagation delays, based on the input that is active, and operational timing parameters like setup/hold times and pulse widths. Checks for violation of the operational timing parameters can be performed for both the good element as well as the faulty element. To accomplish this, the data structure that defines a fault, as well as an element, incorporates the necessary timing information.

A problem that has plagued concurrent fault simulation has been that it requires large amounts of storage (as is true of deductive simulation). Furthermore, the amount of storage needed in a given simulation run is dynamic and unpredictable. Although future work needs to be done in this area, a heuristic approach has been developed which appears to help control this phenomenon. This is accomplished by the use of an indicator for each fault to determine whether it has ever been injected into the network. Once a fault has been injected, that fault's effects must be continuously modeled until it is detected, or inaccuracies will result. However, if it is possible to determine how much storage is left at any point in time, then one can make decisions about when new faults should be injected into the network. This is exactly what was accomplished. At any time that a fault is to be inserted, the decision is first made as to whether a sufficient amount of space is still left. If not, no new fault is inserted into the network. This is not a foolproof plan, in that any previously induced fault activity can continue to escalate and still saturate the available storage space. However, in the experiments that have been run, this solution seems to work very effectively.

The run times for networks simulated in the present concurrent fault simulator were very high when compared with run times of the same networks simulated in an existing parallel fault simulator. In some cases, the concurrent fault simulator was more than twice as slow as the parallel fault simulator. From this work, it can be seen that the problems with concurrent simulation are primarily those of speed and storage. It has been determined, through the analysis of the existing experimental model, that a large percentage of the time for concurrent fault simulation is spent in searching fault lists. It has also been determined that what contributes to the search time, and also to the storage requirements, is that each individual fault effect is handled separately if it is distinguishable from the good machine. The fact that it is handled separately is the very thing that allows these faults to be simulated at an accuracy consistent with the functional models. However, it is also very costly in terms of space and speed. The results of this analysis, and having a working model, will direct our future experimental work in this area.

V. HIERARCHICAL METHODS FOR GENERATING TESTS FOR SEQUENTIAL LOGIC NETWORKS, IN A SIMULATION ENVIRONMENT

Generating diagnostic tests for sequential logic networks is a major problem of the integrated circuit industry.
The trend toward highly sequential LSI and VLSI integrated circuits has made existing methods questionable. The goal of this work was to develop a test generator that would solve this problem, for LSI networks, and be adaptable to VLSI networks. There are two major aspects of the solution to this problem. The first of these


is a preprocessing of the network. The preprocessing obtains information about the topology of the network. This information is useful to the test generator and to the user controlling the test generation process. The preprocessing produces a leveling of the network and the identification of all feedback loops. It also determines the sequentialness of the network. This includes a count of all the memory elements in the network and the maximum (minimum) sequentialness for each element. This is the maximum (minimum) number of memory elements in a path from that element to a primary input of the network. All this information is used by the test generator. The user can use the feedback and sequentialness information to determine the best method for test generation. For a network with few feedback loops and a small amount of sequentialness, heuristic methods have a high probability of detecting a sufficient number of faults. However, as the sequentialness increases the use of a test generator that considers the topology of the network becomes essential. The algorithm to perform the preprocessing is as follows. All primary inputs are assigned to the first level. An element is then assigned to a level if all its fanins have been assigned to previous levels. When a situation occurs such that not all elements have been assigned to levels but none of these have all their fanins in previous levels, there must be a feedback loop in the network. A depth first search is performed on one of the unassigned elements to locate the loop. That element is assigned to a level and the process continues. As each element is assigned to a level, the sequentialness information is updated. The second aspect of the solution to this problem is the actual generation of the diagnostic tests. The methods chosen to do this consist of user selectable options, including manual, heuristic or algorithmic path sensitization, or any combination thereof. First, the modified single dimensional path sensitization technique will be considered. Algorithmic Test Generation This part of the system was designed on a modular basis. There is a control module that accepts user input and directs the test generator to the signal and its value to be path sensitized. The path sensitizer module will drive this signal to the desired value. It starts at the signal and drives backward through the net to the primary inputs. It uses the element evaluation module to set values on each element involved in the path or line justification. The element evaluation module makes use of the information found in preprocessing to determine the input values for each element. It can produce a sequence of patterns for sequential networks. There is a conflict module that checks for two different values being placed on one signal. If this occurs the backtrack module will retrace this part of the path to allow the conflict to be resolved. The hierarchy of uses involves these two modules. The amount of conflict checking and backtracking allowed controls the speed, the accuracy of the tests, and the memory space needed. The user can control the process to produce very accurate tests at a relatively higher cost or tests that might be accurate at a lower cost. Table 2 gives some results of the leveling and test generation process for a few networks. 
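A minimal sketch of the leveling procedure described above is given below, assuming a simple map from each element to its fanins; the loop-breaking step is simplified (the text performs a depth first search to locate the loop) and the sequentialness bookkeeping is omitted.

    def level_network(elements, primary_inputs):
        """Assign each element a level; break feedback loops when blocked.

        `elements` is assumed to map element name -> list of fanin names.
        """
        level = {pi: 1 for pi in primary_inputs}
        unassigned = set(elements)

        while unassigned:
            progressed = False
            for e in sorted(unassigned):
                fanins = elements[e]
                if all(f in level for f in fanins):
                    level[e] = 1 + max((level[f] for f in fanins), default=1)
                    unassigned.discard(e)
                    progressed = True
            if not progressed:
                # every remaining element waits on another: a feedback loop.
                # Force-level one element on the loop and continue.
                victim = next(iter(unassigned))
                known = [level[f] for f in elements[victim] if f in level]
                level[victim] = 1 + max(known, default=1)
                unassigned.discard(victim)
        return level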
Manual and Heuristic Test Generation

As digital simulators are used more extensively for design verification, the cost effective use of manually generated design verification tests, for fault analysis, becomes more attractive. It seems reasonable to expect that design verification tests that truly test the design would also provide a high level of fault coverage. Therefore, the use of tests generated in this manual manner would certainly be cost effective.


TABLE 2
ALGORITHMIC LEVELING AND TEST GENERATION

TEST CIRCUIT      # OF LEVELS   # OF PASSES   # OF TESTS   TEST COVERAGE   CPU* TIME (SEC)
ALU                                  1             16            79%              22
                                     2             32            85%              49
                                     3             48            92%              60
MAG. COMPARATOR                      1              6            70%               7
                                     2             12            82%              10
                                     3             18            90%              13
SHIFT REGISTER         25            1             10            63%              22
                                     2             20            77%              40
                                     3             30            93%              60
COUNTER                23            1             10            67%              18
                                     2             20            80%              31
                                     3             30            89%              45

* A limit of 60 CPU seconds was used for these examples.

In addition, many networks can be simply analyzed manually to determine a testing sequence. This would eliminate, or reduce, the need for expensive automatic test generation runs. For example, consider an n input, 2^n output address decoder. For large n this network would require extensive run times for automatic test generation. However, we can simply deduce that the only 100% test coverage set would require all possible 2^n inputs, since we can have a stuck-at-1 or stuck-at-0 on any of the 2^n outputs. This simple observation could result in considerable cost savings. Many similar examples exist, with varying degrees of fault coverage. The suggested use of manual generation is certainly selective in nature. It should not be construed as a suggestion for the exclusive use of manual generation. What is simply suggested is the intelligent use of whatever tools are available to perform the desired function, in the most cost effective manner possible.

Another technique used for diagnostic test generation is heuristic test generation. Heuristics provide a rapid test generation capability. This approach does not involve generation of tests for specific faults; rather, it approaches the network through simulation, in terms of faults detected. Test patterns which detect no faults are discarded. Numerous heuristic techniques are possible. Usually, those that are found to be the most successful are "popped" to the top of the stack, for continual use. Again, this approach is simply another tool that should be available for efficiency. If additional fault coverage is necessary, beyond that achievable through manual and heuristic approaches, then the algorithmic capability can be used for those faults. However, utilizing this philosophy would result in using the more expensive algorithmic approaches only on a small subset of the total faults in the net.

Before concluding this section, one additional comment should be forcibly made. Since present day test generation systems use techniques and models far less accurate than what is achievable in present day fault simulators, it is essential that fault simulation be performed in conjunction with test generation, if accuracy is necessary. In other words, we may suggest that the best test generator is a highly accurate fault simulator, upon which tests generated utilizing any techniques can be validated.
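A rough sketch of this combination, random or heuristic pattern generation screened by a fault simulator, is given below, assuming the simulator is available as a callable routine; the names and interface are illustrative only.

    import random

    def heuristic_test_generation(fault_simulate, num_inputs, fault_list,
                                  max_patterns=1000, rng=random.Random(0)):
        """Keep only those candidate patterns that detect previously undetected faults.

        `fault_simulate(pattern, remaining_faults)` is assumed to return the set
        of faults from `remaining_faults` detected by `pattern`.
        """
        remaining = set(fault_list)
        kept_tests = []
        for _ in range(max_patterns):
            if not remaining:
                break
            pattern = [rng.randint(0, 1) for _ in range(num_inputs)]
            detected = fault_simulate(pattern, remaining)
            if detected:                       # discard patterns that detect nothing
                kept_tests.append(pattern)
                remaining -= detected
        coverage = 1.0 - len(remaining) / len(fault_list) if fault_list else 1.0
        return kept_tests, coverage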

VI. CONCLUSION


In this paper I have attempted to cover a very broad range of techniques and considerations involved in the areas of digital logic simulation and diagnostic test generation. The breadth and complexity of these topics prevented detailed discussions. Hopefully, I have described the present state of the art in this area, and some of the numerous additional unsolved problems that remain to be investigated. We must realize that these problems do indeed exist and must be solved. The necessity for solving these problems is being prompted by present day and proposed device technology and fabrication techniques. The luxury of reverting to manual analysis, that existed in the past, is rapidly becoming extinct.

Acknowledgements

Considerable portions of the work presented herein are of an unpublished nature. However, I would like to acknowledge some of my former and present students and industrial colleagues that actively worked on these topics, including: Dr. E. Thompson, Dr. A. Bose, Mr. P. Karger, Mr. B. Read, Mr. J. Smith and Mr. M. D'Abreu.

References

|1| S.A. Szygenda, The Evolution of Functional Simulation from Gate Level Simulation, Proc. of the IEEE Conference on Systems and Circuits, (1971).
|2| E.W. Thompson and S.A. Szygenda, Three Levels of Accuracy for the Simulation of Different Fault Types in Digital Systems, 12th Design Automation Conference Proceedings, June 23-25, 1975 (1975) 105-113.
|3| Ajoy K. Bose and S.A. Szygenda, Detection of Static and Dynamic Hazards in Logic Nets, Proceedings of the 14th Design Automation Conference (June 1977).
|4| M.A. Breuer and A.D. Friedman, Diagnosis and Reliable Design of Digital Systems, Computer Science Press, Woodland Hills, California (1976).
|5| A.K. Bose, Procedures for Functional Partitioning and Simulation of Large Digital Systems, Ph.D. Dissertation, The University of Texas (Dec. 1977).


Selected Simulation Bibliography

1. Szygenda, S.A., "TEGAS2 - Anatomy of a General Purpose Test Generation and Simulation System for Digital Logic," 9th DA Workshop, 1972, pp 116-127.
2. Szygenda, S.A. and A.A. Lekkos, "Integrated Techniques for Functional and Gate Level Digital Logic Simulation," Proceedings of the 10th Design Automation Workshop, June 1973.
3. Szygenda, S.A. and E.W. Thompson, "Modeling and Digital Simulation for Design Verification and Diagnosis," IEEE Transactions on Electronic Computers, December 1976.
4. Hong, S.Y. and S.A. Szygenda, "MNFP - A New Technique for Efficient Digital Fault Simulation," accepted April 1977, Computer Aided Design Journal.
5. Szygenda, S.A., "The Evolution of Functional Simulation from Gate Level Simulation," Proceedings of the 1977 IEEE International Symposium on Circuits and Systems, April 1977.
6. Bose, A.K. and S.A. Szygenda, "Detection of Static and Dynamic Hazards in Logic Nets," Proceedings of the 14th Design Automation Conference, June 1977.
7. Breuer, Melvin A., "Digital System Design Automation: Languages, Simulation and Data Bases," Comp. Science Press Inc., Woodland Hills, California, 1975.
8. Chappell, S.G., C.H. Elmendorf and L.D. Schmidt, "Logic Circuit Simulation," Bell System Tech. J., Vol. 53, pp 1451-1476, 1974.
9. Chang, H.Y., G.W. Smith, Jr., and R.B. Walford, "LAMP: System Description," The Bell System Tech. J., Vol. 53, pp 1431-1449, October 1974.
10. Chappell, S.G., P.R. Menon, J.F. Pellegrin and A. Schowe, "Functional Simulation in the LAMP System," Proceedings of the 13th Design Automation Conference, June 1976.
11. Ulrich, E.G., "Fault Test Analysis Techniques Based on Logic Simulation - FANSSIM," 9th DA Workshop, 1972, pp 111-115.
12. Ulrich, E.G. and T. Baker, "The Concurrent Simulation of Nearly Identical Digital Networks," Proc. of Design Automation Workshop, pp 145-150, 1973, and IEEE Computer, April 1974.
13. Schuler, D.M., T.E. Baker, S.P. Bryant and E.G. Ulrich, "A Computer Program for Logic Simulation, Fault Simulation and the Generation of Tests for Digital Circuits," Simulation of Systems, L. Dekker editor, North-Holland Publishing Co., pp 453-459, 1976.
14. Schuler, D.M. and R.K. Cleghorn, "An Efficient Method for Fault Simulation for Digital Circuits Modeled from Boolean Gates and Memories," Design Automation Conference Proceedings, 1977.
15. Abramovici, M., M.A. Breuer and K. Kumar, "Concurrent Fault Simulation and Functional Level Modeling," Design Automation Conference Proceedings, 1977.

TECHNICAL SESSION II

Chairman: J. SCANLAN, Dublin University, Ireland


NEW CONCEPTS IN AUTOMATED TESTING OF DIGITAL CIRCUITS

Melvin A. Breuer
Associate Professor of Electrical Engineering & Computer Science
University of Southern California
Los Angeles, California 90007 USA

Abstract

In this paper we first briefly review the current state of the art of automatic test program generation systems. We then report on some of our current research dealing with the development of a very powerful computer aided system for the generation of tests for complex digital systems. The proposed system uses a number of schemes for deriving tests, such as (1) pseudo random methods, (2) manual selection based upon functional performance, and (3) algorithmic. All three techniques appear to be necessary in order to produce a cost effective system. Tests are evaluated via a concurrent fault simulator which employs both gate and higher level models. In this paper we deal mainly with the problem of automatic (algorithmic) test generation. We describe in detail our results in two main areas, namely preprocessing and functional modeling. Preprocessing deals with the algorithmic analysis of a circuit to gather information to make test generation more efficient. We present results on two preprocessing concepts, namely rate analysis and cost analysis. The latter result has already been used successfully in the area of design for testability. We next present high level models for a counter which can be used by a test generation algorithm. These models are expressed by (1) a set of d-cubes, (2) algorithms for evaluating their function, and (3) a high level language for describing solutions to problems to be solved. We show how the preprocessing techniques are used by our test algorithm as well as interact with our functional models.
Preliminary results appear to indicate that these techniques will lead to much greater efficiency in test generation, since they directly attack the problem of buried flip-flops as well as produce a more efficient search procedure for a test.

INTRODUCTION

This paper deals with several aspects of the design of software systems which aid an engineer in the development of fault detection and diagnostic tests for complex digital systems. Such a system is often referred to as an Automatic Test Program Generation (ATPG) system. We will present a brief state-of-the-art review of ATPG systems as well as discuss some of our more recent research results on this subject. Most of the work which we will discuss deals with the concept of external testing, i.e., testing by a piece of automatic test equipment (ATE),

rather than self-testing systems. However, many aspects of the former mode of testing are applicable to the latter. We will also assume a classical fault model in most of our discussion, namely the single permanent stuck-at fault. Again, many of the results presented are applicable to a more general fault model. We depict a typical ATPG system in Figure 1.
Figure 1. Typical ATPG system. (The figure shows the circuit description passing through an input preprocessor - library processing, fault analysis, cost analysis, rate analysis - into test sequence generation, which may be manual (functional or constructive), algorithmic or heuristic, or random pattern based; a library of parts, models and tests, a good circuit simulator, a fault simulator and dictionary construction feed the final test tape for the ATE.)

Here the major function of the input preprocessor is to carry out certain syntactic and semantic checks on the circuit definition, collapse faults, and process the input data with respect to library data. Two new preprocessing concepts, to be discussed later, are cost and rate analysis.

The most complex part of an ATPG system is test sequence generation. For simple circuits, either functional or algorithmically generated tests will suffice. For complex systems a test is very difficult to construct, and numerous aids must be available to the test engineer if he is to construct an acceptable test at a reasonable cost. Functional tests based upon the specs of a system provide an initial good test set. Additional fault coverage can be obtained by employing random or algorithmic test generation methods. Tests are usually evaluated using simulators. The main problems in simulation

appear to be excessive run time and model construction for large (LSI) modules such as microprocessors, ROM's, RAM's and PLA's.

1.1 Brief Review of the Current State-of-the-Art

Test sequence generation

Most research in the area of test sequence generation has been primarily concerned with gate-level combinational and sequential circuits using the permanent single stuck-line fault model. For this model, the problem of generating tests for stuck-at faults in combinational circuits has essentially been solved. The most common techniques are the Boolean Difference method [1], the D-algorithm [2,3] and the LASAR algorithm [4], the latter being a modified version of the D-algorithm. Variations and modifications of these methods exist [5,6,7]. It appears that the D-algorithm, in its original or modified form, is the most efficient procedure for generating tests for combinational logic circuits. Circuits containing several thousand gates can be easily processed in a few minutes of CPU time. Researchers have also studied other fault modes, such as multiple stuck-at faults [8,9], intermittent faults [45-48], bridge faults [10] and delay faults [11,12]. Except for the latter case, the combinatorics of these problems make most of these results primarily of academic interest.

For sequential circuits the test generation problem becomes much more difficult. Again, several techniques exist [13,14], but for large classes of circuits these techniques are computationally infeasible. Hence the generation of tests for complex sequential circuits remains an open problem. This problem is being

attacked in two ways, namely by the development of more powerful test generation tools, and by simplifying the problem via design for testability. An example of the former is the work of Hill and Huey [15]. In this work a system is partitioned into its structural data processing portion and its control portion. A register transfer type model is used. The search for a test then proceeds using both portions of the model. We believe that this is a fruitful approach to pursue, and eventually functional information concerning what a circuit is intended to perform may be included in such a model.

The area of design for testability is not the subject of this paper, but we should

mention that considerable effort is going into this area. Because of the increased complexities of digital circuits and systems, testability considerations in design are essential if future systems are to be maintained at a reasonable cost.

Random test generation methods have been successfully used in industry for moderate size circuits, and have been studied quite extensively by researchers [16-20]. When long tests are used, say several million, then the ATPG system as shown in Figure 1 is not employed. Rather, the tests are generated on line by the ATE and the results compared to either a "golden-board" or to signature data. Fault detection and diagnostic data is usually obtained by physical fault insertion and by probing. Our analysis of this effort indicates that this approach is viable for small and moderate size boards. The test generation costs are usually less than those generated by an ATPG system. Also high testing rates are achievable. For complex boards and microprocessors where the ratio of pins to gates is low, this approach appears to become impractical. Also, initialization or synchronization can sometimes be a problem. Hazards and races can also lead to discrepancies between the circuit under test and the "golden-board." Interestingly, it has been shown that exact synchronization is often not necessary [20].

The use of on-line random testing has led to several studies in the area of compact testing, namely what data need be observed from the unit under test. These methods are potentially useful because of the small amount of hardware and software required to implement them. Most notable results deal with transition count testing and Hewlett-Packard's signature analysis technique [21], the latter being intended specifically for systems containing microprocessors. Transition counting has been analyzed from a deterministic viewpoint by Hayes [22-24] and from a probabilistic viewpoint by Parker [25] and Losq [20]. Various generalizations of transition counting have also been proposed [26-29]. These studies have assumed either that complete test sets are available, or that random test patterns are used. Neither assumption is particularly satisfying from a practical viewpoint; complete test sets are difficult to generate, while random test generation cannot guarantee complete fault detection.

Finally, there has been considerable work carried out in the area of generating tests for solid state RAM's. This work is both of a theoretical as well as practical nature [30-34]. Tests of complexity O(n^1.5) to O(n^3) have been developed,

the former being feasible for 64K RAM's.

Fault Simulation

One of the major problems in fault simulation is excessive CPU time. For years researchers have been investigating and developing new techniques to reduce run time. Most simulators are table-driven, event-directed, employ multi-valued logics (usually 0, 1, x), and employ primitive gate elements having several delay variables, such as separate rise and fall times. The three most common fault simulation techniques are parallel simulation [35], deductive [36] and concurrent fault simulation [37].

Chang et al. [38] have studied the relative run times of parallel and deductive simulators and have come to the conclusion that for large circuits, the latter technique outperforms the former. We have recently developed an approximate analytic model for estimating the run time for parallel and concurrent simulators and have found that the latter also outperforms parallel simulators. We believe that concurrent simulation is, at present, the most general and fastest form of fault simulation. However, it can lead to serious memory requirements. In our analysis we have assumed a fault detection model as shown in Figure 2.

Figure 2. Percent detection vs. test length (percent of faults detected plotted against test length).

For the following set of assumptions we have obtained the results shown in Table I. Assumptions:
(a) average event activity - 10%
(b) 50 (85) instructions required to process (evaluate and schedule) one event in a parallel (concurrent) simulator
(c) host machine characteristics - 1 MIPS and unlimited core
(d) 36 faults processed in parallel

(e) number of gates in circuit - 10^5 (e.g., a VLSI chip)
(f) number of vectors in test - 10^4

                         Parallel     Concurrent
no fault dropping         12,700          282
fault dropping             2,130           45

Table I. Estimated simulation time (hours) for fault simulation.
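A rough back-of-the-envelope version of such an estimate, using only the assumptions listed above plus an assumed fault count of about two faults per gate, is sketched below; it is not the paper's analytic model, but it reproduces the order of magnitude of the parallel-simulation entry.

    def parallel_sim_hours(gates=1e5, vectors=1e4, activity=0.10,
                           instr_per_event=50, mips=1.0, faults_per_pass=36,
                           faults_per_gate=2.0):
        # events per pass ~ gates * vectors * activity; one pass handles 36 faults
        passes = gates * faults_per_gate / faults_per_pass
        events = gates * vectors * activity
        seconds = passes * events * instr_per_event / (mips * 1e6)
        return seconds / 3600.0

    print(round(parallel_sim_hours()))   # on the order of 10^4 hours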

1.2 Problem Areas

There are two major problems related to the area of ATPG, namely circuit "test complexity" and modeling. With the evolution from SSI through MSI and LSI to VLSI (10^5 gate equivalences/chip), over the last few years we have witnessed a doubling in chip density every two years. This growth is predicted to continue for at least the next 8 years. The most dramatic development has been the emergence of microprocessors. Inherent in the use of LSI and VLSI is the lack of knowledge of the detailed logic definition of these chips. This has made our classical gate and latch circuit models somewhat useless. In addition, information on fault modes and timing is lost. Hence classical test generation and simulation approaches are becoming less applicable. Three potential solutions to these problems are (1) to employ functional level testing, (2) to design for testability, and (3) to employ higher level models.

Test sequence generation problems

For large complex circuits there are several specific problems which make test sequence generation particularly difficult, namely (1) modeling (delays and fault modes), (2) initialization, (3) problems associated with timing such as races and hazards, and (4) buried flip-flops, i.e., flip flops which are hard to control or observe. The latter problem occurs most frequently when large counters and shift registers are used. These problems become more serious as the level of integration increases and the pin to gate count decreases. In some cases self testing rather than external testing techniques must be applied.

We have studied the problem of automatic initialization and have found no practical solution except to design circuits for easy initialization. The exact analysis of the initialization of faulty circuits requires a multi-valued algebra and is impractical for most circuits.

NEW CONCEPTS IN AUTOMATED TESTING OF DIGITAL CIRCUITS Problems of timing in test generation can be handled by using more accurate models. Some results in this area are presented by Breuer 139,401.

63

Here infor-

mation on races and hazards are tracked during test sequence generation in order to produce more valid test sequences. We will address the problems of modeling and test generation efficiency in the next section. 2. SOME ASPECTS OF AN ADVANCED ATPG SYSTEM Though design for testability is becoming a necessity, the availability of an ATPG system having a powerful algorithmic test sequence generation system and fault simulator will continue to be a useful tool for the foreseeable future. In this section we will outline some of our recent work in this area. implement these concepts in a system called TEST/80. system are outlined below: 1. 2. 3. 4. 2.1 Extensive preprocessing High level primitives Accurate timing analysis Concurrent f a u l t simulation. We plan to

The major aspects of this

Preprocessing By preprocessing we mean the generation of data about a c i r c u i t that can be

used l a t e r by a test generation algorithm to make the process of generating a test more e f f i c i e n t . techniques: 2.1.1 Rate analysis Rate analysis is a preprocessing technique in which some lines in a c i r c u i t are assigned l a b e l s , called rates. which a l i n e can change values. These labels indicate the maximum rate at These rates can be used by an ATG algorithm to At present we are pursuing two specific preprocessing rate analysis and l i n e cost.

aid in assigning consistent logic values to l i n e s . The rate associated with a l i n e can be represented by a regular-expression-like notation, e . g . , (0 1) = 001001001... . In Figure 3 we i l l u s t r a t e a few timing C is a clock pulse assumed to be high W e assume that t , = t. = 1 u n i t , and Hence the clock sequence diagrams and t h e i r corresponding rates. (H) f o r time t , and low (L) f o r time t . .

that a l l rate expressions are in terms of this u n i t . is represented by (10) . lasting units of time. in a modulo 8 counter.

shown in Figure 3 can be denoted,by the binary sequence 101010...; f o r brevity i t In general l n ( 0 n ) represents a H(L) signal value Let Q., QB> Qc represent the outputs of the f l i p - f l o p s Then the signals on these lines along with t h e i r corresNote that rate expressions

ponding rate expressions are depicted in Figure 3.

can be operated on via logical operations, e . g . , the rate on the output of the

64

.. BREUER
+

AND gate forming QftQB is ( 0 6 1 2 ) + ( l V )

= (0612)+.

Assume that Qc = QA QB i s the clock input to some device, such as a f l i p f l o p . I f during test generation i t is desired to apply the input 010 to t h i s f l i p f l o p , i t is clear from rate analysis that t h i s assignment i s not possible. Hence enormous CPU time can be saved i n endless backtracking, used by such techniques as the Dalgorithm, i f one can predict as early as possible those sequences which cannot be achieved. ( 0 6 l V covers 0 8 1 3 0. The maximum rates are (10) + and ( 1 0 ) + , and the minimum rates are 0 + and 1 + , namely constants. W e normally denote a rate (a) by simply w r i t i n g a. Note that a sequence of the form 081 0 can be achieved W e say that on Q p by simply i n h i b i t i n g the clock during some clock periods.

(IO) + = 0 (0 2 l 2 ) + = O O I I O O I I

C P , _TLJn_JT_jn_jn_Jl_jn_jn_JT_
QA

"1 ~1
~L

I I

I
4

I
(0 J )
4 +

1 I

L L_

1
(08|8)+

Qr.

(0 S I 2 ) +
QA'QR

figure 3. Example of signals and rate expressions.

If A is a signal line, then the rate associated with the line is denoted by A . J r The complement of a rate a is obtained by changing all O's to l's and 1's to O's, and is denoted by I. The AND and OR of two rates a and b are denoted by a b and a + b, respectively, and are also rates. Rates can be propagated through circuit elements. For example, if C = A B

(AND gate) then C r = A p B r > where C r = B p if A r = 1, and C r = 0 if A = 0.

NEW CONCEPTS IN AUTOMATED TESTING OF DIGITAL CIRCUITS

65

Let X r = 1 2 3 ..., and Y f = yml y m 2 / 3 ... w h e r e , y { 0 , 1} and n ] > m,. To compute Z r = X r Y r find a prefix of Y r of length n,. If = O then output "1 else output the prefix. Delete x ni from X r and the prefix from Y . If processing a term of the form (...) n , then take the result produced, say p, and replace it by (p) . Return to the new rates X and , relabel if necessary, and repeat the procedure until both have been processed to the end. The result of concatenating the output data is . As an example ( 16 ,16) . ( 0 2 ,6) _ 0 16 ( 0 2 1 6 ) 2 > apd n 16 {() 2 ,6)2 . Q1 _ n l 6 ( n 2 ( 0 1 )3}2_ fts can be seen, the complexity of rates can increase quite fast, and we are currently investigating methods for approximating these expressions by simpler ones. Rates are usually created at the output of sequential devices, such as flip flops, counters and shift registers. Normally we select the input to these devices to have rates such that the output rates are maximum. Primary inputs usually are assigned the rate (01) . Consider a device triggered by a positive edge on the clock line. We denote this condition by C (+). If (C ) = 0 n i l" 2 , then the rate of the positive edge is ((C (+)) r = 0 n l n , wheren = n| + n ? . Consider a JK flip flop where J r = "1 l"2, (C ) r = 0 n 3 l n 4, and K p = 0 n 5 l n 6. Then the output Q has rate Q r = 0 a l b where (1) a, b 2i , + , i.e., the output cannot change faster than the input period (2) a > n,, i.e., as long as J = 0 the flip flop cannot be set (3) b >. n 5 , i.e., as long as K = 0 the flip flop cannot be reset. Hence we can easily bound the rate at which this flip flop can change state. For J = K = 1, Q = 0 1 , where = , + , and this is the maximum rate for a flip flop. In a similar fashion we can compute the rates at the output of shift registers and counters. As a simple example, consider a mod 16 counter having an enable line E, a clock line C , data outputs Q., Q , Q c , Q D , and carry output C = Q A Q B Q C Q D E C . For maximum rate activity we set E r = 1. For (C 1 = 0 n l m (q = + m) we obtain (Q.) = 0 q l q , (QJ = 0 2 q l 2 q , (Q ) = 0 4 q l 4 q , (Q ) =

0 8q l 8q , and C r = 0 1 6 q m

Given a circuit, we are currently designing an algorithm for generating the maxi mum rate of each line of the circuit. Our major difficulty to date deals with simplifying complex rate expressions, feedback and reconvergent fanout. The major application of rate analysis is the identification of allowable

66

.. BREUER

sequences on lires. As an example consider the circuit shown in Figure 4.

+3

Figure 4. Portion of a circuit. are driven by large c Assume E, C and J are combinational circuits and that their rates are not know. Let K be a primary input.

For the counter, the input rates which produce 31 maximum output rates are E = 1 and (C_)_ = 01. This gives us Cr = 0 1 . For " 31 the JK f l i p - f l o p , we set J = K = 1 , and since (C ) = 0 1 , we have that Q =
3? 3?

We assign rates as follows:

"

0 1 . Assume that we desire to construct a test for the fault A s-a-0. problem we have the following subproblems: 1. 2. Set A = 1 To solve 1 we have subproblems: t: b) 3. 4. c) Cq= t(C q (t) t - 11 1, cq(t 0 ) , J = 1. t - 1: J = 1, Q = 0, C = 0. t' < t - 1:

To solve this

To solve the problem Q = 0 we have:

0.

By implication, we see that R = 0 also resets the counter and we have C = 0. Time: t' (t - 1) 0 t 1 O31 1. Therefore, for t ' = 1 , we have

In summary we have, Line C: 0

But rate analysis implies that C ( t - 1) = 31 and t required. 2.1.2 Cost analysis 32.

Therefore at the very least a 32 clock time test is

Our proposed test generation algorithm is similar to the D-algorithm in that i t employs the concepts of l i n e j u s t i f i c a t i o n , D-drive and implication. former two concepts usually imply choices. problems on which to work. The Our t e s t generation algorithm can be

modeled as a search procedure, where at any instant of time one has numerous subThe order in which these problems are selected can Cost analysis deals with the greatly affect the run time of the algorithm.

N E W CONCEPTS IN AUTOMATED TESTING OF D IGITAL CIRCUITS concept of assigning costs to subproblems. t o t a l run time w i l l be reduced.

67

By selecting the order in which sub

problems are processed based upon some function of cost, i t is hoped that the The problem is to construct an adequate cost In t h i s section we function which w i l l lead to a reduction in computation time. w i l l discuss some of our results to date. Rutman 1411 has introduced the concept of assigning three l i n e cost values to each l i n e A in a c i r c u i t , namely cA the cost of setting l i n e A to a 1 , c i tile cost of setting l i n e A to a 0, and dA the cost of d r i v i n g a D () on l i n e A to a primary output. Here D denotes an error being propagated to an output [2 Rutman's ATG In general, problems of higher cost are more d i f f i c u l t to solve. technique.

system f i r s t calculates these costs, and then uses them to guide his search Unfortunately his r e s u l t s , based upon three test cases, provided inconclusive support for t h i s technique.

W e believe that the main problem with By "side e f f e c t s " we

Rutman's cost function is due to side effects and fanout. satisfy one specific requirement.

mean the effect on other devices which occur when we set a l i n e to a value to For example, assume we desire to change the W e can least s i g n i f i c a n t b i t of an b i t counter by incrementing the device.

show that when the counter i s incremented, the expected number ET(n) of f l i p 1 1 flops which change state is given by the expression E,(n) = E.(n 1) t , = 2 r . For a s h i f t r e g i s t e r , a s h i f t operation w i l l e f f e c t , on the average
-

(7 = n/2 f l i p - f l o p s .

To drive a f l i p - f l o p to a 0, the "cheapest" solution may But this would reset a l l the f l i p For each p r i m i t i v e W e w i l l now present

be to set a 0 on the master reset l i n e R. required.

flops driven by R, and t h i s could then eliminate state settings at 1 which were W e have modified and extended Rutman's work. element in our system we have derived an equation which determines the cost of each output l i n e of an element given the input l i n e costs. some of our r e s u l t s . The cost cA is given by the equation cA = min (cfA t csA t cdA, K), where be 32,000. A is

the output of an element of type t and K is an input parameter usually taken to The term cfA i s that cost contribution due to the logical properties A similar equation of t ; csA is that cost contribution due to the side effects of setting A to 1; and cdA is a constant cost associated with the element type t . holds f o r cA. Calculation of cfA and cf Gate functions: the output be A. Consider a gate having input lines i = 1 , 2 . Let Then we have the following r e s u l t s .

Private communication.

68
AND gate A = 1.2

.. B REUER

cfA = y ci i=l A =Tt2 + cfA min i

(each input must be a 1 )

{cT) (at least one input must be a 0)

NAND gate Interchange A and . Similar equations hold for a OR and NOR gate. We represent a sequence of input vectors by X(l), X(2), X(3) Then the cost of this sequence is the sum of the cost of each vector. Also, since C (t) = (C = 0 ) , (C = 1 ) , then cC p (+) cC + cC .

F l i p f l o p (JK positive edge triggered; S, R asynchronous):

Q = J(Q + K) S R C ( t ) + SR Q + K(Q + 3) S R C (t) + S R Comments cfQ = min (cS + cR, cJ + cK + 2cS + 2cR + cCp + cCp , cJ + cQ + 2cS + 2cR + cC + cC ) cfQ = min (cR + cS, d i r e c t set

set F/F set or t r i g g e r

cK + cJ + 2cS + 2cR + cC + cE , cK + cQ + 2cS + 2cR + cC + cC ) Counter:


E

L
B
P

Cr

ip_ \

% %

>

T
QA
QA

<
reset and increment load a 1 Q. = 0 and increment reset load a 0 Q A = 1 and increment.

R = (RL, RLE C p (t)) RCPA + RLE C (t)QA


RL
+

R:P A

+ RLE Cp(t)QA

NEW CONCEPTS IN AUTOMATED TESTING OF DIGITAL CIRCUITS Therefore cfQ. = min (cR + 3cL t 2(cR + cE) + cC + cC , " cR + cL + cPft, 2(cR + cL + cE) + cC + cC. + cQ);
P P M

69

Q B = RE P B + RLE C (+)QA Q B = RL + R[ P B + RLE C (+)(QAQB) etc.

load a 1 Q = 1 and increment ; A reset load a 0 Q = Qg = 1 and increment; A

C r = E Cp QA% % % cfC r = cE + cC p t cQ A + cQ B + cQ c t cQ D .

In similar fashion, cost equations for other logic units can be derived. Computation of side effects factor csA
When a l i n e is set to a 0 or 1 , due to i t s fanout t h i s l i n e setting may affect the logic value of many elements. effects. Consider a l i n e which affects only gates and f l i p f l o p s . can be computed as follows. Then the value of csA Simulate The W e refer to this phenomenon as side

Let l i n e A have logic value 6 e { 0 , 1}.

the c i r c u i t with the i n i t i a l condition A = 6 and a l l other lines at x. contributions to csA are as follows: (1) (2) (3) a l i n e B (output of a gate) set to 0 or 1 contributes' 1 ;

a gate B having m binary inputs and an output at has a contribution of m/n, where is the number of inputs to gate B; i f a f l i p f l o p is set, or reset, i t s contribution is 4.

Again, t h i s concept can be extended to more complex elements, such as s h i f t registers and counters. For example, f o r a counter, i f we set R = 0 (reset) we W e thus have that csR = can assume that half the f l i p flops w i l l change s t a t e . the counter.

j (4 + G) where G is the average side effect associated with an output l i n e of When a counter is incremented, E(n) f l i p flops may change s t a t e , In Figure 4 we indicate the flow hence the side effects cost is E(n)(4 + G).

chart f o r assigning the costs cA and C to a l l lines in a c i r c u i t , where we assume that the rules f o r calculating the costs f o r each p r i m i t i v e element in the c i r c u i t are known. I f A is a primary input, cfA = cdA = 0. The algorithm starts at the primary inputs to a c i r c u i t , calculates these costs, and then proceeds to

70

.. B REUER

process the elements to which these lines fanout. B ecause of feedback, some elements are processed repeatedly until their cost values stabilize. The final cost value is a measure of the control ability of each line in the circuit. Set all line costs to 32.00 Compute costs of all primary input lines (side effect cost only)
Put a l l on fanout l i s t s of p i ' s into f r o n t i e r l i s t . Compute cA and cA for a l l outputs of a l l elements in f r o n t i e r l i s t . Assign these costs to the l i n e s . For each l i n e having a new cost, put a l l elements on i t s fanout l i s t into the f r o n t i e r l i s t . D elete from the fron t i e r l i s t a l l elements assigned new costs. ~C Frontier l i s t empty? J yes I D one I Figure 4. Computation of Line Costs cA and cA.
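A minimal sketch of this frontier-driven cost computation is given below, restricted to AND gates and with the side-effect and element-type terms folded into a single constant; the netlist format and all names are assumptions of the sketch.

    K = 32000

    def line_costs(circuit, primary_inputs, max_iters=100):
        """Iteratively compute c1 (cost of setting a line to 1) and c0 for every line.

        `circuit` is assumed to map each gate output to ('AND', [input lines]).
        """
        c1 = {line: K for line in circuit}
        c0 = {line: K for line in circuit}
        for pi in primary_inputs:
            c1[pi] = c0[pi] = 1              # primary inputs are directly settable

        for _ in range(max_iters):           # repeat until costs stabilise (feedback)
            changed = False
            for out, (kind, ins) in circuit.items():
                assert kind == 'AND'
                new1 = min(sum(c1[i] for i in ins) + 1, K)   # every input must be 1
                new0 = min(min(c0[i] for i in ins) + 1, K)   # any one input at 0
                if (new1, new0) != (c1[out], c0[out]):
                    c1[out], c0[out] = new1, new0
                    changed = True
            if not changed:
                break
        return c1, c0

    netlist = {'g1': ('AND', ['a', 'b']), 'g2': ('AND', ['g1', 'c'])}
    print(line_costs(netlist, ['a', 'b', 'c']))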

elements

Computation of D drive costs dA The cost dA associated with a l i n e A i s an estimate of the d i f f i c u l t y of driving an error signal (D ) on l i n e A to a primary output (po). For each p r i m i t i v e element in our system we have developed an equation f o r calculating the D drive cost for each input given the l i n e costs for each input and the D drive cost of each output. Because of the complexity of these equations, we w i l l only i l l u s t r a t e a few simple cases. Let l i n e X fan out to elements ,, E 2 , . . . , E p . Then dX = Assume we have already computed the costs dE,(X), which is the cost of driving a D on l i n e X through E. to a po.

min {dE } .

Consider an element having input A and output X, then dA = dpA + dX + dt where dpA is the cost of propagating a D from A to X dX is the minimum cost of propagating the D at X to a po, and dt is a cost associated with the element type t of E. Usually, dt increases with the number of clock times required to drive a D at A to X. Next we will indicate a few of the equations used to define dA.
1. Primary outputs: dA - 0 i f A is a primary output.

2. Gate functions:" AND gate assume inputs l,2,...,n, output X, and a D on input line i.

Then

di =

j=l,jrl

ci + dX

1.

NEW CONCEPTS IN AUTOMATED TESTING OF DIGITAL CIRCUITS


This follows since a l l other inputs to the gate must be a 1 . 3. JK f l i p - f l o p with d i r e c t set and reset (negative logic) and positive edge triggering: Case A) Propagation of J = D to output. i 0

71

To propagate a D (or D) from J to Q or Q we need S = R = 1 , Q = 0, and a clock pulse. J = 0 we have Q = 0 , and i f J = 1 we have Q = 1. Therefore, If

dJ = min(32000,cQ>2cS + 2cR + cC (+) + dX + 2)

In similar fashion we can compute dK, dS, dR and dC . drive equations f o r counters and s h i f t registers.

W e have also developed D-

In Figure 5 we indicate the flow chart for computing a l l D-drive costs in a circuit. circuit. These costs are a measure of the observability of each l i n e in a In fact L. Goldstein of Sandia Laboratories has extended our results

and has successfully applied his results to the area of design f o r t e s t a b i l i t y , since these costs give the designer valuable information on observability and control a b i l i t y . I Set a l l D drive costs to 32000 1 Set D drive costs of a l l output pins to zero. Place a l l elements which have at least one primary output on the f r o n t i e r l i s t . Select an element E on the f r o n t i e r l i s t having an output with a minimum D cost. Flag t h i s output from further processing. Compute D drive costs for i t s inputs when possible. Add i t s input elements to f r o n t i e r l i s t ( f o r those lines assigned a new c o s t ) . Delete duplicate e n t r i e s . I f D-drive costs f o r a l l inputs to selected element E have been computed, delete E from f r o n t i e r l i s t . ( F r o n t i e r l i s t empty?'V-^ Cost computation Figure 5. 2.2 High Level Models In most Computation of D-Drive Costs.
ls com

P1ete

In the LASAR system 141, the only p r i m i t i v e element is a NAND gate.

other test sequence generation systems primitives consists of gate and f l i p flops. In our system p r i m i t i v e elements consists of gates, f l i p flops and higher level functional elements such as counters, s h i f t r e g i s t e r s , RAM's and ROM's. are several reasons f o r taking t h i s approach, namely There

Private communication.

72

. . BREUER

1) 2)

most d i g i t a l system consist of the interconnections of function elements such as counters, s h i f t r e g i s t e r s , decoders, multiplexers, etc. in LSI and VLSI c i r c u i t s , the detailed logic of these functional elements is not known, hence gate and f l i p flop models are not applicable, and higher level models lead to considerable computational e f f i c i e n c y .

3)

Hence several of the problems discussed in section 1 are aleviated or reduced in complexity by employing high level p r i m i t i v e s . The major disadvantage of this approach is the time required to develop the model for each p r i m i t i v e function. Breuer and Friedman 1421 have reported on the development of high level models for s h i f t registers and counters to be used during t e s t generation. t i o n we w i l l b r i e f l y summarize this work. In generating a t e s t f o r a c i r c u i t using the D-algorithm there are three main operations to be carried out, namely l i n e j u s t i f i c a t i o n , D-drive and implication. One could generate a set of cubes [2,31 for defining each of these operations. For a NAND gate having inputs A and B, and output C, the cubes are shown below, d-cubes A B C D 1 D 1 D D D D D 0 X 1 p r i m i t i v e cubes A B C X 0 1 1 1 0 In this sec-

Here we see that i f A = D and we want C to equal TJ, then we require = 1 (Ddrive). choices. For large primitive elements such as s h i f t registers and counters we do not develop a complete set of cubes since they would be too numerous to store and manipulate. line. Rather we use the concept of algorithms and compute our results onW e w i l l i l l u s t r a t e these concepts for a p r i m i t i v e counter element. I f A = 0 then C = 1 ( i m p l i c a t i o n ) ; i f we want C = 1 then we require A = 0 In general D-drive and l i n e j u s t i f i c a t i o n imply or = 0 ( l i n e j u s t i f i c a t i o n ) .

Consider an -bit counter which, under normal operation performs four functional operations, namely (U) (D) (P) (H) Count up - - the contents of the register are incremented by one; Count down - - the contents of the register are decremented by one; Parallel Load - - the contents of the register are set to the values of the data inputs A-,, . . . . A ; and Hold - the contents of the register are unchanged.

NEW CONCEPTS IN AUTOMATED TESTING OF DIGITAL CIRCUITS

7 3

The functional behavior of this device is determined by the values of four control signals (? , U/D, G, L) as specified by Table 2.

% 1 0 0

U/D

L 0 1 1 1 1

Algorithm (P) Parallel Load (H) Hold (H) Hold (U) Up (D) Down

1 0

1 0 0 0

Table 2. Input codes for normal functional operations. Implication for UP/DOWN counter Implication can be performed using a tabular approach that utilizes an input mapping table which specifies the functional behavior of the device for any values of the control inputs. Allowing inputs on the control signals to have the values 0,1,, there are (3) = 81 possible input conditions. The table maps each of these 81 input conditions into one of 15 possible algorithms which correspond to the union of the four basic operations previously listed. The input algorithm mapping table for the UP/DOWN Counter is shown in Figure 6.

U/D G L

Algorithms Load () Up, Down or Hold (UDH) Hold (H) Hold (H) Up o r Down (UD) Up (U) Down (O) Up o r Hold (UH) Down or Hold (DH) Up, Down or Hold (UDH) UH DH UDH DH DH Hold or Load (HP) HP Up, Down or Load (UDP) Up or Load (UP) Down or Load (DP) Up, Hold or Load (UHP) Down; Hold or Load (DHP) Up, Down, Hold or Load (UDHP) DHP UHP UDHP DHP UHP UDHP

No.

of Input Conditions Uniquely Covered 27

0 1 1 1 1 1 1 1 1 1 1 1 1 1 1

1
0 0 0

1 0 1 0

- 1 - 0

."

0 0 0

0 1 0 1

0 0 0 0 0

. 27

* *'

1 0 0 0 0 0 0

. .

1 0 1 0 0 1 0 1

1 0 0 0

" \

0 0 0

1 1

27

>

Figure 6.

Input algorithm mapping table for Up/Down Counter.

74

.. B REUER

Some of the 15 algorithms for the counter are as follows: Counter Algorithms U (COUNT UP) i) ii) iii) iv) all bits to the left of the least significant 0 are unchanged; all bits (if any) to the right of and including the least significant become ; least significant 0 becomes 1 if it is to the right of the least significant ; and a rightmost string of l's (if any) become O's.

In the following development, y i is the current state of the ith flip flop in the register, and Y. is the next state of this flip flop. (HOLD) (LOAD) 1) 2) j + j . ; Y. + . ; 0 S i . 0 S i S .

UDH (UP o r D O W N o r HOLD)

a l l b i t s to the l e f t of both the least s i g n i f i c a n t 0 and least s i g n i f i c a n t 1 are unchanged. a l l other b i t s become .

These algorithms are independent of the specific implementation of the UP/DOWN counter. The table mapping control inputs into algorithms w i l l , in general, be implementation dependent. The table of Figure 6 and the counter algorithms can be used to perform implication. Let = {H,P,U,D} be a set of four p r i m i t i v e functional algorithms From the values of the control inputs (C ,U/D,G,L) and the of the counter.

table of Figure 6 a set of possible algorithms B, i s determined. S i m i l a r l y , from the values Y,y and A another set of possible algorithms B 2 _B is determined. He

Specifically, ( V . y ^ l all i ; e

iff iff

iff

all i iff

(, . )
1

M ;

UcBo

(U(y).Y.) r 1 ; all i ~7

D cB2

D ( (y). Y.) t 1 . all i ~1

The actual algorithm must be i n the set B ^ B ^ may be implied.

From this set and the table of

Figure 6 using cubical i n t e r s e c t i o n , additional values of the control inputs From the counter algorithms, additional values of the outputs The counter algorithms can also be used to determine For the = ; U = D; D = U; Y may then be implied.

implied values of y and A using the concept of inverse algorithms. counter, the inverse relationships are as follows: "1 = . Example:

Consider a 6 - b i t counter and l e t (C , U/D,G,L) = (0 ) From the table of Consequently,

A = (0 1 1 1 0 1 ) , Y = ( 1 0 ) and y = (0 0 1 ) . Figure 6, B1 = {U, D, H, P} and from Y, y and A, B2 = U).

NEW CONCEPTS IN AUTOMATED TESTING OF DIGITAL CIRCUITS


B,0 B2 = {U} and from Figure 6 the signal values L = 1 , G = 0, and U/D = 1 Now y = U (Y) = D (Y) = ( ) and hence no new state Also Y = U(y) = (0 ) implies 6 = 0. After y = (0 0 1 ) and

75

are implied.

variable is implied. ( p ,

completion of implication Y = (0 1 0 ) , U/D , G, L) = (0 1 0 1). DDrive f o r the Counter

This problem consists of propagating error signals D or D from the control signals, the data inputs A and the state variables y to the outputs Y. single and mul t i pievector solutions e x i s t . When signals can assume the values D and D the implied signal values can be determined from the table of Figure 6 and the counter algorithms by the process of composition 1431. The determination of inputs which propagate a D to an output of the counter can be specified by a "canonical" set of propagation D cubes such as those i n Figure 7 which specify the propagation of a single D signal to an output of the counter.
Composite Algorithm H/P H/P H/P H/P U/P U/P U/P U/P H/U H/U H/D H/D U/D U/D U/D U/D H/D H/D H/U H/U P/P H/H H/H U/U U/U D/D D/D L

Both

U/D

y.

y.

j< '
1 1 1 1 1 1 0 0 D D D D 0 0 1 1 . 1 1 0 0 D D D D D 1 . 0 0 0 0 . D D 0 0 0 0 0 0 D D D D 1 1 0 0 0 0 0 0 0 1 0 1 1 0 1 0 0 0 1 1 0 1 0 0 1 1 0 1 0 1 0 1 1 0 0 1

s< '

Y. i D D D D D D D

D D D D D D D D 1 1 1 1 1 l 1 1 1 1 1 1 0 1 1 1 1 1 1 0

1 1 0 0 0 0 D D D D 0 0 0 0

1 0 1 0 1 1 1 l l 1

1 1

D D D D D

1 1 1 1

15
D D D D D D D D D D

O 0 0 1 0 0 0 0

0 1 -

15
0 1 D

Figure 7.

"SingleD " propagation D cubes f o r UpD own Counter.

76 Line J u s t i f i c a t i o n for Counter

.. BREUER

Both single and multiple-vector solutions exist to l i n e j u s t i f i c a t i o n problems. W e w i l l only i l l u s t r a t e single vector solutions. Let B, be the set Let B2 be the set Compute , 2 and of possible algorithms defined by the control input values. of possible algorithms defined by the values of y , A and Y.

define additional control inputs and values of y and A using the table of Figure 7 and the counter algorithms as specified in the following general procedure. Procedure: (Single-Vector Line J u s t i f i c a t i o n for Counter)

(1) Generate , 2 . (2) Select a primitive algorithm in ,,, if one exists. (If none, justification is impossible.) (3) Specify necessary values to the elements in (C , U/D, G, L) so that P' algorithm is realized.
(4) (5) Example: Specify y = " (Y) unless = , in which case A = of (Y). Normal backtrack can be used to generate other possible solutions. Consider a 4 - b i t counter and assume X = (0 1 0 1 ) , y = ( 0 ) , Then = {U, D, , } and

A = ( 0 1) and (C , U/D, G, L) = ( ) . 2 = {U, D, } = ^ ^ J u s t i f i c a t i o n No. 1 :

y = U-1, (Y) = D(0 1 0 1 ) , which, from the Counter Algorithm for D implies y = t0 1 0 0 ) . SUMMARY AND CONCLUSIONS

Select = U.

From Figure 6, C = 0, U/D = 1 , G = 0 and L = 1.

Furthermore,

In this paper we have first reviewed the current state of the art in ATPG systems. We believe that the growing complexity of modern circuitry makes test generation an extremely arduous task. To ease the difficulty of this task one must employ some means of design for testability as well as have available a powerful ATPG system. We have discussed several aspects of an advanced ATPG system which we are designing, namely the use of preprocessing and high level functional models. We have not presented the actual algorithm. This algorithm ties together our functional models, the time delays in the circuit, and the results of our rate and cost preprocessing so that test for faults can be more efficiently generated. These tests will then be processed using a concurrent fault simulator. Aspects of the Initial design of this simulator were reported in [44]. In the future we believe that test generation must proceed at still higher levels

N E W CONCEPTS IN AUTOMATED TESTING OF DIGITAL CIRCUITS of description, such as the register transfer level and i n s t r u c t i o n set l e v e l . The system specification must also be used, hence one w i l l actually be testing for overall functional performance. arrays of chips. W e must learn how to e f f i c i e n t l y handle b i t sliced microprocessors, and concatenate tests in order to handle large

77

There i s also a potential application f o r generating tests f o r

large regular arrays using recursive procedures. I f the problems of testing are not adequately solved, the successful u t i l i z a t i o n of LSI c i r c u i t s may be seriously impeded.
ACKNOWLEDGEMENT I would l i k e t o acknowledge t h e c o n t r i b u t i o n o f P r o f e s s o r A. D. Friedman who worked w i t h me on much o f t h e model development f o r t h e c o u n t e r which is

reported in t h i s paper.

REFERENCES til F. F. Sellers, M. Y. Hsiao and L. W. Bearnson, (July 1968), "Analyzing Errors with the Boolean Difference," IEEE Trans, on Computers, Vol. C-17, pp. 676-683. A Calculus and

121 J . P. Roth, (July 1966), "Diagnosis of Automata Failures: a Method," IBM Journal of Res, and Dev., pp. 278-291.

131 J . P. Roth, W. G. Bouricius and P. R. Schneider, (Oct. 1967), "Programmed Algorithms to Compute Tests to Detect and Distinguish Between Failures in Logic C i r c u i t s , " IEEE Trans, on Computers, Vol. EC-16, pp. 567-580. 141 151 J . J . Thomas, (August 1971), "Automated Diagnostic Test Programs f o r Digital Networks," Computer Design, pp. 63-67. D. B. Armstrong, (January 1966), "On Finding a Nearly Minimal Set of Fault Detection Tests f o r Combinational Logic Nets," IEEE Trans, on Electronic Computers, Vol. EC-15, No. 1 , pp. 66-73. D. T. Wang, (July 1975), "An Algorithm for the Generation of Test Sets f o r Combinational Logic Networks," IEEE Trans, on Computers, Vol. C-24, pp. 742-746. S. S. Yan and Y. S. Tang, (November 1971), "An E f f i c i e n t Algorithm f o r Generating Complete Test Sets for Combinational Logic C i r c u i t s , " IEEE Trans, on Computers, Vol. C-20, pp. 1245-1252. D. C. Bossen and S. J . Hong, (November 1971), "Cause Effect Analysis f o r Multiple Fault Detection in Combinational C i r c u i t s , " IEEE Trans, on Computers, Vol. C-20, pp. 1252-1258. M. A. Breuer, S. J . Chang and S. T. H. Su, (January 1976), " I d e n t i f i c a t i o n of Multiple Stuck-Type Faults in Combinational Networks," IEEE Trans, on Computers. Vol. C-25, pp. 44-54.

[61

[71

181

191

78 [10] [11] [12]

.. BREUER K. C. Y. Mei, (July 1974), "Bridging and Stuck-At-Faults," IEEE Trans, on Computers, Vol. C-23, pp. 720-727. J . J . Shedletsky, (June 1978), "Delay Testing LSI Logic," 1978 I n f i . Symp. Fault-Tolerant Computing, Toulouse, France, pp. 159-164. M. A. Breuer, (October 1974), "The Effects of Races, Delays, and Delay Faults on Test Generation," IEEE Trans, on Computers, Vol. C-23, pp. 1078-1092. G. R. Putzolu and J . P. Roth, (June 1971), "A Heuristic Algorithm f o r the Testing of Asynchronous C i r c u i t s , " IEEE Trans, on Computers, Vol. C-20, pp. 639-646. M. A. Breuer, (November 1971), "A Random and an Algorithmic Technique f o r Fault Detection Test Generation for Sequential C i r c u i t s , " IEEE Trans, on Computers, Vol. C-20, pp. 1364-1370. F. J . H i l l and B. Huey, (May 1977), "SCIRTSS: A Search System f o r Sequential C i r c u i t Test Sequences," IEEE Trans, on Computers, Vol. C-26, pp. 490-502. P. Agrawal and V. D. Agrawal, (July 1975), "Probabilistic Analysis of Random Test Generation Method f o r Irredundant Combinational Logic Networks^' IEEE Trans, on Computers, Vol. C-24, pp. 691-695. R. C. Ogus, (May 1975), "On the Probability of a Correct Output from a Combinational C i r c u i t , " IEEE Trans, on Computers, Vol. C-24, pp. 534-544. H. Huang and M. A. Breuer, (1973), "Analysis of Detectability of Faults by Random Patterns in a Special Class of NAND Networks," Comput. and Elect. Engng., Vol. 1 , pp. 171-186. K. P. Parker and E. J . McCluskey, (May 1975), "Analysis of Logic Circuits with Faults using Input Signal P r o b a b i l i t i e s , " IEEE Trans, on Computers, Vol. C-24, pp. 573-578. J . Losq, (June 1977), "Efficiency of Compact Testing for Sequential Circ u i t s , " Proc. 7th I n f i . Conf. on Fault-Tolerant Computing, pp. 168-174. Hewlett-Packard Company, (April 1977), "A Designer's Guide to Signature Analysis," Application Note 222, Palo A l t o , C a l i f o r n i a . J . P. Hayes, (June 1976), "Transition Count Testing of Combinational Logic C i r c u i t s , " IEEE Trans, on Computers, Vol. C-25, pp. 617-620. J . P. Hayes, (January 1978), "Generation of Optimal Transition Count Tests," IEEE Trans, on Computers, Vol. C-27, pp. 36-41.

[131

41

51

[161

71 [181

91

[201 [211 [22] [231

1241 J . P. Hayes, (October 1976), "Check Sum Methods for Test Data Compression," J . Design Automation and Fault Tolerant Computing, Vol. 1 , pp. 3-17. [251 [261 K. P. Parker, (June 1976), "Compact Testing: Testing with Compressed Data," Proc. 1976 I n f i . Symp. on Fault Tolerant Computing, Pittsburgh, pp. 93-98. S. C. Seth, (February 1977), "Data Compression Techniques in Logic Testing: An Extension of Transition Counts," J . Design Automation and Fault Tolerant Computing, Vol. 1 , pp. 99-114.

N E W CONCEPTS IN AUTOMATED TESTING OF DIGITAL CIRCUITS [27] [28] J . Losq, (June 1976), "Referenceless Random Testing," Proc. 1976 I n f i . Symp. on Fault-Tolerant Computing, Pittsburgh, pp. 108-113. H. Fujiwara and K. Kinoshita, (June 1978), "Testing Logic Circuits with Compressed Data," Proc. 1978 I n f i . Symp. on Fault-Tolerant Computing, Toulouse, France, pp. 108-113. Fluke-Trendar Div, (1973), "Faultrack: Universal Fault Isolation Procedure f o r Digital Logic," B u l l e t i n 122, Mountain View, C a l i f o r n i a . J . Fischer, (1974), "Test Problems and Solutions for 4K RAMS," in 1974 Digest Papers, Symp. Semiconductor Test, pp. 53-71.

79

[29] [30] [31] [32]

John P. Hayes, (February 1975), "Detection of Pattern-Sensitive Faults in Random-Access Memories," IEEE Trans, on Computers, Vol. C-24, pp. 150-157. J . Knaizuk, J r . and C. R. P. Hartmann, (November 1977), "An Optimal Algorithm for Testing Stuck-At-Faults in Random Access Memories," IEEE Trans. on Computers, Vol. C-26, pp. 1141-1144. V. P. S r i n i , (April 1978), "Fault Location in Semiconductor Random-Access Memory Units," IEEE Trans, on Computers, Vol. C-27, pp. 349-358. S. Seshu and D. N. Freeman, (August 1962), "The Diagnosis of Asynchronous Sequential Switching Systems," IEEE Trans, on Electronic Computers, Vol. EC-11, pp. 459-465. D. B. Armstrong, (May 1972), "A Deductive Method for Simulating Faults in Logic C i r c u i t s , " IEEE Trans, on Computers, Vol. C-21, pp. 464-471. E. G. Ulrich and T. Baker, (April 1974), "Concurrent Simulation of Nearly Identical Digital Networks," Computer, Vol. 7, pp. 39-44. S. G. Chappell, H. Y. Chang, C. H. Elmendorf and L. D. Schmidt, (November 1974), "A Comparison of Parallel and Deductive Simulation Techniques," IEEE Trans, on Computers, Vol. C-23, pp. 1132-1139. M. A. Breuer and L. Harrison, (October 1974), "Procedures f o r Eliminating Static and Dynamic Hazards in Test Generation," IEEE Trans, on Computers, Vol. C-23, pp. 1069-1078. See references in A. Hlawiczka, (February 1978), "Comment on Procedure for Eliminating Static and Dynamic Hazards in Test Generation," IEEE Trans. on Computers, Vol. C-27, pp. 191-192. R. A. Rutman, (1972), "Fault Detection Test Generation f o r Sequential Logic by Heuristic Tree Search," IEEE Computer Repository Paper No. R-72-187. M. A. Breuer and A. D. Friedman, "Functional Level Modeling in Test Generation," submitted to IEEE Trans, on Computers. M. A. Breuer and A. D. Friedman, (1976) Diagnosis and Reliable Design of Digital Systems, Computer Science Press. M. Abramovici, M. A. Breuer and K. Kumar, (June 1977), "Concurrent Fault Simulation and Functional Level Modeling," Proc. 4th Design Automation Conference, pp. 128-137.

[33] [34]

[35] [36] [37]

[38]

[39]

[40] [41] [42] [43]

80 [44] 45

.. BREUER . . Breuer, (March 1973), "Testing f o r Intermittent Faults in Digital C i r c u i t s , " IEEE Trans, on Computers, Vol. C-22, pp. 340-351. I . Koren and Z. Kohavi, (November 1977), "Diagnosis of Intermittent Faults in Combinational Networks," IEEE Trans, on Computers, Vol. C-26, pp. 1154-1158. S. Kamal and C. V. Page, (July 1974), "Intermittent Faults: A Model and a Detection Procedure," IEEE Trans, on Computers, Vol. C-23, pp. 770-725. J . Savir, (June 1977), "Optimal Random Testing of Single Intermittent Failures in Combinational C i r c u i t s , " 1977 I n f 1. Symp. on Fault-Tolerant Computing, Los Angeles, C a l i f o r n i a , pp. 180-188.

46 47

G. Huigrave, editoi, COMPTER-AIDEP DESIGN oi digitai ele c troni c c ir c uiti and iy&temi North-Holland Publishing Company ECSC, EEC, EAEC, Bruiieli S Luxembourg, 1979

LSI DEVICE CAD VERSUS PCB DIGITAL SYSTEM CAD ARE REQUIREMENTS CONVERGING ?

H. De Man Katholieke Universiteit Leuven Laboratorium ESA T Heverlee, Belgium

INTRODUCTION No discipline in todays technology is subjected to such a increase in complexity than that of digital electronics. This is mainly the result of achieving over four orders of magnitude increase in functional density per integrated circuit during the last two decades LU . This development is the basis of the low cost of computing power which in its turn is necessary to cope with the manipulation of the vast amount of data involved in the design of such systems. As it stands today a digital system can be considered as an interconnection on a printed circuit board (PCB) of a number of functional integrated circuits (IC's) at MSI or LSI level. The purpose of this contribution is to discuss the commonalities and differences between computer aided design (CA D) tools for PCB and IC level design. This is believed to be of interest to understand future developments of software tools for both fields and to encourage crossfertilization wherever possible. The problem will b studied by looking at the growth in complexity of IC's and the corresponding growth in software tools for simulation, layout and testing. It is thereby convenient to look at the so called MSI period before 1975 followed by a study of actual trends from LSI towards VLSI. 1) THE MSI PERIOD BEFORE 1975 This period is characterized by little or no commonalities between PCB (systems) and IC design. During this time span IC's are limited in complexity to gate and register level (Fig. 1 ) . These functions are assembled by the system designer as sets of PCB's containing 10...100 MSI's (1...10K gate equivalents) to build digital control systems and mini or maxi mainframes where high speed at lowest cost is of primary importance. To ease the design of complex systems MSI IC's are characterized by a) A high degree of standardization in electrical, delay, packaging, I/O buffering characteristics as well as logic function. b) Lowest possible cost/function for highest possible performance in terms of delay and power dissipation. Clearly this leads to strongly different design objectives, abstraction level and, as a result of this, software design tools. The differences are summarized in table 1. PCB level Characteristic IC level Design problem Circuit logic & system PCB layout Testing gate and higher ,,, MIN(package count)


Abstraction Signals Objective Table 1. Transistor V(t), I(t) MINtPxt^xA ) d c

Difference in design problems for MSI level design. P; t^ and A c are respectively gate power, gate delay and chip size.

81

82

H. De MAN

Due to the low complexity, layout and testing of MSI functions did not present major problems. Layout is done manually using interactive graphics as a drawing aid . This leads to the objective of smallest chip size (lowest cost). The main problem is one of circuit optimization at transistor level whereas for PCB MSI design the problem is at the logic design level as well as PCB layout and testing level. 1-1) Design verification. In as far as design verification is concerned this has led to the disciplines of circuit simulation (CKTSIM) for IC design and to logic simulation (LS) in the PCB design world. Their properties are shown in table 2 and illustrated in Fig.1. Characteristic Level Model Signal Technolcgy-Layout link Algorithm Ckt Sim. (IC) Transistor Algebraic V(t), I(t) Excellent Matrix inv. Newton Raphson Implicit Integration Double precision 30...50 gates Logic, sim. (PCB) Gate-FF-(functional) Logic & delay ,,, Very weak Event scheduling Inactivity exploitation Logic operations 20.OOO gates

Arithmetic Max. complexity

Table 2. : properties of circuit simulation and logic simulation. From this table and Fig. 1 one can notice : a) At the time of their development (1968...1970) both tools could cope with the full complexity of the problems at hand in both areas. b) As wished by the IC design objective, circuit simulation allows for very accurate ( 1%) simulation of circuit behavior as a function of technology and layout but the complexity of algebraic device models, numerical algorithms and double precision (64 bit) arithmetic, impose a maximum capacity of ca. 20...40 gates/ simulation. This limitation is fundamental and can only be lifted at the expence of loss in accuracy and for digital circuits only L ^ J (see 2 on timing simulation). c) Logic simulation is ca. 10^ times more efficient by exploiting the high average inactivity (latency) of digital networks and by representing models as simple logic operators in the very restricted set of signals states (,,,) R* . The succes of LS at PCB level is the result of the high degree of standardization of MSI functions offered by the IC manufacturer which allows for the creation of library stored "macrodescriptions" of most MSI IC's to be used directly by the designer, who does not have to worry too much about the modeling problem. Notice however that in LS the link of performance to technology and layout is very weak. LS is a typical PCBMSI tool. The problem of using it to LSI IC's today and at PCBLSI level will be discussed in 2 ) . 1-2) Testing design. Fig. 1. clearly shows that PCB designers have been confronted f rom the start with the testing problem of 1...lOK gate circuits. No such severe problem existed at IC level. This has resulted in the discipline of Automated JJest Pattern Generation (ATPG) initiated after Roth's introduction in 1966 of D calculus 141 or path sensitization which is still recognized as the best systematic technique available.

LSI DEVICE CAD VERSUS PCB DIGITAL SYSTEM CAD


The problem with ATPG is that CPU time (as in CKTSIM) increasesas '' : CPU V K(FF).NY (1,5 < y < 2,5) (2)

83

with N the number of gate equivalents and K(FF) an exponential function of the number ofJlipF_lops (or state variables) of the sequential system. Figure 2 ^" curve Q shows indeed how ATPG cost for unconstrained design becomes prohibitive for complexities in excess of 5000 gate equivalents. This limit is also indicated in Fig.l. The probiert A even more complicated since ATPG needs to be verified by TP verifi cation d L J ta using logic simulation due to crude modeling and simple fault models used in ATPG algorithms. Fig.l. shows again that ATPG at the time of its concep tion (1966) was barely capable of coping with the PCB system complexity but not at all capable as such to cope with todays complexity. Therefore computer manufac turers rapidly went into the dissipline of imposing design rules to make the sys tem testable i.e. the discipline of design and partitioning for testability(DPT) These techniques mainly impose : a. Exclusive use of synchrononous logic b. The provision to add additional (10... 20%) logic to transform sequential into combinational logic for testing purposes by allowing all flipflops to be con nected as a scanable shiftregister (SRL) for state verification L1QJ c. Subnetwork partitioning into subnetworks using the SRL concept which reduces cost to : CPUv2 N . y ( N . ) y (3) 1 1 i X The result is shown clearly in F i g ^ ^ b y curves (y applying a and b) and (applying a,b, & c ) . Fig.l shows that DPT is a must for future PCB systems design. 13) PCB layout A large number of PCB layout programs have been developed. They all make use of the high degree of packaging standardization and perform partitioning,placement and routing. Fig.l. however reveals the fact that the average system complexity is increasing at a much slower rate than IC complexity i.e. rather than making the system complexer it is made cheaper per function by IC technology. Therefore the layout complexity is moving rapidly away from the PCB to the inside of the IC package. Therefore the PCB layout problem will probably not become more difficult in the future but the burden will be on IClayout as will be discussed in 2. 2 A FTER 1975 : FROM LSI TOWARDS VLSI Q)

Referring back to Fig.l. let us analyse some converging and diverging requirements for the design of a PCB system and IC's. a. Converging requirement : cope with increase in complexity Fig.l. shows clearly the convergence between systems and IC complexity when en tering the eighties. From Fig.l. we can also derive the following facts : 1. CKT SIM as such is totally inefficient to cope with full IC complexity. Instead it seems that LS is appropriate, at least until 1981, for IC simulation 2. Even today, ATPG for unconstrained design can barely cope with LSI complexity and clearly will fail totally when VLSI becomes a reality around 1980. Design and partitioning for testability, as discussed in 1, are an absolute must both for future systems and IC design. This imposes the sacrifice of some of the potential complexity increase to design structuring and architecture for testability(e.g. synchronous pipline type arithmetic). Only this way the algorithms of ATPG can cope with the increased complexity.

84

H. De MAN

3. Beyond 1980 for IC design and today already for PCB design, there is a strong need for system simulation (SS) which allows for design verification of synchronous systems described as the interaction of subsystems(RAM,ROM,PLA, processors, registers) described at functional level. To the authors knowledge a-lot of research is going into this field but it is not yet widely used by lack of standardization of methods and education of the users. Here is an important contribution of sponsored joint efforts of universities and industry seems necessary. On the other hand Fig.l shows that most of the detailed design problems of todays PCBMSI are moved away from the PCB into the IC package and this actually causes strongly diverging requirements to PCB and IC design software. b. Diverging requirements : the technology link The biggest difference between PCB and IC design is that, although both are now systems design, the lowest level of design abstraction is extremely different. - At PCB level the interaction,placement and routing of well defined functional modules with well defined external characteristics(e.g. TTL + 5V supply,TTL logic levels, timing data) is considered. - At IC level the full spectrum from system down to circuit, transistor and technology level needs to be considered. Exactly those three fields are in full evolution since the growth in Fig.1 is only possible by technology improvements (smaller linewidths,low temperature processing, low defect oxide growth...) and new circuit techniques (dynamic storage, charge transfer, merged structures, double polysilicon memory cells,analog MOS, MOS sensing amplifiers etc...). This leads to : - strong layout-technology-performance link as opposed to external standardized functional modules at PCB level. - decreasing on chip voltage levels, logic levels, use of new device structures etc... All these effects cause strongly divergent requirements in different fields discussed below : 1) Data base For PCB systems the design of data base at functional(logic)level is imperative in conjunction to ATPG, DPT,LS, SS software. Practical application of such a data base is linked to its continuous updating which is a tremendous task requiring close collaboration of research institutes and semiconductor industry, (cfr EEC study proposal on data bank). For IC design the amount of design data explodes literally downard into device and technology data ( 10 4 gates/chip g, 10& geometric data ! ) . Practical use of such a data base is in it capability to adapt a fast technology evolution ! Whereas in PCB design the data-explosion is in the increasing number of LSI functions, in IC design the data-explosion is present at each design problem at hand. 2) Simulation (design verification) The need for SS and LS both for PCB as well as IC level has been recognized above. However, since progress is only possible by circuit technology cleverness, often electrical effects which cannot be modeled in LS (e.g. transmission gates, sensing amplifiers e t c . . ) are exploited and no such standardization as at PCB level exists. Furthermore the correspondence between intended logic and layout wiring is sometimes far from straightforward(e.g. I^L) and layout and logic performance are strongly interrelated. Therefore full circuit simulation at 1... 10K gate (bit) level could be useful in many cases (e.g. dynamic RAM design). This is clearly beyond the capability CKTSIM as shown in Fig.l. ri r .. 
r i To fill this gap recently a new CAD discipline called TIMING SIMULATION L y " L -1 (TS) has appeared which can cope with dominant effects in logic MOS and I^L circuits up to ca 5K gate level(see fig.l). Timing simulation is a relaxed form of circuit simulation for digital circuits

LSI DEVICE CAD VERSUS PCB DIGITAL SYSTEM CAD

8 5

based on the three following facts : - model simplification : device models are stored as tables or piecewice linear elements and no nonlinear iteration is used. - algorithm simplification : by equation decoupling for small timesteps, matrix inversion is avoided and network inactivity is exploited as in logic simulation. - macromodels of logic networks are used (from gates to PLA's). As a result TS is between 10^... 10^ times more efficient than CKTSIM yet still predicts circuit behavior as a function of layout and technology. Timing simulation is a typical IC design tool as a result of the strong link of IC design to technology. Fig.l and Table 3 summarize the range of applicability, the response and modeling level as well as the degree of link to layout & technology of the different simulation disciplines know today.

io6
V L 10-

SIM.TYPE

RESPONSE

MODELING

TECH.LAYOUT LINK

SYSTEM SIM. LOGIC SIM.

VECTOR 1,0 TIMING 1,0 DELAY ,,,DELAY V(H) ,I(H) 20% Accuracy

FUNCTIONAL GATE FF LOGIC DP MACROMODEL DEVICE TABLE

0 %

L S HO3.

20 %

TIMING SIM.

80 %

2-

S
S

io

_i

CRT SIM

V(t) , I(t) 1 % Accuracy

ALGEBRAIC

100 %

Table 3 : summary of properties of existing design verification aids. Clearly visible is the need to use the full spectrum of design aids for IC work in contrast to PCB design. However most important is that no single discipline can cover the full spectrum since the higher the level of simulation, the less detailed information is obtained. This in itself imposes a top-down design approach such as in structured software programming. The VLSI chip design starts by conceptualizing (testable!) interaction between synchronous subsystems '(SS). These can be verified by LS by expanding the subsystem into gate-FF-register-memory level. This level can be verified for meeting the timing requirements within the systems clocking by LS and CKTSIM. The direct link to layout and technology is of primary importance. From the above and the extremely high cost of redesign due to mistakes, it is absolutely necessary to unify all CAD disciplines under one common description languages (CDL) and data base such that no errors are introduced when going from one level of abstraction to the next.lJqj Also a tendency exists to be able to mix the design levels in so called "hybrid" simulation, in which LS,TS and CKTSIM are possible under one memory and program manager. M 3 3) Layout Since the complexity of interconnections is moving from the PCB board into the LSI chip the IC layout problem becomes the most severe with very different requirements as summarized in table 4.

8 6

H. De MAN

*? 1 2 3 4 5 6

(V) LSI Area optimization important Strong performancelayout link Transistor level Highly technology dependent Many design rules Function identification very difficult Highly error prone

PCB Less important Weak due to I/O standardization Functional level Standardized Few design rules Component library easy Less error prone

Table 4 : layout problems for LSI & PCB The area optimization requirement(point 1) still makes manual design predomi nant, sometimes aided by symbolic design whereby use is made of a menu of predefined symbols which are translated into the different mask level geome 1 tries L ^!. However when going from 1970*1985 we see ca. 3 orders of magnitude expansion of geometric data/chip design i.e. ca. 10 items to be verified again some 40 design rules ! Manual techniques are verv error prone and therefore the 1 h typical. IC discipline of DESIGN RULE CHECKING t2"! has been created/ Again how ever : CPU /V n ... n lgn with n the number of geometric entities. This property again indicates the need for a design automation in a structured partitioned environment. On such technique is the automated layout using a cell approach H I P S . In this polycell approach a number of'Standard" circuit functions are designed at transistor level(cell) using interactive graphics coupled to design rule ve rification .and circuit simulation(activities(2)(3)(4) in Fig.3 which represents an " integrated automated design system'.').At this level all approaches can be done on a minicomputer possibly used as intelligent terminal. Once these cells have been designed their geometrical, logical and electrical characteristics (e.g. logic diagram, loading rules, macromodels) are stored in a common data base as discussed earlier. . , An automated layout program based on row placement and channel routing L ^ l i . | 2 4 J then treats the placement of interconnection of the cells without regard to their actual content. In newer approaches partitioning,placement of routing of polycell subcircuits as well as manual design becomes possible. At this level similar techniques as in PCB design are used. Since all data is contained in the data base also logic and timing simulation of the automated layout can be performed to verify the actual chip performance. If cell layout also is done to testability requirements also ATPG is possible (see fig.3). In this way a fully automated design system can be designed for (V)LSI. It can be concluded that probably this is the only viable way to (V)LSl.The evolution towards this controlled design method is probably comparable to the evolution of programming from machine level language towards higher level struc tured programming. 4) Modeling Whereas for PCB design as well as VLSI design modeling at functional level is necessary ,there is again an additional modeling requirement for VLSI at the

LSI DEVICE CAD VERSUS PCB DIGITAL SYSTEM CAD

87

bottom level i.e. technology and device level. See activities (10) and(ll) in Fig.3 .previous device models cannot cope with a number of new effects resulting from reduction to micron linewidths and modeling of technology steps is necessary to understand submicron lithography and low temperature processingBSl . g 3 CONCLUSION From the above we can conclude that CAD for VLSI has a number of converging and diverging requirements. - converging : - both are systems design in top-down fashion (share SS,LS,Functional modeling) - testability has to be included in the design process so that ATPG can be used - diverging : - absolute link to technology - spectrum of VLSI down to transistor level - absolute requirement for integrated simulation-layout-testing CAD system Fig.4 illustrates the CAD disciplines for both PCB and IC design showing clearly this interrelationships, commonalities and differences. Clearly the efforts to create tools for both disciplines are fairly different except at the top level. References f j . } G.E. MOORE : " Progress in Digital Integrated Electronics;IEEE IEDM Talk 1.3, Washington D.C., Dec.1975. Qj) M.Y.HSEUH et al : " New Approaches to Modeling and Electrical Simulation of LSI Logic Circuits", Comptes Rendus des Journes d'Electronique, Ecole Polytechnique de Lausanne, pp 403-413, Oct.1977 3] M.A. BREUER, A.D. FRIEDMAN : " Diagnosis & Reliable Design of Digital Systems" Pitman pubi. Ltd,London 1977. [4] J.P. ROTH : " Diagnosis of Automata Failures : A Calculus and a Method",IBM Journal of Research and Development, Vol.lO.pp 278-291,July 1966 [5] G.H.STRUGE: " A Test Methodology for Large Logic Networks",proc. 15th Design Autom.Conference,Las Vegas,pp 103-116, June 1978. [6] B0T0RFF et al : " Test Generation for Large Logic Network" Design Automation Conf.,pp 475-486,June 1977 , proc. 14th

fj] SZYGENDA et al : " Digital Logic Simulation in a Time Based, Table Driven Environment -part 2 - Parallel Fault Simulation",Computer pp 38-49,March 1975 8) H.Y.CHANG et al:" Deductive Techniques for Simulating Logic Circuits",Computer, pp 52-53, March 1075 [9] E.G. ULRICH et al : " Concurrent Simulation of Nearly Identical Digital Networks", Computer, Vol.7,pp39-44, April 1974 [lq| M.J.Y.WILLIAMS et al : "Enhartcning Testability of Large Scale Integrated Circuits Via Test Points and Additional Logic",IEEE Transactions on Computers, Vol.C-22,pp 46-60,January [ly W.B.RABBAT et al : " A Computer Modeling Approach for LSI Digital Structures" IEEE Trans.on Electron Devices, Vol.ED-22,No 8,August 1975 [12] B.R. CHAWLA et al : " MOTIS - An MOS Timing Simulator", IEEE Trans.on Circuits & Systems, Vol.CAS-22,No2,Dec.1975.

88

H. De MAN

[j|| S.P.FAN et al : " MOTIS-C : A New Circuit Simulator for MOS LSI Circuits", Proc.IEEE Int'l Symp. CAS.,pp 700-703, 1977 Il4j G. ARNOUT et al : " The use of Threshold Functions and Boolean controlled network elements for macromodeling of LSI circuits", IEEE Journal of Solid State Circuits,Vol.SC-13,Nr.2, june 1978 pp 326 - 332. | l 5 J D.O. PEDERSON et al : " A simulation program with LSI emphasis". Proc. 1978 Int'l Symp.on Circuits & Systems, pp 1-4, May 1978 [ 6 J H. DE MAN et al : " The Use of Boolean Controlled Elements for macromodeling of Digital Circuits", See ref. 15, pp 522-526 [ 1 7 J G.R. BOYLE : " SIMPIL : A Simulation Program for Injection Logic" See ref. 15, pp 890-896. ( 8 J W.M. VAN CLEEMPUT : " An Hierarchical Language for the Structural Description of Digital Systems", proc.14th Design Autom.Conference, San Francisco, pp 377-385, June 1977 [}9j D.GIBSON et al: " SLIC-Symbolic Layout of Integrated Circuits" ,proc. 13th Design Autom.Conference,pp 434-440, June 1976. |2o] B.W. LINDSAY et al:" Design Rule Checking and Analysis of IC Mask Design", proc. 13th Design Autom. Conf., June 1976, pp 301-308 2^ G. PERSKY et al : " LTX- A System for the Directed Automatic Design of LSI Circuits", Proc.of 13th Design Autom.Conference, San Francisco, pp 399-416 1976 g2J A. FELLER : " Automatic Layout of Low Cost Quick-Turnaround Random Logic custom LSI Devices", see ref.[21], pp 79-85 23 A. FELLER : " CAD VLSI Design Techniques and Microprocessor Application", Digest of Int.Solid State Corcuits Conf., Febr.1978, pp 212-213. 24 H. BEKE et al : " CALMOS Computer Aided Layout program for MOSLSI", Journal of Solid State Circuits, Vol.SC-12,No 3,pp 281-282, June 1977 25] B.T. PREAS et al: " Methods for hierarchical Automatic Layout of Custom LSI Circuits Masks", Proc.15th Design Autom.Conf.,Las Vegas,pp 206-212,june 1978. 26J : " Proc. of NATO Course on process and Device Modeling" ,Universit Catholique de Louvain, Louvain-la-Neuve, Belgium,July 19-29,1977

LSI DEVICE CAD VERSUS PCB DIGITAL SYSTEM CAD

89

XXK -\p

U - S Y S g M SIM

^+
L O GIC S!M

W\j~ZFULL NETWORK -WITH SRL (9SI

FULL NETWORK PARTITIONED AND SRL

10 12 1 4 IS 18 20 22 24

THOUSANDS O F GAIES

Fig. I n v o l u t i o n of IC and syste m complexity as compare d t o comple xity handled by CAD t o o l s .

F i g . 2 : Cost of ATPG as a function of comple xity for d i f f e r e n t d e sign s t r a t e g i e s . S o u r c e : re f 6

1ECHNOLOGYI MODELING ||I0I

c ] J SYSTEM CCSCR1PTON| 111

sueicip.cuii LAYO UT |[2| I JESSMSy.E C-OONG1|3I

IC

DESlGw

BU3CF.:U1SMUAT0N ||4| I

3f
SYST. DOC. LOGIC VEaiFCAI;C~ll51 I SYSlGi LAYO UI 1IM1IC SIMULATIO N LOGIC SIMULAI OM BACKPLANEWRNG | PCB LAYO UT -SYSTEM SIM. \

TECHNOLOGY MO DELING DEVICE MO DELING CIRCUIT SIMULATIO N DESIGN RULE CHECKING

Dl6l
k, |'

-LOGICSIM. 1 - TIMING SIMULATIO N -FUNCTCXALMXH MACRO MO DELING AUTOMATED LAYO UT -ATPG -DESIGN .FA VFORTEST

WEAK MASK I MAKING IESI rlERN GENERATIO N VERIFICAtlON TECHNOLOGY LINK

7
ALGORITHMIC LINK

= IG.

Fig.3 : " An integrated design system for IC's "

Fig.4 : Overview of CAD for sys tem and IC design and their re lationships

G. Uuigrave,

ECSC, EEC, EAEC, 8<iu4.6es S Luxembourg,

oi digital electronic circuiti and iyitemi North-Holland Publishing Company

editor,

COMPtfTER-AIPEO PESIGN 1979

CURRENT TRENDS IN THE DESIGN OF DIGITAL CIRCUITS Hans Martin Lipp University of Karlsruhe/Germany ABSTRACT Technological success results in digital circuits with more functions per chip than ever before. Highly regular structures like matrices are preferred or such designs which result in random structures that are generally accepted. Customerdesigned circuits are only a realistic alternative for high volume applications and they require a close interaction between customer and semiconductor manufacturer. On the other hand digital circuits are also in wide use in fields which deal with only a few parts per system. It seems to be impossible to cover the whole range of applications with a unique digital concept. Low priced microprocessors may suggest a common hardware approach, but speed restrictions and high programming effort practically prevent such a general solution. Up to now the designer has to struggle with very different kinds of hardware, and new products seem to enlarge that problem. Without additional insights and tools it is nearly impossible to evaluate given choices in an efficient and competent manner. There exists therefore a concrete need for powerful computer aided design methods that can deal with the complexity of modern circuits and that guarantee a high design quality. As an example the Karlsruhe design system LOGE will be presented as an efficient instrument which can handle different hardware concepts for control applications. I. INTRODUCTION Digital circuits now are in widespread use not only in computer applications but also in many different areas which have been formerly dominated by mechanical, pneumatic or electromechanical devices. As a result the term digital may be an attribute belonging to a complete unit or only to a small fraction of a whole system. Design of digital circuits, therefore, here will refer to that step within the development process which is concerned with the construction of a specific solution from a given set of digital operating chips and modules. This paper does not deal with a detailed evaluation of existing digital circuits and their features. With regard to the topic of this symposium, the aspect of computer aided design for such circuits is the central topic, outlining the need for cheap but effective tools in this field. Larger companies and manufactures with a broad production line in digital systems have a lot of in-house experience, and correspondingly trained personnel. The aim of this paper is not primarily to discuss their problems but to create an approach which better reflects the needs and possibilities of smaller companies without a broad background in digital techniques. Their restricted abilities in manpower, investments, and in experimenting in a new field create a much more difficult situation in adapting to digital electronics than it is for larger companies. Another limitation should also be mentioned. Fashion influences, the need for more comfortable handling and control, smaller volume, higher reliability, reduced power consumption, and additionally wanted features are forcing the introduction of digital circuits in nearly all areas of engineering. But often the solutions must compete e.g. with electromechanical units, which are very cheap, safe against power failure, and able to directly actuate power switches. Then, a real gain in overall costs and performance can only be achieved if a good choice for the hardware has been made and the digital circuits are extremely 91

92

H.M. LIPP

well designed. In my opinion, this can only be achieved in the long run by using appropriate design tools. To define more exactly the designs ste,;s this paper is concerned with, the schematic of Fig. 1 may be used. The complete development process of a system can be divided into several more or less distinguishable steps which are related to specific tasks. The dividing lines between Customer the customers' and the Definition of task manufacturers' part are not precisely fixed. ' Their actual positions depend heavily on the Formol description type of integrated Technology. circuit family to be Hordwareconcept used. Designations 1 Logic design like program and periphery are not Software necessarily equal to IMicroprogramsetc) those used in the con1 Physical design text of digital com"ROH-Codes Manufacturer puters . Programs are often fixed routines ' performing operations Production for predefined, invariable tasks, e.g. 1 in control applicaF tions. Peripheral Test by units may not include manufacturer printers and card Interface. readers, but can conPeriphery ' sist of switches, Test by customer solenoids, stepping motors, special Software Customer readouts, etc. (Userprograms) F

The history of digital electronics shows that 1 the efforts to use computers and approDelivery priate programs for design, first started close to manufacture. It had been triggered Figure 1 by economical factors and the growing complexity of tasks like routing, wiring, drawing, and producing cross references. Later on, modelling and simulation at the gate and register level was supported by computer programs. Two basic steps, translating of the (mostly verbal) given task into some more or less abstract description, and the logic design itself remained a domain of man. Designers acted in this field like artists. Personal creativity and design style, experience and time constraints produced highly differentiated results. In many cases, they may have been quite effective, but on the other hand, understanding by other people, documentation and testing were neglected in this trend. With the same speed as technological progress produces more complex circuits, the classical design philosophy is becoming more and more obsolete, and cannot meet the needs of current developments. Two different strategies now try to overcome these problems. The first one assumes that programming is more easily accomplished,

Production and test of system

CURRENT TRENDS IN THE DESIGN OF DIGITAL CIRCUITS.

93

and better adapted to different tasks, thus avoiding new hardware designs and their problems. The microprocessor approach is an outstanding example for that intention. The other strategy concedes the fact that not all problems can be solved without specialized hardware, and even larger microprocessor applications need additional circuitry that must be designed another way. Hardware description languages and computer aided design are then the central aids, necessary to generate more exact task descriptions, and more reliable hardware designs. Fig. 1 does not show the fact that the design process is no linear arrangement of independent steps. In contrast, constraints especially in logic and physical design overlay a complicated net of backward directed interconnections to distant steps. The great number of alternative choices for implementations, and the complexity of present chips and modules makes it impossible, also for experienced designers, to decide, which would be the best solution for a given problem. Computer aided design may ease this situation a lot, but in general an overall optimization is beyond all possibilities. The next chapters will discuss some concepts and ideas on how to overcome those difficulties related to logic design. II. LOGIC DESIGN AUTOMATION Hand crafted logic designs normally are verified by modelling and simulation. This may be interpreted as a software replacement of the well known hardware experiment. But nowadays, one cannot expect a simulation run to be an adequate check. The grown complexity of circuits together with the increasing number of parameters do not allow one to perform detailed simulations with a complete set of test patterns. Only some functional tests are within reach. But this does not guarantee safe and reliable operation at all. Increasing demand for high speed computation and very large and fast direct access memories impose a high amount of additional costs to the overall development. Because of its inefficiency, simulation must not be used as a validation tool for logic design. It should be restricted to performance evaluation during the design phase of a systems architecture (see e.g. |9|). The constraint of additional costs is getting more and more important, because Choice of basic type Formal definition smaller companies cannot of task afford the burden of unsuccessful simulation runs. '1 The main goal of CAD tools, Modification of structure Definition of therefore, must be to save structure as much unnecessary effort as possible. Highly sophi'' sticated algorithms and Improvement of near-by solution Choice of synthesis procedures parameter values Modification of porameter values together with restart facilities former design 1' runs are the only way to Correct optimal Synthesis Evaluation prog rams byde signer achieve this goal. Generalor near-by solution ly speaking, analytic procedures must be avoided, and replaced by synthesFigure 2 izing ones. Their basic interrelations are described by Fig. 2. The first difference to conventional designing is the introduction of a formal task definition which leads to completely determined behaviour and interface description, before starting any concrete design step. Designers often decline to take this step.because a very early definition would not be possible in real circumstances. To my opinion, this is no substantiated argument, because it is based only on experience with a design style that always starts with some 1

94

H.M. LIPP

hardware design steps to realize some subtask. More parts then are added as necessary, but documentation is mostly done afterwards. The overall result is a time consuming and expensive simulation or hardware debugging. Introducing formal task specification primarily s h i f t s a c t i v i t i e s from a late step in design to a very early one without additional costs. In a d d i t i o n , the designer has to change somewhat his interests. Instead of generating and implementing hardware d e t a i l s , he should concentrate on evaluating alternate solutions with regard to influences that cannot be handled by computers. The most essential feature of the proposed concept l i e s in the fact that a l l results of the synthesis procedure are validated by proof of the algorithm used, and not by simulation of every single case. The challenge of this approach may be seen in four d i f f e r e n t areas: 1) The 'language' used to generate the formal description of a given task must be simple, based on more or less familiar elements, acceptable to most designers, and easy to implement. Sophisticated design concepts ( e . g . see | 8 | , 10]) require at least at the moment well-educated users, and are therefore not generally applicable. Data and schematics generated by the CAD system should be used as main source and reference for test generation and maintenance support. Thus, the design description must be well readable, self-contained, and complete. 2) Synthesis algorithms and programs should be able to handle at least contemporary parameter values and module sizes. Modification to new devices must be possible by only changing minor parts of the procedures. In the past, many CAD programs f a i l e d because they used well-known but inadequate methods (see Fig. 9). 3) Logic design problems normally are related to very bad growth functions. As a r e s u l t , enlarging the number of parameters or their values implies a steep increase in storage and computing time. For most design tasks an upper l i m i t for essential parameters is not known or not r e a l i s t i c . Theref o r e , storage overflow or timeout may occur during a design calculation. Poorly designed CAD systems react to these events only by stopping without a r e s u l t . More e f f i c i e n t systems must control internal calculations with regard to those e f f e c t s . I f storage overflow or timeout are l i k e l y to occur in the midst of a step, the program i t s e l f has to switch to a d i f f e r e n t goal, producing not the optimal but a suboptimal solution within the given l i m i t s . Then the designer has to decide whether he would be s a t i s f i e d with the result or not. I f a more elaborated result is neccessary, he must be able to restart the calculations with the former result as a s t a r t i n g point. Wasting of computer time then is minimized. 4) In some cases CAD tools are available for modern d i g i t a l c i r c u i t s . But often they are r e s t r i c t e d to a single c i r c u i t family. Switching to other products is nearly impossible. The more expensive CAD systems are, the less is the chance for the customer to adapt to upcoming better devices. Theref o r e , CAD systems should be based on product-independent algorithms. Personalization to specific products then w i l l affect only small portions of the design programs (see F i g . 9). Design tools of the described type may help the designer to overcome some of the unsolved problems. 
A l l design steps concentrate on solving a small fraction of the whole design problem. The applied optimization procedures are only valid within the limited context of the subtask of this step. I t seems to be impossible to perform an overall optimization. Thus, designers assume that piecewise o p t i mization may end in a solution near to the ultimate goal , which i t s e l f is unknown. Validation of this heuristic attack is s t i l l open. But some implementation studies indicate that optimizing one step may s i g n i f i c a n t l y increase the number of other design steps. Up to now, evaluation of cost and also of quality of a design can only be done by judging the f i n a l r e s u l t ,

CURRENT TRENDS IN THE DESIGN OF DIGITAL CIRCUITS which implies a complete design. Using improved design instruments may help in getting this evaluation knowledge more effectively with short turnaround times.

95

The substitution of the hand-crafted logic by (semi-)automatically created designs has another effect on the designer. CAD always means some standardization in structures and circuits. Because of this, the designer may feel that his freedom has been reduced, and he would like to be able to perform as creatively as before. But the goal to increase the overall quality of digital design will be more important in the future than creating own and often too tricky solutions. Special conditions may still tolerate this design philosophy, but smaller companies with only a small activity in digital circuits, and therefore with small and less experienced design staffs, must realize a different way in design. III. MODERN DESIGN PHILOSOPHY Nearly independent of a specific technology, the current trend yields new circuits with more functions per chip then ever before. Regular arrangements of basic circuits or software-oriented solutions form the main stream of available or announced modules. In all cases (customer designs are not discussed here) the adaptation of the neutral devices to a specific task will be done by the user by personalization techniques like fuse-blowing or arranging statements in a certain sequence. Solutions are embedded into more general circuits which must show a higher complexity than necessary for a special application. That always means wasting a certain amount of possibilities on the chip, up to about fifty percent. Designers have to accept this tendency, and should not try as before to make use of all elements, if this reduces clarity of design and increases problems for testing. Today's and even more tomorrow's gain in using LSI and VLSI devices lies in adapting tasks to circuits and not vice versa. And only with this postulate being realized a major breakthrough in computer aided logic design and testing will be possible. The price that must be paid is not as high as it was with SSI and MSI circuits. It may be expressed in the percentage of unused parts or instructions, depending on the type of hardware. Only in a small number of high volume, low cost applications, better adapted designs must be looked for. Design flexibility may be created in two different ways as has already been mentioned. One of it uses the generally accepted random structure of digital computers. The specific solution then exists in a sequence of instruction which may perform arithmetic, logic, transfer, address, and branching operations. Roughly spoken, flexibility is constructed within the time domain by'extremely serializing data processing. One of the drawbacks is a relatively low performance. Another one must be seen in the fact that an overlay of a specific structure (both in space and time) on the given task unnecessarily complicates design and understanding. Many of the former difficulties with hardware are only shifted to software. The other way realizes flexibility mainly in the hardware realm, and may generate very fast circuits. Personalization is done by the manufacturer or by the user through connecting or disconnecting basic circuits and pins. The level on which the design can be personalized depends strongly on the desired speed, production volume, and available design support. Table 1 shows some properties of the so-called 'uncommitted logic array' (ULA), often also called 'standard logic array' (SLA) or 'master slice gate array' (MSGA). 
Prefabricated and tested arrays of basic circuits (often only identical gates) are connected by one or more layers of metal that represent the specific solution (see |2|). These may be the logic elements of the future for high speed, high volume data processing units. CAD is available in some cases.


Modules:            ULA, SLA, MSGA
Technology:         ..., MOS, CMOS
Number of gates:    less than 1000
Wiring:             metal (one or more layers)
Speed of gate:      1 ns or less
Power consumption:  several watts per chip
CAD:                ...

Table 1
Table 2 refers to a different class of prefabricated circuits that are completely finished, including wiring and packaging. In addition to the immediately useful logic, they contain additional elements that are necessary for the personalization process. Burning off NiCr fuses or inducing migration in pn junctions changes the interconnection of the chip to the desired configuration. This process is sometimes also called programming, despite the fact that it is related to hardware. Table 2 indicates that in principle these circuits have two logic levels, with some exceptions where a second level can only be achieved by wired AND or OR, or where a third level may introduce preprocessing of two input variables or internal XOR functions. Some of the circuits also contain flip-flops for state and output variables, and internal feedback lines to form sequential circuits of the Moore or Mealy type. The logic power is restricted to functions consisting of a small number of terms. Random access memories are the only ones which can map arbitrary, complex functions, if all address variables are fed to the decoders.
Field programmable logic circuits

Module type                      1st logic level (AND)   2nd logic level (OR)   Circuit type            Application
P. random access memory (PROM)   fixed                   variable               comb. circ.             general
P. logic array (PLA, PLS)        variable                variable               comb. and sequ. circ.   slightly restricted
P. array logic (PAL)             variable                ...                    comb. and sequ. circ.   slightly restricted
P. multiplexer (PMUX)            routing and selection                          comb. circ.             special
P. ROM patch (PRP)               special mapping                                comb. circ.             special
P. gate array (PGA)              variable                not available          comb. circ.             special

Table 2
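As a small illustration of the last point (not taken from the paper), the following Python sketch contrasts a PROM, whose fixed address decoder lets it store any function of its address variables, with a PLA/PAL-style structure whose logic power is limited by the number of product terms it provides. All names and sizes are chosen freely for the example.

    # Minimal sketch (not from the paper): modelling a PROM and a PLA-style
    # two-level structure in Python. Names and sizes are illustrative only.

    def prom_realization(truth_table):
        """A PROM stores one output bit per address: the fixed AND level is the
        full address decoder, the variable OR level is the stored bit pattern,
        so any function of the address variables can be mapped."""
        def f(*inputs):
            address = int("".join(str(b) for b in inputs), 2)
            return truth_table[address]
        return f

    def pla_realization(product_terms):
        """A PLA/PAL-style circuit realizes a sum of a limited number of product
        terms; functions needing more terms than the array provides do not fit."""
        def f(*inputs):
            for term in product_terms:        # term: dict var_index -> required value
                if all(inputs[i] == v for i, v in term.items()):
                    return 1
            return 0
        return f

    # Example: 3-input odd parity needs 4 product terms; in a PROM it is 8 stored bits.
    parity_rom = prom_realization([0, 1, 1, 0, 1, 0, 0, 1])
    parity_pla = pla_realization([{0: 0, 1: 0, 2: 1}, {0: 0, 1: 1, 2: 0},
                                  {0: 1, 1: 0, 2: 0}, {0: 1, 1: 1, 2: 1}])
    assert all(parity_rom(a, b, c) == parity_pla(a, b, c)
               for a in (0, 1) for b in (0, 1) for c in (0, 1))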


There is a growing number of such devices, some of them more specialized than others, but all with higher complexity. There appears to be a trend to overcome some of the difficulties which are related to other hardware structures like ULA and microprocessors. Programmable circuits are especially suited for system development and small volume applications with high speed operation. Experience shows that they are as easy to adjust to design changes as microprocessor programs, but easier to design. Their structure immediately represents the structure of the problem. Word length, processing time, and electrical parameters are better adjustable than with most microprocessor families. They are even superior in bit handling operations, but inferior in numerical surroundings.

Because none of the discussed approaches can cover all upcoming tasks with the same efficiency, at least three different hardware concepts will be important in the foreseeable future (see also Fig. 6):

1) Hardware implementation. Solutions are based on gates, flip-flops, and comparable elements. ULA, PAL, PGA and PMUX may be included within this class.

2) Firmware implementation. Solutions are mainly based on arrays with the addition of counters, registers, multiplexers, and decoders. The essential feature is random access to the array contents. PLAs and random access memories (EPROM, PROM, read/write) are specific instances of such arrays.

3) Software implementation. Microprocessors and related elements are the main building blocks. For higher performance, bit slice processors are available.

Firmware realizations may on the one hand be seen as specially arranged gates and flip-flops, but on the other hand are very similar to microprogramming. Some designers therefore also use the term microcontrollers for the sequential type. But it should be mentioned that there is a great difference in the optimization goals. With controllers of a general kind, minimization of storage units, branching and optimal input serialization are much more essential than with computer applications. Yet a positive aspect of this close neighborhood is the fact that software designers may think in terms of programming, while hardware designers still may think of gates, bits, and signal lines. It is likely that this will bring hardware and software engineering closer together.

IV. FUTURE ASPECTS OF DIGITAL DESIGN

At the moment it is hard to decide whether there will be a new hardware breakthrough or not. I estimate that the possibilities for fundamentally new devices are very limited. Units may be faster and more complex, show additional features, reliability may increase and prices decrease, but the inherent logic problems and structures will not differ widely. What we really need is a better insight into the overall design process, and CAD systems that cover most aspects of digital design in an effective and reliable manner. Some of the related topics may be briefly discussed.

Design evaluation. Comparisons between different logic designs are currently performed by counting gates, flip-flops, pins etc. Other aspects, especially cost/performance trade-offs and the ease of modifying already created designs, are often out of focus.
Logic design should deal more with concepts that guarantee more effective tests with less calculation, and with influences on chip layout and board layout. There will be no single standard for measuring the overall quality of a design, but at the moment evaluation is based on some questionable parameters, and better criteria are necessary.


Product description. Modern circuits are still being described by manufacturers in the same way they used for much smaller units: text, listings and pulse diagrams. The more complex the circuits, the higher the chance that not all essential information is provided by these descriptions. It is somewhat curious that in the age of computers, products are not described by their manufacturers by means of computer based information. With the growing use of hardware description languages by customers, it would be very convenient, if not necessary, to receive computer based product descriptions (CBPD) together with the products. Based on a common language and some standards, this would be a significant improvement for modelling, simulation and evaluation. More steps of the design process could then be connected to form a real CAD system. Fig. 3 gives an impression of how this would look. Basic decisions are assigned to the designer, who also produces a formal description of the task. This may be translated and used for computer based performance evaluation (CBPE). The final structure of the system to be realized is then contained in a computer based task description (CBTD), the terms of which are compatible with the CBPD. The CBTD should serve as a reference for all consecutive design steps. It primarily supports computer aided logic design (CALD) and computer aided evaluation and selection (CAES) in finding the optimal solution out of several choices. Additional steps like computer aided physical design (CAPD), computer aided test generation (CATG) and computer aided manufacture (CAM) complete this ideal concept. Consistent data checking and transfer, complete documentation of all design steps, and a high design quality would be supported by this scheme.

Figure 3: Ideal CAD concept: task definition, choice of hardware concepts, computer based product descriptions (CBPD) of the candidate products, computer based performance evaluation (CBPE), computer based task description (CBTD), computer aided logic design (CALD), computer aided evaluation and selection (CAES), computer aided physical design (CAPD), test generation (CATG), manufacture (CAM), solution.
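To make the CBPD idea concrete, the following is a purely hypothetical sketch (in Python, which the original does not use) of what a machine-readable product description for a simple catalogue part might contain; every field name and value is an illustrative assumption, not a proposed standard.

    # Hypothetical, abbreviated sketch of a computer based product description
    # (CBPD) record; the field names and values are illustrative assumptions.
    cbpd_quad_nand = {
        "device":    "quad 2-input NAND",
        "pins":      {"1A": "in", "1B": "in", "1Y": "out",
                      "2A": "in", "2B": "in", "2Y": "out",
                      "VCC": "power", "GND": "power"},
        "function":  {"1Y": "not(1A and 1B)", "2Y": "not(2A and 2B)"},
        "timing_ns": {"propagation_delay_max": 22},
        "loading":   {"fan_out": 10},
    }
    # A simulator, an evaluation program or a test generator could read such a
    # record directly instead of re-entering data sheet information by hand.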

Integrated circuits. Up to now the term 'integrated circuit' refers to digital or analog circuits on the same chip. With the trend to replace (electro-)mechanical devices by semiconductor elements, interface problems between electronics and the environment are getting harder than before. EMC, rough operating conditions, control of power devices, and power failure compatibility are only some aspects. In many cases, the interface electronics is more expensive than the digital core itself. To take real advantage of semiconductor devices in general control applications for small and cheap systems, transducers from and to the nonelectrical parts of a system must be realized in the same technology. First results with silicon devices that act as transducers from mechanical parameters into electrical ones have been reported (e.g. |1|). With further progress in this field, CAD may also support interface design of this kind.

Access to CAD systems. Complex and effective CAD tools require a considerable amount of scientific and programming effort and must be run at least on powerful minicomputers. Documentation, system maintenance, and consultant service to customers ought to be carefully provided for more pretentious applications. Only large companies are able to raise the necessary investments for their own installations. The larger number of smaller companies must be supported in a different way. Service centers, possibly closely related to universities and research laboratories, may be a possible solution. But large CAD systems are quite different from commercial service programs. Result validation by other means than


the CAD tool may be impossible. Therefore questions of reliability, confidential handling, and liability must be discussed in detail. Reproducibility of results on a maintained system must also be secured. Even second sourcing of a CAD system may be of interest.

V. DESIGN SYSTEM LOGE

As an example of an existing CAD system for logic design which meets most of the discussed criteria, the design system LOGE will be described in a short sketch. It is currently under construction at the Institut fuer Nachrichtenverarbeitung at the University of Karlsruhe, Germany*. To get satisfactory results, we first concentrated our work on the logic design process for a restricted but highly interesting class of problems. To define the task more precisely, we refer to a well known schematic for processing (see Fig. 4). All tasks are divided into a processing unit, which maps the material or data flow in "space", and a control unit, which represents the time relationships.
Figure 4: Recursive partitioning of a task into a processing unit (material/data flow representation, driven by control signals) and a control unit (processing sequence representation, fed by condition signals and special control signals such as overflow, error, ready, via a synchronisation interface). At the lowest level of partitioning the control unit may be realized by hardware or by software.
Control sequences may be described as mappings from one bit vector (condition signals) onto another one (control signals). Switching theory provides some useful tools to start with. The whole unit may be described as a Mealy or Moore automaton (see e.g. |4|). A specific type of flow diagram is used for the formal description. Each processing step consists of a triplet of different symbols for branching, state transitions and outputting (see Fig. 5). The difference to the usual flow diagram lies in the state transition element, because with control applications no instruction counter is available as it is with a normal computer. We prefer this kind of representation because it is completely independent of any specific implementation, easy to handle for both software and hardware designers, and easy to validate and to transform into computer input. Experience of many years has shown its usefulness for digital design.


Figure 5: A processing step of the flow diagram, represented by symbols for the conditions essential for branching, the clocked state transition to the next state, and the outputs.
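A minimal sketch of how such a triplet might be encoded for computer input is given below (an assumed Python encoding, for illustration only; the actual LOGE input format is not reproduced in this paper): every state carries a list of branches, each branch naming the conditions essential for branching, the next state, and the control signals to be output.

    # Assumed encoding of the flow-diagram triplet (illustration only).
    control_description = {
        "IDLE": [({"start": 1}, "LOAD", {"load_reg": 1}),
                 ({},           "IDLE", {})],              # default branch
        "LOAD": [({},           "RUN",  {"enable": 1})],
        "RUN":  [({"done": 1},  "IDLE", {"ready": 1}),
                 ({},           "RUN",  {"enable": 1})],
    }

    def step(state, conditions):
        """One processing step: branch on the conditions, return (next state, outputs)."""
        for required, next_state, outputs in control_description[state]:
            if all(conditions.get(name) == value for name, value in required.items()):
                return next_state, outputs
        raise ValueError("no branch matches")

    state, out = step("IDLE", {"start": 1, "done": 0})
    assert (state, out) == ("LOAD", {"load_reg": 1})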

*The program development is being supported under contracts LIP/100102 by the Federal Republic of Germany; gov. agency is the Kernforschungszentrum Karlsruhe


LOGE consists of three main modules, independent of each other but using the same task description. Fig. 6 shows the different types of implementation with the main output of each module. The whole system cannot be described here (for more details see |3|), but Figs. 7 and 8 may give an impression of the hardware structures used for the firmware approach. Recently we have mainly concentrated on this type, because implementation studies had shown that there would be a large demand for fast and simple controllers for problems that microprocessors cannot cope with. This decision has since been supported by the fact that programmable devices of this kind are now available as LSI modules. Firmware concepts are more versatile than the other ones. The embedded algorithms are highly effective and allow large problems to be solved within seconds or minutes of computer time. The number of inputs and outputs together must be less than or equal to twice the word length of the computer used. The number of different states is also adapted to the size of the computer used and must be less than or equal to 256. Both LOGE-SSW and LOGE- have been thoroughly tested on a UNIVAC 1108 and a PDP 11/40 computer system, respectively, and are well documented. LOGE-MIR is currently under investigation. The modules contain about 10 000 FORTRAN IV statements each.

Figure 6: The three types of implementation (hardware, firmware, software) with the result of a run for each module: lists of chips and interconnecting nets, programming specifications, and documentation; evaluation of the results is left to the designer.

Figure 7: Hardware structure for the firmware approach (clock, input variables, masked input variables, output).
Figure 8: PROM-based hardware structure for the firmware approach (clock, mask code, state code, output code, decoded output).
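The following sketch shows, under an assumed word layout and connectivity (the figures only name the signals), how such a PROM-based control unit can be stepped: the state code and the masked condition inputs form the PROM address, and the addressed word supplies the next state code and the output code.

    # Illustration only; word layout and mask handling are assumptions.
    PROM = {
        # (state code, masked inputs): (next state code, output code)
        (0, 0b0): (0, 0b00),
        (0, 0b1): (1, 0b01),
        (1, 0b0): (1, 0b10),
        (1, 0b1): (0, 0b11),
    }
    INPUT_MASK = 0b0001          # only bit 0 of the input vector is examined here

    def clock_cycle(state, inputs):
        masked = inputs & INPUT_MASK
        return PROM[(state, masked)]          # (next state, outputs)

    state = 0
    state, outputs = clock_cycle(state, 0b1011)   # input bit 0 is 1 -> go to state 1
    assert (state, outputs) == (1, 0b01)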

Special emphasis has been given to ensure the following general properties of the system:
- Checks of computer input for consistency and completeness.
- Possibility to predefine a maximal computing time with the guarantee to receive a solution within that time. Impending storage overflow reduces the search space, but does not cause a stop without result.
- Possibility to embed a task into a predefined structure (essential for task modifications).
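One common way to provide the second guarantee is sketched below; this is an illustration only, not a description of LOGE's internal algorithm: the search always keeps the best solution found so far, returns it when the time budget expires, and shrinks its stored frontier instead of aborting when storage runs short.

    # Illustrative sketch only (assumed mechanism, not LOGE's actual algorithm).
    import heapq, time

    def budgeted_search(start, expand, cost, time_limit_s, max_frontier=10_000):
        best, best_cost = start, cost(start)
        frontier = [(best_cost, start)]
        deadline = time.monotonic() + time_limit_s
        while frontier and time.monotonic() < deadline:
            _, node = heapq.heappop(frontier)
            for child in expand(node):
                c = cost(child)
                if c < best_cost:
                    best, best_cost = child, c      # remember the best solution so far
                heapq.heappush(frontier, (c, child))
            if len(frontier) > max_frontier:        # impending storage overflow:
                frontier = frontier[: max_frontier // 2]   # reduce the search space
                heapq.heapify(frontier)
        return best, best_cost                      # always returns a result in time

    # Toy usage: search for an integer close to 4242 by adding one or doubling.
    solution, value = budgeted_search(
        1, lambda n: [n + 1, 2 * n] if n < 10_000 else [],
        lambda n: abs(n - 4242), 0.05)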



Fig. 9: Modern design philosophy, product families, computer type.

Applications of LOGE to industrial design problems have shown that turnaround times for design and testing are significantly lower than with conventional designs. Faults in the final solution have been due only to faulty circuits, wrong wiring and incorrect definition of the task itself. A special benefit is the generation of a complete documentation of all design steps. Our experience has proven that fast and efficient logic design tools need a profound background of scientific work to be successful. In addition, the problem of comfortably handling such a system may lead to the same amount of effort as the design of the algorithms itself. We are now trying to enlarge the scope of our work to cover other aspects of design, and to create a CAD concept that meets the demands of modern design.

VI. BIBLIOGRAPHY


|1| J.B. Angell: Micromachined silicon transducers for measuring force, pressure and motion. Proceedings of ESSCIRC 1978, Delft.
|2| W.F. Arnold: Gate arrays have marketers raring to go. Electronics, April 27 (1978) pp. 83 and 84.
|3| A. Ditzinger, W. Grass: Rechnerunterstützter Entwurf digitaler Steuerungen ausgehend von einer realisierungsunabhängigen Beschreibung. Tagungsbericht zum 8. Internationalen Kongress Mikroelektronik, München, 1978.
|4| W. Grass: Steuerwerke - Entwurf von Schaltwerken mit Festwertspeichern. Springer-Verlag, Berlin/Heidelberg/New York, 1978.
|5| W. Grass: Zur Minimierung des Multiplexeraufwands bei Mikroprogrammsteuerwerken. Elektronische Rechenanlagen 20 (1978) vol. 2, pp. 57-64 and vol. 3, pp. 123-134.
|6| H.M. Lipp: Array Logic. Proceedings of the second symposium of EUROMICRO, Venice (1976) pp. 57-64.
|7| H.M. Lipp, G. Merz: Schaltungssynthese und Prüfung in der digitalen Elektrotechnik. Workshop proceedings, 1977, Zentralverband der elektrotechnischen Industrie, Frankfurt/Main, pp. 71-93.
|8| E.A. Snow, et al.: A technology-relative computer-aided design system: Abstract representations, transformations, and design tradeoffs. Proceedings of the 15th Design Automation Conference, June 1978, Las Vegas, pp. 220-226.
|9| H. Weber: Ein Programmsystem zur Unterstützung der Rechnerentwicklung. Nachrichtentechnische Fachberichte 49 (1974) pp. 49-64.
|10| H. Woitkowiak: Register-Transfer-Abläufe auf Netzen, Beschreibung und Synthese. Habilitationsschrift, Fakultät für Informatik, Universität Karlsruhe/Germany, 1978.


CAD IN THE JAPANESE ELECTRONICS INDUSTRY

Kenji KANI, Akihiko YAMADA and Masanori TERAMOTO
Nippon Electric Co., Ltd, Tokyo, Japan

This paper is a brief survey of the CAD activities in the Japanese electronics industry. First, NEC's CAD systems, especially in the fields of Computer, ESS (Electronic Switching System) and LSI (Large Scale Integrated Circuits), are overviewed. Among the various CAD activities, test pattern generation for computers, PWB layout for ESS and circuit analysis for LSI are described in detail. Second, notable CAD features and activities of Japanese electronics companies are presented.

1.INTRODUCTION
Major Japanese electronics companies which have advanced CAD technologies are Fujitsu (Fujitsu Co., Ltd), Hitachi (Hitachi Co., Ltd), Mitsubishi (Mitsubishi Electric Corp.), NEC (Nippon Electric Co., Ltd), Oki (Oki Electric Industry Co., Ltd) and Toshiba (Tokyo Shibaura Electric Co., Ltd). In this paper, however, three major examples of NEC's CAD systems are described, because these companies are competitors in the fields of computers, large ESS (except Mitsubishi and Toshiba), communication systems, consumer products and integrated circuits, and the actual status of the other companies is confidential and remains vague. Among the many fields of electronics, Computer, ESS and LSI are picked out because advanced CAD technologies for digital electronic circuits and systems can be found there.

In Section 2, a computer CAD system, which has been developed in the NEC Computer Engineering Division, is overviewed. For test pattern generation of large computers, the usefulness of the Scan Path approach has been recognized in NEC. Therefore, its effectiveness is also summarized. In Section 3, an ESS CAD system, which has been developed in the NEC Switching Engineering Division, is overviewed. As an example, the performance of the PWB (Printed Wiring Board) layout program is described in detail. In Section 4, an LSI CAD system, which has been developed in the NEC IC Division, is overviewed. As an example of an important LSI CAD program, it is also summarized how circuit analysis programs have been utilized. In Section 5, notable CAD features and activities of Japanese electronics companies are described.

The three NEC Divisions mentioned above use several application programs in common through the remote terminals of an NEC ACOS/700, as shown in Fig. 1. This computer is maintained by the SCC (Scientific Computing Center) located at the Central Research Laboratories. The IC Division uses another NEC ACOS/700, which is maintained by the SCC branch located at the Tamagawa plant. The Computer Engineering Division and the Switching Engineering Division have their own large computers individually.

K. KANI is with the IC Division, Nippon Electric Co., Ltd, Kawasaki, Japan. A. YAMADA is with the Computer Engineering Division, Nippon Electric Co., Ltd, Fuchu, Japan.

M. TERAMOTO is with the Switching Engineering Division, Nippon Electric Co., Ltd, Tokyo, Japan.


Figure 1: Computing environment of the NEC Computer Engineering, Switching Engineering and IC Divisions. Fuchu plant (Fuchu): Computer Engineering Division, NEC ACOS/800 etc., remote terminals. Mita plant (Tokyo): Switching Engineering Division, NEC 2200/500 etc., remote terminals. Tamagawa plant (Kawasaki): IC Division, stand-alone systems, remote terminals (TSS, remote batch, graphic). Central Research Laboratories (Kawasaki): Scientific Computing Center (SCC), NEC ACOS/700 (dual CPU), with an SCC branch at the Tamagawa plant; 2400/9600 bps lines.

2. NEC's CAD SYSTEM FOR COMPUTER

CAD System Configuration & Function

CAD programs for computers were first developed in the late 1950's in Japan to design large transistorized machines. In the 1960's, many computer manufacturers started preparing CAD systems to develop the third generation computers with integrated circuits, and they completed total systems for computer design support. With the advent of large scale integrated circuits (LSIs), powerful and sophisticated CAD capability has become essential to develop high performance machines with LSIs. Therefore, most CAD systems have been enhanced or reorganized to meet the requirements of the new technology.

As an example of the latest CAD systems for computers in Japan, the system of NEC is shown in Fig. 2. This system has been used to develop the NEC ACOS series systems 200 to 900 (roughly corresponding to IBM 370/115 to 3033), minicomputers, office computers and DIPS (Dendenkosha Information Processing System). It has a centralized data base for hardware design support, and many application subsystems are connected to the data base through a data base management subsystem (DBM). The data base consists of a Design Master File (DMF) and a Component Master File (CMF). DMF and CMF have the same file configuration. Individual design data are stored in the DMF and library data for common use, like chip data, are stored in the CMF. The data base has a hierarchical configuration. Each level of the data base corresponds to a physical level, such as chip, LSI package, logic card, backboard,

Figure 2: CAD system for computers. Firmware design and hardware design are supported around a data base management subsystem; the connected subsystems include firmware design support, firmware system generation, a logic simulator, physical design support (LSI package, logic card, back board), a test generator, a logic diagram generator and a post processor, delivering firmware for shipment and digital data for production.

Figure 3. LSI chips and LSI package for NEC ACOS 800 & 900


and unit. The DBM subsystem used here was developed for CAD purposes.

Systems 800 and 900, the largest models of the NEC ACOS series, use low level CML (Current Mode Logic) LSI chips (0.7 ns/gate, 7 picojoule/gate, max. 200 gates/chip). These chips are packaged in a high density LSI package (max. 110 chips/package, max. 3,500 gates/package). Wiring design for the ceramic substrates of the packages is automated almost 100% by an automatic router. Fig. 3 shows a photograph of LSI chips on a film carrier and an LSI package. The firmware design support[32] of the system has a general purpose microprogram assembler and an automatic flowcharter. System generation capability is also supported to get the firmware corresponding to a customer system configuration.

Automatic Test Generation

With the advent of LSIs, the problem of testing logic cards or LSI packages has become increasingly difficult. An efficient solution of this problem requires much effort in both test generation technique and easily testable design. Many test generation systems have been developed in Japan[4]. The following is the latest example of automatic test generation systems, developed by NEC[33]. The system configuration of this automatic test generation system is shown in Fig. 4. It can treat sequential circuits of up to 3,000 gates by using an extended D-algorithm[33]. The main features of this system are as follows:

(1) Easily applicable to both combinational and sequential (synchronous and asynchronous) circuits. Sequential circuits are transformed to an iterative model after feedback loops and flip-flop output connections are cut automatically. Sequential circuits with Scan Path can be treated as combinational circuits.

(2) It can treat various flip-flops and functional elements as primitive elements. The functional elements include Read Only Memory (ROM), Random Access Memory (RAM), and Content Addressable Memory (CAM). Therefore, the system can efficiently generate test patterns for logic circuits including these memory elements.

(3) The test generation concept is based on an extended D-algorithm. Ten logical values are used to represent the state of each element in the circuit for high speed processing: logical "0"; logical "1"; D (logical "1" in a fault-free circuit but logical "0" in a faulty circuit); D-bar (logical "0" in a fault-free circuit but logical "1" in a faulty circuit); don't care (either logical "0" or logical "1"); an unknown state; an unknown state easily set to logical "0"; an unknown state easily set to logical "1"; a positive clock pulse; and a negative clock pulse.

(4) Random number test generation and extended D-algorithm test generation can operate successively. The former is effective in the early stage of fault detection. When the efficiency of fault detection decreases, the generation mode is switched to the latter. The combination of the two generation methods can produce test sequences with high fault coverage in a rather short period of time.
Figure 4: Total configuration of the automatic test generation system: physical and logical design information is processed by a preprocessor, the test generator works together with a fault simulator, and a postprocessor prepares the results.

Table 1: Some automatic test generation results

Circuit   Gates   Flip-flops   Faults   Fault coverage (%)   Test patterns   CPU time (1 MIPS computer)   Note
1          503      ...         1551          96.3                86            8 min 10 sec             64-bit RAM x 4
2          790       21         2148          95.6               150            4 min 50 sec             Scan Path
3          880        9         2588          96.9               121            5 min 27 sec             Scan Path
4          827       20         2119          91.2               103           12 min 38 sec             16-bit CAM x 2
5          994       16         2862          97.4               127           13 min 02 sec
6         1263       24         3622          99.9                87           10 min 26 sec


(5) It can provide test sequences for both the input/output connector pin access mode and the all IC (Integrated Circuit) pin access mode of ATE (Automatic Test Equipment). By using the latter mode, test sequences with high fault coverage can easily be obtained, because all IC pins can be used as test points.

Some application results of this test generator are shown in Table 1. Test generation time for an average 1,000 element circuit is about 5 to 13 minutes on a 1 MIPS computer, the number of test sequences is 100 to 160, and fault coverage is 90% to 100%. Circuits including RAMs or CAMs are processed in reasonable time, as shown in Table 1 (circuits 1 & 4). Sequential circuits with Scan Path can be converted to combinational circuits during test generation and testing, as the flip-flops in a circuit can operate as a shift register with the aid of the Scan Path, and the contents of the flip-flops can be accessed externally. Therefore, the test generation efficiency for these circuits is very high, as shown in the example of circuits 2 and 3 in Table 1. The effectiveness of Scan Path in automatic test generation was evaluated by using this test generation system. The improvement ratio of test generation by using Scan Path is summarized as follows:

Figure 5: Implementation technique of the Scan Path

(a) A D-type master/slave flip-flop with a switch input for the test mode; (b) implementation of the Scan Path.
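A minimal model of the idea behind Fig. 5 is sketched below (assumed structure, not NEC's circuit): each flip-flop gets a switch that selects between its normal data input and the scan input from the preceding flip-flop, so that in test mode all flip-flops form one shift register whose contents can be loaded and observed serially.

    class ScanFlipFlop:
        """D-type flip-flop with an added scan input and a test-mode switch."""
        def __init__(self):
            self.q = 0
        def clock(self, data_in, scan_in, test_mode):
            self.q = scan_in if test_mode else data_in
            return self.q

    def scan_shift(chain, bits):
        """Shift test data serially into the chain, one bit per clock (test mode)."""
        for bit in bits:
            old = [ff.q for ff in chain]            # values before the clock edge
            chain[0].clock(data_in=0, scan_in=bit, test_mode=1)
            for i in range(1, len(chain)):
                chain[i].clock(data_in=0, scan_in=old[i - 1], test_mode=1)

    chain = [ScanFlipFlop() for _ in range(3)]
    scan_shift(chain, [1, 0, 1])                    # load an arbitrary internal state
    assert [ff.q for ff in chain] == [1, 0, 1]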


(1) test generation time is 1/2 - 1/4,
(2) the number of test vectors is 2/3 - 1/3,
(3) fault coverage is the same or better.

The additional logic for the Scan Path configuration is just two pins and a few gates, as shown in Fig. 5. The result is reported in detail in reference [5]. By using the Scan Path technique and a partitioning technique, automatic system level test generation for large computer systems can also be realized. The application result in NEC on large commercial computer systems with 100,000 gates or more is reported in reference [34]. Besides the automatic test generation technique itself, easily testable design considerations such as Scan Path will become more and more important to realize efficient test generation for large digital circuits with LSIs or VLSIs.

3. NEC's CAD SYSTEM FOR ESS

Outline

In 1964, the research and development of large size ESS began at the Electrical Communication Laboratories (ECL) of NTTPC (Nippon Telegraph and Telephone Public Corporation) in cooperation with NEC and three other manufacturers. The first ESS, called the D10 System, began its operation in Tokyo in 1971, and has been widely used. The hardware technologies employed here were TTL and discrete wiring, and the CAD system was designed particularly for these technologies. From 1975 to 1977, the central control of the D10 System was improved by the use of CML (MSI and LSI) and Back Wiring Board (BWB) technologies. A new sophisticated CAD system was developed for these technologies in the same cooperative project described above. Main frames of the D10 (dual CP frames and a memory frame) and its packages are shown in Figs. 6 and 7, respectively. In both cases, the design results from the CAD systems have been transferred to the manufacturers on magnetic tape, whose format is standardized and maintained by committee members from the manufacturers and NTTPC. So NTTPC has an influence on the configuration of the ESS CAD systems in Japan[12].

In 1970, the Switching Engineering Division of NEC began to develop its own CAD system and to integrate it with Computer Aided Manufacturing (CAM) and Computer Aided Testing (CAT) systems. These systems have been used for various types of ESS products. A typical ESS hardware design process using NEC's CAD system is shown in Fig. 8. Because of hardware requirements and engineering changes, the system is physical design oriented and intended to be generic. The main features of each subsystem are as follows.

MDS: A firmware design support subsystem, which consists of an assembler, an automatic flowcharter and a ROM bit editor. These programs are the same as those of the Computer Engineering Division[32].

FDA: A functional simulation program with a high level hardware description language. The simulator is effectively utilized for verification of hardware design and for debugging of microprograms and test programs (TP)[29].

DIMS, PDI, DI: A data base management program (DBM) for the design data base (DIMS), and design data input programs (PDI, DI). The DBM is specially designed for CAD to get better file handling efficiency. IC or package level inputs can be transformed to gate level if necessary.

LSS: A gate level logic simulator for large circuits. It uses a compiling method, unit delay and 2 values.

Figure 6: Mainframes of the D10 ESS.

Figure 7: Packages used in the D10 ESS.

DOC: A schematics (logic diagrams) drawing program for package and frame levels. Both COM and printer outputs are used for documentation.

ALT, FUA: A test generation subsystem for packages. ALT consists of a heuristic algorithm, called "MO-"[2], and a parallel fault simulator. For a package which contains a blackbox LSI such as a microprocessor, only truth value simulation of manually coded test data is performed by the functional simulator, FDA, mentioned above.

APK, IDS, PASS: A printed wiring board design subsystem. The details will be described later.

WDS, APK: A subsystem for designing backpanel wiring. WDS contains such functions as minimum spanning for each net, ordering, coloring, cabling, and twisted pair assignment. Its outputs are various documentations and NC tapes for manufacturing and testing. For BWB routing, the same program as for PWBs, APK, is used.

EC-W: A program for managing engineering changes (EC) of wiring information. It updates the data base so that the EC may be reflected in the schematics correctly, and also generates the specific wiring document if the hardware is under manufacturing.

The above mentioned CAD system consists of about 360,000 source code lines written in a PL/1 subset and assembly languages. Further improvements, such as shortening of turnaround time, advanced interactive capabilities, flexibility for the rapidly changing ESS technologies, etc., are expected.

Figure 8: ESS design process using the CAD system, with outputs to equipment testing, package functional testing, PWB manufacturing, wiring and testing, and BWB artwork and manufacturing.

PWB and BWB Layout Design

For the development of ESS, many new packages have to be designed, so automated layout is important. The automated package layout program, APK, has been continuously improved to cope with the increasing complexity and the technology changes. Its latest version has the following features.

(1) Various types of board can be treated by defining a geometric file and some parameters. The 400 mm x 400 mm board, on which 2 lines go through between adjacent lands, is the current maximum size in practical application.

(2) The main functions are IC placement, routing and design rule checking (APK); digitized input and pattern correction (IDS); and artwork data generation (PASS).

(3) The algorithms for IC placement are the pair linking method for the initial placement, and Steinberg's assignment method and the pairwise interchange method for the iterative improvement.

(4) The routing algorithm is a generalized line search method which can vary its routing characteristics from the original line search method[18] to Lee's method according to the given parameters. The routing program is implemented so that the parameters may be changed during the routing steps in order to obtain a solution economically.

(5) Three types of via (feed-through), i.e., floating via, fixed via, and via whose position is limited by the power and ground planes, are selectable.

(6) For CML circuits, some special functions, such as unicursal (no branching) spanning, placement limited by line length, and terminating resistor assignment, are taken into account.

(7) Because of its generic characteristics, the router is also applied to BWB routing in practice. At this time, package placement on the BWB is performed manually.

Examples of routing results, which are used in the current ESS system, are shown in Table 2. The average density is approximately 1 square inch per IC in this example. When the density rises to 0.7 square inch per IC, the maximum density for the board, the routing rate will be reduced to around 92%.

Table 2: Examples of PWB routing results

Board size                             210 mm x 190 mm (70 ICs)
No. of lines between adjacent lands    2
No. of ICs                             61
Average no. of lines to be connected   395
No. of incomplete lines                best 0, average 9.2, worst 25
Average routing rate                   97.7%
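For readers unfamiliar with Lee's method mentioned under feature (4), the following sketch shows its principle on a small grid (an illustration only; APK's generalized line search is not reproduced here): a breadth-first wave is expanded from the source until the target is reached, and the connection is then traced back.

    from collections import deque

    def lee_route(grid, source, target):
        """grid[y][x] == 1 marks an obstacle; returns a list of cells or None."""
        h, w = len(grid), len(grid[0])
        parent = {source: None}
        frontier = deque([source])
        while frontier:
            cell = frontier.popleft()
            if cell == target:
                path = []
                while cell is not None:          # backtrace along stored parents
                    path.append(cell)
                    cell = parent[cell]
                return path[::-1]
            x, y = cell
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nx < w and 0 <= ny < h and grid[ny][nx] == 0 and (nx, ny) not in parent:
                    parent[(nx, ny)] = cell
                    frontier.append((nx, ny))
        return None                               # no connection possible

    grid = [[0, 0, 0, 0],
            [1, 1, 1, 0],
            [0, 0, 0, 0]]
    print(lee_route(grid, (0, 0), (0, 2)))        # routes around the blocked row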

4. NEC's CAD SYSTEM FOR LSI

LSI Technology


In NEC, the LSI age began in 1970, when desk calculator LSI chips were developed. At that time, the first stage of LSI CAD systems was prepared. Since then, CAD technologies have become much more important as the number of circuit elements per chip increases. At present, 5,000 - 10,000 gate microcomputer and 16,000 bit RAM chips are representative of high volume production LSIs. Also, R&D activities are accelerated by Japan's MITI (Ministry of International Trade and Industry) VLSI project and the NTTPC cooperative project with Fujitsu, Hitachi and NEC. An example of a high speed bipolar 8 bit LSI processor chip, recently developed by ECL and NEC[1], is shown in Fig. 9. This 4.5 mm x 4.5 mm chip contains about 5,000 transistors and 5,000 resistors, which are interconnected with three layers of wiring.

LSI CAD System

The LSI CAD system is composed of the programs shown in Fig. 10. An LSI is designed in the following way. First, the basic blocks (AND gates, flip-flops, registers etc.) are designed manually, checked carefully by a circuit analysis program (COSMOS for MOS, SPICE[19] or NECTAR[11] for bipolar), and stored in the block library. Then the chip layout begins, based on the logic diagram, which is verified by the logic simulator, LOGOS[17]. For high volume production LSIs, the manually designed layout is digitized, checked and modified on the graphic system, Applicon or Calma. For small volume production LSIs, the automatic master slice layout design program, MASTER, can be used. For artwork data verification, the DRC (Design Rule Check) program and the logic verification program, PALMS, have recently been developed and used. But these are not yet economical.

Figure 9: Bipolar LSI processor chip

Figure 10: CAD system for LSI. The flow connects the logic diagram and the block library with circuit analysis (COSMOS, SPICE, NECTAR), logic simulation (LOGOS), test pattern generation (FOCUS, PTS), layout design (MASTER), artwork data editing (Applicon, Calma), artwork check (DRC, PALMS), mask ROM bit patterns with ROM post processing (AROM), and test tape editing (LOGTEG), producing the artwork tape.

Table 3: Main features of NEC's LSI CAD programs

Program   Purpose                Main features
COSMOS    Circuit analysis       Built-in MOS model, nodal, implicit integration
NECTAR    Circuit analysis       Piecewise linear approach, modified tableau
LOGOS     Logic verification     Unit, min, max, rise, fall and wire delay; 3-value
MASTER    Automatic layout       Master slice, two-stage routing
DRC       Artwork check          Min. spacing, min. width, enclosure checks
FOCUS     Fault simulation       Parallel, 4-value, stuck-at faults
AROM      Mask ROM postprocess   Generates artwork data and test tape
LOGTEG    Test tape editing      Generates test tapes from a common file
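The "parallel" technique named for FOCUS can be illustrated as follows (a two-valued toy version; the actual program uses four values): each bit position of a machine word carries one copy of the circuit, bit 0 the fault-free machine and every other bit one injected stuck-at fault, so a single set of bitwise operations simulates the whole group of faults at once.

    WIDTH = 4                                  # fault-free copy + 3 faulted copies
    ALL = (1 << WIDTH) - 1

    def spread(value):                         # replicate a signal into every copy
        return ALL if value else 0

    def inject(word, copy, stuck_at):          # force one copy to the stuck value
        return word | (1 << copy) if stuck_at else word & ~(1 << copy)

    # Circuit: y = not (a and b); simulate the faults a/0, a/1 and b/0 with a = b = 1.
    a = inject(inject(spread(1), 1, 0), 2, 1)  # copy 1: a stuck-at-0, copy 2: a stuck-at-1
    b = inject(spread(1), 3, 0)                # copy 3: b stuck-at-0
    y = ~(a & b) & ALL
    good = y & 1
    detected = [copy for copy in range(1, WIDTH) if ((y >> copy) & 1) != good]
    print(detected)                            # faults whose output differs: [1, 3]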

Figure 11: Increase in the LSI development period (initial cycle: design, wafer process and test; repeating cycle: debug) as the number of components per chip increases.

Figure 12: Total computer run time for LSI circuit analysis in the NEC IC Division (1972 = 1).

Another flow is to prepare the test data. The automatic test pattern generator, PTS, which was developed in the NEC Computer Engineering Division[33], is not sufficient for complex LSIs. Therefore, the simulator, FOCUS, is used to verify the manually designed test sequence. Mask ROM bit patterns can be automatically broken down into the test tape and the artwork data by the AROM program. From the manufacturing standpoint, the artwork data editing system, Applicon or Calma, and the test tape editing program, LOGTEG, are important, because a set of typical artwork data is composed of 100,000 rectangles and a typical test sequence contains 10,000 patterns. The main features of the above mentioned programs are summarized in Table 3.

There is a tendency that, as the number of components per chip increases, the LSI development period becomes longer, as shown in Fig. 11. This is caused mainly by the increase in the number of repetitions of the design / wafer process / test cycle, due to various kinds of errors. Therefore, LSI CAD should aim at reducing this number of repeated cycles in addition to reducing the initial cycle period.

LSI Circuit Analysis

During the late 1960s, IC designers started to use the circuit analysis programs which had been developed in NEC. During the early 1970s, at the beginning of the LSI age, the total computer run time for LSI circuit analysis increased rapidly because of the increase in design data amount and accuracy, as shown in Fig. 12. The program performance has been improved more than 100 times during these ten years. As mentioned above, at present COSMOS, which has an accurate built-in MOS model, is used for MOS LSI design verification. SPICE, which was developed at the


University of California, or NECTAR, developed in the NEC Central Research Laboratories, are used for bipolar LSIs. NECTAR is a unique program which guarantees obtaining the DC solution whenever it exists[22]. A problem is that the maximum circuit size which can be analyzed economically by these programs is limited to about 200 gates. MOTIS[14] seems a good idea for solving this problem if its computational error can be evaluated more theoretically. Statistical analysis and parameter optimization are important, but they are not yet economical in the LSI Division.

5. NOTABLE CAD FEATURES AND ACTIVITIES IN JAPAN

The notable CAD features and activities of Japanese electronics companies are as follows.

Total Features

(a) During the early 1960s, major Japanese electronics companies started to develop their own CAD systems on their own computers. Since then, the CAD systems in each company have been almost entirely in-house-made and have rarely been released to the outside. Thus, CAD program circulation is usually limited in Japan.

(b) During the early 1970s, most of the above companies purchased interactive graphic design systems from the U.S.A., such as Applicon, Calma, and Computervision systems. They have used these systems for error checking and minor changes of PWB or LSI artwork data, and as stand-alone systems with a magnetic tape interface to their large computer CAD systems.

(c) Remote TSS terminals have been popular from around 1975, but remote graphic terminals have not yet become popular. Improvement of a convenient environment for CAD users, including the above, seems to be too slow compared with the U.S.A.

(d) As an exception to the independent CAD systems described in (a), the Electrical Communication Laboratories (ECL) of NTTPC has researched many CAD systems, particularly for ESS, and has cooperated in their development with Fujitsu, Hitachi, NEC and Oki.

(e) As a part of the VLSI project, Japan's MITI supports the development of CAD systems for VLSI from 1976 to 1979.

Activities in Each CAD Technology Field

(f) Long after the two well known circuit analysis programs, i.e., NET-1 and ECAP, were made in the U.S.A. in 1964 to 1965, several Japanese companies started to develop such programs. At present, NTTPC's ECSS[27], Fujitsu's FNAP[13], NEC's NECTAR etc. have been released. Also, ECAP, ASTAP and SPICE, which were made in the U.S.A., have been widely used in the Japanese electronics industry.

(g) In transmission equipment design, analysis and optimization programs for linear circuits, such as filters, equalizers, etc., have been used since 1957. The linear AC optimization algorithm was greatly improved by an iterative Chebyshev approximation method developed in NEC in 1968[7]. Also, linear AC tolerance assignment programs have been used since the early 1970s.

(h) For device simulation, two studies from around 1970 are worth noting. One is a modeling method which reduces semiconductor device analysis to a lumped network analysis[8,23]. The other is a numerical study of the AC characteristics of semiconductor devices[16]. Recently, two dimensional

analyses of bipolar and CMOS devices have been made in ECL[26].


(i) For LSI chip layout design, many efforts have been made, as shown in Table 4. Generally speaking, building block layout programs are not yet practically economical. However, the master slice layout programs are effectively in use. The reason is that the former are required to minimize the chip size, while the chip size is fixed in the latter. Graph theoretical investigations of the layout problem seem to be active in Japan[10]. Artwork data verification methods have recently been studied in many companies[6,35].

Table 4: LSI chip layout programs developed in Japan

Program [Ref.]   Developer          Technology       Main feature
LILAC [15]       Hitachi            Building block   Includes partitioning, assignment and routing
TAPLS [20]       Toshiba            Building block   Convenient for interactive graphical design
ROBIN [9]        NEC                Building block   Minimizes chip area during routing
CAD75 [3]        Hitachi            Master slice     Interface with offline graphic system
[24]             Oki                Master slice     Online graphic display
MARC [31]        NTTPC, F.H.N.O.    Master slice     Two-stage routing, offline interface with graphics
COMPAS [28]      NTTPC, NEC         Multichip LSI    Two-stage routing
BLOOM [36]       NEC                MOS gate array   Determines the ordering of gates

(j) Sharp Co. and the University of Osaka have developed a minicomputer-based PWB layout design system[21]. This system can be used both for iterative automatic placement and wiring and for interactive design. One-layer boards with curved lines, which are usually used in analog systems, can be handled in addition to the two-layer regular boards which are usually used in digital systems.

(k) CAD data bases have two notable features in Japan. First, there are two approaches: one is to develop a special data base designed for CAD, as described in Sections 2 and 3, and the other is to utilize a general purpose data base[25]. Second, most CAD data bases have so far been not logical but physical design oriented.

(l) Recently, functional simulators and register-level simulators have been developed and used in several companies[29]. Keio University has developed a multi-level simulator in which a parallel value simulation technique is taken into account[30].

Concluding Remarks

There is rapid progress in semiconductor integrated circuit technologies, i.e., from LSI to VLSI. Therefore, the LSI CAD technologies must advance further hereafter, especially in the layout and test pattern generation fields. And,


Computer and ESS CAD technologies must be greatly changed in order to avoid the VLSI chip redesign cost and time.

References

[1] Akazawa,Y., H.Kodama, T.Sudo, T.Takahashi, T.Nakamura and K.Kimura: A High Speed 1600 Gate Bipolar LSI, IEEE ISSCC (1978) 208-209.
[2] Arima,T., J.Okuda, G.Amamiya and M.Tsuboya: A Heuristic Test Generation Algorithm for Sequential Circuits, 11th DA Workshop (1974) 169-176.
[3] Chiba,T., R.Kamikawai, K.Kishida, A.Ozawa and I.Yasuda: Placement and Routing Program for Master-slice LSI's, 13th DA Conf. (1976) 245-250.
[4] Funatsu,S., N.Wakatsuki and T.Arima: Test Generation Systems in Japan, 12th DA Conf. (1975) 114-122.
[5] Funatsu,S., N.Wakatsuki and A.Yamada: Designing Digital Circuits with Easily Testable Consideration, Annual Test Conf. (1978).
[6] Igarashi,K., Y.Ikemoto, H.Kano and T.Sugiyama: Correction and Wiring Checking System for Master Slice LSIs, 13th DA Conf. (1976) 336-343.
[7] Ishizaki,Y. and H.Watanabe: An Iterative Chebyshev Approximation Method for Network Design, IEEE Trans. CT, vol. CT-15, no. 4 (1968) 326-336.
[8] Kani,K. and A.Yokota: A Nonlinear Lumped Network Model of Semiconductor Devices with Consideration of Recombination Kinetics, IEEE Trans. ED, vol. ED-19, no. 9 (1972) 1028-1037.
[9] Kani,K., H.Kawanishi and A.Kishimoto: ROBIN: A building block LSI routing program, IEEE ISCAS (1976) 658-661.
[10] Kani,K. and T.Ohtsuki: Graph Theory and Combinatorial Algorithms for Design Automation, Journal of the Information Processing Society of Japan, vol. 16, no. 6 (1975) 526-537 (Japanese).
[11] Kawakita,K. and T.Ohtsuki: NECTAR2 - Circuit Analysis Program based on a Piecewise Linear Approach, IEEE ISCAS (1975) 92.
[12] Kawano,L., H.Fukushima and T.Numata: The Design of Data Base Organization for an Electronic Equipment DA System, 15th DA Conf. (1978) 167-175.
[13] Kojima,T. and K.Watanabe: CARD (Computer Assisted Research and Development) for Electrical Circuits, Fujitsu, vol. 24, no. 7 (1973) 175-189 (Japanese).
[14] Kozak,P., H.K.Gummel and B.R.Chawla: Operational Features of an MOS Timing Simulator, 12th DA Conf. (1975) 95-101.
[15] Kozawa,T., H.Horino, T.Ishiga, J.Sakemi and S.Sato: Advanced LILAC - An Automated Layout Generation System for MOS/LSI, 11th DA Workshop (1974) 26-46.
[16] Kurata,M.: A Small Signal Calculation for One-Dimensional Transistors, IEEE Trans. ED, vol. ED-18, no. 3 (1971) 200-210.
[17] Kurobe,T., S.Nemoto, Y.Shikata and K.Kani: LSI Logic Simulation System LOGOS2, Monograph of the Technical Group on DA of the Information Processing Society of Japan, DA31-2 (1977) (Japanese).
[18] Mikami,K. and K.Tabuchi: A Computer Program for Optimal Routing of Printed Circuit Connectors, IFIPS (1968) H47-50.
[19] Nagel,L.W. and D.O.Pederson: SPICE, simulation programs with integrated circuits analysis, Memorandum No. ERL-M382, Univ. of California (1973).
[20] Nakada,Y., K.Yoshida, S.Kawakami and M.Koike: High Packing Density LSI Layout System with Interactive Facilities, IEEE ISSCC (1974) 46.
[21] Nishioka,I., T.Kurimoto, H.Nishida, I.Shirakawa and H.Ozaki: A Minicomputerized Automatic Layout System for Two-Layer Printed Wiring Boards, 14th DA Conf. (1977) 1-11.
[22] Ohtsuki,T., T.Fujisawa and F.Kumagai: Existence theorems and a solution algorithm for piecewise linear resistor networks, SIAM J. Math. Anal., vol. 8 (1977) 69.
[23] Ohtsuki,T. and K.Kani: A Unified Modeling Scheme for Semiconductor Devices with Applications of State-Variable Analysis, IEEE Trans. CT, vol. CT-17, no. 1 (1970) 26-32.
[24] Ozawa,Y., M.Murakami and K.Suzuki: Master Slice LSI Computer Aided Design System, 11th DA Workshop (1974) 19-25.
[25] Soga,M., C.Tanaka, K.Tabuchi, K.Seo, M.Kunioka and H.Tsuji: Engineering Data Management System (EDMS) for Computer Aided Design of Digital Computers, 11th DA Conf. (1974) 372-379.
[26] Sudo,T.: Analysis of LSI Devices and Their Models, Journal of IECE of Japan, vol. 61, no. 7 (1978) 714-724 (Japanese).
[27] Sugimori,M.: DEMOS E Circuit Analysis Program ECSS, National Conf. of IECE of Japan (1974) 1784 (Japanese).
[28] Sugiyama,Y., K.Ueda, K.Kani and M.Teramoto: Routing Program for Multichip LSIs, USA-Japan DA Symposium (1975) 87-94.
[29] Teramoto,M., M.Tsuboya and N.Koganemaru: RTL Simulator for Modular Design, National Conf. of the Information Processing Society of Japan (1978) 607-608 (Japanese).
[30] Tokoro,M., M.Sato, M.Ishigami, E.Tamura, T.Ishimatsu and H.Ohara: A Module Level Simulation Technique for Systems Composed of LSI's and MSI's, 15th DA Conf. (1978) 418-427.
[31] Ueda,K. and Y.Sugiyama: LSI Layout and Wiring System MARC, National Conf. of IECE of Japan (1976) 421 (Japanese).
[32] Yamada,A., A.Kawaguchi, K.Takahashi and S.Kato: Microprogramming Design Support System, 11th DA Workshop (1974) 137-142.
[33] Yamada,A., N.Wakatsuki, H.Shibano, O.Itoh, K.Tomita and S.Funatsu: Automatic Test Generation for Large Digital Circuits, 14th DA Conf. (1977) 78-83.
[34] Yamada,A., N.Wakatsuki, T.Fukui and S.Funatsu: Automatic System Level Test Generation and Fault Location for Large Digital Systems, 15th DA Conf. (1978) 347-352.
[35] Yoshida,K., T.Mitsuhashi, Y.Nakada, T.Chiba, H.Ogita and S.Nakatsuka: A Layout Checking System for Large Scale Integrated Circuits, 14th DA Conf. (1977) 322-330.
[36] Yoshizawa,H., H.Kawanishi and K.Kani: A Heuristic Procedure for Ordering MOS Arrays, 12th DA Conf. (1975) 384-393.

TECHNICAL SESSION III

Chairman: G. FREEMAN, CAD Centre, United Kingdom


ASPECTS OF A LARGE, INTEGRATED CAD SYSTEM

Fred Hembrough, Manager, CAD/CAM Software Department
Richard Pabich, CAD/CAM Program Manager
Raytheon Company, Bedford, Massachusetts, USA

Much emphasis has been placed on the development of sophisticated algorithms and software tools to support the design automation process. As CAD progresses from the realm of a research tool towards integration into the design, development and production activities, a number of aspects, both technical and non-technical, must be addressed in order to fully realize CAD's enormous potential. While this paper will address the technical aspects of a large CAD system, it will also address a number of points related to the introduction of an integrated CAD system into a large multifaceted electronics company. These will include:

- Interface with the user.
- Acceptance of CAD; training of system users.
- System evolution.
- Responsiveness to changing technologies and user requirements.
- CAD application support.
- How does CAD fit into the hardware design cycle?
- CAD software development procedures.
- Development of design goals, documentation requirements.

The paper will examine these topics as related to the introduction and evolution of the Raytheon CAD system, with particular emphasis on the support of a growing, diverse user community.

INTRODUCTION

The development, installation, and acceptance of a computer aids to design (CAD) system must be viewed as both a technical and a managerial challenge. A technical challenge in that the system must be responsive to emerging hardware technologies and analytical techniques; a managerial challenge in that acceptance of automation depends not only on the provision of responsive technical capabilities but also on the education of potential users at both the managerial and designer levels. It is the intent here to describe those aspects which, based on the authors' experience, are critical to the successful deployment of a computer aids to design system. More specifically:

- Technical Capabilities
- Applications Support
- Software Engineering
- System Support

TECHNICAL CAPABILITIES

A large scale CAD system can be viewed as providing automation support throughout the hardware design cycle. An integrated system provides not only automation support but also the means to pass data from one design phase to the next with minimal manual intervention. For example, logic interconnection data, verified by logic simulation, can serve as input to the product design phase, thus reducing processing and throughput time. An idealized view of the hardware design cycle, together with some common automation support capabilities, is depicted in Figure I. The character of the automation support is dependent on the nature of the product to be designed and the processes used for fabrication; for instance, printed circuit boards versus wire-wrap boards, custom LSI design versus commercial ICs, high speed versus low speed logic, etc.

Figure I. An idealized view of the hardware design cycle with common automation support capabilities:
  System/subsystem design: system requirements, system simulation, functional simulation, reliability analysis.
  Module design (electrical): gate level simulation, circuit analysis, microwave analysis, logic diagramming.
  Product design: placement and routing, mask making, N/C machine tool control, fabrication documentation.
  Fabrication.
  Test and evaluation: automatic test generation, automatic test evaluation, automatic fault isolation data generation, test translation, continuity testing.
  Release to production.
SYSTEM TAILORED TO NEEDS

The key point here is that automation support requirements are dependent on the nature of the product to be designed and developed and on the supporting facilities and processes. So, important first steps in the "tailoring" of a CAD system are:

- The development of a clear understanding of the design process flow, with emphasis on the data interfaces between supporting organizations such as engineering, drafting, testing, or document control.
- The development of a clear understanding of the present and projected product mix, with emphasis on the technical aspects of the automation support requirements.

These steps are critical to the long-term success of a CAD system. However, once a plan is formulated, the introduction of automation into the design cycle can be a gradual process so that individuals and organizations can adjust to changes

in their way of doing business. As users are trained and as acceptance is won, the system can be expanded.

A description of Raytheon's introduction to CAD illustrates this point. Automation tools, which could show early cost reductions or which could clearly increase productivity, were introduced first. As the users became more sophisticated, and more demanding, the system capabilities were expanded and the concepts of integration and data management were introduced. CAD AT RAYTHEON Raytheon became involved in CAD in 1964 when it obtained the Electronic Circuit Analysis Program (ECAP), a rudimentary analog circuit simulator. This program could perform only DC, AC and piecewise nonlinear transient analysis. In the late 60's ECAP was augmented by a number of circuit analysis programs such as SCEPTRE, CIRCUS and , and in 1974 a truly sophisticated analog simulator, AEDCAP, became available. With one common circuit description, an analog circuit designer could perform both nonlinear DC and transient analyses, small signal linear AC analysis, and sensitivity analysis. Since 1974, considerable effort has been applied by Raytheon to increase the capabilities of AEDCAP. By 1976, Fourier Analysis as well as Monte Carlo Analysis, Functions of a Complex Variable and a technique for automatic generation of model parameters for nonlinear devices had been added to the system. In 1977, the program, now called Raytheon Circuit Analysis Program (RAYCAP), capabilities were expanded to include a Root Sum Square (RSS) analysis, initial condition generation for capacitors and inductors and a worst case analysis. Digital designers obtained their first verification tool in 1968 when the LogicMachine Aids to Design (LMAD) program became available. LMAD, at that stage, was a simple, two-state gate level simulator. By 1971, LMAD was a more powerful fourstate simulator, i.e., logic 0, logic 1, undefined and high impedance. During this period, the concepts of data bases and data base management began to emerge. The data in the LMAD system became the control point and data source for other programs such as an automatic test generation system for combinational logic and a computer aided routing system. In 1973, LMAD was integrated with a test grading capability and renamed the Grader/Simulator System (GRASS). This system provided extensive simulation and test generation capability. By 1977 such capabilities as Load Analysis, Race Analysis and Worst Case Timing had been added to GRASS. Also, in 1973, a tool to perform digital verification at the architectural level was introduced. It was called CDL or the Computer Design Language. From 1973 to 1977 efforts were extended to refine both CDL and GRASS in both technical capabilities and execution. In 1975, it became possible by the CDL to GRASS interface to use data verified at the functional level to verify gate level designs. The most significant occurrence in this time period was the introduction in 1976 of the TOTAL data base management system. TOTAL provided the repository for all data and transactions in the CAD system. It is the one capability that transformed a group of CAD programs into the integrated RAYCAD system. The process of taking a verified or released design and producing all the artwork and documentation to fabricate that design lends itself extremely well to the automation process. Raytheon first began applying automation techniques to product design in 1968 with the development of a system that took digitized data and transformed it into artwork and documentation. 
In 1968, the task was initiated to automate, as totally as possible, the process of developing the data that is input to artwork generators. In 1970, an interactive system for microelectronics mask making that placed and interconnected array structures was introduced. Systems for automatically placing and routing microelectronic devices were made available. The technology and techniques developed for microelectronic work were applied in 1975 to printed circuit board (PCB) work. That was the year that Raytheon's first stand-alone PCB routing system became available. In this time frame, it became obvious


that two types of systems would be necessary to satisfy the different types of technologies being employed in the company: a batch routing system for large volume and multi-layer PCB design efforts, and interactive systems for small volume and two-layer PCB work. To this end the first turnkey interactive graphics system for two-layer work was purchased in 1972. Since that time, as the volume of new digital designs increased, a much more sophisticated interactive routing system became necessary. It was then, in 1975, that a REDAC system was purchased. In contrast, the high volume and multi-layer PCB design problem lends itself more to a different type of implementation: a batch routing system. The technologies and techniques used for microelectronics and two-layer PCB work at MSD were melded in 1976 into the multi-layer Computer-Aided Routing of Interconnections (CARI) System. In 1977, CARI was expanded to encompass high production two-layer PCB work and, of great significance, was integrated with the data base management system, and was enhanced to include interactive completion of routing solutions. REDAC, like CARI, was also integrated with the data base management system. Therefore, the two primary Raytheon circuit board routing systems can now automatically accept design data that has been completely verified through simulation and stored in the data base.

Automatic test generation techniques were first introduced to Raytheon in 1971 with the release of the Automatic Functional Test and Evaluation (AFTER) system. AFTER, making use of Roth's algorithm, automatically produced an optimal functional test for any digital design that was combinational in nature, i.e., not a sequential device. However, as digital designs were becoming more complex, most became sequential in nature. As such, AFTER could not be used. A method of dealing with sequential devices had to be devised. In 1972, a test grader system was developed to evaluate the completeness of manually prepared tests for sequential circuits. This was the same test grader that was merged with LMAD to form the GRASS system in 1973. As with the design verification and product design system, the test and evaluation system was integrated with the data base management system in 1976.

As the user base expanded, and as hardware designs using standard integrated circuit types became common, the concept of an integrated circuit (IC) library emerged. As the RAYCAD system evolved, the library concept has been expanded to include not only IC logic information, but also electrical and physical characteristics for both ICs and more complex hardware components.

Thus, today, Raytheon's CAD system (RAYCAD) provides automation tools, integrated around a data base management system. The system is structured such that, to the greatest extent possible, data is automatically passed from one phase of the design cycle to the next. The beneficial effects of CAD during the design process have been experienced and the system is widely used by Design Engineers, Draftsmen, Test Engineers, and Technicians. A key to the acceptance of the RAYCAD system lies in the attitudes of the hardware project management personnel. These individuals, who ultimately are accountable for the success or failure of a task, have been made aware of, or have experienced, the benefits of CAD and now consider CAD as an integral part of the hardware design cycle. Here the concept of Applications Support is introduced.
Because each project has different schedule requirements and technical goals, it is important that RAYCAD personnel and project management meet to develop a detailed CAD supported project plan.


The process described above is critically important to the acceptance and efficient utilization of an integrated CAD system. If CAD's role in a given project is not defined early, then inefficient use of CAD by project personnel will, at best, limit its effectiveness and, at worst, have an adverse impact on project schedules and on system design.

SOFTWARE ENGINEERING

In order to develop automation tools which are responsive to user needs, well defined software development procedures must be established. In an environment where the measure of success is not only technical responsiveness, but also the degree of user acceptance, the emphasis must be placed on the development of "useable" software. Useable software can be characterized as being:
Technically Responsive
Well Tested
Well Documented
User Oriented
In addition, with the introduction of Modular Design and Defined Programming Practices, the concept of "maintainable" software can be introduced. Software developed with these attributes in mind not only will gain wider user acceptance, but will also reduce the software support requirement. More specifically:
Well written user documentation reduces ambiguities in using the software and provides clear direction in dealing with common failure modes.
Well tested software reduces the occurrence of unexpected failures due to unanticipated operating modes.
User oriented software provides a user interface which employs familiar notation and supplies clear user directives and output formats.
Software enhancements can be implemented with minimal impact if modularity and expandability have been considered.
Well defined programming practices and complete software documentation minimize familiarization time for new personnel.

RAYTHEON'S APPROACH

With these points in mind, we can now introduce the specifics of the RAYCAD software development cycle. Figure III presents a block diagram of this cycle. The relationships of critical activities such as:
Requirements Definition
Software Design
Implementation
Testing
Design Review
Documentation


are depicted. The specifics of each activity will be described below, but, first, some general comments can be made. Formal user approval is required before software design can begin and before the User Manual can be released. This ensures that the software to be developed is responsive to predefined user needs and, also, that the user documentation is sufficient to allow efficient use of the software. The proposed software design is reviewed periodically by project personnel to ensure that the design guidelines are being followed, that the design is responsive to the user requirements, and that design flexibility has been considered. Testing requirements and results are reviewed by project personnel to ascertain the completeness of the testing approach, to review the test results, and to approve for release to the user.

[Figure III: The RAYCAD software development cycle - user requirements definition, Technical Specification generation, Functional Specification generation, development (implementation and testing), and release to the user, with project management review and approval, user review and approval, user-requested changes, testing requirements and results, and Maintenance Manual and User Manual generation.]

The means by which user requirements are developed will be discussed in a later section; however, it is appropriate to describe here how a general user requirement for a CAD software capability is translated into a detailed specification. The Technical Specification, which is generated by RAYCAD personnel, is the end result of an interactive process between user and implementer to formally define software characteristics. In final form, the Technical Specification will contain:
Scope of Software Capability
Major System Interface Requirements
Operating Environment
Preliminary User Interface Requirements
Preliminary Testing Requirements



Detailed design can begin only after the document has been formally accepted (signed) by the user.


Once the Technical Specification is approved, work can begin on generation of the Functional Specification, which will present a detailed description of the software design. More specifically:
Scope of Software Capability
Detailed Software Interface Design
Detailed User Interface Design
Detailed Testing Approach
Detailed Organization
Formal reviews are conducted periodically to evaluate key aspects of the design and testing approach. When the document is completed and approved, the implementation, testing, and documentation activities can begin. The Functional Specification serves as the baseline for the Maintenance Manual which is generated at the completion of the development phase. In order to fully describe the scope of the RAYCAD activities, some additional topics, such as user requirements definition, user training, and other support functions, will be addressed in the next section.

SYSTEM SUPPORT

System support is defined as a long-term commitment of resources. This activity, which must continue independent of any new technical development activities, is made up of the following major elements:
User Training
Libraries Maintenance
User Feedback
System Maintenance
In the past, long-term support of software systems has been viewed as strictly a software maintenance function. That is, trained personnel were made available to respond to user problems and to implement minor software changes where necessary. Clearly, the software maintenance activity makes up a significant portion of a CAD system support task. The system maintenance and user feedback functions can be characterized as those activities which ensure that the released automation software is responsive to user needs. The feedback cycle must provide to the user a means of formally reporting software bugs or any other problems or requests which could enhance system usefulness. The RAYCAD feedback cycle is used not only to report bugs, but also to request limited scope changes to the software. Each formal feedback document is reviewed regularly by a joint project management/user representative committee and the requests are prioritized in accordance with a predefined process; for example, a system bug takes precedence over a system enhancement. If a user feedback request involves a larger scale system enhancement, consideration of the task is deferred until the yearly planning cycle is undertaken, at which time the scope of the full year CAD effort is formalized. Of equal importance is the long-term commitment to the user to provide a CAD capability which is responsive to state-of-the-art technologies. To be more specific, given a CAD capability, then, for each project application, the problems of user training and library requirements must be addressed. The level of CAD proficiency of selected project personnel must be determined


and an appropriate training activity scheduled. The library update requirement will depend on the number of elements (logic models, mechanical models, etc.) to be added to the established CAD libraries. As the scope of the CAD applications activity increases (i.e., more projects), additional manpower resources must be dedicated to the support task.

The training approach taken for the RAYCAD system is to provide a number of in-depth courses which correspond to major system elements. For example:
Gate Level Simulation
Circuit Analysis
Functional Simulation
Automated Drafting
These courses require up to 30 hours of classroom instruction, as well as homework assignments. Where possible, the material is tailored to reflect applications familiar to those being trained. In this way, the training process can be more easily extended to the actual design environment. The courses are presented periodically with special sessions scheduled on a demand basis.

The RAYCAD libraries support function provides a comprehensive set of common library models to support the design verification and product design activities. The models have been thoroughly verified and documented by RAYCAD library support personnel. When additional models are required for a new project, a formal request is presented to the support group and a mutually agreeable delivery schedule is developed. Because of the increasing complexity of individual library devices, it is imperative that library requirements are presented early in order to allow sufficient time to develop the models.

SUMMARY

This paper has touched on several aspects of a complex subject. The intent here is not to provide all the answers but to enumerate those activities that are critical to the success of a large scale CAD project. A long-term commitment of manpower and resources is mandatory; the benefits of automation must be made clear to upper management; the character of the product development cycle must be clearly understood; the dynamics of the technological marketplace must be monitored; the development of user oriented software must be a primary goal; and the support of the user throughout the design cycle is mandatory. In conclusion, if the needs of the CAD users have been satisfied and if upper management is aware of the benefits of CAD, then the essence of a successful CAD system has been established.



APPLICATIONS SUPPORT


This activity melds the hardware design cycle with the CAD system capabilities to form an integrated, CAD supported hardware design cycle. This phase is critical to the successful utilization of CAD in that CAD supported project goals and projected milestones are defined. At this point, individuals who are well versed in CAD applications meet with project personnel to discuss design requirements and to develop a plan for the efficient use of CAD resources. Figure II presents a graphical representation of key events during the hardware development cycle, with CAD support taking the following form: During the project planning phase, the scope of the RAYCAD effort is defined, its impact on schedule and cost is assessed, and other requirements, such as user training and library updates, are detailed. During subsequent phases, the user training and library update tasks are initiated and RAYCAD personnel are designated to support CAD related activities. As the design cycle progresses, the design file serves as a means of passing data from one phase to another and is updated as more detailed design information becomes available. Design data file access is limited to specified users in order to ensure its integrity. At the end of the design cycle, the file contains electrical and physical design characteristics, fabrication aids such as PCB drill tapes, PROM burn-in data, deliverable documentation formats, and test data for translation to one or more automatic test systems.

[Figure II: Key events in the CAD-supported hardware development cycle - project planning, design, fabrication, and test and evaluation - with RAYCAD support to project management and to users, and outputs such as design documentation and fabrication aids.]


LARGE SCALE CAD USER EXPERIENCE

F. Klaschka
Data Processing Systems Division
Siemens AG
Munich, Germany

An integrated system for computer aided development and production of digital electronic circuits and systems is presented from the point of view of a user. By the application of specialized methods, machines and computer programs it is possible to considerably speed up the product design cycle. Simultaneously, development risks are reduced and progress becomes more transparent.

Introduction

Siemens Data Processing Systems Division has developed and put into operation an integrated system, which we call "Arbeitssystem PRIMUS", for the support of the development and production of digital hardware products. By an "Arbeitssystem" we mean all procedures, resources and organizational measures which are necessary to coordinate a product development project from the first idea to the finished product. The procedures include design and test methods as well as procedures for manufacturing and quality control. The resources consist of production investment, data processing investment and material. Suitable organisational measures are a precondition for the successful application of the procedures: for example, personnel training, application rules and the definition of areas of responsibility and authority. The result of these considerations is a work flow in which computer aided or automated steps alternate with manual steps. Special emphasis is placed on consistency, completeness and freedom from formal errors of the transitional phases between these steps, where naturally the manual stages are particularly critical. Therefore extensive checks of the input data are always carried out at the beginning of computer-supported steps.

Outline of the "PRIMUS System" (Fig. 1)

The heart of our system is a central data base. The programs of the RUE system are grouped around this data base; they extract data from and deposit data into it. The system RUE is used in dialog mode to support the functional design. All of the design data are entered and checked for formal errors. Functional changes are carried out with the help of a change language. The simulator TEGAS is used for the verification of the functional correctness and timing behaviour of the logical networks. Data which are completely free of formal errors and partially free from semantic errors are provided for the program system PENTA. After an interactively performed placement of the components on their boards, the conductor paths for the printed circuit boards are generated automatically. An analysis of the electrical behaviour


of the complete networks concludes this phase. The D-LASAR system is used for the generation of test data. For this purpose the current descriptions of the networks are extracted from the data base. The post-processor PERFEKT generates manufacturing documentation and data media for the control of NC-machines. Change data for printed circuit boards and back panels which have already been produced are generated by the post-processor AEDIFF.

[Figure 1: Outline of the PRIMUS system - from idea through realization to products, with status reports to management.]

Phases of the Development Process (Fig. 2)

The whole process is broken down into four phases:
. functional design
. physical design
. physical realization and test
. test data determination


[Figure 2: The PRIMUS work flow - logic and technology design data are entered and changed in dialog, checked formally (PRUBAL-PRUPFE), and held in the central data base and master file; TEGAS performs logic simulation; PENTA performs placement, routing and analysis; D-LASAR generates test patterns for the PALOG plug-in tester; AEDIFF produces change instructions and data for change equipment for printed circuit boards, plug-ins and back panels; PERFEKT produces documentation and input data for NC machines (plotting, testing, drilling, wiring).]


Phase 1: Functional Design

The system level requirements for a data processing system are determined by general properties, like performance, application fields and compatibility. The characteristics of the system, which we call system architecture, are described on this level in categories like processing capacity, throughput, transfer rates and interrupt behaviour as well as type, number and capacity of background storages and peripheral devices. The planning of the register level follows the determination of the system architecture. It covers, above all, the data registers, data paths and the connections existing between them. The transition of the design from the register level to the gate or component level leads to a considerable expansion of the amount of data. Computer aid in documenting, checking and updating these detailed planning data becomes imperative.

When a development engineer has reached the gate, or component, level in the design process, his logical network will be entered in a data base of the RUE system. All network data are described in the input language LOGOL and entered in dialog mode on a display terminal. In the case of larger complexes the logic diagrams are entered with the help of a digitizer. These data can also include assignments and physical realization constraints, e.g. component locations or maximum signal delays. In connection with the data of the central library, the RUE system supports the verification of the design with a great number of plausibility and completeness checks. The library contains descriptions of:
. the packaging technology
. electrical signal transmission characteristics
. integrated and discrete components with their functional, geometrical and electrical characteristics
. standards for documentation which is to be produced automatically.

The logic networks are verified for functional correctness with the help of the simulation system TEGAS. The technique of design verification by simulation on the gate level has reached a high standard. Thanks to the performance of the computers and simulation programs now available, it is possible to consider not only typical gate delay times, but maximum and minimum ones as well. Timing errors like spikes and hazards of the networks are thus recognized. The logic networks stored in the data base can be plotted in the form of logic diagrams at any time. Logic changes and corrections of planning errors are described in the AESPRA language and carried out in dialog mode with the RUE system. This involves comprehensive plausibility checks and, if necessary, the generation of a number of lists which provide the complete current status of all networks.
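As a rough illustration of why simulating with both minimum and maximum gate delays exposes spikes and hazards, the sketch below computes the interval after an input edge during which the output of a gate fed by two reconvergent paths cannot be guaranteed stable. The network and delay figures are invented for illustration; TEGAS itself is, of course, a full event-driven simulator and works quite differently.

    # Sketch: min/max delay reasoning on two reconvergent paths feeding one gate.
    # If the two gate inputs can arrive at different times, the output may glitch
    # (spike/hazard) inside the window reported. All delay values are illustrative.

    def instability_window(path_a, path_b):
        """Each path is (min_delay, max_delay) in ns from the switching input to the gate."""
        earliest = min(path_a[0], path_b[0])  # first moment either gate input can change
        latest = max(path_a[1], path_b[1])    # last moment either gate input can change
        return earliest, latest

    direct_path = (4, 7)       # e.g. a single buffer
    inverting_path = (9, 15)   # e.g. three gates including an inverter

    start, end = instability_window(direct_path, inverting_path)
    print("Output guaranteed stable only after t = %d ns;" % end)
    print("a spike may appear anywhere between t = %d ns and t = %d ns." % (start, end))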


Phase 2: Physical Design

When the logic networks in the data base have attained the level of maturity necessary to start layout of PC boards and back panels, PENTA processing is started. The placement of components on the PC boards is carried out interactively. The developer's assignments with regard to location and distances between components are taken into consideration. The result is presented in a computer-generated assembly plan, which shows the locations of all components on the PC board. The conductor path layout for PC boards and back panels is done automatically, with allowance being made for electrical and geometrical boundary conditions, in so far as these are determined by packaging and circuit technology (conductor width, connections and paths, number of signal layers, etc.). If occasionally the available wiring capacity is insufficient for the realization of all signal connections as printed conductors, then the remaining connections are generated with discrete wires. With the help of appropriate programs, the PENTA program system analyzes the electrical behaviour of the physical networks, and reports in the form of fault lists any violations of the logic circuit design rules. All faults can then be eliminated by means of manual interventions using modification instructions.

Phase 3: Physical Realization and Test

The physical design is concluded with the generation of a PENTA master file. This master file is transferred via a conversion routine into the central data base of the RUE system so that the data base now contains all physical data, in addition to the functional data. Using this comprehensive data base, the manufacturing documents and data media for the control of NC machines are produced. The PC boards and back panels for the prototypes are produced step by step on a series of automatic and semi-automatic NC machines. Magnetic tapes control electronic and mechanical plotters which draw logic diagrams and assembly plans for PC boards, as well as check plots of board layout and photomasters for printed boards. Paper tapes and magnetic tapes are used as control media for multispindle drilling machines, automatic insertion equipment, wiring machines and automatic testers. After the prototype is built, it is submitted to an extensive functional test in the development and test center. The design errors determined with the support of appropriate test and system software are not manually corrected. Using the RUE system, all necessary corrective measures on PC boards and back panels are carried out with computer support. Each functional change as it occurs during prototype testing is formulated by the designer in AESPRA and entered into the data base. Subsequently, the postprocessor AEDIFF produces automatic change instructions and data media for the control of change equipment. The change data are generated by comparison of the current data base contents with the previous valid state. The change data are used in the development and test center to update PC boards in full compliance with established production engineering standards. By proceeding in this way, the prototype is updated not only functionally, but also physically, so that, at the end of its test, it represents in fact the first production model.
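The principle of deriving change data by comparing the current data base contents with the previously valid state, as AEDIFF does, can be sketched as a simple set difference over the wiring lists. The connection names and the CUT/WIRE output format below are invented for illustration and are not AEDIFF's actual data formats.

    # Sketch: derive change instructions for an already-built board by comparing
    # the previously valid wiring list with the current one. Names are illustrative.

    previous_wiring = {("CLK",   "U1.3", "U7.11"),
                       ("RESET", "U2.1", "U7.12"),
                       ("D0",    "U3.4", "U5.2")}
    current_wiring  = {("CLK",   "U1.3", "U7.11"),
                       ("RESET", "U2.1", "U9.5"),
                       ("D0",    "U3.4", "U5.2")}

    wires_to_delete = previous_wiring - current_wiring   # present before, gone now
    wires_to_add    = current_wiring - previous_wiring   # new in the current design

    for net, a, b in sorted(wires_to_delete):
        print("CUT  %-6s %s - %s" % (net, a, b))
    for net, a, b in sorted(wires_to_add):
        print("WIRE %-6s %s - %s" % (net, a, b))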


Phase 4: Test Data Determination

The RUE system provides the network description of the test specimen directly from the data base. If a functional simulation has already been performed for the unit concerned, the bit patterns from the TEGAS result file can be converted and prepared for use by automatic testers. If the test quality thus obtained is insufficient, further bit patterns can be generated using the D-LASAR system. If neither simulation bit patterns nor hand written test programs are available for a particular device, the test patterns will be completely generated using D-LASAR. Testing and debugging is carried out on the automatic tester PALOG.

Experience with the PRIMUS System

The PRIMUS system with the capabilities described has been in use in the Data Processing Systems Division since 1976. The system was first applied for the development and production of the central unit models 7.722 and 7.760. Next, the development of the new compact computer models 7.708 and 7.718 was supported. PRIMUS is also used for the development of laser printers. In the Telecommunications Group the program systems RUE and PENTA are used for the development of switching processors. Further application areas will follow. Our experience with the "Arbeitssystem PRIMUS" has been highly positive. The expenditure for the development and use of this system is more than justified by a considerable rationalisation of the entire development and production process, in particular by the substantial reduction of development time. Furthermore, development risks are reduced and progress is made transparent to management. When we compare current experience with the time required for development of the first central units of the 7.000 series, we find the following relationship (Fig. 3): The time required for functional design cannot be significantly reduced. Progress can be seen from the fact that, in spite of continually increasing demands for performance and capacity, the design data at the end of this phase have reached a high quality standard. This has a decisive influence on the following stages, physical design and prototype manufacture. The time required for these is reduced by about 50%. Due to the methodical and consistent procedures used, the time from start of prototype test to delivery of the first production machines is shortened by about 30%. Altogether, we reckon with a reduction of the product design cycle for the development of complex digital hardware systems of up to 30%. A prerequisite for this success is the intensive training of all personnel and the strict observance of procedure rules.

[Fig. 3: Relative times for functional design, physical design, prototype test and first production model, showing an overall reduction of the product design cycle of about 30%.]


COMPUTER AIDED DESIGN OF DIGITAL COMPUTER SYSTEMS

Dr. Luther C. Abel
Digital Equipment Corp.
Maynard, Massachusetts, USA

ABSTRACT

This presentation is divided into two parts. The first is an overview of state-of-the-art CAD tools used in designing and manufacturing contemporary computer systems, as typified by those at Digital Equipment Corporation. The second part is a discussion of the managerial and business-oriented impact of CAD - benefits experienced, barriers which must be overcome, etc.

We trace the evolution of a hypothetical CPU design from concept to production. RTL simulators derived from an ISPS description of the machine are first used to explore the correctness and performance of the proposed design; later they are used for microcode development. Logic designs are captured using an interactive schematic drafting system, SUDS. Data from these are used as input to a high-precision logic simulator, SAGE, and later to the physical design (interconnection layout) process. PC design is performed using the IDEA system, which combines interactive design editing with a full complement of automatic algorithms, running on a tightly coupled mainframe host/graphics satellite configuration. Interface data from several boards are combined to initiate a backplane design (printed or wirewrap) which is also done on this system. If custom LSI chips are required, the same logic design path is used. Data from it are input to one of several interactive chip layout systems depending on the technology to be used. PC and backplane data are passed via a common Product Description File to a central CAM group. Here, manufacturing process dependent features are added to the design and it is post processed into soft tools, such as artwork, and N/C machine tapes. Design quality is ensured by several feedback paths, such as a comparison between final artwork and original logic design data.

Difficulties experienced in emplacement of CAD tools such as these are numerous. Besides initial cost, difficulty in measuring benefit, project risk, reluctant acceptance by users, and lack of flexibility to meet new technologies are all barriers. The traditional benefit claimed for CAD - reduced design cost - is often the least significant. Others which must be examined to assess the overall corporate impact of CAD include faster design turnaround, greater design accuracy, and need for CAD tools simply to deal with design precision or complexity.

INTRODUCTION

The needs for computer aids to the engineering, design, and manufacturing processes are nowhere as imperative as in the computer industry itself. Digital systems are today of such complexity that it is both intellectually and economically infeasible to design them without computer augmentation of human skills. Our goal is to present an overview of a typical set of tools, and then to discuss the managerial and business aspects of CAD.

DESIGN TOOLS

The evolution of Computer-Aided Design tools at Digital has followed the traditional "bottom-up" cycle: CAD tools were first introduced at the final stages of the engineering design process and at the engineering/manufacturing interface. Driven by both new advances in CAD technology and demands of new design technologies, CAD tools have been made available earlier and earlier in the design cycle. Today they are available to the engineer to assist him with everything from his earliest conceptual explorations to final release of a completed design to volume manufacturing.

Architectural and Logic Design

We follow a hypothetical new computer through its design cycle. The machine's architecture is expressed in ISPS (3). This description is automatically processed into RTL simulators which are used to explore the correctness and performance of the proposed design. When the computer's architects are satisfied, this ISPS description is frozen, giving a precise functional description of the processor via its simulator. Machine development now splits into two independent paths which will not rejoin for many months. Microware (firmware) engineers write microcode for the machine; via this functional simulator they verify that their microcode, acting on the machine described, will produce the desired external (user perceived) characteristics. Meanwhile, logic engineers produce a gate-level logic design for the machine. Their description is captured via an interactive schematics drafting system, SUDS (8). SUDS offers not only improved drafting productivity, but simplifies later changes to logic (if necessary) and captures electrical interconnection information in a computer data base which is input to later steps in the design process, ensuring accuracy without the possibility of errors introduced by hand transcription. Logic design data from SUDS is used to drive a high-accuracy logic simulator, SAGE (11). The engineer explores the performance of his design and is made aware of not only outright mistakes in his logic design, but marginal conditions and possible timing errors that would be difficult to detect using a hardware breadboard. Changes and fixes are easily incorporated. Final accuracy of the logic design is ensured by driving both this gate-level simulator and the earlier functional simulator with identical input sequences and comparing their outputs.

Physical Design

Interconnection and component description lists from SUDS are then input to the physical design process. Printed circuit layout (if PC is the selected interconnection technology) is accomplished on the IDEA system (1,5). This system interconnects high-performance interactive graphics terminals for the editing of PC designs with a large mainframe host having sufficient computational power and memory space to handle layout data management and complex layout algorithms. Layout follows the traditional process of first locating components on the board and then routing the interconnections. Intermediate adjustments of gate assignments to packages and connector finger assignments are first made. Automatic routines define component placement using both constructive and iterative improvement techniques (7). Similarly, powerful algorithms of both the basic line routing variety (9) and a unique topologically based router (6) can be used to lay out the interconnections themselves. Competitive pressures in our marketplace demand boards of such complexity and


density that the average layout is typically beyond the capability of even the most powerful of contemporary algorithms to process with complete success. Consequently, a human designer must be kept "in the loop" to complete and perfect most designs. Powerful interactive design editing stations are a necessary part of the design system. Here, technician operators can modify proposed component placements, gate assignments, and wire routings and can interactively complete the "leftovers" from the automatic routines. The results of each design session (many of which are required to complete a PC design) are stored in a data base under the aegis of a powerful data base management system. This DBMS, via the structure of the data base and the relationships it defines, ensures the integrity of design data and agreement between physical and logical descriptions (e.g. interconnections on a schematic versus etch paths on a board). A history of the evolution of a design is kept so that alternative approaches may be recorded and evaluated.

Design Postprocessing

When the PC layout is completed, post-design audit and quality control routines check the design for manufacturability (e.g. spacing tolerances between lines, lines and pads, etc.). An interconnection list is derived from the data base which will produce board artwork. It is automatically compared to the original interconnect list from SUDS. Any discrepancies are corrected by the designer before release. Changes made during layout which affect the schematic (e.g. connector pin assignments, gate swaps - note that none change the logic) are noted in a file which is used to automatically update the SUDS schematic before the final schematic to artwork comparison is made. A read-only data base for the design is passed to our Computer-Aided Manufacturing tools generation group. Here manufacturing-dependent augmentations (e.g. plating bars for PC manufacture) are added. A sharp distinction must be maintained between the engineering and manufacturing definitions of a design. The former describes the final product; the latter may include substantial process-dependent information which may vary from production site to production site.

Other Design Systems

Interface data from schematics for the several boards comprising the system are automatically combined into a file defining required backplane interconnections. Signal consistency between boards is thoroughly checked. This backplane interconnection list is then input into either our wirewrap system or into IDEA for PC layout. If a mix of PC and discrete wires is required, connections not completed on the PC are fed into the wirewrap system. Once again, post-design auditing routines check for manufacturability and for agreement between logic (schematic) and physical designs. Integrated circuit design is done using a similar process. Even heavier emphasis is placed on logic simulation during design and on post-layout audits of design correctness because of the impossibility of post-design circuit modifications and the high cost of manufacturing tools (e.g. mask sets) - the design must be absolutely correct on the first pass! Custom IC designs done at the circuit element level (individual transistors, resistors, etc.) typically involve at least an order of magnitude more elements than even the most complex PC design, severely straining the capacity (and running expenses!) of CAD tools. Actual IC layout is done using a variety of technologies and tools.
A commercial digitizer/editor system is used for custom and standard cell designs; experiments have also been performed to determine the adaptability of PC layout techniques to gate arrays and other regular logic forms.
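As a toy illustration of the iterative-improvement placement idea mentioned earlier (constructive placement followed by improvement passes), the sketch below accepts a pairwise swap of two components whenever it reduces a half-perimeter estimate of total wire length. The netlist, positions and cost model are invented assumptions; the algorithms actually used at Digital (7) are considerably more elaborate.

    # Toy iterative-improvement placement: keep a pairwise component swap only if it
    # lowers a half-perimeter estimate of total wire length. Data are illustrative.
    import itertools

    nets = {"CLK": ["U1", "U2", "U4"], "D0": ["U2", "U3"], "EN": ["U1", "U3", "U4"]}
    position = {"U1": (0, 0), "U2": (3, 0), "U3": (0, 2), "U4": (3, 2)}

    def wire_length(pos):
        total = 0
        for pins in nets.values():
            xs = [pos[p][0] for p in pins]
            ys = [pos[p][1] for p in pins]
            total += (max(xs) - min(xs)) + (max(ys) - min(ys))  # bounding-box estimate
        return total

    best = wire_length(position)
    improved = True
    while improved:
        improved = False
        for a, b in itertools.combinations(list(position), 2):
            position[a], position[b] = position[b], position[a]      # trial swap
            cost = wire_length(position)
            if cost < best:
                best, improved = cost, True                          # keep the swap
            else:
                position[a], position[b] = position[b], position[a]  # undo it
    print("estimated wire length:", best)
    print("final placement:", position)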


The use of CAD at Digital is not limited to digital logic design. Circuit simulators aid in the design of everything from custom IC chips to power supply regulators. Mechanical and hydrodynamic analysis programs are imperative in the design of disk storage devices. Interactive systems speed design and verify the layout of sheet metal and mechanical subassemblies.

MANAGEMENT ISSUES

An extensive set of CAD tools such as those described is not emplaced without considerable management forethought and problem-solving. We now shift our focus to discuss some of the management issues surrounding the development and operation of a CAD system.

CAD Benefits

The most obvious question is "why CAD?". The traditional answer has been that it can substantially reduce direct design costs. However, experience at Digital and many other companies (e.g. 10) has shown that this goal is rarely met. CAD tools do, however, reduce design time. In the highly competitive world of computer design and manufacture, where baseline technology is changing so rapidly, the speed with which a new technology can be translated into a marketable product is essential to corporate success. Reduced design time also means increased productivity from each design engineer and technician. In an era of severe shortages of educated, qualified technical personnel, maximal productivity from each can be another key to overall corporate success. Increasingly, the justification for a CAD system is simply that it is impossible to design the product any other way. Nowhere is this better illustrated than in the IC design field. As designs proceeded beyond a few hundred gates, "bookkeeping" type CAD tools had to be developed to organize, store, and interrelate design data, leading to the digitizer/editor systems so prevalent today. As we stand on the threshold of VLSI designs containing 10^5 - 10^6 elements, a new generation of tools is required as the capability of the human intellect augmented by today's tools is once again exceeded. Similarly, the high cost of building prototype chips and the impossibility of debugging them has led to heavy emphasis on design analysis and verification tools - software which will, as far as possible, ensure the correctness of a design before it is ever manufactured. This is an example of another true advantage of CAD - reducing indirect design costs. In a similar vein, we have repeatedly found that automating a design step has produced heavy, often unexpected demand for its resulting data base. Eliminated is hand transcription of data, an extremely error prone process contributing enormously to indirect design costs. Often the new data base is also the key to automation in another engineering or manufacturing process. Finally, increased accuracy and pre-testing of a design via CAD techniques can significantly reduce product recall or field revision expenses.

Barriers

Barriers to the use of CAD tools, even given the above advantages, are numerous and must be addressed by management. Most obvious is cost - especially because it is a highly visible, up-front, often centrally funded investment, while the returns on that investment are often dispersed over time and over numerous projects with independent budgets. Our experience, once again bolstered by that from other similar corporations, has been that each design station requires on the order of $100,000 in capital investment. This has remained surprisingly constant in spite of falling hardware costs.
The reason is simple: the complexity of designs (and hence the amount of CPU performance, storage, etc. required to do them) is increasing at a similar rate.

An oft-overlooked problem in planning for the CAD system investment is the effect of varying workloads. The number of designs requested by Engineering can and does fluctuate widely. In a traditional manual design environment, a few hundred or thousand dollars of equipment (drafting boards, desks, etc.) standing idle except for peak loads is of little consequence. In the capital-intensive CAD environment, a sound strategy for coping with peak demands is imperative; this may require justifying and requesting even more capital than for a constantly-loaded system always at full productivity.

Another expense in fielding a CAD system is the matter of operator training. Contemporary CAD tools are so complex that even the most carefully human-engineered and self-prompting system may require weeks of operator training and familiarization. Surprisingly, the environment surrounding the CAD program - data entry routines, utility programs, getting on and off the system, system crash recovery procedures, etc. - can account for a large portion of this training.

The risk of using a new tool is often a barrier. No engineering manager wishes to jeopardize his project by relying on a new and unproven tool - unless adequate fallbacks are available or there is simply no other way to do the task. Rarely is a CAD tool well understood when it is first used. Bugs may exist in the software, or at least unanticipated deficiencies in its functionality. Inexperienced operators and under-tested processes abound. In a recent example, we discovered that the average time to do a design using one particular new tool at Digital fell from twelve weeks to seven weeks during its first nine months of use (2).

Less obvious are psychological barriers. Designers frequently feel uncomfortable with the newness of a tool, perceive a threat that it will degrade or supplant their own creativity, and fear enslavement to the system and loss of control over their own data and work. Users (operators, engineers, management) must be educated about the need for and the reasonableness of a tool to the point where they enthusiastically await its arrival, for it is they who will share in the unexpected problems invariably accompanying the introduction of a new tool. They must be convinced that the tool will enable them to better solve their real day-to-day design problems and not some abstraction and simplification distilled by the system developers. A thorough understanding of the design process by the system designers, good human engineering of the system and involvement of the users in its design from the earliest stages can help generate a sense of partnership.

Finally, the flexibility of CAD tools to quickly adapt to new technologies or design rules is limited. This is due to the very nature of the design trade-off decisions made when implementing a tool and its data base (e.g. number of layers, fineness of grids in a PC system). This necessitates a conscious management change towards better planning, early warning about technology changes to CAD developers, and inclusion of CAD lead times in overall project schedules. Without this planning and anticipation, CAD will always be in a "catch-up" mode.

CAD Introduction

How are CAD tools introduced into a corporation? Most companies plan and budget for expanding and upgrading their CAD tools in accordance with some set of goals and priorities. Although this may result in the introduction of CAD into new areas of design, most often it is an evolutionary process centering on existing tools.
A radically new tool is far more frequently introduced into the corporation's engineering environment by a "forcing event". The tool (or at least its basic


technology) may exist for years, yet it will not be adopted by the company. Perhaps its cost/benefit is not well established, perhaps it cannot find the necessary champion to sponsor it at high management levels. Most often, though, it simply loses in the competition for the corporation's limited internal development resources. Then a decision is made to adopt a product technology which absolutely requires this tool for success. The tool must be brought into the company. Latent demand for this type of tool elsewhere in the company surfaces and it spreads until it becomes a part of the general CAD system. An excellent example of this phenomenon is the introduction of logic simulation into Digital several years ago. While the CAD technology has existed since the early 1960's (4), it did not find sponsorship for the development investment and engineering design process changes required to bring it into the corporation. A decision by Digital to do some of its own IC designs was the forcing event: simulation was imperative to IC design, and logic simulation tools were developed. Once the initial system existed, other engineering departments willingly paid the smaller investments of expanding the basic system to meet their needs, user training and process changes, etc. As a result, logic simulation has now spread to permeate the entire engineering environment. Increasingly, new CAD tools are an integral part of the plan for new product technologies. ECL logic, custom ICs, hybrid circuits, and multi-layer PC boards are but a few of the advanced technologies adopted at Digital in the past few years which required planned concomitant CAD tool development.

CAD as Part of the Design Process

A CAD tool always exists in a context: the surrounding design process. Tools can and do affect process; process must often change to maximize the effectiveness of a tool. Thus there is a need to examine the surrounding design process whenever a CAD tool is introduced - during system design (so required process changes can be identified and planned), when the tool is released, and again perhaps a year later when the tool is fully entrenched. Why, for example, should CAD output be pen plotted only to be micro-filmed for distribution when direct COM is available? How many times is CAD generated data hand transcribed for input to another computerized process when a simple interface program could be written? As CAD speeds individual design steps, time spent in simple administrative tasks (passing jobs from one responsible group to another, logging and filing, etc.) and time lost waiting in queues can dominate total design time. Again, careful management attention to the overall process is required to ensure that this does not occur. Data interfaces between CAD tools become important as more of them are emplaced. Often given the least forethought by CAD system designers, they may be difficult to operate, non-standardized in their human interfaces, and crippled by a lack of uniform data definition across systems. This can lead to a significant portion of system complexity and operating time occurring in what should be the most trivial portion of the overall system unless the problem is actively monitored and addressed. Finally, observing data flows can be a key to understanding and improving the CAD-augmented design process. We have already mentioned the fact that automating a design step may produce unanticipated demand for its design data base. These usages should be traced to ensure that the right data is being provided


and to understand all impacts if a design data base is changed. Other groups may need data, but at times or in formats incompatible with current process and tools. Examining data flows can help identify these users and permit changes to satisfy their needs.

CONCLUSION

A CAD system must evolve and grow. Technology both permits it and demands it. We perceive two essential keys to success in that growth: automating the right steps in the design process, and managing the transition to an automated system (which includes everything from initial system design to installation to operational tuning). A description of a typical, extensive set of automated tools has been presented, along with insight into the management problems surrounding their introduction.

REFERENCES

1. Abel, L.C., "Structure and foundations of a large multi-user, multi-task CAD system", Interactive Systems - Proc. EUROCOMP, Sept. 1975, pp. 247-262.
2. Armstrong, R., report at (internal) Digital CAD Symposium, October 1978 (no proceedings published).
3. Barbacci, M.R., et al., "The symbolic manipulation of computer descriptions: ISPS computer description language", Dept. of Computer Science and Electrical Engineering, Carnegie-Mellon University, Pittsburgh, PA, March 1978.
4. Breuer, M.A. (Ed.), Design Automation of Digital Systems, Prentice-Hall, 1972.
5. Bruce, E.A., "Device independent interactive graphics in a time shared environment", Interactive Systems - Proc. EUROCOMP, Sept. 1975, pp. 109-125.
6. Doreau, M.T. and Abel, L.C., "A topologically based non-minimum distance routing algorithm", Proc. 15th Design Automation Conf., 1978, pp. 92-99.
7. Hanan, M. and Kurtzberg, J.M., "A review of the placement and quadratic assignment problems", IBM Research Report RC 3046, April 1970; also in Breuer, op. cit.
8. Helliwell, D., "The Stanford University Drawing System", Stanford Artificial Intelligence Laboratory, 1972.
9. Mikami, K. and Tabuchi, K., "A computer program for optimal routings of printed circuit connections", IFIPS Proc., vol. 2, 1968, pp. 1475-1478.
10. Schuyler, S., "Quantitative system cost analysis", IEEE/Michigan State Univ. Design Automation Workshop, 1977 (no proceedings published).
11. Sherwood, W., "Simulation hierarchy for microprocessor design", Proc. ACM Symp. on Design Automation and Microprocessors, Feb. 1977, pp. 44-49.

TECHNICAL SESSION IV

Chairman: Hugo DE MAN, Leuven University, Belgium

VERIFICATION OF LSI DIGITAL CIRCUIT DESIGN

J.-C. Rault (1) - J.-P. Avenier (2) - J. Michard (3) - J. Mutel (2)

1. THOMSON-CSF and IRIA, Domaine de Voluceau, 78150 Le Chesnay, France
2. LETI-MEA, Grenoble, France
3. CII-Honeywell-Bull, Les Clayes-sous-Bois, France

This paper addresses the different techniques and associated tools which prevail in the development of digital circuits and are conducive to design verification. The structure of the paper parallels the usual process that is followed in the development of integrated circuits: a) specification writing, b) automated specification verification, c) design verification after commitment to hardware implementation (hardware description languages, design simulations -analog and digital-, early design rule checking, comparison with specifications, testability analysis), d) artwork generation (design rule checking, automatic mask defect detection), e) test program preparation. It is attempted to assess the state-of-the-art regarding automated tools available for different levels of verification. Interrelationships among these tools are also discussed. A description of the actions that can be taken in order to verify digital circuit design as it progresses from specifications to physical implementation is provided.

1. INTRODUCTION

The ever increasing complexity and density of LSI and VLSI circuits we have witnessed during the past few years will soon cause obsolescence of the CAD-CAM tools and procedures that are in use today. Most prevailing industrial tools reflect the characteristics and constraints of MSI but not those of forthcoming VLSI. To cope with this evolution, those responsible for computer tools aiding the design and manufacturing of LSI circuits are presently reconsidering the architecture and capabilities of CAD-CAM tools. One can note that new criteria, often missing from present tools, predominate in the design of future tools; among these criteria are the procedures favoring top-down design, circuit testability, integration of the various tools, as well as those procedures conducive to multiple and complementary levels of verification during the different design steps.

2. IC DESIGN STEPS

Whatever technology is used, IC design consists schematically of the following steps (12,14,39,59):
a- drawing requirements (functional specifications and physical performances)
b- checking specifications for consistency and completeness
c- selecting a functional architecture
d- selecting components for physical realization
e- checking that integration constraints and initial specifications are met
f- drawing masks
g- analyzing masks and checking design rules
h- manufacturing
i- controlling quality



To each of the above steps correspond verification procedures which are based on CAD-CAM tools. The present paper will review each of these procedures while indicating their interrelationships and the characteristics that the corresponding CAD-CAM tools must exhibit. The paper will be built upon the above sequence of steps; but first, we will briefly describe design verification.

3. DESIGN VERIFICATION (21,41,48,61,80,81,93)

3.1 Levels of verification

Verification of IC design is not performed in a single step but, on the contrary, corresponds to a sequence of operations performed at each step in the design process. Schematically, three levels of verification may be distinguished:
A- A first level, once specifications are drawn, in which it is attempted to detect possible design errors such as inconsistencies, omissions, redundancies or unnecessary points (steps a and b)
B- A second level corresponding to the design itself (steps c to h)
C- A third level after the final circuit is obtained (step i)

3.2 Approaches and types of analysis

Design verification takes several approaches which may apply to more than one of the above levels:
a- derive, by analyzing the functional specifications, a set of conditions and relations, or assertions, and check for their consistency
b- derive, by analyzing the functional specifications, a set of assertions and check that they correspond to the function actually implemented by the circuit (levels B and C). Basically, this approach compares results of two simulations of the circuit, each for the same predetermined sequence of input signals. The results are the formal specifications of the circuit's function and the function, derived by simulation, of the circuit as it is implemented.
c- derive from the description of the implemented circuit its electrical and logic (even thermal) behaviors and, subsequently, check that they meet the corresponding specifications.

The above approaches lead to three types of analyses:
. a priori simulation: analyzing specifications before commitment to hardware (steps a and b)
. analyses during the first design step on a functional schematic (step c)
. analyses performed after commitment to hardware; as the case may be, these analyses concern, separately or simultaneously, physico-chemical data, electrical schematics, logic schematics, thermal data, or a mask drawing (steps d to i).

4. WRITING AND VERIFYING SPECIFICATIONS

4.1 The different types of specifications

As for any other technical product, this step is decisive in the design of an integrated circuit. Inconsistencies, ambiguities or omissions in the specifications will be reflected in the final product; the later they are uncovered in the design process, the higher the cost for their elimination. In other words, the quality and reliability of an integrated circuit depend as much on the reliability and quality of its specifications as on those of its manufacturing process.


It is of prime importance to pay as much attention as possible to the way an integrated circuit is specified, as well as to the procedures that are used for verifying both the specifications (before actual implementation is performed) and their respect by the final product. Let us describe the different kinds of specifications more precisely. As a rule, one may distinguish:
. specifications intended for the potential user, i.e. the set of information strictly necessary for the user to use the final product. In fact, such specifications are the mere expression of the user's needs.
. specifications intended for the circuit designer responsible for circuit manufacturing. These specifications concern:
- the function to be implemented (logic function or algorithm, input and output signals, etc.)
- the actual implementation (partitioning into main blocks, nature and type of input and output signals, logic and dynamic behaviors, domains of use, geometry, package, behavior under given environmental conditions, thermal dissipation, power).

In short, specifications concern two points:
. design: the area for which automatic validation is beyond all question the most difficult
. implementation, i.e. the definition of structures which will confer on the final product those characteristics matching the user's needs as specified initially.

First, we will focus on the former point (steps a and b above); the second point involves tools used in the subsequent steps (c to i), where verification is better adapted to automatic tools. The specification writing step corresponds to abstract entities whose purpose is to define unambiguously the functions and the physical and electrical characteristics of the product. This set of entities forms a model later used as a reference during the other steps in design.

4.2 Ideal characteristics for specification writing tools

An ideal tool for writing specifications should exhibit the following characteristics:
. be independent of technology
. provide several levels of abstraction and automatic communication between levels. At each level of abstraction only strictly required information should be present. Here lies the main difficulty of specification writing: provide sufficient information without prescribing the actual physical implementation of the product.
. allow for a format comprehensible to the user for whom the specifications are intended (potential user or designer of the circuit)
. avoid omissions and ambiguities
. lead to specifications formal enough that their consistency and completeness may be verified mechanically.

4.3 Tools for writing and verifying specifications

The foregoing indicates that the specifications of a circuit may have several forms, which can be put into two main categories: formal and informal specifications.



Informal specifications
Those written in common natural technical language; they do not lend themselves to mechanical interpretation. Initial specifications of integrated circuits are most often written in this way; their verification proceeds from the engineer's ingenuity.

Formal specifications
Those expressed in a form that provides a technical and mathematical description of the concepts on which the circuit to be designed is based; therefore, a description processable by a computer program. Various forms favoring automatic verification are possible; as examples one may mention:
. flow-charts
. timing diagrams (chronograms)
. state tables
. logic diagrams
. Boolean functions
. tables of physical data

As a rule, several of the above forms may be present in the same set of specifications. The above list alone shows the difficulty of standardization, for the dialoguing parties use the form that best suits each item of the specifications. To attain both standardization and verification at this level of description, much effort is currently devoted to the design of languages and tools for writing formal specifications. Among these tools (77) one may mention Petri nets, parallel schemes, GRAFCET, SARA (31, 33, 36, 72), etc. These tools (description languages and associated simulation programs) are well suited to the current trend of VLSI circuits incorporating complex functions such as microprocessors and similar circuits; however, their use by industrial IC designers is still limited because of their recent advent and the complexity of their processing and of the analyses they entail.
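As an illustration of the kind of mechanical check that a formal specification permits, the minimal sketch below (in Python) verifies a small state table for completeness (every state has a transition for every input) and consistency (no conflicting transitions, no undeclared names). The example machine, its state and input names and its encoding are invented for the illustration; they are not taken from any of the tools cited above.

# Minimal sketch: completeness and consistency check of a state-table specification.
# The example machine and its encoding are illustrative assumptions only.

INPUTS = ("reset", "clock")
STATES = ("IDLE", "LOAD", "RUN")

# Each entry: (current_state, input, next_state)
TRANSITIONS = [
    ("IDLE", "reset", "IDLE"),
    ("IDLE", "clock", "LOAD"),
    ("LOAD", "reset", "IDLE"),
    ("LOAD", "clock", "RUN"),
    ("RUN",  "reset", "IDLE"),
    ("RUN",  "reset", "RUN"),   # deliberate conflict, to be reported
    # ("RUN", "clock", ...) is deliberately missing: an omission to be reported
]

def check_state_table(states, inputs, transitions):
    errors = []
    seen = {}
    for s, i, nxt in transitions:
        if s not in states or nxt not in states or i not in inputs:
            errors.append(f"undeclared name in ({s}, {i}) -> {nxt}")
        if (s, i) in seen and seen[(s, i)] != nxt:
            errors.append(f"inconsistent: ({s}, {i}) maps to both {seen[(s, i)]} and {nxt}")
        seen[(s, i)] = nxt
    for s in states:
        for i in inputs:
            if (s, i) not in seen:
                errors.append(f"incomplete: no transition from state {s} on input {i}")
    return errors

for message in check_state_table(STATES, INPUTS, TRANSITIONS):
    print(message)

Such a check costs nothing compared with uncovering the same omission after commitment to hardware, which is precisely the argument made above for formal specifications.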

5. VALIDATION OF ARCHITECTURE

The step corresponding to the selection of the circuit architecture is of growing importance among the various design steps. As a matter of fact, VLSI circuits are becoming true computer systems, and the selection of a proper architecture is an important factor in their optimization. This selection is performed by refining the initial solution through a sequence of synthesis steps, each of them following the same scheme. Starting with a functional description (i.e. the function to be implemented) one derives a structural description representing one of the many possible implementations by means of entities with more basic functions. This procedure, called top-down design, is stopped when the basic functions used for implementing the initial function can be expressed directly in hardware terms. Each structural description is, in fact, a set of interconnected modules, each with a functional description. The design proceeds in this way as a sequence of synthesis and optimization steps (17, 18, 42). Concomitant with this design methodology is a circuit description language which, without intermediate transcoding, should accommodate the successive versions of the implementation from the highest level (algorithms or conceptual schematics) to the lowest level (elementary logic functions) (12,14,21,59). As homogeneity is the main expected advantage, one may easily understand that such a language should be compatible from one step to the next. Along with this language, it is useful to have at hand three types of tools:
. a functional simulator


. a consistency verifier which allows checking that each refinement of a part of the description leads to a sub-set equivalent to the implemented function and compatible with the associated constraints
. an assessment program as a decisional aid for the choices to be made during the partial synthesis steps. As a matter of fact, for each function to be implemented, several solutions are possible. The designer should be able to establish an overall assessment of each of them, covering characteristics as different as testability, silicon area, ease of implementation and manufacturing, etc. By no means are these performances precisely defined while the synthesis is still crude; however, they are an invaluable guide to the designer.

If one analyzes the tools present in current IC design automation systems, one may note that nearly no system provides a language for true functional description along with a hierarchical functional simulator. Similarly, assessment programs are virtually non-existent. However, VLSI designers could benefit from a wealth of existing tools. As a matter of fact, many functional languages and simulators were designed long before LSI's advent (12, 14, 42, 48, 98); however, these languages have experienced limited use, owing to their variety and lack of industrial status. The general use of microprocessor-like circuits and the needs of VLSI are causing renewed interest in these languages, which should be instrumental in the actual use of structured design for digital systems, in the analysis of their testability and in their functional partitioning.

6. VERIFICATION OF LOGIC AND ELECTRICAL IMPLEMENTATIONS

Once a structure is chosen and validated, at least with respect to functional and global performances, the designer has to gradually define an electrical and logic implementation. The initial data he has at hand is a set of interconnected functions along with a list of basic functions whose logic and electrical structures have been defined previously (basic cells specific to each technology). Two cases may arise:
. macrofunctions: in this case, systematic or even automatic synthesis procedures may be used; in a few cases (PLAs, shift registers, memories, ...) automation may proceed up to layout. In this instance, simulation is not necessary in principle; on the other hand, it must be verified that the constraints associated with the block implementing the given function are met. This procedure is aided by synthesis and consistency verification programs.
. non-standard functions: in this case, translating functions into logic operators is aided by a simulator allowing, at each step, checking that logic and dynamic specifications are met; on the other hand, if the designer must create new cells, he must use an analog circuit simulator for checking electrical and timing characteristics.

The choice of tools of the two above types is, of course, dependent upon the technology in use and the accuracy that is required.

6.1 Verification of the electrical implementation

Today there are many tools available for this operation; as a matter of fact, analog circuit simulators are among the first CAD tools to have been designed, well before the advent of integration. In spite of an apparent abundance, it appears that LSI designers do not have a wide choice among possible programs. According to circuit size, type of description, and the type of analysis to be performed, one may first distinguish two main categories of programs:
- the programs intended for component designers; they are those directly relevant to circuit integration; their main feature is the use of technological data such as the geometry of active and passive components and data on the physico-chemical process; consequently, they are most often technology dependent.


- the programs intended for users of components; these components are described both by an approximate model and by the numerical values of the parameters of this model. The integration technology is not in evidence but is incorporated in the models, to be used and validated elsewhere, either through programs of the first category, or through programs used in the validation of the physico-chemical process (see section 8 below).

A more accurate categorization may be adopted:
- general-purpose simulators addressing mainly non-linear circuits for transient analyses. To this category belong SPICE-2, IMAG-3, ASTAP, ASTEC (46), SUPERSCEPTRE, COD, SYSCAP, CIRCUS, TRAC, PHILPACK, etc.
- simulators dedicated to a given technology and handling non-linear circuits with a stress on transient analysis. Programs of this category are often derived from programs of the first category; they are, for instance: MSINC (96), MOTIS (20, 32), T-SPICE-2, MICE (55), SPLICE (70), SIMPIL (11).
- simulators dedicated to linear circuits. These programs are less complex than the two previous kinds and are less expensive to use; moreover, they allow analyses that are not performed, in general, with the other simulators (for instance, optimization, connection with synthesis programs). To this category belong SLIC, SNAP, NASAP, OPNODE, CORNAP, COMPACT, ANP3, ESOPE, etc.
- simulators specific to a given class of circuits: amplifiers, oscillators, power supplies, filters, etc. A wealth of such programs are commercially available.

Schematically, simulators of the first two categories are most often run on large computers. However, minicomputer versions are appearing (10,32,34,55,94,96), particularly for the second category (32,55,96). Simulators of the last two categories run equally well on large and small computers.

Experience gained while using analog circuit simulators shows that a universal program is somewhat utopian; in practice, it is preferable to have a set of complementary programs available, each being, as the case may be, best suited to particular conditions such as:
. a given type of circuit: linear or non-linear circuits, bipolar or field-effect transistors, microwave or hybrid circuits, etc.
. given conditions of simulation:
- DC state: knowing this state is a prerequisite for other analyses (AC, transient, tolerance analyses)
- AC analysis: small-signal response for given DC conditions. Non-linearities are usually modeled as a set of linear cases around a DC point; a frequency scan is usual
- transient analysis: response of the circuit to stimuli defined by the user, for predetermined initial conditions given by the user or computed elsewhere (a minimal numerical sketch of this kind of analysis is given after these lists)
- sensitivity and tolerance analyses for the different parameters of a circuit (15,19,30); such analyses are of prime importance in assessing IC manufacturing yield and the viability of circuits
. noise analyses
. influence of non-linearities in active components; such an analysis is usually beyond the capabilities of conventional simulators
. fault analyses (shorts, open-circuits, components outside of their tolerances, etc.); such analyses are useful in preparing testing sequences to be fed to automatic testing equipment
. optimization: in general, these computations are costly because the mathematical algorithms available assume that the objective functions exhibit only one extremum and are well behaved. Actual objective functions do not necessa-


rily meet those assumptions. Consequently, optimization is most often restricted to linear circuits whose analysis is not expensive
. connection with synthesis programs: the structure of the circuit to be analyzed is provided by a synthesis program which has as input data the response specifications to be met. Synthesis is almost restricted to linear circuits (passive, active, CCD filters) and simulation is used for verification and optimization
. a given description mode:
- circuits described as an interconnection of basic passive and active components (diodes, transistors, resistors, capacitors)
- circuits described as an interconnection of functional blocks (filters, delays, amplifiers, gates, flip-flops, etc.). The functions may be described in varied ways:
. rational functions (linear circuits)
. mathematical functions
. empirical formulas
. tables of data
. presolved differential equations

- circuits described by the numerical values associated with the model parameters
- circuits described by symbolic parameters
- scattering parameters
. a given industrial context:
- device manufacturing
- use of components (discrete or integrated)
- system design
- manufacturing and quality control
. a given computer environment: remote batch, time-sharing, stand-alone computer, large or small computers.

Taking into account the above considerations, one may also distinguish programs addressing the following modes of simulation:
. general-purpose numerical simulators
. simulators dedicated to a given LSI technology
. semi-numerical simulators (linear circuits)
. symbolic analyses. The two latter modes provide the following advantages: more accurate and compact information, avoidance of some numerical analysis problems, reduced computation cost for sensitivity and tolerance analyses
. functional analysis: a capability in demand in the case of LSI circuits
. logico-analog simulation (timing verification)
. simulation taking into account the layout data (capacitive loads, coupling between components, parasitic capacitors between layers, parasitic resistors, etc.). This mode of simulation is useful in the LSI context (as dealt with in section 8 below)
. simulation taking into account thermal mappings (see for instance program T-SPICE)
. analog fault simulators (11,20,32,55,70,78,96)
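To make the transient-analysis mode referred to above concrete, the sketch below integrates a single RC low-pass stage driven by a voltage step with the backward-Euler method, which is representative of the implicit integration schemes such numerical simulators rely on. The component values, time step and stimulus are arbitrary assumptions, not data from any of the programs cited.

# Minimal sketch: backward-Euler transient analysis of an RC low-pass stage.
# dv/dt = (v_in(t) - v) / (R*C); the values below are illustrative assumptions.

R = 1.0e3        # ohms
C = 1.0e-9       # farads
DT = 20.0e-9     # time step in seconds
STEPS = 50

def v_in(t):
    """Input stimulus: a 5 V step applied at t = 0."""
    return 5.0 if t >= 0.0 else 0.0

def transient(r, c, dt, steps):
    tau = r * c
    v = 0.0                      # initial condition on the capacitor
    waveform = [(0.0, v)]
    for n in range(1, steps + 1):
        t = n * dt
        # Backward Euler: v_new = (v_old + dt/tau * v_in(t)) / (1 + dt/tau)
        v = (v + dt / tau * v_in(t)) / (1.0 + dt / tau)
        waveform.append((t, v))
    return waveform

for t, v in transient(R, C, DT, STEPS)[::10]:
    print(f"t = {t*1e9:6.1f} ns   v = {v:.3f} V")

A production simulator does the same kind of implicit time stepping, but on the full non-linear network equations rather than on a single node.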



This brief analysis, which is by no means exhaustive, well depicts the diversity in the capabilities expected from analog simulators, each of them being useful in the verification of the electrical schematic of integrated circuits.

If the capabilities of the analog simulators currently in use in industrial contexts are assessed, it is clear that they do not match the constraints of VLSI: simulation running times are prohibitively long; simulations are restricted to small sub-parts of a same circuit (a few dozen active components) and are not global; communications among the various simulation modes are most often manual; and computer implementations, as well as the processing of output results, lack flexibility for the user. In order to meet the forthcoming needs of LSI, a new generation of simulators is under way; its main features are:
. use of novel simulation techniques that allow simulation speeds closer to the speed of logic simulators while taking into account analog characteristics (gain in the order of 10),
. use of macromodeling techniques (gain between 10 and 30) (16, 25, 26, 37, 44, 47, 50-53, 97) (see the sketch after this list),
. techniques for selecting proper models (35),
. hierarchical structure: simultaneous simulation of several sub-circuits described at different levels of abstraction (71, 84),
. a unified language for several modes of simulation or several simulators (69),
. minicomputer implementations (10, 20, 32, 34, 55, 70, 94, 96),
. use of intelligent computer terminals so that interactivity and user comfort are improved,
. automatic connection with other tools, especially with drawing aids.
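As an illustration of the macromodeling idea listed above, the sketch below replaces a transistor-level inverter by a crude behavioral macromodel: a logic threshold, a fixed propagation delay and a single-pole output response. The threshold, delay and time constant are invented for the example and are not taken from the macromodeling references cited; the point is only that evaluating such a model costs far less than solving the device equations of the cell it stands for.

# Minimal sketch: a behavioral macromodel of an inverter for fast mixed-mode
# simulation. Parameter values are illustrative assumptions only.

class InverterMacromodel:
    def __init__(self, vdd=5.0, vth=2.5, delay_s=3e-9, tau_s=2e-9):
        self.vdd = vdd          # supply voltage
        self.vth = vth          # input switching threshold
        self.delay = delay_s    # propagation delay of the macromodel
        self.tau = tau_s        # output time constant (single-pole response)
        self.v_out = vdd        # assume the input is initially low
        self.pending = []       # scheduled (time, target_level) pairs

    def drive(self, t, v_in):
        """Record the input seen at time t; schedule the delayed output target."""
        target = 0.0 if v_in > self.vth else self.vdd
        self.pending.append((t + self.delay, target))

    def output(self, t, dt):
        """Advance the output by one time step dt and return its value at time t."""
        # the most recently scheduled target whose time has elapsed wins
        due = [lvl for when, lvl in self.pending if when <= t]
        target = due[-1] if due else self.v_out
        self.v_out += (target - self.v_out) * dt / self.tau
        return self.v_out

inv = InverterMacromodel()
dt = 0.5e-9
for step in range(40):
    t = step * dt
    v_in = 5.0 if t >= 5e-9 else 0.0     # input rises at t = 5 ns
    inv.drive(t, v_in)
    v_out = inv.output(t, dt)
    if step % 8 == 0:
        print(f"t = {t*1e9:4.1f} ns  v_in = {v_in:.1f} V  v_out = {v_out:.2f} V")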

6.2 Verification of the logic schematic

Once the functional structure of a circuit is established, the designer must choose a logic realization. He then translates a functional scheme into a logic diagram described as an interconnection of basic logic elements (gates, flip-flops, registers, memories, etc.). At this step, he uses tools for logic simulation, timing simulation (see 6.3), and fault analysis (either for a priori assessment of circuit testability or for the generation of testing sequences used later on); to the latter mode belong programs for synthesizing testing sequences. Possible structures for simulators and simulation techniques are fairly well known today and many efficient programs have been developed. As several papers presented at this symposium deal with logic simulation and testing sequence generation, we will restrict our discussion to the evolution considered necessary for logic simulators to cope with VLSI's constraints. In spite of the fact that nearly every IC DA system includes at least one logic simulator and one program for testing sequence generation, one can note a gap between the characteristics of VLSI circuits and the capabilities of these tools (description modes and analysis levels) which, unfortunately, have not evolved at the same pace as circuit complexity. In fact, the industry-oriented tools presently in use date back to the eras of discrete components and MSI technology (58). The main evolution to be considered necessary is providing true and extended macro-modeling and macro-simulation capabilities, for both purposes of design verification and of preparation of test programs.
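For reference, the sketch below is a minimal event-driven, gate-level logic simulator of the conventional kind discussed here: gates with fixed delays, an event queue ordered by time, and evaluation only of the gates whose inputs actually change. The two-gate netlist, the delays and the stimuli are invented for the illustration.

# Minimal sketch of an event-driven, gate-level logic simulator.
# The netlist, delays and stimuli below are illustrative assumptions only.

import heapq
import itertools

GATE_FUNCS = {
    "NAND": lambda a, b: 0 if (a and b) else 1,
    "NOR":  lambda a, b: 0 if (a or b) else 1,
}

# gate name -> (type, input nets, output net, propagation delay in time units)
NETLIST = {
    "g1": ("NAND", ("a", "b"), "n1", 2),
    "g2": ("NOR",  ("n1", "c"), "y", 1),
}

def simulate(netlist, stimuli, t_end):
    values = {net: 0 for _, ins, out, _ in netlist.values() for net in (*ins, out)}
    fanout = {}
    for name, (_, ins, _, _) in netlist.items():
        for net in ins:
            fanout.setdefault(net, []).append(name)

    seq = itertools.count()          # tie-breaker: later-scheduled events apply later
    events = [(t, next(seq), net, v) for t, net, v in stimuli]
    heapq.heapify(events)
    while events:
        t, _, net, v = heapq.heappop(events)
        if t > t_end or values[net] == v:
            continue                  # past the horizon, or no actual change
        values[net] = v
        print(f"t={t:3d}  {net} <- {v}")
        for gname in fanout.get(net, []):
            kind, ins, out, delay = netlist[gname]
            new_out = GATE_FUNCS[kind](*(values[i] for i in ins))
            heapq.heappush(events, (t + delay, next(seq), out, new_out))
    return values

# Stimuli: (time, net, value) on the primary inputs a, b and c.
simulate(NETLIST, [(0, "a", 1), (0, "b", 1), (5, "c", 1), (8, "b", 0)], t_end=20)

The limitation argued in the text is visible even in this toy: the description is strictly gate-level, so the work grows with the number of gates rather than with the number of functional blocks.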


The latter point, the preparation of test programs, has never received a satisfying solution for the case of MSI, and it is recognized that the insufficient approaches taken so far will be of little help for the VLSI case. There are several reasons for this situation. First, taking into account the higher combinatorics involved in the corresponding algorithms, the expected size of VLSI circuits makes those methods based on gate-level descriptions obsolete; unfortunately, the methods that work for the MSI case do need a gate-level description. Secondly, there is no proof that the simplifying assumptions regarding faults will still hold for the VLSI case; conventional fault analysis algorithms (serial, parallel, deductive and concurrent fault simulations) are likely to be modified accordingly. Finally, logic simulators are seldom actually integrated with the other CAD tools (see section 7). Some preliminary work (2, 3, 44, 45, 48, 58, 65, 89) has been done addressing the above points; however, its results are not yet fully available to LSI circuit designers and to those responsible for quality control.

Besides the points discussed above, we would like to mention a novel direction of development, still at the research stage, but worth considering for circuit design verification. Experience with conventional logic simulators indicates that users frequently would prefer to be able to deal with data and results not expressed as sequences of 0s and 1s, but rather expressed as symbols; this need is more stringent for verifying that specifications are met. Symbolic manipulation corresponds to a novel mode of simulation which conventional logic simulators cannot accommodate. Then, too, conventional simulators exhibit the following disadvantages:
. no guarantee of exhaustiveness,
. necessary assumptions which do not match real life: for example, initial states of circuits which cannot be ascertained, dynamic parameters of the components, values of a few signals, etc.,
. inappropriate formats and a large volume of results whose processing is therefore cumbersome.

On the contrary, simulators capable of handling symbolic data would provide the following advantages:
. no assumptions regarding initialization,
. results more compact and easier to apprehend; in particular, the result format is similar to the way in which specifications are usually written,
. more accurate information on the origin of hazards and races and, therefore, on the way to eliminate them,
. gain in computation time: simulation is performed at a functional level and no longer at the gate level.

This approach, still little investigated (73, 90), seems to be promising, for it is close to functional techniques and opens avenues to the use of powerful tools for systematic verification such as those for proof of design correctness (1).

6.3 Verification of timing characteristics (13, 43, 49, 56, 74, 85, 87)

Optimizing the dynamic performances of integrated circuits for a given implementation requires checking the maximum clock frequency consistent with the propagation delays, along with their manufacturing distributions, of the components in the circuit. The object is to uncover potential delay faults caused by propagation delays falling outside their manufacturing tolerances; such faults entail the propagation of incorrect or non-stabilized logic values.


Such potential faults may be detected through electrical or logic simulations with the tools described above (models most often provide minimum, typical and maximum propagation delays); however, IC designers would prefer an exhaustive way of testing, which the above simulators cannot guarantee. A direct attack on the problem is generally economically unrealistic. For example, for an n-input combinational circuit, exhaustive delay testing would require determining the maximum propagation delay of the circuit for 2^n (2^n - 1) input transitions among the 2^n input combinations (for n = 10 this number is close to 10^6). For this reason, other approaches involving tools departing from conventional ones have been proposed. One may mention:

. Fault propagation along sensitizable paths (13, 49, 87). Here it is implicitly assumed that a delay fault originating in a block located on a sensitizable path causes a delay, in the propagation of an input transition, that can be detected on the circuit primary outputs. This assumption is verified only partially in practice. This approach to verification involves two types of tools: the first one, a classical one for generating testing sequences, provides a list of sensitizable paths and criteria of selection among this list; the second one corresponds to a conventional logic simulator.

. Use of a simulator for which the logic levels are no longer considered as Boolean values but as stochastic variables taking values between 0 and 1 according to a given or computed probability. This technique, akin to PERT network analysis, has been implemented by several authors (40, 56, 62-64); however, it has experienced limited use. Recently, an elaborate program (62-64) has been developed along these lines. The problems involved in the industrial use of such simulators do not lie in the optimization of the simulation algorithms but in the preparation of model libraries including this probabilistic treatment.

. Structural analysis of the graph derived from the circuit in order to enumerate its internal propagation paths and to determine their respective propagation delays. Due to the combinatorial computations, the general case cannot be handled this way. However, if restrictions are placed on the possible structure of circuits (29), determining the maximum propagation delay path may be considered on a practical basis (85). This is a good instance of a situation where the feasibility of a CAD tool is highly dependent on the design procedure itself. (A minimal sketch of such a structural timing analysis on a circuit graph is given below.)

Timing verification of VLSI circuits is still a problem awaiting a satisfying solution. Considering the higher combinatorics involved, it is very likely that a solution is to be sought not in magic algorithms but rather through modifications of the design procedure so that the initial problem is simplified from the beginning.
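The sketch below illustrates the structural approach just mentioned: the circuit is treated as a directed acyclic graph, each block carries a worst-case propagation delay, and the latest arrival time at every output is obtained by a topological traversal, avoiding the explicit enumeration of all paths. The example graph and delay values are invented for the illustration.

# Minimal sketch: worst-case path delay on a circuit graph by topological traversal.
# The graph and the delays are illustrative assumptions only.

# block -> (worst-case propagation delay in ns, list of driving blocks or inputs)
CIRCUIT = {
    "g1": (4.0, ["in_a", "in_b"]),
    "g2": (3.0, ["in_b", "in_c"]),
    "g3": (5.0, ["g1", "g2"]),
    "g4": (2.0, ["g2"]),
    "out_x": (0.0, ["g3"]),
    "out_y": (0.0, ["g3", "g4"]),
}
PRIMARY_INPUTS = {"in_a": 0.0, "in_b": 0.0, "in_c": 0.0}

def latest_arrival(circuit, primary_inputs):
    arrival = dict(primary_inputs)

    def visit(node):
        # memoized recursion: each block is evaluated once, after its drivers
        if node in arrival:
            return arrival[node]
        delay, preds = circuit[node]
        arrival[node] = delay + max(visit(p) for p in preds)
        return arrival[node]

    for node in circuit:
        visit(node)
    return arrival

times = latest_arrival(CIRCUIT, PRIMARY_INPUTS)
for node in ("out_x", "out_y"):
    print(f"latest arrival at {node}: {times[node]:.1f} ns")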

7. VERIFICATION OF LAYOUT (5-7,9,22,23,27,54,57,60,66-68,75,79,83,86,88,91,92,95)

Schematically, IC manufacturing requires two categories of data:
. a mask description; this information is used during the microlithographic operations specific to the mask machines at hand,
. the parameters of the physico-chemical process; these data are necessary to set the conditions for operations such as diffusion, oxidation, etching, etc. These parameters are fixed once a technology is chosen and concern manufacturers rather than circuit designers.


This section deals with the verification of the first category of data, which draws on the various CAD tools; we shall deal with verification of the second category of data in the next section. During the various steps involved in the translation of a functional diagram into both a logic schematic and an electrical schematic and, subsequently, into a set of actual masks, several types of errors or flaws may be introduced:
. inconsistencies between the topology of the electrical and logic schematics derived from the actually implemented mask and the ideal schematics specified initially,
. overlooking the admitted tolerances for the drawing, determined by the microlithographic process,
. introduction of unacceptable parasitic components entailed by the physical implementation and not apparent in the ideal electrical schematic,
. non-optimal choices or accidental errors regarding sizes of components with respect to their expected electrical and dynamic characteristics.

Detection of all these faults is based on various mask analyses and on a detailed modeling of basic passive and active components.

7.1 Verification of the logic and electrical schematics

The problem here is to verify that the circuit derived from the descriptions of the masks (the circuit to be actually implemented) is equivalent, with respect to electrical and logic responses, to the circuit devised initially by the electronic designer; in other words, once an IC mask is drawn, the electrical and logic circuits each have two descriptions: one obtained from the mask, the other at the origin of the mask and representing the initial specifications. Checking these two pairs of descriptions against each other allows two further verifications:

a- Analysis of the topological structure. By recognition algorithms one determines characteristic patterns or combinations of characteristic patterns identifying components (diodes, transistors, gates, cells, etc.) and interconnections. In the end, a list of components with their interconnections is drawn. This list is subsequently compared to the corresponding lists drawn initially for simulation purposes during the preceding steps of the design. If mask descriptions included identifiers for all the components, their connecting pads and the electrical nodes (equipotentials), this analysis would be straightforward; unfortunately, such identifiers are not generally available, for their introduction is cumbersome and prone to error. Consequently, the analysis requires complex algorithms based on sort, merge and selection operations on the files storing the descriptions of the different mask levels. Those algorithms are not generally applicable but are most often specific to a given technology. (A minimal sketch of this comparison is given at the end of section 7.1.)

b- Analysis of electrical and logic responses. During the above analysis, the recognition of basic components (resistors, capacitors, diodes, transistors) of the schematic, or of parasitics, may be followed by computation of the numerical values of their parameters. The electrical or logic schematics, then fully documented, may be analyzed with the conventional simulators used in the early steps of design. Use of these simulators leads to two additional verifications:
. verification of proper electrical or logic use of the components (load conditions, fan-in, fan-out, etc.); an analysis of the descriptions is sufficient, without resorting to an actual simulation.



. verification that specifications are met. Checking the results of two additional simulations, logic and electrical, against the specifications may serve two purposes: either uncovering design errors undetectable by simulation at the early design steps, or tuning the circuit with respect to parasitic effects introduced by the physical implementation. Of course, the quality of the verification will be that of the set of input stimuli (signals, electrical conditions, etc.) defining the simulations.

This second type of automated analysis is not provided by every IC design automation system. As examples, one may mention the following programs: VETO (57), CMAT (75), MASOB (88), TOPOL (CII-HB).
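To illustrate the comparison of the two circuit descriptions discussed in 7.1, the sketch below checks a component list extracted from the masks against the intended list, reporting missing components, extra components (possible parasitics) and connection mismatches. The netlist format and the example data are assumptions made for the illustration, not the format of any of the programs named above; in particular, it assumes the extracted components already carry the same identifiers as the intended list, which, as noted above, is rarely the case in practice.

# Minimal sketch: comparison of an extracted netlist against the intended one.
# The netlist format and example data are illustrative assumptions only.

# Each component: name -> (type, tuple of nets it connects to)
INTENDED = {
    "T1": ("NMOS", ("in", "n1", "gnd")),
    "T2": ("NMOS", ("n1", "out", "gnd")),
    "R1": ("RES",  ("vdd", "out")),
}
EXTRACTED = {
    "T1": ("NMOS", ("in", "n1", "gnd")),
    "T2": ("NMOS", ("n1", "out", "vdd")),   # wrong connection: should go to gnd
    "C9": ("CAP",  ("out", "gnd")),          # parasitic found only in the layout
}

def compare(intended, extracted):
    report = []
    for name in sorted(set(intended) | set(extracted)):
        if name not in extracted:
            report.append(f"missing from layout: {name} {intended[name]}")
        elif name not in intended:
            report.append(f"extra in layout (possible parasitic): {name} {extracted[name]}")
        elif intended[name] != extracted[name]:
            report.append(f"mismatch on {name}: intended {intended[name]}, "
                          f"extracted {extracted[name]}")
    return report

for line in compare(INTENDED, EXTRACTED):
    print(line)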

7.2 Verification of microlithographic constraints

The inaccuracies concomitant with microlithographic operations must stay within tolerances determined by the physico-chemical operations corresponding to each geometrical shape in the masks (minimum diffusion width, minimum spacing between oxide grid and metal contact, etc.). Generally, during mask drawing, these constraints are only partially met; consequently, a global verification is required after the final mask is drawn. Basically, the analysis consists in verifying overlaps or minimum spacings between basic figures of a same mask or of different mask levels. The corresponding operations are complex, for they must take into account various situations and, moreover, they entail combinatorics. Generally speaking, most IC design automation systems provide some sort of verification of this kind, either automatically by means of costly algorithms, or interactively by aiding visual inspection with display tricks (superposing colors (54) for those DA systems with color displays (54, 66, 91)).

7.3 Present status and needs

Regarding the verifications described in the two preceding paragraphs, references 6 and 7 describe the capabilities of 20 IC DA systems as well as discussing several algorithms. However, one can note that most algorithms do not adequately match the needs of LSI (for bipolar technology: a few thousand components, 10 mask levels, several thousand vectors; for MOS technology: a few tens of thousands of components, 6 to 7 mask levels, several hundred thousand vectors per mask). Their use requires important computer resources (hours of CPU time). For this reason, restrictions are often placed on shapes (for instance, no oblique or circular-arc edges, a limited number of edges in a polygon, etc.).

8. VERIFICATION OF MANUFACTURING DATA (4,24,28,38,82)

This verification concerns those responsible for technology more than circuit designers and, particularly, custom designers. However, CAD tools have a role in the verification of manufacturing conditions. As a matter of fact, the selection of the parameters of the physico-chemical process determines the characteristics of the basic components. In fact, one of the factors in the optimization of manufacturing yield is the minimization of the sensitivity of the electrical parameters of components to fluctuations in the control of the physico-chemical process. An optimal choice requires solving a statistical modeling problem. As a first approach, conventional Monte Carlo techniques have been used; there, the same component, modeled according to the physico-chemical process in use, is simulated for various combinations of values of the parameters in the model (8). As with every Monte Carlo technique, this approach requires a large number of simulation iterations to reach an acceptable level of confidence. Besides the fact that each simulation is costly with regard to the required level of accuracy, the results are not very satisfying, for random parameter combinations lead to unrealistic conditions.
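The sketch below gives the flavor of the Monte Carlo approach just described: process-related parameters are drawn at random, a device parameter is computed from them through a toy model, and the spread of the result is estimated. The model, the parameter names and the numerical values are invented; in particular, drawing the parameters independently is exactly the weakness noted above, since it ignores their correlations and so produces unrealistic combinations.

# Minimal sketch: Monte Carlo estimation of a device parameter spread from
# fluctuations of process parameters. Model and values are illustrative only.

import random
import statistics

random.seed(1)

def sample_process():
    """Draw process parameters independently (ignoring correlations, as noted)."""
    oxide_thickness_nm = random.gauss(100.0, 5.0)    # nominal 100 nm, sigma 5 nm
    channel_length_um = random.gauss(5.0, 0.25)      # nominal 5 um, sigma 0.25 um
    return oxide_thickness_nm, channel_length_um

def device_gain(oxide_thickness_nm, channel_length_um):
    """Toy model: a transconductance factor inversely proportional to tox and L."""
    k = 4.0e4                                         # arbitrary scaling constant
    return k / (oxide_thickness_nm * channel_length_um)

samples = [device_gain(*sample_process()) for _ in range(10_000)]
mean = statistics.mean(samples)
sigma = statistics.stdev(samples)
print(f"gain factor: mean = {mean:.1f}, sigma = {sigma:.1f} "
      f"({100 * sigma / mean:.1f} % relative spread)")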


For several years, a second type of approach has been followed; two analysis programs are used concurrently:
. the first one provides, from physico-chemical data and with good accuracy, a prediction of the diffusion profiles
. the second one provides, from the results obtained with the first one and from geometrical data, the nominal values and the distributions of the electrical parameters of a given component model.

In practice, one looks for a set of independent parameters from which the remaining parameters are computed, while taking into account their correlations with the first ones. The independent parameters are determined by means of a statistical analysis of experimental samples. Several such programs addressing different technologies have been developed: SUPREM (4, 38), SITCAP (24), BIPOLE (82). In practice, the results obtained with them are considered satisfying. The sequence of the different analyses is schematized as follows:

[Flow diagram: description of the physico-chemical process -> diffusion profiles -> parameters of the component model -> sensitivities, correlations, statistical distributions, sensitivity to fluctuations in the component parameters, nominal values -> optimization -> extreme performances; the results serve both the component designer and the circuit designer.]

By using these programs, the circuit designer, especially if he is close to the component manufacturer, has a powerful tool for validating the component models and parameter values used in analog simulation.


9. CONCLUSION

After this brief survey of the different features of LSI design verification, one may wonder how durable current CAD tools will be with respect to imminent VLSI. Several important facts must be mentioned:
. With respect to their structures and capabilities, present CAD tools are patently better adapted to a linear sequence of the steps in IC design; in fact, changes in structure and implementation require questioning decisions made at earlier steps in the development process. In practice, the optimal design of complex circuits requires making choices during the early design phases while keeping a global view of all the other steps. By no means is it realistic to force designers to apprehend all the aspects of the design simultaneously.
. The analyses that are performed are microscopic rather than macroscopic.
. Available tools are rather varied and ill-matched; selection criteria are not well formalized nor known to users.
. Communications among the various analyses that must be coordinated in order to guarantee exhaustiveness of design verification are most often difficult and little automated.
. Program form and method of use lack versatility, to the point of deterring potential users.
. Due to their combinatorial nature, some analyses cannot be performed economically, or even realistically, unless either new design procedures simplifying the problems at the start are adopted, or restrictions with respect to exhaustiveness or accuracy are accepted. For several analyses, it is demonstrated that their complexity, in the mathematical sense, makes it illusory to search for magic algorithms; one has to be satisfied with heuristics only.

In the quest for a better adequacy of CAD tools to the foreseeable needs of VLSI, one may note several trends:
. Integration of the different CAD tools into a same consistent set favoring a design procedure that allows global feedback from one design step to another.
. Automatic communication among the different tools by means of a central data base (14) accessible at each design step and provided with means for automatic translation among the different levels of description of a same circuit.
. Hierarchical structure of the analysis tools: verification of specifications, overall performances, functional, logic, electrical and timing simulations, testability analysis, etc.
. Capabilities for macro-modeling and macro-simulation at the different levels of description (functional, logical, and electrical).
. Allowing for different modes of description and simulation for a same analysis. For instance, such a capability corresponds to the development of "hybrid" simulators allowing the simulation of circuits where several sub-circuits are described at different levels: functional, logical, electrical, temporal, thermal, graphical.
. Capabilities for symbolic analysis at different levels of description (functional, logical, and electrical). This type of analysis, still at the research level, is particularly favorable to exhaustive verification and to the use of techniques for proof of design correctness.


. Simplifying analyses by modifying design procedures at inception so as to avoid prohibitive combinatorics (e.g. symbolic simulation, symbolic layout).
. Better explicit statement of the criteria for selection among the various possible CAD tools.
. Improving the flexibility and accessibility of tools through the general use of minicomputers and intelligent terminals; analysis algorithms have to be modified accordingly.

It is only at the price of such improvements that VLSI needs will be met; work in these directions is already well in progress, as witnessed by the appended bibliography of recent works.


References

[1] S.K. Abdali (1971): On proving sequential machine designs, IEEE Transactions on Computers, vol. C-20, no. 4, December 1971, pp. 1563-1566
[2] M. Abramovici, M.A. Breuer, and K. Kumar (1977): Concurrent fault simulation and functional level modeling, Proceedings of the 14th Design Automation Conference, June 1977, pp. 128-137
[3] G. Alia, P. Ciompi, and E. Martinelli (1978): LSI components modelling in a three-valued functional simulation, Proceedings of the 15th Design Automation Conference, June 1978, pp. 428-438
[4] D. Antoniadis, S.E. Hansen, and R.W. Dutton (1978): SUPREM II - a program for IC process modeling and simulation, Technical Report no. 5019-2, Integrated Circuits Laboratory, Stanford University, June 1978
[5] H.S. Baird and Y.E. Cho (1975): An artwork design verification program (ARTCON), Proceedings of the 12th Design Automation Conference, June 1975, pp. 414-420
[6] H.S. Baird (1977): A survey of computer aids for IC mask artwork verification, Proceedings of the 1977 IEEE International Symposium on Circuits and Systems, April 1977, pp. 441-445
[7] H.S. Baird (1978): Fast algorithms for LSI artwork analysis, Journal of Design Automation and Fault-Tolerant Computing, vol. 2, no. 2, pp. 179-209, May 1978
[8] P. Balaban and J. Golembeski (1975): Statistical analysis for practical circuit design, IEEE Transactions on Circuits and Systems, vol. CAS-22, no. 2, February 1975, pp. 100-108
[9] J.-C. Bertails and J. Zirphile (1977): A standardized approach for the reduction of LSI design time and automatic rule checking, IEEE Journal of Solid-State Circuits, vol. SC-12, no. 4, pp. 433-436, August 1977
[10] B.L. Biehl (1978): Machine independent minicomputer circuit simulation, Proceedings of the 1978 IEEE International Symposium on Circuits and Systems, pp. 886-887
[11] G.R. Boyle (1978): SIMPIL - a simulation program for injection logic, Proceedings of the 1978 IEEE Symposium on Circuits and Systems, pp. 890-894
[12] M.A. Breuer (1972): Design automation of digital systems, vol. 1: Theory and techniques, Prentice Hall, 1972
[13] M.A. Breuer (1974): The effects of races, delays, and delay faults on test generation, IEEE Transactions on Computers, vol. C-23, no. 10, pp. 1078-1092, October 1974
[14] M.A. Breuer (1975): Digital system design automation: Languages, simulation and data bases, Computer Science Press Inc., Woodland Hills, California
[15] E.M. Butler, E. Cohen, M.J. Elias, J.J. Golembeski, and R.G. Olsen (1977): CAPITOL - circuit analysis program including tolerancing, Proceedings of the 1977 IEEE Symposium on Circuits and Systems, pp. 570-574
[16] E.M. Butler (1977): Macromodels for switches and logic gates in circuit simulation, Proceedings of the IEEE International Symposium on Circuits and Systems, pp. 692-695
[17] H.D. Caplener and J.A. Janku (1973): Improving modeling of computer hardware systems, Computer Design, vol. 12, no. 8, pp. 59-64, August 1973
[18] H.D. Caplener and J.A. Janku (1974): Top-down approach to LSI system design, Computer Design, August 1974, pp. 143-148
[19] F.Y. Chang (1978): Pseudo statistical analysis of LSI design, Digest of the IEEE Solid-State Circuit Conference, February 1978
[20] B.R. Chawla, H.K. Gummel, and P. Kozak (1975): MOTIS - an MOS timing simulator, IEEE Transactions on Circuits and Systems, vol. CAS-22, no. 12, December 1975, pp. 901-910
[21] R.C. Chen and J.E. Coffman (1978): MULTI-SIM - a dynamic multi-level simulator, Proceedings of the 15th Design Automation Conference, June 1978, pp. 386-391
[22] B.J. Crawford (1975): Design rule checking for integrated circuits using graphical operators (program DRC), Proceedings of the Second Annual Conference on Computer Graphics and Interactive Techniques - SIGGRAPH 75, June 1975, pp. 168-176


[23] B.J. Crawford, D.R. Clark, A.G. Heninger, and R.S. Clary (1978): Computer verification of large scale integrated circuit masks, COMPCON Spring 1978, pp. 132-135
[24] H.J. De Man and R. Mertens (1973): SITCAP - a simulator of bipolar transistors for computer aided circuit analysis programs, ISSCC Digest of Technical Papers, February 1973, pp. 104, 105, 205
[25] H. De Man (1977): Adequacy of models to simulation programs and introduction to macromodeling, Journées d'Electronique on Modeling Semiconductor Devices, Lausanne, Switzerland, October 18-20, 1977
[26] H. De Man (1977): The use of Boolean controlled elements for macro-modeling of digital circuits, Journées d'Electronique on Modeling Semiconductor Devices, Lausanne, Switzerland, October 18-20, 1977; also Proceedings of the 1978 IEEE Symposium on Circuits and Systems, pp. 522-526
[27] I. Dobes and R. Byrd (1976): The automatic recognition of silicon gate transistor geometries - an LSI design aid program, Proceedings of the 13th Design Automation Conference, June 1976, pp. 327-335
[28] R.W. Dutton et al. (1977): Correlation of fabrication process and electrical device parameter variations, IEEE Journal of Solid-State Circuits, vol. SC-12, no. 4, August 1977, pp. 349-355
[29] E.B. Eichelberger and T.W. Williams (1977): A logic design structure for LSI testability, Proceedings of the 14th Design Automation Conference, June 1977, pp. 462-468
[30] N.J. Elias (1975): A tolerancing program for practical circuit design, Digest of the 1975 IEEE International Solid-State Circuit Conference
[31] G. Estrin (1977): Modeling for synthesis - the gap between intent and behavior, Proceedings of the Symposium on Design Automation and Microprocessors, February 24-25, 1977, IEEE Publication 77 CH1189-0C, pp. 54-59
[32] S.P. Fan, M.Y. Hsueh, A.R. Newton, and D.O. Pederson (1977): MOTIS-C - a new circuit simulator for MOS LSI circuits, Proceedings of the 1977 IEEE International Symposium on Circuits and Systems, pp. 700-703
[33] R.S. Fenchel (1977): SARA user's manual (System ARchitects Apprentice), Computer Science Department, University of California, Los Angeles, California, January 1977
[34] J. Fong and C. Pottle (1977): Simulation of a parallel microcomputer system for circuit design, Proceedings of the 1978 IEEE International Symposium on Circuits and Systems, pp. 131-134
[35] D.L. Fraser and S.W. Director (1978): Model selection for computer simulation of digital MOSFET LSI circuits, Electronic Circuits and Systems, vol. 2, no. 2, March 1978, pp. 39-46
[36] R.I. Gardner (1977): Multi-level modeling in SARA, Proceedings of the Symposium on Design Automation and Microprocessors, February 24-25, 1977, IEEE Publication 77 CH1189-0C, pp. 63-66
[37] M. Glesner (1978): New macromodelling approaches for the simulation of large scale integrated circuits, Proceedings ECCTD, September 1978
[38] A.G. Gonzalez, S.R. Combs, R.W. Gill, and R.W. Dutton (1975): Fabrication process modeling applied to IC NPN transistors using a minicomputer, in Proc. of the Int. Electron. Conv., Sydney, Australia, Paper D-2568, August 25-29, 1975
[39] P.R. Gray and R.G. Meyer: Analysis and design of analog integrated circuits, J. Wiley
[40] J.W. Grundman and S.C. Bass (1978): Probabilistic analysis of digital networks, Proceedings of the 1978 IEEE International Symposium on Circuits and Systems, pp. 527-531
[41] H. Halliwell and J.P. Roth (1974): System for computer design, IBM Technical Disclosure Bulletin, vol. 17, pp. 1517-1519, 1974
[42] R.W. Hartenstein (1977): Fundamentals of structured hardware design - a design language approach at register transfer level, North-Holland, 1977
[43] R.A. Harrison and D.J. Olson (1971): Race analysis of digital systems without logic simulation, Proceedings of the 8th Design Automation Workshop, pp. 82-94, June 1971


[44] R.B. Hayter, P.S. Wilcox, H. Rombeek, and D.M. Caughey (1978): Standard cell macromodels for logic simulation of custom LSI, Proceedings of the 1978 IEEE International Symposium on Circuits and Systems, pp. 1108-1112
[45] E.L. Hepler and C.A. Papachristou (1977): A logic simulator for MSI, LSI, and microcomputer systems, Proceedings of the 1977 IEEE Conference on Microcomputers, pp. 220-226
[46] M.H. Heydemann (1977): A general purpose circuit simulator efficient through sparse tableau and input processing, Proceedings of the 1977 IEEE Symposium on Circuits and Systems, pp. 118-121
[47] M.H. Heydemann (1978): Functional macromodeling of electrical circuits, Proceedings of the 1978 IEEE Symposium on Circuits and Systems, pp. 532-535
[48] H. Hoehne and R. Piloty (1975): Design verification at the register transfer language level, IEEE Transactions on Computers, vol. C-24, no. 9, September 1975, pp. 861-867
[49] E.P. Hsieh, R.A. Rasmussen, L.J. Vidunas, and W.T. Davis (1977): Delay test generation, Proceedings of the 14th Design Automation Conference, New Orleans, June 20-22, 1977, pp. 486-491
[50] H.Y. Hsieh and N.B. Rabbat (1977): Computer-aided design of large networks by macromodular and latent techniques, Proceedings of the 1977 IEEE Symposium on Circuits and Systems, pp. 688-691
[51] H.Y. Hsieh, N.B. Rabbat, and A.E. Ruehli (1978): Macromodeling and macrosimulation techniques, Proceedings of the 1978 IEEE International Symposium on Circuits and Systems, pp. 336-339
[52] M.Y. Hsueh and D.O. Pederson (1977): An improved circuit approach for macromodeling digital circuits, Proceedings of the 1977 IEEE International Symposium on Circuits and Systems, pp. 696-699
[53] M.Y. Hsueh, A.R. Newton, and D.O. Pederson (1978): The development of macromodels for MOS timing simulators, Proceedings of the 1978 IEEE Symposium on Circuits and Systems, pp. 345-349
[54] B. Infante, D. Bracken, B. McCalla, S. Yamashoki, and E. Cohen (1978): An interactive graphics system for the design of integrated circuits, Proceedings of the 15th Design Automation Conference, June 1978, pp. 182-187
[55] L.C. Jensen and D.O. Pederson (1978): MICE - a minicomputer integrated circuit emulator, 1978 European Conference on Circuit Theory and Design, Lausanne, Switzerland, September 4-8, 1978
[56] I.I. Kirkpatrick and N.R. Clark (1966): PERT as an aid to logic design, IBM Journal of Research and Development, vol. 10, no. 2, pp. 135-141, March 1966
[57] J. Lecarpentier (1975): Computer-aided synthesis of an IC electrical diagram from mask data, Digest of the 1975 IEEE International Solid-State Conference, pp. 84-85
[58] Y.H. Levendel and W.C. Schwartz (1978): Impact of LSI on logic simulation, COMPCON Spring 1978, pp. 102-119
[59] D. Lewin (1977): Computer-aided design of digital systems, Crane Russak, New York
[60] B.W. Lindsay and B.T. Preas (1976): Design rule checking and analysis of IC mask designs, Proceedings of the 13th Annual Design Automation Conference, June 1976, pp. 301-308
[61] P. Losleben (1975): Design validation in hierarchical systems, Proceedings of the 12th Design Automation Conference, Boston, June 1975, pp. 431-438
[62] B. Magnhagen (1976): A high performance logic simulator for design verification, Proceedings of the 1976 Summer Computer Simulation Conference, July 1976, pp. 724-726
[63] B. Magnhagen (1977): Practical experiences from signal probability simulation of digital designs, Proceedings of the 14th Design Automation Conference, pp. 216-219, June 1977
[64] B. Magnhagen (1977): Probability-based verification of time margins in digital designs, Linköping Studies in Science and Technology - Dissertations no. 17, Linköping University, Sweden, September 1977


[65] M. Malek and A.K. Bose (1978): Functional simulation and fault diagnosis, Proceedings of the 15th Design Automation Conference, June 1978, pp. 340-346
[66] J. Michard, X.H. N'Guyen, and P. Zamansky (1977): VISTA - un système d'aide au tracé de circuits intégrés, International Conference on Microlithography, Paris, June 21-24, 1977
[67] C.L. Mitchell and J.M. Gould (1974): MAP - a user-controlled automated mask analysis program, Proceedings of the 11th Design Automation Workshop, June 1974, pp. 107-118
[68] C.L. Mitchell (1975): MAP - Mask Analysis Program, M & S Computing Inc., Report N76-17855, October 21, 1975
[69] A.R. Newton, J.D. Crawford, and D.O. Pederson (1977): A proposal for a unified input syntax for CAD programs, University of California at Berkeley, October 14, 1977
[70] A.R. Newton and D.O. Pederson (1978): A simulation program with large-scale integrated circuit emphasis, Proceedings of the 1978 IEEE International Symposium on Circuits and Systems, pp. 1-4
[71] A.R. Newton and D.O. Pederson (1978): Hybrid simulation for LSI design, ELECTRO 78
[72] W.T. Overman and G. Estrin (1977): Developing a SARA building block - the 8080, Proceedings of the Symposium on Design Automation and Microprocessors, February 24-25, 1977, IEEE Publication 77 CH1189-0C, pp. 77-86
[73] M. Perkowski (1978): The state-space approach to the design of a multipurpose problem solver for logic design, IFIP Working Conference on "Artificial Intelligence and Pattern Recognition in Computer-Aided Design", Grenoble, March 1978, to appear (North-Holland)
[74] D.S. Pilling and H.B. Sun (1973): Computer-aided prediction of delay in LSI logic systems, Proceedings of the 10th Design Automation Workshop, June 1973, pp. 182-186
[75] B.T. Preas, B.W. Lindsay, and C.W. Gwyn (1976): Automatic circuit analysis based on mask information, Proceedings of the 13th Annual Design Automation Conference, June 1976, pp. 309-317
[76] Y. Puri (1977): A Monte Carlo based circuit-level methodology for algorithmic design of MOS LSI static random logic circuits, IEEE Journal of Solid-State Circuits, vol. SC-12, no. 5, October 1977, pp. 560-565
[77] J.-C. Rault (1978): A bibliography on the logical simulation of digital systems, THOMSON-CSF Internal Report (600 entries)
[78] H. Rombeek and R.E. Thomas (1975): Electrical simulation of LSI cellular components, Proceedings of the 1975 IEEE Electrical Engineering Conference, Canada (program NANSIM)
[79] L.M. Rosenberg and C. Benbassat (1974): CRITIC - an integrated circuit design rule checking program, Proceedings of the 11th Design Automation Conference, June 1974, pp. 14-18
[80] J.P. Roth (1977): Hardware verification, IEEE Transactions on Computers, vol. C-26, no. 12, December 1977, pp. 1292-1294
[81] J.P. Roth (1973): VERIFY (a design verifier), IBM Technical Disclosure Bulletin, vol. 15, no. 8, January 1973, pp. 149-151
[82] D.J. Roulston, S.G. Chamberlain, and J. Sehgal (1972): Simplified computer-aided analysis of double diffused transistors including two dimensional high level effects, IEEE Transactions on Electron Devices, vol. ED-19, pp. 809-820, June 1972
[83] G. Russel (1978): Automatic mask function checking of LSI circuits, Proceedings of CAD 78, Brighton, Sussex, England, March 1978, pp. 182-194
[84] H. Shichman (1978): A multilevel simulation strategy, ELECTRO 78
[85] J.J. Shedletsky (1978): Delay testing LSI logic, IEEE International Fault-Tolerant Computing Symposium, June 20-22, 1978, Toulouse, pp. 159-164
[86] J.D. Stauffer (1978): LCL - a compiler and language for logical mask checking, SANDIA Corp. Report SAND 77-2031, March 1978
[87] T.M. Storey and J.W. Barry (1977): Delay test simulation, Proceedings of the 14th Design Automation Conference, New Orleans, June 20-22, 1977, pp. 492-494


[88] L. Szanto (1978): Network recognition of an MOS integrated circuit from the topography of its masks, Computer Aided Design, vol. 10, no. 2, pp. 136-140, March 1978
[89] M. Tokoro, M. Sato, M. Ishigami, E. Tamura, T. Ishimitsu, and H. Ohara (1978): A module level simulation technique for systems composed of LSI's and MSI's, Proceedings of the 15th Design Automation Conference, June 1978, pp. 418-427
[90] T.J. Wagner (1977): Hardware verification, Ph.D. Dissertation, Report no. STAN-CS-77-632 and no. AIM-304, Computer Science Department, Stanford University, Stanford, California, September 1977; also report AD-A048684/SGA, September 1977
[91] N. Weste (1978): A color graphics system for IC mask design and analysis, Proceedings of the 15th Design Automation Conference, June 1978, pp. 199-205
[92] P. Wilcox, H. Rombeek, and D.M. Caughey (1978): Design rule verification based on one dimensional scans, Proceedings of the 15th Design Automation Conference, June 1978, pp. 285-289
[93] M.A. Wold (1978): Design verification and performance analysis, Proceedings of the 15th Design Automation Conference, June 1978, pp. 264-270
[94] Y.-M. Wong and C. Pottle (1976): Adaptation of circuit-simulation algorithms to a simple parallel microcomputer structure, Electronic Circuits and Systems, vol. 1, no. 1, pp. 27-32, 1976
[95] M. Yamin (1972): XYTOLR - a computer program for integrated circuit mask design checkout, Bell System Technical Journal, vol. 51, pp. 1581-1593, 1972
[96] T.K. Young and R.W. Dutton (1976): Mini-MSINC - a minicomputer simulator for MOS circuits with modular built-in models, IEEE Journal of Solid-State Circuits, vol. SC-11, no. 5, pp. 730-732, October 1976
[97] T.K. Young, L.K. Scheffer, D.B. Estreich, and R.W. Dutton (1978): Macromodeling of IC structures, Proceedings of the 1978 IEEE International Symposium on Circuits and Systems, pp. 340-344
[98] IEEE Computer Magazine: Special issues on computer hardware description languages, vol. 7, no. 12, December 1974, and vol. 10, no. 6, June 1977


COMPUTER AIDED DESIGN THE PROBLEM OF THE 80'S MICROPROCESSOR DESIGN

Bill Lattin Intel Corporation Aloha, Oregon

The rapid evolution of semiconductor technology continues to make possible very sophisticated electronic systems on a single silicon chip. At present projections, in 1982 a single silicon chip may have over 100,000 transistors. The problem that this technology evolution presents is how to design, lay out and check this level of complexity. Unless there is a major breakthrough in Computer Aided Design, this level of complexity will go unused, in that the accurate design of 100,000-transistor chips would take 60 man-years of layout and 60 man-years of checking. At present design rates, it is clear that the major problem for the 1980's is to devise new layout and checking CAD tools so that the semiconductor technology, with all its density, will be usable by the electronic community.

The rapid evolution of semiconductor technology is the major force which is motivating microprocessor manufacturers to take a second look at their internal design methods. The technology has increased the complexity on a chip by a factor of 4 in the last two years, but the design methods have not changed in the last six or seven years. This means that it now takes more and more of a manufacturer's resources to design each chip. In addition to the amount of resources, the actual time to design, debug and transfer a complex microprocessor to production has increased at the same rate as the chip complexity. The challenge for manufacturers of LSI devices in the future is how to reduce the design cost of the product and how to reduce the actual time from design to volume production. This paper will focus on just one element of the design cycle -- "Layout". This is the most costly portion of the design cycle as well as the most error-prone.

Figure 1 shows the historical density improvement for microprocessor technology. The complexity of microprocessors at the chip level has grown exponentially for the last few years. By using the number of active transistors on a chip as a general parameter of complexity and plotting it against the year of introduction of that microprocessor, one can get a glimpse into the future. This view indicates that the largest component of the design cycle will remain layout. It could even be stated that layout will become an increasing portion of the cost and possibly become the limiting factor.

170

. LATTIN

Figure 1 DEVICES PER CHIP VLSI - MOS TECHNOLOGY

10.000K __

YEAR OF INTRODUCTION

At the present time, the productivity of an average layout designer is between 5 to 10 devices per day. This includes the time to draw, check and redraw. A wide variety of layout techniques fit within this range of productivity, such as interactive graphics or manual draw and digitize. Figure 2 is then taken from Figure 1 by using the number of transistors that the technology will provide and translating into man years of layout effort, assuming each layout designer can achieve a productivity of 10 transistors per day. With this level of productivity, a complex microprocessor in 1982 will take 60 man years to layout. What this means is that the technology will have outrun the manufacturer's ability to use it -- at least for complex systems design. That is not to say that the technology will go unused since increasingly dense memory chips can and will make use of this technology, but large complex microprocessors will have been limited by the layout portion of the design cycle.

CAD : THE PROBLEM OF THE 80'S MICROPROCESSOR DESIGN

171

Figure 2 MAN YEARS FOR PLANNING, DRAWING, CHANGING AND CHECKING OF RANDOM LOGIC

60. 50. 40. 30. 20. 10.

72

74
YEAR

The solution to limitation will depend on the microprocessor manufacturer's ability to alter his design methods and develop CAD tools to increase layout productivity to keep pace with the rapid evolution of semiconductor technology.

G. Uusgraut, editor, COMPUTER-AIDED DESIGN oi digital electronic circuits and systems North-Holland Publishing Company ECSC, EEC, EAEC, Brussels S Luxembourg, 1979

USER EXPERIENCE IN SIMULATION AND TESTING C. Gaskin, Litef, Freiburg, West Germany In early spring of 197 2 . due to increased usage of digital logic, it became clear that the digital test capabilities at LITEF would not match our requirements for the near future (3 years). LITEF did a study of the market to determine how to meet these requirements. The basic question had to be resolved make or buy and if buy, what. Over and above the question of in house requirements was the question of which direction was the industry as a whole taking. At that point in time it was clear, at LITEF, that the hardware was important, however the hardware to software cost was at least 1:5 if not 1:9, so the importance was placed on software with a firm requirement for a subset of ATLAS at the least. It was further clear that the generation of a high quality (95$) test program from hand was not economically feasible. The test program simulators and or generators were either too large or too expensive to be considered for LITEF at that point in time. Our solution to the problem of testing digital logic in this time frame was the purchase of a commercially available automatic test system which used an adapted subset of ATLAS as a programming language. The goals to be achieved with the introduction of the Texas Instrument ATS 96O were the following: 1. 2. 3. k. 5. Improved product quality Increased testing capacity Improved repeatability of results Reduced skill levels Management information gathered. 173

174

C. GASKIN These goals could only be met as the result of a total system approach greatly dependent on the software used. Due to the high quality requirements of the test programs, which could only be evaluated by a computer simulation, we had to purchase this service from the U.S. industry. LITEF purchased fifty (50) test program sets from Texas Instruments in Dallas and Pacific Applied Systems in California over the course of the next 3 years. This was in fact a good arrangement as the costs were reasonable, an average price of two thousand five hundred (2,500) Dollars U.S. per program or 125,000 S U.S. total purchase price. Over and above this it required approximately four (h) to six (6) manweeks of effort for the checkout, quality control acceptance, and documentation control effort in house per program. These test programs covered 95$ of all SA1 and SAO IC pin faults with the average board complexity of 50 IC's (25OO nands) with about 150 input output pins. The problems incurred with this solution were for long term solution not acceptable and it was clear that LITEF required an in house capability of automatic test program generation. The major problems were: 1. 2. 3. k. 5. 6. Schedules dictated by other companies Costly in house effort required (k-6 to test programs Turn around times for ECO's expensive Test program quality not verified Inaccurate hardware documentation. manweeks) Very costly engineering change order (ECO)

USER EXPERIENCE IN SIMULATION AND TESTING


Most of these problems are self explanatory however I would like to point out a number of possibly not so apparent problems in conjunction with ECO's. There is a fixed cost and a variable cost associated with ECO both in house and out of house. The sad but true situation is the fixed cost in 90$ of all the instances is much higher than the variable cost. A list of these costs (Figure 1) makes it apparent why this situation exists. Tasks Change Definition Cost estimate and Handling Engineering evaluation of change Model change Pattern generation and resimulation Program checkout Quality control and documentation control k-6 X (1 wks) wks average 1 wk X (2-^ wks) X X (2 day) X (1 day) X (2 day) Fixed cost Variable cost X

175

Figure 1. The high cost of executing ECO's (see Figure 2) plus the fact that in the normal life cycle of a new design LITEF averages four (h) ECO's per board type results in the maintenance cost of programs being as large if not larger than the original cost. This coupled with the fact that the turn around time for out of house ECO's is approximately the same as for new generation defines a major problem associated with utilizing an out-of-house service for test program generation.

176

C. GASKIN
OUT-OF-HOUSE VS IN-HOUSE LIFE CYCLE COST 50 Test Programs Purchase Price In house effort Out-of-house Life cycle cost per program SMC 3103 purchase price (5 yr write off) Test engineers In-house life cycle cost per program 10,000 $ US Figure 2. 50,000 $ US 25O man weeks ^00 man weeks 17,000 $ US Original cost 125,000 $ US 25O man weeks ECO cost 150,000 $ US 1000 man weeks

These problems plus the new integrated circuit technology required that we have an in house capability of Automatic Test Program Generation. LITEF has in the past four that are on the world market. One of the important discoveries we made was that it was less a question of cost but more a question of capabilities that determined the choice. (k) years investigated the major commercially available systems

USER EXPERIENCE IN SIMULATION A N D TESTING The p r i m e f a c t o r s 1. 2. 3. k. 5. for d e t e r m i n a t a t i o n were as follows:

177

Highly automatic test program generation Ease of handling ECO's Defined quality of Test Programs Fast turn around times Cost reduction

Naturally the system must demonstrate that it meets the basic requirements to be considered. After two years experience using the system, we selected a SMC 3103 with D-LASAR Software from Scientific Machine Corp. of Dallas, Texas. The following goals have been achieved : 1. 3. 3. k. 5. 6. 7. 8. Highly automatic test program generation Very easy handling of ECO's Excellent definition of program quality Extremely short turn around Very large reduction in costs Design verification Improved documentation Automatic schematic drawing

As you can see we have achieved more than originally planned. Over and above this we have a greater level of achievement in each area than originally planned. The last three points have allowed a new organization which reduces the work load in the ATPG area and at the same time improves the quality of the design thus reducing hardware integration time. This new capability has made way for the new organization flow as shown in Figure 3.

178

C. GASKIN

FUNCTIONAL Design Inputs TECO Deck

FLOW

ORGANIZATION ,

Engineer Sketch LASAR Model

DESIGN D LA S A R VERIFICATION

Predicted Responses

Timing Info

L
NO/<

ood

NO '

ATPG

Drafting

<^95<

N O

Hardware

Automatic Test System

Finished Product FIGURE 3

USER EXPERIENCE IN SIMULATION AND TESTING D-LASAR is used in two separate modes to help the designer verify the logic and the timing of his design before it becomes hardware. The first mode allows the designer to specify input patterns he expects to encounter and then D-LASAR shows him the logic response.This is in the form of

179

a timing diagram which is easy for the engineer to interpret and design changes at this point are quick and painless. Once this mode has been successfully completed then D-LASAR slectes it's own inputs to detect 95$ of all defined fault classes and by varying the circuit response plus and minus 30$ observes whether critical timing problems occur within the defined logic. Timing problems discovered fall into two catagories, situations which cannot arise at system level and those which can. All timing problems are discussed with the logic designer and situations which fall into the first catagory are either ignored or prevented from occurring in future runs. Those which can occur are corrected by design change before proceeding. The reduction in our work of ATPG is realized by being able to determine test access inter-actively before the design is frozen. This also allows much shorter test sequence to achieve the same fault coverage. Directing your attention to Figure 2, we can see where the items 2-5 in the achievements list are documented. We can see that the man power required for out-of-house vs in-house ECO's is 2.5:1 but this alone is not the complete story. The manpower requirement however also determines the turn around minimum time and reducing this requirement to 2 man weeks allows a very short turn around. The manpower cost is at the same time the largest portion of the total cost (9:1). The major factor in manpower reduction is the very high quality of the test program plus a good set of documentation. In other fault simulators the run time is so extensive that they are normally not used due to the cost and long turn around time thus blocking the system thus lower quality.

180

C. GASKIN
Further goals/requirements as seen by LITEF are a continuation of the present effort to bring it to a normal conclusion. After studying Fig. 3 one realizes that the loop is closed much too late to be easily corrected. In the time between the drafting effort, but before the film is drawn a verification must take place to ensure that at the ATS the hardware and its test program are based on the same design. An integrated design system where there is one input and many outputs all based on one central information source is the first major goal (see Figure k). The second major goal is improved design verification. The minimum improvement here is very accurate timing models and variable time simulation. This would necessarily include at least five (5) families of timing MOS, TTL, Low power TTL, Low power schottky TTL, and schottky TTL but a better solution would be the ability to model the real IC timing. The design verification should also encompass most other design parameters at the same time: fan-in, fan-out, ect. The third goal should be further cost reduction by general improvements and better man machine interface. During the next short term (2-3 years) LITEF sees the major problem areas as follows: 1. 2. 3. 4. Complete system simulation Large scale integration (LSI) modeling Long computer runs Accurate timing modeling

Now that we have freed the engineer of much of the design verification task, at the board level, we are relying on him to solve a much more difficult task, design verification at the system level, This is not very logical as we have seen, on a lower level, that it is very expensive if at all possible. It is clear that we require computer support of this task and we should not despair as the nand requirement will, in most cases, not exceed one hundred thousand (IOO.OOO) for a large digital system.

USER EXPERIENCE IN SIMULATION AND TESTING

1 8 1

INTEGRATED

DESIGN SYSTEM Engineer Change order

Engineer Sketch

<fComputer Model INTEGRATED DESIGN SYSTEM DESIGN Verification

STIMULI Generator

Complet Release

<^Complete>-

Interactive LAYOUT

Automatic Drafting TAPE

Parts List

Automatic Schem.Drawinc

Placement Drawing

Automatic Test Program

FIGURE U

182

C. GASKIN
The next three items are in fact all self impacting. That is to say as more and more LSI's are used more accurate timing is required and longer run times result. This complex of problems must be approached as a many-sided single problem to realize a reasonable compromise. Not only are the answers to these problems, to be found bymanufacturere of such tools but the IC designer and manufacturer must be brought into the loop in order to achieve a reasonable solution. A further impacting parameter is the so-called small improvement that IC manufacturers make and is first discovered with a new lot of IC's with the same marking as before. The fact that they function differently is all too clear, the question is why. In summary, LITEF's experience with introducing an Automatic Test Program Generation System, has met and exceeded original estimates of productivity and cost improvement. The task is not complete today, nor are presently aware of a commercial system that would meet our requirements for an integrated design system in the next short-term period.

G. Musgrave, editor, COMPUTER-AIDED DESIGN o digital ele c troni c c ir c uits and systems North-Holland Publishing Comapny ECSC, EEC, EAEC, Brussels 6 Luxembourg, 1979

DEVELOPMENT OF A D I G I T A L TEST GENERATION SYSTEM

Paul E. Roberts and K e i t h T. Wolski Scientific Machines Corporation 2612 Electronic Lane Dallas, Texas 75220

As the complexity of digital c i r c u i t s grew, requirements for a computeraided digital test generation system became very apparent. In 1969, at LTV Aerospace Corporation in D allas, Texas, an e f f o r t was i n i t i a t e d to develop a simulation and fault analysis program to aid in the production of tests for the sophisticated digital avionics of the A7E weapon system. The system of programs, LASAR (Logic Automated Stimulus And Response), began its growth to become the most complete system of its kind in existence. A solid foundation was required to allow for the large variety of processes necessary in the complete system. The most important element in the LASAR system is the use of the N A N D gate as the basis for all c i r c u i t models. An a t t e m p t at a functional type system proved too cumbersome and incomplete. The NAND equivalent approach made the description of devices very accurate and the processes of the system much more manageable. Hence, the system would not have to be modified as new devices became available. The threestate simulator was developed quite readily since all circuits were of only one component t y p e , the N A N D gate. A f t e r comparing simulator results to actual c i r c u i t results, it became apparent that a t i m i n g analysis was necessary. The t i m i n g analysis had t o consider gate tolerances and tester skew. The simulator then became a very valuable part of the system. With the simulator, an engineer could develop a reliable test, but the test quality was not known. A fault analysis had to be implemented. The simulator could be used to simulate faults. So the simulator was modified t o do the fault analysis. The fault analysis included simulation of all N A N D gate outputs stuckatone and stuckatzero and all N A N D gate inputs open. Also included were all c i r c u i t inputs stuck high and low. This set of faults was chosen because each N A N D and N A N D junction represents some function of the device or c i r c u i t , and to test the c i r c u i t each function should be v e r i f i e d . Many engineers were shocked to discover that tests thought to functionally exercise the c i r c u i t did not test it very well at a l l . The N A N D gatelevel f a u l t analysis opened many people's eyes to the magnitude of the test generation task. An automatic stimulus generator was badly needed. For many c i r c u i t s , the task of developing a thorough test was not economically feasible and almost not humanly possible. The need for a stimulus generator was known f r o m the beginning, but its development was accelerated at this phase of the program. Since all faults must be detected at the c i r c u i t outputs, a reverse trace f r o m the c i r c u i t outputs was implemented. C r i t i c a l paths would be sensitized f r o m the c i r c u i t outputs to include as many faults as could be detected by this method. All others would by definition be undetectable. This method guarantees a thorough test. With all of the mentioned elements, the system was complete. The processes continue to undergo improvements of speed and capacity, but the basic system is meeting the test of time. The most common question asked today about gatelevel test generation systems is: "How can a system which expands circuits into NAND equivalent f o r m possibly manage circuits w i t h the largescale devices of today and tomorrow?" 
Microprocessors and large capacity memory devices produce c i r c u i t s containing tens and even hundreds of thousands of N A N D equivalents. The number of functions or possible faults to be analyzed approaches one

183

184

P.E. ROBERTS, .T. WOLSKI

m i l l i o n . The memory capacity, processing speed and mass storage capacity of the host computer are all stressed by these c i r c u i t s . Is this kind of analysis necessary? Is it possible? First let us determine what is necessary. The most important properties required in a digital test generation system are: 1. A u t o m a t i c Test Generation Today's and especially tomorrow's circuits are so large and complex that manual test generation is unfeasible, humanly impossible and/or too costly; Worst-Case Timing Analysis Device and tester tolerances must be considered to produce a test which w i l l not fail good boards. Tests without worst-case timing analysis cause costly t r i a l and error test program implementation and rejection of circuits which perform w i t h i n manufacturer's tolerances; and Detailed Test Quality Analysis Poor or unknown test quality is very costly. Only a detailed v e r i f i c a t i o n of all possible c i r c u i t functions is a reliable measure of test quality.

2.

3.

To date, only LASAR has demonstrated all these properties. The reason not apparent to other test generation systems is the use of the N A N D gate for all c i r c u i t models and processes. The gate-level c i r c u i t description contains a near minimum representation of the elements of a c i r c u i t required to perform all the operations listed above. When something less is used to represent a c i r c u i t , much of the necessary information is lost and consequently some of the required properties of the test generation system are lost. The temptation to use these other methods of representing circuits is fueled by the large and complex circuits of today. It must be accepted that the problem of test generation is not solved by a less complete method, only by improvement of the processes of the proven method. The SMC-3I00 (Scientific Machines Corporation-3100) A u t o m a t i c Test Generation F a c i l i t y currently contains up to 524,288 20-bit words of memory, typically 80 megabytes of mass storage disk and specially designed instructions for test generation. This system has successfully generated tests for circuits of , NANDs and 70,000 faults. Recent improvements to the LASAR program allow processing of circuits containing 30,000 NANDs and 100,000 faults. The complete analysis performed by LASAR on circuits of this size is unheard of on any other test generation system. It is obvious that the above l i m i t s must be increased. A number of items are necessary to accomplish the analysis of the super large circuits of today. A more powerful computer w i t h many times the capacity of most of today's minicomputers is needed. The processing t i m e must be decreased by higher speed devices and parallel processing. These are certainly achievable w i t h the same devices which are causing the problem. It is interesting that the same devices that are causing the problem can be used to solve the problem. Why t r y to solve the problem w i t h old tools? SMC is currently in the process of designing a computer dedicated to solve the test generation problem for these large c i r c u i t s . Many program algorithms are being carefully analyzed to determine faster methods and the possibility of dedicated computer hardware to increase processing speed. Many of the LASAR processes are very simple due to the use of the simple N A N D gate and, hence, lend themselves to high-speed methods. In particular, the fault analysis process as described by Armstrong computer hardware when only the N A N D gate need be considered. Consider the following situation: is easily implemented in

DEVELOPMENT OF A DIGITAL TEST GENERATION SYSTEM A C D

185

For each input to the N A N D gate there is an associated list of faults which would cause that input to fail to the opposite s t a t e . To compute the faults c r i t i c a l to the gate output, E, the following equation is used. C r i t i c a l Faults to E = AND[A,B,C,D] + E(SAO) This equation is very general and easily implemented in the computer hardware. In this manner, thousands of faults can be processed simultaneously. Similar operations exist for the processes of stimulus generation and simulation. Another area where processing can be made more e f f i c i e n t is in the fault analysis process. A study done at Siemens Corporation on random f a u l t sampling has shown that the quality of tests generated for a small sample of faults is only slightly less than that of the sample. This is understandable based on the mathematical theory of sampling. The computer t i m e and storage savings was considerable for these c i r c u i t s . The savings for larger c i r c u i t s is projected to be even more. Table I shows the results of the Siemens' study.

Table 1 Random Fault Sampling 1/4 Sample


N A N D Equivalents Gate Level Faults Test Patterns Percent CPU Time Of Total Percent Faults Detected Percent Faults Detected On Whole C i r c u i t

1/5 Sample

514 2176 752 100 88 544 459 60 86 436 459 . 55 85.09

85.47

85.27

Other circuits were analyzed w i t h the same process and the results were similar. Many test generation systems analyze IC pin faults. This is a sample, but, a very biased one which has 2 been shown in a number of studies to be unrepresentative of the t o t a l test quality. These are only a few of the many ideas to lower the cost of test generation without resorting to significant decreases in test quality and quality assurance. When analyzing the m e r i t s of any " t o o l " one must keep in mind what objectives that tool must accomplish. It is often surprising to us how often this simple rule is overlooked. Second order effects or nice features are always a welcome addition to any e f f e c t i v e tool but are a mere disguise to the ineffectual operation of that tool if the final objectives to be reached are compromised. As examples, could you be fooled into purchasing a c o m p u t e r . which calculated ten times faster than any other computer but whose answers were always

186

P.E. ROBERTS, .T. WOLSKI

incorrect; car which is easy to manuever but whose engine was in constant need of repair; or a compass whose accuracy is unparalleled but whose readout is illegible. In these three examples, the obvious objectives of accurate calculation, reliable transportation and knowledge of direction were misused. The objectives in generating digital test programs are also obvious; comprehensive, accurate and repeatable test results which w i l l aid in greatly reducing your cost to manufacture your products. SMC-LASAR does not comprise any of the necessary test requirements simply to make our job, the developer of LASAR, easier nor w i l l it in the f u t u r e . It has been suggested that the philosophy on which LASAR is based is not practical for tackling the high density circuits of t o m o r r o w . One point to be made is that only LASAR satisfies the quality test program requirements of today. This has been a problem t i m e and t i m e again in study after study by both m i l i t a r y and commercial users of all types of test equipment. Therefore, since LASAR stands as the technical leader of today, its developer has the greatest probability of meeting the requirements of the f u t u r e .

References

Douglas B. Armstrong, (1972), A Deductive Method For Simulating Faults in Logic C i r c u i t s , IEEE Transactions on Computers, Vol. C - 2 1 , No. 5. 2 F-16 Depot Support Equipment Final Engineering Report, CCP 5073, P a r t i i : F-16 A u t o m a t i c Test Program Generator Evaluation, C D R L A03H, (1977), General Dynamics,

Vol. I.

G. Uusgraue, editor, COMPUTER-AIDED DESIGN oi digital electronic circuits and systems North-Holland Publishing Company ECSC, EEC, EAEC, Brussels 6 Luxembourg, 1979

AN APPROACH TO A TESTING SYSTEM FOR LSI MR. H. E. JONES DR. R. F. SCHAUER IBM, DATA SYSTEMS DIVISION EAST FISHKILL, N.Y. - U.S.A. ABSTRACT This paper summarizes an approach to a testing system for LSI. Problema encountered with testing unconstrained designs in LSI are reviewed. These problems have led to the requirement for design techniques (or rules) in LSI which, when properly used, result in packages that can be readily tested in the design, manufacturing and field environment. The text explains the technique for level-sensitive design which can then be expanded into a level-sensitive scan design, LSSD.2,9,10 LSSD permits the partitioning of large sequential logic networks (required for normal machine operation) into smaller combinational logic networks which can then be readily tested using existing test generation techniques. The text also describes other important areas of this approach including, network subdivision,6 rules checking,3 testing of LSSD6 and interfaces to non-LSSD logic. 1. INTRODUCTION In the past, the logic designer had great flexibility in the way he used circuits to implement logic functions in machines such as CPU's, channels, and control units. This resulted in a variety of design implementations, many of which had dependencies on the ac characteristics .of the individual circuits. This flexibility sometimes led to unexpected timing problems, and complicated the testing. It had the advantage of allowing the designer to use every technique he knew to obtain the best performance with the fewest circuits. This approach was also supported in component manufacturing, since ac parameters such as rise time, fall time, and circuit delay could be readily tested. Thus, the design interface was well defined and reliably tested. With LSI, it will become impossible or impractical to test each circuit for all of the ac design parameters. Thus, the well-defined and reliably-tested circuit-to-circuit interfaces will no longer exist. Consequently, it is important to find methods of designing logic subsystems that have low sensitivity to these parameters. 187

188 2.

H.E. JONES, R.F. SCHAUER LEVEL SENSITIVE DESIGN A design method will be outlined here that will provide reliable operation without strong dependence on hard-tocontrol ac circuit parameters. This design method, called level-sensitive design, can be defined as follows: "A logic subsystem is level-sensitive if and only if the steady-state response to any allowed input state change is independent of the circuit and wire delays within the subsystem. Also, if an input state change involves the changing of more than one input signal, then the steady-state response must be independent of the order in which they change. (Steady-state response is the final value of all logic gate outputs after all change activity has terminated)." It is clear from this definition that level-sensitive operation is dependent on having only "allowed" input changes. Thus, a level-sensitive design method will, in general, include some restriction of how these changes occur. In the detailed design rules, these restrictions or rules on input changes are applied mostly to the clock signals. Other input signals have almost no restrictions on when they may change. A level-sensitive subsystem is assumed to operate as a result of a sequence of allowed changes in input state with enough time between changes to allow the subsystem to stabilize in the new internal state. This time duration is normally insured by means of clock signals that control the dynamic operation of the logic network. A principal objective in establishing design rules is to obtain logic subsystems that are insensitive to ac characteristics such as rise time, fall time, and circuit delay. Consequently, the basic storage element should be a level-sensitive device that does not contain a hazard or race condition. The polarity-hold latch meets these requirements, provided it is implemented properly. A hazard free polarity-hold latch in Figure 1, has two input signals. Its operation is as follows: When the clock signal, C, = 0, the latch cannot change state. When C = 1, the internal state of the latch is set to the value of the data input, D. Under normal operating conditions, the clock signal, C is 0 during the time when the data signal, D, may be changed. This prevents the changing of D from immediately altering the internal state of the latch. The clock signal, C, will normally occur (change to 1) after the data signal, D, has become stable at either a 1 or a 0. This

AN APPROACH TO A TESTING SYSTEM FOR LSI causes the latch to be set to the new value of the data signal at the time the clock signal occurs. The correct changing of the latch is not dependent on the rise or fall time of the clock signal, but only on the clock signal being 1 for a period equal to or greater than some time 0' where is the time required for the signal to propagate through the latch and stabilize. It will be shown later in this paper that the testing problem can be greatly simplified if level-sensitive polarity-hold latches are also capable of being operated in a shift register. A design for a polarity-hold shift register latch, SRL, is shown in Figure 2. It consists of two latches, LI and L2. As long as the shift signals A and are both 0, the LI latch operates exactly like a polarityhold latch. Terminal I is the input for the shift register, and L2 is the output. When the latch is operating as a shift register, data from the preceding stage are gated into the polarity-hold latch LI via I, by a change of the A shift signal to 1. After A has changed back to 0, the

189

( a )

IFi
(a)

(b) Figure 1 Hazard-free polarity-hold latch. (a) Symbolic representation, (b) Logic representation

(b)
Figure 2: Polarity-hold SRL Ca) Symbolic representation (b) Implementation in AND-INVERT gates

shift signal gates the data in the latch LI into the output latch, L2. A and can never both be 1 at the same time if the shift register is to operate properly. The modification of the polarity-hold latch, LI, to include shift capability requires adding a clocked input to the latch and a second latch, L2, to act as intermediate storage during shifting. The interconnection of the SRLs into a shift register is shown in Figure 3. The shift signals A and B, are connected in parallel, and the I (input) and +L2 (output) signals are strung together in a loop.

190 3. DESIGN STRUCTURE

H.E. JONES, R.F. SCHAUER

A specific set of design rules may now be defined to provide level-sensitive logic subsystems with a scannable design that will aid testing. 1) All internal storage is implemented in hazard-free polarity-hold latches as already described. 2) The latches are controlled by two or more non-overlapping clocks such that : a) a latch X may feed the data port of another latch Y if and only if the clock that sets the data into latch Y does not clock latch X.

b) A latch X may gate a clock C^ to produce a gated clock Cig which drives another latch Y if and only if clock C^g does not clock latch X, where C^ is any clock produced from C]_. 3) It must be possible to identify a set of clock primary inputs from which the clock inputs to SRLs are controlled either through simple powering trees or through logic that is gated by SRLs and/or nonclock primary inputs. In addition, the following rules must hold: a) All clock inputs to all SRLs must be at their off states when all clock primary inputs (PI) are held to their off states. b) The clock signal that appears at any clock input or an SRL must be controlled from one or more clock Pis such that it is possible to set the clock input of the SRL to an on state by turning any one of the corresponding Pis to its on state and also setting the required gating condition from SRLs and/or nonclock Pis. No clock can be ANDed with either the true value or the complement value of another clock.

c) 4)

Clock primary inputs may not feed the data inputs to latches, either directly or through combinational logic, but may only feed the clock input to the latches or the primary outputs.

A sequential logic network designed in accordance with Rules 1 through 4 will be level-sensitive. To simplify testing and minimize the primary inputs and outputs, it must also be possible to shift data into and out of the latches in the system. Therefore, two more rules must be followed: 5) All system latches are implemented as part of an SRL. All SRLs must be interconnected into one or

AN APPROACH TO A TESTIN G SYSTEM FOR LSI more shift registers, each of which has an input, an output, and shift clocks available at the terminals of the package. 6) There must exist some primary inputsensitizing condition, referred to as the scan state, such that: a) each SRL or scanout PO is a function of only the single preceding SRL or register during the shifting scanin PI in its shift operation; all clocks except the shift clock are kept off at the SRL inputs; any shift clock to an SRL may be turned on or off by changing the corresponding clock primary input for each clock.

191

b) c)

If these design rules are followed, a logic subsystem with two clock signals will have a structure as shown in Figure 4. It is evident from Figure 4, that the two clock signals partition the logic subsystem into two parts, each composed of a combinational network and a set of SRLs. Each of the combinational networks, N^, and N2, is a multipleinput, multipleoutput logic network. _ and P2 are primary inputs to the network and Z^ and Z2 are primary outputs. C ; j _ and C2 are the two system clock signals.

OUT_

IN

w
J X~
]

[LJ

> .

Chip

Chip

> *>

1 _

OUT

"
1
Chip

JL
Chip

Figure 3: SRL and (a) (b) interconnection at chip module Chip with three SRLs Module with four chips

IM

* j
Figure h :

General structure for LSSD sub3y3tem with two system clocks

192

H.E. JONES, R.F. SCHAUER

The operation of the subsystem is controlled by the system clock signals, C^ and Co At C^ time, C2 is zero and the inputs and outputs of N ; j _ are stable (assuming that the external inputs _ are also stable). The clock signal, Cl, is then allowed to pass to the SRL system clock input. This gates the output values of N^ into the L]_ latches. Thus, some of the latches may change at C^ time. These signal changes immediately propagate through network N?. As soon as Ci is changed back to 0 and all LI signals nave finished propagating, the next clock signal, C2, may occur. For correct operation of the subsystem, all that is needed is for the clock signals to be long enough to set the latches, and for the time between clock signals to be long enough to allow all latch changes to finish propagating. This structure meets the requirements for level-sensitive operation as defined in the preceding section and ensures that there is little or no dependencies on ac circuit parameters. For proper operation of the logic subsystem, as is clear from Figure 4, all that is needed is that the delay through the combinational networks and N2 be less than the corresponding time between the clock signals. The network shown in Figure 5 is another one that follows the rules. The network in Figure 4 is called a single-latch design, since all the system inputs to networks N. and N2 are taken from the LI latch. The network in Figure 5 is called a double-latch design, since all the system inputs into network are taken from the L2 latch. Making use of the L2 latch reduces the overhead associated with such a method. The overhead will be discussed later in this paper.

Cotnb>n*tnal

&^

f.-

>

S 0 -

Cl ASfwfl 0 -

c,.. 0 Figure 5:

w&)

Scan Oui

LSSD double latch design

AN APPROACH TO A TESTING SYSTEM FOR LSI The concept of level-sensitive design is completely compatible with the concept of three-value simulation? that has been used in designing many IBM systems. A properly designed level-sensitive logic subsystem can be simulated with three-value simulation without using any delay blocks. This will, in fact, provide a check on whether the design is level-sensitive.

193

The scan capabilities of the network significantly help in its testing. These aspects will be discussed further. A sequential logic network that is level-sensitive with scan capability as per Rules 1 through 6 is called a levelsensitive scan design, LSSD. USE AND ADVANTAGES OF LSSD The use of LSSD helps to solve the LSI testing problems in the following ways : 1) The correct operation of the logic network is nearly independent of the ac characteristics of the devices and circuits. 2) The elimination of all hazards and races greatly simplifies both test generation and fault simulation used for testing large networks. 3) A network that performs the function of a large sequential network in its application, can be tested as combinational logic. Although 1 and 2 above go a long way toward solving the LSI testing problems, the ability to test networks as combinational logic is one of the most important benefits of LSSD. Test generation for large sequential logic networks remains very difficult, because no general solution has yet been found to the problem of automatically generating test patterns for these circuits. For combinational logic networks, on the other hand, the automatic generation of test patterns is relatively easy, and comes very close to obtaining 100% coverage of stuck faults. Thus, one way to effectively solve the sequential testgeneration problem is to reduce it to a combinational problem. This is easily done by operating the polarity-hold latches as SRLs during testing. During testing, any desired 'pattern of Is and 0s can be shifted into the polarity-hold latches as inputs to the combinational networks. The outputs can then be clocked into the latches and shifted out for inspection. For example, the combinational network N in Figure 5 can be tested in the following way: 1) A desired test pattern is shifted into the SRLs ( Y j _ Y2 > ^n) anc^ applied to the primary inputs Pi 2) After the signals have had time to propagate through

194

H.E. JONES, R.F. SCHAUER N, the clock C^ is turned on long enough to store the Xi, X2, ... X n signals into the LI latches of the SRLs. 3) The pattern in the LI latches is then shifted out and compared with the expected response. The shift register must also be tested, but this is easily accomplished by shifting a sequence of Is and Os through the SRLs, as detailed later in this paper. Any partitioning of the general structure shown in Figure S will result in a structure that can be tested in the same way. That is, all logic gates can be given combinational tests by applying the appropriate test patterns at the primary inputs P^ and at the SRL outputs by shifting in serially. The output patterns can be obtained from the response outputs, X]_, X2, ... X n , and by shifting out the bit pattern in the SRLs. Thus, the same method can be used to test at any packaging level. The use of SRLs to enter and retrieve bit patterns will enable dc testing of the logic subsystem. That is, it will verify that the logic gates are properly interconnected and function correctly in steady-state operation. The delay or timing characteristics will not be tested by this method; other methods can be usedM,5,11 Some of the other advantages of using LSSD are: 1) The correct operation of the logic subsystem is almost independent of any transient or delay characteristics of individual logic circuits. This fact can be seen by considering the operation of the structure in Figure 5. At the time C2 occurs, some of the L2 latches of the SRLs may change state as a result of the signals stored in the LI latches. These changes must propagate through the combinational network N and stabilize at Xl, X2, , X n before C^ can occur. Thus, the signals from the L2 latches must propagate fully through N during the time between the beginning of C2 and the beginning of C-^. The only delay requirement, then, is that the worst-case delay through N must be less than some known value. There is no longer any need to control or test rise time, fall time, or minimum network delays; only the maximum network delay need be controlled and measured. Moreover, individual gate delays are not important; only the total delay over paths from the input to the output of network N need be measured. 2) Using the SRLs as shown in Figure 5 provides the ability to monitor nets buried within the chip. More specifically, it enables the technician debugging a machine to monitor the state of every latch in the logic subsystem. This can be done on a single-cycle basis by shifting all the data in the latches out to a display device. This will not disturb the state of the subsystem if the data

AN APPROACH TO A TESTING SYSTEM FOR LSI are then shifted back into the latches in the same order as they are shifted out. In this way, the status of all the latches could be examined after each clock signal. 5. NETWORK SUBDIVISION Another important advantage of using the LSSD design rules is the ability to subdivide large networks into several smaller ones.

195

It is well known that the machine computation time required to perform automatic test generation and fault simulation does not increase in a linear manner with the network size. A fair approximation is that if N=number of gates in the network, the total processing time is approximately proportional to N2.2 . In practical terms, this means that an attempt to process a large logic network in a single piece will be more expensive than to process the same network in several smaller pieces. If the network follows the LSSD design rules, it is possible to sub-divide it into smaller pieces. Test generation can then be performed on each piece independently, and the computer resources required to perform test generation rise in a more linear fashion with the number of blocks. Network sub-division has another important benefit. If design changes are made during system bring up, or if machine features are added, test generation need only be re-done on the network sub-divisions affected by the change or addition. This can result in great savings in the test generation cycle as a new machine design is being developed. The sub-division procedure follows a simple algorithm: 1) From each network Primary Output (PO) or shift register latch (SRL) do a complete backtrace of all paths converging on that PO or SRL. Stop the backtrace when a primary input (PI) or SRL is reached. This step forms a "cone" whose top is the backtrace starting point; the base points are at Pi's or SRL's. The blocks and nets contained in each cone are recorded in a list. Only the system data and scan inputs to SRL's are backtraced. Clock inputs are handled in a special manner, described below. The cones formed by tracing back from PO " 0 " and SRL "M"'are shown in Figure 6. Combine the cones into sub-networks for test generation. The approximate size of the sub-networks is determined by the system user, and represents a trade-off between the number of test generation processing steps required to cover the entire network and the run time needed to perform each step. In the process of combining backtrace cones, an attempt

2)

196

H.E. JONES, R.F. SCHAUER is made to minimize the amount of logic replicated among sub-networks. This is achieved by combining those cones which contain common logic into the same sub-network provided that the prespecified sub-network size is not exceeded. Note that each sub-network will be bounded by points at which a test stimulus can be applied (PI or SRL) , or those at which a response can be measured (PO or SRL). Using Figure 6 as an example, the cones are: CONE 1 2 3 4 S STARTING POINT PO PO SRL SRL SRL 0 M Q BLOCKS INCLUDED L, G H None G J STOPPING POINT SRLs B, SRLs C, SRL C SRLs B, SRLs D, C, D D C E

COKE 1TR SRL H

Figure 6 : Example Network for Illustration


of Sub-division Procedure Because cones 1 and 4 have a common block, G, they will be combined into a single sub-network. The other cones, 2, 3 and 5, form independent sub-networks. In order to capture a test result in an SRL, the system clock input to that SRL must be pulsed. Thus, the logic network which drives the clock inputs of the output SRL's of a sub-network must be included in that sub-network. When we apply this rule to the example, we find the following four sub-networks:

AN APPROACH TO A TESTING SYSTEM FOR LSI SUB-NETWORK OUTPUTS PO PO SRL SRL 0, SRL N M Q SUB-NETWORK BLOCKS F, G, L F, G, K F, J SUB-NETWORK INPUTS SRLs SRLs SRLs SRLs B, C, B, D, C, D D C E

197

Clock driver networks feeding the input SRLs (B, C, D, E, in the example) need not be included, because the outputs of these SRLs may be controlled by loading the shift register of which they form a part. TEST GENERATION FOR LSSD SUB-NETWORKS The generation of input stimuli to test the various LSSD sub-networks utilizes the shift register capabilities of the SRLs to 'load' in the patterns. Hence, the first step in the automatic generation of test patterns is to verify the correct shifting responses of the shift registers. The test consists of two parts: (1) a "flush test," in which all shift clocks are turned on and a signal is flushed through the register from scan input to scan output ; and (2) a "shift" test, in which a 00110011 pattern is shifted through each register. Analysis has shown that these types of test patterns are sufficient to detect stuck faults in the shift registers paths. Next, the list of blocks and nets for the sub-networks are used to select and build model tables for test generation and to construct sub-network fault lists. The tests for the stuck faults in the sub-networks are generated by use of an algorithm similar to the D-algorithm of Roth . The SRLs are used as inputs and outputs of the network. Because the logic between SRLs is combinational, a very high test coverage, approaching 100% is normally obtained. A 100% coverage may not be possible due to logic redundancies. A simulation step is performed to predict responses to the test patterns, and to measure test coverage. Test generation and simulation is performed individually on each sub-network. The test data for each of the sub-networks is independent of that for any other sub-network. LSSD RULES CHECKING Another important requirement of a.testing system for LSI is a capability for the automatic checking of logic structures for compliance with the design rules which are established. Recall that the six design rules described earlier for LSSD apply primarily to the configurations and the control of specific paths in the network such as the scan paths between SRLs, system data paths between SRLs, and the paths between the clock primary inputs and the SRLs. This suggests that it would be possible to test a network for compliance with the design rules by writing a program which could trace out the particular paths to which the rules apply.

198

H.E. JONES, R.F. SCHA UER The basic idea behind the design rules checking is that a logic simulation program may be used to perform the rules checking. The procedure is similar to that performed to verify the design of a logic network, but the calculations performed are modified and the sequence of the control statements executed by the program is carefully structured to permit testing for violations of the various rules. The adaptation of the logic simulator to allow rules checking is accomplished by modifying the routines which cal culate the response of the logic gates to an applied stimulus. The modified routines are small programs, called behavioral models Behavioral models are provided for primitive logic functions, such as AND, OR, NAND, and NOR. Behavioral models are also provided for the SRLs. The behavioral models are called by the simulation scheduling routine whenever an input stimulusoutput response calculation needs to be performed for a particular type of logic gate, such as an AND gate. These models can vary the stimulusresponse relationship of the logic gates to provide different algorithms for checking compliance to the various rules. The method by which a logic simulation program is used to perform tracing functions may be easily explained. If a logic zero signal is placed on any input of a multiinput AND gate, it forces a zero on the gate's output, regardless of the signal levels on the other inputs to the gate. That is, the zero value is dominant. Now, suppose that we have constructed a multiinput combinational network composed entirely of AND gates such as the network shown in Figure 7. Set only one input to the network to logic 0, set all other inputs to logic 1, and then simulate the network. Call this input with logic 0, "A". A fter simulation, all the paths through the network which begin at "A " will be defined, because every gate in these paths will be a logic 0. This same idea is used in a rules checking system. The sequence of actions performed by the simulation scheduler is organized into distinct steps which correspond to checking procedures for the various rules. The behavioral model used is determined by the rule being checked. If the scan path along the shift register is to be traced out, then the behavioral model used will force a dominant logical value on all nets in the scan path. If the rule concerns the clock signals, dominant values will be forced on nets in the clocking paths. A behavioral model can vary the algebra used to calculate an output value of a gate. This makes it possible to use the simulation program to perform many different types of tracing operations. Note that this is not done in conventional logic simulation because the gate calculation routines always execute a fixed and predefined algorithm. An example of rule checking is shown in Figure 8. Suppose we want to check compliance to Rule 3a, which requires that clock inputs to all SRLs must be 'off' when all clock primary inputs are 'off'. For this test, the behavioral models are programmed to calculate gate stimulus response relationships according to the rules for the three

AN APPROACH TO A TESTING SYSTEM FOR LSI valued logical operations shown in Table 1. (The "0" and "1" states are the usual logic values; the "X" means: don't care).

199

O.K. HERE 0

/
0 AND AND 0 SRL 0

SRL CLOCK INPUT

AND

X AND

1 OR X

AND 0 '

jj

r~

SRL CLOCK INPUT

\
CAN'T TURN OFF CLOCK (VIOLATION)

Figure 6:

Example Network for Illustration of Sub-division Procedure

Figure 7:

Example of Use of Logic Simulator for Path Tracing

Using this behavioral model, we set all the clock primary inputs to their "off" (inactive) state, and set all other primary inputs to X. All internal gates in the network are also set to X. The output values of all gates driven by the clock primary inputs are then calculated. If any gate's output changes from its initialized X value, all the gates that it drives will be calculated. The process repeats until the clock signals reach a primary output or an SRL. In this example, an X remaining on SRL clock input indicates a violation of rule 3a. The remaining rules are tested in similar fashion. In each case, the various behavioral models used, insure that the gate calculations are appropriate for the rules being checked. Automatic rules checking capability allows the designer to check his own logic design for compliance with the design rules. In case of a violation, an error message is provided to allow the designer to quickly locate and correct the problem. Additional types of checking can easily be added. Because other block types such as memory arrays may be represented as behavioral models, the checking system may be extended to include complex designs which contain a wide range of hardware.

200

H.E. JONES, R.F. SCHAUER COST/PERFORMANCE IMPACT OF LSSD The negative aspects of LSSD include the following: 1. The polarity-hold latches in the shift registers are logically two to three times as complex as simple latches. Up to four additional I/O points are required at each package level for control of the shift registers. External asynchronous input signals must not change more than once every clock cycle. All timing within the subsystem is controlled by externally generated clock signals.

2.

3. 4.

The logic gate overhead for implementing the design rules has ranged from 4% to 20%; the difference is due to the extent to which the system designer made use of the L2 latches for system functions. Even for the worst case, the cost overhead at the card or system level is considerably less than 20%, since the relationship is not one-to-one. The requirement for additional I/O pads at chip and module levels is a concern. However, if these I/O pads can be shared to also provide a standard interface for operator and CE consoles, they may eliminate other interconnections and I/O points that would otherwise be required. The overall performance of the subsystem may be degraded by the clocking requirement, but the effect should be small. The clock and the distribution system for the clock signals can be accurately designed and tested to minimize skew. The actual cycle time is determined by the worst-case delay paths, just as in any other design method; so there is no inherent reason why the design rules should greatly increase cycle times. LSSD AND NON-LSSD MIX In cases where non-LSSD designs must be intermixed with LSSD logic, the design rules can be expanded to provide the ability to partition the logic so that the LSSD portions may be handled as stated previously and the non-LSSD portions may be handled by other existing automatic or manual test generation techniques. The general interface between LSSD logic and arrays is illustrated in Figure 9. Because an array contains memory, the Array/LSSD arrangement does not follow the LSSD rules. However, the orderly structure of the array allows the use of automatic test generation methods for the combinational logic between the SRLs and the array. Separate test patterns for the array must be provided by the array designers. The test system can then translate

AN APPROACH TO A TESTING SYSTEM FOR LSI these array test patterns to the appropriate scan-in and scan-out tests and combine them with other test patterns for the network.

201

Figure 9:

General Structure of LSSD/Array Interface

The logic preceding the array is t ested using stimuli presented to it via SRLs and Pis. The output of the logic is written into and then read out of th e array, and then propagated through the logic on the arr ay outputs so as to be observable at POs and/or SRLs. The testing of.the logic at the array outputs requires that the proper stimuli be applied from the array, SRLs and/or Pis and the outputs observed on POs and/or SRLs. It should be noted that high speed tests often used in array testing may not be available unless the array inputs are controllabl e (possibly through combinational logic) from Pis. For an embedded array, AC tests cannot be guaranteed through SRLs , since testing' through the SRLs is by necessity slow ( limited by the scan-in speed). Other non-LSSD networks may contain asynchronous sequential logic, analog networks or specials which contain data storing elements that do not follow the general LSSD design rules. These non-LSSD networks may be partitioned from LSSD networks by use of Stable SRLs (SSRLs). The general form of an SSRL is shown in Figure 10. The LI and L2 are connected to form a SRL as discussed previously while L3 is a "stable" latch used to provide a stable system output that will not change during LSSD scan operations. Figure 11 shows the general interface between non-LSSD and LSSD logic using SSRLs. Here Ci and Cj are clock signals provided by the System Clock to prevent unwanted outputs from occurring during scanning operations.


Figure 10: General Form of an SSRL

Figure 11: LSSD/Non-LSSD Interface

10. CONCLUSIONS

Important aspects of an approach to a testing system for LSI have been presented. It outlines a logic design and testing technique that eliminates or greatly reduces many of the problems in designing, manufacturing, and maintaining LSI systems. The following is a summary of some of the benefits offered by this design approach:

1. System performance is not dependent on hard-to-control ac circuit parameters such as rise time, fall time, or minimum delay. It is dependent only on the longest path delay being less than some specified value.

2. Test generation and testing are simplified to the well understood method of combinational logic network testing.

3. The ability to dynamically monitor the state of all internal storage elements is inherent in the design. This eliminates the need for special test points, simplifies manual debugging, and provides a standard interface for operator and maintenance consoles.

4. The development and use of tools for design verification, simulation and checking is simplified.

5. The insensitivity to timing problems and the modular design structure help reduce the impact of engineering changes.

6. The level-sensitive design allows the use of a unit logic hardware simulator for development design without creating timing problems in the transition from unit logic to dense functional chips.

7. The method used for testing chips and modules can also be used for diagnostic tests in the field.

TABLE I - THREE-VALUED LOGICAL OPERATIONS

NOT will change 0 to 1; 1 to 0; X to X.

AND:     0   1   X
  0      0   0   0
  1      0   1   X
  X      0   X   X

OR:      0   1   X
  0      0   1   X
  1      1   1   1
  X      X   1   X
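These rules are compact enough to state directly in code; the fragment below (an editorial Python sketch, not part of the original paper) reproduces the NOT, AND and OR operations of Table I, with 'X' standing for the unknown value.

    # Three-valued logic operations (0, 1, X) corresponding to Table I.
    X = 'X'

    def v_not(a):
        return {0: 1, 1: 0, X: X}[a]

    def v_and(a, b):
        if a == 0 or b == 0:
            return 0          # a controlling 0 forces the AND to 0
        if a == 1 and b == 1:
            return 1
        return X              # everything else is unknown

    def v_or(a, b):
        if a == 1 or b == 1:
            return 1          # a controlling 1 forces the OR to 1
        if a == 0 and b == 0:
            return 0
        return X

    assert v_and(1, X) == X and v_or(0, X) == X and v_not(X) == X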

ACKNOWLEDGMENTS

In the preparation of this paper, extensive use was made of the information and material presented at the 14th Annual Design Automation Conference held in New Orleans, U.S.A., June, 1977, by M. Correia, F. Petrini, E. B. Eichelberger, T. W. Williams, H. C. Godoy, P. S. Bottorff, G. B. Franklin, R. E. France, N. H. Garges, and E. J. Orosz (References 1, 2, 3, 6).

REFERENCES

1. M. Correia, F. Petrini, "Introduction to an LSI Test System", Proc. 14th Design Automation Conference, June, 1977.
2. E. B. Eichelberger and T. W. Williams, "A Logic Design Structure for LSI Testability", Proc. 14th Design Automation Conference, June, 1977.
3. H. C. Godoy, P. S. Bottorff and G. B. Franklin, "Automatic Checking of Logic Design Structures for Compliance with Testability Ground Rules", Proc. 14th Design Automation Conference, June, 1977.
4. E. P. Hsieh, R. A. Rasmussen, L. J. Vidunas and W. T. Davis, "Delay Test Generation", Proc. 14th Design Automation Conference, June, 1977.
5. T. M. Storey and J. W. Barry, "Delay Test Simulation", Proc. 14th Design Automation Conference, June, 1977.
6. P. S. Bottorff, R. E. France, N. H. Garges, and E. J. Orosz, "Test Generation for Large Logic Networks", Proc. 14th Design Automation Conference, June, 1977.
7. E. B. Eichelberger, "Hazard Detection in Combinational and Sequential Switching Circuits", IBM J. Res. Develop., 9 (1965), pp. 90-99.
8. J. P. Roth, "Diagnosis of Automata Failures: A Calculus and a Method", IBM J. Res. Develop., 10 (1966), pp. 278-291.
9. E. B. Eichelberger, "Level Sensitive Logic System", U.S. Patent 3783254, January 1, 1974.
10. E. B. Eichelberger, "Method of Level Sensitive Testing a Functional Logic System", U.S. Patent 3761695, September 25, 1973.
11. E. B. Eichelberger, "Method of Propagation Delay Testing a Functional Logic System", U.S. Patent 3784907, January 8, 1974.

TECHNICAL SESSION V

Chairman: G. MUSGRAVE, Brunel University, United Kingdom

G. Musgrave, editor, COMPUTER-AIDED DESIGN of digital electronic circuits and systems, North-Holland Publishing Company © ECSC, EEC, EAEC, Brussels & Luxembourg, 1979

AN ENGINEERING COMPONENTS DATA BASE

M. Tomljanovich, R. Colangelo
SELENIA S.p.A.
Roma, Italy

The paper presents the experience conducted in Selenia in defining, implementing and using a data base system to manage technical information on components used in the electronic industry. The data base is considered as part of a larger corporate technical information system, devoted to serve all users in the company, both people and automated systems. Motivations, structure and contents of the data base will be described, together with some details on physical implementation. An on-line facility, called R.A.C.E., to access the data base has been realized. The design goals, the structure, and the two basic modes of operation (direct access and associative retrieval) of R.A.C.E. are described.

FOREWORD

There is a general consensus today on the fact that, for industrial organizations, the way in which information is handled is of primary importance for successful enterprising. Particularly in the electronic industry, the fast evolving technology and the vital need to reduce development times in order to cope with a highly competitive market have forced organizations to use computerized techniques to help with the high flow of technical information and to set up automated systems in specific areas (e.g. testing, wiring, p.c.b. documentation, etc.).
The development of new and powerful computing technologies and techniques, such as computer networks and data bases, makes it possible to go further in the process of automating industrial activities. The next stage is therefore the integration of existing automated systems, through the sharing of data bases and the on-line connection of processes, in an integrated information network. Integration has three main objectives:
- reduction of the overall turnaround time;
- better use of corporate facilities;
- central control of resources and costs related to specific flows of activities.
In a rough schematization, it is possible to see the integrated network as made of two components: the net of the areas requiring the same data (more generally, facilities) in a single working cycle (e.g., the design of p.c.b.s), and the net which conveys information among different working cycles (usually performed over different periods of time). A block representation of the net of the first type, related to the development phases in logic design, has been conceived by J. Vlietstra and is drawn in fig. 1.


FIGURE 1 - "FLOOR": AN INTEGRATED DESIGN AUTOMATION (IDA) SYSTEM
(blocks: input processor, design verification, CAM processing, D.A. libraries, design data base, components data base, links to other "floors")

FIGURE 2 - INFORMATION LINKS IN DEVELOPMENT-PRODUCTION PROCESSES
(design, quality control, logistics, documentation, manufacture, stocks management and parts procurement, components design file, product)


The highly interacting activities, delimited by different blocks, communicate among themselves through an information "bus"; moreover selected design data are intercepted and accumulated into a design file (data base structured) on which the required central project control can be implemented. As a whole, the picture represents the automation of a complex of activities located at the design "floor" of a company. Other floors such as the Quality Control dpt., documentation dpt., logistics, fabrication, etc. can be similarly automated using a structure in which a main "aisle" supports the traffic going into and out of the various application "rooms". If we try to push forward the similarity, we can imagine the entire organization as a building in which "elevators" correspond to files of data and/or information (see fig. 2).

INFORMATION SYSTEMS AND AUTOMATION IN SELENIA


The information structure presented so far has to be considered more a trend than a reality. The major drawback to its realization comes from the resistance of the organization to any structural change. Moreover, the outlined structure implies new constraints and rules (standardization) whose introduction must be very carefully planned and timed. On the other side, if automated procedures have been experienced and sufficiently "digested", systems' integration can be considered as a logical consequence of what was developed in order to get "economies of scale". Selenia is just in this situation. Basic D.A. systems in traditional application areas like p.c.b. layout, wiring and testing have been implemented and generally accepted as useful tools by development dpts. So far, in Selenia practical actions along the guidelines drawn above consist of:
a) a design review of the D.A. subsystems in operation, aiming at an integrated structure such as in fig. 1 (just started);
b) design, implementation and use of an engineering components data base (operational);
c) design, implementation and use of an application system utilizing the components data base, called RACE (operational);
d) design, implementation and use of a stock management & parts procurement data base (operational);
e) design, implementation and use of an application system utilizing the stocks management data base, called SIGMA (operational).
Hereafter, the systems in b and c will be presented.

CONCEIVING A COMPONENTS DATA BASE


The Engineering Components Data Base was originally conceived by the Design Automation Group with the purpose of serving:
- automated procedures (D.A. subsystems);
- human beings (designers).
D.A. subsystems require libraries of simulation models, topological data, schematics of components, etc. Each library is dedicated only to a specific application.


The various D.A. subsystems work on the same "floor" (i.e. are operated in the same working cycle). In order to avoid delays, there is a clear necessity to correlate and synchronize the updating and handling of libraries. A central data base has therefore been considered the right choice to solve the above problems.
In the last few years technology evolution has greatly changed design methodologies. The number of parameters and constraints to be considered during the specification phase of a project has been increased by the need to prevent fabrication and production troubles (e.g., "design with testing in mind"). A large amount of information must therefore be available at design time. In particular, if we examine design activities, a great part of the above information concerns components: technical characteristics, availability, suppliers, costs, meeting of standards and specs, etc.
The necessity that information, usually handled by different departments in the company, be supplied to a large population of designers at the same time, reinforced the need of a central components data base and suggested the development of an on-line enquiry system. Moreover, it has been recognized that many areas in the company share with designers the need to access up-to-date, reliable, and consistent components' data, e.g. Quality Control, documentation dpt., manufacturing, etc.: as a matter of fact, the whole company. The increase of eligible users has had little effect on the contents and structure of the data base, but suggested taking into account different access methods for the on-line enquiry system. There is, as an example, a basic difference in the utilization of components' data during the design or the production phases. At design time, there is need for a great amount of data, arranged in synthetic reports, to allow comparisons and ease choices. At production time, almost no choices have to be made, most of the job being verification and retrieval of a few specific data.

THE ENGINEERING COMPONENTS DATA BASE

The specification of the Engineering Components Data Base can be summarized in the following points:
- the data base should store all the relevant data about components to satisfy the user community in the company;
- easy connection for any requiring application system should be allowed;
- access to the data base from remote sites should be taken into account.

Hardware and Software configuration

The data base has been implemented on a UNIVAC 1100 series machine, using the standard DMS 1100 data management system. Application software has been developed in ASCII COBOL. The reasons for the choice were the following:
- the Company owned a central computing facility based on a UNIVAC machine; the different plants were connected to the central computer through batch and interactive terminals;


- some D.A. application systems were operational on the central machine.

Contents and data structures

Object of the data base are all those components certified by the Selenia Quality Control (Q.C.) Dept., i.e. components used or eligible to be used in products manufactured by Selenia. The classes of data stored in the data base have been carefully chosen by analysing users' needs. In figure 3 there is the list of the primary requirements expressed by the six broad areas in which users have been conventionally divided.
Technical data comprise the performance and characteristics of a component, as declared by its manufacturer and verified by Q.C. At present, diagrams have not been considered. Schematics and layouts have a reference number to a drawing handbook. Equivalences are verified by Q.C. and imply the possibility to replace one component with another. Quality level is the ability of a component to meet standards (such as military, environmental, etc.), quality audits, failure reports, etc. Reliability parameters are those involved in the computation of the MTBF of systems, and are used in verification of maintainability, spare parts planning and logistic support. The reliability data derive from the complexity and failure reports of a component, with no reference to the stress of the system in which the component itself should be used. Data for D.A. systems are simulation and testing models, topological data, and data for design verification. For compatibility reasons with systems already in operation, the actual data have been left in application libraries, and the data base stores only their direct references. Other data stored are: the different names of components (i.e. manufacturer type, Nato stock number, documentary partnumbers from co-contractors); Italian and English descriptions; information about manufacturers and suppliers.
All the data fit in a structure made of thirty-two record types, linked by twenty different sets and stored in nineteen areas. A reduced schema has been drawn in figure 4, with the purpose of giving a feeling of the structure and showing the allowed access paths. Each rectangular box marks a possible entry point to the structure. Dashed lines delimit the data structure devoted to the associative retrieval, to be described later. Its weak connection to the remaining structure gives high freedom in managing the system. First of all, it is possible to avoid selection of out-of-date components (stored in the data base only for documentary purposes) by avoiding their insertion into the structure for associative retrieval. Moreover, there is the possibility to transport that structure to a stand-alone computer, accessing the central computer only to get data about the selected components. This would imply no modification to the overall system.

FIGURE 3 - CLASSES OF REQUIREMENTS FOR COMPONENTS DATA BASE
(user areas: design, quality control, manufacturing, documentation, logistics, purchasing; requirements range from technical data, simulation and testing models and geometrical data to quality levels, suppliers and second sources, equivalences, documentary partnumbers and descriptions, and reliability data)

FIGURE 4 - SIMPLIFIED SCHEMA FOR ENGINEERING COMPONENTS DATA BASE
(entry points include manufacturer/supplier data and Nato stock number; the dashed portion is the associative retrieval structure)
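A rough picture of the kind of record-and-set structure sketched in figure 4 is given by the following fragment. It is an editorial illustration only: the real data base is a DMS 1100 network schema with thirty-two record types, and every record and field name below is invented.

    # Simplified, hypothetical model of the components data base records.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Supplier:
        name: str

    @dataclass
    class Component:
        manufacturer_type: str                 # one of the "names" of the component
        selenia_partnumber: str
        nato_stock_number: str
        description_en: str
        quality_level: str
        failure_rate: float                    # reliability parameter used for MTBF
        da_library_refs: List[str] = field(default_factory=list)   # references only
        equivalences: List["Component"] = field(default_factory=list)
        suppliers: List[Supplier] = field(default_factory=list)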

System Administration

The load of the system administration falls on the D.A. group and the Quality Control Dept. The latter is responsible for data collecting and validation; the former is in charge of system performance and control over application systems.

In order to support administration, a number of tools have been provided:
- online update, for steady maintenance of data;
- batch update, for initial loading and high volume updating;
- recovery;
- statistics, both for consistency verification of the data base and for report generation;
- log facilities, providing reports on system usage.
The volume of components handled up to now corresponds roughly to 130 thousand manufacturer's types. The global components' turnover can be quantified in 5 pct new acquisitions and 10 pct revised per year.

THE RACE SYSTEM

The objective to make the components data base available to a large population of users, as seen before, outlined the specifications for an application system. It must:
- provide online access to the data base;
- have a user interface designed for people with little or no acquaintance with computing systems;
- support cross reference, i.e. access data by means of one of the names of a component (e.g. manufacturer's type, Selenia partnumber, etc.);
- support interactive associative retrieval, i.e. selection of components which match given technical characteristics.
The name chosen for the system was R.A.C.E. (Ricerca Associativa Componenti Elettronici, i.e. Associative Retrieval of Electronic Components), from the last requirement, considered the most innovative one.

Functional operations

The user interface has been designed taking into account human engineering techniques. The interactive facility works on a "question-answer" basis. The dialogue is under control of the system, which asks questions and (whenever possible) suggests a list of suitable answers. Search goes through eight steps:
- selection of the class of components (e.g. Integrated Circuits);
- selection of the subclass (e.g. gates);
- six steps going through six different parameters, such as technology, logical functions, etc.
The data structure for the search has been shown in figure 5.a. For each parameter, the user is requested to supply one or more values. The system hands back the result of the search, i.e. how many components have been selected. On error or unsuccessful search, the question will be asked again. The user can abort the session, go to the beginning, go back one or more steps, or ask for display of results at any point of the session. Results will be shown through a series of reports (logical "pages"), each of which shows homogeneous characteristics on the VDU screen in the form of a table.
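The stepwise question-answer search amounts to repeatedly filtering the set of candidate components. The fragment below is a much simplified, hypothetical rendering of that dialogue (RACE itself is implemented in ASCII COBOL against the DMS 1100 data base; the field names and values here are invented).

    # Hypothetical sketch of the associative retrieval dialogue.
    def associative_retrieval(components, questions, ask):
        """components: list of dicts; questions: ordered parameter names;
        ask(parameter, suggestions) returns the set of accepted values."""
        selected = components
        for parameter in questions:            # class, subclass, then six parameters
            suggestions = sorted({c[parameter] for c in selected})   # the "help" list
            accepted = ask(parameter, suggestions)
            selected = [c for c in selected if c[parameter] in accepted]
            print(f"{parameter}: {len(selected)} components selected")
        return selected

    catalogue = [
        {"class": "IC", "subclass": "gate", "technology": "TTL"},
        {"class": "IC", "subclass": "gate", "technology": "MOS"},
    ]
    answers = {"class": {"IC"}, "subclass": {"gate"}, "technology": {"TTL"}}
    hits = associative_retrieval(catalogue, ["class", "subclass", "technology"],
                                 lambda p, s: answers[p])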


FIGURE 5 - DATA STRUCTURES FOR ASSOCIATIVE RETRIEVAL AND OUTPUT SUPPORT

FIGURE 6 - ACCESSES TO THE ENGINEERING COMPONENTS DATA BASE


At each step of the session, the user may ask for help; e.g. for IC technology the system would prompt all the suitable technologies, such as TTL, MOS, etc. Help information and output formats highly depend on the class of components. For this reason, structural data have been stored in the data base, so that RACE programs do not depend on them. Help information, together with the subsequent questions to be asked by the system, has been stored in Class, Subclass and Selection parameter records (fig. 5.a). Output information, together with other data such as the list of names in the actual data records, has been stored in the structure of figure 5.b. Output format records contain headlines, read and write formats, and a dictionary of transcodes for decoded outputs. The cross reference access is straightforward: it allows selection of information by entering "names" of a component or supplier, and supports the same output facilities as the interactive access. Aiding facilities, few and easy-to-remember commands, and decoded and formatted outputs make a "friendly interface" for the user.

OTHER USES OF THE DATA BASE


There are many application systems other than RACE connected to the data base. In figure 6 there is a schematic representation of the environment of the data base. The SIGMA system, designed for stocks management and parts procurement, gets cross reference data from the RACE data base. Batch connections have been provided for D.A. systems, logistics and technical documentation application systems. In order to avoid the proliferation of ad hoc programs to satisfy single queries, the use of the standard query language processor UNIVAC QLP1100 has been made possible. This is a very valuable tool, but it requires good skill in order to be effective and to preserve the integrity of the data base. QLP is primarily used for data base administration.

CONCLUDING REMARKS

The purpose of the paper has been to present a real experience concerning the distribution and utilization of technical information in an industrial environment. System implementation has been deliberately only sketched, in favour of the analysis of users' requirements and the system's position inside the organizational structure. People and organization, together with computer resources, are the three faces of any CAD (or DA) system. The first two must be considered, by the CAD technical community, the most important, just because they raise non-technical problems. The best automated system counts for nothing if it is not accepted by the organization and does not offer any benefit to the user. It is, therefore, desirable that we, automation developers, the "prophets" of the new industrial revolution based on computer technology, begin to consider success not only in terms of increased productivity, but also in terms of human beings' satisfaction. Quoting E. F. Schumacher (from the book "Small is Beautiful"): "...to strive for leisure as an alternative to work... would be a complete misunderstanding of one of the basic truths of human existence, namely that work and leisure are complementary parts of the same living process and cannot be separated without destroying the joy of work and the bliss of leisure".

G. Musgrave, editor, COMPUTER-AIDED DESIGN of digital electronic circuits and systems, North-Holland Publishing Company © ECSC, EEC, EAEC, Brussels & Luxembourg, 1979

CUSTOM LSI DESIGN ECONOMICS

J.G.M. Klomp
N.V. Philips / Elcoma Division
Nijmegen, The Netherlands

SYSTEM REQUIREMENTS

Some years ago the phrase "custom LSI design economics" was a fiction, because LSI, in the sense of 1000, 2000 or more gate functions on one chip, was the dream of the technologist; custom design was only feasible for very rare specialties, and talking about economics in this context was a contradiction in terms. However, since Jules Verne wrote his "Round the World in 80 Days", fictions tend to turn into realities. The development of several tools was necessary to make the reality of economical LSI happen. To determine what properties these tools should have, let us have a closer look at the different aspects mentioned in the title.

1. LSI. Large scale means that a large number of functions are put together. However, a collection of 1000-2000 gates was in earlier days called a (sub)system, so we are INTEGRATING a complete system on one chip. Why capitals for integrating? Because a system on a chip is more than a bunch of gates which by accident happen to be within a 100 µm distance of each other. A system on a chip means that electrical, logical and layout properties and, last but not least, testing possibilities have to be merged. It requires integrated thinking, and therefore the tools should have very tight connections and well defined interfaces between the different phases of the design trajectory, so that a faultless and smooth transition and feedback is possible from one step to the other.

2. Custom design. With standard building blocks developed by the I.C. manufacturers one can design telephone exchange systems, computers, instrumentation, consumer circuitry, military equipment, etc. When they come on one chip, the I.C. maker should have in-house experts of all these disciplines. It is everywhere understood that this is impossible. That, however, puts the burden on the customer to do at least a large portion of the design himself, but customers are not experts in the different technologies. By consequence the tools have to be constructed in such a way that they are easy to learn and transparent to the customer, and guarantee that a good product can be designed without detailed knowledge of the technology chosen.

3. Economics. A number of items contribute to the economy of a design:
a) The one which is recognised worldwide is the number of square microns. The last one should be squeezed out.
b) The hit rate: how many reruns are needed before the chip is according to the given perfect specification.
c) Related to that is the design time: how fast can a product be announced on the market once the spec is ready.


d) Flexibility: how easy is it to do small modifications when the spec turns out not to be as perfect as expected.
e) Testing: a point that is often forgotten: how easy, and by consequence how cheap, are the good devices separated from the bad ones. This point suffers especially from the last squeeze; test engineers often say: one should save, regardless of the costs.

For the tools this leads to the following requirements:
a) With respect to area, the result should be, within reasonable margins, comparable with the dimensions of the hand layout of an "average" designer.
b + c) The first shot should hit the target specification, so considerable effort should be spent on safety.
d) The system should be susceptible to small changes at the last minute.
As far as e) is concerned: this is a design philosophy, and if you are on the wrong track, even the best tools cannot help you.

USER ASPECTS

Before describing the system developed for LOCMOS in Philips, some user aspects have to be mentioned, because their impact on the economy is as important as the ones above. When the tools are not well accepted they are only expensive burdens. A computer aided design system is not for the fun of computer people, nor just for once, but is for designers for every day use; therefore:
- Develop the system not in research but in the middle of the users, step by step. A complete specification of a new approach contains more wishful thinking than realistic thoughts about tools for every day. User feedback is essential for each following step.
- Keep the communication channels open. Both computer experts and designers speak their native tongue, however unfortunately in a different way. Rather use programmers with a design background, so they understand each other. The program will not be the beauty of the nation, but at least it is used.
- Use dedicated minicomputers rather than large machines. They are cheap, easy to handle, reliable and always at your disposal, without the risk of being busy with the high priority managerial/planning jobs which are trying to find out why that design in the queue does not stick to its planning. Designers want fast turnaround.
- The computer is not designed to do "creative" things, so do not let her do it; it is a waste of money. With an interaction between designer and computer a good balance can be obtained between the "creative mind" and the abilities of the machine for fast and accurate calculations and check procedures.

SYSTEM DESCRIPTION

With the above in mind the LOCMOS design system has been developed. It is a total package, so it covers all design phases, from basic electrical analysis to the generation of the numerical control tapes for mask pattern generators and testers. The flow diagram of the system is given in fig. 1.

fig. 1. Computer aided design system for digital IC's in LOCMOS (flow diagram: network description; PHILSIM logic simulator; ac and transient analysis; generation of COMPACT LOGIC cells; cell library; INTER cell placement & wiring with wiring capacitances; circuit mask; test verification/generation, TESGEN).
The cells in the cell library range from simple gates and flip-flops to more complex functions with up to nine input variables; ROM and RAM bit cells are also available. An example of a cell structure is given in fig. 2. Each cell has a constant height and variable width. The input and output signals can be reached both from the top and the bottom of the cell. The library contains about 130 items, which are all characterized with respect to layout as well as electrical and logical behaviour, including time delay factors (see fig. 3 and 4).
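To suggest what such a characterization can look like, here is a single, purely illustrative library entry (an editorial example; the cell name and all values are invented, not taken from the LOCMOS library).

    # Hypothetical record for one COMPACT LOGIC cell in the library.
    cell = {
        "name":      "AOI22",                      # an and-or-invert example cell
        "function":  "F = not((A1 and A2) or (B1 and B2))",
        "height_um": 120,                          # constant for every cell
        "width_um":  56,                           # varies from cell to cell
        "pins":      {"A1": "top/bottom", "A2": "top/bottom",
                      "B1": "top/bottom", "B2": "top/bottom",
                      "F":  "top/bottom"},         # reachable from both sides
        "tpd_ns":    {"rise": 4.0, "fall": 3.2},   # base propagation delays
        "derating_per_fanout": 0.6,                # delay factor versus loading
    }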

fig. 2. Example of a COMPACT LOGIC cell structure (cell function of the form F = (C1+C2)·B1 + A1·A2).

fig. 3 and fig. 4. Propagation delay as a function of fanout, and propagation delay derating factors (tpd0/tpd1 reference levels).


Custom designs and LSI's in general often need something special. If a CAD system is not capable of handling these specialties without losing its efficiency or without becoming difficult to handle, users are not willing to accept it as a tool. To bypass this problem, a library is developed which contains so called "primitives", examples of which are given in fig. 5.

fig. 5. Examples of primitives: contact/buffer elements, internal aluminium interconnection elements, diffusion contacts, polysilicon input/output elements.

With these basic items, which are related to the single process steps, the designer is able to develop all kinds of specialties, which will not only satisfy the customer specification, but also fit into the rest of the system.

CIRCUIT DESIGN

After acceptance of the development of a design, the first thing to do is to convert it into functions provided by the library and, if necessary, to develop special building blocks. In the latter case an analysis program is used to check the electrical properties. A correlation has been established between the calculated values and the diffused product, with an accuracy of about ten per cent. When all building blocks are available, the network is coded for the computer, where macro facilities, parameter descriptions, etc. reduce the amount of work (see fig. 6 and 7).
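Figures 6 and 7 show the actual coding format. As a language-neutral illustration of how macro facilities reduce the coding work, the toy expander below replaces each macro call by its gate-level definition (an editorial sketch; the syntax and the macro itself are invented, not the Philips format).

    # Toy illustration of macro expansion in a network description.
    MACROS = {
        # a 2-input NOR "macro" built from library gates (invented definition)
        "NOR2": [("OR2", ["a", "b"], "t"),
                 ("INV", ["t"],      "y")],
    }

    def expand(netlist):
        """netlist: list of (gate_or_macro, input_nets, output_net) tuples."""
        out = []
        for kind, ins, outp in netlist:
            if kind in MACROS:
                # rename the macro's formal pins to this instance's nets
                rename = dict(zip(["a", "b"], ins), t=outp + "_int", y=outp)
                for g, g_ins, g_out in MACROS[kind]:
                    out.append((g, [rename[p] for p in g_ins], rename[g_out]))
            else:
                out.append((kind, ins, outp))
        return out

    print(expand([("NOR2", ["N1", "N2"], "N3"), ("INV", ["N3"], "N4")]))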


fig. 6. Example of network coding with a macro definition.

fig. 7. Further example of a coded network description.
The design verification is done with a time-dependent logic simulator. The designer describes the sequences of input pulses in a so-called Simulation Control Language, and the network response calculated by the computer is given as shown in fig. 8.

fig. 8. Example of PHILSIM output: input signals and the calculated network response as a function of time.
After a process of simulating the function, correcting the network where necessary and resimulating, the specification is met. The designer has now spent about 2-3 months to come to this point. As it is the intention to bring this logic onto the silicon wafer, nobody is allowed to change the network description any more. So for layout and testing this description is used, and not a new coding with all its inherent transfer fault possibilities. Already during the acceptance and design phases the test engineer has taken part in the discussions to make sure that the circuit does not contain constructions which cannot be tested at all. Now he has to make sure that not only can testing be done, but that it can be performed on a production basis, which means on a standard automatic tester in a short time. For this purpose the test engineer works with a logic test verifier which deals with logic stuck-at-one/zero defects, and a program that generates the d.c. parametric tests. With these aids he is informed about the defects which can or cannot be detected, how efficient the test sequence is, etc. (see fig. 9). As LSI test equipment is not cheap, this is an important step which, if not carefully handled, will cost a lot of money afterwards.
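The bookkeeping performed by such a test verifier can be suggested with a small example: for each stuck-at-0/1 defect, check whether any pattern in the test sequence makes the faulty circuit differ from the good one, and report the coverage. This is an editorial sketch; the example network and test set are invented.

    # Illustrative stuck-at fault coverage computation.
    def good(a, b, c):                      # example network: F = (A AND B) OR C
        return (a & b) | c

    def faulty(a, b, c, fault):             # same network with one node stuck
        node, value = fault
        vals = {"A": a, "B": b, "C": c}
        vals[node] = value
        return (vals["A"] & vals["B"]) | vals["C"]

    faults = [(n, v) for n in "ABC" for v in (0, 1)]
    tests = [(1, 1, 0), (0, 1, 1)]          # a deliberately incomplete test set

    detected = {f for f in faults
                if any(good(*t) != faulty(*t, f) for t in tests)}
    print(f"{len(detected)} of {len(faults)} stuck-at defects detected")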

fig. 9. Example of test verifier output: report of which stuck-at defects are detected by the test patterns.

Once the logic has been designed and it has been proven that testing is not a problem, the layout phase can be entered. As all the information to generate the layout is available, this is not a large problem. The partitioning, if necessary, and the placement of the cells is done by hand. Here the knowledge of the designer, who has struggled with his product during the logic setup, is used, rather than exercising an algorithm. This has two reasons. In the first place the computer programs are not yet so extremely clever that they evidently beat the designer's brains, and second, because of the reason just mentioned, the man still has to make himself thoroughly acquainted with the computer placement to be able to do the final optimisation, and this costs as much time as doing it himself right from the beginning, and he is more involved. The placement information (see fig. 10) is added to the original network description and also stored in the computer. As both the interconnection scheme (derived from the network and placement) and the cell layout (retrieved from the library) are known, an automatic wiring routine is able to generate a double layer interconnection pattern. The outcome is presented to the designer (see fig. 11), and as the first shot in placement is never 100% optimal, a loop starts of iterative and partly interactive man-machine work. The program calculates the contribution of the wiring to the circuit delay and this is fed back into the network. This enables the designer to make a final check of his logic with the actual final layout data (see fig. 12). After this is accomplished we end up with two magnetic tapes: one drives the mask pattern generator and the other controls the automatic tester.
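The delay feedback can be pictured as a per-net correction added to the library delay figures; the one-line estimate below is an editorial sketch under invented parameters (the real system derives the wiring capacitances from the generated interconnection pattern).

    # Illustrative update of a cell delay with routed wiring capacitance.
    def updated_delay(tpd_ns, derating_per_fanout, fanout, wire_capacitance_pf,
                      ns_per_pf=0.8):
        """Base propagation delay, derated for fanout, plus a wiring term."""
        return tpd_ns * (1 + derating_per_fanout * fanout) + ns_per_pf * wire_capacitance_pf

    # e.g. a gate with 4 ns base delay driving 3 loads through 1.5 pF of routed wire
    print(updated_delay(4.0, 0.1, 3, 1.5))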

fig. 10. Input listing for network and cell placement.

fig. 11. Result of the automatic wiring routine as presented to the designer.

fig. 12. Cell macros from the library file, extended with tpd and wiring specifications and updated.

EVALUATION

How does the system meet the original goals?

HAND VERSUS MACHINE LAYOUT

A couple of designs, made by hand, have been reworked with the help of the system. It turned out that a designer working with the system is able to generate a layout which is less than 10% larger than a completely optimized hand layout, but the man-time spent was reduced by a factor of six to nine. The most important point, however, was that the circuits met their specifications the first time. Of course this result was due to the fact that our cells can be reached from both ends, while in the cell itself no extra area is necessary to achieve this. That is why they are called COMPACT LOGIC CELLS. For small series one can generate a layout within three man-weeks and still be within less than 20% larger than an optimized layout.

HIT RATE

We have now about 150 LSI circuits designed with the system. Two of them failed. From the first one, a part of the function was never simulated and by consequence refused to work. The second one had a delay line effect, due to a distributed capacitance over a very long polysilicon line. For the others, the first run did meet the given specification.

EDUCATION AND TRANSFER

As Philips itself does not have just one design centre, and as several LSI customers use our products in areas where we do not have enough expertise, it was of utmost importance that the system could be learned easily. When it takes half a year before a man is able to use CAD, the benefit is already doubtful, and for outside customers it is impossible. Both for internal and external customers an intensive course is given of one week. In this week all steps are exercised on examples. The second week the customer starts with his own design and some guidance is given. After that they stand on their own feet, and only regular contacts are necessary to discuss implementation problems and especially testing. This works for internal and external customers. We have made several systems in this way, and in the most extreme case the outside customer developed a system of 15000 gates in 12 chips by sending card decks and receiving computer printouts. We only make and test the parts to their inputs; we are not even allowed to know how the system works, but we have seen it working.

FLEXIBILITY

As all information about the circuits is stored in the computer, small changes can be made easily. The computer input is changed and the whole cycle can be run down with the eye just focussed on the consequences of this change. The coding of the rest is still valid. With respect to layout the system is very flexible, because different approaches to the arrangement of the logic can be tried by just exchanging the deck of cards for the placement.

FUTURE TRENDS

Although the system so far has a good performance, we realize that with the growing complexity some tools will not be adequate. Especially in the logic simulation and test verification, working only at the gate level will take too much computer time. However, the defects are made at gate level, so we cannot forget them. A mixed mode for high level and low level logic descriptions is therefore now under construction.

SUMMARY

The LOCMOS design system is in use in several centres. The typical turnaround time from accepted specification to mask and tester tapes is four months, at a computer cost of about $3000-4000. So in about half a year it is possible to make parts which are correct, at a reasonable price, comprising 1500-3000 gates, which indicates that custom LSI design is economically feasible.


REFERENCES

1. A. J. Strachan and K. Wagner, "Local Oxidation of Silicon/CMOS: Technology/Design System for LSI in CMOS", IEEE International Solid-State Circuits Conference 1974, Digest of Technical Papers, pp. 60-61.
2. J. G. M. Klomp, "CAD for LSI, Production's Interest is in its Economics", ACM SIGDA Newsletter, vol. 6, no. 3, 1976, pp. 11-15.
3. K. Wagner and J. G. M. Klomp, "LOCMOS-CAD: ein wirtschaftliches und produktionsgerichtetes System für den Entwurf von digitalen LSI-Schaltungen", in: Grossintegration - Technologie - Entwurf - Systemen, herausgegeben von Prof. Dr. B. Höfflinger, R. Oldenbourg Verlag, München, 1978, pp. 275-334.

G. Musgrave, editor, COMPUTER-AIDED DESIGN of digital electronic circuits and systems, North-Holland Publishing Company © ECSC, EEC, EAEC, Brussels & Luxembourg, 1979

AUTOMATIC GATE ALLOCATION, PLACEMENT AND ROUTING

Stephen C. Hoffman
CALMA Interactive Graphic Systems
Sunnyvale, California

Algorithms used for automatic gate allocation, placement and routing do not guarantee a completed or acceptable design. However, when combined with an interactive editing capability, they can decrease the time required for these steps in PCB design.

INTRODUCTION

Computer aided design of electronic circuits centers around a design data base that contains the engineer's logic design as input to the gate allocation, placement and routing tasks. When the design has been completed, manufacturing output can be automatically generated (Figure 1). The process of gate allocation is the assignment of logic functions to physical devices. Much of an engineer's design may specify the physical devices to be used for discrete components and higher level functions such as ALU chips or memories. Logic functions, however, need to be assigned to physical devices, and each device is often capable of implementing a variety of logic functions. The gates are assigned to devices so as to reduce the package count and to increase the routability of the board. Placement of the devices onto the board is done so as to make the board routable. Placement must also conform to spatial restrictions based on design rules for thermal isolation, physical obstructions, critical signal lengths, etc. Routing is the task of interconnecting the device pins using etch and vias.

A very similar set of tasks is involved in the design of master slice LSI, or gate arrays [3]. Logic function templates are placed onto gate array locations, and metalization and contacts are equivalent to etch and vias. The concepts are similar enough so that little or no modification of a PCB CAD system is needed to support gate array design. This technology is likely to be of greater importance to current PCB engineers in the future.

AUTOMATIC PROCEDURES

Most of the limitations associated with automatic procedures are apparent from examining the algorithms used to automate these tasks. An understanding of how the algorithms perform will aid the engineer and designer to obtain the most benefit and the least frustration from them. Gate allocation is usually combined with placement since the routability of the gate allocation is dependent on the placement. Gate allocation in the absence of placement can only minimize the package count. Routability is defined to be the probability that the board can be successfully completed by an automatic algorithm. There is no deterministic equation that defines routability. Automatic algorithms use a cost function to assess the routability of a placement.

Figure 1. Integrated PCB design environment: digitizing, automatic packaging and placement, routing and autowrap, with reports (parts list, net list, block list, wire list) and NC tapes (drill, component insertion).


The cost function is based on parameters easily measured by the computer. Parameters commonly used are the total length of wire needed to connect all pins on the board, the total area of all nets, the distribution of expected routes, and the package count. Each of these parameters is then weighted for relative importance. Although such a cost function is sophisticated by algorithmic standards, it is greatly simplified when compared with the factors used by human designers to do placement. Automatic gate allocation and placement is performed by two basic algorithms, constructive initial placement and iterative improvement. I will describe the general nature of each algorithm, although there is considerable variation in the details of their implementation in different CAD systems [1].

Constructive Initial Placement begins with a blank board description of where components are allowed to be placed and the list of unplaced devices and gates. The devices and gates are selected and placed one at a time. The next device or gate to be selected is based on such parameters as the number of connections to already placed components, the size of the gate or device, and special attributes such as the number of connections to the board I/O pins. Weighting factors are applied to these parameters to give a number that represents the importance of placing that gate or device next. The most important gate or device is selected and then a position is chosen for it based on the cost equation mentioned earlier. The algorithm compares the cost of placing a device in various locations and, in the case of gates, also tries allocating the gate to placed devices. The alternative with least cost is chosen for placing a device or allocating a gate. The constructive initial placement algorithm has important features that affect its performance. Devices and gates are placed one at a time and they are not moved once they are placed. This means that the algorithm is relatively fast. It also means that the resulting placement is not optimum since optimum locations for a device are often occupied by previously placed components. Since this technique is relatively fast and inexpensive, it can be used to generate a variety of initial placements by varying the parameter weights for unplaced device selection and the weights in the placement cost function as well as by providing preplaced components to bias the final result. A designer can then select the best placement based on the total cost function or based on his own intuition.

Iterative improvement algorithms begin with all components placed and all gates allocated. The device locations and gate allocations are then interchanged in an attempt to lower the total cost function. The process is iterative in the sense that the interchanges result in a new placement that can make previously undesirable device interchanges now desirable. Unprofitable interchange attempts are avoided by concentrating on devices and gates whose individual placement cost is high or whose optimum location is furthest from their actual location. Interchange candidates are then tried, and the candidate, if any, that provided the greatest improvement in cost is used for the actual interchange. This algorithm is much slower than constructive initial placement due to the number of times cost functions are calculated, which can be hundreds of thousands and even millions of times. Often iterative improvement algorithms are implemented using simpler cost functions to speed the process.
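A minimal rendering of such a weighted cost function and of the pairwise interchange used by iterative improvement might look as follows. This is an editorial sketch: the half-perimeter length estimate, the "net area" term, the weights and the exhaustive swap strategy are assumptions, not taken from any particular CAD system.

    # Illustrative placement cost and pairwise-interchange improvement.
    def net_half_perimeter(net, positions):
        xs = [positions[d][0] for d in net]
        ys = [positions[d][1] for d in net]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    def placement_cost(nets, positions, w_length=1.0, w_area=0.2):
        length = sum(net_half_perimeter(n, positions) for n in nets)
        area = sum(net_half_perimeter(n, positions) ** 2 for n in nets)  # crude "net area"
        return w_length * length + w_area * area

    def improve_by_interchange(nets, positions):
        """Swap pairs of device locations, keeping only swaps that lower the cost."""
        devices = list(positions)
        improved = True
        while improved:
            improved = False
            for i, a in enumerate(devices):
                for b in devices[i + 1:]:
                    before = placement_cost(nets, positions)
                    positions[a], positions[b] = positions[b], positions[a]
                    if placement_cost(nets, positions) < before:
                        improved = True                                   # keep the swap
                    else:
                        positions[a], positions[b] = positions[b], positions[a]  # undo it
        return positions

    nets = [["U1", "U2"], ["U2", "U3"], ["U1", "U3"]]
    placement = {"U1": (0, 0), "U2": (4, 0), "U3": (0, 3)}
    improve_by_interchange(nets, placement)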
The algorithm also achieves the most dramatic improvement early in the process and should eventually be stopped when the amount of improvement per CPU time invested falls below an acceptable value. Although this algorithm can operate on random initial placements, it is also a perfect companion for a constructive initial placement algorithm. Constructive initial placement arrives at a good approximate placement much faster than iterative improvement. Running the iterative algorithm after initial placement can yield a significant improvement in the total cost of the placement.


Due to the inherent simplifications in the cost functions and the algorithms, automatic placement is seldom judged superior to human effort on small boards, although automatic placement is usually adequate and faster than manual placement. On large boards, where the amount of data can boggle a human designer, placement algorithms often outperform human designers based on simplified cost analysis. This may be due to other strategies employed by the human designer that are not measured by the cost function. Special placement design rules can be followed by automatic algorithms provided that the rules can be reduced to simple concepts such as classifying components and locations and restricting the placement of certain components to a class of locations. Complicated rules with dependent situations are beyond the scope of these algorithms. Discrete components are also poorly handled by many CAD systems. This is often an oversight in the design of the placement programs as opposed to an inherent limitation in either of the algorithms. A common simplifying assumption that excludes discretes is that all devices are the same size. Discrete components and large DIPs must then be placed manually. Large components are usually preplaced and discretes are usually added after automatic placement when dealing with programs based on the fixed size assumption. Research is continuing on placement algorithms. One major problem in evaluating different placement programs is that the determination of optimum routability depends strongly on the performance of the routing algorithms.

Automatic routing procedures provide the capability to interconnect the nets on a PC board, while maintaining minimum spacing and adhering to design rules that restrict the use of vias or routes in certain areas of the board. The procedures also allow the routing to make use of existing board features such as initial bus routing and fixed vias. Several important simplifying assumptions are made by routing algorithms. One is that all routes are made on a grid. Common grid sizes of 50 mil (.050 inches) and 25 mil (.025 inches) are used because they match the pin spacing on DIP devices. Another assumption is that minimum spacing can be maintained by using a simple scheme of occupied grid points. A wide bussing trace, for example, may prohibit the use of adjacent grids for routing. Using grid occupancy to insure proper clearance, the bussing trace is said to occupy the adjacent grid cells even though the actual etch does not overlap the grid point. The final assumption is that routes can only be made along orthogonal paths. Some routers do have post-routing clean-up programs that can replace staircase shaped routes with non-orthogonal straight lines, but the routing algorithms use only orthogonal lines for creating the routes.

Automatic routing can be viewed as a three-step process. First the nets are put into a sequenced list of connections to be routed one at a time. Then the routing algorithm attempts to route each connection. If it is routed successfully, it is removed from the list; if not, it is skipped and the next connection is routed. The third step, a clean-up program, is run to remove unneeded vias and perform a variety of optional tasks such as realigning traces, thickening traces, or replacing staircase routes with straight line connections.
The sequencing of connections is an important task and the designer usually contributes to it by specifying critical nets and controlling the weights of the automatic ordering function. The ordering function is based on measurements of the distance between points to be connected, the amount of area between the points, connection to the I/O connector pins or other critical nets. Nets should generally be routed in order from short to long, small area to large area, and of course critical nets before non-critical nets.


There are an abundance of routing programs. However, they make use of two basic algorithms, the probing algorithm and the flood algorithm. A particular router can make use of both algorithms. The probing algorithm or depth-first algorithm uses straight line segments to reach the target. The most direct route is attempted; if it is blocked by an obstacle the algorithm looks for a way around it or backs up. The algorithm usually has limits on the number of probes to try or the number of detours allowed. Thus, a path may exist that will not be found by this algorithm. The probing algorithm finds the first routes very quickly. As the board becomes congested, routes contain more detours and eventually routing attempts begin to fail. This algorithm can complete a high percentage of the connections although it usually does not do 100% routing.

The flood algorithm or breadth-first algorithm is based on expanding a frontier from one grid to the next until the target is reached. Various modifications exist to the basic algorithm, such as limiting the flooding within a window that surrounds the points to be connected. The algorithm can also make flooding occur faster or costlier in one axis direction than the other to give a directional bias for different layers. The flood algorithm is guaranteed to find a path if one exists. The path it finds is also guaranteed to be the shortest or least costly path available. However, this is a slow algorithm compared to the line probe algorithm. Starting from zero, the algorithm can complete a high percentage of connections. It makes more sense, though, to use this algorithm after having used a line probe algorithm. Many of the connections missed by a line probe algorithm can be found by the flood algorithm. Even so, 100% completion still cannot be achieved reliably, if at all.

The performance of these algorithms is limited by some fundamental aspects of the algorithms. One such aspect is that the algorithms route one net at a time and do not consider the consequences of the current path on the routability of future paths. There are some promising experimental algorithms that are not limited in that aspect, most notably the graph theoretical approach [5] and iterative conflict resolution [4]. There has been and will continue to be research on new algorithms and algorithm improvements.
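The breadth-first expansion at the heart of the flood algorithm fits in a few lines. The sketch below is an editorial illustration of the idea on a plain occupancy grid; a production router would add costs, windows, directional bias and multi-layer handling.

    # Illustrative flood (breadth-first) expansion on an occupancy grid.
    from collections import deque

    def flood_route(grid, start, target):
        """grid[y][x] == 1 marks an occupied cell; returns a shortest path or None."""
        h, w = len(grid), len(grid[0])
        came_from = {start: None}
        frontier = deque([start])
        while frontier:
            cell = frontier.popleft()
            if cell == target:
                path = []
                while cell is not None:          # walk the expansion back to the start
                    path.append(cell)
                    cell = came_from[cell]
                return path[::-1]
            x, y = cell
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):  # orthogonal moves
                if 0 <= nx < w and 0 <= ny < h and grid[ny][nx] == 0 \
                        and (nx, ny) not in came_from:
                    came_from[(nx, ny)] = cell
                    frontier.append((nx, ny))
        return None                              # no path exists through free cells

    board = [[0, 0, 0, 0],
             [1, 1, 1, 0],
             [0, 0, 0, 0]]
    print(flood_route(board, (0, 0), (0, 2)))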
2. Particular attention should be given to the effect of vias and traces on adjacent channels. Blockage of adjacent channels can greatly degrade router performance.

3. Board shape can also affect routing performance. A board that has matched X and Y channel capacity is most advantageous.

4. Special consideration should be given to power and ground. Due to their length, power and ground traces are significant topological barriers to routing. Making use of bus bars or buried power and ground planes can significantly improve automatic routing performance.

5. For multilayer board designs, the use of fixed vias can offer significant improvement over random via placement. Regularly spaced vias ensure that channel availability is optimized for routes on internal layers. The number of fixed vias to provide the best routing results should be somewhat less than the number of IC pins on the board.

6. Rules concerning the structure of nets, such as those encountered in ECL technology, can deteriorate router performance and even make the use of automatic algorithms infeasible. Few, if any, routers are capable of adhering to such rules.
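By way of illustration of the flood (breadth-first) algorithm described above, the short Python sketch below routes a single two-pin connection on one layer of a routing grid. It is an illustrative reconstruction only, not taken from any of the routers discussed here; the function name flood_route and the grid representation are invented for the example.

    from collections import deque

    def flood_route(grid, source, target):
        """Minimal Lee-style flood router on a single layer.

        grid    -- 2-D list; 0 = free cell, 1 = blocked cell
        source  -- (row, col) of the start pin
        target  -- (row, col) of the end pin
        Returns the shortest list of cells from source to target,
        or None if the target cannot be reached.
        """
        rows, cols = len(grid), len(grid[0])
        frontier = deque([source])
        parent = {source: None}          # visited set and back-pointers

        while frontier:
            cell = frontier.popleft()    # expand the oldest frontier cell first
            if cell == target:
                # Retrace the path through the back-pointers.
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = parent[cell]
                return list(reversed(path))
            r, c = cell
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                neighbour = (nr, nc)
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] == 0
                        and neighbour not in parent):
                    parent[neighbour] = cell
                    frontier.append(neighbour)
        return None                      # no path exists

A windowed or direction-biased variant of the kind mentioned above would replace the plain first-in, first-out frontier with a cost-ordered queue.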

INTERACTIVE EDITING

Automatic gate allocation, placement and routing is limited not only in its ability to complete the job, but also in its ability to adapt to design rules. In spite of these limitations, automatic procedures offer significant improvements in the time needed to complete a design. When evaluating this improvement it is necessary to consider all tasks needed to complete the design. The percentage of routing completed is often an irrelevant indication of the usefulness of automatic procedures. Input data needs to be prepared, and the output of the automatic procedures needs to be edited for completion of the design and for correction of violations of design rules not automatically followed. Although automatic procedures are not promising for someone who expects a fully automatic solution, they are attractive to someone who can generate the required input automatically from digitized schematics and can interactively edit the design at a graphics terminal.

The interactive graphics facility can make check plots and allow graphic design data to be manipulated using a tablet and CRT display. It can offer special display functions for identifying rules violations, viewing unrouted connections, and other special functions. Another important feature that can be offered is an expanded scope of data representation, such as higher grid resolution to allow optimum positioning of traces or components, and all-angle line segments to allow the maximum number of traces to pass between obstacles. The designer is not restrained by the simplifications used in automatic algorithms.

A necessary feature of an interactive edit facility is automatic verification of the edited design. It must verify that all interconnections are complete and that design rules have been followed. Automation of checking tasks can be done much more successfully than automation of design tasks.
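As an illustration of the completeness half of such a verification, the following sketch checks that the routed segments of one net join all of its pins into a single group, assuming each net is given as its pins plus its segments and that segments meet at exact coordinates. The name net_is_complete and the data layout are assumptions made for the example.

    def net_is_complete(pins, segments):
        """Check that routed segments connect every pin of one net.

        pins     -- list of pin coordinates, e.g. [(x, y), ...]
        segments -- list of routed wire segments as coordinate pairs,
                    e.g. [((x1, y1), (x2, y2)), ...]
        Returns True if all pins end up in a single connected group.
        """
        parent = {}

        def find(p):                      # union-find with path compression
            parent.setdefault(p, p)
            while parent[p] != p:
                parent[p] = parent[parent[p]]
                p = parent[p]
            return p

        def union(a, b):
            parent[find(a)] = find(b)

        for a, b in segments:             # each segment joins its two endpoints
            union(a, b)

        roots = {find(p) for p in pins}
        return len(roots) == 1            # complete if one group remains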

Graphic edit facilities can be made available on mainframe computers for mainframe-based routers, or they can be provided by a minicomputer-based interactive graphics system. Mini-based systems can operate as satellite subsystems and interface to the mainframe for batch execution of placement and routing programs. Some vendor-supplied graphics systems offer automatic placement and routing software that runs directly on the minicomputer. Such systems can be self-contained, from digitizing schematic drawings and capturing input for the automatic placement and routing programs, to providing manufacturing output for a wide variety of numerically controlled machines.

To optimize the complete design cycle from schematic input to manufacturing output, the strategy for using automatic algorithms changes when manual completion is
foreseen as an inevitable task. Rather than attempting to get the highest percentage of completion from the automatic algorithms, the strategy is to make the manual completion task as quick and easy as possible. This can be done by reserving certain features on the board for use by the designer in the completion process. On a congested board, a human designer will make more intelligent use of these features than an algorithm. One of the most useful features to reserve for manual completion is space for vias. Fixed via locations are described to the router as obstacles on all layers; the designer can then insert vias at these locations or use them for routing traces. Reserving two layers of a multilayer board can also make manual completion an easy task, particularly if it is combined with the use of reserved via locations. Another strategy, for six-layer boards with fixed vias, is to attempt to route two layers automatically without any vias, then route two more layers automatically with vias, and allow the unused vias to be used in completing the design on two more layers. Routing automatically on a large grid (50 mil) and using the extra channels provided by a small grid (25 mil) for manual completion is also a useful technique.

DESIGN METHODOLOGY

There are many possible ways to configure a CAD system based on various design methodologies. Design methodologies vary from one installation to another, and sometimes from one project to another, based on variations in budgets for design tasks, variations in company organization, and investment in CAD facilities. These factors often play an important role in the perceived benefit of using automatic gate allocation, placement and routing. For example, the digitizing of engineering schematics is often perceived as an extravagant item in engineering budgets. Yet, when a manufacturing organization budgets money to design PCB's, it must budget for the manual encoding of hand-drawn engineering schematics if it is to use automatic algorithms. Manual encoding is error prone and requires manual checking procedures, too. However, if engineering schematics were digitized, not only could the manufacturing organization receive error-free machine-readable input, but engineering would have higher quality schematic drawings. There are also times when the resources for digitizing schematics are not available. In that case, manual encoding is the only way to enter the data into the CAD system. It is possible that both of these methodologies are followed in the same installation. Software for automatic gate allocation, placement, and routing should be modular enough to support these variations in methodology. One configuration might be to use automatic gate allocation and placement to provide fully automated generation of wire wrap prototype boards. Another example is the method of using package-level schematics and an assembly drawing as input to the routing programs, bypassing automatic gate allocation and placement.

CONCLUSION

Automatic gate allocation, placement, and routing are useful features of a CAD system. Rather than being at the center of such a system, they are best appreciated as optional tools in the design process. Although limited in performance by the algorithms used, these automatic procedures can provide great cost and time savings when used with design rules that favor the algorithms and the manual completion task.

REFERENCES

[1] M. Hanan, P. K. Wolff Sr., and B. J. Agule (1976), "Some Experimental Results on Placement Techniques", Proc. 13th Annual D.A. Workshop, pp. 214-224.
[2] D. W. Hightower (1973), "The Interconnection Problem - A Tutorial", Proc. 10th Annual D.A. Workshop, pp. 1-21.
[3] Y. Ozawa, M. Murkani, and Suzuki (1974), "Master Slice LSI Computer Aided Design System", Proc. 11th Annual D.A. Workshop, pp. 19-25.
[4] F. Rubin (1974), "An Iterative Technique for Printed Wire Routing", Proc. 11th Annual D.A. Workshop, pp. 308-313.
[5] M. C. van Lier and R. H. J. M. Otten (1973), "On the Mathematical Formulation of the Wiring Problem", J. Circuit Theory Appl., Vol. 1, pp. 137-147.
[6] D. C. Wilson and R. J. Smith II (1976), "An Analytic Technique for Router Comparison", Proc. 13th Annual D.A. Workshop, pp. 251-258.


INTEGRATED CAD FOR LSI

K. LOOSEMORE
COMPEDA LIMITED
COMPEDA HOUSE, WALKERN ROAD, STEVENAGE, HERTS SG1 3QP

The increasing complexity of integrated circuits is making it more and more difficult to design circuits manually. This, coupled with growing manpower costs, has highlighted the need for an integrated design aid system. To date, conventional systems have tended to concentrate on automating the draughting process. Compeda's GAELIC system has been developed to provide engineers with a complete design facility including powerful automatic layout, logic simulation, design rule checking and circuit function checking in addition to draughting, editing and mask generation functions. The simulator operates on a selective trace, next event basis and detects hazards in the time domain. The automatic layout program is designed for efficient layout of variable size cells in any technology and allows user interaction for further improvements in efficiency. The design rule checking program uses a novel language approach which allows users to code their own design rules and significantly broadens the range of rules that can be applied. For mask function checking, the system includes a program which operates on the mask data to generate a gate map of a circuit. This paper will discuss the detailed requirements of an integrated CAD system and examine the GAELIC approach to meeting them.

INTRODUCTION

The past decade has seen considerable advances in technology and design expertise in the field of integrated circuits. So much has the complexity of IC's increased that the paper and pencil methods applicable in the 60's no longer give the necessary speed and flexibility required by IC designers today. Automatic draughting (digitisers) eased the problem a little, but eventually more automation was required and so blossomed the concept of Computer Aided Design. Here the computer is expected to play a more significant role in the design process by performing such useful functions as checking of input data, straightening up lines, interactive drawing/editing, and finally automatic driving of mask-making devices.

A typical CAD system will consist of a small computer with some disk storage, a visual display terminal of some sort, and perhaps a digitiser and/or magnetic tape drives and/or a plotter. It will have the capability to accept input from either an on-line or off-line digitiser, allow the user the use of interactive graphics techniques to modify and develop his design, and then provide a number of programs which drive mask-making devices either on-line or off-line.


This type of system forms the hub of the design process but completely ignores some of the other functions the designer may wish to perform, even though it is quite practicable for them to be carried out automatically. An example of this is simulation. The designer will begin by designing a circuit as a logic diagram. In order to check its correctness he may choose to use one of the several commercially available circuit simulation programs. Because of the limitations of his CAD system he will probably have to use a different computer! There are several other examples and contexts where it would obviously help to have everything "under one roof". Systems which supply a structured range of utilities, all ultimately connected, are referred to as integrated CAD systems, and this paper proposes to discuss the requirements of an integrated CAD system compared to what is available.

To describe the requirements of an integrated CAD system it is worthwhile to take a look at the ways in which integrated circuits are designed. In order to get a "true" rather than imagined view of the requirements, several months were spent talking to IC designers across a broad spectrum, ranging from some very large USA semiconductor manufacturers to a few small British firms.

The first phase of design consists of defining, at some level of logic, the logical make-up of the circuit. There has been a gradual increase in the use of modularity in this area; whereas initially it was sufficient to design the circuit in terms of basic gates, the increase in complexity has meant that much higher level logic modules are being developed and used. Shift registers, counters, encoders, decoders and even blocks of memory are being used as basic building blocks. Because technology is moving so fast, this "library" of building blocks does not remain static by any means. So here we come to the designer's first problem. He has designed his logic and now he wants to check that it works. CAD technology has responded to this problem by producing logic simulators.

In considering the simulation requirement outlined in the previous paragraph it is possible to list the main requirements of our first integrated CAD utility, the simulator.

1. It must be library based. Because the design team is constantly needing to update its library of building blocks, there must be a way to specify and store descriptions of those logic structures which are going to be used later.

2. In order to achieve (1) and also to reduce coding effort the simulator must have a macro facility. A convenient way to implement this is to allow new building blocks to be made up from a combination of both existing ones and the built-in library of basic gates. The macro facility can also reduce the amount of data space needed by the simulator, thus allowing larger circuits to be simulated. If used properly, the cost of simulation can also be substantially reduced.

3. The logic levels simulated should reflect those actually used in the industry. For digital circuits four 'logic levels' are required, these being the "on", "off", "don't care", and "high impedance" states.

4. Dynamic logic capability. With the increasing use of dynamic logic the simulator needs to be able to model stored charge conditions.

5. A basic internal set of often used gate types. These will include the usual run of "and", "or", etc., plus "wired or", and basic memory types.

6. Timing characteristics.
A minimum requirement is flexible rise and fall time definition for all of the gates (including those in the internal library) and the ability to vary the timing of the output trace. In particular, three different types of output are required: (a) at specified times; (b) at specified intervals; (c) on certain events (e.g. a gate changing value).
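The following Python sketch illustrates, in a few lines, the flavour of a next-event, selective-trace simulator using the four logic values listed in requirement 3. It is not the GAELIC simulator; the class and function names are invented for the example, and the high impedance state is simply treated as an unknown by the sample gate function.

    import heapq

    ZERO, ONE, DONT_CARE, HIGH_Z = "0", "1", "X", "Z"

    def nand(a, b):
        """Two-input NAND over the four logic values (Z treated as unknown)."""
        if a == ZERO or b == ZERO:
            return ONE
        if a == ONE and b == ONE:
            return ZERO
        return DONT_CARE

    class Simulator:
        def __init__(self):
            self.values = {}    # net name -> current logic value
            self.fanout = {}    # net name -> list of gates driven by that net
            self.events = []    # time-ordered event queue (next-event driven)

        def add_gate(self, fn, inputs, output, delay):
            gate = (fn, inputs, output, delay)
            for net in inputs:
                self.fanout.setdefault(net, []).append(gate)

        def set_input(self, time, net, value):
            heapq.heappush(self.events, (time, net, value))

        def run(self, trace=()):
            while self.events:
                time, net, value = heapq.heappop(self.events)
                if self.values.get(net, DONT_CARE) == value:
                    continue          # no change: do not re-evaluate the fan-out
                self.values[net] = value
                if net in trace:
                    print(f"t={time}  {net} -> {value}")
                # Selective trace: only gates fed by the changed net are evaluated.
                for fn, inputs, output, delay in self.fanout.get(net, []):
                    new = fn(*(self.values.get(i, DONT_CARE) for i in inputs))
                    heapq.heappush(self.events, (time + delay, output, new))

Driving, say, a cross-coupled NAND latch built with add_gate and a few set_input calls would then print a change-only trace of the selected nets.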

I have deliberately left input to the simulator for discussion later, as this should be regarded as an input to the integrated system rather than as input to one module.

After the logic of the circuit has been validated the next step is layout. For any reasonable size of circuit this takes place in two stages. First of all a set of logical modules is designed, consisting typically of counters, shift registers and blocks of similar logic. Secondly, these blocks are placed on a chip and the interconnections between them are routed. Historically the logic modules have been input by making a drawing on paper and then digitising it into some sort of computer database. This method is still largely used, although some use is being made of "standard cell" libraries supplied by larger manufacturers. A newer approach, not fully accepted yet, is to design logic modules using a 'stick' diagram (1), which allows some degree of technology independence.

The second phase of layout is concerned with performing the physical interconnections between the blocks. Here CAD, or rather design automation, has responded with a host of automatic/interactive layout programs. Many of these in the past have been standard cell based, but the need has been realised for something more flexible. So what can we expect of CAD modules to help with automatic layout?

1) Must be library based. Quite apart from the obvious need for a library of cells when using a standard cell approach, we need, as with the simulator, the ability to build up a library of custom designed modules for use in later designs.

2) Must be able to handle any size of cells. Because of the diversity of complexity of the different cells being laid out (from individual gates up to complete memories), the standard sized cell approach has already reached its limits.

3) Must be technology/process independent. Because of the rapid advances being made in the field, any automatic layout module that does not have this capability will quickly find itself out of date.

4) Interaction. Most of the currently used automatic layout programs suffer from a lack of interactive capability. This is proving to be a problem, particularly when designs turn up where it is necessary to put certain modules in particular places. What is required is a layout program which represents true CAD in the sense that it allows the user to interact intimately with the design process as it is carried out, thus tailoring the end product more exactly to the designer's requirements.

Non-Automatic Layout

There are a few instances in IC design where the use of automatic layout techniques is not applicable, in particular memory design. Because of the regular structure of this type of chip it is often more efficient, in terms of man-months and the final size of the design, for the design to be carried out "manually". For this purpose the designer requires some sort of computerised drawing board.


This utility, because of its broad application to all phases of design, will normally stand at the hub of an integrated system and usually consists of an interactive graphics editor using either a storage or refresh display, usually coupled to some sort of digitising capability. Such graphic editing systems are well known and it is not, therefore, worth going into much detail about their requirements.

Assuming the layout is now completed, what further work is required before sending the data to an IC manufacturing plant? There are two main checks which are performed at this stage. First of all a check is made to ensure that the rectangles, polygons etc. actually represent the circuit intended. This is called function checking and is a new concept in CAD. Historically, this has been carried out by the somewhat laborious process of having a few designers/draughtsmen scan over the artwork, device by device, trying to spot the errors. There is now a capability to perform this check automatically, and I feel that both this and automatic layout, while not being fully appreciated now, will have to become a must for inclusion in any integrated system.

Design Rule Checking

The second main check to be performed at this stage is called design rule checking. Because of the limitations of silicon processing, and problems like misalignment of masks, circuits have to be designed to certain minimum widths, separations, etc., to be producible on whatever process is being used. There are a number of programs available at the moment, most of them geared towards a particular manufacturer's environment. This restriction on applicability is understandable since IC manufacturers tend to be a secretive lot and, I suspect, like to play "ours is better than yours" with each other. This has meant that of the many CAD systems about, very few have an integrated design rule checker.

When the checks are complete it is time to generate control tapes for some mask-making device, e.g. pattern generators. So the integrated system needs to have, preferably, a few different post processors giving the capability of running on several different devices. Also, check plots are going to be needed, together with some archiving capability so that successful design effort can be re-used for later designs.

An Integrated System

So far I have discussed the main modules that can be expected to exist in an integrated CAD system. The next question must be "How do they fit together?". Because of the way in which these modules are used in a design, it would seem that a linear system of use is reasonable. In this type of system a design spec would be input to an automatic layout module, the output from automatic layout would be input to a functional checking module, and so on. This type of system has the advantage that it imposes a certain degree of regimentation on the designer and acts as an aid to design management. This stems from the need to run the design through the modules serially, thus making sure that one stage is finished before the next one starts.

From this type of system arose the structures of modern integrated systems, where the interface files between the various modules are exactly the same. This has the effect of a star network of utilities centred about a design database which binds the whole system together. Because of this central position in the system, the design of the database is very important. What are the requirements of a database for an integrated CAD system?

1. It must be accessible in a variety of ways in an efficient manner. For instance, a utility to generate pattern generator tapes needs to access the database on a mask-by-mask basis, whereas a function checker or dimension checker needs to access it polygon by polygon across masks but in the same general area.


2. It must be able to structure the design (automatically) as a number of separate areas so that when access is required to only a small part of the design, only that area needs to be accessed rather than the whole database. This is particularly important when using a graphic editor, since it speeds up response time by limiting the amount of database accessing required.

3. It must attempt to reduce the amount of data stored to an absolute minimum. Because of the rate at which complexity is increasing in LSI designs, certainly a million polygons are not far away, and the way is open for quite clever storage schemes to manage the amount of computer data that this implies.

4. One of the ways of achieving (3), and also of tailoring the database to the design process, is to allow the definition of sub-designs of which instances may appear several times on the chip. These should be capable of a large depth of nesting, rather than the depth of 2 or 3 allowed by some of today's systems.

The GAELIC System

In order to illustrate an integrated CAD system and how it is used I would like to take as an example Compeda's GAELIC system, which has been developed over several years from an idea by a research worker at the University of Edinburgh and has successfully moved with the times.

The GAELIC system consists of a central hub including a 2½-dimensional graphic editor working directly on the database. Around this are provided several utilities concerned with mass input/output to and from the system, and a number of powerful utilities including automatic layout, simulation, function checking and design rule checking. These utilities are connected together by a database which, at the time of its design, represented a breakthrough in data storage for LSI. The resulting system is one that appears to have more flexibility of use than any other.

Input to the system is via a number of routes. Circuits can be digitised using an off-line digitiser (of which 2 standard ones are supported at the moment) or input from other CAD systems. Because the automatic design aids require information other than graphical, the system also inputs a language which is compiled directly into the database and which can also be generated from the database. Because of the nature of the job carried out by the database, it needs to be implemented in the most efficient way on each machine on which GAELIC is mounted. This tends to reduce its machine independence, so the readable language concept has been used with advantage as a completely machine independent design definition. It is also used as a 'first instance' hook for individual users to interface their own software to the GAELIC system.
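To make database requirements (2) and (4) above more concrete, the sketch below shows one possible, much simplified, data structure for a hierarchical design database: each sub-design is stored once, instances may be nested to any depth, and shapes can be retrieved by area. It is not the GAELIC database format; the class and method names are invented for the example.

    class Cell:
        """A design (or sub-design) holding polygons and nested instances."""
        def __init__(self, name):
            self.name = name
            self.shapes = []      # (mask layer, (xmin, ymin, xmax, ymax))
            self.instances = []   # (child Cell, (dx, dy) placement offset)

        def add_shape(self, layer, box):
            self.shapes.append((layer, box))

        def add_instance(self, child, offset):
            self.instances.append((child, offset))

        def shapes_in_area(self, window, offset=(0, 0)):
            """Yield shapes overlapping a query window, expanding nested
            instances to any depth."""
            ox, oy = offset
            wx1, wy1, wx2, wy2 = window
            for layer, (x1, y1, x2, y2) in self.shapes:
                x1, y1, x2, y2 = x1 + ox, y1 + oy, x2 + ox, y2 + oy
                if x1 <= wx2 and x2 >= wx1 and y1 <= wy2 and y2 >= wy1:
                    yield layer, (x1, y1, x2, y2)
            for child, (dx, dy) in self.instances:
                yield from child.shapes_in_area(window, (ox + dx, oy + dy))

A real system would also prune the descent using a bounding box stored with each instance, rather than visiting every instance as this sketch does.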


The outputs from GAELIC are also varied. As well as utilities to drive no less than four different plotters, the system shows a whole cross-section of the use it has had over the years, supplying the means to drive the Ferranti Master Plotter in both modes, GYREX, David Mann and Electromask pattern generators, and also to output a design to a stand-alone design system.

In between the input and output utilities stand the automatic design aids. The logic simulator is applicable to both static and dynamic logic and to both combinational and sequential circuits. It incorporates a wide variety of built-in gates and memory devices and provides facilities for macro devices and complex gates. The output is in the form of a trace of both selected inputs and outputs, together with 'spikes' and 'glitches' that may be detected.

The automatic layout module is unique in that it is the only commercially available one allowing variable sized cells. Cells are defined and archived using any of the methods available for input to the GAELIC system, and this, together with a circuit definition, is processed either automatically or interactively to generate a set of masks for the circuit. Output is to the GAELIC database, allowing several iterations round the loop to be performed so that the auto layout program can be used for sub-designs as well as for a complete chip. The system contains enough parameterisation to allow the auto layout module to be technology and process independent. Because of the novel approach adopted by this program, time spent at the terminal is of the order of half an hour per run, thus allowing the designer to run the program several times, experimenting with different layouts.

The two main checking programs comprise an automatic function checker and an automatic design rule checker. The function checker works from a set of mask data held in the database and, together with some extra information supplied by the designer, outputs what is, effectively, a circuit diagram of the design. This program is, of course, not restricted to designs output from the automatic layout module but is sufficiently flexible, in terms of how the circuit is constructed, to allow great latitude in the constraints it puts on the designer. This means that even very loosely constrained manual layouts are amenable to analysis using this technique.

Finally, the fourth main component of the system is a completely new type of design rule checker embodying a procedural approach for efficiency and flexibility. The designer codes up in tabular form a set of custom rules which are compiled into a set of programming language routines. These are further compiled together with a piece of standard code to produce a custom program whose speed of operation stems from cutting down on generality and tailoring the program particularly to one set of rules. This approach, when embedded in a computer operating system, has been found to help considerably with design management.

The GAELIC system contains such a large range of facilities that it is impossible to cover all of them in detail here. But it does represent what is possibly the most advanced integrated CAD system available today.

References

(1) J.D. Williams, "STICKS, a new approach to LSI design", Master's Thesis, MIT, June 1977.
(2) K.J. Loosemore, "IC Design - Misery and Magic".
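To give a concrete flavour of the sort of geometric rule such a design rule checker applies, the sketch below tests one spacing rule between rectangles on a single mask. It is purely illustrative and bears no relation to the GAELIC rule tables or the code they generate; the function name and data layout are assumptions for the example.

    def spacing_violations(rectangles, min_space):
        """Report pairs of rectangles on one mask closer than min_space.

        rectangles -- list of (xmin, ymin, xmax, ymax) boxes on a single layer
        min_space  -- minimum allowed edge-to-edge separation
        """
        violations = []
        for i in range(len(rectangles)):
            for j in range(i + 1, len(rectangles)):
                ax1, ay1, ax2, ay2 = rectangles[i]
                bx1, by1, bx2, by2 = rectangles[j]
                # Edge-to-edge gap along each axis (zero means overlap or touch).
                dx = max(bx1 - ax2, ax1 - bx2, 0)
                dy = max(by1 - ay2, ay1 - by2, 0)
                if dx == 0 and dy == 0:
                    continue     # overlapping or abutting: not a spacing error
                if dx * dx + dy * dy < min_space * min_space:
                    violations.append((rectangles[i], rectangles[j]))
        return violations

A production checker would of course sort or bin the shapes so that only nearby pairs are compared, rather than testing every pair as this sketch does.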

[Figure: GAELIC database system - area-structured main definition; the database provides access by layer (or mask), access by physical locality, reflects the design structure, minimises filestore accesses and compactly stores design data.]

[Figure: GAELIC VLSI design system overview - the GAELIC database and graphics editor at the hub, connected to the GAELIC language, digitisers and other design systems on the input side, to the automatic IC layout, function checking, dimension checking and archiving utilities, and to electron beam systems, pattern generators, check plotters and other design systems on the output side.]

E.E.C. PROJECT SESSION

Chairman: E. DE MARI, European Communities


EUROPEAN COMMUNITIES STUDY ON CAD OF DIGITAL CIRCUITS AND SYSTEMS

INTRODUCTION

A. De Mari *
The Commission of the European Communities

ABSTRACT
This paper introduces the salient features of a feasibility study on computer-aided design of digital circuits and systems, sponsored by the Commission of the European Communities, with the aim of assessing state-of-the-art techniques, requirements and possibilities of further development within the Member States. The Study was awarded by the Commission in mid-1977 to an international consortium led by SAGET, and comprising NIXDORF, PLESSEY and SEMA, with the consultancy of BRUNEL UNIVERSITY. It was completed in October 1978. Results are described in three papers following this introductory presentation.

INTRODUCTION

Logic circuit design has been heavily influenced in recent years, and will probably be more so in the near future, by the dramatic evolution of technology towards higher levels of component integration and complexity. Subsystems with increasing inherent intelligence, allowing the most diverse and self-contained functions, are manufactured on single chip devices, altering to a major degree the design process needed to obtain tractable and fully controllable products. Computer aids, developed in past years, have been stretched to the utmost of their capabilities in the attempt, often unsuccessful, to cover ever increasing requirements in terms of modelling accuracy, performance prediction and testability, to name but a few. On the other hand, since significant new computer-aided integration design packages call for accurate planning and substantial investments (up to millions of European Communities Units of Account), a preceding feasibility study becomes indispensable.

The project reported herein concerns such a technical and economic feasibility study (user-oriented) undertaken by the Commission of the European Communities on computer aids, methodology and user environment. Specific themes covered range from quantitative system conceptualisation and description to modelling (from basic logic elements to computer components), performance prediction and design for testability. Because of limited resources, device studies at the physics level on one side, and mask design problems on the other, have been left outside the scope of the project. The ultimate objective of the study was the definition of recommendations for cooperative development actions, if they appeared to be beneficial and desirable, within the European Communities' Member States.

* on part-time secondment from FIAT-TEKSID


The project comprised two parts:

a. assessment, through an exhaustive survey, of the current state-of-the-art of computer aids, design methodology, and user requirements (Survey Task);

b. evaluation of survey data, identification of problem areas and of evolution in technology, conclusions and recommendations for further work (Analysis Task).

BACKGROUND

The Council of the European Communities approved, on 15th July 1974, a Resolution on a Community policy on data processing (OJ No. C86, 20.7.1974, p.1), thus paving the way for concrete actions to be proposed by the Commission in the broad data processing area. The Resolution was based on the awareness of:

- the importance of data processing for the economic and technological position of the Community in the world;
- the imbalance of the data processing industry in the world and the unsatisfactory level of applications within the Community;
- the effectiveness of competition and the need to encourage European-based companies to become more competitive;

and the conviction that:

- both companies controlled from outside the Community Member States and European companies can coexist and prosper in an expanding market;
- a more effective use of resources is obtainable through cooperation and joint actions in suitable fields.

The Resolution welcomed, among other initiatives, the Commission's intention to submit priority proposals concerning a limited number of joint projects of European interest in the field of data processing applications, and the promotion of data processing applications and of industrial development projects in areas of common interest involving transnational cooperation.

On 13th March 1975, the Commission submitted to the Council a proposal for a Council Decision adopting a number of draft projects on data processing (OJ No. C99, 2.5.1975, p.10), comprising a study of developments in computer-aided design. The proposal included basic motivations, main objectives, project structure and summarised content.

On 18/29th May 1975, the Economic and Social Committee drew up its opinion on the communication from the Commission to the Council concerning initial proposals for priority projects on data processing (OJ No. C263, 17.11.1975, p.44). This opinion stated, among other comments, that the proposed priority projects could make a useful contribution to the Community policy on data processing. Moreover, the opinion indicated that the responsibility for the implementation of such projects resides largely within the data processing industry in the Community.


On 23rd September 1975, a Resolution was deliberated by the European Parliament concerning its opinion on the communication from the Commission of the European Communities to the Council containing initial proposals for priority projects in data processing (OJ No. C239, 20.10.1975, p.16). The Resolution included, among other items, the approval of the Commission's proposed choice of projects in the field of data processing, as the first specific practical measures to be taken with a view to establishing a Community data processing policy.

On 22nd July 1976, the Council decided to adopt a series of three joint data processing projects (OJ No. L223, 16.8.1976, p.11), including the study in computer-aided design of digital electronic circuits (hereafter called the "CAD Electronics Study"), and approved the appropriations necessary for carrying out the projects within the budgets of the European Communities. The Council motivated the decision also on the grounds of recognised priority for those projects, likely to help to meet the needs of users and to increase the ability of the European-based data processing industry to satisfy these needs on the European and world markets; in particular, improved computer-aided design techniques were considered necessary to contribute to the strength of the European electronics industry. The Commission was entrusted with implementing the projects.

In this task, the Commission is assisted by an Advisory Committee composed of representatives of the Member States. The Committee was set up for the specific purpose of assisting the projects adopted in the above Decision. In addition, for the operational task of carrying out each of the projects adopted, provision was made within the Decision for a project director (or leader), assisted and advised by a technical subcommittee. The Advisory Committee was charged with specific duties which included the choice of Commission project leaders, the choice of the organisations to which the work was to be entrusted, and the composition and responsibilities of the technical subcommittees. The organisational and operational structure, therefore, was centred, for each project, on one or more outside organisations selected as contractors to perform the actual work, supervised by a Commission project leader assisted by the technical subcommittee (hereafter the "Technical Committee"), consisting of one or more technical experts per Member State. Each project leader was to report directly to the Advisory Committee, whose duties were to assist the Commission in the execution of all data processing projects. The Advisory and Technical Committees were set up in the Fall of 1976, and the project leader for the CAD Electronics Study was selected in November 1976, to start work on the preparatory chores of the project on 1st December.

TENDER ACTION

Technical specifications and work statements for the CAD Electronics Study were prepared by the project leader during the months of December 1976 and January 1977, with assistance and advice from the Technical Committee. A call for tenders was published on 1st February 1977 (OJ No. C24, 1.2.1977, p.23), with the essential information on the tender action regarding duration (six weeks, closing date 18th March), availability of the Invitation to Tender Document (8th February), an open briefing for all potential tenderers (23rd February), an indication of the procedure to be followed for evaluation of bids, and a brief description of the project with duration (12 months) and estimated level of effort (a total of 42-58 man-months). The Invitation to Tender Document, delivered 8th February, comprised the technical specifications and work statement for the study, conditions for presentation of tenders, evaluation criteria, and administrative and contract conditions; the maximum budget available for the tender was indicated as 210,000 Accounting Units.


Following the close of the tender action, which included interviews with the tendering teams, a systematic and thorough tender evaluation procedure, adopted earlier by the Advisory Committee, was performed during the month of April by an Evaluation Group. This Group included the Project Leader, members of the Technical Committee, and independent experts from within and outside the Commission, thus obtaining a competent and objective coverage of the various parts of the tenders: technical, managerial and administrative. Numerical gradings were assigned by the Evaluation Group to a number of detailed aspects of each part of the tenders, and were subsequently merged through a predefined weighting algorithm to reach a final quantitative assessment of each bid, complemented by qualitative appreciations and judgements. The offer presented by a consortium led by SAGET S.a.r.l. (Luxembourg) was unanimously recommended by the Evaluation Group as the front-runner, being fully capable of executing the study, subject to some negotiation of minor points. The consortium included NIXDORF (West Germany), PLESSEY (United Kingdom), SEMA (France), and the consultancy of BRUNEL University (United Kingdom). On 2nd May the Advisory Committee was also unanimous in supporting the recommendation of the Evaluation Group. With regard to such recommendations, the Commission awarded the contract to SAGET in June 1977, at the completion of the necessary negotiations with the tenderer on technical and contractual details.

STUDY SPECIFICATIONS

A brief synopsis of the technical specifications for the study, which appeared in the Invitation to Tender Document (ITT No. T/3/77, 8.2.1977, App. I), is given below.

I Technical Objectives

a. Assessment of the current state-of-the-art of computer-aided logic circuit design, indications of cost benefits, user requirements, problem areas, impact of technology evolution.

b. Time projection of designers' opportunities and requirements within an extrapolated electronics and computer evolution in the 1979-82 period.

c. Investigation of the opportunity (in terms of strategic, scientific, industrial, and economic benefit) for further development projects within the EEC Member States, taking into account developments elsewhere (e.g. USA and Japan).

d. Recommendations for further Community work, if appropriate, with detailed justifications.

IIA Technical Breakdown: Specific Topics

The topics listed below represent, as the backbone of the study, a selection of computer-aided design aspects of logic circuits, made on the basis of estimated highest returns and consistency with the resources available.

1. Description Languages. Hardware description languages, or in general product specification means, are considered, for the present purposes, as tools for describing circuit structures, internal and external functional relationships and algorithms. Topics relevant to the investigation include degree of diffusion and adequacy in the industrial environment, level of conceptual abstraction (algorithmic, PMS, register transfer, etc.), language specialisation, application dependency, technology dependency.

2. Synthesis. The process leading to a detailed logic design from the functional behaviour of the system quantitatively defined (e.g. with a description language) is referred to as synthesis. Availability, requirements
and possibilities of conceiving a general strategy, methodology, formalisation and computer aids to assist the designer in such a delicate process (especially for the higher levels of complexity of modern practical systems) represent an important topic of investigation of the study. Three aspects, highly related to technology evolution, are of particular relevance:

a. system architecture, concerning the distribution of resources and the internal organisation of the system;

b. hardware/software trade-offs, as a calibration of the degree of involvement of software predicted with quantitative tools rather than influenced by qualitative attitudes;

c. partitioning, concerning the separation of the system into modules, on several hierarchical levels, according to the typical opportunities or constraints the designer is faced with.

3. Component Modelling. Components may be defined as primitive devices which may be interconnected to implement a specific application system. In view of the ever increasing circuit complexity, associated with the inherent limitations of computer hardware/software tools, component families considered within the scope of the present study range from logic components (gates, drivers, ...) to computer components (processor, memory, I/O control, ...), with emphasis on macrologic components (adders, registers, ...) and microcomputer components (arithmetic, timing, ...). Topics of relevance to the study include delay modelling (zero and unit delay, assignable gate delays, inertial delays, etc.), logic value multiplicity, fault modelling, and model standardisation for different components or technologies.

4. Performance Prediction. Circuit performance prediction, usually obtained with circuit simulation software packages, is an essential tool for design verification, including logic verification. The treatment of error conditions (signal spikes, hazards, races, oscillations) deserves the major emphasis. Other pertinent aspects comprise input description techniques (pre-processors), the main internal driving mechanism (time flow algorithm: fixed-time increment, next event, selective trace), initialisation strategy, data structure efficiency, and logic specialisation (synchronous, asynchronous, combinational, sequential).

5. Testability. As circuit complexity increases, testing becomes more and more a vital problem to be tackled at the very inception of the design stage. Design strategy and verification of circuit testability require computer tools for two main purposes: test pattern generation and fault simulation. The techniques adopted, such as the D-algorithm and Boolean differences for the former purpose, and parallel fault and deductive fault simulation for the latter, are objects of the investigation. Characteristics of computer tools to be examined include fault types, cost-effectiveness, and ease of usage.
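As a small illustration of the Boolean difference idea mentioned under Testability, the sketch below enumerates the input patterns for which a function is sensitive to one of its inputs, i.e. the patterns usable as tests for stuck-at faults on that input. The function names and the example function are invented for the illustration; a practical test generator would of course work structurally rather than by enumeration.

    from itertools import product

    def boolean_difference_tests(f, n, i):
        """Input patterns that test input i of an n-input function f
        for stuck-at faults, using the Boolean difference dF/dx_i.

        A pattern is a test when f with x_i = 0 differs from f with
        x_i = 1 (i.e. dF/dx_i = 1); the value applied to x_i then
        selects which stuck-at fault (s-a-0 or s-a-1) is detected.
        """
        tests = []
        for bits in product((0, 1), repeat=n):
            x0 = list(bits); x0[i] = 0
            x1 = list(bits); x1[i] = 1
            if f(*x0) != f(*x1):          # dF/dx_i evaluates to 1 here
                tests.append(bits)
        return tests

    # Example: f = (a AND b) OR c; patterns exercising input a.
    f = lambda a, b, c: (a & b) | c
    print(boolean_difference_tests(f, 3, 0))   # the patterns with b=1, c=0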

IIB Technical Breakdown: General Topics

Technical aspects of a general nature, strongly linked to two or more specific topics of Section IIA and considered relevant to the study, are listed below.

1. Methodology. The entire range of specific topics (IIA) should be subject to a methodology investigation to devise a set of procedural ground rules for all specific items and their relative interdependencies, to achieve an overall design methodology.

2. Integration with related procedures. Organisational, hardware and software requirements must be satisfied to interface collateral and
downstream processes (production, assembly, testing, documentation, etc.). The level of integration and the problems of interfacing to such processes, some of them being automated, are particularly relevant to the study.

3. Computer resources. The overall hardware configuration of the computer resources running the application software package represents an important issue. Properties of specific interest include on-line multi-user configurations, batch processing support, system reactivity, graphics and special terminal capability, remote access, in-house versus service bureau operation, and overall costs.

4. Software implementation. Key issues in software management and implementation include language, programme and data layout, input/output organisation, modularity, documentation, and portability.

5. Man-machine interface. Acceptance by the user community of a computer aid deeply involved in the design process is strongly conditioned by a well treated human engineering problem. The necessary prerequisites for a successful package, also among the small design outfits and most diversified user groups, are a matter of investigation.

III Work Breakdown

The work breakdown structure, comprising several Tasks and Work Packages, is synthetically summarised below.

A. Survey Task. An exploratory survey of the current state-of-the-art of computer-aided design of logic circuits and systems should be performed in Europe, the USA and Japan, with particular attention to the EEC Member States. The survey comprises all specific and general topics listed above (Section II), available cost/benefit values, user requirements, organisational environments, and problem areas. The Survey Task includes several Work Packages concerning standards, organisation, a first survey iteration, data ordering, a second survey iteration, and data integration.

B. Analysis Task. The Analysis Task comprises firstly a critical evaluation of the data gathered and ordered in the Survey Task so as to expose technical inadequacies of current computer aids, design methodology and computer resources, to identify a cost/benefit judgement in every specific area, to define the profile of the user population potentially served by such future developments, and finally to draw broad guidelines for a follow-on development project within the EEC scene, if appropriate. Secondly, this task draws detailed final conclusions of a technical, economic and organisational nature on the basis of results established during the previous activity. Such conclusions should take into account time projections and extrapolations to predict the impact of changing technology and environment during the period 1979-82. In case motivations emerge for further action, this activity requires the preparation of a proposal for follow-on development projects, depicting technical specifications of the products, schedules, budgeting, operational matters, and benefits, with particular regard to EEC Member States' opportunities. The Analysis Task includes several Work Packages concerning pre-analysis, evaluation, problem areas, preliminary guidelines, impact of technology change, solutions, EEC states' opportunities, recommendations and action plan.

More detailed information on the Technical Specifications of the Study is available in Ref. 1.

STUDY EXECUTION

Work by the multinational consortium started in July 1977. The Survey Task was successfully completed early in March 1978, with approximately two months' discrepancy with respect to the original schedule. The slippage, also caused by the coincidence of holiday periods with the peak of the two-step iteration, was absorbed by the contractor through additional overlap with the Analysis Task and by prolonging the study period by one month. Thus the Analysis Task was completed in July 1978 and reports were available in October 1978. Throughout the entire study execution, monitoring was performed by the Commission at monthly intervals, and constructive interactions with the Technical and Advisory Committees were assured to provide the appropriate guidelines on emphasis and resource allocations. Organisation, technical details and study results are reported in the following papers (refs. 2, 3, 4).

ACKNOWLEDGEMENTS

The author gratefully acknowledges the competent assistance and advice of the Members of the Technical Committee throughout the initiation and execution of the project. The continuing interest and essential contribution of Mr. S. Blr, Head of the Joint D.P. Project Bureau of the Commission, are highly appreciated. Finally the commitment of the Project Team, and particularly the dedication of Mr. W. E. Quillin, SAGET Project Manager, and of Mr. G. Musgrave of Brunel University, are fully acknowledged.

REFERENCES

1. A. De Mari, "European Community Study on CAD of Digital Circuits and Systems", Proceedings, International Conference on Interactive Techniques in Computer-Aided Design, Bologna, Italy, September 1978.

2. W. Quillin, "European Communities Study on CAD of Digital Circuits and Systems - Organisational Aspects", Proceedings, Symposium on Computer-Aided Design of Digital Electronic Circuits and Systems, Brussels, Belgium, November 1978.

3. G. Musgrave, "European Communities Study on CAD of Digital Circuits and Systems - Technical Perspective", ibid.

4. A. Carter, "European Communities Study on CAD of Digital Circuits and Systems - Survey in USA and Canada", ibid.


EUROPEAN COMMUNITIES STUDY ON CAD OF DIGITAL CIRCUITS AND SYSTEMS

ORGANISATIONAL ASPECTS

W.E. Quillin
CAD Electronics Project Manager
SAGET (Luxembourg) S.a.r.l.

Abstract

The study had two stages, the first a Survey Task for data collection and the second the Analysis Task, for detailed analysis of the data collected and formulation of CAD Business Plans. The organisational details of these two stages are given, showing how this multinational study was conducted, problems overcome and results produced within the required timescales.

1 Introduction to the SAGET Study Team

1.1 Response to the Commission of the European Communities' Invitation to Tender

In early February 1977 the Invitation to Tender for the CAD Electronics Study, Tender Number T/3/77, was received by SAGET, Luxembourg, and preparation of a bid for the study commenced. In the limited time before the return date of 21st March, it was necessary to form a suitable European consortium, as well as giving full consideration to the technical, managerial, financial and contractual portions of the Invitation to Tender. Assisted by personal contacts, and contracts which already existed between SAGET, Plessey and other European organisations, the consortium organised by SAGET comprised the following organisations:

Sema, France - Expertise in Technological Surveys
Nixdorf, West Germany - Major European Computer Manufacturer
Plessey Radar, UK - Large Electronic Company using CAD techniques
Plessey Central Research Establishment, UK - LSI Design and Development
Brunel University, UK - CAD Research Group

The first three companies named were employed by SAGET as subcontractors, the latter two organisations as consultants to the project. In addition to these organisations, assistance and advice to the study were offered by Plessey Telecommunications, with regard to the telecommunications aspects of CAD, and by Plessey Microsystems Inc. of Irvine, California, with regard to arrangements for United States interviews. It was considered that, with these organisations in the bidding team, an excellent coverage of the various CAD fields had been achieved, and also a good geographical spread amongst EEC member nations had been arranged. It was not possible, of course, to include organisations from every member nation in a relatively small study contract, but the organisations included
in the SAGET bid had sufficient technical experience, personal contacts and linguistic abilities to cover all relevant establishments in the member nations. As the contract progressed, the organisation of this coverage was considerably assisted by the members of the Commission's Technical Committee for this CAD Electronics project and by the Commission's Project Leader. With assistance from the members of the Consortium, the SAGET bid was prepared and delivered to the Commission by the required date. Following the Commission's adjudication phase, the contract was offered to SAGET. Preparatory work started in June 1977 during the contract negotiation phase and the contract was fully agreed at the beginning of July 1977.

2 TECHNICAL ORGANISATION OF CONTRACT

The activities proposed for the contract in the SAGET bid were similar in most respects to the activities which had been given by the Commission in their Invitation to Tender. The contract had two major tasks. The first was a Survey Task, to collect data from organisations active, or potentially active, in CAD for digital electronics, on a world-wide basis. The second task, the Analysis Task, was to analyse the collected results and produce Business Plans, showing how best the Commission could assist the development and use of CAD techniques.

2.1 The Survey Task

The Survey Task was itself divided into two activities: a first survey to cover as large a number as possible, within the timescale and financial constraints, of relevant organisations on a world-wide basis, followed by ordering of the collected data; and a second, more detailed survey of a smaller number of organisations - the ones which had been shown by the results of the first survey as having most to contribute to the project.

(a) First Survey Iteration

The First Survey was given a formal structure by providing interviewers with questionnaires. Three different but inter-related questionnaires were used:

Questionnaire A - CAD Users
Questionnaire B - CAD Non-users
Questionnaire C - CAD Suppliers

In addition to the main questionnaires, A and C had supplementary parts to be completed giving details of packages in use or provided; and to collect details of synthesis, modelling and test pattern generation, the interviewer was provided with additional questionnaires S, M and T. The questionnaires were designed using the technical survey expertise of SEMA together with the detailed CAD knowledge of the other members of the SAGET Consortium. The questionnaire was agreed at the beginning of August 1977, after having been tried out in pilot interviews in the UK, France and Germany. In fact, these pilot interviews did not result in any major changes being required, and they showed there was no linguistic problem in having the questionnaires in English.

The survey in Europe was started, where holiday arrangements permitted, in August, and the majority of interviews were completed by mid-October 1977.


The detailed interview schedules required for the interviews in the United States proved more difficult to arrange, and it was not until October that it was possible to have interview schedules and agreements for interviews in a suitable form to permit these to start. The United States interviews were divided between two interviewers, one covering the South-Eastern and Eastern states, plus an interview in Canada, and the other covering the West coast. Altogether 20 establishments were interviewed in the USA and Canada.

Interviews in Japan were difficult to organise from Europe but, because of long-established links between SAGET and the Oki Company in Japan, Oki were able to arrange assistance from the CAM Committee of Japan (Computer Aided Manufacture), who gave considerable support to the arrangement and scheduling of the interviews. SAGET would like to express its thanks to Oki and the CAM Committee of Japan for this support. The Japanese interviews took place at the beginning of November 1977, and eight establishments were interviewed, as had been planned.

The interviews were divided amongst members of the SAGET Consortium along the following lines:

United Kingdom - Plessey Radar, Brunel University
France, Italy, Benelux - Sema
Germany, Denmark - Nixdorf
Sweden - Brunel University
United States, Canada - Plessey Radar
Japan - SAGET

It had been planned to interview 67 organisations on the first iteration but, to ensure as good a coverage as possible of relevant organisations within the project's constraints, 85 organisations were eventually visited. A very high degree of co-operation was found from the establishments visited, and there were very few refusals for company security reasons. The questionnaire was not sent to establishments before interview, except on the few occasions this was requested, as it was considered that the sending of such a comprehensive questionnaire could have proved off-putting to the establishments. Prior to interview, a summary of the required details, together with introductory letters from the Commission and SAGET, was provided. Initial contacts were made with organisations by telephone; this proved a much better and more flexible approach than letter or telex contact, enabling the correct personnel to be identified, if they were not already known, and it also permitted a discussion of the project, giving its aims and organisation, to prospective interviewees. In general, the interviews were conducted by one person, but where interviews of special interest were concerned, two interviewers were used where possible. Interviews were scheduled to last one day.


All the interviews were completed by mid-November and a pre-analysis of the data collected was commenced to enable locations to be identified for the second survey iteration. Also, design of the second iteration benchmark tests was started by Brunel University.

(b) Second Survey Iteration

Pre-analysis of the first survey data and design of the benchmarks were completed by the end of December 1977, and arrangements for the second survey were made so that interviews could commence as soon as possible after the Christmas holiday period. The identification of establishments for the second survey was made in the light of the pre-analysis of the first survey data, and in conjunction with the Technical Committee and the Commission's Project Leader. The second survey did not have a formal questionnaire structure as did the first, but interviewers' notes for guidance were prepared. These concentrated on two factors: points which may have needed amplification from the first survey, and the benchmark tests. Interviews on the second iteration were tailor-made for the establishments being interviewed, and were in general conducted by two people and lasted about two days. The first iteration results showed that, due to the structure and company security of Japanese industry, there would be little to be gained from a second iteration visit to Japan. Hence visits were concentrated in Europe and the USA. Names of all personnel to contact in the organisations were known from the first iteration. These contacts were made, once again, by telephone, backed up by letters of introduction from the Commission and SAGET. Scheduling the second iteration interviews was more difficult than the first iteration, due to the need for computer access and technical assistance for the running of benchmark tests. It took longer to complete all the interviews than had been expected, and the European interviews were not finished until the start of April 1978. To avoid this causing excessive slip to the overall project timescales, this part of the Survey Task was overlapped as much as possible with the start of the Analysis Task. For the second survey in the USA, a team of two people from SAGET and Brunel University visited the selected organisations in February. All the planned visits were made, despite considerable difficulties caused by extremely bad weather in the North-Eastern United States. The original project plan had been to cover 20 organisations in this second survey. By the end of the Survey Task, a total of 23 organisations had been surveyed, giving as much data as possible for the Analysis Task.

2.2 The Analysis Task

The Analysis Task was commenced as soon as possible during the Survey Task, starting with the Pre-Analysis of the first survey data, continuing with totalisation of answers and comments collected on this first survey, and continuing with a detailed analysis of the second iteration benchmark results, as these became available. Hence there was much more overlap of the Survey and Analysis Tasks than had been planned at the start of the study.

This overlap helped minimise delays to the total project which could have been caused by scheduling difficulties and holiday periods during the Survey Task. Compared with the Survey Task, project timescale scheduling during the Analysis Task was much easier, not being influenced by factors outside the control of members of the SAGET Consortium. Analysis of the data collected in the Survey Task was completed by the end of April, and during May the preparation of the final report started, portions of this being allocated to the various members of the Consortium for initial writing, within agreed technical guidelines and timescales, prior to final integration. This report consisted of an outline of the Survey Task (which had been fully documented in three volumes, as the Survey Phase Report), details of the Analysis Phase activities, and a chapter on the current situation in CAD with respect to Logic Specification, Test Pattern Generation, and Similarities and Differences in CAD for Circuit/System Design and CAD for IC Design. This was followed by a chapter on the Technological Evolution, discussing both the needs of CAD this evolution brought about, and the benefits to CAD of this evolution. Finally the Proposed CAD Business Plans were given, starting with an overview of CAD problem areas and the EEC opportunities to which these gave rise, followed by details of three proposed Business Plans, two of which were in the area of increasing CAD awareness and the benefits to be gained by using currently available CAD aids, and the third being in the field of component model development to assist the usage of CAD packages. These Business Plans carefully reflected the needs and requirements which had been established during the Survey and Analysis Tasks.

3 EEC OPPORTUNITIES


The following is a list of opportunities which were identified for the Commission, together with comments on these:

A. To influence constituent governments, for them in turn to influence certain education and training courses to reflect the impact of the 'digital revolution'.

B. To establish, throughout the member nations, organisations responsible for retraining of electronic engineers.

C. Item B to be supported by a training programme which would be established by seconded leading experts in the field to provide:
(1) Program syllabus.
(2) Lecture notes.
(3) Video tape lectures and demonstrations.
(4) Develop Computer Aided Instruction (CAI) for CAD.
(5) Be responsible for a programme of 'Workshops' held throughout Europe, often within industrial companies.

The seconded consultants would be supported by a resident team who would provide:
(1) Administration.
(2) Software documentation.
(3) Software maintenance.
(4) Day-to-day advisory service.
(5) Workshop/Conference organisation.
(6) Audio Visual Aids group.

These could be provided for each member nation in the national language. It should be noted that although the problems are many, in no way should this be a large organisation. This organisation could be responsible for providing CAD packages to training establishments for student familiarisation. There is evidence that some suppliers would release earlier versions of their packages for a nominal fee. It is envisaged that by changing the seconded consultants each year or every other year, different problem areas will be tackled efficiently and effectively with a dedicated team. More importantly, this organisation will provide the catalyst for spontaneous development.

D. The procurement organisations should pressurise component suppliers to provide as much detail as possible about components, and in some cases have joint projects to establish data characteristics suitable for component modelling for CAD.

E. Establish a component model data base and provide the necessary back-up service to maintain and update it and give user advice.

F. Utilise European data communications (e.g. Euronet) to distribute E.

G. Set up a number of specific product projects with companies from several member nations involved. (N.B. This is not to do global reporting, but to produce a defined product such as ATE for processors, a concurrent logic simulator, an Automatic Test Pattern Generation to Automatic Test Language for Avionic Systems compiler, etc.)

H. To encourage members to retain those top engineers in this field by ensuring the industry is viable. Several instances were found during the study of teams of European CAD experts being enticed to the United States for much greater rewards and better facilities.

I. Provide scholarships for engineers who could be retrained in the digital electronics field for industry. (The scholarships should be for 12 months and support the man on a post-graduate course.)

J. Establish a comprehensive set of standards for European use for CAD. These would interface to electronic design standards and cover circuit simulation, modelling, testing and verification. The aim would be to allow transferability and interchange of both CAD packages and techniques, and also of designed circuit elements and component models. Interfaces allowing the standard elements to be inputs to and outputs from currently available packages would be developed. It is a widely held view that this field is continuously changing and that it would currently be extremely difficult to agree and implement a comprehensive set of standards for CAD. It is envisaged that projects such as the total co-ordination of the Business Plan for Component Modelling would, in themselves, provide de facto standards.

From the set of comprehensive data collected during the survey phases, together with the subsequent analysis process, a number of possible action plans were formulated. After considerable discussion within the study team, with members of the Commission and using the expert advice of members of the Technical Committee, the most relevant action plans, with the highest feasibility, were expanded in the following areas:
Component Modelling
CAD Symposium
CAD Education and Training

4 CONCLUSIONS

This study conducted a number of detailed interviews in the field of CAD for digital electronics around the world. From thorough analysis of the data collected, the areas of maximum need in this subject have been identified and Business Plans for these generated. The holding of the Symposium represented fulfilment of one of these Business Plans, and involved a considerable amount of effort from the Commission, its Project Leader, the SAGET team and the Symposium organisers. Hence it was one result of the study contract - without the contacts made on the Study the organisation of such a Symposium would not have been possible. This was the first action taken as a result of this study, and every effort will be made to ensure the other recommendations to assist in the development of CAD for digital electronics are followed up as rapidly as possible.



ACKNOWLEDGEMENTS

SAGET wishes to thank the EEC Directorate-General for Internal Market and Industrial Affairs, DG III, in particular the D.P. Projects Bureau and their Project Leader for this study, Mr. A. DeMari, the Technical Committee for this project, SAGET's sub-contractors, Sema, Nixdorf, and Plessey Radar, and consultants from Brunel University and Plessey Research Centre, for the effort, support and assistance they have all given during this study.

Project Management: SAGET (Luxembourg) S.a.r.l.
Team Members: SEMA, NIXDORF, PLESSEY RADAR
Consultants: BRUNEL UNIVERSITY, PLESSEY RESEARCH CENTRE

SLIDE 1

THE STUDY TEAM



Questionnaires:
Questionnaire A    CAD Users
Questionnaire      CAD Non-Users
Questionnaire C    CAD Suppliers
A Supplementary    Package Details
C Supplementary    Package Details
S                  Synthesis
M                  Modelling/Simulation
T                  Test Pattern Generation

Interviews:
85 Total: 57 Europe, 20 USA, 8 Japan
45 CAD Users, 4 CAD Non-Users, 44 CAD Suppliers

SLIDE 2

First Survey

First Iteration Pre-Analysis
Totalisation of Answers to Decision Boxes
Totalisation of Comments Made
Graphical Representation of Numeric Totals
Detailed Analysis of Second Iteration Benchmarks
Documentation of Current Situation in CAD
Documentation of Technological Evolution
Production of CAD Business Plans

SLIDE 4

Analysis Phase Activities

[PROJECT TIMESCALES: bar chart spanning July 1977 to November 1978, showing the Survey Task activities (Design Questionnaire, First Iteration Interviews, Design Benchmarks, Second Iteration Interviews), the Analysis Task activities (Pre-Analysis, Technology Prediction and State of the Art Summary, Analysis of Data Collected, Formulate Business Plans, Edit Report), and the Symposium Organisation leading to the Symposium.]

(a) Influence constituent governments.
(b) Establish retraining for electronic engineers.
(c) Training programme by seconded experts.
(d) Pressurise component suppliers.
(e) Establish a component model data base.
(f) Use data communications (e.g. EURONET) to distribute (e).
(g) Set up specific product projects.
(h) Encourage retaining of top engineers.
(i) Provide scholarships for engineers.
(j) Establish comprehensive standards for CAD.

SLIDE 6

EEC Opportunities


EUROPEAN COMMUNITIES STUDY ON CAD OF DIGITAL CIRCUITS AND SYSTEMS: TECHNICAL PERSPECTIVE

GERALD MUSGRAVE
BRUNEL UNIVERSITY
UNITED KINGDOM

A study of this kind, which involved several man-years of effort, results in a great deal of data, much of which has to remain confidential to the Commission of the European Communities. However, in this paper most of the generalised trends and profiles will be presented, together with the design constraints of the data capture procedure. Details of the survey philosophy and design will be outlined, together with the data ordering procedure. The analysis of this data, its correlations and contradictions, will be given in global terms.

SURVEY PHILOSOPHY

There are many strategies which could have been adopted for the collection of technical data: literature survey, postal questionnaire, workshop and conference attendance, consultation with leading technocrats, or taking note of sales presentations, organisation visits and interviews, to name but a few. In this project the many characteristics and attributes of the aforementioned survey techniques have been incorporated in the strategy adopted. The questionnaire has the prescribed format for rigour and ease of analysis, whereas the interview has the form which obtains response but with flexibility to take cognisance of the situation. For these reasons the first of the two-level data collections used an interviewer who was primed by a questionnaire. The object of the first level was to provide the broad data base which would have significance for the rest of the project. Thus the first survey was designed to collect data which as far as possible could fit into a prescribed format for quantitative analysis. To this end a matrix format which indicated 'functions' by rows and 'degrees' by columns was used wherever possible. The general structure of the questionnaire was such that it minimised the number of errors by the interviewer, by having as many pre-answered boxes for ticking as possible, and the general sectionalising assured thorough cover of the total field as well as providing easier data ordering. In order to provide motivation for free expression, adequate room for comments was always provided. To further encourage the collection of data which might fall outside the framework of the questionnaire, the interviewers were encouraged to seek information about new work and future research plans. After due consultation with the multi-linguistic team it was agreed that the questionnaire would be in English, since this was technically the most comprehensive language for the field of study, and that there would be no written translations. However, it was considered essential that a 'native-tongued' interviewer would conduct the survey in Europe. This also helped to minimise the number of formal definitions required, which is a major problem in the jargon semantics of digital systems.



A single questionnaire would have been ideal from many aspects of the study, but cognisance of the multiple roles that CAD can have within a company had to be taken. Principally three classes were established: the user, the non-user and the supplier. Of course there are various masks over these classes, such as internal development, supply and use, but very often these functions are carried out by separate departments within a group and thus do not violate the three sections of the first iteration survey, namely:

Questionnaire A    For current and past users of CAD
Questionnaire      For non-users of CAD
Questionnaire C    For suppliers of CAD

In general each of the questionnaires was identical in data-seeking aims, although the structure and details varied to reflect the interviewee's standpoint. The commonality is important in order to enable correlations and contrasts to be drawn, particularly in respect of user and supplier (customer and marketer). In order to gain a full spectrum of views on CAD applied to digital electronic systems, the multi-function nature of the interview had to be recognised, so management views, including cost accounting, were brought together with those of technical personnel such as designers, researchers, and production and test engineers. This required a further lateral structuring of the questionnaire. Effecting this required a top-down structure, with details of the company profile followed by questions orientated towards management and determination of company policy. This was followed by a general enquiry of how, and to what effect, CAD was used within the company. The more technical aspects were dealt with in three separate sections dealing with synthesis, modelling and testing (S, M and T respectively). Here the questions had to be answered by the specialist, although the general philosophy of providing as many pre-answered boxes to be ticked as possible was continued. It was also judged essential to validate the questionnaires and briefing notes for the interviewers by two pilot studies in each of the U.K., Germany and France, thus obtaining a measure of the linguistic problems as well as conducting basic field trials. The planned work time for each interview was one day, so there was a limit to the depth of enquiry at this first iteration. Nevertheless, adopting the questionnaire/interviewer philosophy would enable sufficient data in breadth and depth to be gathered to provide identification of problem areas and market leaders, thus enabling the analysis team to identify those institutions which could be judged to be the most useful from which to gather depth information, and which could form a potential major contribution to a European CAD work programme. A target of twenty institutions was set for the second iteration survey, which was deemed to be in greater depth. The second iteration could not be an analytically straightforward questionnaire because it had to reflect the individual findings of the first survey. Nevertheless, from an analytical point of view it was essential that it had some structure. Consequently the second iteration was conceived as an in-depth interview where the first part was to consolidate the data given in the first iteration and the second was an evaluation of the existing software by means of test examples (pseudo "benchmarks"). It was recognised that there would be difficulty in obtaining permission to do benchmark testing, particularly in view of the high cost of running complex programs; therefore the objective was to devise a set of tests which explored the attributes and limitations of the CAD program. To summarise, the surveying philosophy of always using people for direct interviews, with a well designed prompter (questionnaire) to ensure a common data base, was used. The strategy of structuring the questionnaire to cover the various dispositions of the organisations and the functions of the employees was needed



to cover the breadth and depth requirements of a comprehensive survey. Thus a second survey of some 20 of the first sample was conducted in order to glean the details of some existing packages and an appreciation of the future from the leading users and suppliers of CAD for the digital electronics field.

QUESTIONNAIRE PROFILE

Essentially there was only one questionnaire, with some change in emphasis and slant to accommodate the user, non-user and supplier of CAD aids. In general the questionnaire commenced with the most general information questions, such as company operations data, and gradually became more detailed, down to the level of identifying software packages, each package warranting the completion of a supplementary questionnaire depending on whether the program was classified as synthesis, modelling/simulation or test pattern generation (supplementary questionnaires S, M and T).* The initial task of the questionnaire was to identify the organisation's products and then to build up a detailed picture of components used, their quantities and complexities, on a prescribed grid system. A typical set of questions is given in Figure 1. This grid would then provide a basic set appertaining to such factors as throughput of new designs, length of production runs or indeed the technology used, all of which could be a correlating factor behind the use or non-use of CAD techniques. As a consequence of these factors, the final parts were concerned with defining where CAD systems were used within the organisation. In order to seek the less quantifiable data, such as the reasons for using CAD, the cost benefits and problem areas etc., it was essential to be as comprehensive as possible so that the interviewee's memory was fully primed, and not dominated by the most recent access. Figure 2 gives a typical matrix structure used. At the same time it was desirable to have some degree of rating of the respective reasons, which resulted in the columned aspects of the arrays. The final 'don't know' column was essential to enable this important data to be recorded as well as to ensure a complete 100% response to the functions. This technique has been used many times by market researchers and tends to give much more reliable data compared to, say:

A4 Question 3: Can the benefits of CAD be specified other than cost factors (e.g. longer term product improvement, better company image, reduction in lead time)? Please comment.

The weakness with this type of question is that the responder may merely wish to satisfy the question and not provide a totally considered view. Of course, it is important not to totally regiment the views, and to counter this there was variation of format as well as the opportunity to give open comments. Figure 3 covers this particular aspect. In order to ensure that important trends, which might be seen from the response to one question, were indeed present, there were varying degrees of overlap built into some parts of the questionnaire. For example, functions relating to CAD problem areas had degrees of commonality with 'How do you view the need for development of CAD functions?'. This results in double confirmation, or otherwise, giving an indication of the confidence in the results. The final set of general questions were all designed to ascertain attitudes to possible EEC projects in this field; all had three degrees of freedom.

* A complete copy of the questionnaire and benchmark tests has been published: "EEC CAD PROJECT, Questionnaire and Benchmark Tests", edited by G. Musgrave.



For the more detailed information, the supplementary questionnaires were used, where the general information about a package was ascertained under the following outline:
1. Identify package
2. Host machines
3. Input descriptions
4. Output medium
5. Back-up
6. User problems

Because users and suppliers were identified separately, the results indicated the differences in attitudes towards each package between users and suppliers; they also show how much the users understand the packages and whether the suppliers understand the user needs. The questionnaire investigates two important characteristics about the three specific topics; these are: accuracy of the automated procedures, and efficiency of the automated procedures. Information about these two characteristics was obtained by asking for details of the techniques used. All standard techniques were included in the questions and there was room for describing original techniques. By knowing about the techniques used in the package, it was possible to assess the degree of accuracy and efficiency. Logic synthesis covered the automatic generation of a design from some specification language. Form S contained questions on the subject and included:
1. Circuits considered
2. Algorithms used
3. Practicability of generated designs
4. Implementation
5. Output modes

Design verification covered two areas: how was a circuit modelled, and how was the model exercised in order to verify the design? Testing covered automatic generation of test patterns and fault simulation. These two subjects overlap because the simulators used for verification have the same structure as simulators for grading test patterns; therefore form M contained questions on simulators for design verification and fault simulators for verifying test patterns, and included:
1. Level of description and simulation
2. Circuits considered
3. Modelling of circuit delays
4. Algorithms used
5. Extra questions covering fault simulators
6. Output modes

Form T contained questions only on automatic generation of test patterns, and included:
1. Fault types considered
2. Circuits considered
3. Modelling elements allowed
4. Algorithms used
5. Output modes



At all stages of the questionnaire design the expertise within the consortium was used to help perfect the system, often via several trial iterations with in-house guinea-pigs.

SAMPLING CRITERIA

The nature of the questionnaires dictates that all three categories, user, non-user and supplier of CAD, must be included, but there were other dimensions upon which the potential sample was based, some of which are summarised as follows:
1. Geographical distribution. It was considered essential to cover the USA and Japan as well as the European countries.
2. Gain a spread of companies whose products covered a wide spectrum: computers, telecommunications, instrumentation, military systems, aerospace, consumer electronics, components.
3. A full spectrum of size of organisation.
4. As broad a spectrum of technology as possible (e.g. R.T.L. to processors).
5. A broad spectrum from university research through R & D departments to those organisations where CAD is only used in production.

Of course there was deviation from the ideal sampling criteria through refusals and other factors outside the control of the project team; however these guidelines were used and resulted in the following profiles.

NO. OF ORGANISATIONS SURVEYED, BY COUNTRY
Benelux             4
France             12
Germany, Denmark   15
Italy               7
U.K.               17
Sweden              2
Japan               8
U.S.A.             20

NATURE OF ACTIVITY                 CAD USERS   CAD NON-USERS   CAD SUPPLIERS   TOTALS
Computers                              23            1              18            42
Telecommunications                     13            0               5            18
Instrumentation/Process control         3            1               4             8
Military Systems/Aero                  12            2               5            19
Consumer Elec.                          3            1               6            10
Others                                 11            2              22            35
A similar well-balanced cover has been achieved in respect of the size of the organisations interviewed, with the range spanning those with in excess of 300,000 employees and turnover in excess of $1000M through to research institutions with effectively very little capital turnover and fewer than ten employees. Data on in excess of 80 packages has been collected. The general points worthy of note are that:


1. Suppliers of software (including internal developers) have a broader spectrum of packages than the user category.
2. The users of CAD systems tend predominantly to use only simulation packages.
3. There is a growing development of packages associated with testing, often with related A.T.E. machines.

SECOND ITERATION

To be able to determine accurately the current state-of-the-art of CAD in digital electronics and to help the European electronics industry, a further survey was undertaken that was different in nature but had continuity from the first survey. It was planned that approximately twenty companies be revisited on this second iteration survey. The interviewers used were totally familiar with the first iteration answers and had experience and expertise in the current standards and practices, in both industry and research centres, on the topic of CAD of digital electronics. The structure of the second visit was two-fold: firstly, to gain further detailed discussion of the answers and comments made in the first iteration, and secondly, to apply a set of 'benchmarks' appropriate to the packages in use in the company. These aspects were covered in detail by a booklet which was produced for the guidance of the interviewer. The first part of the booklet prompted and aided discussions on the organisations' answers to the first iteration questionnaire. These discussions were valuable for several reasons:
a. They gave a means of assessing the accuracy of the quantised answers, which are essential for the analysis phase.
b. They gave information about how pessimistic or optimistic the answers were.
c. They allowed a follow-up of important comments given.
d. They allowed an update on important lines of development not anticipated at the time of the first iteration survey.

The first part of the booklet included discussions of package details which acted as a preliminary to running the tests. Thus information could be gained on how advanced the techniques used in a package were, and whether they were good standard CAD techniques or new, non-standard techniques. It was essential for the survey to come down to hard facts, to come face-to-face with the CAD packages and to gain proof that the answers were based on interviewees' own experience with actual working programs. The technique used for doing this was the running of tests to enable information to be gained on:
a. The facilities used by the organisation to back up the CAD packages.
b. How well the users interface with the system, by seeing how they coped with the tests and the problems associated with them.
c. How advanced their CAD techniques are, by the results they produce.

The tests were designed to survey two important characteristics of CAD programs:
a. The accuracy of their results.
b. Efficiency in producing these results.

To be able to investigate efficiency fully a realistic circuit of at least 500-1000 equivalent logic gates needed to be used. However the planned time allocation for the second iteration interviews was two days per institution and a realistic circuit would take at least two weeks to get running on any system. Therefore the tests were kept very small but were designed to probe the programs


for accuracy when confronted with difficult circuits that contain critical timing and feedback topologies. The tests enabled relative efficiencies to be examined. It is important to note that the problems the tests were designed to look for are the same problems encountered in large circuits in practical use. The tests also form an ideal basis for aiding the discussions on the first part of the booklet, especially in respect of the problems facing both the suppliers and users of CAD packages. Some 13 tests, using 7 different circuits, were designed for surveying the specific topics of design verification, fault simulation and automatic test pattern generation, and a further 4 tests for surveying the specific topic of logic synthesis. The tests cover the subject of design verification by testing worst case hazard analysis with respect to realistic delay tolerances, and initialisation of state variables. Test pattern generation is covered by testing how the automated procedures can cope with deep states, redundant states, and critical timing. Fault simulation is covered in two ways: firstly, if the package does not contain automatic test pattern generation, then there is a test with a given stimulus and fault set; secondly, if the package contains automatic test pattern generation, then a fault simulator will be an integral part and the generated patterns will be put through the fault simulator. Logic synthesis is covered by tests on combinational, synchronous and asynchronous machines.

TESTS

As many of the packages used fall into the simulation and testing area, many of the tests reflect the predominance of these packages by assessing the following problems.

Initialisation: this is the problem of determining whether a circuit reaches a definite state starting from an unknown state. This problem can be tackled by simulation in two ways: multi-valued simulation, or two-valued simulation using a number of starting states. The former is pessimistic and the latter optimistic. Accurate solutions to the problem require path-tracing or algebraic techniques, but very few current CAD systems have solutions which work on complex circuits. These problems proved to be very difficult and all but one package failed to initialise the circuits. The standard technique used was to set the circuits' memory to the unknown state and then to simulate a homing sequence. The one successful package was able to initialise two of the circuits by the use of a first order initialisation procedure, but failed on test 10 because circuit 2 is an order four initialisation problem. It was generally commented that practical circuits very rarely have any initialisation problems because the designer has become aware of the limitations of the CAD programs and thus always builds in master reset lines. In fact many organisations have placed constraints on the designer in order to overcome the problem. What was more revealing was the knowledge, or lack of it, that users had when tackling this type of problem.

Timing analysis: one of the most important problems is to determine whether a circuit will function correctly under any variations due to manufacturing tolerances and field environment. Both of these affect the delays in components. The most common way of examining delay tolerances is by multi-valued simulation. However, this can give pessimistic results. These problems also proved to be very difficult for the few packages that could handle delay tolerances.
The standard technique was the use of an ambiguous gate model where the unknown value X is given at the output of a gate during the min-max times. This technique was the cause of many pessimistic results and the failure of many tests. It was generally commented that this problem had to be overcome by further development work to produce good and accurate worst case analysis, because there was no known work that offered a satisfactory solution. Of course, in all cases, if the user is sufficiently experienced there are heuristic techniques which either avoid or overcome the problems. Very often, in the more experienced organisations, these have resulted in direct discipline of the design team.

Automatic Test Pattern Generation:- the tests for this area were graded, and many of the programs being used were only capable of handling the less sequentially complex tests. Many such programs rely on a gate-level description and, apart from the efficiency problem, this means that there is no functional description and so the program cannot tell what input conditions are likely to generate hazards. Test generation and fault simulation have all the problems of performance prediction and much more severe problems of efficiency. Thus information was collected on the fault models considered, how the test sequences were graded, the degree of automation achieved and how much human interaction was required.

Fault Simulation:- a separate set of tests was used from the above because there is a high predominance of programs where the fault simulator is used to establish test sequences by either manual or random test generation procedures, or a combination of both.

Synthesis Tests:- in general this is an area which has had a declining amount of attention in the last five years because many people believe it has little to offer, particularly in respect of cost savings. Nevertheless the tests were offered because of the potential application of state assignment and minimisation to certain areas of IC design such as cell/array structures. The results of the survey indicate that few establishments have this capability, although an increasing number of research organisations are attempting to address this area. In a negative way these tests helped correct some of the misconceptions of the first survey, in that some of the research establishments who claimed to have programs in this field, when confronted by the tests, declared the programs to be unfinished or incapable.

ANALYSIS AND SOME RESULTS

FIRST ITERATION DATA

The first iteration of the survey gathered a very large amount of data which would have been extremely difficult to handle had it not been for the matrix formatting of much of the information, which lent itself to computer data processing. As much of the data was gathered under an agreement of confidentiality, it is not possible to reiterate it in detail in this paper. However, a subset of the questions is presented, namely: CAD problem areas, Reasons for using CAD, and Needs for the period 1979-1982, with only global figures so that no individual company may be identified. Despite this superficial consideration of the data, it nevertheless gives a useful guide to trends and contradictions as well as serving as an example of the data sets accumulated in the survey. For reasons of presentation of the data, where the 'peaks and troughs' are the important aspects in valued judgements, a system of weighting was used which effectively emphasises the deviations from the norm. This construction is explained in two examples in Figure 3, which show how the score can vary between 1 and 8.
Considering the question of 'CAD problem areas', with reference to Table I, the comments obtained on the criteria offered could be summarised as follows. A general concern of how to cope with the retraining and recruiting of personnel, which links with the European concern regarding the social effects of runaway technology (reference the opening address of the symposium). However, all areas look for new packages to handle the ever increasing complexities demanded. The universal constraint upon progress is the cost of, and investment in, software programs. Hardware costs, in contrast to software costs, are reducing, partly because of the application of CAD to hardware cost and efficiency. This may point to an opportunity for any high technology nations with access to nations with low labour costs. To indicate the capability of the technique, Table II gives the scoring for suppliers only. Some of the interesting comparisons/correlations are: software costs are seen as a serious problem, together with the recruitment of skilled personnel. Strangely, lack of theoretical understanding was not thought to be a problem, whereas the development of high complexity packages was thought to be a problem. In general there was a very high number of supplementary answers or additional information offered with this table, indicating a high level of interviewee involvement. All of this data contributed to the analysis phase of the project. In order to show how CAD had already assisted technology, and industry in particular, a number of questions were asked about the reasons for using CAD. The questions were broadly divided into considerations of manpower, time saving and product quality improvements. For the total study the histograms are shown in Table III, whereas the totals for Europe are given in Table IV. The interesting comparisons show that the USA and Japan were scoring highly, with a reasonably smooth histogram, whereas Europe scored lower and had a surprisingly large spread of answers. (Note: the European answers represent 64 of the total survey.) Improvement of design quality is, almost without exception, the single most important reason for using CAD, with most countries rating this reason very highly. It was also cited as the reason why many cost evaluation systems are abandoned, because the quality baseline changes. Closely linked with this improvement in quality is the factor of increased complexity, achieved at system level or dictated by component technology. Of course an important part of the study was to collect evidence of trends, needs and potential for the future; therefore views of all interviewees about the 'Needs for the period 1979-1982' were sought. The tabulations of this data are given in Table V for the world and Table VI for suppliers only. In fact, no matter what dichotomy of data is used, all groupings want better CAD packages to handle higher complexities of circuit design. Also following from this, CAD packages for a top down design approach are consistently looked for/hoped for! A number of possible E.E.C. projects were offered to users of CAD, with suppliers being given a different format. The global results are given in Table VII, although the sets worthy of most comment cannot be given, other than to say that some of the greatest enthusiasm for E.E.C. projects tended to come from non-member countries! It can be appreciated that there is a mass of data correlations that can be and has been done as part of the project analysis of the first iteration survey, but this was not done in isolation from the second iteration data.
In order to give an indication of the depth of treatment, a table (Table VIII) shows the areas of application of CAD by a group of principal users. The table links the percentage of CAD used in the design/production of integrated circuits, printed circuit boards and complete sub-systems with the size of the company and the size of computers used. Comments made by the companies on 'CAD essential for' and 'CAD difficulties in' are added to give emphasis to important points made by key people. Some interesting points which arise from this table and other aspects of the report are that CAD contributes more to the design of LSI hardware than to any other area of possible application. However, there is a 'spill-over effect' in two ways: firstly, CAD comes to be applied to other, lower technologies although initially used only in the LSI department; and secondly, where the CAD tool was purchased for testing applications (because of the high costs in this area many packages were purchased for this purpose), the technique spreads to other departments. So despite the many misgivings given in comments upon the unwillingness of good designers to use CAD tools, there is evidence that there is acceptance, albeit indirectly in many cases. The group of tests outlined in the earlier part of the paper was used extensively in the second part of the survey and, because they were in effect assessing individual packages, only tabulation of that data which gives general characteristics is presented here, with the following comments.

Table IX  Initialisation Tests

The problem of initialisation does not cause very much inconvenience in practice, even though the problem is still far from being solved. Although the problems set in the benchmark tests are artificial, they showed how poor the packages were, in that no program was able to handle the three tests. This also gave an insight into how well the users could cope with such problems. In fact it was alarming to find a high proportion of users who were totally oblivious to this problem. The standard technique for initialising is to set the circuit to the unknown X state and then simulate a homing sequence: the circuit is initialised if all X states disappear. This technique, however, is pessimistic because the simulator cannot deduce the fact that, for a state variable signal, say a, a AND (NOT a) = 0 even when a = X. This is the reason so many systems failed in these tests. Because the full initialisation problem is very difficult, and a circuit has to be initialised in some way prior to a simulation run, a few good heuristic techniques have been developed that work around the problem. One technique is to use a different model that is approximately functionally equivalent but that will initialise; for example, it is common to use an edge-triggered J-K latch in place of a master-slave J-K latch when the set and reset lines are not in use. Another technique, which is not so common, is to set the memory of a circuit to a random pattern of 0's and 1's at the start of each simulation run; thus, by simulating a homing sequence many times, the probability that the circuit will initialise can be determined (Monte Carlo analysis). In practice, the initialisation of a circuit is achieved using the standard technique. If this is not available or does not work, then the circuit is forced manually to a known state by the user.
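To make the pessimism of the standard X-state technique concrete, the short sketch below uses present-day Python purely as illustration; it is not code from the study or from any surveyed package, and the tiny "circuit" and signal names are invented. It shows that gate-by-gate three-valued evaluation returns X for a AND (NOT a) when a = X, even though the exact answer is 0, and it also sketches the Monte Carlo alternative of repeated two-valued runs from random starting states.

```python
# A minimal three-valued (0, 1, X) gate evaluator -- an illustrative sketch only.
import random

X = 'X'

def v_not(a):
    return X if a == X else 1 - a

def v_and(a, b):
    if a == 0 or b == 0:       # a definite 0 on either input forces the output to 0
        return 0
    if a == X or b == X:       # otherwise any unknown input leaves the output unknown
        return X
    return 1

# Pessimism of the standard technique: a AND (NOT a) is 0 for any binary a,
# but the simulator evaluates gate by gate and cannot see the correlation.
a = X
print(v_and(a, v_not(a)))      # prints 'X', although the exact answer is 0

# Monte Carlo alternative mentioned above: two-valued simulation from random
# initial states, estimating the probability that the homing sequence works.
def homing_sequence_settles(q):
    # hypothetical stand-in for simulating a real homing sequence;
    # this toy "circuit" drives its single state variable to 0 from any start
    return v_and(q, v_not(q)) == 0

runs = [homing_sequence_settles(random.randint(0, 1)) for _ in range(1000)]
print(sum(runs) / len(runs))   # estimated probability of successful initialisation
```

The same gate functions, applied naively to a homing sequence simulated from the all-X state, leave residual X values behind, which is the failure mode reported for the benchmark circuits above.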

Table X  Timing Analysis Tests

This table gives a catalogue of some of the experiences of applying the four timing tests to those packages which purport to be able to cope. In general it was commented in the interviews that better techniques have to be developed to cope with all cases of critical timing at all levels of modelling, especially at the systems level, where the problems in using the standard technique are most severe. The standard technique was the use of an ambiguous gate model where the unknown value X is given at the output of a gate during the min-max times. Where there is an X state on a node, a hazard is assumed. This technique is pessimistic in a similar way to the standard initialisation technique, but is much more severe because the problem cannot be manually overcome. These problems showed that nearly all of the systems did not have worst-case analysis techniques capable of dealing with all types of critical timing.
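The ambiguous-delay idea can be pictured with the following fragment, again a hedged present-day Python sketch rather than code from any package examined in the study; the gate, the 4-7 ns delay spread and the sampling times are all invented for illustration. The output is forced to X for the whole interval between the minimum and maximum propagation delays, and any node sampled while it is X is reported as a potential hazard, which is why the method errs on the pessimistic side.

```python
# Sketch of the ambiguous (min-max) delay model described above.
# Delay figures are invented for illustration only.

def ambiguous_waveform(old, new, t_change, d_min, d_max):
    """Return (time, value) breakpoints for a gate output whose input
    changed at t_change, using the min-max ambiguity model."""
    if old == new:
        return [(0, old)]
    return [
        (0, old),                    # stable old value
        (t_change + d_min, 'X'),     # unknown throughout the ambiguity region
        (t_change + d_max, new),     # settles to the new value at the latest time
    ]

def value_at(waveform, t):
    v = waveform[0][1]
    for time, val in waveform:       # breakpoints are in increasing time order
        if t >= time:
            v = val
    return v

# A gate output switching 1 -> 0 after an input edge at t = 10 ns,
# with a 4-7 ns propagation delay spread due to tolerances.
wave = ambiguous_waveform(old=1, new=0, t_change=10, d_min=4, d_max=7)

for t in (12, 15, 18):
    print(t, value_at(wave, t))      # 12 -> 1, 15 -> 'X' (possible hazard), 18 -> 0

# If a clock edge samples this node anywhere in the 14-17 ns window, the
# simulator reports X, i.e. a potential timing hazard, even when a more
# detailed analysis might show the real circuit always works.
```

A worst-case analysis built this way can therefore flag hazards that a more detailed (and more expensive) timing analysis would show to be harmless, which matches the pessimistic failures observed in the benchmark results.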

Table XI  Fault Simulation Test

The fault simulator was considered by many organisations as the principal tool of the test engineer to aid test program development. As has been outlined in the state-of-the-art lectures of this symposium, there are a number of types of fault simulator, and the evidence given in the table indicates that most are favoured by some organisation, giving a wide spectrum of results.

Table XII  Automatic Test Pattern Generation

The ATG programs for sequential logic were not widespread; in fact only a few of the investigated systems had this capability in the true sense. The tabulation shows how, with very simple circuits, it is possible to run up some long C.P.U. times and only achieve 80% cover, which points to the L.S.I. testing problems.

Table XIII  Efficiency

This is the sort of tabulation which has great import, but in no way must the figures be taken too categorically, for the following reasons:
a. The test was a relatively small circuit and was only a single circuit example. (Topology can play an important factor.)
b. Much of the printout data did not give compatible information on simulation c.p.u. times, some of these being calculated by hand.
c. Different versions of the programs were being used without proof of versions being given.
d. Functions of the primitives used, and indeed what primitives had been pre-programmed.

Notwithstanding these variations, the tabulation shows the importance of language, primitives, and delays. It is also obvious that these factors can change the clock pulses/sec. by an order of magnitude. Of course the picture is not complete until the relative running costs of the various machines are taken into account. Despite these pitfalls it is interesting to note the diversity of the software performance of currently offered programs.

General Comments on Test Results

Clearly the tests and the results obtained were not wholly objective, but a fair subjective evaluation is necessary in order to collate the often diverse information. This method is acceptable because the aim of the project was not to obtain a close ranking of packages but to ascertain general capabilities and trends in techniques and philosophy. These can be usefully commented upon under the headings of modelling, simulation, A.T.G. and system integration.

Modelling

There are many conflicts that arise when a particular model is exercised for any application. The primitive of a unit delay gate is ideal for test pattern generation, in terms at least of ease of generating the tests, but it suffers from the problems of overheads associated with large circuits, and lacks the accuracy needed for design verification. The dichotomy arises when the more accurate models, which reflect detailed timing information and extra functional behaviour and are ideal for design verification, are used as the basis for test pattern generation; in short, they are virtually impossible models for A.T.G. Thus the solution must lie in the ability of the programs to have several modes for different applications, all of which are data compatible. This nevertheless must be viewed as a stop-gap solution and in the long term a single model must be sought. These comments also have a bearing on the component modelling library, which is essential for effective commercial utilisation. Those packages which have been the most successful have had a component library which has been maintained and


documented. Often this has been easily achieved where the package has been an internal development and the library can be restricted to suit a narrow user community. Certainly there is evidence that the lack of a component library, or lack of information on the models used in the library, is undermining the use of CAD. As the technology dictates greater complexity there is a need for closer liaison between component manufacturers and their customers. In fact it has been suggested that in future IC manufacturers should supply an accurate and complete computer model for their components.

Logic simulation

As the tests on initialisation required up to fourth order indeterminacy, it was not surprising to find most packages failing on these examples. The standard technique of resetting all memory elements to an unknown state prior to simulation is in practice adequate for the majority of circuits being designed in industry. Some simulators had this ability automatically built into the program, but clearly as circuits become more complex higher orders of indeterminacy have to be coped with. The more general observation to be made was that many of the users were unaware of the problem and its potential solutions (details are given in the previous section). This ignorance could be forgiven in the case of initialisation, because it is not a serious practical problem at present; however, the same malaise was apparent with worst case timing analysis. This is far more important than initialisation because it is far harder to avoid critical timing situations, especially for non-functional test sequences, which are in vogue at the present time. (The basic primitives used also dictate non-functional testing.) Hence the need for suppliers to fully educate the users in the limitations of their packages and the manual techniques for overcoming the same. This also indicates a need for full appreciation of all the different types of simulators and their suitable applications.

A.T.G. and fault simulation

The application of fault simulators is ever increasing as the testing problem continues to increase product costs. There was little evidence that testability was being fully evaluated at the early design stages, but there was evidence that the post-design test pattern generation was beginning to dictate requirements on designers. There are many programs using a number of techniques in fault simulation, often in combination, but the essential classes are serial, parallel, deductive and concurrent. The technique for fault simulation that is most widespread in practice is parallel fault simulation. This is due to the fact that, apart from the serial method (processing one fault at a time), the parallel method is the easiest to implement and gives good results without using excessive computer time, especially if a large number of bits are used for processing (e.g. the CDC 60-bit word gives 59 faults per processing pass). A small illustrative sketch of this bit-packing idea is given at the end of this section.

System integration

This area is one of the most important aspects of CAD in that the degree of integration can have a very profound influence upon the overall cost effectiveness of CAD. It is apparent that few companies have been able to exploit, for example, a common data base, or even a common data base management system, from conceptual design through to test programs and A.T.E. interfaces. Certainly this work has been the prerogative of the large organisations. It is they who were able to claim important economies by using CAD tools in digital systems.
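The bit-packing mentioned under 'A.T.G. and fault simulation' above can be sketched as follows; this is an illustrative present-day Python fragment with an invented two-gate circuit and an invented fault list, not code from the study, and the 8-bit word stands in for the CDC 60-bit machine word cited in the text. Bit 0 of every packed word carries the fault-free circuit and each remaining bit carries one single stuck-at fault, so a whole batch of faults is evaluated in one bitwise pass, and detections are found by comparing each faulty bit with the fault-free bit.

```python
# Sketch of parallel (bit-packed) fault simulation for single stuck-at faults.
WIDTH = 8                          # toy word: bit 0 = fault-free machine, bits 1..7 = faulty machines
ALL = (1 << WIDTH) - 1

def replicate(value):
    """Spread a scalar 0/1 signal value across all packed machines."""
    return ALL if value else 0

def inject(word, bit, stuck_value):
    """Force the signal to stuck_value, but only in faulty machine 'bit'."""
    mask = 1 << bit
    return (word | mask) if stuck_value else (word & ~mask)

def simulate(a, b, c, faults):
    """Evaluate the invented circuit z = (a AND b) OR c for the good machine
    and one faulty machine per entry of 'faults', all in one bitwise pass."""
    sigs = {'a': replicate(a), 'b': replicate(b), 'c': replicate(c)}
    for bit, (name, stuck_value) in enumerate(faults, start=1):
        sigs[name] = inject(sigs[name], bit, stuck_value)
    z = (sigs['a'] & sigs['b']) | sigs['c']
    good = z & 1
    return [bit for bit in range(1, len(faults) + 1)
            if ((z >> bit) & 1) != good]        # faulty output differs from the good one

faults = [('a', 0), ('a', 1), ('b', 0), ('b', 1), ('c', 0), ('c', 1)]
tests = [(1, 1, 0), (0, 1, 0), (1, 0, 0), (0, 0, 1)]

covered = set()
for a, b, c in tests:
    covered.update(simulate(a, b, c, faults))
print('fault coverage: %d of %d' % (len(covered), len(faults)))   # 6 of 6 for this toy case
```

Grading a candidate test sequence by the fraction of packed faults it detects is essentially how the surveyed fault simulators were being used to establish and grade test patterns.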

CONSIDERATIONS FOR THE FUTURE

The project team considered aspects of present research work and publications under the headings of logic specification and synthesis, simulation and verification, and test pattern generation. As many of the leading researchers in this field are contributing to the symposium, it was considered unnecessary to report the findings here. For the same reasons aspects of technological evolution, both from a components view and in its effect on computer architecture for CAD, are not detailed, although a bibliography at the end of this paper may be useful to readers.

CONCLUSIONS


There are many problems that have been revealed in this survey and analysed for this study. Very few of them can be tackled with autonomous solutions; they all require multiple solutions, but in essence they stem from two factors:
1. The ever increasing complexity afforded by future technologies.
2. The lack of understanding and appreciation of present CAD techniques by many practising engineers.

In this section three headings have been used to group the problems, namely research, educational, and industrial. Essentially the research is considering solutions which by their nature are a little tenuous and certainly long term. In the industrial section specific projects are identified which could be undertaken in a much shorter time with a more definite return. In respect of the educational section, some ideas will yield short term benefits, but if the basic problems are to be overcome this must be at the fundamental training level of electronic system designers, and hence will yield benefits in the longer term.

Research

In general there are small groups and teams of researchers working on a number of problems in the CAD field throughout Europe. They all appear to be extremely short of resources in terms of computing power and manpower, and certainly acutely short of finance. What is interesting is that, in contrast with the USA, the groups do not appear to be in close communication and certainly are not benefitting from each others' activities as well as they might. In general the research tasks which could yield the important solutions to problems in CAD may be listed as follows:

1. Specification tools:- It has been clearly identified by all involved in this subject area that there is a need to be able to specify a total system and evaluate that system with various implementation options. Note that this transcends any software or hardware partitioning.

2. Algorithm development:- It is recognised that there is a need for improvement in the algorithms for simulation and test pattern generation programmes. Most of these are based on theory that was developed in the mid-60s and at present few people are working on, say, heuristic techniques to achieve the sort of efficiencies that are needed for complex systems.

3. Analogue interfaces:- All digital systems have analogue interfaces at their inputs and outputs. There is virtually no research in progress which is attempting to embrace the analogue circuitry with the digital circuitry from a total systems evaluation and reliability point of view. Some people solve their problems, both in the combined analogue/digital and the solely digital world, by partitioning, but there has been no fundamental research determining the dichotomies of the system. This pragmatic approach is the only means of coping with the complexity of total systems.

4. Software management:- The potential of computer aided design can never be fully utilised until the problems of software portability, machine independence and general data base management have been fully solved. Predominantly the researchers in this field are geared to large software systems. Often there is little integration amongst the various functional packages. For example, many of the following functions require separate data: conceptual design through to prototype evaluation, production models, engineering drawings, components lists, test schedules, maintenance schedules and diagnostics, not to mention the differences of their inherent users.

Industrial

Many of the establishments surveyed considered there was considerable merit in establishing collaboration throughout Europe on projects which could give immediate benefit. Although there was a desire to have established standards for component libraries, CAD software, peripheral terminals etc., many of the practising engineers believe that these standards, although highly desirable, are unlikely to gain acceptance until this general field has been established for a number of years. They also believe that the imposition of standards from above is unlikely to gain great acceptance because of the already existing and confusing American, Japanese and multiple-nation European standards. A much better approach would be to establish working tools which, because of their existence, would effectively become adopted as standards. In general the following are a set of projects, considerably shorter term than those under research, which could be adopted with some predictable success in Europe.

1. Components data base:- One of the important problem areas that is expending many man-years is to establish the correct models for existing simulation programmes. One of the problems is that the component suppliers are not providing the necessary detail in order to make the models sufficiently accurate for the more sophisticated CAD techniques. Therefore a European collaboration to use the total procurement power of its members in order to pressurise component suppliers to provide such detail could be invaluable. This would also be used to establish a component data base with the appropriate back-up service to maintain and update it and give user advice. It could also utilise a European data communication system such as Euronet.

2. Projects:- To set up a number of very specific product-orientated projects with companies from several member nations. This could be extremely successful in providing software and hardware for solving specific problems and at the same time give Europe experience independent of the USA. A typical set of products which could be developed are:-
a. An ATE for microprocessors
b. A concurrent logic simulator
c. A general purpose compiler from automatic test pattern generation to, say, automatic test language for avionic systems (ATLAS)
d. An associative processing machine for symbolic manipulation etc.

3. To establish within industry a number of viable projects similar to the above which will help retain top engineers in this field and give them a meaningful role within European CAD, avoiding the enticement to the US for many of these people.

4. To a similar end, scholarships should be provided for practising engineers in the digital electronics field to gain the knowledge available at research institutes, particularly in the US.

5. To establish the differing employment profile the future extensive use of CAD will have, and to predict from this the variations required in training schemes, numbers to be trained, numbers employed, and skill categories of those employed. Such research into social aspects of CAD was a frequent request during European first-iteration interviews.

Educational

Throughout the survey stage it has been thoroughly established that there is a shortage of good digital circuit design experts in Europe. This calls for a whole set of strategies pertaining to the education, re-education, training and re-training of existing personnel. Although it is difficult to establish a common programme throughout Europe, there is nevertheless sufficient commonality in this problem area to suggest that solutions both at a European and at member-nation level are viable. The strategy for this problem is outlined below.

Long term: To influence member governments, and for them in turn to influence certain University/Polytechnic courses to reflect the impact of the 'digital revolution'. To identify throughout the member nations establishments where there is sufficient up-to-date expertise and facilities to enable institutes for the retraining of engineers to be established.

Medium term: To develop an appropriate syllabus, lectures, demonstrations and, above all, a programme of workshops which could be based as a European project. This should be undertaken as soon as possible in order to prepare the ground for the longer-term member-nation re-training programme. It is suggested that by secondment of experience in the CAD field the Commission could provide a detailed syllabus and lecture notes, as well as using the video-tape medium to provide the much needed detailed tuition in existing CAD techniques. This organisation would need to be supported with demonstrations, software documentation and advisory services. It could provide the focal point for various courses at both engineering and managerial level and, more importantly, provide the basis of education of the educators for their ongoing roles in the respective member nations.


Short term: Most of the important symposia, colloquia and conferences in this field are held in the US. They also tend to be orientated to the general capability and economic structure of the USA, and there is a need to establish within Europe suitable, regular conference/colloquium facilities which many more practising European engineers could attend. This will provide the important grape-vine communication system that is essential for the solution of day-to-day problems, and prevent the development of similar facilities in parallel around Europe, which prevents optimal use being made of the scarce CAD design resources.

The problems identified and their solutions cannot be listed in order of priority, nor can they be taken as individual items, as they all reflect the tremendous need for a totally co-ordinated and integrated approach to this very important developing subject. Many of them may be inappropriate for EEC action, but through its technical representatives and various committee structures it could provide the necessary catalyst for action within the member nations. It should be said that it almost does not matter which solutions are developed as long as some solutions are developed today, to make a start in plugging the gaps that exist in CAD knowledge, facilities and systems. It can be justly claimed that the study itself has had a useful catalytic action, and the very fact that this paper is being presented at a European symposium with many guests from outside the Community is a useful step.

ACKNOWLEDGEMENTS

Clearly this paper reflects the work of a team of people, namely the SAGET consortium, and all their contributions are recognised, in particular my research fellow Phil Moorby, who has devoted himself to this project over the past year.


FIG. 1. TYPICAL SET OF QUESTIONS ESTABLISHING COMPANY'S PRODUCT PROFILE
[Questionnaire form; layout not reproducible. It asks for the number of items designed each year (in banded ranges) in the categories SSI/MSI, custom LSI, microprocessor, memories and computer subsystems, and then: "In order to appreciate the complexity of your design operation, please indicate the approximate range of complexity in terms of equivalent logic gates for the areas of activities: IC, PCB, subsystems" (ranges from 10 up to more than 1,000,000 gates).
Definitions: IC - integrated circuit; logic component, from a few gates to a microprocessor, on a single chip. PCB - printed circuit board, with active components of any complexity. Subsystem - separately testable part of the total system, e.g. display subsystem.]
FIG. 2. TYPICAL MATRIX STRUCTURE
[Questionnaire matrix (section A.2, Reasons for Using CAD); layout not reproducible. Each criterion is rated A = essential, B = strong reason, C = neutral, D = weak reason, E = no reason at all, or Don't know. The criteria are: savings in manpower (design, production, testing, maintenance); time saving in design, production, testing and maintenance; improvement of design quality; documentation; necessity (increased complexity, increased work load); shortage of skilled manpower; provide common database for all to use; ability to evaluate different designs; ability to change specification; research; others (please specify).]

Examples of weighting used in the following tables

CHART CONSTRUCTION

(A) When there are five possible answers (Table I).
Weights: A = 8, B = 6, C = 4, D = 2, E = 0; Don't Know discounted.
Example (Savings in manpower): A = 29%, B = 40%, C = 18.2%, D = 7.3%, E = 1.8%, DK = 3.6%.
Resultant weighting = (8 × 29 + 6 × 40 + 4 × 18.2 + 2 × 7.3 + 0 × 1.8)/(100 - 3.6) = 5.80

Reasons for Using CAD: A = Essential, B = Strong reason, C = Neutral, D = Weak reason, DK = Don't know.

(B) When there are only four possible answers (Tables IV and VI), the weights are A = 8, B = 4, C = 0; Don't Know discounted.
Example: A = 45.7%, B = 33.9%, C = 13.6%, DK = 6.8%.
Resultant weighting = (45.7 × 8 + 33.9 × 4 + 13.6 × 0)/(100 - 6.8) = 5.38

Problem Areas: A = Serious problem, B = Minor problem, C = No problem.
EEC Opportunities: A = Very interested, B = Little interested, C = Uninterested.
Needs for Period 1979-82: A = Urgently needed, B = May be needed, C = Unimportant, DK = Don't know.

NOTE: The weighted value or score can only vary between 0 and 8.
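The resultant weighting above is simply a percentage-weighted mean with the 'don't know' answers removed from the denominator. Purely as an illustration (the function below is not part of the study's own software), this short Python sketch reproduces the two worked examples using the figures quoted above.

```python
def weighted_score(percentages, weights, dont_know=0.0):
    """Weighted score used for the charts: sum of weight x percentage over the
    answer categories, divided by (100 - percentage answering 'don't know')."""
    total = sum(weights[cat] * pct for cat, pct in percentages.items())
    return total / (100.0 - dont_know)

# Example (A): five possible answers, weights A=8, B=6, C=4, D=2, E=0, DK discounted
five_way = weighted_score({"A": 29, "B": 40, "C": 18.2, "D": 7.3, "E": 1.8},
                          {"A": 8, "B": 6, "C": 4, "D": 2, "E": 0},
                          dont_know=3.6)

# Example (B): four possible answers, weights A=8, B=4, C=0, DK discounted
four_way = weighted_score({"A": 45.7, "B": 33.9, "C": 13.6},
                          {"A": 8, "B": 4, "C": 0},
                          dont_know=6.8)

print(f"{five_way:.2f} {four_way:.2f}")   # 5.80 5.38
```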

TABLE I. CAD PROBLEM AREAS - Countries: Europe, USA, Japan
[Weighted bar chart (scores 0-8, most serious first); bar values not reproducible. Criteria: capital costs of CAD software; capital costs of CAD hardware; running costs (hardware cost and manpower cost); retraining of personnel to use CAD; recruiting of personnel skilled in CAD; no single comprehensive data base covering all CAD operations; no single comprehensive input coding/description language covering all CAD operations; inadequate CAD packages to cope with the complexity of circuits; gaining confidence in technical results produced by CAD; inadequate algorithms for CAD; man-machine interface peripherals; inadequate CAD packages to cover hybrid electronics.]

TABLE II. CAD PROBLEM AREAS - Countries: Europe + USA + Japan, suppliers only
[Weighted bar chart; bar values not reproducible. Criteria: capital costs of CAD software; capital costs of CAD hardware; running costs (hardware cost and manpower cost); obtaining skilled personnel for research and development work; lack of theoretical understanding; developing packages to cope with complexity of circuits; inadequate algorithms for CAD; man-machine interface peripherals; developing packages to cover hybrid electronics; too small a market for packages.]

TABLE III. REASONS FOR USING CAD - Countries: Europe + USA + Japan
[Weighted bar chart; bar values not reproducible. Criteria: savings in manpower (design, production, testing, maintenance); time saving in design, production, testing and maintenance; improvement of design quality; documentation; increased complexity; increase of work load; shortage of skilled manpower; provide common data base for all to use; ability to evaluate different designs; ability to change specification; research and development tool.]

TABLE IV. REASONS FOR USING CAD - Country: total Europe
[Weighted bar chart; bar values not reproducible. Criteria as in Table III.]

TABLE V. NEEDS FOR THE PERIOD 1979-1982 - Countries: Europe + USA + Japan
[Weighted bar chart; bar values not reproducible. Criteria: single comprehensive database covering all CAD operations; single comprehensive input description language covering all CAD operations; CAD packages able to handle higher complexities of circuit design; universities/polytechnics to teach more CAD in their courses; more CAD courses to be provided for engineers in industry; better man-machine interface peripherals; adequate CAD packages to cover hybrid electronics; CAD packages for a top-down design approach (conceptual specification to detailed implementation); CAD packages to cope with conceptual specification independent of hardware or software implementation.]

TABLE VI. NEEDS FOR THE PERIOD 1979-1982 - suppliers only
[Weighted bar chart; bar values not reproducible. Criteria as in Table V.]

TABLE VII. INTEREST IN EEC PROJECTS
[Weighted bar chart; bar values not reproducible. Criteria: how interested would you be if an EEC project was set up (a) to organise a central comprehensive database for digital electronics, (b) to provide a CAD computer service, (c) to provide a CAD computer package, (d) to provide CAD standards?]

[Table: company profiles from the survey; the page layout could not be recovered. Columns: general activities of company; company size (key: large, medium, small, in terms of turnover); type of packages (in-house and/or commercial); computer used, core available, core required; percentage of design done on CAD for ICs, PCBs and subsystems; comments (CAD essential for / difficulties in). Rows recoverable from the original:
- Computer manufacturer; in-house & commercial packages; UNIVAC, 500K core; >90% and <10% on CAD; essential for testing and general quality; difficulties in cost effectiveness.
- Military electronics development & production; commercial packages; IBM 370/168, IBM 360/91, UNIVAC 1108, SMC 3100; >90% on CAD; essential for production design and testing, better quality; difficulties in specification-to-design translation, PCB design and checking.
- Custom LSI design and test equipment; commercial packages; ICL 1900, 37KW available, 18KW required; >90% on CAD; essential for LSI design quality; difficulties in modelling, simulation and testing.
- Production; in-house packages; INTERDATA 832, IRIS 80, IBM 370, 200K min; >90% on CAD; CAD essential only for increased complexity (LSI); difficulties in retraining of personnel.
- Production; in-house packages; INTERDATA 832; 100% on CAD; essential for quality and complexity; difficulties in complexity/technical software costs.
- Engineering planning and design automation; in-house & commercial packages; CDC CYBER 174, DEC 15, MODCOMP, NOVA, 32KW; >90% on CAD; essential for manpower savings, reduction in lead time of 50%, product quality; difficulties in higher complexities, capital costs, obtaining skilled men.
- Computer & telecommunications; commercial packages; IBM 370/158; >50% on CAD; general savings but not yet essential; difficulties in obtaining skilled personnel and the need for a data base.
- Large system engineering; in-house packages; MITRA 15, 32KW available, 16KW required; >90% and <10% on CAD; essential for testing complex ICs; difficulties in software costs, manpower costs and training, inadequate algorithms.
- Production; in-house system; SEL 325S; >90% and 10% on CAD; essential for performance evaluation before construction (customer requirement) and manpower savings; difficulties in communications between disciplines and departments.
- Computer manufacturer; in-house & commercial packages; DEC 10, GT 62, 75KW available, 512KW required; >90%, >90% and <50% on CAD; very much an IC design tool, reduction in lead time, tool for R & D; difficulties in IC design.]

TABLE: INITIALISATION TESTS

METHOD USED | INITIALISATION RESULTS
1st order initialisation | circuit initialised correctly
1st order | circuit initialised correctly
standard 3-valued logic | not initialised
standard 3-valued logic | not initialised
standard 3-valued logic | not initialised

Comments recorded against individual rows (row assignment not recoverable): one establishment required an option to be given to run a special routine for 1st order initialisation; two users did not know about the special routine for 1st order initialisation.

N.B. All other establishments were unable to do the test.
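Several entries above refer to 'standard 3-valued logic', in which every storage element starts at the unknown value X and the circuit counts as initialised only when the applied input sequence drives all state nodes to 0 or 1. The sketch below is purely illustrative (the gate set and the cross-coupled NAND latch are assumptions, not one of the surveyed circuits); it shows why a plain three-valued simulator can report 'not initialised' when an unknown keeps regenerating itself around a feedback loop, the kind of case the special first-order routine mentioned in the table is presumably intended to handle.

```python
# Three-valued logic values: '0', '1' and 'X' (unknown).
def v_not(a):
    return {'0': '1', '1': '0', 'X': 'X'}[a]

def v_nand(a, b):
    if a == '0' or b == '0':      # a controlling 0 forces the output high
        return '1'
    if a == '1' and b == '1':
        return '0'
    return 'X'                    # otherwise the output is unknown

def sr_nand_latch(s_bar, r_bar, q, q_bar, steps=4):
    """Relax a cross-coupled NAND latch for a few steps and return (q, q_bar)."""
    for _ in range(steps):
        q, q_bar = v_nand(s_bar, q_bar), v_nand(r_bar, q)
    return q, q_bar

# Both outputs start unknown.  Driving set-bar low initialises the latch ...
print(sr_nand_latch('0', '1', 'X', 'X'))   # ('1', '0')  -> initialised
# ... but with both inputs inactive, plain 3-valued simulation leaves it at X,
# which is why several establishments reported 'not initialised'.
print(sr_nand_latch('1', '1', 'X', 'X'))   # ('X', 'X')
```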

TABLE X. TIMING ANALYSIS TESTS
[The column alignment of this table could not be recovered; the recoverable content is summarised. Columns: method used, hazards detected, comments. Methods used by the establishments: min-max 3-valued; min-max multi-valued; 33% tolerance path tracing; ambiguous gate model; probability function of the components. Hazards detected included: hazard on output; no hazards; hazard generated around feedback loop; hazards only on the positive edge of SELECT; hazards only on OSC; no hazard on output; hazard generated on both edges of SELECT at output; pessimistic hazard on the negative edge of SELECT. Comments: standard technique fails on this circuit; successful run due to its path tracing; simulation crashed after 128 pulses when feedback was active, JK flip-flops do not recognise dynamic hazards on the clock input; no pessimism on the negative edge of SELECT because tolerance given to S1, S2 gates only; accurate results obtained (several establishments); no hazards detected, for the wrong reason; successful run due to its probability-based algorithm, but very slow; algorithm could not detect that no hazard is on the output.]
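Several of the methods named above (min-max 3-valued, min-max multi-valued, tolerance path tracing) share the idea of propagating an interval of possible switching times through each gate and flagging a potential hazard where ambiguity regions overlap. The fragment below is only a sketch of that idea with assumed delay figures; it does not reproduce any of the packages tested.

```python
# Min-max timing: each signal edge is an interval [earliest, latest] switching time.
def gate_output_interval(input_intervals, d_min, d_max):
    """Output ambiguity interval of a gate whose inputs switch within the given
    intervals and whose propagation delay lies between d_min and d_max."""
    earliest = min(lo for lo, hi in input_intervals) + d_min
    latest = max(hi for lo, hi in input_intervals) + d_max
    return earliest, latest

def may_glitch(interval_a, interval_b):
    """Potential hazard if the two input ambiguity regions overlap, i.e. the
    gate may momentarily see both inputs in transition."""
    (a_lo, a_hi), (b_lo, b_hi) = interval_a, interval_b
    return a_lo <= b_hi and b_lo <= a_hi

# Assumed example: SELECT reaches one gate input directly and the other via an
# inverter with a 2-5 ns delay; intervals are in nanoseconds.
select_edge = (10.0, 11.0)                      # tolerance on the SELECT edge
inverted = gate_output_interval([select_edge], 2.0, 5.0)
print(inverted)                                  # (12.0, 16.0)
print(may_glitch(select_edge, inverted))         # False -> no overlap, no hazard flagged
print(may_glitch((10.0, 13.0), inverted))        # True  -> pessimistic hazard reported
```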

TABLE XI. FAULT SIMULATION TEST

METHOD USED | PRIMITIVES USED | FAULTS/CPU-SEC
serial | functional FF | 0.38
serial | functional FF | 0.033
parallel | gates | 0.33
deductive | gates | 0.077
deductive | NAND gates | 0.56
deductive | NAND gates | 1.9
parallel | gates | 3.1
parallel | functional FF | (no times supplied)

Comments recorded against individual rows (row assignment not recoverable): store sizes of 6 KB and 26 KB were used; 31 and 47 faults were simulated per pass.
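The 'parallel' method in the table packs the fault-free circuit and a batch of faulty copies into the bit positions of one machine word, so that a single pass of bitwise gate evaluations simulates all of them; a fault is detected when its bit differs from the good-circuit bit at an observed output. The sketch below illustrates the mechanism on an assumed two-gate circuit with three stuck-at faults; it is not taken from any of the surveyed programs.

```python
# Parallel fault simulation: bit 0 = fault-free circuit, bits 1..n = faulty copies.
WIDTH = 4                       # good circuit + 3 faults in this toy example
MASK = (1 << WIDTH) - 1

def drive(value, stuck_at_1=0, stuck_at_0=0):
    """Replicate a logic value across all copies, then inject per-copy faults."""
    word = MASK if value else 0
    return (word | stuck_at_1) & ~stuck_at_0 & MASK

# Assumed circuit: y = NOT(a AND b).  Fault list:
#   bit 1: a stuck-at-0, bit 2: b stuck-at-1, bit 3: AND output stuck-at-1.
def simulate(a_val, b_val):
    a = drive(a_val, stuck_at_0=0b0010)          # inject a s-a-0 into copy 1
    b = drive(b_val, stuck_at_1=0b0100)          # inject b s-a-1 into copy 2
    and_out = (a & b) | 0b1000                   # copy 3: AND output s-a-1
    y = ~and_out & MASK                          # bitwise NOT evaluates all copies at once
    good = MASK if (y & 1) else 0                # replicate the good-circuit value
    detected = y ^ good                          # copies whose output differs from it
    return y, [i for i in range(1, WIDTH) if detected & (1 << i)]

# Pattern a=1, b=1 exposes the 'a stuck-at-0' fault (copy 1) at the output.
print(simulate(1, 1))    # (2, [1]): only fault 1 is detected by this pattern
```

Word width, of course, limits how many faults can be handled per pass, which is why the table quotes figures such as 31 and 47 faults simulated per pass.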

TABLE XII. AUTOMATIC TEST GENERATION TESTS

NO. OF FAULTS | TOTAL CPU TIME | NO. OF PATTERNS | FAULT COVER
4M (A) | 1320 secs | 677 | 90%
367 (A) | 194 secs | 798 | 88%
50 (B) | 7 secs | 12 | 100%
84 (B) | 15 secs | 16 | 100%
85 (B) | 3 secs | 11 | 100%
85 (B) | 1 sec | 11 | 100%
52 (B) | <8 secs | 14 | 100%
30 (B) | 2 secs | 4 | 100%
45 (C) | 19 secs | 11 | 98%
54 (D) | 414 secs | 110 | 87%
474 (D) | 318 secs | 149 | 85%
78 (D) | 7 secs | 62 | 81%
72 (D) | 65 secs | 68 | 81%

N.B. The tests got progressively more difficult from A to D.

TABLE XIII. EFFICIENCY
[The row alignment of this table could not be recovered; the recoverable content is summarised. Columns: machine capability and size (slow/medium/fast, small/large); language (assembler, Fortran, SIMULA 67); primitives used (gates, NAND gates, functional flip-flops); delay model (zero, unit, typical, asynchronous, synchronous, probability values); clock pulses simulated per CPU second. Reported rates (clock pulses/CPU sec): 54, 39, 54, 408, 4, 20, 0.2, 8, 10, 85, 21, 23, 34, 2.2, 7.8, 2.5.]
Bibliography

Specifications

Holt, A.W. - Information System Theory Project, Applied Data Research, Tech. Report No. RADC-TR-68-305.
Heath, F.G. - The LOGOS System, IEE Conf. CAD, No. 86, 1972.
Lewin, D.W. - Specification and Design Languages for Logic Systems, AGARD Conf. Computer Aided Design of Electronic Circuits, No. 130, 1973.
Clare, C.R. - Designing Logic Systems using State Machines, McGraw-Hill, New York, 1973.
Iverson, K.E. - A Common Language for Hardware, Software and Applications, Proc. AFIPS SJCC 21, pp. 345-351, 1962.
Schorr, H. - Computer Aided Digital Design and Analysis Using a Register Transfer Language, IEEE Trans. Electronic Computers EC-13, pp. 730-737, 1964.
Duley, J.R. & Dietmeyer, D.L. - A Digital System Design Language (DDL), IEEE Trans. on Computers C-17, pp. 830-861, 1968.
Friedman, T.D. & Yang, S.C. - Methods used in an Automatic Logic Design Generator (ALERT), IEEE Trans. on Computers C-18, 1969.
Bell, C.G. & Newell, A. - The PMS and ISP Descriptive Systems for Computer Structures, Proc. AFIPS SJCC 36, pp. 351-384, 1970.

Simulation

Breuer, M.A. - A Note on Three Valued Logic Simulation, IEEE Trans. on Computers C-21, April 1972.
Chappell, S.G. & Yau, S.S. - Simulation of Large Asynchronous Logic Circuits using an Ambiguous Gate Model, Proc. Fall Joint Computer Conf., 1971.
Breuer, M.A. & Friedman, A.D. - Diagnosis and Reliable Design of Digital Systems, Pitman, 1977.
Szygenda & Thompson - Digital Logic Simulation, Computer, March 1975.
Armstrong, D.B. - A Deductive Method for Simulating Faults in Logic Circuits, IEEE Trans. on Computers C-21, pp. 464-471, May 1972.
Ulrich, E.G., Schuler, D.M. & Baker, T.E. - A Computer Program for Logic Simulation, Fault Simulation and the Generation of Tests for Digital Circuits, ACIA Congress on Simulation of Systems, Aug. 1976.
Roth, J.P. - Developments in Diagnosis, IBM Report RC 5769 (25006), Dec. 1975, Computer Sciences Dept., IBM Watson Research Centre, Yorktown Heights, N.Y.; or IEEE Computer Soc. Repository R76-68.
Parker, K.P. - Adaptive Random Test Generation, Journal of Design Automation and Fault-Tolerant Computing, 1, pp. 62-83, 1976.

Technology Trends

SAGET - 'CAD Electronics Study' for the Commission of the European Communities, Luxembourg, Ref. CEC T/3/77.
Noyce, R.N. - Microelectronics, Scientific American 237, 3, 63.
Meindl, J.D. - Microelectronic Circuit Elements, ibid 237, 3, 70.
Holton, W.C. - The Large Scale Integration of Microelectronic Circuits, ibid 237, 3, 82.
Oldham, W.G. - The Fabrication of Microelectronic Circuits, ibid 237, 3, 111.
Hodges, D.A. - Microelectronic Memories, ibid 237, 3, 130.
Toong, H.D. - Microprocessors, ibid 237, 3, 146.
Terman, L.W. - The Role of Microelectronics in Data Processing, ibid 237, 3, 163.
Oliver, B.M. - The Role of Microelectronics in Instrumentation and Control, ibid 237, 3, 180.
Mayo, J.S. - The Role of Microelectronics in Communications, ibid 237, 3, 192.
Sutherland, I.E. & Mead, C.A. - Microelectronics and Computer Science, ibid 237, 3, 210.
Kay, A.C. - Microelectronics and the Personal Computer, ibid 237, 3, 211.
Proc. IEEE, Special Issue on Microprocessor Applications, Feb. 1978.
Altman, L. & Cohen, C.L. - The Gathering Wave of Japanese Technology, Electronics 50, 12, p. 99, 9th June 1977.
Japan Presses Innovations to Reach VLSI Goal, Electronics 50, 12, 110, 9th June 1977.
Electronics 50, 22, 27th October 1977, Special Issue 'Annual Technology Update'.
Altman, L. - Here Come the Big, New 64K ROMs, Electronics 51, 7, 94, 30th March 1978.
Wilson, D.R. - Cell Layout Boosts Speed of Low Power 64K ROM, Electronics 51, 7, 96, 30th March 1978.
Holdt, T. & Yu, R. - VMOS Configuration Packs 64 Kilobits into 175 mil² Chip.
Greene, R. - Dense Interchangeable ROMs Work with Fast Microprocessors, Electronics 51, 7, 104, 30th March 1978.
Altman, L. - New MOS Processes Set Speed, Density Records, Electronics 50, 22, 92, 27th October 1977.
Altman, L. - Five Technologies Squeezing More Performance from LSI Chips, Electronics 50, 17, 91, 18th August 1977.
Pashley, R. et al. - MOS Scales Traditional Devices to Higher Performance Levels, Electronics 50, 17, 94, 18th August 1977.
Jenne, F.B. - Grooves Add New Dimensions to VMOS Structure and Performance, Electronics 50, 17, 107, 18th August 1977.
Sander, W. et al. - Injection Logic Boosts Bipolar Performance while Dropping Cost, Electronics 50, 17, 107, 18th August 1977.
Eichelberger, E.B. & Williams, T.W. - A Logic Design Structure for LSI Testability, Proc. 14th Annual Design Automation Conference, June 1977.
Yamada, A. et al. - Automatic System-level Test Generation and Fault Location for Large Digital Systems, Proc. 15th Annual Design Automation Conference, June 1978, p. 347.
Gate Arrays Taking Over in Logic using ECL, Electronics International, March 30, 1978, pp. 39-40.
Mystery Computer Unveiled, Electronics International, March 29, 1973, pp. 51-52.
Masaki et al. - 200 Gate ECL Masterslice LSI, ISSCC Conference Digest, pp. 62-63, February 1974.
Hollock, S. - High Speed Programmable Logic Arrays, Digest of IEE Colloquium on High Speed Circuits and Techniques, pp. 1.1-1.2, November 1976.
Braecklemann, W. - A Masterslice LSI for Subnanosecond Random Logic, ISSCC Conference Digest, pp. 108-109, February 1977.
Colao, S. - High Speed Applications of the CDI Process, Digest of IEE Colloquium on High Speed Circuits and Techniques, pp. 2.1-2.7, Nov. 1976.
Nakano, T. et al. - A 920 Gate Masterslice, ISSCC Conference Digest, pp. 64-65, February 1978.

Computer Architecture

Aspinall, D. - The Microprocessor and its Application, Cambridge University Press, Cambridge.
Brooks, F.P. - An Overview of Microcomputer Architecture and Software, Proc. Euromicro Symposium, Venice.
Cherance, R.J. - Design of High Level Language Oriented Processors, Sigplan Notices, Vol. 12, No. 1, Jan. 1977.
Down, J. & Taylor, F.E. - Why Distributed Computing?, National Computer Centre, UK.
Flegel, H. - Aufbau und Eigenschaften schlüsselfertiger CAD-Stationen, VDMA Proc. 1978, Frankfurt.
Flynn, M.J. - Towards More Efficient Computer Organisations, Proc. AFIPS SJCC '72.
Gelenbe, E. & Mahl - Computer Architecture and Networks, North-Holland Publishing, Amsterdam.
Hyla, G. - CAD mit Hilfe von Computer-Graphics-Systemen, VDMA Proc. 1978, Frankfurt.
Joseph, E. - Computers and Networks - 1980: Some Architectural Trends, Proc. IEEE COMCON 76.
Widdoes, L.C. - Architectural Considerations for General Purpose Multiprocessors, Proc. 13th IEEE Comp. Soc. Intl. Conf., 1976.
An Integrated Hardware Software System for Computer Graphics in Time Sharing, ESL and Project MAC, MIT, Cambridge, Mass., December 1969.


EUROPEAN COMMUNITIES STUDY ON CAD OF DIGITAL CIRCUITS AND SYSTEMS
SURVEY IN U.S.A. AND CANADA

A.H. Carter
Technical Services Manager, Engineering Department
Plessey Radar Limited, Addlestone
INTRODUCTION

Over twenty establishments were visited in the survey. They were chosen to provide a representative mix of industrial activities, both as suppliers and users of CAD systems, and universities covering research and development. The objective of this paper is to highlight the CAD activities reviewed in the survey and, as a consequence, help to stimulate cost-effective applications of integrated CAD systems in industry. The main topics covered in the paper are:-

1 Survey Interviews
2 Awareness of CAD/CAM System Performance
3 An Integrated CAD/CAM System Approach
4 New Design Criteria and Equipment Practices
5 Benefits of a CAD/CAM System
6 Integrated Data Base for Engineering Manufacture and Test

1 SURVEY INTERVIEWS
1.1 Common Themes from Survey Interviews

From the grass-roots information gathered from the survey discussions a number of common themes emerged:
1 Start-up costs are high
2 Need to know more about existing packages
3 Package evaluation information needed
4 Need to understand reasons for existing methods before applying CAD
5 Need to change existing methods in order to optimise the CAD system
6 Implementation of CAD had a major impact on design, manufacture and test operations
7 Major benefits quoted were common

One result of establishing many of these common themes was to see the need for, and to initiate, this symposium.

1.2 Interview Methods

Although a considerable amount of time and effort had been given to establishing a comprehensive questionnaire, it was essential at all interviews to hold detailed general discussions, lasting several hours, with those visited before attempting to formally complete the questionnaire. It was obviously important to hold discussions with a range of people inside a company, both CAD/CAM managers and staff, and particularly the design engineers who use the systems. The questionnaire was very valuable and at the end of the general review it stimulated further discussions.

2 AWARENESS OF CAD/CAM ACTIVITIES AND SYSTEM PERFORMANCE
2.1 Awareness

It was soon apparent that, even though the USA holds national and international symposia on CAD/CAM, there is still a great need to improve the awareness among individuals working in CAD/CAM of the current state-of-the-art. Many key specialists commented: 'If only there was available an analysis and appreciation of existing CAD/CAM packages and the benefits and problems in the hands of the users, it would be extremely valuable'.

2.2 Education and Training Inside Companies

Those companies that are most advanced in applying CAD/CAM and are reaping the benefits have used considerable time and money to ensure training and education of all their staff, up to and including their top executives. The impact of CAD/CAM when effectively applied is so great that a programme of education is essential if all functions in the business are to contribute to achieving the CAD/CAM implementations and the resulting benefits.

NOTE: Automation is usually related to production and test. CAD/CAM is automation and control in engineering, production and test, but there is little awareness of this potential in most companies.

2.3 Existing Packages

The existing packages can be divided into two broad categories:
(a) In-house only
(b) Commercially available

2.3(a) In-house Systems

Some very large companies know how extensive the impact of CAD/CAM is on their competitiveness; hence they invest large sums of money but will not make their systems available outside their company. They integrate their system with their own equipment and design practices. They establish their own in-house design criteria, and production and test techniques and procedures are modified to optimise the use of CAD/CAM automation systems.

2.3(b) Commercially Available Systems


Most companies use commercially available systems. These systems will be in considerable demand as industry becomes more aware of their advantages. However, the source codes of these systems are generally not available to users and therefore they cannot be enhanced or modified by the users to improve effectiveness. Another major drawback with proprietary software is the difficulty of integrating packages from different sources and achieving transportability between computers.

3 AN INTEGRATED CAD/CAM SYSTEM APPROACH
3.1 Simulate, Analyse and Test PCB and Custom LSI

The logic simulation, analysis, design verification and auto-test pattern generation modules form the central part of an integrated CAD/CAM system. Figure 1 illustrates an integrated CAD/CAM system. In the diagram the vertical functions are applied to PCBs. The horizontal functions cover system simulation, synthesis and custom LSI applications. The point that is important to note here is the need for simulation packages to be applied to both PCB and custom LSI designs. United States companies are using these simulation and analysis modules in this dual role and for the same basic reasons. Clearly, when designing for custom LSI it is essential to simulate, analyse and verify the designs before submitting them to a costly and time-consuming manufacturing process; in other words, the designs must be right first time. The same principle is true for PCBs, especially now that most PCBs typically have 50% of their ICs as MSI devices; there are consequently many very complex PCBs, and again it is essential to get the designs right first time. In one company, a critical reason for using the simulator and auto test pattern generation packages was the shortage of staff to produce complex test patterns to apply to their custom LSI designs before submitting these designs to manufacture. Having used these modules, they also found that the depth of testing that could be achieved gave an insight into their own designs which was far superior to their normal manual method of producing test patterns. The design engineers found that the in-depth analysis they obtained resulted in them changing their designs to eliminate design errors and improve the testability of their designs. This is vital in achieving a design that meets requirement specifications.

3.2 Integration of PCB Packages

In Figure 1 the flow diagram is shown in the form of a cross. The vertically connected packages typify how packages could be linked/integrated to cover the complete range of PCB CAD/CAM operations from design through to manufacture and test. This type of integrated CAD/CAM approach is now being taken in the USA and, as a result, major benefits in time, cost and quality are being achieved.


A major impact of this approach is that some projects no longer build prototypes to verify their designs before producing their first production PCBs. The ability to achieve Right First Time is vital if significant time and money is to be saved by using CAD/CAM. The packages that are used to achieve this Right First Time capability are themselves continuously verifying their activities and the design data they are processing. Stage-by-stage verification, and correction at many different stages in the process, is the key to eliminating all errors.

3.3 Integration of Custom LSI Packages

Producing LSI devices is obviously an expensive design, manufacture and test process. There is no way of building a representative prototype and therefore CAD/CAM is essential to establish Right First Time, particularly with custom LSI, because there is often no large-quantity production that can justify an expensive investment of time and money in producing the product. The right-hand part of the horizontal flow diagram in Figure 1 typifies the use of CAD/CAM packages to produce the custom LSI devices. There is a distinct advantage to be gained by using the modular approach illustrated in Figure 1. The logic simulation, and hence design analysis and design verification, can be done by the logic designer before submitting his design to a specialist organisation for manufacture of custom LSI. The specialist organisation can benefit by partitioning his CAD/CAM requirements into three modules:-

Producing LSI devices is obviously an expensive design manufacture and test process. There is no way of building a representative prototype and therefore CAD/CAM is essential to establish Right First Time, particularly with Custom LSI, because there is often no large quantity production that can justify expensive time and money investment in producing the product. The right-hand part of the horizontal flow diagram in Figure 1 typifies the use of CAD/CAM packages to produce the Custom LSI devices. There is a distinct advantage to be gained by using the modular approach illustrated in Figure 1. The logic simulation and hence design analysis and design verification can be done by the logic designer before submitting his design to a specialist organisation for manufacture of Custom LSI. The specialist organisation can benefit by partitioning his CAD/CAM requirements into three modules:1 2 3 Circuit simulation Placement and layout Process simulation

The techniques can change in each of these areas, and it is noticeable that major packages have been developed keeping these areas independent, although related to each other; hence the need for modular interlinking/integration.

4 NEW DESIGN CRITERIA AND EQUIPMENT PRACTICES

As mentioned earlier, CAD/CAM is a form of automation that spans engineering design, manufacture, test and the quality of all three functions. As in most automation, in order to optimise its use many controls (or the lack of them) and procedures must be changed. For example, percentage testability will be established at the design stage and formally quantified and documented. Design verification will include worst-case tolerance and hence improve design quality and reduce cost of ownership in the field. Density of ICs per board area and the number of pins allocated for achieving high percentage testability will have to be established as part of the design criteria.

5 BENEFITS OF A CAD/CAM SYSTEM
5.1 Benefits, Time, Cost and Quality

Companies that are most advanced in CAD/CAM applications claim the foremost benefit is a drastic reduction in design, manufacture and test cycle time. Companies quote 30% to 50% reduction in the cycle time from receipt of contract to customer acceptance, and these figures are based on projects completed using an integrated CAD/CAM system, compared against estimated times based on past experience of similar project completion times before the application of the CAD/CAM system. Cycle time reductions of this magnitude obviously bring major benefits. Some of the general benefits of CAD/CAM systems are as follows:-
1 Increased capture of market
2 Improved control of design criteria and equipment practices
3 Establishment of an Engineering Design Data Base
4 Improved interface between Engineering, Production and Test
5 Improved quality and testability
6 Increased productivity per man
7 Increased period to sell a given product
8 Increased profits
9 Cash flow improvement
10 Improved accuracy of predicted cycle times and cost estimates
11 Worst-case design analysis
12 Reduction in fault diagnostic costs
13 Reduction in cost of ownership to customer
14 Reduction in operational down-time

6 INTEGRATED DATA BASE FOR ENGINEERING MANUFACTURE AND TEST

A CAD/CAM system virtually captures the bulk of the prime engineering design information. Studies since the USA visit have shown that in the PCB area the integration of the various packages will enable the basic design data to be input only once, and this information can then be used across the packages. This reduces the cost and time of inputting into a number of autonomous (stand-alone) packages. It also means that all changes to the prime data are made only once, so avoiding the danger of having prime data that is not up-to-date in a range of isolated autonomous systems; in fact a Prime Data Base is established. In the EDP world, EDP data base systems have been established primarily on large mainframe computers. However, changing technology has produced mini and micro computers that provide considerable computer power relatively cheaply, and this in turn enables EDP systems to consider distributed minicomputer systems coupled to the mainframe. This means a new generation of interactive EDP systems using minicomputers is now being established. The CAD/CAM integrated system approach is also based on using minicomputers in a batch and interactive mode. Clearly, CAD design analysis requirements capture the basic engineering data base up-stream of the EDP systems. There is therefore a major incentive to link/converge the basic CAD/CAM and EDP systems together.

7 CONCLUSIONS

The survey proved very successful because it was interactive. The objective of the symposium is to provide a high degree of interaction between delegates and speakers and to highlight the prime packages, their performance and their use by major parts of the industry. The USA/Canada visit provided very valuable information. There will be a continuing need to establish the relative performance of CAD/CAM packages and to take into account the ease or difficulty of integrating these packages. In the future, new packages should be designed taking into account the need to integrate with other packages, and ideally the source code should be available to users so that they can enhance the systems to meet their own special requirements. In order to create cost-effective applications of CAD/CAM systems in industry, a major impact will occur in design, manufacture and test, and new design criteria and equipment practices will be needed. The application of CAD/CAM to industry can significantly cut the cycle times from receipt of contract to delivery.

FIG. 1. INTEGRATED CAD/CAM SYSTEM - ENGINEERING DATA BASE
[Flow diagram drawn as a cross; graphics not reproducible. Blocks shown: auto test generator, system simulation, logic synthesisers, logic simulation, circuit simulation, semi-auto layout, process simulation (custom LSI route); auto placement and routing, auto test (PCB route); engineering data base and business data base.]

TECHNICAL FORUM

Chairman: Jacob VLIETSTRA, Philips, Eindhoven, The Netherlands


TECHNICAL FORUM I

Chairman: Jakob Vlietstra, Philips Eindhoven
Panel: Luther Abel, DEC, USA; Fred Hembrough, Raytheon, USA; Roy McGuffin, ICL, UK; F. Klashka, Siemens, West Germany

Introduction

Each member of the panel had presented papers earlier in the symposium in which they had indicated the extensive use their respective companies are making of CAD facilities, even though the individual implementations varied considerably. As advocates of CAD, the chairman immediately asked for confessions! What problems had they had with their various CAD suites? This resulted in some very frank statements from the speakers, which prompted discussion and comment from the conference audience. For convenience and clarity it is best to summarise the discussions under the basic problem headings of data base management, system integration, man-machine interfaces and discipline versus creativity.

Database Management

This was recognised by many participants to be an important key to the success of any CAD application. There were contrasting views between those who believe there should be rigid control of what data (usually meaning component information: function, electrical parameters, mechanical parameters, geometry, source etc.) should be placed on the data base, and those who advocate that every user should have the right to place any component in the library, even if it was from the 'radio shack' round the corner. The former group, who either used a 'vetting' committee or quality control engineers from the purchasing departments, agreed there were problems on two fronts. The rigidity tended to stifle innovation from the designers on the one hand, and as the quality assurance departments never used the data base there was little concern about its accuracy and up-keep. There was a body of opinion who believed that the design staff - the primary users - were the people who should control the data base: 'after all, they cared!'. At the same time it was appreciated that the same group wanted degrees of freedom which cannot be tolerated with such incredibly large, complex systems. Thus many favoured the engineering compromise of having local data bases and global data bases, the former being more flexible and the latter more strictly controlled. There were other strata of the data base which can best be described as functional levels. For example, the component information was considered by some to be distinct and separate from simulation information. However, having a separate simulation library caused problems of how to keep the two in synchronism. The arguments presented centred on the necessary knowledge and understanding of the required model by the user for a particular application; thus it was necessary to have separate simulation data bases. This was certainly a problem area highlighted in the survey study, and the participants agreed that many people used packages without the necessary understanding of the models used. In general there seemed to be acceptance of the difficulty of allowing flexibility to the users while retaining discipline and control of the data base. Although many of the technical aspects of the data base had been solved, on the management side there was still a great deal to be learnt. Certainly some members agreed that component libraries were very important and that they were absorbing a great deal of time and effort which could be shared.

System Integration

This subtitle can have two meanings:
1. the integration of different software packages to form a CAD suite
2. the integration of CAD with the user and the manufacturing process.

In fact both aspects were discussed in the forum by the panel and by speakers from the floor, all of whom indicated the importance of both aspects of system integration. There is a growing probability that no one company will be able to develop in-house software for the many applications of CAD to its business if it has not already got a substantial investment. Consequently more companies will be purchasing programs which, to be effective, must be compatible with existing packages, data base and manufacturing machines (A.T.E., N.C. machines, etc.). A whole range of 'integrated' systems was cited; for example, in one extreme it was necessary to use different input languages for different parts of the suite, with further languages for editing. Nevertheless the company was able to enjoy many of the advantages of CAD while it progressed the system integration. A number of experienced CAD specialists did emphasise the importance of having a planned overall structure to the whole CAD suite, but that it was essential to do a step-by-step development; this is particularly important when there are so many applications with ill-defined problems. By defining the problem/application in small steps and solving it before moving to the next one, you gain the confidence of the user and thus ease the widely expressed problem of gaining acceptance of design programs amongst designers. It was evident that once CAD had gained status in project terms the task of the central CAD unit was considerably easier in many ways, although their work load increased as the demand for their services grew. The centralized coordination was considered to be important, particularly when the company had multi-site working, with the often further disadvantage of different host computers. (Surprisingly there was little formal discussion on the portability of software, although it was a topic of frequent discussion at informal sessions.) A number of speakers were keen to emphasise the importance of ensuring that the outputs from the design programs were able to drive the manufacturing aids (mask plotters, N.C. tapes, automatic wiring machines, A.T.E. etc.) without further processing/translation. It was further considered desirable and possible to rationalise the manufacturing process using CAD to cope with large as well as small digital systems products. The integration of the user with the software available had been mentioned in connection with a number of papers presented at the symposium. The education of users was a subject studied in the EEC project. Fred Hembrough summed it up when he said 'Optimistic and pessimistic views (of CAD potential) are bad, particularly when held by managers; it is most important that they know what can and cannot be done (using CAD) for their project'. He went on to say his company have a project to make CAD software 'User Friendly', a phrase that many users hope will be practised by the CAD program creator.

Man-Machine Interfaces

In general, many of the speakers indicated that there were a number of projects aimed at improving man's aids to communicate with the machine. The Scandinavians seemed prominent in this field; some of the projects cited were special-purpose machines designed to support computer graphics for CAD, 'pen and paper' input to accommodate the creative design environment, and voice input. A Hungarian member indicated that an optical scanning process showed considerable promise for input of diagrammatic information. There was certainly a difference between the USA and European companies in that the former had a much greater use of graphical interfaces, which supported their belief in interactive working with many of the CAD programs. However, it was pointed out that perhaps the problems of man/machine interfaces were over-emphasised because, although the old generation of design draughtsmen may not like the new technology, the younger generation 'know no different' and therefore accept it and work efficiently with it.

Discipline Versus Creativity

One of the dominant problems throughout the symposium was that of coping with the ever increasing complexity ("doubling every year for the next eight years"). Some companies, notably IBM, had opted for discipline/design rules which the designers had to adhere to, with an extra component overhead of 20-25% but achieving testable designs. This provoked considerable exchange of views on the balance between creativity, flexibility and complexity on the one hand, and rules, control and testability on the other,

and the list may be extended; but what was apparent was that no one could offer the panacea. Roy McGuffin pointed out that we did not understand the nature of design, so how could we successfully produce aids to creative design? Certainly a number of people from different industries felt that partial solutions could only be achieved by working from the top down in a structured CAD environment.


TECHNICAL FORUM II

Chairman: Jakob Vlietstra, Philips Eindhoven
Panel: Cliff Gaskin, Litef, West Germany; Mel Breuer, University of Southern California, USA; Doug Lewin, Brunel University, UK; R. Schauer, IBM, USA

Introduction

The panel were chosen for their contrasting views on the problem of testing, and these were clearly stated when the panellists summarized their views and philosophy in their opening statements. Cliff Gaskin has an optimistic outlook; the problems in testing are being solved every day, and very often with the close collaboration of component manufacturers and users. This collaboration was very important because the problems of getting the models correct for the application program cannot be solved by the systems test engineer alone; appreciation and understanding of the software as well as the component model was important. In his opinion no company could afford to be without automatic generation of test stimuli, because there were always going to be design changes and the present product lead times do not allow for manual regeneration of test waveforms. Prof. Breuer took a more pragmatic view. Testing problems really do not exist at the SSI and MSI levels, only in LSI and VLSI technology. Here the problem is extremely difficult and there is no universal solution, only a number of strategies, all of which should be pursued:
a. Test what you can test for, and redefine the problems which are outside your testing capability.
b. Test whether your application program works as specified; if it does not, then the chip is sent back.
c. The most important strategy is to design for testability. In the design stages use CAD programs to give feedback, ensuring observability and controllability are available. Use the whole range of techniques, such as added logic, three operating modes etc., to achieve these characteristics.

He finished on an optimistic note by saying that many of the present problems could be solved if the designer took responsibility for testing. In contrast, Prof. Lewin painted a very pessimistic picture. The problem of testing sequential circuits was very old, 11 years, and he could not cite any real progress in that time. He asked the VLSI designers to consider more regular structures in their designs so that partitioning could allow effective synthesis to be practised. Also, as VLSI is modelled more effectively at the silicon level rather than the gate level, the logic design tools should work at this level, including the test pattern generation algorithms and fault models. However, the worst aspect of this technology advance in Doug Lewin's opinion was at the total system level: how do you use VLSI, how can it be put to good use, when there are no system design tools, no system specification and consequently no possibility of testing at this non-specified total system level. Software designers could cope with complexity because they were their own masters and could do a top-down design using structured programming. But the system hardware designer could only go part of the way down the design structure before he depended upon the component manufacturer. Dr. Schauer immediately seized on this latter point to show how IBM did not have these impasses. IBM make their own components and thus their system designers have the required detailed component specifications, fault modes and diagnostics. He went on to share Cliff Gaskin's view that industry will cope, engineering change will go on, and that the problem of testing needed a flexible environment for the designer. When design engineers work in an interactive mode on test generation they can use their full engineering intuition and achieve results. The chairman asked for comments from the floor, hoping the audience had not been too fractured by the diverse views expressed by the platform. It was Dr. Bennetts (Southampton University, UK) who felt moved to present his thesis on testing. Summarising his views, he considered that people were not looking at the real problem; they were only seeking short-term solutions. In the electronics industry we needed to take note of the philosophy prevalent in high-risk industries, aerospace and nuclear, and accept fault tolerance as a design criterion. Paul Roberts (SMC, USA) considered that Mel Breuer's suggestions on redefining the problems were just not acceptable because most of the present designs just cannot be redefined. Other speakers asked whether the testing problems would effectively halt the development of VLSI, and could this be a reason why custom LSI manufacture was increasing. The panel did not believe VLSI development would be particularly hampered, nor did they believe this was a reason for the growth of custom LSI, as any sane manufacturer would insist on some testing, if only to satisfy his customer. Perhaps the only agreement that could be seen in this session was the necessity to ensure that the designer took responsibility for testing very early in his design strategy.

EUROPEAN ECONOMIC COMMUNITY PERSPECTIVE

Chairman: S. BIR, E.E.C., Brussels, Belgium


FINAL SESSION
EUROPEAN ECONOMIC COMMUNITY PERSPECTIVE

Chairman: S. Bir, Head of DP Projects Bureau, EEC
Speakers: C. Layton, Directorate-General for Internal Market and Industrial Affairs, EEC; Prof. H. De Man, Chairman of the Technical Committee of the CAD Electronics Study

The Chairman explained how at a very early stage of the CAD study project it was recognised that a European symposium on the subject would be useful to Community members and provide informative feed-back to the Commission. The purpose of this session was to give delegates the opportunity to hear from the Director, Christopher Layton, just how this particular project area mapped into the overall strategy of the Commission, and for him and the Chairman of the Technical Committee, Prof. De Man, to hear from the delegates their views on what direction further work in this area should take. Prof. De Man reminded the audience of some of the proposals for future work that the study had revealed. He pointed out that the report on the project had given priorities to this list and developed detailed business plans for a number of the proposals, but that the list (Table I) presented here did not reflect these findings. Prof. De Man also took the opportunity to point out that procedures for initiating and then completing the study had taken some four years, and that with the evolution rate of technology it was essential that future decisions in this area were taken quickly and acted upon fast. Christopher Layton briefly explained to delegates where this type of work fitted into the EEC structure and plans. Essentially the action on Data Processing has a four-year programme with the unanimous support of member states, and this has two specific headings:
a. Standardisation, and
b. Application support scheme (32,000,000 European accounting units).

The CAD electronics field had implications under both headings and therefore fitted within the spectrum of work. He did, however, wish to make it clear that the standardisation effort was not an attempt to close Europe off, but rather to provide common I.S.O. implementation and thus allow national procurements to deal with the finer details. Engineers and managers were reminded that, as with any programme, there were competing claims for resources, and that any decision to favour one proposal over another would be taken on a broad base and not necessarily on narrow technical merits. Mr. Layton then made a number of comments relating to the proposals of Table I.

1. The component data bank - a good application area which was highly eligible; there was political consensus for this and hence resources.

2. Data Communication Network (Euronet) - this proposal clearly had implications which could provide the necessary catalyst for further development, but it would need discussions with Community P.T.T.s.

3. Product Areas - although Community support would be desirable, Mr. Teer's opening remarks were cited, namely that the first major task is to learn to apply the state-of-the-art technology and that up to 50% must be really viable.

4. Education - there was a despondent note to Mr. Layton's comments here; he said the Community had not grappled with education, that there was a need for much greater discussion at member-state level, and possibly a need for more initiative in the educational field.

He finally placed CAD of electronics in context with VLSI systems and pointed out that the EEC, like the member states and the United States, recognised the radical new range of problems posed by this technology. The EEC had created two groups to study and report on:

1. Applications of VLSI, and
2. CAD of VLSI.

In fact some members of these groups were attending this symposium and were benefiting from the international discussion of the CAD aspects of VLSI.

The Chairman then invited questions and comments from the delegates. Many spoke, giving views and experiences as well as asking questions; the most dominant theme was that of the problems of creating and maintaining a common data base with appropriate CAD component models. Many delegates considered it unlikely that component manufacturers could be pressurised into providing the necessary details for effective models to be generated, although it was recognised that if one vendor did, many others would be likely to follow. Some delegates took the view that many models were already available within European industry, often obtained by 'negative engineering' (slicing open chips). One delegate, Mr. Gaskin, felt that there should be a European spirit and that he for one would consider making Litef's models available to the community; if a number of companies did this, the proposal could quickly provide a needed service. In general there appeared to be support for the data base proposal, particularly if it included a six-month study which would answer some of the problems cited in the papers and discussion at the symposium.

Mr. Patrick (GEC Marconi, UK) expressed the disappointment of many delegates that not enough information had been released about the findings of the study. Mr. Bir answered this by explaining that a great deal of information had been solicited from companies under an undertaking of confidentiality. Equally, the study took in a spectrum of views, and it was this overall spectrum and the more general aspects which should be concentrated upon; releasing more details would only cause deviation from the larger problems and goals. Delegates were invited to write to the Director if they wished to express further views on the EEC perspective.

Mr. Layton expressed a special thanks to the contributors from outside the Community, and a general appreciation to all those who had participated was extended by Mr. Bir on behalf of the EEC.

TABLE I

E.E.C. OPPORTUNITIES

1. Education - to influence constituent governments to influence education and training courses to reflect the impact of the digital revolution.

2. Retraining - to establish organisations and facilities for the retraining of electronic engineers; expert knowledge needs to be shared in the Community in order to make retraining efficient and effective.

3. Procurement Lobby - by collective pressure on component suppliers, more detail about components could be forthcoming. In some cases joint projects to establish data characteristics suitable for component modelling for CAD could be established.

4. Common Database and Network Communication - establish a component model database and provide the necessary back-up service to maintain and update it and give user advice. To this end, utilisation of a European data communications network would be advantageous.

5. Standards - establish a comprehensive set of standards for European use of CAD. Many of these standards would be established de facto by the network system as in 4. The aim would be to allow transferability and interchange of both CAD packages and techniques and also of designed circuit elements and component models.

INDEX OF AUTHORS

ABEL, L.             139
AVENIER, J.P.        149
BREUER, M.A.          57
CARTER, A.H.         303
COLANGELO, R.        207
DAVIGNON, E.           3
DE MAN, H.            81
DE MARI, A.          247
GASKIN, C.           173
HEMBROUGH, F.        123
HOFFMAN, S.C.        229
JONES, H.E.          187
KANI, K.             103
KLASCHKA, F.         133
KLOMP, J.            217
LATTIN, B.           169
LEWIN, D.             25
LIPP, H.M.            91
LOOSEMORE, K.        237
MCGUFFIN, R.W.        13
MICHARD, J.          149
MUSGRAVE, G.         267
MUTEL, J.            149
PABICH, R.           123
QUILLIN, W.          255
RAULT, J.C.          149
ROBERTS, P.E.        183
SCHAUER, R.F.        187
SZYGENDA, S.A.        41
TEER, K.               7
TERAMOTO, M.         103
TOMLJANOVICH, M.     207
WOLSKI, T.           183
YAMADA, A.           103
