com: community portals, blogs, videos, trends-surveys, e-letters (IP, PLDs, and Chip Designer), resource catalog, back issues, ...
April/May 2010
Lead Story: DESIGN MEETS AUTOMATION
SEE INSERT FOR DETAILS ON THE MAIN EVENT FOR ELECTRONIC DESIGN
Affiliate Sponsors:
FOR MORE DETAILS, VISIT: www.dac.com
www.chipdesignmag.com
• Advances in System-Level Design and Synthesis
READ ONLINE: www.chipdesignmag.com
www.SLDCommunity.com
www.LPDCommunity.com
This “shift” away from being seen as an “EDA tool vendor” to instead being perceived as a “system solution provider” is both evolutionary and essential for business survival. EDA companies can no longer focus solely on the design and manufacture of the chips. Instead, they must consider the chips in relationship to the package and board--and even in terms of both hardware and software. This is one reason why IP has become so critical in the EDA tool chain.

Is this just a problem of scale, i.e. is the EDA industry too small? There are plenty of other examples in the SW domain where collaborations/networks/seamless integration between multiple vendors is reality. Once EDA realizes this as well, they have a chance to become solution providers. -- TV
Why Reinvent The Wheel?
Maximize the reuse of your IP Cores with Open Core Protocol, the only open, industry-wide socket standard. Utilizing OCP-IP’s free infrastructure eliminates the need to design, document, train and evolve a proprietary standard and support tools. This frees up valuable resources and maximizes ROI. The OCP Specification describes a robust, continuously updated, IP core interface, or socket, that is supported and built upon by the biggest names in the industry. The many benefits of OCP-IP membership include support, tools, training, marketing opportunities and much more. Visit our Web site today and see how membership with industry leaders can save you time, money and resources. www.ocpip.org
June 13-18, 2010 • Anaheim Convention Center • www.dac.com
GENERAL CHAIR’S WELCOME
LET’S MEET AT DAC!
Dear Colleague:
The 47th edition of the Design Automation Conference in Anaheim is just around the corner, and I look forward to welcoming you there. As the central “meeting place” for electronic design and design automation, where the industry puts on its grand annual show, DAC has many facets. It’s the place where new contacts are made, where deals are sealed, where theory meets practice, where colleagues across the industry network, where the seeds of great new ideas are sown – and much more. DAC is our annual signpost that points the way to the future.
As organizers of the event, we work with DAC’s sponsors and hundreds
of volunteers to make it worth your time to attend. This year, in addition to
reinforcing traditional strengths, we have added a number of exciting new
elements. Here’s a sample of what you can see at DAC:
• The keynote lineup features three distinguished and accomplished
industry luminaries: Doug Grose, CEO of GLOBALFOUNDRIES, will address the central role of the foundry in electronic design on Tuesday. Bernie Meyerson, Vice President for Innovation at IBM Corporation, will discuss his vision for next-generation IT infrastructure for EDA and the move towards cloud computing. And Iqbal Arshad, Corporate Vice President of Innovation Products at Motorola, will overview his experiences in driving the Motorola Droid from concept to product.
• A vibrant exhibition showcases nearly 200 companies, including all of the largest EDA vendors and a significant foundry presence. The Exhibitor Forum theater features focused technical presentations from exhibitors, while IC Design Central’s exhibit area and presentation stage bring together the entire ecosystem for SOC enablement, including IP providers, design services providers, and foundries.
• A special Embedded/SOC Enablement Day on Thursday is designed to further advance DAC’s partner ecosystem, and attracts a mix of chip creators, ecosystem suppliers, and research-focused participants.
• A robust technical program includes an exciting array of panels and special sessions that complement a carefully selected subset of the contributed research papers.
• The User Track program, specifically designed by and for EDA tool users, features presentations and poster sessions that highlight outstanding solutions to critical design and methodology challenges, and case studies of innovative tool use. In its second year, it is 50% larger than last year’s acclaimed program.
• An excellent slate of tutorials covers topics such as low-power design, ESL, and software development for the EDA professional.
• Management Day includes invited presentations and networking opportunities for decision-makers in the industry, and highlights issues at the intersection of business and technology.
• An impressive constellation of fourteen colocated events and six DAC workshops complements the DAC program; this includes established conferences and symposia such as AHS, DFM&Y, DSNOC, HOST, HLDVT, NANOARCH, SASP, and SLIP, as well as meetings on emerging topics such as bio-design automation, mobile/cloud computing, and smart grids.
As you can see, there’s tons of good stuff in store – come join us in Anaheim!
Sachin S. Sapatnekar
General Chair, 47th DAC

TECHNICAL PROGRAM HIGHLIGHTS
The technical program for DAC 2010 features exceptional-quality technical papers, panels, special sessions, WACI (Wild and Crazy Ideas), full-day tutorials and the User Track. The program is tailored for researchers and developers in the electronic design and design automation industry, design engineers, and management. It highlights the advancements and emerging trends in the design of electronic circuits and systems.
The core of the technical program consists of 148 peer-reviewed papers selected from 607 submissions (a 24% acceptance ratio). Organized in 35 technical sessions, these papers cover a broad set of topics ranging from system-level design, low-power design, physical design and manufacturing, embedded systems, logic and high-level synthesis, simulation, verification, and test to emerging technologies.
Popular submission themes included:
1. Power Analysis and Low-Power Design (83 submissions, 5 sessions)
2. Physical Design and Manufacturability (72 submissions, 4 sessions)
3. System-Level Design and Analysis (69 submissions, 4 sessions)
Some of the novel ideas presented in these papers include cutting-edge research in property checking, global routing, variation characterization, silicon mismatch, cache design for routers, rewiring, logic optimization with don’t cares, Boolean matching, and low-energy processor design. The papers reflect the increasing importance of system-level design, low-power design and analysis, and physical design and manufacturability.
KEYNOTES

TUESDAY, JUNE 15
FROM CONTRACT TO COLLABORATION: DELIVERING A NEW APPROACH TO FOUNDRY
Douglas Grose, Chief Executive Officer, GLOBALFOUNDRIES, Sunnyvale, CA
The list of challenges facing the semiconductor industry is daunting. Chip design continues to increase in complexity, driven by product requirements that demand exponentially more performance, functionality and power efficiency, integrated into a smaller area. In parallel, manufacturing technology is facing increased challenges in materials, cost and shorter product lifecycles. This confluence of factors puts the industry at a crossroads and the foundry industry at center stage.
Chip design companies need to redefine relationships with their manufacturing partners, and foundries must create a new model that brings manufacturing and design into an integrated and collaborative process. This presentation will explore the challenges of bringing the next generation of chip innovation to market through leveraging an integrated global ecosystem of talent and technology. The world’s top design companies want more than a contract manufacturer; they want a level of collaboration and flexibility supported by a robust partner ecosystem of leading providers in the EDA, IP and design services sectors.

WEDNESDAY, JUNE 16
Bernie Meyerson, Vice President for Innovation, IBM Corporation
Over the last five years the semiconductor industry has acknowledged, but struggled to deal with, the end of classical device scaling in silicon technology. This has had ramifications across all aspects of the technology spectrum, as a steady stream of innovations, ever more fundamental, has been required to drive accustomed generational improvements in Information Technology (IT). Adding to this challenge, on the demand side there has been an accelerating and seemingly insatiable need for IT resources, driven by the emergence of the ‘Internet of Things’. With such heavy and growing IT demands, key metrics such as system power, cost/performance, and application-specific benchmarks have become a core focus of emerging solutions. It is these same metrics and constraints that also require advances in the efficiency and optimization of IT. In this talk, I will review how our industry is dealing with each of these challenges, and explore emerging compute paradigms, such as Cloud Computing, that are impacting EDA directly.
MANAGEMENT DAY
Tuesday, June 15
Management Day 2010 is focused on issues at the intersection of
business and technology, and is specifically directed to managers
and decision-makers. Three sessions make up this year’s event.
Two sessions will feature managers representing IDMs, fab-light
ASIC providers, and fabless companies, as well as senior managers
designing today’s most complex nanometer chips, and will discuss
the latest solutions and their economic impact. The third session
will be a panel that involves the presenters and the audience in a
brainstorming discussion.
Detailed conference and exhibition information is now available online: www.dac.com.
Register today!
QUESTIONS? Call +1-303-530-4333
PANELS
This year’s DAC panels cover nearly every aspect of the design flow. The panel sessions start off with a look to the future by a wide range of leaders from the semiconductor industry. The other seven panels have something for everyone. Panels will explore the future of TSV/3D technology, the current state of high-level synthesis, different approaches to addressing process variability, the future of low-power design methodologies and how to bridge pre-silicon verification/post-silicon validation. One panel will also take a look at what is needed for an always-connected car. Finally, if you’ve wondered what cloud computing is all about, a panel will explore how cloud computing fits in with the EDA industry.
TUESDAY, JUNE 15
• EDA Challenges and Options: Investing For the Future
• Bridging Pre-Silicon Verification and Post-Silicon Validation
• Who Solves the Variability Problem?
WEDNESDAY, JUNE 16
• 3-D Stacked Die: Now or the Future?
• Does IC Design Have a Future in the Clouds?
• What’s Cool for the Future of Ultra Low-Power Designs?
THURSDAY, JUNE 17
• Designing the Always-Connected Car of the Future
• Joint User Track Panel (Session 8UB) - What Will Make Your Next Design Experience a Much Better One?
• What Input Language is the Best Choice for High-Level Synthesis (HLS)?

SPECIAL SESSIONS
Special sessions will deal with a wide variety of themes such as progress in networks-on-chip research, virtualization for mobile embedded devices, challenges in analog modeling, introduction to cyber-physical systems, design for reliability, designing resilient systems from unreliable components, a holistic view on energy management – cell phones to power grids – and post-silicon validation. Leading research and industry experts will present their views on these topics.
TUESDAY, JUNE 15
• Post-Silicon Validation or Avoiding the $50 Million Paperweight
• Virtualization in the Embedded Systems: Where Do We Go?
• Joint DAC/IWBDA Special Session - Engineering Biology: Fundamentals and Applications
WEDNESDAY, JUNE 16
• A Decade of NOC Research - Where Do We Stand?
• The Analog Model Crisis - How Can We Solve It?
• Design Closure for Reliability
THURSDAY, JUNE 17
• WACI: Wild and Crazy Ideas
• Cyber-Physical Systems Demystified
• Computing Without Guarantees
• Smart Power: From your Cell Phone to your Home

WORKSHOPS
SUNDAY, JUNE 13
• DAC Workshop on Synergies between Design Automation & Smart Grid
• Multiprocessor System-On-Chip (MPSOC): Programmability, Run-Time Support and Hardware Platforms for High Performance Applications at DAC
• DAC Workshop on Diagnostic Services in Network-On-Chips (DSNOC) - 4th Edition
MONDAY, JUNE 14
• IWBDA: International Workshop on Bio-Design Automation at DAC
• DAC Workshop on “Mobile and Cloud Computing”
• DAC Workshop: More Than Core Competence...What it Takes for Your Career to Survive, and Thrive! Hosted by Women in Electronic Design (WWED)

COLOCATED EVENTS
FRIDAY, JUNE 11
• IEEE International High-Level Design Validation and Test Workshop (HLDVT 2010)
SUNDAY, JUNE 13
• International Symposium on Hardware-Oriented Security and Trust (HOST)
• 8th IEEE Symposium on Application Specific Processors (SASP 2010)
• Design for Manufacturability Coalition Workshop - “A New Era for DFM”
• IEEE/ACM 12th International Workshop on System-Level Interconnect Prediction (SLIP)
• North American SystemC Users Group (NASCUG 13 Meeting)
• System and SOC Debug Integration and Applications
MONDAY, JUNE 14
• 4th IEEE International Workshop on Design for Manufacturability & Yield (DFM&Y)
• Choosing Advanced Verification Methods: So Many Possibilities, So Little Time
• Advances in Process Design Kits Workshop
TUESDAY, JUNE 15
• ACM Research Competition
• NASA/ESA Conference on Adaptive Hardware and Systems (AHS-2010)
THURSDAY, JUNE 17
• IEEE/ACM International Symposium on Nanoscale Architectures (NANOARCH’10)
FRIDAY, JUNE 18
• 19th International Workshop on Logic & Synthesis (IWLS)
EXHIBITOR LIST (AS OF APRIL 12, 2010)
Accelicon Technologies, Inc. Helic, Inc. Synfora, Inc.
ACCIT - New Systems Research Hewlett-Packard Co. Synopsys, Inc.
ACE Associated Compiler Experts bv HiPEAC Synopsys, Inc. - Standards Booth
Agilent Technologies IBM Corp. Synopsys-ARM-Common Platform Innovation
Agnisys, Inc. IC Manage, Inc. SynTest Technologies, Inc.
Aldec, Inc. ICDC Partner Pavilion & Stage Tanner EDA
Altair Engineering IMEC - Europractice Target Compiler Technologies NV
Altos Design Automation Imera Systems, Inc. Teklatech
Amiq Consulting S.R.L. Infotech Enterprises Tela Innovations
AnaGlobe Technology, Inc. iNoCs Tiempo
Analog Bits Inc. Interra Systems, Inc. TOOL Corp.
Apache Design Solutions, Inc. Jasper Design Automation, Inc. True Circuits, Inc.
Applied Simulation Technology Jspeed Design Automation, Inc. TSMC
Artwork Conversion Software, Inc. JTAG Technologies TSMC Open Innovation Forum, Apache
ASIC Analytic, LLC Laflin Limited TSMC Open Innovation Forum, Cadence
ATEEDA Legend Design Technology, Inc. TSMC Open Innovation Forum, eSilicon
Atoptech Library Technologies, Inc. TSMC Open Innovation Forum, Helic, Inc.
Atrenta Inc. Lynguent, Inc. TSMC Open Innovation Forum, Integrand
austriamicrosystems Magillem Design Services TSMC Open Innovation Forum, Lorentz
AutoESL Design Technologies, Inc. Magma Design Automation, Inc. TSMC Open Innovation Forum, Magma
Avant Technology Inc. Magwel NV TSMC Open Innovation Forum, Mentor
Avery Design Systems, Inc. MathWorks, Inc. (The) TSMC Open Innovation Forum, MoSys
Axiom Design Automation Menta TSMC Open Innovation Forum, Solido
BEEcube, Inc. Mentor Graphics Corp. TSMC Open Innovation Forum, SpringSoft
Berkeley Design Automation, Inc. Mephisto Design Automation TSMC Open Innovation Forum, Synopsys
BigC Methodics LLC TSMC Open Innovation Forum, Tela Innovations
Blue Pearl Software Micro Magic, Inc. TSMC Open Innovation Forum, Virage Logic
Bluespec, Inc. Micrologic Design Automation, Inc. TSSI - Test Systems Strategies, Inc.
Breker Verification Systems Mirabilis Design Inc. Tuscany Design Automation, Inc.
Cadence Design Systems, Inc. Mixel, Inc. UMIC Research Centre
Calypto Design Systems MOSIS Uniquify, Inc.
Cambridge Analog Technologies MunEDA GmbH Univa UD
CAST, Inc. Nangate Vennsa Technologies, Inc.
ChipEstimate.com NextOp Software, Inc. Verific Design Automation
Ciranova, Inc. Nusym Technology, Inc. Veritools, Inc.
CISC Semiconductor Design+Consulting GmbH Oasys Design Systems, Inc. WinterLogic Inc.
ClioSoft, Inc. OneSpin Solutions GmbH X-FAB Semiconductor Foundries
CMP OptEM Engineering Inc. XJTAG
CoFluent Design OVM World XYALIS
Concept Engineering GmbH Physware, Inc. Z Circuit Automation
Coupling Wave Solutions PLDA Zocalo Tech, Inc.
CST of America, Inc. POLYTEDA Software Corp.
DAC Pavilion Progate Group Corp.
Dassault Systemes Americas Corp. Prolific, Inc.
DATE 2011 Pulsic Inc.
Denali Software, Inc. R3 Logic Inc.
Design and Reuse Rapid Bridge, LLC
Dini Group Real Intent, Inc.
DOCEA Power Reed Business Information
Dorado Design Automation, Inc. RTC Group - EDA Tech Forum
Duolog Technologies Ltd. Runtime Design Automation
E-System Design Sagantec
EDA Cafe-IB Systems Sapient Systems
EDXACT SA Satin IP Technologies
Entasys Inc. Seloco, Inc.
Enterpoint Ltd. Semifore, Inc.
EVE-USA, Inc. Si2
Exhibitor Forum Sigrity, Inc.
ExpertIO, Inc. Silicon Design Solutions
Extension Media LLC Silicon Frontline Technology
Extreme DA SKILLCAD Inc.
FishTail Design Automation, Inc. Solido Design Automation
Forte Design Systems Sonnet Software, Inc.
Gary Stringham & Associates, LLC Springer
GateRocket, Inc. SpringSoft, Inc.
GiDEL StarNet Communications
Global Foundries Synapse Design
Gradient Design Automation Synchronicity - see Dassault Systèmes
EXHIBITION
The 47th DAC exhibition is located in Halls B and C of the Anaheim Convention Center.
Visit the DAC exhibition for an in-depth view of new products and services from nearly 200 vendors spanning all aspects of the electronic
design process, including EDA tools, IP cores, embedded system and system-level tools, as well as silicon foundry and design services.
EXHIBITION HOURS: Monday, June 14 - Wednesday, June 16, 9:00am - 6:00pm
Register Online by May 17 and Save!
REGISTRATION OPTIONS
Internet registration is open through June 18. Mail/fax registrations are accepted through June 8. Visit the DAC website for online registration, complete conference and exhibition details, travel and hotel reservations, and information on visiting Anaheim at www.dac.com.

FULL CONFERENCE REGISTRATION includes: access to all three days of the Technical Sessions, User Track Sessions, Embedded/SOC Enablement Day, access to the Exhibition, Monday through Wednesday, the 47 Years of DAC DVD Proceedings and the Tuesday Night Party.

STUDENT FULL CONFERENCE REGISTRATION (IEEE MEMBER OR ACM MEMBER): A special student rate applies to individuals who are members of ACM or IEEE and are currently enrolled in school. Students must provide a valid ACM or IEEE student membership number and a valid student ID. ACM/IEEE Student registration includes: all three days of the Technical Conference, Embedded/SOC Enablement Day, access to the Exhibition, Monday through Wednesday, the 47 Years of DAC DVD Proceedings and the Tuesday Night Party.

ONE/TWO-DAY REGISTRATION includes: the day(s) you select for the Technical Conference, access to the Exhibition, User Track (UT) Sessions, Monday through Wednesday, and the “47 Years of DAC” DVD Proceedings.

EXHIBIT-ONLY REGISTRATION allows admittance to the Exhibition, Monday through Wednesday, and includes the Tuesday Night Party.

USER TRACK SESSIONS registration includes entrance to the Exhibition, Monday through Wednesday, and all Keynotes. User Track Sessions are included in the Full Conference registration and the One-/Two-Day registration on the day(s) attending the technical conference.

MANAGEMENT DAY registration includes entrance to the Exhibition, Monday through Wednesday, and all Keynotes.

TUTORIALS are offered on Monday, June 14 and Friday, June 18. There is one quarter-day tutorial, two half-day tutorials, and four full-day tutorials. The full-day tutorial registration fee includes: continental breakfast, lunch, refreshments and tutorial notes. The half-day tutorial registration fee includes: continental breakfast, refreshments and tutorial notes. The quarter-day tutorial registration fee includes: refreshments and tutorial notes.

EMBEDDED/SOC ENABLEMENT DAY is a day-long track of sessions dedicated to bringing industry stakeholders together in one room to shed light on where SOC design is headed. The day is comprised of presentations from leading SOC-enabling sectors, including embedded processors, embedded systems, EDA, FPGA, IP, foundry, and design services.

WORKSHOPS/COLOCATED EVENT registration also includes entrance to the Exhibition, Monday through Wednesday.

CANCELLATION/REFUND POLICY: Written requests for cancellations must be received in the DAC office by Monday, May 17, 2010 and are subject to a $25.00 processing fee. Cancellations received after May 17, 2010 will NOT be honored and all registration fees will be forfeited. No faxed or mailed registrations will be accepted after June 8, 2010. Telephone registrations are not accepted! Faxed or mailed registrations without payment will be discarded.

CONFERENCE REGISTRATION RATES
Advance rate (received by May 17) / late or on-site rate (received after May 17); prices listed as ACM/IEEE Member / Non-member / ACM/IEEE Student Member:
• Full Conference: $475 / $595 / $230 advance; $570 / $695 / $295 late or on-site
• One-Day Only (Tue., Wed., or Thurs.): $325
• Two-Day Only (Tue., Wed., Thurs.): $525
• Exhibit-only, all days (Mon.-Wed.): $50 (ACM/IEEE Member) / $95 (Non-member)
• Exhibit-only, Monday: FREE
• Management Day (Tuesday): $95
• Embedded/SOC Enablement Day: $95

USER TRACK (ACM/IEEE Member / Non-member)
• User Track Sessions: $185 / $240

TUTORIALS (ACM/IEEE Member / Non-member / Student Member)
• Full-day: $300 / $400 / $200
• Half-day: $180 / $240 / $120
• Quarter-day: $100 / $130 / $80

WORKSHOPS AT DAC (ACM/IEEE Member / Non-member)
• DAC Workshop on Diagnostic Services in Network-on-Chips (DSNOC) - 4th Edition; Multiprocessor System on Chip (MPSOC): Programmability, Run-Time Support and Hardware Platforms for High Performance Applications at DAC; and DAC Workshop on Synergies between Design Automation & Smart Grid - Sunday, June 13: $150 / $195
• DAC Workshop: More Than Core Competence...What it Takes for Your Career to Survive, and Thrive! Hosted by Women in Electronic Design (WWED) - Monday, June 14: FREE, up to 100 attendees
• DAC Workshop on “Mobile and Cloud Computing” - Monday, June 14: $150 / $195
• International Workshop on Bio-Design Automation at DAC (IWBDA) - Monday, June 14 & Tuesday, June 15: $230 / $305
ChipDesignMag.com
Dedicated to the information needs of the IC design market
• News
• Technology Trends
• Design Centers
• Blogs
• iDesign
• Focus Report
• Commentary
• Technical Papers
• Email Newsletters
IN THE NEWS By Jim Kobylecky, Managing Editor
Hardware/Software Development
How to choose the right prototype for pre-silicon software development
PROTOTYPING
Prototyping for software development can be done at different stages, with various pros and cons. Although not reflected in Figure 1, previous-generation chips are often used for actual application software development while the project is under way. Depending on the number of changes from one chip generation to the next, software can be developed on the previous device, and as soon as

Available later in the design flow, but still well before silicon, FPGA prototypes can serve as a vehicle for software development as well. They are fully functional hardware representations of SoCs, boards and I/Os. They implement unmodified ASIC RTL code and run at almost real-time speed, with all external interfaces and stimulus connected. They offer higher system visibility and control than the
control capabilities of virtual platforms. Their key advantage is their ability to run at high speed – multiple MIPS or even 10s of MIPS – while maintaining RTL accuracy, but depending on the complexity of the project, they will typically be available much later in the design flow than virtual prototypes. Due to the complexity and effort of mapping the RTL to FPGA prototypes, it is not really feasible to use them before RTL verification has stabilized. Finally, once stable and available, the cost of replication and delivery for FPGA prototypes is higher than for software-based virtual platforms.

Emulation provides another hardware-assisted alternative to enable software development. It differs from FPGA prototypes in that it enables better automated mapping of RTL into the hardware together with faster compile times, but the execution speed will be lower and will typically drop to the single-MIPS range or below. The cost of emulation is also often seen as a deterrent to replicating it easily for software development. Both emulation and FPGA prototypes are limited when it comes to true hardware/software co-development because at this point in the design flow, the hardware is pretty much fixed, as RTL is almost verified. Design teams will be very hesitant to change the hardware architecture unless a major architecture bug has been found.

It is much easier to provide a virtual platform for download via the Internet than to ship a board and deal with customs, bring-up and potential damage to the physical hardware.

WHICH PROTOTYPE SHOULD I CHOOSE?
So, how do users choose the appropriate prototype for early software development? Several characteristics determine the applicability of the chosen prototyping approach and the models it is built from. Summarized in Figure 2, they fall into the following eight categories:
• Time of Availability: The later models become available in the design flow compared to real silicon, the less their perceived value to hardware/software developers will be.
• Execution Speed: Developers normally ask for the fastest models available. Execution speed almost always is achieved by omitting model detail, so it often has to be traded off against accuracy.
• Accuracy: Developers normally ask for the most accurate models available. The type of software being developed determines how accurate the development method must be to represent the actual target hardware, ensuring that issues are identified at the hardware/software boundary. However, increased accuracy requires simulating more detail, which typically means lower execution speed.
• Production Cost: The production cost determines how easily a
model can be replicated for furnishing to software developers. In
general, software models are very cost-effective to produce and
can be distributed as soon as they are developed. Hardware-
based representations, like FPGA prototypes, require hardware
availability for each developer, often preventing proliferation to a
large number of software developers.
• Bring-up Cost: Any required activity needed to enable a model outside of what is absolutely necessary to get to silicon can be considered overhead. The bring-up cost for virtual prototypes and FPGA prototypes is often seen as a barrier to their use.
• Debug Insight: The ability to analyze the inside of a design, i.e., being able to access signals, registers and the state of the hardware/software design, is considered crucial. Software simulations expose all available internals and provide the best debug insight.
• Execution Control: During debug, it is important to stop the representation of the target hardware using assertions in the hardware or breakpoints in the software. In the actual target hardware, this is very difficult – sometimes impossible – to achieve. Software simulations allow the most flexible execution control.

Figure 2: Eight model characteristics for choosing prototyping solutions

Finally, after the actual silicon is available, early prototype boards using first silicon samples can enable software development on the actual silicon. Once the chip is in production, very low-cost development boards can be made available. At this point, the prototype will run at real-time speed and full accuracy. Software debug is typically achieved with specific hardware connectors using the JTAG interface and connections to standard software debuggers. While prototype boards using the actual silicon are probably the lowest-cost option, they are available very late in the design flow and allow almost no head start on software development. In addition, the control and debug insight into hardware prototypes is very limited unless specific on-chip instrumentation (OCI) capabilities are made available. In comparison
• System Interfaces: It is often important to be able to connect the design under development to real-world interfaces. While FPGA prototypes often execute fast enough to connect directly, development using virtualized interfaces of new standards, e.g., USB 3.0, can be done even before hardware is available.

• For verification, the combination of transaction-level models (TLM) with signal-level RTL offers quite an attractive speed-up, and users have started to adopt this combination of mixed-level simulation for increased verification efficiency. This use model is effective even when RTL is not fully verified yet and FPGA prototypes are not yet feasible.
• For software development, system prototypes, i.e. the
• Application software can often be developed without taking the actual target hardware accuracy into account. This is the main premise of SDKs, which allow programming against high-level APIs representing the hardware.
• For middleware and drivers, some representation of timing may be required. For basic cases of performance analysis, timing annotation to caches and memory management units may be sufficient, as they are often more important than static timing of instructions when it comes to performance.
• For real-time software, high-level cycle timing of instructions can be important in combination with micro-architectural effects.
• For time-critical software – for example, the exact response behavior of interrupt service routines (ISRs) – fully cycle-accurate representations are preferred.

Given the above considerations, it comes as no surprise that none of the prototyping techniques fits all applications. For users who need to balance time of availability, speed and accuracy of prototypes, combining different prototyping techniques offers a viable solution.

Today, many companies already view prototyping as mandatory to ensuring functional correctness of their designs and enabling early software development. As this article has illustrated, however, there is no “one size fits all” prototyping solution – developers must select the approach that best meets their specific project requirements. One thing is certain: with the trend toward software continuing to escalate, implementing prototyping and combinations of different prototyping techniques will gain even greater importance for future design projects.

As director of product marketing at Synopsys, Inc., Frank Schirrmeister is responsible for the System-Level Solutions products Innovator, DesignWare® System-Level Library and System Studio, with a focus on virtual platforms for early software development. Prior to joining Synopsys, Frank held senior management positions at Imperas, ChipVision, Cadence, AXYS Design Automation and SICAN Microelectronics.
ESL
TRANSACTION-LEVEL
MODELING
The ballooning size and complexity of system-on-chip (SoC) designs has become an urgent driver of higher levels of design abstraction. Just as it's been a long time since you could design electronic circuits one transistor at a time, it's now impossible to create an SoC one gate at a time. Not only would your design time push you way beyond the useful market window, but coordinating the roles of the many members of your design team, from architect to tester, would be impossible.

Transaction-level modeling (TLM) has provided a means for starting designs at a more abstract level. But the path from TLM down to physical implementation is far from smooth. There are too many holes and inconsistencies in the flow; each company has to invent something itself to get a useful result. The widespread use of third-party intellectual property (IP) and the need to incorporate software add entirely new dimensions to the problem. What you need is a more unified approach to turning high-level abstract design concepts into real chips.

MOVING UP A LEVEL
It's hard to use the word "unified" when describing SoC flows. Depending on the process node and performance requirements, you have innumerable options involving speed, power, and manufacturing yield optimization. However, for a design described in RTL, it's still relatively straightforward to push the design through a flow and have polygons come out the other end. That flow may make use of involved scripts put together by clever CAD managers, but variations within the digital domain are generally related to optimization rather than actual behavior.

So even with these variations, the RTL-to-silicon flow is far more predictable than what's required to transform a more abstract description into RTL. You simply can't assemble an SoC using a single behavioral description in one language.

• Architects need to be able to experiment with broad ranges of functionality without having to specify gate-level behavior. They should be able to make first-level performance, power, and area tradeoffs.
• Verification must be achievable at an early stage without gate- and cycle-level simulation.
• Architects and designers don't specify in detail the functionality of the entire SoC. Some blocks will be designated for detailed custom design, but many will be imported as IP, with their internal workings opaque.
• Software, executing in one or more processors on the SoC, provides ever greater amounts of functionality. Writing that software can take as long as, or longer than, designing the silicon platform on which it will run.
• Early validation of architecture and software often requires emulation hardware, much of which uses technology like FPGAs, which is far different from what the end silicon will look like. You must therefore be able to express the design at a level where you can target it at the emulator and at silicon without requiring significant rework.

The result, even before taking into account any analog circuitry you need to have on-chip, is a heterogeneous amalgam of bits and pieces that you have to bring together into a design. In the early stages of planning, you may designate some of the blocks as IP, with their functionality either partially or completely known; you'll mark others for custom creation, and you won't know their specific behavior until someone actually does the design. You will have some functionality written in RTL (even if bare-bones), some in SystemC, and some in C/C++ or some other software language.

This means that architects need to be able to pull the pieces together at a "rough" level, making sure that everything plays nicely together, or specifying the rules so that everything will play nicely, and then dispatching the pieces for implementation and integration. TLM provides a way for architects to manage such high-level planning, but if the designers that will implement custom blocks essentially end up throwing away what the architects did and starting their designs based on a paper spec, not only is work being redone, but errors can be newly introduced. A flow that connects the TLM work to the RTL work will reduce both design time and the number of validation iterations.
DIFFERENT LEVELS OF TLM
It's inaccurate simply to talk about TLM as if it were a single level of abstraction above RTL. Abstraction comes at the cost of accuracy, and you may need to select different levels of abstraction to achieve sufficient accuracy depending on what you're trying to do. The key to achieving this is the fact that TLM really deals with interfaces: different blocks are plugged together, with their behaviors abstracted and their interfaces interacting.

Accuracy boils down to the fidelity with which the interface will model the finished block's behavior. Greater accuracy comes at the cost of longer verification times, and different development phases will require different tradeoffs between accuracy and verification time, as illustrated in Figure 1.

• Software engineers need the least fidelity; all they need is for the block to function correctly. Timing is, more or less, not an issue. This allows for highly abstracted functional models, also known as virtual prototypes, that can execute in the range of 10 – 100 million instructions per second.
• Architects need a higher level of accuracy so that they can confirm, for example, that bus-level transactions occur properly. Here the level of "handshake" may be sufficient; the actual number of clock cycles occurring isn't important. Such cycle-approximate models can execute in the range of a million instructions per second.
• For more detailed verification of RTL blocks, designers need to verify the cycle-accurate behavior of the interfaces. This further slows the models down to around 100,000 cycles per second (which is more than an order of magnitude slower than the "handshake" level since we've gone from instructions-per-second to cycles-per-second, and an instruction takes more than one cycle).

This means that simply having a single TLM model for a block or piece of IP isn't sufficient. Different blocks may have different accuracy levels; you can't just plug them together and expect them to work. In fact, designers of IP and custom blocks may need to develop multiple interfaces to address the needs of different steps in the design process. Exactly what those expectations should be has not been standardized.

The cost of developing these different levels of TLM model varies widely. Virtual prototypes are easiest because they're written in software, and therefore can take advantage of all the tools available in the software world for verification and debug. They also require that only the salient functionality of the model be implemented. This means that, typically, you can get virtual prototypes of common IP functions from companies that don't sell the actual implementation IP. These companies focus their value on structuring the models so that they'll execute quickly and efficiently.

Cycle-approximate and cycle-accurate models require much more work to build, and are much more closely tied to the specific IP they model. Therefore, when purchasing IP from a given vendor, you will typically get these models from them, since only they know, and can model, the inner workings of their secret circuits. Developing these models can represent as much as 30% of the effort required to create the RTL code itself.

DEFINING A FLOW
Having identified not one, but at least three different kinds of TLM model that, in one form or another, define functionality that will end up in silicon (or software on silicon), the next obvious question is how to craft a flow that, in the ideal, allows you to synthesize from the abstract. It's likely that such synthesis would be inefficient in its early days, but that was also the case with RTL when logic synthesis was new; eventually the tools improved to the point where only in rare cases would you countenance doing a digital design at a level below RTL.
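One way to picture the different TLM accuracy levels described above is a single transaction interface with interchangeable implementations. The sketch below is plain C++ with hypothetical names and a toy read transaction; a real flow would typically use SystemC/TLM-2.0 sockets rather than this hand-rolled interface.

```cpp
#include <cstdint>
#include <utility>

// Hypothetical sketch of one IP block offered at two TLM accuracy levels
// behind a common transaction interface. Names are illustrative only.
struct BusModel {
    virtual ~BusModel() = default;
    // Returns (data, cycles consumed). An untimed model reports 0 cycles.
    virtual std::pair<uint32_t, uint32_t> read(uint32_t addr) = 0;
};

// Virtual-prototype level: functionally correct, no timing, runs fastest.
struct FunctionalModel : BusModel {
    std::pair<uint32_t, uint32_t> read(uint32_t addr) override {
        return {addr ^ 0xDEADBEEF, 0};       // stand-in for real block behavior
    }
};

// Cycle-approximate level: same function, plus a handshake-level latency
// estimate (assumed arbitration + burst costs), but no per-cycle pin activity.
struct CycleApproxModel : BusModel {
    std::pair<uint32_t, uint32_t> read(uint32_t addr) override {
        uint32_t data = addr ^ 0xDEADBEEF;   // identical functional behavior
        uint32_t latency = 3 /*arbitration*/ + 4 /*burst*/;
        return {data, latency};
    }
};
```

The software team links FunctionalModel for speed; the architect swaps in CycleApproxModel to check bus-level transactions, exactly the tradeoff the bullets above describe.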
Circuit-level optimizations are good, but typically give you a few tens of percent at best, and more and more of the circuit-level tricks can be automated.

This means you need optimization tools that work at the TLM level, along with models that provide estimates of power and performance accurately enough to make the right architectural tradeoffs.

IP evaluation is another task that you will have to manage at the TLM level. While power and performance are a part of that evaluation, you must also be able to confirm that the IP you're considering will play nicely with the rest of the system, with as little wrapping as possible. You also need to know that you can implement all the features you need, and that you have to implement very few, if any, of the features you don't need.

Software evaluation is a newer task for the architect. Part of the job may be actually checking out a specific piece of software, but, to a large extent, the big problem is ensuring that typical software can execute efficiently – you're not testing the software, you're testing the system as it runs software.

Having accomplished these tasks at the abstract level, implementation can begin. There are really three different elements to implementation:

• Creation of new blocks
• Assembling the blocks (newly created and IP)
• Creation of software

Functional verification then means:

• Testing the architecture using cycle-approximate models
• Checking out the new blocks using cycle-accurate models and simulation
• Testing the assembly of the entire system. This would actually happen in stages, starting with cycle-approximate models and transitioning to cycle-accurate where needed. Since full simulation of the entire system is likely to take too long, hardware emulation may be needed alongside simulation in a coordinated environment, in such cases or for early testing, where some models aren't available in abstract versions and so must be emulated to keep performance up.
• Testing software using virtual prototypes, which tests the software algorithms.
• Testing software on a hardware emulator, which tests the system's ability to execute the software.

This only makes sense if the work done early at the abstract level can be used later to confirm the implementation work. Today that would mean using the abstract models to confirm the behavior of hand-generated blocks. If TLM-level synthesis were available, then tools could be used to confirm the correctness of that synthesis, much the way equivalence checking was used to validate early logic synthesis tools.

In order for this to work, however, you must have robust debug capabilities that span the range of abstraction. If a signal in an RTL block is misbehaving, that problem must ultimately be correlatable to some higher-level model behavior. This isn't so hard when going from the specific to the abstract, but going the other direction is harder: trying to correlate a high-level model failure with a specific implementation issue is tough because the high-level model has such specifics abstracted out – by intent.

Debugging must also span different verification methodologies. Where you are doing hardware emulation and simulation together, for example, a debug methodology has to recognize events and design elements on both the simulated and emulated sides.

All of these requirements are hard enough to achieve today in the purely digital domain. But SoCs increasingly include significant analog functionality as well, and the days of analog and digital ignoring each other are fast disappearing. All of the flow elements involving modeling, architecture, implementation, validation, and debug apply equally – and raise even greater challenges – for analog.

Figure 2 shows what a complete TLM-to-RTL flow might look like.
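The checking role the abstract models play here, confirming the behavior of hand-generated blocks, can be sketched as a simple scoreboard. This is an illustrative plain-C++ sketch, not any tool's API; the names and the toy reference behavior are hypothetical.

```cpp
#include <cstdint>
#include <deque>
#include <functional>

// Illustrative scoreboard: the abstract TLM model acts as the reference, and
// every response observed from the hand-written implementation (simulated
// RTL, emulator output, ...) is checked against what the reference predicts.
struct Scoreboard {
    std::function<uint32_t(uint32_t)> reference; // the abstract (TLM) model
    std::deque<uint32_t> pending;                // stimuli awaiting a response
    unsigned checked = 0, mismatches = 0;

    // Record each stimulus as it is driven into both models.
    void stimulus(uint32_t in) { pending.push_back(in); }

    // Called when the implementation responds; compares against the reference.
    void response(uint32_t out) {
        uint32_t in = pending.front(); pending.pop_front();
        ++checked;
        if (reference(in) != out) ++mismatches;  // divergence from TLM model
    }
};
```

The same scoreboard structure works whether the implementation side runs in a simulator or on an emulator, which is one reason a coordinated debug environment across both matters.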
Figure 2. TLM-to-RTL Flow

The phases and steps can be described as follows:

• Architecture phase
  • Create a system model, including a stimulus environment, drawing from an IP library as much as possible.
  • Create new TLM models where needed.
  • Simulate to validate the architecture and functionality (including software) at the TLM level.
  • Create a virtual prototype to give to the software development team.
• Hardware design phase
  • Automatically map IP blocks to RTL and generate the RTL interconnect.
  • Create RTL blocks either by synthesizing from TLM models or by hand, with equivalence checking to ensure that the generated RTL matches the TLM models.
  • Perform full-chip RTL simulation for select corner cases.
• Software design phase
  • Create software, validating with the virtual prototype.
• Integration phase
  • Perform complete hardware/software integration validation using simulation and emulation.
  • Connect to the physical design flow for implementation.

REQUIREMENTS FOR UNIFYING FLOWS
If such a flow is going to be possible without each company defining its own proprietary version, some common elements need to come together in the form of standards and ecosystem offerings.

• IP modeling needs to be made consistent, with new characteristics that will make evaluation easier. These include interface standards, performance estimates of key transactions, power estimation, and area estimation.
• Formalization of different TLM levels will help ensure that users of IP can obtain appropriate models, and that they will know what to expect when getting them.
• A unified debug environment is needed to ensure that problems can be easily identified regardless of the level of abstraction or the mode of verification, whether static or dynamic.

These elements cross the property lines of a few different standards and standards bodies. OSCI has owned the TLM specifications; the SPIRIT Consortium, now a part of Accellera, has focused on IP metadata; emulation interaction has been the domain of the SCE-MI standard, also owned by Accellera. There is no organization specifically focusing on the needs of debug.

Coordination and cooperation between the different standards groups and sub-groups, as well as between the companies participating in the standards, will be needed to provide all the links to make this work. While some companies resist standards out of fear of losing a competitive advantage, there are plenty of opportunities to compete even in the face of a unified flow. Each step of the flow will be challenged to provide the highest performance, the greatest productivity, and the appropriate cost. The industry as a whole will benefit by focusing innovation on those areas, and, as the industry moves forward, participating companies will have greater opportunities to reap the rewards.

At the same time, users must embrace the technology and validate flows. It's insufficient for tools providers simply to support designs input in higher-level languages and synthesize RTL. All of the pieces described above must be woven together into methodologies that gain real traction with real users. Only then can the TLM-to-RTL flow be considered reality. V

Lauro Rizzatti is general manager of EVE-USA. He has more than 30 years of experience in EDA and ATE, where he held responsibilities in top management, product marketing, technical marketing and engineering.
FIRMWARE
Vendor (and Other Tricks of Low-Power Verification)
Simulation-based hardware/software co-verification lets you address power problems early and well.
CO-VERIFICATION
SIMULATION
Just how important is hardware/software co-verification in low-power ASIC and SoC design and engineering? You might ask the large semiconductor company* that a few years back designed a device per the specs of a significant customer, which assembled and sold smartphones. The specs – that a varied combination of functions could execute concurrently without exceeding a certain power budget, measured in milliwatts – were fairly typical for the low-power realm.

When the customer received the silicon, however, its engineers struggled to get the device to behave as advertised. Despite their best efforts to string together functions that should have been well within the device's limits, their applications kept exceeding the power budget.

It should be said that the semiconductor company deserves heapings of kudos for sticking around to help. After realizing it didn't have the tools or expertise to do extensive hardware/software co-verification, the company wound up hiring an entire firmware team. After much effort, expense, and, most significantly, delay to the customer's product, these contract coders put together software infrastructure that bridged the ASIC's underlying power-management features to the customer's skills and objectives for the new product.

The story is not unusual. Given the increasing complexity of ASICs and SoCs, it's no longer enough for semiconductor companies to focus on silicon and deliver meager amounts of diagnostic software as an afterthought. This is especially true where power management looms large, as it does in just about any device with batteries, a market segment that seems poised for a major rebound. The worldwide mobile phone market grew 11.3 percent in the fourth quarter of 2009, according to IDC. And the research firm estimates that the market for voice/data mobile devices (which generally are power hogs compared to their voice-only counterparts) grew by nearly 30 percent year over year.

Figure 1: Software is relevant to most power-management functions in low-power ASIC/SoC designs (such as switching states), which is why the premium on hardware/software co-verification is on the rise.

For designers of such devices, a whole series of new questions arises. Does the system correctly power up and change power states? Does it meet performance requirements while powering up/down its components? Does it meet power budgets and battery life requirements?

Answering those questions with any degree of certainty invariably hinges on verifying those areas of the design where software and hardware interact the most. Often this is a confounding task that confronts designers with a long list of seemingly contradictory requirements. And though no one technique is right for every design situation, we think that a good starting point is to first model power-management functionality at RTL and then verify the hardware and software together in an optimized environment.

Here's why, and how.

ANNOTATE AN EXISTING DESIGN WITH UPF
Until recently, an engineer wanting to really drill down and look for power-related bugs in an ASIC or SoC design faced a series of unattractive hardware simulation choices. Gate-level verification was highly detailed and impossibly slow. Though marginally faster, the various ways of simulating at RTL were complicated by the need to insert additional power-management information, which required intrusive RTL code changes.
What about just focusing on software simulation to verify applications most tightly wedded to the silicon? Though fast, this approach too often lacked most or all of the detail needed for debugging hardware/software interactions, where some of the thorniest low-power issues arise.

The arrival of the Unified Power Format (UPF) changed things for the better. A Tcl-based format for specifying low-power intent throughout the design and verification flow, UPF was designed to allow for reuse and interoperability between different tools. For those who care about such things, UPF 2.0 also became an industry standard with the adoption of IEEE Std 1801™-2009 in March 2009. (Full disclosure: Mentor Graphics chaired the IEEE standards activity.)

Reuse of certain functional blocks or even entire designs is among the holy grails of ASIC/SoC design, which is ever more costly. UPF enables such reuse, providing a relatively straightforward means to annotate old designs with new power-management features. Engineers can supplement existing designs with power-aware features by specifying these in a separate UPF file. Or they can experiment with different power-control schemes by simply changing this separate file while leaving the essential design description alone. The alternative, continuing to tweak the RTL of every design that requires power-management features, is tedious and error-prone.

UPF allows for more than just defining power domains, switches and other elements of the power architecture. It can be used to create a power strategy via power state tables; to set up and map low-power design elements such as retention, isolation and level shifters; and to match simulation and implementation semantics.

UPF-FRIENDLY SIMULATOR SPEEDS VERIFICATION
Unlocking the value of the UPF file requires a verification platform built to work with the standard. Mentor Graphics Questa is one, though there are others available from several of the larger EDA vendors. The workflow, in short, is to first go through and verify that the RTL actually performs correctly, and then to toggle some settings on the simulator and run it a second time to check the power-aware functionality described in the UPF file. No recompilation of the RTL is required, a boon given ever-increasing gate counts.

For all the benefits of UPF, in the end the standard is mostly aimed at hardware verification. Advanced verification of an ASIC or SoC loaded with power-aware features means taking a hard look at software, or more specifically, verifying the hardware and software together.

One way to do this is to execute the software on top of an HDL-simulated CPU. Despite all the theoretical advantages of tying code directly to the hardware description instead of a higher-level model, this approach can be both painfully slow and relatively opaque. As deadlines loom and managers turn up the pressure, too often all the engineer can say with any authority is that something is not quite right between the software and the underlying HDL.

Tools that speed up the simulated CPU can help matters. Mentor's solution, for example, replaces the HDL-based CPU simulation with a model that's tied in with the rest of the logic simulation – and that operates at dramatically higher speeds, a benefit that flows from a host of features in the tool, including optimized memory access.

A quick primer on optimized memory access: During verification, it's important to first confirm that fetching instructions from memory does in fact work. But once this is verified, huge efficiencies are gained by abstracting it away. In general, the more a tool avoids spending time at a pin-wiggle level of detail continuously checking something that you already know works, the better.

A high-speed processor model allows design teams to run, for example, tightly embedded RTOSs, whose importance is rising in lock-step with the increasing need for fine-grained management of underlying hardware. Combined with a solid software debugger, running an RTOS can be useful in a host of verification and debugging scenarios.

For example, imagine an engineer working on software that controls power states. He wants to boot it up, get to a simple prompt, and then use the software debugger to observe the state change from turbo mode to sleep mode. The engineer enters a command at the prompt, which runs the software, and then sits back to watch all the changes going on while
the software executes. One benefit of this verification environment is providing an engineer with pin-level visibility to both the hardware and software, or more precisely, with an ability to closely observe when the power control module writes out to one of the power islands and changes its power state. Another is allowing the user to dynamically select which memory accesses run in the logic simulator.

The speedup can be dramatic. Last fall at ARM Techcon3 we presented a case where a high-speed simulator increased the speed of embedded software execution by a factor of 10,000.

To be sure, there are alternatives to simulation-based hardware/software co-verification. Emulation is one, a method which can provide closer-to-final-product speeds but often fails to provide sufficient visibility. Other emulation headaches include increased setup time (emulation is post-synthesis) and complexity surrounding place and route.

The real selling point of simulation is that design teams can start doing power-related hardware/software co-verification before their designs are done. Of course, it's always possible to wait and throw more people at a design or verification problem. But in IC design, as is true throughout engineering, the sooner you find problems, the better.

In other words, it's best to avoid making the call to those crack firmware coders if you can. V

* Apologies for the anonymity. But everyone knows that despite its sprawling size (0.5% of worldwide GDP, according to Wikipedia) the semiconductor industry is more like a small village that prizes discretion than a mega-city that celebrates the broadcasting of every foible and failing.

Marc Bryan has been both a leading and contributing member of tool development teams for more than 24 years. Currently serving as the Product Marketing Manager for Mentor Graphics' Codelink products, Bryan came to Mentor after five and a half years with ARM's tool division. A prior hands-on role at Korg R&D provided extensive embedded processor-based, system-level design and implementation experience.

Barry Pangrle is a Solutions Architect for Low Power in the Engineered Solutions Group at Mentor Graphics Corporation. He has been a faculty member at UC Santa Barbara and Penn State University where he taught and performed research in high-level design automation. He has published over 25 reviewed works in high-level design automation and low-power design.
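As a concrete picture of the "separate UPF file" idea described above, a fragment in the style of IEEE 1801 might look as follows. The domain, net, and state names here are hypothetical, and this is a sketch rather than a complete, tool-validated file; a real design would also need supply ports, power-switch definitions, and connectivity.

```tcl
# Illustrative only: hypothetical names, IEEE 1801 (UPF)-style commands.
# A complete file would also define supply ports, switches and connectivity.
create_power_domain PD_core -elements {u_core}
create_supply_net VDD_core -domain PD_core
create_supply_net VDD_aon  -domain PD_core
create_supply_net VSS      -domain PD_core

# Retention and isolation for the switchable core domain.
set_retention core_ret -domain PD_core -retention_power_net VDD_aon
set_isolation core_iso -domain PD_core \
    -isolation_power_net VDD_aon -clamp_value 0 -applies_to outputs

# Capture the legal states (e.g. turbo vs. sleep) in a power state table.
add_port_state VDD_core -state {ON 1.0} -state {OFF off}
create_pst core_pst -supplies {VDD_core}
add_pst_state RUN   -pst core_pst -state {ON}
add_pst_state SLEEP -pst core_pst -state {OFF}
```

Because all of this lives outside the RTL, swapping in a different power-control scheme means editing only this file, which is exactly the reuse argument the article makes.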
Why Software Matters
By Ed Sperling

Software and hardware may not mix easily, and engineers on each side of the wall may not talk the same language, but these days no one has the luxury of ignoring one side or the other.

That message came through loud and clear at a panel discussion sponsored by the EDA Consortium yesterday evening, which included top engineers at Wind River, Green Hills and MontaVista. Among the key facts in the discussion:

1. The majority of engineers working on an SoC are software engineers, who represent the biggest portion of the non-recurring engineering expenses.

2. A couple decades ago a typical chip had thousands of lines of embedded code. Now there are millions of lines of code, and no one person understands all of it. The result is more complexity and a higher risk of failure—particularly when it's not well tested with the hardware.

3. All of the major embedded software companies except one have been bought by large semiconductor companies, which increasingly are required to include software stacks with their chips to create complete platforms for applications.

Driving these changes are some fundamental shifts in the hardware. Jack Greenbaum, director of engineering at Green Hills, said the shift from 8-bit bare-metal software to 32-bit microcontrollers has opened up a huge opportunity for more complex software. In addition, the shift from 32- to 64-bit has allowed small devices such as microcontrollers to now start using full-featured operating systems such as Linux because memory is so cheap.

To read more, please visit the System-Level Design community at: sldcommunity.com
3. Power analysis engine: A critical third element, an analysis engine, is required to simulate the design under "real-world" conditions. This requires a methodology for emulating the system-on-a-chip (SoC) hardware performance at high speed together with a mechanism to track the toggling of the gates in the design. With a carefully calibrated way to use the toggling, it also must be possible to accurately estimate how much power the gates will actually consume.

4. Power-aware verification methodology: Power-saving techniques like those mentioned above (clock gating, multiple voltage domains, dynamic voltage frequency scaling, etc.) significantly affect the functional behavior of the SoC. They also can add major complexity to the verification process, unless one has the proper tools and methodology to deal with these issues.

Tying these four elements together lets designers deal with the challenges of lower-power SoC design in a timely, cost-effective way with higher productivity.

A POWER-AWARE WORKFLOW
Companies have begun working to address the requirements of low-power SoC design at the system level. For requirements capture, for example, a product called InCyte Chip Planning promises to become the "cockpit" where the high-level decisions and constraints on individual blocks are made. As the different design teams implement blocks within the system and determine whether they've met unit-level specifications, this product allows them to feed that information back to chip planning. This collaborative process enables system-level refinements to be made. If one block team is able to do better than its specifications, "slack" may be freed up for another block team, which may be struggling to meet its specifications.

For hardware synthesis, a product called C-to-Silicon Compiler combines high-level synthesis with conventional logic synthesis. Although the static timing and power analysis done by the synthesis tools is indispensable to ensure correct hardware implementations, it is not sufficient. Because actual peak/average power consumption can vary tremendously with operating conditions, dynamic analysis capability is required as well. For this purpose, the Palladium product family promises to estimate SoC power consumption under "real-world" conditions while actual system software is being executed.

Finally, verification consumes 60% to 70% of the R&D effort on today's SoC-development projects. With power-aware designs, verification challenges grow more intense as special register-transfer/gate-level features to reduce power (e.g., clock gating) add further circuit complexity. Power-management features are added that are under hardware and software control. In addition, various design modes must be verified at different operating voltages. The net result is a huge expansion of the system state space, which must be verified. Here, power modes become yet another dimension of design parameters. All (or as many as possible) combinations of power and operating modes must be verified or else problems will arise (such as data loss, deadlock conditions, etc.).

Power-aware, metric-driven verification with technologies like Conformal and Incisive goes a long way toward solving these challenges. With these technologies, power-intent information of the design is captured (in a format like CPF). It is then used during verification to ensure that the design behaves as it would with all of the power-control logic in the RTL. These tools are used to infer all of the power modes that need to be covered in the design, automatically creating the appropriate coverage metrics and assertions. Verification then continues as normal. Now, however, it is fully power-aware.

THE OLD VERSUS THE NEW
With the major portion of today's electronic products targeting mobile applications, power consumption has evolved to become a primary design constraint. Any effective design flow and methodology must simultaneously consider all design constraints (including power) in a seamless closed-loop, multi-objective, planning-to-signoff solution.
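The state-space point above, that all (or as many as possible) combinations of power and operating modes must be verified, amounts to treating power modes as one more coverage dimension. The following plain-C++ sketch of such cross coverage is illustrative only; it is not Conformal or Incisive functionality, and all names are hypothetical.

```cpp
#include <cstddef>
#include <set>
#include <string>
#include <utility>

// Illustrative sketch: power modes crossed with operating modes, tracking
// which combinations the verification runs have actually exercised.
struct PowerModeCoverage {
    std::set<std::string> power_modes, op_modes;          // declared bins
    std::set<std::pair<std::string, std::string>> hit;    // observed crosses

    // Declare one power mode and one operating mode as coverage bins.
    void declare(std::string power, std::string op) {
        power_modes.insert(std::move(power));
        op_modes.insert(std::move(op));
    }
    // Record that a simulation run exercised this particular combination.
    void sample(const std::string& power, const std::string& op) {
        hit.insert({power, op});
    }
    std::size_t total() const { return power_modes.size() * op_modes.size(); }
    std::size_t covered() const { return hit.size(); }
};
```

The gap between total() and covered() is exactly where problems like data loss or deadlock in an unexercised power/operating-mode combination can hide.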
SYSTEM-LEVEL DESIGN
LOW POWER
Take A New Approach to the Power-Optimization of Algorithms and Functions
ALGORITHM OPTIMIZATION
The power consumption of digital integrated circuits (ICs) has moved to the forefront of design and verification concerns. In the case of handheld, battery-powered devices like cell phones, personal digital assistants (PDAs), e-books, and similar products, users require each new generation to be physically smaller and lighter than its predecessors. At the same time, they expect increased functionality and demand longer battery life. It's therefore obvious why low-power design is important in the context of this class of products. In reality, however, low-power considerations impact almost every modern electronic system—including those powered from an external supply.

CHALLENGING THE CONVENTIONAL WISDOM

The architecture of a system is a first-order determinant of that system's power consumption. When it comes to the functional blocks themselves, the hardware design engineer must determine the optimal micro-architecture for each block. Different micro-architectures have very different area, timing, latency, and power characteristics. The register transfer level (RTL) is the earliest stage of design abstraction at which it's possible to gain sufficiently accurate estimations of characteristics like area and power. If created by hand, however, RTL is very fragile, complex, and time consuming to capture. As a result, there's typically sufficient time to create only one micro-architecture (or a very limited number of micro-architectures). A wide range of alternative implementation scenarios therefore remains unexplored. In addition, RTL does not support sophisticated parameterization, so an IP block cannot be retargeted into multiple systems-on-a-chip (SoCs) with different area/speed/power targets.

The ideal scenario is to have an environment in which design engineers can create and functionally verify behavioral representations at a high level of abstraction. They should then be able to quickly and easily convert these representations into equivalent RTL for detailed power analysis. Furthermore, this ideal scenario includes the ability to create a single behavioral representation and to use it to generate and evaluate a full range of alternative RTL implementations. Currently, the predominant high-level alternative to RTL-based design has been to use sequential programming-based C/C++/SystemC representations in conjunction with some form of behavioral synthesis. While these approaches can raise the design's level of abstraction, they have significant limitations. For example, they provide poor quality-of-synthesis results except for the narrow range of application spaces that they can efficiently address. Additional issues include the following:

• The model of computation in C/C++/SystemC (sequential, threaded, flat memory) has been fine-tuned to execute on von Neumann computing platforms. This approach is inappropriate for hardware designs that feature fine-grain parallelism and heterogeneous storage. As a result, common C/C++ idioms and style (loops, pointers, byte-oriented data types, etc.) must be laboriously "rewritten" to prepare an application for C/C++ synthesis.

• C/C++ synthesis tools work by customizing a few generic template architectures. As a result, there's not much room for architectural variation. Achieving good quality often requires a "long tail" of effort, massaging both source code and constraints in tool-specific ways because of the non-transparent effect on architecture. The resulting code/constraints aren't portable or maintainable. In addition, the ad-hoc, proprietary constraint languages may not be parameterizable, requiring multiple sets of source code and/or constraints to cover the required architectural space.

• Automatic parallelization of sequential code is feasible mainly for digital-signal-processor (DSP)-like ("loop and array") applications. This restricts its use to only a few blocks of the design. Even within this "sweet spot," it's difficult to address essential "system issues," such as memory sharing, caching, pre-fetching, non-uniform access, concurrency, and integration into the full chip design.

• Loss of control with regard to the process of timing closure also is a problem. When compiling a "behavioral" C/C++ description into hardware, the semantic model of the source (the sequential code) is so different from the semantic model of the ensuing hardware that the designer loses predictability. It's difficult for the designer to imagine what should be changed in the source to effect a particular desired improvement in the hardware. Furthermore, small and apparently similar changes to the source can result in radically different hardware realizations.
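The serial-to-parallel micro-architecture trade-off discussed here can be illustrated with a toy first-order cost model. This is only a hypothetical sketch: the dot-product example, the unit costs, and the function name `dot_product_microarch` are invented for illustration and come from no tool mentioned in the article.

```python
import math

def dot_product_microarch(n_elems, n_multipliers, mult_area=1.0, mult_latency=1):
    """Return (area, cycles) for an n_elems-term dot product mapped
    onto n_multipliers parallel multiplier units (toy model)."""
    area = n_multipliers * mult_area                       # area grows with parallelism
    cycles = math.ceil(n_elems / n_multipliers) * mult_latency  # cycles shrink with it
    return area, cycles

# Sweep the serial-to-parallel range for a 64-term dot product.
for m in (1, 4, 16, 64):
    area, cycles = dot_product_microarch(64, m)
    print(f"{m:2d} multipliers: area={area:5.1f}, cycles={cycles}")
```

Even this crude model makes the point of the article: without a way to generate and evaluate many such design points automatically, most of the area/latency space stays unexplored.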
The ability to evaluate a wide range of micro-architectures can produce more optimal results than painstakingly hand-coded RTL. For many of the reasons listed above, however, not all high-level (behavioral) languages and associated HLS engines facilitate the ability to automatically generate the full range of micro-architectures for evaluation.

In the same way that it would be unimaginable for software developers to neglect to evaluate alternative algorithms, it should be unimaginable for hardware designers to proceed without considering alternative micro-architectures. In reality, however, the lack of rigorous micro-architecture evaluation is the norm rather than the exception. But what if design engineers had the ability to quickly and easily evaluate the entire gamut of micro-architecture alternatives, ranging from highly parallel to highly serial implementations?

A NOVEL APPROACH

A new approach has emerged with PAClib, a plug-and-play library of common pipeline building blocks. Those building blocks are designed for constructing algorithms and datapath functions as illustrated in Figure 1. (This is, of course, a very simple representation; PAClib modules can be instantiated by other modules and wrappers and so forth.)

The PAClib library is written in Bluespec SystemVerilog (BSV). It augments standard SystemVerilog with rules and rules-based interfaces that support complex concurrency and control across multiple shared resources and across modules. BSV features the following: high-level abstract types; powerful parameterization, static checking, and static elaboration; and advanced clock specification and management facilities. One of the key advantages is that the semantic model of the source (guarded atomic state transitions) maps very naturally into the semantic model of clocked synchronous hardware. BSV's computation model is universal (equally suitable for datapath and control). So it can directly address system considerations, such as memory sharing, caching, pre-fetching, non-uniform access, concurrency, and integration into the full chip design.

With full architectural transparency, the designer also can make controlled changes to the source with predictable effects on timing. Due to the extensive static checking in BSV, these changes can be more dramatic than the localized "tweaking" techniques favored when working with standard RTL. As a result, designers can achieve timing goals sooner without compromising correctness. The end result of using PAClib is that hardware design engineers continue to think like design engineers. But they now have access to rapid algorithmic design and architectural exploration capabilities.
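PAClib itself is a BSV library, and no BSV source appears in this article, so the following is only a hypothetical Python analogue of the plug-and-play composition style described above. The names `mk_pipe` and `mk_replicate` are invented here; they are not PAClib APIs.

```python
from functools import reduce

def mk_pipe(*stages):
    """Compose stage functions into a single pipeline function."""
    def pipeline(x):
        return reduce(lambda value, stage: stage(value), stages, x)
    return pipeline

def mk_replicate(stage, n):
    """Loop data through one stage n times (the 'reuse one stage' variant)."""
    return mk_pipe(*([stage] * n))

# Example: two toy stages composed into a pipeline.
double = lambda xs: [2 * x for x in xs]
inc    = lambda xs: [x + 1 for x in xs]

pipe = mk_pipe(double, inc)
print(pipe([1, 2, 3]))   # applies double, then inc → [3, 5, 7]
```

The point of the combinator style is that swapping `mk_pipe(stage, stage, stage)` for `mk_replicate(stage, 3)` changes the structure of the implementation without touching the functional description of each stage.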
Figure 2: This high-level block diagram depicts an IEEE 802.11a transmitter.

The IFFT is constructed from two basic computational functions, f_radix4 and f_permute, which are treated here as black boxes. Conceptually, the IFFT is a cascade of three identical stages as illustrated in Figure 2. The input and output of each stage—and of the IFFT as a whole—are vectors of 64 complex numbers with 16-bit real and imaginary parts. Each stage also receives a set of coefficients, which may be different for each f_radix4 instantiation.

be to add pipeline buffers to the outputs of the f_permute functions as illustrated in Figure 4b and also to the inputs of the f_permute functions as illustrated in Figure 4c. These buffers increase the hardware cost. Yet they will likely decrease the critical path length and allow synthesis at a higher frequency, thereby increasing overall throughput. Yet another possibility is to implement just one stage, but to loop the data through this stage three times to replicate the actions of the three stages (see Figure 4d).
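The single-stage alternative of Figure 4d rests on a simple functional equivalence: looping the data through one stage three times computes the same result as a cascade of three identical stages. A minimal sketch of that equivalence follows; `f_stage` is a toy stand-in (f_radix4 and f_permute remain black boxes in the article).

```python
def f_stage(vec):
    """Toy stand-in for one IFFT stage: some arithmetic plus a permutation."""
    mixed = [v * 2 for v in vec]      # placeholder for the f_radix4 butterflies
    return mixed[1:] + mixed[:1]      # placeholder for the f_permute reordering

def cascade3(vec):
    """Three identical stages instantiated back to back (Figure 2 style)."""
    return f_stage(f_stage(f_stage(vec)))

def looped(vec, n=3):
    """One stage with the data looped through it n times (Figure 4d style)."""
    for _ in range(n):
        vec = f_stage(vec)
    return vec

data = list(range(8))
assert cascade3(data) == looped(data)   # same function, different hardware
```

The two variants trade area for throughput: the cascade can accept a new vector every cycle, while the looped version uses one-third of the stage hardware but occupies it for three passes per vector.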
Propagation; Circuits and Systems; Computer; Electron Devices; Microwave Theory and Techniques; and Solid State Circuits.

Since its formation, CEDA has worked to expand its support of emerging areas within EDA and to bring more recognition to members of the EDA profession. As my two-year term ends, I look back with pride at the many creative ways in which CEDA's Executive Committee has expanded its support of the EDA community.

Just announced is the formation of the Design Technology Committee (DTC), a group of executives from EDA user companies. The DTC's goal is to work with groups inside and outside the IEEE to promote best-practice sharing and strategic solutions that address gaps between EDA capabilities and future needs.

CEDA now sponsors 14 EDA conferences and workshops, including DAC, ICCAD, DATE, and ASPDAC. It created Embedded Systems Letters, a new publication for rapid communication of short notes in an increasingly important EDA area. This adds to the quarterly Currents newsletter and the mainstay TCAD Journal.

Two new awards presented in 2009 help to recognize the accomplishments of members of our community. The yearly Richard Newton Technical Impact Award is jointly sponsored with the ACM Special Interest Group on Design Automation (ACM SIGDA). It is awarded to an individual or individuals for their outstanding technical contributions to EDA, recognized over a significant period of time. The Early Career Award, also to be presented yearly, recognizes an individual who has made innovative

These awards complement the existing awards:

• The D. O. Pederson Best Paper Award, presented in the TCAD Journal
• The William McCalla Best Paper Award, presented at ICCAD
• The prestigious Phil Kaufman Award, jointly sponsored with the EDA Consortium

The ongoing Distinguished Speaker Series offers a number of complimentary events at EDA conferences and will continue in 2010. CEDA has also begun experimenting with new ways of reaching the EDA community, including a live webcast of the ICCAD keynote. A digital edition of Design and Test Magazine is available for a reduced fee, the first issue of Embedded Systems Letters is available online, and an on-line calendar of key EDA conference dates can be found at www.c-eda.org.

Cadence's Andreas Kuhlmann will become CEDA's president in January; he is committed to continued support of the EDA community with more valuable activities in 2010. He and the rest of CEDA's Executive Committee would benefit from your help. I encourage you to visit the CEDA website (www.c-eda.org) to learn more about us and to get more involved. Organizations such as CEDA are driven by the efforts of volunteers. V

John Darringer is President of the IEEE Council on EDA. He can be reached at jad@us.ibm.com. For more information on CEDA visit http://www.c-eda.org
High performance requirements, quality-of-service (QoS) needs, and physical design constraints make the integration of an increasing number of heterogeneous IP cores in an SoC a formidable challenge. It's made even more challenging because traditional on-chip interconnect is taking up an increasingly large part of the SoC design, not only in chip area, but in design complexity and design resources as well. Previous-generation approaches such as a bus, a crossbar, or hybrid combinations of buses and crossbars cannot meet area and power budgets or frequency targets. Even in smaller designs, power requirements alone make traditional interconnect approaches sub-optimal.

Designers are moving toward NoC architectures because they offer a lightweight, packet-based communication system that meets stringent area and power requirements without sacrificing performance. In a NoC, data packets travel between processing elements in the interconnect through physical links, and the physical properties of each link can be independently configured according to bandwidth, latency, clocking, power, or physical design requirements. Interconnect processing elements can be switches, data-width converters, clock-domain converters, power-isolator blocks, security modules, and others. Packet-based formatting allows the processing elements to be very simple, and link configurability provides the optimal link design for each communication path.

A common misconception is that NoC architectures introduce additional latency. But robust NoC solutions allow for packet configurability. A flexible packet format allows the designer to make

Because of the complexity and design requirements of modern SoCs, it is inevitable that a NoC approach will become the de facto way to develop these devices. The question remains whether semiconductor companies will develop such technology in-house or will opt to license it. Over the last fifteen years, the industry has been relatively slow in moving towards licensing interconnect technology. The reasons have been twofold: the existence of prior infrastructures and not enough "bang-for-the-buck" in commercial interconnect products to warrant the switching cost. Now, however, commercial NoC products are outperforming traditional internal solutions by significant margins, and the costs faced by semiconductor vendors for the internal development of a new interconnect infrastructure around an NoC architecture cannot be justified. Indeed, the semiconductor vendors that are using NoC technology for on-chip interconnect today are already reaping the benefits of bringing faster, lower-power, and cheaper products to market. V

K. Charles Janac is the Chairman, President and Chief Executive Officer of Arteris Holdings. Charlie has over 20 years of experience building technology companies. He was employee number two of Cadence Design Systems, and later served as CEO of HLD Systems, Smart Machines, and Nanomix. Born in Prague, Czech Republic, he holds B.S. and M.S. degrees in Organic Chemistry from Tufts University and an MBA from Stanford Graduate School of Business. He holds a patent in polymer film technology.
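The per-link configurability described in the NoC discussion above can be sketched with a back-of-the-envelope serialization model: the cycles needed to move one packet depend on the packet format and the width chosen for that particular link. All numbers here (header size, link widths) are hypothetical illustrations, not figures from any product.

```python
import math

def link_cycles(payload_bits, link_width_bits, header_bits=16):
    """Cycles to serialize one packet (header + payload) over a link
    of the given width (toy model, hypothetical header size)."""
    total_bits = header_bits + payload_bits
    return math.ceil(total_bits / link_width_bits)

# A wide link for a bandwidth-hungry path vs. a narrow link on a path
# where area matters more than latency.
print(link_cycles(128, 64))   # wide link
print(link_cycles(128, 16))   # narrow link
```

Because each link can be sized independently, a designer can spend wires only where the traffic profile justifies them, which is precisely the area/performance lever the article attributes to NoC link configurability.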
As promising as these forecasts sound, only a few large IDMs are well positioned to benefit from this rapidly growing market. This is due to the specialized expertise, long development time, and high cost of bringing MEMS devices to market. Almost all MEMS devices are tightly integrated with electronics—either on a common silicon substrate or in the same package. Yet MEMS design has traditionally been separated from IC design and verification.

MEMS devices are typically designed by PhD-level experts in such fields as mechanical, optical, and fluidic engineering. They use their own two-dimensional (2D) and three-dimensional (3D) mechanical computer-aided-design (CAD) tools for design entry and finite-element-analysis (FEA) tools for simulation. Eventually, the MEMS design must be handed off to an IC design team in order to go to fabrication. But the handoff typically follows an ad-hoc approach that requires a lot of design re-entry and expert handcrafting of behavioral models for functional verification.

Moreover, MEMS historically requires specialized process development for each design, resulting in a situation often described as "one process, one product." While there are a number of specialized MEMS foundries, support from pure-play foundries has been very limited. According to one analyst report, it takes an average of four years of development and $45 million in investment to bring a MEMS product to market.

Several trends are converging to make this level of effort and expertise unacceptable—not only for new entrants in the MEMS market, but for the best-positioned IDMs as well. First, the fast-paced consumer-electronics market demands design cycles that

These demands make MEMS more susceptible to unwanted coupling between sensing modes as well as between the MEMS sensors and electronics. The present approach to MEMS design—with separate design tools and ad-hoc methods for transferring MEMS designs to IC design and verification tools—is simply not up to these new challenges. The time has come to "democratize" MEMS design and bring it into the IC design mainstream. The result would be reduced design costs and shortened time to market. In addition, MEMS design would no longer be confined to teams of specialists inside IDMs.

A critical key to accomplishing this "democratization" is to build an integrated design flow for MEMS devices and the electronic circuits with which they interact. A structured design approach should be used that avoids manual handoffs. Companies like Coventor and Cadence are now working together to develop such integrated methodologies. Their goal is to shield IC designers from the complexity of MEMS design while reducing the time, cost, and risk of developing MEMS-enabled products. V

Dr. Joost van Kuijk is vice president of marketing and business development at Coventor. Dr. van Kuijk has more than 16 years of experience in the MEMS field, specializing in modeling and simulation. He received a PhD in micro system technology from Twente University, where he also received a diploma in technology information. In addition, Dr. van Kuijk holds an MSc in mechanical and precision engineering from Delft University.
One system,
infinite verification possibilities.
Speed, Capacity, & Lowest Cost of Ownership.
ZeBu-Server: Billions of Cycles for Billions of Gates
First, EVE's ZeBu emulators broke the billion-cycle barrier. Now, ZeBu-Server - EVE's next generation emulation system -
has broken the billion-gate barrier. Scalable to handle up to 1 billion ASIC gates, and with execution speeds up to
30MHz, ZeBu-Server is a multi-mode, multi-user emulator suitable for all system-on-chip (SoC) hardware-assisted
verification needs, across the entire development cycle, from hardware verification and hardware/software integration
to embedded software validation.
Used in virtually every ASIC/SoC market segment, from graphics and computer peripheral applications to processor and wireless mobile applications, ZeBu emulators are truly a universal solution.
Contact EVE-Team today for a FREE consultation at info@eve-team.com, or visit us online at www.eve-team.com.
© 2010 EVE. All rights reserved. Contact us at 408-457-3200 or 888-738-3872 (toll-free) // www.eve-team.com // info@eve-team.com