
Narasu’s Sarathy Institute of Technology, Salem

Electrical and Electronics Engineering

Latest Technologies

FLEXIBLE AC TRANSMISSION SYSTEM

A Flexible Alternating Current Transmission System (FACTS) is a system


composed of static equipment used for the AC transmission of electrical energy. It is meant to
enhance controllability and increase power transfer capability of the network. It is generally a
power electronics-based device.

FACTS is defined as "a power electronic based system and other static equipment
that provide control of one or more AC transmission system parameters to enhance
controllability and increase power transfer capability."

FACTS could be connected:

• in series with the power system (series compensation)


• in shunt with the power system (shunt compensation)
• both in series and in shunt with the power system

Series compensation

In series compensation, the FACTS device is connected in series with the power system. It
works as a controllable voltage source. Series inductance is present in long transmission lines, and
when a large current flows it causes a large voltage drop. To compensate, series capacitors are
connected.

Shunt compensation

In shunt compensation, the FACTS device is connected in shunt (parallel) with the power system. It works as
a controllable current source. Shunt compensation is of two types:

Shunt capacitive compensation

This method is used to improve the power factor. Whenever an inductive load is connected
to the transmission line, the power factor lags because of the lagging load current. To compensate, a
shunt capacitor is connected which draws current leading the source voltage. The net result is an
improvement in power factor.
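As a rough illustration, the following C sketch sizes a shunt capacitor bank that raises the power factor of an assumed inductive load. The load power, supply voltage, and target power factor are illustrative values, not taken from the text.

/* Shunt capacitive compensation: size a capacitor bank that raises the
 * power factor of an inductive load from pf1 to pf2.
 * Illustrative sketch; the load values below are assumed, not from the text. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double PI = 3.141592653589793;
    double P   = 500e3;   /* real power of the load, W (assumed)      */
    double V   = 11e3;    /* supply voltage, V (assumed)              */
    double f   = 50.0;    /* system frequency, Hz                     */
    double pf1 = 0.70;    /* lagging power factor before compensation */
    double pf2 = 0.95;    /* target power factor after compensation   */

    /* Reactive power the capacitor must supply: Qc = P (tan phi1 - tan phi2) */
    double q_c = P * (tan(acos(pf1)) - tan(acos(pf2)));

    /* Capacitance of a bank seen across V: C = Qc / (2*pi*f*V^2) */
    double c = q_c / (2.0 * PI * f * V * V);

    printf("Capacitor bank rating: %.1f kvar\n", q_c / 1e3);
    printf("Required capacitance : %.1f uF\n", c * 1e6);
    return 0;
}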

Shunt inductive compensation

This method is used either when charging the transmission line or when there is very
low load at the receiving end. Due to very low or no load, very little current flows through the
transmission line. Shunt capacitance in the transmission line causes voltage amplification (the
Ferranti effect). The receiving-end voltage may become double the sending-end voltage
(generally in the case of very long transmission lines). To compensate, shunt inductors are connected
across the transmission line.
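To see the size of the effect, the sketch below estimates the no-load voltage rise using a lumped nominal-pi line model, for which Vr/Vs = 1/(1 - w^2*L*C/2). The line length and per-kilometre constants are assumed typical values, not figures from the text.

/* Ferranti effect: receiving-end voltage rise of an unloaded line,
 * using a lumped nominal-pi model. Line constants are typical assumed values. */
#include <stdio.h>

int main(void)
{
    const double PI = 3.141592653589793;
    double len = 500.0;         /* line length, km (assumed)             */
    double L   = 1.0e-3 * len;  /* total series inductance, H (1 mH/km)  */
    double C   = 12.0e-9 * len; /* total shunt capacitance, F (12 nF/km) */
    double f   = 50.0;          /* Hz */
    double w   = 2.0 * PI * f;

    /* At no load, Vs = A*Vr with A = 1 + ZY/2 = 1 - w^2*L*C/2, so Vr > Vs. */
    double A = 1.0 - w * w * L * C / 2.0;
    printf("Vr/Vs at no load = %.3f (%.1f%% rise)\n",
           1.0 / A, (1.0 / A - 1.0) * 100.0);
    return 0;
}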

HVDC TRANSMISSION SYSTEMS

A high-voltage, direct current (HVDC) electric power transmission system uses direct
current for the bulk transmission of electrical power, in contrast with the more common
alternating current systems. For long-distance transmission, HVDC systems are less expensive
and suffer lower electrical losses. For shorter distances, the higher cost of DC conversion
equipment compared to an AC system may be warranted where other benefits of direct current
links are useful.

The modern form of HVDC transmission uses technology developed extensively in the
1930s in Sweden at ASEA. Early commercial installations included one in the Soviet Union in
1951 between Moscow and Kashira, and a 10-20 MW system between Gotland and mainland
Sweden in 1954. The longest HVDC link in the world is currently the Inga-Shaba 1,700 km
(1,100 mi) 600 MW link connecting the Inga Dam to the Shaba copper mine, in the Democratic
Republic of Congo. High Voltage Direct Current solutions have become more desirable for the
following reasons:

• Environmental advantages
• Economical (cheapest solution)
• Asynchronous interconnections
• Power flow control
• Added benefits to the transmission (stability, power quality, etc.)

The HVDC technology

The fundamental process that occurs in an HVDC system is the conversion of
electrical current from AC to DC (rectifier) at the transmitting end, and from DC to AC
(inverter) at the receiving end. There are three ways of achieving conversion:

Natural Commutated Converters:

Natural commutated converters are the most common in HVDC systems today. The
component that enables this conversion process is the thyristor, which is a controllable
semiconductor that can carry very high currents (4000 A) and is able to block very high voltages
(up to 10 kV). By connecting thyristors in series it is possible to build up a
thyristor valve, which is able to operate at very high voltages (several hundred kV). The
thyristor valve is operated at net frequency (50 Hz or 60 Hz), and by means of a control angle it is
possible to change the DC voltage level of the bridge. This ability is the way by which the
transmitted power is controlled rapidly and efficiently.
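For an ideal six-pulse bridge (ignoring commutation overlap), the average DC voltage follows the textbook relation Vd = Vd0*cos(alpha), with Vd0 = (3*sqrt(2)/pi)*Vll. The C sketch below tabulates this; the valve-side voltage is an assumed illustrative value.

/* DC voltage of an ideal six-pulse thyristor bridge versus the firing
 * (control) angle alpha: Vd = Vd0*cos(alpha), Vd0 = (3*sqrt(2)/pi)*Vll.
 * Commutation overlap is ignored; values are illustrative. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double PI  = 3.141592653589793;
    double v_ll = 400e3;                       /* valve-side line voltage, V (assumed) */
    double vd0  = 3.0 * sqrt(2.0) / PI * v_ll; /* no-load DC voltage at alpha = 0      */

    for (int alpha_deg = 0; alpha_deg <= 90; alpha_deg += 15) {
        double vd = vd0 * cos(alpha_deg * PI / 180.0);
        printf("alpha = %2d deg  ->  Vd = %7.1f kV\n", alpha_deg, vd / 1e3);
    }
    return 0;
}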

Capacitor Commutated Converters (CCC):

An improvement in the thyristor-based commutation, the CCC concept is characterised
by the use of commutation capacitors inserted in series between the converter transformers and
the thyristor valves. The commutation capacitors improve the commutation failure performance
of the converters when connected to weak networks.

Forced Commutated Converters:

This type of converter introduces a spectrum of advantages, e.g. the ability to feed passive
networks (without generation), independent control of active and reactive power, and improved power quality.
The valves of these converters are built with semiconductors that are able not only to turn
on but also to turn off. They are known as VSCs (Voltage Source Converters). Two types of
semiconductors are normally used in voltage source converters: the GTO (Gate Turn-Off
Thyristor) and the IGBT (Insulated Gate Bipolar Transistor). Both have been in frequent
use in industrial applications since the early eighties. The VSC commutates at high frequency (not
at the net frequency). The operation of the converter is achieved by Pulse Width Modulation
(PWM). With PWM it is possible to create any phase angle and/or amplitude (up to a certain
limit) by changing the PWM pattern, which can be done almost instantaneously. Thus, PWM
offers the possibility to control both active and reactive power independently. This makes the
PWM voltage source converter a close to ideal component in the transmission network. From a
transmission network viewpoint, it acts as a motor or generator without mass that can control
active and reactive power almost instantaneously.
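The sketch below illustrates carrier-based sinusoidal PWM for a single phase leg: the switch state is the comparison of a sinusoidal reference against a triangular carrier, and changing the modulation index or the reference phase shifts the fundamental of the output. The frequencies, modulation index, and phase are assumed illustrative values; real converter controls are considerably more involved.

/* Carrier-based sinusoidal PWM for one phase leg of a VSC (sketch).
 * Switch state = comparison of a sinusoidal reference with a triangular
 * carrier; changing m or phi changes the fundamental output almost instantly. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double PI = 3.141592653589793;
    double f_ref     = 50.0;    /* fundamental (net) frequency, Hz       */
    double f_carrier = 1950.0;  /* switching frequency, Hz (assumed)     */
    double m         = 0.8;     /* modulation index, 0..1 (assumed)      */
    double phi       = PI / 6;  /* phase of the reference, rad (assumed) */
    double dt        = 1.0e-5;  /* simulation step, s                    */

    for (double t = 0.0; t < 0.002; t += dt) {
        double ref = m * sin(2.0 * PI * f_ref * t + phi);
        double x = fmod(t * f_carrier, 1.0);        /* carrier phase 0..1     */
        double carrier = 4.0 * fabs(x - 0.5) - 1.0; /* triangle between -1..1 */
        int upper_switch_on = ref > carrier;        /* 1 = upper device on    */
        printf("%8.5f s  ref=%+.3f  carrier=%+.3f  S=%d\n",
               t, ref, carrier, upper_switch_on);
    }
    return 0;
}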

Advantages Of HVDC Over AC Transmission

• Undersea cables, where high capacitance causes additional AC losses. (e.g., 250 km
Baltic Cable between Sweden and Germany)
• Endpoint-to-endpoint long-haul bulk power transmission without intermediate 'taps', for
example, in remote areas
• Increasing the capacity of an existing power grid in situations where additional wires are
difficult or expensive to install
• Power transmission and stabilization between unsynchronised AC distribution systems
• Connecting a remote generating plant to the distribution grid, for example Nelson River
Bipole
• Stabilizing a predominantly AC power-grid, without increasing prospective short circuit
current
• Reducing line cost. HVDC needs fewer conductors as there is no need to support multiple
phases. Also, thinner conductors can be used since HVDC does not suffer from the skin
effect
• Facilitate power transmission between different countries that use AC at differing
voltages and/or frequencies
• Synchronize AC produced by renewable energy sources

• HVDC can carry more power per conductor because, for a given power rating, the
constant voltage in a DC line is lower than the peak voltage in an AC line. In AC power,
the root mean square (RMS) voltage measurement is considered the standard, but RMS is
only about 71% of the peak voltage. The peak voltage of AC determines the actual
insulation thickness and conductor spacing. Because DC operates at a constant voltage equal
to that maximum, existing transmission line corridors with equally sized
conductors and insulation can carry up to 100% more power into an area of high power
consumption than with AC, which can lower costs (see the sketch below).
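The 71% figure is simply Vrms = Vpeak/sqrt(2). The C sketch below works through that per-conductor arithmetic with assumed values; on a per-conductor, equal-current basis it gives roughly 41% more power, while the corridor-level figure quoted above also depends on how the existing phase conductors are reused.

/* RMS vs. peak: for the same insulation-limited peak voltage and the same
 * conductor current, compare the power carried by a DC pole and one AC
 * phase. Values are illustrative. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double v_peak = 500e3;               /* insulation-limited peak voltage, V */
    double i      = 1000.0;              /* conductor current, A (assumed)     */

    double v_rms  = v_peak / sqrt(2.0);  /* about 71% of the peak voltage      */
    double p_ac   = v_rms * i;           /* per-phase AC power (unity pf)      */
    double p_dc   = v_peak * i;          /* DC pole operated at the peak level */

    printf("AC RMS voltage  : %.0f kV (%.0f%% of peak)\n",
           v_rms / 1e3, 100.0 * v_rms / v_peak);
    printf("Power, AC phase : %.0f MW\n", p_ac / 1e6);
    printf("Power, DC pole  : %.0f MW (%.0f%% more)\n",
           p_dc / 1e6, 100.0 * (p_dc / p_ac - 1.0));
    return 0;
}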

EMBEDDED SYSTEM

An embedded system is a special-purpose computer system designed to perform one
or a few dedicated functions, often with real-time computing constraints. It is usually embedded
as part of a complete device including hardware and mechanical parts. In contrast, a general-
purpose computer, such as a personal computer, can do many different tasks depending on
programming. Embedded systems control many of the common devices in use today. Since the
embedded system is dedicated to specific tasks, design engineers can optimize it, reducing the
size and cost of the product, or increasing the reliability and performance. Some embedded
systems are mass-produced, benefiting from economies of scale. Physically, embedded systems
range from portable devices such as digital watches and MP4 players, to large stationary
installations like traffic lights, factory controllers, or the systems controlling nuclear power
plants. Complexity varies from low, with a single microcontroller chip, to very high with
multiple units, peripherals and networks mounted inside a large chassis or enclosure.

In general, "embedded system" is not an exactly defined term, as many systems have
some element of programmability. For example, handheld computers share some elements with
embedded systems — such as the operating systems and microprocessors which power them —
but are not truly embedded systems, because they allow different applications to be loaded and
peripherals to be connected.

Embedded systems programming is not like normal PC programming. In many ways,
programming for an embedded system is like programming a PC 15 years ago. The hardware for
the system is usually chosen to make the device as cheap as possible. Spending an extra dollar a
unit in order to make things easier to program can cost millions. Hiring a programmer for an
extra month is cheap in comparison. This means the programmer must make do with slow
processors and low memory, while at the same time battling a need for efficiency not seen in
most PC applications.

Embedded development makes up a small fraction of total programming. There is also
a large number of embedded architectures, unlike the PC world, where one instruction set rules,
and the Unix world, where there are only three or four major ones. This means that the tools are more
expensive, have fewer features, and are less developed. On a major embedded
project, at some point you will almost always find a compiler bug of some sort.

Debugging tools are another issue. Since you can't always run general programs on
your embedded processor, you can't always run a debugger on it. This makes fixing your
program difficult. Special hardware such as JTAG ports can overcome this issue in part.
However, if you stop on a breakpoint when your system is controlling real-world hardware (such
as a motor), permanent equipment damage can occur. As a result, people doing embedded
programming quickly become masters at using serial I/O channels and error-message style
debugging.
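As a crude illustration of that error-message style of debugging, here is a minimal C sketch. On a real target the dbg_putline() routine would write to a UART rather than stderr, and all of the names are illustrative.

/* Error-message style debugging over a serial channel (sketch).
 * stdio stands in for a UART so the sketch stays self-contained. */
#include <stdio.h>

static void dbg_putline(const char *s)
{
    fputs(s, stderr);          /* stand-in for a UART transmit routine */
    fputc('\n', stderr);
}

#define DBG(msg) dbg_putline("DBG " __FILE__ ": " msg)

static int read_motor_speed(void)
{
    return -1;                 /* pretend the sensor read failed */
}

int main(void)
{
    int speed = read_motor_speed();
    if (speed < 0) {
        DBG("motor speed read failed, using last good value");
        speed = 0;
    }
    printf("speed = %d\n", speed);
    return 0;
}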

To save costs, embedded systems frequently have the cheapest processors that can do
the job. This means your programs need to be written as efficiently as possible. When dealing
with large data sets, issues like memory cache misses that never matter in PC programming can
hurt you. Luckily, this won't happen too often: use reasonably efficient algorithms to start, and
optimize only when necessary. Of course, normal profilers won't work well, for the same
reason debuggers don't work well. So more intuition and an understanding of your software and
hardware architecture are necessary to optimize effectively.

Memory is also an issue. For the same cost savings reasons, embedded systems
usually have the least memory they can get away with. That means their algorithms must be
memory efficient (unlike in PC programs, you will frequently sacrifice processor time for
memory, rather than the reverse). It also means you can't afford to leak memory. Embedded
applications generally use deterministic memory techniques and avoid the default "new" and
"malloc" functions, so that leaks can be found and eliminated more easily.

Other resources programmers expect may not even exist. For example, most
embedded processors do not have a hardware FPU (floating-point unit). These
resources either need to be emulated in software or avoided altogether.

Real Time Issues

Embedded systems frequently control hardware, and must be able to respond to it in
real time. Failure to do so could cause inaccuracy in measurements, or even damage hardware
such as motors. This is made even more difficult by the lack of resources available. Almost all
embedded systems need to be able to prioritize some tasks over others, and to be able to put
off or skip low-priority tasks such as UI in favor of high-priority tasks like hardware control.
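As a toy illustration of that prioritization, the following C sketch runs the high-priority hardware task first on every pass of the main loop and skips the UI when hardware work is pending. A real system would use interrupts or an RTOS scheduler; all names here are illustrative.

/* Prioritizing tasks (sketch): each pass through the main loop services
 * high-priority hardware work first and skips low-priority UI updates
 * whenever something urgent is pending. */
#include <stdio.h>

static int motor_needs_service = 1;   /* pretend a motor interrupt set this */

static void control_motor(void) { puts("servicing motor (high priority)"); }
static void update_ui(void)     { puts("updating UI (low priority)"); }

int main(void)
{
    for (int tick = 0; tick < 3; tick++) {
        if (motor_needs_service) {     /* high-priority hardware task first */
            control_motor();
            motor_needs_service = 0;
            continue;                  /* skip low-priority work this pass  */
        }
        update_ui();                   /* only runs when nothing urgent is pending */
    }
    return 0;
}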

Fixed-Point Arithmetic

Some embedded microprocessors may have an external unit for performing floating-point
arithmetic (an FPU), but most low-end embedded systems have no FPU. Most C compilers
will provide software floating-point support, but this is significantly slower than a hardware FPU.
As a result, many embedded projects enforce a no-floating-point rule on their programmers. This
is in strong contrast to PCs, where the FPU has been integrated into all the major
microprocessors, and programmers take fast floating-point calculations for granted.
Many DSPs also do not have an FPU and require fixed-point arithmetic to obtain acceptable
performance.

A common technique used to avoid the need for floating-point numbers is to change
the magnitude of the data stored in your variables so you can use fixed-point mathematics. For
example, if you are adding inches and only need to be accurate to the hundredth of an inch, you
could store the data as hundredths rather than inches. This allows you to use normal fixed-point
arithmetic. This technique works so long as you know the magnitude of the data you are adding
ahead of time, and know the accuracy to which you need to store your data.
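The hundredths-of-an-inch example translates directly into integer code. A minimal C sketch, with illustrative values:

/* Fixed-point arithmetic: store lengths as integer hundredths of an inch,
 * so no FPU or software floating point is needed. Mirrors the example in
 * the text; values are illustrative. */
#include <stdio.h>

typedef long hundredths_t;           /* length in 1/100 inch */

#define INCHES(whole, frac) ((hundredths_t)((whole) * 100 + (frac)))

int main(void)
{
    hundredths_t a = INCHES(12, 25); /* 12.25 in */
    hundredths_t b = INCHES(3, 50);  /*  3.50 in */
    hundredths_t sum = a + b;        /* plain integer addition */

    printf("sum = %ld.%02ld in\n", sum / 100, sum % 100);
    return 0;
}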
LATEST DISTRIBUTED CONTROL SYSTEM TECHNOLOGY

Distributed control systems are powerful assets for new and modernized power plants. Thanks
to three product generations of technology innovations, these systems now provide new benefits
— including improved O&M efficiency, greater plant design flexibility, and improved process
control and asset reliability — that help competitive plants advance in the game.

With nearly 30 years of evolution — and three fundamental technology generations —
since their initial introduction into power plant applications, distributed control systems (DCS)
have improved considerably. Though specific release dates vary among vendors, the first
generation of DCS appeared during the 1980s, the second generation during the 1990s, and the third
generation in the mid-2000s.

With each major system release, many new DCS capabilities and features have been
added, resulting in new benefits for plant designers and owners.

First Generation: The Early DCS

The introduction of microprocessor-based plant control occurred shortly before 1980
with simple single-loop controllers. This technology quickly evolved into a DCS with control
processor redundancy, high-density input/output (I/O) systems, and a human machine interface
(HMI).

Perhaps the most significant feature of the early DCS was the ability to geographically
distribute control system processors and I/O components, thus influencing power plant designs
by greatly reducing the amount of field wiring needed between control equipment and field
instruments.

As the first-generation DCS evolved, advances in technology enabled PC-based
engineering tools as well as function block programming, which greatly simplified the
construction and flexibility of controller-based application code. As controller speed and
memory increased, control system engineers quickly realized that control logic strategies truly
would only be limited by the engineers’ imagination.

When compared to previous technologies — plant computers and electrical analog
control systems — the first-generation DCS stands out as a tremendous leap in technology for its
time.

Second Generation: The Open System DCS

One limiting factor of first-generation systems was that they were designed to use
proprietary communication technologies. Consequently, connections to third-party systems were
typically limited to custom-developed interfaces. This changed during the 1990s, and the DCS
became recognized as the optimal vehicle for integrating process data from the various
automation platforms used within a typical plant.

The open system DCS provided standard communication interfaces for connecting the
various automation subsystems. Supporting integrated plant operations for all automated plant
equipment, the DCS provided a centralized and common "single window view" of plant data for
control, logical interlock, alarm, and history. Enterprise management solutions, also enabled by
the open system, provided new opportunities for fleet management centers to improve operations
by remotely monitoring plant processes, analyzing unit efficiencies, and supporting coordination
between operating units.

Additionally, the use of commercial off-the-shelf technology emerged during this period
as standard Ethernet networking components and Microsoft Windows-based systems were
applied at the DCS HMI layer.

As demand for more open systems grew — along with strong interest in integrating field
bus technology and making full use of an integrated operations and engineering environment —
the third-generation DCS emerged.

Third Generation: The Extended Automation DCS


Today’s power generators are faced with intense pressure to improve production
reliability and bottom line profitability. As a result, current business goals focus on increasing
operational efficiency and overall equipment effectiveness (OEE). In support of OEE — a tool
used to identify production loss and asset availability — third-generation DCS employ powerful
object-oriented design technology to enable efficiency improvements within daily operations and
maintenance (O&M) activities.

Additionally, advanced process optimization technology is added to support
improvements in process efficiencies such as power plant heat rate. Asset optimization is
available to improve production reliability through improved process stability as well as through
asset monitoring for predictive maintenance. Control system technology also now integrates
several field bus protocols, thus enabling more flexible plant designs as well as improved data
for maintenance. An example of a third-generation DCS is ABB’s Industrial IT System 800xA.
Aspect System Technology:
Embedded within the 800xA DCS system’s platform core is a new object-oriented
technology called an "aspect system." Aspect system technology provides an enterprise-wide
data management tool within the DCS operator’s console. It allows plant O&M information to be
directly linked to DCS graphical objects. This means users with secure access to the DCS screens
(such as plant operators, maintenance personnel, and managers) can get personalized views of
important plant information. Providing the right information to the right person at the right time
for informed decision-making saves time and thereby improves operational efficiency.

" Aspect links," which are simple, menu-driven links to O&M information, can be
launched via mouse click from DCS graphical objects, alarm points, or a controller configuration
drawing (Figure 1). Aspect links of interest to plant operators may include alarm decision system
information, operational help screens, live video feeds, start-up instructions, and trends. Links of
interest to instrumentation and control personnel may include detailed troubleshooting
information such as plant piping and instrumentation drawings, equipment O&M manuals,
application guides, and smart device management tools. Links used by maintenance management
may include work orders, fault reports, or spare part inventories.

1. Linked up. Improving the efficiency of plant operations and maintenance, the 800xA
distributed control system (DCS) provides aspect link technology for navigating to important
plant information from DCS client screens. Source: ABB

Permissions can be configured to manage individual views into the aspect links, thereby
ensuring that system users can only view information relative to their specific job function.
Process Optimization and Asset Optimization.
To support the goal of increased plant process efficiency, advanced control can be added
to the DCS using model predictive controller (MPC) technology. The MPC approach provides a
multi-variable algorithm that runs at a much higher frequency than earlier optimization
techniques (typically, cycle times are measured in seconds, rather than minutes). The result is an
accurate process model that can be added to base system controls to produce less variability and
smoother transitions. Less variability typically enables processes to operate closer to equipment
design limits, therefore enabling significant improvements in steam temperature, ramp rate, heat
rate, situations with complex coordinated control, and reduced emissions.

Asset optimization, now available within most third-generation DCS designs, facilitates
increased OEE and avoids unplanned shutdowns, thereby increasing plant availability. Asset
optimization can also extend the life of plant assets by using advanced predictive maintenance
techniques. For plant assets, a logical analysis function called the "asset monitor" provides 24/7
supervision of the plant device or process. Assets that can be monitored include DCS
components, communication networks, smart instrumentation, process control loops, pumps and
drives. Power plant processes such as feedwater heaters, water quality, and heat exchangers can
also be monitored. Asset monitor options can be scaled to include any number of assets, from
plant to fleet.

By applying object-oriented technology, asset optimization is seamlessly integrated
with commercially available computerized maintenance management systems (CMMS). From
the DCS process graphics, plant maintenance staff can get an asset management view of the plant
to access work orders, spare part inventories, and maintenance activities. They can also rely upon
the DCS to identify problems and automatically generate a fault report for automated download
back into the CMMS.

Expanded Connectivity for Process Control.

Third-generation DCS controllers and I/O hardware occupy a much smaller footprint than
earlier systems. DIN rail components operate using 24 VDC and can be routed via redundant
fiber optic networks. This makes for a more scalable solution, as it is much easier and more
economical to physically distribute clusters of remote I/O throughout the plant. DCS controller
technology has also evolved to support SIL 2 and 3 standards for safety as well as the traditional
National Fire Protection Association 85 requirements applied to many utility applications.

Integrated fieldbus is a significant third-generation DCS enhancement. In particular,
bussed communication reduces field wiring and provides beneficial data for asset management.
Because the technology allows mixing bus protocol connections within a common controller, it
gives plant designers great flexibility for plant layout and final control element device selection.
Today’s control systems support the integration of many protocols, including Profibus,
Foundation Fieldbus, DeviceNet, and IEC 61850.

IEC 61850 is a recent development used for electrical system integration into the
plant DCS. With capabilities for integrating intelligent electronic devices (IEDs) for control,
asset monitoring, and device management, the IEC 61850 standard is emerging with connectivity
options for protection relays, drives, medium- and high-voltage switchgear, and other equipment.
Also, specifically for power plant applications, DCS controllers can integrate field-bussed
specialty cards for turbine control (overspeed, auto-synchronization, and valve position), vibration
condition monitoring, and flame scanners.

Finally, thought they’re not classified as fieldbus protocols, the highway-addressable


remote transducer (HART) and Modbus over Ethernet have also been more tightly integrated
into the third-generation DCS controller level.

2. Extended automation DCS. Third-generation distributed control systems offer many options
for connecting plant process instruments and devices using fieldbus, Ethernet, and wireless
technologies, as well as through traditional hardwired I/O systems. Source: ABB
Engineering Tool Enhancements.

The DCS software interface employs object-oriented technology to provide user-definable
"library objects." This approach allows complete control strategies — such as motor-operated
valve control, faceplate, graphic element, and aspect links — to be packaged into a
single library object that is available as an element within the project library.

As an object is used repeatedly throughout a project, it maintains its reference
"inheritance" to the original library object. This allows for a consistent design approach for all
similar plant devices and also simplifies maintenance of control configurations when code
modifications are required. Control programming methods are available to support function
blocks from previous first- and second-generation DCS systems as well as IEC-standard function
blocks, ladder logic, instruction list, structured text, and sequential function charts.
Improved Power Plant Simulators.

When used for operator training, simulator systems typically provide a substantial
opportunity to improve plant operational efficiency and expertise. Simulators can also serve as
testing grounds for verifying DCS logic changes. In earlier DCS generations, power plant
simulators offered controller hardware-based "stimulated" or PC "emulated" simulators. The
latest DCS simulator technology provides a "virtual controller" PC-based environment for
running the original equipment manufacturer (OEM) version of the controller configuration.

The virtual controller is easier to maintain than the previous generations’ hardware-based
stimulated simulators. Furthermore, when combined with the OEM HMI and actual operator
process graphics, the virtual controller approach provides the most realistic simulation system
environment and can be easily coupled to a range of low- to high-fidelity process simulations.

SIMULATION SOFTWARES

What is MATLAB

• Stands for MATrix LABoratory
• Developed by The MathWorks, Inc. (http://www.mathworks.com)
• An interactive, integrated environment for numerical computations, for symbolic
computations (via Maple), and for scientific visualizations
• A high-level programming language; programs run in interpreted, as opposed to compiled, mode
MATLAB is a numerical computing environment and fourth-generation programming language.
Maintained by The MathWorks, MATLAB allows easy matrix manipulation, plotting of
functions and data, implementation of algorithms, creation of user interfaces, and interfacing
with programs in other languages. Although it is numeric only, an optional toolbox uses the
MuPAD symbolic engine, allowing access to computer algebra capabilities. An additional
package, Simulink, adds graphical multidomain simulation and Model-Based Design for
dynamic and embedded systems. In 2004, MathWorks claimed that MATLAB was used by
more than one million people across industry and the academic world.

Interactions with other languages: MATLAB can call functions and subroutines written in the
C programming language or Fortran. A wrapper function is created allowing MATLAB data
types to be passed and returned. The dynamically loadable object files created by compiling such
functions are termed "MEX-files" (for MATLAB executable).
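As an illustration, here is a minimal C MEX-file that returns twice its input. The mexFunction gateway and the mx* routines are the standard MEX interface, but the file name "times_two" and the details below are a sketch rather than a complete build recipe; such a file would be compiled from within MATLAB using the mex command (mex times_two.c) and then called as y = times_two(x).

/* Minimal MEX-file (C gateway callable from MATLAB) that returns 2*x.
 * The name "times_two" is illustrative. */
#include "mex.h"

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    (void)nlhs;                                     /* single output assumed */

    if (nrhs != 1 || !mxIsDouble(prhs[0]))
        mexErrMsgTxt("times_two expects one double array input.");

    mwSize m = mxGetM(prhs[0]);
    mwSize n = mxGetN(prhs[0]);

    plhs[0] = mxCreateDoubleMatrix(m, n, mxREAL);   /* output of same size */

    double *in  = mxGetPr(prhs[0]);
    double *out = mxGetPr(plhs[0]);
    for (mwSize i = 0; i < m * n; i++)
        out[i] = 2.0 * in[i];                       /* element-wise doubling */
}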

Libraries written in Java, ActiveX or .NET can be directly called from MATLAB and many
MATLAB libraries (for example XML or SQL support) are implemented as wrappers around
Java or ActiveX libraries. Calling MATLAB from Java is more complicated, but can be done
with a MATLAB extension, which is sold separately by MathWorks.
Through the MATLAB Toolbox for Maple, MATLAB commands can be called from within the
Maple Computer Algebra System, and vice versa.

PSPICE

PSpice is a SPICE analog circuit and digital logic simulation software that runs on personal
computers, hence the first letter "P" in its name. It was developed by MicroSim and is used in
electronic design automation. MicroSim was bought by OrCAD which was subsequently
purchased by Cadence Design Systems. The name is an acronym for Personal Simulation
Program with Integrated Circuit Emphasis. Today it has evolved into an analog mixed signal
simulator.

PSpice was the first version of UC Berkeley SPICE available on a PC, having been released in
January 1984 to run on the original IBM PC. This initial version ran from two 360 KB floppy
disks and later included a waveform viewer and analyser program called Probe. Subsequent
versions improved in performance and moved to DEC/VAX minicomputers, Sun workstations,
the Apple Macintosh, and the Microsoft Windows platform.

PSpice, now developed towards more complex industry requirements, is integrated in the
complete systems design flow from OrCAD and Cadence Allegro. It also supports many
additional features that were not available in the original Berkeley code, such as Advanced
Analysis with automatic optimization of a circuit, encryption, a Model Editor, support for
parameterized models, several internal solvers, auto-convergence and checkpoint restart, a
magnetic part editor, and the Tabrizi core model for non-linear cores.

VLSI TECHNOLOGY

Very-large-scale integration (VLSI) is the process of creating integrated circuits by
combining thousands of transistor-based circuits into a single chip. VLSI began in the 1970s when
complex semiconductor and communication technologies were being developed. The
microprocessor is a VLSI device. The term is no longer as common as it once was, as chips have
increased in complexity into the hundreds of millions of transistors.

This is the field which involves packing more and more logic devices into smaller and
smaller areas. Thanks to VLSI, circuits that would have taken boardfuls of space can now be put
into a space a few millimeters across! This has opened up a big opportunity to do things
that were not possible before. VLSI circuits are everywhere: your computer, your car,
your brand new state-of-the-art digital camera, your cell phone, and what have you.
All this involves a lot of expertise on many fronts within the same field.
VLSI has been around for a long time; there is nothing new about it. But as a side effect of
advances in the world of computers, there has been a dramatic proliferation of tools that can be
used to design VLSI circuits. Alongside, obeying Moore's law, the capability of an IC has
increased exponentially over the years in terms of computation power, utilisation of available
area, and yield. The combined effect of these two advances is that people can now put diverse
functionality into ICs, opening up new frontiers. Examples are embedded systems, where
intelligent devices are put inside everyday objects, and ubiquitous computing, where small
computing devices proliferate to such an extent that even the shoes you wear may actually do
something useful, like monitoring your heartbeat! These two fields are kind of related, and getting
into their description can easily lead to another article.

The first semiconductor chips held one transistor each. Subsequent advances added
more and more, and as a consequence more individual functions or systems were integrated over
time. The first integrated circuits held only a few devices, perhaps as many as ten diodes,
transistors, resistors and capacitors, making it possible to fabricate one or more logic gates on a
single device. Now known retrospectively as small-scale integration (SSI), improvements in
technique led to devices with hundreds of logic gates, known as large-scale integration (LSI), i.e.
systems with at least a thousand logic gates. Current technology has moved far past this mark,
and today’s microprocessors have many millions of gates and hundreds of millions of individual
transistors.

At one time, there was an effort to name and calibrate various levels of large-scale
integration above VLSI. Terms like ultra-large-scale integration (ULSI) were used. But the huge
number of gates and transistors available on common devices has rendered such fine distinctions
moot. Terms suggesting greater-than-VLSI levels of integration are no longer in widespread use.
Even VLSI is now somewhat quaint, given the common assumption that all microprocessors are
VLSI or better.

As of early 2008, billion-transistor processors are commercially available, an example of
which is Intel’s Montecito Itanium chip. This is expected to become more commonplace as
semiconductor fabrication moves from the current generation of 65 nm processes to the next
45 nm generation (while experiencing new challenges such as increased variation across process
corners). Another notable example is Nvidia’s 280 series GPU. This GPU is unique in
that its 1.4 billion transistor count, capable of a teraflop of performance, is almost
entirely dedicated to logic (Itanium’s transistor count is largely due to its 24 MB L3
cache). Current designs, as opposed to the earliest devices, use extensive design automation and
automated logic synthesis to lay out the transistors, enabling higher levels of complexity in the
resulting logic functionality. Certain high-performance logic blocks, like the SRAM cell, however,
are still designed by hand to ensure the highest efficiency (sometimes by bending or breaking
established design rules to obtain the last bit of performance by trading stability).

NANOTECHNOLOGY

Nanotechnology, shortened to "nanotech", is the study of the control of matter on an atomic
and molecular scale. Generally, nanotechnology deals with structures of the size 100 nanometers
or smaller, and involves developing materials or devices within that size. Nanotechnology is very
diverse, ranging from novel extensions of conventional device physics, to completely new
approaches based upon molecular self-assembly, to developing new materials with dimensions
on the nanoscale, and even to speculation on whether we can directly control matter on the atomic
scale.

There has been much debate on the future implications of nanotechnology. Nanotechnology
has the potential to create many new materials and devices with wide-ranging applications, such
as in medicine, electronics, and energy production. On the other hand, nanotechnology raises
many of the same issues as with any introduction of new technology, including concerns about
the toxicity and environmental impact of nanomaterials [1], and their potential effects on global
economics, as well as speculation about various doomsday scenarios. These concerns have led to
a debate among advocacy groups and governments on whether special regulation of
nanotechnology is warranted.

Fundamental concepts

One nanometer (nm) is one billionth, or 10⁻⁹, of a meter. By comparison, typical carbon-carbon
bond lengths, or the spacing between these atoms in a molecule, are in the range 0.12–0.15 nm,
and a DNA double helix has a diameter of around 2 nm. On the other hand, the smallest cellular
life forms, the bacteria of the genus Mycoplasma, are around 200 nm in length.

To put that scale in another context, the comparative size of a nanometer to a meter is the same
as that of a marble to the size of the earth. Or another way of putting it: a nanometer is the
amount a man's beard grows in the time it takes him to raise the razor to his face.

Two main approaches are used in nanotechnology. In the "bottom-up" approach, materials and
devices are built from molecular components which assemble themselves chemically by
principles of molecular recognition. In the "top-down" approach, nano-objects are constructed
from larger entities without atomic-level control.

