
16.4 The Intel 4004 and the Birth of the Microprocessor


Before we describe the birth of the microprocessor, we need to briefly introduce the integrated circuit that made the microprocessor possible. The transistor, invented in 1947, works by controlling the flow of electrons through a structure embedded in silicon. This structure is composed of nothing more than adjoining regions of silicon with different concentrations of impurities. These impurities are atoms of elements like boron, phosphorus, and arsenic. By combining silicon with oxygen you get silicon dioxide, SiO2, a powerful insulator that allows you to separate regions of silicon. By evaporating (or sputtering) aluminum onto the surface of a silicon chip, you can create contacts and connectors. By putting all these elements together, several transistors can be combined to create a simple functional circuit, rather than a component. This is the IC (or integrated circuit), whose invention is attributed to Jack Kilby of TI and Robert Noyce of Fairchild. The first practical ICs were fabricated in 1959 at Fairchild and Texas Instruments, and Fairchild began their commercial manufacture in 1961 [Tredennick96]. As manufacturing technology evolved, more and more transistors were put on single silicon chips, with the maximum number of transistors per chip doubling every year between 1961 and 1971 [Tredennick96]. The basic functional units evolved from simple gates to arithmetic units, small memories, and special-purpose functions such as multiplexers and decoders. In 1967 Fairchild introduced an 8-bit ALU chip that included its own accumulator.

It was inevitable that someone would eventually invent the microprocessor because, by the late 1960s, computers built from discrete transistors and simple integrated circuits already existed. Moreover, integrated circuits were getting more and more complex day by day. The only real issue was when a semiconductor manufacturer would decide that a general-purpose digital computer on a chip was economically worth developing. Credit for creating the world's first microprocessor, the Intel 4004, goes to Hoff and Faggin, although William Aspray [Aspray97] in the Annals of the History of Computing points out that the microprocessor's development was a more complex and interesting story than many realize.

In 1969 Bob Noyce and Gordon Moore [FaginHoff97] set up the Intel Corporation to produce semiconductor memory chips for the mainframe industry. A year later Intel began to develop a set of calculator chips for a consortium of two Japanese companies. These chips were to be used in the "Busicom" range of calculators. Three engineers from Japan worked with M. E. Hoff at Intel to implement the calculator's digital logic circuits in silicon. Hoff had a PhD from Stanford University and a background in the design of interfaces for several IBM computers. When Hoff studied the calculator's logic, he was surprised by its complexity (in contrast to the general-purpose architectures found in digital computers). Hoff not only found the calculator overly complex but was also concerned by the packaging issues raised by the use of seven different LSI (large-scale integration) chips. Bob Noyce encouraged Hoff to look at the design of the calculator. One of Hoff's major contributions was to replace the complex and slow shift registers used to store data in the calculator with the DRAM memory cells that Intel was developing as storage elements. This step provided the system with more and faster memory. Hoff also suggested adding subroutine calls to the calculator's instruction set in order to reduce the amount of hardwired logic in the system. These ideas convinced Hoff to go further and develop a general-purpose computer that could be programmed to carry out calculator functions. By the end of 1969 Stanley Mazor, who also had computer design experience, joined the development team. Mazor added a fetch indirect instruction and (with Masatoshi Shima, one of the Busicom engineers) coded an interpreter to execute 1-byte macroinstructions. Shima also proposed including a conditional jump based on the status of an external pin. Towards the end of 1969 the structure of a programmable calculator had emerged, and Intel and Busicom chose the programmable calculator in preference to Busicom's original design. However, the project was delayed until Federico Faggin joined Intel in 1970 and worked on transforming the logic designs into silicon. In order to create a chip of such complexity (for its time), Faggin had to develop new semiconductor design techniques. The 4004 used about 2,300 transistors and is considered the first general-purpose programmable microprocessor, even though it was only a 4-bit device.

It is now interesting to note that Faggin et al.'s article [FaginHoff97] states that Intel discouraged the use of computer simulation because of its cost, and that Faggin did most of his circuit design with a slide rule, a device that few of today's students have ever seen. The first functioning 4004 chip was created in 1971. Busicom's actual calculator, constructed from a 4004 CPU, four 4001 ROMs, two 4002 RAMs, and three 4003 shift registers, appeared in 1971. By the end of 1971 the 4004 was beginning to generate a significant fraction of Intel's revenue. Faggin realized that the 4004 was much more than a calculator chip and set about trying to convince Intel's management to get the rights to this chip back from Busicom. Both Faggin and Hoff used the 4004 to control in-house systems (e.g., in a chip production tester and an EPROM programmer). Because Busicom was having financial problems, Intel was able to negotiate a deal that gave Busicom cheaper chip sets for its calculators in return for nonexclusivity of the 4004. This deal probably ranks with the purchase of Alaska by the USA from Russia as the best buy of the century.

Structure of the 4004 The 4004 was a 4-bit chip that used BCD arithmetic (i.e., it processed one BCD digit at a time). It had 16 general-purpose 4-bit registers, a 4-bit accumulator, and a four-level 12-bit pushdown address stack that held the program counter and three subroutine return addresses. Its logic included a binary and a BCD ALU. It also featured a pin that could be tested by a conditional jump instruction in order to poll external devices such as keyboards. This pin was replaced by a more general-purpose interrupt request input in later microprocessors. The 4004 was followed, remarkably rapidly, by the 8-bit 8008 microprocessor. In fact, the 8008 was originally intended for a CRT application and was developed concurrently with the 4004. By using some of the production techniques developed for the 4004, Intel was able to manufacture the 8008 as early as March 1972. However, the invention of the 4004 in 1971 eclipsed an even more important event in personal computing: the invention of the 8-inch floppy disk drive by IBM. The personal computer revolution could never have taken place without the introduction of a low-cost means of both storing data and transferring it between computers. The 5¼-inch floppy disk drive from Shugart first appeared at the end of 1976.

16.5 The Golden Era: the 8-bit Microprocessor


A golden era is a period of history viewed through rose-colored spectacles, when there appeared to be relative stability, life was good, and the bad things don't seem to matter much now. The 8-bit era between about 1975 and 1980 was good because the first few microprocessors were available at affordable prices and everyone could use them. Before then, computer power was very expensive indeed and only large organizations and university departments could afford mainframes or minicomputers. Intel's 8008 microprocessor didn't last a long time. As the first 8-bit microprocessor it was relatively crude and unsophisticated; it had a poorly implemented interrupt mechanism and multiplexed address and data buses. The first really popular general-purpose 8-bit microprocessor was Intel's 8080 (in production in early 1974). This had a separate 8-bit data bus and a 16-bit address bus that could address up to 2^16 = 64 Kbytes of data (a gigantic memory space in 1975). Intel didn't have the marketplace to itself for very long. Shortly after the 8080 went into production, Motorola created its own competitor, the 8-bit 6800. For a short period, engineers and computer scientists tended to be divided into two groups, Motorola enthusiasts and Intel enthusiasts. Although the 8080 and 6800 were broadly similar in terms of performance, they had rather different architectures. Both the 8080 and 6800 had modified accumulator-plus-single-address instruction formats. The 6800 was, to some extent, modeled on the PDP-11 and had a much cleaner architecture than the 8080. Federico Faggin himself said, "In many ways, the 6800 was a better product. However, the combination of timing, more aggressive marketing, availability of better software and hardware tools, and product manufacturability gave Intel the lead" [Faggin92]. This was not the first time (nor the last time) that commercial considerations outweighed architectural factors.

The division of the world into Intel and Motorola hemispheres continued when small companies developed two other 8-bit microprocessors. Federico Faggin left Intel with Ralph Ungermann in 1974 to found Zilog. Their first processor, the Z80, was manufactured in 1976. This device represented a considerable advance over the 8080 and was object-code compatible with it; that is, the Z80's architecture was a superset of the 8080's and it could execute the 8080's machine-code instructions. Zilog's Z80 was a success because it was compatible with the 8080 and yet incorporated many advances such as extra registers and instructions. It also incorporated some significant electrical improvements, such as an on-chip DRAM refresh mechanism. A lot of Z80s rapidly found their way into the first generation of personal computers. You could also say that the Z80 had a devastating effect on the microprocessor industry: the curse of compatibility. The success of the Z80 demonstrated that it was economically advantageous to stretch an existing architecture rather than to create a new architecture. Clearly, by incorporating the architecture of an existing processor in a new chip you can appeal to existing users who do not want to rewrite their programs to suit a new architecture. The downside of backward compatibility is that a new architecture cannot take a radical step forward. Improvements have to be tacked on in an almost random fashion. As time passes, the architecture becomes more and more unwieldy and difficult to program efficiently. Just as Faggin left Intel to create the Z80, Chuck Peddle left Motorola to join MOS Technology and create the 6502. The 6502's object code was not backward compatible with the 6800's; if you wanted to run a 6800 program on the 6502, you had to recompile it. The relationship between the 8080 and the Z80 is not the same as that between the 6800 and the 6502: the Z80 is a super 8080, whereas the 6502 is a 6800 with its architecture rearranged. For example, the 6800 has a 16-bit index register, X, that can point at any one of 2^16 memory locations. If you wish to copy data from one place in memory to another using indexing, you have to execute a loop containing something like
LDX    Source     Pick up pointer to source
LDA A  0,X        Get element pointed at
INC    X          Increment pointer
STX    Source     Save source pointer
LDX    Dest       Pick up pointer to destination
STA A  0,X        Store element pointed at
INC    X          Increment pointer
STX    Dest       Save destination pointer

As you can see, this is a rather long-winded way of doing things. The 6502 provides two pointers, an X pointer and a Y pointer. The code can now be simplified to
LDA    0,X        Get element pointed at
INC    X          Increment pointer
STA    0,Y        Store element pointed at
INC    Y          Increment pointer

However, where the 6800 had a 16-bit X pointer, the 6502 had an 8-bit X pointer and an 8-bit Y pointer. In 1976 Motorola got involved with Delco Electronics, which was designing an engine control module for General Motors. The controller was aimed at reducing exhaust emissions in order to meet new government regulations [Daniels96]. Motorola created a processor (later known as the 6801) that was able to replace a 6800 plus some of the chips required to make a system. This processor was backward compatible with the 6800 but included new index register instructions and an 8-bit × 8-bit multiplier. Daniels describes how he was given the task of taking the 6801 and improving it; his team removed instructions that took up a lot of silicon area (such as the decimal adjust used in BCD arithmetic) and added more useful instructions. Later, on a larger scale, this process led to the development of RISC architectures. The new processor, the 6805, was introduced in 1979, and it and its variants became some of the best-selling microprocessors in the history of the industry.

Origins and Rise of the Personal Computer Today, the term PC or personal computer is taken to mean the IBM PC or a clone thereof. That was not always so. For some time after the microprocessor had burst onto the scene with the 4004 and 8008, the personal computer was most conspicuous by its absence. I remember telling a colleague in 1977 that I was surprised that no large company had taken one of the new microprocessors and used it to build a personal computer.

Perhaps no major company wanted to create a personal computer market because, at that time, there were no low-cost peripherals such as hard disk drives, display technology was very crude (simple text-only displays on domestic TVs), there were no operating systems, and there was little applications software. The enthusiast or hacker blazed the trail to the PC market. I built my own microcomputer in 1976 using Motorola's 6800. The basic system was not too difficult to construct because Motorola provided a kit of parts (CPU, memory, serial I/O, and a bootstrap loader in ROM). Of course, there weren't any peripherals. I had to take it to the university where I was a postgraduate and connect it to a Teletype (a mechanical keyboard and printer). To use it at home I had to design and construct a video display using about 50 logic chips. There was no secondary storage, and I had to construct an interface to a domestic cassette recorder to store data at 300 bits/second. I even built a paper-tape reader by machining blocks of plastic in the university's workshop. It was fun to put all these bits and pieces together to create a microcomputer. Of course, the computer couldn't do anything because there was no software at that time. If you wanted software, you had to write it in machine code because assemblers and compilers weren't available.

Six months after the 8008 was introduced in 1972, the first ready-made computer based on the 8008 appeared. This was the Micral, designed and built in France, although it was not successful in the USA. The term "microcomputer" was first coined to refer to the Micral. Another early microcomputer based on the 8008 was the Scelbi-8H, marketed in kit form by the Scelbi Computer Consulting Company at $565 with 1 Kbyte of RAM. Quite a lot of interest in early microprocessors came from the amateur radio community because they were accustomed to constructing electronic systems and were becoming more and more interested in digital electronics (e.g., Morse code generators and decoders and teletype displays). In June 1974, Radio Electronics magazine published an article by Jonathan Titus on another 8008-based microcomputer called the Mark-8. As a result of this article, several user groups sprang up around the US to share information, a forerunner of the web. In January 1975 Popular Electronics magazine published one of the first articles on microcomputer design by Ed Roberts, the owner of a small company called MITS (based in Albuquerque, NM). MITS was a calculator company going through difficult times, and Roberts was gambling on the success of his 8080-based microcomputer kit, which sold for $395 and included 256 bytes of random access memory. Ed Roberts's microcomputer was called the Altair 8800 (I heard that the writer asked his daughter what the computer in Star Trek was called and she said "Computer". So he asked where the Enterprise was heading. She said "Altair."). The Altair was programmed from a row of switches on the front panel: you had to enter a program bit by bit, an operation reminiscent of the days of the first mainframes. Although the Altair was intended for hobbyists, it had a significant impact on the market and sold 2,000 kits in its first year [Garetz85]. It increased the number of people using microprocessors and helped to encourage the early development of microprocessor software. Moreover, the Altair had a bus, the so-called S-100 bus, that could be used to connect peripheral cards to the computer. At least it would, once peripherals had been invented; the first Altair was a bit like the first telephone, there was no one to call. The S-100 bus was not designed by a team of specialists, and yet it was eventually granted an IEEE standard (IEEE 696). This was a case of an industry being set up by small groups of entirely ordinary people (as opposed to large organizations). Indeed, you could say that the Altair was to have another effect on the development of the computer industry: one of the first people to write programs for the Altair was a young man called Bill Gates, who created a BASIC interpreter for it in July 1975.

The very early microprocessors were expensive. Mark Garetz, in an article in Byte in 1985 [Garetz85], describes a conversation he had with an Intel spokesman in 1975 who told him that the cost of a microprocessor would never go below $100. On the same day, Garetz was able to buy a 6502 for $25 at the WESCON conference. With prices at this level, enthusiasts were able to build their own microcomputers, and lots of "personal computers" sprang up during this time. Some were based on the 6502, some on the 8080, some on the Z80, and some on the 6800. The very first systems were all aimed at the electronics enthusiast because you had to assemble them from a kit of parts. I built the 6800 equivalent of the MITS Altair 8800, the SWTP 6800. (This is still in my garage because I can't bear to throw away my own little piece of computer history.) Typical machines of the early 8-bit era were the Apple I and Apple II, the KIM-1, the Commodore PET, the Sinclair ZX80, and the VIC-20. These created a generation of computer programmers and hardware experts.

The 6502-based KIM-1 microcomputer had 2 Kbytes of ROM holding a primitive operating system, 1 Kbyte of RAM, a hexadecimal keypad, and an LED display; it used a domestic cassette recorder to store programs. In the late 1970s I was teaching the principles of microprocessors to students with the aid of a KIM-1. However, in just a few years the IBM PC was to appear, and all these first-generation computers would be swept away to leave only the IBM PC and Apple's Mac in the ring. At the time I was surprised that no large organization seemed to want to jump on the personal computer bandwagon. Tredennick stated that there was a simple reason for this phenomenon: microprocessors were designed as controllers in embedded systems such as calculators and cash registers, and the personal computer market in the 1970s represented, to a first approximation, zero percent of a manufacturer's chip sales [Tredennick96]. In March 1976, Steve Wozniak and Steve Jobs designed their own 6502-based computer, which they called the Apple I; they had both been influenced by the Altair 8800. A year later, in 1977, they created the Apple II with 16 Kbytes of ROM, 4 Kbytes of RAM, a color display, and a keyboard. Although unsophisticated, this was the first practical personal computer. The next development was unanticipated: the writing of the first spreadsheet, VisiCalc, in 1978. Although spreadsheets are common today, they did not exist before the personal computer. The spreadsheet enabled the personal computer to jump from the enthusiast-driven market to the business market. In turn, this created a demand for other business software such as the word processor. We now look at the development of modern architectures, the 16/32-bit second-generation CISC chips and the RISC chips that followed. We will return to the personal computer later.

16.6 The CISC Comes of Age


When I first moved from a department of Electronic Engineering to a department of Computer Science in 1977, I was rather surprised. Electronic engineers loved the microprocessor. Computer scientists seemed to hate it. One of my colleagues even called it "the last resort of the incompetent". Every time I got excited by a new development in microprocessor
architecture, another colleague said sniffily, "The Burroughs B6600 had that feature ten years ago." The hostility shown by some computer scientists to the microprocessor was inevitable. By the mid-1970s the mainframe von Neumann computer had reached a high degree of sophistication with virtual memory, advanced operating systems, 32- and 64-bit word lengths and, in some cases, an architecture close to a high-level language. Computer scientists regarded the new microprocessor as little more than a low-cost logic element. The 8-bit microprocessors did not display the characteristics that computer scientists had come to expect (e.g., the ability to manipulate complex data structures, or to implement virtual memory systems). However, electrical engineers did not share the skepticism of their colleagues in computer science and were delighted with a device they could use to build powerful low-cost controllers. Even more, they were overjoyed to get back the device they had invented and which they felt had been hijacked by computer scientists and their priesthood. Although we are now surrounded by a veritable sea of computers, I can remember the time when I was an undergraduate and the university's computer was located in a special building and your only contact with it was by speaking to the operators through a glass partition.

As time passed and microprocessor technology improved, it became possible to put more and more transistors on larger and larger chips of silicon. Microprocessors of the early 1980s were not only more powerful than their predecessors in terms of the speed at which they could execute instructions, they were more sophisticated in terms of the facilities they offered. For example, they supported complex addressing modes, data structures, and memory management.

The first mass-produced 16-bit microprocessor was the Texas Instruments TMS9900, a single-chip version of TI's 990 minicomputer. Sadly, this device did not thrive. An interesting feature of the 9900 was its lack of on-chip user registers: its 16 general-purpose registers were kept in memory and located by a single on-chip workspace pointer (only the program counter, status register, and workspace pointer were held on the chip). Although locating registers in memory is intrinsically slower than putting them on the chip alongside the CPU, all you have to do to change the entire register set is to modify the workspace pointer. This feature made the TMS9900 well suited to interrupt processing and exception handling. Although the TMS9900 was arguably superior to the 8086, it was not well supported by TI (I remember that well because some of my colleagues were using it) and it never achieved the sales it deserved. Other 16-bit processors that did not make it were the National Semiconductor PACE and the General Instrument CP1600.

The two 16-bit microprocessors that were successful were Intel's 8086 and Motorola's 68000. Intel took the core of their 8080 microprocessor and converted it from an 8-bit into a 16-bit machine, the 8086. In retrospect, this may have been a good commercial decision (in the light of Intel's share of the microprocessor market). However, Intel's rush to create a mass-market 16-bit machine led to an architecture that left a lot to be desired and left the world with the "640K limit" on DOS-based programs on IBM's personal computers. When moving from 8 bits to 16 bits, you have to deal with several issues. First, the increased size of the minimum instruction allows you to have more on-chip registers. Second, you have to decide what instructions and addressing modes to implement. Finally, you have to decide how to expand the address space beyond 16 bits. The 8086 had a superset of the 8080's registers; that is, all user-accessible 8080 registers were carried over to the 8086. However, the 8086 remained, essentially, a machine with a 16-bit address capable of accessing only 64 Kbytes of memory at a time. The 8080's 16-bit addresses were expanded to 20 bits by adding a 16-bit offset to a segment register value shifted left by four bits; at any instant the 8086 could directly access only the four 64-Kbyte segments selected by its segment registers. The total address space was 2^20 bytes = 1 Mbyte. When the 8086 was adopted by the IBM PC, 384 Kbytes were allocated to the operating system and video display memory, and the remaining 1 Mbyte - 384 Kbytes = 640 Kbytes were devoted to user programs (hence the infamous 640-Kbyte limit). We will look in more detail at the development of Intel's 80x86 family when we cover the IBM PC.
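
The arithmetic of this segmented addressing is easy to see in code. The following C function is a minimal sketch of the real-mode address calculation just described (the example segment:offset pairs are purely illustrative):

#include <stdint.h>
#include <stdio.h>

/* 8086 real-mode address formation: the 16-bit segment register is shifted
   left four bits and added to a 16-bit offset, giving a 20-bit physical
   address in the range 0 to 0xFFFFF (1 Mbyte). */
static uint32_t phys_addr(uint16_t segment, uint16_t offset)
{
    return (((uint32_t)segment << 4) + offset) & 0xFFFFF;  /* wrap to 20 bits */
}

int main(void)
{
    /* B800:0000 is the traditional start of the PC's color text-mode video
       RAM, i.e., physical address 0xB8000, inside the 384 Kbytes reserved
       above the 640-Kbyte user area. The second pair is arbitrary. */
    printf("B800:0000 -> %05X\n", (unsigned)phys_addr(0xB800, 0x0000));
    printf("1234:0010 -> %05X\n", (unsigned)phys_addr(0x1234, 0x0010));
    return 0;
}

Because the segment value is shifted by only four bits, segments overlap every 16 bytes and many different segment:offset pairs map to the same physical byte, one of several quirks that later x86 processors had to carry forward for compatibility.
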
Motorola did not extend their 8-bit 6800 to create a 16-bit processor. Instead, they started again and did not attempt to achieve either object or source code compatibility with earlier processors. By beginning with a clean slate, Motorola was able to create a microprocessor with an exceptionally clean architecture in 1979. We have already seen that the 68000 had 8 general-purpose address registers and 8 general-purpose data registers (unlike the 8086's complex dedicated register structure). The term "orthogonal" was used to describe the 68000's architecture because an addressing mode that could be used with one operation could also be used with a similar operation in the same class (mostly). The 68000 was one of the first microprocessors to use microcoding to define its instruction set; the earlier microprocessors had random-logic instruction decoding and control units.

Ironically, the 68000 was not really a 16-bit microprocessor, because it had a 32-bit architecture. Address and data registers were 32 bits wide and 32-bit operations were supported. Addresses were 32 bits wide, and segmentation was not necessary. The 68000 itself had only 24 address pins, so only 2^24 bytes = 16 Mbytes of external memory could be directly accessed (even though all addresses were processed as 32 bits within the chip). Perhaps because the competition consisted of 16-bit processors, Motorola's advertising people never sold the 68000 as a 32-bit chip. At the time, the only other microprocessor in the same class was Zilog's Z8000, which had been introduced not long after the 8086. Although nominally a 16-bit processor, the Z8000's first eight registers could be used as sixteen 8-bit registers, or its 16 registers could be used as eight 32-bit registers or as four 64-bit registers. The Z8000 provided a 32-bit multiplication instruction with a 64-bit product (the 68000 permitted only 16 × 16 bits to give a 32-bit product). More importantly, the Z8000 had user and supervisor operating modes that provided the operating system with a protection mechanism, just like the 68000. Due to early bugs in the Z8000's design (it had a random-logic control unit), this processor never proved successful. Zilog went on to develop a successor, the Z80000, but by then it was too late.

Several manufacturers adopted the 68000: Apple for the Macintosh, and the makers of the Atari and Amiga computers. All three of these computers were regarded as technically competent and had large and enthusiastic followings. Alas, the Macintosh was sold as a relatively high-priced black box from a single source, and the Atari and Amiga computers had the image of the enthusiast or hobbyist. Indeed, the Commodore Amiga of 1985 had many of the hallmarks of a multimedia machine but was derided as a games machine because few then grasped the importance of advanced graphics and high-quality sound. Its operating system employed preemptive multitasking and a GUI [Halfhill94].

A few 68000-based computers were also developed for the then-specialist UNIX market. The popularity of the PC and competition in that market led to ever cheaper hardware. In turn, this led to the growth of the PC's software base. Although the 68000 was developed into the 68020, 68030, 68040, and 68060, this family ceased to be a major contender in the personal computer world. Versions of this family were developed for the embedded processor market, and Motorola continued to play a major role in this area.

In the early 1980s semiconductor technology did not permit much more than the basic CPU on a chip. Advanced features such as floating-point processors and memory management units had to be located on separate so-called coprocessor chips that were tightly coupled to the CPU. Motorola produced both an MMU and a floating-point coprocessor for its 68000 series. In 1984 Motorola introduced the 68020, which expanded the 68000's architecture to include bit field instructions and complex memory-indirect addressing modes. A small 256-byte instruction cache was also included on the chip, and some of the 68000's limitations preventing it from supporting true virtual memory were removed. I've always thought that the 68020 represents the high point of CISC architectures. Later members of the 68K family concentrated on performance and did not add to the basic architecture; indeed, the 68060 removed some of the 68020's instructions and resorts to emulating them in software. Motorola's 68030, launched in 1987, was a 68020 with a larger cache and an on-chip memory management unit. In 1989 Motorola introduced the 68040, incorporating a six-stage pipeline to improve the performance of the underlying 68020 architecture. The 68040 also brought the floating-point coprocessor on-chip. It is interesting that Motorola's 68K family not only shares architectural features with IBM's System/360 computers, it shares features of its development cycle: the 68000 was developed from a basic but powerful computer into a computer with floating-point facilities, virtual memory, and on-chip cache over a decade or so.

Intel's share of the PC market ensured that it would remain heavily committed to providing a continuing upgrade path for its old 80x86 family. In 1995 Motorola introduced its ColdFire line of processors; these are, essentially, based on the 68K architecture and are intended for embedded applications.

16.7 The RISC Challenge


We have already looked at the RISC architecture in some detail and we will not go over it again. The RISC processor is included in this section for two reasons. First, the RISC represented a major change in direction in the computer architecture world when it first appeared, and many observers expected the RISC processor to sweep away all CISC processors of the 8086 and 68000 varieties. The second reason is that it didn't sweep away these CISC families. The accepted definition of RISC is "reduced instruction set computer", although I have encountered the notion that the term "regular instruction set computer" would be more appropriate. In practice, there is no such thing as a pure RISC processor; the term RISC simply describes a general historic trend in computer architecture that stresses speed and simplicity over complexity. The characteristics of processors described as RISC are a register-to-register architecture, a regular instruction set, a pipelined organization, and one cycle per instruction.

The origins of RISC go back to John Cocke at the IBM research center in Yorktown Heights, NY, in the mid-1970s, when IBM was working on the 801 project in an attempt to improve the cost/performance ratio of computers. IBM later used the experience gained in the 801 project when it developed its RT PC system for small workstations in engineering and scientific applications. The RT chip had some of the characteristics of RISC architectures but was fabricated in relatively old MOS technology and clocked at only 6 MHz. Consequently, the RT chip was not a commercial success, although it laid the foundations for the more successful PowerPC. It was the work carried out by David Patterson at the University of California at Berkeley in the early 1980s that brought the RISC philosophy to a wider audience.

Patterson was also responsible for coining the term "RISC" in a paper he wrote in 1980. The Berkeley RISC is an interesting processor for many reasons. Although it was constructed at a university (like many of the first mainframes such as EDSAC), the Berkeley RISC required only a tiny fraction of the resources consumed by those early machines. Indeed, the Berkeley RISC is hardly more than an extended graduate project. It took about a year to design and fabricate the RISC I in silicon. By 1983 the Berkeley RISC II had been produced, and that proved to be both a testing ground for RISC ideas and the start of a new industry. The Berkeley RISC was later converted into a commercial product, the SPARC architecture, which is one of the few open architectures in the microprocessor world. An architecture is open if more than one manufacturer can produce it (this is not the same as second-sourcing, where a company licenses another company to manufacture its product). By 1986 about ten companies were marketing processors described as RISCs. RISC architectures were quite controversial in the 1980s, partially because the large processor manufacturers were being told that they had been getting it wrong for the last few years and partially because a lot of the impetus for RISC came out of universities rather than industry.

It is important to emphasize that at least two developments in semiconductor technology made RISC possible. The first was the development of low-cost, high-speed memory. When memory is very expensive, complex instructions make great sense: a complex instruction allows you to do a lot of computing for very little stored code. Consider the 68020's instruction BFFFO ([12,A0],D1*4,8),{D4:D5},D0. This remarkably powerful (and complex) instruction generates an effective address by adding 12 to A0, reading the pointer at that address, and then adding 4 times D1 plus 8 to the pointer. The byte at this effective address provides the starting point for a bit field whose length is given by D5 and whose location is D4 bits on from bit 7 of the byte at the effective address. Finally, the processor scans the specified bit field and returns the location of the first "1" in the bit field in D0. If there's a more complex CISC instruction than this, I'd be interested to see what it is.
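
To make that sequence of operations concrete, here is a rough C sketch of the behaviour just described. Memory is modelled as a plain byte array, the register values are passed in as ordinary parameters, and corner cases (negative field offsets, zero widths, wrap-around) are ignored, so this is an illustration of the addressing and the bit-field scan rather than a faithful 68020 emulation.

#include <stdint.h>

/* Read a 32-bit big-endian pointer from the modelled memory. */
static uint32_t mem_read32(const uint8_t *mem, uint32_t addr)
{
    return ((uint32_t)mem[addr] << 24) | ((uint32_t)mem[addr + 1] << 16) |
           ((uint32_t)mem[addr + 2] << 8) | (uint32_t)mem[addr + 3];
}

/* BFFFO ([12,A0],D1*4,8),{D4:D5},D0 -- returns the value that ends up in D0. */
uint32_t bfffo_sketch(const uint8_t *mem, uint32_t a0,
                      uint32_t d1, uint32_t d4, uint32_t d5)
{
    /* Memory-indirect effective address: fetch the pointer stored at A0+12,
       then add the scaled index D1*4 and the outer displacement 8. */
    uint32_t ea = mem_read32(mem, a0 + 12) + d1 * 4 + 8;

    /* The bit field starts D4 bits on from bit 7 (the MSB) of the byte at ea
       and is D5 bits long; bits are counted from the most significant end. */
    for (uint32_t i = 0; i < d5; i++) {
        uint32_t bit = d4 + i;
        if (mem[ea + bit / 8] & (0x80u >> (bit % 8)))
            return d4 + i;              /* offset of the first 1 goes to D0 */
    }
    return d4 + d5;                     /* no 1 found: offset + width */
}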

This CISC instruction does a massive amount of computation and is specified in relatively few bits. Carrying out the same operation with simple instructions would require a much larger instruction space and hence increase system cost if memory is very expensive.

The second development that made RISC architectures possible was the increase in bus widths. An 8-bit architecture can optimize instruction space by implementing 8-bit, 16-bit, and 24-bit instructions. Since RISC processors have register-to-register architectures, their instructions require at least log2(number of opcodes) + 3 × log2(number of registers) bits. If we assume a system with 32 registers and 256 instructions, the minimum instruction size is log2 256 + 3 × log2 32 = 8 + 15 = 23 bits.

Another early 32-bit RISC project was led by John Hennessy at Stanford University; the processor it produced was called MIPS (microprocessor without interlocked pipeline stages). In the UK, Acorn Computers (now ARM Ltd) designed the 32-bit ARM processor that we covered earlier in this book. Unlike the Berkeley RISC and MIPS, the ARM processor has only 16 registers and uses the instruction space freed up by requiring only 3 × 4 = 12 register-address bits to provide a flexible instruction set that allows every instruction to be conditionally executed.
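
As a concrete illustration of this arithmetic, the following C sketch packs and unpacks a hypothetical 32-bit register-to-register instruction with an 8-bit opcode and three 5-bit register fields. The field layout is invented for this example and does not correspond to any real instruction set; real RISC encodings arrange their fields differently and use the spare bits for immediate operands, shift amounts, or, in the ARM's case, the condition field mentioned above.

#include <assert.h>
#include <stdint.h>

/* A made-up fixed-width encoding: 8 bits of opcode plus three 5-bit register
   numbers uses 23 of the 32 bits, leaving 9 bits spare. */
typedef struct {
    uint8_t opcode;              /* 0..255 (8 bits)      */
    uint8_t rd, rs1, rs2;        /* 0..31  (5 bits each) */
} RegInstr;

uint32_t encode(RegInstr in)
{
    assert(in.rd < 32 && in.rs1 < 32 && in.rs2 < 32);
    return ((uint32_t)in.opcode << 24) |   /* bits 31..24            */
           ((uint32_t)in.rd     << 19) |   /* bits 23..19            */
           ((uint32_t)in.rs1    << 14) |   /* bits 18..14            */
           ((uint32_t)in.rs2    <<  9);    /* bits 13..9; 8..0 spare */
}

RegInstr decode(uint32_t word)
{
    RegInstr in = {
        .opcode = (uint8_t)(word >> 24),
        .rd     = (word >> 19) & 0x1F,
        .rs1    = (word >> 14) & 0x1F,
        .rs2    = (word >>  9) & 0x1F,
    };
    return in;
}

We now describe two important second-generation RISC processors: the DEC Alpha and the PowerPC.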

The DEC Alpha Having dominated the minicomputer market with the PDP-11 and the VAX series, Digital set up a group to investigate how the VAX customer base could be preserved in the 1990s and beyond; the result was the Alpha. According to a special edition of Communications of the ACM devoted to the Alpha architecture (Vol. 36, No. 2, February 1993), the Alpha was the largest engineering project in Digital's history, spanning more than 30 engineering groups in 10 countries. The group decided that a RISC architecture was necessary (hardly surprising in 1988) and that its address space should break the 32-bit address barrier. Unlike some of the companies that had developed earlier microprocessor families, DEC adopted a radical approach to microprocessor design: they thought about what they wanted to achieve before they started making silicon. As in the case of IBM's System/360, Digital decoupled architecture from organization in order to create a family of devices with a common architecture but different organizations (they had already done this with the PDP-11 and VAX architectures). Apart from performance and a life span of up to 25 years, DEC's primary goals for the Alpha were the ability to run the OpenVMS and Unix operating systems and to provide "an easy migration path" from the VAX and MIPS customer bases [Sites93]. Digital was far-sighted enough to think about how the advances that had increased processor performance by a factor of 1,000 in the past two decades might continue in the future. That is, Digital thought about the future changes that might increase the Alpha's performance by a factor of 1,000 and allowed for them in the architecture. In particular, Digital embraced the superscalar philosophy with its multiple instruction issue. Moreover, the Alpha's architecture was specifically designed to support multiprocessing systems.

The Alpha has a linear 64-bit virtual address space, and address segmentation is not used. Similarly, the Alpha's registers and data are all 64 bits wide. Digital did, however, make a significant compromise in the organization of the Alpha's register file: instead of providing a single set of general-purpose registers, the Alpha has separate integer and floating-point registers. Separating integer and floating-point registers simplified the construction (i.e., the organization) of the chip set. Each register set contains 32 registers. Sites [Sites93] discusses the decision to provide 32 registers and concludes that more registers would increase chip complexity without significantly increasing performance (in any case, adding more registers increases the time taken to perform a context switch). Because the Alpha architecture was designed to support multiple instruction issue and pipelining, it was decided to abandon the traditional condition code register; branch instructions test an explicit register instead. If a single CCR had been implemented, there would be significant ambiguity over which CCR was being tested in a superscalar environment. Digital's Alpha project is an important milestone in the history of computer architecture because it represents a well-thought-out road map stretching up to 25 years into the future.

The PowerPC

The PowerPC was the result of a joint effort between IBM, Motorola, and Apple. Essentially, IBM provided the architecture, Motorola fabricated the chip, and Apple used it in its range of personal computers. IBM was the first company to incorporate RISC ideas in a commercial machine, the 801 minicomputer. The 801 implemented some of the characteristics of RISC architectures, and its success led IBM to develop more powerful RISC architectures. IBM created its POWER architecture for use in its RS/6000 series workstations. The POWER architecture had RISC features and superscalar instruction issue, but retained some traditional CISC features such as complex bit-manipulation instructions. Furthermore, POWER also provided single-instruction multiple-register transfers between the register file and memory. A consortium of IBM, Motorola, and Apple engineers took the POWER architecture and developed it into the PowerPC family of microprocessors. As in the case of Digital's Alpha architecture, the PowerPC was designed to allow for future growth, and a clear distinction was made between architecture and implementation. The POWER architecture was somewhat simplified, and any architectural features that stood in the way of superscalar dispatch and out-of-order execution were removed. The architecture was also extended to provide a 64-bit superset. The first member of the PowerPC architecture was the 601, which was originally designed by IBM and modified by Motorola to include some of the facilities of Motorola's own RISC device, the 88110. Some of the later members of the PowerPC family were the 603 (a low-cost, low-power processor aimed at the personal computer and laptop market), the 604 (a high-performance version aimed at both personal computers and workstations), and the 620 (a 64-bit version aimed at the server and workstation markets).

We are now going to return to a more detailed discussion of the most successful of the personal computers, the IBM PC. This machine was to link together the fates of three organizations: IBM, Intel, and Microsoft.

16.8 The Growth of PC-compatible Architectures and the Rise of WinTel

The first serious personal computer was the 6502-based Apple II, introduced in 1977. The success of the Apple II demonstrated that there was a potential mass market for personal computers. The development of microprocessor architectures since 1981 has been as much affected by commercial considerations as by improvements in technology because of the inexorable rise of the IBM PC. I have always felt that the PC owes an immense debt of gratitude to Apple for failing to capitalize on their 68000-based Macintosh, which so many people once regarded as superior to the PC. Apple's commercial failure demonstrates that anyone in the semiconductor industry must realize that commercial factors are every bit as important as architectural excellence and performance. In short (and with hindsight), IBM adopted open standards. Because IBM made the PC non-proprietary, anyone could build a copy or clone of it. Hundreds of manufacturers started producing parts of PCs, and an entire industry sprang up. You could buy a basic system from one place, a hard disk from another, and a graphics card from yet another supplier. Because IBM published standards for the PC's bus, anyone could create a peripheral for the PC. What IBM lost in the form of increased competition, it more than made up for in the rapidly expanding market. Finally, IBM's open standard provided an incentive for software writers to generate software for the rapidly expanding PC market.

Apple favored a very different approach. They marketed the computer. And the operating system. And the peripherals. Apple refused to publish detailed hardware specifications or to license their BIOS and operating system [Tredennick96]. Apple had the better processor and an operating system that was regarded as more user friendly. In particular, non-computer specialists loved the Apple because they found it so easy to use in contrast with the PC. Even though you could buy peripherals for the PC from many sources, getting, say, a video card to work with the operating system could sometimes prove very difficult. As time passed, the sheer volume of PCs and their interfaces (plus the software base) pushed PC prices down and down, and the Apple was perceived as overpriced. Even though Apple adopted the PowerPC, it was too late, and Apple's role in the personal computer world was marginalized. By the mid-1990s it was possible to joke, "Question: What do you get if you merge Apple with IBM? Answer: IBM."

Tredennick points out that the fundamental problem facing anyone who wishes to break into the PC field is amortization [Tredennick96]. A manufacturer has to recover its development costs over its future sales. Intel can spread the cost of a processor like the Pentium over, say, 50 million units. In other words, the amortization cost of each processor is just a few dollars and is well below its manufacturing cost. On the other hand, a competitor attempting to break into the market will have to amortize development costs over a much smaller number of processors.

MS-DOS and Windows Although the bulk of this book has been about the basic principles of computer architecture, issues far removed from "performance" or "architectural suitability" affect the practice of computer architecture. One of these influences is the operating system, which, in the PC world, has become dominated by Microsoft's Windows. Users interact with operating systems in one of two ways: via a command language, as with UNIX or MS-DOS, or via a graphical user interface, as with Windows. The user interface is, of course, only a means of communication between the human and the operating system; the underlying operating system that manages files and switches between tasks is not directly affected by the user interface. Here we take a brief look at two user interfaces: the command-line-driven MS-DOS and the Windows graphical user interface.

Operating systems have been around for a long time and their history is as fascinating as that of the processor architectures themselves. In the early days of the computer, when all machines were expensive mainframes, the major computer companies designed operating systems to run on their own machines. One of the first operating systems that could be used on a variety of different computers was UNIX, which was designed by Ken Thompson and Dennis Ritchie at Bell Labs. UNIX was written in C, a systems programming language designed by Ritchie. Originally intended to run on DEC's primitive PDP-7 minicomputer, UNIX was later rewritten for the popular PDP-11. This proved to be an important move because, in the 1970s, most university computer science departments used PDP-11s. Bell Labs was able to license UNIX for a nominal fee, and, therefore, UNIX rapidly became the standard operating system in the academic world.

UNIX is a very powerful, flexible, interactive, timesharing operating system that was designed by programmers for programmers. What does that mean? If I said that laws are written for lawyers, I think that a picture might be forming in your mind. UNIX is user friendly in the way that a brick is an aerodynamically efficient structure. However, UNIX is probably the most widely used operating system in many universities; this is what Andrew Tanenbaum had to say on the subject: "While this kind of user interface [a user friendly system] may be suitable for novices, it tends to annoy skilled programmers. What they want is a servant, not a nanny." UNIX is a powerful and popular operating system because it operates in a consistent and elegant way. When a user logs in to a UNIX system, a program called the shell interprets the user's commands. The commands of UNIX take a little getting used to because they are heavily abbreviated and the abbreviations are not always what you might expect. Because of UNIX's immense popularity in the academic world, it influenced the thinking of a generation of programmers and systems designers.

The first truly popular command-line operating system was MS-DOS, because it was designed to run on IBM's PC. In 1980 IBM commissioned Bill Gates to produce an operating system for their new PC. Bill Gates was known to IBM because he had written a version of the language BASIC for the Intel 8080-based Altair personal computer. Because IBM's original PC had only 64 Kbytes of RAM and no hard disk, a powerful operating system like UNIX could not be supported. Gates didn't have time to develop an entirely new operating system, so his company, Microsoft, bought an operating system called 86-DOS. This product was modified and renamed MS-DOS (Microsoft Disk Operating System). Version 1.0 of MS-DOS, which was released in 1981, occupied 12 Kbytes of memory and supported only a 160-Kbyte 5¼-inch diskette. MS-DOS performed all its input and output transactions by calling routines in a read-only memory within the PC; these I/O routines are called the PC's BIOS (basic input/output system). MS-DOS 1.0 also included a command processor, command.com, analogous to UNIX's shell. Over the years Microsoft refined MS-DOS to take advantage of the improved hardware of later generations of PCs. MS-DOS 1.0 begat version 1.1, which begat version 2.0, and so on. After much begetting, which made Bill Gates one of the richest men in the world, MS-DOS reached version 6.2 in 1994. New versions of MS-DOS are eagerly awaited, and many PC users purchase the updates as soon as they are released. With so many versions of MS-DOS sold in such a short time, I would not be surprised if someone made the comment, "You don't buy MS-DOS; you rent it".

MS-DOS shares many of the features of UNIX but it lacks UNIX's consistency. More importantly, the pressure to maintain backward compatibility with older versions of PCs and PC software has meant that MS-DOS cannot handle programs larger than 640 Kbytes; MS-DOS was designed for the 8086 microprocessor, which could address only 1 Mbyte of address space. Unlike UNIX, MS-DOS was not designed as a timesharing system and, therefore, there is no logon procedure. In other words, MS-DOS has no security mechanism and the user can do anything he or she wants. UNIX, in contrast, has a superuser who is protected by a password and who has special privileges; the superuser is able to configure and maintain the operating system. An MS-DOS file name is restricted to eight characters (UNIX file names can be up to 255 characters). Both UNIX and MS-DOS allow the type of a file to be described by an extension after the filename; for example, the MS-DOS file test.exe indicates a file called test that is in the form of an executable program. Neither UNIX nor MS-DOS enforces the way in which file extensions are used; you can give a file any extension you want.

MS-DOS can be configured or tailored to the specific system on which it runs. When MS-DOS is first loaded into memory, two files called CONFIG.SYS and AUTOEXEC.BAT are automatically processed. These files set up an environment to suit the structure of the computer; they tell the operating system where to find device drivers for the video display, the mouse, the printer, the sound card, and so on. The advantage of MS-DOS's configuration files is that they allow the system to be tailored to suit the actual software and hardware environment. On the other hand, every time you install a new package, there is a good chance that it will modify the two configuration files (more if it runs under Windows). After a time, the configuration files become very difficult to understand. Even worse, when you delete an application, the changes it made to the configuration files are left behind.

Many believe that one of the most important factors in encouraging the expansion of computers into non-traditional environments has been the development of intuitive, user-friendly interfaces. Like UNIX, MS-DOS is, essentially, a programmer's tool. The need to make computers accessible to those who want to employ them simply as a tool forced the development of the GUI, or graphical user interface, of which Windows is the best-known example. This type of graphical operating system interface is good because many users can access operating system functions or use applications programs without ever reading the user's manual. However, some programmers don't like the GUI environment because they feel that the traditional command language is much more efficient and concise. In many ways this criticism of the Windows environment is valid. Microsoft's Windows (at least up to version 3.11) is not an operating system; it is a front end that sits between the user and the underlying operating system (in this case, versions of MS-DOS later than 3.0).

Development of the 80x86 Family Just as Windows has dominated the operating system market, Intel architectures have dominated the PC's hardware. The dominance of the Windows operating system and the Intel family in the PC market led to the coining of the term WinTel to describe the symbiotic relationship between the Microsoft Corporation and Intel. The relationship between Microsoft and Intel is not entirely symbiotic. Although Intel's chips and Microsoft's operating system form the foundations of the personal computer, each of these two organizations views the world from a different perspective. Microsoft is happy to see other semiconductor manufacturers produce clones of Intel processors: if chip prices are driven down, more people can buy computers running Microsoft's software. Similarly, if the freely available Linux operating system becomes more widely used, Intel can sell more chips for "Linux boxes". Due to the inherent complexity of the 80x86 processors, we can provide only an overview of their development here. In 1975 work started on the 16-bit 8086, which was to be compatible with the 8-bit 8080. Intel seemed to get the worst of both worlds when their 8086 carried over some of the 8080's primitive architecture and yet the 8086's object code was not compatible with the 8080's. However, because the two architectures were closely related, the automatic translation of 8080 code into 8086 code was not difficult.

Intel was the first company to introduce a coprocessor to augment the performance of its processors: it added the 8087 coprocessor to 8086-based systems to extend the processor's architecture to include floating-point operations. In order to simplify systems design, Intel also introduced a modification of the 8086 called the 8088, which had the same architecture as the 8086 but communicated with memory via an 8-bit bus. In other words, the 8088 provided 16-bit computing with 8-bit hardware complexity (figure 16.7). In 1981 IBM adopted Intel's 8088, clocked at 4.77 MHz, in its first PC.

Figure 16.7 Structure of the 8086

At the beginning of 1982 Intel introduced the first of many major upgrades to the 80x86 family, the 80286. The 80286 had a 16-bit data bus and initially operated at 6 MHz (its clock speed gradually rose to 20 MHz over its life span). As well as increasing its speed by using a faster clock than the 8086, the 80286 had a more efficient organization that increased throughput by a factor of about 5. This trend of increasing speed by improving a microprocessor's organization became a feature of the microprocessor industry. IBM's adoption of the 80286 for its PC AT architecture in 1984 ensured the continuing success of Intel's 80x86 line.

Figure 16.8 Structure of the 80286

The 80286 had 24 address lines, making it possible to access 2^24 bytes (i.e., 16 Mbytes) of memory (the same as the 68000). The microprocessor had a fairly crude on-chip memory management unit that could be used to support multitasking and virtual memory systems in which up to 1 Gbyte of virtual memory could be accessed (see figure 16.8). In order to make the 80286 compatible with software written for the 8086, the 80286 operated in two modes: real and protected. In the real mode, the 80286 looked like an 8086 and could address only 1 Mbyte. In the protected mode, the 80286 could access its entire 16-Mbyte memory space. However, because the protected mode still forced you to use segmentation with 64-Kbyte segments, and because it was impossible to switch from protected mode back to real mode, the protected mode was little used by programmers.

The 80286 was followed by the 80386 in 1985. This processor had 32-bit address and data buses. The 80386 represented a considerable step up from the 80286 and had improved memory management with three operating modes; the 80386's new operating mode, virtual 8086 mode, made it easier to run 8086 programs. In 1989 Intel introduced the 80486, a relatively modest step up from the 80386 with few architectural enhancements. The 80486 was Intel's first microprocessor to include on-chip floating-point arithmetic and a 4-way set-associative cache. Intel followed with the Pentium in 1993. This was Intel's 80586, but, because Intel couldn't trademark a number, they chose the name Pentium; it was launched at 60 MHz (rising to 166 MHz in later generations). The Pentium was architecturally largely the same as a 32-bit 80486 but had a 64-bit data bus, and a few new instructions were added to the 80486's instruction set. Performance was enhanced by using parallel processing and a 16-Kbyte cache memory divided into instruction and data sections.

Intel Clones Just as the IBM PC was copied by other manufacturers to create so-called PC clones, members of the 80x86 family have also been cloned by other semiconductor manufacturers. Note that it is not possible to directly copy a microprocessor, because that would infringe patents; it is, however, possible to make a functional copy of a microprocessor that simulates the original. Because microprocessors are so complex, it is impossible to verify that an 80x86 clone is exactly functionally equivalent to an actual 80x86, and for this reason some users have been reluctant to use 80x86 clones. The first Intel clone was the Nx586, produced by NexGen (a company later taken over by AMD). This chip provided a level of performance similar to that of early Pentiums running at about 90 MHz but was sold at a considerably lower price. The Nx586 had several modern architectural features such as superscalar execution (with two integer execution units), pipelining, branch prediction logic, and separate data and instruction caches. At full speed, the Nx586 could execute two instructions per cycle. The Nx586 did not attempt to execute the Intel instruction set directly; instead, it translated Intel instructions into a simpler form and executed those.

Other clones were produced by AMD and Cyrix. The AMD K5 took a similar approach to NexGen's by translating Intel's variable-length instructions into fixed-length RISC instructions before executing them. AMD's next processor, the K6, built on NexGen's Nx586 experience by including two instruction pipelines fed by four instruction decoders. By early 1999 some of the clone manufacturers were attempting to improve on Intel's processors rather than just creating lower cost, functionally equivalent copies.

Summary
In this chapter we have provided a brief history of the development of the computer. As we have emphasized, the computer arose because there was a need for it and the technology (developed for the communications industry) already existed. The development of computers often occurred in cycles, with a first model followed by models of increasing sophistication. This process can be seen in IBM's System/360 and in the Motorola 680x0 and Intel 80x86 families. Another important factor in the history of computer development was standardization. IBM led the way with its System/360, which provided a stable platform for the development of software. However, a possible negative effect was the extension of the life span of inappropriate architectures in order to run old software. Nowhere is this more evident than in the case of Intel's 80x86 family, where features from the dark ages of the 8086 are carried through to Pentium processors and beyond.

Microprocessor
From Wikipedia, the free encyclopedia

Intel 4004, the first commercial microprocessor

A microprocessor incorporates the functions of a computer's central processing unit (CPU) on a single integrated circuit (IC),[1] or at most a few integrated circuits.[2] It is a multipurpose, programmable device that accepts digital data as input, processes it according to instructions stored in its memory, and provides results as output. It is an example of sequential digital logic, as it has internal memory. Microprocessors operate on numbers and symbols represented in the binary numeral system. The advent of low-cost computers on integrated circuits has transformed modern society. General-purpose microprocessors in personal computers are used for computation, text editing, multimedia display, and communication over the Internet. Many more microprocessors are part of embedded systems, providing digital control over myriad objects from appliances to automobiles to cellular phones and industrial process control.

Contents

1 Origins
2 Embedded applications
3 Structure
4 Firsts
  4.1 CADC
  4.2 Gilbert Hyatt
  4.3 TMS 1000
  4.4 Intel 4004
  4.5 Pico/General Instrument
  4.6 Four-Phase Systems AL1
5 8-bit designs
6 12-bit designs
7 16-bit designs
8 32-bit designs
9 64-bit designs in personal computers
10 Multicore designs
11 RISC
12 Special-purpose designs
13 Market statistics
14 See also
15 Notes
16 References
17 External links

Origins
During the 1960s, computer processors were constructed out of small and medium-scale ICs, each containing from tens to a few hundred transistors. These were placed and soldered onto printed circuit boards, and often multiple boards were interconnected in a chassis. The large number of discrete logic gates used more electrical power, and therefore produced more heat, than a more integrated design with fewer ICs. The distance that signals had to travel between ICs on the boards limited a computer's operating speed. In the NASA Apollo space missions to the moon in the 1960s and 1970s, all onboard computations for primary guidance, navigation and control were provided by a small custom processor called "The Apollo Guidance Computer". It used wire wrap circuit boards whose only logic elements were three-input NOR gates.[3]

The integration of a whole CPU onto a single chip or onto a few chips greatly reduced the cost of processing power. The integrated circuit processor was produced in large numbers by highly automated processes, so unit cost was low. Single-chip processors also increase reliability, as there are many fewer electrical connections to fail. As microprocessor designs get faster, the cost of manufacturing a chip (with smaller components built on a semiconductor chip the same size) generally stays the same. Microprocessors integrated, into one or a few large-scale ICs, the architectures that had previously been implemented using many medium- and small-scale integrated circuits. Continued increases in microprocessor capacity have rendered other forms of computers almost completely obsolete (see history of computing hardware), with one or more microprocessors used in everything from the smallest embedded systems and handheld devices to the largest mainframes and supercomputers.

The first microprocessors emerged in the early 1970s and were used for electronic calculators, using binary-coded decimal (BCD) arithmetic on 4-bit words. Other embedded uses of 4-bit and 8-bit microprocessors, such as terminals, printers, and various kinds of automation, followed soon after. Affordable 8-bit microprocessors with 16-bit addressing also led to the first general-purpose microcomputers from the mid-1970s on. Since the early 1970s, the increase in capacity of microprocessors has followed Moore's law; this originally suggested that the number of components that can be fitted onto a chip doubles every year. With present technology, it is actually every two years,[4] and as such Moore later changed the period to two years.[5]
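As a rough way of formalizing the doubling rule just described (an illustrative model, not a figure from the article), the transistor count N after t years, starting from an initial count N_0 and assuming a two-year doubling period, is approximately

    N(t) \approx N_0 \cdot 2^{t/2}

so over a decade a chip family would be expected to grow by a factor of about 2^5 = 32.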

Embedded applications
Thousands of items that were traditionally not computer-related include microprocessors. These include large and small household appliances, cars (and their accessory equipment units), car keys, tools and test instruments, toys, light switches/dimmers and electrical circuit breakers, smoke alarms, battery packs, and hi-fi audio/visual components (from DVD players to phonograph turntables). Such products as cellular telephones, DVD video systems and HDTV broadcast systems fundamentally require consumer devices with powerful, low-cost microprocessors. Increasingly stringent pollution control standards effectively require automobile manufacturers to use microprocessor engine management systems to allow optimal control of emissions over widely varying operating conditions of an automobile. Non-programmable controls would require complex, bulky, or costly implementation to achieve the results possible with a microprocessor. A microprocessor control program (embedded software) can be easily tailored to different needs of a product line, allowing upgrades in performance with minimal redesign of the product. Different features can be implemented in different models of a product line at negligible production cost. Microprocessor control of a system can provide control strategies that would be impractical to implement using electromechanical controls or purpose-built electronic controls. For example, an engine control system in an automobile can adjust ignition timing based on engine speed, load on the engine, ambient temperature, and any observed tendency for knocking, allowing an automobile to operate on a range of fuel grades.
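As a toy illustration of the kind of table-driven control strategy described above (the table values, band indices, and knock adjustment below are all invented for the example and bear no relation to any real engine controller), a few lines of C are enough to express a policy that would be awkward to build from electromechanical parts:

#include <stdio.h>

/* Hypothetical ignition-advance table in degrees, indexed by engine load
   band (rows) and RPM band (columns). The numbers are made up. */
static const int advance_deg[3][4] = {
    /* rpm band:   idle  low  mid  high */
    /* light  */ {  10,  18,  26,  30 },
    /* medium */ {   8,  15,  22,  26 },
    /* heavy  */ {   6,  12,  18,  22 },
};

static int ignition_advance(int load_band, int rpm_band, int knock_detected)
{
    int advance = advance_deg[load_band][rpm_band];
    if (knock_detected)
        advance -= 4;   /* retard the timing slightly if knock was observed */
    return advance;
}

int main(void)
{
    /* Medium load, mid RPM, knock sensed: 22 - 4 = 18 degrees of advance. */
    printf("ignition advance = %d degrees\n", ignition_advance(1, 2, 1));
    return 0;
}

Changing the policy is a matter of editing the table or the retard rule, which is the "tailored to different needs of a product line" point made above.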

Structure

A block diagram of the internal architecture of the Z80 microprocessor, showing the arithmetic and logic section, register file, control logic section, and buffers to external address and data lines.

The internal arrangement of a microprocessor varies depending on the age of the design and the intended purposes of the processor. The complexity of an integrated circuit is bounded by physical limitations on the number of transistors that can be put onto one chip, the number of package terminations that can connect the processor to other parts of the system, the number of interconnections it is possible to make on the chip, and the heat that the chip can dissipate. Advancing technology makes more complex and powerful chips feasible to manufacture.

A minimal hypothetical microprocessor might only include an arithmetic logic unit (ALU) and a control logic section. The ALU performs operations such as addition, subtraction, and operations such as AND or OR. Each operation of the ALU sets one or more flags in a status register, which indicate the results of the last operation (zero value, negative number, overflow, or others). The control logic section retrieves instruction operation codes from memory and initiates whatever sequence of operations the ALU requires to carry out the instruction. A single operation code might affect many individual data paths, registers, and other elements of the processor.

As integrated circuit technology advanced, it was feasible to manufacture more and more complex processors on a single chip. The size of data objects became larger; allowing more transistors on a chip allowed word sizes to increase from 4- and 8-bit words up to today's 64-bit words. Additional features were added to the processor architecture; more on-chip registers speeded up programs, and complex instructions could be used to make more compact programs. Floating-point arithmetic, for example, was often not available on 8-bit microprocessors, but had to be carried out in software. Integration of the floating-point unit, first as a separate integrated circuit and then as part of the same microprocessor chip, speeded up floating-point calculations.

Occasionally, physical limitations of integrated circuits made such practices as a bit-slice approach necessary. Instead of processing all of a long word on one integrated circuit, multiple circuits in parallel processed subsets of each data word. While this required extra logic to handle, for example, carry and overflow within each slice, the result was a system that could handle, say, 32-bit words using integrated circuits with a capacity for only 4 bits each.

With the ability to put large numbers of transistors on one chip, it becomes feasible to integrate memory on the same die as the processor. This CPU cache has the advantage of faster access than off-chip memory, and increases the processing speed of the system for many applications. Generally, processor speed has increased more rapidly than external memory speed, so cache memory is necessary if the processor is not to be delayed by slower external memory.
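Two of the ideas above, an ALU that sets status flags and the bit-slice technique of building a wide operation out of narrow slices, can be illustrated together. The C sketch below is a simplified software model only (it is not the logic of any real chip): it performs a 32-bit addition as eight 4-bit slices, passing a carry from slice to slice, and then derives the sort of zero/negative/carry flags a status register records.

#include <stdint.h>
#include <stdio.h>

/* One 4-bit "slice": add two 4-bit operands plus a carry-in, return the
   4-bit sum, and pass the carry-out along to the next slice. */
static uint8_t add_slice4(uint8_t a, uint8_t b, int carry_in, int *carry_out)
{
    unsigned sum = (a & 0xF) + (b & 0xF) + (carry_in ? 1 : 0);
    *carry_out = (sum > 0xF);
    return (uint8_t)(sum & 0xF);
}

int main(void)
{
    uint32_t a = 0x89ABCDEF, b = 0x76543211, result = 0;
    int carry = 0;

    for (int i = 0; i < 8; i++) {                  /* eight 4-bit slices */
        uint8_t sa = (a >> (4 * i)) & 0xF;
        uint8_t sb = (b >> (4 * i)) & 0xF;
        result |= (uint32_t)add_slice4(sa, sb, carry, &carry) << (4 * i);
    }

    /* Flags of the kind a status register would hold after the operation. */
    int zero_flag     = (result == 0);
    int negative_flag = (result >> 31) & 1;
    int carry_flag    = carry;

    printf("sum = 0x%08X  Z=%d N=%d C=%d\n",
           (unsigned)result, zero_flag, negative_flag, carry_flag);
    /* Here the 32-bit sum wraps around to zero with a carry out, so Z=1 and C=1. */
    return 0;
}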

Firsts
Three projects delivered a microprocessor at about the same time: Garrett AiResearch's Central Air Data Computer (CADC) (1968), Texas Instruments (TI) TMS 1000 (1971 September), and Intel's 4004 (1971 November).

CADC
For more details on this topic, see Central Air Data Computer.

In 1968, Garrett AiResearch (which employed designers Ray Holt and Steve Geller) was invited to produce a digital computer to compete with electromechanical systems then under development for the main flight control computer in the US Navy's new F-14 Tomcat fighter. The design was complete by 1970, and used a MOS-based chipset as the core CPU. The design was significantly (approximately 20 times) smaller and much more reliable than the mechanical systems it competed against, and was used in all of the early Tomcat models. This system contained "a 20-bit, pipelined, parallel multi-microprocessor". The Navy refused to allow publication of the design until 1997. For this reason the CADC, and the MP944 chipset it used, are fairly unknown.[6] Ray Holt graduated from California Polytechnic University in 1968, and began his computer design career with the CADC. From its inception, it was shrouded in secrecy until 1998 when, at Holt's request, the US Navy allowed the documents into the public domain. Since then there has been debate over whether this was the first microprocessor. Holt has stated that no one has compared this microprocessor with those that came later.[7] According to Parab et al. (2007), "The scientific papers and literature published around 1971 reveal that the MP944 digital processor used for the F-14 Tomcat aircraft of the US Navy qualifies as the first microprocessor. Although interesting, it was not a single-chip processor, as was not the Intel 4004; they both were more like a set of parallel building blocks you could use to make a general-purpose form. It contains a CPU, RAM, ROM, and two other support chips like the Intel 4004. Interesting it was made from the exact P-Channel technology and operated at Mil Spec's with larger chips. An excellent computer engineering design by any standards. Its design indicates a major advance over Intel and two year earlier. It actually worked and was flying in the F-14 when the Intel 4004 was announced. It indicates that today's industry theme of converging DSP-microcontroller architectures was started in 1971."[8] This convergence of DSP and microcontroller architectures is known as a Digital Signal Controller.[citation needed]

Gilbert Hyatt
Gilbert Hyatt was awarded a patent claiming an invention pre-dating both TI and Intel, describing a "microcontroller".[9] The patent was later invalidated, but not before substantial royalties were paid out.[10][11]

TMS 1000
The Smithsonian Institution says TI engineers Gary Boone and Michael Cochran succeeded in creating the first microcontroller (also called a microcomputer) and the first single-chip CPU in 1971. The result of their work was the TMS 1000, which went commercial in 1974.[12] TI stressed the 4-bit TMS 1000 for use in pre-programmed embedded applications, introducing a version called the TMS1802NC on September 17, 1971, that implemented a calculator on a chip.

TI filed for a patent on the microprocessor. Gary Boone was awarded U.S. Patent 3,757,306 for the single-chip microprocessor architecture on September 4, 1973. In 1971 and again in 1976, Intel and TI entered into broad patent cross-licensing agreements, with Intel paying royalties to TI for the microprocessor patent. A history of these events is contained in court documentation from a legal dispute between Cyrix and Intel, with TI as intervenor and owner of the microprocessor patent. A computer-on-a-chip combines the microprocessor core (CPU), memory, and I/O (input/output) lines onto one chip. The computer-on-a-chip patent, called the "microcomputer patent" at the time, U.S. Patent 4,074,351, was awarded to Gary Boone and Michael J. Cochran of TI. Aside from this patent, the standard meaning of microcomputer is a computer using one or more microprocessors as its CPU(s), while the concept defined in the patent is more akin to a microcontroller.

Intel 4004

The 4004 with cover removed (left) and as actually used (right)

Main article: Intel 4004

The Intel 4004 is generally regarded as the first commercially available microprocessor,[13][14] and cost $60.[15] The first known advertisement for the 4004 is dated November 15, 1971, and appeared in Electronic News.[16] The project that produced the 4004 originated in 1969, when Busicom, a Japanese calculator manufacturer, asked Intel to build a chipset for high-performance desktop calculators. Busicom's original design called for a programmable chip set consisting of seven different chips. Three of the chips were to make a special-purpose CPU with its program stored in ROM and its data stored in shift register read-write memory. Ted Hoff, the Intel engineer assigned to evaluate the project, believed the Busicom design could be simplified by using dynamic RAM storage for data, rather than shift register memory, and a more traditional general-purpose CPU architecture. Hoff came up with a four-chip architectural proposal: a ROM chip for storing the programs, a dynamic RAM chip for storing data, a simple I/O device and a 4-bit central processing unit (CPU). Although not a chip designer, he felt the CPU could be integrated into a single chip, but as he lacked the technical know-how the idea remained just a wish for the time being. While the architecture and specifications of the MCS-4 came from the interaction of Hoff with Stanley Mazor, a software engineer reporting to him, and with Busicom engineer Masatoshi Shima, during 1969, Mazor and Hoff moved on to other projects. In April 1970, Intel hired Italian-born engineer Federico Faggin as project leader, a move that ultimately made the single-chip CPU final design a reality (Shima instead designed the Busicom calculator firmware and assisted Faggin during the first six months of the implementation). Faggin, who originally developed the silicon gate technology (SGT) in 1968 at Fairchild Semiconductor[17] and designed
the world's first commercial integrated circuit using SGT, the Fairchild 3708, had the correct background to lead the project into what would become the first commercial general-purpose microprocessor, since it was his very own invention, SGT, in addition to his new methodology for random logic design, that made it possible to implement a single-chip CPU with the proper speed, power dissipation and cost. The manager of Intel's MOS Design Department at the time of the MCS-4 development was Leslie L. Vadasz, but Vadasz's attention was completely focused on the mainstream business of semiconductor memories, and he left the leadership and the management of the MCS-4 project to Faggin, who was ultimately responsible for leading the 4004 project to its realization. Production units of the 4004 were first delivered to Busicom in March 1971 and shipped to other customers in late 1971.

Pico/General Instrument

The PICO1/GI250 chip, introduced in 1971. It was designed by Pico Electronics (Glenrothes, Scotland) and manufactured by General Instrument of Hicksville, NY.

In 1971 Pico Electronics[18] and General Instrument (GI) introduced their first collaboration in ICs, a complete single-chip calculator IC for the Monroe/Litton Royal Digital III calculator. This chip could also arguably lay claim to be one of the first microprocessors or microcontrollers, having ROM, RAM and a RISC instruction set on-chip. The layout for the four layers of the PMOS process was hand drawn at x500 scale on mylar film, a significant task at the time given the complexity of the chip. Pico was a spinout by five GI design engineers whose vision was to create single-chip calculator ICs. They had significant previous design experience on multiple calculator chipsets with both GI and Marconi-Elliott.[19] The key team members had originally been tasked by Elliott Automation to create an 8-bit computer in MOS and had helped establish a MOS Research Laboratory in Glenrothes, Scotland in 1967. Calculators were becoming the largest single market for semiconductors, and Pico and GI went on to have significant success in this burgeoning market. GI continued to innovate in microprocessors and microcontrollers with products including the CP1600, IOB1680 and
PIC1650.[20] In 1987 the GI Microelectronics business was spun out into the Microchip PIC microcontroller business.

Four-Phase Systems AL1


The Four-Phase Systems AL1 was an 8-bit bit-slice chip containing eight registers and an ALU.[21] It was designed by Lee Boysel in 1969.[22][23][24] At the time, it formed part of a nine-chip, 24-bit CPU with three AL1s, but it was later called a microprocessor when, in response to 1990s litigation by Texas Instruments, a demonstration system was constructed where a single AL1 formed part of a courtroom demonstration computer system, together with RAM, ROM, and an input-output device.[25]

8-bit designs
The Intel 4004 was followed in 1972 by the Intel 8008, the world's first 8-bit microprocessor. The 8008 was not, however, an extension of the 4004 design, but instead the culmination of a separate design project at Intel, arising from a contract with Computer Terminals Corporation (CTC), of San Antonio TX, for a chip for a terminal they were designing,[26] the Datapoint 2200; fundamental aspects of the design came not from Intel but from CTC. In 1968, CTC's Vic Poor and Harry Pyle developed the original design for the instruction set and operation of the processor. In 1969, CTC contracted two companies, Intel and Texas Instruments, to make a single-chip implementation, known as the CTC 1201.[27] In late 1970 or early 1971, TI dropped out, being unable to make a reliable part. In 1970, with Intel yet to deliver the part, CTC opted to use their own implementation in the Datapoint 2200, using traditional TTL logic instead (thus the first machine to run 8008 code was not in fact a microprocessor at all, and was delivered a year earlier). Intel's version of the 1201 microprocessor arrived in late 1971, but was too late, slow, and required a number of additional support chips. CTC had no interest in using it. CTC had originally contracted Intel for the chip, and would have owed them $50,000 for their design work.[27] To avoid paying for a chip they did not want (and could not use), CTC released Intel from their contract and allowed them free use of the design.[27] Intel marketed it as the 8008 in April 1972, as the world's first 8-bit microprocessor. It was the basis for the famous "Mark-8" computer kit advertised in the magazine Radio-Electronics in 1974.

The 8008 was the precursor to the very successful Intel 8080 (1974), which offered much improved performance over the 8008 and required fewer support chips, the Zilog Z80 (1976), and derivative Intel 8-bit processors. The competing Motorola 6800 was released in August 1974 and the similar MOS Technology 6502 in 1975 (both designed largely by the same people). The 6502 family rivaled the Z80 in popularity during the 1980s.

A low overall cost, small packaging, simple computer bus requirements, and sometimes the integration of extra circuitry (e.g. the Z80's built-in memory refresh circuitry) allowed the home
computer "revolution" to accelerate sharply in the early 1980s. This delivered such inexpensive machines as the Sinclair ZX-81, which sold for US$99. A variation of the 6502, the MOS Technology 6510 was used in the Commodore 64 and yet another variant, the 8502, powered the Commodore 128. The Western Design Center, Inc (WDC) introduced the CMOS 65C02 in 1982 and licensed the design to several firms. It was used as the CPU in the Apple IIe and IIc personal computers as well as in medical implantable grade pacemakers and defibrillators, automotive, industrial and consumer devices. WDC pioneered the licensing of microprocessor designs, later followed by ARM and other microprocessor Intellectual Property (IP) providers in the 1990s. Motorola introduced the MC6809 in 1978, an ambitious and thought-through 8-bit design source compatible with the 6800 and implemented using purely hard-wired logic. (Subsequent 16-bit microprocessors typically used microcode to some extent, as CISC design requirements were getting too complex for purely hard-wired logic only.) Another early 8-bit microprocessor was the Signetics 2650, which enjoyed a brief surge of interest due to its innovative and powerful instruction set architecture. A seminal microprocessor in the world of spaceflight was RCA's RCA 1802 (aka CDP1802, RCA COSMAC) (introduced in 1976), which was used on board the Galileo probe to Jupiter (launched 1989, arrived 1995). RCA COSMAC was the first to implement CMOS technology. The CDP1802 was used because it could be run at very low power, and because a variant was available fabricated using a special production process (Silicon on Sapphire), providing much better protection against cosmic radiation and electrostatic discharges than that of any other processor of the era. Thus, the SOS version of the 1802 was said to be the first radiationhardened microprocessor. The RCA 1802 had what is called a static design, meaning that the clock frequency could be made arbitrarily low, even to 0 Hz, a total stop condition. This let the Galileo spacecraft use minimum electric power for long uneventful stretches of a voyage. Timers or sensors would awaken the processor in time for important tasks, such as navigation updates, attitude control, data acquisition, and radio communication. Current versions of the Western Design Center 65C02 and 65C816 have static cores, and thus retain data even when the clock is completely halted.

12-bit designs
The Intersil 6100 family consisted of a 12-bit microprocessor (the 6100) and a range of peripheral support and memory ICs. The microprocessor recognised the DEC PDP-8 minicomputer instruction set. As such it was sometimes referred to as the CMOS-PDP8. Since it was also produced by Harris Corporation, it was also known as the Harris HM-6100. By virtue of its CMOS technology and associated benefits, the 6100 was being incorporated into some military designs until the early 1980s.

16-bit designs
The first multi-chip 16-bit microprocessor was the National Semiconductor IMP-16, introduced in early 1973. An 8-bit version of the chipset was introduced in 1974 as the IMP-8. Other early multi-chip 16-bit microprocessors include one that Digital Equipment Corporation (DEC) used in the LSI-11 OEM board set and the packaged PDP 11/03 minicomputer, and the Fairchild Semiconductor MicroFlame 9440, both introduced in 1975-1976. In 1975, National introduced the first 16-bit single-chip microprocessor, the National Semiconductor PACE, which was later followed by an NMOS version, the INS8900.

Another early single-chip 16-bit microprocessor was TI's TMS 9900, which was also compatible with their TI-990 line of minicomputers. The 9900 was used in the TI 990/4 minicomputer, the TI-99/4A home computer, and the TM990 line of OEM microcomputer boards. The chip was packaged in a large ceramic 64-pin DIP package, while most 8-bit microprocessors such as the Intel 8080 used the more common, smaller, and less expensive plastic 40-pin DIP. A follow-on chip, the TMS 9980, was designed to compete with the Intel 8080, had the full TI 990 16-bit instruction set, used a plastic 40-pin package, and moved data 8 bits at a time, but could only address 16 KB. A third chip, the TMS 9995, was a new design. The family later expanded to include the 99105 and 99110. The Western Design Center (WDC) introduced the CMOS 65816 16-bit upgrade of the WDC CMOS 65C02 in 1984. The 65816 16-bit microprocessor was the core of the Apple IIgs and later the Super Nintendo Entertainment System, making it one of the most popular 16-bit designs of all time.

Intel "upsized" their 8080 design into the 16-bit Intel 8086, the first member of the x86 family, which powers most modern PC type computers. Intel introduced the 8086 as a cost-effective way of porting software from the 8080 lines, and succeeded in winning much business on that premise. The 8088, a version of the 8086 that used an 8-bit external data bus, was the microprocessor in the first IBM PC. Intel then released the 80186 and 80188, the 80286 and, in 1985, the 32-bit 80386, cementing their PC market dominance with the processor family's backwards compatibility. The 80186 and 80188 were essentially versions of the 8086 and 8088, enhanced with some onboard peripherals and a few new instructions; they were not used in IBM-compatible PCs because the built-in peripherals and their locations in the memory map were incompatible with the IBM design. The 8086 and successors had an innovative but limited method of memory segmentation, while the 80286 introduced a full-featured segmented memory management unit (MMU). The 80386 introduced a flat 32-bit memory model with paged memory management.

The Intel x86 processors up to and including the 80386 do not include floating-point units (FPUs). Intel introduced the 8087, 80287, and 80387 math coprocessors to add hardware floating-point and transcendental function capabilities to the 8086 through 80386 CPUs. The 8087 works with the 8086/8088 and 80186/80188,[28] the 80187 works with the 80186/80188, the 80287 works with the 80286 and 80386,[29] and the 80387 works with the 80386 (yielding better performance than the 80287). The combination of an x86 CPU and an x87 coprocessor forms a
single multi-chip microprocessor; the two chips are programmed as a unit using a single integrated instruction set.[30] Though the 8087 coprocessor is interfaced to the CPU through I/O ports in the CPU's address space, this is transparent to the program, which does not need to know about or access these I/O ports directly; the program accesses the coprocessor and its registers through normal instruction opcodes. Starting with the successor to the 80386, the 80486, the FPU was integrated with the control unit, MMU, and integer ALU in a pipelined design on a single chip (in the 80486DX version), or the FPU was eliminated entirely (in the 80486SX version). An ostensible coprocessor for the 80486SX, the 80487 was actually a complete 80486DX that disabled and replaced the coprocessorless 80486SX that it was installed to upgrade.

32-bit designs

Upper interconnect layers on an Intel 80486DX2 die.

16-bit designs had only been on the market briefly when 32-bit implementations started to appear. The most significant of the 32-bit designs is the Motorola MC68000, introduced in 1979. The 68K, as it was widely known, had 32-bit registers in its programming model but used 16-bit internal data paths, three 16-bit arithmetic logic units, and a 16-bit external data bus (to reduce pin count), and externally supported only 24-bit addresses (internally it worked with full 32-bit addresses). In PC-based IBM-compatible mainframes the MC68000 internal microcode was modified to emulate the 32-bit System/370 IBM mainframe.[31] Motorola generally described it as a 16-bit processor, though it clearly has a 32-bit capable architecture. The combination of high performance, a large (16 megabytes, or 2^24 bytes) memory space and fairly low cost made it the most popular CPU design of its class. The Apple Lisa and Macintosh designs made use of the 68000, as did a host of other designs in the mid-1980s, including the Atari ST and Commodore Amiga.

The world's first single-chip fully 32-bit microprocessor, with 32-bit data paths, 32-bit buses, and 32-bit addresses, was the AT&T Bell Labs BELLMAC-32A, with first samples in 1980 and general production in 1982.[32][33] After the divestiture of AT&T in 1984, it was renamed the WE
32000 (WE for Western Electric), and had two follow-on generations, the WE 32100 and WE 32200. These microprocessors were used in the AT&T 3B5 and 3B15 minicomputers; in the 3B2, the world's first desktop supermicrocomputer; in the "Companion", the world's first 32-bit laptop computer; and in "Alexander", the world's first book-sized supermicrocomputer, featuring ROM-pack memory cartridges similar to today's gaming consoles. All these systems ran the UNIX System V operating system.

Intel's first 32-bit microprocessor was the iAPX 432, which was introduced in 1981 but was not a commercial success. It had an advanced capability-based object-oriented architecture, but poor performance compared to contemporary architectures such as Intel's own 80286 (introduced 1982), which was almost four times as fast on typical benchmark tests. However, the poor results for the iAPX 432 were partly due to a rushed and therefore suboptimal Ada compiler.[citation needed]

The ARM first appeared in 1985. This is a RISC processor design, which has since come to dominate the 32-bit embedded systems processor space due in large part to its power efficiency, its licensing model, and its wide selection of system development tools. Semiconductor manufacturers generally license cores such as the ARM11 and integrate them into their own system-on-a-chip products; only a few such vendors are licensed to modify the ARM cores. Most cell phones include an ARM processor, as do a wide variety of other products. There are microcontroller-oriented ARM cores without virtual memory support, as well as SMP applications processors with virtual memory.

Motorola's success with the 68000 led to the MC68010, which added virtual memory support. The MC68020, introduced in 1985, added full 32-bit data and address buses. The 68020 became hugely popular in the Unix supermicrocomputer market, and many small companies (e.g., Altos, Charles River Data Systems) produced desktop-size systems. The MC68030 was introduced next, improving upon the previous design by integrating the MMU into the chip. The continued success led to the MC68040, which included an FPU for better math performance. A 68050 failed to achieve its performance goals and was not released, and the follow-up MC68060 was released into a market saturated by much faster RISC designs. The 68K family faded from the desktop in the early 1990s. Other large companies designed the 68020 and follow-ons into embedded equipment. At one point, there were more 68020s in embedded equipment than there were Intel Pentiums in PCs.[34] The ColdFire processor cores are derivatives of the venerable 68020.

During this time (early to mid-1980s), National Semiconductor introduced a very similar 16-bit pinout, 32-bit internal microprocessor called the NS 16032 (later renamed 32016); the full 32-bit version was named the NS 32032. Later, National Semiconductor produced the NS 32132, which allowed two CPUs to reside on the same memory bus with built-in arbitration. The NS32016/32 outperformed the MC68000/10, but the NS32332, which arrived at approximately the same time as the MC68020, did not have enough performance. The third generation chip, the NS32532, was different. It had about double the performance of the MC68030, which was released around the same time. The appearance of RISC processors like the AM29000 and MC88000 (now both dead) influenced the architecture of the final core, the NS32764.

Technically advanced, with a superscalar RISC core, a 64-bit bus, and internal overclocking, it could still execute Series 32000 instructions through real-time translation. When National Semiconductor decided to leave the Unix market, the chip was redesigned into the Swordfish Embedded processor with a set of on-chip peripherals. The chip turned out to be too expensive for the laser printer market and was killed. The design team went to Intel and there designed the Pentium processor, which is very similar to the NS32764 core internally. The big success of the Series 32000 was in the laser printer market, where the NS32CG16 with microcoded BitBlt instructions had very good price/performance and was adopted by large companies like Canon. By the mid-1980s, Sequent introduced the first symmetric multiprocessor (SMP) server-class computer using the NS 32032. This was one of the design's few wins, and it disappeared in the late 1980s.

The MIPS R2000 (1984) and R3000 (1989) were highly successful 32-bit RISC microprocessors. They were used in high-end workstations and servers by SGI, among others. Other designs included the interesting Zilog Z80000, which arrived too late to market to stand a chance and disappeared quickly. In the late 1980s, "microprocessor wars" started killing off some of the microprocessors.[citation needed] Apparently, with only one bigger design win, Sequent, the NS 32032 just faded out of existence, and Sequent switched to Intel microprocessors.[citation needed]

From 1985 to 2003, the 32-bit x86 architectures became increasingly dominant in desktop, laptop, and server markets, and these microprocessors became faster and more capable. Intel had licensed early versions of the architecture to other companies, but declined to license the Pentium, so AMD and Cyrix built later versions of the architecture based on their own designs. During this span, these processors increased in complexity (transistor count) and capability (instructions/second) by at least three orders of magnitude. Intel's Pentium line is probably the most famous and recognizable 32-bit processor model, at least with the public at large.

64-bit designs in personal computers


While 64-bit microprocessor designs have been in use in several markets since the early 1990s (including the Nintendo 64 gaming console in 1996), the early 2000s saw the introduction of 64-bit microprocessors targeted at the PC market. With AMD's introduction of a 64-bit architecture backwards-compatible with x86, x86-64 (also called AMD64), in September 2003, followed by Intel's near fully compatible 64-bit extensions (first called IA-32e or EM64T, later renamed Intel 64), the 64-bit desktop era began. Both versions can run 32-bit legacy applications without any performance penalty as well as new 64-bit software. With operating systems such as Windows XP x64, Windows Vista x64, Windows 7 x64, Linux, BSD, and Mac OS X that run 64-bit natively, the software is also geared to fully utilize the capabilities of such processors. The move to 64 bits is more than just an increase in register size from the IA-32, as it also doubles the number of general-purpose registers.

The move to 64 bits by PowerPC processors had been intended since the processors' design in the early 90s and was not a major cause of incompatibility. Existing integer registers are extended, as are all related data pathways, but, as was the case with IA-32, both floating point and vector units had been operating at or above 64 bits for several years. Unlike what happened when IA-32 was extended to x86-64, no new general-purpose registers were added in 64-bit PowerPC, so any performance gained when using the 64-bit mode for applications making no use of the larger address space is minimal.[citation needed]

Multicore designs
Main article: Multi-core (computing)

A different approach to improving a computer's performance is to add extra processors, as in symmetric multiprocessing designs, which have been popular in servers and workstations since the early 1990s. Keeping up with Moore's Law is becoming increasingly challenging as chip-making technologies approach their physical limits. In response, microprocessor manufacturers look for other ways to improve performance so they can maintain the momentum of constant upgrades. A multi-core processor is simply a single chip that contains more than one microprocessor core. This effectively multiplies the processor's potential performance by the number of cores (as long as the operating system and software are designed to take advantage of more than one processor core). Some components, such as the bus interface and cache, may be shared between cores. Because the cores are physically very close to each other, they can communicate with each other much faster than separate processors in a multiprocessor system, which improves overall system performance.

In 2005, the first personal computer dual-core processors were announced. As of 2012, dual-core and quad-core processors are widely used in home PCs and laptops, while quad-, six-, eight-, ten-, twelve-, and sixteen-core processors are common in the professional and enterprise markets with workstations and servers. Sun Microsystems has released the Niagara and Niagara 2 chips, both of which feature an eight-core design. The Niagara 2 supports more threads and operates at 1.6 GHz. High-end Intel Xeon processors that are on the LGA 771, LGA 1366, and LGA 2011 sockets and high-end AMD Opteron processors that are on the C32 and G34 sockets are DP (dual processor) capable, as is the older Intel Core 2 Extreme QX9775, also used in an older Mac Pro by Apple and the Intel Skulltrail motherboard. AMD's G34 motherboards can support up to four CPUs and Intel's LGA 1567 motherboards can support up to eight CPUs.

Modern desktop sockets do not support systems with multiple CPUs, but very few applications outside of the professional market can make good use of more than four cores, and both Intel and AMD currently offer fast quad- and six-core desktop CPUs, so this is generally a moot point anyway. AMD also offers the first and currently the only eight-core desktop CPUs with the FX-8xxx line, but anything with more than four cores is generally not very useful in home desktops. As of January 24, 2012, these FX processors are generally inferior to similarly priced and sometimes cheaper Intel quad-core Sandy Bridge models. The desktop market has been in a transition towards quad-core CPUs since Intel's Core 2 Quads were released, and these are now quite common, although dual-core CPUs are still more prevalent. This is largely because many people use older or mobile computers, both of which are much less likely to have more than two cores than newer desktops, and because most computer users are not heavy users. AMD offers CPUs with more cores for a given amount of money than similarly priced Intel CPUs, but the AMD cores are somewhat slower, so the two trade blows in different applications depending on how well-threaded the programs being run are. For example, Intel's cheapest Sandy Bridge quad-core CPUs often cost almost twice as much as AMD's cheapest Athlon II, Phenom II, and FX quad-core CPUs, but Intel has dual-core CPUs in the same price ranges as AMD's cheaper quad-core CPUs. In an application that uses one or two threads, the Intel dual cores outperform AMD's similarly priced quad-core CPUs, and if a program supports three or four threads the cheap AMD quad-core CPUs outperform the similarly priced Intel dual-core CPUs.

Historically, AMD and Intel have switched places as the company with the fastest CPU several times. Intel currently wins on the desktop side of the computer CPU market, with its Sandy Bridge and Ivy Bridge series. In servers, AMD's new Opterons seem to have superior performance for their price points. This means that AMD is currently more competitive in low- to mid-end servers and workstations that more effectively use fewer cores and threads.
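To illustrate the point made above, that a multi-core chip only multiplies performance when the operating system and software actually spread work across the cores, here is a minimal sketch using POSIX threads (the thread count, array size, and data values are arbitrary choices for the example, and real code would also measure and tune the partitioning). It splits a simple summation into four chunks, one per worker thread, which is the kind of explicit partitioning multi-threaded software has to do to benefit from additional cores.

/* build e.g. with: cc -pthread sum.c */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4              /* hypothetical core count            */
#define N        1000000

static double data[N];

struct job { int lo, hi; double partial; };

static void *worker(void *arg)
{
    struct job *j = arg;
    double s = 0.0;
    for (int i = j->lo; i < j->hi; i++)
        s += data[i];
    j->partial = s;             /* each thread returns its own share  */
    return NULL;
}

int main(void)
{
    for (int i = 0; i < N; i++)
        data[i] = 1.0;

    pthread_t tid[NTHREADS];
    struct job jobs[NTHREADS];
    int chunk = N / NTHREADS;

    for (int t = 0; t < NTHREADS; t++) {
        jobs[t].lo = t * chunk;
        jobs[t].hi = (t == NTHREADS - 1) ? N : (t + 1) * chunk;
        pthread_create(&tid[t], NULL, worker, &jobs[t]);
    }

    double total = 0.0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += jobs[t].partial;
    }
    printf("sum = %.0f\n", total);   /* prints 1000000 */
    return 0;
}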

RISC
Main article: Reduced instruction set computing

In the mid-1980s to early 1990s, a crop of new high-performance Reduced Instruction Set Computer (RISC) microprocessors appeared, influenced by discrete RISC-like CPU designs such as the IBM 801 and others. RISC microprocessors were initially used in special-purpose machines and Unix workstations, but then gained wide acceptance in other roles. In 1986, HP released its first system with a PA-RISC CPU. The first commercial RISC microprocessor design was released either by MIPS Computer Systems, with the 32-bit R2000 (the R1000 was not released), or by Acorn Computers, with the 32-bit ARM2 in 1987.[citation needed] The R3000 made the design truly practical, and the R4000 introduced the world's first commercially available 64-bit RISC microprocessor. Competing projects would result in the IBM POWER and Sun SPARC architectures. Soon every major vendor was releasing a RISC design, including the AT&T CRISP, AMD 29000, Intel i860 and Intel i960, Motorola 88000, and DEC Alpha. In the late 1990s, only two 64-bit RISC architectures were still produced in volume for non-embedded applications: SPARC and Power ISA; but as ARM became increasingly powerful, in the early 2010s it became the third RISC architecture in the general computing segment.

Special-purpose designs
A microprocessor is a general-purpose system. Several specialized processing devices have followed from the technology. Microcontrollers integrate a microprocessor with peripheral devices in embedded systems. A digital signal processor (DSP) is specialized for signal processing. Graphics processing units may have no, limited, or general programming facilities. For example, GPUs through the 1990s were mostly non-programmable and have only recently gained limited facilities like programmable vertex shaders.

Market statistics
In 2003, about $44 billion (USD) worth of microprocessors were manufactured and sold.[35] Although about half of that money was spent on CPUs used in desktop or laptop personal computers, those count for only about 2% of all CPUs sold.[36] About 55% of all CPUs sold in the world are 8-bit microcontrollers, over two billion of which were sold in 1997.[37] As of 2002, less than 10% of all the CPUs sold in the world are 32-bit or more. Of all the 32-bit CPUs sold, about 2% are used in desktop or laptop personal computers. Most microprocessors are used in embedded control applications such as household appliances, automobiles, and computer peripherals. Taken as a whole, the average price for a microprocessor, microcontroller, or DSP is just over $6.[36] About ten billion CPUs were manufactured in 2008. About 98% of new CPUs produced each year are embedded.[38]
