Computer architecture

Computer architecture is the practical art of selecting and interconnecting hardware components to create computers that meet functional, performance and cost goals; it is a field of computer science and computer engineering. As a noun, computer architecture (or digital computer organization) denotes a blueprint: a description of the requirements and the basic design of the various parts of a computer. It is usually most concerned with how the central processing unit (CPU) acts and how it accesses computer memory. Some currently (2011) fashionable computer architectures include cluster computing and Non-Uniform Memory Access (NUMA). The art of computer architecture has three main subcategories:[1]
1. Instruction set architecture (ISA): the code that a central processor reads and acts upon. It is the machine language (or assembly language), including the instruction set, word size, memory address modes, processor registers, and address and data formats. (A toy encoding sketch in C follows this list.)
2. Microarchitecture, also known as computer organization: describes the data paths, data processing elements and data storage elements, and how they should implement the ISA.[2] The size of a computer's cache, for instance, is an organizational issue that generally has nothing to do with the ISA.
3. System design: includes all of the other hardware components within a computing system. These include:
   1. data paths, such as computer buses and switches;
   2. memory controllers and hierarchies;
   3. data processing other than the CPU, such as direct memory access (DMA);
   4. miscellaneous issues such as virtualization or multiprocessing.
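To make these ISA concepts concrete, here is a minimal sketch in C of a hypothetical 16-bit instruction format; the opcode names, field widths and register count are all invented for illustration and do not describe any real processor's ISA:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical 16-bit instruction word (invented for illustration):
     *   bits 15..12  opcode
     *   bits 11..8   destination register rd  (16 registers)
     *   bits  7..4   source register rs
     *   bits  3..0   4-bit immediate
     */
    enum { OP_HALT = 0x0, OP_ADD = 0x1, OP_LOADI = 0x2 };  /* invented opcodes */

    static uint16_t encode(uint8_t op, uint8_t rd, uint8_t rs, uint8_t imm) {
        return (uint16_t)(((op & 0xF) << 12) | ((rd & 0xF) << 8) |
                          ((rs & 0xF) << 4)  |  (imm & 0xF));
    }

    int main(void) {
        /* ADD r1, r2 (meaning r1 = r1 + r2) in this toy format */
        uint16_t insn = encode(OP_ADD, 1, 2, 0);
        printf("encoded: 0x%04X\n", (unsigned)insn);   /* prints 0x1120 */
        return 0;
    }

Word size, register count and addressing modes are exactly the kinds of decisions such a format fixes: widening any field changes the machine language that every implementation must accept.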
The second step of designing a new architecture is often to design a software simulator and write representative programs in the ISA, to test and adjust the architectural elements. At this stage it is now commonplace for compiler designers to collaborate, suggesting improvements to the ISA. Modern simulators normally measure time in clock cycles and give power-consumption estimates in watts (a sketch of such a simulator's core loop follows).
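Continuing the invented 16-bit format above, here is a minimal sketch in C of a functional simulator's fetch-decode-execute loop. The one-cycle cost per instruction is a placeholder; a real architectural simulator would model pipelines, caches and memory timing:

    #include <stdint.h>
    #include <stdio.h>

    /* Same invented 16-bit format as above: op | rd | rs | imm, 4 bits each. */
    enum { OP_HALT = 0x0, OP_ADD = 0x1, OP_LOADI = 0x2 };

    int main(void) {
        uint16_t mem[] = {      /* a tiny test program in the toy ISA */
            0x2105,             /* LOADI r1, 5                  */
            0x2203,             /* LOADI r2, 3                  */
            0x1120,             /* ADD   r1, r2  (r1 = r1 + r2) */
            0x0000,             /* HALT                         */
        };
        uint16_t reg[16] = {0};
        unsigned pc = 0;
        unsigned long cycles = 0;   /* simulators measure time in clock cycles */
        int running = 1;

        while (running) {
            uint16_t insn = mem[pc++];               /* fetch  */
            uint8_t op  = insn >> 12;                /* decode */
            uint8_t rd  = (insn >> 8) & 0xF;
            uint8_t rs  = (insn >> 4) & 0xF;
            uint8_t imm = insn & 0xF;
            switch (op) {                            /* execute */
            case OP_LOADI: reg[rd] = imm;      cycles += 1; break;
            case OP_ADD:   reg[rd] += reg[rs]; cycles += 1; break;
            case OP_HALT:  running = 0;                     break;
            }
        }
        printf("r1 = %u after %lu cycles\n", (unsigned)reg[1], cycles);  /* r1 = 8 */
        return 0;
    }

Running representative programs through such a loop is what lets architects and compiler designers measure the effect of adding, removing or re-encoding instructions before any hardware exists.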
Once the instruction set and microarchitecture are described, a practical machine must be designed. This design process is called the implementation. Implementation is usually not considered architectural definition, but rather hardware design engineering. Implementation can be further broken down into several (not fully distinct) steps:

- Logic implementation: design of the blocks defined in the microarchitecture at (primarily) the register-transfer and gate levels (a gate-level sketch follows this list).
- Circuit implementation: transistor-level design of basic elements (gates, multiplexers, latches, etc.) as well as of some larger blocks (ALUs, caches, etc.) that may be implemented at this level, or even (partly) at the physical level, for performance reasons.
- Physical implementation: the circuits are drawn out, the different circuit components are placed in a chip floorplan or on a board, and the wires connecting them are routed.
- Design validation: the computer as a whole is tested to see whether it works in all situations and at all timings. Once implementation starts, the first design validations are simulations using logic emulators. However, these are usually too slow to run realistic programs, so, after making corrections, prototypes are constructed using field-programmable gate arrays (FPGAs). Many hobby projects stop at this stage. The final step is to test prototype integrated circuits, which may have to be redesigned several times to fix problems.
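Register-transfer- and gate-level blocks are normally written in a hardware description language such as Verilog or VHDL; as a language-neutral illustration, here is a sketch in C of a one-bit full adder modelled purely from gate functions. The decomposition is the standard two-half-adder construction, not any particular product's netlist:

    #include <stdio.h>

    /* Gate primitives modelled as functions on 0/1 values. */
    static int XOR(int a, int b) { return a ^ b; }
    static int AND(int a, int b) { return a & b; }
    static int OR (int a, int b) { return a | b; }

    /* One-bit full adder built from two half adders plus an OR gate. */
    static void full_adder(int a, int b, int cin, int *sum, int *cout) {
        int s1 = XOR(a, b);        /* half adder 1 */
        int c1 = AND(a, b);
        *sum   = XOR(s1, cin);     /* half adder 2 */
        int c2 = AND(s1, cin);
        *cout  = OR(c1, c2);       /* carry out */
    }

    int main(void) {
        /* Exhaustive check of all eight input combinations. */
        for (int a = 0; a <= 1; a++)
            for (int b = 0; b <= 1; b++)
                for (int cin = 0; cin <= 1; cin++) {
                    int sum, cout;
                    full_adder(a, b, cin, &sum, &cout);
                    printf("%d+%d+%d -> sum=%d carry=%d\n", a, b, cin, sum, cout);
                }
        return 0;
    }

Exhaustively checking a block against its arithmetic definition, as the loop above does, is the software analogue of the design-validation step described in the list.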
For CPUs, the entire implementation process is often called CPU design.
Contents
1 History
2 Computer architectures
3 Computer architecture topics
  3.1 Sub-definitions
  3.2 The role of computer architecture
    3.2.1 Computer architecture: the definition
    3.2.2 Instruction set architecture
    3.2.3 Computer organization
  3.3 Design goals
  3.4 Performance
  3.5 Power consumption
4 See also
5 Notes
6 References
7 External links
History
The term architecture in computer literature can be traced to the work of Lyle R. Johnson, Muhammad Usman Khan and Frederick P. Brooks, Jr., members in 1959 of the Machine Organization department in IBM's main research center. Johnson had the opportunity to write a proprietary research communication about Stretch, an IBM-developed supercomputer for Los Alamos Scientific Laboratory. In attempting to characterize his chosen level of detail for discussing the luxuriously embellished computer, he noted that his description of formats, instruction types, hardware parameters, and speed enhancements was at the level of "system architecture", a term that seemed more useful than "machine organization". Subsequently, Brooks, one of the Stretch designers, started Chapter 2 of a book (Planning a Computer System: Project Stretch, ed. W. Buchholz, 1962) by writing, "Computer architecture, like other architecture, is the art of determining the needs of the user of a structure and then designing to meet those needs as effectively as possible within economic and technological constraints." Brooks went on to play a major role in the development of the IBM System/360 (now called the IBM zSeries) line of computers, in which "architecture" gained currency as a noun with the definition "what the user needs to know". Later the computer world would employ the term in many less-explicit ways.
Computer architectures

- Quantum computer vs. chemical computer
- Scalar processor vs. vector processor
- Non-Uniform Memory Access (NUMA) computers
- Register machine vs. stack machine
- Harvard architecture vs. von Neumann architecture
- Cellular architecture
The quantum computer architecture holds the most promise to revolutionize computing.[3]
Sub-definitions

- Macroarchitecture: architectural layers that are more abstract than the microarchitecture, e.g. the ISA.
- Instruction Set Architecture (ISA): as defined above.
- Assembly ISA: a smart assembler may convert an abstract assembly language common to a group of machines into slightly different machine languages for different implementations.
- Programmer-visible macroarchitecture: higher-level language tools such as compilers may define a consistent interface or contract to the programmers using them, abstracting differences between the underlying ISA, UISA, and microarchitectures. For example, the C, C++, and Java standards each define a programmer-visible macroarchitecture (see the sketch after this list).
- UISA (Microcode Instruction Set Architecture): a family of machines with different hardware-level microarchitectures may share a common microcode architecture, and hence a UISA.
- Pin architecture: the set of functions that a microprocessor is expected to provide, from the point of view of a hardware platform, e.g. the x86 A20M, FERR/IGNNE or FLUSH pins, and the messages that the processor is expected to emit after completing a cache invalidation so that external caches can be invalidated. Pin architecture functions are more flexible than ISA functions: external hardware can adapt to changing encodings, or to a change from a pin to a message. The functions are nevertheless expected to be provided in successive implementations even if the manner of encoding them changes.
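As an illustration of that programmer-visible contract, the same C function compiles to different machine languages on different ISAs while meaning the same thing to the programmer. The assembly in the comments is typical optimized compiler output for x86-64 and AArch64; the exact instructions vary with compiler and options:

    /* One portable C source; the language standard, not any single ISA,
     * defines what this function means to the programmer. */
    int add(int a, int b) {
        return a + b;
    }

    /* Typical optimized compiler output (varies by compiler and flags):
     *
     *   x86-64:                  AArch64:
     *     lea eax, [rdi+rsi]       add w0, w0, w1
     *     ret                      ret
     */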
Computer organization

Computer organization helps optimize performance-based products, which can require quite detailed analysis of the computer organization. For example, in a multimedia decoder, the designers might need to arrange for most data to be processed in the fastest data path; the various components are assumed to be in place, and the task is to investigate the organizational structure to verify that the computer's parts operate correctly together. Computer organization also helps plan the selection of a processor for a particular project. Multimedia projects may need very rapid data access, while supervisory software may need fast interrupts. Sometimes certain tasks need additional components as well. For example, a computer capable of virtualization needs virtual memory hardware so that the memory of different simulated computers can be kept separated. The computer organization and features also affect the power consumption and the cost of the processor.
Performance

In a typical home computer, the simplest, most reliable way to improve performance is usually to add random access memory (RAM). More RAM increases the likelihood that needed data or a program will already be in RAM, so the system is less likely to need to move data from the disk. The disk is often ten thousand times slower than RAM because it has mechanical parts that must move to access its data.

There are two main types of speed: latency and throughput. Latency is the time between the start of a process and its completion. Throughput is the amount of work done per unit time. Interrupt latency is the guaranteed maximum response time of the system to an electronic event (e.g. when the disk drive finishes moving some data). Performance is affected by a very wide range of design choices; for example, pipelining a processor usually makes latency worse (slower) but makes throughput better.

Computers that control machinery usually need low interrupt latencies. These computers operate in a real-time environment and fail if an operation is not completed in a specified amount of time. For example, computer-controlled anti-lock brakes must begin braking within a predictable, short time after the brake pedal is sensed.

The performance of a computer can be measured using other metrics, depending upon its application domain. A system may be CPU-bound (as in numerical calculation), I/O-bound (as in a web-serving application) or memory-bound (as in video editing). Power consumption has become important in servers and in portable devices like laptops.

Benchmarking tries to take all these factors into account by measuring the time a computer takes to run through a series of test programs (a minimal timing sketch follows). Although benchmarking shows strengths, it may not help one to choose a computer. Often the measured machines split on different measures. For example, one system might handle scientific applications quickly, while another might play popular video games more smoothly. Furthermore, designers have been known to add special features to their products, whether in hardware or software, that permit a specific benchmark to execute quickly but do not offer similar advantages to other, more general tasks.
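As a concrete sketch of measuring latency and throughput, the following C program times a placeholder workload with the POSIX clock_gettime call; the workload and iteration counts are arbitrary stand-ins, not a real benchmark suite:

    #include <stdio.h>
    #include <time.h>

    /* Placeholder workload standing in for one "operation" under test. */
    static volatile unsigned long sink;
    static void work(void) {
        unsigned long acc = 0;
        for (int i = 0; i < 1000; i++) acc += (unsigned long)i * i;
        sink = acc;   /* volatile store keeps the loop from being optimized away */
    }

    int main(void) {
        const long iterations = 100000;
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < iterations; i++)
            work();
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double seconds = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        /* Latency: average time per operation. Throughput: operations per unit time. */
        printf("average latency: %.3f microseconds/op\n", seconds / iterations * 1e6);
        printf("throughput:      %.0f ops/second\n", iterations / seconds);
        return 0;
    }

Note that this averages over many iterations, which hides variance; real benchmarks typically also report worst-case figures, since for real-time systems the guaranteed maximum (interrupt) latency matters more than the average.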