
Theory of Parallelism:

Module 1
Parallel Computer Models
Chapter 1
In this chapter…

• THE STATE OF COMPUTING


• MULTIPROCESSORS AND MULTICOMPUTERS
• MULTIVECTOR AND SIMD COMPUTERS
• PRAM AND VLSI MODELS
• ARCHITECTURAL DEVELOPMENT TRACKS
1.1 The State of Computing
 Parallel processing has been developed as an effective
technology in modern computers to meet the demand for
higher performance, lower cost, and accurate results in
real-life applications.
 Concurrent events are common in today’s computers
due to the practice of multiprogramming,
multiprocessing, or multicomputing.
 Modern computers have powerful and extensive
software packages.
 To analyze the development of the performance of
computers, first we have to understand the basic
development of hardware and software.
Computer Development Milestones
 Elements of Modern Computers − A modern
computer system consists of computer
hardware, instruction sets, application programs,
system software and user interface.
 Computer use is driven by real-life problems that
demand fast and accurate solutions, which fall into
three categories:
• Numerical computing
• Transaction processing
• Logical reasoning
 Evolution of Computer Architecture − In the last four
decades, computer architecture has gone through
revolutionary changes.
 We started with the von Neumann architecture, and now
we have multicomputers and multiprocessors.
FLYNN’S CLASSIFICATION
System Attributes to Performance
Practice Problem
Programming Environments

 Programmability depends on the programming environment provided to
the users.
 Conventional computers are used in a sequential programming
environment with tools developed for a uniprocessor computer.
 Parallel computers need parallel tools that allow specification or easy
detection of parallelism and operating systems that can perform parallel
scheduling of concurrent events, shared memory allocation, and shared
peripheral and communication links.
Implicit Parallelism
 Use a conventional language (like C, Fortran, Lisp, or
Pascal) to write the program.
 Use a parallelizing compiler to translate the source code
into parallel code.
 The compiler must detect parallelism and assign target
machine resources.
 Success relies heavily on the quality of the compiler.
Explicit Parallelism
 The programmer writes explicit parallel code using
parallel dialects of common languages.
 The compiler has a reduced need to detect parallelism,
but must still preserve existing parallelism and assign
target machine resources.
Needed Software Tools
 Parallel extensions of conventional high-level languages.
 Integrated environments to aid program development,
providing:
• different levels of program abstraction
• validation, testing and debugging
• performance prediction and monitoring
• visualization support
• performance measurement, graphics display and
animation of computational results
1.2 MULTIPROCESSORS AND
MULTICOMPUTERS
 Two categories of parallel computers are discussed
below, namely those with shared common memory
(multiprocessors) and those with unshared distributed
memory (multicomputers).
1.2.1 Shared memory multiprocessors

 Shared memory parallel computers vary widely, but generally have
in common the ability for all processors to access all memory as a
global address space.
 Multiple processors can operate independently but share the same
memory resources.
 Changes in a memory location effected by one processor are
visible to all other processors.
 Shared memory machines can be divided into three main classes
based upon memory access times: UMA, NUMA and COMA.
Fig: Generic model of a message-passing multicomputer
