Recent advances in cluster computing (in both research and commercial settings):
Agenda
- Overview of Computing
- Motivations & Enabling Technologies
- Cluster Architecture & its Components
- Clusters Classifications
- Cluster Middleware
- Single System Image
- Representative Cluster Systems
- Resources and Conclusions
Computing Elements
[Figure: elements of a multi-processor computing system - applications run over programming paradigms, a threads interface, and a microkernel/operating system on the hardware; threads and processes are mapped onto the processors (P)]
Parallel Era
[Figure: timeline of the parallel era - R&D, commercialization, and commodity phases, extending toward ~2030 with problem-solving environments (PSEs)]
[Figure: application domains moving toward commodity computing - aerospace, CAD/CAM, military applications, e-commerce/anything]
Computer Analogy
1. Use faster hardware: e.g. reduce the time per instruction (clock cycle).
2. Use optimized algorithms and techniques.
3. Use multiple computers to solve the problem: that is, increase the number of instructions executed per clock cycle.
[Figure: aggregate performance of networked computers grows as administrative barriers are crossed - individual, group, department, campus, state, national, globe, inter-planet, universe]
- What are/will be the major problems/issues in e-commerce?
- How will or can PDC be applied to solve some of them?
- Other than compute power, what else can PDC contribute to e-commerce?
- How would/could the different forms of PDC (clusters, hyperclusters, GRID, ...) be applied to e-commerce?
- Could you describe one hot research topic for PDC applied to e-commerce?
- A killer e-commerce application for PDC? ...
- Numerous scientific & engineering applications
- Parametric simulations
- Business applications
  - E-commerce applications (Amazon.com, eBay.com, ...)
  - Database applications (Oracle on clusters)
  - Decision support systems
- Internet applications
  - Web serving / searching
  - Infowares (yahoo.com, AOL.com)
  - ASPs (application service providers)
  - eMail, eChat, ePhone, eBook, eCommerce, eBank, eSociety, eAnything!
  - Computing portals
- Mission-critical applications
  - Command-and-control systems, banks, nuclear reactor control, star-wars, and handling life-threatening situations
- Millions of customers
- (Millions) of partners: keep track of partner details, track referral links to partners, and sales and payments
- A mechanism for participating in bids (buyers/sellers define the rules of the game)
Clusters are already in use for web serving, web hosting, and a number of other Internet applications, including e-commerce:
- Scalability, availability, performance, and reliable, high-performance, massive storage and database support.
- Attempts to support online detection of cyber attacks (through data mining) and their control.
- Support for transparency in (secure) site/data replication for high availability and quick response time (taking the site close to the user).
- Compute power from hyperclusters/Grids can be used for data mining for cyber-attack and fraud detection and control.
- Helps to build Compute Power Markets, ASPs, and computing portals.
PAPIA PC Cluster
- Cluster-based web servers, search engines, portals
- Scheduling and Single System Image
- Heterogeneous computing
- Reliability, high availability, and data recovery
- Parallel databases and high-performance, reliable mass-storage systems
- CyberGuard! Data mining for detection of cyber attacks, frauds, etc., and online control
- Data mining for identifying sales patterns and automatically tuning the portal for special sessions/festival sales
- eCash, eCheque, eBank, eSociety, eGovernment, eEntertainment, eTravel, eGoods, and so on
- Data/site replication and caching techniques
- Compute Power Market
- Infowares (yahoo.com, AOL.com)
- ASPs (application service providers)
- ...
[Figure: computational power improvement (C.P.I.) vs. number of processors - multiprocessor performance climbs as processors are added, while uniprocessor performance stays flat]
[Figure: human growth analogy - vertical growth up to about age 20, then horizontal growth thereafter; age axis 5 to 45+]
Why Parallel Processing NOW?
- The technology of PP is mature and can be exploited commercially; there has been significant R&D work on the development of tools & environments.
- Significant development in networking technology is paving the way for heterogeneous computing.
History of Parallel Processing
- PP can be traced to a tablet dated around 100 BC. The tablet has 3 calculating positions.
- Infer from the multiple positions: reliability or speed.
Motivating Factors
- The aggregate speed with which complex calculations are carried out by millions of neurons in the human brain is amazing, although an individual neuron's response is slow (milliseconds).
- This demonstrates the feasibility of PP.
Taxonomy of Architectures
- SISD: conventional
- SIMD: data parallel, vector computing
- MISD: systolic arrays
- MIMD: very general, multiple approaches
- SISD: mainframes, workstations, PCs
- SIMD shared memory: vector machines, Cray, ...
- MIMD shared memory: Sequent, KSR, Tera, SGI, Sun
- SIMD distributed memory: DAP, TMC CM-2, ...
- MIMD distributed memory: Cray T3D, Intel, Transputers, TMC CM-5, plus recent workstation clusters (IBM SP2, DEC, Sun, HP)
NOTE: Modern sequential machines are not purely SISD - advanced RISC processors use many concepts from vector and parallel architectures (pipelining, parallel execution of instructions, prefetching of data, etc.) in order to achieve one or more arithmetic operations per clock cycle.
- Vast numbers of underutilized workstations are available to use.
- Huge numbers of unused processor cycles and resources could be put to good use in a wide variety of application areas.
- Reluctance to buy supercomputers, due to their cost and short life span.
- Distributed compute resources fit better into today's funding model.
Technology Trend
Cluster Computing..
The Commodity Supercomputing!
- Beowulf (CalTech and NASA) - USA
- CCS (Computing Centre Software) - Paderborn, Germany
- Condor - Wisconsin State University, USA
- DQS (Distributed Queuing System) - Florida State University, USA
- EASY - Argonne National Lab, USA
- HPVM (High Performance Virtual Machine) - UIUC & now UCSB, USA
- far - University of Liverpool, UK
- Gardens - Queensland University of Technology, Australia
- MOSIX - Hebrew University of Jerusalem, Israel
- MPI (MPI Forum; MPICH is one of the popular implementations)
- NOW (Network of Workstations) - Berkeley, USA
- NIMROD - Monash University, Australia
- NetSolve - University of Tennessee, USA
- PBS (Portable Batch System) - NASA Ames and LLNL, USA
- PVM - Oak Ridge National Lab / UTK / Emory, USA
- Codine (Computing in Distributed Network Environment) - GENIAS GmbH, Germany
- LoadLeveler - IBM Corp., USA
- LSF (Load Sharing Facility) - Platform Computing, Canada
- NQE (Network Queuing Environment) - Craysoft Corp., USA
- OpenFrame - Centre for Development of Advanced Computing, India
- RWPC (Real World Computing Partnership) - Japan
- Unixware (SCO - Santa Cruz Operation) - USA
- Solaris-MC (Sun Microsystems) - USA
- ClusterTools (a number of free HPC cluster tools from Sun)
- A number of commercial vendors worldwide offer clustering solutions, including IBM, Compaq, Microsoft, and startups like TurboLinux, HPTI, Scali, BlackStone, ...
Cycle Stealing
- Usually a workstation is owned by an individual, group, department, or organisation - it is dedicated to the exclusive use of its owners.
- This brings problems when attempting to form a cluster of workstations for running distributed applications.
Cycle Stealing
Typically, there are three types of owners, who use their workstations mostly for:
1. Sending and receiving email and preparing documents.
2. Software development - the edit, compile, debug, and test cycle.
3. Running compute-intensive applications.
Cycle Stealing
- Cluster computing aims to steal spare cycles from (1) and (2) to provide resources for (3).
- However, this requires overcoming the ownership hurdle - people are very protective of their workstations.
- It usually requires an organisational mandate that computers are to be used in this way.
- Stealing cycles outside standard work hours (e.g. overnight) is easy; stealing idle cycles during work hours without impacting interactive use (both CPU and memory) is much harder.
[Figures: evolution of computing platforms - minicomputers (1970s), PCs (1980s), and today's PCs, vector supercomputers, minicomputers, and MPPs]
What is a cluster?
- A cluster is a type of parallel or distributed processing system which consists of a collection of interconnected stand-alone/complete computers cooperatively working together as a single, integrated computing resource.
- A typical cluster:
  - Network: faster, closer connection than a typical network (LAN)
  - Low-latency communication protocols
  - Looser connection than SMP
- Complete computers (HW & SW) shipped in millions: killer micros, killer RAM, killer disks, killer OS, killer networks, killer apps.
- Switch-based networks coming (ATM)
- Interfaces simple & fast (Active Messages)
- Striped files preferred (RAID)
- Demise of mainframes, supercomputers, & MPPs
Architectural Drivers
- Processor, cache, bus, and memory design and engineering: $ => performance
...Architectural Drivers
- Individual node performance can be improved by adding additional resources (new memory blocks/disks).
- New nodes can be added, or nodes can be removed.
- Clusters of clusters and metacomputing.
- Software: threads, PVM, MPI, DSM, C, C++, Java, Parallel C++, compilers, debuggers, OS, etc.
- 100 Sun UltraSparcs, 200 disks
- Myrinet SAN, 160 MB/s
- Fast communication: AM, MPI, ...
- Ether/ATM switched external net
- Global OS, self-configuring
Basic Components
[Figure: a cluster node - processor (P), cache ($), and memory (M) attached via the I/O bus to a Myricom NIC on 160 MB/s Myrinet]
Millennium PC Clumps
- Inexpensive, easy to manage
- Cluster replicated in many departments
- Prototype for a very large PC cluster
So What's So Different?
- Commodity parts?
- Communications packaging?
- Incremental scalability?
- Independent failure?
- Intelligent network interfaces?
- Complete system on every node
Windows of Opportunities
- MPP/DSM: compute across multiple systems
- Network RAM: idle memory in other nodes - page across other nodes' idle memory
- Software RAID: stripe data across an array of workstation disks (detailed below)
- Multi-path communication: communicate across multiple available networks
Parallel Processing
- Good floating-point performance
- Low-overhead communication
- Scalable network bandwidth
- Parallel file system
Network RAM
- The performance gap between processor and disk has widened.
- Thrashing to disk degrades performance significantly.
- Paging across networks can be effective with high-performance networks and an OS that recognizes idle machines.
- Typically, thrashing to network RAM can be 5 to 10 times faster than thrashing to disk (a disk access costs on the order of 10 ms, while fetching a page from a remote node's idle memory over a fast network takes on the order of 1-2 ms).
I/O bottleneck:
- Microprocessor performance is improving by more than 50% per year.
- Disk access improvement is < 10% per year.
- Applications often perform I/O.
Software RAID:
- RAID cost per byte is high compared to single disks.
- RAIDs are connected to host computers, which are often a performance and availability bottleneck.
- RAID in software - writing data across an array of workstation disks - provides performance and, through redundancy, availability.
Clustering Today
1. Very high-performance microprocessors: workstation performance = yesterday's supercomputers
Multiple high-performance components:
- PCs
- Workstations
- SMPs (CLUMPS)
- Distributed HPC systems, leading to metacomputing
They can be based on different architectures and run different OSes.
There are many processor options (CISC/RISC/VLIW/vector, ...):
- Intel: Pentium, Xeon, Merced
- Sun: SPARC, UltraSPARC
- HP PA
- IBM RS6000/PowerPC
- SGI MIPS
- Digital Alpha
Efforts to integrate memory, processing, and networking into a single chip:
- IRAM (CPU & Mem): http://iram.cs.berkeley.edu
- Alpha 21364 (CPU, memory controller, NI)
Cluster Components 2: OS
- Ethernet (10 Mbps), Fast Ethernet (100 Mbps), Gigabit Ethernet (1 Gbps)
- SCI (Dolphin - MPI - 12 microsecond latency)
- ATM
- Myrinet (1.2 Gbps)
- Digital Memory Channel
- FDDI
- Myrinet has an NIC with user-level access support.
- The Alpha 21364 processor integrates processing, memory controller, and network interface into a single chip.
- Traditional OS-supported facilities (heavyweight due to protocol processing): sockets (TCP/IP), pipes, etc. (a minimal sketch of this kernel-mediated path follows below)
- Lightweight protocols (user level): Active Messages (Berkeley), Fast Messages (Illinois), U-Net (Cornell), XTP (Virginia)
- Systems can be built on top of the above protocols
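For illustration, a minimal sketch in C (added here, not from the original slides) of the traditional kernel-mediated path: a UNIX-domain socket pair plus fork stands in for a connection between two nodes. Every read/write below is a system call involving kernel protocol processing and data copies - the per-message overhead that user-level protocols such as Active Messages avoid.

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    /* A connected socket pair stands in for a network link between two nodes. */
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) < 0) {
        perror("socketpair");
        return 1;
    }
    if (fork() == 0) {                  /* child acts as the "remote" peer */
        char buf[64];
        ssize_t n = read(fds[1], buf, sizeof buf - 1);  /* system call: kernel copies data in */
        if (n > 0) {
            buf[n] = '\0';
            printf("peer received: %s\n", buf);
        }
        _exit(0);
    }
    const char *msg = "ping";
    write(fds[0], msg, strlen(msg));    /* system call: trap + protocol processing + copy out */
    wait(NULL);                         /* reap the child */
    return 0;
}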
Middleware resides between the OS and applications and offers an infrastructure for supporting:
- Single System Image (SSI)
- System Availability (SA)
SSI makes a collection of machines appear as a single machine (a globalised view of system resources), e.g. telnet cluster.myinstitute.edu.
SA - checkpointing and process migration, etc.
- Hardware: DEC Memory Channel, DSM (Alewife, DASH), SMP techniques
- OS / gluing layers: Solaris MC, Unixware, GLUnix
- Threads (PCs, SMPs, NOW, ...): POSIX Threads, Java Threads (a minimal POSIX threads sketch follows below)
- MPI: Linux, NT, on many supercomputers
- PVM
- Software DSMs (Shmem)
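As a minimal illustration of the POSIX threads interface listed above (a sketch added here, not from the original deck), each spawned thread greets with its id, in the spirit of the MPI example later in this deck. Compile with: cc hello_threads.c -lpthread

#include <pthread.h>
#include <stdio.h>

/* Each worker thread prints a greeting carrying its id. */
static void *worker(void *arg) {
    long id = (long)arg;
    printf("Hello, I am worker thread %ld!\n", id);
    return NULL;
}

int main(void) {
    pthread_t threads[4];
    for (long i = 0; i < 4; i++)
        pthread_create(&threads[i], NULL, worker, (void *)i);
    for (int i = 0; i < 4; i++)
        pthread_join(threads[i], NULL);   /* wait for all workers to finish */
    return 0;
}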
- Compilers: C/C++/Java; parallel programming with C++ (MIT Press book)
- RAD (rapid application development) tools: GUI-based tools for PP modeling
- Debuggers
- Performance analysis tools
- Visualization tools
- System availability (HA): clusters offer inherent high system availability due to the redundancy of hardware, operating systems, and applications.
- Hardware fault tolerance: redundancy for most system components (e.g. disk RAID), in both hardware and software.
- OS and application reliability: run multiple copies of the OS and applications and rely on this redundancy.
- Scalability: add servers to the cluster, add more clusters to the network as the need arises, or add CPUs to an SMP.
- High performance: running cluster-enabled programs.
Clusters Classification..1
- Based on application target:
  - High-performance (HP) clusters
  - High-availability (HA) clusters
Clusters Classification..2
- Based on node ownership:
  - Dedicated clusters
  - Non-dedicated clusters (adaptive parallel computing / cycle stealing)
Clusters Classification..3
- Based on node hardware:
  - Clusters of PCs (CoPs)
  - Clusters of workstations (COWs)
  - Clusters of SMPs (CLUMPs)
Clusters Classification..4
- Based on node operating system:
  - Linux clusters (Beowulf)
  - Solaris clusters (Berkeley NOW)
  - NT clusters (HPVM)
  - AIX clusters (IBM SP2)
  - SCO/Compaq clusters (Unixware)
  - Digital VMS clusters, HP clusters, ...
Clusters Classification..5
- Based on node components' architecture & configuration (processor architecture, node type: PC/workstation, and OS: Linux/NT):
  - Homogeneous clusters: all nodes have a similar configuration
  - Heterogeneous clusters: nodes based on different processors and running different OSes
Clusters Classification..6a
- Based on levels of clustering:
[Figure: levels of clustering along three dimensions - (1) CPU / memory / I/O / OS technology, (2) platform (uniprocessor upward), and (3) network scope (workgroup, department, campus, enterprise, public) - scaling out to metacomputing (GRID)]
- Size scalability (physical & application)
- Enhanced availability (failure management)
- Single System Image (look-and-feel of one system)
- Fast communication (networks & protocols)
- Load balancing (CPU, net, memory, disk)
- Security and encryption (clusters of clusters)
- Distributed environment (social issues)
- Manageability (administration and control)
- Programmability (simple API if required)
- Applicability (cluster-aware and non-aware applications)
[Figure: the cluster software stack - applications run over PVM / MPI / RSH on the hardware/OS; the '???' in between is the missing middleware layer]
Cluster computing should support:
- Multi-user, time-sharing environments
- Nodes with different CPU speeds and memory sizes (heterogeneous configuration)
- Many processes with unpredictable requirements
Unlike SMP, there are insufficient bonds between nodes: each computer operates independently, leading to inefficient utilization of resources.
[Figure: the same stack with 'middleware or underware' inserted between PVM / MPI / RSH applications and the hardware/OS]
- An interface between user applications and the cluster hardware and OS platform.
- Middleware packages support each other at the management, programming, and implementation levels.
- Middleware layers:
  - SSI layer
  - Availability layer: enables cluster services such as checkpointing, automatic failover, recovery from failure, and fault-tolerant operation among all cluster nodes.
- A single system image is the illusion, created by software or hardware, that presents a collection of resources as one, more powerful resource.
- SSI makes the cluster appear like a single machine to the user, to applications, and to the network.
- A cluster without SSI is not a cluster!
- Use of system resources transparently
- Transparent process migration and load balancing across nodes
- Improved reliability and higher availability
- Improved system response time and performance
- Simplified system management
- Reduction in the risk of operator errors
- Users need not be aware of the underlying system architecture to use these machines effectively
- Single file hierarchy: xFS, AFS, Solaris MC Proxy
- Single control point: management from a single GUI
- Single virtual networking
- Single memory space: Network RAM / DSM
- Single job management: GLUnix, Codine, LSF
- Single user interface: like a workstation/PC windowing environment (CDE in Solaris/NT); it may even use web technology
- Single I/O space: any node can access any peripheral or disk device without knowledge of its physical location.
- Single process space: any process on any node can create processes with cluster-wide process ids, and they communicate through signals, pipes, etc. as if they were on a single node.
- Checkpointing: saves process state and intermediate results in memory or to disk to support rollback recovery when a node fails; process migration supports load balancing.
SSI levels follow the computer-science notion of levels of abstraction (a house is at a higher level of abstraction than walls, ceilings, and floors):
- Application and subsystem level
- Operating system kernel level
- Hardware level
SSI at the application and subsystem level ((c) In Search of Clusters):
- subsystem: boundary = a subsystem; importance = SSI for all applications of the subsystem
- file system: boundary = shared portion of the file system; importance = implicitly supports many applications and subsystems
- toolkit: boundary = explicit toolkit facilities (user, service name, time); importance = best level of support for heterogeneous systems
SSI at the operating system kernel level ((c) In Search of Clusters):
- kernel: boundary = each name space (files, processes, pipes, devices, etc.); importance = kernel support for applications and adm subsystems
- kernel interfaces: boundary = each type of kernel object (files, processes, etc.); importance = modularizes SSI code within the kernel
- virtual memory (examples: none supporting an operating system kernel): boundary = each distributed virtual memory space; importance = may simplify implementation of kernel objects
- microkernel (examples: Mach, PARAS, Chorus, OSF/1 AD, Amoeba): boundary = each service outside the microkernel; importance = implicit SSI for all system services
SSI Characteristics
1. Every SSI has a boundary.
2. Single-system support can exist at different levels within a system, one able to be built on another.
- Benefits: makes the system quickly portable, tracks vendor software upgrades, and reduces development time.
- I.e., new systems can be built quickly by mapping new services onto the functionality provided by the layer beneath. E.g.: GLUnix.
- OS-level SSI: SCO NSC UnixWare, Solaris-MC, MOSIX, ...
- Middleware-level SSI: PVM, TreadMarks (DSM), GLUnix, Condor, Codine, Nimrod, ...
- Application-level SSI: PARMON, Parallel Oracle, ...
http://www.sco.com/products/clustering/
[Figure: SCO UnixWare clustering - on each UP or SMP node, users, applications, and systems management issue standard OS kernel calls to standard SCO UnixWare with clustering hooks plus extensions; nodes reach devices and other nodes over ServerNet]
- Single cluster-wide filesystem view
- Transparent cluster-wide device access
- Transparent swap-space sharing
- Transparent cluster-wide IPC
- High-performance internode communications
- Transparent cluster-wide processes, migration, etc.
- Node-down cleanup and resource failover
- Transparent cluster-wide parallel TCP/IP networking
- Application availability
- Cluster-wide membership and cluster time sync
- Cluster system administration
- Load leveling
Solaris MC: a global file system, globalized process management, and globalized networking and I/O.
[Figure: Solaris MC architecture - applications use the standard system call interface; Solaris MC modules (file system, processes, networking), built on a C++ object framework, cooperate with other nodes]
http://www.sun.com/research/solaris-mc/
Solaris MC components:
- Object and communication support
- High-availability support
- PXFS global distributed file system
- Process management
- Networking
- An OS module (layer) that provides applications with the illusion of working on a single system
- Remote operations are performed like local operations
- Transparent to the application - user interface unchanged
[Figure: MOSIX sits between PVM / MPI / RSH applications and the hardware/OS]
Main tool: preemptive process migration that can migrate any process, anywhere, anytime.
- Supervised by distributed algorithms that respond on-line to global resource availability - transparently
- Load balancing: migrates processes from overloaded to under-loaded nodes
- Memory ushering: migrates processes from a node that has exhausted its memory, to prevent paging/swapping
- 50 Pentium-II 300 MHz, 38 Pentium-Pro 200 MHz (some are SMPs), 16 Pentium-II 400 MHz (some are SMPs)
- Over 12 GB cluster-wide RAM
- Connected by a 2.56 Gb/s Myrinet LAN
- Runs Red Hat 6.0, based on kernel 2.2.7
- Upgrades: HW with Intel, SW with Linux
- Download MOSIX: http://www.mosix.cs.huji.ac.il/
NOW @ Berkeley
- Design & implementation of higher-level systems: global OS (GLUnix), parallel file systems (xFS), fast communication (HW for Active Messages), application support
- Overcoming technology shortcomings: fault tolerance, system management
- NOW goal: faster for parallel AND sequential
http://now.cs.berkeley.edu/
[Figure: NOW software architecture - parallel applications over services such as the name server and scheduler]
- Revolutionary (MPP style): write new programs from scratch using MPP languages, compilers, libraries, ...
- Porting: port programs from mainframes, supercomputers, MPPs, ...
- Evolutionary: take a sequential program and use:
  1) Network RAM: first use the memory of many computers to reduce disk accesses; if not fast enough, then:
  2) Parallel I/O: use many disks in parallel for accesses not in the file cache; if not fast enough, then:
  3) Parallel program: change the program until it sees enough processors that it is fast => large speedup without a fine-grain parallel program
- DSM
- Threads/OpenMP (enabled for clusters) - a minimal OpenMP sketch follows below
- Java threads (HKU JESSICA, IBM cJVM)
- Parametric computations: Nimrod/Clustor
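A minimal OpenMP sketch of the shared-memory style listed above (an added illustration, not from the original deck; the loop and bound are arbitrary). An OpenMP-aware compiler is assumed, e.g. cc -fopenmp sum.c:

#include <omp.h>
#include <stdio.h>

int main(void) {
    double sum = 0.0;
    /* Distribute loop iterations across the threads of one SMP/cluster node;
       the reduction clause combines the per-thread partial sums. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 1; i <= 1000000; i++)
        sum += 1.0 / ((double)i * i);   /* series converges to pi^2/6 */
    printf("sum = %f (threads available: %d)\n", sum, omp_get_max_threads());
    return 0;
}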
Threads
- Portable: once coded, it can run on virtually all HPC platforms, including clusters!
- Performance: exploits native hardware features
- Functionality: over 115 functions in MPI 1.0 - environment management, point-to-point & collective communications, process groups, communicators, derived data types, and virtual topology routines
[Figure: a master process collects "Hello, ..." messages from worker processes]
#include <stdio.h>
#include <string.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int my_rank, p, source, dest, tag = 0;
    char message[100];
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    if (my_rank == 0) {   /* Master process */
        for (source = 1; source < p; source++) {
            MPI_Recv(message, 100, MPI_CHAR, source, tag, MPI_COMM_WORLD, &status);
            printf("%s\n", message);
        }
    } else {              /* Worker process */
        sprintf(message, "Hello, I am your worker process %d!", my_rank);
        dest = 0;
        MPI_Send(message, strlen(message) + 1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
    }

    /* Shutdown MPI environment */
    MPI_Finalize();
    return 0;
}
Execution
% cc -o hello hello.c -lmpi
% mpirun -p2 hello
Hello, I am your worker process 1!
% mpirun -p4 hello
Hello, I am your worker process 1!
Hello, I am your worker process 2!
Hello, I am your worker process 3!
% mpirun hello
(no output - there are no workers, so no greetings)
PARMON
[Figure: PARMON architecture - the parmon client monitors cluster nodes, each running the parmond daemon, across a high-speed switch]
http://www.buyya.com/parmon/
- Eliminate the gap between accessing local disk(s) and remote disks
- Support a persistent programming paradigm
- Allow striping on remote disks to accelerate parallel I/O operations
- Facilitate the implementation of distributed checkpointing and recovery schemes
- Integrated I/O space
- Addressing and mapping mechanisms
- Data movement procedures
[Figure: an integrated I/O space maps sequential addresses onto local disks (RADD space), shared RAIDs (NASD space), and peripherals (NAP space)]
[Figure: addressing and mapping - user applications go through a name agent and block mover to I/O agents serving the RADD, NASD, and NAP spaces]
[Figure: data movement between nodes via I/O agents]
What Next ??
- Clusters of clusters (HyperClusters)
- Global Grid
- Interplanetary Grid
- Universal Grid??
[Figure: HyperCluster architecture - multiple clusters (each with a scheduler, master daemon, execution daemons, and clients) interconnected over a LAN/WAN, with graphical submission and control]
What is a Grid?
- A Grid couples geographically distributed resources across local/wide-area networks (enterprise, organisations, or the Internet) and presents them as a unified, integrated (single) resource.
http://www.sun.com/hpc/
Grid Application-Drivers
Old and new applications are being enabled by the coupling of computers, databases, instruments, people, etc.
Grid Components
- Grid applications and portals: scientific, engineering, collaboration
- Grid tools: languages, libraries, debuggers, monitoring, resource brokers, web tools
- Grid middleware: communication, information, process, data access, QoS
- Grid fabric: operating systems, queuing systems, computers, clusters, storage systems, data sources, scientific instruments
Public forums: Computing Portals, Grid Forum, European Grid Forum, IEEE TFCC!, GRID2000, and more.
Grid projects worldwide:
- USA: Globus, Legion, JAVELIN, AppLeS, NASA IPG, Condor, Harness, NetSolve, NCSA Workbench, WebFlow, EveryWhere, and many more
- Europe: UNICORE, MOL, METODIS, Globe, Poznan Metacomputing, CERN Data Grid, MetaMPI, DAS, JaWS, and many more
- Plus projects in Australia, Japan, and elsewhere
http://www.gridcomputing.com/
NetSolve: client/server/agent-based computing
- An easy-to-use tool providing efficient and uniform access to a variety of scientific packages on UNIX platforms
- Client-server design with network-enabled solvers
- Seamless access to network resources; non-hierarchical system
- Load balancing and fault tolerance
- Interfaces to Fortran, C, Java, Matlab, and more
- Software is available
[Figure: the NetSolve client sends a request to the NetSolve agent, which picks a server from the software repository; the server computes and returns the reply]
HARNESS: http://www.epm.ornl.gov/harness/
[Figure: HARNESS daemons running on hosts spanning multiple virtual machines]
Research issues with parallel plug-ins include heterogeneity, synchronization, interoperation, and partial success. Three typical cases:
- Load a plug-in into a single host of the VM without communication
- Load a plug-in into a single host and broadcast it to the rest of the VM
- Load a plug-in into every host of the VM with synchronization
http://www.dgs.monash.edu.au/~davida/nimrod.html
Nimrod/G Architecture
[Figure: Nimrod/G clients drive the Nimrod engine with its persistent store; the dispatcher schedules jobs through middleware services (TM, TS); the Grid Explorer (GE) queries the GIS; jobs run on the GUSTO testbed via local resource managers and trade servers. RM: local resource manager, TS: trade server]
[Figure: the trade manager and schedule advisor interact with a trade server for trading and resource reservation; resource allocation spans resources R1, R2, ..., Rn within a resource domain]
DSMs
http://www.cs.umd.edu/~keleher/dsm.html
Beowulf:
http://www.beowulf.org
Metacomputing
http://www.sis.port.ac.uk/~mab/Metacomputing/
Cluster Computing: The Commodity Supercomputer, by Mark Baker & Rajkumar Buyya, Journal of Software: Practice and Experience (available from my web page).
http://www.csse.monash.edu.au/~rajkumar/cluster/
http://www.ieeetfcc.org
TFCC Activities...
- Network technologies
- OS technologies
- Parallel I/O
- Programming environments
- Java technologies
- Algorithms and applications
- Analysis and profiling
- Storage technologies
- High-throughput computing
TFCC Activities...
- High availability
- Single System Image
- Performance evaluation
- Software engineering
- Education
- Newsletter
- Industrial wing
- TFCC regional activities
All of the above have their own pages; see pointers from http://www.ieeetfcc.org
TFCC Activities...
- Mailing list, workshops, conferences, tutorials, web resources, etc.
- Resources for introducing the subject at senior undergraduate and graduate levels
- Tutorials/workshops at IEEE chapters
- ... and so on
- FREE MEMBERSHIP - please join! Visit the TFCC page for more details.
Clusters Revisited
Summary
Conclusions
Clusters are promising..
- They solve the parallel processing paradox
- They offer incremental growth and match funding patterns
- New trends in hardware and software technologies are likely to make clusters even more promising, so that cluster-based supercomputers can be seen everywhere!
Backup Slides...
SISD: A Conventional Computer
[Figure: data input -> processor -> data output]
- Speed is limited by the rate at which the computer can transfer information internally.
- Examples: PC, Macintosh, workstations
MISD Architecture
- More of an intellectual exercise than a practical configuration.
- A few were built, but none were commercially available.
SIMD Architecture
[Figure: a single instruction stream drives processors A, B, and C, each operating on its own data stream]
MIMD Architecture
[Figure: independent instruction streams A, B, and C drive processors A, B, and C]
- Unlike SISD and MISD, a MIMD computer works asynchronously.
- Shared memory (tightly coupled) MIMD
- Distributed memory (loosely coupled) MIMD
Shared memory MIMD:
[Figure: processors connected through memory buses to a shared global memory]
Distributed memory MIMD:
[Figure: each processor has its own memory bus and local memory; processors communicate over an IPC channel]
- Communication: IPC over a high-speed network; the network can be configured as a tree, mesh, cube, etc.
- Unlike shared-memory MIMD, easily/readily expandable
- Highly reliable (any CPU failure does not affect the whole system)