
grid computing

(1) May refer to the type of cloud computing that provides only the server
infrastructure. See cloud computing.

(2) A parallel processing architecture in which CPU resources are shared across a
network, and all machines function as one large supercomputer. It allows unused
CPU capacity in all participating machines to be allocated to one application that is
extremely computation intensive and programmed for parallel processing.

There Is a Lot of Idle Time


In a large enterprise, hundreds or thousands of desktop machines sit idle at any
given moment. Even when a user is at the computer, reading the screen rather than
typing or clicking, the machine is effectively idle. These unused cycles can be put to
work on large computational problems. Likewise, millions of Internet users collectively
waste enormous numbers of machine cycles every minute that could be harnessed instead.
This is precisely what the Search for Extraterrestrial Intelligence program does with
Internet users all over the world (see SETI).
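
As a rough sketch of how such idle cycles might be harvested, consider the following
Python loop, which fetches and processes work only when local CPU utilization is low.
The fetch_work_unit, process, and report_result functions and the 5 percent idle
threshold are hypothetical stand-ins, not the protocol of SETI@home or any real
project; the psutil library is a third-party package assumed to be installed.

    import time
    import psutil  # third-party package: pip install psutil

    IDLE_THRESHOLD = 5.0  # percent CPU use below which we treat the machine as idle

    def fetch_work_unit():
        """Hypothetical stand-in for downloading a work unit from a project server."""
        return {"id": 42, "data": range(1_000_000)}

    def process(unit):
        """Hypothetical computation: here, just a checksum over the unit's data."""
        return sum(unit["data"]) % 97

    def report_result(unit_id, result):
        """Hypothetical stand-in for uploading the result to the project server."""
        print(f"work unit {unit_id} -> {result}")

    while True:
        # Sample CPU utilization over one second; only work when the machine is idle.
        if psutil.cpu_percent(interval=1) < IDLE_THRESHOLD:
            unit = fetch_work_unit()
            report_result(unit["id"], process(unit))
        else:
            time.sleep(10)  # back off while the user is busy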

Naturally, grid computing over the Internet requires more extensive security than
within a single enterprise, and robust authentication is employed in such
applications.

7 things you should know about...


Grid Computing
1) What is it?
Computing grids are conceptually not unlike electrical grids. In an electrical grid, wall outlets allow us to link
to an infrastructure of resources that generate, distribute, and bill for electricity. When you connect to the
electrical grid, you don’t need to know where the power plant is or how the current gets to you. Grid
computing uses middleware to coordinate disparate IT resources across a network, allowing them to function
as a virtual whole. The goal of a computing grid, like that of the electrical grid, is to provide users with
access to the resources they need, when they need them. Grids address two distinct but related goals:
providing remote access to IT assets, and aggregating processing power. The most obvious resource
included in a grid is a processor, but grids also encompass sensors, data-storage systems, applications, and
other resources. One of the first commonly known grid initiatives was the SETI@home project, which solicited
several million volunteers to download a screensaver that used idle processor capacity to analyze data in the
search for extraterrestrial life. In a more recent example, the Telescience Project provides remote access to
an extremely powerful electron microscope at the National Center for Microscopy and Imaging Research in
San Diego. Users of the grid can remotely operate the microscope, allowing new levels of access to the
instrument and its capabilities.

2) Who’s doing it?


Many grids are appearing in the sciences, in fields such as chemistry,
physics, and genetics, and cryptologists and mathematicians
have also begun working with grid computing. Grid technology has
the potential to significantly impact other areas of study with heavy
computational requirements, such as urban planning. Another
important area for the technology is animation, which requires
massive amounts of computational power and is a common tool in
a growing number of disciplines. By making resources available to
students, these communities are able to effectively model authentic
disciplinary practices.

3) How does it work?


Grids use a layer of middleware to communicate with and manipulate
heterogeneous hardware and data sets. In some fields—
astronomy, for example—hardware cannot reasonably be moved
and is prohibitively expensive to replicate on other sites. In other
instances, databases vital to research projects cannot be duplicated
and transferred to other sites. Grids overcome these logistical
obstacles and open the tools of research to distant faculty and students.
A grid might coordinate scientific instruments in one country
with a database in another and processors in a third. From a user’s
perspective, these resources function as a single system—differences
in platform and location become invisible.
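
One way to picture the middleware's role is as a registry that hides each resource's
platform and location behind a uniform interface. The following Python sketch is purely
illustrative; the Resource classes, the Grid registry, and the hostnames are hypothetical
and do not correspond to any real grid middleware API.

    from abc import ABC, abstractmethod

    class Resource(ABC):
        """Uniform interface the middleware exposes for every grid resource."""
        @abstractmethod
        def submit(self, job): ...

    class RemoteInstrument(Resource):
        def __init__(self, endpoint):
            self.endpoint = endpoint  # e.g. an instrument's control service (hypothetical)
        def submit(self, job):
            return f"instrument run of {job!r} via {self.endpoint}"

    class ComputeNode(Resource):
        def __init__(self, host):
            self.host = host
        def submit(self, job):
            return f"computed {job!r} on {self.host}"

    class Grid:
        """Toy middleware: users address resources by name, never by location."""
        def __init__(self):
            self._registry = {}
        def register(self, name, resource):
            self._registry[name] = resource
        def submit(self, name, job):
            return self._registry[name].submit(job)

    grid = Grid()
    grid.register("microscope", RemoteInstrument("ncmir.example.org"))  # hypothetical host
    grid.register("cluster", ComputeNode("hpc.example.edu"))            # hypothetical host
    print(grid.submit("microscope", "image sample 7"))
    print(grid.submit("cluster", "render frame 12"))

From the user's point of view, only the resource name matters; the registry could route
the same call to a machine in the next room or on another continent.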
On a typical college or university campus, many computers sit
idle much of the time. A grid can provide significant processing
power for users with extraordinary needs. Animation software, for
instance, which is used by students in the arts, architecture, and
other departments, eats up vast amounts of processor capacity.
An industrial design class might use resource-intensive software to
render highly detailed three-dimensional images. In both cases, a
campus grid slashes the amount of time it takes students to work
with these applications. All of this happens not from additional
capacity but through the efficient use of existing power.

4) Why is it significant?
Grids make possible research projects that were formerly impractical
or infeasible because of the physical location of vital resources.
Using a grid, researchers in Great Britain, for example, can conduct
research that relies on databases across Europe, instrumentation
in Japan, and computational power in the United States. Making
resources available in this way exposes students to the tools of the
profession, facilitating new possibilities for research and instruction,
particularly at the undergraduate level.
Although speeds and capacities of processors continue to increase,
resource-intensive applications are proliferating as well. At many
institutions, certain campus users face ongoing shortages of computational
power, even as large numbers of computers are underused.
With grids, programs previously hindered by constraints on
computing power become possible.

5) What are the downsides?


Being able to access distant IT assets—and have them function
seamlessly with tools on different platforms—can be a boon to
researchers, but it presents real security concerns to organizations
responsible for those resources. An institution that makes its IT
assets available to researchers or students on other campuses and
in other countries must be confident that its involvement does not
expose those assets to unnecessary risks. Similarly, directors of
research projects will be reluctant to take advantage of the opportunities
of a grid without assurances that the integrity of the project,
its data, and its participants will be protected.
Another challenge facing grids is the complexity of building middleware
that can knit together collections of resources to
work as a unit across network connections that often span oceans
and continents. Scheduling the availability of IT resources connected
to a grid can also present new challenges to organizations that
manage those resources. Increasing standardization of protocols
addresses some of the difficulty in creating smoothly functioning
grids, but, by their nature, grids that can provide unprecedented
access to facilities and tools involve a high level of complexity.

6) Where is it going?
Because the number of functioning grids is relatively small, it may
take time for the higher education community to recognize the
opportunities that grids provide and to judge the feasibility of such projects.
As the number and capacity of high-speed networks increase,
however, particularly those catering to the research community and
higher education, new opportunities will arise to combine IT assets
in ways that expose students to the tools and applications relevant
to their studies and to dramatically reduce the amount of time
required to process data-intensive jobs. Further, as grids become
more widespread and easier to use, increasing numbers and kinds
of IT resources will be included on grids. We may also start to see
more grid tie-ins for desktop applications. While there are obvious
advantages to solving a complex genetic problem using grid computing,
being able to harness spare computing cycles to manipulate
an image in Photoshop or create a virtual world in a simulation
may be some of the first implementations of grids.

7) What are the implications for teaching and learning?
Higher education stands to reap significant benefits from grid computing
by creating environments that expose students to the “tools
of the trade” in a wide range of disciplines. Rather than using mock
or historical data from an observatory in South America, for example,
a grid could let students on other continents actually use those
facilities and collect their own data. Learning experiences become
far richer, providing opportunities that otherwise would be impossible
or would require travel. The access that grid computing offers to
particular resources can allow institutions to deepen, and in some
cases broaden, the scope of their educational programs.
Grid computing encourages partnerships among higher education
institutions and research centers. Because they bring together
unique tools in novel groupings, grids have the potential to incorporate
technology into disciplines with traditionally lower involvement
with IT, including the humanities, social sciences, and the
arts. Grids can leverage previous investments in hardware and
infrastructure to provide processing power and other technology
capabilities to campus constituents who need them. This reallocation
of institutional resources is especially beneficial for applications
with high demands for processing and storage, such as modeling,
animations, digital video production, or biomedical studies.

grid computing, the concurrent application of the processing and data storage resources of many
computers in a network to a single problem. It also can be used for load balancing as well as high
availability by employing multiple computers—typically personal computers and workstations—that
are remote from one another, multiple data storage devices, and redundant network connections.
Grid computing requires the use of parallel processing software that can divide a program among
as many as several thousand computers and restructure the results into a single solution of the
problem. Primarily for security reasons, grid computing is typically restricted to multiple
computers within the same enterprise.
Grid computing evolved from the parallel processing systems of the 1970s, the large-scale
cluster computing systems of the 1980s, and the distributed processing systems of the 1990s, and
is often referred to by these names. Grid computing can make a more cost-effective use of
computer resources, can be applied to solve problems that require large amounts of computing
power, and may be the forerunner of pervasive computing—computer applications that pervade
our environment without our being aware of their presence.

Grid computing (or the use of computational grids) is the combination of computer
resources from multiple administrative domains applied to a common task, usually to a scientific,
technical or business problem that requires a great number of computer processing cycles or the
need to process large amounts of data.
One of the main strategies of grid computing is using software to divide and apportion pieces of
a program among several computers, sometimes up to many thousands. Grid computing is
distributed, large-scale cluster computing, as well as a form of network-distributed parallel
processing [1]. Grids vary in size from small — confined to a network of
computer workstations within a corporation, for example — to large, public collaborations
across many companies and networks. "The notion of a confined grid may also be known as an
intra-nodes cooperation whilst the notion of a larger, wider grid may thus refer to an inter-nodes
cooperation".[2] This inter-/intra-nodes cooperation "across cyber-based collaborative
organizations are also known as Virtual Organizations".[3]
It is a form of distributed computing whereby a “super and virtual computer” is composed of a
cluster of networked loosely coupled computers acting in concert to perform very large tasks.
This technology has been applied to computationally intensive scientific, mathematical, and
academic problems through volunteer computing, and it is used in commercial enterprises for
such diverse applications as drug discovery, economic forecasting, seismic analysis, and back-
office data processing in support of e-commerce and Web services.
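
The divide-and-apportion strategy described above can be illustrated with a short sketch.
Assuming an embarrassingly parallel job (here, summing a large range of integers), the
Python code below cuts the job into contiguous work units and farms them out; a local
process pool stands in for the remote grid nodes, since the point is only the splitting
and recombining logic.

    from concurrent.futures import ProcessPoolExecutor

    def make_work_units(n, chunks):
        """Split the range [0, n) into `chunks` contiguous work units."""
        step = n // chunks
        return [(i * step, n if i == chunks - 1 else (i + 1) * step)
                for i in range(chunks)]

    def run_unit(bounds):
        """The piece of the program each node runs on its own work unit."""
        lo, hi = bounds
        return sum(range(lo, hi))

    if __name__ == "__main__":
        units = make_work_units(10_000_000, chunks=8)
        # A local process pool simulates eight remote grid nodes.
        with ProcessPoolExecutor(max_workers=8) as pool:
            partials = list(pool.map(run_unit, units))
        print(sum(partials))  # partial results restructured into a single solution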
What distinguishes grid computing from conventional cluster computing systems is that grids
tend to be more loosely coupled, heterogeneous, and geographically dispersed. Also, while a
computing grid may be dedicated to a specialized application, it is often constructed with the aid
of general-purpose grid software libraries and middleware.
Grids versus conventional supercomputers
“Distributed” or “grid” computing in general is a special type of parallel computing that relies on
complete computers (with onboard CPU, storage, power supply, network interface, etc.)
connected to a network (private, public or the Internet) by a conventional network interface, such
as Ethernet. This is in contrast to the traditional notion of a supercomputer, which has many
processors connected by a local high-speed computer bus.
The primary advantage of distributed computing is that each node can be purchased as
commodity hardware, which when combined can produce similar computing resources to a
multiprocessor supercomputer, but at lower cost. This is due to the economies of scale of
producing commodity hardware, compared to the lower efficiency of designing and constructing
a small number of custom supercomputers. The primary performance disadvantage is that the
various processors and local storage areas do not have high-speed connections. This arrangement
is thus well suited to applications in which multiple parallel computations can take place
independently, without the need to communicate intermediate results between processors.
The high-end scalability of geographically dispersed grids is generally favorable, due to the low
need for connectivity between nodes relative to the capacity of the public Internet.
There are also some differences in programming and deployment. It can be costly and difficult to
write programs so that they can be run in the environment of a supercomputer, which may have a
custom operating system, or require the program to address concurrency issues. If a problem can
be adequately parallelized, a “thin” layer of “grid” infrastructure can allow conventional,
standalone programs to run on multiple machines (but each given a different part of the same
problem). This makes it possible to write and debug on a single conventional machine, and
eliminates complications due to multiple instances of the same program running in the same
shared memory and storage space at the same time.
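
Such a "thin" layer can be as simple as a script that launches the same unmodified
program on several machines, handing each a different slice of the input. The sketch
below uses ssh for illustration; the hostnames and the ./analyze program are
hypothetical placeholders, and combining the partial outputs is left to the application.

    import subprocess

    # Hypothetical worker hosts and a standalone program that takes a slice of the input.
    HOSTS = ["node1.example.org", "node2.example.org", "node3.example.org"]
    TOTAL_RECORDS = 300_000

    procs = []
    slice_size = TOTAL_RECORDS // len(HOSTS)
    for i, host in enumerate(HOSTS):
        start, end = i * slice_size, (i + 1) * slice_size
        # Each machine runs a conventional, unmodified program on its own slice.
        cmd = ["ssh", host, f"./analyze --start {start} --end {end}"]
        procs.append(subprocess.Popen(cmd, stdout=subprocess.PIPE))

    # Gather each node's partial output as it finishes.
    for p in procs:
        out, _ = p.communicate()
        print(out.decode().strip())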
Design considerations and variations
One feature of distributed grids is that they can be formed from computing resources belonging
to multiple individuals or organizations (known as multiple administrative domains). This can
facilitate commercial transactions, as in utility computing, or make it easier to assemble
volunteer computing networks.
One disadvantage of this feature is that the computers which are actually performing the
calculations might not be entirely trustworthy. The designers of the system must thus introduce
measures to prevent malfunctions or malicious participants from producing false, misleading, or
erroneous results, and from using the system as an attack vector. This often involves assigning
work randomly to different nodes (presumably with different owners) and checking that at least
two different nodes report the same answer for a given work unit. Discrepancies would identify
malfunctioning and malicious nodes.
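
A minimal version of this cross-checking scheme might look like the following sketch,
in which each work unit is assigned to two randomly chosen nodes and the two answers
are compared. The node behavior is simulated, since the point is only the validation
logic; a real system might require larger quorums.

    import random

    def assign_and_verify(unit, nodes, compute):
        """Send `unit` to two distinct random nodes; accept only matching answers."""
        a, b = random.sample(nodes, 2)  # presumably nodes with different owners
        ra, rb = compute(a, unit), compute(b, unit)
        if ra == rb:
            return ra  # quorum of two reached
        raise ValueError(f"discrepancy on unit {unit}: {a}={ra!r}, {b}={rb!r}")

    # Simulated nodes: one of them is faulty and corrupts its results.
    def simulated_compute(node, unit):
        result = unit * unit
        return result + 1 if node == "bad-node" else result

    nodes = ["node-a", "node-b", "node-c", "bad-node"]
    for unit in range(5):
        try:
            print(unit, "->", assign_and_verify(unit, nodes, simulated_compute))
        except ValueError as err:
            print("flagged for reassignment:", err)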
Due to the lack of central control over the hardware, there is no way to guarantee that nodes will
not drop out of the network at random times. Some nodes (like laptops or dialup Internet
customers) may also be available for computation but not network communications for
unpredictable periods. These variations can be accommodated by assigning large work units
(thus reducing the need for continuous network connectivity) and reassigning work units when a
given node fails to report its results as expected.
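
One way to handle such reassignment is with per-unit deadlines, as in the sketch below:
each work unit handed to a node carries a lease, and units whose leases expire are
returned to the pending queue for another node to pick up. The 30-minute lease is an
arbitrary assumption for illustration.

    import time
    from collections import deque

    LEASE_SECONDS = 30 * 60  # assumed lease: a node has 30 minutes to report back

    pending = deque(range(10))  # work units not yet handed out
    leased = {}                 # unit -> deadline for the node working on it

    def hand_out():
        """Give the next pending unit to a node and record its deadline."""
        unit = pending.popleft()
        leased[unit] = time.time() + LEASE_SECONDS
        return unit

    def complete(unit):
        """A node reported a result in time; retire the unit."""
        leased.pop(unit, None)

    def reap_expired():
        """Requeue units whose nodes went silent past their deadline."""
        now = time.time()
        for unit, deadline in list(leased.items()):
            if now > deadline:
                del leased[unit]
                pending.append(unit)  # another node will pick it up later

    # Example: hand out a unit, periodically reap silent nodes, retire finished work.
    u = hand_out()
    reap_expired()
    complete(u)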
The impacts of trust and availability on performance and development difficulty can influence
the choice of whether to deploy onto a dedicated computer cluster, to idle machines internal to
the developing organization, or to an open external network of volunteers or contractors.
In many cases, the participating nodes must trust the central system not to abuse the access that is
being granted, by interfering with the operation of other programs, mangling stored information,
transmitting private data, or creating new security holes. Other systems employ measures to
reduce the amount of trust “client” nodes must place in the central system such as placing
applications in virtual machines.
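
As an illustration of that last idea, a client could run each work unit inside an
isolated sandbox so the central system never touches the host directly. The sketch
below uses a Docker container as a stand-in for the virtual machines mentioned above;
the grid-worker image and the process_unit.py script are hypothetical.

    import subprocess

    # Run an untrusted work-unit processor inside a container sandbox.
    # (A container here stands in for the virtual machines mentioned above;
    # the image name and command are hypothetical.)
    result = subprocess.run(
        ["docker", "run", "--rm",
         "--network", "none",   # no network access from inside the sandbox
         "--memory", "512m",    # cap the resources the job can consume
         "grid-worker:latest",  # hypothetical image containing the job code
         "python", "process_unit.py", "--unit", "42"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)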
Public systems or those crossing administrative domains (including different departments in the
same organization) often result in the need to run on heterogeneous systems, using different
operating systems and hardware architectures. With many languages, there is a trade-off between
investment in software development and the number of platforms that can be supported (and thus
the size of the resulting network). Cross-platform languages can reduce the need to make this
trade-off, though potentially at the expense of high performance on any given node (due to run-
time interpretation or lack of optimization for the particular platform).
Various middleware projects have created generic infrastructure, to allow diverse scientific and
commercial projects to harness a particular associated grid, or for the purpose of setting up new
grids. BOINC is a common one for academic projects seeking public volunteers.
In fact, the middleware can be seen as a layer between the hardware and the software. On top of
the middleware, a number of technical areas have to be considered, and these may or may not be
middleware independent. Example areas include SLA management, trust and security, virtual
organization management, license management, portals, and data management. These technical
areas may be taken care of in a commercial solution, though the cutting edge of each area is
often found within specific research projects examining the field.
Segmentation of the grid computing market
According to IT-Tude.com, the grid computing market must be segmented from two
perspectives: the provider side and the user side.
The provider side
The overall grid market comprises several specific markets. These are the grid middleware
market, the market for grid-enabled applications, the utility computing market, and the software-
as-a-service (SaaS) market.
Grid middleware is a specific software product that enables the sharing of heterogeneous
resources and the formation of Virtual Organizations. It is installed and integrated into the
existing infrastructure of the companies involved, providing a special layer between the
heterogeneous infrastructure and the specific user applications. Major grid middleware packages
include the Globus Toolkit, gLite, and UNICORE.
Utility computing refers to the provision of grid computing and applications as a service,
either as an open grid utility or as a hosting solution for one organization or a VO. Major players
in the utility computing market are Sun Microsystems, IBM, and HP.
Grid-enabled applications are specific software applications that can utilize grid infrastructure.
This is made possible by the use of grid middleware, as pointed out above.
Software as a service (SaaS) is “software that is owned, delivered and managed remotely by one
or more providers.” (Gartner 2007) Additionally, SaaS applications are based on a single set of
common code and data definitions. They are consumed in a one-to-many model, and SaaS uses a
Pay As You Go (PAYG) model or a subscription model based on usage. Providers of SaaS
do not necessarily own the computing resources required to run their services and may
therefore draw upon the utility computing market for those resources.
The user side
For companies on the demand or user side of the grid computing market, the different segments
have significant implications for their IT deployment strategy. Both the deployment strategy
and the type of IT investments made are relevant for potential grid users and play an
important role in grid adoption.

Grid Computing

Grid computing - Wikipedia, the free encyclopedia
http://en.wikipedia.org/wiki/Grid_computing
"Grid computing is an emerging computing model that provides the ability to perform
higher throughput computing by taking advantage of many networked computers to
model a virtual computer architecture that is able to distribute process execution across a
parallel infrastructure. Grids use the resources of many separate computers connected
by a network (usually the Internet) to solve large-scale computation problems.
Grids provide the ability to perform computations on large data sets, by breaking them
down into many smaller ones, or provide the ability to perform many more computations
at once than would be possible on a single computer, by modeling a parallel division of
labor between processes."

Special security and software


Naturally, grid computing over the Internet requires more extensive security than within a single
enterprise, and robust authentication is employed in such applications. Grid computing also
requires special software that is unique to the computing project for which the grid is being used.
The Globus Toolkit is an open source software toolkit used for building Grid systems and
applications. It is being developed by the Globus Alliance and many others all over the world. A
growing number of projects and companies are using the Globus Toolkit to unlock the potential
of grids for their cause.


Utility

Grid computing appears to be a promising trend for three reasons: (1) it makes more
cost-effective use of a given amount of computer resources, (2) it offers a way to solve problems
that cannot be approached without an enormous amount of computing power, and (3) it
suggests that the resources of many computers can be cooperatively and perhaps synergistically
harnessed and managed as a collaboration toward a common objective. In some grid computing
systems, the computers may collaborate rather than being directed by one managing computer.
One likely area for the use of grid computing will be pervasive computing applications - those in
which computers pervade our environment without our necessarily being aware of them.

Some of the enterprises using grid computing in India include the Gujarat Electricity Board,
Saraswat Bank, National Stock Exchange, Indian Railway Catering & Tourism Corporation,
General Insurance Company, Syndicate Bank, Ashok Leyland, Maruti Suzuki India Ltd and
Municipal Corporation of Hyderabad.
