
International Journal of Scientific Research Engineering & Technology (IJSRET), ISSN 2278 0882

Volume 4 Issue 4, April 2015

Non-Preemptive Dynamic Scheduling for Prevention of Jobs Starvation in Distributed Systems

Shivani Pathak, Sumit Rana
Department of CSE, Geeta Engineering College, Panipat, India

ABSTRACT
In the past few years, cloud computing has emerged as an enabling technology and has been increasingly adopted in many areas, including science, engineering and business, due to its inherent flexibility, scalability and cost effectiveness. With the rapid development of cloud computing, many companies and departments have introduced it to support their business. Clouds have changed the traditional pattern of using software and infrastructure, and enterprises can complete business rapidly and reduce costs considerably by using them. Cloud computing is a new computing model; the term appeared in the fourth quarter of 2007. It is an extension of on-demand computing: the provider supplies the relevant hardware, software and services according to the needs that users put forward, and, with the rapid development of the Internet, these requirements are fulfilled over the Internet. In fact, cloud computing is an extension of grid computing, distributed computing and parallel computing. Its aim is to provide secure, fast and convenient data storage and network computing services centered on the Internet.
The presented work defines a cloud-based simulation system in which effective scheduling is performed. The author defines a new scheduling algorithm that removes the drawbacks of an earlier resource allocation algorithm and improves the obtained output. The work includes a mechanism to change the priority of a process dynamically so that the starvation condition can be resolved; parametric criteria are set up on which the decision regarding the priority change is taken.
KEYWORDS: Database, Java language.

I. INTRODUCTION

Cloud computing, the new term for the long-dreamed vision of computing as a utility, enables convenient, on-demand network access to a centralized pool of configurable computing resources (e.g., networks, applications, and services) that can be rapidly deployed with great efficiency and minimal management overhead. The advantages of cloud computing include on-demand self-service, ubiquitous network access, location-independent resource pooling, rapid resource elasticity, usage-based pricing, transference of risk, and so on. Thus, cloud computing can easily help its users avoid large capital outlays in the deployment and management of both software and hardware. Undoubtedly, cloud computing brings an unprecedented paradigm shift and benefits in the history of IT.
As cloud computing becomes prevalent, more and more sensitive information is being centralized in the cloud, such as emails, personal health records, private videos and photos, company finance data, government documents, etc. By storing their data in the cloud, data owners are relieved from the burden of data storage and maintenance and can enjoy on-demand, high-quality data storage services. However, the fact that data owners and the cloud server are not in the same trusted domain may put the outsourced data at risk, as the cloud server may no longer be fully trusted in such an environment for a number of reasons: it may leak data to unauthorized entities or be hacked. It follows that sensitive data should usually be encrypted prior to outsourcing, both for data privacy and to guard against unauthorized access.
The main characteristics of cloud computing are virtualization, distribution and dynamic extendibility, with virtualization being the principal one. Most software and hardware now provide support for virtualization. Many elements, such as IT resources, software, hardware, operating systems and network storage, can be virtualized and managed within the cloud computing platform, so that each environment is decoupled from the underlying physical platform. Management, expansion, migration and backup are all carried out through the virtualization layer. Distribution refers to the fact that the physical nodes used for computation are distributed. Dynamic extendibility means that, by dynamically extending the virtualization layer, the applications running above it can be scaled as well. Virtualization breaks down the barriers between physical structures and represents the trend of transforming physical resources into manageable logical resources. In the future, all resources will move transparently across physical platforms, resources will be managed logically, and resource assignment will be fully automated; virtualization technology is the essential tool for realizing this vision.


With respect to cloud computing, the integration and application of virtualization technology must address high-quality virtual machines, applications, resources and related aspects.
Cloud computing is the result of evolution and
adoption of existing technologies and paradigms. The
goal of cloud computing is to allow users to take
benefit from all of these technologies, without the need
for deep knowledge about or expertise with each one
of them. The cloud aims to cut costs, and helps the
users focus on their core business instead of being
impeded by IT obstacles.
The main enabling technology for cloud computing is
virtualization. Virtualization software separates a
physical computing device into one or more "virtual"
devices, each of which can be easily used and
managed to perform computing tasks. With operating
system-level virtualization essentially creating a
scalable system of multiple independent computing
devices, idle computing resources can be allocated and
used more efficiently. Virtualization provides the
agility required to speed up IT operations, and reduces
cost by increasing infrastructure utilization.
Autonomic computing automates the process through
which the user can provision resources on-demand. By
minimizing user involvement, automation speeds up
the process, reduces labor costs and reduces the
possibility of human errors. Users routinely face
difficult business problems. Cloud computing adopts
concepts from Service-oriented Architecture (SOA)
that can help the user break these problems into
services that can be integrated to provide a solution.
Cloud computing provides all of its resources as
services, and makes use of the well-established
standards and best practices gained in the domain of
SOA to allow global and easy access to cloud services
in a standardized way. Cloud computing is a kind of
grid computing; it has evolved by addressing the QoS
(quality of service) and reliability problems.
Cloud computing poses privacy concerns because the
service provider can access the data that is on the
cloud at any time. It could accidentally or deliberately
alter or even delete information. Many cloud providers
can share information with third parties if necessary
for purposes of law and order even without a warrant.
That is permitted in their privacy policies which users
have to agree to before they start using cloud services.
Solutions to privacy include policy and legislation as
well as end users' choices for how data is stored. Users
can encrypt data that is processed or stored within the
cloud to prevent unauthorized access.
Physical control of the computer equipment (private cloud) is more secure than having the equipment off site and under someone else's control (public cloud). This provides a strong incentive for public cloud computing service providers to prioritize building and maintaining strong management of secure services. Some small businesses that do not have expertise in IT security could find that it is more secure for them to use a public cloud.

II. PROBLEM FORMULATION

The author proposes a scheme to schedule user requests with effective resource allocation while maximizing CPU utilization. In this work the author considers a cloud network composed of a cluster of homogeneous nodes; these homogeneous resource nodes are identical with respect to efficiency, memory requirement and processing speed. It is also assumed that each resource instance is assigned exclusively and is not sharable.
The presented work accepts user requests and performs resource allocation for them. All user requests are maintained in a job queue, and a scheduler is implemented to select the user to which resources will be allocated. The scheduling presented here takes care of three factors: the first is the deadline defined for task completion, the second is the CPU resources, and the third is the processing time.
In this proposed work, the author defines a priority-change mechanism to overcome the problem of starvation, so that no process stays in the queue for a very long time. The results are presented in the form of the execution time and the wait time of the processes. The work is implemented using the CloudSim toolkit integrated with Java.
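As an illustration of the job model assumed above, a minimal Java sketch is given below. The class and field names (CloudJob, deadline, cpuRequirement, processingTime) are illustrative assumptions made here for exposition; they are not identifiers taken from the paper or from the CloudSim toolkit.

```java
import java.util.ArrayDeque;
import java.util.Queue;

/** Illustrative job model: each request carries the three scheduling factors
 *  named above (deadline, CPU requirement, processing time). */
class CloudJob {
    final int id;
    final double deadline;        // latest acceptable completion time
    final int cpuRequirement;     // CPU resources requested (abstract units)
    final double processingTime;  // estimated execution time
    double waitingTime = 0;       // accumulated time spent in the job queue
    int priority;                 // updated dynamically by the scheduler

    CloudJob(int id, double deadline, int cpuRequirement, double processingTime) {
        this.id = id;
        this.deadline = deadline;
        this.cpuRequirement = cpuRequirement;
        this.processingTime = processingTime;
    }
}

/** All incoming user requests are held in a FIFO job queue until the
 *  scheduler selects one for resource allocation. */
class JobQueue {
    private final Queue<CloudJob> queue = new ArrayDeque<>();
    void submit(CloudJob job) { queue.add(job); }
    CloudJob poll()           { return queue.poll(); }
    boolean isEmpty()         { return queue.isEmpty(); }
}
```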

III. HYPOTHESIS

The hypothesis defines the research questions around which the complete research is performed; it sets a direction for the author to carry out the work in an organized way. During the research, these questions are answered positively or negatively. In this proposed work, the author also sets up some research questions:

• What criteria will be used to decide the priority of the processes?
• Will the system work effectively under both underload and overload conditions?
• Will it improve the efficiency of the system?

IV. PAST WORK

Hemant S. M. et al. [1] compare the performance of three load-balancing algorithms. The request time for the three policies applied (Round Robin, Equally Spread Current Execution Load, and Throttled Load Balancing) is the same, which means that changing the algorithm has no effect on the data centers' request time. The cost analysis for each algorithm is calculated in the experimental work. The cost calculated for virtual machine usage per hour is the same for Round Robin and Equally Spread Current Execution Load, but the Throttled Load Balancing algorithm reduces the cost of usage, so Throttled Load Balancing works more efficiently in terms of cost for load balancing on cloud data centers.
Nitin S. M. and Swapnaja R. H. [2] implemented an algorithm which balances the incoming load across the different servers present in the cloud; there is a slight difference between general load balancing and load balancing in the cloud. The authors manage it using the well-known first-come-first-served basis. The proposed algorithm uses Signature-Driven Load Management for Cloud Computing Infrastructure. The authors developed an application in Java which monitors node resources such as RAM, CPU, memory, bandwidth, partition information, running-process information and utilization.
Adam F. et al. [3] proposed a general, composable
abstraction for execution resources, along with a
continuation-based meta-scheduler that harnesses
those resources in the context of a deterministic
parallel programming library for Haskell.
Kumar P. et al. [4] discuss various forms of mapping cluster topology requirements into cloud environments to achieve higher reliability and scalability of applications executed on cloud resources. The focus of the paper is to provide a dynamic scheduler that aims to maximize user satisfaction: the job details submitted by the user include job prioritization criteria, namely the allocated budget and the deadline required by the user, enabling the scheduler to maximize CPU utilization while remaining within the constraints imposed by the need to optimize user Quality of Service (QoS).
McEvoy G. and Schulze B. [5] explored some of the effects that the cloud computing paradigm has on schedulers when executing scientific applications. The authors present premises regarding provisioning and architectural aspects of a cloud infrastructure that are not present in other environments, and the implications these may have on scheduling decisions in the presence of relevant policies such as improving performance. They propose and test a preliminary workload classification, based on usage modes, that may improve early scheduling decisions as they work towards automatic deployment of scientific applications.
Dutta D. and Joshi R. C. [6] proposed a genetic algorithm approach to cost-based multi-QoS job scheduling. A model for the cloud computing environment is also proposed, and some popular genetic crossover operators, such as PMX, OX and CX, along with swap and insertion mutation operators, are used to produce a better schedule. The algorithm guarantees the best solution in finite time.
Kloh H. et al. [7] compared the performance of a bi-criteria scheduling algorithm for workflows with Quality of Service (QoS) support. This work serves as the basis for implementing a bi-criteria hybrid scheduling algorithm for workflows with QoS support, aiming to optimize the criteria chosen by the users, based on the priority ordering and relaxation they specify. The proposed model aims to allow scheduling with a reduced response time to the user, better use of resources, a reduced makespan, and selection of the best resource using historical data from user applications. The experiments take into account the optimization of the runtime, cost and reliability criteria. The results show that the reliability and runtime criteria are somewhat contradictory and need to be treated independently, although this does not prevent them from being used jointly. Criteria such as runtime and cost can be handled together because they are correlated and are selected from the profile of the task/user; the reliability criterion, however, is related to the resources, and its selection should be treated differently according to the profile of the resource. Thus it is hardly possible to optimize all criteria at once, but it is possible to reach a balanced solution.

V. PROPOSED WORK

• The purpose of using a dynamic scheduling mechanism is to prevent a low-priority task unit from waiting for a long period of time. To achieve this goal, the scheduler sets a waiting-threshold value: if the waiting time of a job or process exceeds this threshold, the priority value of the waiting job is increased and updated in the corresponding array entry (a sketch of this aging rule follows this list).
• The priority of jobs will be calculated using three parameters: CPU requirement, I/O requirement and job criticality.
• Allocation of jobs to virtual machines will be decided at runtime, depending upon the availability of virtual machines.
• Simulation of the work will be done on CloudSim.
• The generated Non-Preemptive Dynamic Scheduler algorithm is compared with Shortest Job First to ensure that the starvation problem is mitigated and that the overall waiting time and finish time of all jobs are reduced.
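The following is a minimal sketch, under stated assumptions, of the aging rule and the three-parameter priority computation described above; it reuses the hypothetical CloudJob class sketched in Section II. The waiting threshold, the priority boost, the equal weighting of the three parameters, and the parameter names ioRequirement and criticality are assumptions made here for illustration, not values or identifiers taken from the paper.

```java
import java.util.List;

/** Illustrative non-preemptive dynamic scheduler: priorities are derived from
 *  CPU requirement, I/O requirement and job criticality, and are raised
 *  once a job's waiting time crosses the waiting threshold. */
class DynamicPriorityScheduler {
    private final double waitingThreshold;   // assumed threshold on queue time
    private final int priorityBoost;         // how much an overdue job is promoted

    DynamicPriorityScheduler(double waitingThreshold, int priorityBoost) {
        this.waitingThreshold = waitingThreshold;
        this.priorityBoost = priorityBoost;
    }

    /** Base priority from the three parameters; equal weighting is an assumption. */
    int basePriority(int cpuRequirement, int ioRequirement, int criticality) {
        return cpuRequirement + ioRequirement + criticality;
    }

    /** Aging step: any queued job that has waited longer than the threshold
     *  has its priority value increased and written back. */
    void ageWaitingJobs(List<CloudJob> queued, double elapsed) {
        for (CloudJob job : queued) {
            job.waitingTime += elapsed;
            if (job.waitingTime > waitingThreshold) {
                job.priority += priorityBoost;
            }
        }
    }

    /** Non-preemptive selection: the highest-priority job is dispatched next
     *  and runs to completion once a virtual machine is available. */
    CloudJob selectNext(List<CloudJob> queued) {
        CloudJob best = null;
        for (CloudJob job : queued) {
            if (best == null || job.priority > best.priority) best = job;
        }
        return best;
    }
}
```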


VI. RESEARCH OBJECTIVE

• The work performs process scheduling in a cloud environment under the conditions of deadline and process prioritization.
• A new algorithm is suggested in this work to raise the priority of a process that has been sitting in memory for a long time.
• The work aims to reduce the average wait time of the processes (see the metric sketch after this list).
• The main objective of the work is to allocate processes under the specification of deadline, resource execution time and space requirements.
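To illustrate how the wait-time objective could be evaluated, the sketch below computes the average waiting time and the overall finish time for a set of completed jobs; the JobResult record and its fields are assumptions made here. In the intended experiments, the same metrics would be gathered for both the proposed scheduler and Shortest Job First from the CloudSim simulation output.

```java
import java.util.List;

/** Simple result record for one completed job (illustrative, not CloudSim output). */
class JobResult {
    final double waitingTime;  // time spent in the queue before dispatch
    final double finishTime;   // absolute completion time in the simulation
    JobResult(double waitingTime, double finishTime) {
        this.waitingTime = waitingTime;
        this.finishTime = finishTime;
    }
}

class Metrics {
    /** Average waiting time across all jobs; the quantity the work aims to reduce. */
    static double averageWaitingTime(List<JobResult> results) {
        double total = 0;
        for (JobResult r : results) total += r.waitingTime;
        return results.isEmpty() ? 0 : total / results.size();
    }

    /** Overall finish time (makespan) of the whole job set. */
    static double overallFinishTime(List<JobResult> results) {
        double max = 0;
        for (JobResult r : results) max = Math.max(max, r.finishTime);
        return max;
    }
}
```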

REFERENCES
[1] Hemant S. M., Kaveri P. R., Chavan V. (2013), "Load Balancing on Cloud Data Centres", International Journal of Advanced Research in Computer Science and Software Engineering, Volume 3, Issue 1, January 2013.
[2] Butt S. (2012), "Self-service Cloud Computing", CCS'12, Raleigh, North Carolina, USA, ACM 978-1-4503-1651-4/12/10, pp. 253-264.
[3] CloudSim: A Framework for Modeling and Simulation of Cloud Computing Infrastructures and Services.
[4] Kumar P., Sehgal V., Chauhan D. S., Diwakar M. (2011), "Clouds: Concept to Optimize the Quality of Service (QoS) for Clusters", 2011 World Congress on Information and Communication Technologies, IEEE 978-1-4673-0125-1, pp. 820-825.
[5] Fan B. (2011), "Small Cache, Big Effect: Provable Load Balancing for Randomly Partitioned Cluster Services", SOCC'11, Cascais, Portugal, ACM 978-1-4503-0976-9/11/10.
[6] Dutta D. and Joshi R. C. (2011), "A Genetic Algorithm Approach to Cost-Based Multi-QoS Job Scheduling in Cloud Computing Environment", International Conference and Workshop on Emerging Trends in Technology (ICWET 2011), TCET, Mumbai, Maharashtra, India, ACM 978-1-4503-0449-8/11/02, pp. 422-427.
[7] Kloh H., Schulze B., Mury A., Pinto R. C. G. (2010), "A Scheduling Model for Workflows on Grids and Clouds", MGC 2010, Bangalore, India, ACM 978-1-4503-0453-5/10/11.
