Efficient Numerical Integration and Table Lookup Techniques for Real Time Flight Simulation
P. Lathasree and Abhay A. Pashilkar

Improving Recommendation Quality with Enhanced Correlation Similarity in Modified Weighted Sum
Khin Nila Win and Thiri Haymar Kyaw

Information Systems Projects for Sustainable Development and Social Change
James K. Ho and Isha Shah

Software Architectural Pattern to Improve the Performance and Reliability of a Business Application using the Model View Controller
G. Manjula and Dr. G. Mahadevan
International Journal of Computer Science and Business Informatics
IJCSBI.ORG
B. Senthil Murugan
Assistant Professor (Senior)
School of Information Technology & Engineering,
VIT University, Vellore-632014, Tamil Nadu, India
ABSTRACT
Cloud computing has become popular because of its on-demand self-service capability and business benefits. This paper presents the design of a search engine application developed and deployed using Google App Engine. The application applies pattern matching and regular expression processing across millions of web documents and returns the matching documents. To facilitate processing of large datasets, the application makes use of the Apache Hadoop suite, a distributed data processing framework that brings up hundreds of virtual servers on demand, runs a parallel computation on them, and then shuts the virtual servers down, releasing their resources back to the cloud. The MapReduce model is used to perform the parallel computation and return efficient results to the user. The application is efficient and scales to any number of users with quick response times. The Google App Engine uses a Cloud SQL instance to store data in a cloud database.
Keywords
MapReduce, pattern matching, SQL instance, Google App Engine, Apache Hadoop suite.
1. INTRODUCTION
Using cloud architecture, software applications can be designed effectively and online databases can be used on demand. Cloud infrastructure used by a software application is acquired when needed and returned to the cloud provider after use, making it available to other applications. Cloud architecture can handle large amounts of data easily. The physical location of the application infrastructure is determined by the provider. Cloud architecture therefore offers many business benefits: businesses need not invest in infrastructure up front, infrastructure is available quickly when needed, resources are utilized efficiently, users pay only for what they use, and parallelization reduces job processing time. The main objective of this paper is to develop an efficient, scalable search engine application based on cloud architecture that can respond to many users. The application should be loosely coupled so that it is available to the whole user community and can be accessed concurrently.
2. BACKGROUND STUDY
The cloud computing model provides resources, storage and online applications as services to the user. Cloud computing is dynamic, reliable, scalable, low cost and secure, so it can provide virtual services to any number of users. Cloud computing provides three types of service: software as a service, where application software can be used by anyone as an on-demand resource; platform as a service; and infrastructure as a service. Internet users are most interested in searching for data and retrieving the information they need. Quick and efficient results require large computing resources. Cloud infrastructure is used to obtain the resources needed; once the data has been processed, the resources are returned. This paper explains the implementation of a search engine cloud application using Google App Engine. The application uses the Hadoop MapReduce model to fetch large datasets from the cloud, map the processing request over that data, and reduce the result set to produce the search results. Mapping over millions of results is done in parallel, so responses to requests are generated quickly and the application is more efficient.
3. RELATED WORKS
Chunzhi Wang and Zhuang Yang [1] of Hubei University of Technology explain a cloud search engine process based on user interest. They showed that the user's demand can be inferred by introducing a user interest model. A push mechanism is used to deliver search results, and servers are shut down on demand. This lets the user get relevant information on time. They compare the traditional search model with the user-interest-based search model; the user interest model achieves a higher rate of delivering relevant information on user demand.
Lingyging Zeng and Hao Wen Lin [2] of Harbin Institute of Technology explain the existing MapReduce model and a modified MapReduce that performs parallel computing to collect hardware performance information from virtual machines. The existing MapReduce follows a master-slave process: when a client request is generated, the master node creates a new job and assigns it to a new processor ready to perform it. The master node continuously checks the status of the slave processes and, based on that, splits and assigns work to all available processes and combines all the tasks. They applied this concept in cloud computing, which is dynamic, where the server generates requests to a persistent independent storage device to collect the information.
Jinesh Varia [3] explained the development of GrepTheWeb, a Hadoop-based search engine application deployed as a loosely coupled Web service using Amazon Web Services. He also described the Amazon Web Services used: Amazon S3 for input and output, Amazon SQS for message passing, Amazon SimpleDB as a database for status tracking, and Amazon EC2 as the controller.
Kejiang Ye, Xiaohong Jiang, Yanzhang He, Xiang Li, Haiming Yan and Peng Huang [5] discussed in 2012 a scalable Hadoop virtual cluster platform for MapReduce-based parallel machine learning with performance consideration. Big data processing is growing in importance because of ever-increasing data volumes, yet how to process large data efficiently on virtual infrastructure is not clear at present. They evaluated the performance of Hadoop and vHadoop, measuring performance on clustering workloads such as k-means running on vHadoop.
Closed frequent itemset mining [7] plays an important role in many real-world applications, but the cost of handling large datasets is a challenging issue for such data mining. A parallelized AFOPT-close algorithm was proposed and implemented on the MapReduce cloud computing framework in 2012 by Su Qi Wang, Yu Bin Yang, Guang Peng Chen, Yang Gao and Yao Zhang.
4. METHODOLOGY
The system architecture depicted in Figure 2 shows that the GAE design takes the query from the user as a regular expression, then passes the request to the MapReduce phase, which splits the dataset into small subsets and sends the request to the different database machines. After extraction, the web documents that match the expression are combined into a single result set and returned to the user.
Figure 2. System architecture: the user query is processed by the MapReduce phase backed by Cloud SQL database storage.
The search engine application is developed to provide the user software as a service (SaaS) and to give the user an efficient web search. The search engine takes a regular expression as the query to search the cloud database. This regular expression is run over millions of web documents using the Hadoop MapReduce model, using pattern matching to retrieve the documents that best match the user's regular expression query. The challenges in designing such a search engine are complex regular expressions, queries that match many web documents, and unknown patterns. This application overcomes these difficulties and returns results to many users even on large datasets, with quick response and low usage cost. This is achieved because mapping is done in parallel on a number of processors, after which the results are reduced and combined into the smaller set of needed information.
Hadoop splits the dataset into manageable chunks and distributes them to many machines; the job is launched and processed on different machines, which may be physically widely distributed, since Hadoop is open source and distributed and can manage large datasets. The results from all machines are then aggregated into the final output of the job. The implementation works in three phases. The map phase maps the data matching the regular expression from the cloud database. The reduce phase produces the intermediate result of web documents. Map and reduce run independently of each other on separate processors. The combine phase combines all the extracted data from the different machines. Thus the needed data is computed from across the cloud database and processed in parallel to give efficient search results.
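The map and reduce phases described above can be sketched in miniature as follows. This is a single-process Python stand-in for Hadoop, not the paper's implementation; the document names and sample corpus are hypothetical:

```python
import re
from collections import defaultdict

def map_phase(doc_id, text, pattern):
    # Map: emit (doc_id, line) pairs for every line matching the regular expression.
    return [(doc_id, line) for line in text.splitlines() if re.search(pattern, line)]

def reduce_phase(mapped):
    # Reduce/combine: group matching lines by document into a single result set.
    results = defaultdict(list)
    for doc_id, line in mapped:
        results[doc_id].append(line)
    return dict(results)

# Hypothetical corpus standing in for the cloud database of web documents.
corpus = {
    "doc1": "cloud computing basics\nsearch engines explained",
    "doc2": "weather report\ncloud cover increasing",
    "doc3": "compilers and parsing",
}

# In Hadoop the map calls would run in parallel on different nodes;
# here they run sequentially and are combined by the reduce step.
mapped = []
for doc_id, text in corpus.items():
    mapped.extend(map_phase(doc_id, text, r"cloud"))
matches = reduce_phase(mapped)
```

In the deployed application each `map_phase` call would run on a separate Hadoop worker; only the grouping structure is illustrated here.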
Hadoop uses a master-slave process: the master process runs on a separate node and oversees all the slave processes, which run on other nodes. The slave processes are the workers that extract data from the different machines; any worker failure or problem is taken care of by the master process.
5. RESULTS
Figure 4 shows the start-up page of the search engine application, developed and deployed using Google App Engine and the web toolkit. The application asks the user to enter a search string and shows the web documents that match it, based on the MapReduce model. Because MapReduce uses parallel computation, search results are mapped and computed fast, improving the application's response time. A Cloud SQL instance is used to access the cloud database and obtain all the resources needed for the result; after processing, the resources are released back to the cloud.
6. CONCLUSION
In this paper, a search engine application was successfully designed, developed and deployed using Google App Engine and a Cloud SQL instance database. The search engine performs pattern matching across millions of web documents using Apache Hadoop MapReduce on the regular expression entered by the user as the query. Because of the MapReduce model, millions of documents are pattern-matched in parallel at a time, and the results are combined and given to the user as web documents. The parallel distributed processing across many datasets gives quick responses to the user and scales to any number of users. The application uses a Cloud SQL database through an instance created for the application, so that billing for the resources used from the cloud database can be easily maintained.
References
[1] Wang C., Yan Z. and Chen H., 2010. Search engine concept based on user interest model and information push mechanism. 8th International Conference on Computer Science and Education, Sri Lanka.
[2] Zeng L. and Lin H. W., 2012. A modified MapReduce for cloud computing. International Conference on Computing, Measurement, Control and Sensor Networks.
[3] Varia J., June 2008. Cloud Architectures. Technology Evangelist, Amazon Web Services.
[4] Yang G., 2011. The Application of MapReduce in the Cloud Computing. International Symposium on Intelligence Information Processing and Trusted Computing.
[5] Ye K., Jiang X., He Y., Li X., Yan H. and Huang P., 2012. vHadoop: A Scalable Hadoop Virtual Cluster Platform for MapReduce-Based Parallel Machine Learning with Performance Consideration. IEEE International Conference on Cluster Computing Workshops.
[6] Liu Z., Li H. and Miao G., 2010. MapReduce-based Backpropagation Neural Network over Large Scale Mobile Data. Sixth International Conference on Natural Computation (ICNC 2010).
[7] Wang S. Q., Yang Y. B., Chen G. P., Gao Y. and Zhang Y., 2012. MapReduce-based Closed Frequent Itemset Mining with Efficient Redundancy Filtering. IEEE 12th International Conference on Data Mining Workshops.
Efficient Numerical Integration and Table Lookup Techniques for Real Time Flight Simulation

P. Lathasree and Abhay A. Pashilkar
CSIR-National Aerospace Laboratories
Old Airport Road, PB No 1779, Bangalore-560017
ABSTRACT
A typical flight simulator consists of models of various elements, such as the flight dynamic model, filters and actuators, which contribute fast and slow eigenvalues to the overall system. This results in an electromechanical control system described by stiff ordinary differential equations. Stability, accuracy and speed of computation are the parameters of interest when selecting numerical integration schemes for use in flight simulators. Similarly, accessing huge aerodynamic and engine databases in table look-up format at high speed is an essential requirement for high-fidelity real-time flight simulation. A study was carried out by implementing well-known numerical integration and table look-up techniques in a real-time flight simulator facility designed and developed in house. Table look-up techniques such as linear search and an index computation methodology using a novel Virtual Equi-Spacing concept were also studied. It is seen that the multi-rate integration technique and the table look-up using the Virtual Equi-Spacing concept have the best performance amongst the techniques studied.
Keywords
Real-Time Flight Simulation, Aerodynamic and Engine database, Virtual Equi-Spacing
concept, table look up and interpolation, Runge-Kutta integration, multi-rate integration.
1. INTRODUCTION
Flight simulation has a vital role in the design of aircraft and can benefit all
phases of the aircraft development program: the early conceptual and design
phase, systems design and testing, and flight test support and envelope
expansion [1]. Simulation helps in predicting the flight behavior prior to
flight tests. It helps in certification of the aircraft under demanding
scenarios. Flight Simulation is widely used for training purposes in both
fighter and transport aircraft programs [2]. Therefore, Modeling &
Simulation is one of the enabling technologies for aircraft design.
The fidelity of the simulation largely depends on the accuracy of the
simulation models used and on the quality of the data that goes into the
model. A faithful simulation requires an adequate model in the form of
mathematical equations, a means of solving these equations in real-time and
finally a means of presenting the output of this solution to the pilot by
means of visual, motion, tactile and aural cues [3].
The Real-Time Flight Simulator implies the existence of a Man-In-the-Loop
operating the cockpit controls [4]. Because of the presence of the pilot-in-
the-loop, the digital computer executing the flight model in the simulator
must solve the aircraft equations of motion in 'real-time' [5]. Real-Time
implies the occurrence of events at the same time in the simulation as seen
in the physical system. All the associated computations should be completed
within the cycle update time [6].
The basis of a flight simulator is the mathematical model, including the
database package, describing the characteristic features of the aircraft to be
simulated. The block schematic of flight simulator is shown in Figure 1
with constituent modules such as aerodynamic, engine, atmosphere (static
and dynamic), actuator etc. The simulation model for atmosphere includes
the static and dynamic atmosphere components. Dynamic atmosphere model
caters for turbulence, wind shear and cross wind. Dryden and Von Karman
models are generally used for the simulation of atmospheric turbulence [7].
Figure 1. Block schematic of the flight simulator: pilot commands (elevons, rudder, throttle, slats) drive the flight control and actuator models; the engine model with its engine database, the aero model with aero data, the atmosphere model, and the mass, C.G. and inertia data feed the computation of aircraft responses (position, velocity, acceleration, flight path, AOA, AOS), which drive the flight visuals and display.
Mathematical models, used to simulate modern aircraft, consist of a set of
non-linear differential equations with large amounts of aerodynamic
function data (tables), sometimes depending on 4 to 5 independent
variables. These aerodynamic data tables result in force and moment
coefficients which contribute to the total forces and moments. The equations
of motion are dependent on these forces and moments. They are solved by
the digital computer using a suitable numerical integration algorithm. This
allows the designer to create the complete range of static and dynamic
aircraft operating conditions, including landing and takeoff [6].
The method used for integrating the ordinary differential equations is critical for real-time simulation. The choice of an integration algorithm is a trade-off between simplicity, which affects calculation speed, and accuracy. Real-time simulation also needs high-speed data access. The aerodynamic and engine databases used for real-time simulation are huge and complex. Hence, the table look-up methods used to access data from the aerodynamic and engine databases also become critical.
This paper discusses the efficient table look up and interpolation schemes
and numerical integration techniques which can be used for ensuring
accurate real-time computations in a flight simulator.
which merely truncates the Taylor series after the first derivative and is therefore only first-order accurate [9]. An RK method (e.g., Euler) could be used to generate the starting values for LMMs.
Higher-order RK algorithms extend the Taylor series expansion to higher orders. An important feature of the RK methods is that the only value of the state vector needed is the value at the beginning of the time step; this makes them well suited to the ordinary differential equation initial value problem [1].
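As a concrete illustration of the single-step property just noted (a Python sketch, not code from the paper), a classical fourth-order Runge-Kutta step needs only the state at the start of the step:

```python
import math

def rk4_step(f, t, y, h):
    # Classical RK-4: four derivative evaluations per step, O(h^4) accurate,
    # using only the state y at the beginning of the step.
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

# Example: dy/dt = -y, y(0) = 1; exact solution is exp(-t).
y, t, h = 1.0, 0.0, 0.025
for _ in range(40):          # integrate to t = 1.0
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
# y is now very close to exp(-1)
```

The same routine is reused below for both the master and slave loops of a multi-rate scheme.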
2.1.1 Stability, Accuracy and Speed of Computation
While choosing the numerical integration technique, one frequently has to
strike a compromise between three aspects [10-11].
- Speed of the method
- Accuracy of the method
- Stability of the method
Speed of the method becomes an essential feature especially for real time
simulation.
Accuracy of the method is also an important aspect and needs to be
considered when choosing a method to integrate the equations of motion
[12]. Accuracy of the numerical integration technique can be determined
from step size, number of steps to be executed and truncation error terms
[10-11]. Generally, two types of errors will be introduced by the numerical
integration methods viz. round-off errors and discretisation errors. Round-
off errors are a property of the computer and the program that is used and
occur due to the finite number of digits used in the calculations [13-14].
Discretisation (truncation) errors are a property of the numerical integration method.
Stability can be defined as the property of an integration method that keeps
the integration errors bounded at subsequent time steps [12]. An unstable
numerical integration method will make the integration errors grow exponentially, resulting in possible arithmetic overflow after just a few time steps.
Stability of numerical integration technique generally depends on the system
dynamics, step size and order of the chosen technique and is harder to assess
[10-11]. Impact of numerical integration method in terms of stability can be
assessed by applying it to a well-conditioned differential equation and then
investigating the limits of the onset of instability [10-11]. In the context of
stability of numerical integration, it is understood that a stable continuous
system results in a stable discrete-time system. Numerical stability is
important for fixed-step Runge-Kutta integrators because of the limitations
imposed on the integration step size. Generally, selection of the integration
step size is carried out based on an analysis of the stability of the numerical integration technique [15]. Numerical stability will be an issue
when the chosen integration step size produces z-plane poles close to the
Unit Circle.
If the poles are located inside the unit circle, the system is stable. Increasing the step size T eventually causes one of the z-plane poles to land on the unit circle, where the system becomes marginally stable. Depending on the location of λT (the product of the characteristic root and the step size) on the stability boundary of the respective integrator, it is possible to estimate the maximum allowable integration step size (Tmax) for which the system solution is at least marginally stable. Beyond Tmax, the system solution becomes unstable. Hence, it is essential to consider the stability boundaries of the different numerical integrators while selecting the integration step size.
Figure 2 shows the stability boundaries for Runge-Kutta methods [15].
Figure 2. Stability boundaries for RK-2 through RK-4 integrators in the λT plane (Re(λT) versus Im(λT)).
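The RK-4 boundary in Figure 2 can be checked numerically from the method's amplification factor: applying RK-4 to the test equation y' = λy gives y(n+1) = R(λT) y(n), and the solution stays bounded while |R(λT)| ≤ 1. The sketch below (an illustration, not from the paper) bisects for the real-axis limit, which is known to lie near λT ≈ -2.785:

```python
def rk4_amplification(z):
    # Stability polynomial of classical RK-4 applied to y' = lambda*y, z = lambda*T.
    return 1 + z + z**2/2 + z**3/6 + z**4/24

def max_stable_step(lam, hi=10.0, tol=1e-9):
    # Bisect for the largest step size T with |R(lambda*T)| <= 1.
    # Assumes lam is a real negative root, for which stability is monotone in T.
    lo, hi_ = 0.0, hi
    while hi_ - lo > tol:
        mid = (lo + hi_) / 2
        if abs(rk4_amplification(lam * mid)) <= 1.0:
            lo = mid
        else:
            hi_ = mid
    return lo

Tmax = max_stable_step(lam=-1.0)   # approximately 2.785 for RK-4 on the real axis
```

For a characteristic root λ, any integration step size below Tmax/|λ| keeps the discrete solution at least marginally stable, matching the discussion above.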
IJCSBI.ORG
One has to consider the following two points while choosing the numerical integration technique [10-11]:
- The integration technique should be chosen such that any error it introduces is small in comparison to the errors associated with the main terms of the model equations;
- The numerical integration technique should be able to solve the system of differential equations within the real-time frame rate.
Many integration techniques that work well with stiff systems are available for non-real-time simulation applications [16-17]. Two approaches that
can be used for simulating stiff systems with respect to real time and non-
real time simulation will be discussed here. The first approach considers
selection of numerical integration technique that works well in the presence
of stiffness.
The second approach involves the use of multi-rate integration to simulate
stiff systems. In multi-rate simulations, the simulation is split into multiple
tasks that are executed with different integration step times. The inverse of
the integration step time is termed as frame rate and expressed in frames per
second. This multi-rate integration technique is useful for real-time
applications as well as non real-time applications.
Of the two approaches discussed for the simulation of stiff systems, only the
multi-rate integration technique is applicable for real time applications.
Control systems with electrical and mechanical components, referred to as electromechanical control systems, are composed of fast and slow subsystems. Generally, the mechanical systems being controlled are much slower than the components in the electronic controllers and sensors. This results in an electromechanical control system with fast and slow dynamics. The aircraft pitch control system, comprising aircraft dynamics and actuators, is an example of a system of stiff ordinary differential equations [15].
Kunovsky et al. have established the need for multi-rate integration for real-time flight simulation [18] with an example of an aircraft pitch control system comprising slow aircraft dynamics and fast actuator dynamics, using Runge-Kutta and Adams-Bashforth numerical integration techniques. The airframe module of the aircraft pitch control system is modeled as a linear second-order system to account for the short-period longitudinal dynamics. Generally, selection of the step size for numerical integration is carried out based on the analysis of stability and dynamic accuracy. Ts and Tf are the integration step sizes of the slow and fast systems respectively.
The numerical integrator used to update the slow system is termed the master routine, and the integration method used to update the fast system is called the slave routine. It is common to use conventional numerical integration schemes such as Runge-Kutta methods for both master and slave systems. For the example studied here, the multi-rate integration scheme with RK-4 is chosen for the master and slave routines. The implementation is carried out in the Matlab environment. For a pitch command of 2 deg, the simulation is carried out for the state-space based Simulink model. This result is compared with the analytical solution and the response obtained using a multi-rate integration scheme. The comparison of the theta and elevator responses for the three methods is shown in Figure 3 and Figure 4 respectively.
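The master/slave arrangement can be sketched as follows. This is a Python illustration with a hypothetical first-order fast subsystem, not the paper's Matlab aircraft model: the slow state is advanced once per frame with step Ts, while the fast state is sub-stepped N times with Tf = Ts/N.

```python
def rk4_step(f, t, y, h):
    # Classical RK-4 step for a scalar ODE y' = f(t, y).
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

def multirate_step(f_slow, f_fast, t, y_slow, y_fast, Ts, frame_ratio):
    # Master routine: one RK-4 step of the slow subsystem with step Ts,
    # holding the fast state at its last computed value.
    y_slow = rk4_step(lambda tt, y: f_slow(tt, y, y_fast), t, y_slow, Ts)
    # Slave routine: frame_ratio RK-4 sub-steps of the fast subsystem
    # with Tf = Ts / frame_ratio, holding the slow state at its updated value.
    Tf = Ts / frame_ratio
    for i in range(frame_ratio):
        y_fast = rk4_step(lambda tt, y: f_fast(tt, y, y_slow), t + i * Tf, y_fast, Tf)
    return y_slow, y_fast

# Hypothetical stiff pair (not the aircraft/actuator model of the paper):
# slow subsystem with pole near -0.5, fast subsystem with pole -100.
f_slow = lambda t, y, y_fast: -1.0 * y + 0.5 * y_fast
f_fast = lambda t, y, y_slow: -100.0 * (y - y_slow)

ys, yf, t = 1.0, 0.0, 0.0
for _ in range(200):        # 5 s of simulation with Ts = 0.025 s, frame ratio 10
    ys, yf = multirate_step(f_slow, f_fast, t, ys, yf, Ts=0.025, frame_ratio=10)
    t += 0.025
```

With frame ratio 10 this mirrors the Ts = 0.025 s / Tf = 0.0025 s arrangement discussed in the text: the fast dynamics are resolved at the small step size without forcing the whole simulation to run at it.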
Figure 3. Comparison of theta responses (deg) over 0-5 s for the analytical solution, multi-rate RK4 and RK4 at 0.0025 s sampling.

Figure 4. Comparison of elevator responses (deg) over 0-5 s for the analytical solution, multi-rate RK4 and RK4 at 0.0025 s sampling.
The responses obtained from the analytical solution are taken as the reference. From the figures, it can be seen that the response obtained using the Simulink model at a step size of 0.0025 s matches the reference well, whereas the response obtained using the multi-rate integration exhibits some loss of accuracy. Even so, the multi-rate integration scheme is recommended for real-time simulation, since running the whole simulation at the smaller step size would degrade real-time performance.
2.2 Table look-up and Interpolation
Generally, an index search or look-up process is performed first to locate the data, followed by linear interpolation. The following steps are performed for the table look-up process [3]:
1. First, decide between which pair of values in the table the current input value of the independent variable (X) lies;
2. Next, calculate the local slope;
3. Finally, apply the linear interpolation formula.
For real-time simulation, it is always important to save processing time. One technique is to remember the index of the lower breakpoint of the interpolation interval used in the previous iteration. The value of the independent variable (X) is unlikely to have changed substantially from one time step to the next, so the same interval as before is a good first try, saving the time of searching from one end of the table each time.
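The search-with-remembered-index idea can be sketched as follows (a Python illustration with hypothetical table data, not the simulator's code):

```python
class TableLookup1D:
    # 1-D table look-up: a linear search that starts from the interval
    # remembered from the previous call, followed by linear interpolation.
    def __init__(self, xs, ys):
        self.xs, self.ys = xs, ys
        self.last = 0                      # lower-breakpoint index from last call

    def interpolate(self, x):
        i = self.last
        # Step 1: locate the bracketing pair, starting from the previous interval.
        while i > 0 and x < self.xs[i]:
            i -= 1
        while i < len(self.xs) - 2 and x >= self.xs[i + 1]:
            i += 1
        self.last = i
        # Step 2: local slope; Step 3: linear interpolation formula.
        slope = (self.ys[i + 1] - self.ys[i]) / (self.xs[i + 1] - self.xs[i])
        return self.ys[i] + slope * (x - self.xs[i])

table = TableLookup1D([0.0, 1.0, 3.0, 6.0], [0.0, 2.0, 4.0, 10.0])  # hypothetical data
y = table.interpolate(2.0)   # falls in [1.0, 3.0): 2.0 + 1.0 * (2.0 - 1.0) = 3.0
```

When successive queries stay in the same interval, the two `while` loops terminate immediately and no search over the table is performed.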
The huge and complex aerodynamic and engine database has to be handled in such a way that it can be easily read and interpolated for a given set of input conditions. One way of ensuring the speed required for real-time simulation is to have a uniformly spaced database. For this, the normal practice is to convert the supplied database, which has non-uniform break points for the independent variables, to an equi-spaced format. It is necessary to choose an appropriate step size for independent variables such as Angle of Attack, Mach number, Elevator, Angle of Sideslip, Power Lever Angle (PLA) etc. to convert this non-uniform database to the equi-spaced format. This is normally termed the conventional equi-spacing concept. We propose a new concept called Virtual Equi-Spacing, where the original database with non-uniform break points is retained. With the assumption of virtual equi-spacing, the search process can be eliminated [19], as the index is directly computed.
The computation of the index in the Virtual Equi-Spacing concept is explained in the following section.
2.2.1 Virtual Equi-Spacing Concept
A novel method is proposed which retains the original data with unevenly spaced break points and satisfies the real-time constraint without loss of accuracy. In this method, an evenly spaced breakpoint array that is a superset of the unevenly spaced break points is created for the independent variables; it is referred to as the Address Map. The index into this evenly spaced array can be directly computed (refer Figure 5). This index is then used in an equivalent breakpoint index array that provides pointers to the appropriate interpolation equation.
Figure 5. Address Map example: the unevenly spaced breakpoints (0.0, 0.3, 0.4, 0.5, 0.6, 0.8) are embedded in an evenly spaced superset (0.0 to 0.8 in steps of 0.1); the breakpoint index array (0, 0, 0, 1, 2, 3, 4, 4, 5) maps each evenly spaced entry to the index of the unevenly spaced breakpoint at or below it.
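Using the breakpoints of Figure 5, the Address Map construction and the direct index computation can be sketched as follows (a Python illustration of the concept, not the simulator's implementation):

```python
def build_address_map(uneven, step):
    # Evenly spaced superset of the uneven breakpoints; each slot stores the
    # index of the uneven breakpoint at or below that evenly spaced value.
    n = round((uneven[-1] - uneven[0]) / step) + 1
    amap, j = [], 0
    for k in range(n):
        x = uneven[0] + k * step
        while j < len(uneven) - 1 and uneven[j + 1] <= x + 1e-12:
            j += 1
        amap.append(j)
    return amap

def ves_index(x, uneven, step, amap):
    # Direct index computation: no search over the uneven breakpoints.
    k = int((x - uneven[0]) / step + 1e-9)   # index into the evenly spaced array
    i = amap[min(k, len(amap) - 1)]
    return min(i, len(uneven) - 2)           # lower breakpoint of the interval

breakpoints = [0.0, 0.3, 0.4, 0.5, 0.6, 0.8]    # from Figure 5
amap = build_address_map(breakpoints, 0.1)       # [0, 0, 0, 1, 2, 3, 4, 4, 5]
```

Because the even grid is a superset of the uneven breakpoints, the computed index always names the correct lower breakpoint, replacing the search of the linear method with one division and two array reads.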
of Mach number, PLA and altitude. This index computation methodology using the Virtual Equi-Spacing concept is extended to the multi-dimensional tables of the wind tunnel database.
Table 1. PLA and thrust relationship

PLA (deg)   Thrust (kN)
28          -0.63
42          3.21
54          8.70
66          13.81
78          20.24
90          26.32
104         28.09
107         30.26
130         44.84
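As a worked example (not from the paper), linear interpolation over the Table 1 data at an intermediate PLA setting:

```python
# PLA/thrust pairs from Table 1 (unevenly spaced PLA breakpoints).
pla    = [28.0, 42.0, 54.0, 66.0, 78.0, 90.0, 104.0, 107.0, 130.0]      # deg
thrust = [-0.63, 3.21, 8.7, 13.81, 20.24, 26.32, 28.09, 30.26, 44.84]   # kN

def thrust_at(x):
    # Locate the bracketing pair, then apply the linear interpolation formula.
    i = max(j for j in range(len(pla) - 1) if pla[j] <= x)
    slope = (thrust[i + 1] - thrust[i]) / (pla[i + 1] - pla[i])
    return thrust[i] + slope * (x - pla[i])

# e.g. PLA = 60 deg lies midway between the 54 deg and 66 deg entries:
# 8.7 + (13.81 - 8.7) * (60 - 54) / (66 - 54) = 11.255 kN
```

This is the same three-step look-up/slope/interpolate process described in Section 2.2, here with a plain search; the Virtual Equi-Spacing Address Map would replace the search with a direct index computation.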
The next section presents a study on efficient table look up algorithms and
numerical integration algorithms suitable for real time implementation in
flight simulators.
3. RESULTS
From the survey of existing techniques for numerical integration and table look-up, the multi-rate integration concept and the Virtual Equi-Spacing concept were implemented for real-time flight simulation and studied. This implementation was carried out in the real-time flight simulation facility designed and developed at CSIR-NAL.
Figure 6 shows the conceptual flowchart of real-time flight simulation. The simulation is typically started from an equilibrium/trim condition. For the given set of pilot inputs, the flight dynamics module solves the equations of motion using the chosen numerical integration method. All the associated computations must be completed within the cycle update time for real-time simulation. These computations are completed ahead of the cycle update time, and the beginning of the next cycle is delayed until the internal clock signals the next cycle update, as shown in Figure 6.
Figure 6. Conceptual flowchart of real-time flight simulation: get external inputs (pilot inputs and disturbances), execute the control laws, get surface positions from the hardware models, get forces and moments (aerodynamic, engine, landing gear), advance simtime by deltat, delay until the next cycle update, and repeat until the simulation is stopped.
3.1.1 Numerical Integration
The multi-rate integration concept is adopted in the real-time flight simulation facility designed and developed at CSIR-NAL. The full nonlinear model of the aircraft dynamics, along with the actuator dynamics, of a light transport aircraft is considered for this real-time flight simulation environment. The aircraft dynamics of the light transport aircraft constitute the slow dynamics, and the fast dynamics are composed of the actuator dynamics. A nominal integration step size of 0.025 s is chosen for the airframe simulation. Similarly, 0.0025 s is chosen as the integration step size for the actuator dynamics, based on the analysis of stability and dynamic accuracy. The ratio of the step size of the slow system to that of the fast system (the frame ratio) is thus 10, indicating a stiff system. The multi-rate integration scheme with frame ratio 10 and a simulation cycle update time of 0.025 s ensures the handling of the slow and fast subsystems. The Runge-Kutta pair of Bogacki and Shampine [20] is currently used for the numerical integration of the slow and fast dynamics. Table 2 presents the timing analysis for the (off-line) simulation carried out using a Windows-based timer function with microsecond resolution.
Table 2. Timing analysis for multi-rate and mono-rate integration techniques
Figure 7 shows the plots of the aircraft responses obtained with a pitch stick doublet for mono-rate integration with 0.0025 s sampling time and multi-rate integration with 0.025/0.0025 s sampling times. From the plots, it can be seen that the mismatch between the multi-rate integration scheme and the mono-rate solution is negligible.
Figure 7. Comparison plots of aircraft response variables (Alpha, Q, Vtot, Alt, Theta) over 0-50 s for mono-rate (at 0.0025 s) and multi-rate integration schemes.
For real-time applications, accuracy is the property to be sacrificed when it conflicts with the other properties: it is better to obtain a solution with some small error than to be unable to obtain a solution at all in the allowed time. Moreover, many real-time applications incorporate feedback control, which helps to compensate for errors and disturbances, including integration errors. For real-time flight simulation, the multi-rate integration scheme may therefore be adopted for better computational time.
3.1.2 Table Look-Up and Interpolation
This flight simulator facility uses aerodynamic and engine databases with
unevenly spaced breakpoints. It is proposed to retain the original dataset
with unevenly spaced breakpoints while facilitating faster table look-up
and interpolation.
As already discussed, one technique to save time is to remember the index
of the lower breakpoint of the interpolation interval used last time. From
one time step to the next, the value of the independent variable X is
unlikely to have changed substantially, so it is a good first try to use
the same interval as before, avoiding a wasteful search from one end of
the table each time. Hence, linear search with the option of remembering
the previously used index is used for the timing analysis.
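The remembered-index search can be sketched as follows. This is a minimal illustration (the class and method names are invented), assuming a 1-D table with monotonically increasing breakpoints:

```python
class TableLookup:
    """1-D table with unevenly spaced breakpoints; the search for the
    bracketing interval starts from the index used on the previous call."""

    def __init__(self, xs, ys):
        self.xs, self.ys = list(xs), list(ys)
        self.last = 0          # remembered lower-breakpoint index

    def interp(self, x):
        xs, i = self.xs, self.last
        # walk from the remembered interval; usually 0 or 1 steps per call
        while i > 0 and x < xs[i]:
            i -= 1
        while i < len(xs) - 2 and x >= xs[i + 1]:
            i += 1
        self.last = i
        frac = (x - xs[i]) / (xs[i + 1] - xs[i])
        return self.ys[i] + frac * (self.ys[i + 1] - self.ys[i])
```

Successive calls with slowly varying x then cost only a comparison or two, instead of a scan from the start of the table.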
Timing analysis is carried out for linear search with option of remembering
previously used index and the novel Virtual Equi-Spacing concept proposed
in the previous section. A windows based timer function with the resolution
in micro seconds is used to obtain the time taken for the table look up and
interpolation. Generally, this process includes, computing the location of
data component value in the corresponding data table and interpolation.
Table 3 Timing studies for different search and interpolation techniques
(Mach number 0.4, altitude 4500 m; times in microseconds)

PLA          Linear search (with option of        Virtual Equi-Spacing
condition    remembering previously used index)   concept
PLA / 50     18.08                                12.65
PLA / 90     18.0                                 12.5
PLA / 107    17.2                                 12.75
PLA / 110    19.1                                 12.44
The recommended Virtual Equi-Spacing technique has been used for the table
look-up and interpolation of the aerodynamic and engine database
consisting of around two lakh (200,000) data points, representing a
high-performance fighter aircraft. The engine database, of size 20,000
data points, is taken as an example to carry out the study. Table 3 gives
the timings of the two techniques studied at different PLA conditions
while the Mach number and altitude are kept the same. From the table, it
is found that the Virtual Equi-Spacing technique takes less time. Accuracy
is maintained, as the actual data tables are not modified.
4. CONCLUSIONS
A study was carried out to recommend efficient numerical integration and
table look-up techniques suitable for real-time flight simulation
comprising a system of stiff ordinary differential equations. Numerical
integration and table look-up techniques available in the literature were
implemented in a real-time flight simulator facility designed and
developed in house. An aircraft pitch control system representing the slow
and fast subsystems was considered for the study of numerical integration
techniques. Table look-up techniques such as linear search and the index
computation methodology using the Virtual Equi-Spacing concept were
studied on an example engine database of a high-performance fighter
aircraft. Virtual Equi-Spacing is a new concept developed for
interpolation of the large multi-dimensional tables frequently used in
flight simulation. With an excessively small step size it is possible to
solve the stiff differential equations, but this incurs a performance
penalty, an important consideration in real-time simulation. Hence, it is
recommended to opt for multi-rate simulation, where a step size
sufficiently small to ensure an accurate and stable actuator solution is
used for the actuator, and a larger step size is used for simulating the
slower dynamics of the airframe. The Virtual Equi-Spacing concept for
table look-up and interpolation leads to faster and accurate data access,
an essential feature of real-time simulation when handling larger
databases. From the results, it is found that the recommended multi-rate
integration technique and table look-up using the Virtual Equi-Spacing
concept perform better.
5. ACKNOWLEDGMENTS
The authors would like to thank Mr Shyam Chetty, Director, CSIR-NAL
and Dr (Mrs) Girija Gopalratnam, Head, Flight Mechanics and Control
Division, CSIR-NAL for their guidance and support.
REFERENCES
[1] Ken A Norlin, Flight Simulation Software at NASA Dryden Flight Research Center,
NASA TM 104315, October 1995
[2] David Allerton, Flight Simulation- past, present and future, The Aeronautical Journal,
Vol 104, Issue No. 1042, pp 651-663, December 2000
[3] J M Rolfe and K J Staples, Flight Simulation, Cambridge University Press, Year of
publication 1991
[4] Flight Mechanics & Control Division, CSIR-National Aerospace Laboratories,
NAL-ASTE Lecture Series, May 2003
[5] Max Baarspul, A review of Flight Simulation Techniques, Progress in Aerospace
Sciences, (An International Review Journal), Vol. 27, Issue No. 1, pp 1-120,
March 1990
[6] Joseph S. Rosko, Digital Simulation of Physical systems, Addison-Wesley Publishing
Company. Year of publication 1972
[7] Beal, T.R., Digital simulation of atmospheric turbulence for Dryden and Von Karman
models, Journal of Guidance Control and Dynamics, Vol 16, Issue No. 1,
pp132138, February 1993.
[8] http://qucs.sourceforge.net/tech/node24.html Accessed on 8.1.2014
[9] Brian L Stevens and Frank L Lewis, Aircraft and Control and Simulation, John Wiley
& Sons Inc. Year of Publication 1992
[10] David Allerton, Principles of Flight simulation, John Wiley & Sons Ltd. Year of
Publication 2009
[11] http://www.scribd.com/doc/121445651/PRINICIPLES-OF-FLIGHT-SIMULATION
Accessed on 8.1.2014
[12] http://mat21.etsii.upm.es/mbs/bookPDFs/Chapter07.pdf, Numerical Integration of
Equations of Motion Accessed on 26.6.2012
[13] Marc Rauw, FDC 1.4 A SIMULINK Toolbox for Flight Dynamics and Control
Analysis, Draft Version 7, May 25, 2005
[14] John W Wilson and George Steinmetz, Analysis of numerical integration techniques
for real-time digital flight simulation, NASA-TN-D-4900 dated November 1968,
Langley Research Center, Langley Station, NASA, Hampton, VA
[15] Harold Klee and Randel Allen, Simulation of Dynamic Systems with Matlab and
Simulink, Second Edition, CRC Press, Taylor and Francis Group, 2011
[16] Jim Ledin, Simulation Engineering:Build better embedded systems faster, CMP
books, Publication Year 2001
[17] http://www.embedded.com/design/real-world-applications/4023325/Dynamic-System-
Simulation
[18] Jir Kunovsky et al, Multi-rate integration and Modern Taylor Series Method, Tenth
International conference on Computer Modeling and simulation, 2008, IEEE Computer
Society.
[19] Donald E. Knuth, The art of computer programming Volume 3 / Sorting and
Searching, Addison-Wesley Publishing Company. Year of publication 1973
[20] http://en.wikipedia.org/wiki/Bogacki%E2%80%93Shampine_method Accessed on
12/10/2008
Appendix
Computing index values and data values:
data pladata /
     * 28.0,42.0,54.0,66.0,78.0,90.0,104.0,107.0,130.0/
The Address Map assumes the virtual equi-spaced data with 1.0deg step.
For the PLA value 28 to 41, the index number will be 1. For the PLA value
42.0 to 53.0, the index number will be 2. Similarly, for the PLA value 54.0
to 65.0, the index number will be 3 and so on.
data (plamap(it), it=1,103) /
* 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
* 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
* 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
* 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
* 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
* 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
* 7, 7, 7,
* 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8,
* 8, 8, 8, 8, 8, 8, 8, 8, 8,
* 9/
The index into these address maps can be directly computed based on the
step size. For example, for pla_val = 54.0:
iplav = int((pla_val-28.0)/1.0) + 1 = 27
iplax = plamap(iplav) = 3
Based on this index number corresponding to the independent variable PLA,
it is possible to obtain thrust value in the table.
thrust_val11 = thrust_tab(iplax) = 8.7
thrust_val12 = thrust_tab(iplax+1) = 13.81
thrust_val = thrust_val11 + ((thrust_val12-thrust_val11)/(pladata(iplax+1)-
pladata(iplax))) * (pla_val-pladata(iplax)) = 8.7
If the PLA value lies between two break points, e.g. pla_val = 70.5:
iplav = int((70.5-28.0)/1.0) + 1 = 43
iplax = plamap(iplav) = 4
thrust_val11 = thrust_tab(iplax) = 13.81
thrust_val12 = thrust_tab(iplax+1) = 20.24
thrust_val = thrust_val11 + ((thrust_val12-thrust_val11)/(pladata(iplax+1)-
pladata(iplax))) * (pla_val-pladata(iplax)) = 16.2213
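The worked example can be checked with a short Python transcription of the same scheme (0-based indices instead of the Fortran 1-based ones). Only the three thrust values quoted in the example are real; the zeros are placeholders, not engine data.

```python
# PLA breakpoints (deg) from the appendix; thrust values at 54, 66 and 78 deg
# are taken from the worked example, the zeros are placeholders.
pla_breaks = [28.0, 42.0, 54.0, 66.0, 78.0, 90.0, 104.0, 107.0, 130.0]
thrust_tab = [0.0, 0.0, 8.7, 13.81, 20.24, 0.0, 0.0, 0.0, 0.0]

STEP = 1.0  # virtual equi-spaced grid step (deg)

# Build the address map once: entry k holds the 0-based lower-breakpoint
# index for the virtual grid point pla_breaks[0] + k*STEP.
pla_map = []
j = 0
v = pla_breaks[0]
while v < pla_breaks[-1]:
    while v >= pla_breaks[j + 1]:
        j += 1
    pla_map.append(j)
    v += STEP

def thrust(pla_val):
    """Index computation plus linear interpolation (Virtual Equi-Spacing):
    one integer divide replaces the breakpoint search entirely."""
    i = pla_map[int((pla_val - pla_breaks[0]) / STEP)]
    frac = (pla_val - pla_breaks[i]) / (pla_breaks[i + 1] - pla_breaks[i])
    return thrust_tab[i] + frac * (thrust_tab[i + 1] - thrust_tab[i])
```

Evaluating thrust(70.5) reproduces the appendix value 16.2213, and the cost per look-up is fixed regardless of where the previous query fell, which is why Table 3 shows nearly constant timings for this technique.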
C. Sushanth
PG Scholar, Department of Information Technology
Sona College of Technology,
Salem, India.
ABSTRACT
Cloud computing is a rapidly evolving area that offers large potential for
organizations of all sizes to increase efficiency. A cloud broker acts as a
mediator between cloud users and cloud service providers. The main
functionality of the cloud broker lies in selecting the best Cloud Service
Provider (CSP) for the requirement set defined by the cloud user. Requests
from cloud users are processed by the cloud broker, and suitable providers
are allocated to them. This paper gives a detailed review of cloud
brokerage services and their methods of negotiating with service
providers. Once the SLA is specified by the cloud service provider, the
cloud broker negotiates the terms according to the user's specification.
The negotiation can be modeled as a middleware, and its services can be
provided as an application programming interface.
Keywords
Cloud computing, broker, mediator, service provider, middleware.
1. INTRODUCTION
A cloud refers to the interconnection of a huge number of computer systems
in a network. The cloud provider extends services to the cloud user
through virtualization technologies. Client credentials are stored on the
company's servers at a remote location. Every action initiated by the
client is executed in a distributed environment and, as a result, the
complexity of maintaining the software or infrastructure is minimized. The
services provided by cloud providers are classified into three types:
Infrastructure-as-a-Service (IaaS), Software-as-a-Service (SaaS), and
Platform-as-a-Service (PaaS). Cloud computing lets the client store
information at a remote site, so there is no need for local storage
infrastructure. A web browser acts as the interface between the client and
the remote machine, giving access to data after the client logs into
his/her account. The intent of every customer is to use cloud resources at
low cost with high efficiency in terms of time and space. If many cloud
service providers offer almost the same type of services, customers will
have difficulty choosing the right service provider. To handle this
situation of negotiating with multiple service providers, Cloud Broker
Services (CBS) play a major role as a middleware. The cloud broker acts as
a negotiator between the cloud user and the cloud service provider.
Initially, the cloud provider registers its offerings with the cloud
broker, and the user submits a request to the broker. Based on the type of
service and the requirements, the best provider is suggested to the cloud
user. Upon confirmation from the user, the broker establishes the
connection to the provider.
[Figure: brokerage architecture with an Identity Manager and the other
broker components mediating between cloud providers A and B.]
The Identity Manager handles user authentication through a unique ID. The
SLA Manager is responsible for SLA negotiation, creation and storage. The
Match Maker takes care of selecting suitable resources for cloud users.
The Monitoring and Discovery Manager monitors SLA metrics across the
various resource allocations. The Deployment Manager is in charge of
deploying services to the cloud user. An abstract cloud API provides
interoperability.
The user submits a request to the SLA Manager, which parses the request
into SLA parameters that are given to the Match Maker. By applying its
algorithm, the Match Maker finds the best-suited solution, and the
response is passed to the user. Upon user acceptance, a connection is
provided by the service provider.
Table 2.1 Sample SLA parameters for IaaS
Functional Non-functional
CPU speed Response time
OS type Completion time
Storage size Availability
Image URL Budget
Memory size Data transfer time
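The match-making step just described can be sketched as follows. This is a hypothetical illustration using a few of the SLA parameters from Table 2.1; the field names, the satisfaction rules, and the tie-break on response time are all assumptions for the sketch, not any broker's actual algorithm.

```python
def match_provider(request, providers):
    """Return the registered provider that satisfies the functional and
    budget requirements and is best on response time; None if no match."""
    def satisfies(offer):
        # functional parameters (see Table 2.1) plus the budget constraint
        return (offer["cpu_ghz"] >= request["cpu_ghz"]
                and offer["os"] == request["os"]
                and offer["storage_gb"] >= request["storage_gb"]
                and offer["price"] <= request["budget"])

    candidates = [p for p in providers if satisfies(p)]
    if not candidates:
        return None
    # non-functional tie-break: pick the fastest responder
    return min(candidates, key=lambda p: p["response_time_ms"])
```

A real match maker would score many more parameters and weight them, but the shape is the same: filter on hard requirements, then rank on soft ones.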
Tao Yu and Kwei-Jay Lin [2] introduce a Quality of Service (QoS) broker
module between cloud service providers and cloud users. The role of the
QoS broker is to collect information about active servers, suggest
appropriate servers to clients, and negotiate with servers to reach QoS
agreements. The QoS information manager collects the information required
for QoS negotiation and analysis. It checks the Universal Description
Discovery and Integration (UDDI) registry to get the server information
and contacts servers for QoS information such as their QoS load and
service levels. After receiving the client's functional and QoS
requirements, the QoS negotiation manager searches the broker's database
for qualified services. If more than one candidate is found, a decision
algorithm is used to select the most suitable one. The QoS information
from both the server and the QoS analyzer is used to make the decision.
With this architecture the load-balancing factor of the servers is
maintained for a large number of users, but it is not efficient in
delivering the best-suited provider to the client.
[Figure: QoS broker architecture linking the client, the UDDI registry,
the QoS information manager with its database, the QoS analyzer, and the
servers; a further figure illustrates recursive virtualization.]
Josef Spillner et al [3] use nested virtualization to provide services to
the cloud user; the outcome is a highly virtualising cloud resource
broker. The system supports hierarchically nested virtualization with
dynamically reallocatable resources. A base virtual machine is dedicated
to enabling the nested cloud; the other virtual machines, referred to as
sub-virtual machines, run at a higher virtualization level. The nested
cloud virtual machine is deployed by the broker and offers control
facilities through the broker configurator, which turns it into a
lightweight infrastructure manager. The proposed solution yields higher
reselling power for unused resources, but the hardware cost of running the
virtual machines is high if the desired performance is to be obtained.
Chao Chen et al [4] state the objectives of negotiation as: minimize price
with guaranteed QoS within the expected timeline; maximize profit from the
margin between the customer's financial plan and the provider's negotiated
price; and maximize profit by accepting as many requests as possible to
enlarge market share. The proposed automated negotiation framework uses a
Software-as-a-Service (SaaS) broker, which is utilized as the storage unit
for customers. This helps the user save time when selecting among multiple
providers. The negotiation framework assists the user in establishing a
mutual agreement between provider and client through the SaaS broker. The
main objective of the broker is to maintain the SLA parameters of the
cloud providers and suggest the best provider to the customer.
[Figure: automated negotiation framework showing the customer agent with
its negotiation engine, decision-making system, strategy DB, and SLA
templates created and sent to the SLA directory of the IaaS provider.]
The negotiation policy translator maps the customer's QoS parameters to
provider specification parameters. The negotiation engine includes
workflows that use the negotiation policy during the negotiation process.
The decision-making system uses decision criteria to update the
negotiation status. Minimum cost is incurred for resource utilization;
however, renegotiation for dynamic customer needs is not addressed.
Wei Wang et al [5] proposed a cloud brokerage service that reserves a
large pool of instances from cloud providers and serves users at
discounted prices. A practical problem facing cloud users is how to
minimize their costs by choosing among different pricing options based on
their own demands. The broker optimally exploits both the pricing benefits
of long-term instance reservations and multiplexing gains. A dynamic
approach for the broker to make instance reservations with the objective
of minimizing its service cost is achieved; the strategy uses dynamic
programming and efficient algorithms to quickly handle large demands.
[Figure: brokerage model in which the broker rents reserved and on-demand
IaaS instances from the cloud providers and serves users 1-3 at a lower
cost than direct on-demand purchase.]
The result is a smart cloud brokerage service that serves cloud user
demands from a large pool of computing instances dynamically launched
on demand from IaaS clouds. Partial usage of a billing cycle incurs a
full-cycle charge, which makes users pay for more than they actually use;
the broker instead uses a single instance to serve many users by
time-multiplexing usage, reducing the cost to the cloud user.
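The reserved-versus-on-demand trade-off at the heart of this brokerage model can be illustrated with a toy cost calculation. This is not the dynamic-programming strategy of [5], just a brute-force sketch under assumed prices: a flat cost per reserved instance for the whole horizon, and a per-period price for on-demand overflow.

```python
def optimal_reservation(demand, reserve_cost, ondemand_price):
    """Brute-force the number of reserved instances k that minimizes total
    cost over a demand trace; overflow demand is served on demand.
    demand: instances needed in each billing period.
    reserve_cost: flat cost of one reserved instance for the whole horizon.
    ondemand_price: cost of one on-demand instance for one period.
    Returns (k, cost)."""
    best = None
    for k in range(max(demand) + 1):
        cost = (k * reserve_cost
                + ondemand_price * sum(max(d - k, 0) for d in demand))
        if best is None or cost < best[1]:
            best = (k, cost)
    return best
```

The sketch already shows the multiplexing intuition: reserving pays off only for the portion of demand that is present most of the time, while bursty peaks are cheaper to serve on demand.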
Dharmesh Mistry [6] describes how a cloud broker assists Independent
Software Vendors (ISVs) in maximizing profit. When data arrive, they are
divided, an index is created, and finally the index is mapped to the
original values through analysis. Large organizations purchase such
software as SaaS instead of acquiring and hosting the software internally,
which disrupts ISVs that built their business on the traditional model.
The cloud broker acts as middleware between the ISV and cloud providers.
The ISV composes solutions to meet customer demands from existing
services. The broker provides services such as entitlement, analytics,
billing and payment, security, and context provisioning. ISVs usually rely
on per-module licensing models and software audits to confirm that the
appropriate number of users access the modules and functions for which the
customer has paid.
An ISV can thus drive faster profit growth while maintaining margins, and
respond to market demand more quickly.
late binding. A cloud delivery broker can make decisions such as where to
direct a user's request. A hybrid cloud must be able to describe
capabilities such as bandwidth, location, cost, and type of environment.
Sigma Systems [8] introduces a cloud service broker which is responsible
for order management, provisioning, billing integration and Single Sign-On
(SSO). In the proposed architecture, the Cloud Service Broker allows
service providers to offer their own SLAs, providing a single source for
all applications to customers. Providers can establish and grow a single,
combined collection of services that matches their portfolio and allows
unique groupings to meet their customers' needs. Cloud brokerage from
Sigma Systems is available either as a managed service or can be deployed
on premises.
[Figure: Sigma Systems brokerage of on-net and off-net services, including
video, VPN, backup, office productivity, managed voice, security,
messaging, unified collaboration, and SaaS/PaaS services.]
The Sigma model allows service providers to create single, highly
attractive packages by combining high-speed data and other complex network
services with business- and productivity-enhancing SaaS-based services.
It records the service type, time of day, and the identity of the user.
All information sent to cloud services must be examined for data
disclosure in order to enable Data Loss Prevention (DLP). Caching protects
the enterprise from the latency associated with connecting to the cloud
service. Service Level Agreement (SLA) monitoring observes the whole
transaction throughput time. The Cloud Service Broker contains a pluggable
structure which allows modules to be added, such as modules providing
additional encryption algorithms.
the service provider issues a ticket bound to the user's digital identity
once authentication is complete.
Users authenticate through an IDP-enforced method to prove their identity.
If authentication succeeds, the IDP asserts the user's identity in the
ticket sent back to the browser, which in turn sends it to the service
provider. Users can then access the requested services. The existing
IDP-enforced authentication method is by means of a user name and
password. Because the entire federated system of Web services requires
only one username and password credential, SSO systems are convenient for
the user. At the same time, such credentials become a major target for
hackers, because they give access to many private user resources at once.
Presently, network traffic between users' browsers and remote servers is
secured by ubiquitous standard security protocols for information
exchange, based on Secure Sockets Layer (SSL) and Transport Layer Security
(TLS).
Muhammad Zakarya and Ayaz Ali Khan [11] identify the Distributed Denial of
Service (DDoS) attack as a major present-day threat, which they counter
with a new cloud environment architecture and an Anomaly Detection System
(ADS). The ADS improves computation time, QoS and availability. Each cloud
is divided into regional areas known as GS, and each GS is protected by an
AS/GL. The developed ADS is installed in the cloud nodes or AS and the
routers. A tree is maintained at every router by marking every packet
under a path-modification strategy, so the attacking node is easily found.
The ADS has two phases: detection of malicious flows, and a confirmation
algorithm to drop the attack flow or pass it.
The randomness, or entropy, is given by

H(X) = - Σ_i p(x_i) log p(x_i)    (2.1)

where 0 <= H(X) <= log(n), and p(x_i) is the probability of value x_i:

p(x_i) = m_i / m    (2.2)

where m_i is the number of packets with value x_i and m is the total
number of packets. The normalized entropy is calculated to characterize
the overall distribution of the captured packets in a specific time
window:

Normalized entropy = H / log(n0)    (2.3)
For detection of a DDoS attack, a threshold value is decided. An edge
router collects the traffic flow for a specific time window w. The
probability p(x_i) is found for each packet value, the link entropy of all
active nodes is calculated separately, and H(X) is calculated at the
router. If the normalized entropy falls below the threshold, a malicious
attack flow is indicated and the system is compromised. For confirmation
of attack flows, a second threshold value is decided and compared with the
entropy rate.
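The entropy computation of Eqs. (2.1)-(2.3) can be sketched directly. This is a minimal illustration in which the packet "value" is any hashable header field, and the detection threshold of 0.5 is purely illustrative, not a value from [11].

```python
import math
from collections import Counter

def normalized_entropy(values):
    """Eqs. (2.1)-(2.3): Shannon entropy of the observed value distribution
    in a window, divided by log(n) so the result lies in [0, 1]."""
    counts = Counter(values)
    m = sum(counts.values())           # total packets in the window
    # Eq. (2.1) with p(x_i) = m_i / m from Eq. (2.2)
    h = -sum((mi / m) * math.log(mi / m) for mi in counts.values())
    n = len(counts)
    return h / math.log(n) if n > 1 else 0.0   # Eq. (2.3)

def is_attack_window(values, threshold=0.5):
    """Low normalized entropy means traffic concentrated on few values,
    flagged as a possible DDoS flow (threshold is illustrative)."""
    return normalized_entropy(values) < threshold
```

Evenly spread traffic yields a normalized entropy of 1.0, while a window dominated by one value drops toward 0 and trips the detector.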
Srijith K. Nair et al [12] describe the concepts of cloud bursting and
cloud brokerage, and a brokerage framework based on the OPTIMIS service.
When a private cloud needs to access an external cloud for a certain time
for computation, the process is called cloud bursting. The internal cloud
in the company needs to verify SLA requirements to measure performance.
The cloud bursting architecture being developed by OPTIMIS needs the
following capabilities: a common management interface, a set of monitoring
tools, a global load balancer, and categorized providers. A cloud
brokerage model was created by cloud service providers for the cloud
management platform. The cloud management platform is responsible for
activities such as policy enforcement, usage monitoring, network security
and platform security. A cloud API mediates consumer interaction with the
cloud broker. The SLA monitoring unit is responsible for monitoring all
SLAs and violations. The identity and access module records serviced
customers and generates one-time tokens. The audit unit inspects the
broker platform and its capabilities. Risk management prioritizes risks
based on events. Network/platform security provides overall security
through an IDS. The user sends a storage request to the cloud portal. The
portal then forwards the ID and password to Identity and Access Management
(IAM), which verifies them and grants access along with criteria. The
cloud portal converts the identity and access rights to an external token
containing the criteria and the request, which is encrypted and sent to
the Broker IAM. The Broker IAM decrypts it using the portal's public key,
verifies its integrity, and in turn generates a one-time access token.
Instance Agent (TIA) conducts a key exchange with the controller to
establish an HTTPS connection using a novel algorithm. The controller
checks the validity of the certificate in the image. To maintain integrity
it employs an Intrusion Detection System; if any violations are met, the
virtual channel is terminated.
stated in the desired models named in the TDF. Next, as a result of a
multi-criteria optimization process, a set of equivalent specifications is
selected.
Where m, n, q are the numbers of transactions at the lower domain needed
to complete a transaction at the higher domain. The authentication metric
is a logical conjunction of each level in EMMRA:
- Security Authentication
- Authorization
- Non-repudiation
- Integrity
- Information Availability
- Certification and Accreditation
- Physical Security
3. CONCLUSIONS
The development of cloud brokerage service frameworks is gaining momentum,
since their usage is pervasive across all verticals. The works so far do
not consider the scenario of more than one cloud service provider meeting
the same level of requirements for the user; this scenario induces
ambiguity for users choosing an appropriate provider. Cloud Broker
Services act on behalf of the user to choose a particular service provider
for delivering the service to the user. If the Cloud Broker Service
becomes a standard middleware framework, many chores of cloud service
providers can be taken over by the CBS.
REFERENCES
[1] Foued Jrad, Jie Tao, Achim Streit, SLA Based Service Brokering in Intercloud
Environments. Proceedings of the 2nd International Conference on Cloud Computing
and Services Science, pp. 76-81, 2012.
[2] Tao Yu and Kwei-Jay Lin, The Design of QoS Broker Algorithms for QoS-Capable
Web Services, Proceedings of IEEE International Conference on e-Technology, e-
Commerce and e-Service, pp. 17-24, 2004.
[3] Josef Spillner, Andrey Brito, Francisco Brasileiro, Alexander Schill, A Highly-
Virtualising Cloud Resource Broker, IEEE Fifth International Conference on Utility
and Cloud Computing, pp.233-234, 2012.
[4] Linlin Wu, Saurabh Kumar Garg, Rajkumar Buyya, Chao Chen, Steve Versteeg,
Automated SLA Negotiation Framework for Cloud Computing, 13th IEEE/ACM
International Symposium on Cluster, Cloud, and Grid Computing, pp.235-244, 2013.
[5] Wei Wang, Di Niu, Baochun Li, Ben Liang, Dynamic Cloud Resource Reservation via
Cloud Brokerage, Proceedings of the 33rd International Conference on Distributed
Computing Systems (ICDCS), Philadelphia, Pennsylvania, July 2013.
[6] Dharmesh Mistry, Cloud Brokers can help ISVs Move to SaaS, Cognizant 20-20
Insight, and June 2011.
[7] Lori MacVittie, Integrating the Cloud: Bridges, Brokers, and Gateways, 2012.
[8] Sigma Systems, Cloud Brokerage: Clarity to Cloud Efforts, 2013.
[9] Vordel white papers, Cloud Governance in the 21st century, 2011.
[10] Apostol T. Vassilev, Bertrand du Castel, Asad M. Ali, Personal Brokerage of Web
Service Access IEEE Security & Privacy, vol. 5, no. 5, pp. 24-31, Sept.-Oct. 2007.
[11] Muhammad Zakarya & Ayaz Ali Khan, Cloud QoS, High Availability & Service
Security Issues with Solutions, International Journal of Computer Science and Network
Security, vol.12 No.7, July 2012.
[12] Srijith K. Nair, Sakshi Porwal, Theo Dimitrakos, Ana Juan Ferrer, Johan Tordsson,
Tabassum Sharif, Craig Sheridan, Muttukrishnan Rajarajan, Afnan Ullah Khan,
Towards Secure Cloud Bursting, Brokerage and Aggregation, Eighth IEEE European
Conference on Web Services, pp.189-196, 2010.
[13] Shtern. M, Simmons. B, Smit. M, Litoiu. M, An architecture for overlaying private
clouds on public providers, Eighth International Conference and Workshop on Systems
Virtualization Management, pp.371, 377, 22-26 Oct. 2012.
[14] Przemyslaw Pawluk, Bradley Simmons, Michael Smit, Marin Litoiu, Serge
Mankovski, Introducing STRATOS: A Cloud Broker Service, IEEE Fifth International
Conference on Cloud Computing, pp.891-898, 2012.
[15] Hershey. P, Rao. S,Silio. C.B., Narayan. A, System of Systems to provide Quality of
Service monitoring, management and response in cloud computing environments, 7th
International Conference on System of Systems Engineering (SoSE), vol., no., pp.314,
320, 16-19 July 2012.
ABSTRACT
Recommender systems aim to help users find items of interest in large data
collections with little effort. Such systems use various recommendation
approaches to provide increasingly accurate recommendations. Among them,
the collaborative filtering (CF) approach is the most widely used in
recommender systems. Of the two types of CF system, item-based CF
overtakes traditional user-based CF, since it can overcome the scalability
problem of the user-based approach. An item-based CF system computes
predictions of a user's taste for new items based on item similarities
derived from the users' explicit ratings; it predicts ratings on new items
from the users' historical ratings. The proposed system improves the
item-based collaborative filtering approach by enhancing the rating-based
similarity of items with the demographic similarity of the items. It
modifies one of the prediction methods, the weighted sum, to be weighted
by the enhanced similarity of the items. This system intends to offer
better prediction quality than other approaches and to produce better
recommendation results by considering item-demographic similarity together
with the similarity computed from the users' explicit ratings.
Keywords
Recommender systems, collaborative filtering approach, item-based CF system, user-based
CF systems, demographic similarity, weighted sum.
1. INTRODUCTION
With the explosive growth of knowledge available on the World Wide Web,
which lacks an integrated structure or schema, it becomes much more
difficult for users to access relevant information efficiently. Meanwhile,
the substantial increase in the number of websites presents webmasters
with the challenging task of organizing site contents to cater to users'
needs. Web usage mining has seen a rapid increase in interest from both
the research and practice communities. The motivation of web usage mining
is to discover users' access patterns automatically and quickly from the
vast amount of Web log data, such as frequent access paths, frequent
access page groups and user clusterings. More recently, Web usage mining
has been proposed as an underlying approach for Web personalization. The
goal of personalization based on Web usage mining is to recommend a set of
objects to the current (active) user, possibly consisting of links, ads,
text, products, or services, tailored to the user's perceived preferences
as determined by the matching usage patterns [1].
2. MEMORY-BASED TECHNIQUES IN RECOMMENDER
SYSTEMS
Memory-based techniques continuously analyze all user or item data to
calculate recommendations, and can be classified into the following main
groups: collaborative filtering, content-based techniques, and hybrid
techniques [2]. While content-based techniques base their recommendations
on individual information and ignore contributions from other users,
collaborative filtering systems emphasize the preferences of similar users
or items for their recommendations. Since the proposed system uses
collaborative filtering techniques, explanations of the other techniques
are omitted in this paper and the analysis of collaborative filtering
techniques is emphasized.
2.1 Collaborative Filtering Techniques (CF)
This approach recommends items that were used by similar users in the
past; recommendations are based on social, community-driven information
(e.g., user behavior such as ratings or implicit histories).
Table 1. Special types and special characteristics of memory-based CF
techniques

Special types:
- Neighborhood-based CF
- Item-based / user-based top-N recommendations

Pros:
- easy to implement
- easy to add new data
- no need to consider the content of the items in recommendation

Cons:
- reliant on human ratings
- performance may be impacted when rating data are sparse
- problems in recommending for new users and items
- scalability limitation for large datasets
Memory-based collaborative filtering techniques have special characteristics
and representative techniques. Table 1 describes the pros and cons of
memory-based CF techniques [2].
In user-based CF algorithms, a set of k users similar to the target user
is first found, based on correlations or similarities between the user
records and the target user. Then a prediction value for the target user
on unrated items is produced from the similar users' ratings. This
approach suffers from a scalability problem in large-scale recommender
systems.
In contrast, item-based CF algorithms attempt to find k similar items that
are co-rated similarly by different users, performing the similarity
computations among the items. Thus, item-based CF algorithms avoid the
bottleneck in user-based algorithms by first considering the relationships
among items. For a target item, predictions can be generated by taking a
weighted average of the target user's ratings on these similar items
[3, 6].
2.1.1 Similarity Computation
Recommender systems usually use one of three similarity computation
techniques: Cosine-based Similarity, Correlation-based Similarity, and
Adjusted Cosine Similarity. The proposed system uses adjusted cosine
similarity for similarity computation.
2.1.1.1 Adjusted Cosine Similarity Vs. Modified Adjusted Cosine Similarity
1) Adjusted Cosine Similarity
Computing the similarity value with the basic cosine measure in an item-based
recommendation system has one important weakness: the differences
in rating scale between different users are not taken into account. The
adjusted cosine similarity offsets this drawback by subtracting the corresponding
user average from each co-rated pair. However, one drawback remains:
the different rating styles of the different users are still not taken into account.
Adjusted cosine similarity subtracts the average rating of user u from his/her
ratings on items i and j respectively, and then
computes the similarity value as shown in Eq. (1).

sim(i, j) = Σ_{u∈U} (R_{u,i} − R̄_u)(R_{u,j} − R̄_u) / ( √(Σ_{u∈U} (R_{u,i} − R̄_u)²) · √(Σ_{u∈U} (R_{u,j} − R̄_u)²) )   (1)
In Eq. (1),
R̄_u is the average value of the u-th user's ratings [4].
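For illustration, Eq. (1) can be sketched in Python; the per-user dictionary layout and the function name are assumptions of this sketch, not part of the paper:

```python
import math

def adjusted_cosine(ratings, i, j):
    """Adjusted cosine similarity between items i and j, Eq. (1).

    ratings maps each user to a dict {item: rating}; only users who
    rated both i and j (the co-rated pairs) contribute to the sums.
    """
    num = den_i = den_j = 0.0
    for r in ratings.values():
        if i in r and j in r:
            r_bar = sum(r.values()) / len(r)  # average rating of user u
            di, dj = r[i] - r_bar, r[j] - r_bar
            num += di * dj
            den_i += di * di
            den_j += dj * dj
    if den_i == 0.0 or den_j == 0.0:
        return 0.0  # no co-rated pairs, or no rating variance
    return num / (math.sqrt(den_i) * math.sqrt(den_j))
```

Subtracting each user's own average before multiplying is exactly what compensates for users who rate systematically high or low.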
2) Modified Adjusted Cosine Similarity
Adjusted cosine similarity still ignores the individual rating styles of the users.
For this reason, the proposed system improves the computation by
normalizing the rating values.
Table 2. Computation of enhanced correlation similarity

Modified Adjusted    Demographic Similarity or     Enhanced Correlation Similarity
Cosine Similarity    Content Similarity of Items   (enh_cor_ij = sim_ij + (sim_ij × dem_cor_ij))
(sim_ij)             (dem_cor_ij)
0.5                  0.2                           0.6
0.3                  0.4                           0.42
0.6                  0.2                           0.72
0.4                  0.8                           0.72
0.5                  0.5                           0.75
0.8                  0.1                           0.88
0.7                  0.3                           0.91
For example, suppose the system's rating range is 1 to 5. User i gives the
rating 3 to his/her favorite item t, while another user j gives the rating 5 to
his/her favorite item t. In such a case the system cannot recognize item t as
user i's favorite, although it does recognize it as user j's favorite: the system
cannot treat a user's highest rating as a favorite when that rating is not also
the highest rating of the system. Therefore, the system needs to normalize the
rating styles so that it can accurately determine which items a user likes most
and which least, even when users have different rating styles. The proposed
system applies normalized ratings to overcome this problem. The proposed
method, modified adjusted cosine similarity, can reduce the system's
misunderstanding of users' likes and dislikes. Eq. (2) denotes the computation
of the similarity value by modified adjusted cosine similarity.
sim(i, j) = Σ_{u∈U} (NR_{u,i} − R̄_u)(NR_{u,j} − R̄_u) / ( √(Σ_{u∈U} (NR_{u,i} − R̄_u)²) · √(Σ_{u∈U} (NR_{u,j} − R̄_u)²) )   (2)
In Eq. (2),
R̄_u is the average value of the u-th user's ratings, and NR_{u,i} is the
normalized rating:

NR_{u,i} = R_{u,i} × HS / HR_u   (3)

In Eq. (3),
HS means the highest rating scale of the system and
HR_u means the highest rating scale of the current user u.
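As a sketch of Eq. (3) (the function name is illustrative), the normalization stretches each user's personal scale onto the system's scale:

```python
def normalize_rating(r_ui, hs, hr_u):
    """Normalized rating NR_{u,i} = R_{u,i} * HS / HR_u, Eq. (3).

    hs   -- highest rating scale of the system (e.g., 5)
    hr_u -- highest rating scale of the current user u
    """
    return r_ui * hs / hr_u

# The paper's example: on a 1-5 system, a user whose highest rating is 3
# has that rating stretched to the top of the scale.
nr = normalize_rating(3, 5, 3)  # 5.0
```

After normalization, both users' favorite items receive the same top value, so the similarity computation of Eq. (2) compares likes and dislikes rather than rating habits.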
Considering also the topic similarity of items, the enhanced correlation
similarity is

enh_cor_{ij} = sim_{ij} + (sim_{ij} × dem_cor_{ij})

where sim_{ij} means the similarity of item i and item j from the adjusted cosine
similarity after normalizing the users' rating behaviour, and dem_cor_{ij} means
the similarity of item i and item j according to the topic similarity.
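The enhancement step can be sketched directly from the formula; the values below reproduce two rows of Table 2:

```python
def enhanced_correlation(sim_ij, dem_cor_ij):
    """Enhanced correlation similarity:
    enh_cor_{ij} = sim_{ij} + (sim_{ij} * dem_cor_{ij})."""
    return sim_ij + sim_ij * dem_cor_ij

# Rows of Table 2: (0.5, 0.2) -> 0.6 and (0.7, 0.3) -> 0.91.
```

Because dem_cor_{ij} is non-negative, the enhancement can only raise the rating-based similarity of items that also share topics; it never lowers it.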
Table 2 describes how the enhanced correlation similarity is computed and
demonstrates how the demographic similarity improves the modified
adjusted similarity value.
2.1.2 Prediction Computation
To produce a recommendation, recommender systems first compute
prediction values and then recommend items according to those values.
Weighted sum is one of the most widely used prediction techniques;
however, it uses only the rating-based similarity of the items. The proposed
system enhances the weighted sum technique by using enhanced correlation
similarity instead of the adjusted cosine similarity value. Enhanced
correlation similarity is the similarity value in which the modified adjusted
cosine similarity value is enhanced with the demographic similarity of the
two items.
2.1.2.1 Weighted Sum Vs. Modified Weighted Sum
1) Weighted Sum
The prediction value of the weighted sum technique is computed by summing
the ratings given by the user on the items similar to i, each rating weighted
by the corresponding similarity s_{i,j} between items i and j, and normalizing
by the sum of the similarity magnitudes. Eq. (4) denotes the formula for
prediction computation with weighted sum.

P_{u,i} = Σ_{all similar items N} (sim_{i,N} × R_{u,N}) / Σ_{all similar items N} |sim_{i,N}|   (4)
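A minimal sketch of the weighted sum prediction; the dictionary layout is an assumption of this sketch, and the denominator uses the sum of similarity magnitudes as in [4]. Passing enhanced correlation similarities and normalized ratings to the same routine yields the modified weighted sum described below:

```python
def weighted_sum(similar, user_ratings):
    """Weighted sum prediction P_{u,i}, Eq. (4).

    similar      -- {item N: sim_{i,N}} for the items similar to i
    user_ratings -- {item N: R_{u,N}} for the items user u has rated
    """
    # Only items the user has actually rated contribute to the prediction.
    num = sum(s * user_ratings[n] for n, s in similar.items() if n in user_ratings)
    den = sum(abs(s) for n, s in similar.items() if n in user_ratings)
    return num / den if den else 0.0
```

Dividing by the similarity magnitudes keeps the prediction on the same scale as the ratings themselves.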
2) Modified Weighted Sum
In the Modified Weighted Sum of Eq. (5), each normalized rating NR_{u,N} of
Eq. (6) is weighted by the enhanced correlation similarity enh_cor_{i,N}. The
prediction P_{u,i} is denoted as

P_{u,i} = Σ_{all similar items N} (enh_cor_{i,N} × NR_{u,N}) / Σ_{all similar items N} |enh_cor_{i,N}|   (5)

Systems that rely only on rating-based similarity
ignore the rating style of the current user. The proposed system,
Recommender System for Resources and Educational Assistants for
Learners, overcomes this challenge by normalizing the current user's rating
style. In the similarity computation, the system considers the rating
similarity together with the topic similarity of the resource pages. To
avoid the cold-start problem for users that earlier systems encountered, the
proposed system uses stereotype or demographic CF. As a result, the
system takes advantage of both item-based CF and stereotype or
demographic CF. Moreover, the system avoids the scalability and
quality bottlenecks of user-based CF since it uses item-based
collaborative filtering techniques.
Modifying adjusted cosine similarity with the normalized ratings of users and
modifying weighted sum with enhanced correlation similarity not only
determine accurately which items a user likes most but also
produce higher prediction quality than systems that do not
consider item demographic data and emphasize only the ratings of the
users. The system can reduce the mean absolute error (MAE) between the
predicted ratings and the actual ratings of the users thanks to the advantages
of modified adjusted cosine similarity and modified weighted sum.
5. CASE STUDY OF RESOURCES AND EDUCATIONAL
ASSISTANTS RECOMMENDATION
The following tables show the case study of resources and educational
assistants recommendation. Table 3 shows, in the first column, all links
the current user u has rated; the links in the second column are those whose
ratings need to be predicted for the current user, since he/she has not rated them.

Table 3. The links which the current user has rated and other links which the
current user has not rated but other users have rated

The links the current user has rated:
- IEEE seminar topics on networking 2011-2012
- Social Networking
- Electronics & Communication Project Topics
- LAN Monitoring and Controlling
- Network Books of Free Computer Books

The links the current user has not rated but other users have rated:
- LAN & WAN
- IPv6
- JavaWorld: Solutions for Java Developers
- Mobile Java
- Core Java
The data in Table 4 list the links co-rated with each link to be predicted.
Fig. 1 shows that, among the links co-rated with the predicted link LAN &
WAN, four are links the current user has already rated while the other three
are not. In Fig. 2, there are three co-rated links the current user has already
rated and four that he/she has not. There are no co-rated links the current user
has rated in Figs. 3, 4, and 5; according to this result, these three links are
unlikely to be of interest to the current user. Finally, the system recommends
the two links, LAN & WAN and IPv6, according to the prediction values.
Table 4. Predicted links with their similar links
Fig. 1 - Fig. 5. Co-rated links for the respective predicted links
Fig. 6. Recommended links for current user
MAE = ( Σ_{{u,i}} |P_{u,i} − r_{u,i}| ) / N   (7)
In Eq. (7),
P_{u,i} is the predicted rating of user u on item i,
r_{u,i} is the actual rating of user u on item i, and
N is the number of ratings in the test set.
The proposed system can reduce MAE by applying both demographic
correlation and rating similarity of items.
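Eq. (7) can be sketched as follows (aligned lists of predicted and actual ratings are assumed):

```python
def mean_absolute_error(predicted, actual):
    """MAE, Eq. (7): the average absolute deviation between the
    predicted ratings P_{u,i} and the actual ratings r_{u,i}."""
    assert len(predicted) == len(actual)
    n = len(predicted)  # N, the number of ratings in the test set
    return sum(abs(p - r) for p, r in zip(predicted, actual)) / n
```

A lower MAE means the predicted ratings track the users' actual ratings more closely, which is how the comparison in the next subsection is read.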
6.1.1 Comparison of MAE Values
The following table compares the MAE of a system that uses adjusted cosine
similarity for similarity computation and weighted sum for prediction
computation against the MAE of the proposed system.

Table 5. Comparison of MAE Values

Adjusted cosine + weighted sum    Proposed system
2.987                             2.635
1.96                              1.93
1.92                              1.87
7. CONCLUSIONS
Recommendation systems are a very popular research area. Such systems
are applied in many areas such as book recommendation [1], movie
recommendation, and music recommendation. However, there are few
recommendation systems for learning resources. Recommendation
techniques are sometimes used in e-learning systems, mostly to make it
convenient for learners to access the learning resources those systems
provide. Such systems are usually based on user log data and rarely
rating-based. Moreover, because most of the learning resources provided in
such systems are e-books and audio/video lectures, the proposed system
intends to address this lack of resources. The learning resources in the
proposed system are not only e-books and audio/video lectures but also
educational hyperlinks from the web. The topics of these links relate to
various fields such as information and communication technology, computer
science, digital signal processing, personalized information management,
security challenges in mobile networks, and management in cloud services.
Moreover, appropriate international universities are recommended according
to the user profiles. Finally, the proposed system intends not only to address
the lack of resources for learners by providing rich topics from the web but
also to offer more accurate prediction quality through the proposed methods.
REFERENCES
[1] Bamshad Mobasher, DePaul University, Web Usage Mining and Personalization.
[2] Hendrik Drachsler, Hans G.K. Hummel and Rob Koper, Personal recommender
systems for learners in lifelong learning networks: the requirements, techniques and
model.
[3] Good, N., Schafer, J.B., Konstan, J.A., Borchers, A., Sarwar, B., Herlocker, J., Riedl,
J.: Combining collaborative filtering with personal agents for better
recommendations. Proceedings of AAAI 99 (1999) 439-446.
[4] Badrul Sarwar, George Karypis, Joseph Konstan, and John Riedl, Item-based
Collaborative Filtering Algorithms, ACM 1-58113-348-0/01/0005, May 1-5, 2001,
Hong Kong.
[5] Andrew I. Schein, Alexandrin Popescul, Lyle H. Ungar, and David M.
Pennock, "Methods and metrics for cold-start recommendations", SIGIR '02:
Proceedings of the 25th annual international ACM SIGIR conference on Research
and development in information retrieval, pages 253-260, New York, NY, USA, 2002.
ACM.
[6] L. Candillier, F. Meyer, F. Fessant, and K. Jack, State-of-the-art recommender
systems, 2009.
[7] Shao, B., Wang, D., Li, T., and Ogihara, M. (2009). Music recommendation based on
acoustic features and user access patterns. IEEE Transactions on Audio, Speech and
Language Processing, 17(8):1602-1611.
[8] Su, J. and Yeh, H. (2010). Music recommendation using content and context
information mining. IEEE Intelligent Systems, 25:16-26.
[9] Music recommendation from song sets. In 5th International Conference on Music
Information Retrieval, pages 425-428.
[10] Hu, Yajie, "A Music Recommendation System Based on User Behaviors and Genre
Classification" (2012). Open Access Theses. Paper 336.
[11] Khribi, M. K., Jemni, M., & Nasraoui, O. (2009). Automatic Recommendations for E-
Learning Personalization Based on Web Usage Mining Techniques and Information
Retrieval. Educational Technology & Society, 12(4), 30-42.
[12] Brusilovsky, P., & Henze, N. (2007). Open Corpus Adaptive Educational Hypermedia.
In P. Brusilovsky, A. Kobsa & W. Nejdl (Eds.), The Adaptive Web: Methods and
Strategies of Web Personalization (pp. 671-696). Heidelberg, Germany:
Springer-Verlag.
[13] Brusilovsky, P., Sosnovsky, S., & Shcherbinina, O. (2005). User Modeling in a
Distributed E-Learning Architecture. In L. Ardissono, P. Brna & M. A. (Eds.),
Proceedings of the 10th International Conference on User Modeling (UM'2005),
Edinburgh, UK (pp. 387-391).
[14] Rich, E. User Modeling via Stereotypes. Cognitive Science, 3(4):329-354, 1979.
[15] Goldberg, D., D. Nichols, B. M. Oki, and D. Terry. Using collaborative filtering to
weave an information tapestry. Communications of the ACM, 35(12):61-70, 1992.
[16] Konstan, J. A., B. N. Miller, D. Maltz, J. L. Herlocker, L. R. Gordon, and J. Riedl.
GroupLens: Applying collaborative filtering to Usenet news. Communications of the
ACM, 40(3):77-87, 1997.
[17] Resnick, P., N. Iakovou, M. Sushak, P. Bergstrom, and J. Riedl. GroupLens: An open
architecture for collaborative filtering of netnews. In Proceedings of the 1994
Computer Supported Cooperative Work Conference, 1994.
[18] Hill, W., L. Stead, M. Rosenstein, and G. Furnas. Recommending and evaluating
choices in a virtual community of use. In Proceedings of CHI'95.
[19] Shardanand, U. and P. Maes. Social information filtering: Algorithms for automating
word of mouth. In Proc. of the Conf. on Human Factors in Computing Systems, 1995.
Kapil Verma
Department of Computer Science,
M. P. Bhoj (Open) University, Bhopal, M. P, India.
ABSTRACT
The problem of internet traffic sharing between two operators was discussed
by Naldi (2002), who developed a mathematical relationship between
traffic share and network blocking probability. This relationship generates a
probability-based quadratic function which has a definite bounded area.
This area is a function of many parameters and needs to be estimated, but it
is difficult to solve by direct integration methods. This paper presents
an approximate methodology to estimate the bounded area using the
Trapezoidal rule of numerical quadrature. It is found that the bounded area is
directly proportional to customer choice and network blocking. It helps to
explain the relationship between traffic share and computer network parameters.
Keywords
Probability, Area estimation, Trapezoidal rule, Network blocking, Operator.
1. INTRODUCTION
The Internet business is growing fast, and many countries still use the dial-
up setup provided by Internet operators. When the network is blocked, this is
called congestion, and it is a parameter of dissatisfaction among users. Naldi
(2002) suggested a Markov chain model in which internet traffic sharing was
considered between two network operators. He developed expressions for
traffic share and network blocking probability. These expressions are
functions of many other input parameters, such as the initial choice of the user,
the blocking probability of the competitor operator, and the probability of
abandoning use. The graphical relationship between traffic share and the
owner's blocking probability is complex and generates a curve where both
axes contain probability values ranging between 0 and 1. It is therefore
necessary to estimate the area bounded by these curves and the x-axis. If the
area is high, the operator can have more traffic share. The estimation of the
bounded area provides first-hand knowledge about the traffic share status.
This paper presents an approximate computation of the bounded area using
the Trapezoidal method of numerical quadrature.
2. BACKGROUND STUDY
The stochastic modeling was initiated by Naldi (2002) and consequently
utilized by Shukla, Gadewar and Pathak (2007). Shukla, Tiwari and Tiwari
(2009) extended a Markov chain model approach for internet traffic sharing by
introducing the concept of a two-call basis. Shukla, Tiwari and Deshmukh (2010)
suggested a new expression for traffic sharing between two operators.
Shukla and Thakur (2010) examined the disconnectivity factor effect in
traffic sharing and modified the traffic sharing expressions. Shukla, Verma
and Gangele (2011) focused on the problem of re-attempt connectivity over
the same area. Shukla, Gangele, Verma and Trivedi (2011) derived the
internet traffic sharing expressions for cyber crime through elasticity
analysis. Shukla and Singh (2011) utilized the knowledge of the Markov chain
model for the scenario of web browsing. Some other useful contributions on
traffic sharing are due to Shukla, Gangele and Verma (2012) and Shukla,
Verma, Dubey and Gangele (2012), where the Markov chain model was
utilized as a tool for expression development and curve fitting.
Shukla, Jain, and Ojha (2009) presented an analysis of thread
scheduling with multiple processors under a Markov chain model, whereas
Shukla, Jain, Singhai, and Agrawal (2009) discussed a Markov chain model
based analysis of the round robin scheduling scheme. Shukla and Jain (2009)
discussed the deadlock analysis of a class of multi-level queue
scheduling in operating systems using a Markov chain model. Shukla, Thakur,
and Deshmukh (2009) performed a state probability analysis of Internet
traffic sharing. Shukla, Tiwari, Thakur and Deshmukh (2009) contributed a
share loss analysis of Internet traffic distribution. Shukla, Tiwari,
and Kareem (2009) presented a comparative analysis of the Internet traffic
sharing problem using a Markov chain model.
Shukla et al. (2010) discussed a stochastic model approach for reaching
probabilities of message flow in space division switches. Shukla et al.
(2010) examined the effect of dis-connectivity analysis in computer
networks for congestion control. Shukla, Ojha and Jain (2010) suggested a
performance evaluation of a general class of multilevel queue scheduling
schemes, whereas Shukla, Ojha, and Jain (2010) discussed the data model
approach and Markov chain based analysis of multilevel queue scheduling.
Shukla, Jain, and Choudhary (2010) attempted the estimation of
ready queue processing time under the SL-scheduling scheme in a
multiprocessor environment, and in a similar approach Shukla, Jain, and
Ojha (2010) explored the effect of the data model on the analysis of multi-level
queue scheduling. One more similar study is due to Shukla, Jain, and Ojha
(2010) on deadlock index analysis in operating system using data model
approach. Shukla, Jain, and Ojha (2010) conducted a study of scheduling
for deadlock state in operating system and in parallel Shukla & Thakur
(2010) presented an Index based internet traffic sharing analysis of users by
a Markov chain probability model.
Shukla and Singhai (2010) performed traffic analysis of message flow in
three cross-bar architecture in space division switches, and accordingly
Shukla, Jain, and Choudhary (2010) contributed an estimation of
ready queue processing time under usual group lottery scheduling (GLS) in a
multiprocessor environment. Shukla, Thakur, and Tiwari (2010) performed
stochastic modeling of Internet traffic management, whereas Tiwari,
Thakur, and Shukla (2010) discussed the cyber-crime analysis for multi-
dimensional effect.
Shukla, Singhai, and Thakur (2011) discussed a new imputation
method for missing attribute values in a data mining setup, whereas Shukla,
Gangele, Singhai, and Verma (2011) presented a new viewpoint on the
elasticity analysis of the web-browsing behavior of users. In a useful
contribution, Shukla, Gangele, Verma, and Singh (2011) discussed
elasticity and index analysis of the usual Internet traffic share problem.
Moreover, Shukla, Jain and Choudhary (2011) presented an analytical
approach to the prediction of ready queue processing time in a multiprocessor
environment using Lottery scheduling (ULS). Likewise, Shukla, Gangele,
Verma, and Thakur (2011) performed a study on index-based analysis of
users of Internet traffic sharing in computer networks.
Shukla, Verma, and Gangele (2012) contributed a least-squares-based
curve fitting approach to internet access traffic sharing in a
two-operator environment, and later Shukla, Verma, and Gangele (2012)
extended the curve fitting approximation to Internet traffic distribution
in a two-market environment. A similar contribution is due to Shukla,
Verma, Bhagwat, and Gangele (2012). Some other useful contributions are due
to Shukla and Jain (2012), Shukla and Jain (2012), and Jain and Shukla (2013).
In this paper, we estimate this bounded area A
using the trapezoidal method of numerical analysis.
4. TRAPEZOIDAL METHOD:
Let y = f(x) be a function to be integrated over the range a to b (a < b). Using
the functional relationship, we can take n+1 discrete values of x in the range
a to b and compute the corresponding values of y using y = f(x) as below:
x: x0, x1, x2, ..., xn
y: y0, y1, y2, ..., yn
where a = x0 < x1 < x2 < ... < xn = b and the spacing h = (x_{i+1} − x_i)
is the same for each interval.
I = ∫_a^b f(x) dx = ∫_a^b y dx ≈ (h/2) [ y0 + 2(y1 + y2 + y3 + ... + y_{n−1}) + yn ]   (4.1)

which is known as the Trapezoidal rule of integration used in numerical
analysis.
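As a sketch, the composite rule of Eq. (4.1) can be implemented as follows (the function name and the test integrand are our own; the paper applies the same rule to the traffic-share expressions below):

```python
def trapezoid(f, a, b, n):
    """Composite Trapezoidal rule, Eq. (4.1):
    I ~ (h/2) * [y0 + 2*(y1 + y2 + ... + y_{n-1}) + yn]."""
    h = (b - a) / n
    inner = sum(f(a + i * h) for i in range(1, n))  # y1 .. y_{n-1}
    return (h / 2) * (f(a) + 2 * inner + f(b))

# Example: integrate x^2 over [0, 1]; the exact value is 1/3.
approx = trapezoid(lambda x: x * x, 0.0, 1.0, 1000)
```

The bounded area A tabulated below is estimated the same way, by passing the traffic-share expression in the chosen blocking probability (with the remaining parameters held fixed) as the integrand.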
I = ∫_0^1 f(L1) dL1 = ∫_0^1 [(1 − p) + p L1 (1 − pA)] (1 − L2) / [1 − L1 L2 (1 − pA)] dL1   (4.1.1)
The data in the following tables are computed for equal intervals of L1
(where the bounded area is A).

Table 1. Traffic share values and bounded area A over varying p (pA and L2 fixed)

L1       p=0.1  p=0.2  p=0.3  p=0.4  p=0.5  p=0.6  p=0.7  p=0.8  p=0.9
0.1      0.348  0.412  0.475  0.538  0.601  0.665  0.728  0.791  0.854
0.2      0.345  0.399  0.453  0.507  0.562  0.616  0.670  0.724  0.778
0.3      0.342  0.386  0.431  0.475  0.520  0.565  0.609  0.654  0.699
0.4      0.338  0.373  0.407  0.442  0.477  0.512  0.546  0.581  0.616
0.5      0.334  0.359  0.383  0.407  0.432  0.456  0.481  0.505  0.529
0.6      0.330  0.344  0.358  0.371  0.385  0.398  0.412  0.426  0.439
0.7      0.326  0.329  0.331  0.333  0.336  0.338  0.340  0.343  0.345
0.8      0.322  0.313  0.303  0.294  0.284  0.275  0.265  0.256  0.246
0.9      0.318  0.296  0.274  0.252  0.230  0.208  0.186  0.165  0.143
Area(A)  0.302  0.327  0.353  0.378  0.403  0.428  0.454  0.479  0.504

Table 1 shows that for increasing p the area A increases, provided the other
parameters pA and L2 are fixed. The lowest value of the area is A = 0.302 at
p = 0.1, whereas the highest value is A = 0.504 at p = 0.9.
Table 2. Bounded area A over varying L2 (pA and p fixed)

L2       0.1    0.2    0.3    0.4    0.5    0.6    0.7    0.8    0.9
Area(A)  0.152  0.207  0.266  0.327  0.392  0.461  0.534  0.612  0.695

Table 2 shows that for increasing L2 the area A increases, provided the other
parameters pA and p are fixed. The lowest value of the area is A = 0.152 at
L2 = 0.1, whereas the highest value is A = 0.695 at L2 = 0.9.
Table 3. Bounded area A over varying pA (p and L2 fixed)

pA          0.1    0.2    0.3    0.4    0.5    0.6    0.7    0.8    0.9
(L1 = 0.9)  0.324  0.272  0.230  0.195  0.164  0.137  0.113  0.091  0.070
Area(A)     0.470  0.434  0.403  0.375  0.350  0.326  0.305  0.285  0.265

Table 3 shows that for increasing pA the area A decreases, provided the other
parameters p and L2 are fixed. The highest value of the area is A = 0.470 at
pA = 0.1, whereas the lowest value is A = 0.265 at pA = 0.9.
Fig. 1 (Variation over p)

Area Computation of Traffic Share by O1 (where p = 0.2, pA = 0.3)
Fig. 2 (Variation over L2)

Fig. 3 (Variation over pA)
Figs. 2 and 3 support the observations in Tables 2 and 3 on the variation of the
estimated bounded area A. Consider another form of integration as below:

I = ∫_0^1 f(L2) dL2 = ∫_0^1 [p + (1 − p)(1 − pA) L2] (1 − L1) / [1 − L1 L2 (1 − pA)] dL2   (4.1.2)

We have data in the following tables for equal intervals of L2 (taking A as the
bounded area):
Table 4. Traffic share values and bounded area A over varying p (pA and L1 fixed)

L2       p=0.1  p=0.2  p=0.3  p=0.4  p=0.5  p=0.6  p=0.7  p=0.8  p=0.9
0.4      0.604  0.557  0.510  0.463  0.416  0.369  0.322  0.276  0.229
0.5      0.514  0.474  0.434  0.394  0.354  0.314  0.274  0.235  0.195
0.6      0.420  0.388  0.355  0.322  0.290  0.257  0.224  0.192  0.159
0.7      0.322  0.297  0.272  0.247  0.222  0.197  0.172  0.147  0.122
0.8      0.220  0.203  0.185  0.168  0.151  0.134  0.117  0.100  0.083
0.9      0.112  0.103  0.095  0.086  0.077  0.068  0.060  0.051  0.042
Area(A)  0.491  0.454  0.415  0.377  0.339  0.301  0.262  0.225  0.186

Tables 4, 5, and 6 are made on varying values of L2 while the other
parameters are held constant. Table 4 shows that for increasing p the area A
decreases, provided the other parameters pA and L1 are fixed. The highest
value of the area is A = 0.491 at p = 0.1, whereas the lowest value is
A = 0.186 at p = 0.9.
Fig. 4 (Variation over p)
Table 5. Traffic share P1 and bounded area A over varying L1 (pA and p fixed)

L2       L1=0.1  L1=0.2  L1=0.3  L1=0.4  L1=0.5  L1=0.6  L1=0.7  L1=0.8  L1=0.9
0        0.814   0.828   0.842   0.856   0.870   0.884   0.898   0.912   0.926
0.1      0.736   0.752   0.769   0.785   0.802   0.819   0.836   0.854   0.871
0.2      0.657   0.675   0.694   0.712   0.731   0.751   0.771   0.791   0.812
0.3      0.578   0.597   0.616   0.636   0.657   0.678   0.700   0.723   0.747
0.4      0.498   0.517   0.536   0.557   0.578   0.601   0.624   0.648   0.674
0.5      0.417   0.435   0.454   0.474   0.495   0.518   0.541   0.567   0.593
0.6      0.335   0.351   0.369   0.388   0.407   0.429   0.452   0.476   0.503
0.7      0.252   0.266   0.281   0.297   0.315   0.333   0.354   0.377   0.401
0.8      0.169   0.179   0.190   0.203   0.216   0.231   0.247   0.265   0.286
0.9      0.080   0.090   0.097   0.103   0.111   0.120   0.129   0.140   0.153
Area(A)  0.409   0.423   0.438   0.453   0.469   0.486   0.504   0.523   0.543

Table 5 shows that for increasing L1 the area A increases, provided the other
parameters pA and p are fixed. The lowest value of the area is A = 0.409 at
L1 = 0.1, whereas the highest value is A = 0.543 at L1 = 0.9.
Fig. 5 (Variation over L1)
Table 6. Traffic share values and bounded area A over varying pA (p and L1 fixed)

L2       pA=0.1  pA=0.2  pA=0.3  pA=0.4  pA=0.5  pA=0.6  pA=0.7  pA=0.8  pA=0.9
0.3      0.527   0.500   0.475   0.453   0.432   0.413   0.396   0.379   0.364
0.4      0.468   0.441   0.416   0.394   0.375   0.357   0.340   0.326   0.312
0.5      0.405   0.378   0.354   0.334   0.315   0.299   0.285   0.272   0.260
0.6      0.337   0.311   0.290   0.271   0.255   0.241   0.228   0.218   0.208
0.7      0.263   0.241   0.222   0.206   0.193   0.182   0.172   0.163   0.156
0.8      0.183   0.165   0.151   0.140   0.130   0.122   0.115   0.109   0.104
0.9      0.095   0.085   0.077   0.071   0.065   0.061   0.057   0.054   0.052
Area(A)  0.378   0.357   0.339   0.322   0.307   0.293   0.280   0.268   0.257

Table 6 shows that for increasing pA the area A decreases, provided the other
parameters p and L1 are fixed. The highest value of the area is A = 0.378 at
pA = 0.1, whereas the lowest value is A = 0.257 at pA = 0.9.
Fig. 6 (Variation over pA)
5. DISCUSSION
The bounded area A is proportional to the initial choice p. We can express A
as a variable directly proportional to p when pA and L2 are kept constant (see
Table 1), although the rate of increment is slow: if p is doubled, the area A
increases by only 5 to 10 percent. When p is at its highest level (p = 0.9), the
bounded area A approaches nearly 50 percent.
The bounded area A is also proportional to the blocking probability L2 of the
network competitor. The rate of increment in the area A is higher than that
observed in the earlier table; at the highest level L2 = 0.9, the maximum
area is 0.69. When we look into the relation between the bounded area A and
the pA parameter, it is of an inversely proportional nature: a larger pA rapidly
reduces the bounded area A.
As per Table 4, where the variation of the blocking probability L2 relates
oppositely to A, it is observed that the bounded area A is inversely proportional
to L2. The decrement rate is nearly 5 percent for a unit increase in the initial
probability p. Table 5 depicts the bounded area variation with two different
blocking probabilities: an increment in L1 provides higher levels of bounded
area for operator O2. Similarly, as per Table 6, pA and the bounded area are
inversely proportional, with a 2 to 3 percent decrement rate.
6. CONCLUSION
It is observed that the estimated bounded area A contains a lot of information
about the traffic sharing phenomenon. The area A is directly proportional to
the initial preference p of the consumer. The bounded area is also directly
proportional to the blocking probability of the owner's network, whereas
when the blocking probability of the network competitor increases, the
bounded area reduces. This provides knowledge of the relationship between
the initial preference p and the network blocking probabilities L1 and L2.
REFERENCES
[1] Naldi, M. (2002): Internet access traffic sharing in a multi-user environment,
Computer Networks. Vol. 38, pp. 809-824.
[2] Shukla, D., Gadewar, S. and Pathak, R. K. (2007): A stochastic model for space
division switches in computer networks, International Journal of Applied
Mathematics and Computation, (Elsevier Journal), Vol. 184, Issue No. 02, pp.
235-269.
[3] Shukla, D., Jain, S. and Ojha, S. (2009): Analysis of thread scheduling with
multiple processors under a Markov chain model. Karpagam Journal of
Computer Science, Vol. 3, Issue 5, pp. 1220-1226.
[4] Shukla, D., Jain, S., Singhai, R. and Agrawal, R. K. (2009): A Markov chain
model for the analysis of round robin scheduling scheme. Journal of Advanced
Networking and Applications, Vol. 1, No.1, pp 1-7.
[5] Shukla, D. and Jain, S. (2009): Deadlock analysis of a class of multi-level queue
scheduling in operating system using Markov chain model, ACCST Research
Journal, Vol. VII, No. 2, pp. 97-105.
[6] Shukla, D., Thakur, S. and Deshmukh, A. K. (2009 ): State Probability Analysis of
Internet Traffic Sharing in Computer Network, Int. Jour. of Advanced
Networking and Applications, Vol. 1, Issue 2, pp 90-95.
[7] Shukla, D., Tiwari, Virendra, Thakur, S. and Deshmukh, A. K. (2009): Share loss
analysis of Internet traffic distribution in computer network, International Journal
of Computer Science and Security (IJCSS), Vol. 3, No. 4, pp. 414-427.
[8] Shukla, D., Tiwari, Virendra Kumar, Thakur, S. and Tiwari, Mohan (2009): A
comparison of Methods for Internet Traffic Sharing in Computer Network,
International Journal of Advanced Networking and Applications (IJANA), Vol. 1,
Issue 3, pp. 164-169.
[9] Shukla, D., Tiwari, V. and Kareem, Abdul (2009): All comparison analysis of
internet traffic sharing using Markov chain model in Computer Network, GESJ:
Georgian Electronic Scientific Journal: Computer Science and
Telecommunications, 6(23), pp. 108-115.
[10] Shukla, D., Singhai, Rahul, Gadewar, Surendra, Jain, Saurabh, Ojha, Shweta and
Mishra, P.P. (2010): A stochastic model approach for reaching probabilities
of message flow in space division switches, International Journal of Computer
Network (IJCN), Vol. 2, Issue 2, pp. 140-150.
[11] Shukla, D., Tiwari, Virendra, Parchur, A.K. and Thakur, Sanjay (2010): Effect of
dis-connectivity analysis for congestion control in Internet Traffic Sharing,
International Journal of the Computer, the Internet and Management, Vol. 18, No.
1, pp. 37-46.
[12] Shukla, D., Ojha, Shweta and Jain, Saurabh (2010): Performance evaluation of
general class of multilevel queue scheduling scheme, GESJ: Computer Science
and Telecommunication, 3(26), pp. 99-121.
[13] Shukla, D., Ojha, Shweta and Jain, Saurabh (2010): Data model approach and
Markov Chain based analysis of multi-level queue scheduling, Journal of
Applied Computer Science and Mathematics, 8(4), pp. 50-56.
[14] Shukla, D., Tiwari, Virendra, Thakur, Sanjay and Deshmukh, A.K. (2010): Two
call based analysis of Internet Traffic sharing, International Journal of Computer
and Engineering (IJCE), Vol. 1, Issue 1, pp. 14-24.
[15] Shukla, D., Jain, Anjali and Choudhary, Amita (2010): Estimation of ready
queue processing time under SL-Scheduling scheme in multiprocessors
environment, International Journal of Computer Science and Security (IJCSS),
Vol. 4, Issue 1, pp. 74-81.
[16] Shukla, D., Jain, Saurabh and Ojha, Shweta (2010): Effect of data model approach
for the analysis of multi-level queue scheduling, International Journal of
Advanced Networking and Application (IJANA), Vol. 02, No. 01, pp. 419-427.
[17] Shukla, D., Jain, Saurabh and Ojha, Shweta (2010): Deadlock index analysis of
multi-level queue scheduling in operating system using data model approach,
GESJ: International Journal of Computer Science and Telecommunication, Vol.
06, No. 29, pp. 93-110.
[18] Shukla, D., Jain, Saurabh and Ojha, Shweta (2010): Study of multi-level queue
scheduling for deadlock state in operating system, Madhya Bharti Journal, Vol.
56, pp. 141-157.
[19] Shukla, D. and Thakur, Sanjay (2010): Index based internet traffic sharing
analysis of users by a Markov chain probability model, Karpagam Journal of
Computer Science, Vol. 4, No. 3, pp. 1539-1545.
[20] Shukla, D. and Singhai, Rahul (2010): Traffic analysis of message flow in three
cross-bar architecture in space division switches. Karpagam Journal of Computer
Science, Vol. 4, No. 3, pp. 1560-1569.
[21] Shukla, D., Jain, Anjali and Choudhary, Amita (2010): Estimation of ready queue
processing time under usual group lottery scheduling (GLS) in multiprocessor
environment, International Journal of Computer Application, Vol. 8, No. 04, pp.
39-45.
IJCSBI.ORG
[22] Shukla, D., Thakur, Sanjay and Tiwari, Virendra (2010): Stochastic modeling of
internet traffic management, International Journal of the Computer the Internet
and Management, Vol. 18, No. 02, pp. 48-54.
[23] Tiwari, Virendra, Thakur, Sanjay and Shukla, D. (2010 ): Cyber crime analysis
for multi -dimensional effect in computer network. Journal of Global Research in
Computer Science, Vol. 1, No. 4. pp. 14-21.
[24] Thakur, Sanjay and Shukla, D. (2010 ): Iso-share analysis of internet traffic
sharing in presence of favoured dis- connectivity, GESJ: Computer science and
Telecommunication, 4(27), pp. 16-22.
[25] Shukla, D. and Singhai, Rahul (2011): Analysis of user web browsing behavior
using Markov chain model. International Journal of Advanced Networking And
Application (IJANA), Vol. 2, No. 5, pp. 824-830.
[26] Shukla, D., Singhai, Rahul and Thakur, N.S. (2011): A new imputation method
for missing attribute values in datamining. Journal of Applied Computer Science
and Mathematics, Vol. 10, Issue 05, pp. 14-19.
[27] Shukla, D. ,Gangele, Sharad, Singhai, R., Verma, Kapil (2011): Elasticity
analysis of web-browsing behavior of users. International Journal of Advanced
Networking And Application (IJANA), Vol. 3 No.3, pp. 1162-1168.
[28] Shukla, D., Gangele, Sharad, Verma, kapil and Singh, Pankaja (2011): Elasticity
of Internet Traffic distribution in Computer network in two-market environment,
Journal of Global Research in Computer Science (JGRCS), Vol. 2, No.6, pp. 6-
12.
[29] Shukla, D., Gangele , Sharad, Verma, Kapil and Singh, Pankaja (2011):
Elasticity and Index analysis of usual Internet traffic share problem, International
Journal of Advanced Research in Computer Science (IJARCS), Vol. 02, No. 04,
pp. 473- 478.
[30] Shukla, D., Jain, Anjali and Choudhary, Amita (2011): Prediction of Ready
queue processing time inmultiprocessor environment using Lottery scheduling
(ULS), Journal of Applied Computer Science and Mathematics, 11( 5), pp. 58-63.
[31] Shukla, D., Gangele, Sharad, Verma, Kapil and Thakur, Sanjay (2011) : A study
on index based analysis of user of Internet traffic sharing in computer network,
World Applied Programming (WAP), Vol. 1, no.04, pp. 278- 287.
[32] Shukla, D. Verma, Kapil and Gangele, Sharad (2011):Re-attempt connectivity to
Internet analysis of user by Markov Chain Model, International Journal of
research in Computer Application and Management (IJRCM), Vol. 1, Issue 9, pp
94- 99.
[33] Shukla, D., Gangele, Sharad, Verma, Kapil and Trivedi, Manish (2011): Elasticity
variation under rest state environment in case of Internet traffic sharing in
computer network, International Journal of Computer Technology and
Application (IJCTA), Vol. 2, Issue no. 6, pp. 2052-2060.
[34] Shukla, D., Gangele, Sharad, Verma, Kapil and Trivedi, Manish (2011):Two call
based cyber crime elasticity analysis of Internet traffic sharing in computer
network. International Journal of Computer Application (IJCA), Vol 2, Issue 1,
pp. 27-38.
[35] Shukla, D., Verma, Kapil and Gangele, Sharad, (2012 ): Iso-Failure in Web
Browsing using Markov chain model and curve fitting analysis. International
journal of modern engineering research (IJMER), Vol. 02, Issue 02, pp. 512- 517.
IJCSBI.ORG
[36] Shukla, D., Verma, Kapil and Gangele, Sharad, (2012): Least square based curve
fitting in internet access traffic sharing in a two operator environment.
International journal of computer application (IJCA), Vol. 43, No. 12, pp. 26-32.
[37] Shukla, D., Verma, Kapil and Gangele, Sharad, (2012): Curve Fitting
Approximation In Internet Traffic Distribution In Computer Network In Two
Market Environment, International Journal of Computer Science and Information
Security (IJCSIS), Vol. 10, Issue 05, pp. 71-78.
[38] Shukla, D., Verma, Kapil and Gangele, Sharad, (2012): Least Square Fitting
Applications under Rest State Environment in Internet Traffic Sharing in
Computer Network, International Journal of Computer Science and
Telecommunications (IJCST), Vol.3, Issue 05, pp.43-51.
[39] Shukla, D., Verma, Kapil, Dubey, Jayant and Gangele, Sharad (2012):Cyber
Crime Based Curve Fitting Analysis in Internet Traffic Sharing in Computer
Network. International journal of computer application (IJCA), Vol.46 (22), pp.
41-51.
[40] Shukla, D., Verma, Kapil, Bhagwat, Shree and Gangele, Sharad (2012): Curve
Fitting Analysis of Internet Traffic Sharing Management in Computer Network
under Cyber Crime. International journal of computer application (IJCA), Vol.
47, No. 24, pp. 36-43.
[41] Shukla, D., Jain, Anjali (2012): Estimation of Ready Queue Processing Time
using Efficient Factor Type Estimator (E- F- T) in Multiprocessor Environment,
International journal of computer application (IJCA), Vol. 48, No. 16, pp. 20-27.
[42] Shukla, D., Jain, Anjali (2012): Analysis of Ready Queue Processing Time Under
PPS-LS and SRS-LS Scheme in Multiprocessing Environment, GESJ: Computer
Science and Telecommunication, Vol. 1, Issue 33, pp. 54-61.
[43] Jain, Anjali and Shukla, D. (2013): Estimation of ready queue processing time
using Factor-type (F-T) estimator in a multiprocessor environment,
COMPUSOFT, 2(8), pp. 256-260.
IJCSBI.ORG
Isha Shah
Information & Decision Sciences
University of Illinois at Chicago
Chicago, USA
ABSTRACT
Student projects in Information Systems courses are typically focused on advancing
technical skills or practical experience in application to business, industry, and perhaps
conventional service organizations. This paper considers an alternative focus on the
potential of leveraging commonplace information technology for sustainable development
and social change. A specific example of designing information services for the
underserved using SMS over mobile phone networks (without the Internet) is demonstrated
with a particular case of helping smallholder farmers prosper in modern
supply chains. Such a project can be instrumental in motivating students who
are already interested, as well as in raising awareness among the uninitiated.
Keywords
Short Message Service, Supply Chain Governance, Sustainable Development, Social
Change, Information System Course Projects
1. INTRODUCTION
In the latest revision of the curriculum guidelines for undergraduate degree
programs in Information Systems [21] sponsored by the Association for
Computing Machinery (ACM) and the Association for Information Systems
(AIS), it is recognized that: "It is essential for the health of the IS
[Information Systems] discipline to actively recruit IS students. [An elective
course in IS Innovation and New Technologies] will focus on topics
designed to excite students about the IS discipline. Specifically, this course
will look at how IS is used in the world around the student and how IS can
be used to create powerful applications. This is done by delivering topics
that will gain traction with the target audience. In turn, by exposing students
to a variety of business views of IS the students would better understand the
possibilities within the field."
Since the degree programs are designed primarily for career placement in
the business world, it is understandable that course projects are typically
focused on advancing technical skills or practical experience in application
to business, industry, and perhaps conventional service organizations.
Example topics for IS term projects with a business orientation may include:
Job Application Systems for company recruiters and job search agencies,
Ticket Reservation Systems for Entertainment Industries,
Reservation Systems for Travel Industries: airlines, hotels, car rentals,
Appointment Systems for Health Service providers,
Online shopping and auction portals,
Delivery Tracking Systems for Transportation and Logistics industries,
Marketing and Sales Systems for Real Estate Industries.
a broader spectrum of prospective IS recruits. In Section 2, the global issue
of sustainable food supply and the role and predicament of smallholder
farmers worldwide are discussed. Section 3 surveys how Short Message
Service (SMS), which is the most pervasive telecommunication medium, is
already being used in both commercial and government supported
information services to assist farmers in underdeveloped and developing
regions. A prototype student project to design and implement such an SMS-
based information service is reported in Section 4. Discussion and future
directions are given in Section 5.
quarter million in 16 years) among farmers in India [19]. Less dire, but
equally critical situations can be found worldwide. For example,
governments in the Caribbean, being limited by colonization and
geographic location, often sign trade agreements with surrounding and
relevant trading countries which may not take into full consideration
sustainability issues concerning their smallholder farmers. To a large extent,
the farmers do not have access to reliable information that enables them to
make wise decisions in crop selection, growing operations, and marketing
[17]. Similarly, smallholder farmers in rural Cambodia must often settle for
incomplete, out-dated, or biased information when making business
decisions including what, when, and how to produce crops and livestock
and when and where to sell their agricultural goods [18].
Rank  Country          Mobile subscriptions  Population     Per 100 people  Last updated
02    India            904,480,000           1,220,800,359  74.96           October 2013
03    United States    327,577,529           310,866,000    103.9           June 2013
04    Brazil           268,440,423           192,379,287    135.4           August 2013
05    Russia           256,116,000           142,905,200    155.5           July 2013
06    Indonesia        236,800,000           237,556,363    99.68           September 2013
07    Pakistan         130,583,076           188,854,781    69.18           December 2013
08    Japan            121,246,700           127,628,095    95.1            June 2013
09    Nigeria          114,000,000           165,200,000    69              May 2013
10    Bangladesh       110,675,000           165,039,000    73.8            September 2013
11    Germany          107,000,000           81,882,342     130.1           2013
12    Philippines      106,987,098           94,013,200     113.8           October 2013
13    Iran             96,165,000            73,973,000     130             February 2013
14    Mexico           92,900,000            112,322,757    82.7            Dec. 2011
15    Italy            88,580,000            60,090,400     147.4           Dec. 2013
16    United Kingdom   75,750,000            61,612,300     122.9           Dec. 2013
17    Vietnam          72,300,000            90,549,390     79              October 2013
18    France           72,180,000            63,573,842     114.2           Dec. 2013
19    Egypt            92,640,000            82,120,000     112.81          August 2013
20    Thailand         69,000,000            65,001,021     105             2013
has introduced, since 2010, an SMS-based feature to cater to the needs of the
farmers to its full potential. Information includes availability of quality
planting material from recognized nurseries or authentic farms; weather
information for specific locations; availability of quality livestock breeds
from recognized and authentic sources; monthly crop management
advisories; and soil test results. The latter will help the farmers in choosing
the proper fertilizer for each crop. The farmers submit soil samples to the
district level soil testing laboratory. Once the analysis is complete, the
results along with fertilizer recommendations are made available to the
farmers on their mobile phone, thus avoiding considerable delay through
conventional means. Farmers can access the service by using the free SMS
package supplied by various service providers, and pay only normal SMS
charges for the messages.
A similar pilot project, which began free trials in Maharashtra State in
Western India in April 2007, has been launched by Thomson Reuters on
a fully commercial basis [4] as Reuters Market Light. By 2013, it operated
across 13 states in India, covered over 300 crops and varieties and 1,300
markets across these states, and had over a million subscribers. Initial
surveys found most users attesting to the benefits of such a service,
particularly improvement in productivity [16]. However, a randomized
controlled trial involving 933 farmers in 100 villages of central Maharashtra
found no statistically significant effect on the prices received by farmers
using RML [7]. This somewhat disappointing observation is explained by the
fact that even with better pricing information, farmers typically have no
option but to sell to the nearest market, given the existing infrastructure,
especially for transportation. This shows that, in assisting farmers,
information dissemination is just one part of a complex challenge.
[3, 6, 13]. In particular, Telenor Pakistan partnered with the Livestock and
Dairy Development Department of Punjab to provide information on
livestock best practices to farmers in the province [14]. Google, in
partnership with MTN Uganda, has launched 'Google SMS', a set of
services that allows users in the country to access information on agriculture
tips and weather conditions [23].
For the server, a GSM 3G mobile broadband modem with a USB 2.0
connection is used. Most widely available and affordable models support
2G network protocols (GSM/GPRS/EDGE) in the 850, 900, 1800, and
1900 MHz frequency bands.
For software requirements, there are two major components: server and
gateway. As the server, we use the well-known and widely used XAMPP
package. This is a free and open source cross-platform web server solution
stack package, comprising the Apache HTTP Server from the Apache
Software Foundation (apache.org), MySQL database (Community Edition
from mysql.com), and interpreters for API (Application Programming
Interface) scripts written in the PHP and Perl programming languages
maintained by php.net and perl.org, respectively. There are various versions
of XAMPP available through different channels of distribution (e.g.
apachefriends.org).
To provide an interface between the server and the GSM modem, a SMS
gateway software program is used. The choice of this tool largely defines
the technical tasks in the SMS project. In searching for a suitable program,
it is important to distinguish offerings, all under the guise of SMS gateways,
that provide a platform or service for actually channeling messages from
those that are standalone software programs for an SMS interface on one's
own system. We need the latter, and they fall into two categories: freeware
(most often open source) and commercial products. Freeware is obviously
cost effective, but may lack in ease of use and technical support.
Commercial products range widely in capability and costs. For IS student
projects, one can limit the choice to lower-end options, especially those with
30 to 60-day free trial offers. We demonstrate our prototype SMS project
with one program from each category. SMS Enabler (smsenabler.com) is a
commercial software program that facilitates the automatic reception and
response to incoming SMS messages using a GSM or 3G mobile phone or
modem connected to a computer.
If "Rice" is an instance of CROP with a PRICE of 25, then the SMS "Rice"
sent to the system will result in a reply of "Rice 25". From the open-source
freeware category, we single out and use a particularly significant entry not
solely for its technical merits, but for its mission and impact on using IS to
promote positive social change. FrontlineSMS (frontlinesms.com), first
developed by Ken Banks in 2004, uses mobile technology to help
developing countries tackle key health, social, environmental and
development challenges. It is an SMS gateway application intended to
empower grassroots NGOs with instantaneous two-way communication on
a large scale. Hence it is a perfect fit in our context of assisting smallholder
farmers. In addition, there is a growing community of like-minded users to
share experience with diverse projects and help with technical issues. It can
provide further motivation and inspiration for our IS students.
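The lookup-and-reply logic just illustrated can be sketched in code. The prototype itself uses PHP scripts against MySQL under XAMPP; the following is only a minimal Python sketch of the same idea using an in-memory SQLite table, and the table, column, and function names (crop, name, price, reply_for) are illustrative, not taken from the actual project.

```python
import sqlite3

def build_db():
    # Illustrative stand-in for the project's MySQL CROP table.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE crop (name TEXT PRIMARY KEY, price INTEGER)")
    conn.executemany("INSERT INTO crop VALUES (?, ?)",
                     [("Rice", 25), ("Wheat", 18)])
    return conn

def reply_for(conn, message_text):
    # The gateway hands the incoming SMS text to this function;
    # we look the crop up and format the outgoing reply.
    row = conn.execute(
        "SELECT name, price FROM crop WHERE name = ?",
        (message_text.strip(),),
    ).fetchone()
    return f"{row[0]} {row[1]}" if row else "Unknown crop"

conn = build_db()
print(reply_for(conn, "Rice"))  # prints "Rice 25"
```

The parameterized query keeps the sketch safe against malformed incoming messages, which matters since the message body comes from arbitrary end users.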
Starting with basic but not necessarily expert skills in HTTP servers, API
scripts, and SQL for databases, our project is a reasonable challenge for IS
students at both the advanced undergraduate and master's levels. We have
outlined a skeletal prototype of an SMS information service. Depending on
time and skill, additional assignments, such as a more elaborate design of
the database as well as of the user interface, may be considered. It should
be reiterated that the key
feature of this experiment is to demonstrate an interactive SMS information
service without requiring end users to have access to the Internet. It is also
worth remarking that another open source SMS gateway API is SMSLib
(smslib.org) which is a Java or .NET developer library for sending and
receiving SMS messages via a compatible GSM modem or GSM phone. It
can be of interest to IS students with advanced technical skills to attempt
more sophisticated implementation of our concepts beyond prototyping for
demonstration purposes.
Table 2. Opportunities for Mobile-enabled Solutions for Food and Agriculture [15]

Improving access to financial services (mobile payment system; micro-insurance
system; micro-lending platform): increasing access and affordability of
financial services tailored for agricultural purposes.

... information where traditional methods of communication are limited.

Improving data visibility for supply chain efficiency (smart logistics;
traceability and tracking system; mobile management of supplier networks):
optimizing supply chain management across the sector, and delivering
efficiency improvements for transportation logistics and distribution networks.

Enhancing access to markets (agricultural trading platform; agricultural
tendering platform; agricultural bartering platform): enhancing the link
between commodity exchanges, traders, buyers and sellers of agricultural
produce.
6. REFERENCES
[1] Allison, M. Seattle-based nonprofit plants seeds of hope in India, The Seattle Times,
(March 16 2013). http://seattletimes.com/html/specialreportspages/2020565872_
landesa.html (retrieved January 15, 2014).
[2] Baldauf, S. iCow: Kenyans now manage their herds via mobile phone. The Christian
Science Monitor, (November 11, 2011), http://www.csmonitor.com/World/Africa/
2011/1111/iCow-Kenyans-now-manage-their-herds-via-mobile-phone.
(retrieved January 15, 2014).
[3] Banks, K. Farming out agricultural advice through radio and SMS. National
Geographic Newswatch (April 26, 2011). http://newswatch.nationalgeographic.com/
2011/04/26/%E2%80%9Cfarming-out%E2%80%9D-agricultural-advice-through-
radio-and-sms/ (retrieved January 15, 2014).
[4] Business Wire India. Reuters launches mobile information service to Indian farming
community, (October 1, 2007). http://www.businesswireindia.com/pressrelease.asp?
b2mid=13897 (retrieved January 15, 2014).
IJCSBI.ORG
[5] Cecchini, S. and Scott, C. Can information and communications technology
applications contribute to poverty reduction? Lessons from rural India. Information
Technology for Development 10 (2003), 73-84.
[6] Donner, J. Mobile-based livelihood services in Africa: pilots and early deployments, in
Fernández-Ardèvol, M. and Ros Híjar, A. (eds), Communication Technologies in
Latin America and Africa: A multidisciplinary perspective, Barcelona, IN3 (2009).
Available at http://in3.uoc.edu/web/IN3/communication-technologies-in-latin-america-
and-africa/ (retrieved January 15, 2014).
[7] Fafchamps, M. and Minten, B. Impact of SMS-based agricultural information on
Indian farmers, World Bank Economic Review, Vol. 26(3) (2012), 383-414.
[8] FAO/WFP The State of Food Insecurity in the World: Addressing Food Security in
Protracted Crises, Food and Agriculture Organization of the United Nations, (Rome,
2010) Available at:http://www.fao.org/docrep/013/i1683e/i1683e.pdf (retrieved
January 15, 2014).
[9] FrontlineSMS, Case Study: FrontlineSMS at Plan International (2011). Available at
http://www.frontlinesms.com/wpcontent/uploads/2011/10/FrontlineSMS_Plan_2011_2
.pdf (retrieved January 15, 2014).
[10] Gakuru, M., Winters, K. and Stepman, F. Inventory of innovative farmer advisory
services using ICTs. Report of The Forum for Agricultural Research in Africa (FARA).
(2009). Available at http://www.fara-africa.org/media/uploads/File/NSF2/RAILS/
Innovative_Farmer_Advisory_Systems.pdf. (retrieved January 15, 2014).
[11] Glendenning, C., Babu, S. and Asenso-Okyere, K. Review of Agricultural Extension in
India--Are Farmers' Information Needs Being Met? International Food Policy
Research Institute Discussion Paper 01048, (2010). Available at: http://www.ifpri.org/
sites/default/files/publications/ifpridp01048.pdf (retrieved January 15, 2014).
[12] Islam, M.S. and Grönlund, Å. An agricultural market information service (AMIS) in
Bangladesh: evaluating a mobile phone based e-service in a rural context. Information
Development Vol. 26(4) (2010), 289-302.
[13] Jaiswal, P. SMS Based Information Systems. Masters Thesis, School of Computing,
University of Eastern Finland (2011). Available at http://cs.joensuu.fi/sipu/2011_MSc_
Jaiswal_Pankaj.pdf (retrieved January 15, 2014).
[14] Khan M. Telenor to provide info services to farmers in Punjab. Pro Pakistani,
(Friday, June 8, 2012). http://propakistani.pk/2012/06/08/telenor-to-provide-info-
services-to-farmers-in-punjab/ (retrieved January 15, 2014).
[15] Kirk, M., et al. Connected Agriculture: The role of mobile in driving efficiency and
sustainability in the food and agriculture value chain. Vodafone and Accenture report
(2011). Available at http://www.vodafone.com/content/dam/vodafone/about/sustain
ability/2011/pdf/connected_agriculture.pdf (retrieved January 15, 2014).
[16] Mittal, S. and Tripathi, G. Role of mobile phone technology in improving small farm
productivity, Agricultural Economics Research Review Vol. 22 (2009), 451-459.
[17] Renwick, S. Retrieved trends in agricultural information services for farmers in
Trinidad and Tobago/Caribbean, Proceedings of World Library and Information
Congress: 76th IFLA General Conference and Assembly (Gothenburg, Sweden,
August 10-15, 2010). Available at http://conference.ifla.org/past/ifla76/85-renwick-
en.pdf (retrieved January 15, 2014).
[18] Roberts, M. and Kernick, H. Feasibility Study for SMS-enabled Collection and
Delivery of Rural Market Information, GTZ Private Sector Promotion Program (2006).
[19] Sainath, P. In 16 years, farm suicides cross a quarter million, The Hindu, Mumbai,
(October 29, 2011). http://www.thehindu.com/opinion/columns/sainath/article2577635.
ece (retrieved January 15, 2014).
[20] Sharma, R. (ed.) Report of the APO Seminar on Strengthening Agricultural Support
Services for Small Farmers (Japan, July 4-11, 2001). Asian Productivity Organization
(SEM-28-01). Available at http://www.apo-tokyo.org/publications/files/pjrep-sem-28-
01.pdf (retrieved January 15, 2014).
[21] Topi H., J. Valacich, R. Wright, K. Kaiser, J. Nunamaker, Jr., J. Sipior and G. de
Vreede. Curriculum guidelines for undergraduate degree programs in Information
Systems, Association for Computing Machinery (ACM), Association for Information
Systems (AIS), (2010).
[22] Torero, M. A framework for linking small farmers to markets, Paper presented at the
International Fund for Agricultural Development (IFAD) Conference on New
Directions for Smallholder Agriculture, (Rome, Jan 24-25, 2011). Available at
http://www.ifad.org/events/agriculture/doc/papers/torero.pdf (retrieved January 15,
2014).
[23] Verclas, K. Google launches health and trading SMS info services in Uganda, (2009).
http://www.africa-uganda-business-travel-guide.com/google-launches-health-and
trading-sms-info-services-in-uganda.html (retrieved January 15, 2014).
[24] Wegner, L. and Zwart, G. Who Will Feed the World? The production challenge,
Oxfam Research Reports, (April 2011). Available at http://www.oxfam.org/
en/grow/policy/ who-will-feed-world (retrieved January 15, 2014).
Dr. G. Mahadevan
Principal, Annai College of Engineering, T.N.
Guide, Dept. of Computer Science& Engineering
Visvesvaraya Technological University, Belgaum, Karnataka, India.
ABSTRACT
In recent years, new methods have been developed at a rapid pace. Some of the
advancements in continuous optimization methods have focused on comparing and
contrasting the nature of Evolutionary Algorithms and gradient-based methods. As a
matter of fact, an Evolutionary Algorithm is one of the best methods available for
derivative-free optimization on higher-dimensional problems. This approach will surely
make a difference to the existing system, whereas the measuring-metrics software
platform varies in each application. Our approach applies to software architectures
modelled with the Palladio Component Model. It supports quantitative performance,
reliability, and cost prediction and can be extended to other quantitative quality criteria of
software architectures. Adding a new component model in between makes each system
more effective in measuring and more easily suitable for any business application. In the
software life cycle, the two key activities involved are requirements engineering and
software architecting; researchers are emphasizing the mapping and transformation of
requirements to software architecture, but the lack of an effective solution is still
prevalent.
Keywords
Evolutionary Algorithm, PCM, Software Architecture, MVC.
1. INTRODUCTION
The Palladio Component Model (PCM) acts as a meta-model for the
designed application, with which we can measure the performance, cost, and
reliability of a system. PCM is a high-level design structure through which
the software process can interface with the framework. To measure
performance, reliability, and cost, we can use MATLAB, one of the tools we
can apply to the business application and its measurement metrics. PCM can
be used in many ways.
Prediction methods for the performance and reliability of general software
systems are still limited and rarely used in industry. Component developers
produce components that are assembled by software architects and
deployed by system allocators. The diverse information needed for the
prediction of extra-functional properties is thus spread among these
developer roles. PCM can also be used with different data sets, where a key
behavioral property of the data is data integrity. The behavior of each data
element can be put into a sequence diagram and traced. When features are
dependent, different methods are used, among them parametric
dependencies, branch conditions, loop iterations, and parametric resource
demands. Some properties of patterns for software architecture are:
1. The existing pattern documents are well structured and designed, so that
it is easy to adopt them in practice for a given business application.
2. Each pattern is suitable for an application, whether complex or simple; if
needed, we can also create new patterns by combining heterogeneous
software architectures.
3. Patterns are the best methodology to apply to any business application
for measuring the metrics of a system.
The following listing helps to classify the Palladio Component Model
(PCM), which underlies the Palladio approach, and assists in preparing a
taxonomy or in identifying whether specific features are supported by the
PCM.
Supported quality dimensions
o Performance
o Reliability
o Costs
o Maintainability
Requirements engineering and software architecting are two important
activities in software life cycle. Requirements engineering is concerned with
purposes and responsibilities of a system. It aims for a correct, consistent
and unambiguous requirements specification, which will become the
baseline for subsequent development, validation and system evolution. In
contrast, software architecting is concerned with the shape of the solution
space. It aims at making the architecture of a system explicit and provides a
blueprint for the succeeding development activities. Quite different
perspectives clearly exist between user (or customer) requirements and
software architecture (SA).
In this method, the concept of using a feature model for requirements
engineering was introduced. As a main activity in domain modeling, feature
analysis is intended to capture end-users' (and customers') understanding
of the general capabilities of applications in a domain, bridging
requirements engineering and software architecting.
Some of the IEEE standards are:
1. To solve a problem, a constraint is needed so that achieving the
objective of the problem is feasible.
2. The constraint or condition must possess the quality of interaction
with the system component model, to satisfy the standard
specification or other formally imposed documents.
3. A document must reflect the above-mentioned stages; then we
can ensure that the architecture has achieved the objective of the
problem.
SA has become an important research field for the software engineering
community. There is a consensus that for any large software system in a
critical situation, high-level computational elements are needed in the
design, because critical situations lead to complexity, while other models
follow the process flow in the modified or newly added model-controller
Palladio components [6] [7].
1.1 Method
The transformation rule is applied to the UML, where each state of the
system is ready to accept the query provided by the metrics system. The
system is also available as Java JSP pages, but the retrieval operation from
each page will continuously affect the system. The process flow diagram is
given in the Palladio component model [11]. The context for the method
consists of a requirements specification that is taken as input to the method,
and an architectural design generated as output. User interfaces are prone to
change requests. An MVP is a basic platform for extending the performance
of a designed system, here called the M-ACCURATE approach. This
approach is newly introduced into the problem context while processing the
sequence diagram. In a sequence diagram, the behaviour of each model can
be measured and weaknesses of the data identified; to overcome them, we
insert one more new model at the initial stage, called the ACCURATE
model. In this way we can improve the performance, cost, and reliability of
a system compared to the existing methodology.
1.2 M-Accurate Approach
In this section we present the M-ACCURATE approach. The name
M-ACCURATE is an acronym for Model A Configurable Code generator
Unified with Requirements Analysis Techniques. As it implies,
requirements play an important role in both PIM modelling and the
platform decision. The key idea is to capture functional and non-functional
requirements in separate artifacts, a PIM and a platform configuration
respectively, and to join them downstream in the development.
Implementation constructs: UML models are meant for both logical
analysis and physical implementation, and certain constructs represent
implementation items. A component model is a basic or initial level;
replacement can be done whenever a demand or constraint is not fulfilled.
In other words, a model can be replaced to improve the performance of a
system. It is intended to be easily substitutable for other components that
meet the same specification. A node is a run-time computing resource that
defines a location; it can hold components and objects. The deployment
view describes the configuration of nodes in a running system and the
arrangement of components and objects on them, including possible
migration of contents among nodes.
In model organization, computers can deal with large flat models, but
humans cannot. In a large system, the modelling information must be
divided into coherent pieces so that teams can work on different parts
concurrently. Even on a smaller system, human understanding requires the
organization of model content into packages of modest size. Packages are
general-purpose hierarchical organizational units of UML models. They can
be used for storage, access control, configuration management, and
constructing libraries that contain reusable model fragments. A dependency
between packages summarizes the dependencies among the package
contents. A dependency among packages can be imposed by the overall
system architecture. Then the contents of the packages must conform to the
package dependencies and to the imposed system architecture.
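UML packages map loosely onto C++ namespaces, which gives a concrete reading of a package dependency. In this small sketch (the `core`/`ui` names and the 20% tax rate are illustrative assumptions), the `ui` package depends on `core` because its code refers to names in `core`, never the reverse:

```cpp
// "core" package: domain types and operations, no dependencies outward.
namespace core {
    struct Order { double amount; };
    double tax(const Order& o) { return o.amount * 0.2; }  // assumed 20% rate
}

// "ui" package: the ui -> core dependency is visible as qualified uses
// of core's names; core never mentions ui.
namespace ui {
    double displayTotal(const core::Order& o) {
        return o.amount + core::tax(o);
    }
}
```

An imposed architecture rule such as "the presentation layer may depend on the domain layer but not vice versa" then becomes mechanically checkable: `core` must compile without `ui`.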
The model view controller relationship is the best architectural pattern
for the designed business application, but the behaviour of a model in
the OMT diagram of an application is data dependent. To overcome this, we
insert one additional model, called the M-ACCURATE model, between the
model, view and controller. As figure 1 shows the interface between the
model and the view, the controller sits at the back end of the system,
because the controller coordinates all the models.
1.2.1 Model
The model component encapsulates core data and functionality. The model
is independent of any specific output representation or input behaviour.
When figure 3 is designed, six models are defined: model, view,
controller, concrete model, concrete view and concrete controller. Each
model inherits a sub-model called CONCRETE, which can be taken as the
presenter in figure 1. The technique behind this is that each sub-model
is treated as the ACCURATE model, so that the performance, cost and
reliability of the system can be improved.
1.2.2 View
View components display information to the user. A view obtains its data
from the model. There can be multiple views of the model, and different
views present the model's information in different ways. Each view
defines an update procedure that is activated by the change propagation
mechanism. The view model inherits the CONCRETE view model in figure 3,
which updates the data information between the model and the controller.
1.2.3 Presenter
The presenter is the bottom-level component, represented by the pie
charts in the application result; here the result will be the simulated
output. The presentation model fulfils two different roles: composition
and coordination. The presenter model defines a structure for interactive
systems as a hierarchy of cooperating models such as model and view, but
in this approach two new sub-models are also added to overcome
communication failures. Communication between the human and the computer
is always taken care of by the presenter model.
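The two presenter roles can be sketched minimally. In this illustrative sketch (the `PModel`, `PView` and `Presenter` names are assumptions), composition means the presenter wires a model and a view together, and coordination means user input is routed through the presenter to the model:

```cpp
struct PModel { int value = 0; };                            // core data
struct PView  { int shown(const PModel& m) const { return m.value; } };

class Presenter {
public:
    Presenter(PModel* m, PView* v) : model(m), view(v) {}    // composition
    void onUserInput(int v) { model->value = v; }            // coordination:
                                                             // route input to the model
    int  render() const { return view->shown(*model); }      // view reads the model
private:
    PModel* model;
    PView*  view;
};
```

The human-computer communication described above thus passes through one object, which is what lets the presenter intercept and handle communication failures.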
2. SEQUENCE DIAGRAM
The following scenarios depict the dynamic behaviour of MVC. For
simplicity only one view-controller pair is shown in the diagram.
1. The model instance is created, which then initializes its internal
data structures.
2. A view object is created. It takes a reference to the model as a
parameter for its initialization.
3. The view subscribes to the change propagation mechanism of the model
by calling the attach procedure. This mechanism is the presentation
model, which inherits the CONCRETE model, view and controller.
4. The concrete model continues initialization by creating its
controller, passing references to both the model and the view.
5. After each model is initialized, the application begins to process
events.
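The initialization sequence above can be condensed into a runnable sketch with one model and one view. The `CounterView`, `setValue` and `mvc_demo` names are illustrative assumptions; the attach/notify/update protocol follows the scenario steps:

```cpp
#include <algorithm>
#include <vector>

class Observer {
public:
    virtual void update() = 0;
    virtual ~Observer() {}
};

class Model {
public:
    void attach(Observer* o) { registry.push_back(o); }   // step 3: subscribe
    void detach(Observer* o) {
        registry.erase(std::remove(registry.begin(), registry.end(), o),
                       registry.end());
    }
    void setValue(int v) { value = v; notify(); }         // change propagation
    int  getValue() const { return value; }
private:
    void notify() { for (Observer* o : registry) o->update(); }
    std::vector<Observer*> registry;
    int value = 0;                                        // step 1: internal data
};

class CounterView : public Observer {
public:
    explicit CounterView(Model* m) : model(m) { model->attach(this); } // steps 2-3
    ~CounterView() { model->detach(this); }
    void update() override { shown = model->getValue(); } // step 5: view refreshes
    int shown = -1;
private:
    Model* model;
};

int mvc_demo() {
    Model m;               // step 1: model created
    CounterView v(&m);     // steps 2-3: view created and attached
    m.setValue(42);        // steps 4-5: an event changes the model
    return v.shown;        // the view observed the change
}
```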
[Figure: sequence diagram for the scenario above, showing the interactions
among the main program, the model and concrete model, the view and
concrete view, and the controller and concrete controller.]
4. SIMULATION RESULT
The result of a simulation run contains the response time distribution of
each executed service. The simulation resolves resource contention at the
service centers using either a FIFO or a processor sharing scheduling
policy. Further scheduling policies, including more realistic schedulers
of today's operating systems and multi-core handling, will be implemented
in the future. The simulation can therefore predict the performance of
more complex scenarios than the analytical solver. Finally, the real
system is implemented using the generated code skeletons.
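The FIFO policy mentioned above can be sketched for a single service center: each job starts when it arrives or when the server becomes free, whichever is later, and its response time is waiting plus service. The function and parameter names here are illustrative assumptions:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// arrival[i] and service[i]: the i-th job's arrival time and service demand,
// with arrivals in non-decreasing order. Returns per-job response times.
std::vector<double> fifoResponseTimes(const std::vector<double>& arrival,
                                      const std::vector<double>& service) {
    std::vector<double> response;
    double serverFree = 0.0;
    for (std::size_t i = 0; i < arrival.size(); ++i) {
        double start  = std::max(arrival[i], serverFree);  // FIFO: wait for server
        double finish = start + service[i];
        response.push_back(finish - arrival[i]);           // waiting + service
        serverFree = finish;
    }
    return response;
}
```

For example, two jobs arriving at times 0 and 1 that each need 2 time units see response times 2 and 3: the second job waits 1 unit for the first to finish. A processor sharing policy would instead divide the server among all jobs present, which requires tracking concurrent jobs rather than a single `serverFree` time.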
// Code segment highlighted here: MVC with Observer-based change propagation.

// Model: holds the core data and the change propagation registry.
class Model {
public:
    void attach(Observer *o) { registry.add(o); }
    void detach(Observer *o) { registry.remove(o); }
protected:
    virtual void notify();          // calls update() on each registered observer
private:
    Set<Observer*> registry;
};

// View: observes the model and redraws itself on update.
class View : public Observer {
public:
    View(Model *m) : M1model(m), M1controller(0)
    { M1model->attach(this); }
    virtual ~View() { M1model->detach(this); }
    virtual void update() { this->draw(); }
    virtual void initialize();
    virtual void draw();
protected:
    Model *M1model;
    Controller *M1controller;
};

// Controller: observes the model and handles input events.
class Controller : public Observer {
public:
    Controller(View *v) : M1view(v) {
        M1model = M1view->getModel();
        M1model->attach(this);
    }
    virtual ~Controller() { M1model->detach(this); }
    virtual void handleEvent(Event *) { }
    virtual void update() { }
protected:
    Model *M1model;
    View  *M1view;
};

// Main program: create the model and its views, then start event processing.
int main() {
    Model m;
    TableView    *v1 = new TableView(&m);
    v1->initialize();
    BarChartView *v2 = new BarChartView(&m);
    v2->initialize();
    // ... now start event processing
}
Efficiency: since the evaluation of each candidate solution, mainly the
performance evaluation, takes several seconds, the overall approach is
considerably time consuming. Software architects should therefore run it
in parallel to other activities or overnight. Distributing the analyses
over a cluster of workstations could lead to significant improvements. It
could also be possible to split the optimization problem into several
independent parts that are solved separately and thus more quickly.
Limited degrees of freedom: currently, design options that offer new
degrees of freedom are not yet considered. For example, adding a new
server results in further options for configuring that server. Such
design options could be integrated by formulating the genotype as a tree
structure rather than a vector.
Simplistic cost model: the cost model used here is simplistic, as we only
wanted to demonstrate the approach rather than devise a new cost
estimation technique. More sophisticated cost estimation techniques
could, however, be substituted.
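A cost model of the simplistic kind described above might be sketched as follows, assuming (as an illustration only, since the paper does not specify the model) that the total cost of a candidate architecture is the sum of its per-component costs plus a fixed cost per allocated server:

```cpp
#include <vector>

// One candidate produced by the optimization; the structure and field
// names are assumptions for illustration.
struct CandidateArchitecture {
    std::vector<double> componentCosts;  // one entry per deployed component
    int serverCount;                     // servers allocated by this candidate
};

double totalCost(const CandidateArchitecture& c, double costPerServer) {
    double sum = 0.0;
    for (double cc : c.componentCosts) sum += cc;
    return sum + c.serverCount * costPerServer;
}
```

A more sophisticated estimate would replace the flat per-server term with, e.g., configuration-dependent hardware and licensing costs, which is exactly where the tree-structured genotype mentioned above would matter.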
6. CONCLUSION
An architecture design method has been presented that explicitly
addresses the non-functional requirements put on the architecture. The
simulated output always measures the metrics of the business application,
and the new model inserted between the existing models shows that the
quality of these metrics is improved. It has been identified that the
ability of a system to fulfil its non-functional requirements is, to a
considerable extent, restricted by its architecture. The proposed method
starts with a functionality-based design phase in which a software
architecture is designed purely on the basis of the functional
requirements. The architectural design method has been applied, in some
form, in the design of several systems, and experience shows that it
provides appreciated support to software engineers during architectural
design.