
An SMDP-Based Service Model for Inter-Domain Resource Allocation in Mobile Cloud Networks

ABSTRACT

Mobile cloud computing is a promising technique that shifts the data and computing service modules from individual devices to a geographically distributed cloud service architecture. A general mobile cloud computing system is comprised of multiple cloud domains, and each domain manages a portion of the cloud system resources, such as the Central Processing Unit, memory and storage, etc. How to efficiently manage the cloud resources across multiple cloud domains is critical for providing continuous mobile cloud services. In this paper, we propose a service decision making system for inter-domain service transfer to balance the computation loads among multiple cloud domains. Our system focuses on maximizing the rewards for both the cloud system and the users by minimizing the number of service rejections that degrade the user satisfaction level significantly. To this end, we formulate the service request decision making process as a semi-Markov decision process. The optimal service transfer decisions are obtained by jointly considering the system incomes and expenses. Extensive simulation results show that the proposed decision making system can significantly improve the system rewards and decrease service disruptions compared with the greedy approach.

INTRODUCTION

Cloud computing is a promising platform to assist mobile devices in computing and communication. In cloud computing, data and computing modules are located at remote devices in a resource-on-demand and pay-as-you-go manner. Mobile cloud has become a service model that allows mobile devices to utilize resources from the cloud without complex hardware and software implementations at the device side. Due to the mobility of mobile users, location-based (or geo-based) cloud resource provisioning is required to reduce the end-to-end communication delay. As a result, the mobile cloud system should consist of multiple cloud service domains.

One cloud service domain usually provides cloud services to local mobile devices that are connected through local base stations or Internet access points. Although the resources of the mobile cloud are considered infinite compared with those in a single mobile device, the available resources in one cloud service domain are usually limited. Therefore, service transitions between different mobile cloud domains play a critical role in improving the overall cloud resource utilization and quality of experience (QoE) for mobile users.

We consider a mobile cloud system that has the following properties:

1) Both the arrivals and departures of mobile cloud services follow a Poisson distribution.
2) The available resources of the cloud are time varying.
3) The current resource allocation decision may have a significant impact on future decisions.

EXISTING SYSTEM: In the earlier approach, resource allocation is based on a greedy strategy, in which the decision is made according to the current situation only. This kind of decision degrades the performance of the cloud domain, and a user request may be rejected if many requests are already being processed.

PROPOSED SYSTEM: In the proposed multi-cloud-domain system, service arrivals and departures follow a Poisson distribution, and resource allocation considers both the present and future outcomes. This type of resource allocation scheme maximizes the overall rewards of the cloud system and the mobile users. In this section, we present our proposed mobile cloud resource management model for choosing the optimal adjacent cloud domain. The home domain decides whether a mobile service request should be accepted, rejected, or transferred to an adjacent cloud domain.
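As an illustration of the difference between the two approaches, the following C# sketch contrasts a purely greedy admission decision with a reward-aware decision that also weighs the income, occupation cost, and transfer cost of a request. The class names, fields, and reward terms are simplified assumptions for this sketch, not the exact SMDP formulation.

```csharp
// Illustrative sketch only: names and reward terms are assumptions,
// not the exact SMDP model from the paper.
public class CloudDomain
{
    public int AvailableVms;          // free virtual machines in this domain
    public double ServiceIncome;      // income earned by accepting a request
    public double OccupationCost;     // expense of holding a VM for the service
    public double TransferCost;       // communication cost of an inter-domain transfer
}

public enum Decision { Accept, Transfer, Reject }

public static class AdmissionControl
{
    // Greedy approach: looks only at the current VM availability.
    public static Decision Greedy(CloudDomain home)
    {
        return home.AvailableVms > 0 ? Decision.Accept : Decision.Reject;
    }

    // Reward-aware approach: compares the net reward of accepting locally,
    // transferring to an adjacent domain, or rejecting the request.
    public static Decision RewardAware(CloudDomain home, CloudDomain adjacent)
    {
        double acceptReward = home.AvailableVms > 0
            ? home.ServiceIncome - home.OccupationCost
            : double.NegativeInfinity;
        double transferReward = adjacent.AvailableVms > 0
            ? adjacent.ServiceIncome - adjacent.OccupationCost - home.TransferCost
            : double.NegativeInfinity;

        if (acceptReward <= 0 && transferReward <= 0)
            return Decision.Reject;
        return acceptReward >= transferReward ? Decision.Accept : Decision.Transfer;
    }
}
```

The greedy method can accept a request that brings little net reward, while the reward-aware method may prefer to transfer or reject it when the expenses outweigh the income, which is the behaviour the proposed system aims for.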

FEASIBILITY STUDY

Feasibility studies aim to objectively and rationally uncover the strengths and weaknesses of the existing business or proposed venture, the opportunities and threats presented by the environment, the resources required to carry it through, and ultimately the prospects for success. In its simplest terms, the two criteria used to judge feasibility are the cost required and the value to be attained. As such, a well-designed feasibility study should provide a historical background of the business or project, a description of the product or service, accounting statements, details of the operations and management, marketing research and policies, financial data, legal requirements, and tax obligations. Generally, feasibility studies precede technical development and project implementation.

Economical Feasibility

This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, so the expenditures must be justified. The developed system is well within the budget, which was achieved because most of the technologies used are freely available; only the customized products had to be purchased.

Technical Feasibility

This study is carried out to check the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would in turn place high demands on the client. The developed system must therefore have modest requirements, so that only minimal or no changes are needed to implement it.

Operational Feasibility

This aspect of the study checks the level of acceptance of the system by the users. This includes the process of training the users to use the system efficiently. Users must not feel threatened by the system; instead, they must accept it as a necessity. The level of acceptance by the users depends solely on the methods that are employed to educate them about the system and to make them familiar with it. Their level of confidence must be raised so that they are also able to offer constructive criticism, which is welcomed, as they are the final users of the system.

SYSTEM REQUIREMENT

HARDWARE SPECIFICATION:

System       : Pentium IV 2.4 GHz
Hard Disk    : 320 GB
Floppy Drive : 1.44 MB
Monitor      : 15" VGA colour
Mouse        : Logitech
RAM          : 256 MB

SOFTWARE SPECIFICATION:

Domain           : Parallel and Distributed Systems
Operating System : Windows XP
Front End        : C# .NET
Back End         : Microsoft SQL Server 2005

SOFTWARE DESCRIPTION

FRONT END: MICROSOFT .NET FRAMEWORK

The Microsoft .NET Framework is a software framework that is available with several Microsoft Windows operating systems. It includes a large library of pre-coded solutions to common programming problems and a virtual machine that manages the execution of programs written specifically for the framework. The pre-coded solutions that form the framework's Base Class Library cover a large range of programming needs in a number of areas, including user interface, data access, database connectivity, cryptography, web application development, numeric algorithms, and network communications. The class library is used by programmers, who combine it with their own code to produce applications. The CLR provides the appearance of an application virtual machine so that programmers need not consider the capabilities of the specific CPU that will execute the program.

The main components of the .NET Framework are:
Common Language Runtime
Common Type System
.NET Base Class Libraries
Web Forms
Web Services
Windows Forms
ADO.NET

C# and the .NET Framework

C# is a computer language that has a special relationship to its runtime environment, the .NET Framework. C# was initially designed by Microsoft to create code for the .NET Framework, and the libraries used by C# are the ones defined by the .NET Framework. Microsoft describes C# as the preferred language for developing .NET Framework applications.

Object-Oriented Programming

C# is an object-oriented programming (OOP) language. The object-oriented methodology is inseparable from C#, and all C# programs are at least to some extent object oriented. Because of its importance to C#, it is useful to understand the basic principles of OOP before writing even a simple C# program.

OOP Concepts

Object: Objects are the basic run-time entities in an object-oriented system. When a program is executed, objects interact with each other by sending messages. Different objects can also interact with each other without knowing the details of each other's data or code.

Classes: A class is a collection of objects of a similar type. Once a class is defined, any number of objects can be created that belong to that class.

Data Abstraction and Encapsulation: Abstraction refers to the act of representing essential features without including the background details or explanations. Storing data and functions in a single unit (a class) is encapsulation. The data is not accessible to the outside world, and only the functions that are stored in the class can access it.

Inheritance: Inheritance is the process by which objects of one class acquire the properties of objects of another class. Inheritance provides reusability, such as adding additional features to an existing class without modifying it. This is achieved by deriving a new class from an existing one; the new class combines the features of both classes.

Polymorphism: Polymorphism means the ability to exist in more than one form. An operation may exhibit different behaviors in different instances, and the behavior depends on the data types used in the operation. Polymorphism is extensively used in implementing inheritance.
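As a brief, hypothetical C# illustration of encapsulation, inheritance, and polymorphism (the class and member names are invented for this sketch and are not part of the original project):

```csharp
// Illustrative example of encapsulation, inheritance, and polymorphism in C#.
public class CloudService
{
    private int allocatedVms;                  // encapsulated state: not visible outside the class

    public void Allocate(int count)            // only class members can modify the private field
    {
        allocatedVms += count;
    }

    public virtual string Describe()           // virtual method enables polymorphism
    {
        return "Generic cloud service using " + allocatedVms + " VM(s)";
    }
}

public class StorageService : CloudService     // inheritance: StorageService reuses CloudService
{
    public override string Describe()          // overriding changes the behavior for this subclass
    {
        return "Storage service: " + base.Describe();
    }
}

public static class OopDemo
{
    public static void Main()
    {
        CloudService service = new StorageService();   // base-class reference to a derived object
        service.Allocate(2);
        System.Console.WriteLine(service.Describe());  // calls the StorageService override
    }
}
```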

Key Features of C#.NET

C# programs typically have no need for direct pointer manipulation.
Automatic memory management through garbage collection.
Formal syntactic constructs for classes, interfaces, structures, enumerations, and delegates.
The ability to build generic types and generic members. Using generics, you are able to build very efficient and type-safe code that defines numerous placeholders specified at the time you interact with the generic item.
Support for anonymous methods, which allow you to supply an inline function anywhere a delegate type is required.
Numerous simplifications to the delegate/event model, including covariance, contravariance, and method group conversion.
The ability to define a single type across multiple code files (or, if necessary, as an in-memory representation) using the partial keyword.
Creating Windows/Web-based applications easily using the base class libraries and components available in .NET.
Packaging, deploying, remoting, and XML Web services are simple in .NET.
The versioning concept removes the DLL hell issue.

ASP.NET Web Services

ASP.NET was developed in direct response to the problems that developers had with classic ASP. Since ASP is in such wide use, however, Microsoft ensured that ASP scripts execute without modification on a machine with the .NET Framework (the ASP engine, ASP.DLL, is not modified when installing the .NET Framework). A Web service is a method of communication between two electronic devices over the web (Internet). The W3C defines a "Web service" as "a software system designed to support interoperable machine-to-machine interaction over a network". It has an interface described in a machine-processable format (specifically the Web Services Description Language, known by the acronym WSDL). Other systems interact with the Web service in a manner prescribed by its description using SOAP messages, typically conveyed using HTTP with an XML serialization in conjunction with other Web-related standards. The W3C also identifies two major classes of Web services: REST-compliant Web services, in which the primary purpose of the service is to manipulate XML representations of Web resources using a uniform set of "stateless" operations; and arbitrary Web services, in which the service may expose an arbitrary set of operations.
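To make two of the C# language features listed above concrete, namely generics and anonymous methods supplied where a delegate is required, here is a minimal, self-contained sketch; the method and variable names are illustrative only.

```csharp
using System;
using System.Collections.Generic;

public static class FeatureDemo
{
    // A generic method: works for any element type T in a type-safe way.
    public static T FirstOrDefault<T>(List<T> items, Predicate<T> match)
    {
        foreach (T item in items)
        {
            if (match(item))
                return item;
        }
        return default(T);
    }

    public static void Main()
    {
        List<int> vmCounts = new List<int>(new int[] { 0, 3, 5 });

        // An anonymous method supplied where a delegate (Predicate<int>) is required.
        int firstFree = FirstOrDefault(vmCounts, delegate(int count) { return count > 0; });

        Console.WriteLine("First domain with free VMs has " + firstFree + " VM(s)");
    }
}
```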

BACK END: MICROSOFT SQL SERVER

Microsoft SQL Server is a relational database management system (RDBMS) produced by Microsoft. SQL Server 2005 introduces "studios" to help with development and management tasks, such as SQL Server Management Studio and Business Intelligence Development Studio. Management Studio is used to develop and manage SQL Server Database Engine and notification solutions, manage deployed Analysis Services solutions, manage and run Integration Services packages, and manage report servers and Reporting Services reports and report models. Business Intelligence Development Studio provides Reporting Services projects to create reports, Report Model projects to define models for reports, and Integration Services projects to create packages.

DATABASE ENGINE

The database engine is the core service for storing, processing, and securing data. It provides controlled access and rapid transaction processing to meet the requirements of the most demanding data-consuming applications within an enterprise. The database engine is used to create relational databases for online transaction processing or online analytical processing. This includes creating tables for storing data, and database objects such as indexes, views, and stored procedures for viewing, managing, and securing data.
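As an illustration of how the C# front end might communicate with the SQL Server back end through ADO.NET, the following hypothetical snippet opens a connection and runs a simple query. The connection string, database name (CloudDB), and table name (ServiceRequests) are assumptions for this sketch, not part of the original design.

```csharp
using System;
using System.Data.SqlClient;

public static class DatabaseDemo
{
    public static void Main()
    {
        // Hypothetical connection string: server name, database name, and
        // authentication mode depend on the actual deployment.
        string connectionString =
            "Data Source=localhost;Initial Catalog=CloudDB;Integrated Security=True";

        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // Hypothetical query against an assumed table of service requests.
            SqlCommand command = new SqlCommand(
                "SELECT COUNT(*) FROM ServiceRequests", connection);

            int requestCount = (int)command.ExecuteScalar();
            Console.WriteLine("Total service requests: " + requestCount);
        }
    }
}
```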

MODULES:

Upload file to server
Access file from client
Compute system resources
Transfer user request and process it
Analyze the performance of cloud domains

Upload file to server:

In this module, the controller of the cloud provider uploads the file to the cloud server. From there, users can access the file using mobile devices through the Internet.

Access file from client:

The client sends a file request to the server according to its needs.

Compute system resources:

The home domain computes the VM resources that are available in that domain, and also identifies the adjacent domains.

Transfer Request:

The home domain checks the VMs available in the adjacent domain. If there is an available VM, the request is transferred to that adjacent domain, as sketched below.
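A minimal sketch of this check, assuming a hypothetical Domain class that tracks its free VMs and its adjacent domains, is shown here; it illustrates only the control flow, not the full reward-based decision described earlier.

```csharp
using System.Collections.Generic;

public class Domain
{
    public string Name;
    public int FreeVms;
    public List<Domain> AdjacentDomains;
}

public static class TransferModule
{
    // Returns the domain that should serve the request: the home domain if it
    // has a free VM, otherwise the first adjacent domain with a free VM,
    // or null if the request has to be rejected.
    public static Domain SelectServingDomain(Domain home)
    {
        if (home.FreeVms > 0)
            return home;

        foreach (Domain adjacent in home.AdjacentDomains)
        {
            if (adjacent.FreeVms > 0)
                return adjacent;   // transfer the request to this adjacent domain
        }

        return null;               // no VM available anywhere: reject the request
    }
}
```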

Performance Analysis:

Here, we analyze the dropping probability of service requests. This metric is used to evaluate and enhance the performance of the cloud service.
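As a simple illustration, the dropping probability can be estimated as the ratio of failed (rejected) requests to the total number of requests. The sketch below assumes plain integer counters that mirror the fields of the probability table described in the next section.

```csharp
public static class PerformanceAnalysis
{
    // Estimates the dropping probability as failures divided by total accesses.
    // The parameter names mirror the NoofAccess / Nooffailure fields in the
    // probability table of the database design section.
    public static double DroppingProbability(int numberOfAccesses, int numberOfFailures)
    {
        if (numberOfAccesses == 0)
            return 0.0;            // no requests yet, so nothing has been dropped

        return (double)numberOfFailures / numberOfAccesses;
    }
}
```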

DATABASE DESIGN:

Probability Calculation:

Field Name     Field Type     Description
NoofAccess     Varchar(50)    Total number of accesses
Nooffailure    Varchar(50)    Total number of failures
Noofsuccess    Varchar(50)    Total number of successes
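A hypothetical C# helper that records these counters through ADO.NET might look as follows. The table name (ProbabilityCalculation) and connection string are assumptions, and in practice the counters would more naturally be stored as integer columns rather than Varchar(50).

```csharp
using System.Data.SqlClient;

public static class ProbabilityTable
{
    // Hypothetical connection string; adjust to the actual SQL Server instance.
    private const string ConnectionString =
        "Data Source=localhost;Initial Catalog=CloudDB;Integrated Security=True";

    // Inserts one aggregated record of access, failure, and success counts.
    // Column names follow the database design above; the table name is assumed.
    public static void SaveCounts(int accesses, int failures, int successes)
    {
        using (SqlConnection connection = new SqlConnection(ConnectionString))
        {
            connection.Open();
            SqlCommand command = new SqlCommand(
                "INSERT INTO ProbabilityCalculation (NoofAccess, Nooffailure, Noofsuccess) " +
                "VALUES (@access, @failure, @success)", connection);

            command.Parameters.AddWithValue("@access", accesses.ToString());
            command.Parameters.AddWithValue("@failure", failures.ToString());
            command.Parameters.AddWithValue("@success", successes.ToString());
            command.ExecuteNonQuery();
        }
    }
}
```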

SYSTEM IMPLEMENTATION

Implementation of software refers to the final installation of the package in its real environment, to the satisfaction of the intended users and the operation of the system. Users are often not sure that the software is meant to make their job easier, so the following points must be addressed:

The active user must be aware of the benefits of using the system.
The user's confidence in the software must be built up.
Proper guidance must be imparted to the user so that he or she is comfortable using the application.

Before going ahead and viewing the system, the user must know that, for viewing the results, the server program should be running on the server. If the server object is not running on the server, the actual processes will not take place.

User Training
To achieve the objectives and benefits expected from the proposed system, it is essential for the people who will be involved to be confident of their role in the new system. As systems become more complex, the need for education and training becomes more and more important. Education is complementary to training: it brings life to formal training by explaining the background of the system to the users. Education involves creating the right atmosphere and motivating the user staff, and it can make training more interesting and more understandable.

Training on the Application Software


After providing the necessary basic training on computer awareness, the users have to be trained on the new application software. This training covers the underlying philosophy of the new system, such as the screen flow, screen design, the type of help available on screen, the types of errors that can occur while entering data, the corresponding validation check at each entry, and the ways to correct the data entered. This training may differ across different user groups and across different levels of the hierarchy.

Operational Documentation

Once the implementation plan is decided, it is essential that the user of the system is made familiar and comfortable with the environment. Documentation covering the whole operation of the system is developed, and useful tips and guidance are given inside the application itself. The system is developed to be user friendly, so that the user can operate it from the tips given within the application.

System Maintenance

The maintenance phase of the software cycle is the time in which the software performs useful work. After a system is successfully implemented, it should be maintained in a proper manner. System maintenance is an important aspect of the software development life cycle. The need for system maintenance is to make the system adaptable to changes in its environment. There may be social, technical, and other environmental changes that affect a system once it is implemented. Software product enhancements may involve providing new functional capabilities, improving user displays and modes of interaction, or upgrading the performance characteristics of the system. Only through proper system maintenance procedures can the system be adapted to cope with these changes. Software maintenance is, of course, far more than finding mistakes.

Corrective Maintenance

The first maintenance activity occurs because it is unreasonable to assume that software testing will uncover all latent errors in a large software system. During the use of any large program, errors will occur and be reported to the developer. The process that includes the diagnosis and correction of one or more errors is called corrective maintenance.

Adaptive Maintenance

The second activity that contributes to a definition of maintenance occurs because of the rapid change that is encountered in every aspect of computing. Adaptive maintenance, the activity that modifies software to properly interface with a changing environment, is therefore both necessary and commonplace.

Perfective Maintenance

The third activity that may be applied to a definition of maintenance occurs when a software package is successful. As the software is used, recommendations for new capabilities, modifications to existing functions, and general enhancements are received from users. To satisfy requests in this category, perfective maintenance is performed. This activity accounts for the majority of all effort expended on software maintenance.

Preventive Maintenance

The fourth maintenance activity occurs when software is changed to improve future maintainability or reliability, or to provide a better basis for future enhancements. Often called preventive maintenance, this activity is characterized by reverse engineering and re-engineering techniques.

SYSTEM TESTING

System testing is the stage of implementation aimed at ensuring that the system works accurately and efficiently before live operation commences. Testing is the process of executing a program with the intent of finding an error. A good test case is one that has a high probability of finding an as-yet-undiscovered error; a successful test is one that uncovers such an error. Testing is vital to the success of the system. System testing makes the logical assumption that if all parts of the system are correct, the goal will be successfully achieved. The candidate system is subjected to a variety of tests: on-line response, volume, stress, recovery, security, and usability tests. A series of tests is performed before the system is ready for user acceptance testing. Any engineered product can be tested in one of the following ways: knowing the specified functions that a product has been designed to perform, tests can be conducted to demonstrate that each function is fully operational; or, knowing the internal workings of a product, tests can be conducted to ensure that "all gears mesh", that is, that the internal operation of the product performs according to the specification and all internal components have been adequately exercised.

UNIT TESTING

Unit testing is the testing of each module before the integration of the overall system is done. Unit testing focuses verification efforts on the smallest unit of software design, the module. This is also known as module testing. The modules of the system are tested separately, and this testing is carried out during programming itself. In this testing step, each module is checked to confirm that it works satisfactorily with regard to the expected output. There are also validation checks for the fields; for example, a validation check verifies the data given by the user, covering both the format and the validity of the data entered. This makes it easy to find errors and debug the system.
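As a small, hypothetical illustration of such a field validation check together with a simple test driver (the names and checks are assumptions, not taken from the original project):

```csharp
using System;
using System.Text.RegularExpressions;

public static class FieldValidation
{
    // Hypothetical validation check: a request-count field must be a
    // non-negative integer, covering both format and validity of the input.
    public static bool IsValidRequestCount(string input)
    {
        if (string.IsNullOrEmpty(input))
            return false;

        if (!Regex.IsMatch(input, @"^\d+$"))                  // format check: digits only
            return false;

        int value;
        return int.TryParse(input, out value) && value >= 0;  // validity check
    }

    // A minimal unit-test-style driver for the validation check.
    public static void Main()
    {
        Console.WriteLine(IsValidRequestCount("42"));    // expected: True
        Console.WriteLine(IsValidRequestCount("-3"));    // expected: False (sign not allowed)
        Console.WriteLine(IsValidRequestCount("abc"));   // expected: False (wrong format)
        Console.WriteLine(IsValidRequestCount(""));      // expected: False (empty input)
    }
}
```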

INTEGRATION TESTING

Data can be lost across an interface; one module can have an adverse effect on another; and sub-functions, when combined, may not produce the desired major function. Integration testing is systematic testing that can be done with sample data. The need for integration testing is to find the overall system performance. There are two types of integration testing:

i) Top-down integration testing
ii) Bottom-up integration testing

WHITE BOX TESTING

White box testing is a test case design method that uses the control structure of the procedural design to derive test cases. Using white box testing methods, we derived test cases that guarantee that all independent paths within a module have been exercised at least once.

BLACK BOX TESTING

Black box testing is done to find:
o Incorrect or missing functions
o Interface errors
o Errors in external database access
o Performance errors
o Initialization and termination errors

Functional testing is performed to validate that an application conforms to its specifications and correctly performs all of its required functions, which is why it is also called black box testing. It tests the external behavior of the system: knowing the specified functions that the product has been designed to perform, tests can be conducted to demonstrate that each function is fully operational.

VALIDATION TESTING

After the culmination of black box testing, the software is completely assembled as a package, interfacing errors have been uncovered and corrected, and the final series of software validation tests begins. Validation testing can be defined in many ways, but a simple definition is that validation succeeds when the software functions in a manner that can be reasonably expected by the customer.

USER ACCEPTANCE TESTING

User acceptance of the system is the key factor for the success of the system. The system under consideration is tested for user acceptance by constantly keeping in touch with the prospective users during development and making changes whenever required.

OUTPUT TESTING

After performing validation testing, the next step is output testing of the proposed system, since no system can be useful if it does not produce the required output in the specified format; the user is asked about the required format of the output displayed or generated by the system. Here the output format is considered in two ways: one is the on-screen format and the other is the printed format. The output format on the screen is found to be correct, as the format was designed in the system design phase according to the user's needs. For the hard copy, the output also matches the requirements specified by the user. Hence output testing did not result in any corrections to the system.

CONCLUSION

In this paper, we have developed an SMDP-based computing model for inter-domain services in a cloud computing system, considering the system gain, the expenses of computing resources, and the communication costs. The optimal decision is made such that the overall system rewards are maximized. In our future work, we will analyze the optimal system resources toward the maximal system rewards under a given dropping-probability constraint for a large-scale cloud system.

