
5 Steps to Design and Build Your Own Cloud Computing Infrastructure

Cloud computing necessarily requires coordination not just across a variety of data center infrastructure but across disparate teams as well. The organization must be ready and willing to change its thinking about applications: how they are deployed, how they are budgeted, and how they are assigned resources. Project managers will need to change how they assign costs to projects, because virtualization and a cloud model require a focus on compute resources rather than physical hardware and software.


Start with a small application that is non-critical to the business. The goal of a cloud computing pilot is to recoup idle resources, validate the cost savings, and gain an understanding of how to manage performance across a virtualized infrastructure.

Step 1 -- Decide which technology will be the basis for your on-demand application infrastructure
Most organizations don't start thinking about a cloud computing infrastructure until they've already deployed a number of virtualized applications, so the decision regarding which virtualization technology will be the organizational standard is often already made. But if it hasn't been, decide before you start. There are pros and cons to both heterogeneous and homogeneous virtualization infrastructures, and the decision will impact the ability to manage and monitor the infrastructure later, so make this decision first. Don't forget that automating the provisioning and management processes requires changes to the application infrastructure at the network layer. The ability to boot from the network and automate network/IP configuration is paramount to ensuring connectivity and the ability of processes to spin up images of applications on demand.
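As a concrete illustration of the network-boot requirement, the sketch below generates ISC-dhcpd-style host entries so that freshly provisioned images can PXE-boot with predictable addresses. The hostnames, MAC addresses, subnet, and boot file name are all hypothetical placeholders, not values from any particular environment.

```python
# Sketch: generate DHCP host entries so new VM images can network-boot
# with a predictable IP. Hostnames, MACs, and the subnet here are
# hypothetical placeholders, not values from any real deployment.

def dhcp_host_entry(hostname, mac, ip, boot_file="pxelinux.0"):
    """Render an ISC-dhcpd-style host block for one bootable image."""
    return (
        f"host {hostname} {{\n"
        f"  hardware ethernet {mac};\n"
        f"  fixed-address {ip};\n"
        f"  filename \"{boot_file}\";\n"
        f"}}\n"
    )

def provision_entries(images, subnet_prefix="10.0.2", first_host=10):
    """Assign sequential addresses from a pool as images are provisioned."""
    entries = []
    for offset, (hostname, mac) in enumerate(images):
        ip = f"{subnet_prefix}.{first_host + offset}"
        entries.append(dhcp_host_entry(hostname, mac, ip))
    return "\n".join(entries)

config = provision_entries([
    ("app-image-01", "52:54:00:aa:bb:01"),
    ("app-image-02", "52:54:00:aa:bb:02"),
])
print(config)
```

In practice the generated entries would be pushed to the DHCP/PXE server automatically as part of the provisioning workflow, so that no one has to touch the network configuration when a new image comes online.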

Step 2 -- Determine what delivery infrastructure will be used to abstract the application infrastructure
The on-demand capabilities of a cloud infrastructure are primarily designed to do two things: ensure scalability and make efficient use of resources. To accomplish the former, some method of load balancing/application delivery will be necessary. This layer of the architecture will abstract the applications from the instances and provide a consistent means of access to users and customers, shielding them from the high rate of change occurring in the infrastructure.

The delivery infrastructure/load balancer will need to be included in the provisioning process and will be relied upon to provide visibility into application performance, capacity, and resource management, so ensure that your choice can be integrated into the automation system. This can be accomplished via standards-based APIs (application programming interfaces) or through remote execution of scripts. Most solutions are capable of one or the other, or both, but ensure your choice matches the way in which you will integrate the system into the architecture. Also verify the solution can provide the visibility you will need into performance metrics. If thresholds will be based on capacity, ensure the application delivery infrastructure can provide that information. Decide early what metrics and thresholds you'll use to trigger provisioning processes and ensure the infrastructure can support them.

Step 3 -- Prepare the network infrastructure
This step may seem too obvious to state, but a great deal goes into preparing the network to deal with an on-demand application infrastructure. Hardware -- network, storage, application delivery -- must be configured correctly for the application being deployed. While this is a simple task for a single virtualized application, remember you'll eventually share hardware resources across multiple application instances. The network must be able to handle applications migrating from hardware to hardware, and must be configured to deal with such change without requiring human intervention. Because applications will be moving from server to server, the network will require constant optimization to adapt to changing traffic patterns. This rapid rate of change necessitates automation: manual processes cannot keep up, and human intervention is likely to introduce errors.
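One way to picture "no human intervention" here: derive switch-port settings from the current application placement, so a migration just means rerunning the derivation. The VLAN numbers, host names, and port names below are invented for illustration; a real system would push the resulting settings through the switch's management API.

```python
# Sketch: keep switch VLAN assignments in step with application
# placement so workloads can migrate between hosts without manual
# reconfiguration. VLAN ids, hosts, and ports are hypothetical.

APP_VLANS = {"billing": 110, "web": 120}               # app -> VLAN id
HOST_PORTS = {"host-a": "ge-0/0/1", "host-b": "ge-0/0/2"}

def vlan_commands(placement):
    """Emit the per-port VLAN settings implied by the current placement."""
    commands = []
    for host, app in sorted(placement.items()):
        port, vlan = HOST_PORTS[host], APP_VLANS[app]
        commands.append(f"set interface {port} vlan {vlan}")
    return commands

# Application "billing" migrates from host-a to host-b: rerun, no hands.
before = vlan_commands({"host-a": "billing", "host-b": "web"})
after = vlan_commands({"host-a": "web", "host-b": "billing"})
print(before)
print(after)
```

Treating the network configuration as a function of placement, rather than as state edited by hand, is what lets the network absorb the rapid rate of change the text describes.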

Step 4 -- Provide visibility and automation of management tasks
Visibility is key to an on-demand infrastructure. The infrastructure and associated management systems must know what is running, where, and when in order to evaluate available resources and make decisions regarding their assignment. Determine how you will collect the data and from where. CPU and memory utilization describe the hardware, but will they be collected via the virtualization management system or from the individual servers? Will bandwidth utilization come from routers and switches or from the application delivery infrastructure? Capacity and response time can be collected from individual servers, the application delivery infrastructure, or third-party application performance management systems. Decide which system or device is authoritative for each metric and verify there is a way to feed that information in real time to the automation system.
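The "one authoritative source per metric" decision can be made explicit in code, for example as a registry mapping each metric to the only collector allowed to report it. The collector functions below are stubs returning fixed numbers; real ones would query the hypervisor manager, the load balancer, or an APM system as decided in this step.

```python
# Sketch: designate one authoritative source per metric and gather a
# snapshot for the automation system. Collectors are stubs with
# invented values; real ones would query the systems chosen above.

def from_hypervisor_manager():
    return {"cpu_util": 0.62, "mem_util": 0.71}

def from_load_balancer():
    return {"bandwidth_mbps": 340, "avg_response_ms": 210}

# Metric -> the single system allowed to report it.
AUTHORITATIVE = {
    "cpu_util": from_hypervisor_manager,
    "mem_util": from_hypervisor_manager,
    "bandwidth_mbps": from_load_balancer,
    "avg_response_ms": from_load_balancer,
}

def snapshot():
    """Pull each metric only from its authoritative source."""
    return {metric: source()[metric]
            for metric, source in AUTHORITATIVE.items()}

print(snapshot())
```

Encoding the authority decision in one table means a disagreement between two systems' numbers can never silently leak into the automation logic: each metric has exactly one origin.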

Step 5 -- Integrate all the moving parts, such that the infrastructure actually becomes on-demand and realizes the benefits of abstraction, automation, and resource sharing
The most difficult piece is last, and it requires that the previous steps be completed, as it relies on those systems and information. The integration, i.e. automation, of all the requisite pieces of the infrastructure -- network, storage, and application -- is what enables the infrastructure to act on-demand. Without automation, the realization of cost-reduction benefits will be marginal. The integration step automates workflow. For example, when an application meets or exceeds a service level agreement or established threshold, a workflow should be executed to spin up additional images.

Automation requires constant monitoring across the application infrastructure, from the network layer to the applications executing in the environment. In most cases this integration will require a custom solution. A few commercial implementations exist to assist in automating infrastructure, but if you're an early adopter it may be necessary to build an automation framework and management system yourself.

Virtualization is the first step toward a cloud infrastructure. Moving beyond virtualization requires the ability to coordinate the actions of multiple tiers of the architecture in response to specific events, so it is necessary to re-evaluate the suitability of each critical layer of the architecture for inclusion in the new infrastructure model. Building a cloud infrastructure will require an investment -- if not in hardware or solutions, then in time and effort. Reconfiguration, automation, and integration will require significant IT resources to accomplish. The investment up front should pay off quickly as your on-demand infrastructure recoups unused processing power and makes the entire data center architecture more efficient.
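The threshold-to-workflow wiring described above can be sketched as a single evaluation pass: check each application against its threshold and run a spin-up workflow when it is breached. The threshold values, application names, and the spin_up_instance stub are hypothetical; in a real framework the stub would clone an image, boot it, and join it to the delivery pool.

```python
# Sketch: the integration glue as one evaluation pass. When an app's
# utilization crosses its threshold, a provisioning workflow fires.
# Thresholds, app names, and the spin_up_instance stub are invented.

THRESHOLDS = {"web": 0.80, "billing": 0.90}   # max acceptable utilization

def spin_up_instance(app, launched):
    """Stand-in for the workflow: clone image, boot, join the pool."""
    launched.append(app)

def evaluate(utilization, launched):
    """Run the spin-up workflow for every app past its threshold."""
    for app, used in utilization.items():
        if used >= THRESHOLDS[app]:
            spin_up_instance(app, launched)

launched = []
evaluate({"web": 0.85, "billing": 0.40}, launched)
print(launched)  # only "web" breached its threshold
```

In a production framework this pass would run continuously against the real-time metric feed established in Step 4, closing the loop between visibility and action.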

Summary
Designing private cloud solutions to meet a particular business need can solve some of the security challenges that might exist with public cloud solutions; however, it may introduce new challenges depending on the current maturity of the IT organization implementing the private cloud solution. These challenges include return-on-investment discussions, training, and service management considerations. Close evaluation and analysis are required to ensure a private cloud solution is the best fit for a given business requirement.
