

Key points: System Center 2012 enables delivering IT as a Service through the two personas we defined: the application owner and the data center administrator. Each has the capabilities required to deliver a private cloud as well as to leverage hybrid computing models.

For example, consider a self-service experience that enables your application owners to specify their service requirements. Let's say the consumer is trying to provision a SharePoint service. You need to understand the topology and architecture of the application service in question. An application deployed in a cloud computing model is called a service. This necessitates a service model that accurately binds the application's architecture to the underlying resources where it will be hosted. The service model comprises:
Service definition information, deployed as roles. Roles are like DLLs, i.e. a collection of code with an entry point that runs in its own virtual machine:
o Front end: e.g. load-balanced stateless web servers
o Middle worker tier: e.g. order processing, encoding
o Backend storage: e.g. SQL tables or files
Service configuration information
Additionally, update domains, availability domains, and scale-out rules

You will need a set of process automation capabilities to break down this application provisioning request into the enterprise change requests that need to be implemented. This could include setting up the underlying infrastructure, followed by a set of application configuration/release requests that need to be tracked (and ideally implemented with orchestrated automation). Next you need a set of provisioning tools that actually configure and deploy the infrastructure and application layers.
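The service model described above can be sketched as a small data structure. This is a conceptual illustration only; the class and field names are hypothetical and not part of any System Center API.

```python
from dataclasses import dataclass, field

@dataclass
class Role:
    """A role: code with an entry point, running in its own VM."""
    name: str
    tier: str                # e.g. "web", "worker", "storage"
    instance_count: int      # scale-out rule: how many VM instances
    update_domains: int = 2  # spread instances to survive rolling updates

@dataclass
class ServiceModel:
    """Binds an application's architecture to the hosting resources."""
    name: str
    roles: list = field(default_factory=list)
    configuration: dict = field(default_factory=dict)

# A SharePoint-like service with a front end, worker tier, and backend
sharepoint = ServiceModel(
    name="SharePoint",
    roles=[
        Role("web-frontend", "web", instance_count=3),
        Role("order-processing", "worker", instance_count=2),
        Role("sql-backend", "storage", instance_count=2),
    ],
    configuration={"load_balanced": True},
)

total_vms = sum(r.instance_count for r in sharepoint.roles)
print(total_vms)  # 7
```

A provisioning system would walk a structure like this to derive the change requests for each tier.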
The underlying datacenter resources could be physical, virtual, private, or public, as dictated by the application's service model. Once the application service is deployed, it would immediately need to be discovered and monitored for reporting and health tracking.

Here you see how the System Center 2012 components offer these life cycle management capabilities in combination to help you deliver hybrid IT as a Service per your organization's requirements:
o App Controller 2012 offers the self-service experience that allows your application owners to manage their apps across private and public environments.
o Service Manager 2012 offers the standardized self-service catalog that defines templates for your applications and infrastructure.
o App Controller 2012, Virtual Machine Manager 2012, Service Manager 2012, and Operations Manager 2012 work together to maintain the service model through the application service life cycle.
o Orchestrator 2012 and Service Manager 2012 offer orchestrated automation for the process workflows required to drive your provisioning and monitoring tools.
o Virtual Machine Manager 2012 and Configuration Manager 2012 can provision physical, virtual, and cloud environments.
o Operations Manager 2012 monitors your application services end-to-end and offers deep application insight to help you deliver predictable SLAs.
Your datacenter resources could be deployed anywhere from physical boxes, to virtual, to private, to public clouds with Windows Server/Hyper-V and Windows Azure.

However, to get to this agile self-service end state, you will have to start by abstracting your infrastructure and allocating it appropriately so that your business units can deploy and manage their applications on top. How does System Center 2012 get you to this point where you can deliver IT as a Service?
Application Management: deploying and operating your business applications.
Service Delivery & Automation: standardizing and automating service and resource provisioning; managing change and access controls.
Infrastructure Management: deploying and operating all the underlying infrastructure on which your business applications and services run.

Key points: System Center 2012 cloud and data center management solutions empower you with a common management toolset for your private and public cloud applications and services, helping you confidently deliver IT as a Service for your business. System Center 2012 helps your organization consume and deliver IT as a Service by enabling productive infrastructure, predictable applications, and cloud on your terms. It helps you deliver flexible and cost-effective private cloud infrastructure to your business units in a self-service model, while carrying forward your existing data center investments. Recognizing that applications are where core business value resides, System Center 2012 offers deep application insight, which, combined with a service-centric approach, helps you deliver predictable application service levels. Finally, it empowers you to deliver and consume private and public cloud computing on your terms, with common management experiences across your hybrid environments.

Productive Infrastructure

System Center 2012 helps you deliver flexible and cost-effective infrastructure with what you already know and own. System Center 2012 helps you integrate heterogeneous data center investments, including multi-hypervisor environments. You can pool and abstract your data center resources and deliver self-service infrastructure to your business units in a flexible, yet controlled, manner.
Predictable Applications
Apps power your business. System Center 2012 helps you deliver predictable application service levels with deep application insight, and holistically manage your application services, which is where your core business value resides.

Your Cloud
Private and public cloud computing on your terms, managed with a common toolset. System Center 2012 empowers you to deliver and consume private and public cloud computing on your terms, with common management experiences across your hybrid environments.

Key points: The key steps in deploying your private cloud infrastructure from the different fabric resources in your datacenter:
o Deploy your compute fabric through bare metal OS deployments of Hyper-V servers.
o Discover, classify, and allocate your storage fabric for private cloud use.
o Abstract your networking fabric for use in your private cloud.
o Pull these fabric resources together and create clusters for use as the underlying infrastructure for the private cloud.

Key points: The key steps in transforming your data center's fabric resources into a private cloud:
o Deploy the underlying compute resources, such as bare metal OS deployments, as well as your Hyper-V servers.
o Discover, classify, and allocate the storage resources in the fabric to your different virtualized environments.
o Simplify the complex networking requirements in your datacenter for use in your private cloud.
o Pull the different fabric elements together and create clusters for use as the underlying infrastructure for your private cloud.

Key points: Even if you have multiple hypervisors in your environment, you do not need different management tools to manage them. Manage heterogeneous hypervisors with VMM 2012, and use the same methods to deploy services to different hypervisors. The data center administrator controls the underlying infrastructure, and the application owners simply use it.

Key points: Let's now look at how System Center 2012 solutions help you deal more efficiently with the deployment and configuration of resources so you can deliver IT as a Service. System Center 2012 has both physical and virtual deployment capabilities. Customers can use System Center 2012 Configuration Manager or System Center 2012 Virtual Machine Manager for different provisioning scenarios.

Customer challenges:
o I need to be able to quickly provision bare metal servers for specific workload or service usage.
o I have different types of storage with different costs, and I want to ensure that the correct storage is being used for my VMs.
o I have a complex networking environment, and I don't want to expose that complexity to people who don't need to know it.
System Center 2012 cloud and datacenter management solutions empower you with a common management toolset for your physical and virtual resources. Over 80% of our customers run a mix of physical and virtual infrastructure, so they need to be able to quickly provision and configure operating systems and applications on their physical and virtual servers. Many customers are already using Configuration Manager in their datacenters to deploy operating systems and applications to their servers; even bare-metal deployment of operating systems is supported. Configuration Manager can also help ensure configuration compliance for those machines. New in the 2012 release, Configuration Manager provides:
o Full unattended installation mode with media for deployment
o Extensibility to automate the selection of an available task sequence

Recognizing that today's datacenters comprise a mix of physical and virtual resources, Virtual Machine Manager supports deployment and configuration of virtual servers and Hyper-V. Additionally, VMM can support the provisioning of VMs on VMware vSphere and Citrix XenServer hypervisors and clusters.

Key takeaways: Bare metal deployment with VMM 2012 means discovering bare metal machines and bringing them into a fully provisioned, Hyper-V-enabled state. VMM, WDS, and the library server work together to bring up the new server. You can create a cluster on the new server, add it to an existing cluster, or just run virtual machines on it.


Key points: Automated bare metal Hyper-V deployment in action. The steps to take a bare metal machine and provision it with Hyper-V are roughly as follows:
o First, the VMM server connects to the out-of-band management controller. For example, the baseboard management controller for HP would be an iLO card; for Dell it's a DRAC.
o Reboot and do a PXE boot. The PXE boot hits the WDS server, which is authorized through VMM, so the WDS server communicates with VMM to PXE boot the server.
o After the PXE boot, WinPE is downloaded onto the bare metal server. At that point it's booted up, and you can run generic command execution to do things like configure partitions and get the machine ready to accept the boot-from-VHD file that will be copied down to it.
o After everything has been repartitioned and rebooted: download a VHD onto the machine, reboot the machine from the VHD, then inject the drivers. This ensures the OS has the proper drivers, such as special drivers for the NIC or SCSI adapter.
o Join it to the domain, enable the Hyper-V role, reboot the machine, and add it to the host group (VMM already has a host group in this example).
To do this, you configure Hyper-V, add a Hyper-V host, choose Deploy from bare metal, and go through the wizard. At this point we have a Hyper-V host enabled and added to a host group. Thus we've completed the first steps of creating a virtualized infrastructure for the private cloud: managing and deploying the compute resources.
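The bare-metal provisioning flow above is essentially an ordered sequence of steps, each of which must complete before the next begins. A minimal sketch follows; the step names are illustrative labels, not VMM API names, and the function is a stand-in for the real orchestration.

```python
# Ordered steps VMM drives when provisioning a bare-metal Hyper-V host,
# as described in the notes above.
PROVISIONING_STEPS = [
    "connect_bmc",          # out-of-band controller (iLO, DRAC)
    "pxe_boot",             # WDS authorizes the boot through VMM
    "download_winpe",       # WinPE runs partitioning commands
    "copy_vhd",             # boot-from-VHD image from the library
    "reboot_from_vhd",
    "inject_drivers",       # NIC / SCSI drivers for this hardware
    "join_domain",
    "enable_hyper_v_role",
    "add_to_host_group",
]

def provision(server: str) -> list:
    """Run each step in order, returning the completed log."""
    log = []
    for step in PROVISIONING_STEPS:
        # A real implementation would call out to VMM/WDS here.
        log.append(f"{server}: {step}")
    return log

log = provision("bare-metal-01")
print(log[-1])  # bare-metal-01: add_to_host_group
```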


Key points: This process is important if you have different types of storage with different costs, and you want to ensure that the correct storage is being used for your VMs. It includes:
o Administer storage classification, allocations, and access
o SMI-S support for storage allows for tiering and classification
o Better utilization and tiering of the storage infrastructure


Key points:
o Rapid provisioning of storage is limited only by the capabilities of the array
o Line of sight into the virtualization fabric
o Help the frustrated administrator who has no visibility into the storage fabric
o Simplify the end-to-end mapping of virtualization to storage assets
o Simplify consumption of storage capacity
o Enable IT to provision storage on demand
o Create value-add on top of the VMM storage model and cmdlets
o Remove the middleman when requesting storage
o Minimize human error
o Deploy VMs faster with no load on the network, leveraging SAN capabilities

Cost reduction through ease of use

Reduction in complexity through automation
Reduction in deployment friction


Key points:
Discover: SMI-S support for array-based discovery
o External: storage arrays, pools, logical units (LUNs), storage groups, endpoints, and initiators
o Local: host-side disks, volumes, initiators (FC, iSCSI), ports
Classify: generate a user-defined capability for storage
o Create tiers of storage definitions
o Associate a storage pool with the classification
Allocate: control what storage is consumed by hosts and clusters
o Associate storage pools and logical units with a host group before assigning to a cluster
o Create new logical units from a storage pool
Assign: expose new logical units to a host or cluster
o Unmasking operations, initialization of the disk, creation of the volume
o Creates a CSV automatically in the cluster case
Create LUN:
o From available capacity
o Writeable snapshot of a logical unit
o Full clone of a logical unit
Associate storage: associate a storage pool and/or logical unit with a host group for consumption by the hosts/clusters contained in that host group. You can provision LUNs, snapshot LUNs, or copy LUNs depending on need and the capabilities of the SAN. You can assign them to hosts as LUNs, passthrough disks, or Cluster Shared Volumes.
Expose iSCSI storage to a host/cluster using VMM:
o Creation of persistent sessions
o Present an iSCSI array to an existing host/cluster
o Present a host/cluster to an existing iSCSI array
o Support for MultiPortPerView, AllPortsPerView, OnePortPerView
Simplify multi-path claiming of storage devices using the default MSDSM.
Automatic creation of storage groups: standalone host - per host; cluster - per node or per cluster.
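The discover, classify, allocate flow above can be modeled roughly as follows. This is a toy illustration; the class, tier, and pool names are hypothetical, not VMM objects.

```python
class StorageFabric:
    """Toy model of VMM-style storage classification and allocation."""

    def __init__(self):
        self.classifications = {}   # tier name -> list of pool names
        self.allocations = {}       # host group -> list of pool names

    def classify(self, pool: str, tier: str):
        """Associate a discovered storage pool with a user-defined tier."""
        self.classifications.setdefault(tier, []).append(pool)

    def allocate(self, host_group: str, tier: str):
        """Allocate every pool in a tier to a host group, ahead of
        assigning logical units to a cluster in that group."""
        pools = self.classifications.get(tier, [])
        self.allocations.setdefault(host_group, []).extend(pools)
        return pools

fabric = StorageFabric()
fabric.classify("array1-pool-ssd", "Gold")    # discovered via SMI-S
fabric.classify("array2-pool-sata", "Bronze")
print(fabric.allocate("Production", "Gold"))  # ['array1-pool-ssd']
```

Tiers like "Gold" and "Bronze" are the user-defined classifications; allocation scopes them to a host group before any cluster can consume them.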


Key points: Logical abstraction is useful if you have a complex networking environment and don't want to expose that complexity to users who don't need to know it. Network abstraction covers logical network abstraction with IP and MAC address management, and load balancer support. Logical network abstraction is useful for cloud environments and can help with group interactions, reducing the time required to provide access. Logical abstraction of the network fabric is performed through System Center Virtual Machine Manager 2012.


Key points: With networking and load balancer integration:
Logical network: classification of the network a VM connects to, based on purpose or network access level.
Logical network definition: maps a logical network to the network topology, based on the connectivity of the hypervisor host in relation to the physical topology.
IP pool: a range of IPs managed by VMM, used for static IP assignment to a guest OS or as a VIP on a load balancer.
MAC pool: a range of MACs managed by VMM, used for static MAC assignment to a VM's virtual network adapter.
Load balancer automation:
o Provisioning of VIPs: a dedicated IP (DIP) is statically assigned to the VM from an IP pool; the virtual IP is assigned from an IP pool.
o Load balancer provider: integration with hardware/software load balancers; providers are based on PowerShell and implemented by the vendor.
o VIP template: a property bag with the most commonly used settings on the load balancer, used to streamline the creation of a VM that needs connectivity to the load balancer.


Logical networks: The physically diverse infrastructure represented on this slide contains two types of resources. We can have one or more physical network resources, which is typical for many organizations. Add to that two separate data centers with separate networking environments and separate IP address ranges (for networking, internet, VLAN, subnet, DMZ, internal production). What we can do is take that complex networking environment and simplify it, creating a logical and standardized environment by defining logical networks.

For example: in the development environment, with only one physical networking environment, you are able to create multiple logical networks attached to that same networking environment. This way you have the same network labels, DMZ and Prod, in the development environment, in Datacenter One, and in Datacenter Two, all without having to expose the complex networking underneath.

To deploy cloud services we use the same service template models. Each of these services has three different types of networking requirements. Say it's a web pool with a backend SQL server, and you need to have the web servers in both development and production: the web tier sits in the DMZ environment, the database tier sits in the production environment, and the app tier actually spans both. In this scenario, you can create a logical networking environment underneath that is the same across all these different clouds; yet depending on where a service is deployed, it connects to the correct logical networks. In development it connects to the DMZ and production networks there, but in production the first two servers, because they were deployed on Datacenter One resources, connect to the logical networks in Datacenter One.
The last service, because it is deployed in the Datacenter Two environment, connects to the DMZ and production logical networks in Datacenter Two. Thus you can see: not only can we deploy these services, but when we do, we can assign them the correct networking requirements for wherever they are located physically. The hosts they are connected to may have totally different IP ranges, but when we deploy these services they will be connected appropriately to the correct network services.
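The mapping from a logical network name to a per-site physical definition can be sketched as a simple lookup. The VLAN IDs and subnets below are made-up examples, chosen only to show that one logical name resolves differently per site.

```python
# One logical network name ("DMZ", "Prod") maps to a different physical
# definition (VLAN/subnet) depending on where the chosen host sits.
LOGICAL_NETWORKS = {
    "DMZ": {
        "Development":   {"vlan": 10,  "subnet": "10.0.10.0/24"},
        "DatacenterOne": {"vlan": 110, "subnet": "172.16.1.0/24"},
        "DatacenterTwo": {"vlan": 210, "subnet": "192.168.1.0/24"},
    },
    "Prod": {
        "Development":   {"vlan": 20,  "subnet": "10.0.20.0/24"},
        "DatacenterOne": {"vlan": 120, "subnet": "172.16.2.0/24"},
        "DatacenterTwo": {"vlan": 220, "subnet": "192.168.2.0/24"},
    },
}

def resolve(logical_name: str, site: str) -> dict:
    """A service template only names the logical network; the site of
    the host it lands on decides which physical definition applies."""
    return LOGICAL_NETWORKS[logical_name][site]

print(resolve("DMZ", "DatacenterTwo")["vlan"])  # 210
```

The same template deployed to Datacenter One would resolve "DMZ" to VLAN 110 instead, which is exactly the behavior the notes describe.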


Key points: IP pools are assigned to VMs and hosts, as well as virtual IPs. They're specified when you create a virtual machine template: inside that template, under the networking section, you add a new network adapter and decide whether or not to use an IP pool. The benefit of an IP pool is that you can assign an IP to a VM as a static IP, instead of having to go talk to the network administrator to request an IP address to statically assign to the machine. You can automatically, at deployment time, choose the next available IP within the pool, pull out that IP, and deploy it into the VM as a static IP. So your environment doesn't need to have DHCP enabled, and administrators don't have to manually type in IP addresses at service deployment time.

Now when you use VMM to delete this VM or service, it returns that IP address to the pool so that it can be used for another service deployment. MAC pools work very similarly: you can assign a specific MAC address to a virtual machine when you deploy it. You define a range of MAC addresses, and they are assigned at deployment time.
When you do this, you assign the MAC address to the VM before boot. When you boot up the virtual machine it already has that MAC address, and the MAC address stays with it wherever you migrate the virtual machine. Lastly, there are virtual IP pools, which are used by load balancers. When you create a load-balanced web service, and deploy a service that is connected to a load balancer, you have to assign a virtual IP to that load balancer; virtual IP pools allow you to do that. Those virtual IPs are assigned to the clouds, and as you deploy services, VMM picks one for the particular service that needs it. Again, when you delete the service, the virtual IP returns to the pool.
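The allocate-on-deploy, return-on-delete behavior of these pools can be sketched in a few lines. This is an illustrative model, not VMM's implementation; the class name and address range are invented for the example.

```python
import ipaddress

class IPPool:
    """Static IP pool: hand out the next free address at deployment
    time and return it to the pool when the VM or service is deleted."""

    def __init__(self, start: str, end: str):
        first, last = ipaddress.ip_address(start), ipaddress.ip_address(end)
        self.free = [str(ipaddress.ip_address(i))
                     for i in range(int(first), int(last) + 1)]
        self.assigned = {}  # vm name -> ip

    def acquire(self, vm: str) -> str:
        ip = self.free.pop(0)      # next available address in the pool
        self.assigned[vm] = ip
        return ip

    def release(self, vm: str):
        # Deleting the VM/service returns its address for reuse.
        self.free.append(self.assigned.pop(vm))

pool = IPPool("192.168.1.10", "192.168.1.12")
ip = pool.acquire("web-01")
print(ip)              # 192.168.1.10
pool.release("web-01")
print(len(pool.free))  # 3
```

The same acquire/release pattern applies to MAC pools and virtual IP pools; only what is handed out differs.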


Key points: VMM 2012 supports multiple different load balancing environments. The four here are F5 BIG-IP, Brocade ServerIron ADX, and Citrix NetScaler, as well as Microsoft Network Load Balancing. Individual providers are needed for each of these because there is no common load balancing API, as there is with SMI-S for storage. We connect to the load balancers through hardware providers, which you download from companies like F5, Brocade, or Citrix. You install them, either on the VMM machine or on another machine, and VMM connects to the load balancers through those providers.

We assign the load balancers to clouds, to particular host groups, as well as to logical networks. This means that when you want to deploy a service on a particular host group, in a particular cloud, the load balancer has to be assigned so it's visible to that VM. You wouldn't want a host group in Datacenter One with a load balancer assigned to it, but none assigned to Datacenter Two, and then deploy a new VM service to Datacenter Two only to have it try to create a load-balancing environment in Datacenter One.
Thus the load balancers need to be within the exact same environment and logical networks so you can deploy the service and access those resources. Once you've created this connection to the load balancer for these types of load balancing environments, you create virtual IP templates. These templates allow you to easily set up the specifics of that load balancer and how it should be configured when you deploy the service. The template can set up things like SSL and how the inbound and outbound methods are configured. When you specify load balancing methods, you can choose options like round robin, least connections, or fastest response. All of that is configured inside the load balancer itself, which means that when you deploy a brand new service you can give it these specifications or customizations for that particular load-balancing environment.
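A VIP template is essentially a property bag of load-balancer settings plus a balancing method. The sketch below models the round-robin method mentioned above; the setting names in the template dict are assumptions chosen for readability, not real provider properties.

```python
import itertools

# A "VIP template": commonly used load-balancer settings, captured once
# and reused whenever a service that needs the LB is deployed.
vip_template = {
    "port": 443,
    "ssl_offload": True,       # hypothetical setting name
    "method": "round_robin",
}

class RoundRobinBalancer:
    """Minimal round-robin method, one of the options listed above."""

    def __init__(self, vip: str, dips: list):
        self.vip = vip                       # virtual IP from the VIP pool
        self._cycle = itertools.cycle(dips)  # dedicated IPs (DIPs) of the VMs

    def next_backend(self) -> str:
        return next(self._cycle)

lb = RoundRobinBalancer("10.0.0.100", ["10.0.1.1", "10.0.1.2"])
print([lb.next_backend() for _ in range(3)])
# ['10.0.1.1', '10.0.1.2', '10.0.1.1']
```

Least-connections or fastest-response methods would replace the cycle with a selection based on live backend state.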


Key points: Once you have configured compute, storage, and network resources, the next step is to bring all of that together and go from bare metal deployment to a fully clustered Hyper-V environment, all through VMM. Using VMM 2012, you can:
o Collect fabric resources and build clusters for a virtualization environment
o Go from zero to cluster - from bare metal to a Microsoft Hyper-V cluster
o Use a single console for hypervisor and cluster management


Key points: Virtual Machine Manager now provides automated bare metal to Hyper-V cluster provisioning, using an efficient and automated process. The steps of the process: First, storage is discovered and provisioned for use with virtual machine deployments.
o Discover the storage device to VM relationship
o Classify storage according to capabilities
o Assign new storage to the Hyper-V cluster
o Provision new storage with VM deployment

Next, network resources are defined using logical networks. IP, VIP, and MAC addresses can then be assigned to new virtual machines from designated pools.
o Define the network using logical networks
o Assign IPs, VIPs, and MACs from pools
o Integrate with load balancers

At this point VMM communicates with the bare metal server via a baseboard management controller or similar device, which can be used to force the machine to boot and begin installing an operating system from a Windows Deployment Services server. Once the operating system is installed, VMM then configures Hyper-V on the new server. At this point the Create Cluster capability in VMM can be used to join the newly provisioned host to a cluster and connect it to the configured storage and network resources.

This is a streamlined, powerful new process, compared to how long this task could take without the standardization and automation that VMM 2012 offers.


Key points: You can manage and create clusters within VMM 2012 with a wizard-based experience that allows you to validate the cluster and allocate disks and networks for it. We manage the cluster while it's running, so you can check the health of the cluster or add nodes to it. You can also add or remove things like disks or networks. You can either remove nodes from the cluster or delete the entire cluster and turn the nodes back into standalone hosts.

Key steps in cluster creation:
o Wizard-based experience
o Cluster validation is run for you, with the ability to skip validation and start on-demand validation
o Allocate cluster disks if VMM is managing storage
o Create a cluster-wide virtual network
o For WS08 R2 Hyper-V hosts in a trusted domain only

Key steps in cluster management:
o Add/remove nodes, cluster disks, and virtual networks
o Drag and drop a host to add a node to the cluster
o Cluster status tab, with a shortcut to the cluster validation test results

Key steps in cluster deletion:
o Un-clustered hosts remain managed as standalone hosts
o Cluster disks will be unmasked if VMM is managing storage
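The create/manage/delete life cycle above can be sketched as a small state object. This is a conceptual model only; class and method names are invented, and validation is reduced to a trivial check.

```python
class HyperVCluster:
    """Toy model of the cluster operations listed above."""

    def __init__(self, name: str, nodes: list):
        self.name = name
        self.nodes = set(nodes)
        self.validated = False

    def validate(self) -> bool:
        # Stand-in for the validation test VMM runs for you.
        self.validated = len(self.nodes) >= 2
        return self.validated

    def add_node(self, host: str):
        self.nodes.add(host)
        self.validated = False  # topology changed: re-run validation

    def delete(self) -> list:
        """Deleting the cluster returns the hosts, which remain
        managed as standalone hosts."""
        standalone = sorted(self.nodes)
        self.nodes.clear()
        return standalone

cluster = HyperVCluster("prod-cl01", ["host-a", "host-b"])
print(cluster.validate())  # True
cluster.add_node("host-c")
print(cluster.delete())    # ['host-a', 'host-b', 'host-c']
```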


Key points: When adding nodes to the cluster, you choose the host group of the nodes you want to add, then specify the nodes to add to the cluster. You can choose whether to run cluster validation. The cluster validation can be re-run later, which allows you to test what is going on with the cluster.


Key points: You have many different ways of specifying and creating the cluster network IP address:
o Within VMM, if there is an IP pool already available, VMM can detect it; you can choose that IP pool and then let VMM automatically assign the static IP address from the pool.
o If there is a logical network available with an IP pool, but you want to specify the IP address yourself, you can do that.
o If no logical network is seen, and therefore no IP pool is available, then you can manually type in the IP address.
Thus, all the different types of networking capabilities that clusters support, VMM 2012 can support as well; VMM 2012 can fill it all in or create the pool assignments. In this example, you can see VMM 2012 perform a number of functions:
o VMM detected an IP pool and logical network; the user selected the IP pool and accepted an address from that pool
o VMM detected an IP pool and logical network, but the user wanted to specify the exact address from the pool
o VMM didn't detect any IP pool or logical network; the user is required to provide an IP address
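The three cases above amount to a small decision function. The sketch below is illustrative only: `ip_pool` stands in for a detected pool (or `None` when no logical network/pool was found), and the function name is invented.

```python
def choose_cluster_ip(ip_pool, manual_ip=None):
    """Mirror the three cases: pool detected and auto-assigned, pool
    detected but the user specifies, or no pool so manual entry is
    required. `ip_pool` is a list of free addresses, or None."""
    if ip_pool is None:
        if manual_ip is None:
            raise ValueError("no IP pool detected: an address is required")
        return manual_ip              # case 3: manual entry
    if manual_ip is not None:
        if manual_ip not in ip_pool:
            raise ValueError("address must come from the detected pool")
        return manual_ip              # case 2: user picks from the pool
    return ip_pool[0]                 # case 1: VMM auto-assigns

pool = ["10.1.0.50", "10.1.0.51"]
print(choose_cluster_ip(pool))                 # 10.1.0.50
print(choose_cluster_ip(pool, "10.1.0.51"))    # 10.1.0.51
print(choose_cluster_ip(None, "192.168.5.9"))  # 192.168.5.9
```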


Key points: In this example, having created a cluster, you would also want to attach shared storage to this environment. Different LUNs that are in the storage classification pool, and classified to this host group, can now be assigned during cluster creation so that the cluster can have a CSV volume or not. Look at the existing LUN pool and assign the LUNs to the host group. From the LUNs assigned to this host group, you can pick which ones you want to assign to this cluster node, and you can turn CSV support on or off. Once past all of this, you've created the private cloud and constructed those resources.


Key points: The key steps in deploying your private cloud infrastructure from the different fabric resources in your datacenter:
o Deploy your compute fabric through bare metal OS deployments of Hyper-V servers.
o Discover, classify, and allocate your storage fabric for private cloud use.
o Abstract your networking fabric for use in your private cloud.
o Pull these fabric resources together and create clusters for use as the underlying infrastructure for the private cloud.


Key points: Within this presentation we have discussed:
o Taking compute, network, and storage resources that are diverse and complex, and simplifying them into a logical and standardized fabric of resources from which we can create our private cloud.
o Building the cloud abstraction on top of that logical, standardized fabric, deploying capacity, and deploying new services on it.
o Managing and deploying your physical and virtual compute fabric and your storage fabric, creating the logical network abstraction, and, with cluster creation, bringing it all together as the foundation for the private cloud.



