Acropolis 4.5
06-Apr-2016
Notice
Copyright
Copyright 2016 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual property
laws. Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other marks
and names mentioned herein may be trademarks of their respective companies.
License
The provision of this software to you does not grant any licenses or other rights under any Microsoft
patents with respect to anything other than the file server implementation portion of the binaries for this
software, including no licenses or any other rights in any hardware or any devices or software that are used
to communicate with or in connection with this software.
Conventions
Convention            Description
variable_value
ncli> command
user@host$ command
root@host# command    The commands are executed as the root user in the vSphere or Acropolis host shell.
> command
output
Interface          Target                   Username        Password
                   Nutanix Controller VM    admin           admin
vSphere client     ESXi host                root            nutanix/4u
                   ESXi host                root            nutanix/4u
                   AHV host                 root            nutanix/4u
                   Hyper-V host             Administrator   nutanix/4u
SSH client         Nutanix Controller VM    nutanix         nutanix/4u
Version
Last modified: April 6, 2016 (2016-04-06 15:34:50 GMT-7)
Contents
1: Node Management...................................................................................5
Controller VM Access.......................................................................................................................... 5
Shutting Down a Node in a Cluster (AHV)......................................................................................... 5
Starting a Node in a Cluster (AHV).................................................................................................... 5
Changing CVM Memory Configuration (AHV).....................................................................................6
Changing the Acropolis Host Name.................................................................................................... 7
Changing the Acropolis Host Password..............................................................................................7
Upgrading the KVM Hypervisor to Use Acropolis Features................................................................ 8
1: Node Management
Controller VM Access
Most administrative functions of a Nutanix cluster can be performed through the web console or nCLI.
Nutanix recommends using these interfaces whenever possible and disabling Controller VM SSH access
with password or key authentication. Some functions, however, require logging on to a Controller VM
with SSH. Exercise caution whenever connecting directly to a Controller VM, because direct access increases the risk of causing cluster issues.
Warning: When you connect to a Controller VM with SSH, ensure that the SSH client does
not import or change any locale settings. The Nutanix software is not localized, and executing
commands with any locale other than en_US.UTF-8 can cause severe cluster issues.
To check the locale used in an SSH session, run /usr/bin/locale. If any environment variables
are set to anything other than en_US.UTF-8, reconnect with an SSH configuration that does not
import or change any locale settings.
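As a quick sketch of that check, the following filters the locale output for any variable that is not set to en_US.UTF-8; no output from the second command means the session is safe:

```shell
# Show every locale variable for this session.
/usr/bin/locale

# Print only the variables NOT set to en_US.UTF-8. grep exits with status 1
# (no matches) when the session is configured correctly, so "|| true" keeps
# the pipeline from reporting an error in that healthy case.
/usr/bin/locale | grep -v 'en_US.UTF-8' || true
```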
NTNX-12AM2K470031-D-CVM    shut off
NTNX-12AM2K470031-D-CVM    running
Replace cvm_name with the name of the Controller VM that you found from the preceding command.
5. Log on to another Controller VM in the cluster with SSH.
6. Verify that all services are up on all Controller VMs.
nutanix@cvm$ cluster status
If the cluster is running properly, output similar to the following is displayed for each node in the cluster:
CVM: 10.1.64.60 Up
                                Zeus   UP   [3704, 3727, 3728, 3729, 3807, 3821]
                           Scavenger   UP   [4937, 4960, 4961, 4990]
                       SSLTerminator   UP   [5034, 5056, 5057, 5139]
                            Hyperint   UP   [5059, 5082, 5083, 5086, 5099, 5108]
                              Medusa   UP   [5534, 5559, 5560, 5563, 5752]
                  DynamicRingChanger   UP   [5852, 5874, 5875, 5954]
                              Pithos   UP   [5877, 5899, 5900, 5962]
                            Stargate   UP   [5902, 5927, 5928, 6103, 6108]
                             Cerebro   UP   [5930, 5952, 5953, 6106]
                             Chronos   UP   [5960, 6004, 6006, 6075]
                             Curator   UP   [5987, 6017, 6018, 6261]
                               Prism   UP   [6020, 6042, 6043, 6111, 6818]
                                 CIM   UP   [6045, 6067, 6068, 6101]
                        AlertManager   UP   [6070, 6099, 6100, 6296]
                            Arithmos   UP   [6107, 6175, 6176, 6344]
                    SysStatCollector   UP   [6196, 6259, 6260, 6497]
                              Tunnel   UP   [6263, 6312, 6313]
                       ClusterHealth   UP   [6317, 6342, 6343, ..., 6606, 6607]
                               Janus   UP
                   NutanixGuestTools   UP
Replace cvm_name with the name of the Controller VM that you found from the preceding command.
4. Increase the memory of the Controller VM, depending on your configuration settings for deduplication
and other advanced features.
root@ahv# virsh setmaxmem cvm_name --config --size ram_gbGiB
root@ahv# virsh setmem cvm_name --config --size ram_gbGiB
Replace cvm_name with the name of the Controller VM and ram_gb with the recommended amount from the sizing guidelines, keeping the GiB suffix (for example, --size 32GiB).
5. Start the Controller VM.
root@ahv# virsh start cvm_name
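For example, set a Controller VM to 32 GiB (the Controller VM name and the 32 GiB figure here are illustrative; use the name you found earlier and the amount from the sizing guidelines):

root@ahv# virsh setmaxmem NTNX-12AM2K470031-D-CVM --config --size 32GiB
root@ahv# virsh setmem NTNX-12AM2K470031-D-CVM --config --size 32GiB
root@ahv# virsh start NTNX-12AM2K470031-D-CVM

Both the maximum and the current allocation must be raised, and --config makes the change persist across restarts of the Controller VM.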
Replace my_hostname with the name that you want to assign to the host.
3. Use the text editor to replace the host name in the /etc/hostname file.
4. Restart the Acropolis host.
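As a sketch, steps 3 and 4 can also be performed without an interactive editor, assuming the host stores its name in /etc/hostname as described above (my_hostname is a placeholder):

root@ahv# echo my_hostname > /etc/hostname
root@ahv# reboot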
3. Respond to the prompts, providing the current and new root password.
Changing password for root.
Old Password:
New password:
Retype new password:
Password changed.
Use this procedure if you are currently using a legacy, non-Acropolis version of KVM and want to use the Acropolis distributed VM management service features. The first generally available Nutanix KVM version with Acropolis is KVM-20150120; the Nutanix support portal always makes the latest version available.
Result              Do This
nutanix-release
centos-release

Note: See the Nutanix Support Portal for the latest information on Acropolis Upgrade Paths.
This procedure requires that you shut down any VMs running on the host and leave them off until the hypervisor and AOS upgrades are completed.
Do not run the upgrade script on the same Controller VM where you are upgrading the node's
hypervisor. You can run it from another Controller VM in the cluster.
1. Download the hypervisor upgrade bundle from the Nutanix support portal at the Downloads link.
You must copy this bundle to the Controller VM you are upgrading. This procedure assumes you copy it
to and extract it from the /home/nutanix directory.
2. Log on to the Controller VM of the hypervisor host to be upgraded, shut down each guest VM on that host, and then shut down the Controller VM itself.
a. Power off each VM, specified by vm_name, running on the host to be upgraded.
nutanix@cvm$ virsh shutdown vm_name
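To find the names of the VMs that are still running, you can first list them through the same virsh interface used in the step above (virsh list shows only running domains by default):

nutanix@cvm$ virsh list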
b. Shut down the Controller VM once all VMs are powered off.
nutanix@cvm$ sudo shutdown -h now
Note: You can also download this package from the Nutanix support portal from the
Downloads link.
Extracting the bundle creates the upgrade_kvm directory.
5. Change to the upgrade_kvm/bin directory and run the upgrade_kvm upgrade script, where host_ip is the IP address of the hypervisor host to be upgraded (the host whose Controller VM you shut down in Step 2).
nutanix@cvm$ cd upgrade_kvm/bin
nutanix@cvm$ ./upgrade_kvm --host_ip host_ip
The Controller VM of the upgraded host restarts and messages similar to the following are displayed.
This message shows the first generally available KVM version with Acropolis (KVM-20150120).
...
2014-11-07 09:11:50 INFO host_upgrade_helper.py:1733 Found kernel
version: version_number.el6.nutanix.20150120.x86_64
2014-11-07 09:11:50 INFO host_upgrade_helper.py:1588 Current hypervisor version:
el6.nutanix.20150120
2014-11-07 09:11:50 INFO upgrade_kvm:161 Running post-upgrade
2014-11-07 09:11:51 INFO host_upgrade_helper.py:1716 Found upgrade marker:
el6.nutanix.20150120
2014-11-07 09:11:52 INFO host_upgrade_helper.py:1733 Found kernel
version: version_number.el6.nutanix.20150120
2014-11-07 09:11:52 INFO host_upgrade_helper.py:2036 Removing old kernel
2014-11-07 09:12:00 INFO host_upgrade_helper.py:2048 Updating release marker
2014-11-07 09:12:00 INFO upgrade_kvm:165 Upgrade complete
6. Log on to the upgraded Controller VM and verify that cluster services have started by confirming that all services are listed as UP.
nutanix@cvm$ cluster status
2: Host Network Management
Network management in an Acropolis cluster consists of the following tasks:
- Configuring Layer 2 switching through Open vSwitch. When configuring Open vSwitch, you configure bridges, bonds, and VLANs.
- Optionally changing the IP address, netmask, and default gateway that were specified for the hosts during the imaging process.
For performing other OVS configuration tasks, such as adding an interface to a bridge and configuring
LACP for the interfaces in an OVS bond, log on to the Acropolis hypervisor host, and then follow the
procedures described in the OVS documentation at http://openvswitch.org/.
Nutanix recommends that you configure the network as follows:

Open vSwitch: Do not modify the OpenFlow tables that are associated with the default OVS bridge br0.

VLAN: Add the Controller VM and the Acropolis hypervisor to the same VLAN. By default, the Controller VM and the hypervisor are assigned to VLAN 0, which effectively places them on the native VLAN configured on the upstream physical switch. Do not add any other device, including guest VMs, to the VLAN to which the Controller VM and hypervisor host are assigned. Isolate guest VMs on one or more separate VLANs.

Virtual bridges: Use the 1 GbE interfaces for guest VM traffic. If the 1 GbE ports are used for guest VM connectivity, follow the hypervisor manufacturer's switch port and networking configuration guidelines. To avoid loops, do not add the 1 GbE interfaces to bridge br0, either individually or in a second bond. Use them on other bridges.

IPMI port: Do not trunk switch ports that connect to the IPMI interface. Configure the switch ports as access ports for management simplicity.

Controller VM: Do not remove the Controller VM from either the OVS bridge br0 or the native Linux bridge virbr0.
This diagram shows the recommended network configuration for an Acropolis cluster:
Host Network Management | Acropolis Hypervisor Administration Guide | AHV | 12
- An internal port with the same name as the default bridge; that is, an internal port named br0. This is the access port for the hypervisor host.
- A bonded port named bond0. The bonded port aggregates all the physical interfaces available on the node. For example, if the node has two 10 GbE interfaces and two 1 GbE interfaces, all four interfaces are aggregated on bond0. This configuration is necessary for Foundation to successfully image the node regardless of which interfaces are connected to the network.
Note: Before you begin configuring a virtual network on a node, you must disassociate the
1 GbE interfaces from the bond0 port. See Configuring an Open vSwitch Bond with Desired
Interfaces on page 17.
The following diagram illustrates the default factory configuration of OVS on an Acropolis node:
The Controller VM has two network interfaces. As shown in the diagram, one network interface connects to
bridge br0. The other network interface connects to a port on virbr0. The Controller VM uses this bridge to
communicate with the hypervisor host.
To show interface properties such as link speed and status, log on to the Controller VM, and then list the
physical interfaces.
nutanix@cvm$ manage_ovs show_interfaces
To show the ports and interfaces that are configured as uplinks, log on to the Controller VM, and then
list the uplink configuration.
nutanix@cvm$ manage_ovs --bridge_name bridge show_uplinks
Replace bridge with the name of the bridge for which you want to view uplink information. Omit the --bridge_name parameter to view uplink information for the default OVS bridge br0.
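For example, show the uplink configuration of a hypothetical bridge named br1:

nutanix@cvm$ manage_ovs --bridge_name br1 show_uplinks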
To show the virtual switching configuration, log on to the Acropolis host with SSH, and then list the
configuration of Open vSwitch.
root@ahv# ovs-vsctl show
To show the configuration of an OVS bond, log on to the Acropolis host with SSH, and then list the
configuration of the bond.
root@ahv# ovs-appctl bond/show bond_name
Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.
3. Create an OVS bridge on each host in the cluster.
nutanix@cvm$ allssh 'ssh root@192.168.5.1 /usr/bin/ovs-vsctl add-br bridge'
Replace bridge with a name for the bridge. The output does not indicate success explicitly, so you can
append && echo success to the command. If the bridge is created, the text success is displayed.
For example, create a bridge and name it br1.
nutanix@cvm$ allssh 'ssh root@192.168.5.1 /usr/bin/ovs-vsctl add-br br1 && echo success'
Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.
Replace bridge with the name of the bridge on which you want to create the bond. Omit the --bridge_name parameter to create the bond on the default OVS bridge br0.
Replace bond_name with a name for the bond. The default value of --bond_name is bond0.
Replace interfaces with one of the following values:
- A comma-separated list of the interfaces that you want to include in the bond, for example, eth0,eth1.
- A keyword that indicates which interfaces you want to include.
Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.
3. Assign the public interface of the Controller VM to a VLAN.
nutanix@cvm$ change_cvm_vlan vlan_id
Replace vlan_id with the ID of the VLAN to which you want to assign the Controller VM.
For example, add the Controller VM to VLAN 10.
nutanix@cvm$ change_cvm_vlan 10
For information about how to log on to a Controller VM, see Controller VM Access on page 5.
3. Assign the host to a VLAN. For information about how to add a host to a VLAN, see Assigning an
Acropolis Host to a VLAN on page 18.