
Upgrade Guide

NOS 3.1.1
17-Jul-2013

Notice
Copyright
Copyright 2013 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 400
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual property
laws. Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other marks
and names mentioned herein may be trademarks of their respective companies.

Conventions

Convention              Description
variable_value          The action depends on a value that is unique to your environment.
ncli> command           The commands are executed in the Nutanix nCLI.
user@host$ command      The commands are executed as a non-privileged user (such as nutanix) in the system shell.
root@esx# command       The commands are executed as the root user in the ESXi shell.
output                  The information is displayed as output from a command or in a log file.

Default Cluster Credentials

Interface                         Target                    Username    Password
Nutanix web console               Nutanix Controller VM     admin       admin
vSphere client                    ESXi host                 root        nutanix/4u
SSH client or console             ESXi host                 root        nutanix/4u
SSH client                        Nutanix Controller VM     nutanix     nutanix/4u
IPMI web interface or ipmitool    Nutanix node              ADMIN       ADMIN
IPMI web interface or ipmitool    Nutanix node (NX-3000)    admin       admin

Version
Last modified: July 17, 2013 (2013-07-17-18:52 GMT-07:00)


Contents
Upgrading to NOS 3.1.1.................................................................................................. 4
To Prepare to Upgrade to NOS 3.1.1 from 3.0.2, 3.0.3/3.0.3.1, 3.0.4/3.0.4.1, or 3.1......................... 4
To Upgrade to NOS 3.1.1 from 3.0.2, 3.0.3/3.0.3.1, 3.0.4/3.0.4.1, or 3.1...........................................5

Upgrading to NOS 3.1.1


You can upgrade to NOS 3.1.1 from 3.0.2, 3.0.3/3.0.3.1, 3.0.4/3.0.4.1, or 3.1. Upgrading does not require
cluster downtime.

To Prepare to Upgrade to NOS 3.1.1 from 3.0.2, 3.0.3/3.0.3.1, 3.0.4/3.0.4.1, or 3.1


Warning: If the ESXi hosts in the cluster have been upgraded to ESXi 5.1, ensure that all
configurations specified by Nutanix have been made. The NOS upgrade will fail if vSphere 5.1 has
not been properly configured.
Refer to the article on upgrading to vSphere 5.1 in the Nutanix knowledge base on the support
portal.
1. Log on to any Controller VM in the cluster with SSH.
2. Confirm that the correct ESXi password is configured.
nutanix@cvm$ zeus_config_printer 2>/dev/null | grep "hypervisor {" -A 3

For each ESXi host in the cluster, a block like the following is displayed:
hypervisor {
address_list: "172.16.8.185"
username: "root"
password: "nutanix/4u"

If the passwords listed match the actual passwords of the ESXi hosts, no action is necessary.
If the passwords listed differ from the actual passwords of the ESXi hosts, update them in Zeus by
completing the following steps.
a. Find the host IDs.
nutanix@cvm$ ncli -p 'admin_password' host list | grep -E 'ID|Hypervisor Key'

Note the host ID for each ESXi host.


b. Update the ESXi host password.
nutanix@cvm$ ncli -p 'admin_password' managementserver edit name=esxi_host_addr password='esxi_password'
nutanix@cvm$ ncli -p 'admin_password' host edit id=host_id hypervisorpassword='esxi_password'

Replace esxi_host_addr with the IP address of the ESXi host.


Replace host_id with a host ID you determined in the preceding step.
Replace esxi_password with the root password on the corresponding ESXi host.

Perform this step for every ESXi host in the cluster.
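For example, for an ESXi host at 172.16.8.185 with host ID 8 (both values are illustrative only; substitute the values from your own cluster), the commands would look like the following.
nutanix@cvm$ ncli -p 'admin_password' managementserver edit name=172.16.8.185 password='esxi_password'
nutanix@cvm$ ncli -p 'admin_password' host edit id=8 hypervisorpassword='esxi_password'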


3. Enable automatic installation.
nutanix@cvm$ cluster enable_auto_install

4. If the cluster has iSCSI LUNs, set the ESXi hosts to avoid all paths down.

Upgrade Guide | NOS 3.1.1 | 4

To check whether the cluster has iSCSI LUNs, log on to an ESXi host as root and run esxcfg-scsidevs -m. If
any output other than the node's local datastore is returned, log on to each ESXi host in the cluster and
enable the FailVolumeOpenIfAPD option.
root@esx# esxcfg-advcfg -s 1 /VMFS3/FailVolumeOpenIfAPD

This setting prevents the ESXi hosts from entering the all paths down (APD) state, which causes VMs
to stop responding and requires restarting ESXi hosts. For more information, see VMware KB article
1016626 at http://kb.vmware.com.
If the option is enabled, the host will display the following status.
Value of FailVolumeOpenIfAPD is 1
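You can also read back the current value of the option on a host at any time with the get form of the same command, for example:
root@esx# esxcfg-advcfg -g /VMFS3/FailVolumeOpenIfAPD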

5. Prepare each Controller VM in the cluster.


Perform these steps once for each Controller VM in the cluster.
a. Log on to the Controller VM to upgrade with SSH.
b. Remove core, blackbox, installer, and temporary files.
Important: Save a copy of /home/nutanix/data/installer/* before deleting it. These files
are needed in the unlikely event that it becomes necessary to revert to the old version after
upgrading.
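For example, one way to save a copy is to duplicate the directory to a path that the cleanup commands below do not touch (the destination shown here is only an illustration; any location with sufficient free space works):
nutanix@cvm$ cp -r /home/nutanix/data/installer /home/nutanix/installer_backup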
nutanix@cvm$ rm -rf /home/nutanix/data/backup_dir
nutanix@cvm$ rm -rf /home/nutanix/data/blackbox/*
nutanix@cvm$ rm -rf /home/nutanix/data/cores/*
nutanix@cvm$ rm -rf /home/nutanix/data/installer/*
nutanix@cvm$ rm -rf /home/nutanix/tmp
nutanix@cvm$ rm -rf /var/tmp/*

c. Check the Controller VM hostname in /etc/hosts and /etc/sysconfig/network. If it contains any
spaces or parentheses, replace them with dashes.
After changing the hostname on the Controller VM, change the name in vCenter and restart the
Controller VM.
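For example, you can display the current values with the following commands (a quick check only; if changes are needed, edit the files with a text editor such as vi):
nutanix@cvm$ cat /etc/hosts
nutanix@cvm$ grep HOSTNAME /etc/sysconfig/network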
Perform the preceding steps once for each Controller VM in the cluster.
6. Log on to vCenter with the vSphere client.
7. Right-click the cluster and select Rescan for Datastores.
Click OK in the confirmation dialog box. Ensure that both Scan for New Storage Devices and Scan for
New VMFS Volumes are selected in the next dialog box and click OK. Wait for the rescan to complete
before proceeding.

To Upgrade to NOS 3.1.1 from 3.0.2, 3.0.3/3.0.3.1, 3.0.4/3.0.4.1, or 3.1


Before you begin:
Warning: If the ESXi hosts in the cluster have been upgraded to ESXi 5.1, ensure that all
configurations specified by Nutanix have been made. The NOS upgrade will fail if vSphere 5.1 has
not been properly configured.
Refer to the article on upgrading to vSphere 5.1 in the Nutanix knowledge base on the support
portal.

Prepare for the upgrade by following To Prepare to Upgrade to NOS 3.1.1 from 3.0.2, 3.0.3/3.0.3.1,
3.0.4/3.0.4.1, or 3.1 on page 4.


Download the NOS 3.1.1 tar file and copy it to /var/tmp on one Controller VM in the cluster.
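For example, assuming the tar file has been downloaded to your workstation, you might copy it with scp (replace cvm_ip_addr with the IP address of the Controller VM; the file name pattern is illustrative):
user@host$ scp nutanix_installer*-3.1.1-* nutanix@cvm_ip_addr:/var/tmp/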

1. Log on to the Controller VM that has the Nutanix software tar file.
2. Disable email alerts.
nutanix@cvm$ ncli cluster stop-email-alerts

This is to avoid spurious alerts when Controller VMs restart during the upgrade.
3. Expand the tar file.
nutanix@cvm$ cd /var/tmp
nutanix@cvm$ tar -xvf nutanix_installer*-3.1.1-*

4. Start the upgrade.


nutanix@cvm$ /var/tmp/install/bin/cluster -i /var/tmp/install upgrade

For each node in the cluster, this command updates the Controller VM image, updates the NOS
packages, and restarts the Controller VM. Your logon session to the Controller VM will be terminated. Upgrading the
Controller VMs takes approximately 10 minutes per node in the cluster. (If the command fails on one
or more Controller VMs with a message like Could not connect to zookeeper: ok, run the command
again.)
While the upgrade is in progress, you can monitor progress with the following command.
nutanix@cvm$ upgrade_status

This command reports which Controller VMs have completed the upgrade, which are in progress, and
which have not yet been upgraded. Wait until the upgrade process completes.
INFO upgrade_status:39 Target release version: 1.4-release-congo-3.1.1-stablefc853bbd93bd1a3e663c7fe0f8c3a3b0dd5f6344
INFO upgrade_status:82 SVM 172.16.24.80 is up to date
INFO upgrade_status:82 No release version is available on SVM 172.16.24.81, this node
still needs to upgrade, node is currently upgrading
INFO upgrade_status:82 No release version is available on SVM 172.16.24.79, this node
still needs to upgrade
INFO upgrade_status:82 No release version is available on SVM 172.16.24.78, this node
still needs to upgrade

If the upgrade fails to complete on a Controller VM after 20 minutes, perform the following steps.
a. Log on to vCenter or the ESXi host with the vSphere client.
b. Right-click the Controller VM in the vSphere client and select Open Console.
If the logon screen does not appear in the vSphere client Console window, click inside the window
and press Alt+F2.
c. Enter the username and password.
d. Confirm that package installation has completed and that genesis is not running.
nutanix@cvm$ ps afx | grep rpm
nutanix@cvm$ ps afx | grep genesis
nutanix@cvm$ ps afx | grep svm_boot

If any rpm, genesis, or svm_boot processes are listed, wait five minutes and try again. If none of these
processes are listed, it is safe to proceed.
e. Restart the Controller VM.
nutanix@cvm$ sudo reboot


Enter the nutanix password if prompted. Wait to proceed until the Controller VM has finished
starting, which takes approximately 5 minutes.
f. Check the upgrade status again.
The upgrade process writes the following log files on the Controller VMs:
/home/nutanix/data/logs/install.out
/home/nutanix/data/logs/boot.out
/home/nutanix/data/logs/boot.err
/home/nutanix/data/logs/finish.out
/home/nutanix/data/logs/svm_upgrade.tar.out
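If you need to troubleshoot a Controller VM, you can follow one of these logs as it is written, for example:
nutanix@cvm$ tail -f /home/nutanix/data/logs/install.out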

5. Log on to any Controller VM in the cluster with SSH.


6. Confirm that the upgrade is complete.
a. Confirm the NOS version.
nutanix@cvm$ upgrade_status

The Target release version should match the target version, and each Controller VM should
show SVM cvm_ip_address is up to date. If some Controller VMs are marked as needing to be
upgraded, wait 5 minutes and check again. (If the command fails on one or more Controller VMs
with a message like Could not connect to zookeeper: ok, run the command again.)
b. Confirm the Controller VM version.
nutanix@cvm$ for i in `svmips`; do echo $i; ssh -o StrictHostKeyChecking=no $i cat /etc/nutanix/svm-version; done

The version identifier 3.1r2 should be displayed for every Controller VM in the cluster.
c. Log on to vCenter with the vSphere client.
d. Right-click each Controller VM and select Edit Settings.
Verify that the following parameters are set to the expected values:

(Hardware) Memory: 12 GB
(Hardware) vCPUs: 8
(Hardware) Adapter Type for both network interfaces: VMXNET 3
(Resources > Advanced CPU) Hyperthreaded Core Sharing Mode: Any

If any of these parameters are set to different values, change them. The Controller VM must be shut
down to change some of the parameters.
Warning: Once the upgrade is complete, do not upgrade VMware tools on any Controller VM.

7. If the site policy is for remote support and email alerts to be enabled, re-enable them.
a. Ensure that remote support is enabled.
nutanix@cvm$ ncli cluster get-remote-support-status

If Enabled is not set to true, enable remote support.


nutanix@cvm$ ncli cluster start-remote-support

b. Ensure that an email address is set.


nutanix@cvm$ ncli cluster get-email-contacts


If no email address is listed in Email Alert Contacts and the customer wishes to receive email
alerts, add one or more addresses.
nutanix@cvm$ ncli cluster add-to-email-contacts email-addresses="customer_email"

Replace customer_email with a comma-separated list of customer email addresses to receive alerts.
c. Enable email alerts.
nutanix@cvm$ ncli cluster start-email-alerts

d. Send a test email.


nutanix@cvm$ ~/serviceability/bin/email-alerts \
--to_addresses="support@nutanix.com, customer_email" \
--subject="[alert test] `ncli cluster get-params`"

Replace customer_email with a customer email address that receives alerts.
