
Veritas Documentation & Troubleshooting Techniques


Guides
Veritas File System

Need to add PDFs for VxFS

Cluster and Cluster File System


Veritas Storage Foundation Cluster File System 4.1 Install and Admin Guide
Veritas Storage Foundation Cluster File System 4.0 Install and Admin Guide
Veritas Cluster Server 4.0 Bundled Agents Reference Guide
Veritas Cluster Server 4.0 User's Guide

Server Provisioning 4.2 (Formerly OpForce)


Release Notes
Getting Started Guide
Install Guide
Provisioning Guide
System Administration
Server Management
Software Management
Troubleshooting Guide

OpForce 4.0 / 4.1


OpForce Install Guide
OpForce Release Notes
OpForce User Guide
OpForce Troubleshooting

Veritas vxconfigd fix


This should only be installed after Veritas 4.1 and MP1.

1. Download the following files and copy them to /tmp, then chmod 700 libvxscsi:
   (a) Veritas Hitachi driver
   (b) Veritas SCSI lib file
   (c) Script to install the above files
2. Run the /tmp/libvxscsi script.
3. Reboot the server.
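A minimal sketch of the sequence, assuming all three downloads ended up in /tmp (the script name libvxscsi comes from the steps above):

chmod 700 /tmp/libvxscsi
/tmp/libvxscsi
reboot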

Install Veritas
To install Veritas software on your servers, follow these steps.

1. Set up ssh on your servers; refer to ssh setup.
2. Set up the network so the install doesn't fail.

You will have to do this on all of your servers. This only applies to SLES, not Red Hat. If you make any typos or miss any quote characters (') you will have to redo your configuration. Run yast and configure your heartbeat cards to at least do DHCP. Once that is done, go into a shell, run ifconfig eth#, and note the MAC address of each card. Then vi the /etc/sysconfig/network/ifcfg-eth- file, for example:

vi /etc/sysconfig/network/ifcfg-eth-id-00:11:0a:5b:f1:4c

BOOTPROTO='dhcp'
MTU=''
REMOTE_IPADDR=''
STARTMODE='onboot'
UNIQUE='LRaa.NhO9mtEJcG8'
_nm_name='bus-pci-0000:07:0a.0'


Make the above file look like this

BOOTPROTO='static'
MTU=''
REMOTE_IPADDR=''
STARTMODE='onboot'
UNIQUE='LRaa.NhO9mtEJcG8'
_nm_name='bus-pci-0000:07:0a.0'
PERSISTENT_NAME='eth#'


Change the eth# to match your port. Later in the install, when it asks for your heartbeat ports and you enter eth3, you will see this:

Enter the NIC for the first private heartbeat link on server: [b,?] eth3
eth3 is probably active as a public NIC on server
Are you sure you want to use eth3 for the first private heartbeat link? [y,n,q,b,?] (n) y

3. If you have copied the iso file from this server, do a mount -o loop /locationofiso /mountpoint (see the illustrative commands after the MP1 steps below).
4. cd into /mountpoint/sles9_x86_64.
5. Run ./installer - it must be run from this location.
6. Select the Veritas software you are installing, normally 1 or 4.
7. Enter the names of the servers separated by spaces.
8. Enter the license key.

Once these steps are all done, your Veritas configure will probably crash with a /dev/llt error. At that point it is time to install Veritas Maintenance Pack 1:

1. Copy the Veritas MP1 file from our Downloads to your server.
2. On the server, do a tar -xvf veritasmp1.tar
3. cd into /mp1.
4. Type ./installmp
5. Insert the server names.
6. After the install, reboot the server.
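A hedged illustration of installer steps 3 through 5 above; the ISO path and mount point here are made up and should be replaced with your own:

mount -o loop /var/tmp/sf_sles9.iso /mnt/veritas
cd /mnt/veritas/sles9_x86_64
./installer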
If you are still having any kind of issue after all of this, try to cd to /opt/VRTS/install and run ./installsf or ./installsfcfs -configure. The program you run is determined by your software license. This will rerun the Veritas configuration and should fix any errors, provided everything above has been completed.
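If the configuration needs to be rerun, a minimal sketch (which script you run depends on the licensed product):

cd /opt/VRTS/install
./installsfcfs -configure    # or ./installsf -configure, depending on the licensed product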

Veritas Disk Commands


One minor note from the issues we have encountered over the last 24 hours: for our NGPR box we needed to recover the disk, and we still need to find out how long a 100 GB+ disk with at least that much data takes to recover.

The enabled state is the normal operating state. Most configuration operations are allowed in this state. Entering the enabled state imports all disk groups, and begins the management of device nodes stored in the /dev/vx/dsk and /dev/vx/rdsk directories.

vxdctl enable

Displays the CVM state (master/slave)

vxdctl -c mode

Displays what disks Veritas sees and their current state

vxdisk list

This lists all disk groups on the current server

vxdisk -o alldgs list

Adds or initializes disks. Under selection 1 you will be asked for a disk group

vxdiskadm

Similar to vxdisk list

vxdg list

Shows the free space available in the disk group(s)

vxdg free

Creates a volume of the given size in the disk group. vxassist is also used for converting volumes and other useful tasks; try man vxassist

vxassist -g diskgroup make volume volumesize
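A hypothetical example, using illustrative names (disk group datadg, volume vol01, size 10 GB):

vxassist -g datadg make vol01 10g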

Removes the DCO log from the volume so that it can be resized. Try this if the vxresize command fails.

vxsnap -g (disk group) unprepare (volume name)
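A sketch of the resize workflow with illustrative names (disk group datadg, volume vol01), assuming the volume is being grown by 10 GB; vxresize may not be in your PATH (it usually lives under /etc/vx/bin or /usr/lib/vxvm/bin):

vxsnap -g datadg unprepare vol01
vxresize -g datadg vol01 +10g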

Only works for CFS/HA/CVM

/usr/lib/vxvm/bin/vxclustadm nidmap

Clears the failing flag from the disk so that it can be used. Only do this if you are sure there is no hardware issue.

vxedit -g (diskgroup) set failing=off (disk name)
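A hypothetical example with illustrative disk group and disk names:

vxedit -g datadg set failing=off disk01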

Veritas cluster commands and files


This is one of the config files needed for llt

/etc/llttab
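A typical /etc/llttab on Linux looks something like this; the node name, cluster ID, and NIC names are illustrative:

set-node server1
set-cluster 7
link eth2 eth2 - ether - -
link eth3 eth3 - ether - -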

This file lists the cluster nodes, mapping node IDs to node names (the equivalent of the cmnodelist)

/etc/llthosts
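A typical /etc/llthosts, with illustrative host names; it must be identical on every node:

0 server1
1 server2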

This file specifies how many nodes are needed to form a cluster and the timeout value for the cluster

/etc/gabtab
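A typical /etc/gabtab for a two-node cluster; the -n value should match the number of nodes expected to seed the cluster:

/sbin/gabconfig -c -n2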

These are the cluster config files

/etc/VRTSvcs/conf/config

This command shows you what servers are in the cluster and their heartbeat connections

lltstat -nvv |more

Lists which servers are in the cluster

hasys -list

The current state of the cluster

hasys -state

Looks at status of all servers in the cluster (pipe this to a file, very long)

hasys -display

Provides summary of cluster and package status (like cmviewcl)

hastatus -sum

Lists which servers are in each group

hagrp -list

The current state of the groups in the cluster

hagrp -state

Looks at the status of groups in the cluster (can be very long)

hagrp -display

Processes
For Clustering

/opt/VRTSvcs/bin/had
/opt/VRTSvcs/bin/hashadow
/opt/VRTSvcs/bin/CmdServer
/opt/VRTSvcs/bin/CVMCluster/CVMClusterAgent -type CVMCluster
/opt/VRTSvcs/bin/CFSfsckd/CFSfsckdAgent -type CFSfsckd
/opt/VRTSvcs/bin/CVMVxconfigd/CVMVxconfigdAgent -type CVMVxconfigd
lltd
lltdlv
lsmod |grep gab (should return a line)
lsmod |grep llt (should return a line)
For volume

vxiod processes
vxconfigd -x syslog
vxfs_thread
/usr/lib/vxvm/bin/vxrelocd root
/usr/lib/vxvm/bin/vxcached root
/usr/lib/vxvm/bin/vxconfigbackupd
vxnotify processes
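A quick, hedged way to check that these processes and modules are present (the pattern list is illustrative, not exhaustive):

ps -ef | egrep 'had|hashadow|CmdServer|vxconfigd|vxfs_thread|vxiod|vxnotify'
lsmod | egrep 'gab|llt'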

OpForce/Server Provisioning
Log in to the Server Provisioning server at http://wwhq230m.whq.ual.com.

Restore files
To restore files, use the NetBackup client interface:

/usr/openv/netbackup/bin/bp

Sending Crash info to Veritas


Use the following binary to gather the info that Veritas needs to analyze a problem: the VRTSexplorer install.

Install the binary with the commands


zcat VRTSspt-linux.tar.Z | tar xf -
rpm -i ./VRTSspt-linux/VRTSspt-*.i386.rpm

and run it with the following:

/opt/VRTSspt/VRTSexplorer/VRTSexplorer

When asked to start vxconfigd, leave the default (n).

Send the information to the following ftp site:

ftp ftp.veritas.com
login: anonymous
passwd: (your email address)
ftp> cd /incoming
ftp> bin
ftp> put (VRTSexplorer file)

CFS Commands
To add new Nodes to a CFS Cluster (CVM Service Group)

Add the nodes to the cluster by editing the llttab, llthosts, and gabtab files.
Verify the Volume Manager has been configured correctly via the ''vxinstall'' command.
Add the new node to the cvm SystemList and AutoStartList.
Add the new node to the cvm_clus (CVMNodeID) resource. The node list must match that in the llthosts file.

On each of the existing nodes in the cluster:
Run the following command to rescan for the new nodes
/etc/vx/bin/vxclustadm -m vcs -t gab reinit

Run the following command to verify that the new nodes were found. They should be in the ''Out of the Cluster'' state
/etc/vx/bin/vxclustadm nidmap

On the new node in the cluster:
Run the following commands to join it into the CVM communication and to display the node map
/etc/vx/bin/vxclustadm -m vcs -t gab startnode
/etc/vx/bin/vxclustadm nidmap

Afterwards, run nidmap again to verify that the node has changed to the ''Joined'' state.

To add mount points to new nodes in the cluster:

To modify the disk group info
cfsdgadm modify (disk group) (node name)=sw

To modify the mount point info
cfsmntadm modify (mount point) add (node name)=suid,rw

To bring the mount point online
cfsmount (mount point) (node name)
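A hypothetical end-to-end example of the commands above, using illustrative names (disk group datadg, mount point /data, new node server3):

cfsdgadm modify datadg server3=sw
cfsmntadm modify /data add server3=suid,rw
cfsmount /data server3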

Troubleshooting
VEA won't run?
/etc/init.d/isisd start
