----
Some more notes from the attempt; vxassist command used as per the book's
suggestion.
/etc/vx/bin/vxrootmir bootdisk
mirror the root partition; it needs slice 0 to be available.
This would probably work, except var was somehow placed on the disk
earlier than root, so that has to be mirrored first.
vxassist -g rootdg mirror var bootdisk &
vxassist -g rootdg mirror u01 bootdisk &
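A sketch of the whole sequence in one place (assumes the new disk was added to rootdg under the name bootdisk, with the volume names from the notes above):

```shell
# Mirror var and u01 first, since var sits earlier on the disk than root;
# volume and disk names are taken from the notes above.
vxassist -g rootdg mirror var bootdisk
vxassist -g rootdg mirror u01 bootdisk
# Then mirror the root partition; vxrootmir needs slice 0 free on bootdisk.
/etc/vx/bin/vxrootmir bootdisk
```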
More info:
http://www.sun.com/blueprints/0800/vxvmref.pdf
---
vxdisk list
display all disks managed by Veritas
vxdisk -s list
much more detailed version of the above
vxprint
print disk slice/mirroring info: plex, vol, etc.
vxprint -hrt
different view than the above; useful to see where the boot disk
mirror sits on disk.
vxdg list
list all disk groups
vxdisk -o alldgs list
scan all disks, incl. those not currently managed by Veritas.
Those with a disk group name inside () are deported DGs, ready for
import.
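A deported group seen in parentheses can be brought in roughly like this (the group name oradg is a made-up example):

```shell
vxdisk -o alldgs list        # disks of a deported group show as "(oradg)"
vxdg import oradg            # import the deported group
vxvol -g oradg startall      # start its volumes after the import
```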
----
vxddladm listjbod
Display current settings
Veritas DMP will continue to be used; PowerPath sits at a lower layer and
intercepts the calls to the different disks. Veritas will know the multiple
paths to the LUN, and it will know they are the same. If a fibre link is
pulled, DMP won't notice, as PowerPath works behind the scenes; syslog
will log the error. After using vxddladm addjbod, reboot the machine with
a reconfigure (boot -r) to ensure all devices are seen.
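The addjbod step itself looks roughly like this (the vid value is an assumption; take it from the inquiry data of the actual array):

```shell
vxddladm addjbod vid=EMC     # register the array's LUNs as JBOD devices
vxddladm listjbod            # confirm the new entry
reboot -- -r                 # reconfiguration reboot so all devices are seen
```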
ls -l /dev/vx/dsk/
check device minor numbers when a disk group import conflicts with an
existing disk group (the group then needs to be reminored).
ls -l /dev/vx/dsk/oracledg
brw------- 1 root root 259,52001 May 4 17:10 u02
brw------- 1 root root 259,52000 May 4 17:10 u03
^^^^^
52000 is the minor number, also shown in vxprint -l oracledg
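To actually resolve the conflict, the group can be given a new base minor; the group name comes from the example above, and the base number 53000 is an assumed free value:

```shell
vxdg reminor oracledg 53000      # assign a new, free base minor number
ls -l /dev/vx/dsk/oracledg       # device nodes now carry the new minors
```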
Other commands
vxassist
/etc/rc2.d/S97vxfen stop
echo "vxfen_mode=disabled" > /etc/vxfenmode
/etc/rc2.d/S97vxfen start
The above still loads the driver, which allows Veritas Cluster VM to work
without IO fencing disks; the driver does need to be loaded. This is
supported for certain CVM configs, but not for RAC.
The best approach is still to turn IO fencing off completely in the init
script.
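One way to sketch that, staying with the rc scripts shown above (the lowercase rename is a standard Solaris trick to skip a script at boot):

```shell
/etc/rc2.d/S97vxfen stop
echo "vxfen_mode=disabled" > /etc/vxfenmode
mv /etc/rc2.d/S97vxfen /etc/rc2.d/s97vxfen   # lowercase s: skipped at boot
```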
IO Fencing.
The driver gets loaded by the kernel.
It reads /etc/vxfendg to find which disk group to use, then
runs vx commands to generate /etc/vxfentab, which lists the device paths
of the LUNs belonging to the IO fencing DG.
4.0 uses /dev/rdsk/cXtXdXsX; 4.1 uses the multipath devices, such as
/dev/rdsk/emcpowerXc.
The driver is much more robust than the vxfentsthdw script.
In 4.0, vxfentsthdw -g vxfencoorddg fails, though the driver should be fine.
In 4.1, vxfentsthdw -g works correctly using emcpowerX devices and knows
that they may not have the same device path on different nodes. 4.1
resolved all IO fencing issues found in 4.0.
In 4.1, vxdisk -o alldgs list also shows emcpowerX as the device name,
instead of the generic DISK_X.
The naming is now at the mercy of PowerPath; Veritas sees the devices just
as the Solaris format command sees them. It may still not be persistent
binding, but at least it is easy to cross-reference between Veritas,
Solaris format, and the info presented in EMC Navisphere. No ASL (Array
Support Library) was needed.
vxfenadm -i /dev/rdsk/emcpower0c
Display serial number of the device (LUN, disk)
vxfenadm -g /dev/rdsk/emcpower0c
Show IO Fencing info
hastop -all   # stop VCS for the whole cluster, ready for all machines to shut down
hastop -local # stop VCS on the local machine only; it stops the services, no migration by default
hastop -local -evacuate # stop VCS and migrate (evacuate) services to another node;
              # evacuating cleanly exits that single node from the cluster
---
Config example: enabling detail monitoring for the Oracle test group.
haconf -makerw
hagrp -freeze oracle_group
hares -modify Oracle_oaprod User veritas_monitor
hares -modify Oracle_oaprod Pword veritas_password
hares -modify Oracle_oaprod Table monitor
hares -modify Oracle_oaprod MonScript "./bin/Oracle/SqlTest.pl"
hares -modify Oracle_oaprod DetailMonitor 1
haconf -dump -makero
hagrp -unfreeze oracle_group
---
Config commands (typically located in /opt/VRTS/bin):
haconf -makerw
make the config read-write, so that changes can be made via
haclus
haconf -dump -makero
save the config and make it read-only again; remember to do this,
or the next reboot will have issues!
haclus ...
change cluster config params (CLI change instead of GUI).
/etc/init.d/vx*
vxvm-recover
starts several daemons, which also take arguments and email root
at the local machine.
change these!
/etc/llttab ::
set-node oaprod1 # different on each node; reflects the local node name
set-cluster 1
link ce1 /dev/ce:1 - ether - -
link ce3 /dev/ce:3 - ether - -
link-lowpri ce0 /dev/ce:0 - ether - -
/etc/llthosts ::
0 oaprod1
1 oaprod2
/etc/gabtab ::
/sbin/gabconfig -c -n2
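After editing the three files, membership can be checked with the standard VCS tools:

```shell
lltstat -n      # LLT: list nodes and link status
gabconfig -a    # GAB: show port membership (port a = GAB, port h = VCS)
```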
Try this (maps each used VxVM disk to its CU:LDEV and mount point):
HOSTNAME=`uname -n`
echo ""
echo "Hostname Logical CU:LDEV Volumes"
echo "-------- ------- ------- -------"
USEDDISKS=`vxprint -ht | grep "sd " | grep -v root | grep -v swap | awk '{print $8}' | sort | uniq`
for DISK in $USEDDISKS; do
    # 7D86 is a site-specific pattern in the vxdisk output; adjust for your array
    CULDEV=`vxdisk list $DISK | grep 7D86 | cut -b32-35`
    CU=`echo $CULDEV | cut -b1-2`
    LDEV=`echo $CULDEV | cut -b3-4`
    VOLLIST=`vxprint -ht | grep $DISK | grep -v "dm " | awk '{print $3}' | cut -d"-" -f1`
    for VOL in $VOLLIST; do
        # match the volume as a whole word in vfstab (space- or tab-delimited)
        VFSLINE=`grep -w "$VOL" /etc/vfstab`
        if [ $? -eq 0 ]; then
            MNTPT=`echo $VFSLINE | awk '{print $3}'`
            echo "$HOSTNAME $DISK $CU:$LDEV $MNTPT"
        else
            echo "$HOSTNAME $DISK $CU:$LDEV $VOL"
        fi
    done
done
Me <me@hotmail.com> wrote:
>NO way to do that.
>
>vxdisk list maps the Disk Access (da) name (i.e. the name of the
>disk in Volume Manager) to a c#t#d# (or to an enclosure-based name)
>
>
>What I suspect is that you want to know how the enclosure-based name
>(something like EMC0_1 or EMC0_12) converts back to a LUN.
>
>
>The reason enclosure-based names are used is to make it easy for
>you. The "real" name of the disk always has the form c#t#d#. The biggest
>problem is that the t# will include the 20-character WWNN (World Wide
>Node Name) of the disk. So EMC0_12 might have a "real" name like
>c5t234ed852ab23c423961d0
>
>This is a bit difficult to manage, but if you want to see the "real"
>names next to the enclosure-based name, do "vxdisk -e list".
>
>Once you've got the c#t#d#, you will have to use EMC tools to get the
>LUN numbers.
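The quoted advice boils down to pairing column 1 with the OS-native column of "vxdisk -e list". A small runnable sketch: the function name, column layout, and sample rows below are illustrative assumptions, not real array output.

```shell
# Mimic "vxdisk -e list" output with a fixed sample (illustrative only).
sample_vxdisk_e_list() {
cat <<'EOF'
DEVICE       TYPE           DISK     GROUP    STATUS    OS_NATIVE_NAME
EMC0_1       auto:cdsdisk   d01      oradg    online    c5t234ed852ab23c423961d0s2
EMC0_12      auto:cdsdisk   d02      oradg    online    c5t234ed852ab23c423961d1s2
EOF
}

# Print "enclosure-name -> native-name" pairs, skipping the header line.
sample_vxdisk_e_list | awk 'NR > 1 {print $1, "->", $6}'
```

With the c#t#d# in hand, EMC tools take over to resolve the actual LUN number, as the post says.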