
EMC Celerra

EMC Celerra 101


Celerra is the NAS offering from EMC.
The Control Station is the management station where all admin
commands are issued:
https://celerra-cs0.myco.com/ # web GUI URL. Most features are
available there, including a console.
ssh celerra-cs0.myco.com # ssh (or rsh, telnet) in for CLI
access
Layers:

VDM (vdm2) / DM (server_2)
|
Export/Share
|
Mount
|
File System
|
(AVM stripe, volume, etc)
|
storage pool (nas_pool)
|
disk
An export can expose a subdirectory within a File System.
All FS are native Unix FS. CIFS features are added thru
Samba (and other EMC add-ons?).
CIFS shares are recommended thru a VDM, for easier migration, etc.
NFS shares go thru a normal DM (server_X). A physical DM can
mount/export an FS already shared by a VDM,
but a VDM can't access the "parent" export done by a DM.
VDM mounts are accessible by the underlying DM via /root_vdm_N.
Quota can be on a tree (directory), per user and/or group.
Commands are to be issued thru the "control station" (ssh)
(or the web GUI (Celerra Manager) or the Windows MMC snap-in (Celerra
Management).)
Most commands are of the form:
server_...
nas_...
fs_...
/nas/sbin/...
Typical options can be abbreviated, although the abbreviations are
not listed in the command usage:
-l = -list
-c = -create
-n = -name
-P = -Protocol
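For example, these two should be equivalent (illustrative only; which abbreviations each command actually accepts is not always documented, so verify per command):
nas_fs -list # full option name
nas_fs -l # abbreviated form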
nas_halt # orderly shutdown of the whole NS80 integrated.
# issue command from control station.
IMHO Admin Notes
Celerra sucks as it compares to the NetApp. If you have to manage
one of these suckers, I am sorry for you (I am very sorry for
myself too). I am so ready to convert my NS80 integrated into a
CX380 and chuck all the Data Movers that create the NAS head. There
are lots of gotchas. More often than not, it will bite you in your ass. Just
be very careful, and know that when you need the most to change some
option, count on it needing a reboot! The "I am sorry" quote came from
a storage architect. One of my former bosses used to be a big advocate
of EMC Celerra, but after having to plan multiple outages to fix things
(which NetApp wouldn't have to), he became a Celerra hater.
Comments apply to DART 5.5 and 5.6 (circa 2008, 2009).
1. Windows files are stored as NFS files, plus some hacked-on side addition for
metadata. This means from the get-go you need to decide how to
store the userid and gid. UserMapper is a very different beast
than the usermap.cfg used in NetApp.
2. Quota is a nightmare. Policy change is impossible. Turning it off requires
removing all files on the path.
3. The Web GUI is heavy Java, slow and clunky. And if you have the wrong
Java on your laptop, well, good luck!
4. The CLI is very unforgiving in the specification of parameters and sequences.
5. The nas_pool command shows how much space is available, but gives
no hints of the virtual provisioning limit (NetApp may have the same
problem though).
Some good stuff, but only marginally:
1. CheckPoint is more powerful than NetApp's Snapshot, but it requires
a bit more setup. Arguably it does not hog up mainstream
production file system space due to snapshots, and they can be
deleted individually, so it is worth all the extra work it brings. :-)
Sample Setup
Below is a sample config for a brand new setup from scratch. The
general flow is:
1. Set up network connectivity, EtherChannel, etc.
2. Define Active/Standby server config
3. Define basic network servers such as DNS, NIS, NTP
4. Create Virtual CIFS servers, join them to the Windows Domain
5. Create a storage pool for use with AVM
6. Create file systems
7. Mount file systems on DM/VDM, export/share them
# Network configurations
server_sysconfig server_2 -pci cge0 -o
"speed=auto,duplex=auto"
server_sysconfig server_2 -pci cge1 -o
"speed=auto,duplex=auto"
# Cisco EtherChannel (PortChannel)
server_sysconfig server_2 -virtual -name TRK0 -create trk
-option "device=cge0,cge1"
server_sysconfig server_3 -virtual -name TRK0 -create trk
-option "device=cge0,cge1"
server_ifconfig server_2 -c -D TRK0 -n TRK0 -p IP
10.10.91.107 255.255.255.0 10.10.91.255
# ip, netmask, broadcast
# Create default routes
server_route server_2 -add default 10.10.91.1
# Configure standby server
server_standby server_2 -create mover=server_5 -policy auto
# DNS, NIS, NTP setup
server_dns server_2 oak.net 10.10.91.47,162.86.50.204
server_nis server_2 oak.net 10.10.89.19,10.10.28.145
server_date server_2 timesvc start ntp 10.10.91.10

server_cifs ALL -add security=NT
# Start CIFS services
server_setup server_2 -P cifs -o start
#Create Primary VDMs and VDM file system in one step.
nas_server -name VDM2 -type vdm -create server_2 -setstate
loaded
#Define the CIFS environment on the VDM
server_cifs VDM2 -add
compname=winsvrname,domain=oak.net,interface=TRK0,wins=162.8
6.25.243:162.86.25.114
server_cifs VDM2 -J
compname=vdm2,domain=oak.net,admin=hotin,ou="ou=Computers:ou
=EMC Celerra" -option reuse
# ou is the default location where the object will be added to the AD
tree (read bottom to top)
# reuse option allows the AD domain admin to pre-create the computer
account in AD, then join it from a regular user (pre-granted)
# the ou definition is quite important, it needs to be
specified even when
# "reusing" an object, and the admin account used must be
able to write to
# that part of the AD tree defined by the ou.
# EMC seems to need the OU to be defined in reverse order,
# from the bottom of the LDAP tree, separated by colons,
working upward.
# When in doubt, use the full domain account privileges.

# option to reset password if account password has changed
but want to use same credential/object again...
resetserverpasswd
other troubleshooting commands:
... server_kerberos -keytab ...
server_cifssupport VDM2 -cred -name WinUsername -domain
winDom # test domain user credentials
# Confirm d7 and d8 are the smaller LUNs on RG0
nas_pool -create -name clar_r5_unused -description "RG0
LUNs" -volumes d7,d8

# FS creation using AVM (Automatic Volume Management), which
use pre-defined pools:
# archive pool = ATA drives
# performance pool = FC drives
nas_fs -name cifs1 -create size=80G pool=clar_archive
server_mountpoint VDM2 -c /cifs1 # mkdir
server_mount VDM2 cifs1 /cifs1 # mount (fs given a
name instead of traditional dev path)
server_export VDM2 -name cifs1 /cifs1 # share, on VDM,
automatically CIFS protocol
## Mount by VDM is accessible from a physical DM as
/root_vdm_N (but N is not an obvious number)
## If FS export by NFS first, using DM /mountPoint as path,
## then VDM won't be able to access that FS, and CIFS
sharing would be limited to actual physical server
nas_fs -name nfshome -create size=20G
pool=clar_r5_performance
server_mountpoint server_4 -c /nfshome
server_mount server_4 nfshome /nfshome
server_export server_4 -Protocol nfs -option
root=10.10.91.44 /nfshome
nas_fs -name MixedModeFS -create size=10G
pool=clar_r5_performance
server_mountpoint VDM4 -c /MixedModeFS
server_mount VDM4 MixedModeFS /MixedModeFS
server_export VDM4 -name MixedModeFS /MixedModeFS
server_export server_2 -Protocol nfs -option
root=10.10.91.44 /root_vdm_6/MixedModeFS
## Due to VDM sharing the FS, the mount path used by
Physical DM (NFS) need to account for the /root_vdm_X prefix
See additional notes in Config Approach below.
Config Approach
Make a decision whether to use USERMAPPER (okay in a CIFS-only world,
but if there is any UNIX, most likely NO).
Decide on Quotas policy.
Plan for Snapshots...
An IP address can be used by 1 NFS server and 1 CIFS server.
server_ifconfig -D cge0 -n cge0-1 can be done for the DM; cge0-1 can
still be the interface for CIFS in the VDM. Alternatively, the DM can have
another IP (eg cge0-2) if it is desired to match the IP/hostname of other
CIFS/VDM. (See the sketch at the end of this section.)
Export the FS thru the VDM first, then the NFS export uses the
/root_vdm_N/mountPoint path. Use a VDM instead of a DM (server_2) for the
CIFS server. A VDM is really just a file system. Thus, it can be
copied/replicated. Because windows groups and many other system
data are not stored in the underlying Unix FS, there was a need to
easily backup/migrate CIFS servers. For multi-protocol, it is best to have
1 VDM to provide CIFS access, and NFS will ride on the Physical DM.
CAVA complication: The Antivirus scanning feature must be connected
to a physical CIFS server, not to a VDM. This is because it is 1 CAVA for
the whole DM, not multiple instances for the multiple VDMs that may exist on
a DM. A Global CIFS share is also required. May still want to just use a
physical DM with limited windows user/group config, as that may not
readily migrate or backup. Overall, still think that there is a need for 2 IPs
per DM. Maybe the VDM and the NFS DM have the same IP so that they can have
the same hostname. But the Global CIFS share will ride on a Physical DM
with a separate IP that users don't need to know. Finally, perhaps scrap
the idea of VDM, but then one may pay dearly in replication/backup...
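A minimal sketch of the shared-interface idea above (the IP, netmask, names, and domain are illustrative placeholders, not a recommended config):
server_ifconfig server_2 -c -D cge0 -n cge0-1 -p IP 10.10.53.152 255.255.255.224 10.10.53.158
# logical interface on the physical DM; NFS exports from server_2 answer on this IP
server_cifs VDM2 -add compname=vdm2,domain=oak.net,interface=cge0-1
# the VDM's CIFS server rides on the same logical interface
# (or give the DM a second IP, eg cge0-2, if separate hostnames are wanted)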
Celerra Howto
Create a Server
* Create a NFS server
- Really just ensuring a DM (eg server_2) is acting as
primary, and
- Create logical Network interface (server_ifconfig -c -n
cge0-1 ...)
(DM always exists, but if it is doing CIFS thru a VDM only,
then it has no IP and thus can't do NFS export).
* Create Physical CIFS server (server_setup server_2 -P
cifs ...)
OR
VDM to host CIFS server (nas_server -name VDM2 -type vdm
-create server_2 -setstate loaded)
+ Start CIFS service (server_setup server_2 -P cifs -o
start)
+ Join CIFS server to domain (server_cifs VDM2 -J ...)
Create FS and Share
1. Find space to host the FS (nas_pool for AVM, nas_disk for masochistic
MVM)
2. Create the FS (nas_fs -n FSNAME -c ...)
3. Mount the FS in the VDM, then the DM (server_mountpoint -c, server_mount)
4. Share it on windows via the VDM (server_export -P cifs VDM2 -n
FSNAME /FsMount)
5. Export the share "via the vdm path" (server_export -o root=...
/root_vdm_N/FsMount)
Note that for server creation, the DM for NFS is created first, then the VDM
for CIFS. But for FS sharing, it is first mounted/shared on the VDM (CIFS), then
the DM (NFS). This is because the VDM mount dictates the path used by the
DM as /root_vdm_N. It is kinda backward, almost like the lower level DM
needs to go thru the higher level VDM; blame it on how the FS mount
path ended up...
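Putting the list above together, a condensed sketch of the whole sequence (FS name, size, pool, and addresses are illustrative; the /root_vdm_N number must be checked, since it is not predictable):
nas_fs -name cifs2 -create size=50G pool=clar_archive
server_mountpoint VDM2 -c /cifs2
server_mount VDM2 cifs2 /cifs2
server_export VDM2 -name cifs2 /cifs2 # CIFS share via the VDM
server_mount server_2 # note which /root_vdm_N the VDM's FS shows up under
server_export server_2 -Protocol nfs -option root=10.10.91.44 /root_vdm_2/cifs2 # NFS export via the physical DM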
File System, Mounts, Exports
nas_fs -n FSNAME -create size=800G pool=clar_r5_performance
# create fs
nas_fs -d FSNAME # delete fs

nas_fs size FSNAME # determine size
nas_fs -list # list all FS, including private root_* fs
used by DM and VDM
server_mount server_2 # show mounted FS for DM2
server_mount VDM1 # show mounted FS for VDM1
server_mount ALL # show mounted FS on all servers
server_mountpoint VDM1 -c /FSName # create mountpoint
(really mkdir on VDM1)
server_mount VDM1 FSNAME /FSName # mount the named FS
at the defined mount point/path.
# FSNAME is name of the file system, traditionally a
disk/device in Unix
# /FSName is the mount point, can be different than the
name of the FS.
server_mount server_2 -o accesspolicy=UNIX FSNAME /FSName
# Other Access Policy (training book ch11-p15)
# NT (both unix and windows access check NTFS ACL)
# UNIX (both unix and windows access check NFS permission
bits)
# NATIVE (default, unix and nt perm kept independent,
careful with security implication!
Ownership is only maintained once, Take Ownership in
windows will
change file UID as viewed from Unix.)
# SECURE (check ACL on both Unix and Win before granting
access)
# MIXED - Both NFS and CIFS client rights checked against
ACL; Only a single set of security attributes maintained
# MIXED_COMPAT - MIXED with compatible features

NetApp Mixed Mode is like EMC NATIVE. Any sort of mixed
mode is likely asking for problems.
Sticking to either only NT or only UNIX is the best bet.
server_export ALL # show all NFS export and CIFS share,
vdm* and server_*
# this is really like "looking at /etc/exports" and
# does not indicate actual live exports.
# if an FS is unmountable when the DM booted up, server_export
would
# still show the export even when it can't possibly be
exporting it
# The entries are stored, so after the FS is online, can
just export w/ the FS name,
# all other params will be looked up from "/etc/exports"
server_export server_4 -all # equivalent to "exportfs -all"
on server_4.
# no way to do so for all DM at the same time.
server_export VDM1 -name FSNAME /FSName
server_export server_2 -Protocol nfs -option
root=10.10.91.44 /root_vdm_6/FSName
## Due to VDM sharing the FS, the mount path used by
Physical DM (NFS) need to account for the /root_vdm_X prefix
(1) server_export server_4 -Protocol nfs -option
root=host1:host2,rw=host1,host2 /myvol
(2) server_export server_4 -Protocol nfs -option
rw=host3 /myvol
(3) server_export server_4 -Protocol nfs -option
anon=0 /myvol
# (1) export myvol as rw to host1 and host2, giving them
root access.
# subsequently add a new host to rw list.
# Celerra just appends this whole "rw=host3" thing in there,
so the list ends up having multiple rw= lists.
# Hopefully Celerra adds them all up together.
# (2) Alternatively, unexport and reexport with the updated
final list.
# (3) The last export add mapping of anonymous user to map
to 0 (root). not recommended, but some crazy app need it
some time.
# there doesn't seem to be any root squash. root= lists the
machines that are granted root access
# all others are squashed?
WARNING: The access= clause on Celerra is likely what one needs to use
in place of the traditional rw= list.
## root=host1:host2,
## rw=host1:host2:hostN,
## access=host1:host2:hostN
## Celerra requires access= to be assigned, which effectively
limits which hosts can mount.
## the read/write list is not effective (I don't know what
it is really good for)
## access= (open to all by default), and any host that can
mount can write to the FS,
## even those not listed in rw=...
## (file system level NFS ACL still controls who has write,
but UID in NFS can easily be faked by a client)
## In summary: for IP-based access limitation to Celerra,
access= is needed.
## (can probably omit rw=)
## rw= is the correct setting as per the man page on the
control station.
## The PDF paints a different picture though.
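A hedged example combining those clauses (host names are placeholders; given the rw= vs access= confusion above, test the resulting behavior on your DART release):
server_export server_4 -Protocol nfs -option root=admin1,rw=host1:host2,access=host1:host2:admin1 /myvol
# only the hosts in access= can mount; root= grants root to admin1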
# NFS share is default if not specified
# On VDM, export is only for CIFS protocol
# NFS exports are stored in some file,
unshare/unmount
server_export VDM1 -name ShareName\$ # share name with $
sign at end for hidden need to be escaped
server_export VDM1 -unexport -p -name ShareName # -p for
permanent (-unexport = -u)
server_umount VDM1 -p /FSName # -p = permanent, if
omitted, mount point remains
# (marked with "unmounted" when listed by
server_mount ALL)
# FS can't be mounted elsewhere, server cannot be
deleted, etc!
# it really is rmdir on VDM1
Advanced FS cmd
nas_fs -xtend FSNAME size=10G ## ie ADD 10G to existing FS
# extend/enlarge an existing file system.
# size is the NET NEW ADDITION tagged on to an existing
FS,
# and NOT the desired final size of the fs.
# (more intuitive if it used a +10G nomenclature, but it is
EMC after all :-/ )
nas_fs -modify FSNAME -auto_extend yes -vp yes -max_size 1T
# modify FSNAM
# -auto_extend = enlarge automatically. DEF=no
# -vp yes = use virtual provisioning
If no, the user sees the actual size of the FS, but it can still
grow on demand.
# -max_size = when the FS will stop growing automatically,
specified in G, T, etc.
Default is 16T, which is the largest FS supported by DART
5.5
# -hwm = high water mark in %, when the FS will auto enlarge.
Default is 90
nas_fs -n FSNAME -create size=100G pool=clarata_archive
-auto_extend yes -max_size 1000G -vp yes
# create a new File System
# start with 100 GB, auto growth to 1 TB
# use virtual provisioning,
# so nfs client df will report 1 TB when in fact FS could
be smaller.
# server_df will report actual size
# nas_fs -info -size FSNAME will report current and max
allowed size
# (but need to dig thru the text)
Server DM, VDM
nas_server -list # list physical server (Data Mover, DM)
nas_server -list -all # include Virtual Data Mover (VDM)
server_sysconfig server_2 -pci
nas_server -info server_2
nas_server -v -l # list vdm
nas_server -v vdm1 -move server_3 # move vdm1 to DM3
# disruptive, IP changed to the logical IP on the destination
server
# the logical interface (cge0-1) needs to exist on the destination
server (with a diff IP)
#
server_setup server_3 -P cifs -o start # create CIFS server
on DM3, start it
# req DM3 to be active, not standby (type 4)
server_cifs server_2 -U
compname=vdm2,domain=oak.net,admin=administrator # unjoin
CIFS server from domain
server_setup server_2 -P cifs -o delete # delete the cifs
server
nas_server -d vdm1 # delete vdm (and all the CIFS server
and user/group info contained in it)
Storage Pool, Volume, Disk, Size
AVM = Automatic Volume Management. MVM = Manual Volume
Management. MVM is very tedious, and requires a lot of understanding of
the underlying infrastructure and disk striping and concatenation. If not
done properly, it can create performance imbalance and degradation.
Not really worth the headache. Use AVM, and all FS creation can be
done via nas_fs pool=...
nas_pool -size -all # find size of space of all hd managed
by AVM
potential_mb = space that is avail on the raid group but
not allocated to the pool yet??

nas_pool -info -all # find which FS is defined on the
storage pool
server_df # df, only reports in kb
server_df ALL # list all *MOUNTED* FS and check points
sizes
# size is actual size of FS, NOT virtual provisioned size
# (nfs client will see the virtual provisioned size)
server_df ALL | egrep -v ckpt\|root_vdm # get rid of
duplicates due to VDM/server_x mount for CIFS+NFS access
nas_fs -info size -all # give size of fs, but long output
rather than table format, hard to use.
nas_fs -info -size -all | egrep name\|auto_ext\|size
# somewhat usable space and virtual provisioning info
# but too many "junk" fs like root_fs, ckpt, etc
nas_volume -list # list disk volume, seldom used if using
AVM.
nas_disk -l
/nas/sbin/rootnas_fs -info root_fs_vdm_vdm1 | grep _server
# find which DM host a VDM
UserMapper
Usermapper in EMC is substantially different than in the NetApp. RTFM!
It is a program that generates a UID for a new windows user that it has
never seen before. Files are stored in Unix style by the DM, thus SIDs
need to have a translation DB. Usermapper provides this. A single
Usermapper is used for the entire cabinet (server_2, _3, _4, VDM2,
VDM3, etc) to provide consistency. If you are a Windows-ONLY shop,
with only 1 Celerra, this may be okay. But if there is any Unix, this is
likely going to be a bad solution. If a user gets a Unix UID, then the same
user accessing files on windows or Unix is viewed as two different users,
as the UID from NIS will be different than the UID created by usermapper.
UID lookup sequence:
1. SecMap Persistent Cache
2. Global Data Mover SID Cache (seldom poses any problem)
3. local passwd/group file
4. NIS
5. Active Directory Mapping Utility (schema extension to AD for EMC
use)
6. UserMapper database
When a windows user hits the system (even for read access), Celerra
needs to find a UID for the user. Technically, it consults NIS and/or the local
passwd file first; failing that, it will dig in UserMapper. Failing that, it
will generate a new UID as per the UserMapper config. However, to speed
queries, a "cache" is used first all the time. The cache is called
SecMap. However, it is really a binary database, and it is persistent
across reboots. Thus, once a user has hit the Celerra, it will have an
entry in the SecMap. There is no timeout or reboot that will rid the
user from SecMap. Any changes to NIS and/or UserMapper won't be
effective until the SecMap entry is manually deleted. Overall, EMC
admits this too: UserMapper should not be used in heterogeneous
Windows/Unix environments. If UID cannot be guaranteed from NIS (or
LDAP) then 3rd party tools from Centrify should be considered.
server_usermapper server_2 -enable # enable usermapper
service
server_usermapper server_2 -disable
# even with usermapper disabled, and a passwd file in
/.etc/passwd
# somehow windows user file creation gets some strange GID of
32770 (albeit UID is fine).
# There is a /.etc/gid_map file, but it is not a text file,
not sure what is in it.
server_usermapper server_2 -Export -u passwd.txt # dump out
usermapper db info for USER, storing it in .txt file
server_usermapper server_2 -E -g group.txt # dump out
usermapper db info for GROUP, storing it in file
# usermapper database should be backed up periodically!
server_usermapper server_2 -remove -all # remove usermapper
database
# Careful, file owner will change in subsequent
access!!
There is no way to "edit" a single user, say to modify its
UID.
Only choice is to Export the database, edit that file, then
re-Import it back.
# as of Celerra version 5.5.32-4 (2008.06)
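A sketch of that Export / edit / re-Import cycle (the -Import switch is my assumption of the counterpart to -Export; confirm it against the server_usermapper man page before relying on it):
server_usermapper server_2 -Export -u passwd.txt # dump the user mapping db
vi passwd.txt # fix the UID of the offending entry
server_usermapper server_2 -Import -u passwd.txt # assumed flag; load the edited file back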
When multiple Celerras exist, UserMapper should be synchronized (one
becomes primary, the rest secondary): server_usermapper ALL -enable
primary=IP. Note that even when sync is set up, no entry will be
populated on a secondary until a user hits that Celerra with a request. Ditto
for the SecMap "cache" DB.
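For example, a sketch with a made-up primary IP (run the plain -enable on the Celerra that stays primary, and point each secondary at it):
server_usermapper ALL -enable # on the primary Celerra
server_usermapper ALL -enable primary=10.10.91.200 # on each secondary Celerra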
p28 of configuring Celerra User Mapping PDF:
Once you have NIS configured, the Data Mover automatically
checks NIS for a user
and group name. By default, it checks for a username in the
form username.domain
and a group name in the form groupname.domain. If you have
added usernames
and groupnames to NIS without a domain association, you can
set the cifs resolver
parameter so the Data Mover looks for the names without
appending the domain.
server_param server_2 -facility cifs -info resolver
server_param server_2 -facility cifs -modify resolver
-value 1
repeat for all DMs, but not applicable to VDMs
Setting the above will allow CIFS username lookup from NIS
to match based on username,
without the .domain suffix. Use it! (Haven't seen a
situation where this is bad)
server_param server_2 -f cifs -m acl.useUnixGid -v 1
Repeat for all DMs, but not for VDMs.
This setting affects only files created on windows. UID is
mapped by usermapper.
GID of the file will by default map to whatever GID the
Domain User maps to.
With this setting, the unix primary group of the user is
looked up and used as
the GID of any files created from windows.
Windows group permission settings retain whatever config is
on windows
(eg inherit from parent folder).
SecMap
Unlike UserMapper, which is a human-readable database (an authority
db) that exists once per NS80 cabinet (or synced b/w multiple cabinets),
the SecMap database exists once per CIFS server (whether it is a physical
DM or a VDM).
server_cifssupport VDM2 -secmap -list # list SecMap
entries
server_cifssupport ALL -secmap -list # list SecMap
entries on all svr, DM and VDM included.
server_cifssupport VDM2 -secmap -delete -sid S-1-5-15-
47af2515-307cfd67-28a68b82-4aa3e
server_cifssupport ALL -secmap -delete -sid S-1-5-15-
47af2515-307cfd67-28a68b82-4aa3e
# remove entry of a given SID (user) from the cache
# delete would need to do for each CIFS server.
# Hopefully, this will trick EMC to query NIS for the UID
instead of using one from UserMapper.
server_cifssupport VDM2 -secmap -create -name USERNAME
-domain AD-DOM
# for 2nd usermapper, fetch the entry of the given user
from primary usermapper db.

General Commands
nas_version # version of Celerra
# older versions only compatible with older JRE (eg 1.4.2
on 5.5.27 or older)
server_log server_2 # read log file of server_2
Config Files
A number of files are stored in the etc folder; retrieve/post them using
server_file server_2 -get/-put ... eg: server_file server_3 -get
passwd ./server_3.passwd.txt would retrieve the passwd file local to
that data mover. Each File System has a /.etc dir. It is best practice to
create a subdirectory (QTree) below the root of the FS and then export
this dir instead. On the control station, there are config files stored in:
/nas/server
/nas/site
Server parameters (most of which require a reboot to take
effect) are stored in:
/nas/site/slot_param for the whole cabinet (all server_X and vdm)
/nas/server/slot_X/param (for each DM X)
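As a sketch of the retrieve/edit/post cycle (file names are illustrative, and the -put argument order should be double-checked against the man page):
server_file server_2 -get passwd ./server_2.passwd.txt # copy the DM's local passwd file to the control station
vi ./server_2.passwd.txt # edit locally
server_file server_2 -put ./server_2.passwd.txt passwd # push the edited copy back to the data mover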
Celerra Management
Windows MMC plug-in thing...
CheckPoint
Snapshots are known as CheckPoints in EMC speak. Requires a SaveVol
to keep the "copy on write" data. It is created automatically when the first
checkpoint is created, and by default grows automatically (at 90% high
water mark). But it cannot be shrunk. When the last checkpoint is
deleted, the SaveVol is removed. The GUI is the only sane way to edit it.
Has the ability to create automated schedules for hourly, daily, weekly,
monthly checkpoints.
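The checkpoint CLI is not covered above; as a rough sketch, checkpoints are driven by fs_ckpt (syntax here is from memory of DART 5.5 and should be verified against the man page on the control station; the checkpoint name is made up):
fs_ckpt cifs1 -name cifs1_ckpt1 -Create # create a checkpoint of FS cifs1
fs_ckpt cifs1 -list # list checkpoints of that FS
nas_fs -d cifs1_ckpt1 # checkpoints can be deleted individually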
Backup and Restore, Disaster Recovery
For NDMP backup, each Data Mover should be fiber connected to a
tape drive (dedicated). Once zoning is in place, need to tell the data mover
to scan for the tapes.
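The scan uses server_devconfig (the same commands listed under "Other Seldom Changed Config" below):
server_devconfig server_2 -probe -scsi all # scan for new scsi hw, eg tape drive for NDMP
server_devconfig ALL -list -scsi -nondisks # confirm the tape drives are visible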
Quotas
Change to use the filesize policy during initial setup, as windows does not
support the block policy (which is the Celerra default). Edit
/nas/site/slot_param on the control station (what happens to the standby
control station?) and add the following entry:
param quota policy=filesize
Since this is a param change, retarded EMC requires a reboot:
server_cpu server_2 -r now
Repeat for additional DMs that may exist on the same cabinet.
----
Two "flavors" of quotas: Tree Quota, and User/Group quota. Both are per
FS. Tree Quota requires creating a directory (like a NetApp qtree, but at
any level in the FS). There is no turning off tree quota; it can only be
removed when all files in the tree are deleted. User/Group quota can be
created per FS. Enabling requires freezing of the FS for it to catalog/count
the file sizes before it is available again! Disabling the quota has the same
effect. User/Group quota defaults to a 0 limit, which is monitoring only, but
does not actually have a hard quota or enforce anything.
----
Each File System still needs to have quota enabled... (?) Default behaviour
is to deny when quota is exceeded. This "Deny Disk Space" can be changed
(on the fly w/o reboot?)
GUI: File System Quotas, Settings.
CLI: nas_quotas -user -edit -config -fs FSNAME (repeat for Tree Quota)
?? But by default, the quota limit is set to 0, which is to say it is only doing
tracking, so may not need to change the behaviour to allow. Celerra
Manager is easiest to use. The GUI allows showing all QTrees for all FS, but
the CLI doesn't have this capability. Sucks eh? :( EMC recommends turning
on FileSystem quota whenever an FS is created. But nas_quotas -on -tree ...
-path / is denied, how to do this??!!
# Create Tree Quota (a la NetApp QTree). Should do this for each of
the subdirs in the FS that are directly exported.
nas_quotas -on -tree -fs CompChemHome -path /qtree # create
qtree on a fs
nas_quotas -off -tree -fs CompChemHome -path /qtree #
destroy qtree on a fs (path has to be empty)
# can remove a qtree by removing the dir on the FS from a Unix
host, seems to work fine.
nas_quotas -report -tree -fs CompChemHome # display qtree
quota usage
# per user quota, not too important other than Home dir...
# (and only if user home dir is not a qtree, useful in
/home/grp/username FS tree)
nas_quotas -on -user -fs CompChemHome # track user usage
on whole FS
# def limit is 0 = tracking only
nas_quotas -report -user -fs CompChemHome # display users
space usage on whole FS
From Lab Exercise
nas_quotas -user -on -fs FSNAME # enable user quota on
FsNAMe. Disruptive. (ch12, p22)
nas_quotas -group -on -mover server_2 # enable group quota
on whole DM . Disruptive.
nas_quotas -both -off -mover server_2 # disable both group
and user quota at the same time.
++ disruption... ??? really? just slow down? or FS really
unavailable?? ch 12, p22.
nas_quotas -report -user -fs FSNAME
nas_quotas -report -user -mover server_2
nas_quotas -edit -config -fs FsNAME # Define default quota
for a FS.
nas_quotas -list -tree -fs FSNAME # list quota trees on the
specified FS.

nas_quotas -edit -user -fs FSNAME user1 user2 ... # edit
quota (vi interface)
nas_quotas -user -edit -fs FSNAME -block 104 -inode 100
user1 # no vi!
nas_quotas -u -e mover server_2 501 # user quota, edit, for
uid 501, whole DM
nas_quota -g -e -fs FSNAME 10 # group quota, edit, for gid
10, on a FS only.
nas_quotas -user -clear -fs FSNAME # clear quota: reset to
0, turn quota off.
Tree Quota
nas_quotas -on -fs FSNAME -path /tree1 # create qtree on FS
(for user???) ++
nas_quotas -on -fs FSNAME -path /subdir/tree2 # qtree can be
a lower level dir
nas_quotas -off -fs FSNAME -path /tree1 # disable user
quota (why user?)
# does it req dir to be empty??
nas_quotas -e -fs FSNAME -path /tree1 user_id # -e, -edit
user quota
nas_quotas -r -fs FSNAME -path /tree1 # -r = -report
nas_quotas -t -on -fs FSNAME -path /tree3 # -t = tree quota,
this eg turns it on on
# if no -t defined, it is for the user??
nas_quotas -t -list -fs FSNAME # list tree quota
To turn off Tree Quotas:
- Path MUST BE EMPTY !!!!! ie, delete all the files, or move
them out.
can one ask for a harder way of turning something
off??!!
Only alternative is to set quota value to 0 so it
becomes tracking only,
but not fully off.
Quota Policy change:
- Quota check of block size (default) vs file size (windows
only supports this).
- Exceed quota :: deny disk space or allow to continue.
The policies need to be established from the get-go. They
can't really be changed as:
- Param change requires a reboot
- All quotas need to be turned OFF (which requires path to
be empty).
Way to go EMC! NetApp is much less draconian in such
change.
Probably best to just not use quota at all on EMC!
If everything is set to 0 and just use for tracking, maybe
okay.
God forbid if you change your mind!

CIFS Troubleshooting
server_cifssupport VDM2 -cred -name WinUsername -domain
winDom # test domain user credentials
server_cifs server_2 # if CIFS server is Unjoined from AD,
it will state it next to the name in the listing
server_cifs VDM2 # probably should be the VDM which is part of
CIFS, not the physical DM
server_cifs VDM2 -Unjoin ... # to remove the object from AD
tree
server_cifs VDM2 -J
compname=vdm2,domain=oak.net,admin=hotin,ou="ou=Computers:ou
=EMC Celerra" -option reuse
# note that by default the join will create a new "sub
folder" called "EMC Celerra" in the tree, unless OU is
overwritten
server_cifs server_2 -Join compname=dm112-
cge0,domain=nasdocs.emc.com,admin=administrator,ou="ou=Compu
ters:ou=Engineering"
... server_kerberos -keytab ...
Other Seldom Changed Config
server_cpu server_2 -r now # reboot DM2 (no fail over to
standby will happen)
server_devconfig
server_devconfig server_2 -probe -scsi all # scan for new
scsi hw, eg tape drive for NDMP
server_devconfig ALL -list -scsi -nondisks # display non
disk items, eg tape drive
/nas/sbin/server_tcpdump server_3 -start TRK0 -w
/customer_dm3_fs/tcpdump.cap # start tcpdump,
# file written on data mover, not control station!
# /customer_dm3_fs is a file system exported by server_3
# which can be accessed from control station via path of
/nas/quota/slot_3/customer_dm3_fs
/nas/sbin/server_tcpdump server_3 -stop TRK0
/nas/sbin/server_tcpdump server_3 -display
# /nas/sbin/server_tcpdump maybe a sym link to
/nas/bin/server_mgr
/nas/quota/slot_2/ ... # has access to all mounted FS on
server_2
# so ESRS folks have easy access to all the data!!
/nas/tools/collect_support_materials
# "typically thing needed by support
# file saved to /nas/var/emcsupport/...zip
# ftp the zip file to emc.support.com/incoming/caseNumber
# ftp from control station may need to use IP of the remote
site.

server_user ?? ... add # add user into DM's /etc/passwd,
eg use for NDMP
Network interface config
The physical network port doesn't get an IP address (from the Celerra
external perspective). All network config (IP, trunk, route, dns/nis/ntp
server) applies to the DM, not the VDM.
# define local network: ie assign IP
server_ifconfig server_2 -c -D cge0 -n cge0-1 -p
IP 10.10.53.152 255.255.255.224 10.10.53.158
# ifconfig of server_2: -c create, -D device, -n logical name,
# -p protocol, then ip, netmask, broadcast
server_ifconfig server_2 -a # "ifconfig -a", has mac of
trunk (which is what switch see)
server_ifconfig server_2 cge0-2 down ?? # ifconfig down for
cge0-2 on server_2
server_ifconfig server_2 -d cge0-2 # delete logical
interfaces (ie IP associated with a NIC).
...
server_ping server_2 ip-to-ping # run ping from server_2
server_route server_2 a default 10.10.20.1 # route add
default 10.10.20.1 on DM2
server_dns server_2 corp.hmarine.com ip-of-dns-svr #
define a DNS server to use. It is per DM
server_dns server_2 -d corp.hmarine.com # delete DNS
server settings
server_nis server_2 hmarine.com ip-of-nis-svr # define NIS
server, again, per DM.
server_date server_2 timesvc start ntp 10.10.91.10 # set to
use NTP
server_date server_2 0803132059 # set server date; format is
YY MM DD HH MM sans spaces
# good to use cron to set the standby server clock once a
day
# as the standby server can't get time from NTP.

server_sysconfig server_2 -virtual # list virtual devices
configured on live DM.
server_sysconfig server_4 -v -i TRK0 # display nic in TRK0
server_sysconfig server_4 -pci cge0 # display tx and rx
flowcontrol info
server_sysconfig server_4 -pci cge4 -option
"txflowctl=enable rxflowctl=enable" # to enable rx on cge0
# Flow Control is disabled by default. But Cisco has it set to
enable and desirable by default,
# so it is best to enable it on the EMC. Performance
seems more reliable/repeatable in this config.
# flow control can be changed on the fly and it will not
cause downtime (amazing for EMC!)

If performance is still unpredictable, there is a FASTRTO
option, but that requires reboot!
server_netstat server_4 -s -p tcp # to check retransmitted
packets (sign of over-subscription)
.server_config server_4 -v "bcm cge0 stat" # to check
ringbuffer and other parameters
# also to see if eth link is up or down (ie link LED
on/off)
# this get some info provided by ethtool
.server_config server_4 -v "bcm cge0 showmac" # show native
and virtualized mac of the nic
server_sysconfig server_2 -pci cge0 -option "lb=ip"
# lb = load balance mechanism for the EtherChannel.
# ip based load balancing is the default
# protocol defaults to lacp? man page cisco side
must support 802.3ad.
# but i thought cisco default to their own protocol.
# skipping the "protocol=lacp" seems a safe bet
Performance/Stats
The .server_config commands are undocumented, and EMC does not
recommend their use. Not sure why; I hope it doesn't crash the data
mover :-P
server_netstat server_x -i # interface statistics
server_sysconfig server_x -v # List virtual devices
server_sysconfig server_x -v -i vdevice_name #
Informational stats on the virtual device
server_netstat server_x -s -a tcp # retransmissions
server_nfsstat server_x # NFS SRTs
server_nfsstat server_x -zero # reset NFS stats
# Rebooting the DMs will also reset all statistics.
server_nfs server_2 -stats
server_nfs server_2 -secnfs -user -list
.server_config server_x -v "printstats tcpstat"
.server_config server_x -v "printstats tcpstat reset"
.server_config server_x -v "printstats scsi full"
.server_config server_x -v "printstats scsi reset"
.server_config server_x -v "printstats filewrite"
.server_config server_x -v "printstats filewrite reset"
.server_config server_x -v "printstats fcp"
.server_config server_x -v "printstats fcp reset"
Standby Config
Server failover: When server_2 fails over to server_3, then DM3 assumes
the role of server_2. VDMs that were running on DM2 will move over to
DM3 also. All IP addresses of all the DMs and VDMs are transferred,
including the MAC addresses. Note that when moving a VDM from server_2
to server_3 outside of the failover, the IP addresses are changed. This is
because such a move is from one active DM to another. IPs are kept only
when failing over from Active to Standby.
server_standby server_2 -c mover=server_3 -policy auto
# assign server_3 as standby for server_2, using auto fail
over policy
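For completeness, a sketch of triggering and undoing a failover (the -activate/-restore syntax is from memory; verify on the control station before use, as both are disruptive):
server_standby server_2 -activate mover # fail server_2 over to its standby
server_standby server_2 -restore mover # fail back to the original DM once it is healthy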
Lab 6 page 89
SAN backend
If using the integrated model, the only way to peek into
the CX backend is to use the navicli command from the control
station.
navicli -h spa getcontrol -busy
# see how busy the backend CX service processor A is
# all navicli commands work from the control station even
when
# it is an integrated model that doesn't present navisphere to the
outside world
# spa is typically 128.221.252.200
# spb is typically 128.221.252.201
# they are coded in the /etc/hosts file under APM... or
CK... (shelf name)
cd /nas/sbin/setup_backend
./setup_clariion2 list config APM00074801759 # show lot of
CX backend config, such as raid group config, lun, etc
nas_storage -failback id=1 # if CX backend has trespassed
disk, fail them back to original owning SP.
Pro-actively replacing drive
# Drive 1_0_7 will be replaced by a hot spare (run as root):
# -h specify the backend CX controller, ip address in bottom
of /etc/hosts of control station.
# use of navicli instead of the secure one okay as it is a
private network with no outside connections
naviseccli -h xxx.xxx.xxx.xxx -user emc -password emc -scope
0 copytohotspare 1_0_7 -initiate
# --or--
/nas/sbin/navicli -h 128.221.252.200 -user nasadmin -scope 0
copytohotspare 1_0_7 -initiate
# find out status/progress of copy over (run as root)
/nas/sbin/navicli -h 128.221.252.200 -user nasadmin -scope 0
getdisk 1_0_7 -state -rb
User/security
Sys admins can create accounts for themselves in the /etc/passwd of
the control station(s). Any user that can login via ssh to the control
station can issue the bulk of the commands to control the Celerra. The
nasadmin account is the same kind of generic user account. (ie, don't
join the control station to NIS/LDAP for general user login!!) There is a
root user, with password typically set to be the same as nasadmin. root is
needed for some special commands in /nas/sbin, such as navicli to
access the backend CX. All FS created on the Celerra can be accessed
from the control station.
Links
1. EMC PowerLink
2. EMC Lab access VDM2
History
DART 5.6 released around 2009.06.18. Included Data Dedup,
but compression must also be enabled,
which makes deflation CPU and time expensive, not usable
at all for high performance storage.
DART 5.5 mainstream in 2007, 2008
[Doc URL: http://www.grumpyxmas.com/emcCelerra.html ]
[Doc URL: http://www.cs.fiu.edu/~tho01/psg/emcCelerra.html ]
