
Solaris Containers cheat sheet

This is a quick cheat sheet of the commands that can be used when working with zones (containers); for a more complete guide see the Solaris zones page.

Zone States

Configured: Configuration has been completed and committed to stable storage,
but additional configuration is still required before the zone can be used.
Incomplete: A zone is in this state while it is being installed or uninstalled.
Installed: The zone has a confirmed configuration (zoneadm is used to verify
the configuration) and the Solaris packages have been installed. Even though
it has been installed, it still has no virtual platform associated with it.
Ready (active): The zone's virtual platform is established. The kernel creates
the zsched process, the network interfaces are plumbed and the filesystems are
mounted. The system also assigns a zone ID at this state, but no processes are
associated with the zone yet.
Running (active): A zone enters this state when the first user process is
created. This is the normal state for an operational zone.
Shutting down + Down (active): Transitional states seen while a zone is being
shut down.

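The lifecycle commands in the cheat sheet below move a zone between these states; a minimal walkthrough sketch from the global zone, assuming a hypothetical zone named myzone whose configuration has already been committed:

```shell
# Walk a zone through its states from the global zone (run as root).
# "myzone" is a hypothetical zone name.
zoneadm -z myzone list -v     # configured
zoneadm -z myzone install     # passes through incomplete, ends installed
zoneadm -z myzone ready       # ready: zsched created, zone ID assigned
zoneadm -z myzone boot        # running once the first user process starts
zoneadm -z myzone halt        # drops the virtual platform, back to installed
```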
Cheat sheet

Create a zone: zonecfg -z <zone> (see "creating a zone" for more details)
Delete a zone from the global system: zonecfg -z <zone> delete -F
Display a zone's current configuration: zonecfg -z <zone> info
Create a zone creation file: zonecfg -z <zone> export
Verify a zone: zoneadm -z <zone> verify
Install a zone: zoneadm -z <zone> install
Ready a zone: zoneadm -z <zone> ready
Boot a zone: zoneadm -z <zone> boot
Reboot a zone: zoneadm -z <zone> reboot
Halt a zone: zoneadm -z <zone> halt
Uninstall a zone: zoneadm -z <zone> uninstall -F
View zones: zoneadm list -cv
Log in to a zone: zlogin <zone>
Log in to a zone's console: zlogin -C <zone> (use ~. to exit)
Log in to a zone in safe mode (recovery): zlogin -S <zone>
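The zonecfg command is interactive; a sketch of a typical session creating a zone configuration, where the zone name, zonepath, IP address and interface are illustrative assumptions:

```shell
# Interactive zonecfg session (global zone, as root); the prompt lines
# are shown as comments. All names and addresses are hypothetical.
zonecfg -z myzone
# zonecfg:myzone> create
# zonecfg:myzone> set zonepath=/zones/myzone
# zonecfg:myzone> set autoboot=true
# zonecfg:myzone> add net
# zonecfg:myzone:net> set address=192.168.1.50
# zonecfg:myzone:net> set physical=e1000g0
# zonecfg:myzone:net> end
# zonecfg:myzone> verify
# zonecfg:myzone> commit
# zonecfg:myzone> exit
```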

Add a package to the global zone only: # pkgadd -G -d . <package>
(if the -G option is omitted, the package is added to all zones)
Add a package to a non-global zone: # pkgadd -Z -d . <package>
(if the -Z option is omitted, the package is added to all zones)
Query packages in all non-global zones: # pkginfo -Z
Query packages in a specified zone: # pkginfo -z <zone>
List processes in a zone: # ps -z <zone>
List the IPC facilities in a zone: # ipcs -z <zone>
Grep processes in a zone: # pgrep -z <zone>
Display the process tree of a zone: # ptree -z <zone>
Display all filesystems: # df -Zk
Display per-zone process information: # prstat -Z (must be logged in to the zone)

Quick and dirty ZFS cheatsheet


December 20, 2008

Create simple striped pool:


zpool create [pool_name] [device] [device] ...
zpool create datapool c5t433127A900011C370000C00003210000d0 c5t433127B4001031250000900000540000d0

Create mirrored pool:


zpool create [pool_name] mirror [device] [device] ...
zpool create datapool mirror c5t433127A900011C370000C00003210000d0 c5t433127B4001031250000900000540000d0

Create Raid-Z pool:


zpool create [pool_name] raidz [device] [device] [device] ...
zpool create datapool raidz c5t433127A900011C370000C00003210000d0 c5t433127B4001031250000900000540000d0 c5t439257C4000019250000900000540000d0

Transform simple pool to a mirror:


zpool create [pool_name] [device]
zpool attach [pool_name] [existing_device] [new_device]
zpool create datapool c5t433127A900011C370000C00003210000d0
zpool attach datapool c5t433127A900011C370000C00003210000d0 c5t433127B4001031250000900000540000d0

Expand simple pool:


zpool create [pool_name] [device]
zpool add [pool_name] [new_device]
zpool create datapool c5t433127A900011C370000C00003210000d0
zpool add datapool c5t433127B4001031250000900000540000d0

Expand mirrored pool by attaching additional mirror:


zpool add [pool_name] mirror [new_device] [new_device]
zpool add datapool mirror c5t433127A900011C370000C00003460000d0 c5t433127B400011C370000C00003410000d0

Replace device in a pool:


zpool replace [pool_name] [old_device] [new_device]
zpool replace datapool c5t433127A900011C370000C00003410000d0 c5t433127B4001031250000900000540000d0

Destroy pool:
zpool destroy [pool_name]
zpool destroy datapool
Set pool mountpoint:
zfs set mountpoint=/path [pool_name]
zfs set mountpoint=/export/zfs datapool

Display configured pools:


zpool list
zpool list

Display pool status info:


zpool status [-v] [pool_name]
zpool status -v datapool

Display pool I/O statistics:


zpool iostat [pool_name]
zpool iostat datapool

Display pool command history:


zpool history [pool_name]
zpool history datapool

Export a pool:
zpool export [pool_name]
zpool export datapool

Import a pool:
zpool import [pool_name]
zpool import datapool

Create a filesystem:
zfs create [pool_name]/[fs_name]
zfs create datapool/filesystem

Destroy a filesystem:
zfs destroy [pool_name]/[fs_name]
zfs destroy datapool/filesystem

Rename a filesystem:
zfs rename [pool_name]/[fs_name] [pool_name]/[fs_name]
zfs rename datapool/filesystem datapool/newfilesystem

Move a filesystem:
zfs rename [pool_name]/[fs_name] [pool_name]/[fs_name]/[fs_name]
zfs rename datapool/filesystem datapool/users/filesystem

Display properties of a filesystem:


zfs get all [pool_name]/[fs_name]
zfs get all datapool/filesystem

Make a snapshot:
zfs snapshot [pool_name]/[fs_name]@[time]
zfs snapshot datapool/filesystem@friday

Roll back filesystem to its snapshot:


zfs rollback [pool_name]/[fs_name]@[time]
zfs rollback datapool/filesystem@friday

Clone a filesystem:
zfs snapshot [pool_name]/[fs_name]@[time]
zfs clone [pool_name]/[fs_name]@[time] [pool_name]/[fs_name]
zfs snapshot datapool/filesystem@today
zfs clone datapool/filesystem@today datapool/filesystemclone
Backup filesystem to a file:
zfs send [pool_name]/[fs_name]@[time] > /path/to/file
zfs send datapool/filesystem@friday > /tmp/filesystem.bkp

Restore filesystem from a file:


zfs receive [pool_name]/[fs_name] < /path/to/file
zfs receive datapool/restoredfilesystem < /tmp/filesystem.bkp
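The send and receive commands can also be chained in a pipeline to replicate a filesystem to another machine; a sketch, where the host name and target dataset are assumptions:

```shell
# Replicate a snapshot to a remote pool over ssh.
# "backuphost" and the target dataset name are hypothetical.
zfs snapshot datapool/filesystem@friday
zfs send datapool/filesystem@friday | ssh backuphost zfs receive backuppool/filesystem
```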

Create ZFS volume:


zfs create -V [size] [pool_name]/[vol_name]
zfs create -V 100mb datapool/zvolume
newfs /dev/zvol/dsk/datapool/zvolume

System Controller Systems (4800, 6900)

These systems have a system controller accessible over the network. The controller is given a name distinct from the system(s) it controls because it can control multiple different systems. To gain access to the system controller, telnet to the relevant name and choose the 'Platform Shell' option. You will then need to provide a password. The other options in the list give access to the relevant system consoles.

At the command line, the following should be enough for basic operation:

Command Action

poweron all Turn on all boards and start systems booting


setkeyswitch -d A Turn off domain A (or C if you replace A in the command)

eXtended System Control Facility (XSCF on the M5000)

The XSCF is provided on separate hardware from the main M5000 processing capacity. Its network interfaces are distinct from those used by the server and are accessed via ssh. To gain access to the console, ssh in to the XSCF controller and subsequently connect to domain 0 on the system. The M5000 servers are all configured with one domain at present. The controllers are registered in the format 'jamaican-xscf.iso.port.ac.uk'.

Command Action
poweron -d 0 Power on domain 0
poweroff -d 0 Power off domain 0
sendbreak -d 0 Send a break signal to domain 0
console -d 0 Connect to the console of domain 0
showdomainstatus -a show status of all domains
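Putting the XSCF commands together, a typical session might look like the following sketch; the host name follows the format given above, and the default console escape sequence is assumed:

```shell
# Connect to the XSCF and attach to the console of domain 0.
ssh jamaican-xscf.iso.port.ac.uk
# XSCF> showdomainstatus -a      # check the state of the domains
# XSCF> console -d 0             # attach to domain 0's console
# ... press Enter then "#." to return to the XSCF shell ...
```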

Integrated Lights Out Manager (ILOM on the T5220)

The ILOM can be used via either ssh or a web browser interface. To connect to the ILOM of a server, use the name format 'bread-lom.iso.port.ac.uk'. Although the servers are capable of making use of an ALOM interface, Sun is reportedly standardizing all controllers on the ILOM model, so it would be best to familiarize yourself with the new commands.

Command Action
start /SP/console Connect to the console of the server
set /HOST send_break_action=break send a break signal to the host
