
My Environment:

Currently the system is running on the c0d0 boot disk, which uses UFS file systems: c0d0s0 is root (/), c0d0s1 is swap, and c0d0s4 is /var. We need to migrate this OS to ZFS, with c0d1 as the new boot disk.

+++++++++++++++++++++
bash-3.00# df -h
Filesystem size used avail capacity Mounted on
/dev/dsk/c0d0s0 14G 5.9G 8.4G 42% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 2.5G 988K 2.5G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap1.so.1
14G 5.9G 8.4G 42% /lib/libc.so.1
fd 0K 0K 0K 0% /dev/fd
/dev/dsk/c0d0s4 2.5G 180M 2.3G 8% /var
swap 2.5G 44K 2.5G 1% /tmp
swap 2.5G 32K 2.5G 1% /var/run
+++++++++++++++++++++
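If you want to double-check the slice layout of the current boot disk before starting, prtvtoc and swap -l will show it. This is just a quick sketch; the device names assume the layout described above.
++++++++++++++++++++++++++++
bash-3.00# prtvtoc /dev/rdsk/c0d0s2   # print the VTOC (slice table) of the source disk
bash-3.00# swap -l                    # confirm that c0d0s1 is the current swap device
++++++++++++++++++++++++++++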
Procedure and notes:
- The root slice on the new disk (c0d1s0) should be at least as large as the current root slice (c0d0s0).
- ZFS boot is not supported on an EFI-labeled disk, so make sure the ZFS boot disk is formatted and labeled with an SMI (VTOC) label. You can do this using:

# format -e

Then select the disk and run the label command. It offers two label types; choose as below:

+++++++++++++++++
format> la
[0] SMI Label
[1] EFI Label
Specify Label type[0]: 0
Ready to label disk, continue? yes
+++++++++++++++++++++++++++++
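To double-check that the new disk now carries an SMI (VTOC) label and that its slice 0 is big enough, you can read its VTOC. A small sketch, assuming the target disk is c0d1 as used below:
+++++++++++++++++++++++++++++
bash-3.00# prtvtoc /dev/rdsk/c0d1s2   # prints the slice table of the SMI-labeled disk;
                                      # compare the size of slice 0 with c0d0s0
+++++++++++++++++++++++++++++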
Once labeling is done, check whether any existing boot environment (BE) is present on this system. This is only to avoid any confusion later.
++++++++++++++++++++++++++++
bash-3.00# lustatus
ERROR: No boot environments are configured on this system
ERROR: cannot determine list of all boot environment names
++++++++++++++++++++++++++++

Now create a zpool named rpool using the SMI-labeled slice. (rpool is the conventional name for a ZFS root pool; other pool names are also accepted, but rpool is the recommended choice.)
++++++++++++++++++++++++++++
bash-3.00# zpool create rpool c0d1s0
bash-3.00# zpool status
pool: rpool
state: ONLINE
scrub: none requested
config:

NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
c0d1s0 ONLINE 0 0 0
errors: No known data errors
++++++++++++++++++++++++++++
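Optionally, you can confirm the pool's size and health before going further (a quick sketch):
++++++++++++++++++++++++++++
bash-3.00# zpool list rpool   # total size, allocation and health of the new pool
bash-3.00# zfs list rpool     # top-level dataset of the pool and its available space
++++++++++++++++++++++++++++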

Let's create a BE for the existing UFS file system (FS) and a new BE for ZFS. Below, ufsBE is the current BE and zfsBE will be the new ZFS BE. The -p option specifies the pool where the new BE will be stored; since we are building a ZFS boot disk, that is rpool.
++++++++++++++++++++++++++++
bash-3.00# lucreate -c ufsBE -n zfsBE -p rpool
Checking GRUB menu...
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <ufsBE>.
Creating initial configuration for primary boot environment <ufsBE>.
The device </dev/dsk/c0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <ufsBE> PBE Boot Device </dev/dsk/c0d0s0>.
Comparing source boot environment <ufsBE> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c0d1s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <zfsBE>.
Source boot environment is <ufsBE>.
Creating boot environment <zfsBE>.
Creating file systems on boot environment <zfsBE>.
Creating <zfs> file system for </> in zone <global> on <rpool/ROOT/zfsBE>.
Populating file systems on boot environment <zfsBE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
cp: cannot access //platform/i86pc/bootlst
Creating shared file system mount points.
Creating compare databases for boot environment <zfsBE>.
Creating compare database for file system </var>.
Creating compare database for file system </rpool/ROOT>.
Creating compare database for file system </>.
Updating compare databases on boot environment <zfsBE>.
Making boot environment <zfsBE> bootable.
Updating bootenv.rc on ABE <zfsBE>.
File </boot/grub/menu.lst> propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE <zfsBE> in GRUB menu
Population of boot environment <zfsBE> successful.
Creation of boot environment <zfsBE> successful.
++++++++++++++++++++++++++++
The above may take a few minutes, so wait for it to complete. You can also check progress by running the zfs list command in another window, where you will see the space used by the new BE's file systems increase.
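For example, from another terminal (a sketch; the dataset name matches the BE created above):
++++++++++++++++++++++++++++
bash-3.00# zfs list -r rpool   # the USED column of rpool/ROOT/zfsBE grows as lucreate copies data
++++++++++++++++++++++++++++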
Now check the BE status
++++++++++++++++++++++++++++
bash-3.00# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufsBE yes yes yes no -
zfsBE yes no no yes -
++++++++++++++++++++++++++++
A few things to understand in the above output:
++++++++++++++++++++++++++++
Is Complete (Yes/No): shows whether the BE was created successfully.
Active Now (Yes/No): shows whether the system is currently booted from this BE.
Active On Reboot (Yes/No): shows which BE will be used on the next reboot.
Can Delete (Yes/No): a BE CANNOT be deleted while it is active; any inactive BE can be deleted.
Copy Status: shows the progress of a copy/upgrade operation, if one is running.
++++++++++++++++++++++++++++
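As an illustration of the Can Delete column: once you have booted from zfsBE and are happy with it, ufsBE is no longer active and could be removed. A sketch only; do not run this until you are sure you no longer need the UFS fallback:
++++++++++++++++++++++++++++
bash-3.00# ludelete ufsBE   # deletes the inactive ufsBE boot environment
++++++++++++++++++++++++++++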

As we are going to boot the system from ZFS, let's make that BE active on the next reboot. To do that:
++++++++++++++++++++++++++++
bash-3.00# luactivate zfsBE
Generating boot-sign, partition and slice information for PBE <ufsBE>
A Live Upgrade Sync operation will be performed on startup of boot environment <zfsBE>.
Generating boot-sign for ABE <zfsBE>
NOTE: File </etc/bootsign> not found in top level dataset for BE <zfsBE>
Generating partition and slice information for ABE <zfsBE>
Boot menu exists.
Generating multiboot menu entries for PBE.
Generating multiboot menu entries for ABE.
Disabling splashimage
Re-enabling splashimage
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unaffected
Done eliding bootadm entries.
**********************************************************************
The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the target BE.
**********************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:
1. Boot from the Solaris failsafe or boot in Single User mode from Solaris
Install CD or Network.
2. Mount the Parent boot environment root slice to some directory (like
/mnt). You can use the following command to mount:
mount -Fufs /dev/dsk/c0d0s0 /mnt
3. Run <luactivate> utility with out any arguments from the Parent boot
environment root slice, as shown below:
/mnt/sbin/luactivate
4. luactivate, activates the previous working boot environment and
indicates the result.
5. Exit Single User mode and reboot the machine.
**********************************************************************
Modifying boot archive service
Propagating findroot GRUB for menu conversion.
File </etc/lu/installgrub.findroot> propagation successful
File </etc/lu/stage1.findroot> propagation successful
File </etc/lu/stage2.findroot> propagation successful
File </etc/lu/GRUB_capability> propagation successful
Deleting stale GRUB loader from all BEs.
File </etc/lu/installgrub.latest> deletion successful
File </etc/lu/stage1.latest> deletion successful
File </etc/lu/stage2.latest> deletion successful
Activation of boot environment <zfsBE> successful.
++++++++++++++++++++++++++++
Now you will see the change in the Active On Reboot column, as shown below.
++++++++++++++++++++++++++
bash-3.00# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufsBE yes yes no no -
zfsBE yes no yes no -
++++++++++++++++++++++++++++
While running luactivate, it warned us to use the init or shutdown command for the reboot, so let's reboot the server using the init command.
++++++++++++++++++++++++++++
bash-3.00# init 6
propagating updated GRUB menu
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <zfsBE> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
File </etc/lu/GRUB_backup_menu> propagation successful
File </etc/lu/menu.cksum> propagation successful
File </sbin/bootadm> propagation successful
bash-3.00#
++++++++++++++++++++++++++++
If you are running on a VM, after the reboot you will see a new entry for zfsBE in the GRUB menu.
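If you prefer to check the GRUB menu from the command line instead of the console, bootadm can list it (a small sketch):
++++++++++++++++++++++++++++
bash-3.00# bootadm list-menu   # lists the GRUB menu entries and the default entry
++++++++++++++++++++++++++++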
Once your system is up, you can verify that the root FS is now on ZFS.
++++++++++++++++++++++++++++
bash-3.00# df -h
Filesystem size used avail capacity Mounted on
rpool/ROOT/zfsBE 20G 6.3G 11G 38% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 2.3G 388K 2.3G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap1.so.1
17G 6.3G 11G 38% /lib/libc.so.1
fd 0K 0K 0K 0% /dev/fd
swap 2.3G 44K 2.3G 1% /tmp
swap 2.3G 32K 2.3G 1% /var/run
rpool 20G 36K 11G 1% /rpool
rpool/ROOT 20G 21K 11G 1% /rpool/ROOT
bash-3.00# uname -a
SunOS debasishsol 5.10 Generic_142910-17 i86pc i386 i86pc
bash-3.00# cat /etc/vfstab
#live-upgrade:<Tue Aug 6 13:35:04 IST 2013> updated boot environment <zfsBE>
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#
fd - /dev/fd fd - no -
/proc - /proc proc - no -
#live-upgrade:<Tue Aug 6 13:35:04 IST 2013>:<zfsBE># /dev/dsk/c0d0s1 - - swap - no -
/dev/zvol/dsk/rpool/swap - - swap - no -
#live-upgrade:<Tue Aug 6 13:35:04 IST 2013>:<zfsBE># /dev/dsk/c0d0s0 /dev/rdsk/c0d0s0 / ufs 1 no -
#live-upgrade:<Tue Aug 6 13:35:04 IST 2013>:<zfsBE># /dev/dsk/c0d0s4 /dev/rdsk/c0d0s4 /var ufs 1 no -
/dev/dsk/c0d0s3 /dev/rdsk/c0d0s3 /globaldevice ufs 2 yes -
/devices - /devices devfs - no -
sharefs - /etc/dfs/sharetab sharefs - no -
ctfs - /system/contract ctfs - no -
objfs - /system/object objfs - no -
swap - /tmp tmpfs - yes -
bash-3.00# cat /etc/system
*ident "@(#)system 1.18 97/06/27 SMI" /* SVR4 1.5 */
*
* SYSTEM SPECIFICATION FILE
*
* moddir:
*
* Set the search path for modules. This has a format similar to the
* csh path variable. If the module isn't found in the first directory
* it tries the second and so on. The default is /kernel /usr/kernel
*
* Example:
* moddir: /kernel /usr/kernel /other/modules
* root device and root filesystem configuration:
*
* The following may be used to override the defaults provided by
* the boot program:
*
* rootfs: Set the filesystem type of the root.
*
* rootdev: Set the root device. This should be a fully
* expanded physical pathname. The default is the
* physical pathname of the device where the boot
* program resides. The physical pathname is
* highly platform and configuration dependent.
*
* Example:
* rootfs:ufs
* rootdev:/sbus@1,f8000000/esp@0,800000/sd@3,0:a
*
* (Swap device configuration should be specified in /etc/vfstab.)
* exclude:
*
* Modules appearing in the moddir path which are NOT to be loaded,
* even if referenced. Note that `exclude' accepts either a module name,
* or a filename which includes the directory.
*
* Examples:
* exclude: win
* exclude: sys/shmsys
*forceload:
*
* Cause these modules to be loaded at boot time, (just before mounting
* the root filesystem) rather than at first reference. Note that
* forceload expects a filename which includes the directory. Also
* note that loading a module does not necessarily imply that it will
* be installed.
*
* Example:
* forceload: drv/foo
* set:
*
* Set an integer variable in the kernel or a module to a new value.
* This facility should be used with caution. See system(4).
*
* Examples:
*
* To set variables in 'unix':
*
* set nautopush=32
* set maxusers=40
*
* To set a variable named 'debug' in the module named 'test_module'
*
* set test_module:debug = 0x13
++++++++++++++++++++++++++++

You may check the changes in the /etc/vfstab and /etc/system files as shown above.
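A couple of quick checks you can run at this point (a sketch; rpool/swap is the swap zvol referenced in the vfstab above):
++++++++++++++++++++++++++++
bash-3.00# swap -l                  # swap should now be on /dev/zvol/dsk/rpool/swap
bash-3.00# zpool get bootfs rpool   # the dataset the system boots from (rpool/ROOT/zfsBE)
bash-3.00# lustatus                 # zfsBE should now show Active Now = yes
++++++++++++++++++++++++++++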
That's all.
