
HANDBOOK FOR CONFIGURATION AND MANAGEMENT OF A FREENAS SAN

Written by Michael Kleinpaste | Friday, February 26, 2010



Table of Contents

Introductions
    FreeNAS
    Components for building a FreeNAS SAN
Key Concepts in FreeNAS
Setting up the FreeNAS SAN
ZFS Management


Introductions
FreeNAS
The FreeNAS storage server is an embedded open source NAS (Network-Attached Storage) distribution based on FreeBSD. It supports the following protocols: CIFS (Samba), FTP, NFS, TFTP, AFP, RSYNC, Unison, iSCSI (initiator and target) and UPnP. In addition to these protocols, FreeNAS supports software RAID (0, 1, 5), ZFS, disk encryption, and S.M.A.R.T./email monitoring, all managed through a web configuration interface (derived from m0n0wall). Due to its small footprint, FreeNAS can be installed on a Compact Flash card, USB key or hard drive, or booted from LiveCD. The purpose of FreeNAS and ZFS is to provide a full SAN experience without depending on proprietary commercial SAN products that cost 10x to 100x more than a similarly provisioned FreeNAS SAN. Through its ZFS filesystem, FreeNAS provides features that cost thousands to tens of thousands of dollars to implement through a commercial vendor.

Components for building a FreeNAS SAN


The FreeNAS server is comprised of the following components:

• A server running the FreeNAS operating system. The server need not be expensive; however, the more powerful the server, the less work it must do to pass data.

• One or more JBOD (Just a Bunch Of Disks) subsystem(s) holding 2.5" drives. A JBOD is a dedicated drive chassis that presents its internal drives as though they were independently attached to the server through a SAS card. Multiple JBOD subsystems can be chained to the same system up to the maximum supported drive count of the connecting SAS card(s). Although any JBOD chassis will do and they can be mixed and matched, the preferred JBOD system in this installation is the XJ1100 Series 2U 24-drive JBOD system from AIC (www.aicipc.com). It is available via rackmountpro.com as RM2U2024-SA3-212R-JBOD-P350WRPS for the single I/O model and RM2U2024-SA26-212R-JBOD-P350WRPS for the dual I/O model.

• One or more SAS cards (with 8088-to-8088 SAS cable(s)) to provide connectivity from one or more JBOD chassis to the FreeNAS server. The preferred SAS cards are LSI models, as they are known to be compatible with FreeBSD/FreeNAS. The card used in this installation is the LSI SAS 3801X (PCI-X). For PCI-E installations use the LSI SAS 9200-8e (512 max drives) or the LSI SAS 9200-16e (1024 max drives).

• 2.5" drives. SAS, SATA II and SSD (SATA II) drives can be mixed and matched. Note: To use SAS drives you must have two SAS expanders in the JBOD system. When using SSDs, use SLC-NAND drives. Do not use MLC-NAND drives, as they do not have the write-cycle lifetime of SLC drives: SLC can sustain up to 5,000,000 write cycles whereas MLC can only sustain up to 1,000,000.

• An iSCSI-aware Gigabit network switch. This installation uses the Dell PowerConnect 5448.

• One or more Intel PRO/1000 MT Quad Port NIC(s). These are LAGged together in combination with the switch to create 4-Gigabit iSCSI connections in and out of the FreeNAS server and the attached server(s).


Key Concepts in FreeNAS


Learning any new system is a massive undertaking, but there are several key concepts that give a basic understanding of a FreeNAS-based SAN. Understanding these key items enables you to quickly support and manage the system.

• FreeNAS (currently 0.7.1.4997, running FreeBSD 7.2) and FreeBSD (currently 8.0)

• Zettabyte File System (ZFS) - ZFS is a filesystem, like NTFS, FAT32 or HFS+, that enables systems to provide SAN-like performance, capabilities and scalability without expensive proprietary hardware. ZFS combined with FreeNAS creates a system that has been proven to outperform enterprise SANs while remaining highly scalable and including the majority of the extras that proprietary vendors charge for, such as de-duplication, thin provisioning, etc. Note: Some features will not be available until FreeNAS 0.8.

• zfs commands
  - vdev (Virtual Device)
  - Creating ZFS volumes to be shared via iSCSI
  - Thin provisioned zfs volumes - Thin provisioned volumes present themselves to the target operating system as drives that have more available space than the actual space provisioned. This allows storage managers to buy only the storage they need at the time of purchase and add more to the ZFS pool as needed. Note: Windows Server has a maximum partition size of 2 terabytes.
  - Deleting ZFS volumes and files

• zpool command
  - Creating zpools in ZFS:
    RAID-0 - Grouped drives with no redundancy.
    Mirrored (RAID-10) - 4+ drives in sets of 2. One drive can fail in each set without overall system failure.
    RAID-Z - RAID-5 without the RAID-5 write hole. 3+ drives. One drive can fail without overall system failure.
    RAID-Z2 - Like RAID-6, with two parity stripes. 4+ drives. Two drives can fail without overall system failure.
    RAID-Z3 - Triple parity with three parity stripes. 5+ drives. Three drives can fail without overall system failure.
  - Cache (implemented in the coming FreeNAS 0.8)
  - Spares
  - Replacing drives

• iSCSI
  - OS X iSCSI initiator
  - Linux iSCSI initiator
  - Windows iSCSI initiator
  - iSCSI targets (served from the FreeNAS server)
  - iSCSI extents - Created to provide connectivity between a ZFS volume and an iSCSI target.

• SSH - Used to manage the ZFS structure remotely.

• camcontrol command - Used to rescan the SAS subsystem for new or missing drives (see the sketch below).
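As an illustration of the camcontrol item above, a rescan can be run from an SSH session on the FreeNAS server; this is a minimal sketch using a standard FreeBSD camcontrol subcommand (the buses and device names reported will depend on your hardware):

#camcontrol rescan all - Rescans every SAS/SCSI bus so that newly inserted or removed JBOD drives are detected without a reboot.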


Setting up the FreeNAS SAN


The process of setting up the FreeNAS Server is fairly straightforward.

FreeNAS Server
1. Install the SAS card(s) in your FreeNAS server.
2. Install the Quad Port NIC(s) in your FreeNAS server.
3. Install the drive(s) in your FreeNAS server. Note: keep it minimal. If you can run it from USB or a single SSD, do so. The number of drives in your server affects the drive listing in your JBOD(s). For instance, if you have 3 drives in your server, the first drive in your first tray will be listed as /dev/da3, not /dev/da1 or /dev/da2.
4. Install your drives in the JBOD subsystem. Make sure to label them as you go. ZFS adds drives to the pool based on their dev name, which is listed during the scan.
5. Connect the JBOD UP1 port to the left port on the SAS card.
6. Plug a single network cable into the first port of the built-in network ports on the FreeNAS server. DO NOT USE THE QUAD PORT NIC PORTS.
7. Install the FreeNAS operating system from a downloaded and burned ISO to the first drive or drive set internal to the server. Note: Due to its small footprint, FreeNAS can be installed on a Compact Flash card, USB key or hard drive, or booted from LiveCD. When installed on internal hard drives it is preferable that the OS drives be in an internal RAID array; the recommended RAID level is RAID 10.
8. At the console screen, configure the management port. In FreeNAS this is known as the LAN port and will be connected to your internal LAN in order to manage the system. Link the active port (usually bge0) to the LAN interface.
9. Assign the LAN interface a manual IP address.

10. From a web browser, preferably Firefox, connect to the IP address you assigned to the LAN interface, e.g. http://192.168.0.100. The initial username and password are admin / freenas. Note: The admin account for the web interface is an alias for the command line root account; the passwords are interchangeable.
11. Configure additional general information under System > General, including changing the admin password and setting the web interface to HTTPS.
12. Assign the 4 ports (em0, em1, em2 and em3) to the LAGG group via Network > LAGG. This is usually set up as lagg0, lagg1, etc.
13. Assign the LAGG to the OPT1 interface and reboot the system.
14. Assign an IP address to OPT1 via Network > OPT1. Note: The storage network traffic will be separate from LAN traffic, so assign it an IP address in another subnet.
15. Make all the drives from the JBOD subsystem available to ZFS via Disks > Management. Turn on S.M.A.R.T. monitoring and set the preformatted file system to "zfs storage pool device". Note: RAIDed drives will appear in /dev/ as sda, sdb, etc. Non-RAIDed drives will appear in /dev/ as da#, i.e. da1, da2, da3. If the OS is installed on a single drive and there are one or more additional non-RAIDed drives inside the server chassis (not the JBOD), the first JBOD drive will be numbered one higher than the last server drive, with the count starting at drive 0 (zero). For instance, if the OS is installed on drive 0 and there are 2 other drives in the server chassis, the first drive in the JBOD will be da3. Drive positions do not move if a drive is transferred from one bay to another in the same subsystem; the position is static to the bay as it is scanned. (A quick way to verify the device names is sketched below.)
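As a quick check of the device numbering described above, the attached drives can be listed from an SSH session; this is a minimal sketch using standard FreeBSD commands (the da# names in your output will depend on how many internal drives the server has):

#camcontrol devlist - Lists every attached disk with its da# device name, so you can confirm where the JBOD drives begin.
#ls /dev/da* - Shows the da# device nodes (and any partitions) that have been created for the internal and JBOD drives.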


ZFS Management
For iSCSI presentation you will need to create ZFS volumes. For simple file sharing via FTP, CIFS or another protocol, you will instead create a simple ZFS filesystem that is mounted within the normal FreeNAS filesystem.
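For reference, a plain ZFS filesystem for file-level sharing is created with the same zfs command used for volumes later in this section; this is a minimal sketch assuming a pool named tank and a hypothetical share name:

#zfs create tank/share01 - Creates a ZFS filesystem named share01 in the tank storage pool.
#zfs set mountpoint=/mnt/share01 tank/share01 - Mounts the filesystem at /mnt/share01, where it can then be exported via CIFS, NFS or FTP from the FreeNAS web interface.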

Naming Conventions
Any naming convention can be used for pools, filesystems and volumes, but the best practice is to use names that convey something meaningful about the pool. I prefer to name them according to the drive type and size, since zfs doesn't list that information, for example sata10k150G/fp01_Edrive. This name tells us that the storage pool is made up of 10K RPM, 150GB SATA drives (the "sata" part has to come first because ZFS requires pool names to begin with a letter) and that the filesystem (or volume in this case) is intended as the E: drive for the server named fp01. This convention should be carried throughout the storage management process, including iSCSI configurations, NFS, etc.
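Putting this convention into practice, the pool and volume might be created as follows (pool creation itself is covered in the next section; the drive names and sizes here are placeholders for illustration):

#zpool create sata10k150G mirror da3 da4 mirror da5 da6 - Creates a RAID-10 pool whose name records the drive type and size.
#zfs create -V 100G sata10k150G/fp01_Edrive - Creates a 100GB volume in that pool, named for the server (fp01) and the drive letter (E:) it will become.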

Creating ZPOOLs
Before you can do anything with zfs you must first create a zpool. Zpools represent the basic collective storage that is then sliced out through the zfs command as file shares or volumes. They are created by configuring sets of RAIDed drives, known as vdevs, that are then collectively pooled together into storage pools. Zpools can be created for different purposes and mapped to servers and services either (1 zpool):(1 server/service) or (1 zpool):(many servers/services). To create different RAIDed storage pools use the following commands:

#zpool create tank da3 da4 da5 da6 - Creates a RAID-0 storage pool named tank consisting of da3, da4, da5 and da6.
#zpool create tank mirror da3 da4 mirror da5 da6 - Creates a RAID-10 storage pool consisting of da3 and da4 in the first set and da5 and da6 in the second set.
#zpool create tank raidz da3 da4 da5 - Creates a RAID-Z storage pool consisting of da3, da4 and da5.
#zpool create tank raidz2 da3 da4 da5 da6 - Creates a RAID-Z2 storage pool.
#zpool create tank raidz3 da3 da4 da5 da6 da7 - Creates a RAID-Z3 storage pool.

Spare drives can be added at creation or later. Spares can be removed with the remove option; however, drives may not be removed from storage pools until an upcoming release of the base ZFS code trickles down to the FreeNAS project. To add hot spares to a storage pool use the following commands:

#zpool create tank mirror da3 da4 mirror da5 da6 spare da7 - Creates a RAID-10 storage pool with da7 as a hot spare.
#zpool add tank spare da7 - Adds da7 as a hot spare to the storage pool tank.

Drives and vdevs cannot be removed from a storage pool to reclaim unused drive space at this time. Currently, individual drives can only be replaced upon failure or upgraded in place to add available storage space. To replace a hard drive in a storage pool use the following commands:

#zpool replace tank da3 da17 - Replaces drive da3 with drive da17.
#zpool replace -f tank da3 da17 - Forces the replacement of da3 with da17 if da3 shows that it is still in use. Some drives may not respond to the -f switch.
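After creating or modifying a pool, its layout and health can be confirmed with the standard zpool reporting commands (a minimal sketch; tank is the example pool name used above):

#zpool status tank - Shows the vdev layout, the state of each drive, any hot spares and any rebuild (resilver) in progress.
#zpool list - Shows each pool's total size, used and free space, and overall health.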


Growing Storage Pools


Zpools are not a static unit as with traditional SANs; ZFS is a highly flexible filesystem where growth is concerned. As growth needs increase, storage can be added at any time to provide additional space with no downtime. This is especially important when provisioning sparse volumes (thin provisioning), where space must be closely monitored to ensure actual usage does not exceed actual available space. The most common way to add storage space to a zpool is by adding additional drives to it. To add drives to an existing storage pool use the following commands:

#zpool add tank da7 - Adds da7 to the existing RAID-0 storage pool or any RAID-Z/Z2/Z3 storage pool.
#zpool add tank mirror da7 da8 mirror da9 da10 - Adds da7/da8 and da9/da10 as two new mirrored sets in the RAID-10 storage pool.

Space within a storage pool can also be grown without adding drives to the pool. Larger, newer drives can replace older, smaller drives within the same drive bay(s) to add space to the assigned storage pool. This allows a system to grow its storage without using additional drive bays; additional bays may then be used only where it makes sense, for instance when a storage pool requires more I/O throughput through the addition of more drives. To add available space to a storage pool through drive replacement do the following (see the sketch after these steps):

1. Replace one drive in each vdev and wait until the vdev's RAID has rebuilt the drive.
2. Replace each additional drive in the vdev, waiting for each one to complete its rebuild, until all the drives in the vdev(s) are replaced. The pool will then benefit from the additional storage of the newer, larger drives automatically.
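As a sketch of the replacement procedure above (the da# names are examples only), the rebuild of each swapped drive can be watched with zpool status before moving on to the next one:

#zpool replace tank da3 da17 - Swaps the first smaller drive (da3) for the new, larger drive (da17) in its vdev.
#zpool status tank - Shows the resilver progress; wait until the vdev reports ONLINE before replacing the next drive.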

L2ARC Caching
L2ARC is a new level of storage specific to ZFS and will be implemented in FreeNAS 0.8, which uses FreeBSD 8.0 as its OS base. Spindle drives suffer a CPU-to-storage access latency roughly 100,000x worse than the CPU-to-RAM access latency. L2ARC places SSDs in a new tier of the memory/CPU stack to reduce this latency differential by caching commonly read/written files on the SSDs instead of serving them from the slower spindle drives, taking advantage of ZFS's ability to create Hybrid Storage Pools that mix different drive types and speeds in the same storage pool. A Hybrid Storage Pool allows the system to provide full SSD performance for the files that require the lowest possible latency without requiring every drive in the storage pool to be an SSD, essentially creating a fourth cache level: 1 - internal (L1) cache of the CPU, 2 - L2 CPU cache, 3 - system RAM, 4 - SSD hybrid cache, and last - spindle drives. To add SSDs as L2ARC cache (once released in FreeNAS 0.8):

#zpool create tank mirror da3 da4 mirror da5 da6 cache da15 da16 - Creates a RAID-10 storage pool, as above, adding da15 and da16 (SSD drives) as L2ARC cache.
#zpool add tank cache da15 da16 - Adds SSDs da15 and da16 as L2ARC cache to the existing storage pool.

Creating ZFS Volumes


Once one or more zpools are created, you may then provision the storage pools, either by splitting them up or on a 1:1 basis, via the zfs command. There are two options for creating ZFS volumes: standard volumes and thin provisioned volumes.


Fat provisioned (standard) volumes are akin to the SAN industry's LUNs: a virtual device that is presented to the receiving server as a single drive/volume. A volume is created and served to the server in fixed increments. Adding storage to a fat provisioned volume requires 1) modifying the volsize property of the volume with the zfs set option, 2) modifying the extent in the FreeNAS web GUI, 3) refreshing the iSCSI connection, and 4) resizing the volume (if possible) with the target operating system's internal disk management utilities. Thin provisioned volumes are presented to the server as being physically larger than the actual available storage. This allows storage managers to provision servers with their maximum capable storage while only buying additional drives as they need to increase the actual storage. It also makes adding storage to a target host simpler, as there is no need to make changes at any level above the storage pool: the new drives and their storage are added with the zpool command, as above, and are instantly available to the provisioned servers with no additional commands. The true amount of space available to the volume is determined by the actual available space of the storage pool the volume resides in. Note: 2TB is the maximum partition size Windows NTFS can manage; if a volume is created above 2TB, Windows will have to split the volume into multiple partitions.

WARNING: When using Thin Provisioned or sparse volumes it is imperative that actual storage pool usage on the FreeNAS server be continuously monitored to ensure that storage is not depleted. The servers receiving their ZFS volumes will be unaware of the actual available storage on the SAN.
To create ZFS volumes use the following commands:

#zfs create -V 250G tank/volume01 - Creates a standard zfs volume in the tank storage pool that is 250GB in size and named volume01.
#zfs create -s -V 2TB tank/volume02 - Creates a thin provisioned (sparse) zfs volume in the tank storage pool that is presented as 2TB in size and named volume02.
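In connection with the warning above, actual pool and volume usage can be checked from the command line at any time; this is a minimal sketch using standard zfs/zpool reporting commands against the example names above:

#zfs list -o name,volsize,used,available tank/volume02 - Compares the size presented to the server (volsize) with the space actually consumed and the space remaining in the pool.
#zpool list tank - Shows the pool's total capacity, allocated space and free space at a glance.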

Expanding A Standard Volume


The flexibility of ZFS extends beyond its storage pools. Once created, ZFS volumes can be further expanded to present additional volume space to a target system. Systems can then expand their existing utilization in whatever manner applies best, whether that is expanding an existing partition to span the new space or adding additional partitions. Both the functionality and the command for expanding a volume apply to standard and sparse volumes equally. To expand a ZFS volume use the following command:

#zfs set volsize=2T tank/volume01 - Expands the volume's size to 2 terabytes.
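The new size can then be confirmed before refreshing the iSCSI extent and rescanning the disk on the target host (volume01 is the example name used above):

#zfs get volsize tank/volume01 - Confirms that the volume now reports the expanded 2TB size.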

