
Physical Storage

Module 3
Data ONTAP 7-Mode Administration

Module Objectives
By the end of this module, you should be able to:
Describe Data ONTAP RAID technology
Identify a disk in a disk shelf based on its ID
Execute commands to determine disk ID
Identify a hot-spare disk in a FAS system
Describe the effects of using multiple disk types
Create a 32-bit and 64-bit aggregate
Execute aggregate commands in Data ONTAP
Calculate usable disk space
2011 NetApp, Inc. All rights reserved.

Storage
Data ONTAP provides data storage for clients
Storage is made available to clients by a volume (or a smaller increment within a volume)
Volumes are discussed in Module 4
Volumes are made available to clients through protocols discussed later in this course
Volumes are contained in an aggregate
Aggregates are not visible to clients

Storage Architecture


Storage Architecture
Aggregates:
Are created by administrators
Contain one or more plexes
Aggregate types:
Traditional: deprecated
32-bit: 16-TB limitation
64-bit: Data ONTAP 8.0.x only

system> aggr status
          Aggr State   Status          Options
     aggr_trad online  raid4, trad
                       32-bit
         aggr0 online  raid_dp, aggr   root
                       32-bit
         aggr1 online  raid_dp, aggr
                       64-bit

Storage Architecture (Cont.)


Plex:
Provides mirroring capabilities (similar to RAID 1) with the SyncMirror product
Contains one or more RAID groups
Only one plex in an aggregate unless mirroring is used

system> sysconfig -r
...
Plex /aggr1/plex0 (online, normal, active, pool0)
  RAID group /aggr1/plex0/rg0 (normal)
  ...
  RAID group /aggr1/plex0/rg1 (normal)
...

Disks will belong to pool0 unless part of SyncMirror

Storage Architecture (Cont.)


RAID group:
Provides data protection
Contains two or more disks
RAID types:
RAID 4
RAID-DP (a RAID 6 implementation)

system> sysconfig -r
...
RAID group /aggr1/plex0/rg0 (normal)
  RAID Disk Device HA SHELF BAY CHAN Pool...
  --------- ------ -- ----- --- ---- ----
  parity    0a.24  0a 1     8   FC:A 0...
  data      0a.25  0a 1     9   FC:A 0...

Storage Architecture (Cont.)


Disks:
Store data
Are contained in shelves
Are made up of 4-KB blocks
Disk types:
Parity
Data

system> sysconfig -r
...
RAID group /aggr1/plex0/rg0 (normal)
  RAID Disk Device HA SHELF BAY CHAN Pool...
  --------- ------ -- ----- --- ---- ----
  parity    0a.24  0a 1     8   FC:A 0...
  data      0a.25  0a 1     9   FC:A 0...

Disks


Disks
All data is stored on disks
To understand how physical media is managed in your storage system, we will address:
Disk types
Disk qualification
Disk ownership
Spare disks

Supported Disk Topologies


  Type  Shelves                               Supported platforms
  ----  ------------------------------------  --------------------------
  FC    DS14mark2, DS14mark4 (ESH2 and ESH4)  FAS2000, FAS3200, FAS6200
  SATA  DS14mark2-AT                          FAS2000, FAS3200, FAS6200
  SAS   DS4243, DS2246                        FAS2000*, FAS3200, FAS6200

  * Some limitations; check the NOW site

Disk Qualification
NetApp allows only qualified disks to be used with Data ONTAP
Ensures:
Quality
Reliability
Enforced by /etc/qual_devices
Do not modify this file
Caution: Modifying the disk qualification requirement file can cause your storage system to halt.

Disk Names
The system assigns the Disk ID automatically from the host_adapter (HA) and device_id

system> sysconfig -r
Aggregate aggr0 (online, raid_dp, redirect) (block checksums)
  Plex /aggr0/plex0 (online, normal, active)
    RAID group /aggr0/plex0/rg0 (normal)

      RAID Disk Device HA SHELF BAY CHAN Pool Type RPM   Used (MB/blks)...
      --------- ------ -- ----- --- ---- ---- ---- ----- ----------------
      dparity   0a.16  0a 1     0   FC:A      FCAL 10000 34000/69632000...
      parity    0a.17  0a 1     1   FC:A      FCAL 10000 34000/69632000...
      data      0a.18  0a 1     2   FC:A      FCAL 10000 34000/69632000...

Disk ID = host_adapter.device_id

Disk Names: host_adapter


host_adapter is the designation for the slot and port where an adapter is located (for example, 0a)

(Figure: FAS6280 with optional IOXM)

Disk Names: device_id

The device_id is determined by the shelf ID and bay number:

  Shelf ID   Bay Numbers   Device IDs
  1          0-13          16-29
  2          0-13          32-45
  3          0-13          48-61
  4          0-13          64-77
  5          0-13          80-93
  6          0-13          96-109
  7          0-13          112-125
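The table above suggests a simple rule: each shelf on an FC loop reserves 16 device IDs, starting at shelf_id × 16. A small sketch of that rule (the function name is ours, not a Data ONTAP API):

```python
def fc_device_id(shelf_id: int, bay: int) -> int:
    """Device ID for a disk in a DS14 shelf on an FC loop.

    Assumes the 16-IDs-per-shelf layout in the table above
    (shelf 1 -> 16-29, shelf 2 -> 32-45, ..., shelf 7 -> 112-125).
    """
    if not (1 <= shelf_id <= 7):
        raise ValueError("shelf ID must be 1-7")
    if not (0 <= bay <= 13):
        raise ValueError("bay must be 0-13")
    return shelf_id * 16 + bay

print(fc_device_id(1, 0))   # 16
print(fc_device_id(7, 13))  # 125
```

For example, disk 0a.24 from the earlier sysconfig -r output sits in shelf 1, bay 8 (1 × 16 + 8 = 24).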

The fcstat device_map Command


Use the fcstat command to troubleshoot disks and shelves
Use the fcstat device_map command to show disks and their relative physical positions on an FC loop

system> fcstat device_map
Loop Map for channel 0a:
Translated Map: Port Count 15
  7 29 28 27 25 26 23 22 21 20 16 19 18 17 24
Shelf mapping:
  Shelf 1: 29 28 27 26 25 24 23 22 21 20 19 18 17 16
Loop Map for channel 0b:
Translated Map: Port Count 15
  7 45 44 43 41 42 39 38 37 36 32 35 34 33 40
Shelf mapping:
  Shelf 2: 45 44 43 42 41 40 39 38 37 36 35 34 33 32

Disk Ownership
Disks are assigned to one system controller
Disk ownership is either:
Hardware-based: determined by the slot position of the host bus adapter (HBA) and the shelf module port
Software-based: determined by the storage system administrator
Storage systems supporting software disk ownership: FAS6000 series, FAS3100 series, FAS3000 series, FAS2000 series

Disk Ownership (Cont.)


To determine your system's ownership type:
system> storage show
Hardware-based: SANOWN not enabled
Software-based: reports the current ownership
In a stand-alone storage system without SyncMirror:
Disks are owned by a single controller
Disks are in pool0
High availability and SyncMirror are discussed in Module 13

Hardware-Based Ownership
Determined by two conditions:
1. How a storage system is configured
2. How the disk shelves are attached to it
If a stand-alone system, the system owns all disks directly attached to it
If part of a high-availability configuration (discussed in Module 13):
The local node owns disks connected to the ESH A channel
The partner owns disks connected to the ESH B channel

Software-Based Ownership
Determined by the storage system administrator
To verify current ownership:
system> disk show -v
  DISK      OWNER             POOL   SERIAL NUMBER
  --------- ----------------- ------ -------------
  0b.43     Not Owned         NONE   41229013
  ...
  0b.29     system (84165672) Pool0  41229011
  ...
To view all disks without an owner:
system> disk show -n
  DISK      OWNER             POOL   SERIAL NUMBER
  --------- ----------------- ------ -------------
  0b.43     Not Owned         NONE   41229013
  ...

Software-Based Ownership (Cont.)


To assign disk ownership, use:
system> disk assign {disk_list|all|[-T storage_type] -n count|auto}...
disk_list specifies the Disk IDs of the unassigned disks that you wish to work with
-T is either ATA, FCAL, LUN, SAS, or SATA
To assign a specific set of disks:
system> disk assign 0b.43 0b.41 0b.39
To assign all unassigned disks:
system> disk assign all
To unassign disks:
system> disk assign 0b.39 -s unowned -f
-s specifies the sysid to take ownership
-f forces assignment of previously assigned disks
NOTE: Unassign only hot-spare disks

Software-Based Ownership (Cont.)


Automatic assignment option:
system> options disk.auto_assign
Specifies whether disks are automatically assigned on systems with software disk ownership
Default: on
Data ONTAP looks for any unassigned disks and assigns them to the same system and pool as other disks on their loop
Automatic assignment is invoked:
Every 5 minutes
10 minutes after boot
To invoke assignment manually:
system> disk assign auto

Matching Disk Speeds


When creating an aggregate, Data ONTAP selects disks:
With the same speed
That match the speed of existing disks
Data ONTAP verifies that adequate spares are available
If spares are not available, Data ONTAP warns you
NetApp recommends having spares available

Using Multiple Disk Types in an Aggregate


Drives in an aggregate can be:
Different speeds (not recommended)
On the same shelf or on different shelves
Avoid mixing drive types within an aggregate:
FC and SAS can be mixed (not recommended)
FC and SATA or SAS and SATA cannot be mixed
On a single controller, the spares pool is global

Spare Disks
Spare disks are used to:
Increase aggregate capacity
Replace failed disks
Disks must be zeroed before use
They are automatically zeroed when brought into use
NOTE: NetApp recommends zeroing disks prior to use:
system> disk zero spares

System Manager: Disk Management

Select Disks to reveal a list of disks

Disk Protection
and Validation


Disk Protection and Validation


Data ONTAP protects data through: RAID
Data ONTAP validates data through: disk scrubbing

RAID Groups
A RAID group is a collection of data disks and parity disks
RAID groups provide protection through parity
Data ONTAP organizes disks into RAID groups
Data ONTAP supports:
RAID 4
RAID-DP

RAID 4 Technology
RAID 4 protects against data loss that results from a single-disk failure in a RAID group
A RAID 4 group requires a minimum of two disks:
One parity disk
One data disk

  [ Data | Data | Data | Data | Data | Data | Data | Parity ]
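The single-parity protection described above can be illustrated with XOR arithmetic. This is a toy sketch of the general parity idea, not Data ONTAP's actual on-disk format:

```python
from functools import reduce

# The parity disk holds the XOR of the data disks, so the contents of
# any one lost disk equal the XOR of all surviving disks.
data_disks = [0b1010, 0b0110, 0b1100]            # toy 4-bit "blocks"
parity = reduce(lambda a, b: a ^ b, data_disks)  # parity disk contents

lost = data_disks[1]                             # pretend disk 1 fails
survivors = [data_disks[0], data_disks[2], parity]
rebuilt = reduce(lambda a, b: a ^ b, survivors)  # XOR the survivors
print(rebuilt == lost)  # True
```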

RAID-DP Technology
RAID-DP protects against data loss that results from double-disk failures in a RAID group
A RAID-DP group requires a minimum of three disks:
One parity disk
One double-parity disk
One data disk

  [ Data | Data | Data | Data | Data | Data | Parity | Double-Parity ]

RAID Group Size


RAID-DP

  NetApp Platform                             Minimum     Maximum     Default
                                              Group Size  Group Size  Group Size
  All storage systems (with SATA disks)       3           16          14
  All storage systems (with FC or SAS disks)  3           28          16

RAID 4

  NetApp Platform                             Minimum     Maximum     Default
                                              Group Size  Group Size  Group Size
  All storage systems (with SATA disks)       2           7           7
  All storage systems (with FC or SAS disks)  2           14          8
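The group sizes above determine how an aggregate's disks split into RAID groups. A sketch with an assumed simple fill policy (Data ONTAP's actual layout logic is richer than this):

```python
def raid_group_layout(num_disks, group_size=16, parity_per_group=2):
    """Split disks into RAID groups of at most group_size disks.

    parity_per_group=2 models RAID-DP (parity + double-parity).
    Illustration only; not Data ONTAP's real placement algorithm.
    """
    groups = []
    remaining = num_disks
    while remaining > 0:
        size = min(group_size, remaining)
        groups.append(size)
        remaining -= size
    data_disks = sum(size - parity_per_group for size in groups)
    return groups, data_disks

# "aggr create newaggr 32" with the default RAID-DP group size of 16
# yields two full RAID groups and 28 data disks:
groups, data = raid_group_layout(32)
print(groups, data)  # [16, 16] 28
```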

Growing Aggregates
Take care in how you grow your aggregates

  Existing rg0 (full): [ Data | Data | Data | Data | Data | Data | Data | Parity ]
  Existing rg1 (full): [ Data | Data | Data | Data | Data | Data | Data | Parity ]

If you grow this existing full configuration by three disks, then the new data disks may become hot disks

Data Validation
NetApp provides data validation, using several different methods:
RAID-level checksums
Media scrub process
RAID scrub process
RAID-level checksums enhance data protection and reliability
WAFL (Write Anywhere File Layout) ensures that real-time validation always occurs
Two different checksums:
Block checksums (BCS)
Zone checksums (ZCS), used only with V-Series

Data Validation and Disk Structure


To understand disk protection and validation, you must understand disk structure
(Figure: disk platter showing a track, a sector, and a mathematical sector)

Block Checksums (BCS) Method with FC


In FC disks, the sector size is 520 bytes
A NetApp data block is 4096 bytes, stored across eight sectors
The checksum for the 4096 bytes, along with the inode number and timestamp, is stored in the remaining 64 bytes (8 sectors x 520 bytes = 4096 + 64)

Block Checksums (BCS) Method w/ ATA


In ATA disks, the sector size is 512 bytes
A NetApp data block is 4096 bytes, stored across eight sectors
The checksum for the previous 4096 bytes, along with the inode number and timestamp, is stored within a ninth sector; the remainder of that sector is wasted space
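The sector arithmetic from the two BCS slides above can be checked directly:

```python
# Sector arithmetic behind Block Checksums (BCS), per the slides above.
FC_SECTOR = 520     # bytes per sector on FC disks
ATA_SECTOR = 512    # bytes per sector on ATA disks
DATA_BLOCK = 4096   # NetApp data block size in bytes
CHECKSUM_AREA = 64  # checksum plus inode number and timestamp

# FC: eight 520-byte sectors hold one 4096-byte block plus the 64 bytes
assert 8 * FC_SECTOR == DATA_BLOCK + CHECKSUM_AREA

# ATA: eight 512-byte sectors hold the data; a ninth sector stores the
# 64-byte checksum area, and the rest of that sector is wasted space
assert 8 * ATA_SECTOR == DATA_BLOCK
wasted = ATA_SECTOR - CHECKSUM_AREA
print(wasted)  # 448
```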

Data Validation Processes


Two processes:
system> options raid.media_scrub
Checks for media errors only
If enabled, runs continuously in the background
system> options raid.scrub
Also called disk scrubbing
Checks for media errors by verifying the checksum in every block

Comparing Media and RAID Scrubs


A media scrub:
Is always running in the background when the storage system is not busy
Looks for unreadable blocks at the lowest level (0s and 1s)
Is unaware of the data stored in a block
Takes corrective action when it finds too many unreadable blocks on a disk (sends warnings or fails a disk, depending on findings)

A RAID scrub:
Is enabled by default
Can be scheduled or disabled (disabling is not recommended)
Uses RAID checksums: reads a block and then checks the data
If it finds a discrepancy between the RAID checksum and the data read, it re-creates the data from parity and writes it back to the block
Ensures that data has not become stale by reading every block in an aggregate, even when users haven't accessed the data

About Disk Scrubbing


Automatic RAID scrub:
By default, begins at 1:00 a.m. on Sundays
The schedule can be changed by an administrator
The duration can be specified by an administrator
A manual RAID scrub overrides the automatic settings
To scrub disks manually:
system> options raid.scrub.enable off
And then: system> aggr scrub start
To view scrub status:
system> aggr scrub status aggrname
To configure the reconstruction impact on performance:
system> options raid.reconstruct.perf_impact
To configure the scrubbing impact on performance:
system> options raid.scrub.perf_impact

Disk Failure and Physical Removal


To fail a disk:
system> disk fail disk_id
To unfail a disk:
system> priv set advanced
system*> disk unfail disk_id
To unload a disk so that it can be physically removed:
system> disk remove disk_id
The disk is now ready to be pulled from the shelf

Disk Sanitization
If you have sensitive data on a disk, you might want to do more than remove the disk: sanitize it
Disk sanitization is a process of physically obliterating data by overwriting disks with specified byte patterns or random data so that recovery of the original data becomes impossible
Administrators may choose up to three patterns to use, or use the default pattern specified by Data ONTAP

Disk Sanitization (Cont.)


License the storage system for sanitization:
system> license add XXXXXX
Verify the disks to be sanitized:
system> sysconfig -r
Start the sanitization operation:
system> disk sanitize start -r -c 3 disk_list
-r overwrites the disks with a random pattern
-c specifies the number of cycles to run the operation (maximum = 7)
Administrators may provide their own pattern with the -p option
disk_list specifies a space-separated list of Disk IDs
To check the status of the sanitization operation:
system> disk sanitize status
To release disks back to the spare pool:
system> disk sanitize release disk_list

Degraded Mode
Degraded mode occurs when:
A single disk fails in a RAID 4 group with no spares
Two disks fail in a RAID-DP group with no spares
Degraded mode operates for 24 hours, during which time:
Data is still available
Performance is less than optimal
Data must be recalculated from parity until the failed disk is replaced
CPU usage increases to calculate data from parity
The system shuts down after 24 hours
To change the time interval, use the options raid.timeout command
If an additional disk in the RAID group fails during degraded mode, the result is data loss

Replacing a Failed Disk by Hot Swapping


Hot-swapping is the process of removing or installing a disk drive while the system is running, and allows for:
Minimal interruption
The addition of new disks as needed
Removing two disks from a RAID 4 group:
Double-disk failure
Data loss will occur
Removing two disks from a RAID-DP group:
Degraded mode
No data loss

Replacing Failed Disks


(Figure: a failed 750-GB disk in a group of 750-GB disks is replaced by a 1-TB spare)

NOTE: Disk resizing occurs if a smaller disk is replaced by a larger one

Disk Replacement
To replace a data disk with a spare disk:
system> disk replace start diskname spare_diskname
system> disk replace start 0a.21 0a.23

  0a.20: Parity Disk   0a.21: Data Disk (target)   0a.22: Data Disk   0a.23: Spare Disk

To check the status of a replace operation:
system> disk replace status
To stop the disk replace operation:
system> disk replace stop diskname

Aggregates


Aggregates
Aggregates logically contain flexible volumes (FlexVol volumes); see the next module
NetApp recommends that aggregates be either:
32-bit
64-bit
An aggregate name must:
Begin with either a letter or the underscore character (_)
Contain only letters, digits, and underscore characters
Contain no more than 255 characters

Adding an Aggregate
To add an aggregate using the CLI:
system> aggr create ...
To add an aggregate using NetApp System Manager:
Use the Aggregate Wizard
When adding aggregates, you must have the following information available:
Aggregate name
Aggregate type (32-bit is the default)
Parity (DP is the default)
RAID group size (minimum)
Disk selection method
Disk size
Number of disks (including parity)

Creating an Aggregate Using the CLI


To create a 64-bit aggregate:
system> aggr create aggr -B 64 24
Creates a 64-bit aggregate called aggr with 24 disks
By default, this aggregate uses RAID-DP
24 disks must be available (spares) for the command to succeed
To create a 32-bit aggregate:
system> aggr create aggr -B 32 24
or
system> aggr create aggr 24

32-bit or 64-bit Aggregate


NetApp recommends the following when creating an aggregate:
32-bit: maximizes performance when there is no need to allocate more than 16 TB
64-bit: provides high performance as well as the ability to exceed the 16-TB limitation
NOTE: 64-bit aggregates are available only in Data ONTAP 8.0 and later

Common Aggregate Commands


To grow an existing aggregate:
system> aggr add aggr [options] disklist
To check the status of an existing aggregate:
system> aggr status aggr [options]
To rename an aggregate:
system> aggr rename aggr new_aggr
To take an aggregate offline (you must destroy all volumes inside before taking the aggregate offline):
system> aggr offline aggr
To put an aggregate back online:
system> aggr online aggr
To destroy an aggregate (first take the aggregate offline):
system> aggr offline aggr
system> aggr destroy aggr

System Manager: Storage View

Select Storage and launch the wizard to configure storage

Storage Configuration Wizard

NFS and CIFS are discussed in Module 7 and Module 8, respectively
This optional page within the wizard appears if you have NFS and CIFS licensed

Storage Configuration Wizard (Cont.)


System Manager: Aggregate

Select Aggregates to administer aggregates
Select Create to create a new aggregate

Create Aggregate Wizard

Check the box for a 64-bit aggregate, or leave it blank for a 32-bit aggregate

Create Aggregate Wizard (Cont.)


Create Aggregate Wizard (Cont.)


Space Allocation


Aggregate Space Allocation


Understanding how Data ONTAP allocates space is important
Space allocation balances competing concerns: using space efficiently and protecting data
In this example, we will use the following (1-TB ATA disks are used; RAID-DP is the default):
system> aggr create aggr1 5@847

  [ Data | Data | Data | Parity | Double-Parity ]

Aggregate Space Allocation (Cont.)


The first 20 MB of every disk is used for kernel space and disk metadata, such as disk labels

  [ 20 MB | Data ... ]  (each of the five 1-TB disks)

Aggregate Space Allocation (Cont.)


Second, disks are right-sized
When you purchase a disk, its capacity is calculated in decimal format, where 1 GB = 1000 MB
A 1-TB disk in decimal format is 1,000,000 MB
When Data ONTAP analyzes the disk, it computes the size in binary format, where 1 GB = 1024 MB
1,000,000 MB / 1024 = 976.56 GB

system> aggr status -r aggr1
...
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM  Used (MB/blks) Phys (MB/blks)
--------- ------ -- ----- --- ---- ---- ---- ---- -------------- --------------
data      2b.52  2b 3     4   FC:A -    ATA  7200 847555/...     847827/...

But wait: Data ONTAP reports even more space taken away
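The decimal-to-binary step above, as a quick check:

```python
# A "1-TB" disk is sold as 1,000,000 decimal megabytes; Data ONTAP
# counts capacity in binary gigabytes (1 GB = 1024 MB), as above.
decimal_mb = 1_000_000
binary_gb = decimal_mb / 1024
print(round(binary_gb, 2))  # 976.56
```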

Aggregate Space Allocation (Cont.)


Right-sizing also includes reducing the size slightly to eliminate manufacturing variance

  [ Data ... 847 GB ]  (of each 1-TB disk, across all five disks)

847 GB is the right-size allocation for 1-TB ATA disks

Aggregate Space Allocation (Cont.)


In Data ONTAP versions prior to 7.3:
Aggregate size is calculated using all disks in the aggregate

  [ Data | Data | Data | Parity | Double-Parity ]

In Data ONTAP 7.3 and later:
Aggregate size is calculated using the size of the data disks
Only data disks in the aggregate are included

  [ Data | Data | Data ]
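Putting the right-sizing and data-disk rules together for the 5@847 example (847,555 MB is the right-sized per-disk value reported by aggr status -r earlier; the arithmetic below is our check against the show_space output that follows):

```python
# For "aggr create aggr1 5@847" with RAID-DP, 2 of the 5 disks hold
# parity, so in Data ONTAP 7.3 and later only 3 data disks count
# toward the aggregate size.
disks = 5
parity_disks = 2                 # parity + double-parity (RAID-DP)
right_sized_mb = 847_555         # per-disk value from aggr status -r
total_mb = (disks - parity_disks) * right_sized_mb
print(total_mb // 1024)  # 2483 binary GB, matching aggr show_space
```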

Aggregate Space Allocation (Cont.)


Third, you can use 90% of the available space
10% is for the WAFL Reserve, which provides efficiency

  [ Data (90%) | 10% WAFL Reserve ]  (847 GB of each 1-TB disk)

Space Usage of an Aggregate


To show the available space in an aggregate:
system> aggr show_space aggr
Example (the -g flag reports in increments of GB):

system> aggr show_space -g aggr1
Aggregate 'aggr1'

    Total space  WAFL reserve  Snap reserve  Usable space  BSR NVLOG  A-SIS  Smtape
    2483GB       248GB         0GB           2234GB        0GB        0GB    0GB

This aggregate contains no volume

    Aggregate     Allocated  Used  Avail
    Total space   0GB        0GB   2234GB
    Snap reserve  0GB        0GB   0GB
    WAFL reserve  248GB      0GB   248GB

Total space is the space available after right-sizing and kernel space; 10% goes to the WAFL reserve, and the remaining 90% of the available space can be used
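The 10%/90% split can be checked against the show_space numbers above (rounding down to whole GB is our assumption, but it reproduces the reported values):

```python
import math

# The 10% WAFL reserve carved out of the aggregate's total space.
total_gb = 2483
wafl_reserve_gb = math.floor(total_gb * 0.10)
usable_gb = math.floor(total_gb * 0.90)
print(wafl_reserve_gb, usable_gb)  # 248 2234
```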

Module Summary
In this module, you should have learned to:
Describe Data ONTAP RAID technology
Identify a disk in a disk shelf based on its ID
Execute commands to determine disk ID
Identify a hot-spare disk in a FAS system
Describe the effects of using multiple disk
types
Create a 32-bit and 64-bit aggregate
Execute aggregate commands in Data ONTAP
Calculate usable disk space

Exercise
Module 3: Physical Storage
Estimated Time: 60 minutes

Check Your Understanding: Answers


What is a RAID group?
A collection of disks organized to protect data that includes:
One or more data disks
One or two parity disks for protection

Why use double parity?
To protect against a double-disk failure

Check Your Understanding: Answers


(Cont.)
What are the RAID group size and aggregate type resulting from the following command?
aggr create newaggr 32
Assuming a default RAID group size of 16, this creates two RAID groups
It creates a 32-bit aggregate

What is the minimum number of disks in a RAID-DP group?
Three disks (one data, one parity, and one double-parity disk)
