
SAN Fundamentals Study Guide

SAN Fundamentals

What is a SAN?
A SAN is a high-speed (1 to 2 Gb/s data transfer rates, with 10 Gb/s on the horizon) network in which
heterogeneous (mixed-vendor or mixed-platform) servers access a common or shared pool of heterogeneous
storage devices.
SAN environments provide any-to-any communication between servers and storage resources, including
multiple paths.
The parts of a SAN are:

Client layer: The clients are the access point of a SAN.

Server layer: The major components in this layer are the servers, the HBAs, including the
GBICs, and the software drivers that enable HBAs to communicate with the fabric layer.

Fabric layer: This is the middle layer of a SAN, the network part of a SAN, where hubs and
switches tie all the cables together into a logical and physical network.

Storage layer: This is where all the data resides on the disk drives.

Advantages of Using Fibre Channel SAN

High return on investment (ROI) and reduced total cost of ownership (TCO): achieved by increasing
performance, manageability, and scalability.

Disaster recovery capabilities: SAN devices can mirror the data on the disk to another location.

Increased I/O performance: SANs operate faster than internal drives or devices attached to a LAN.

Connectivity: Any-to-any connections.

Cost effectiveness: Serverless backups and tape library sharing.

Modular scalability: Dynamic capacity.

Consolidated storage: Sharing of centralized storage.

For companies to continue being successful, data storage has become a business-critical consideration. A
storage solution is a fully integrated and tested storage-product configuration consisting of a combination of
hardware, software, and services, all of which focus on solving specific customer business problems.
Data storage solutions are:

Direct-attached storage (DAS)

Network-attached storage (NAS)

Storage area network (SAN)

DAS
DAS is storage connected to a server. The storage itself can be external to the server connected by a cable
to a controller with an external port, or the storage can be internal to the server. Some internal storage
devices use high-availability features such as adding redundant component capabilities.

DAS is accessed by servers attaching directly to a storage system. In a DAS configuration, clients access
storage through the server, and the same server resources are used. If the server becomes unavailable,
access to any storage directly connected to that server is disrupted. If resources on the server are busy,
network traffic increases.
However, DAS is fast and reliable for small-sized networks.
NAS
NAS is storage that resides on the LAN behind the
servers. NAS storage devices require special
storage cabinets providing specialized file access,
security, and network connectivity.

NAS:

Requires network connectivity.

Requires a network interface card (NIC) on the server to access the storage.

Provides file-to-disk block mapping.

Provides client access at the file level using network protocols.

Does not require the server to have a SCSI HBA and cable for storage access.

Supports FAT, NTFS, and NFS file systems.

On the server side, the SCSI HBA is no longer needed for storage access.
Servers access the NAS the same way that clients do.

SAN
A SAN is a network composed of many servers, connections,
and storage devices, including disk, tape, and optical storage.
The storage can be located far from the servers that use it.
One server or many heterogeneous servers can share a
common storage device, or many different storage devices.
SAN components include:

Client access to the LAN and SAN levels.

Servers connected to switches or hubs that connect to storage.

Storage connected to switches or hubs that connect to servers.

Routers or bridges that connect and interface with tape libraries or backup devices.

A SAN is different from traditional networks because it is created from storage interfaces.

SAN solutions use a dedicated network behind the servers and are based primarily on Fibre
Channel architecture.
Fibre Channel provides highly scalable bandwidth over long distances. Fibre Channel has the
ability to provide full redundancy, including switched parallel data paths, to deliver high
availability and high performance.
Clients with business-critical data and applications are concerned about high availability. Fibre
Channel SANs help provide the no-single-point-of-failure configurations that business-critical customers require by being able to mirror data or cluster servers over a SAN.

Therefore, a SAN can avoid network bottlenecks. It supports direct, high-speed transfers between
servers and storage devices in the following methods:

Server to storage: This is the traditional method of interaction with storage devices. The
SAN advantage is that the same storage device can be accessed serially or concurrently by
multiple servers.
Server to server: This provides high-speed, high-volume communications between servers.

Storage to storage: In this configuration of a SAN, a disk array could back up its data directly to
tape across the SAN, without server processor intervention. A device could be mirrored remotely
across the SAN for high-availability configurations.

RAID
RAID stands for Redundant Array of Independent Disks. It combines two or more drives to improve
performance and fault tolerance, and it also offers improved reliability and larger data volume sizes.
A RAID array distributes the data across several disks, and the operating system treats the array as
a single disk.
RAID Levels
Several different arrangements are possible, and different standard schemes have evolved that
represent a set of trade-offs between capacity, speed, and protection against data loss.
Some of the common RAID levels are:
RAID 0
RAID 0 uses data striping: the data is broken into fragments while writing it to the drive. The
fragments are then written to their disks simultaneously on the same sector. While reading, the
data is read off the drives in parallel, so this type of arrangement offers huge bandwidth.
The trade-off associated with RAID 0 is that a single disk failure destroys the entire array, as it offers no
fault tolerance, and RAID 0 does not implement error checking.
RAID 1
RAID 1 uses mirroring to write the data to the drives. It also offers fault tolerance from disk errors, and
the array continues to operate as long as at least one drive is functioning properly.
The trade-off associated with the RAID 1 level is the cost required to purchase the additional disks to
store data.
RAID 2
It uses Hamming Codes for error correction. In RAID 2, the disks are synchronized and they're striped
in very small stripes. It requires multiple parity disks.

RAID 3
This level uses a dedicated parity disk instead of rotated parity stripes and offers improved performance
and fault tolerance. The benefit of the dedicated parity disk is that the operation continues without parity if
the parity drive stops working during the operation.
RAID 4
It is similar to RAID 3, but it does block-level striping instead of byte-level striping; as a result, a
single file can be stored in blocks. RAID 4 allows multiple I/O requests in parallel, but the data transfer
speed will be lower. Block-level parity is used to perform error detection.
RAID 5
RAID 5 uses block-level striping with distributed parity, and it requires all drives but one to be present to
operate correctly. Reads are reconstructed from the distributed parity upon a drive failure, so the
entire array is not destroyed by a single drive failure. However, the array will lose data in the event
of a second drive failure.
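
To make the striping and parity layouts above concrete, here is a minimal sketch in Python that maps logical block numbers onto disks for RAID 0 striping and computes the rotating parity position used by RAID 5. The four-disk array and the rotation formula are illustrative assumptions, not a real array implementation.

```python
# Illustrative RAID layout arithmetic (hypothetical 4-disk array, not a real implementation).

def raid0_location(logical_block: int, num_disks: int) -> tuple[int, int]:
    """RAID 0: blocks are striped round-robin across all disks."""
    disk = logical_block % num_disks      # which disk holds the block
    stripe = logical_block // num_disks   # position of the block on that disk
    return disk, stripe

def raid5_parity_disk(stripe: int, num_disks: int) -> int:
    """RAID 5: the parity block rotates to a different disk on each stripe."""
    return (num_disks - 1 - stripe) % num_disks

if __name__ == "__main__":
    disks = 4
    for block in range(8):
        print("RAID 0: logical block", block, "->", raid0_location(block, disks))
    for stripe in range(4):
        print("RAID 5: stripe", stripe, "parity on disk", raid5_parity_disk(stripe, disks))
```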

PROTOCOLS:
What is the difference between SCSI and FC PROTOCOLS?
Two major protocols are used in Fibre Channel SANs: the Fibre Channel protocol (used by the hardware to
communicate) and the SCSI protocol (used by software applications to talk to disks).
SCSI Protocol
The SCSI protocol (Small Computer System Interface) is used by operating systems for input/output
operations to disk drives. Data is sent from the host operating system to the disk drives in large
chunks called "blocks" of data, normally in parallel over a physical interconnect of high-density 68-wire
copper cables. Because SCSI is transmitted in parallel, each bit must arrive at the end of the cable at the
same time. Due to signal strength and "jitter", this limited the maximum distance a disk drive could be
from the host to under 20 meters. This protocol lies on top of the Fibre Channel protocol, enabling SAN-attached server applications to talk to their disks.
FC (Fibre Channel) is just the underlying transport layer that SANs use to transmit data. This is the
language used by the HBAs, hubs, switches and storage controllers in a SAN to talk to each other. The
Fibre Channel protocol is a low-level language meaning that it's just used as a language between the actual
hardware, not the applications running on it.
Actually, two protocols make up the Fibre Channel protocol: Fibre Channel Arbitrated Loop (FC-AL), which works
with hubs, and Fibre Channel Switched (FC-SW), which works with switches. Fibre Channel is the building block
of the SAN highway. It is like the road of the highway: other protocols can run on top of it just as
different cars and trucks run on top of an actual highway. In other words, if Fibre Channel is the road, then
SCSI is the truck that moves the data cargo down the road.

The operating systems still use SCSI to communicate with the disk drives in a SAN as Fibre Channel SANs
layer the SCSI protocol on top of the FC protocol. FC can run on copper cables or optical cables. Using
optical cables, the SCSI protocol is serialized (the bits are converted from parallel to serial, one bit at a time)
and transmitted as light pulses across the optical cable. Your data can now run at the speed of light, and you
are no longer limited to the shorter distances of SCSI cables. (Disks in an FC fabric can be located up to
100,000 meters, or 100 km, from the host.)
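
The layering just described (SCSI carried on top of the Fibre Channel transport) can be pictured as one structure wrapped inside another. The toy Python sketch below shows that nesting with invented field names; it is not the real FCP or FC-2 frame format, only an illustration of a block-level command riding inside a transport frame.

```python
# Toy illustration of protocol layering: a SCSI-style block command carried
# inside a Fibre Channel-style frame.  Field names are invented for clarity
# and do not match the real FCP/FC-2 frame layout.
from dataclasses import dataclass

@dataclass
class ScsiReadCommand:
    lun: int            # logical unit to read from
    lba: int            # starting logical block address
    block_count: int    # number of blocks to transfer

@dataclass
class FibreChannelFrame:
    source_port: str            # e.g. the HBA's port (illustrative name)
    dest_port: str              # e.g. the storage controller's port (illustrative name)
    payload: ScsiReadCommand    # the upper-level protocol data carried by the frame

if __name__ == "__main__":
    cmd = ScsiReadCommand(lun=0, lba=2048, block_count=16)
    frame = FibreChannelFrame(source_port="host-hba-0", dest_port="array-port-1", payload=cmd)
    print(frame)  # the fabric transports the frame; the OS only ever sees the SCSI command
```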

SCSI: Small Computer System Interface


Small Computer System Interface (SCSI), an ANSI standard, is a parallel interface standard used by Apple
Macintosh computers, PCs, and many UNIX systems for attaching peripheral devices to computers. SCSI
interfaces provide for faster data transmission rates than standard serial and parallel ports. In addition, you
can attach many devices to a single SCSI port. There are many variations of SCSI: SCSI-1, SCSI-2, SCSI-3
and the recently approved standard Serial Attached SCSI (SAS).
FC & FCP: Fibre Channel and Fibre Channel Protocol
The Fibre Channel Standards (FCS) define a high-speed data transfer mechanism that can be used to
connect workstations, mainframes, supercomputers, storage devices and displays. FCS addresses the
need for very fast transfers of large volumes of information and could relieve system manufacturers from the
burden of supporting the variety of channels and networks currently in place, as it provides one standard for
networking, storage and data transfer. Fibre Channel Protocol (FCP) is the interface protocol of SCSI on the
Fibre Channel.
FCIP: Fibre Channel Over TCP/IP
Fibre Channel Over TCP/IP (FCIP) describes mechanisms that allow the interconnection of islands of
Fibre Channel storage area networks over IP-based networks to form a unified storage area network in a
single Fibre Channel fabric. FCIP relies on IP-based network services to provide the connectivity between
the storage area network islands over local area networks, metropolitan area networks, or wide area
networks.
iFCP: Internet Fibre Channel Protocol
Internet Fibre Channel Protocol (iFCP) is a gateway-to-gateway protocol, which provides fibre channel fabric
services to fibre channel devices over a TCP/IP network. iFCP uses TCP to provide congestion control, error
detection and recovery. iFCP's primary objective is to allow interconnection and networking of existing fibre
channel devices at wire speeds over an IP network. The protocol and method of frame address translation
defined permit the attachment of fibre channel storage devices to an IP-based fabric by means of
transparent gateways.
The fundamental entity in fibre channel is the fibre channel network. Unlike a layered network architecture,
a fibre channel network is largely specified by functional elements and the interfaces between them. These
consist, in part, of the following:
1. N_PORTs -- The end points for fibre channel traffic.

2. FC Devices -- The fibre channel devices to which the N_PORTs provide access.

3. Fabric Ports -- The interfaces within a fibre channel network that provide attachment for an N_PORT.

4. The network infrastructure for carrying frame traffic between N_PORTs.

5. Within a switched or mixed fabric, a set of auxiliary servers, including a name server for
device discovery and network address resolution.

The iFCP protocol enables the implementation of fibre channel fabric functionality on an IP network in which
IP components and technology replace the fibre channel switching and routing infrastructure.
The main function of the iFCP protocol layer is to transport fibre channel frame images between locally and
remotely attached N_PORTs. When transporting frames to a remote N_PORT, the iFCP layer
encapsulates and routes the fibre channel frames comprising each fibre channel Information Unit via a
predetermined TCP connection for transport across the IP network.
When receiving fibre channel frame images from the IP network, the iFCP layer de-encapsulates and
delivers each frame to the appropriate N_PORT. The iFCP layer processes the following types of
traffic:
1. FC-4 frame images associated with a fibre channel application protocol.

2. FC-2 frames comprising fibre channel link service requests and responses.

3. Fibre channel broadcast frames.

4. iFCP control messages required to set up, manage or terminate an iFCP session.

iSCSI: Internet Small Computer System Interface


Internet Small Computer System Interface (iSCSI) is a TCP/IP-based protocol for establishing and
managing connections between IP-based storage devices, hosts and clients, forming a storage area
network (SAN). The SAN makes it possible to use the SCSI protocol in network infrastructures for high-speed
data transfer at the block level between multiple elements of data storage networks.
The SCSI architecture is based on the client/server model, which is mostly implemented in an
environment where devices are very close to each other and connected with SCSI buses. Encapsulation
and reliable delivery of bulk data transactions between initiators and targets through the TCP/IP network
is the main function of iSCSI. iSCSI provides a mechanism for encapsulating SCSI commands on an IP
network and operates on top of TCP.
For today's SAN (Storage Area Network), the key data communication requirements are: 1)
consolidation of data storage systems, 2) data backup, 3) server clustering, 4) replication, and 5) data
recovery in emergency conditions. In addition, a SAN is likely to be geographically distributed over multiple
LANs and WANs built with various technologies. All operations must be conducted in a secure environment and with QoS.
iSCSI is designed to perform the above functions over the TCP/IP network safely and with proper QoS.
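
The encapsulation idea can be sketched in a few lines of Python. The byte layout below is invented for illustration and is not the real iSCSI PDU format; real iSCSI defines its own PDU headers and normally runs over TCP port 3260. The sketch only shows a block-level command being packed into a byte stream that a TCP connection could carry.

```python
# Toy illustration of the iSCSI idea: a SCSI command packed into a byte stream
# suitable for a TCP connection.  The layout is invented for illustration and
# is NOT the real iSCSI PDU format.
import struct

def encapsulate_scsi_command(opcode: int, lun: int, lba: int, blocks: int) -> bytes:
    payload = struct.pack(">BBQI", opcode, lun, lba, blocks)  # big-endian command fields
    header = struct.pack(">I", len(payload))                  # simple length prefix
    return header + payload

if __name__ == "__main__":
    pdu = encapsulate_scsi_command(opcode=0x28, lun=0, lba=4096, blocks=8)  # 0x28 = SCSI READ(10)
    print(len(pdu), "bytes ready to send over a TCP connection:", pdu.hex())
```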
Traditional JBOD:
JBOD (for "just a bunch of disks," or sometimes "just a bunch of drives") is a derogatory term - the official
term is "spanning" - used to refer to a computer's hard disks that haven't been configured according to
the RAID (for "redundant array of independent disks") system to increase fault tolerance and improve
data access performance.

The RAID system stores the same data redundantly on multiple disks that nevertheless appear to the
operating system as a single disk. Although JBOD also makes the disks appear to be a single one, it
accomplishes that by combining the drives into one larger logical one. JBOD doesn't deliver any
advantages over using separate disks independently and doesn't provide any of the fault tolerance or
performance benefits of RAID.
RAID: Redundant array of independent disks
Redundant Array of Independent Disks (RAID) uses two or more disk drives in combination to increase
data integrity, fault tolerance, throughput, capacity, or performance. RAID provides several methods of
writing data across/to multiple disks at once. RAID is one of many ways to combine multiple hard drives
into one single logical unit. Thus, instead of seeing several different hard drives, the operating system
sees only one. RAID is typically used on server computers, and is usually implemented with
identically-sized disk drives. With decreases in hard drive prices and wider availability of RAID options
built into motherboard chipsets, RAID is also being found and offered as an option in higher-end end-user
computers, especially computers dedicated to storage-intensive tasks, such as video and audio editing.
There are at least nine types of RAID plus a non-redundant array (RAID-0):

RAID-0: This technique has striping but no redundancy of data. It offers the best performance
but no fault-tolerance.

RAID-1: This type is also known as disk mirroring and consists of at least two drives that duplicate
the storage of data. There is no striping. Read performance is improved since either disk can be
read at the same time. Write performance is the same as for single disk storage. RAID-1
provides the best performance and the best fault-tolerance in a multi-user system.

RAID-2: This type uses striping across disks with some disks storing error checking and correcting
(ECC) information. It has no advantage over RAID-3.

RAID-3: This type uses striping and dedicates one drive to storing parity information. The
embedded error checking (ECC) information is used to detect errors. Data recovery is
accomplished by calculating the exclusive OR (XOR) of the information recorded on the other
drives (a reconstruction sketch follows this list). Since an I/O operation addresses all drives at the
same time, RAID-3 cannot overlap I/O. For this reason, RAID-3 is best for single-user systems with
long record applications.

RAID-4: This type uses large stripes, which means you can read records from any single drive.
This allows you to take advantage of overlapped I/O for read operations. Since all write
operations have to update the parity drive, no I/O overlapping is possible. RAID-4 offers no
advantage over RAID-5.

RAID-5: This type includes a rotating parity array, thus addressing the write limitation in RAID-4.
Thus, all read and write operations can be overlapped. RAID-5 stores parity information but not
redundant data (but parity information can be used to reconstruct data). RAID-5 requires at
least three and usually five disks for the array. It's best for multi-user systems in which
performance is not critical or which do few write operations.

RAID-6: This type is similar to RAID-5 but includes a second parity scheme that is distributed
across different drives and thus offers extremely high fault- and drive-failure tolerance.
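
The XOR-based recovery mentioned for RAID-3 and RAID-5 above can be shown in a few lines. The Python sketch below is a simplified illustration with hypothetical data blocks: it computes a parity block and then rebuilds a lost block from the survivors.

```python
# Simplified illustration of parity-based recovery (RAID-3/RAID-5 style).
# The parity block is the XOR of the data blocks; any single lost block can be
# rebuilt by XOR-ing the surviving blocks with the parity.

def xor_blocks(*blocks: bytes) -> bytes:
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

if __name__ == "__main__":
    d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"    # hypothetical data blocks in one stripe
    parity = xor_blocks(d0, d1, d2)           # stored on the parity drive
    rebuilt_d1 = xor_blocks(d0, d2, parity)   # reconstruct d1 after its drive fails
    assert rebuilt_d1 == d1
    print("rebuilt block:", rebuilt_d1)
```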

RAID controller

Modern mass storage systems are growing to provide increasing storage capacities to fulfill increasing
user demands from host computer system applications. Redundant array of independent disks (RAID)
storage technology allows for the storing of the same data on multiple hard disks. Within a RAID system,
varying levels of data storage redundancy are utilized to enable reconstruction of stored data in the event
of data corruption or disk failure. RAID subsystems are configured such that each drive serves as the
primary storage device for a first portion of the data stored on the subsystem and serves as the backup
storage device for a second portion of the data. RAID storage subsystems typically utilize a control module that
shields the user or host system from the details of managing the redundant array. The controller makes the
subsystem appear to the host computer as a single, highly reliable, high capacity disk drive. In RAID
subsystems, the storage controller device performs significant management functions to improve reliability
and performance of the storage subsystem. The RAID controller provides an interface between the RAID
subsystem and the computer system. The RAID controller includes the hardware that interfaces between
the computer system and the disks. The RAID storage management techniques improve reliability of a
storage subsystem by providing redundancy information stored on the disk drives along with the host
system data to ensure access to stored data despite partial failures within the storage subsystem.
What is RPO & RTO
Recovery Point Objective (RPO) and Recovery Time Objective (RTO) are some of the most
important parameters of a disaster recovery or data protection plan. These objectives guide the
enterprises in choosing an optimal data backup (or rather restore) plan.
RPO: Recovery Point Objective
Recovery Point Objective (RPO) describes the amount of data lost measured in time. Example: After
an outage, if the last available good copy of data was from 18 hours ago, then the RPO would be 18
hours.
In other words, it answers the question: "Up to what point in time can the data be recovered?"
RTO: Recovery Time Objective
The Recovery Time Objective (RTO) is the duration of time and a service level within which a business
process must be restored after a disaster in order to avoid unacceptable consequences associated with
a break in continuity.
It should be noted that the RTO attaches to the business process and not the resources required to
support the process.
In other words, it answers the question: "How much time did it take to recover
after notification of a business process disruption?"
The RTO/RPO and the results of the Business Impact Analysis (BIA) in its entirety provide the basis for
identifying and analyzing viable strategies for inclusion in the business continuity plan. Viable strategy
options would include any which would enable resumption of a business process in a time frame at or
near the RTO/RPO. This would include alternate or manual workaround procedures and would not
necessarily require computer systems to meet the objectives.
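
The RPO/RTO arithmetic above can be written down directly. This Python sketch uses hypothetical timestamps chosen so that the data-loss window matches the 18-hour RPO example given earlier.

```python
# Sketch of the RPO/RTO arithmetic described above, using hypothetical timestamps.
from datetime import datetime

last_good_backup = datetime(2024, 1, 10, 6, 0)    # last usable copy of the data
outage_start     = datetime(2024, 1, 11, 0, 0)    # when the disruption occurred
service_restored = datetime(2024, 1, 11, 9, 30)   # when the business process was running again

rpo_achieved = outage_start - last_good_backup    # data lost, measured in time
rto_achieved = service_restored - outage_start    # time taken to restore the process

print("RPO achieved:", rpo_achieved)   # 18:00:00 -> matches the 18-hour example above
print("RTO achieved:", rto_achieved)   # 9:30:00
```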
Storage subsystems
SAN File System conforms to small computer system interface (SCSI) standards and is designed to work
with any SCSI-compliant storage devices, including Just a Bunch Of Disks (JBOD), redundant array of
independent disks (RAID) with mirroring, and hierarchically-managed storage devices. You can attach
tape devices to SAN File System for backups and long-term storage, although tape devices cannot be part
of a storage pool.

All storage subsystems attached to SAN File System can be accessed by all clients (unless you use
zoning to allow only specific clients to access specific devices). This enables data sharing among
heterogeneous clients.
SAN File System supports heterogeneous, simultaneously-connected storage and host-bus adapter (HBA)
sharing, subject to client platform, driver, and storage-vendor limitations.
Why Fibre Channel:
Industry requires an efficient and high-performance transfer of information between devices such
as computers, storage devices, and other peripherals.
Fibre Channel is a multilayered network based on a series of standards from the American National
Standards Institute (ANSI). These standards define characteristics and functions for moving data across
the network. They include definitions of physical interfaces such as cabling, distances, and signaling; data
encoding and link controls; data delivery in terms of frames, flow control, and classes of service; common
services; and protocol interfaces.
With Fibre Channel:

Hosts and applications see storage devices attached to the SAN as if they are locally
attached storage.

Multiple protocols and a broad range of devices can be supported.

Connections can be either optical fiber (for distance) or copper cable links (for short distance at
low cost).

Protocols
Fibre Channel uses three protocols:

Point-to-point: Devices are directly connected to other devices without the use of
hubs, switches, or routers.

Fibre Channel Arbitrated Loop (FC-AL): FC-AL has shared bandwidth and a distributed topology,
connects with hubs, and is the simplest form of a fabric topology.

Fibre Channel Switched Fabric (FC-SW): FC-SW provides the highest performance and
connectivity of the three topologies. It has nondisruptive scalability and switched connections.

Fibre Channel supports 126 nodes on an FC-AL and 16 million nodes on an FC-SW, and
provides connectivity over several kilometers (up to 10 km) when using optical fiber.

Fibre Channel features


Fibre Channel has features that include:

Price:performance: Fibre Channel expense is offset by its high-speed bandwidth benefit.

Hot pluggability: Fibre Channel drives can be installed or removed while the host system is
operational. This is crucial in high-end and heavy-use server systems where there is little or no downtime.

Reliability: Fibre Channel is the most reliable form of storage communication.

Multiple topologies: Customers can develop a storage network with configuration choices at
a range of price points, levels of scalability, and availability.

Full suite of services: The Fibre Channel set of storage network services includes discovery,
addressing, LUN zoning, failover, management, and security.

Longer cable lengths: Fibre Channel maintains data integrity through long cables.
Cable lengths include:
o 30m between nodes with copper cabling
o 500m between nodes with multimode cabling and shortwave lasers
o 10km between nodes with single-mode cabling and longwave lasers

Gigabit bandwidth: Both 1Gb and 2Gb solutions are available and backward compatible,
providing customers with the highest bandwidth network interface technology in the industry.

Loop resiliency: Fibre Channel provides high data integrity with multiple devices (including Fibre
Channel RAID) on a loop.

Multiple protocols: Protocols include SCSI, IP, VI, ESCON, HIPPI, IPI, and IEEE 802.2 (refer
to the Terms and Definitions section) to meet needs for storage connectivity, cluster computing,
and network interconnect.

Scalability: Moving from single point-to-point gigabit links to integrated enterprises with
hundreds of servers, Fibre Channel is a high-performing and flexible configuration.

Congestion-free flow: Fibre Channel flow control delivers data as fast as the destination
buffer is able to receive it for high-throughput data transfers. This enables backup, restore,
remote replication, and other applications to run more efficiently.

Framing protocol
Framing protocol is a communication procedure that:

Accepts data and divides it into frames for transmission.


Reassembles the received frames into a packet for delivery to a receiving device.
Positions the data in the frames for improved performance and hardware efficiency.

A frame is a string of data bytes, prefixed by a start of frame (SOF) delimiter and followed by an end of
frame (EOF) delimiter.

Frame specifications

2148-byte maximum frame size
4-byte SOF
24-byte header (destination and source addressing control fields)
0- to 2112-byte data payload
4-byte cyclic redundancy check (CRC)
4-byte EOF
These field sizes are added up in the sketch below.
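
A quick Python check that the field sizes listed above really do sum to the 2148-byte maximum frame size.

```python
# Adding up the Fibre Channel frame fields listed above to confirm the
# 2148-byte maximum frame size.
SOF_BYTES    = 4
HEADER_BYTES = 24
MAX_DATA     = 2112   # payload can be 0 to 2112 bytes
CRC_BYTES    = 4
EOF_BYTES    = 4

max_frame = SOF_BYTES + HEADER_BYTES + MAX_DATA + CRC_BYTES + EOF_BYTES
print(max_frame)  # 2148
```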

Fibre Channel ports


On a computer or communication device, a port is a specific place for being physically connected to some
other device, usually with a socket and plug of some kind. A port is also a logical connection place and
specifically, using a protocol, the way a client program specifies a particular server program on a
computer in a network.
A node is a connection point. It is either a redistribution point or an end point for data transmissions. In
general, a node has programmed or engineered capability to recognize and process or forward
transmissions to other nodes. A node can be a server or storage system, a tape backup device, or a
video
display terminal. Each node must hold at least one port for providing access to other devices.
Fibre Channel ports and descriptions:

N_Port: All node (server or storage) ports are called N_Ports. An N_Port attaches to an F_Port in a
point-to-point protocol. N_Port to N_Port is uncommon, so when two nodes are direct-attached it
is through an arbitrated loop (NL_Port to NL_Port).

L_Port: All loop-hub ports are called L_Ports, which stands for loop ports.

NL_Port: An N_Port that contains arbitrated loop functions associated with the arbitrated loop topology
is called an NL_Port.

F_Port: The F_Port, or fabric port, is the Link_Control_Facility within the fabric (switch) that attaches
to an N_Port.

FL_Port: An F_Port that contains arbitrated loop functions associated with the arbitrated loop topology
is called an FL_Port, which stands for fabric loop port.

E_Port: An E_Port is used for connecting fabrics (switches). The link is called the inter-switch link (ISL).

G_Port: A G_Port (generic port) can auto-discover its type. It automatically configures itself as an
E_Port, F_Port, or FL_Port, depending on what is connected.


Switch ports
Switch ports become a port type depending on what gets plugged into them.

Switch ports are usually G_Ports when nothing is plugged into them.
If you plug a host port into the switch, it becomes an F_Port.
If you plug a switch into a fabric switch, it becomes an E_Port.
Plug a hub into a switch port and it becomes an FL_Port.
Plug a host port into an FL_Port and it becomes an NL_Port.

The diagram shows how port names change depending on what devices are connected; the same
rules are sketched in the code below.
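
A minimal Python sketch of the port-naming rules above: what a switch port becomes depending on what is plugged in, and what the attached host port becomes. The device categories are the ones used in this guide, not an exhaustive list.

```python
# Sketch of the port-naming rules above.  Device categories follow this guide.

def switch_port_type(attached: str) -> str:
    mapping = {
        "nothing": "G_Port",   # unconfigured generic port
        "host": "F_Port",      # host HBA N_Port plugs into an F_Port
        "switch": "E_Port",    # inter-switch link
        "hub": "FL_Port",      # arbitrated loop hangs off the switch
    }
    return mapping[attached]

def host_port_type(attached_to: str) -> str:
    # A host port is an N_Port when fabric-attached, an NL_Port on a loop (hub/FL_Port).
    return "NL_Port" if attached_to in ("hub", "FL_Port") else "N_Port"

if __name__ == "__main__":
    for thing in ("nothing", "host", "switch", "hub"):
        print(f"switch port with {thing:<7} attached -> {switch_port_type(thing)}")
    print("host port attached to an FL_Port ->", host_port_type("FL_Port"))
```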

Storage Area Network Components


As previously discussed, the primary technology used in
storage area networks today is Fibre Channel. This
section provides a basic overview of the components in
a fibre channel storage fabric as well as different
topologies and configurations open to Windows deployments.
Fibre Channel Topologies
Fundamentally, fibre channel defines three configurations:

Point-to-point

Fibre Channel Arbitrated Loop (FC-AL)

Switched Fibre Channel Fabrics (FC-SW).

Although the term fibre channel implies some form of fibre optic technology, the fibre channel
specification allows for both fibre optic interconnects as well as copper coaxial cables.
Point-to-Point
Point-to-point fibre channel is a simple way to connect two (and only two) devices directly together,
as shown in Figure 1 below. It is the fibre channel equivalent of direct attached storage (DAS).

Figure 1: Point-to-point connection (a host connected directly to storage)

From a cluster and storage infrastructure perspective, point-to-point is not a scalable
enterprise configuration and we will not consider it again in this document.
Arbitrated Loops
A fibre channel arbitrated loop is exactly what it says; it is a set of hosts and devices that are connected into
a single loop, as shown in Figure 2 below. It is a cost-effective way to connect up to 126 devices and hosts
into a single network.

Host A

Host B

Device
E

Device
C

Device
D

Figure 2: Fibre Channel arbitrated loop


Devices on the loop share the media; each device is connected in series to the next device in the loop
and so on around the loop. Any packet traveling from one device to another must pass through all
intermediate devices. In the example shown, for host A to communicate with device D, all traffic between
the devices must flow through the adapters on host B and device C. The devices in the loop do not need
to look at the packet; they will simply pass it through. This is all done at the physical layer by the fibre
channel interface card itself; it does not require processing on the host or the device. This is very
analogous to the way a token-ring topology operates.
When a host or device wishes to communicate with another host or device, it must first arbitrate for the loop.
The initiating device does this by sending an arbitration packet around the loop that contains its own loop
address (more on addressing later). The arbitration packet travels around the loop and when the initiating
device receives its own arbitration packet back, the initiating device is considered to be the loop owner. The
initiating device next sends an open request to the destination device which sets up a logical point-to-point
connection between the initiating device and target. The initiating device can then send as much data as
required before closing down the connection. All intermediate devices simply pass the data through. There
is no limit on the length of time for any given connection and therefore other devices wishing to
communicate must wait until the data transfer is completed and the connection is closed before they can
arbitrate.
If multiple devices or hosts wish to communicate at the same time, each one sends out an arbitration
packet that travels around the loop. If an arbitrating device receives an arbitration packet from a different
device before it receives its own packet back, it knows there has been a collision. In this case, the device
with the lowest loop address is declared the winner and is considered the loop owner. There is a fairness
algorithm built into the standard that prohibits a device from re-arbitrating until all other devices have been
given an opportunity, however, this is an optional part of the standard.
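
A minimal simulation of the arbitration rule just described: when several devices arbitrate at the same time, the device with the lowest loop address wins loop ownership. The addresses in the sketch are made up for the example.

```python
# Minimal simulation of loop arbitration as described above: when several
# devices arbitrate at once, the one with the lowest loop address wins.

def arbitrate(contenders: list[int]) -> int:
    """Return the loop address of the device that becomes loop owner."""
    return min(contenders)

if __name__ == "__main__":
    waiting = [0x2A, 0x05, 0x71]          # three devices arbitrating simultaneously
    owner = arbitrate(waiting)
    print(f"loop owner: 0x{owner:02X}")   # 0x05 wins; the others retry after the transfer closes
```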

Note: Not all devices and host bus adapters support loop configurations since it is an optional part of
the fibre channel standard. However, for a loop to operate correctly, all devices on the loop MUST have
arbitrated loop support. (Most devices today, except for some McData switches, support FC-AL.) Figure 3
below shows a schematic of the wiring for a simple arbitrated loop configuration.

Figure 3: FC-AL wiring schematic


With larger configurations, wiring a loop directly can be very cumbersome. Hubs allow for simpler,
centralized wiring of the loop (see section Hubs, Switches, Routers and Bridges). Communication in an
arbitrated loop can occur in both directions on the loop depending on the technology used to build the
loop, and in some cases communication can occur both ways simultaneously.
Loops can support up to 126 devices, however, as the number of devices on the arbitrated loop
increases, so the length of the path and therefore the latency of individual operations increases.
Many loop devices, such as JBODs, have dip switches to set the device address on the loop (known as
hard addressing). Most, if not all, devices implement hard addresses, so it is possible to assign a loop ID to a
device; however, just as in a SCSI configuration, different devices must have unique hard IDs. In cases
where a device on the loop already has a conflicting address when a new device is added, the new device
either picks a different ID or it does not get an ID at all (non-participating).
Note: Most of the current FC-AL devices are configured automatically to avoid any address conflicts. However,
if a conflict does happen then it can lead to I/O disruptions or failures.
Unlike many bus technologies, the devices on an arbitrated loop do not have to be given fixed addresses
either by software configuration or via hardware switches. When the loop initializes, each device on the loop
must obtain an Arbitrated Loop Physical Address (AL_PA), which is dynamically assigned. This process is initiated
when a host or device sends out a LIP (loop initialization primitive); a master is dynamically selected for the
loop, and the master controls a well-defined process where each device is assigned an address.
A LIP is generated by a device or host when the adapter is powered up or when a loop failure is detected
(such as loss of carrier). Unfortunately, this means that when new devices are added to a loop or when
devices on the loop are power-cycled, all the devices and hosts on the loop can (and probably will)
change their physical addresses. This can lead to unstable configurations if the operating system is not
fully aware of the changes.
For these reasons, arbitrated loops provide a solution for small numbers of hosts and devices in
relatively static configurations.
Fibre Channel Switched Fabric


In a switched fibre channel fabric, devices are connected in a many-to-many topology using fibre channel
switches, as shown in Figure 4 below. When a host or device communicates with another host or device,
the source and target setup a point-to-point connection (just like a virtual circuit) between them and
communicate directly with each other. The fabric itself routes data from the source to the target. In a
fibre channel switched fabric, the media is not shared. Any device can communicate with any other
device (assuming it is not busy) and communication occurs at full bus speed (1Gbit/Sec or 2Gbit/sec
today
depending on technology) irrespective of other devices and hosts communicating.

Figure 4: Switched Fibre Channel fabric (hosts A through D connected through a fabric of switches)


When a host or device is powered on, it must first login to the fabric. This enables the device to determine
the type of fabric (there is a set of characteristics about what the fabric will support) and it causes a host or
device to be given a fabric address. A given host or device continues to use the same fabric address while it
is logged into the fabric and the fabric address is guaranteed to be unique for that fabric. When a host or
device wishes to communicate with another device, it must establish a connection to that device before
transmitting data in a way similar to the arbitrated loop. However, unlike the arbitrated loop, the connection
open packets and the data packets are sent directly from the source to the target (the switches take care of
routing the packets in the fabric).
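
The fabric login behavior described above can be sketched as a simple registry: each port that logs in is handed a fabric address that stays fixed, and unique, for as long as it remains logged in. The address values and WWN strings below are illustrative; a real fabric assigns proper FC addresses.

```python
# Sketch of fabric login as described above: each logged-in port keeps a
# stable, unique fabric address.  Addresses here are a simple counter, not a
# real FC address format.

class Fabric:
    def __init__(self) -> None:
        self._next_address = 0x010000          # illustrative starting value
        self._logged_in: dict[str, int] = {}   # port name -> fabric address

    def login(self, port_wwn: str) -> int:
        if port_wwn not in self._logged_in:    # address stays stable while logged in
            self._logged_in[port_wwn] = self._next_address
            self._next_address += 1
        return self._logged_in[port_wwn]

    def logout(self, port_wwn: str) -> None:
        self._logged_in.pop(port_wwn, None)

if __name__ == "__main__":
    fabric = Fabric()
    a = fabric.login("50:06:0b:00:00:c2:62:00")   # hypothetical port WWN
    b = fabric.login("50:06:0b:00:00:c2:62:02")
    print(hex(a), hex(b))                         # unique while both stay logged in
```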
Fibre channel fabrics can be extended in many different ways such as by federating switches or cascading
switches, and therefore, fibre channel switched fabrics provide a much more scalable infrastructure for
large configurations. Because device addresses do not change dynamically once a device has logged in to
the fabric, switched fabrics provide a much more stable storage area network environment than is possible
using an arbitrated loop configuration.
Fibre channel arbitrated loop configurations can be deployed in larger switched SANs. Many of the newer
switches from vendors like Brocade incorporate functionality to allow arbitrated loop or point-to-point
devices to be connected to any given port. The ports can typically sense whether the device is a loop device
or not and adapt the protocols and port semantics accordingly. This allows platforms such as the Sun
UE10000 or specific host adapters or devices which only support arbitrated loop configurations today, to be
attached to switched SAN fabrics.
Note that not all switches are created equal. Brocade switches are easy to deploy; Vixel and
Gadzoox switches behave more like hubs with respect to addressing.

Loops Versus Fabrics


Both fibre channel arbitrated loops and switched fabrics have pros and cons. Before deploying either,
you need to understand the restrictions and issues as well as the benefits of each technology. The
vendor's documentation provides specific features and restrictions; however, the following helps to
position the different technologies.
FC-AL
Pros
Low cost
Loops are easily expanded and combined with up to 126 hosts and devices
Easy for vendors to develop
Cons
Difficult to deploy
Maximum 126 devices
Devices share media thus lower overall bandwidth
Switched Fabric
Pros
Easy to deploy
Supports 16 million hosts and devices
Communicate at full wire-speed, no shared media
Switches provide fault isolation and re-routing
Cons
Difficult for vendors to develop
Interoperability issues between components from different vendors
Switches can be expensive
Host Bus Adapters
A host bus adapter (HBA) is an interface card that resides inside a server or a host computer. It is the
functional equivalent of the NIC in a traditional Ethernet network. All traffic to the storage fabric or loop
is done via the HBA.
HBAs, with the exception of older Compaq cards and early Tachyon based cards, support both FC-AL and
Fabric (since 1999). However, configuration is not as simple or as automatic as could be supposed. It is
difficult to figure out if an HBA configures itself to the appropriate setting. On a Brocade fabric, it is
possible to get everything connected, however, some of it might be operating as loop and still appear to
work. It is important to verify from the switch side that the hosts are operating in the appropriate mode.
Note: Be sure to select the correct HBA for the topology that you are using. Although some switches can
auto-detect the type of HBA in use, using the wrong HBA in a topology can lead to data loss and can cause
many issues in the storage fabric.

Hubs, Switches, Routers and Bridges


Thus far, we have discussed the storage fabric as a generic infrastructure that allows hosts and devices
to communicate with each other. As you have seen, there are fundamentally different fibre channel
topologies, and these different topologies use different components to provide the infrastructure.

Switches typically support 16, 32, 64 or even 128 ports today. This allows for complex fabric
configurations. In addition, switches can be connected together in a variety of ways to provide larger
configurations that consist of multiple switches. Several manufacturers such as Brocade and McData
provide a range of switches for different deployment configurations, from very high performance switches
that can be connected together to provide a core fabric to edge switches that connect servers and devices
with less intensive requirements.
Figure 7 below shows how switches can be interconnected to provide a scalable storage fabric supporting
many hundreds of devices and hosts (these configurations are almost certainly deployed in highly available
topologies; the section Highly Available Solutions deals with high availability).
Figure 7: Core and edge switches in a SAN fabric (a core backbone switch fabric provides a very high
performance storage interconnect for datacenter servers; edge switches connect departmental storage
and servers into a common fabric)


The core backbone of the SAN fabric is provided by high performance (and typically high port density)
switches. The inter-switch bandwidth in the core is typically 8Gbit/sec and above. Large data center class
machines and large storage pools can be connected directly to the backbone for maximum performance.
Servers and storage with lower performance requirements (such as departmental servers) may be
connected via large arrays of edge switches (each of which may have 16 to 64 ports).
Bridges and Routers
In an ideal world, all devices and hosts would be SAN-aware and all would interoperate in a single,
ubiquitous environment. Unfortunately, many hosts and storage components are already deployed using
different interconnect technologies. To allow these types of devices to play in a storage fabric
environment, a wide variety of bridge or router devices allow technologies to interoperate. For example,
SCSI-to-fibre bridges or routers allow parallel SCSI (typically SCSI-2 and SCSI-3 devices) to be connected
to a fibre
network, as shown in Figure 8 below. In the future, bridges will allow iSCSI (iSCSI is a device interconnect
using IP as the communications mechanism and layering the SCSI protocol on top of IP) devices to connect
into a switched SAN fabric.

Figure 8: SCSI to Fibre Channel bridge (a bridge connects a parallel SCSI bus to the switched fibre channel fabric)


Storage Components
Thus far, we have discussed devices being attached to the storage bus as though individual disks are
attached. While in some very small, arbitrated loop configurations, this is possible, it is highly unlikely that
this configuration will persist. More likely, storage devices such as disk and tape are attached to the
storage fabric using a storage controller such as an EMC Symmetrix or a Compaq StorageWorks RAID
controller. IBM would refer to these types of components as Fibre RAID controllers.
In its most basic form, a storage controller is a box that houses a set of disks and provides a single
(potentially redundant and highly available) connection to a SAN fabric. Typically, disks in this type of
controller appear as individual devices that map directly to the individual spindles housed in the controller.
This is known as a JBOD (just a bunch of disks) configuration. The controller provides no value-add; it is
just a concentrator to easily connect multiple devices to a single (or, for high availability, a small number
of) fabric switch ports.
Modern controllers almost always provide some level of redundancy for data. For example, many
controllers offer a wide variety of RAID levels such as RAID 1, RAID 5, RAID 0+1 and many other
algorithms to ensure data availability in the event of the failure of an individual disk drive. In this case,
the hosts do not see devices that correspond directly to the individual spindles; rather, the controller
presents a virtual view of highly available storage devices to the hosts, called logical devices.

Figure 9: Logical devices (physical disks in the storage box make up a RAID-5 set and a mirror set,
which the storage controller presents to the storage fabric as logical disks)


In the example in Figure 9, although there are five physical disk drives in the storage cabinet, only two
logical devices are visible to the hosts and can be addressed through the storage fabric. The controller
does not expose the physical disks themselves.
Many controllers today are capable of connecting directly to a switched fabric; however, the disk drives
themselves are typically either SCSI or, more commonly now, disks that have a built-in FC-AL interface.
As you can see in Figure 10 below, the storage infrastructure that the disks connect to is totally
independent from the infrastructure presented to the storage fabric.
A controller typically has a small number of ports for connection to the fibre channel fabric (at least two
are required for highly available storage controllers). The logical devices themselves are exposed through
the
controller ports as logical units (LUNs).
Figure 10: Internal components of a storage controller (a storage fabric adapter, CPU, memory, and data
cache sit on the storage controller backplane; arbitrated loops, typically two redundant loops per shelf,
connect the physical disk drives in the storage cabinet to the backplane, and the controller exposes
logical devices to the storage fabric)
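
The virtualization described above can be sketched very simply: a controller groups physical spindles into a RAID-5 set and a mirror set, and the fabric only ever sees the two logical units (LUNs), never the individual disks. The disk names and the class interface below are invented for the example.

```python
# Sketch of controller-based virtualization: physical spindles are grouped
# into RAID sets and only logical units (LUNs) are presented to the fabric.

class StorageController:
    def __init__(self) -> None:
        self._luns: dict[int, dict] = {}

    def create_lun(self, lun_id: int, raid_level: str, disks: list[str]) -> None:
        self._luns[lun_id] = {"raid": raid_level, "disks": disks}

    def luns_visible_to_fabric(self) -> list[int]:
        # Hosts see only LUN numbers; the physical spindles stay hidden.
        return sorted(self._luns)

if __name__ == "__main__":
    ctrl = StorageController()
    ctrl.create_lun(0, "RAID-5", ["disk0", "disk1", "disk2"])   # parity-protected set
    ctrl.create_lun(1, "RAID-1", ["disk3", "disk4"])            # mirror set
    print("LUNs presented to the storage fabric:", ctrl.luns_visible_to_fabric())
```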

Highly Available Solutions


One of the benefits of storage area networks is that the storage can be managed as a centralized pool of
resources that can be allocated and re-allocated as required. This powerful paradigm is changing the way
data centers and enterprises are built, however, one of the biggest issues to overcome is that of
guaranteed availability of data. With all of the data detached from the servers, the infrastructure must be
architected to provide highly available access so that the loss of one or more components in the storage
fabric does not lead to the servers being unable to access the application data. All areas must be
considered including:
No single point of failure of cables or components such as switches, HBAs or storage controllers.
Typical highly available storage controller solutions from storage vendors have redundant components
and can tolerate many different kinds of failures.
Transparent and dynamic path detection and failover at the host. This typically involves multi-path drivers
running on the host to present a single storage view to the application across multiple, independent HBAs.
Built-in hot-swap and hot-plug for all components from HBAs to switches and controllers. Many high-end
switches and most if not all enterprise class storage controllers allow interface cards, memory, CPU and
disk drives to be hot-swapped.
There are many different storage area network designs that have different performance and availability
characteristics. Different switch vendors provide different levels of support and different topologies, however,
most of the topologies are derived from standard network topology design (after all a SAN is a network, just
the interconnect technology is tuned to a given application).

Topologies include:

Multiple independent fabrics

Federated fabrics

Core Backbone

Multiple Independent Fabrics


In a multiple fabric configuration, each device or host is connected to multiple fabrics, as shown in Figure
11 below. In the event of the failure of one fabric, hosts and devices can communicate using the remaining
fabric.

Figure 11: Multiple independent fabrics (two separate switch fabrics)


Pros
Resilient to management or user errors. For example, if security is changed or zones are deleted,
the configuration on the alternate fabric is untouched and can be re-applied to the broken fabric.
Cons
Managing multiple independent fabrics can be costly and error prone. Each fabric should have the same
zoning and security information to ensure a consistent view of the fabric regardless of the
communication port chosen.
Hosts and devices must have multiple adapters. In the case of a host, multiple adapters are typically
treated as different storage buses. Additional multi-pathing software such as Compaq SecurePath or EMC
PowerPath is required to ensure that the host gets a single view of the devices across the two HBAs.
Federated Fabrics
In a federated fabric, multiple switches are connected together, as shown in Figure 12 below.
Individual hosts and devices are connected to at least two switches.
Figure 12: Federated switches for a single fabric view (an inter-switch link federates the switches into a
single highly available fabric)

Pros
Management is simplified, the configuration is a highly available, single fabric, and therefore there is
only one set of zoning information and one set of security information to manage.
The fabric itself can route around failures such as link failures and switch
failures.
Cons
Hosts with multiple adapters must run additional multi-pathing software such as Compaq SecurePath or
EMC PowerPath to ensure that the host gets a single view of the devices where there are multiple
paths from the HBAs to the devices.
Management errors are propagated to the entire fabric.
Core Backbone
A core backbone configuration is really a way to scale-out a federated fabric environment. Figure 7 shows a
backbone configuration. The core of the fabric is built using highly scalable, high performance switches
where the inter-switch connections provide high-performance communication (e.g. 8-10 Gbit/sec using
today's technology). Redundant edge switches can be cascaded from the core infrastructure to provide
high numbers of ports for storage and hosts devices.
Pros
Highly scalable and available storage area network
configuration.
Management is simplified, the configuration is a highly available, single fabric, and therefore there is
only one set of zoning information and one set of security information to manage.
The fabric itself can route around failures such as link failures and switch
failures.
Cons
Hosts with multiple adapters must run additional multi-pathing software such as Compaq SecurePath or
EMC PowerPath to ensure that the host gets a single view of the devices where there are multiple
paths from the HBAs to the devices.
Management errors are propagated to the entire fabric.
FC Layers
Fibre Channel consists of multiple layers similar to the Open Systems Interconnect (OSI) layers in
network protocols. These layers communicate instructions for transmitting data.
The functions of each layer are:

Node level
o Upper-level protocol (ULP): Provides the communication path for the operating system,
drivers, and software applications over Fibre Channel
o FC-4: Defines the mapping of the ULP to the Fibre Channel
o FC-3: Provides common services for multiple ports on a Fibre Channel node

Port level
o FC-2: Transfers frame formats, performs sequence and exchange
management, controls the flow of data, and administers the topologies
o FC-1: Encodes and decodes data to transmit it through a physical medium
o FC-0: Acts as an interface for the physical media

Framing classes of service


Fibre Channel defines several communication systems called classes of service. The selection of a class
of service depends on the type of data being transmitted.
Class 1
Class 2
Class 3
Class 4 and 6
The following table summarizes the classes of service in Fibre Channel.
Class of service and Fibre Channel description:

Class 1: Dedicated connection; in-order delivery; acknowledge first frame only; no flow control after
first frame of connection.

Class 2: Connectionless; frame switched; out-of-order delivery possible; acknowledge each frame;
buffer-to-buffer and end-to-end flow control for all frames.

Class 3: Frame switched; out-of-order delivery possible; no acknowledgments; buffer-to-buffer flow
control for all frames.

Class 4: Connection oriented; virtual circuit; in-order delivery.

Class 5: Reserved.

Class 6: Connection oriented; multicast service.


Naming and addressing


Each node has a fixed 64-bit worldwide name (WWN) assigned by
the manufacturer. The address can be considered analogous to a
media access control (MAC) address. This guarantees uniqueness
within a large, switched network.
Unlike the MAC address, the WWN is not used to transport frames
across the network.
Two WWNs are assigned:

The node WWN is assigned for the HBA (the node).

A port WWN is assigned for each N_Port on the HBA.
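
A small Python sketch of the naming scheme above: one node WWN for the HBA and one port WWN per port, each a fixed 64-bit value commonly written as eight colon-separated hex bytes. The numeric values are invented examples, not real manufacturer-assigned WWNs.

```python
# Sketch of worldwide names: one 64-bit node WWN per HBA plus one port WWN per
# HBA port, rendered in the conventional colon-separated hex form.

def format_wwn(value: int) -> str:
    """Render a 64-bit WWN as xx:xx:xx:xx:xx:xx:xx:xx."""
    raw = value.to_bytes(8, "big")
    return ":".join(f"{b:02x}" for b in raw)

if __name__ == "__main__":
    node_wwn = 0x500143802426B200                 # hypothetical node WWN for the HBA
    port_wwns = [node_wwn + 1, node_wwn + 2]      # hypothetical WWNs, one per HBA port
    print("node WWN:", format_wwn(node_wwn))
    for i, wwn in enumerate(port_wwns):
        print(f"port {i} WWN:", format_wwn(wwn))
```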

Terms and definitions


The following terms are frequently used in Fibre Channel technology:

802.2 The IEEE logical link control layer of the OSI model.
Acknowledgement frame (ACK) Used for end-to-end flow control. An ACK is sent to
verify receipt of one or more frames in Class 1 and Class 2 services.
Arbitrated Loop Physical Address (AL_PA) A 1-byte value used in the Arbitrated Loop
topology used to identify L_Ports. This value will then also become the last byte of the address
identifier for each public L_Port on the loop.
Arbitrated loop One of the three Fibre Channel topologies. Up to 126 NL_Ports and one
FL_Port are configured in a unidirectional loop. Ports arbitrate for access to the loop based on their
AL_PA. Ports with lower AL_PAs have higher priority than those with higher AL_PAs.
Buffer-to-buffer credit (BB_Credit) Used for buffer-to-buffer flow control which determines
the number of frame buffers available in the port it is attached to.
Close primitive signal (CLS) Applies only to the Arbitrated Loop topology. It is sent by an
L_Port, which is currently communicating on the loop to close communication with the other L_Port.
End of frame (EOF) delimiter An ordered set that is always the last transmission word of a
frame. It is used to indicate that a frame has ended and indicates whether the frame is valid.
ESCON Enterprise Systems Connection, an IBM mainframe channel interface that runs over fiber.
Fabric A set of one or more connected Fibre Channel switches acting as a Fibre Channel
network.
Fiber optic (or optical fiber) The medium and technology associated with the transmission of
information as light impulses along a glass or plastic wire or fiber.
Frame The basic unit of communication between two N_Ports. Frames are composed of a
starting delimiter (SOF), a header, the payload, the cyclic redundancy check (CRC), and an ending
delimiter (EOF).
HIPPI High-Performance Parallel Interface standards.
IEEE Institute of Electrical and Electronics Engineers standards.
IP Internet Protocol.

Page 25 of 27

Link Two adjacent unidirectional fibers (signal lines) transmitting in opposite directions,
using their associated transmitters and receivers. The pair of fibers can be copper electrical
wires (differential pairs) or optical strands. One fiber sends data out of the port and the other
fiber receives data into the port.
Link service A facility used between an N_Port and a fabric or between two N_Ports. Link
services are used for such purposes as login, sequence and exchange management,
and maintaining connections.
Node A server, storage system, tape backup device, or video display terminal. Any source or
destination of transmitted data is a node. Each node must hold at least one port for
providing access to other devices.
Nonparticipating mode The mode an L_Port enters if more than 127
devices are on a loop and it cannot acquire an AL_PA. An L_Port can also voluntarily enter the
nonparticipating mode if it is still physically connected to the loop but does not participate. An
L_Port in the nonparticipating mode cannot generate transmission words on the loop and can
only retransmit words received on its inbound fiber.
Ordered set A 4-byte transmission word, which has a special character as its first transmission
character. An ordered set can be a frame delimiter, primitive signal, or primitive sequence.
Ordered sets are used to distinguish Fibre Channel control information from data.
Originator An N_Port that originates an exchange.
Participating mode A normal operating mode for an L_Port on a loop. An L_Port in this mode
has acquired an AL_PA and is capable of communicating on the loop.
Port The connector and supporting logic for one end of a Fibre Channel link.
Primitive sequence An ordered set transmitted repeatedly and used to establish and maintain
a link.
Primitive signal An ordered set used to indicate an event.
Private loop An arbitrated loop that stands on its own. It is not connected to a fabric.
Protocol In a Fibre Channel SAN, a data transmission convention encompassing timing,
control, formatting, and data representation.
Public loop An arbitrated loop connected to a fabric.
Responder The N_Port with which the originator of an exchange communicates.
SAN One or more Fibre Channel fabrics used to connect storage systems, servers, and
management appliances. Typical SANs have one fabric (for environments where moderate data
availability is required), two fabrics (when redundancy is required in the storage networks), or
even three or more fabrics (when an extremely large number of ports is required). Use caution
with the terms fabric and SAN because many SANs have two redundant fabrics.
SCSI Small Computer System Interface.
Sequence A group of related frames transmitted unidirectionally from one N_Port to another.
Sequence initiator The N_Port that begins a new sequence and transmits frames to another
N_Port.
Sequence recipient The N_Port that receives a particular sequence of data frames.
Start of frame (SOF) delimiter The ordered set that is always the first transmission word of a
frame. It is used to indicate that a frame will immediately follow and indicates which class of
service the frame will use.
Special character A special 10-bit transmission character, which does not have a
corresponding 8-bit value, but is still considered valid. The special character is used to indicate
that a particular transmission word is an ordered set.
Switch A device that connects the fabric using a virtual circuit or a virtual packet circuit. The
switch can make an electric connection between ports, or it can reroute packets through the switch.
Transmission character A 10-bit character transmitted serially over the fiber.


Transmission word A string of four consecutive transmission characters.


VI Virtual Interface.

SAN topologies
SAN devices connected by Fibre Channel can be arranged using one of three topologies:

Point-to-point: A dedicated and direct connection exists between two SAN devices.

Arbitrated loop: The SAN devices are connected in the form of a ring.

Switched fabric: SAN devices are connected using a fabric switch. The fabric switch enables a SAN
device to connect and communicate with multiple SAN devices simultaneously.

Fibre Channel port connection


Ports have different names depending on how they are used with each device in a SAN. The mode of
operation determines how the port is named.
A Fibre Channel port connection is where the cables are plugged. The hardware used to connect cables to
ports in a SAN is called a Gigabit Interface Converter (GBIC). Port and GBIC are interchangeable terms.
The GBIC is the physical connector and the port is the logical name.
In many cases, a single bus adapter on hosts and devices supports all of the listed port connections.
In the following table the E_Port is not shown for the switch because it connects only to another E_Port on
another switch. The switches automatically configure to the appropriate ports depending on what type of
port on the host or device is being connected. The only exception is the QL port, which has to be manually
configured.
Ports       Host N   Host NLpub   Host NLpri   Device N   Device NLpub   Device NLpri   Hub FC-AL
F switch    Yes      No           No           Yes        No             No             No
FL switch   No       Yes          No           No         Yes            Yes            Yes
QL switch   No       Yes          Yes          No         Yes            Yes            Yes
FC-AL hub   No       Yes          Yes          No         Yes            Yes            Yes

Note: NLpub and NLpri are loop ports for private and public loops, which are explained later in the course.
