
Power PCIe Spreadsheet

Last Updated: 12 December 2014 - Added support for October 6th announcement

The information in this spreadsheet is provided "as is". Though the authors have made a best effort to provide an accurate listing of PCIe adapters available on IBM Power Systems, the user should also use other tools such as configurators, other documentation, and IBM web sites to confirm specific points of importance to you.

This document will be stored as a tech doc and it is the authors' intent to refresh it over time. Therefore if you are using a version stored on your PC, please occasionally check to see if a newer version is available.
IBMers: http://w3.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD105846
Partners: http://partners.boulder.ibm.com/src/atsmastr.nsf/WebIndex/TD105846
If you find an error or have a suggestion for a change/addition, please send Mark Olson (olsonm@us.ibm.com) and Sue Baker (smbaker@us.ibm.com) an email.
(Suggestions for changes/additions which include the material for the change/addition will be appreciated.)
Background colors in the PCIe Adapters worksheet are used to help separate the types of information:
Green = Adapter specifications (column D visible; column E, F, G, H and I content is mostly included in column D, but provided for additional sorting)
Orange = Slots in which the adapter is supported
Blue = PCIe adapter/slot sizing information (also see the Performance Sizer worksheet / tab)
Yellow = Minimum OS level supported
Rose = PowerHA support

OS levels shown in this spreadsheet are minimum versions and revisions required by the adapter. Technology Level, Service Pack, PTF, machine code level, etc. requirements can be found at the following URL:
https://www-912.ibm.com/e_dir/eServerPrereq.nsf
POWER8 and POWER7 servers (rack/tower) are considered in this documentation. BladeCenter and PureFlex adapters are not covered here.
PCIe Gen1/Gen2 Power System slot insights

- Gen3 slots are provided in the POWER8 system units. They are either x8 or x16 slots. There are two x16 slots per socket; the remaining slots are x8. x16 slots have additional physical connections compared to x8 slots which allow twice the bandwidth (assuming there are adapters with corresponding x16 capabilities). x8, x4 and x1 cards can physically fit in x16 or x8 slots. 7-11-6-9 is a good pattern to memorize for POWER8 Scale-out servers: a 4U 1-socket server has 7 PCIe slots, a 4U 2-socket server has 11 PCIe slots, a 2U 1-socket server has 6 PCIe slots, and a 2U 2-socket server has 9 PCIe slots.

- Gen2 slots are provided in the Power 710/720/730/740/770/780 "C" model system units and in the Power 710/720/730/740/750/760/770/780 "D" model system units. Their machine type-model numbers are 8231-E1C (710), 8202-E4C (720), 8231-E2C (730), 8205-E6C (740), 9117-MMC (770), 9179-MHC (780) and 8231-E1D (710), 8202-E4D (720), 8231-E2D (730), 8205-E6D (740), 8408-E8D (750), 9109-RMD (760), 9117-MMD (770), 9179-MHD (780).
- Optional Gen2 slots are provided via the PCIe Riser Card (Gen2), FC #5685, on the Power 720 and Power 740.

- Gen1 slots are provided in the initially introduced Power 710/720/730/740/750/755/770/780 system units. Their machine type-model numbers are 8231-E2B (710), 8202-E4B (720), 8231-E2B (730), 8205-E6B (740), 8233-E8B (750), 8236-E8C (755), 9117-MMB (770), 9179-MHB (780).
- Optional Gen1 slots are provided via the PCIe Riser Card (Gen1), FC #5610, on the Power 720 and Power 740.
- Gen1 slots are provided in the #5802/5803/5873/5877 12X I/O drawers.
- Gen1 slots are provided in POWER6 servers.

Form Factor:
FH = full height
LP = low profile

Order the right form factor (size) adapter for the PCI slot in which it will be placed. No ordering/shipping structure has been announced for IBM Power Systems to order a conversion. So even though many of the PCIe adapters could be converted by changing the tailstock at the end of the adapter to be taller (full high) or shorter (low profile), no way to do so has been announced. If demand for such a capability develops, a method to satisfy it will be considered. Note that tailstocks are unique to each adapter.

FH/LP Power Systems slot insights
LP slots are found in the Power 710/730, in the optional 720/740 PCIe Riser Card (#5610/5685), and in the PowerLinux 7R1/7R2.
FH slots are found in the Power 720/740/750/755/760/770/780, in POWER6 servers, and in the #5802/5803/5873/5877 I/O drawers.

Most Gen2 cards are supported only in Gen2 or Gen3 slots. The #5913 and ESA3 large cache SAS adapters and the #5899/5260 1GbE adapters are exceptions to this generalization. Also #EC27/EC28/EC29/EC30 have some limited Gen1 slot support.

"Down generation": If you place a Gen2 card in a Gen1 slot it will physically fit and may even work. But these adapters weren't tested in Gen1 slots and thus there is no support statement for this usage. Also be VERY careful of performance expectations of a high-bandwidth Gen2 card in a Gen1 slot. For example, a 4-port 8Gb Fibre Channel adapter has about 2X the potential bandwidth that a Gen1 slot can provide.

Most Gen3 cards are supported only in Gen2 or Gen3 slots. The #EJ0L large cache SAS adapters and the #EJ0J/EJ0M SAS adapters are exceptions to this generalization. The same caveat about putting unsupported adapters in a "down generation" Gen1 PCIe slot applies.

Feb 2013 note: The POWER7+ 710/720/730/740 are at a higher firmware level than the previous POWER7 710/720/730/740. Though the "C" model 710/720/730/740 have Gen2 slots, the new Gen2 adapters EN0A/B/H/J are not tested/supported back on the "C" models. Other Gen2 adapters supported on the "C" models are also supported on the "D" models. There is currently no plan to introduce the newer firmware levels to the Power 710/720/730/740 "C" models.
All Gen1 adapters are supported in any Gen2 slot (subject to any specific server rules, OS/firmware support, and the physical size of the card).
Blind Swap Cassettes (BSC) for PCIe adapters
The Power 770/780 system units and the POWER7+ 750/760 have Gen4 BSC.
Other POWER7/POWER7+ system units do not have BSC.
The 12X-attached I/O drawers #5802/5877/5803/5873 have Gen3 BSC.

CCIN can be used to find the ordering feature code of an installed adapter. The ordering feature code is used to find the feature name, price, description, etc.
AIX users can use the lscfg command to determine an existing adapter's CCIN (Customer Card ID Number).
IBM i users can use the STRSST command to determine an existing adapter's CCIN.

This document was developed for products and/or services offered in the United States. IBM may not offer the products, features, or services discussed in this document in other countries.

The information may be subject to change without notice. Consult your local IBM business contact for information on the products and services available in your area.

IBM, the IBM logo, AIX, BladeCenter, Power, POWER, POWER6, POWER6+, POWER7, POWER7+, PowerLinux, Power Systems, and Power Systems Software are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. A full list of U.S. trademarks owned by IBM may be found at ibm.com/legal/copytrade.shtml

Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.

Change Log of this document


Originally distributed Oct 2011
April 2012 - Added April announce content
18 May 2012 - Added PCIe performance sizing factors and sizing tool
29 May 2012 - Added PowerLinux worksheet/tab, updated/augmented some text in the Performance Sizer tab, added more sorting columns (E, F, G, H, I) for adapter descriptions
11 June 2012 - Added PowerHA support columns X & Y; filled in more of the sorting column E, F, G, H, I values
13 July 2012 - Expanded PowerLinux tab to include two new adapter features and several full-high adapters for #5802/5877
22 July 2012 - Added Linux RHEL 6.1 support of RoCE adapter
1 Oct 2012 - Added additional RoCE adapters and support
5 Feb 2013 - Added EN0H/EN0J, EN0A/EN0B, updated misc points, updated PowerLinux content; added a new worksheet that provides detailed technical information about the Ethernet port attributes
28 May 2013 - Added #EL39/EL3A adapters from Solarflare to the PowerLinux tab, updated #EN0H/EN0J/EN0A/EN0B adapters to show support on 770/780 servers, updated #EN0H/EN0J to show IBM i support of FCoE starting July 2013, updated #EC27/EC28/EC29/EC30 to indicate support in #5802/5877 drawers, updated several adapters to show PowerHA support; also updated where this tech doc is stored
5 Aug 2013 - Added information for the PowerLinux 7R4 which announced 30 July
1 Oct 2013 - Added IBM Flash Adapter 90, PCIe2 SAS adapter refresh, Solarflare adapters
14 Jan 2014 - Added PCIe Gen3 SAS adapters, expanded support of the ESA3 SAS adapter, added IBM i Bisync adapter, expanded IBM Flash Adapter 90 support for 7R2 and RHEL; added new source manufacturer tab/worksheet
30 May 2014 - Added expanded support of existing SAS adapters; added 4-port Ethernet card with copper twinax
6 June 2014 - Added POWER8 PCIe adapters
8 July 2014 - Added support for July 15th announcement
12 December 2014 - Added support for October 6th announcement

PCIe Adapters worksheet - fields recorded for each adapter, shown here for the sample entry, feature code 5735:

Feature code: 5735
Feature name: 8 Gigabit PCI Express Dual Port Fibre Channel Adapter
Adapter description: Fibre Channel 8Gb 2-port
Cable: FC
Number of ports: 2
Medium: SR optical
Port bandwidth: 8 Gbit/s
CCIN: 577D
Form factor: FH
Adapter generation: Gen1
Functionally equivalent features: 5273
SRIOV: No
Supported POWER8 system nodes (CEC): S814, S824
POWER 7/7+ CEC card slot support (see note S0 below): 720, 740, 750, 755, 760+, 770, 780, 520, 550, 560
POWER 7/7+ I/O drawer slot support: 5802, 5877, 5803, 5873
Supported slots (Gen1 / Gen2 / Both): Both
Sizing factor (see the Performance Sizer worksheet/tab): 16
Performance comment: Note P9
Minimum OS levels: AIX 5.3, IBM i 6.1, RHEL 5.5, SLES 10
PowerHA support: for AIX Yes, for IBM i Yes
Comments: NPIV requires VIOS

This worksheet / tab is for use with the IBM PowerLinux servers.
Adapter support is also documented at URL:

http://www-01.ibm.com/support/knowledgecenter/8247-21L/p8eab/p8eab_83x_8rx_supported_pci.htm?lang=en

The marketing model name for the 2U servers is 7R1 or 7R2; the ordering system is machine type 8246.
7R1 ordering models (one socket) include 8246-L1C/L1S/L1D/L1T and 7R2 ordering models are 8246-L2C/L2S/L2D/L2T.
The "C" and "D" models don't support attaching I/O drawers (12X or disk/SSD-only). The "S" and "T" models support I/O drawers.
The 7R2 (model L2S/L2T only) can attach one or two 12X PCIe I/O drawers. Each drawer has full high PCIe slots.

The marketing model name for the 4U server is 7R4. The ordering system is machine type 8248.
The 7R4 is a four-socket server with an ordering model of L4T. It is a 4U server with full high PCIe slots. It can attach up to four 12X PCIe I/O drawers, also with full high PCIe slots. The 7R4 does not support low profile (LP) adapters.

Adapter General Information

Feature Code / Feature Name / Adapter Description

2053 - PCIe LP RAID & SSD SAS Adapter 3Gb - SAS-SSD Adapter, double-wide, 4 SSD modules
2055 - PCIe LP RAID & SSD SAS Adapter 3Gb w/ Blind Swap Cassette - SAS-SSD Adapter, double-wide, 4 SSD modules
2728 - 4 port USB PCIe Adapter - USB 4-port
5260 - PCIe2 LP 4-port 1GbE Adapter - Ethernet 1Gb 4-port, TX / UTP / copper
5269 - PCIe LP POWER GXT145 Graphics Accelerator - Graphics, POWER GXT145
5270 - PCIe LP 10Gb FCoE 2-port Adapter - FCoE (CNA) 2-port 10Gb, optical
5271 - PCIe LP 4-Port 10/100/1000 Base-TX Ethernet Adapter - Ethernet 1Gb 4-port, TX / UTP / copper
5272 - PCIe LP 10GbE CX4 1-port Adapter - Ethernet 10Gb 1-port, copper CX4
5273 - PCIe LP 8Gb 2-Port Fibre Channel Adapter - Fibre Channel 8Gb 2-port
5274 - PCIe LP 2-Port 1GbE SX Adapter - Ethernet 1Gb 2-port, optical SX
5275 - PCIe LP 10GbE SR 1-port Adapter - Ethernet 10Gb 1-port, optical SR
5276 - 4 Gbps PCIe Dual-port Fibre Channel Adapter - Fibre Channel 4Gb 2-port
5277 - PCIe LP 4-Port Async EIA-232 Adapter - Async 4-port EIA-232
5279 - PCIe2 LP 4-Port 10GbE&1GbE SFP+ Copper&RJ45 Adapter - Ethernet 10Gb+1Gb, 4 ports: 2x10Gb copper & 2x1Gb RJ45
5280 - PCIe2 LP 4-Port 10GbE&1GbE SR&RJ45 Adapter - Ethernet 10Gb+1Gb, 4 ports: 2x10Gb optical SR & 2x1Gb RJ45
5281 - PCIe LP 2-Port 1GbE TX Adapter - Ethernet 1Gb 2-port, TX / UTP / copper
5283 - PCIe2 LP 2-Port 4X IB QDR Adapter 40Gb - QDR 2-port 4X IB, 40Gb
5284 - PCIe2 LP 2-port 10GbE SR Adapter - Ethernet 10Gb 2-port, optical SR
5286 - PCIe2 LP 2-Port 10GbE SFP+ Copper Adapter - Ethernet 10Gb 2-port SFP+, copper twinax
5289 - 2 Port Async EIA-232 PCIe Adapter - Async 2-port EIA-232
5290 - PCIe LP 2-Port Async EIA-232 Adapter - Async 2-port EIA-232
5708 - 10Gb FCoE PCIe Dual Port Adapter - FCoE (CNA) 2-port 10Gb, optical
5717 - 4-Port 10/100/1000 Base-TX PCI Express Adapter - Ethernet 1Gb 4-port, TX / UTP / copper
5732 - 10 Gigabit Ethernet-CX4 PCI Express Adapter - Ethernet 10Gb 1-port, copper CX4
5735 - 8 Gigabit PCI Express Dual Port Fibre Channel Adapter - Fibre Channel 8Gb 2-port
5748 - POWER GXT145 PCI Express Graphics Accelerator - Graphics, POWER GXT145
5767 - 2-Port 10/100/1000 Base-TX Ethernet PCI Express Adapter - Ethernet 1Gb 2-port, TX / UTP / copper
5768 - 2-Port Gigabit Ethernet-SX PCI Express Adapter - Ethernet 1Gb 2-port, optical SX
5769 - 10 Gigabit Ethernet-SR PCI Express Adapter - Ethernet 10Gb 1-port, optical SR
5772 - 10 Gigabit Ethernet-LR PCI Express Adapter - Ethernet 10Gb 1-port, optical LR
5774 - 4 Gigabit PCI Express Dual Port Fibre Channel Adapter - Fibre Channel 4Gb 2-port
5785 - 4 Port Async EIA-232 PCIe Adapter - Async 4-port EIA-232
5805 - PCIe 380MB Cache Dual - x4 3Gb SAS RAID Adapter - SAS 380MB cache, RAID5/6, 2-port 3Gb
5899 - PCIe2 4-port 1GbE Adapter - Ethernet 1Gb 4-port, TX / UTP / copper
5901 - PCIe Dual-x4 SAS Adapter - SAS 0GB cache, no RAID5/6, 2-port 3Gb
5913 - PCIe2 1.8GB Cache RAID SAS Adapter Tri-port 6Gb - SAS 1.8GB cache, RAID5/6, 3-port 6Gb
EC27 - PCIe2 LP 2-Port 10GbE RoCE SFP+ Adapter - Ethernet 10Gb 2-port RoCE SFP+, copper twinax
EC3A - PCIe3 LP 2-Port 40GbE NIC RoCE QSFP+ Adapter - Ethernet 40Gb 2-port RoCE QSFP+, copper twinax
EC41 - PCIe2 LP 3D Graphics Adapter x1 - Graphics, 3D
EC45 - PCIe2 LP 4-Port USB 3.0 Adapter - USB 4-port
EJ16 - PCIe3 LP CAPI Accelerator Adapter - CAPI
EL09 - PCIe LP 4Gb 2-Port Fibre Channel Adapter - Fibre Channel 4Gb 2-port
EL10 - PCIe LP 2-x4-port SAS Adapter 3Gb - SAS 0GB cache, no RAID5/6, 2-port 3Gb
EL11 - PCIe2 LP 4-port 1GbE Adapter - Ethernet 1Gb 4-port, TX / UTP / copper
EL27 - PCIe2 LP 2-Port 10GbE RoCE SFP+ Adapter - Ethernet 10Gb 2-port RoCE, copper twinax
EL2K - PCIe2 LP RAID SAS Adapter Dual-port 6Gb - SAS 0GB cache, RAID5/6, 2-port 6Gb
EL2M - PCIe LP 2-Port 1GbE TX Adapter - Ethernet 1Gb 2-port, TX / UTP / copper
EL2N - PCIe LP 8Gb 2-Port Fibre Channel Adapter - Fibre Channel 8Gb 2-port
EL2P - PCIe2 LP 2-port 10GbE SR Adapter - Ethernet 10Gb 2-port, optical SR
EL2Z - PCIe2 LP 2-Port 10GbE RoCE SR Adapter - Ethernet 10Gb 2-port RoCE, optical SR
EL38 - PCIe2 LP 4-port (10Gb FCoE & 1GbE) SR&RJ45 Adapter - CNA (FCoE) 10Gb+1Gb, 4 ports: 2x10Gb optical SR & 2x1Gb UTP copper RJ45
EL39 - PCIe2 LP 2-port 10GbE SFN6122F Adapter - Ethernet 10Gb 2-port, OpenOnload SolarFlare SFP+ copper
EL3A - PCIe2 LP 2-port 10GbE SFN5162F Adapter - Ethernet 10Gb 2-port, SolarFlare SFP+ copper
EL3B - PCIe3 RAID SAS Adapter Quad-port 6Gb - SAS 0GB cache, RAID5/6, 4-port 6Gb
EL3C - PCIe2 LP 4-port (10Gb FCoE and 1GbE) Copper SFP+&RJ45 Adapter - Ethernet 10Gb+1Gb, 4 ports: 2x10Gb copper & 2x1Gb RJ45
EL3Z - PCIe2 LP 2-port 10GbE BaseT RJ45 Adapter - Ethernet 10Gb 2-port, RJ45
EL60 - PCIe3 LP 4 x8 SAS Port Adapter - SAS 0GB cache, RAID5/6, 4-port 6Gb
EN0B - PCIe2 LP 16Gb 2-port Fibre Channel Adapter - Fibre Channel 16Gb 2-port
EN0L - PCIe2 LP 4-port (10Gb FCoE & 1GbE) SFP+Copper&RJ45 Adapter - Ethernet 10Gb+1Gb CNA, 4 ports: 2x10Gb copper & 2x1Gb RJ45
EN0N - PCIe2 4-port (10Gb FCoE & 1GbE) LR&RJ45 Adapter - Ethernet 10Gb+1Gb CNA, 4 ports: 2x10Gb optical LR & 2x1Gb RJ45
EN0T - PCIe2 4-Port (10Gb+1GbE) SR+RJ45 Adapter - Ethernet 10Gb+1Gb, 4 ports: 2x10Gb optical SR & 2x1Gb RJ45
EN0V - PCIe2 4-port (10Gb+1GbE) Copper SFP+RJ45 Adapter - Ethernet 10Gb+1Gb, 4 ports: 2x10Gb copper & 2x1Gb RJ45
EN0Y - PCIe2 LP 8Gb 4-port Fibre Channel Adapter - Fibre Channel 8Gb 4-port
EN28 - 2 Port Async EIA-232 PCIe Adapter - Async 2-port EIA-232
ES09 - IBM Flash Adapter 90 (PCIe2 0.9TB) - Flash memory adapter 900GB
ESA1 - PCIe2 RAID SAS Adapter Dual-port 6Gb - SAS 0GB cache, RAID5/6, 2-port 6Gb

For each adapter listed above, the remaining columns of the PowerLinux worksheet record: CCIN; form factor (LP or FH); PCIe generation (Gen1, Gen2 or Gen3); supported POWER8 PowerLinux models (S812L, S822L); supported 7R1/7R2 CEC card slots by ordering model (L1C, L1S, L2C, L2S, L1D, L1T, L2D, L2T); 7R4 CEC card slot support; I/O drawer slot support (#5802, #5877, #EL36, #EL37 for full high adapters, "NA" for low-profile-only adapters, with drawer attachment noted as "L2S, L2T, L4T only"); a sizing factor for the Performance Sizer; minimum RHEL and SLES levels; and comments such as "NPIV requires VIOS", "no NPIV", "No network install capability", "Linux network install not supported", "withdrawn from marketing - use EL2P instead", "withdrawn from marketing - use EL2N instead", and "SSD modules withdrawn from marketing in Aug 2013".
Some selected cable comments. This does not cover all types of adapters or cables.
Twinax Ethernet cable comments

Ethernet cables (twinax) for Power copper SFP+ ports are active cables. Passive cables are not tested/supported. Feature codes #EN01, #EN02 or #EN03 (1m, 3m, 5m) are tested/supported. Note these twinax cables are NOT CX4 or 10GBASE-T or AS/400 5250 twinax. Transceivers are located at each end of the #EN01/02/03 cables and are shipped with each cable.

Active cables will work in all cases. Passive has a potential risk of not working in some cases, but is lower cost. To avoid confusion/error/support/debug issues, Power Systems specifically designates active cabling as the only supported SFP+ copper twinax cable option. Use passive at your own risk and realize it is an unsupported and untested configuration from a Power Systems perspective.
Optical SFP+ cables for Ethernet

These cables are the same as for 8Gb Fibre Channel cabling, but the distance considerations are not as tight. Power Systems optical SFP+ ports include a transceiver in the PCIe adapter or Integrated Multifunction Card.
SFP+ (Small Form Factor Pluggable Plus)
10Gb cables
See TechDoc TD106020 for some good insights:
IBMers: http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD106020
BPs: http://www-03.ibm.com/partnerworld/partnerinfo/src/atsmastr.nsf/WebIndex/TD106020

QDR 4X Infiniband cable comments

QDR cables for the #5283 and #5285 adapters which have been tested and are supported are the copper 1m, 3m, 5m (#3287, #3288, and #3289) and optical 10m, 30m (#3290, #3293). Both the copper and optical cables come with the QSFP+ transceivers permanently attached at each end. The adapters just have an empty QSFP+ cage.

ASYNC Adapter cable comments


The 4-port Async adapter (#5277 and #5785) has one physical port on the card, but comes with a short fan-out cable which has four EIA-232 ports. The fan-out cable does not have its own feature code.

The 2-port Async adapter (#5289 and #5290) has two physical ports on the card which are RJ45 connectors. Feature code #3930 converter cables are used to convert from an RJ45 connector to a 9-pin (DB9) D-shell connector. These converter cables are fairly generic and can be found at many electrical stores. This same #3930 is used on the Power 710/720/730/740 system unit to convert the serial/system ports to a DB9 connector. Order one converter cable for each of the two ports you plan on using.

Graphic adapter cable comments


PCIe graphics cards: the low profile #5269 has one port on the card and comes with a short fan-out cable which yields two DVI ports.
PCIe graphics cards: the full high #5748 has two DVI ports on the card.
Note the older PCI (not PCIe) graphics card has one DVI port and one VGA port.
You can connect a VGA cable to a DVI port via a dongle converter cable, feature #4276 (28-pin D-shell DVI plug to/from 15-pin D-shell VGA plug).

SAS adapter cable comments


The InfoCenter generally has very good information about cabling SAS adapters.

PCIe Gen1 adapters have a wider connector called "Mini-SAS". PCIe Gen2 adapters need either an HD (High Density) Mini-SAS connector or an HD Narrow Mini-SAS connector. PCIe Gen3 adapters need an HD Narrow Mini-SAS connector.

YO cables - the "tail" or "bottom of the Y" connects to the adapter SAS port. The two connectors on the "top of the Y" connect to an I/O drawer to provide redundant paths into the SAS expanders.

X cables - the two "bottoms of the X" connect to two different adapters' ports. The two "tops of the X" connect to an I/O drawer. This provides redundant paths into the SAS expanders and into redundant SAS adapters.
AE or YE cables attach to tape drives.
AA cables connect between two PCIe2/PCIe3 SAS adapters with write cache.

The new Gen3 SAS adapters have four SAS ports which are VERY close together and require a more narrow Mini-SAS HD connector. "Narrow" cable feature codes equivalent to the PCIe Gen2 SAS adapter cables in terms of length and electrical characteristics are announced with the PCIe Gen3 SAS adapters. The Narrow cables can be used on the PCIe Gen2 SAS adapters. The earlier "fatter" Mini-SAS HD cables are NOT supported on the new PCIe Gen3 SAS adapters since the connectors would jam together and potentially damage the ports on the back of the PCIe Gen3 card.


I/O Performance Limits Estimator Tool


Strongly suggest reading all the intro material before trying to use the tables
Table of contents
Overview / purpose of the tool
Using / interpreting / adjusting sizing factors
Sizing factors for Integrated I/O, 12X drawers, Ultra Drawer, 4X IB GX adapter
The tables
Optionally read: sharing bandwidth by PCIe slots
OVERVIEW / PURPOSE OF THE TOOL

This tool is intended as a rough rule of thumb to determine when you are approaching the I/O limits of a system or I/O drawer.

HOWEVER, there are many variables and even more combinations of these variables which can impact the actual resulting performance in a real environment. Thus always apply common sense and remember this is designed to be a quick and simple approach ... a rough sizer. This is not a heavy-weight tool full of precise measurements and interactions. Keep in mind the key planning performance words, "it depends" and "your mileage may vary".
You will be using the "Sizing Factor" for each PCIe adapter. These are found in the "PCIe Adapter" worksheet/tab of this spreadsheet. Look for the light blue columns.

For example, feature code #5769 (10Gb 1-port Ethernet adapter) has a sizing factor of 5 to 10. If high utilization, use "10" for the value. If low usage, use a "5". Or pick a number "6" or "7" or "8" or "9" if you think it is somewhere in between.

Or another example, feature code #5735 (8Gb 2-port Fibre Channel Adapter) has a sizing factor of 16 assuming both ports are being used. But if only one port is used, then use an "8".
These sizing factors are actually rough Gb/s values. So a sizing value of 8 equates to 8Gb/s.
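As a quick illustration of how these published ranges might be turned into a single working number, here is a minimal sketch in Python; the function name and the simple interpolation rule are illustrative assumptions, not part of the spreadsheet, while the adapter values come from the examples above.

```python
def pick_sizing_factor(low, high, utilization):
    """Pick a rough Gb/s sizing factor from the published range,
    scaled by expected utilization (0.0 = low, 1.0 = high)."""
    return low + (high - low) * utilization

# #5769 10Gb 1-port Ethernet: published range is 5 to 10
print(pick_sizing_factor(5, 10, 1.0))   # heavy use  -> 10.0
print(pick_sizing_factor(5, 10, 0.5))   # in between -> 7.5

# #5735 8Gb 2-port Fibre Channel: 16 with both ports busy
ports_in_use = 1
print(16 * ports_in_use / 2)            # one port   -> 8.0
```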

Totaling all the sizing factors for all the adapters will give you a good idea if you are coming close to a performance limit for your I/O. You will be looking at an overall system performance limit as well as "subsystem limits" or "subtotals". Subsystem limits are maximums per an individual GX bus, system unit, I/O drawer, PCIe Riser Card, etc. The maximum Gb/s for each subtotal/subsystem and the overall POWER7 system are found further down on this worksheet.

NOTE1: these values do NOT translate directly into MB/s numbers. This tool builds on the observation that most adapters are rarely run at 100% utilization in real client application environments. Thus to help avoid over configuring for most shops, the tool assigns max performance factors that are 70-75% of the maximum throughput technically/theoretically possible.

NOTE2: If you are correlating these bandwidth maximums to the bandwidth values provided in marketing presentations and in the facts/features document, observe that there is a difference. The marketing materials are correct, but use peak bandwidth and assume full duplex connections. This sizing tool takes a more conservative approach which probably represents more typical usage by real clients. It assumes simplex connections and sustained workloads in its maximums.

For example: for simplicity, marketing materials provide a single performance value of 20 GB/s for GX adapters/slots. The details behind this are that the GX adapter or 12X Hub specifications are: BURST = 10 GB/s simplex & 20 GB/s duplex; SUSTAINED = 5 GB/s DMA simplex & 8 GB/s duplex.
This tool uses Gb (bits) vs GB (bytes) because there is a lot of variability between different protocols/adapters and how their GB ratings translate into Gb and vice versa.

Comments/insights on how to use, interpret and/or adjust sizing factors

Remember to ask those common sense questions. "Are these adapters being run at the same time?" For example, if you have an 8Gb FC port that only goes to a tape that only runs one night a week when other adapters are not busy, you probably do not need to count it at all in the performance analysis. Or another good question, "Do you have redundant adapters or ports which are there only in case the primary adapter/port fails?" If you do, then you can probably ignore the redundant adapter/ports.

For Fibre Channel and for Ethernet ports, simplex line rates are assumed, not duplex. Based on observations over the years this is a good simplification. Many client workloads aren't hitting the point where these adapters' duplex capability is a factor in calculating throughput. If your environment is a real exception and you know you have a really heavy workload using duplex, then add another 20-25% to the sizing factor.

The actual real data rates in most applications are typically less than the line rate. This is especially true for 10Gb Ethernet, where many applications run the adapter at less than 50% of line rates. If you know this is true for you, cut the sizing factor in half (think of it as a 5Gb Ethernet adapter).

Similarly, if you have an 8Gb or 16Gb Fibre Channel adapter and it is attached to a 4Gb switch, then treat it like a 4Gb adapter (cut the 8Gb sizing factor in half). Likewise if you have a 16Gb FC adapter on an 8Gb switch, treat it like an 8Gb adapter.

For Fibre Channel over Ethernet adapters (FCoE or CNAs): treat these adapters based on the mix of work done. If using one port for Fibre Channel only, treat it just like the 8Gb Fibre Channel adapter port above. If using a port solely for NIC traffic, treat it like a 10Gb Ethernet port above. If a port has mixed FC and Ethernet workloads, assign a sizing factor based on the mix.
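The adjustment guidance above can be collected into a small helper; this is only a sketch of the rules as stated, and the function and flag names are assumptions made for illustration.

```python
def adjusted_sizing_factor(base, light_use=False, switch_downrated=False,
                           heavy_duplex=False):
    """Apply the rough adjustments described above to a base sizing factor."""
    factor = base
    if light_use:          # adapter typically runs below ~50% of line rate
        factor /= 2
    if switch_downrated:   # e.g. an 8Gb FC adapter attached to a 4Gb switch
        factor /= 2
    if heavy_duplex:       # known heavy duplex workload: add roughly 20-25%
        factor *= 1.25
    return factor

print(adjusted_sizing_factor(10, light_use=True))        # 10GbE port -> 5.0
print(adjusted_sizing_factor(8, switch_downrated=True))  # 8Gb FC     -> 4.0
```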

SAS adapters: Don't be surprised by SAS adapters' max throughput numbers. A 2-port, 6Gb adapter is not a 2 x 6 = 12Gb/s multiplication. You would be overlooking the insight that each port/connector is a "x4" connection. Four paths per connection yield a 2 x 6 x 4 = 48Gb/s outcome. It's bigger than a 2-port 8Gb Fibre Channel card (2 x 8 = 16), but then you could go into duplex mode for FC and 2 x 16 = 32 ... so much closer to 48 of the SAS 2-port. Also remember most SAS adapters run in pairs, so that makes the comparison more challenging. "It all depends".

Usually there are a lot of SAS ports for connectivity and the per-port workload is lighter. And remember there can be a huge, huge difference between SSD and HDD.

Also note there is a big difference in typical workload whether there is a lot of data / database workload or something easy like AIX/Linux boot drives. Boot drives are a very light load.

Transaction workloads focus on IOPS (I/O Operations Per Second) and have lower data rates. Reduce the SAS sizing factor if HDD is running a transaction workload. You would probably reduce the sizing factor even if SSD is running transaction workloads, but SSD are so fast you might leave the value higher even with a transaction workload. Alternatively, if you are doing data mining or engineering/scientific work, or handling large video files where there are large block transfers, then Gb/s are high and you should use the larger sizing factor for the SAS controller.
So SAS adapter rough guides (summarized in the short sketch after the notes below):

~~ #5901 / 5278 is a 6 if running lots of drives with large data blocks or a 1 if just supporting boot drives. If configured as a pair, or as dual adapters, each adapter has that same sizing factor. So the sizing factor for a pair could be up to 12. Pairs are optional.

~~ #5805 / 5903 is an 8 if running lots of drives with large data blocks (especially SSDs) or a 1 if just doing boot drives. These adapters always work as pairs, and each of the pair has this sizing factor. The total sizing factor for a pair could thus be up to 16 with large block workloads.

~~ #ESA1 / ESA2 runs only SSD, so use 28 per adapter unless lightly loaded or unless there are not that many SSD. Pairs are optional.
~~ #5913 or #ESA3 is a 30 if running lots of SSDs or a 16 if running lots of HDDs. These adapters always work as pairs, and each of the pair has this sizing factor. The total sizing factor for a pair could thus be up to 60 with lots of SSD.

~~ #2053 / 2054 / 2055 runs only SSD, so use a 6 if all four SSD are present. These adapters are not paired (but can be mirrored).

Note that "lots" of SSD can vary depending on whether you are running the newer 387GB SSD, which are faster and have higher throughput than the 177GB SSD.

Gen2/Gen1 note: The tool below ignores differences for Gen2 PCIe adapters in Gen1 PCIe slots. You will have to keep this in mind if using a #5913 or #ESA1 (SAS adapters) and assume a max of 15 vs 30. (Remember IOPS is not usually related to bandwidth limitations.) Most of the time Gen2 adapters are only supported in Gen2 slots, so it is not commonly a consideration.
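The SAS rough guides above, collected as data so a pair total can be computed; the dictionary layout and function name are an assumed convenience for illustration, while the values are the ones given in the guides.

```python
# (light, heavy) rough Gb/s per adapter; "paired" marks adapters that always run as a pair
sas_guides = {
    "5901/5278":      {"range": (1, 6),   "paired": False},
    "5805/5903":      {"range": (1, 8),   "paired": True},
    "ESA1/ESA2":      {"range": (28, 28), "paired": False},
    "5913/ESA3":      {"range": (16, 30), "paired": True},
    "2053/2054/2055": {"range": (6, 6),   "paired": False},
}

def sas_sizing(feature, heavy=True, optional_pair=False):
    guide = sas_guides[feature]
    per_adapter = guide["range"][1] if heavy else guide["range"][0]
    count = 2 if (guide["paired"] or optional_pair) else 1
    return per_adapter * count

print(sas_sizing("5913/ESA3", heavy=True))    # required pair, lots of SSD  -> 60
print(sas_sizing("5901/5278", heavy=False))   # single adapter, boot drives -> 1
```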

Sizing factors for Integrated I/O, 12-X attached I/O drawers, EXP30 Ultra Drawer and 4X IB adapter
Integrated IO Controllers

The 710 through 780 have integrated controllers for running HDD/SSD, DVD, tape, and USB ports. The "B" models have IVE/HEA which are 10Gb or 1Gb Ethernet. You need to include these in the calculations. You might ignore the DVD and USB as they are generally very small numbers whose peaks (if any) happen at off hours. Likewise the 1Gb Ethernet is small. But if you have integrated 10Gb Ethernet or the Dual RAID SAS options you need to include them in the calculations. Treat the integrated 10Gb Ethernet adapter like a PCIe 10Gb Ethernet adapter with the same number of 10Gb ports.

For a rough guide for the integrated Dual RAID controllers: you may be up to a 10 for the integrated pair (not 10 each). The 10 would be if you have some SSD or a lot of HDDs doing large block I/O. Remember that you can attach an #5887 EXP24S to the SAS port on the back of the server and then this same pair of integrated SAS controllers is handling another 24 SAS bays. Increase the sizing factor closer to 10 if running lots of drives -- especially if using large block I/O.

If you are NOT using the "dual" integrated SAS adapter option and are using a "single" integrated SAS adapter, use the same performance factor as a #5901 / 5278 SAS adapter, with a performance sizing factor of 1-6. For the Power 770/780 you can have two of these single adapters, each with their own sizing factor. On a Power 710/720/730/740/750/755 you would have just one adapter with one sizing factor.

12X- attached I/O Drawers with PCIe slots

#5802/5877 and #5803/5873

This tool focuses on the total bandwidth available to you. These I/O drawers attach to GX++ adapters and it is that GX adapter which sets the maximum bandwidth available to all the adapters in that drawer. If you have two I/O drawers on that same GX++ adapter, the two drawers share that same total bandwidth of the GX++ adapter. Having two vs one #5802/5877 provides some redundancy and obviously provides more slots, but does NOT increase max bandwidth. Note that each #5803/5873 is logically two drawers and for higher bandwidth you would cable the drawer to two GX++ adapters.

For this estimator tool, one GX+ bus will support 40 Gb/s and a GX++ bus will support 60 Gb/s (see NOTE2 above). GX+/++ buses can be "internal" or they can be "external". External GX buses are accessed through a GX slot. All of the external GX slots are GX++ on POWER7 servers except for the first GX slot on the Power 750.

Note that some servers share a bus for internal I/O as well as for GX slots. Thus you'll see below times when the total bandwidth is less than the sum of the parts.
Note the Power 750 has two different GX adapters. The #5609 is GX+ (40Gb) and the #5616 is GX++ (60Gb). If a #5609 is placed in the second (GX++) slot, the bandwidth is restricted to 40Gb/s.

#5685 PCIe Gen2 Riser Card in Power 720/740 ("B" or "C" models)
This 4-slot expansion option plugs into the GX++ slot. Its max throughput is 60 Gb/s.
#5610 PCIe Gen1 Riser Card in Power 720/740 ("B" model only)
This 4-slot expansion option plugs into the GX++ slot. Its max throughput is 40 Gb/s.
NOTE: the table below assumes #5685 is used. If #5610 is used, subtract 20 Gb/s from the system max and the riser card max.

#EDR1 or #5888 EXP30 Ultra SSD I/O Drawer

The #EDR1 EXP30 Ultra Drawer attaches to a GX++ PCIe adapter placed in the GX slot of a "D" model 710/720/730/740/750/760/770/780 or a "C" model 770/780. The #5888 EXP30 Ultra Drawer attaches to a GX++ PCIe adapter placed in the GX slot of a "C" model 710/720/730/740. Inside the Ultra Drawer are two very high performance SAS controllers, more powerful than the #5913/#ESA3. So assume a sizing factor value of 60 for this drawer if there are a lot of busy SSD. Note each SAS controller has a PCIe cable connecting it to the GX++ PCIe adapter, but the cables can be plugged into different GX++ PCIe adapters. Thus assume a sizing factor of 30 for each cable plugged into the GX slot.
The GX++ PCIe adapters are the #EJ03, #EJ0H and #1914. The #EJ03 plugs into a 720/740. The #EJ0H plugs into a 710/730. The #1914 plugs into a 750/760/770/780.

#5266 GX++ Dual-port 4x Channel Attach

This specialized adapter is used only on the Power 710/730 "B" model 8231-E2B to connect to a 4x Infiniband switch in server clustering environments. (On 710/730 "C" models with PCIe Gen2 slots, the QDR PCIe adapter is used instead of this DDR option.) The adapter supports up to a max of 30Gb/s. Actual usage is HIGHLY dependent upon the clustering application. Use a sizing factor of 2 - 30 depending on the applications. A modest percentage of servers use this GX++ adapter.

The Tables
How to use the tables below: (see also the example just below the Power 710/730 B models table and the example just below the Power 720/740 C models table.)

1- Look up the sizing factor for each of the adapters. Make sure you know in which PCI slot the adapter(s) will be used.

2- Add the maximum sizing factor of each adapter in that subgroup of PCIe slots represented by a separate column in the tables below. Do this for all subgroups which your server will be using.
3- For the subgroups which have internal I/O, also add the sizing factor(s) for the internal I/O.

4- Compare each subgroup's sizing factor total to the subtotal maximum. 4a- If the sizing factor total is less than the subtotal maximum, you easily have adequate bandwidth. 4b- If the sizing factor total is larger than the subtotal of the column, take action. Review the sizing factors and adjust downward as appropriate. Compare the revised sizing factor total to the subgroup maximum. If still too high, take action such as moving adapters out of that subgroup.

5- Then add the subgroups together and compare to the total bandwidth available for the server. If the adapters within a subgroup have a larger total sizing factor than the subgroup maximum, use the smaller value. 5a- If your total of subgroups is less than or equal to the system total, you should have adequate bandwidth. 5b- If your total of subgroups exceeds the system total, see if additional sizing factor adjustments would be helpful and compare again. If the system total is still exceeded, take appropriate action or make sure expectations have been set with the client. (A short calculation sketch follows after the next paragraph.)

Important insight: This tool focuses on aggregate bandwidth with subtotals. The "subtotals" or "subsystem limits" are important. Though individual Gen1 or individual Gen2 PCIe slots are the same, groups of Gen1 PCIe slots or groups of Gen2 PCIe slots sometimes share bandwidth. Bandwidth can also be shared with other internal I/O. The tables below lay out the subtotals and the total. If you exceed the bandwidth of a subtotal, for example a GX slot, that means you have a bottleneck which will limit max throughput of the adapters located in that group of PCI slots.
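Steps 1-5 above reduce to a small calculation. The sketch below is only an illustration: the function name, data layout and example numbers are assumptions, while the 40/30 Gb/s limits in the example follow the 710/730 "B" model table that comes next.

```python
def check_io_limits(subgroups, system_max):
    """subgroups: list of (name, adapter/internal sizing factors, subtotal max).
    Totals each subgroup, caps it at its subtotal maximum, and compares the
    capped sum of all subgroups against the overall system maximum."""
    report, system_total = [], 0.0
    for name, factors, subtotal_max in subgroups:
        total = sum(factors)
        report.append((name, total, subtotal_max, total <= subtotal_max))
        system_total += min(total, subtotal_max)   # step 5: use the smaller value
    return report, system_total, system_total <= system_max

subgroups = [
    ("system unit slots + integrated I/O", [16, 16, 4, 10, 20], 40),
    ("1st GX slot (12X I/O loop)",         [20],                30),
]
report, total, fits = check_io_limits(subgroups, system_max=70)
for name, got, cap, ok in report:
    print(f"{name}: {got} vs {cap} Gb/s subtotal -> {'ok' if ok else 'review'}")
print(f"system: {total} vs 70 Gb/s -> {'ok' if fits else 'review'}")
```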

For Power 710/730

"B" models

8231 E2B

POWER7 710 1-socket

730 2-socket

Internals 4 slot plus 1st GX slot only for 2nd GX slot only for total
integrated IO
#5266
#5266
710 using 0 GX
slots

40 Gb/s

n/a

n/a

40 Gb/s

710 using 1 GX
slot

40 Gb/s

30 Gb/s

n/a

70 Gb/s

730 using 0 GX
slots

40 Gb/s

n/a

n/a

40 Gb/s

730 using 1 GX
slot

40 Gb/s

30 Gb/s

n/a

70 Gb/s

730 using 2 GX
slots

40 Gb/s

30 Gb/s

30 Gb/s

100 Gb/s

Example: Power 730 B model. (See the table just above this row.) In the system unit (internal four slots) you have two 2-port 8Gb Fibre Channel adapters (sizing factor 16 each) and one 4-port 1Gb Ethernet adapter (sizing factor 0.4-4). (One of the PCIe slots is empty.) You have three 177GB SSD in the six SAS bays run by the integrated SAS adapter (sizing factor up to 10) and you have a 2-port 10Gb Ethernet IVE/HEA (sizing factor 10-20). In this example the GX slots are not being used. You get the sizing factors from the "PCIe Adapter" worksheet/tab for the PCIe adapters and from the text above for the integrated I/O.

Example: Power 730 B model continued. Totaling the max sizing factors for these components yields 16+16+4+0+10+20 = 66 (16+16+4+0 = PCIe slots and 10+20 = internal). 66 is larger than the 40 Gb/s max bandwidth available as shown in column C of the 710/730 B model table above. So you need to review these numbers to see if a lower, more realistic sizing factor value should be used instead of the 16+16+4+0+10+20 maximums. If after the review and assignment of lower sizing factors you are still above 40Gb/s, you need to move some I/O to the 2nd GX slot in an I/O drawer. Or you could switch to a 730 C model and gain a lot of bandwidth.
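Written out with the numbers from this example (a sketch only; the 40 Gb/s figure is the B-model internal subtotal maximum from the table above):

```python
# Power 730 "B" model example: two 2-port 8Gb FC adapters, one 4-port 1GbE
# adapter, one empty slot, integrated SAS with three SSD, 2-port 10Gb IVE/HEA.
internal_factors = [16, 16, 4, 0, 10, 20]
internal_max = 40                        # Gb/s, from the B-model table
print(sum(internal_factors), "vs", internal_max)   # 66 vs 40 -> review the factors
# If lowered factors still exceed 40 Gb/s, move I/O to a 12X drawer on the
# GX slot (an extra 30 Gb/s subtotal) or move to a "C" model (60 Gb/s internal).
```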

For Power 710/730 and for PowerLinux 7R1/7R2


8231 E1C / E1D
POWER7
8231 E2C / E2D
POWER7
8246 L1C/L1S/L1D/L1T POWER7
8246 L1C/L1S/L1D/L1T POWER7

710 using 0 GX
slots

710
730
R71
R72

"C" or "D" models with PCIe Gen2 slots


1-socket
2-socket
1-socket
PowerLinux
use same rules as for 710
2-socket
PowerLinux
use same rules as for 730

Internals 6 slots
plus integrated IO

1st GX slot only for 2nd GX slot for


total
#EJ0H or SPCN
GX++ adapter for
card
12X I/IO loop or for
#EJ0H

60 Gb/s

n/a

n/a

60 Gb/s

710 using 1 GX
slot

60 Gb/s

30 Gb/s

n/a

90 Gb/s

730 using 0 GX
slots

60 Gb/s

n/a

n/a

60 Gb/s

730 using 1 GX
slot

60 Gb/s

30 Gb/s

n/a

90 Gb/s

730 using 2 GX
slots

60 Gb/s

30 Gb/s

30 Gb/s

120 Gb/s

730 using 2 GX
slots

60 Gb/s

n/a (slot filled by


SPCN card)

60 Gb/s

120 Gb/s

For Power 720/740 "B" models


8202 E4B
8205 E6B

POWER7 720 1-socket


POWER7 740 2-socket (if only one socket populated, treat as 720 in this tool)

Internal 4 slots plus 1st GX slot for GX+


integrated IO
+ adapter for 12X
I/O loop or for PCIe
Riser Card

2nd GX slot for


GX++ adapter for
12X I/O loop (740
only)

total

720 using 0 GX
slots

40 Gb/s

n/a

n/a

40 Gb/s

720 using 1 GX
slot

40 Gb/s

60 Gb/s

n/a

100 Gb/s

740 using 0 GX
slots

40 Gb/s

n/a

n/a

40 Gb/s

740 using 1 GX
slot

40 Gb/s

60 Gb/s

n/a

100 Gb/s

740 using 2 GX
slots

40 Gb/s

60 Gb/s

60 Gb/s

160 Gb/s

For Power 720/740 "C" or "D" models with PCIe Gen2 slots
8202 E4C or E4D
POWER7 720 1-socket
8205 E6C or E6D
POWER7 740 2-socket (if only one socket populated, treat as 720 in this tool)

720 using 0 GX
slots

Internals 6 slots
plus integrated IO

1st GX slot for GX+


+ adapter for 12X
I/O loop or for PCIe
Riser Card or for
#EJ03 & EXP30

2nd GX slot for


total
GX++ adapter for
12X I/O loop (740
only) or for #EJ03 &
EXP30.

60 Gb/s

n/a

n/a

60 Gb/s

720 using 1 GX
slot

60 Gb/s

60 Gb/s

n/a

100 Gb/s

740 using 0 GX
slots

60 Gb/s

n/a

n/a

60 Gb/s

740 using 1 GX
slot

60 Gb/s

60 Gb/s

n/a

100 Gb/s

740 using 2 GX
slots

60 Gb/s

60 Gb/s

60 Gb/s

160 Gb/s

Example: Power 720 C model. In the system unit (internal six slots) you have one 2-port 4Gb Fibre Channel adapter (sizing factor 8), two 2-port 10Gb FCoE adapters (sizing factor 20 each), an Async/comm card (sizing factor 0.1-0.2), and one required 4-port 1Gb Ethernet adapter in the C7 slot (sizing factor 0.4-4). (One of the PCIe slots is empty.) You have four HDD in the six SAS bays run by the integrated SAS adapters (sizing factor up to 10 - probably less here with HDD). You have two PCIe RAID SAS adapters in a PCIe Riser Card, each with four SSD (sizing factor 6 each).
Totaling the max sizing factors for these internal components yields 8+20+20+0.2+0+4+10 = 62.2 (PCIe slots = 8+20+20+0.2+4 and internal = 10). 62.2 is larger than the 60 Gb/s max bandwidth available for these internal slots, so you need to analyze the sizing factors to determine if they actually use less bandwidth. Let's assume you do so and the revised total is less than 60 ... which we'll call "Adjusted 62.2". Example continued below.

Next look at the GX slot sizing factors of 6+6 = 12. 12 is less than 60, so you are OK on this subtotal. Finally, since on the Power 720 there is an overlap of bandwidth between the 1st GX slot and the internal slots, add all these sizing factors together. "Adjusted 62.2" + 12 is less than 100Gb/s, so you fit within the system maximum as well as both subtotal maximums. Your configuration's bandwidth requirements should be OK.
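The same arithmetic for this example as a short sketch (numbers taken from the text above; the final comparison adds the internal and riser subtotals because they overlap on the Power 720):

```python
# Power 720 "C" model example: six internal slots plus integrated SAS, and a
# PCIe Gen2 Riser Card holding two RAID SAS adapters with four SSD each.
internal = [8, 20, 20, 0.2, 0, 4, 10]   # subtotal maximum 60 Gb/s
riser    = [6, 6]                       # subtotal maximum 60 Gb/s
system_max = 100                        # Gb/s for a 720 using one GX slot
print(sum(internal))                    # 62.2 -> above 60, review the factors
print(sum(riser))                       # 12   -> well under 60
print(sum(internal) + sum(riser), "vs", system_max)   # 74.2 vs 100 Gb/s
```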

For Power 750/755 "B" model Note this has both PCI-X DDR and PCIe Gen1 slots
8233 E8B
POWER7 750 4-socket (if only one socket populated, 2nd GX slot not enabled)
8236 E8C
POWER7 755 4-socket (note 12X-I/O drawers not supported)
No
Internals 5 PCI
1st GX slot for GX+ 2nd GX slot for
slots plus integrated adapter for 12X I/O GX++ adapter for
IO
loop
12X I/O loop (2-4
socket only)

total

One socket with 0 40 Gb/s


12 X loops

n/a

n/a

40 Gb/s

One socket with 1 40 Gb/s


12 X loops

40 Gb/s

n/a

40 Gb/s

Two-to-four
40 Gb/s
sockets with 0 12X
loops

n/a

n/a

40 Gb/s

Two-to-four
40 Gb/s
sockets with 1 12X
loops

40 Gb/s

n/a

40 Gb/s

Two-to-four
40 Gb/s
sockets with 1 12X
loops

n/a

60 Gb/s

100 Gb/s

Two-to-four
40 Gb/s
sockets with 2 12X
loops

40 Gb/s

60 Gb/s

100 Gb/s

For Power 750/760 "D" models or the PowerLinux 7R4 with PCIe Gen2 slots
8408 E8D
POWER7 750
9109 RMD
POWER7 760
8248 L4T
PowerLinux 7R4
Internal PCIe slots
1-4

Internal slots 5,6


plus integrated IO

1st GX slot for GX+


+ adapter for 12X
I/O loop or for
#1914 & EXP30

2nd GX slot for


GX++ adapter for
12X I/O loop or for
#1914 & EXP30

With 1 proc DCM 60 Gb/s


With 2 or more
60 Gb/s
proc DCM and one
12X I/O loop

60 Gb/s
60 Gb/s

n/a
60 Gb/s

n/a
n/a

With 2 or more
60 Gb/s
proc DCM and two
12X I/O loop

60 Gb/s

60 Gb/s

60 Gb/s

For Power 770/780 "B" models


9117 MMB
9179 MHB

POWER7 770
POWER7 780

Per NODE /
Processor
enclosure

Internal PCIe slots


1-4

Internal PCIe slots 1st GX slot for GX+ 2nd GX slot for
5,6 plus integrated + adapter for 12X
GX++ adapter for
IO
I/O loop
12X I/O loop

With 0 12X I/O


loops

40 Gb/s

40 Gb/s

n/a

n/a

With 1 12X I/O


loop

40 Gb/s

40 Gb/s

60 Gb/s

n/a

With 2 12X I/O


loops

40 Gb/s

40 Gb/s

60 Gb/s

60 Gb/s

For Power 770/780 "C" and "D" models with PCIe Gen2 slots
9117 MMC / MMD
POWER7 770
9179 MHC / MHD
POWER7 780
8412 EAD
Power ESE
this model has a max of one processor enclosure/node
Per NODE /
Processor
enclosure

Internal PCIe slots


1-4

Internal slots 5,6


plus integrated IO

1st GX slot for GX+


+ adapter for 12X
I/O loop or for
#1914 & EXP30

With 0 12X I/O


loops

60 Gb/s

60 Gb/s

With 1 12X I/O


loop

60 Gb/s

60 Gb/s

60 Gb/s

With 2 12X I/O


loops

60 Gb/s

60 Gb/s

60 Gb/s

2nd GX slot for


GX++ adapter for
12X I/O loop or for
#1914 & EXP30

60 Gb/s

For Power 795


each processor book has 4 GX++ slots
9119 FHB
POWER7 795
Per Processor
Book

GX slot one

With 1 12X I/O


loop

60 Gb/s

With 2 12X I/O


loops

60 Gb/s

60 Gb/s

With 3 12X I/O


loops

60 Gb/s

60 Gb/s

60 Gb/s

With 4 12X I/O


loops

60 Gb/s

60 Gb/s

60 Gb/s

Optional reading

GX slot two

GX slot three

GX slot four

60 Gb/s

More indepth discussion on sharing of bandwidth by PCIe slots:

As shown in the tables above, not all PCIe slots are equal. Individually they are equal, but depending on the server or depending on the I/O drawer, some slots share bandwidth with other slots or I/O. The following sentences summarize the tables above for the servers' PCIe slots.
SYSTEM units
Power 710/720/730/740 "B" and "C" and "D" model system units --- All PCIe slots within a system unit are equal
Power 750/755 system unit - all PCIe slots are equal -- All PCI-X DDR slots are equal

Power 750/760 "D" model system units --- All PCIe slots are NOT equal. Individual slots are equal, but slots 1-4 are on one bus and slots 5-6 are on a separate bus which shares bandwidth with internal controllers

Power 770/780 "B" and "C" and "D" model system units --- All PCIe slots are NOT equal. Individual slots are equal, but slots 1-4 are on one bus and slots 5-6 are on a separate bus which shares bandwidth with internal controllers
Power 795 has no PCI slots in the system unit -- everything is attached through GX++ slots
PCIe Riser Card
#5610, which can be placed in the 720/740 "B" system unit. All PCIe slots (Gen1) are equal
#5685, which can be placed in the 720/740 "B" or "C" or "D" system unit. All PCIe slots (Gen2) are equal
12X-attached PCIe I/O drawers: #5802/5877 (19-inch rack) and #5803/5873 (for 795/595)

The bandwidth available to one or more I/O drawers on a 12X loop is obviously the same as the GX slot to which they are attached. Thus from this rough sizing tool's perspective, it usually doesn't make sense to worry about individual PCIe slots within the I/O drawer. Most of the time, trying to optimize PCIe card placement within an I/O drawer for most client workloads won't make that much difference. However, for those people who are focused on the details, the following information is shared. See Info Center for more detail.

#5802/5877 when only one drawer per GX adapter (one per 12X loop): A drawer has 10 PCIe Gen1 slots and two connections to the GX adapter in the server. The ten PCIe slots are in 3 subgroups: Slots 1,2,3 and Slots 4,5,6 and Slots 7,8,9,10. Slots 1,2,3 and Slots 4,5,6 each have a separate connection to the GX adapter. Slots 7,8,9,10 first connect to the Slot 1,2,3 subgroup and share the bandwidth of the Slot 1,2,3 connection to the GX adapter. Thus there may be a slight advantage in putting the highest bandwidth adapters in Slots 4,5,6 and also in putting the adapters with the least latency requirements in Slots 1,2,3 or 4,5,6. HOWEVER, most of the time this will not make much (if any) noticeable difference to the client assuming your total bandwidth need is reasonable.

#5802/5877 when two drawers per GX adapter (two per 12X loop): Each drawer will have a connection to the GX adapter. All three PCIe subgroups in a drawer share that one connection. Depending on the cabling used, either Slots 1,2,3 or Slots 4,5,6 will be the "closest" to that connection upstream to the GX adapter. The other two PCIe subgroups will connect through the "closer" PCIe subgroup. HOWEVER, most of the time where the PCIe adapter is placed in the drawer will not make much (if any) noticeable difference to the client. The fact two drawers share the total GX adapter bandwidth can make a difference. So where I/O bandwidth is a consideration, try to use just one I/O drawer per loop. Generally try to balance the bandwidth needs across the two I/O drawers.

#5803/5873 have 20 PCIe slots. Think of this drawer as logically two 10-slot PCIe drawers inside one drawer. You can attach each half of a drawer to a GX adapter or you can attach both halves of a drawer to a single GX adapter. Each of the two logical drawers is identical to a #5802/5877 described above.

For the extremely analytical reader: Each #5802/5877/5803/5873 PCIe slot subgroup is on its own PCIe 8x PHB. Specifications for this PHB are: BURST = 2 GB/s simplex & 4 GB/s duplex; SUSTAINED = 1.6 GB/s simplex & 2.2 GB/s duplex. Most of the time, this level of granular detail is not useful in client configuration sizings.


#5266 is for Infiniband switch connection



#EJ0H is for #5888 EXP30 Ultra SSD I/O Drawer attach

two #EJ0H EXP30 Ultra Drawer connections in this example


#EJ0G GX++ adapter for 12X I/O loop on 730 uses BOTH GX
slots and covers one x4 PCIe slot and one x8 PCIe slot



Notes: A 4-core 720 does not support an GX++
adapter for an I/O loop. Can attach 12X to 4X
converter cable to the GX++ adapter for DDR switch
connection.
#5610 PCIe Riser card is Gen1 and has a max of 40Gb/s
#5685 PCIe Riser card is Gen2 and has a max of 60Gb/s

ulated, treat as 720 in this tool)

Notes: A 4-core 720 does not support a GX++


adapter for an I/O loop. #5610 not supported, use
#5685. Use QDR PCIe adapter, not 12X to 4X
converter cable.

(yes, total is 100 Gb/s, not 120Gb/s)

(yes, total is 100 Gb/s, not 120Gb/s)


(yes, total is 160 Gb/s, not 180Gb/s)

have one 2-port 4Gb Fibre Channel adapter


and a Async/comm card (sizing factor 0.1-0.2)
0.4-4). (One of the PCIe slots is empty). You
ing factor up to 10 - probably less here with
with four SSD (sizing factor 6 each).
+0.2+0+4+10 = 62.2 (PCIe slots =
ndwidth available for these internal slots so you
ess bandwidth. Let's assume you do so and

example continued below

u are ok on this subtotal. Finally since on the


ternal slots add all these sizing factors together.
um as well as both subtotal maximums. Your

ulated, 2nd GX slot not enabled


not supported)

5 internal slots = 3 PCI-X and 2 PCIe slots

(yes, total is 40 Gb/s, not 80Gb/s)

(yes, total is 40 Gb/s, not 80Gb/s)

(yes, total is 100 Gb/s, not 140Gb/s)

total

100 Gb/s
160 Gb/s

(yes, total is 100 Gb/s, not 120 Gb/s)


(yes, total is 160 Gb/s, not 120 Gb/s)

200 Gb/s

(yes, total is 200 Gb/s, not 240 Gb/s)

total

80 Gb/s
140 Gb/s
180 Gb/s

processor enclosure/node
total

100 Gb/s

(yes, total is 100 Gb/s, not 120 Gb/s)

160 Gb/s

(yes, total is 160 Gb/s, not 120 Gb/s)

200 Gb/s

(yes, total is 200 Gb/s, not 240 Gb/s)

total
60 Gb/s
120 Gb/s
180 Gb/s
240 Gb/s

PCIe slots:

depending on the server or depending on the I/O


marize the tables above for the servers' PCIe slots.

ystem unit are equal

are equal, but slots 1-4 are on one bus and slots 5-6 are

Individual slots are equal, but slots 1-4 are on one bus

en2) are equal

e as the GX slot to which they are attached. Thus for


ual PCIe slots within the I/O drawer. Most of the time
t make that much difference. However, for those people
ore detail.

PCIe Gen1 slots and two connections to the GX adapter


s 7,8,9,10. Slots 1,2,3 and Slots 4,5,6 each have a
roup and share the bandwidth of the Slot 1,2,3
t bandwidth adapters in Slot 4,5,6 and also in putting the
he time this will not make much (if any) noticeable

e a connection to the GX adapter. All three PCIe


Slots 1,2,3 or Slots 4,5,6 will be the "closest" to that
gh the "closer" PCIe subgroup. HOWEVER, most of the
e difference to the client. The fact two drawers share the
ation, try to use just one I/O drawer per loop. Generally

ide one drawer. You can attach each half of a drawer to


two logical drawers each are identical to a #5802/5877

on its own PCIe 8x PHB. Specifications for this PHB


2.2 GB/s duplex.
Most of the time, this level of

This wonderfully complete table thanks to Rakesh Sharma of the IBM Austin Development team

Ethernet port attributes

Attribute or IEEE Standard = Attribute Description
Ether II and IEEE 802.3 = Ether II and IEEE 802.3 encapsulated frames
802.3i = 10Mb Ethernet over twisted pair, 10Base-T
802.3u = 100Mb Ethernet over twisted pair, 100Base-TX
802.3x = Full duplex and flow control
802.3z = 1Gb Ethernet over fiber, 1000Base-X
802.3ab = 1Gb Ethernet over twisted pair, 1000Base-T
802.3ad/802.1AX = Link Aggregation/LACP/Load Balancing
802.3ae = 10Gb Ethernet over fiber
SFP+ Direct Attach = 10Gb Ethernet over Direct Attach (DA), 10GSFP+Cu, Twin-ax
802.3ak = 10Gb Ethernet over CX4, 10GBase-CX4
802.3an = 10Gb Ethernet over twisted pair, 10GBase-T
802.3aq = 10Gb Ethernet over multimode fiber, 10GBase-LRM
802.1Q VLAN = IEEE VLAN tagging
EtherChannel = Cisco EtherChannel (Manual Link Aggregation)
NIC Failover = Network Interface Backup (NIB) Support
Jumbo Frames = Ethernet jumbo frames
802.1Qaz = Enhanced Transmission Selection
802.1Qbb = Priority based flow control
802.1Q/p = Ethernet frame priorities
Multicast = Ethernet multicast
Checksum Offload = TCP/UDP/IP hardware checksum offload
VIOS = Virtual IO Server Support
RoCE = RDMA over Converged Ethernet
iWARP = RDMA over Ethernet using iWARP Protocol
FCoE = Fibre Channel over Ethernet
iSCSI = Software iSCSI provided by OS
NIM = Network Boot Support
TSO = TCP Segmentation Offload / Large Segment Offload
LRO = Large Receive Offload
OpenOnload = User space TCP/IP (OpenOnload)
RFS = Receive flow steering
SRIOV = Single Root I/O Virtualization (SRIOV only on POWER7+ 770/780 with latest levels of firmware as of April 2014)
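Several of the attributes above (checksum offload, TSO, LRO, jumbo frames) can be verified directly on a Linux partition with the standard ethtool and ip utilities. The sketch below is only an illustration for Linux; AIX and IBM i use their own adapter attribute commands, and the interface name shown is a placeholder:

#!/usr/bin/env python3
"""Minimal sketch: query offload/jumbo-frame settings of one Ethernet port on a
Linux partition. The interface name below is a placeholder; substitute your own.
Linux only (AIX and IBM i use their own adapter attribute commands)."""
import subprocess

IFACE = "enP1p8s0f0"  # placeholder interface name

# 'ethtool -k' lists offload features (TSO, LRO, checksumming, etc.)
offloads = subprocess.run(["ethtool", "-k", IFACE], capture_output=True, text=True).stdout
for line in offloads.splitlines():
    if any(key in line for key in ("tcp-segmentation-offload",
                                   "large-receive-offload",
                                   "rx-checksumming",
                                   "tx-checksumming")):
        print(line.strip())

# 'ip link show' reports the MTU, e.g. 9000 when jumbo frames are enabled
link = subprocess.run(["ip", "link", "show", IFACE], capture_output=True, text=True).stdout
print(link.splitlines()[0])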

Adapter columns covered by this table, by group (the per-adapter "x" support marks did not survive this text extract; see the PCIe Adapters worksheet for the full attribute-by-adapter matrix):

Integrated Multifunction Cards
#EN10 = 2 x 10Gb CNA copper + 2 x 10/1Gb
#EN11 = 2 x 10Gb CNA SR + 2 x 10/1Gb
#1768 = 2 x 10Gb copper + 2 x 1Gb
#1769 = 2 x 10Gb SR + 2 x 1Gb

10Gb Specialty Cards
#EN0H / #EN0J = 2 x 10Gb CNA + 2 x 1Gb
#EN0K / #EN0L = 2 x 10Gb CNA + 2 x 1Gb
#EC27 / #EC28 = 2 x 10Gb copper RoCE
#EC29 / #EC30 = 2 x 10Gb SR RoCE
#5279 / #5745 = 2 x 10Gb copper + 2 x 1Gb
#5280 / #5744 = 2 x 10Gb SR + 2 x 1Gb
#EC2G / #EC2J / #EL39 = 2 x 10Gb copper OpenOnload

10Gb Ethernet Cards
#EC2H / #EC2K / #EL3A = 2 x 10Gb copper
#5270 / #5708 = 2 x 10Gb CNA SR
#5286 / #5288 = 2 x 10Gb copper
#5284 / #5287 = 2 x 10Gb SR
#5772 = 1 x 10Gb LR (optical)
#5275 / #5769 = 1 x 10Gb SR
#5272 / #5732 = 1 x 10Gb CX4

1Gb Ethernet Cards
#5260 / #5899 = 4 x 1Gb
#5274 / #5768 = 2 x 1Gb SX (optical)
#5281 / #5767 = 2 x 1Gb
#5271 / #5717 = 4 x 1Gb

Important to read this introduction

Power Systems obtains many PCIe adapters from non-IBM suppliers which leverage technology already created by the PCIe supplier. Sometimes IBM works extensively with the PCIe supplier on special functions/capabilities to interface into Power Systems and to comply with standards/structures/interfaces established by Power Systems. Sometimes it is a more modest amount of changes which are made to fit into the Power Systems infrastructure. Very rarely is an adapter brought out by IBM Power Systems with no changes from what the supplier offers generically.

For AIX, IBM i, and VIOS ... For adapters ordered as a Power System feature code, it is IBM who ships the underlying drivers and it is IBM who provides support of the hardware/software combination. IBM development may work with the PCIe supplier and leverage base supplier driver logic, but IBM is the single point of contact for all support issues from a client perspective. If the problem appears to be related to the PCIe adapter hardware design or manufacturing or to underlying supplier-provided driver logic, IBM support and/or development engineers will work with the adapter supplier if needed.

For Linux ... For adapters ordered as a Power System feature code, the drivers used are the generally available, open source drivers for Linux. IBM works with Linux distributors and adapter vendors to help ensure testing of the hardware/software has been done. Support of the hardware/software combination is provided by the organization selected/contracted by the client. That organization could be groups like SUSE, Red Hat, IBM or even client self-supported.

Given the above, usually knowing the specific adapter supplier is of no interest to a client. However, occasionally it is useful to know who IBM worked with to provide PCIe adapters for Power Systems, especially in a Linux environment where a user is doing something unusual.
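On a Linux partition, one quick way to see which supplier's silicon sits behind a given feature code is to read the PCI vendor information off the bus with lspci. This is only an illustrative sketch for Linux; the class-name filter is a simple heuristic and output formats vary by distribution:

#!/usr/bin/env python3
"""Minimal sketch: list PCIe network-class adapters and their silicon vendors on a
Linux partition using lspci. Linux only; output formats vary by distribution."""
import subprocess

# '-nn' prints both the human-readable vendor/device names and the numeric IDs
out = subprocess.run(["lspci", "-nn"], capture_output=True, text=True).stdout

for line in out.splitlines():
    # keep network-class devices (Ethernet, Fibre Channel, etc.)
    if any(cls in line for cls in ("Ethernet controller",
                                   "Fibre Channel",
                                   "Network controller")):
        print(line)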
Feature Code

2053
2054
2055
2728
2893
2894
4367
4377
4807
4808
4809
5260
5269
5270
5271
5272
5273
5274
5275
5276
5277
5278
5279
5280
5281
5283
5284

Feature Name

PCIe LP RAID & SSD SAS Adapter 3Gb


PCIe LP RAID & SSD SAS Adapter 3Gb
PCIe LP RAID & SSD SAS Adapter 3Gb w/ Blind Swap Cassette
4 port USB PCIe Adapter
PCIe 2-Line WAN w/Modem
PCIe 2-Line WAN w/Modem CIM
Package 5X #2055 & SSD
Package 5X #2055 & SSD
PCIe Crypto Coprocessor No BSC 4765-001
PCIe Crypto Coprocessor Gen3 BSC 4765-001
PCIe Crypto Coprocessor Gen4 BSC 4765-001
PCIe2 LP 4-port 1GbE Adapter
PCIe LP POWER GXT145 Graphics Accelerator
PCIe LP 10Gb FCoE 2-port Adapter
PCIe LP 4-Port 10/100/1000 Base-TX Ethernet Adapter
PCIe LP 10GbE CX4 1-port Adapter
PCIe LP 8Gb 2-Port Fibre Channel Adapter
PCIe LP 2-Port 1GbE SX Adapter
PCIe LP 10GbE SR 1-port Adapter
PCIe LP 4Gb 2-Port Fibre Channel Adapter
PCIe LP 4-Port Async EIA-232 Adapter

PCIe LP 2-x4-port SAS Adapter 3Gb


PCIe2 LP 4-Port 10GbE&1GbE SFP+ Copper&RJ45
PCIe2 LP 4-Port 10GbE&1GbE SR&RJ45 Adapter
PCIe LP 2-Port 1GbE TX Adapter
PCIe2 LP 2-Port 4X IB QDR Adapter 40Gb
PCIe2 LP 2-port 10GbE SR Adapter

5285
5286
5287
5288
5289
5290
5708
5717
5729
5732
5735
5744
5745
5748
5767
5768
5769
5772
5773
5774
5785
5805
5899
5901
5903
5913
9055
9056
EC27
EC28
EC29
EC2G
EC2H
EC2J
EC2K
EC30
EJ0J
EJ0L
EJ0X
EL09
EL10
EL11
EL2K
EL2M
EL2N
EL2P
EL2Z

PCIe2 2-Port 4X IB QDR Adapter 40Gb


PCIe2 LP 2-Port 10GbE SFP+ Copper Adapter
PCIe2 2-port 10GbE SR Adapter
PCIe2 2-Port 10GbE SFP+ Copper Adapter

EL38
EL39

PCIe2 LP 4-port (10Gb FCoE & 1GbE) SR&RJ45


PCIe2 LP 2-port 10GbE SFN6122F Adapter

2 Port Async EIA-232 PCIe Adapter


PCIe LP 2-Port Async EIA-232 Adapter

10Gb FCoE PCIe Dual Port Adapter


4-Port 10/100/1000 Base-TX PCI Express Adapter
PCIe2 8Gb 4-port Fibre Channel Adapter
10 Gigabit Ethernet-CX4 PCI Express Adapter
8 Gigabit PCI Express Dual Port Fibre Channel Adapter
PCIe2 4-Port 10GbE&1GbE SR&RJ45 Adapter
PCIe2 4-Port 10GbE&1GbE SFP+Copper&RJ45 Adapter
POWER GXT145 PCI Express Graphics Accelerator
2-Port 10/100/1000 Base-TX Ethernet PCI Express Adapter
2-Port Gigabit Ethernet-SX PCI Express Adapter
10 Gigabit Ethernet-SR PCI Express Adapter
10 Gigabit Ethernet-LR PCI Express Adapter
4 Gigabit PCI Express Single Port Fibre Channel Adapter
4 Gigabit PCI Express Dual Port Fibre Channel Adapter
4 Port Async EIA-232 PCIe Adapter

PCIe 380MB Cache Dual - x4 3Gb SAS RAID Adapter


PCIe2 4-port 1GbE Adapter
PCIe Dual-x4 SAS Adapter
PCIe 380MB Cache Dual - x4 3Gb SAS RAID Adapter
PCIe2 1.8GB Cache RAID SAS Adapter Tri-port 6Gb
2-Port 10/100/1000 Base-TX Ethernet PCI Express Adapter
PCIe LP 2-Port 1GbE TX Adapter
PCIe2 LP 2-Port 10GbE RoCE SFP+ Adapter
PCIe2 2-Port 10GbE RoCE SFP+ Adapter
PCIe2 LP 2-Port 10GbE RoCE SR Adapter
PCIe2 LP 2-port 10GbE SFN6122F Adapter
PCIe2 LP 2-port 10GbE SFN5162F Adapter
PCIe2 2-port 10GbE SFN6122F Adapter
PCIe2 2-port 10GbE SFN5162F Adapter
PCIe2 2-Port 10GbE RoCE SR Adapter
PCIe3 RAID SAS Adapter Quad-port 6Gb
PCIe3 12GB Cache RAID SAS Adapter Quad-port 6Gb
PCIe3 SAS Tape Adapter Quad-port 6Gb
PCIe LP 4Gb 2-Port Fibre Channel Adapter
PCIe LP 2-x4-port SAS Adapter 3Gb
PCIe2 LP 4-port 1GbE Adapter
PCIe2 LP RAID SAS Adapter Dual-port 6Gb
PCIe LP 2-Port 1GbE TX Adapter
PCIe LP 8Gb 2-Port Fibre Channel Adapter
PCIe2 LP 2-port 10GbE SR Adapter
PCIe2 LP 2-Port 10GbE RoCE SR Adapter

EL3A
EN0A
EN0B
EN0H
EN0J
EN0Y
EN13
EN14
ES09
ESA1
ESA2
ESA3

PCIe2 LP 2-port 10GbE SFN5162F Adapter


PCIe2 16Gb 2-port Fibre Channel Adapter
PCIe2 LP 16Gb 2-port Fibre Channel Adapter
PCIe2 4-port (10Gb FCoE & 1GbE) SR&RJ45
PCIe2 LP 4-port (10Gb FCoE & 1GbE) SR&RJ45
PCIe2 LP 8Gb 4-port Fibre Channel Adapter
PCIe1 1-port Bisync Adapter
PCIe1 1-port Bisync Adapter CIM
IBM Flash Adapter 90 (PCIe2 0.9TB)
PCIe2 RAID SAS Adapter Dual-port 6Gb
PCIe2 LP RAID SAS Adapter Dual-port 6Gb
PCIe2 1.8GB Cache RAID SAS Adapter Tri-port 6Gb CR


Adapter Description

Manufacturer of the PCIe card

SAS-SSD Adapter - Double-wide- 4 SSD module bays


SAS-SSD Adapter - Double-wide- 4 SSD module bays
SAS-SSD Adapter - Double-wide- 4 SSD module bays
USB 4-port
WAN 2 comm ports, 1 port with modem
WAN CIM 2 comm ports, 1 port with modem
SAS-SSD Adapter - quantity 5 2055 + 20 SSD modules
SAS-SSD Adapter - quantity 5 2055 + 20 SSD modules
Cryptographic Coproc No BSC
Cryptographic Coproc w/ BSC3
Cryptographic Coproc w/ BSC4
Ethernet 1Gb 4-port - TX / UTP / copper
Graphics - POWER GXT145
CNA (FCoE) 2-port 10Gb - optical SR
Ethernet 1Gb 4-port - TX / UTP / copper
Ethernet 10Gb 1-port - copper CX4
Fibre Channel 8Gb 2-port
Ethernet 1Gb 2-port - optical SX
Ethernet 10Gb 1-port - optical SR
Fibre Channel 4Gb 2-port

IBM designed, manufactured specifically


IBM designed, manufactured specifically
IBM designed, manufactured specifically
IBM designed, manufactured specifically
IBM designed, manufactured specifically
IBM designed, manufactured specifically
IBM designed, manufactured specifically
IBM designed, manufactured specifically
IBM designed, manufactured specifically
IBM designed, manufactured specifically
IBM designed, manufactured specifically
Broadcom
Matrox
Qlogic
Intel
Chelsio
Emulex
Intel
Chelsio
Emulex

Async 4-port EIA-232

Digi

SAS 0GB Cache, no RAID5/6 2-port 3Gb


Ethernet 10Gb+1Gb 4Ports: 2x10Gb - copper twinax & 2x1Gb UTP
Ethernet 10Gb+1Gb 4Ports 2x10Gb - optical SR & 2x1Gb UTP cop
Ethernet 1Gb 2-port - TX / UTP / copper
QDR 2-port 4X IB - 40Gb
Ethernet 10Gb 2-port - optical SR

IBM designed, manufactured specifically


Chelsio
Chelsio
Intel
Mellanox
Emulex

QDR 2-port 4X IB - 40Gb


Ethernet 10Gb 2-port - copper twinax
Ethernet 10Gb 2-port - optical SR
Ethernet 10Gb 2-port - copper twinax

Mellanox
Emulex
Emulex
Emulex

Async 2-port EIA-232


Async 2-port EIA-232

Digi
Digi

CNA (FCoE) 2-port 10Gb - optical SR


Ethernet 1Gb 4-port - TX / UTP / copper
Fibre Channel 8Gb 4-port
Ethernet 10Gb 1-port - copper CX4
Fibre Channel 8Gb 2-port
Ethernet 10Gb+1Gb 4Ports 2x10Gb - optical SR & 2x1Gb UTP cop
Ethernet 10Gb+1Gb 4Ports: 2x10Gb - copper twinax & 2x1Gb UTP
Graphics - POWER GXT145
Ethernet 1Gb 2-port - TX / UTP / copper
Ethernet 1Gb 2-port - optical SX
Ethernet 10Gb 1-port - optical SR
Ethernet 10Gb 1-port - optical LR
Fibre Channel 4Gb 1-port
Fibre Channel 4Gb 2-port

Qlogic
Intel
Qlogic
Chelsio
Emulex
Chelsio
Chelsio
Matrox
Intel
Intel
Chelsio
Intel
Emulex
Emulex

Async 4-port EIA-232

Digi

SAS 380MB cache, RAID5/6, 2-port 3Gb


IBM designed, manufactured specifically
Ethernet 1Gb 4-port - TX / UTP / copper
Broadcom
SAS 0GB Cache, no RAID5/6 2-port 3Gb
IBM designed, manufactured specifically
SAS 380MB cache, RAID5/6, 2-port 3Gb -- WITHDRAWN
IBM designed, manufactured specifically
SAS 1.8GB Cache, RAID5/6 - 3-port, 6Gb
IBM designed, manufactured specifically
Ethernet 1Gb low price, max qty 1 for 720/740, 2-port 1Gb - TX UTP copper
Intel
Ethernet 1Gb low price, max qty 1 for 710/730, 2-port - TX UTP copper
Intel
Ethernet 10Gb 2-port RoCE - copper twinax
Mellanox
Ethernet 10Gb 2-port RoCE - copper twinax
Mellanox
Ethernet 10Gb 2-port RoCE - optical SR
Mellanox
Ethernet 10Gb 2-port OpenOnLoad SolarFlare SFP+ copper twinax
Solarflare
Ethernet 10Gb 2-port SolarFlare SFP+ copper twinax
Solarflare
Ethernet 10Gb 2-port OpenOnLoad SolarFlare SFP+ copper twinax
Solarflare
Ethernet 10Gb 2-port SolarFlare SFP+ copper twinax
Solarflare
Ethernet 10Gb 2-port RoCE - optical SR
Mellanox
SAS 0GB Cache, RAID5/6 - 4-port, 6Gb
IBM designed, manufactured specifically
SAS 12GB Cache, RAID5/6 - 4-port, 6Gb
IBM designed, manufactured specifically
SAS 0GB Cache, LTO-5/6 tape - 4-port, 6Gb
IBM designed, manufactured specifically
Fibre Channel 4Gb 2-port
Emulex
SAS 0GB Cache, no RAID5/6 2-port 3Gb
IBM designed, manufactured specifically
Ethernet 1Gb 4-port - TX / UTP / copper
Broadcom
SAS 0GB Cache, RAID5/6 - 2-port, 6Gb
IBM designed, manufactured specifically
Ethernet 1Gb 2-port - TX / UTP / copper
Intel
Fibre Channel 8Gb 2-port
Emulex
Ethernet 10Gb 2-port - optical SR
Emulex
Ethernet 10Gb 2-port RoCE - optical SR
Mellanox
CNA (FCoE) 10Gb+1Gb 4Ports 2x10Gb optical SR & 2x1Gb UTP
copper RJ45
Emulex
Ethernet 10Gb 2-port OpenOnLoad SolarFlare SFP+ copper twinax

Ethernet 10Gb 2-port SolarFlare SFP+ copper twinax


Solarflare
Fibre Channel 16Gb 2-port
Emulex
Fibre Channel 16Gb 2-port
Emulex
CNA (FCoE) 10Gb+1Gb 4Ports 2x10Gb optical SR & 2x1Gb UTP copper RJ45
Emulex
CNA (FCoE) 10Gb+1Gb 4Ports 2x10Gb optical SR & 2x1Gb UTP copper RJ45
Emulex
Fibre Channel 8Gb 4-port
Qlogic
IBM designed, manufactured specifically
WAN 1 comm port Bisync
IBM designed, manufactured specifically
WAN CIM 1 comm ports Bisync
Flash memory adapter 900GB
IBM designed, manufactured specifically
SAS 0GB Cache, RAID5/6 - 2-port, 6Gb SSD only
IBM designed, manufactured specifically
SAS 0GB Cache, RAID5/6 - 2-port, 6Gb SSD only
IBM designed, manufactured specifically
SAS 1.8GB Cache, RAID5/6 - 3-port, 6Gb
IBM designed, manufactured specifically
