12 December 2014.
The information in this spreadsheet is provided "as is". Though the authors have made a best effort to provide an accurate listing of PCIe adapters available on IBM Power Systems, the user should also use other tools such as configurators, other documentation, and IBM web sites to confirm specific points of importance to you.
This document will be stored as a tech doc and it is the authors' intent to refresh it over time. Therefore, if you are using a version saved on your PC, please occasionally check to see if a newer version is available.
IBMers: http://w3.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD105846
Partners: http://partners.boulder.ibm.com/src/atsmastr.nsf/WebIndex/TD105846
If you find an error or have a suggestion for a change/addition, please send Mark Olson (olsonm@us.ibm.com) and Sue Baker
(smbaker@us.ibm.com) an email.
(Suggestions for changes/additions which include the material for the change/addition will be appreciated.)
Background colors in the PCIe Adapters worksheet are used to help separate the types of information:
Green = Adapter specifications (column D visible; columns E, F, G, H and I content is mostly included in column D, but is provided for additional sorting)
Orange = Slots in which the adapter is supported
Blue = PCIe adapter/slot sizing information; also see the Performance Sizer worksheet/tab
Yellow = Minimum OS level supported
Rose = PowerHA support
OS levels shown in this spreadsheet are the minimum versions and revisions required by the adapter. Technology Level, Technology Refresh, Service Pack, PTF, machine code level, etc. requirements can be found at the following URL:
https://www-912.ibm.com/e_dir/eServerPrereq.nsf
POWER8 and POWER7 servers (rack/tower) are considered in this documentation. BladeCenter and PureFlex adapters are not covered.
PCIe Gen1/Gen2 Power System slot insights
- Gen3 slots are provided in the POWER8 system units. They are either x8 or x16 slots. There are two x16 slots per socket; the remaining slots are x8. x16 slots have additional physical connections compared to x8 slots which allow twice the bandwidth (assuming there are adapters with corresponding x16 capabilities). x8, x4 and x1 cards can physically fit in x16 or x8 slots. 7-11-6-9 is a good set of numbers to memorize for POWER8 scale-out servers: a 4U 1-socket server has 7 PCIe slots, a 4U 2-socket server has 11 PCIe slots, a 2U 1-socket server has 6 PCIe slots, and a 2U 2-socket server has 9 PCIe slots.
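As a convenience, the 7-11-6-9 slot counts above can be captured in a small lookup table. This is only an illustrative sketch of the numbers stated in this section; the dictionary and function names are hypothetical, not part of any IBM tool.

# PCIe Gen3 slot counts for POWER8 scale-out system units, per the
# "7-11-6-9" rule of thumb described above.
POWER8_SCALEOUT_SLOTS = {
    ("4U", 1): 7,   # 4U, 1-socket server
    ("4U", 2): 11,  # 4U, 2-socket server
    ("2U", 1): 6,   # 2U, 1-socket server
    ("2U", 2): 9,   # 2U, 2-socket server
}

def pcie_slot_count(form_factor, sockets):
    """Return the system-unit PCIe slot count of a POWER8 scale-out server."""
    return POWER8_SCALEOUT_SLOTS[(form_factor, sockets)]

print(pcie_slot_count("4U", 2))  # 11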
- Gen2 slots are provided in the Power 710/720/730/740/770/780 "C" model system units and in the Power 710/720/730/740/750/760/770/780 "D" model system units. Their machine type model numbers are 8231-E1C (710), 8202-E4C (720), 8231-E2C (730), 8205-E6C (740), 9117-MMC (770), 9179-MHC (780) and 8231-E1D (710), 8202-E4D (720), 8231-E2D (730), 8205-E6D (740), 8408-E8D (750), 9109-RMD (760), 9117-MMD (770), 9179-MHD (780).
- Optional Gen2 slots are provided via the PCIe Riser Card (Gen2), FC = #5685, on the Power 720 and Power 740.
- Gen1 slots are provided in the initially introduced Power 710/720/730/740/750/755/770/780 system units. Their machine type model numbers are 8231-E2B (710), 8202-E4B (720), 8231-E2B (730), 8205-E6B (740), 8233-E8B (750), 8236-E8C (755), 9117-MMB (770), 9179-MHB (780).
- Optional Gen1 slots are provided via the PCIe Riser Card (Gen1), FC = #5610, on the Power 720 and Power 740.
- Gen 1 slots are provided in the #5802/5803/5873/5877 12X I/O drawers
- Gen 1 slots are provided in POWER6 servers
Form Factor:
FH = full height
LP = low profile
Order the right form factor (size) adapter for the PCI slot in which it will be placed. No ordering/shipping structure has been announced by IBM Power Systems to order a conversion. So even though many of the PCIe adapters could be converted by changing the tailstock at the end of the adapter to be taller (full high) or shorter (low profile), no way to do so has been announced. If demand for such a capability surfaces, a method to satisfy it will be considered. Note that tailstocks are unique to each adapter.
.
FH/LP Power Systems slot insights
LP slots are found in the Power 710/730, in the optional 720/740 PCIe Riser Card (#5610/5685), and in the PowerLinux 7R1/7R2.
FH slots are found in the Power 720/740/750/755/760/770/780, in POWER6 servers, and in the #5802/5803/5873/5877 I/O drawers.
Most Gen2 cards are supported only in Gen2 or Gen3 slots. The #5913 and #ESA3 large cache SAS adapters and the #5899/5260 1Gb Ethernet adapters are exceptions to this generalization. Also #EC27/EC28/EC29/EC30 have some limited Gen1 slot support.
"Down generation": If you place a Gen2 card in a Gen1 slot it will physically fit and may even work. But these adapters weren
Gen1 slots and thus there is no support statement for this usage. Also be VERY careful of performance expectations of a hig
Gen2 card in a Gen1 slot. For example, a 4-port 8Gb Fibre Channel adapter has about 2X the potential bandwidth than a Ge
provide.
Most Gen3 cards are supported only in Gen2 or Gen3 slots. The #EJ0L large cache SAS adapter and the #EJ0J/EJ0M SAS adapters are exceptions to this generalization. The same caveat about putting unsupported adapters in a "down generation" (Gen1) PCIe slot applies.
Feb 2013 note: The POWER7+ 710/720/730/740 are at a higher firmware level than the previous POWER7 710/720/730/740. Though the "C" model 710/720/730/740 have Gen2 slots, the new Gen2 adapters EN0A/B/H/J are not tested/supported back on the "C" models. Gen2 adapters supported on the "C" models are also supported on the "D" models. There is currently no plan to introduce the newer firmware levels to the Power 710/720/730/740 "C" models.
All Gen1 adapters are supported in any Gen2 slot (subject to any specific server rules, OS/firmware support, and physical card size).
Blind Swap Cassettes (BSC) for PCIe adapters
The Power 770/780 system units and the POWER7+ 750/760 have Gen4 BSC.
Other POWER7/POWER7+ system units do not have BSC
The 12X-attached I/O drawers #5802/5877/5803/5873 have Gen3 BSC
CCIN can be used to find the ordering feature code of an installed adapter. The ordering feature code is used to find the feature name, price, description, etc.
AIX users can use the lscfg command to determine an existing adapter's CCIN (Customer Card ID Number).
IBM i users can use the STRSST command to determine an existing adapter's CCIN
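For AIX, a minimal sketch of pulling the CCIN out of lscfg output might look like the following. The device name fcs0 and the parsing pattern are assumptions to verify against your own lscfg output.

import re
import subprocess

def adapter_ccin(device):
    """Run AIX 'lscfg -vl <device>' and extract the Customer Card ID Number (CCIN)."""
    out = subprocess.run(["lscfg", "-vl", device],
                         capture_output=True, text=True, check=True).stdout
    # The vital product data typically contains a line such as:
    #   Customer Card ID Number.....577D
    match = re.search(r"Customer Card ID Number\.*\s*(\S+)", out)
    return match.group(1) if match else None

print(adapter_ccin("fcs0"))  # e.g. 577D for an 8Gb Fibre Channel adapter (#5735)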
This document was developed for products and/or services offered in the United States. IBM may not offer the products, features, or services discussed in this document in other countries.
The information may be subject to change without notice. Consult your local IBM business contact for information on the products, features, and services available in your area.
IBM, the IBM logo, AIX, BladeCenter, Power, POWER, POWER6, POWER6+, POWER7, POWER7+, PowerLinux, Power Systems, and Power Systems Software are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. A full list of U.S. trademarks owned by IBM may be found at ibm.com/legal/copytrade.shtml
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.
29 May 2012. Added PowerLinux worksheet/tab, updated/augmented some text in the Performance Sizer tab, added more sorting columns (E, F, G, H, I) for adapter descriptions.
11 June 2012. Added PowerHA support columns X & Y. Filled in more of the sorting column E, F, G, H, I values.
13 July 2012
Expanded PowerLinux tab to include two new adapter features and several full-high adapters for #5802/5877
22 July 2012
Added Linux RHEL6.1 support of RoCE adapter
1 Oct 2012
Added additional RoCe adapters and support
5 Feb 2013
Added EN0H/EN0J, EN0A/EN0B, updated misc points, updated PowerLinux content. Added a new worksheet which provides detailed technical information about the Ethernet port attributes.
28 May 2013
Added #EL39/EL3A adapters from Solarflare to the PowerLinux tab, updated #EN0H/EN0J/EN0A/EN0B adapter support on 770/780 servers, updated #EN0H/EN0J to show IBM i support of FCoE starting July 2013, updated #EC27/EC28/EC29/EC30 to indicate support in #5802/5877 drawers. Updated several adapters to show PowerHA support. Also updated the web location where this tech doc is stored.
5 Aug 2013
Added information for PowerLinux 7R4 which announced 30 July.
Oct 1 2013. Added IBM Flash Adapter 90, PCIe2 SAS adapter refresh, Solarflare adapters.
14 Jan 2014. Added PCIe Gen3 SAS adapters, expanded support of ESA3 SAS adapter, added IBM i Bisync adapter, expanded IBM Flash Adapter 90 support for 7R2 and RHEL.
Added new source manufacturer tab/worksheet.
30 May 2014. Added expanded support of existing SAS adapters. Added 4-port Ethernet card with copper twinax.
6 June 2014. Added POWER8 PCIe adapters.
8 July 2014. Added support for July 15th announcement
12 December 2014. Added support for October 6th announcement
The PCIe Adapters worksheet/tab uses the following column headings: Feature Code, Feature Name, Adapter Gen (Gen1/Gen2), Cable FC, # ports, Supported POWER8, SRIOV, Other, CCIN, Form Factor, Functionally Equiv Features, POWER8 Performance Info, I/O drawer Support (Gen1/Gen2/Both), POWER 7/7+ I/O Drawer Slot Support, Slot Comments, Sizing Factor, Performance Info Comment, OS Support (Min AIX Level, Min IBM i Level), PowerHA Comments, PowerHA Support (For AIX, For IBM i), comments.
Example row - Feature Code 5735, "8 Gigabit PCI Express Dual Port Fibre Channel Adapter": SRIOV No; CCIN 577D; Form Factor FH; Gen1; functionally equivalent feature 5273; supported on POWER8 S814, S824; I/O drawer support Both (5802, 5877, 5803, 5873); Sizing Factor 16 (Performance Info Comment: Note P9); Min AIX 5.3; Min IBM i 6.1; PowerHA comment: NPIV requires VIOS; PowerHA support for AIX: Yes; for IBM i: Yes.
This worksheet / Tab is for use with the IBM PowerLinux servers
Adapter support is also documented at URL:
http://www-01.ibm.com/support/knowledgecenter/8247-21L/p8eab/p8eab_83x_8rx_supported_pci.htm?lang=en
The marketing model name for the 2U servers is 7R1 or 7R2; the ordering system machine type is 8246.
7R1 ordering models (one socket) include 8246-L1C/L1S/L1D/L1T and 7R2 ordering models are 8246-L2C/L2S/L2D/L2T.
The "C" and "D" models don't support attaching I/O drawers (12X or disk/SSD-only). The "S" and "T" models do.
The 7R2 (model L2S/L2T only) can attach one or two 12X PCIe I/O drawers. Each drawer has full high PCIe slots.
The marketing model name for the 4U server is 7R4. The ordering system machine type is 8248.
The 7R4 is a four-socket server with an ordering model of L4T. It is a 4U server with full high PCIe slots. It can attach up to four 12X PCIe I/O drawers, also with full high PCIe slots. The 7R4 does not support low profile (LP) adapters.
Feature Name
Adapter Description
2053
2055
PCIe LP RAID & SSD SAS Adapter 3Gb w/ Blin SAS-SSD Adapter - Double-wide- 4 SSD modu
2728
USB 4-port
5260
5269
5270
5271
PCIe LP 4-Port 10/100/1000 Base-TX Ethernet Ethernet 1Gb 4-port - TX / UTP / copper
5272
5273
5274
5275
5276
5277
5279
PCIe2 LP 4-Port 10GbE&1GbE SFP+ Copper& Ethernet 10Gb+1Gb 4Ports: 2x10Gb copp
5280
5281
5283
5284
5286
PCIe2 LP 2-Port 10GbE SFP+ Copper Adapter Ethernet 10Gb 2-port SFP+ - copper twinax
5289
5290
5708
5717
4-Port 10/100/1000 Base-TX PCI Express AdaptEthernet 1Gb 4-port - TX / UTP / copper
5732
10 Gigabit Ethernet-CX4 PCI Express Adapter Ethernet 10Gb 1-port - copper CX4
5735
8 Gigabit PCI Express Dual Port Fibre Channel Fibre Channel 8Gb 2-port
5748
5767
2-Port 10/100/1000 Base-TX Ethernet PCI Expr Ethernet 1Gb 2-port - TX / UTP / copper
5768
5769
5772
5774
4 Gigabit PCI Express Dual Port Fibre Channel Fibre Channel 4Gb 2-port
5785
5805
PCIe 380MB Cache Dual - x4 3Gb SAS RAID AdSAS 380MB cache, RAID5/6, 2-port 3Gb
5899
5901
5913
EC27
PCIe2 1.8GB Cache RAID SAS Adapter Tri-port SAS 1.8GB Cache, RAID5/6 - 3-port, 6Gb
PCIe2 LP 2-Port 10GbE RoCE SFP+ Adapter Ethernet 10Gb 2-port RoCE SFP+ copper tw
EC3A
EC41
EC45
EJ16
Graphics - 3D
USB 4-port
CAPI
EL09
EL10
EL11
EL27
EL2K
EL2M
EL2N
EL2P
EL2Z
EL38
EL39
EL3A
EL3B
EL3C
EL3Z
EL60
EN0B
EN0L
EN0N
EN0T
EN0V
PCIe2 LP 16Gb 2-port Fibre Channel Adapter Fibre Channel 16Gb 2-port
PCIe2 LP 4-port (10Gb FCoE & 1GbE) SFP+Co Ethernet 10Gb+1Gb CNA 4Ports
PCIe2 4-port(10Gb FCoE & 1GbE) LR&RJ45 AdEthernet 10Gb+1Gb CNA 4Ports
PCIe2 4-Port (10Gb+1GbE) SR+RJ45 Adapter Ethernet 10Gb+1Gb CNA 4Ports
PCIe2 4-port (10Gb+1GbE) Copper SFP+RJ45 Ethernet 10Gb+1Gb CNA 4Ports
EN0Y
EN28
ES09
ESA1
2x10Gb Co
2x10Gb op
2x10Gb op
2x10Gb co
Slots
POWER7
POWER8
CCIN
Form
Factor
Gen1 /
Gen2
Drawer Slot
Support
57CD
LP
Gen1
57CD
FH
Gen1
NA
Yes
5802, 5877,
EL36, EL37
57D1
FH
Gen1
NA
Yes
5802, 5877,
EL36, EL37
Gen2
S812L, S822L
NA
NA
S812L, S822L
NA
NA
S812L, S822L
NA
NA
S812L, S822L
NA
NA
Gen1
Gen1
NA
NA
NA
NA
Gen1
NA
NA
576F
5269
2B3B
5271
5272
577D
5768
LP
LP
LP
LP
LP
LP
LP
Gen1
Gen1
Gen1
S821L, S822L
NA
NA
5275
5774
57D2
2B43
2B44
5767
58E2
5287
LP
LP
LP
LP
LP
LP
LP
LP
NA
NA
NA
NA
Gen2
NA
NA
Gen2
Gen1
S821L, S822L
S821L, S822L
NA
NA
NA
NA
S821L, S822L
S821L, S822L
NA
NA
NA
NA
NA
NA
5802, 5877,
EL36, EL37
Gen1
Gen1
Gen1
Gen2
Gen2
S821L, S822L
S812L, S822L
S821L, S822L
5288
LP
Gen2
57D4
FH
Gen1
NA
Yes
Gen1
NA
NA
57D4
LP
2B3B
FH
Gen1
NA
Yes
5802, 5877,
EL36, EL37
5271
FH
Gen1
NA
Yes
5802, 5877,
EL36, EL37
5732
FH
Gen1
NA
Yes
5802, 5877,
EL36, EL37
577D
FH
Gen1
NA
Yes
5802, 5877,
EL36, EL37
5748
FH
Gen1
NA
Yes
5802, 5877,
EL36, EL37
5767
FH
Gen1
NA
Yes
5802, 5877,
EL36, EL37
5768
FH
Gen1
NA
Yes
5802, 5877,
EL36, EL37
5769
FH
Gen1
NA
Yes
576E
FH
Gen1
NA
Yes
5802, 5877,
EL36, EL37
5802, 5877,
EL36, EL37
5774
FH
Gen1
NA
Yes
5802, 5877,
EL36, EL37
57D2
FH
Gen1
NA
Yes
5802, 5877,
EL36, EL37
NA
574E
FH
Gen1
NA
Yes
5802, 5877,
EL36, EL37
576F
FH
Gen2
NA
Yes
5802, 5877,
EL36, EL37
57B3
FH
Gen1
NA
Yes
5802, 5877,
EL36, EL37
57B5
EC27
FH
LP
Gen2
Gen2
NA
NA
Yes
NA
5802, 5877,
EL36, EL37
NA
57BD
LP
Gen3
S812L, S822L
58F9
EJ16
LP
LP
LP
Gen2
Gen2
Gen3
S812L, S822L
S812L, S822L
S812L, S822L
S812L, S822L
NA
NA
S812L, S822L
NA
NA
Gen2
NA
NA
NA
NA
5774
57B3
576F
LP
LP
LP
Gen1
Gen1
EC27
LP
Gen2
57B4
5767
LP
LP
Gen2
Gen1
NA
NA
NA
NA
Gen1
S812L, S822L
NA
NA
S812L, S822L
NA
NA
S812L, S822L
NA
NA
NA
NA
577D
5287
EC29
LP
LP
LP
Gen2
Gen2
S812L, S822L
2B93
LP
Gen2
S812L, S822L
na
LP
Gen2
S812L, S822L
NA
NA
na
57B4
2CC1
2CC4
LP
LP
LP
LP
Gen2
Gen3
Gen2
Gen2
NA
Yes
NA
S812L, S822L
S812L, S822L
S812L, S822L
57B4
LP
Gen3
S812L, S822L
577F
2CC1
2CC0
2CC3
2CC3
LP
LP
LP
LP
LP
Gen2
Gen2
Gen2
Gen2
Gen2
S812L, S822L
S812L, S822L
S812L, S822L
S812L, S822L
S812L, S822L
S812L, S822L
S812L, S822L
NA
NA
NA
NA
EN0Y
LP
LP
Gen2
Gen1
578A
FH
Gen2
NA
NA
5802, 5877,
EL36, EL37
57B4
FH
Gen2
NA
Yes
5802, 5877,
EL36, EL37
Sizing
Drawer
comments
Software Support
Min RHEL
Level
Min SLES
Level
3 to 6
5.5
10
3 to 6
5.5
10
0.1 to 1
5.5
10
0.4 to 4
5.8
11
0.1
5.5
10
20
5.5
10
0.4 to 4
5.5
10
5 to 10
16
5.5
5.5
10
10
0.2 to 2
5.5
10
Sizing
Factor
Comments
5 to 10
5.5
10
0.04
5.5
10
10 to 20
5.7
10
10 to 20
0.2 to 2
5.7
5.5
10
10
30
10 to 20
5.6
5.7
10
10
10 to 20
5.7
10
0.02
5.7
10
0.02
5.7
10
20
5.5
10
0.4 to 4
5.5
10
5 to 10
5.5
10
16
5.5
10
0.1
5.5
10
0.2 to 2
5.5
10
0.2 to 2
5.5
10
5 to 10
5.5
10
5 to 10
5.5
10
5.5
10
0.04
5.5
10
2 to 6
5.5
10
0.4 to 4
5.8
11
1 to 6
5.5
10
15 to 30
20
5.7
6.3
10
11
no NPIV
5.5
10
no NPIV
1 to 6
5.5
10
0.4 to 4
5.8
11
20
6.3
11
15 to 30
0.2 to 2
5.8
5.5
10
10
16
5.5
10
10 to 20
5.7
10
20
6.3
11
20
NA as of Feb
11
20
6.4
n/a
20
6.4
n/a
30
NA as of Feb
11
30
5.8
10
L2T only
6 to 11
6.5
NA
15 to 30
5.8
10
Some selected cable comments. This does not cover all types of adapters or cables.
Twinax Ethernet cable comments
Ethernet cables (twinax) for Power copper SFP+ ports are active cables. Passive cables are not tested/supported. Feature codes #EN01, #EN02 or #EN03 (1m, 3m, 5m) are tested/supported. Note these twinax cables are NOT CX4 or 10GBASE-T or AS/400 5250 twinax. Transceivers are located at each end of the #EN01/02/03 cables and are shipped with each cable.
Active cables will work in all cases. Passive cables have a potential risk of not working in some cases, but are lower cost. To simplify and avoid confusion/error/support/debug issues, Power Systems specifically designates active cabling as the only supported SFP+ copper twinax cable option. Use passive at your own risk and realize it is an unsupported and untested configuration from a Power Systems perspective.
Optical SFP+ cables for Ethernet
These cables are the same as for 8Gb Fibre Channel cabling, but the distance considerations are not as tight. Power Systems optical SFP+ ports include a transceiver in the PCIe adapter or Integrated Multifunction Card.
SFP+ (Small Form Factor Pluggable Plus)
10Gb cables
See TechDoc TD106020 for some good insights:
IBMers: http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD106020
BPs: http://www-03.ibm.com/partnerworld/partnerinfo/src/atsmastr.nsf/WebIndex/TD106020
QDR cables for the #5283 and #5285 adapters which have been tested and are supported are the copper 1m, 3m, 5m (#3287, #3288, and #3289) and optical 10m, 30m (#3290, #3293). Both the copper and optical cables come with the QSFP+ transceivers permanently attached at each end. The adapters just have an empty QSFP+ cage.
.
The 2-port Async adapter (#5289 and #5290) has two physical ports on the card which are RJ45 connectors. Feature code #3930 cables convert from an RJ45 connector to a 9-pin (DB9) D-shell connector. These converter cables are fairly generic and can be found at many electrical stores. This same #3930 is used on the Power 710/720/730/740 system unit to convert the serial/system ports to a DB9 connector. Order one converter cable for each of the two ports you plan on using.
PCIe Gen1 adapters have a wider connector called "Mini-SAS". PCIe Gen2 adapters need either an HD (High Density) Mini-SAS connector or an HD Narrow Mini-SAS connector; PCIe Gen3 adapters need an HD Narrow Mini-SAS connector.
YO cables - the "tail" or "bottom of the Y" connects to the adapter SAS port. The two connectors on the "top of the Y" connect to an I/O drawer to provide redundant paths into the SAS expanders.
X cables - the two "bottoms of the X" connect to two different adapters' ports. The two "tops of the X" connect to an I/O drawer. This provides redundant paths into the SAS expanders and into redundant SAS adapters.
AE or YE cables attach to tape drives
AA cables connect between two PCIe2/PCIe3 SAS adapters with write cache
The new Gen3 SAS adapters have four SAS ports which are VERY close together and require a narrower Mini-SAS HD connector. "Narrow" cable feature codes equivalent to the PCIe Gen2 SAS adapter cables in terms of length and electrical characteristics are announced with the PCIe Gen3 SAS adapters. The narrow cables can be used on the PCIe Gen2 SAS adapters. The earlier "fatter" Mini-SAS HD cables are NOT supported on the new PCIe Gen3 SAS adapters since the connectors would jam together and potentially damage the ports on the back of the PCIe Gen3 card.
This tool is intended as a rough rule of thumb to determine when you are approaching the I/O limits of a system or I/O drawer. HOWEVER, there are many variables and even more combinations of these variables which can impact the actual results in your environment. Thus always apply common sense and remember this is designed to be a quick and simple approach ... not a heavy-weight tool full of precise measurements and interactions. Keep in mind the key planning performance words, "it depends" and "your mileage may vary".
You will be using the "Sizing Factor" for each PCIe adapter. These are found in the "PCIe Adapter" worksheet/tab of this spreadsheet, in the light blue columns.
For example, feature code #5769 (10Gb 1-port Ethernet adapter) has a sizing factor of 5 to 10. If high utilization, use "10". If light usage, use a "5". Or pick a number "6" or "7" or "8" or "9" if you think it is somewhere in between.
Or another example, feature code #5735 (8Gb 2-port Fibre Channel Adapter) has a sizing factor of 16, assuming both ports are used. If only one port is used, then use an "8".
These sizing factors are actually rough Gb/s values. So a sizing value of 8 equates to 8Gb/s.
Totaling all the sizing factors for all the adapters will give you a good idea if you are coming close to a performance limit for your server. You are looking at an overall system performance limit as well as "subsystem limits" or "subtotals". Subsystem limits are maximums for a bus, system unit, I/O drawer, PCIe Riser Card, etc. The maximum Gb/s for each subtotal/subsystem and the overall server are shown further down on this worksheet.
NOTE1: these values do NOT translate directly into MB/s numbers. This tool builds on the observation that most adapters run well below 100% utilization in real client application environments. Thus, to help avoid over configuring for most shops, the tool assigns maximums which are 70-75% of the maximum throughput technically/theoretically possible.
NOTE2: If you are correlating these bandwidth maximums to the bandwidth values provided in marketing presentations or elsewhere in this document, observe that there is a difference. The marketing materials are correct, but use peak bandwidth and assume burst conditions. This sizing tool takes a more conservative approach which probably represents more typical usage by real clients. It assumes simplex and sustained workloads in its maximums.
For example: for simplicity, marketing materials provide a single performance value of 20Gb/s for GX adapters/slots. The fuller specifications for a GX adapter or 12X Hub are: BURST = 10 GB/s simplex & 20 GB/s duplex; SUSTAINED = 5 GB/s simplex & ... GB/s duplex.
This tool uses Gb (bits) vs GB (bytes) because there is a lot of variability between different protocols/adapters and how their throughput numbers convert to Gb and vice versa.
Remember to ask those common sense questions. "Are these adapters being run at the same time?" For example, if you have an adapter that only goes to a tape that only runs one night a week when other adapters are not busy, you probably do not need to count it in the analysis. Or another good question, "Do you have redundant adapters or ports which are there only in case the primary adapter/port fails?" If you do, then you can probably ignore the redundant adapters/ports.
For Fibre Channel and for Ethernet ports, simplex line rates are assumed, not duplex. Based on observations over the years, this is a reasonable simplification. Many client workloads aren't hitting the point where these adapters' duplex capability is a factor in the calculations. If your environment is a real exception and you know you have a really heavy workload using duplex, then add another 20-25% to the sizing factor.
The actual real data rates in most applications are typically less than the line rate. This is especially true for 10Gb Ethernet, where many applications run the adapter at less than 50% of line rates. If you know this is true for you, cut the sizing factor in half (and use that value for the adapter).
Similarly, if you have an 8Gb or 16Gb Fibre Channel adapter and it is attached to a 4Gb switch, then treat it like a 4Gb adapter (cut the 8Gb sizing factor in half). Likewise, if you have a 16Gb FC adapter on an 8Gb switch, treat it like an 8Gb adapter.
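A sketch of that adjustment, assuming a simple proportional scaling of the sizing factor to the slower switch speed (the function name and arguments are illustrative only):

def effective_sizing_factor(adapter_factor, adapter_gbps, switch_gbps):
    """Scale an adapter's sizing factor down when it sits behind a slower switch."""
    if switch_gbps >= adapter_gbps:
        return adapter_factor              # the switch is not the bottleneck
    return adapter_factor * switch_gbps / adapter_gbps

# #5735 8Gb 2-port FC (sizing factor 16) behind a 4Gb switch -> treat like 4Gb, factor 8
print(effective_sizing_factor(16, 8, 4))   # 8.0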
For Fibre Channel over Ethernet adapters (FCoE or CNAs): treat these adapters based on the mix of work done. If using a port for Fibre Channel only, treat it just like the 8Gb Fibre Channel adapter port above. If using a port solely for NIC traffic, treat it like the Ethernet port above. If a port has mixed FC and Ethernet workloads, assign a sizing factor based on the mix.
SAS adapters: Don't be surprised by SAS adapters' max throughput numbers. A 2-port, 6Gb adapter is not 2 x 6 = 12Gb. That would be overlooking the insight that each port/connector is a "x4" connection. Four paths per connection yield 2 x 6 x 4 = 48. That's more than a 2-port 8Gb Fibre Channel card (2 x 8 = 16), but then you could go into duplex mode for FC and 2 x 16 = 32 ... so more comparable to the SAS 2-port. Also remember most SAS adapters run in pairs, so that makes the comparison more challenging. "It all depends." Usually there are a lot of SAS ports for connectivity and the per-port workload is lighter. And remember there can be a huge difference between SSD and HDD.
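The raw-link arithmetic behind that comparison, reproduced as a sketch (these are the rough link-capacity numbers from the paragraph above, not delivered throughput):

# SAS: 2 ports x 6 Gb per lane x 4 lanes ("x4" connector) per port
sas_raw_gb = 2 * 6 * 4            # 48
# Fibre Channel: 2 ports x 8 Gb simplex; roughly doubled if duplex is fully used
fc_simplex_gb = 2 * 8             # 16
fc_duplex_gb = 2 * fc_simplex_gb  # 32
print(sas_raw_gb, fc_simplex_gb, fc_duplex_gb)  # 48 16 32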
Also note there is a big difference in typical workload between a lot of data / database workload and something easier such as boot drives. Boot drives are a very light load.
Transaction workloads focus on IOPS (I/O Operations Per Second) and have lower data rates. Reduce the SAS sizing factor if running a transaction workload. You would probably reduce the sizing factor even if SSD are running transaction workloads, but SSD can drive enough I/O that you might leave the value higher even for a transaction workload. Alternatively, if you are doing data mining or engineering/scientific work, or other applications where there are large block transfers, then Gb/s are high; use the larger sizing factor for the SAS controller.
So SAS adapter rough guides:
~~ #5901 / 5278 is a 6 if running lots of drives with large data blocks, or is a 1 if just supporting boot drives. If configured as a pair of adapters, each adapter has that same sizing factor. So a sizing factor for a pair could be up to 12. Pairs are optional.
~~ #5805 / 5903 is an 8 if running lots of drives with large data blocks (especially SSDs), or a 1 if just doing boot drives. These adapters work as pairs, and each of the pair has this sizing factor. The total sizing factor for a pair could thus be up to 16 with large data blocks.
~~ #ESA1 / ESA2 runs only SSD so use 28 per adapter unless lightly loaded or unless not that many SSD. Pairs are optional.
~~ #5913 or #ESA3 is a 30 if running lots of SSDs or a 16 if running lots of HDDs. These adapters always work as pairs and each of the pair has this sizing factor. The total sizing factor for a pair could thus be up to 60 with lots of SSD.
~~ #2053 / 2054 / 2055 runs only SSD so use a 6 if all four SSD are present. These adapters are not paired (but can be ...).
Also note that "lots" of SSD can vary depending on whether you are running the newer 387GB SSD, which are faster and have higher throughput.
Gen2/Gen1 note: The tool below ignores differences for Gen2 PCIe adapters in Gen1 PCIe slots. You will have to keep this in mind for adapters such as the #5913 or #ESA1 (SAS adapters) and assume a max of 15 vs 30 (remember IOPS is not usually related to bandwidth limitations). Most Gen2 adapters are only supported in Gen2 slots, so it is not commonly a consideration.
Sizing factors for Integrated I/O, 12-X attached I/O drawers, EXP30 Ultra Drawer and 4X IB adapter
Integrated IO Controllers
The 710 through 780 have integrated controllers for running HDD/SSD, DVD, tape, and USB ports. The "B" models have IVE (Integrated Virtual Ethernet) 1Gb Ethernet. You need to include these in the calculations. You might ignore the DVD and USB as they are generally very lightly used and their peaks (if any) happen at off hours. Likewise the 1Gb Ethernet is small. But if you have integrated 10Gb Ethernet or the Dual RAID controllers, you need to include them in the calculations. Treat the integrated 10Gb Ethernet adapter like a PCIe 10Gb Ethernet adapter with the same number of ports.
For a rough guide for the integrated Dual RAID controllers: you may be up to a 10 for the integrated pair (not 10 each) if you have some SSD or a lot of HDDs doing large block I/O. Remember that you can attach an #5887 EXP24S to the SAS ports of the server, and then this same pair of integrated SAS controllers is handling another 24 SAS bays. Increase the sizing factor if running lots of drives -- especially if using large block I/O.
If you are NOT using the "dual" integrated SAS adapter option and are using a "single" integrated SAS adapter, use the same approach as the #5901 / 5278 SAS adapter with a performance sizing factor of 1-6. For the Power 770/780 you can have two of these single adapters, each with their own sizing factor. If a Power 710/720/730/740/750/755, you would have just one adapter with one sizing factor.
This tool focuses on the total bandwidth available to you. These I/O drawers attach to GX++ adapters and it is that GX adapter which sets the maximum bandwidth available to all the adapters in that drawer. If you have two I/O drawers on that same GX++ adapter, they share that same total bandwidth of the GX++ adapter. Having two vs one #5802/5877 provides some redundancy and obviously more PCIe slots, but does NOT increase max bandwidth. Note that each #5803/5873 is logically two drawers, and for higher bandwidth you would attach it to two GX++ adapters.
For this estimator tool, one GX+ bus will support 40 Gb/s and a GX++ bus will support 60 Gb/s (see NOTE2 above). GX+ and GX++ buses can be "internal" or they can be "external". External GX buses are accessed through a GX slot. All of the external GX slots are GX++ on POWER7 servers except the first GX slot on the Power 750.
Note that some servers share a bus for internal I/O as well as for GX slots. Thus you'll see below times when the total bandwidth is less than the sum of the parts.
Note the Power 750 has two different GX adapters. The #5609 is GX+ (40Gb) and the #5616 is GX++ (60Gb). If a #5609 is placed in the GX++ slot, the bandwidth is restricted to 40Gb/s.
#5685 PCIe Gen2 Riser Card in Power 720/740 ("B" or "C" models)
This 4-slot expansion option plugs into the GX++ slot. Its max throughput is 60 Gb/s.
#5610 PCIe Gen1 Riser Card in Power 720/740 ("B" model only)
This 4-slot expansion option plugs into the GX++ slot. Its max throughput is 40 Gb/s.
NOTE: the table below assumes #5685 is used. If #5610 is used, subtract 20 Gb/s from the system max and from the riser card subtotal.
The #EDR1 EXP30 Ultra Drawer attaches to a GX++ PCIe adapter placed in the GX slot of a "D" model 710/720/730/740 or a "D" model 770/780. The #5888 EXP30 Ultra Drawer attaches to a GX++ PCIe adapter placed in the GX slot of a "C" model server. Inside the Ultra Drawer are two very high performance SAS controllers, more powerful than the #5913/#ESA3. So assume a sizing factor of 30 per controller for this drawer if there are a lot of busy SSD. Note each SAS controller has a PCIe cable connecting it to the GX++ PCIe adapter, but the two cables can be plugged into different GX++ PCIe adapters. Thus assume a sizing factor of 30 for each cable plugged into the GX slot.
The GX++ PCIe adapters are the #EJ03, #EJ0H and #1914.
#EJ03 plugs into a 720/740. #EJ0H plugs into a ... . #1914 plugs into a 750/760/770/780.
This specialized adapter is used only on the Power 710/730 "B" model 8231-E2B to connect to a 4x InfiniBand switch in selected clustering environments. (On 710/730 "C" models with PCIe Gen2 slots, the QDR PCIe adapter is used instead of this DDR option.) It runs up to a max of 30Gb/s. Actual usage is HIGHLY dependent upon the clustering application. Use a sizing factor of 2 - 30 depending on the application. A modest percentage of servers use this GX++ adapter.
The Tables
How to use the tables below (see also the example just below the Power 710/730 B model table and the example just below the Power 720/740 C model table):
1- Look up the sizing factor for each of the adapters. Make sure you know in which PCI slot the adapter(s) will be placed.
2- Add the maximum sizing factor of each adapter in that subgroup of PCIe slots represented by a separate column in the tables below. Do this for all subgroups which your server will be using.
3- For the subgroups which have internal I/O, also add the sizing factor(s) for the internal I/O.
4- Compare each subgroup's sizing factor total to the subtotal maximum. 4a- If the sizing factor total is less than the subtotal maximum, you easily have adequate bandwidth. 4b- If the sizing factor total is larger than the subtotal maximum, take action: review the sizing factors and adjust downward as appropriate, then compare the revised sizing factor total to the subtotal maximum. If still too high, take action such as moving adapters out of that subgroup.
5- Then add the subgroups together and compare to the total bandwidth available for the server. If the adapters in a subgroup have a larger total sizing factor than the subgroup maximum, use the smaller value. 5a- If your total of subgroups is less than or equal to the system total, you should have adequate bandwidth. 5b- If your total of subgroups exceeds the system total, see if additional sizing factor adjustments would be helpful and compare again. If the system total is still exceeded, take action or make sure expectations have been set with the client. (A rough sketch of this procedure appears just below.)
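A rough sketch of steps 1-5 in code, using the Power 720 "C" model example numbers that appear further down this worksheet; the subgroup names and data layout are illustrative, not part of the spreadsheet.

# Steps 1-3: sizing factors per subgroup of PCIe slots, including integrated I/O.
subgroups = {
    "system unit (6 slots + integrated I/O)": ([8, 20, 20, 0.2, 4, 10], 60),  # subtotal max 60 Gb/s
    "PCIe riser card in GX slot":             ([6, 6], 60),                    # subtotal max 60 Gb/s
}
system_max = 100  # Gb/s for a 720 "C" using 1 GX slot (see the tables below)

capped_sum = 0.0
for name, (factors, subtotal_max) in subgroups.items():
    total = sum(factors)
    # Step 4: compare each subgroup total to its subtotal maximum.
    verdict = "OK" if total <= subtotal_max else "review/adjust or move adapters"
    print(f"{name}: {total:.1f} vs subtotal max {subtotal_max} -> {verdict}")
    # Step 5: when adding subgroups, never count more than the subgroup maximum.
    capped_sum += min(total, subtotal_max)

# Steps 5a/5b: compare the capped sum of subgroups to the system total.
print(f"system: {capped_sum:.1f} vs system max {system_max} ->",
      "OK" if capped_sum <= system_max else "adjust further or set expectations")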
Important insight: This tool focuses on aggregate bandwidth with subtotals. The "subtotals" or "subsystem limits" are important. Though individual Gen1 or individual Gen2 PCIe slots are the same, groups of Gen1 PCIe slots or groups of Gen2 PCIe slots sometimes share bandwidth. Bandwidth can also be shared with other internal I/O. The table below lays out the subtotals and the total. If you exceed a subtotal, for example a GX slot, that means you have a bottleneck which will limit the max throughput of the adapters located in that subgroup.
"B" models
8231 E2B
730 2-socket
Internals 4 slot plus 1st GX slot only for 2nd GX slot only for total
integrated IO
#5266
#5266
710 using 0 GX
slots
40 Gb/s
n/a
n/a
40 Gb/s
710 using 1 GX
slot
40 Gb/s
30 Gb/s
n/a
70 Gb/s
730 using 0 GX
slots
40 Gb/s
n/a
n/a
40 Gb/s
730 using 1 GX
slot
40 Gb/s
30 Gb/s
n/a
70 Gb/s
730 using 2 GX
slots
40 Gb/s
30 Gb/s
30 Gb/s
100 Gb/s
Example: Power 730 B model. (See the table just above this row.) In the system unit (internal four slots) you have two 2-port 8Gb Fibre Channel adapters (sizing factor 16) and one 4-port 1Gb Ethernet adapter (sizing factor 0.4-4). (One of the PCIe slots is empty.) You have three 177GB SSD in the six SAS bays run by the integrated SAS adapter (sizing factor up to 10) and you have a 2-port 10Gb Ethernet IVE/HEA (sizing factor 10-20). In this example the GX slots are not being used. You get the sizing factors from the "PCIe Adapter" worksheet/tab for the PCIe adapters and from the text above for the integrated I/O.
For Power 710/730 "C" or "D" models (PowerLinux 7R1/7R2) with PCIe Gen2 slots
Columns: Internals (6 slots plus integrated IO) | 1st GX slot | 2nd GX slot | total
710 using 0 GX slots: 60 Gb/s | n/a | n/a | 60 Gb/s
710 using 1 GX slot: 60 Gb/s | 30 Gb/s | n/a | 90 Gb/s
730 using 0 GX slots: 60 Gb/s | n/a | n/a | 60 Gb/s
730 using 1 GX slot: 60 Gb/s | 30 Gb/s | n/a | 90 Gb/s
730 using 2 GX slots: 60 Gb/s | 30 Gb/s | 30 Gb/s | 120 Gb/s
730 using 2 GX slots: 60 Gb/s | 60 Gb/s | 120 Gb/s
For Power 720/740 "B" models
Columns: Internals plus integrated IO | 1st GX slot | 2nd GX slot | total
720 using 0 GX slots: 40 Gb/s | n/a | n/a | 40 Gb/s
720 using 1 GX slot: 40 Gb/s | 60 Gb/s | n/a | 100 Gb/s
740 using 0 GX slots: 40 Gb/s | n/a | n/a | 40 Gb/s
740 using 1 GX slot: 40 Gb/s | 60 Gb/s | n/a | 100 Gb/s
740 using 2 GX slots: 40 Gb/s | 60 Gb/s | 60 Gb/s | 160 Gb/s
For Power 720/740 "C" or "D" models with PCIe Gen2 slots
8202 E4C or E4D
POWER7 720 1-socket
8205 E6C or E6D
POWER7 740 2-socket (if only one socket populated, treat as 720 in this too
720 using 0 GX
slots
Internals 6 slots
plus integrated IO
60 Gb/s
n/a
n/a
60 Gb/s
720 using 1 GX
slot
60 Gb/s
60 Gb/s
n/a
100 Gb/s
740 using 0 GX
slots
60 Gb/s
n/a
n/a
60 Gb/s
740 using 1 GX
slot
60 Gb/s
60 Gb/s
n/a
100 Gb/s
740 using 2 GX
slots
60 Gb/s
60 Gb/s
60 Gb/s
160 Gb/s
Example: Power 720 C model. In the system unit (internal six slots) you have one 2-port 4Gb Fibre Channel adapter (sizing factor 8), two 2-port 10Gb FCoE adapters (sizing factor 20 each), an async/comm card (sizing factor 0.2), and one required 4-port 1Gb Ethernet adapter in the C7 slot (sizing factor 0.4-4). (One of the PCIe slots is empty.) You have four HDD in the six SAS bays run by the integrated SAS adapters (sizing factor up to 10 - probably less with only four HDD). You have two PCIe RAID SAS adapters in a PCIe Riser Card, each with four SSD (sizing factor 6 each).
Totaling the max sizing factors for these internal components yields 8+20+20+0.2+0+4+10 = 62.2 (PCIe adapters = 8+20+20+0.2+0+4 and internal I/O = 10). 62.2 is larger than the 60 Gb/s max bandwidth available for these slots, so you need to analyze the sizing factors to determine if they actually use less bandwidth. Let's assume they do and the revised total is less than 60 ... which we'll call "Adjusted 62.2". (Example continued below.)
Next look at the GX slot sizing factors of 6+6 = 12. 12 is less than 60, so you are OK on this subtotal. On the Power 720, there is an overlap of bandwidth between the 1st GX slot and the internal slots, so add all these subtotals together: "Adjusted 62.2" + 12 is less than 100 Gb/s, so you fit in the system maximum as well as both subtotals. The configuration's bandwidth requirements should be OK.
For Power 750/755 "B" model Note this has both PCI-X DDR and PCIe Gen1 slots
8233 E8B
POWER7 750 4 socket (if only one socket populated, 2nd GX slot not enabled
8236 E8C
POWER7 755 4-socket (note 12X-I/O drawers not supported)
No
Internals 5 PCI
1st GX slot for GX+ 2nd GX slot for
slots plus integrated adaper for 12X I/O GX++ adaper for
IO
loop
12X I/O loop (2-4
socket only)
total
n/a
n/a
40 Gb/s
40 Gb/s
n/a
40 Gb/s
Two-to-four
40 Gb/s
sockets with 0 12X
loops
n/a
n/a
40 Gb/s
Two-to-four
40 Gb/s
sockets with 1 12X
loops
40 Gb/s
n/a
40 Gb/s
Two-to-four
40 Gb/s
sockets with 1 12X
loops
n/a
60 Gb/s
100 Gb/s
Two-to-four
40 Gb/s
sockets with 2 12X
loops
40 Gb/s
60 Gb/s
100 Gb/s
For Power 750/760 "D" models or the PowerLinux 7R4 with PCIe Gen2 slots
8408 E8D
POWER7 750
9109 RMD
POWER7 760
8248 L4T
PowerLinux 7R4
Internal PCIe slots
1-4
60 Gb/s
60 Gb/s
n/a
60 Gb/s
n/a
n/a
With 2 or more
60 Gb/s
proc DCM and two
12X I/O loop
60 Gb/s
60 Gb/s
60 Gb/s
POWER7 770
POWER7 780
Per NODE /
Processor
enclosure
Internal PCIe slots 1st GX slot for GX+ 2nd GX slot for
5,6 plus integrated + adapter for 12X
GX++ adapter for
IO
I/O loop
12X I/O loop
40 Gb/s
40 Gb/s
n/a
n/a
40 Gb/s
40 Gb/s
60 Gb/s
n/a
40 Gb/s
40 Gb/s
60 Gb/s
60 Gb/s
For Power 770/780 "C" and "D" models with PCIe Gen2 slots
9117 MMC / MMD
POWER7 770
9179 MHC / MHD
POWER7 780
8412 EAD
Power ESE
this model has a max of one processor enclosure/node
Per NODE /
Processor
enclosure
60 Gb/s
60 Gb/s
60 Gb/s
60 Gb/s
60 Gb/s
60 Gb/s
60 Gb/s
60 Gb/s
60 Gb/s
GX slot one
60 Gb/s
60 Gb/s
60 Gb/s
60 Gb/s
60 Gb/s
60 Gb/s
60 Gb/s
60 Gb/s
60 Gb/s
Optional reading
GX slot two
GX slot three
GX slot four
60 Gb/s
As shown in the tables above, not all PCIe slots are equal. Individually they are equal, but depending on the server or the I/O drawer, some slots share bandwidth with other slots or I/O. The following sentences summarize the tables above for the PCIe slots:
SYSTEM units
Power 710/720/730/740 "B" and "C" and "D" model system units --- All PCIe slots within a system unit are equal
Power 750/755 system unit - all PCIe slots are equal -- All PCI-X DDR slots are equal
Power 750/760 "D" model system units --- All PCIe slots are NOT equal. Individual slots are equal, but slots 1-4 are on one bus and slots 5-6 are on a separate bus which shares bandwidth with internal controllers
Power 770/780 "B" and "C" and "D" model system units --- All PCIe slots are NOT equal. Individual slots are equal, but slots 1-4 are on one bus and slots 5-6 are on a separate bus which shares bandwidth with internal controllers
Power 795 has no PCI slots in the system unit -- everything attached through GX++ slots
PCIe Riser Card
#5610, which can be placed in the 720/740 "B" system unit. All PCIe slots (Gen1) are equal
#5685, which can be placed in the 720/740 "B", "C", or "D" system unit. All PCIe slots (Gen2) are equal
12X-attached PCIe I/O drawers: #5802/5877 (19-inch rack) and #5803/5873 (for 795/595)
The bandwidth available to one or more I/O drawers on a 12X loop is obviously the same as that of the GX slot to which they are attached. From this rough sizing tool's perspective, it usually doesn't make sense to worry about individual PCIe slots within the I/O drawer. That is, trying to optimize PCIe card placement within an I/O drawer for most client workloads won't make that much difference. However, for those who are focused on the details, the following information is shared. See Info Center for more detail.
#5802/5877 when only one drawer per GX adapter (one per 12X loop). A drawer has 10 PCIe Gen1 slots and two connections to the GX adapter in the server. The ten PCIe slots are in 3 subgroups: Slots 1,2,3 and Slots 4,5,6 and Slots 7,8,9,10. Slots 1,2,3 and Slots 4,5,6 each have a separate connection to the GX adapter. Slots 7,8,9,10 first connect to the Slot 1,2,3 subgroup and share the bandwidth of that subgroup's connection to the GX adapter. Thus there may be a slight advantage in putting the highest bandwidth adapters in Slots 4,5,6 and the adapters with the least latency requirements in Slots 1,2,3 or 4,5,6. HOWEVER, most of the time this will not make much (if any) difference to the client, assuming your total bandwidth need is reasonable.
#5802/5877 when two drawers per GX adapter (two per 12X loop). Each drawer will have a connection to the GX adapter, and all the PCIe subgroups in a drawer share that one connection. Depending on the cabling used, either Slots 1,2,3 or Slots 4,5,6 will be the subgroup with the connection upstream to the GX adapter. The other two PCIe subgroups will connect through the "closer" PCIe subgroup. Most of the time, where the PCIe adapter is placed in the drawer will not make much (if any) noticeable difference to the client. The fact that the two drawers share the total GX adapter bandwidth can make a difference. So where I/O bandwidth is a consideration, try to use just one I/O drawer per loop, or try to balance the bandwidth needs across the two I/O drawers.
#5803/5873 have 20 PCIe slots. Think of this drawer as logically two 10-slot PCIe drawers inside one drawer. You can attach each half of the drawer to its own GX adapter or you can attach both halves of the drawer to a single GX adapter. Each of the two logical drawers is identical to the #5802/5877 described above.
For the extremely analytical reader: each #5802/5877/5803/5873 PCIe slot subgroup is on its own PCIe x8 PHB. Specifications are: BURST = 2 GB/s simplex & 4 GB/s duplex; SUSTAINED = 1.6 GB/s simplex & 2.2 GB/s duplex. Most of this granular detail is not useful in client configuration sizings.
total
100 Gb/s
160 Gb/s
200 Gb/s
total
80 Gb/s
140 Gb/s
180 Gb/s
processor enclosure/node
total
100 Gb/s
160 Gb/s
200 Gb/s
total
60 Gb/s
120 Gb/s
180 Gb/s
240 Gb/s
This wonderfully complete table thanks to Rakesh Sharma of the IBM Austin Development team
Attribute or IEEE
Standard
Ether II and IEEE 802.3
802.3i
802.3u
802.3x
802.3.z
802.3ab
802.3ad/802.1AX
802.3ae
Attribute Description
Ether II and IEEE 802.3 encapsulated frames
10Mb Ethernet over twisted pair, 10Base-T
100Mb Ethernet over twisted pair, 100Base-TX
Full duplex and flow control
1Gb Ethernet over fiber, 1000Base-X
1Gb Ethernet over twisted pair, 1000Base-T
Link Aggregation/LACP/Load Balancing
10Gb Ethernet over fiber
TSO
LRO
EN10
2 10Gb
CNA
copper +
2 10/1Gb
SRIOV capable
Houston
Opt
Houston
CU
1768
2 10Gb
2 10Gb
CNA SR + copper +
2 10/1Gb 2 1Gb
1769
2 10Gb
SR + 2
1Gb
EN0K /
EN0L
2 10Gb
CNA + 2
1GB
x
2 10Gb
CNA + 2
1GB
x
x
x
x
EC27 /
EC28
2 10Gb
copper
RoCE
x
EC29 /
EC30
5279 /
5745
2 10Gb
2 10Gb
copper +
SR RoCE 1Gb
x
x
x
x
5280 /
5744
EC2G /
EC2J /
EL39
2 10Gb
copper +
2 10Gb
OpenSR + 1Gb OnLoad
5270 /
5708
2 10Gb
copper +
x
2 10Gb
CNA SR
x
x
x
5286 /
5288
2 10Gb
copper
x
5284 /
5287
2 10Gb
SR
x
x
x
5772
1 10Gb
LR opt
x
5275 /
5769
1 10Gb
SR
x
x
x
1 10Gb
CX4
x
5260 /
5899
4 1Gb
x
x
x
x
x
x
5274 /
5768
5281 /
5767
2 1Gb opt
SX
2 1Gb
Ethernet Cards
5271 /
5717
4 1Gb
Power Systems obtains many PCIe adapters from non-IBM suppliers which leverage technology already created by the PCIe supplier. Sometimes IBM works extensively with the supplier on special functions/capabilities to interface into Power Systems and to comply with standards/structures/interfaces established by Power Systems. Sometimes it is a modest amount of changes which are made to fit into the Power Systems infrastructure. Very rarely is an adapter brought out by IBM Power Systems with no changes from what the supplier offers generically.
For AIX, IBM i, and VIOS ... For adapters ordered as a Power System feature code, it is IBM who ships the underlying drivers and it is IBM who provides support of the hardware/software combination. IBM development may work with the PCIe supplier and leverage base supplier driver logic, but IBM is the single point of contact for all support from a client perspective. If the problem appears to be related to the PCIe adapter hardware design or manufacturing, or to underlying supplier-provided driver logic, IBM support and/or development engineers will work with the adapter supplier if needed.
For Linux ... For adapters ordered as a Power System feature code, the drivers used are the generally available, open source drivers for Linux. IBM works with Linux distributors and the adapter vendors to help ensure testing of the hardware/software has been done. Support of the hardware/software combination is provided by the organization selected/contracted by the client. That organization could be groups like SUSE, Red Hat, IBM or even client self-supported.
Given the above, usually knowing the specific adapter supplier is of no interest to a client. However, occasionally it is useful to know who IBM worked with to provide PCIe adapters for Power Systems, especially in a Linux environment where a user is doing something unusual.
Feature
Code
2053
2054
2055
2728
2893
2894
4367
4377
4807
4808
4809
5260
5269
5270
5271
5272
5273
5274
5275
5276
5277
5278
5279
5280
5281
5283
5284
Feature Name
5285
5286
5287
5288
5289
5290
5708
5717
5729
5732
5735
5744
5745
5748
5767
5768
5769
5772
5773
5774
5785
5805
5899
5901
5903
5913
9055
9056
EC27
EC28
EC29
EC2G
EC2H
EC2J
EC2K
EC30
EJ0J
EJ0L
EJ0X
EL09
EL10
EL11
EL2K
EL2M
EL2N
EL2P
EL2Z
EL38
EL39
EL3A
EN0A
EN0B
EN0H
EN0J
EN0Y
EN13
EN14
ES09
ESA1
ESA2
ESA3
Adapter Description
Digi
Mellanox
Emulex
Emulex
Emulex
Digi
Digi
Qlogic
Intel
Qlogic
Chelsio
Emulex
Chelsio
Chelsio
Matrox
Intel
Intel
Chelsio
Intel
Emulex
Emulex
Digi