Notice
The information in this manual is subject to change without notice. All statements,
information and recommendations in this manual are believed to be accurate, but are
presented without warranty of any kind, expressed or implied. Users must take full
responsibility for their use of any products.
In this white paper, we will confine the capacity discussion to the OFDM physical
layer in order to avoid a lengthy discussion of the additional complications that
mobility introduces, including handover and dynamic cell occupancy. Capacity for
mobile applications will be an important topic for a later white paper.
The objective of this white paper is to better understand WiMAX system capacity.
But first, what do we mean by the “capacity” of a WiMAX system? Simply put, the
system capacity refers to the number of connections that the wireless channel can
support without unduly degrading the data services carried on the channel.1 We
focus on the wireless channel rather than other system resources because normally
the airlink is the most expensive and therefore the controlling system element related
to capacity in wireless access networks.
Operators care deeply about the system capacity because of the nature of wireless
access network deployments. WiMAX access networks are often deployed in point-
to-multipoint cellular fashion where a single base station provides wireless coverage
to a collection of subscriber stations within the coverage area. The base station in
turn is linked to external wide area networks via wired, fiber, or wireless point-to-
point backhaul infrastructure. Normally, the radio spectrum that is available to a
deployment is a scarce and often expensive resource. During the planning phase of
a deployment, once an operator has determined the radio spectrum channel size for
each base station, the next question becomes: how many data connections can the
channel support? The question is doubly important since it is often prohibitively
expensive to later overlay additional wireless capacity into the same coverage area.
Further, it is central to understanding how many base stations are required for a
deployment region. And without a firm understanding of the system capacity an
operator has no way of estimating the recovery time for the up-front costs of
deploying the access network. Understanding the system capacity is therefore key to
deploying a commercially successful access network.
1
As we will see, each WiMAX connection carries data traffic characterized by a set of QoS parameters, so that a set of
connections has an associated aggregate total bandwidth. The capacity of the system can therefore equivalently be thought of
in terms of the aggregate total bandwidth required to support a set of connections.
This white paper begins with a discussion of the 802.16 protocol model. Next we
examine the system capacity at the physical layer, including a discussion of the
overhead needed to support the wireless channel. Moving up the protocol stack, we
look at the capacity at the media access layer and examine the overhead introduced
there. We then examine the WiMAX QoS model and discuss how the mapping of
data services influences the system capacity. Bringing this information together, we
illustrate the resultant system capacity that arises in several hypothetical
deployment scenarios. The white paper concludes with a brief summary of the main
points.
The physical (PHY) layer takes MAC PDUs input at the PHY SAP and arranges
them for transport over the airlink. The WiMAX protocol adds overhead to user traffic
starting with the physical layer, which transmits to peer physical layers over the
airlink. This section presents the overhead incurred at the PHY layer including
mandatory and optional elements. Variable factors including adaptive modulation
and code rate are discussed.
The 802.16 MAC/PHY standard attempts to avoid constraining the carrier frequency
(“below 11 GHz”) for OFDM/OFDMA radios and places very general limits on the
channel size (from 1.25 to 20 MHz). There are currently no worldwide spectrum
allocations for WiMAX systems.
In the OFDM PHY there are 256 sub carriers spanning the sampling spectrum which
is defined as:
Eq. 1) Fs = n · BW,
Where n is the sampling factor, a constant dependent on the channel size, and BW
is the channel size in units of Hz. The number of sub carriers corresponds to the size
of the FFT/IFFT used to receive and transmit the OFDM symbols. To reduce the
complexity of the digital processing algorithms it is desirable to use FFT sizes that
are powers of 2.
For channels in the 3.5 GHz band the licensed channels are multiples of 1.75 MHz
and n = 8/7. For a channel width of 3.5 MHz the sampling spectrum is 4.0 MHz. The
256 sub carriers are equally distributed across the sampling spectrum implying a
spacing of:
Eq. 2) ∆f = Fs/256 .
In order to provide increased inter-channel interference margin and ease the radio
filtering constraints, not all of the 256 sub carriers are energized. There are 28 lower
and 27 upper “guard” sub carriers plus the DC sub carrier that are never energized.
Of the 256 total sub carriers therefore, only 200 are used which leaves a total
occupied spectrum of ∆f · 200 = 3.125 MHz for a 3.5 MHz channel. This example
implies a raw, occupied bandwidth efficiency of 89% (3.125/3.5 = 89%), but the
number varies for other channel bandwidths and sampling factors. This is the first
example we have encountered of what can be considered to be channel overhead
that decreases the channel capacity, in this case it is required by design to improve
the channel quality when adjacent spectrum is occupied.
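The sampling-spectrum and sub carrier spacing relations (Fs = n · BW and Eq. 2) can be checked numerically. A minimal sketch in Python, using the 3.5 MHz / n = 8/7 example from the text:

```python
# OFDM PHY sampling parameters for the 3.5 MHz channel example.
def sampling_params(bw_hz, n):
    fs = n * bw_hz                 # sampling spectrum, Fs = n * BW
    delta_f = fs / 256             # Eq. 2: sub carrier spacing
    occupied = delta_f * 200       # only 200 of 256 sub carriers are energized
    return fs, delta_f, occupied

fs, delta_f, occupied = sampling_params(3.5e6, 8 / 7)
print(fs)                # ~4.0 MHz sampling spectrum
print(delta_f)           # ~15625 Hz sub carrier spacing
print(occupied)          # ~3.125 MHz occupied spectrum
print(occupied / 3.5e6)  # ~89% occupied bandwidth efficiency
```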
The raw sub carrier capacity, before taking out the overhead added by redundant
error correction bits, is given by the modulation order: 6 bits/sub carrier for 64QAM, 4
bits/sub carrier for 16 QAM, and so on. For example, a channel able to support
64QAM modulation could send six bits for each data carrier per symbol. But how
long is a symbol?
Eq. 3) Tb = 1/∆f.
For example, a 3.5 MHz channel has a useful symbol time of 1/15625 = 64 us.
However for multi-path channels, we must make allowances for variable delay
spread and time synchronization errors. In OFDM, this is accomplished by repeating
a fraction of the last portion of the useful symbol time and appending it to the
beginning of the symbol for a resulting symbol time of:
Eq. 4) Ts = Tb + G · Tb,
Where G is a fraction:
Eq. 5) G ∈ {1/32, 1/16, 1/8, 1/4}.
The repeated symbol fraction is called the “cyclic prefix”. A larger cyclic prefix implies
increased overhead (decreased capacity, since the cyclic prefix carries no new
information) but greater immunity to ISI from multi-path and synchronization errors.
For a 3.5 MHz channel the useful symbol time is 64 us and the minimum total
symbol time is Ts = 64 us + 64/32 us = 66 us. The raw channel capacity per symbol
is:
Eq. 6) Craw = 192 · k / Ts,
Where k is the bits per sub carrier for the modulation being used.
Assuming 64QAM modulation (6 bits per sub carrier): 192 data sub carriers x 6
bits/sub carrier / 66 us = 17.45 Mbps.
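The symbol timing (Eq. 3, Eq. 4) and the raw capacity arithmetic above can be sketched as follows (Python; values correspond to the 3.5 MHz, minimum cyclic prefix, 64QAM case in the text):

```python
# Raw channel capacity for 192 data sub carriers at modulation order k.
def raw_capacity_bps(delta_f, g, k, data_carriers=192):
    tb = 1 / delta_f               # Eq. 3: useful symbol time
    ts = tb * (1 + g)              # Eq. 4: total symbol time with cyclic prefix
    return data_carriers * k / ts  # raw bits per second

c_raw = raw_capacity_bps(delta_f=15625, g=1 / 32, k=6)  # 64QAM, G = 1/32
print(c_raw / 1e6)  # ~17.45 Mbps
```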
But in any practical wireless system we can expect to have occasional errors
introduced by imperfect transmission, the airlink, or imperfect detection. The solution
is to send redundant bits with the information bits in each symbol to aid in error
detection and correction, a technique known as Forward Error Correction (FEC). In
the OFDM PHY FEC is done using a combination of a Reed-Solomon outer code
combined with a convolutional inner code. Adding redundant bits adds overhead and
reduces the useful capacity. The useful capacity of the combined 192 data sub
carriers therefore depends on the overall coding rate as given by the following table
reproduced from the standard (Table 215 in IEEE 802.16-2004).

Table 1 - OFDM PHY modulation and coding (Table 215 in IEEE 802.16-2004)
Modulation   Uncoded block (Bytes)   Coded block (Bytes)   Overall coding rate
BPSK         12                      24                    1/2
QPSK         24                      48                    1/2
QPSK         36                      48                    3/4
16QAM        48                      96                    1/2
16QAM        72                      96                    3/4
64QAM        96                      144                   2/3
64QAM        108                     144                   3/4
Notice that the modulation rates are designed so that an FEC coded block just fits in
one symbol time when all 192 sub carriers are used. For instance for 64QAM, 144
Bytes = 1152 bits / 6 bits/sub carrier = 192 sub carriers.
The useful channel capacity per symbol is:
Eq. 7) C = Craw · OCR,
Where OCR is the overall coding rate given in the table. For example, for a 3.5 MHz
channel the useful channel capacity per symbol assuming the highest rate
modulation and coding is: C = 17.45 Mbps x 3/4 = 13.1 Mbps.2
The spectral efficiency of the channel is defined as:
Eq. 8) E = C / BW.
We can see that our 3.5 MHz channel has a spectral efficiency (so far) up to 13.1
Mbps / 3.5 MHz = 3.74 b/s/Hz. The spectral efficiency is a useful figure of merit to
keep in mind because it lets you quickly calculate the capacity for other channel
sizes that WiMAX supports.
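The useful capacity and spectral efficiency figures above follow directly from the raw capacity and the overall coding rate; a short Python sketch for the 64QAM-3/4, 3.5 MHz case:

```python
def useful_capacity_bps(c_raw, ocr):
    return c_raw * ocr             # remove the FEC redundancy overhead

def spectral_efficiency(c, bw_hz):
    return c / bw_hz               # Eq. 8: useful bits per second per Hz

c = useful_capacity_bps(17.45e6, 3 / 4)
print(c / 1e6)                        # ~13.1 Mbps useful capacity
print(spectral_efficiency(c, 3.5e6))  # ~3.74 b/s/Hz
```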
2 By now at least some readers must be wondering what happened to the often-hyped 75 Mbps channel capacity for WiMAX?
Taking the very largest channel size, 20 MHz, highest coding rate, and minimum cyclic prefix, the raw channel size using
equation 6 is: Craw = 192 x 6 b/sub carrier / 11.3 us = 102.0 Mbps. The useful channel size from equation 7 is: C = Craw x ¾ =
76.5 Mbps. Of course we have said nothing about the (short) range of such a hypothetical channel, and we should be aware
that this is before taking out other PHY and MAC layer overhead that, as we will see, is significant. To be blunt, talking about 75
Mbps WiMAX channels for MAN applications is about as meaningful as quoting the top end speed marked on the speedometer
of a family minivan.
In the OFDM PHY specification the allowed frame sizes are: Tf = {2.5, 4, 5, 8, 10,
12.5, 20 ms}. For a 3.5 MHz channel width with a 1/8 cyclic prefix, the symbol length
is 72 us. Assuming a 10 ms frame length the whole number of symbols per frame is:
Eq. 9) Nsym = floor(Tf / Ts) = floor(10 ms / 72 us) = 138.
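The whole-number-of-symbols constraint can be checked numerically (Python; 3.5 MHz channel, 1/8 cyclic prefix, 10 ms frame):

```python
import math

def symbols_per_frame(tf_s, tb_s, g):
    ts = tb_s * (1 + g)            # total symbol time including cyclic prefix
    return math.floor(tf_s / ts)   # only a whole number of symbols fits per frame

n = symbols_per_frame(tf_s=10e-3, tb_s=64e-6, g=1 / 8)
print(n)  # 138 symbols per 10 ms frame
```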
This discussion assumes that Frequency Division Duplex (FDD) channels are used,
which use separate spectrum for the transmit and receive channels. WiMAX also
supports Time Division Duplex (TDD) channels, which use the same spectrum for
the receive and transmit channels. TDD is used primarily in unlicensed spectrum
deployments. With TDD, the transmit and receive frames are adjustable in length
and there is a mandatory guard time gap between them which can increase the
overhead slightly. The constraint that the transmit and receive frames must have a
whole number of symbols remains.
3.4 Preambles
3.4.1 Synchronization
Receivers need a way of synchronizing to the beginning of the TDMA frame and
symbol time.
3.4.2 Ranging
In a WiMAX system consisting of a base station communicating with a collection of
subscriber stations at different ranges, we need a method of compensating for the
variable transit delay over the airlink so that the base station can coordinate selective
use of the uplink and avoid receiving symbols that overlap in time. This is
accomplished by measuring the distance (delay) between each subscriber station
and the base station. The goal is to make each subscriber station appear to be
collocated with the base station in terms of the transmission alignment.
The details of the methods used (part of the “initial ranging” and “periodic ranging”
processes) are unimportant to our capacity discussion except that periodically, a
base station will allocate one or more uplink symbols for listening to new subscriber
stations joining the network and reporting their delay compensation value and other
data needed to communicate with the base station. These ranging opportunities are
allocated on the uplink assuming a two-symbol preamble. The total allocation is
therefore at least three symbols long.5 How often the base station listens for ranging
information is configurable but in steady state an allocation every few hundred
OFDM frames is reasonable.6 The overhead is therefore normally quite negligible
and affects the uplink frame only.
3
In the 802.16 OFDM PHY protocol a burst is a consecutive group of symbols in the time axis by all of data sub carriers in the
frequency axis. For the uplink, if optional sub-channels are supported, multiple simultaneous bursts can be supported by
dividing the data sub carriers into groups. A burst is confined to a TDMA frame and must use the same channel parameters
such as modulation and coding, transmitted power, etc., during the burst.
4
Granted, 108 Bytes is not a lot of data, but the example is not entirely academic: many compressed VoIP implementations
generate packet sizes less than 100 Bytes every 10 to 20 ms. Aggregating the packets together implies increased end-to-end
latency that may not be acceptable.
5
It could be longer at the base station’s discretion to allow for more subscriber stations to join at once with smaller chance of
colliding with each other’s requests. Normally this is only an issue for initial base station startup where there could be a large
number of subscriber stations trying to join at once.
6
Section 10.1 in IEEE 802.16-2004 defines the maximum interval between initial ranging opportunities as 2 seconds.
3.4.3 Midambles
Midambles are allowed for OFDM PHY bursts under certain circumstances. The goal
is to improve channel estimation particularly in mobile scenarios.
For the uplink, midambles, if enabled, are inserted every 8, 16, or 32 data symbols.
For the downlink, midambles are optional if downlink sub-channels are implemented
as specified in the IEEE 802.16e-2005 amendment.
3.5 Sub-channels
The OFDM PHY allows the uplink channel to be subdivided into 16 sub-channels in
order to allow subscriber stations to concentrate their transmission power into fewer
data sub carriers in each symbol.7 This also lets multiple subscriber stations share
the channel simultaneously, which increases the flexibility (and scheduler
complexity) for efficiently using the uplink channel. Support of sub-channels by
subscriber stations is optional.
7
Sub-channels can be useful for balancing the uplink and downlink link margins since ordinarily subscriber stations, compared
to the base station, have much lower radiated power capability due to cost and antenna constraints. Regulatory power density
limits, as always, must be observed when transmitting in fewer sub-channels.
However, there are restrictions on how the 802.16 OFDM PHY organizes the data
sub carriers into Minimum Allocation Units (MAU). The MAU is the smallest two-
dimensional quantum of frequency and time that can be allocated for sending
information across the channel. In the OFDM PHY the MAU’s useful capacity (Bytes)
is variable and depends on the chosen modulation and coding according to the
following:
Eq. 10) MAU = Nc · Nsc / 16,
Where Nc = uncoded block size in Bytes (see Table 1), and Nsc = number of allocated
sub-channels (1..16) for the uplink or 16 for the downlink.8
3.6.1 Downlink
The downlink does not support sub-channels for the OFDM PHY with the current
baseline specification.9 The downlink MAU is therefore one symbol by all (192) data
sub carriers. The second column in Table 1 shows the MAU size in number of Bytes,
which varies with coding and modulation. This size allows each symbol to carry
exactly one FEC block, which protects the integrity of the data over the wireless
channel.
8
To be completely accurate, each burst should have one Byte subtracted from the total size because of the tail bits required to
flush the convolutional coder. For instance, if an uplink burst consisted of only one MAU, the total size would be one Byte less
than calculated in the equation.
9
Downlink sub-channel support in PMP deployments was recently added in the IEEE 802.16e-2005 amendment (see
8.3.5.1.1) for improved frequency reuse, lower overhead, and mobility support. It remains to be seen how widely this feature
will be adopted for fixed applications, which is the focus of this white paper.
3.6.2 Uplink
The uplink supports optional sub-channels for the OFDM PHY.
By default, when sub-channels are not used, the MAU is the same as the downlink:
one symbol by all data sub carriers. The second column in Table 1 shows the MAU
size in number of Bytes in that case which varies with coding and modulation. This
size allows each symbol to carry exactly one FEC block.
When sub-channels are implemented, the MAU is as shown in the table but divided
by the number of sub-channels allocated to the subscriber station. For example in
the case of 64QAM-2/3, if a single sub-channel is allocated to a subscriber station,
the MAU is 96 Bytes / 16 = 6 Bytes.
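The MAU sizing rule (Eq. 10) for sub-channelized uplinks can be sketched as follows (Python; the Nc values are the block sizes from Table 1):

```python
def mau_bytes(nc, nsc):
    # Eq. 10: MAU useful capacity scales with the number of allocated sub-channels.
    return nc * nsc // 16

print(mau_bytes(nc=96, nsc=16))  # downlink / full channel at 64QAM-2/3: 96 Bytes
print(mau_bytes(nc=96, nsc=1))   # one uplink sub-channel at 64QAM-2/3: 6 Bytes
```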
For the uplink, the base station controls access to the channel that is shared
between multiple subscriber stations. Only a single subscriber station can transmit
on a (sub)channel at once. As with the downlink, packing and fragmentation features
at the MAC layer can be used to fit the size of the packets to be sent to the
MAU. But because an uplink burst can only be used by a single subscriber station,
there is less opportunity for the base station to optimally fit the amount of uplink data
to the MAU quanta. Mitigating this disadvantage is that the MAU is smaller when
sub-channels are supported. However there can be cases where the amount of data
to be sent in a burst just spills over a MAU boundary, and in those cases a nearly
empty MAU is sent, representing additional channel overhead.
10
The reason is that most systems adjust the transmitted power or receiver attenuation as a first line of defense against
channel fades, and then the code rate, and finally the modulation if necessary. Diversity also adds increased margin against
channel fades and adds stability.
In WiMAX diversity is optional but can be supported via Space-Time Coding and
Maximal Ratio Combining.11 We focus on the added overhead required rather than
implementation details.
At the PHY layer, implementing STC adds some overhead by requiring an extra
preamble in each OFDM frame. Bursts of data on the downlink that will be sent over
the two transmit chains must be preceded by a one-symbol preamble. The only extra
requirement is that the total number of symbols must be a multiple of two since the
receiver processes them in pairs according to the Alamouti algorithm. Only one STC
group of symbols is allowed per frame, and once it begins the base station must
transmit from both antennas until the end of the frame. The additional overhead is
therefore one symbol in each frame.
3.8.2 MIMO
Generalized MIMO diversity is not supported for the OFDM PHY considered in this
white paper. MIMO is supported for the OFDMA PHY, which supports mobile
applications.
11
Maximal Ratio Combining (MRC) is typically used only at the base station for the uplink since it increases the cost of the
receiver hardware by requiring a second receiver chain. Received signal quality is improved by combining the two signals in
proportion to the ratio of the signal to noise levels. MRC adds no additional coding overhead.
The benefits of AAS are enhanced system capacity, in theory scaling linearly with
the number of base station antennas assuming randomly located subscriber stations.
In addition there are SNR gains available arising from coherent antenna element
signal detection, and directing gain towards subscriber stations of interest while
simultaneously placing nulls on interfering transmitters. All this comes at the expense
of additional base station antenna complexity and processing.
Because this white paper focuses on fixed (non-AAS) infrastructure we will not include
AAS in our capacity discussion except to note that the added complexity of
managing the space-time channel access on the downlink implies additional
management overhead that would need to be accounted for.
In the last section we looked at the overhead added by the physical layer to a
WiMAX channel. In this section we move up to the next layer in the protocol stack
and examine the Media Access Control (MAC) layer. In contrast to the physical
layer, where much of the overhead is fixed, the MAC layer introduces many variable
overhead elements that are either configuration-dependent, traffic-dependent, or
both. We begin by examining the structure of the MAC PDU.
Figure 2 - MAC PDU Formats

Generic MAC PDU:
  | Generic MAC Header (6B) | Subheaders (optional, variable) | Payload (optional, 0 - 2041 B) | CRC (optional, 4B) |
Bandwidth Request PDU:
  | Bandwidth Request Header (6B) |
The structure of the MAC PDU is shown in Figure 2. The MAC header comes in one
of two forms: the Generic MAC Header (GMH), or a Bandwidth Request Header
(BRH). Both GMH and BRH are fixed length and six Bytes long.
The structure of the GMH is shown in the following figure reproduced from the
standard. Notice that the length field (LEN) is 11 bits and therefore can specify a
MAC PDU including the header up to 2047 Bytes. The Connection Identifier (CID)
field identifies the virtual connection/service-flow of the MAC PDU.
The GMH may optionally have one or more appended sub-headers as follows:
• Fragmentation Sub header (2B, optionally 1B)
• Packing (3B, optionally 2B)
• Grant Management (2B)
• Mesh Sub header (2B)
• Fast-Feedback-Allocation (1B)
• Extended Sub header (variable length).
The sub headers can occur only once per MAC PDU except for the Packing sub
header, which may be inserted before each MAC SDU packed into the payload.
Following the GMH and optional sub headers comes the optional payload which is
variable length up to (2047 – 6) = 2041 Bytes but with the restriction that the entire
MAC PDU including header, sub headers, payload and CRC must be less than 2048
Bytes.
Following the GMH, optional sub headers, and optional payload, is the optional CRC,
which is four Bytes long.
MAC PDUs will always begin with either a GMH or a BRH and are therefore at least
six Bytes long. MAC PDUs transporting data payloads will always begin with a
GMH.
With this background we can understand the overhead added by the MAC headers,
sub headers, and checksum to the payload being transported. From the figure we
see that MAC PDUs are variable in length and can be as short as six Bytes, or as
long as 2047 Bytes (2^11 - 1). The overhead for transporting the payload therefore
depends on the size of the payload. For example, a 1514B Ethernet frame (preamble
and CRC removed) has a minimum per-PDU MAC overhead of 10 / (1514+10) =
0.7% assuming no sub-headers but including a MAC CRC. On the other hand a
single short packet such as a 40B TCP/IP ACK over Ethernet would have a
minimum per-PDU MAC overhead of 10 / (54+10) = 15.6% overhead. No wonder the
standard allows multiple user packets to be packed into a single MAC PDU to
improve the efficiency of the wireless channel!
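The per-PDU overhead arithmetic above can be reproduced as follows (Python; a 6B generic header plus a 4B CRC, no sub headers):

```python
def mac_overhead(payload_bytes, header=6, crc=4):
    oh = header + crc
    return oh / (payload_bytes + oh)   # overhead fraction of the whole PDU

print(round(mac_overhead(1514) * 100, 1))  # ~0.7% for a full Ethernet frame
print(round(mac_overhead(54) * 100, 1))    # ~15.6% for a 40B TCP ACK over Ethernet
```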
4.1.1.1 Fragmentation
Fragmentation refers to splitting a MAC SDU across multiple MAC PDUs.12 The idea
is to allow better packing of MAC SDUs into the available OFDM frequency-time
resources by using all data sub carriers in each OFDM symbol. Use of fragmentation
is optional but encouraged to improve link efficiency.
For capacity analysis, it is reasonable to assume that some fraction of the MAC
PDUs will be fragmented. The variable overhead is an additional two Bytes added
to the 802.16 MAC header for each fragment. A worst-case assumption is to
assume that each MAC PDU includes a sub header when fragmentation is
supported. Both downlink and uplink channels are affected.
4.1.1.2 Packing
Packing refers to combining two or more MAC SDUs into a single MAC PDU. Like its
converse, fragmentation, this allows better packing of MAC SDUs into the available
OFDM frequency-time resources by using all data sub carriers in each OFDM
symbol. Use of packing is optional but encouraged to improve link efficiency.
For capacity analysis, it is reasonable to assume that some fraction of the MAC
PDUs will be packed. The variable overhead is an additional three Bytes added to
the 802.16 MAC header for each packed SDU.13 A worst-case assumption is to
assume that each MAC PDU includes one or more sub headers when packing is
supported; the exact number depending on the relative sizes of the SDUs and PDU.
Both downlink and uplink channels are affected.
Normally packing and fragmentation are either both supported or not at all. Since
packing and fragmentation are mutually exclusive operations for a given MAC SDU
we can conservatively estimate that, on average, one packing sub header will be
present in each MAC PDU increasing the total header overhead by three bytes.
12
The PDUs however must still be part of the same transmission burst and cannot be split across TDMA frames.
13
An exception is made for connections with fixed-length SDUs where a packing sub header is not required.
For capacity analysis the worst-case assumption is that each uplink MAC PDU
contains a GM sub header. A more realistic assumption is that 10% of the uplink
MAC PDUs carry the additional GM sub header overhead.
4.1.1.4 Mesh
The Mesh sub header is intended to support mesh operation where the subscriber
stations are allowed to communicate directly without a base station to relay the data.
Although the foundations are present in the 802.16 specification, the mesh portion
has received comparatively little attention and is relatively immature, reflecting the
current market focus on cell coverage deployments with base stations. Accordingly,
we will ignore mesh operation in this white paper.
4.1.1.5 Fast-Feedback-Allocation
The Fast-Feedback-Allocation (FFA) sub header is intended as a low-overhead and
low-latency method for allocating a small temporary uplink channel for the subscriber
station to communicate link information to the base station. It is presently supported
only for the OFDMA PHY for mobility and is therefore out of the scope of this white
paper.
The overhead associated with the FCH is fixed: one MAU in each downlink frame.
The percentage overhead only changes if the frame length is changed.
Examples of data that may be in the first broadcast burst include maps, burst
profile descriptions (UCD, DCD), grant allocations for initial ranging, grant allocations
for contention bandwidth requests, and so on. Although all subscriber stations listen
to the broadcast bursts, there may also be ordinary MAC PDUs with broadcast,
multicast, or unicast CIDs sent within a burst.14 It is up to the subscriber station to
classify the CIDs in each MAC PDU header to identify traffic destined for it.
14
The waters are a bit murky here. Presumably a subscriber station must listen to all FCH referenced broadcast bursts that it is
capable of receiving, i.e. at the modulation and coding it uses and all lower combinations. On the other hand, it is impossible for
a subscriber station to decode bursts that are using a higher modulation and coding combination. It is the base station’s
responsibility to ensure that all broadcast information, including the maps, DCD, UCD, and so on, are sent in FCH referenced
bursts that use the lowest common modulation and coding.
The downlink map IE references a specific MAC connection (CID) and a burst profile
code (DIUC) so that a subscriber station can know whether a burst contains traffic
destined for it or not. This capability allows a subscriber station to skip bursts in the
downlink frame that contain no relevant traffic, thus reducing the processing load.
Multiple downlink map IEs may point to the same burst, so that a burst can be shared
between one or more subscriber stations. The subscriber station must therefore be
capable of classifying the MAC PDU header CIDs to select the traffic destined for it.
How often downlink maps are sent, and therefore the amount of downlink overhead
they represent, depends on the channel configuration and how the base station
schedules the traffic. In cases where four or fewer burst profiles are in use, the FCH
alone could be used to reference all traffic without the need for a downlink map. On
the other hand, maps can be used to reduce the amount of downlink processing at
the subscriber station at the expense of increased map overhead. Normally this is a
design choice made by the base station designers.
For worst-case capacity estimation, each downlink frame includes a downlink map
whose size is determined by the number of active connections sharing the frame.
For example, if there were ten active connections, the size of the basic downlink map
would be 8 + 11*4 = 52 Bytes (excluding any extended information elements). How
much overhead this takes up in a frame depends on the modulation and coding used
to send the map. A worst-case assumption is that there will be some subscriber
stations capable of only BPSK modulation and the burst containing the maps will
therefore be forced to use this. In our example, a 52 Byte downlink map would
occupy five MAUs of the downlink frame using BPSK.
Notice that there is a recursive element to the size of the downlink map, set by the
number of active stations, which influences the channel capacity, which determines
the possible number of active stations! This suggests that accurate capacity analysis
needs to iterate to a self-consistent solution.
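The self-consistency noted above can be resolved by simple fixed-point iteration. The sketch below is hypothetical: the per-connection demand and frame capacity numbers are illustrative placeholders, not values from the standard. It alternates between estimating the map overhead from the connection count and the connection count from the remaining capacity:

```python
# Hypothetical fixed-point iteration: the map size depends on the number of
# active connections, which depends on the capacity left after the map overhead.
def consistent_connections(frame_bytes, per_conn_bytes, map_header=8, ie_bytes=4):
    n = 0
    for _ in range(100):  # iterate until the estimate stops changing
        map_bytes = map_header + (n + 1) * ie_bytes  # N IEs plus end-of-map IE
        n_new = max(0, (frame_bytes - map_bytes) // per_conn_bytes)
        if n_new == n:
            return n
        n = n_new
    return n

# Illustrative numbers only: a 5000-Byte frame, 450 Bytes per connection.
print(consistent_connections(5000, 450))
```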
The uplink map begins with 11 Bytes of header information followed by one or more
information elements (UL-MAP_IE). Each UL-MAP_IE is six Bytes long but can
optionally contain variable length extensions. The basic size of the uplink map is
therefore 11 + N*6 Bytes, where N is the number of UL-MAP_IE using the frame.
There is one information element for each active connection using the frame and the
map must terminate with an IE marking the end of the map. Unless contention
access is intended, the subscriber station referenced in an UL-MAP_IE by CID must
refer to a unique uplink burst since each burst can only be used by a single station.
How often uplink maps are sent, and therefore the amount of downlink overhead
they represent, depends on the channel configuration and how the base station
schedules the traffic.
For capacity estimation, each downlink frame includes an uplink map whose size is
determined by the number of active connections that will share the uplink frame. For
example, if there were ten active connections the size of the basic uplink map would
be 11 + 11*6 = 77 Bytes (excluding any extended information elements). Again, how
much overhead this takes up in a frame depends on modulation and coding used to
send the map. A worst-case assumption is that there will be some subscriber
stations capable of receiving only BPSK modulation and the burst containing the
maps will therefore be forced to use this. In our example, a 77 Byte uplink map
would occupy seven MAUs of the downlink frame using BPSK.
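The downlink and uplink map sizes, and their cost in BPSK MAUs, follow directly from the figures above (Python; the 12-Byte BPSK-1/2 MAU is the Table 1 value):

```python
import math

def dl_map_bytes(n_ie):
    return 8 + n_ie * 4            # 8-Byte header plus 4-Byte information elements

def ul_map_bytes(n_ie):
    return 11 + n_ie * 6           # 11-Byte header plus 6-Byte information elements

def maus_needed(size_bytes, mau=12):  # BPSK-1/2 MAU is 12 Bytes
    return math.ceil(size_bytes / mau)

# Ten active connections plus the end-of-map IE (11 IEs total):
print(dl_map_bytes(11), maus_needed(dl_map_bytes(11)))  # 52 Bytes, 5 MAUs
print(ul_map_bytes(11), maus_needed(ul_map_bytes(11)))  # 77 Bytes, 7 MAUs
```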
How often downlink or uplink channel descriptors are sent, and therefore the amount
of downlink overhead they represent, depends on the channel configuration and how
often conditions change, thus forcing updates. For example the types of modulation
and coding in use could vary over time. In general however the burst profiles will be
relatively static and we can ignore them for capacity estimation.
In this section we will describe the basic mechanisms without explaining why they
exist. Later we will see how these are used for providing quality of service (QoS) and
further discuss their ramifications on channel overhead in the context of QoS
connection profiles.
4.3.1 Contention
Contention based uplink access is a method where the base station periodically
allocates part of the uplink channel capacity (grants a “transmit opportunity”) to
specified stations that might have data to send. The stations in the contention group
are identified by their CID. There are two ways used to manage the process.
The overhead associated with this method is in the small addition to the size of the
uplink map (see 4.2.3), but mainly in the contention allocation itself, which must allow
for the most robust modulation and coding combination. The interval between
contention allocations is configurable.
15
Whether a base station supports Focused Contention is an implementation decision. The mandatory Full Contention method
suffers from congestion collapse beyond a ‘knee’ in the curve of access latency versus the number of active subscriber stations. Where this point lies varies according to the uplink traffic patterns of the subscriber stations. Focused Contention, on the other hand, is much more robust and can handle situations where there are a large number of simultaneously contending subscriber stations. The tradeoff is the increased overhead and latency of the three-way handshake.
The overhead associated with this method is in the small addition to the size of the
uplink map (see 4.2.3), but mainly in the uplink contention allocation itself, plus the
following bandwidth request allocation. The interval between contention allocations is
configurable.
The uplink contention allocation is two OFDM symbols spanning all data carriers. This allocation is shared by all requesting stations as explained above.
The subsequent uplink bandwidth request allocation, one for each requesting
subscriber station, includes a one-symbol preamble followed by one or more
symbols configured for the allocation. The number of symbols depends on the
modulation and coding and the number of sub-channels used for the allocation. The
size of the allocation should be sufficient to send one BRH. For example, since the
BRH is 6 + 4 = 10 Bytes, and assuming a CRC is used, the number of required
symbols is 10 / sizeof(MAU). If there are four sub-channels in the allocation, then
from Eq. 10 and Table 1, and assuming BPSK 1/2 modulation and coding, four
OFDM symbols are required to hold one BRH.
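The symbol count in this example can be reproduced as follows; the 192 data subcarriers and 16 total sub-channels are assumptions about the OFDM PHY used for illustration, not values stated in this passage:

```python
import math

def brh_symbols(brh_bytes=10, subchannels=4, total_subchannels=16,
                data_carriers=192, bits_per_carrier=1, code_rate=0.5):
    # carriers available to the allocation in one OFDM symbol
    carriers = data_carriers // total_subchannels * subchannels
    # Bytes carried per symbol at the given modulation and coding
    bytes_per_symbol = carriers * bits_per_carrier * code_rate / 8
    return math.ceil(brh_bytes / bytes_per_symbol)

print(brh_symbols())  # 4 symbols for one 10-Byte BRH at BPSK 1/2
```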
4.3.2 Polling
Polling is a process where the base station periodically allocates part of the uplink
channel capacity (issues a “grant” or “transmit opportunity” in the uplink map) to each
participating subscriber station that might have data to send. The transmit
opportunity itself is the poll; there is no explicit message type. The subscriber
stations use the transmit opportunity to send a BRH to request uplink bandwidth. The
grants must therefore be at least large enough to send one BRH.
Polls may be unicast or multicast or broadcast according to the CID specified in the
uplink map transmit opportunity information element. If a poll is multicast or
broadcast then one of the contention bandwidth request methods (full or focused) is
specified to collect the bandwidth request responses. Unicast polls are directed
towards a single CID associated with a single subscriber station.
The overhead associated with this method is in the small addition to the size of the uplink map (see 4.2.3), but mainly in the request allocation itself, which must be at least large enough to send one BRH.
16
The odds of choosing the same Focused Contention sub-channel are 1 in 50 and choosing the same code are 1 in 8. The
overall collision probability for a given contention opportunity is therefore 1/(50*8) = 0.25%. The tradeoff is the additional
complexity, latency, and overhead of the method’s three-way handshake.
4.3.3 Piggyback
A piggyback bandwidth request is a method of using a previously granted uplink
channel access opportunity to inform the base station that a subscriber station
requires another allocation to send pending data. The idea is that once a subscriber
station obtains uplink channel access it can use the channel for future bandwidth
requests without incurring the overhead associated with contention or polling. This is
most useful when a subscriber station connection has long consecutive trains of data
packets to send.
To improve efficiency, the method uses the Grant Management (GM) subheader of the GMH (see 4.1.1.3). Piggybacking adds two bytes of overhead to the length of a MAC PDU. Support for piggyback requests by a subscriber station is optional.
17
In cases where there are a large number of inactive subscriber stations to poll, supporting unicast polling to each of them
would be an inefficient use of the downlink frame’s uplink map, and the uplink frame’s bandwidth request opportunities.
Multicast polling was designed with this case in mind to improve bandwidth efficiency. The collection of subscriber stations is assigned to a special bandwidth request multicast group and only those that have traffic to send respond to the multicast polls.
Broadcast polling is similar, except that all subscriber stations that have traffic to send respond to the polls.
[Figure 4 – Downlink overhead accounting sequence (flowchart; steps recovered from the figure): (1) input channel size (MHz); (2) input cyclic prefix (1/4 to 1/32); (3) input mod and coding distribution; (4) calc useful channel bw; (5) input frame length (2.5 to 20 ms); (6) calc frame o/h; (7) calc frame bw; (8) input avg user pk (B); (9) calc FCH o/h; (10) calc DL map o/h; (11) calc UL map o/h; (12) calc useful frame bw; (13) calc preamble o/h; (14) calc MAC hdr o/h; (15) calc MAC subhdr o/h; (16) calc MAC CRC o/h; (17) calc useful MAC bw]
Beginning with the downlink, Figure 4 presents the sequence diagram for accounting
for the various PHY and MAC overhead contributions in order to calculate the useful
channel bandwidth. In the figure there are four intermediate results represented in
the four columns. In column one the raw channel bandwidth is determined. In
column two the basic TDMA framing overhead is accounted for. In column three the
required preamble and channel management overhead is taken out. In column four
the various per-packet overhead is accounted for resulting finally in the useful user
bandwidth available at the input of the MAC protocol layer.
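The four-column flow can be sketched as a simple pipeline. The formulas below are simplified placeholders under assumed overhead inputs, not the paper's exact equations:

```python
def useful_downlink_bw(raw_channel_bps, cp_fraction,
                       framing_oh_bps, preamble_mgmt_oh_bps,
                       per_packet_oh_fraction):
    # column 1: raw channel bandwidth after the cyclic prefix
    ch = raw_channel_bps * (1 - cp_fraction)
    # column 2: basic TDMA framing overhead
    frame = ch - framing_oh_bps
    # column 3: preamble, FCH, and DL/UL map overhead
    frame -= preamble_mgmt_oh_bps
    # column 4: per-packet MAC header, subheader, and CRC overhead
    return frame * (1 - per_packet_oh_fraction)

# illustrative numbers only
print(useful_downlink_bw(10e6, 0.2, 1e6, 1e6, 0.1) / 1e6)  # 5.4 (Mbps)
```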
[Figure 5 – Uplink overhead accounting sequence (flowchart; steps recovered from the figure): (1) input channel size (MHz); (2) input cyclic prefix (1/4 to 1/32); (3) input mod and coding distribution; (4) calc useful channel bw; (5) input frame length (2.5 to 20 ms); (6) calc frame o/h; (7) calc frame bw; (8) input avg user pk (B); (9) input subch size; (10) calc MAU; (11) calc contention o/h; (12) input burst size; (13) calc subch o/h; (14) calc preamble o/h; (15) calc useful frame bw; (16) calc ranging o/h; (17) calc MAC hdr o/h; (18) calc MAC subhdr o/h; (19) calc MAC CRC o/h; (20) calc useful MAC bw]
Figure 5 presents the companion sequence diagram for the uplink, which similarly
accounts for the various PHY and MAC overhead contributions leading to the useful
channel bandwidth. As with the downlink there are four intermediate results
represented in the four columns. The first two and last columns are identical to the
downlink. The third column accounts for the required preambles and required
channel management overhead, which differs from downlink.
Again, user inputs are required to complete the calculation. These inputs are
indicated in steps 1, 2, 3, 5, 9, 12, 16 of Figure 5 as previously discussed in sections
3 and 4.
Symbol | Value | Units | Description
sc | 8 | subch | Configured sub-channels per SS (1, 2, 4, 8, 16); 16 means the SS uses all data subcarriers during uplink access
Nfull | 1 | # / frame | Number of Full Contention allocations per frame
Nfocused | 0 | # / frame | Number of Focused Contention allocations per frame (optional; set to 0 if not supported)
NfocusedSS | 5 | # / frame | Number of SS contending with Focused Contention per frame
Npolls | 5 | # / frame | Number of bandwidth request polls per frame (one for each polled connection)
ranging | 3 | symbols | Initial ranging grant (3 symbols per ranging opportunity)
MAU | 44.85 | Bytes | Minimum Allocation Unit (includes average coding and modulation)
MAUpf | 276 | MAU | Number of MAU per frame
full_cont | 2 | MAU | Full Contention request allocation
focused_cont1 | 0 | symbols | Focused Contention request allocation - phase 1
focused_cont2 | 10 | MAU | Focused Contention request allocation - phase 2
p | 10 | MAU | Polling allocations
Nb | 1 | PDU | Average number of MAC PDUs per burst
b | 78 | Bytes | Average burst size (depends on payload size below)
b_MAU | 3 | MAU | Average burst allocation
bpf | 68 | # | User traffic bursts per frame (ignores fractionally filled bursts and suboptimal packing efficiency)
Cf_useful | 5.86 | Mbps | Useful downlink frame bandwidth
18
In the Microsoft Word version of this document the spreadsheet is linked as an embedded object (right-click > worksheet object > open). Otherwise the spreadsheet model file is available on request to SR Telecom.
On the downlink, the base station directly controls the scheduling of traffic and
allocation of the frequency-time channel resources. Dedicating a portion of the
channel bandwidth for CBR flows is therefore a matter of keeping track of the
allocated resources and transporting any available packets from appropriately
classified traffic.
For the uplink the Unsolicited Grant Service (UGS) scheduling method is used. The
base station dedicates a portion of the uplink channel bandwidth to a Subscriber
Station corresponding to one or more service flows for the duration of the flow. The
base station communicates this assignment to the Subscriber Station in the uplink
channel usage maps that are periodically broadcast out to all stations.
From a capacity standpoint, the key CBR QoS parameter is the unvarying Maximum
Sustained Traffic Rate, which is the committed information rate for the flow. The
maximum rate is unconditionally dedicated to the flow and therefore can be directly
subtracted from the available user channel size to determine the remaining capacity.
The only overhead associated with CBR flows is the UGS grant overhead, which
increases the size of the uplink channel usage map.
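Because the committed rate is simply subtracted, the remaining capacity after admitting CBR flows reduces to a sum. A minimal sketch, using hypothetical flow rates:

```python
def remaining_after_cbr(channel_bps, cbr_rates_bps):
    # each CBR flow's Maximum Sustained Traffic Rate is unconditionally dedicated
    committed = sum(cbr_rates_bps)
    if committed > channel_bps:
        raise ValueError("CBR commitments exceed the usable channel")
    return channel_bps - committed

# e.g. a 5.2 Mbps uplink carrying three 82 kbps VoIP-style flows
print(remaining_after_cbr(5.2e6, [82e3, 82e3, 82e3]))  # 4954000.0
```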
Although the bandwidth is dedicated for a CBR service flow, the base station
scheduler implementation could still elect to temporarily “borrow” the dedicated
bandwidth on the downlink frame if there is no CBR traffic to send. The scheduler
must however issue uplink grants according to the CBR service flow configuration
whether or not the subscriber station has any traffic to send (the scheduler has no
way of knowing in advance).
On the downlink, the base station directly controls the scheduling of traffic and
allocation of the frequency-time channel resources. Dedicating a portion of the
channel bandwidth is therefore a matter of keeping track of the allocated resources
and transporting any available packets from appropriately classified traffic. The base
station performs this scheduling successively for each TDMA frame that is sent out
(e.g. every 10 ms) so that the time varying nature of the VBR traffic can be
supported in real time.
For the uplink there are several scheduling methods depending on the QoS
requirements for the service flow.
For flows with strict real time access constraints, periodic polling assures that the
subscriber station will have guaranteed channel access up to a specified Minimum
Reserved Traffic Rate. Real time Polling Service (rtPS) operates by having the base
station poll individual subscriber stations periodically (e.g. every frame) to solicit
bandwidth requests (see 4.3.2). Extended real time Polling Service (ertPS) operates
more like UGS except that the committed maximum rate can be changed on the fly
as controlled by subscriber station signaling.
For flows with looser real time access constraints, non real time Polling Service
(nrtPS) operates like rtPS except the polls can be directed at individual or groups of
subscriber stations, and the latency of the base station response to bandwidth
requests is not guaranteed.19 The subscriber stations can also use piggyback
methods to request continuing channel access (see 4.3.3).
For capacity calculations, the two key VBR QoS parameters are the Minimum
Reserved Traffic Rate and the Maximum Sustained Traffic Rate. For VBR, the
minimum rate corresponds to the committed information rate. Since the minimum
rate is guaranteed, it can be directly subtracted from the available user channel size
to determine the remaining capacity. The maximum rate is the peak information rate
that the system will permit. Traffic, submitted by a subscriber station at rates
bounded by the minimum and maximum rates, is dealt with by the base station on a
non-guaranteed basis. The overhead associated with VBR service comes from the
polling method (see 4.3.2), except for ertPS, which has essentially the same overhead as UGS, i.e. the size of the uplink channel usage map is increased for each active flow.
19
If the polls are directed at a group of subscriber stations the responses must use a contention bandwidth request interval to
respond since request collisions can occur.
On the downlink, the base station directly controls the scheduling of traffic and
allocation of the frequency-time channel resources. For best effort services, the
affected traffic is sent using surplus capacity that is available after satisfying other
guaranteed service types.
On the uplink, the base station should provide periodic contention intervals (see
4.3.1) in order for subscriber stations with best effort flows to submit their bandwidth
requests. The subscriber stations can also use piggyback methods to request
continuing channel access (see 4.3.3).
The overhead associated with best effort services comes from providing the
contention intervals for bandwidth requests.
20
Note that the figure illustrates the case where the scheduler actually has traffic to fill the guaranteed portion of the channel. If
that were not the case then in theory the scheduler can temporarily borrow the guaranteed bandwidth to satisfy non-guaranteed
bandwidth requests. For capacity estimations we need to assume the worst case where the guaranteed bandwidth is in use.
21
This should not come as a surprise; the base station scheduler design is similarly not described by the standard. The authors
of the standard were trying to balance the conflicting requirements of creating a standard while allowing freedom where
possible for product differentiation and innovation.
[Figure 7 – Aggregate Channel Bandwidth Partitioning (figure; recovered band labels: B = guaranteed VBR (MR), C = non-guaranteed VBR (MS), D = BE)]
One simple way to deal with the issue might be to implement a policy of fair-sharing
the non-guaranteed bandwidth between VBR and BE. That is, equally divide any
remaining bandwidth up between all requesting VBR and BE service flows. The
problem with this approach is that it does not allow service providers much control to
differentiate their services. The other problem is that, while VBR can specify a
minimum information rate, BE services under severe congestion can be starved with
throughput rates approaching zero. A better solution is to provide a method for
prioritizing access to non-guaranteed bandwidth, which can be done by introducing
the concept of service flow over-subscription.22
6.5 Over-Subscription
Over-subscription, sometimes called over-booking, in simplest terms means taking
advantage of the fact that, for many systems, absolute peak demand on shared
resources rarely occurs. Examples are everywhere in daily life. Airlines aggressively
over-subscribe their seat capacity. Public telephone networks over-subscribe their
network switching capacity. The point of over-subscription is that system capacity
requirements can be significantly reduced if the requirement to handle absolute
worst-case scenarios is ignored. However, over-subscription comes at a price that is
related to trading hard guarantees of service for soft statistical guarantees.23
Depending on the nature of shared resource usage (“the traffic”), and how
aggressively the resource is over-subscribed, there can be exceptional periods
where there is more demand than can be served.
22
The 802.16 standard also includes the ability to specify a traffic priority QoS parameter for VBR and BE service flows. This
allows basic grouping of priority between sets of service flows. However, it does not distinguish between guaranteed and non-
guaranteed VBR traffic or allow division of priority beyond eight basic levels.
23
How mathematically rigorous the statistics of the guarantees are usually depends on how much is known about the offered
traffic. One well-known example is the blocking probability associated with traditional voice Erlang statistics. On the other hand,
mixed application packet data networks are notoriously difficult to treat with statistical methods for the general case. Often this
results in resorting to empirical rules derived from traffic measurements of a given user population.
In summary of this section, the problem of allocating the aggregate system capacity
to the various service flows must take into account the QoS requirements of those
flows. Dedicated or guaranteed bandwidth must be dealt with first and what remains
is shared by non-guaranteed services. Figure 8 illustrates an example of the
allocation sequence. In the figure, steps 8 and 9 are required only if there is more
demand for non-guaranteed combined VBR and BE bandwidth than can be served.
[Figure 8 – Capacity allocation sequence (flowchart; steps recovered from the figure): (2) alloc CBR bw; (3) calc remaining ch bw; (4) alloc VBR MR bw; (5) calc remaining ch bw; (7) calc BE MS bw; (8) calc VBR MS % remaining bw; (9) calc BE MS % remaining bw; (11) alloc BE MS bw; (12) no remaining bw, done]
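The allocation sequence can be sketched as follows. The proportional-sharing step for non-guaranteed bandwidth is one possible scheduler policy, not the only one the standard permits:

```python
def allocate(channel_bps, cbr_rates, vbr_flows, be_demands):
    # vbr_flows: list of (min_reserved, max_sustained) rate pairs in bps
    remaining = channel_bps - sum(cbr_rates)       # alloc CBR bw
    remaining -= sum(mn for mn, mx in vbr_flows)   # alloc guaranteed VBR (MR) bw
    if remaining < 0:
        raise ValueError("guaranteed load exceeds the channel")
    vbr_ms = sum(mx - mn for mn, mx in vbr_flows)  # non-guaranteed VBR demand
    be_ms = sum(be_demands)                        # best effort demand
    demand = vbr_ms + be_ms
    if demand <= remaining:
        return vbr_ms, be_ms                       # all requests served
    scale = remaining / demand                     # share what is left
    return vbr_ms * scale, be_ms * scale

print(allocate(10e6, [2e6], [(1e6, 3e6)], [4e6]))  # (2000000.0, 4000000.0)
```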
Using the spreadsheet model calculator in section 5 the usable user downlink
channel size is estimated to be 9.3 Mbps. The uplink channel size is estimated to be
5.2 Mbps. Because the downlink/uplink offered traffic ratio is expected to be 4:1, whereas the channel bandwidth ratio is about 2:1, the system will be constrained by downlink bandwidth and the capacity analysis can ignore the uplink.
<BW> = 60 kbps/sub.
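Under these figures the per-channel subscriber count is simple division. The result below is illustrative arithmetic from the numbers quoted, not a capacity claim taken from the paper:

```python
downlink_bps = 9.3e6    # usable downlink channel size (section 5 estimate)
per_sub_bps = 60e3      # average downlink rate per subscriber, <BW>
subscribers = int(downlink_bps // per_sub_bps)
print(subscribers)      # 155 broadband subscribers per downlink channel
```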
The operator can use this information to help determine the required number of
channels (sectors) per base station and where the base stations should be located.
• (same as above)
• 10 ms frame length
Using the spreadsheet model calculator in section 5 the usable user downlink
channel size is estimated to be 8.8 Mbps. The uplink channel size is estimated to be
4.8 Mbps. The broadband downlink/uplink offered traffic ratio is expected to be 4:1, and the voice downlink/uplink offered traffic ratio is expected to be 1:1. It is
unclear whether the uplink or downlink channel will constrain the capacity so both
are considered in the analysis.
Neglecting VoIP for the moment, the average downlink data rate per subscriber is:
<BW> = 60 kbps/sub.
Given the 4:1 traffic ratio, the corresponding average uplink rate is:
<BW> = 15 kbps/sub.
To account for the VoIP traffic we should consider the vocoder data rate and Erlang
statistics. For G.711, with cRTP and UDP checksums, the operator determines that
the rate is about 82 kbps counting all application header overhead (except 802.16).
In order to know the amount of bandwidth that must be reserved for the peak number
of simultaneous calls, the total number of lines must be known. Unfortunately, that is
precisely what the operator is trying to determine so the operator “guesses” that the
system can support 120 total lines. The estimated total offered traffic will then be:
The VoIP dedicated bandwidth needs to be deducted from the available channel
before calculating the total broadband data subscriber capacity.
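The Erlang sizing step above can be sketched with the standard Erlang B recursion. The 0.1 Erlang per-line load and 1% blocking target below are assumptions for illustration, not figures from the text:

```python
def erlang_b(offered_erlangs, lines):
    # iterative form of the Erlang B blocking-probability recursion
    b = 1.0
    for n in range(1, lines + 1):
        b = offered_erlangs * b / (n + offered_erlangs * b)
    return b

offered = 120 * 0.1                      # assumed 0.1 E per line, 120 lines
trunks = 1
while erlang_b(offered, trunks) > 0.01:  # assumed 1% blocking target
    trunks += 1
peak_bw = trunks * 82e3                  # 82 kbps per simultaneous G.711 call
print(trunks, peak_bw)                   # 20 simultaneous calls, 1.64 Mbps
```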
Once again the constraining channel is the downlink and the capacity is therefore
120 voice and broadband data subscribers per channel.
The operator can use this information to help determine the required number of
channels (sectors) per base station and where the base stations should be located.
24
Obviously the operator’s “guess” of the correct number of lines was fortuitous for purposes of illustration. In general the
answer can be obtained by successively iterating to a self-consistent answer.
Finally, we looked at two hypothetical WiMAX service scenarios, using the tools
developed in this white paper to arrive at a capacity in terms of the number of users
that could be supported.