
Network Performance Architecture

Ahmad M. Shaheen 200810964

December 28, 2011

Revision / Component Architectures


Network architecture is an understanding of the relationships between the (architectural) components of the network. A component architecture is a description of how and where each function of a network is applied within that network.

The component architectures are:
- Addressing/routing
- Network management
- Performance
- Security

Background / Performance Arch.


Performance architecture describes the relationship between network components and performance requirements.
- Network components: users, applications, and devices
- Performance requirements: capacity, delay, and RMA (reliability, maintainability, availability)

Introduction
Performance is the set of levels for capacity, delay, and RMA in a network. Performance architecture is the set of performance mechanisms used to configure, operate, manage, provision, and account for resources in the network that support traffic flows.

The performance architecture also describes where these mechanisms are applied within:
- The network
- The internal and external relationships between the network and other components

Performance consists of capacity, delay, and RMA.

Performance Architecture
Before applying performance mechanisms, we must ensure that:
1) The performance mechanisms are necessary for this network
2) The performance mechanisms will solve a performance problem
3) The performance mechanisms are sufficient for this network

If we decide to implement performance mechanisms, we should start with a simple architecture and move toward a more complex one, for example by:
1. Implementing performance mechanisms only in selected areas of the network
2. Using only one or a few mechanisms
3. Selecting only those mechanisms that are easy to implement, operate, and maintain

Performance Architecture
NOTE: When performance mechanisms are implemented but not supported, maintained, or kept current, performance in the network can actually degrade to the point where it would be better not to have any performance mechanisms at all.

Performance Mechanisms
The performance mechanisms are:
- Quality of Service (QoS)
- Resource control (RC): prioritization, traffic management, scheduling, and queuing
- Service-level agreements (SLAs)
- Policies

Quality of Service (QoS)


There are two principal approaches to QoS: a parameterized system based on an exchange of application requirements with the network, and a prioritized system in which each packet identifies a desired service level to the network. There are two standard types of QoS:
1. Differentiated Services (DiffServ) implements the prioritized model. DiffServ marks packets according to the type of service they desire.
2. Integrated Services (IntServ) implements the parameterized approach. In this model, applications use the Resource Reservation Protocol (RSVP) to request and reserve resources through a network.

Quality of Service (QoS)


IntServ: every router in the system implements IntServ, and every application that requires some kind of guarantee has to make an individual reservation. The Resource Reservation Protocol (RSVP) is the underlying mechanism used to signal QoS across the network: all machines on the network capable of sending QoS data send a PATH message every 30 seconds, which spreads out through the network. Receivers that want that data send a corresponding RESV (short for "Reserve") message, which then traces the path backward to the sender.
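The sketch below is a toy Python model of this PATH/RESV exchange over a chain of routers; the Router class, capacities, and bandwidth figures are illustrative assumptions, not part of any RSVP implementation.

```python
# Toy model of RSVP-style signaling: a PATH message records the hops it
# traverses, and the receiver's RESV message walks that list in reverse,
# reserving bandwidth at each hop. Names and numbers are illustrative only.

class Router:
    def __init__(self, name, capacity_kbps):
        self.name = name
        self.available_kbps = capacity_kbps

    def reserve(self, kbps):
        if self.available_kbps < kbps:
            return False          # refuse the reservation (admission control)
        self.available_kbps -= kbps
        return True

def send_path(routers):
    """Sender -> receiver: PATH spreads through the network, recording hops."""
    return {"hops": list(routers)}

def send_resv(path_state, kbps):
    """Receiver -> sender: RESV retraces the recorded path backwards."""
    for router in reversed(path_state["hops"]):
        if not router.reserve(kbps):
            return False
    return True

chain = [Router("R1", 1000), Router("R2", 500), Router("R3", 1000)]
path = send_path(chain)
print("Reservation accepted:", send_resv(path, kbps=300))
```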

Quality of Service (QoS)


DiffServ: IP packets are marked in the type of service (ToS) byte for IPv4, or in the traffic class byte for IPv6, so that they receive the corresponding performance at each network device (or hop). DiffServ defines a set of values for classes of traffic flows, and it applies to aggregates of traffic flows (e.g., composite flows), not to individual traffic flows.

DiffServ and IntServ can be applied individually or together. If both mechanisms are applied together, DiffServ is applied first and IntServ is then overlaid onto it.
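As a concrete illustration of DiffServ-style marking, the following sketch sets the IPv4 ToS byte on a UDP socket so that outgoing packets carry a DSCP value. It assumes a Linux host; the DSCP value, address, and port are chosen only for the example.

```python
import socket

# DSCP 46 (Expedited Forwarding, commonly used for voice) occupies the upper
# six bits of the ToS byte, so the byte value is 46 << 2 = 0xB8.
DSCP_EF = 46
TOS_VALUE = DSCP_EF << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Every datagram sent on this socket is now marked; routers along the path
# can map the DSCP value to a per-hop behavior (queue, drop precedence, etc.).
sock.sendto(b"voice payload", ("192.0.2.10", 5004))
```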

Resource Control / Prioritization


Prioritization is the process of determining which user, application, device, flow, or connection gets service ahead of others, or gets a higher level of service. It is necessary because traffic flows compete for network resources: there is a limited amount of resources available in any network, so prioritization determines who gets resources first and how much they get. Priority levels for users, applications, and devices are determined during requirements analysis; priority levels for traffic flows are determined during the flow analysis process.

Resource Control / Prioritization


There are two high-level views of performance:
- Single-tier performance: may apply across the entire network
- Multi-tier performance: one or more groups of traffic flows, based on groups of users, applications, and/or devices (may apply in select areas of the network, or as an addition to single-tier performance)

Priority levels may be based on:
- Protocol type (e.g., TCP versus UDP)
- Service or port number
- IP or MAC-layer address
- Other information embedded within the traffic
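A minimal sketch of a classifier along these lines; the priority levels and port-to-priority mappings are assumptions made for illustration.

```python
# Map a flow's protocol and destination port to a priority level.
# The levels and port assignments here are illustrative assumptions,
# not values prescribed by the text.
PORT_PRIORITIES = {
    5060: "high",     # e.g., VoIP signaling
    443: "medium",    # e.g., web traffic
}

def classify(protocol, dst_port):
    if protocol == "UDP" and dst_port in PORT_PRIORITIES:
        return PORT_PRIORITIES[dst_port]
    if protocol == "TCP":
        return PORT_PRIORITIES.get(dst_port, "low")
    return "low"

print(classify("UDP", 5060))   # high
print(classify("TCP", 443))    # medium
print(classify("TCP", 8080))   # low
```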

Resource Control / Traffic Management


Traffic management consists of:

- Admission control: the ability to refuse access to network resources

- Traffic conditioning: a set of mechanisms that modify (increase or decrease) the performance of traffic flows
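A common conditioning mechanism is a token bucket that limits a flow's rate; the sketch below is a simplified illustration, with the rate and bucket size chosen arbitrarily.

```python
import time

class TokenBucket:
    """Simplified token-bucket policer: a packet is admitted only if enough
    tokens (bytes of credit) have accumulated; otherwise it is dropped, or
    could be delayed or re-marked, depending on the conditioning policy."""

    def __init__(self, rate_bytes_per_sec, bucket_size_bytes):
        self.rate = rate_bytes_per_sec
        self.capacity = bucket_size_bytes
        self.tokens = bucket_size_bytes
        self.last_refill = time.monotonic()

    def allow(self, packet_size):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size
            return True
        return False

# Assumed numbers: roughly 1 Mbit/s with a 15 KB burst allowance.
bucket = TokenBucket(rate_bytes_per_sec=125_000, bucket_size_bytes=15_000)
print(bucket.allow(1_500))   # True while credit remains
```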

Resource Control / Scheduling


Scheduling is the mechanism that determines the order in which traffic is processed for transmission. It is provided through network management or as part of QoS. Scheduling may be proprietary (enterprise-specific) or standards-based, and it uses scheduling algorithms, which include weighted fair queuing (WFQ) and class-based queuing (CBQ).

Resource Control / Queuing


Queuing is storing packets within a network device while they wait for processing. There are a number of queuing mechanisms available in network devices:
- First in, first out (FIFO)
- Class-based queuing (CBQ)
- Weighted fair queuing (WFQ)
- Random early detect (RED)
- Weighted RED (WRED)

Resource Control / Queuing


First in, first out (FIFO) queuing: the simplest queuing mechanism available
- Packets are stored in a single queue
- Packets are transmitted onto the network in the order in which they were received (at the input queue)

Class-based queuing (CBQ):
- Multiple queues with differing priorities
- Priority levels are configurable in the network device and indicate the performance levels required for each traffic type
- Packets of each priority level are placed in their respective queues
- Higher-priority queues are processed before lower-priority queues
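A minimal sketch of class-based (priority) queuing under these rules, using hypothetical class names and a strict higher-priority-first service order.

```python
from collections import deque

# Queues in priority order; higher-priority queues are always served first.
# The class names ("gold", "silver", "bronze") are illustrative assumptions.
queues = {"gold": deque(), "silver": deque(), "bronze": deque()}

def enqueue(packet, priority):
    queues[priority].append(packet)

def dequeue():
    """Serve the highest-priority non-empty queue."""
    for priority in ("gold", "silver", "bronze"):
        if queues[priority]:
            return queues[priority].popleft()
    return None

enqueue("p1", "bronze")
enqueue("p2", "gold")
print(dequeue())   # p2 -- the gold packet is transmitted first
```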

Resource Control / Queuing


Weighted fair queuing (WFQ):
- Assigns priorities (weights) to queues
- High-priority traffic flows are processed first, and lower-priority traffic flows share the remaining resources
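True WFQ computes per-packet virtual finish times; the simplified weighted round-robin sketch below (with assumed weights) only conveys the idea of serving queues in proportion to their priority.

```python
from collections import deque

# Each queue gets service roughly in proportion to its weight (assumed values).
queues = {
    "voice": {"weight": 4, "packets": deque(["v1", "v2", "v3"])},
    "web":   {"weight": 2, "packets": deque(["w1", "w2", "w3"])},
    "bulk":  {"weight": 1, "packets": deque(["b1", "b2", "b3"])},
}

def weighted_round_robin():
    """Transmit up to `weight` packets from each queue per round."""
    sent = []
    while any(q["packets"] for q in queues.values()):
        for name, q in queues.items():
            for _ in range(q["weight"]):
                if q["packets"]:
                    sent.append(q["packets"].popleft())
    return sent

print(weighted_round_robin())
# ['v1', 'v2', 'v3', 'w1', 'w2', 'b1', 'w3', 'b2', 'b3'] -- voice gets the
# largest share of service, bulk the smallest.
```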

Weighted RED (WRED) operates in the same fashion as RED but supports multiple priority levels (one for each queue) for dropping packets

Resource Control / Queuing


Generally, when a queue becomes full, packets are dropped either from the beginning of the queue (head) or from the end of the queue (tail). In either case, dropping these packets is likely to be unfair to one or a few traffic flows. As a result, random early detect (RED) was developed to randomize the packet-dropping process across a queue. RED drops packets early (before the queue is actually full) to force traffic flows (i.e., TCP flows) to adjust by reducing their transmission rate.
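A sketch of the RED drop decision, using assumed minimum/maximum thresholds and maximum drop probability; real devices make these values configurable.

```python
import random

# Assumed RED parameters: below MIN_TH no packets are dropped, above MAX_TH
# every packet is dropped, and in between the drop probability grows linearly
# up to MAX_P.
MIN_TH, MAX_TH, MAX_P = 20, 80, 0.1

def red_drop(avg_queue_len):
    if avg_queue_len < MIN_TH:
        return False
    if avg_queue_len >= MAX_TH:
        return True
    drop_prob = MAX_P * (avg_queue_len - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < drop_prob

# Early drops start well before the queue is full, nudging TCP senders
# to slow down before tail drops become unavoidable.
print(red_drop(10), red_drop(50), red_drop(100))
```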

NOTE
The combination of QoS, prioritization, traffic management, and scheduling can be applied across a network to achieve various performance levels for traffic flows (Figure 8.6)

Thank you all

Ahmad M. Shaheen
