
Extended Linux HTB Queuing Discipline Implementations

Doru Gabriel BALAN, Dan Alin POTORAC
Electrical Engineering and Computer Science Faculty, Stefan cel Mare University of Suceava, Romania

Abstract: In computer networks, traffic control is an essential management issue and a permanent challenge for network engineers. The paper evaluates the main QoS (Quality of Service) technologies used in network management. The article focuses mainly on classful queuing disciplines and HTB (Hierarchy Token Bucket) Linux implementations. The shaping and prioritization mechanisms are explained, and three different practical solutions for implementing HTB, under a common Linux environment, are proposed for a defined QoS scenario.

Keywords: Network traffic, Linux implementation, Network service

Received: 11 November 2009, Revised: 18 December 2009, Accepted: 29 December 2009

1. Introduction

This material is oriented towards the methods by which the HTB queuing discipline (Devera, 2003) can be implemented in a Linux environment with scalable and precise results, as shown by Ivancic, Hadjina, and Basch (2005). The following discussions are focused on three methods that can be used to implement QoS rules using HTB: a command line method, a text file method and a web interface method.

These methods are: first, the classic and traditional UNIX command line, i.e. a common shell (sh, csh, bash, etc.), using the tc (traffic control) command line tool from the iproute2 software package (Kuznetsov and Hemminger, 2002); second, HTB-tools, proposed by Spirlea, Subredu, and Stanimir (2007) to simplify the difficult process of bandwidth allocation; and third, a set of WEB interface tools, WebHTB (Delicostea, 2008) and T-HTB (Lazarov, 2009), used for shaping packets.

2. QoS terms and background

Traffic control consists of the following series of actions (Hubert, 2002):

- Shaping: When traffic is shaped, its rate of transmission is under control. Shaping may be more than lowering the available bandwidth; it is also used to smooth out bursts in traffic for better network behavior. Shaping occurs on egress.
- Scheduling: By scheduling the transmission of packets it is possible to improve interactivity for traffic that needs it while still guaranteeing bandwidth to bulk transfers. Reordering is also called prioritizing, and happens only on egress.
- Policing: If shaping deals with the transmission of traffic, policing pertains to traffic arriving, so it occurs on ingress.
- Dropping: Traffic exceeding a set bandwidth may also be dropped forthwith, both on ingress and on egress.

Processing of traffic is controlled by three kinds of objects: qdiscs, classes and filters.


a. Qdiscs

Queueing disciplines are the basic elements for understanding traffic control. Whenever the kernel needs to send a packet to an interface, the packet is enqueued to the qdisc configured for that interface. Immediately afterwards, the kernel tries to get as many packets as possible from the qdisc, to hand them to the network adaptor driver. The default qdisc in the Linux kernel is pfifo_fast, which does no processing at all and is a pure First In, First Out queue. It does, however, store traffic when the network interface cannot handle it momentarily.

b. Classes

Some qdiscs can contain classes, which contain further qdiscs; traffic may then be enqueued in any of the inner qdiscs, which are within the classes. When the kernel tries to dequeue a packet from such a classful qdisc it can come from any of the classes. A qdisc may for example prioritize certain kinds of traffic by trying to dequeue from certain classes before others.

c. Filters

Filters reside within qdiscs; a filter is used by a classful qdisc to determine in which class a packet will be enqueued. Whenever traffic arrives at a class with subclasses, it needs to be classified. All filters attached to the class are called, until one of them returns a verdict. If no verdict was made, other criteria may be available.

There are two categories of queuing disciplines: A. classful queuing disciplines (contain classes and provide a handle to which to attach filters), and B. classless queuing disciplines (no classes, nor is it possible to attach filters).

Classless queuing disciplines are those that can accept data and only reschedule, delay or drop it. These can be used to shape traffic for an entire interface, without any subdivisions (Hubert, 2004). Each of these queuing disciplines can be used as the primary qdisc on an interface, or can be used inside a leaf class of a classful qdisc. These are the fundamental scheduler units used under Linux. Some of the classless queuing disciplines used in Linux are:

- [p|b]fifo: The simplest usable qdisc, with pure First In, First Out behaviour.
- pfifo_fast: Standard qdisc for Advanced Router enabled kernels. It consists of a three-band queue which honors Type of Service flags, as well as the priority that may be assigned to a packet.
- RED: Random Early Detection simulates physical congestion by randomly dropping packets when nearing the configured bandwidth allocation. Well suited to very large bandwidth applications.
- SFQ: Stochastic Fairness Queueing reorders queued traffic so each session gets to send a packet in turn.
- TBF: The Token Bucket Filter is suited for slowing traffic down to a precisely configured rate. It scales well to large bandwidths.

The classful queuing disciplines can have filters attached to them, allowing packets to be directed to particular classes and subqueues (Brown, 2006). Classful qdiscs are very useful when there are different types of traffic which should receive differing treatment. Some of the classful queuing disciplines used in Linux are:

- CBQ: Class Based Queueing implements a rich link-sharing hierarchy of classes. It contains shaping elements as well as prioritizing capabilities. Shaping is performed using link idle time calculations based on average packet size and underlying link bandwidth.

- HTB: The Hierarchy Token Bucket implements a rich link-sharing hierarchy of classes with an emphasis on conforming to existing practices. HTB facilitates guaranteeing bandwidth to classes, while also allowing specification of upper limits to inter-class sharing. It contains shaping elements, based on TBF, and can prioritize classes.
- PRIO: The PRIO qdisc is a non-shaping container for a configurable number of classes, which are dequeued in order. This allows for easy prioritization of traffic, where lower classes are only able to send if higher ones have no packets available. To facilitate configuration, Type Of Service (TOS) bits from the IP header are honored by default.

Hierarchical Token Bucket (HTB) is a packet scheduler currently included in Linux kernels (net/sched/sch_htb.c in the kernel source tree). HTB is meant as a more understandable, intuitive and faster replacement for the CBQ (Class Based Queueing) qdisc in Linux (Devera, 2002). Both CBQ and HTB help to control the use of the outbound bandwidth on a given link. Both allow using one physical link to simulate several slower links and to send different kinds of traffic on different simulated links. In both cases, the administrator has to specify how to divide the physical link into simulated links and how to decide which simulated link to use for a given packet to be sent. Unlike CBQ, HTB shapes traffic based on the Token Bucket Filter (TBF) algorithm, which does not depend on interface characteristics and so does not need to know the underlying bandwidth of the outgoing interface (Devera and Hubert, 2002).

3. Linux kernel resources

The Linux kernel provides a set of controls that are used to enable the QoS mechanism. When the kernel has several packets to send out over a network device, it has to decide which ones to send first, which ones to delay, and which ones to drop. This is the job of the queuing disciplines; several different algorithms for doing this fairly have been proposed, and HTB is one of them (Torvalds, 2003).

The QoS mechanism is activated in a Linux kernel at configuration time (make menuconfig), by enabling one kernel variable: NET_SCHED. At this moment the default queuing discipline of a Linux kernel, pfifo_fast, identified by the NET_SCH_FIFO variable, is also activated. After this step, the queuing disciplines that will be compiled together with the kernel code have to be selected. The HTB queuing discipline has a configuration correspondent identified by the kernel variable NET_SCH_HTB. In the same manner, other QoS algorithms can be activated at kernel configuration time to be available later in kernel space to provide packet management. Each algorithm has a kernel variable, for example: CBQ - NET_SCH_CBQ, RED - NET_SCH_RED, SFQ - NET_SCH_SFQ, TBF - NET_SCH_TBF, etc.

After kernel compilation, the queuing disciplines are available for QoS implementations directly in the kernel code, in the case of a monolithic kernel compilation, or as kernel modules if the kernel was compiled with module support. These modules can be found in the Linux file system in the /lib/modules/<kernel version>/kernel/net/sched directory; for example, the HTB kernel module is the sch_htb.ko file.

4. Iproute2/tc tool

Iproute2 is a collection of utilities for controlling TCP/IP networking and traffic control in Linux (Kuznetsov and Hemminger, 2002), with the objective of realizing the QoS implementation in the Linux kernel.
Most network configuration manuals still refer to ifconfig and route as the primary network configuration tools, but ifconfig is known to behave inadequately in modern network environments (Kuznetsov and Hemminger, 2002). These tools should be deprecated, but most distributions still include them, and most network configuration systems make use of ifconfig and thus provide a limited feature set. The /etc/net project aims to support most modern network technologies, as it does not use ifconfig and allows a system administrator to make use of all iproute2 features, including traffic control (Kuznetsov and Hemminger, 2002). Iproute2 is usually shipped in a package called iproute or iproute2 and consists of several tools, of which the most important are ip and tc: ip controls the IPv4 and IPv6 configuration and tc stands for traffic control.
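Before building an HTB hierarchy it is worth verifying both the kernel support described in Section 3 and the availability of the user-space tools. The commands below are only a minimal sketch, assuming a distribution that keeps the kernel configuration under /boot and that builds HTB as a module; paths and the kernel version naturally vary:

#tc -V
#grep -E "NET_SCHED|NET_SCH_HTB" /boot/config-$(uname -r)
#lsmod | grep sch_htb
#modprobe sch_htb

The first command confirms that the iproute2 tc tool is installed, the grep checks that the running kernel was configured with the NET_SCHED and NET_SCH_HTB variables, and lsmod/modprobe show, respectively load, the sch_htb module when HTB was compiled as a module.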


The tc tool from iproute2 can be used to show or manipulate traffic control settings in a Linux router, so the tc tool is used to configure traffic control at the Linux kernel level (Hubert, 2002). Let us look at a short example: we have two customers, A and B, both connected to the Internet via eth0. We want to allocate 60 kbps to B and 40 kbps to A. Next we want to subdivide A's bandwidth into 30 kbps for WWW and 10 kbps for everything else (Devera, 2002). To solve this situation we have to type line by line, or create a script with, the following tc commands:

#tc qdisc add dev eth0 root handle 1: htb default 12

This command attaches the HTB queuing discipline to eth0 and gives it the handle 1:. This is just a name or identifier with which to refer to it below. The default 12 parameter means that any traffic that is not otherwise classified will be assigned to class 1:12. The immediate result can be visualized with:

#tc qdisc show dev eth0
output:
qdisc htb 1: root r2q 10 default 12 direct_packets_stat 2398

Next we can build the classes:

#tc class add dev eth0 parent 1: classid 1:1 htb rate 100kbps ceil 100kbps
#tc class add dev eth0 parent 1:1 classid 1:10 htb rate 30kbps ceil 100kbps
#tc class add dev eth0 parent 1:1 classid 1:11 htb rate 10kbps ceil 100kbps
#tc class add dev eth0 parent 1:1 classid 1:12 htb rate 60kbps ceil 100kbps

The first line creates a root class, 1:1, under the qdisc 1:. The definition of a root class is one with the htb qdisc as its parent. A root class, like other classes under an htb qdisc, allows its children to borrow from each other, but one root class cannot borrow from another. We could have created the other three classes directly under the htb qdisc, but then the excess bandwidth from one would not be available to the others. In this case we do want to allow borrowing, so we have to create an extra class to serve as the root and put the classes that will carry the real data under it (the next three code lines). The immediate result can be viewed with:

#tc -s -d class show dev eth0
output:
class htb 1:11 parent 1:1 prio 0 rate 80000bit ceil 800000bit burst 1600b cburst 1599b
class htb 1:1 root rate 800000bit ceil 800000bit burst 1599b cburst 1599b
class htb 1:10 parent 1:1 prio 0 rate 240000bit ceil 800000bit burst 1599b cburst 1599b
class htb 1:12 parent 1:1 prio 0 rate 480000bit ceil 800000bit burst 1599b cburst 1599b

We also have to describe which packets belong in which class, using the tc filter options. The commands will look something like this:

#tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip src 1.2.3.4 match ip dport 80 0xffff flowid 1:10
#tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip src 1.2.3.4 flowid 1:11

The immediate result is:

#tc -s -d filter show dev eth0
output:
filter parent 1: protocol ip pref 1 u32
filter parent 1: protocol ip pref 1 u32 fh 800: ht divisor 1
filter parent 1: protocol ip pref 1 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:10 (rule hit 98140 success 97403)
match 5060782c/ffffffff at 12 (success 98140)
match 00000050/0000ffff at 20 (success 97403)
filter parent 1: protocol ip pref 1 u32 fh 800::801 order 2049 key ht 800 bkt 0 flowid 1:11 (rule hit 483 success 483)
match 5060782c/ffffffff at 12 (success 483)

A more detailed output can be obtained by adding the -s (statistics) and/or -d (details) options to any tc show command, like this:

#tc -s -d qdisc show dev eth0
#tc -s -d class show dev eth0
#tc -s -d filter show dev eth0

We can optionally attach queuing disciplines to the leaf classes:

#tc qdisc add dev eth0 parent 1:10 handle 20: pfifo limit 5
#tc qdisc add dev eth0 parent 1:11 handle 30: pfifo limit 5
#tc qdisc add dev eth0 parent 1:12 handle 40: sfq perturb 10

In this manner we can create any QoS scenario and implement it with the iproute2/tc tools.

5. HTB-Tools

HTB-tools Bandwidth Management Software is a software suite with several tools that help simplify the difficult process of bandwidth allocation, for both upload and download traffic, using the Linux kernel's HTB facility, proposed by Spirlea, Subredu, and Stanimir (2007). It can generate and check configuration files and also provides a real-time traffic overview for each separate client. The principal features of HTB-tools are:

* bandwidth limitation using public IP addresses, using two configuration files for upload and download
* bandwidth limitation using private IP addresses (SNAT), using a single configuration file
* match mark
* match mark in u32
* metropolitan/external limitation
* menu based management software for configuration and administration of HTB-tools (starting with version 0.3.0).

The set of HTB-tools includes:

* q_parser: reads a configuration file (the file defines classes, clients, bandwidth limits) and generates an HTB settings script;
* q_checkcfg: checks configuration files;
* q_show: displays in the console the status of the traffic and the allocated bandwidth for each class/client defined in the configuration file;
* q_show.php: displays in a web page the status of the traffic and the allocated bandwidth for each class/client defined in the configuration file;
* wHTB-tools_cfg_gen: creates and generates configuration files from a web page (only in HTB-tools 0.3.0);
* htbgen: generates configuration files from a bash shell.

The configuration files can be created with the htbgen tool or with any file editor, in separate files for download and upload, as proposed by Rusu, Subredu, Sparlea, and Vraciu (2002).
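A typical round trip with these tools might look like the sketch below. This is only an assumed workflow: the file names follow the /etc/htb/<iface>-qos.cfg convention used in this section, and the /etc/init.d/htb service mentioned in the Conclusions is assumed to regenerate and apply the tc rules; only the q_checkcfg and q_show invocations appear in this exact form later in the paper.

#vi /etc/htb/eth0-qos.cfg
#vi /etc/htb/eth1-qos.cfg
#q_checkcfg /etc/htb/eth0-qos.cfg
#/etc/init.d/htb restart
#q_show

Here eth0-qos.cfg and eth1-qos.cfg hold the download and, respectively, the upload classes and clients, q_checkcfg validates the configuration before it is applied, the (assumed) htb service script rebuilds and loads the tc rules, and q_show then displays the resulting classes in real time.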


The configuration file format must contain declarations for HTB classes and for the clients of each class. The class syntax must be like:

class class_1 {
    bandwidth 192;
    limit 256;
    burst 2;
    priority 1;
    que sfq;
}

The client syntax rules are:

client client1 {
    bandwidth 48;
    limit 64;
    burst 2;      # or burst 0; only for HTB-tools 0.3
    priority 1;
    mark 20;
    dst {
        192.168.100.4/32;
    };
};

The configuration files can be checked with the q_checkcfg tool from the command line:

#q_checkcfg /etc/htb/eth1-qos.cfg

Verifications made by this tool include the calculation of the CIR (Committed Information Rate) and MIR (Maximum Information Rate) values for each traffic class defined in the configuration files, and look like this:

Default bandwidth: 8
Class class_1, CIR: 192, MIR: 256 ** 4 clients, CIR2: 192, MIR2: 256
1 classes; CIR / MIR = 192 / 256; CIR2 / MIR2 = 192 / 256

Real-time visualization of the traffic can be made with the command line tool q_show. An example of the output of this tool is presented in Figure 1.

6. WebHTB

WebHTB (Delicostea, 2008) is a software suite that helps in the process of QoS implementation using the HTB qdisc, simplifying the difficult process of bandwidth allocation by providing a simple and efficient web interface for online configuration. This is an application with a web interface that has the facility to generate and check the configuration files, providing a real-time traffic overview for each separate client. The front interface of WebHTB is presented in Figure 2. The application's menu is intuitive, offering the administrator the possibility to add the interfaces for which traffic shaping will apply and to define classes and clients for each class.


Figure 1. HTB-tools q_show output

Figure 2. WebHTB interface


WebHTB stores the configuration settings in a MySQL database and saves the configuration file in XML format. The configuration file for the eth0 interface is stored in the xml directory and named xml/eth0-qos.xml. A sample configuration file in XML format looks like this:

<root rate="1024000" ceil="1024000" quantum="6000">
  <class>
    <name>download</name>
    <id>20</id>
    <bandwidth>100000</bandwidth>
    <limit>100000</limit>
    <burst>10</burst>
    <priority>0</priority>
    <que>sfq</que>
    <client>
      <name>client_A</name>
      <id>21</id>
      <bandwidth>10000</bandwidth>
      <limit>20000</limit>
      <burst>0</burst>
      <priority>0</priority>
      <rule>
        <src>
          <ip>80.96.120.5</ip>
        </src>
      </rule>
    </client>
  </class>
  <class>
    <name>default</name>
    <bandwidth>8000</bandwidth>
  </class>
</root>
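For orientation, the client_A entry above corresponds roughly to an HTB class, a leaf qdisc and a u32 filter of the kind built by hand in Section 4. The commands below are only an assumed hand-written equivalent: the class ids are taken from the XML, while the 1:1 root class id and the interpretation of the bandwidth/limit values as kbit/s are assumptions, and the rules WebHTB actually generates may differ in detail.

tc class add dev eth0 parent 1:1 classid 1:20 htb rate 100000kbit ceil 100000kbit
tc class add dev eth0 parent 1:20 classid 1:21 htb rate 10000kbit ceil 20000kbit prio 0
tc qdisc add dev eth0 parent 1:21 handle 21: sfq perturb 10
tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip src 80.96.120.5/32 flowid 1:21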

Figure 3. Traffic show from WebHTB


The Show menu gives access to real-time monitoring of the shaped traffic. A short capture of this facility is presented in Figure 3.

7. T-HTB WEB manager

T-HTB WEB manager is another useful WEB front-end application that provides a very simple and intuitive method for generating the traffic control rules. Part of the web interface of the T-HTB application, used to create the traffic classes that will be implemented in the QoS scenario, is presented in Figure 4. The T-HTB application uses a web server (Apache+PHP), an SQL server (MySQL) and a scripting language (Perl) to interact with the tc tools (iproute2), with the final result of obtaining two things:

- a system script (command line) with rules for traffic control (tc commands), named rc.rules, and
- a shell script, for the Linux crontab, to generate graphical statistics of the traffic classes (rc.graph).

Figure 4. T-HTB interface

The traffic control rules from rc.rules look like this:

#!/bin/bash
#Flush mangle table
/sbin/iptables -t mangle -D POSTROUTING -j SHARE_USERS
/sbin/iptables -t mangle -F SHARE_USERS
/sbin/iptables -t mangle -X SHARE_USERS
/sbin/iptables -t mangle -N SHARE_USERS
#Shaper interfaces: eth0
/sbin/tc qdisc del dev eth0 root
/sbin/tc qdisc add dev eth0 root handle 1: htb r2q 2
#Root class:


/sbin/tc class add dev eth0 parent 1: classid 1:1 htb rate 90 ceil 90
#Class::USV
/sbin/tc class add dev eth0 parent 1:1 classid 1:1001 htb rate 100mbps ceil 100kbps burst 2Kbit prio 3
/sbin/tc qdisc add dev eth0 parent 1:1001 handle 1001: sfq perturb 10
/sbin/iptables -t mangle -A SHARE_USERS -o eth0 --protocol all -s 0.0.0.0/0 -d 0.0.0.0/0 -j CLASSIFY --set-class 1:1001
#Class::dcti
/sbin/tc class add dev eth0 parent 1:1 classid 1:1002 htb rate 30kbps ceil 80kbps burst 2Kbit prio 3
/sbin/tc qdisc add dev eth0 parent 1:1002 handle 1002: sfq perturb 10
/sbin/iptables -t mangle -A SHARE_USERS -o eth0 --protocol all -s 80.96.120.0/22 -j CLASSIFY --set-class 1:1002
#Class::client
/sbin/tc class add dev eth0 parent 1:1002 classid 1:1003 htb rate 40kbps ceil 50kbps burst 2Kbit prio 3
/sbin/tc qdisc add dev eth0 parent 1:1003 handle 1003: sfq perturb 10
/sbin/iptables -t mangle -A SHARE_USERS -o eth0 --protocol all -s 172.20.12.15/32 -d 80.96.120.30/32 -j CLASSIFY --set-class 1:1003
#IPTABLES run
/sbin/iptables -t mangle -A POSTROUTING -j SHARE_USERS

8. Conclusions

Implementations of the QoS mechanism based on the HTB queuing discipline with the presented solutions are very scalable and competitive, being tested and used in various environments. The preferred medium for the implementations was a Linux operating system, Fedora Linux, but the presented solutions can run on any Linux based router. The solutions cover the full range of current methods of QoS implementation, starting with the oldest and most usual method, the CLI (Command Line Interface), passing through the configuration of a Linux network service (/etc/init.d/htb) using text configuration files (/etc/htb/eth0-qos.cfg), and ending with the most recent method, the Web interface. Personal tests and implementations made in real environments (public institutions and private corporations), using all three methods, provided convincing results which show that HTB implementations can fully satisfy complex QoS requirements.

References

[1] Devera, M. (2003). Hierarchical Token Bucket Theory. http://luxik.cdi.cz/~devik/qos/htb/
[2] Kuznetsov, A., Hemminger, S. (2002). NET: Iproute2. http://www.linuxfoundation.org/en/Net:Iproute
[3] Spirlea, I., Subredu, M., Stanimir, V. (2007). HTB-Tools. http://htb-tools.skydevel.ro
[4] Delicostea, D. (2008). WebHTB. http://webhtb.sourceforge.net
[5] Devera, M. (2002). HTB manual - user guide. http://luxik.cdi.cz/~devik/qos/htb/manual/userg.htm
[6] Hubert, B. (2002). iproute2 TC (8). Linux man page (8).
[7] Hubert, B. (2004). Linux Advanced Routing & Traffic Control HOWTO. http://lartc.org/howto/
[8] Brown, M. A. (2006). Traffic Control. http://linux-ip.net/articles/Traffic-Control-HOWTO/
[9] Devera, M., Hubert, B. (2002). iproute2 HTB. Linux man page (8).
[10] Rusu, O., Subredu, M., Sparlea, I., Vraciu, V. (2002). Implementing Real Time Packet Forwarding Policies Using HTB. First RoEduNet International Conference, Cluj-Napoca, Romania.
[11] Ivancic, D., Hadjina, N., Basch, D. (2005). Analysis of precision of the HTB packet scheduler. Applied Electromagnetics and Communications - ICECom 2005, Dubrovnik, Croatia.
[12] Torvalds, L. (2003). The Linux Kernel Archives. http://www.kernel.org
[13] Lazarov, T. (2009). T-HTBmanager. http://sourceforge.net/apps/mediawiki/t-htbmanager
