
This full text paper was peer reviewed at the direction of IEEE Communications Society subject matter experts for publication in the ICC 2008 proceedings.

Can We Multiplex IPTV and TCP?


Fengdan Wan, Lin Cai and Aaron Gulliver
Department of Electrical and Computer Engineering, University of Victoria, PO Box 3055, STN CSC, Victoria, BC V8W 2Y2, Canada
Abstract: Telecommunication service providers are racing to deliver IPTV/video on demand (VoD), voice, and data, the so-called triple-play services. IPTV traffic, supported by the UDP protocol, has highly variable data rates and stringent Quality of Service (QoS) requirements in terms of delay and loss. Data and VoD flows are normally supported by TCP, which has its own congestion control loop to adjust the sending rate, so the traffic load is also highly dynamic. If IPTV and TCP traffic is simply multiplexed, their performance is difficult to predict and the competition between them will jeopardize their QoS. To efficiently utilize network resources and provide satisfactory QoS for both traffic types, we propose multiplexing IPTV and TCP traffic with the protection of a class-based queuing (CBQ) scheme. We also develop an analytical framework to model the multiplexed IPTV and TCP traffic with CBQ. The analytical results can be used as a guide to determine the admission region of IPTV and the CBQ parameters. Simulation results are presented which validate the analytical results and demonstrate the effectiveness of the proposed solution. By multiplexing IPTV and TCP traffic appropriately, network resources can be more efficiently utilized, the QoS of IPTV can be maintained, and TCP flows can obtain higher throughputs.

I. INTRODUCTION

Internet Protocol TV (IPTV) has been predicted to be a technology winner. Telecommunication service providers are racing to deliver IPTV/video on demand (VoD), voice, and data, the so-called triple-play services. To successfully roll out these services, the key issue is how to efficiently utilize limited network resources to guarantee the Quality of Service (QoS) of heterogeneous applications.

IPTV traffic is normally delivered using RTP/UDP protocols. Since consumers require a high standard of TV quality, IPTV has very stringent QoS requirements. According to [1], the packet loss rate (PLR) for high definition (HD)-IPTV service should be no larger than $10^{-6}$, and the one-way delay and jitter should be less than 200 ms and 50 ms, respectively. To ensure the stringent QoS requirements for IPTV, effective network resource allocation and reservation is needed. Because the data rate of video sources is highly variable, the bandwidth reserved for IPTV traffic should be much higher than its average data rate.

Conversely, the majority of Internet data applications and VoD are supported by the TCP protocol. To efficiently utilize network resources, a TCP sender probes for available bandwidth in the network by linearly increasing its sending rate if no network congestion is detected. It also responds to packet losses, which indicate network congestion, by exponentially decreasing its sending rate, so transient network congestion can be relieved. Ideally, TCP-controlled flows can be multiplexed with IPTV traffic to achieve a multiplexing gain. However, if we simply multiplex IPTV and TCP in a link without additional control, the packet loss rate for IPTV may be too high, since TCP flows are aggressive in probing for available bandwidth and create transient congestion. In addition, IPTV traffic is supported by the non-responsive UDP protocol, which does not respond to network congestion. When the data rate of IPTV traffic is high and the network becomes congested, TCP flows may back off exponentially and become starved. Assigning strict priorities to different traffic classes may guarantee the QoS of one class at the cost of the other.

To efficiently multiplex IPTV and TCP traffic, we propose the use of a simple class-based queuing (CBQ) management scheme. CBQ comes into effect only when congestion occurs. It protects each traffic class from the others, in that a portion of the bandwidth is reserved for each class. IPTV traffic has a highly variable instantaneous data rate, and TCP traffic is also very dynamic with its own congestion control loop, so their performance is difficult to predict. To effectively support both types of traffic and meet their QoS requirements, an in-depth understanding of their behavior is needed, which is the main focus of this paper.

The main contributions of this paper are two-fold. First, we propose a practical solution using CBQ to multiplex IPTV and TCP traffic. Second, we develop an analytical framework to model the behavior of IPTV and TCP traffic and quantify their performance. The analytical results can be used as a guide to choosing system parameters, such as the admission region of IPTV and the CBQ parameters. Extensive simulations with NS-2 were conducted which validate the analytical results and demonstrate the effectiveness of the proposed solution. Our results show that appropriately multiplexing IPTV and TCP improves the QoS of IPTV, and the number of IPTV flows supported in a link can potentially be enlarged. In addition, TCP flows can obtain higher throughputs.

The remainder of the paper is organized as follows. The system model, TCP congestion control and CBQ are introduced in Section II. An analytical framework to quantify the performance of IPTV and TCP is given in Section III, followed by simulation and numerical results in Section IV. We then discuss the related work in Section V, and provide some concluding remarks in Section VI.

II. SYSTEM MODEL

We consider a link shared by multiple high definition IPTV connections and data traffic. Compressed video streams are encapsulated into UDP packets. Data/VoD streams are segmented and supported by TCP. The IPTV traffic supported by UDP is uni-directional, and these flows compete with TCP data packets.

UDP simply delivers all packets through the link without flow and congestion control, while TCP uses a window-based congestion control mechanism that responds to network congestion.

A. TCP congestion control

TCP congestion control was initially proposed by Van Jacobson in the late 1980s [2]. Basically, a TCP sender uses a congestion window (cwnd) to control the sending rate with three main algorithms: slow start, congestion avoidance, and exponential backoff. In the slow-start phase, cwnd is initialized to one packet^1 and is increased by one packet on each new acknowledgment (ACK). Effectively, during the slow-start phase, the cwnd size ($W$) increases exponentially every round-trip time (RTT) until it reaches the slow-start threshold. Thereafter, in the congestion avoidance phase, the TCP sender inflates its window size by $1/W$ on each new ACK, increasing cwnd by one packet per RTT until a packet loss is detected by triple-duplicate ACKs or a timeout. Packet losses indicate network congestion, and the TCP sender multiplicatively reduces cwnd in response to triple-duplicate ACKs, followed by congestion avoidance, or resets cwnd to its initial value when a timeout occurs, followed by slow start. In steady state, a TCP flow can be approximated as an additive increase, multiplicative decrease (AIMD) flow with increase rate and decrease ratio equal to one and one-half, respectively. A general AIMD(a, b) model is as follows: 1) additive increase: $W \leftarrow W + a$; and 2) multiplicative decrease: $W \leftarrow W - bW$, where AIMD(1, 1/2) is equivalent to standard TCP.

^1 The terms network packet and transport segment are used interchangeably in this paper, since transport-layer protocols can negotiate the maximum segment size when establishing a connection to avoid IP fragmentation.

When bursty traffic is multiplexed in a drop-tail queue, multiple packet losses may occur within one TCP window. To effectively recover from multiple packet losses within one RTT, the TCP selective ACK (SACK) scheme [3] has been proposed, and it is adopted in our system model.
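To make the AIMD dynamics concrete, the following is a minimal Python sketch (ours, not part of the paper) of the general AIMD(a, b) update rules; the per-ACK increase of $a/W$ accumulates to roughly $a$ packets per RTT, and a loss event shrinks the window multiplicatively.

```python
def on_ack(cwnd: float, a: float = 1.0) -> float:
    """Additive increase: each new ACK inflates cwnd by a/W,
    i.e., roughly +a packets per RTT in congestion avoidance."""
    return cwnd + a / cwnd

def on_loss(cwnd: float, b: float = 0.5) -> float:
    """Multiplicative decrease: W <- W - b*W on a loss event.
    AIMD(1, 1/2) is equivalent to standard TCP."""
    return max(1.0, cwnd - b * cwnd)

# Toy trace of the sawtooth: grow over ACKs, halve on one loss.
w = 10.0
for _ in range(100):   # ~100 ACKs of congestion avoidance
    w = on_ack(w)
w = on_loss(w)         # triple-duplicate-ACK loss event
```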
B. Class-based queuing management

TCP and UDP have disparate transmission mechanisms, and TCP and UDP flows have different traffic characteristics. TCP congestion-controlled flows aggressively probe for available bandwidth and try to fully utilize all the resources they can obtain, which can create transient network congestion. IPTV flows, supported by the best-effort, unresponsive UDP protocol, may experience higher loss rates and larger delay jitter in the presence of TCP. On the other hand, TCP flows are responsive to network congestion and will back off exponentially and become starved when congestion persists, whereas UDP flows are unresponsive to network congestion and may occupy all the link bandwidth in a congested network.

Thus, we propose to deploy a class-based resource management scheme at the bottleneck link to support heterogeneous traffic and meet different QoS constraints. CBQ is a link-sharing approach which enables the gateway to distribute capacity on local links in response to local needs [4]. TCP packets and IPTV (over UDP) packets are separated into two classes. In the absence of congestion, a general scheduler (FIFO or round-robin) can be used: all packets are multiplexed into one queue with the condition that a certain portion of the TCP packets are inserted between the IPTV packets. This portion is set according to the link-sharing parameter and the sizes of the IPTV and TCP packets. If the IPTV and TCP packets have the same size, the buffer sizes allocated to the TCP and UDP traffic are $pB_c$ and $(1-p)B_c$, respectively, where $B_c$ is the total buffer size and $p$ is the portion of the link capacity assigned to TCP.

When congestion occurs (when one or both classes require more than their allocated bandwidth), CBQ invokes a link-sharing scheduler to rate-limit the over-limit class(es) to their assigned capacity. In this way, CBQ can guarantee the QoS of both IPTV and TCP traffic while achieving a multiplexing gain. Under CBQ, the available capacity in the output link for the two classes (IPTV over UDP and data over TCP) is

$$C_v(t) = \max\{C - \lambda_d(t),\ (1-p)C\}, \qquad C_d(t) = \max\{C - \lambda_v(t),\ pC\}, \tag{1}$$

where the arrival rates of IPTV and TCP data traffic are $\lambda_v$ and $\lambda_d$, respectively, and $C$ is the bottleneck link capacity. The instantaneous output rate of each class is bounded by the available bandwidth at any time instant:

$$U_v(t) = \begin{cases} \lambda_v(t), & \lambda_v(t) \le C_v(t) \\ C_v(t), & \lambda_v(t) > C_v(t), \end{cases} \qquad U_d(t) = \begin{cases} \lambda_d(t), & \lambda_d(t) \le C_d(t) \\ C_d(t), & \lambda_d(t) > C_d(t). \end{cases}$$

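As an illustration of the link-sharing rule, the following Python sketch (a simplification we wrote, not the authors' implementation) evaluates (1) and the output-rate bound for given instantaneous arrival rates:

```python
def cbq_capacities(lam_v: float, lam_d: float, C: float, p: float):
    """Eq. (1): each class is guaranteed its share when congested,
    but may borrow idle capacity from the other class."""
    C_v = max(C - lam_d, (1.0 - p) * C)  # capacity available to IPTV/UDP
    C_d = max(C - lam_v, p * C)          # capacity available to TCP data
    return C_v, C_d

def output_rates(lam_v: float, lam_d: float, C: float, p: float):
    """Instantaneous output of each class is its arrival rate,
    clipped to the capacity the scheduler makes available."""
    C_v, C_d = cbq_capacities(lam_v, lam_d, C, p)
    return min(lam_v, C_v), min(lam_d, C_d)

# Example: 85 Mbps link, 5% guaranteed to TCP (p = 0.05).
print(output_rates(lam_v=78.0, lam_d=20.0, C=85.0, p=0.05))
```

With $\lambda_v = 78$ Mbps and $\lambda_d = 20$ Mbps on an 85 Mbps link, TCP is clipped to $C - \lambda_v = 7$ Mbps while the video class passes untouched; if either class goes idle, the max/min bounds let the other borrow the full link.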
Note that, different from the Generalized Processor Sharing (GPS) approach (e.g., weighted fair queuing), which guarantees the long-term average bandwidth received by different classes, the CBQ scheme adopted here guarantees that each traffic class receives its allocated bandwidth over the relevant time interval [4]. Thus, it can protect IPTV traffic from bursty data traffic. Even if the data traffic is idle for a long period, it cannot occupy more instantaneous bandwidth as compensation (which is not true under GPS), so the instantaneous bandwidth allocated to IPTV traffic is guaranteed. On the other hand, if any traffic class is idle, the other classes can occupy the total link bandwidth to achieve a multiplexing gain.

III. FLUID MODEL

To understand the behavior of TCP-controlled data traffic and UDP-based video traffic sharing the link with CBQ, and to provide guidelines for setting system parameters, we use a fluid model approach to investigate the system performance.

A. Video profile

For IPTV video sources, advanced source coding technologies have been developed to aggressively increase the compression ratio and reduce the source data rate. Video frames (I, P and B) are compressed by canceling temporal and spatial redundancy.

MPEG-4/H.264 has about twice the compression efficiency of MPEG-2, so it is anticipated to be widely deployed for the transmission and storage of high definition (HD) content. For this reason, we consider MPEG-4/H.264 video sources in our system. Although MPEG-4/H.264 can achieve higher compression rates, it also introduces higher traffic burstiness (peak-to-average ratio). For example, the H.264 video trace of From Mars to China in HDTV format (1920 x 1080i) [5] has a peak rate of 78 Mbps but an average rate of only around 4.85 Mbps.

We develop a video model to facilitate our analysis based on the following observations. Basically, video content is coded in frame units. A number of frames are gathered into a group of pictures (GoP). Each GoP consists of one I frame and several B and P frames, and each GoP can be encoded and decoded independently. I frames are intra-coded, while B and P frames are both intra- and inter-coded. Hence, the I frame size is based on spatial correlation, the P frame size is related to the time correlation with the I frame in the GoP, and the B frame size is determined by the time correlation with the previous or subsequent P or I frames. Typically, an I frame contains more redundancy than a B or P frame, since it does not exploit the temporal correlation with consecutive frames.

In [8], a fluid model for video traffic was developed: a variable bit rate video source was modeled as 20 multiplexed mini-sources. Each mini-source independently alternates between an off state and an on state, with $A$ bps generated during the on state. The average residence times in the off state and on state are $1/\alpha$ and $1/\beta$ seconds, respectively. In this paper, we use the fluid model to capture the GoP size. Since H.264 has a much higher compression ratio than the source coding considered in [8], we use fewer mini-sources to emulate the more bursty video sources. The number of mini-sources in the on state follows an underlying Markov chain, whose parameters are chosen such that the first- and second-order statistics of the video source are conserved.

Denote the size of the $i$th GoP as $Y_i$, $1 \le i \le N$, where $N$ is the total number of GoPs. The parameters of the underlying Markov chain can be obtained from

$$E(Y) = MqA \quad \text{and} \quad \Phi(\tau) = MA^2 q(1-q)e^{-(\alpha+\beta)\tau},$$

where $M$ is the number of mini-sources and $q = \alpha/(\alpha+\beta)$. The expectation and auto-covariance of the Markov chain should conform to the GoP sizes in the real video trace; in particular, the auto-covariance of the two sequences should be the same. The auto-covariance of the Markov chain decays exponentially. The auto-covariance of the GoP size is defined by [5]

$$\Phi(\tau) = \frac{1}{(N-\tau)\sigma^2} \sum_{m=1}^{N-\tau} \left(Y_m - E(Y)\right)\left(Y_{m+\tau} - E(Y)\right),$$

where $\tau$ is the lag between the two GoPs and $\sigma^2$ denotes the variance of the GoP size. Define the correlation coefficient of two sequences $i$ and $j$ as $R(i,j) = \mathrm{CoV}(i,j)/\sqrt{\mathrm{CoV}(i,i)\,\mathrm{CoV}(j,j)}$, where $\mathrm{CoV}(i,j)$ denotes their covariance. For the video trace we examined, the correlation coefficients between the I frame size and its GoP size, and between the B frame and P frame sizes in the same GoP^2, are 0.7588 and 0.8674, respectively, which shows a high correlation. Hence, the following linear functions are chosen to approximate the frame data rates within each GoP:

$$I_i = a_1 Y_i; \quad P_i = \frac{1-a_1}{1+a_2}\, Y_i; \quad B_i = a_2 P_i,$$

where $I_i$, $B_i$ and $P_i$ are the I, B and P frame sizes in GoP $Y_i$, $a_1 = \sum_{i=1}^{N} I_i / \sum_{i=1}^{N} Y_i$ is the average ratio of the I frame size to the GoP size, and $a_2 = \sum_{i=1}^{N} B_i / \sum_{i=1}^{N} P_i$ is the average ratio of the B frame size to the P frame size.

^2 In fact, the number of I, B and P frames in a GoP is determined by the GoP pattern; however, this factor does not affect the CoV calculation.

TABLE I
NOTATION

  W(t)     TCP congestion window size at time t
  R(t)     TCP sending rate
  Q_d(t)   queue length of the TCP traffic
  B        buffer size for the TCP data
  T_m      the base RTT, a deterministic value including propagation delay, processing delay, etc.
  λ_v      the video input rate at the bottleneck buffer
  λ_d      the TCP input rate at the bottleneck buffer
  μ_v      the video output rate at the bottleneck buffer
  μ_d      the TCP output rate at the bottleneck buffer
  C        bottleneck link capacity
  p        portion of the capacity assigned to TCP in the CBQ scheme
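For concreteness, here is a sketch of how GoP sizes could be synthesized from the fitted mini-source model and then split into I/P/B sizes. This is our own illustration: the discretization (one step per GoP), the default parameter values, and the function names are assumptions, not the paper's code.

```python
import random

def generate_gop_sizes(n_gops, M=8, alpha=0.4, beta=0.6, A=1.0, seed=1):
    """GoP sizes from M independent on/off mini-sources.
    1/alpha and 1/beta are the mean off/on residence times (in GoP
    slots); A is the data contributed per GoP by one active source.
    The first moment matches E(Y) = M*q*A with q = alpha/(alpha+beta)."""
    rng = random.Random(seed)
    q = alpha / (alpha + beta)                 # stationary P(on)
    on = [rng.random() < q for _ in range(M)]  # start in steady state
    sizes = []
    for _ in range(n_gops):
        for i in range(M):
            # geometric slot counts of this discretized chain
            # approximate the exponential residence times
            p_flip = beta if on[i] else alpha
            if rng.random() < min(1.0, p_flip):
                on[i] = not on[i]
        sizes.append(A * sum(on))
    return sizes

def split_gop(Y, a1=0.3, a2=0.5):
    """Linear I/P/B split: a1 = mean I/GoP size ratio and
    a2 = mean B/P size ratio, both measured from the trace."""
    I = a1 * Y
    P = (1.0 - a1) / (1.0 + a2) * Y
    B = a2 * P
    return I, P, B                             # I + P + B == Y
```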
B. TCP behavior

A uni-directional TCP data flow is considered, and we assume that no TCP ACK is lost in the reverse direction. We also neglect the slow-start phase and focus on the steady-state behavior of TCP. It is assumed that the advertised receiver window size is always larger than the congestion window size, and that TCP packet loss only happens at the bottleneck link buffer. The TCP flow is modeled as arbitrarily small information cells being transferred in the TCP pipe and stored in the bottleneck buffer [6]. We use a fluid model to capture the TCP behavior. For easy reference, the notation used is given in Table I.

The base RTT is split into two parts: the forward delay $T_{fm}$ denotes the delay from the TCP sender to the bottleneck buffer input, and the backward delay $T_{bm}$ denotes the delay from the buffer output to the sender. If the sending rate of TCP at time $t$ is $R(t)$, the input rate from TCP at the bottleneck buffer is the delayed version of $R(t)$: $\lambda_d(t) = R(t - T_{fm})$.

Without packet losses, the congestion window size increases by approximately one packet every RTT. Let $dW/dACK$ denote the rate of window growth per ACK. According to the TCP congestion avoidance algorithm, $dW/dACK = 1/W$, where ACKs arrive at the sender at the rate $dACK/dt = \mu_d(t - T_{bm})$, the delayed version of $\mu_d(t)$. Thus, the window size increases at the rate $dW/dt = \mu_d(t - T_{bm})/W$. The instantaneous throughput of TCP, i.e., the output rate of the bottleneck TCP buffer, is bounded by the input data rate and the instantaneous link bandwidth:

$$\mu_d(t) = \begin{cases} \lambda_d(t), & \lambda_d(t) \le C_d(t) \\ C_d(t), & \lambda_d(t) > C_d(t), \end{cases}$$
where $C_d(t)$ can be obtained from (1) and is related to the queue management scheme and the input video rate.

TCP congestion control is a closed-loop control system: the sending rate is $R(t) = \mu_d(t - T_{bm}) + \Delta R(t)$, where $\Delta R(t)$ accounts for the increase in the window size, and $\Delta R(t) = dW/dt = \mu_d(t - T_{bm})/W$. The TCP queue length $Q_d(t)$ grows at rate $\lambda_d(t) - \mu_d(t)$ (and is bounded below by zero), and a packet loss event occurs when $Q_d(t)$ exceeds the buffer size $B$. The TCP SACK option is used to improve the loss recovery speed, and multiple losses within one window of packets are counted as one loss event. A variable DupThresh holds the number of duplicate acknowledgments required to trigger a retransmission [3]; this threshold is normally defined to be three. If a packet loss event happens at time $T_l$ at the bottleneck buffer, the sender is notified of the packet loss event by duplicate ACKs requesting the lost packet after a delay $D = D_e + T_{bm}$, where, following [6], $D_e$ is given by

$$B + 3 = \int_{t_B}^{t_B + D_e} C_d(\tau)\, d\tau,$$

with $t_B$ the time at which the buffer becomes full. Denote the congestion window size at time $D + T_l$ as $W_{ini} \equiv W(D + T_l)$. A TCP sender will freeze its congestion window size, so the received packets will not inflate the window, until a fraction $b$ of the $W_{ini}$ in-flight packets has been received by the receiver. The last $(1-b)$ fraction of the $W_{ini}$ in-flight packets will trigger $(1-b)W_{ini}$ packets being transmitted; in other words, $R(t) = \mu_d(t - T_{bm})$ during this period. After sending all these packets, the TCP sender shrinks its window by the fraction $b$, and a new congestion avoidance phase is started. Even though the expression above is for a single packet loss, it can also be used as an approximation for multiple packet losses within one window. Thus, the average TCP throughput over the time period $[t_1, t_2)$ can be obtained as

$$Th = \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} \mu_d(\tau)\, d\tau.$$
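The model above lends itself to a simple Euler integration. The sketch below is our own simplification (the delays $T_{fm}$, $T_{bm}$ are ignored, units are packets and packets per second, and the parameter values are illustrative): it steps the window growth $dW/dt = \mu_d/W$, the queue dynamics, and the buffer-overflow loss event, then reports the time-averaged throughput $Th$.

```python
def fluid_step(W, Q, lam_v, C, p, B, rtt, b=0.5, dt=1e-3):
    """One Euler step of the TCP fluid model (packets, packets/s).
    C_d comes from eq. (1); the queue absorbs lam_d - mu_d; an
    overflow (Q > B) triggers the multiplicative decrease."""
    C_d = max(C - lam_v, p * C)            # TCP's available capacity
    lam_d = W / rtt                        # fluid sending rate
    mu_d = C_d if Q > 0 else min(lam_d, C_d)
    Q = max(0.0, Q + (lam_d - mu_d) * dt)
    if Q > B:                              # loss event at the buffer
        return (1.0 - b) * W, B, mu_d
    return W + (mu_d / W) * dt, Q, mu_d    # ~ +1 packet per RTT

# Th = (1/(t2 - t1)) * integral of mu_d over [t1, t2), as above.
W, Q, acc, dt, steps = 20.0, 0.0, 0.0, 1e-3, 200_000
for _ in range(steps):
    # illustrative numbers: ~85 Mbps link = 7100 pkts/s (1500 B pkts),
    # heavy video load lam_v, 5% share for TCP, 47-packet buffer
    W, Q, mu = fluid_step(W, Q, lam_v=6800.0, C=7100.0, p=0.05,
                          B=47.0, rtt=0.06)
    acc += mu * dt
print("average TCP throughput ~", acc / (steps * dt), "pkts/s")
```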
IV. PERFORMANCE EVALUATION BY SIMULATION

In this section, we first demonstrate the advantage of using a CBQ scheme to support heterogeneous IPTV and TCP data traffic. Then the fluid models for IPTV and TCP are validated by extensive simulation using NS-2.

A. Video Loss Rate

First, we let the IPTV and TCP data traffic share a bottleneck link of 85 Mbps without CBQ. With 8 IPTV sources, the video PLR is up to $10^{-4}$ even with a 1000-packet buffer. The average throughput of TCP is quite high, around 44 Mbps; however, the instantaneous TCP throughput is highly variable, and sometimes even drops to near zero. Second, we separate the IPTV and TCP traffic by allocating dedicated bandwidth to each, e.g., 95% of the 85 Mbps link capacity is dedicated to IPTV traffic and the remaining 5% to TCP traffic. In this case, if the packet loss rate (PLR) requirement is $10^{-5}$, we can support at most eight IPTV flows with a buffer size exceeding 800 packets, as shown in Fig. 1(a). Next, we multiplex eight IPTV flows and a TCP flow in the 85 Mbps link with CBQ, and assign 95% of the link capacity to IPTV when congestion occurs. The results show that the PLR of the eight IPTV connections is much lower than in the previous case. The same conclusion can be observed if we compare the loss rate of 9 IPTV flows occupying dedicated bandwidth with that of 9 flows sharing the link with TCP under CBQ. Therefore, by multiplexing IPTV and TCP data traffic appropriately using CBQ, the PLR of IPTV can be lowered, so we can potentially support more IPTV flows by taking advantage of the multiplexing gain.

B. TCP Throughput

To validate our video and TCP models, we compare the analytical results with extensive simulations using NS-2. As shown in Fig. 1(b), different numbers of IPTV flows share the link with TCP, and the per-flow throughput of TCP is compared. The RTT is 60 ms and the TCP buffer size is 47 packets. The simulation employed multiple copies of the video trace From Mars to Earth with a random starting point for each, so the traces can be considered independent. As shown in the figure, the TCP model successfully estimates the average TCP throughput when TCP shares the link with a number of IPTV sources with highly variable data rates.

We make several observations from the figures. First, the simulation and analytical results both show that when TCP and IPTV are multiplexed with CBQ, the TCP throughput is much higher than in a dedicated 4.25 Mbps link. TCP can achieve around 10 Mbps when competing with eight IPTV flows in the link, which is more than twice its assigned bandwidth. Therefore, by multiplexing IPTV and TCP data traffic in the bottleneck with appropriate CBQ parameters, not only can the packet loss rate of IPTV be reduced, but a higher TCP throughput can also be achieved. Second, the analytical results for TCP throughput are close to the simulation results in all cases. The analysis slightly over-estimates the TCP throughput when the number of IPTV sources is small ($\le 5$). This is because, with a smaller number of IPTV sources, the available bandwidth for TCP is more dynamic and more difficult to estimate. With more IPTV sources, since the overall video rate is constrained by the CBQ scheme, the average service rate of IPTV is close to $(1-p)C$, and the variation in the available bandwidth for TCP is small. In summary, even though there are many random factors influencing the throughput of TCP, the fluid model presented here provides reasonably accurate estimates.

C. Impact of TCP RTT

We further investigate TCP performance with different RTTs and buffer sizes. In the Internet, a round-trip delay of tens of milliseconds is typical for intra-continental traffic, and hundreds of milliseconds for inter-continental traffic. The TCP throughput with 60 ms and 600 ms RTTs is given in Fig. 1(c), which shows that the TCP throughput is strongly affected by the RTT. Ideally, in steady state, the maximum TCP window size should equal $W_{max} = C_d \cdot RTT + B$ (the bandwidth-delay product plus the buffer size, in packets), and the TCP window size oscillates between $W_{max}/2$ and $W_{max}$. With a smaller RTT, TCP can adapt to the available bandwidth quickly and change its state promptly according to network conditions. With a longer delay, the product of delay and capacity becomes larger, and TCP is inefficient in high bandwidth-delay product networks. This is because, first, TCP may encounter random packet losses (e.g., due to transmission errors or transient network congestion) before it can fully expand its window to $W_{max}$. Second, TCP increases its sending rate by one packet per RTT, so TCP with a large RTT increases its sending rate too slowly to fully utilize the time-varying available bandwidth. As shown in Fig. 1(c), although a larger buffer may help increase TCP throughput with a small RTT, the effect is less obvious when the RTT is large. Comparing the analytical and simulation results in the figure, when the buffer size is small, the analysis slightly over-estimates the TCP throughput. This is because slow start is not considered in the analytical model, and slow start happens more frequently with a smaller buffer. With a large buffer, the analytical and simulation results are closer.

In summary, the proposed analytical model can accurately predict network performance with dynamic IPTV flows and closed-loop controlled TCP flows. The analytical framework can estimate the statistical profiles of IPTV and TCP throughputs. These results are helpful for network design and planning for home, access and backbone networks.
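To see why the RTT dominates, here is a quick back-of-the-envelope computation of the window ceiling $W_{max} = C_d \cdot RTT + B$ and the sawtooth recovery time; the numbers (4.25 Mbps TCP share, 1500-byte packets) are illustrative assumptions of ours, not figures from the paper.

```python
def w_max(c_d_pps: float, rtt_s: float, b_pkts: float) -> float:
    """Window ceiling: bandwidth-delay product plus buffer (packets)."""
    return c_d_pps * rtt_s + b_pkts

C_d = 4.25e6 / (1500 * 8)            # ~354 pkts/s for the TCP share
for rtt in (0.06, 0.6):
    Wm = w_max(C_d, rtt, b_pkts=47)
    # after a halving, recovery takes ~Wm/2 RTTs at +1 packet/RTT
    print(f"RTT={rtt*1e3:.0f} ms: W_max~{Wm:.0f} pkts, "
          f"recovery~{0.5 * Wm * rtt:.1f} s")
```

The tenfold RTT inflates both the ceiling and, roughly quadratically, the time needed to climb back to it (about 2 s versus 78 s here), which is consistent with the much lower throughput at 600 ms in Fig. 1(c).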
Fig. 1. Simulation results: (a) video packet loss rate with and without TCP (8 and 9 videos with TCP under CBQ in an 80.75 Mbps channel; PLR vs. video buffer size); (b) TCP throughput vs. number of video sources (simulation and analysis, RTT = 60 ms); (c) TCP throughput vs. data buffer size with 8 videos (simulation and analysis, RTT = 60 ms and 600 ms).
V. RELATED WORK

The admission region of IPTV over wired, one-hop wireless and multi-hop wireless links was derived in [7]. In reality, data traffic is mostly supported by TCP, and its influence on IPTV traffic has not been well investigated yet. In this paper, we proposed a video profile and a TCP model which facilitate the analysis of multiplexed IPTV and TCP data traffic.

The source rate of video traffic is related to several factors, e.g., coding schemes, video content, encoding parameters, and GoP patterns. Intensive research on video modeling has been done in recent decades, e.g., [8]-[9]. Our video profile is based on Markov chains, and it differs from previous work in that we consider an advanced video coding scheme and exploit the relationship between the I, B and P frames in each GoP. By considering the frame size statistics, our model can be used to analyze IPTV performance with finer granularity.

The modeling of TCP congestion control is a well-studied topic. The analysis of TCP performance with a variable bottleneck link capacity was investigated in [6]. We extend the fluid-based TCP model to consider highly dynamic IPTV traffic and resource management deployed at the bottleneck router. To the best of our knowledge, our analytical framework is the first to thoroughly consider the interaction between IPTV and TCP traffic when they are multiplexed in the same link with CBQ.

VI. CONCLUSIONS

In this paper, we have proposed a technique for multiplexing IPTV and TCP traffic in a bottleneck link under CBQ protection. The bottleneck link can reside in the backbone, access networks, or home networks. To obtain insight into the multiplexed traffic behavior, we have developed an analytical framework to study its performance. Extensive simulation with NS-2 was conducted which validates the analytical results and demonstrates the effectiveness and efficiency of the proposed solution. According to our results, by multiplexing IPTV and TCP traffic appropriately, the QoS of IPTV can be improved, and the number of IPTV flows supported in a link can potentially be enlarged. In addition, TCP flows can obtain higher throughputs thanks to the multiplexing gain.

ACKNOWLEDGMENT
This work has been supported in part by research grants from the Natural Sciences and Engineering Research Council of Canada and Bell Canada.

REFERENCES

[1] DSL Forum Architecture & Transport Working Group, "Triple-play services quality of experience (QoE) requirements," DSL Forum, Technical Report TR-126, Dec. 2006.
[2] V. Jacobson, "Congestion avoidance and control," ACM Computer Commun. Review, vol. 18, no. 4, pp. 314-329, Aug. 1988.
[3] E. Blanton, M. Allman, K. Fall, and L. Wang, "A conservative selective acknowledgment (SACK)-based loss recovery algorithm for TCP," Network Working Group, RFC 3517, Apr. 2003.
[4] S. Floyd and V. Jacobson, "Link-sharing and resource management models for packet networks," IEEE/ACM Trans. Networking, vol. 3, no. 4, pp. 365-386, Aug. 1995.
[5] P. Seeling and M. Reisslein, "Evaluating multimedia networking mechanisms using video traces," IEEE Potentials, vol. 24, no. 4, pp. 21-25, Oct./Nov. 2005.
[6] A. Baiocchi and F. Vacirca, "TCP fluid modeling with a variable capacity bottleneck link," Proc. IEEE INFOCOM, pp. 1046-1054, May 2007.
[7] E. Shihab, F. Wan, L. Cai, T. A. Gulliver, and N. Tin, "Performance analysis of IPTV in home networks," Proc. IEEE Globecom, Nov./Dec. 2007.
[8] B. Maglaris, D. Anastassiou, P. Sen, G. Karlsson, and J. Robbins, "Performance models of statistical multiplexing in packet video communications," IEEE Trans. Commun., vol. 36, no. 7, pp. 834-844, July 1988.
[9] U. K. Sarkar, S. Ramakrishnan, and D. Sarkar, "Modeling full-length video using Markov-modulated gamma-based framework," IEEE/ACM Trans. Networking, vol. 11, no. 4, pp. 638-649, Aug. 2003.
