
SIMULATION AND PERFORMANCE ANALYSIS OF COLLABORATIVE PLANNING TRAFFIC OVER A MULTI-HOP NEAR TERM DIGITAL RADIO (NTDR) NETWORK

Michael Snyder & James Dimarogonas
The MITRE Corporation
Battlefield Communications & Networks Department
Eatontown, New Jersey
msnyder@mitre.org, jad@mitre.org

ABSTRACT

The United States Army is introducing new technology capabilities concurrent with its efforts to upgrade the tactical communication infrastructure. Simulation is playing a significant role in providing information on the impact that new technology injection will have upon the evolving tactical communication network. A new capability being considered for fielding is collaborative planning. Collaborative planning applications allow warfighters to plan missions effectively by integrating voice, text chat, and whiteboarding. The traffic that collaborative planning generates will primarily traverse the Near Term Digital Radio (NTDR) network among battalion and brigade Tactical Operation Centers (TOC). The NTDRs employ self-organizing, two-tier ad hoc network protocols. In this paper we describe our simulation approach and present simulation results on the performance of collaborative planning over the NTDR network.

INTRODUCTION

A form of collaborative planning that most of us are familiar with is John Madden explaining and drawing out a play during a football game telecast. One frame is captured and broadcast to us while the overlay data, in the form of Xs, Os, and arrows, is sent in pieces as he creates it with an electronic telestrator. The United States Army is using this concept and similar applications to coordinate and plan battle engagements. In fact, collaborative planning has become a First Digital Division (FDD) requirement, and the collaborative planning application is currently being fielded down to the maneuver battalion level. However, there are differences between the application and communications environment in which John Madden works and that of the tactical Army commanders. First, the Army application will run on individual computers with capabilities for integrating voice, text chat, and whiteboarding. The application converts the information into data that is grouped into files or packets. These packets are then disseminated using the Internet Protocol (IP) over the tactical communications network. Second, unlike the large-bandwidth, relatively low-noise, and stationary communications environment that the commercial television media uses, the tactical communications network is characterized by low bandwidth, high noise, and mobility. Therefore, a major concern has been the impact that the tactical communications network will have on collaborative planning performance.

The current collaborative planning requirement includes text chat and whiteboarding. We assumed that text chat should not present a difficulty in this environment because of its small file transfers. However, whiteboarding performance could be degraded when used over tactical communications. Whiteboarding begins with the dissemination of a common backdrop, normally a screen capture file of a map. After the common backdrop file is sent and received by all participants, only updates of symbols and markings are exchanged. These updates are small in file and packet sizes. Therefore, in our simulation and analysis of whiteboarding performance, we focused on the dissemination of the initial backdrop. This is a screen capture file of a map of the area of operations, which varies in size between 200 and 800 kilobytes of data.
The file is sent from the session originator to all session participants. The collaborative planning application currently performs this transfer by using multiple unicast connections over the Transmission Control Protocol (TCP) for reliable transport. Because the collaborative planning application will reside at the maneuver battalion and brigade TOCs, the primary communication medium that the collaborative planning data will traverse is the NTDR network. The NTDR network formation algorithm organizes itself into a two-tier structured network: intra-cluster grouping for local traffic and backbone clusterhead grouping for inter-cluster traffic. Because the NTDR clusterhead has only one transceiver, which requires it to be on either the backbone channel or the intra-cluster channel, a third common data reservation channel is used to negotiate and manage these resources. The NTDR channel access is similar to Multiple Access Collision Avoidance (MACA), as described in [1], over the common broadcast channel. Additional NTDR characteristics are listed below:

Data rate: 500 kbps
Frequency: 420 - 435 MHz
High power: 20 W
Low power: 3.2 W
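These numbers already bound the best case for backdrop dissemination. As a rough illustration (not taken from the paper), the sketch below computes the contention-free serialization time of a 200 to 800 Kbyte backdrop over a single 500 kbps NTDR link; protocol overhead, channel access, multi-hop relaying, and TCP behavior can only add to this.

```python
# Back-of-the-envelope lower bound on backdrop transfer time over one NTDR link.
# Illustrative only: ignores RTS/CTS overhead, FEC, TCP, queuing, and multi-hop relaying.

LINK_RATE_BPS = 500_000          # NTDR data rate: 500 kbps

def serialization_time(file_kbytes: float, rate_bps: float = LINK_RATE_BPS) -> float:
    """Seconds needed just to clock the file's bits onto the channel once."""
    return file_kbytes * 1024 * 8 / rate_bps

for size in (200, 567, 800):                      # backdrop sizes cited in the paper (Kbytes)
    one_copy = serialization_time(size)
    seven_copies = 7 * one_copy                   # seven unicast recipients, sent serially
    print(f"{size} Kbytes: {one_copy:5.1f} s per copy, {seven_copies:6.1f} s for 7 unicast copies")
```

Even under these ideal assumptions, seven serial unicast copies of a 567 Kbyte backdrop occupy the channel for roughly a minute; contention, multi-hop relaying, and TCP behavior, examined below, push the actual delays far higher.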

In our study, we focused on a session within a brigade (BDE) area. In this scenario, up to eight participants set up their whiteboarding session over a multi-hop NTDR network, so a fairly large file has to be distributed unicast, using TCP, through a multi-hop network seven times. There have been past studies of TCP over wireless multi-hop protocols. In [2], TCP throughput was studied over multiple hops using Carrier Sense Multiple Access (CSMA), Floor Acquisition Multiple Access (FAMA), and MACAW as the channel access, for ring, grid, and string network topologies. In [3], TCP performance was studied over a mobile ad hoc 802.11 wireless network. Both concluded that throughput decreases dramatically with the number of hops, primarily because of the inability of TCP to differentiate among network delays, congestion, and packet loss, as well as the contention delays between data packets and returning acknowledgements. These results prompted us to investigate the behavior of the backdrop file dissemination over the NTDR BDE area network. Because there was a need to understand the performance quickly, and the availability of both NTDRs and test funds was limited, physical testing was conducted over a local area network to ascertain backdrop file size and functionality, while simulation was used to collect performance data. The simulation was created and executed using the OPNET simulation application tool.

SIMULATION APPROACH

To emulate the environment and scenario of the current NTDR network, we used information obtained from field experiments. This information included the physical topology of the network (see Figure 1) and the traffic patterns that the military users generated. These traffic patterns, which we categorized and quantified as Limited User Test (LUT) data, provided the background loading necessary for assessing the impact of the additional collaborative planning traffic on the network. The topology also shows the NTDR two-tiered network structure.

We used OPNET as the application tool to create our simulation. The tool has a library of modules that provided the TCP and IP functionality. The TCP module allowed us to change the TCP parameters to match the default settings used within the Solaris operating system. We did make modifications to the application module to imitate the collaborative planning traffic.
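For reference, the following is a minimal, illustrative sketch (plain Python, not OPNET code) of the traffic pattern the modified application module was meant to imitate: a session originator pushes one backdrop file to each participant over its own TCP connection, followed by small whiteboard updates. The session timing, update sizes, and update rate shown here are assumptions used only to make the example concrete.

```python
import random

# Illustrative model of the collaborative planning traffic pattern: one backdrop file
# per recipient (unicast TCP), then small whiteboard updates echoed to every participant.
# The constants below are placeholders, not the exact parameters of the simulation.

BACKDROP_BYTES = 567 * 1024      # backdrop size used in the simulation runs
UPDATE_BYTES = (100, 2_000)      # whiteboard symbol/marking updates are small (assumed range)
RECIPIENTS = 7                   # up to 8 participants -> 7 unicast transfers

def session_events(start_s: float, n_updates: int = 20, seed: int = 1):
    """Yield (time_s, destination, bytes, transport) tuples for one whiteboarding session."""
    rng = random.Random(seed)
    for dst in range(1, RECIPIENTS + 1):
        # Backdrop transfers open at session start, each on its own TCP connection.
        yield (start_s, dst, BACKDROP_BYTES, "tcp")
    t = start_s
    for _ in range(n_updates):
        t += rng.expovariate(1 / 30)                 # an update every ~30 s on average (assumed)
        size = rng.randint(*UPDATE_BYTES)
        for dst in range(1, RECIPIENTS + 1):         # updates are sent to every participant
            yield (t, dst, size, "tcp")

for event in session_events(start_s=600.0):
    print(event)
```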

[Figure 1 (below) shows the brigade-area NTDR topology used in the simulation: BDE TOC, BDE TAC, BDE TAC2, DMAIN, 1-66 BN, 3-66 BN, 4-42 BN, BN 1-22, 299 ENG, 124 Signal, and 4th FSB subnets. The legend distinguishes the nodes hosting the CP application, clusterhead nodes, local nodes, the test node, and inter-cluster and intra-cluster links.]
Figure 1. Simulated NTDR Network Topology

Each node was represented by a subnet that consisted of an NTDR node model and a LAN node model. The NTDR node model is shown in Figure 2. The standard ethernet2_gtwy node model was modified by replacing one of the Ethernet layer connections with custom data link modules and radio transmitter and receiver modules. The dlc process module interfaced with the standard IP module. The mac process module provided the custom medium access layer functionality and will be covered in more detail later in this section. The xmt_mgr and rec_mgr modules were added for more resolution at the physical layer. These modules provided extra functionality such as forward error correction (FEC) and synchronization determination. Six of the 14 pipeline stage functions were modified to provide for the NTDR physical layer characteristics: dra_rxgroup, dra_propdel, dra_power, dra_inoise, dra_bkgnoise, and dra_ecc. The dra_rxgroup stage was modified for half-duplex operation. The dra_propdel and dra_power pipeline stage functions were altered to receive externally computed values for range and path loss between nodes. The dra_bkgnoise and dra_inoise stages were changed to compute their impact for a direct spread waveform. The dra_power and dra_ecc functions were also involved in the synchronization determination.
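As an illustration of the kind of calculation the modified dra_propdel and dra_power stages perform, the sketch below (plain Python, not OPNET Proto-C pipeline code) derives propagation delay from an externally supplied range and received power from an externally supplied path loss. The free-space fallback, the 425 MHz mid-band frequency, and the example numbers are our own assumptions.

```python
import math

SPEED_OF_LIGHT_M_S = 3.0e8

def propagation_delay_s(range_m: float) -> float:
    """dra_propdel-style calculation: one-way delay from the externally computed node separation."""
    return range_m / SPEED_OF_LIGHT_M_S

def received_power_dbm(tx_power_w: float, path_loss_db: float) -> float:
    """dra_power-style calculation when the path loss between nodes is supplied externally."""
    tx_power_dbm = 10 * math.log10(tx_power_w * 1000)
    return tx_power_dbm - path_loss_db

def free_space_path_loss_db(range_m: float, freq_hz: float = 425e6) -> float:
    """Fallback path loss if no externally computed value is available (an assumption, not the model's rule)."""
    return 20 * math.log10(range_m) + 20 * math.log10(freq_hz) - 147.55

# Example: high-power NTDR (20 W) at 10 km with an externally supplied 125 dB path loss.
print(f"propagation delay = {propagation_delay_s(10_000) * 1e6:.1f} us")
print(f"received power    = {received_power_dbm(20.0, 125.0):.1f} dBm")
print(f"free-space loss   = {free_space_path_loss_db(10_000):.1f} dB")
```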

Figure 2. NTDR Node Model

As previously stated, the data link layer required a custom channel access module and a custom data link module to interface with the TCP/IP stack. This channel access process is described in [4]. The NTDR has a contention-based channel access protocol that uses a sender-receiver handshake. Over the common data reservation channel, the source node generates a control packet, Request To Send (RTS), that identifies the source and destination nodes, channel, power, and packet length. The destination node responds with a control packet, Clear To Send (CTS), that echoes this information. All receiving neighbors of the source and destination update their resource availability lists and their next access times. The source and destination nodes proceed to exchange data and acknowledgement packets on the announced channel, return to monitoring the common control channel, and compute their next access times. The time to access is based upon four rules described in [4]. These rules attempt to:

- provide higher priority packets with a faster opportunity to access resources;
- increase the probability of success for subsequent attempts of unsuccessful transmissions (no CTS response or no Link Acknowledgement received) with a backoff algorithm;
- prevent multiple transmissions on a channel by calculating a holdoff delay from the received RTS of a broadcast message or a received CTS; and
- prevent one node from dominating access to resources when other nodes require access, as determined by the successful exchange of a message packet.

A simplified sketch of how such timing rules can combine is given below. Other functionality such as queuing discipline, duplicate filtering, and retransmission attempts was also implemented in the model. Figure 3 depicts the state diagram within OPNET that was created for the channel access module.
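The following is an illustrative Python sketch, not the specification in [4]: it shows one way the four access-timing rules could combine into a next-access-time computation. The slot length, the exponential backoff form, and the fairness penalty are assumptions chosen only to make the rules concrete.

```python
import random

# Illustrative combination of the four NTDR access-timing rules described above.
# Slot and delay constants are placeholders, not values from the NTDR specification [4].
SLOT_S = 0.004            # nominal reservation-channel slot time (assumed)
rng = random.Random(0)

def next_access_delay(priority: int,
                      failed_attempts: int,
                      overheard_reservation_s: float,
                      just_succeeded: bool) -> float:
    """Return the delay in seconds before a node may contend again on the reservation channel.

    priority: 0 = highest; higher-priority packets wait fewer slots (rule 1)
    failed_attempts: consecutive RTS attempts with no CTS or no Link Acknowledgement (rule 2)
    overheard_reservation_s: remaining busy time announced by an overheard RTS/CTS (rule 3)
    just_succeeded: the node just completed a successful exchange, so it defers (rule 4)
    """
    base = (priority + 1) * SLOT_S                              # rule 1: priority-weighted base wait
    backoff = rng.uniform(0, (2 ** failed_attempts) * SLOT_S)   # rule 2: randomized backoff after failures
    holdoff = overheard_reservation_s                           # rule 3: stay off a reserved channel
    fairness = 4 * SLOT_S if just_succeeded else 0.0            # rule 4: yield after a successful exchange
    return base + backoff + holdoff + fairness

# Example: a routine-priority packet after one failed RTS, with 20 ms of overheard reservation time.
delay = next_access_delay(priority=2, failed_attempts=1,
                          overheard_reservation_s=0.020, just_succeeded=False)
print(f"next access in {delay * 1000:.1f} ms")
```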

Figure 3. OPNET Channel Access State Diagram

The simulations were run to cover a simulation period of 1.6 hours, with several different random seeds. During this period, background traffic loading was sent unreliably (via UDP) to assess the impact that the additional collaborative planning traffic would have on the background traffic (percentage of successful reception and end-to-end delay). Collaborative planning sessions occurred randomly over the period, and file transfer completion times were collected. Additional simulation runs were made with the initial background traffic loading increased by factors of 2 and 4.

VERIFICATION

To ensure the simulation reacted similarly to the application, experimentation was conducted to understand the application's functionality in more detail. The laboratory setup consisted of workstations loaded with the collaborative planning applications and connected over a local area network. A protocol analyzer was used to collect the data. The intent of the experimentation was not to recreate a realistic battlefield scenario, but rather to obtain a more general characterization of how the applications operate, what protocols they use, and their file sizes, image compression ratios, and delays. Whiteboarding sessions were initiated, and the screen capture of a map was copied from an Army Battle Control Station (ABCS) and pasted into the workspace.

SIMULATION RESULTS

We first simulated single file transfers over one, two, and three hops in the network. We ran the simulation with different random seeds and collected data on the file transfer delays and the background traffic packet completion rates. The results can be seen in Figure 4. Increasing the background traffic has a more pronounced effect over multiple hops than over a single hop: for one hop, increasing the background traffic four-fold increased the file transfer delay by a factor of 1.5, while for three hops the increase is a factor of 4. The difference can be attributed primarily to one of the access rules and to TCP behavior. The access rule that prevents one node from dominating resources also increases delays for packet transfers across multi-hop deliveries: if a packet starts from a non-clusterhead node, it is first transferred to the clusterhead, and the clusterhead then has a greater chance of accessing the channel next than the originator, because the access algorithm adds additional wait time after the originator's successful transfer of the first packet. TCP behavior also contributes to the delays. Different TCP window sizes did not produce an improvement, because the bandwidth-delay product was almost equal to the standard Solaris TCP window size of 8192 bytes. The increase in delay with more background traffic is probably due to larger queuing delays at the clusterheads and a larger number of dropped packets. At the baseline (one times) background traffic loading, the TCP window size varied between two and three segments. With higher levels of background traffic, the TCP window usually remained at one segment, essentially sending one packet and waiting for the ACK before sending the next one. This results in very low utilization of the bandwidth, and therefore the large transfer delays observed.

We then tried sending the files unicast to multiple recipients, opening the TCP sessions concurrently. This was done with the specific collaborative planning applications in mind. The results can be seen in Figure 5, for the case of a 567 Kbyte file being sent to seven different recipients, two of them one hop away and five of them three hops away. This was done for the three different window sizes indicated in the graph. We see that window size does not significantly affect the performance. The delay measured is the delay to complete all seven file transfers. We first note that the delay at high background loading is far less than when sending the seven files separately, as seen in Figure 6: while at low loading the delay to complete all file transfers is about the same for the two cases, at 4 times the traffic it is about 2.5 times higher. We also note that the effect of background traffic is much smaller for the concurrent case than for the consecutive file transfer case: for the concurrent case the delay doubles when the background traffic is varied from one to four times, but for the consecutive transfer case the delay was an order of magnitude higher. With concurrent file transfers, even if one session is reduced to sending one segment and waiting for the ACK to return, the other TCP sessions use the otherwise idle channel, increasing bandwidth utilization. At lower loading this is not the case, since the windows are larger and the TCP sessions compete for the same bandwidth. Similar results have been reported in the past in [5] and [6].
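To make the window-limited behavior concrete, the short calculation below (an illustration, with the round-trip time as an assumed input; the paper does not report measured RTTs) compares the throughput of a one-segment stop-and-wait TCP session against the 500 kbps link rate and shows the resulting time to move a 567 Kbyte backdrop.

```python
# Why a one-segment TCP window starves the link: effective rate ~= MSS / RTT.
# The RTT values are assumptions for illustration only.

LINK_RATE_BPS = 500_000
MSS_BYTES = 1460
WINDOW_BYTES = 8192
FILE_BYTES = 567 * 1024

for rtt_s in (0.13, 0.5, 1.0, 2.0):     # assumed round-trip times over the multi-hop NTDR path
    bdp_bytes = LINK_RATE_BPS / 8 * rtt_s
    stop_and_wait_bps = MSS_BYTES * 8 / rtt_s            # one segment per round trip
    transfer_s = FILE_BYTES * 8 / min(stop_and_wait_bps, LINK_RATE_BPS)
    print(f"RTT {rtt_s:4.2f} s: BDP {bdp_bytes / 1024:5.1f} Kbytes vs window {WINDOW_BYTES / 1024:.0f} Kbytes, "
          f"stop-and-wait ~{stop_and_wait_bps / 1000:5.1f} kbps -> {transfer_s / 60:4.1f} min per 567 Kbyte file")
```

With the window collapsed to one segment, the achievable rate is set by the round-trip time rather than by the 500 kbps channel, which is consistent with the minutes-long transfer delays observed in the simulation.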
[Figures 4 and 5 (captions below): Figure 4 plots average file transfer delay (seconds) for 1, 2, and 3 hops against the multiple of LUT background traffic; Figure 5 plots transfer delay (seconds) and background traffic completion rate (%) against the multiple of background traffic for TCP window sizes of 1460, 8192, and 256000 bytes.]
Figure 4. Average file transfer delay for a file size of 567 Kbytes and a window size of 8192 bytes with respect to background traffic loading (see text)

Figure 5. Transfer delay for completing the transfer of seven files of 567 Kbytes, and the background traffic completion rates, for three different TCP window sizes at different levels of background traffic

[Figure 6 plots delay (seconds) against the multiple of background traffic for the consecutive and concurrent transfer cases.]

Figure 6. Average file transfer delay for a file size of 567 Kbytes and a window size of 8192 bytes with respect to background traffic loading, for consecutive and concurrent file transfers

CONCLUSIONS

From previous studies of TCP performance over multi-hop wireless links, we expected large file transfer delays. From the simulation study we saw that opening the TCP sessions concurrently makes the map file transfer faster. Still, we can expect startup delays for whiteboarding on the order of 6 to 16 minutes or more, depending on the background loading of the network. During the experiment, we saw in one of the applications that the initial screen capture file underwent compression, or there was a decrease in resolution. This is an acceptable solution, since high resolution is not needed for the purposes of collaborative planning. Multicast may also improve performance by minimizing the amount of data that has to traverse the network in order for the session to initiate. Further study is needed on actual systems to determine the maximum number of users that the applications can support in this bandwidth-constrained environment.

REFERENCES

[1] P. Karn, "MACA - A New Channel Access Method for Packet Radio," Proceedings of the ARRL/CRRL Amateur Radio 9th Computer Networking Conference, 22 September 1990.
[2] M. Gerla, K. Tang, and R. Bagrodia, "TCP Performance in Wireless Multi-hop Networks," Proceedings of the Second IEEE Workshop on Mobile Computing Systems and Applications (WMCSA '99), 1999, pp. 41-50.
[3] G. Holland and N. Vaidya, "Analysis of TCP Performance over Mobile Ad Hoc Networks," Proceedings of the Fifth Annual ACM/IEEE International Conference on Mobile Computing and Networking, 1999, pp. 219-230.
[4] ITT Aerospace/Communications Division, Link Layer Protocol Specification for NTDR Software, Revision 1.1, July 1997.
[5] D. Iannucci and J. Lakashman, "MFTP: Virtual TCP Window Scaling Using Multiple Connections," Technical Report RND-92-002, NASA Ames Research Center, 1992.
[6] M. Allman, C. Hayes, H. Kruse, and S. Ostermann, "TCP Performance over Satellite Links," Proceedings of the 5th International Conference on Telecommunication Systems, 1997.
