Network throughput in GNS3

GNS3, and especially Dynamips, was never designed for high performance. However, I recently ran some
tests to identify where the network throughput bottlenecks are in GNS3, and I think some of the findings
will be useful to some of you.
To measure throughput, I used iperf with one client (a VM or a real host) and one server VM, with the
tested device placed between these two endpoints.
Commands used
VM1 or host: iperf -c <IP_of_VM2> -t 30 -P 10
VM2: iperf -s
Please note that everything was run on a bare-metal server with an i7-4770 CPU @ 3.40 GHz (8 cores) and
32 GB of RAM, running Ubuntu Linux, using VirtualBox and Qemu with VT-x enabled when possible. Depending
on your machine you may achieve lower or higher throughput.
Test results
Note that iperf was run 3 times for each test.
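For reference, the per-test averages of the three runs below can be computed with a small helper like this one (a sketch; the avg_mbits name and the awk one-liner are my own, not part of iperf):

```shell
# avg_mbits: average several iperf bandwidth figures given in Mbits/sec.
# Each argument is one run's result; awk sums them and divides by the count.
avg_mbits() {
    printf '%s\n' "$@" | awk '{ sum += $1 } END { printf "%.2f Mbits/sec\n", sum / NR }'
}

# Example with the first test's three runs:
avg_mbits 18 18.1 18
```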
Linux VM1 (VirtualBox) <-> Dynamips c3660 router with Leopard-2FE <-> Linux VM2 (VirtualBox)
18 Mbits/sec
18.1 Mbits/sec
18 Mbits/sec
Linux VM1 (VirtualBox) <-> Dynamips c3660 router with NM-1FE-TX <-> Linux VM2 (VirtualBox)
18 Mbits/sec
18 Mbits/sec
17.9 Mbits/sec
Linux VM1 (VirtualBox) <-> Dynamips c3725 router with GT96100-FE <-> Linux VM2 (VirtualBox)
1.15 Mbits/sec
1.12 Mbits/sec
1.15 Mbits/sec
Using the GT96100-FE network module (the default for slot 0) resulted in much lower throughput.
Linux VM1 (VirtualBox) <-> Dynamips c3725 router with NM-1FE-TX <-> Linux VM2 (VirtualBox)
18 Mbits/sec
18.1 Mbits/sec
17.9 Mbits/sec
Linux VM1 (VirtualBox) <-> Dynamips c7200 router with PA-2FE-TX <-> Linux VM2 (VirtualBox)
17.6 Mbits/sec
17.5 Mbits/sec
17.6 Mbits/sec
Linux VM1 (VirtualBox) <-> Dynamips c7200 router with PA-FE-TX <-> Linux VM2 (VirtualBox)
112 Kbits/sec
106 Kbits/sec
109 Kbits/sec
Using the PA-FE-TX port adapter resulted in extremely low throughput. I would advise against using it.
Linux VM1 (VirtualBox) <-> Dynamips c7200 router with PA-4E <-> Linux VM2 (VirtualBox)
18.3 Mbits/sec
18.3 Mbits/sec
18.2 Mbits/sec
Linux VM1 (VirtualBox) <-> Dynamips c7200 router with PA-8E <-> Linux VM2 (VirtualBox)
18.3 Mbits/sec
18.3 Mbits/sec
18.3 Mbits/sec
Linux VM1 (VirtualBox) <-> Linux VM2 (VirtualBox)
831 Mbits/sec
942 Mbits/sec
928 Mbits/sec
Unsurprisingly, connecting the VMs back to back resulted in the highest throughput.
Linux VM1 (VirtualBox) <-> Dynamips Ethernet switch <-> Linux VM2 (VirtualBox)
779 Mbits/sec
770 Mbits/sec
832 Mbits/sec
Linux VM1 (VirtualBox) <-> IOU L2 switch <-> Linux VM2 (VirtualBox)
420 Mbits/sec
426 Mbits/sec
424 Mbits/sec
Linux VM1 (VirtualBox) <-> IOU L3 router <-> Linux VM2 (VirtualBox)
649 Mbits/sec
655 Mbits/sec
654 Mbits/sec
IOU performance is higher than I thought.
Linux VM1 (VirtualBox) <-> CSR1000v (VirtualBox) <-> Linux VM2 (VirtualBox)
2.35 Mbits/sec
2.32 Mbits/sec
2.32 Mbits/sec
It is important to note that the CSR1000v trial license limits throughput to 2.5 Mb/s, so these results
were expected.
Activating the 60-day premium trial license with the "license boot level premium" command raises the limit
to 50 Mb/s (a reboot is required; this applies to Cisco IOS XE release 3.12S and earlier).
47 Mbits/sec
47.1 Mbits/sec
47 Mbits/sec

Finally, please note that with Cisco IOS XE 3.13S and later, throughput in demo mode (without any license)
is limited to 100 Kb/s, compared to 2.5 Mb/s with XE 3.12S and earlier. However, with the AppX evaluation
license you can get 10 Gb/s. Please see Cisco CSR 1000v Installation on Qemu Virtual Machine for details.
Linux VM1 (VirtualBox) <-> vIOS (Qemu without KVM, e1000 NICs) <-> Linux VM2 (VirtualBox)
2.09 Mbits/sec
2.05 Mbits/sec
2.09 Mbits/sec
It looks like vIOS also has a throughput limitation, like the CSR1000v.
Linux VM1 (Qemu with KVM) <-> vIOS (Qemu with KVM, e1000 NICs) <-> Linux VM2 (Qemu with KVM)
2.13 Mbits/sec
2.09 Mbits/sec
2.11 Mbits/sec
The limit applies whether KVM is enabled or not.
Local host <-> TAP <-> Dynamips c3660 router with NM-1FE-TX <-> Linux VM2 (VirtualBox)
17.9 Mbits/sec
17.9 Mbits/sec
18.0 Mbits/sec
Local host <-> TAP <-> Dynamips Ethernet switch <-> Dynamips c3660 router with NM-1FE-TX <->
Linux VM2 (VirtualBox)
18.1 Mbits/sec
18.1 Mbits/sec
18 Mbits/sec
Remote host <-> Gigabit Ethernet NIC (nio_gen_eth in the cloud) <-> Dynamips c3660 router with
NM-1FE-TX <-> Linux VM2 (VirtualBox)
1.54 Mbits/sec
1.09 Mbits/sec
1.69 Mbits/sec
Interesting finding here: connecting your topology to a cloud using a generic Ethernet NIO (nio_gen_eth)
results in low throughput.
Remote host <-> Gigabit Ethernet NIC <-> TAP (nio_tap in the cloud) <-> Dynamips c3660 router with
NM-1FE-TX <-> Linux VM2 (VirtualBox)
17.9 Mbits/sec
17.8 Mbits/sec
17.9 Mbits/sec
No such problem using a TAP NIO. Note that IP forwarding was enabled to allow traffic to pass from the
Ethernet NIC to the TAP.
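For anyone who wants to reproduce the routed TAP setup, a minimal sketch follows (the interface names tap0 and eth0 and the example subnet are my assumptions; run as root):

```shell
# Create and address a TAP interface for the GNS3 cloud (names are examples).
ip tuntap add dev tap0 mode tap
ip addr add 10.0.0.1/24 dev tap0
ip link set tap0 up
# Enable IP forwarding so traffic can be routed between eth0 and tap0.
sysctl -w net.ipv4.ip_forward=1
```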
Remote host <-> Gigabit Ethernet NIC <-> Bridge br0 <-> TAP (nio_tap in the cloud) <-> Dynamips
c3660 router with NM-1FE-TX <-> Linux VM2 (VirtualBox)
17.9 Mbits/sec
17.9 Mbits/sec
17.8 Mbits/sec
Throughput remains good when using a bridge that contains both the Ethernet NIC and the TAP.
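The bridged variant can be sketched like this (again, br0, tap0 and eth0 are assumed names; run as root):

```shell
# Create the bridge and enslave both the physical NIC and the TAP to it.
ip link add name br0 type bridge
ip tuntap add dev tap0 mode tap
ip link set eth0 master br0
ip link set tap0 master br0
ip link set eth0 up
ip link set tap0 up
ip link set br0 up
```

With a bridge, frames are switched at layer 2, so IP forwarding is not required in this setup.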
Findings
Low throughput was identified when using the following devices:
Dynamips c7200 router with PA-FE-TX
Dynamips c3725 router with GT96100-FE
CSR1000v and IOSv are artificially limited to 2.5 Mb/s with their demo license.
IOU is faster than expected.
More interestingly, throughput is low when using an Ethernet interface directly (nio_gen_eth) in GNS3 on Linux
(apparently this doesn't affect Windows). Dynamips uses libpcap to attach to an Ethernet interface on Linux,
and for an unknown reason this results in lower throughput than using a TAP interface.
