
GNS3 - Performance of Network Elements

GNS3 is mainly meant as a tool for testing network functionality. The performance of data transfer within GNS3 bears no resemblance to that of real network elements, so it had no priority during the design and implementation of the software. Furthermore, the performance of a network element depends on the implementation of its VM and is therefore beyond the control of GNS3.

Nevertheless it is interesting to see how well (or badly) GNS3 performs in some common situations. I’m using a fairly old computer with an i5 processor running Debian Linux. GNS3 runs natively on this computer; no GNS3 VM is used.

I’ve got decent internet connectivity. When using curl to download a big file from http://deb.debian.org/ I achieve a speed of 29 MByte/sec / 230 MBit/sec.
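
For reference, the download speed can be measured with curl alone. This is a minimal sketch; the file ls-lR.gz is only an example of a reasonably large file on the mirror:

    # download to /dev/null and report the average speed (bytes/sec)
    curl -o /dev/null -w 'average speed: %{speed_download} bytes/sec\n' \
         http://deb.debian.org/debian/ls-lR.gz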

Connection to the Outside

The first test uses a cloud connection to the internet; as the end device I’m using a Docker container with a Linux OS.

Cloud Connection

Under these circumstances I get a speed of 0.23 MByte/sec / 1.8 MBit/sec. In my experience this poor throughput happens only when using Ethernet interfaces; when using TAP or bridge interfaces, the performance is very good.
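
For those who want to try the TAP variant: a cloud node can attach to a TAP interface created on the host, for example with iproute2. This is only a rough sketch; the interface name, address and forwarding setup are examples, not part of the test above:

    # create a TAP interface on the host for the GNS3 cloud node
    sudo ip tuntap add dev tap0 mode tap
    sudo ip addr add 192.168.200.1/24 dev tap0
    sudo ip link set dev tap0 up
    # allow the topology to reach the internet via the host (a NAT rule is also needed, not shown)
    sudo sysctl -w net.ipv4.ip_forward=1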

Let’s test the NAT cloud instead, which is equivalent to a cloud connection to the “virbr0” bridge.

NAT Connection

This results in a download speed of 28.2 MByte/sec / 225 MBit/sec, almost the same speed as outside GNS3. So the NAT cloud is quite an efficient way to connect to the outside. But GNS3 runs two ubridge processes; at this network utilisation they each have a CPU usage of 40% (of one core), so in total ubridge uses 80% of one core. That means that 20% of the four-core i5 CPU is used by the link from the NAT cloud to the Docker container.
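
One way to obtain such per-process CPU figures while the download runs is to look at the process list; a sketch, just one of several options:

    # CPU usage of the ubridge processes, one-shot ...
    ps -C ubridge -o pid,%cpu,cmd
    # ... or sampled once per second (pidstat is part of the sysstat package)
    pidstat -p $(pgrep -d, ubridge) 1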

Router

Let’s insert a router and see how this changes the performance.

Router

Model           Version     Speed [MByte/sec]   Speed [MBit/sec]   CPU usage
Dynamips 3725   12.4(25d)    0.13                 1.0              20%, mainly dynamips
IOSv            15.7(3)M3    0.25                 2.0              30%, mainly qemu
IOU             15.7(3)M2   12.0                96                 105%, 80% ubridge, 25% IOU
Docker Alpine   3.13        30.5               244                 150%, ubridge
QEMU Alpine     3.13        28.0               224                 250%, 125% qemu, 125% ubridge

Dynamips images have quite poor performance. IOSv images are rate limited to 2 MBit/sec, so they can’t deliver a good transfer speed. IOU images perform reasonably well, but their incomplete implementation makes them difficult to use. Docker and QEMU Linux images are very powerful, but even this simple QEMU topology requires 3 CPU cores to achieve good performance.
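
If you want to repeat the download test from inside an Alpine end node: curl is not part of the Alpine base image, so it has to be installed first. A short sketch, using the same example file as above:

    # inside the Alpine container or VM
    apk add --no-cache curl
    curl -o /dev/null -w 'average speed: %{speed_download} bytes/sec\n' \
         http://deb.debian.org/debian/ls-lR.gz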

Switch

Now the tests are repeated using switch devices.

Switch

Model                Version     Speed [MByte/sec]   Speed [MBit/sec]   CPU usage
Dynamips 3725        12.4(25d)   28.5               228                 170%, 40% dynamips, 130% ubridge
IOSv-L2              15.2         0.25                2.0               35%, mainly qemu
IOU-L2               15.2        10.3                82                 100%, 60% ubridge, 40% IOU
Docker OpenvSwitch   2.12.3      11.0                88                 170%, 100% ovs-vswitchd, 70% ubridge
Docker Alpine        3.13        30.5               244                 150%, ubridge
QEMU Alpine          3.13        27.9               223                 230%, 120% qemu, 110% ubridge

The Dynamips Etherswitch board has surprisingly good performance. IOSv-L2 images are rate limited to 2 MBit/sec, the same way as IOSv images. IOU-L2 images perform reasonably well, similar to routing with the IOU images. The OpenvSwitch image also performs well, but needs a lot of CPU for switching. Docker and QEMU Linux images are very fast, but they need quite some CPU power for that.
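
If you use the OpenvSwitch Docker image, its bridging can be inspected from inside the container with the usual OVS tools. A sketch; the bridge name br0 is an assumption and depends on the image:

    # list bridges and attached ports
    ovs-vsctl show
    # per-port traffic counters of the bridge
    ovs-ofctl dump-ports br0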

Conclusion

You can expect good performance from Linux-based images. Other images are good for testing functionality, but most of them have a low throughput. GNS3 uses ubridge processes to create the connections between nodes; their performance is not bad, but they require a fast CPU.