Sunday, May 27, 2012

Open vSwitch Installation Notes: OpenFlow Toolbelt

In the lab, I quite often find myself installing Open vSwitch instances, configuring network parameters for certain functionality, and probing network interfaces to check connectivity. You know, the daily routine in an OpenFlow research laboratory. To ease the workflow, I put together a small toolbelt.

$ of-toolbelt.sh
Usage: of-toolbelt.sh <ACTION>
Actions: ofctl
         vsctl
         vswitchd
         insmod
         db_create
         db_start
         get_controller
         set_controller
         del_controller
         get_dpid
         set_dpid
         get_stp
         set_stp
         dl_ovs
         install_ovs
         ethtool_help
         arping_help
$ of-toolbelt.sh dl_ovs
$ of-toolbelt.sh install_ovs openvswitch-1.4.1.tar.gz
$ of-toolbelt.sh db_create
$ of-toolbelt.sh db_start
$ of-toolbelt.sh vsctl add-br br0
$ of-toolbelt.sh set_dpid br0 1
$ of-toolbelt.sh get_dpid br0
"0000000000000001"
$ of-toolbelt.sh get_stp br0
false
$ of-toolbelt.sh set_stp br0 true
$ of-toolbelt.sh insmod
$ of-toolbelt.sh vswitchd
$ of-toolbelt.sh ethtool_help
ethtool eth0
ethtool -s eth0 speed 10 duplex full autoneg off
ethtool -s eth0 autoneg on
ethtool -s eth0 autoneg on advertise 0x001    # 10 Half
                                     0x002    # 10 Full
                                     0x004    # 100 Half
                                     0x008    # 100 Full
                                     0x010    # 1000 Half (not supported by IEEE standards)
                                     0x020    # 1000 Full
                                     0x8000   # 2500 Full (not supported by IEEE standards)
                                     0x1000   # 10000 Full
                                     0x03F    # Auto
$ sudo ethtool -s eth0 speed 10 duplex half autoneg off
$ of-toolbelt.sh vsctl add-port br0 eth0

You can find the sources in the following gist. Feel free to use it for your own purposes. (Free as in free beer.) It goes without saying, but here it is anyway: comments and contributions are highly welcome.
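For a rough idea of what the wrapper does, here is a minimal sketch that maps a few of the actions above onto plain Open vSwitch commands. The paths, defaults, and option handling are assumptions for illustration, not necessarily what the gist actually contains.

#!/bin/sh
# Sketch of an of-toolbelt.sh-style dispatcher. Action names follow the usage
# output above; installation paths assume a default-prefix source build.
set -e

ACTION="$1"; shift || true

case "$ACTION" in
    vsctl)          ovs-vsctl "$@" ;;
    ofctl)          ovs-ofctl "$@" ;;
    vswitchd)       sudo ovs-vswitchd --pidfile --detach ;;
    db_create)      ovsdb-tool create /usr/local/etc/openvswitch/conf.db \
                        /usr/local/share/openvswitch/vswitch.ovsschema ;;
    db_start)       sudo ovsdb-server \
                        --remote=punix:/usr/local/var/run/openvswitch/db.sock \
                        --remote=db:Open_vSwitch,manager_options \
                        --pidfile --detach ;;
    get_controller) ovs-vsctl get-controller "$1" ;;
    set_controller) ovs-vsctl set-controller "$1" "$2" ;;
    del_controller) ovs-vsctl del-controller "$1" ;;
    get_dpid)       ovs-vsctl get bridge "$1" datapath_id ;;
    set_dpid)       ovs-vsctl set bridge "$1" \
                        other-config:datapath-id=$(printf "%016x" "$2") ;;
    get_stp)        ovs-vsctl get bridge "$1" stp_enable ;;
    set_stp)        ovs-vsctl set bridge "$1" stp_enable="$2" ;;
    *)              echo "Usage: $0 <ACTION>" >&2; exit 1 ;;
esac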

Friday, May 11, 2012

Performance of Linux IP Aliased Network Interfaces

TL;DR: I put together a setup to measure the performance of IP aliasing in Linux. As the numbers at the bottom of the post show, the observed throughput increases as the number of aliases increases. WTF?

For a couple of months I have been putting together some fancy hacks using the IP aliasing feature in Linux, that is, associating more than one IP address with a single network interface. The possibilities with IP aliasing are endless...

$ sudo ifconfig eth0 192.168.1.1 netmask 255.255.0.0
$ for I in `seq 0 254`; do sudo ifconfig eth0:$I 192.168.2.$I; done

But obviously there is a price (overhead) to pay for this at the kernel level. To shed some light on the problem at hand, I set up a simple experimental network as follows.

First, I set up two identical Linux boxes with gigabit Ethernet cards (RTL-8169 Gigabit Ethernet [10ec:8169] rev 10) connected through a Huawei gigabit switch. (The cable is CAT6 262M, 1 meter long.) Then I started creating iperf instances bound to particular IP-aliased interfaces. That is, the first iperf instance is bound to 192.168.2.1 at eth1:1, the second is bound to 192.168.2.2 at eth1:2, and so on. In other words, the Nth iperf instance is bound to 192.168.2.N at eth1:N.

To ease the workflow, I put together a server.sh script as follows.
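The script itself lives in the gist; a minimal sketch of what server.sh might look like, inferred from the usage shown further below, is given here. The argument order and iperf flags are assumptions.

#!/bin/sh
# Sketch of server.sh: bring up N IP aliases and bind one iperf server to each.
# Usage: ./server.sh <iface-format> <ip-format> <count>, e.g.
#        ./server.sh eth1:%d 192.168.2.%d 32
IFACE_FORMAT=$1   # e.g. eth1:%d
IP_FORMAT=$2      # e.g. 192.168.2.%d
COUNT=$3          # number of aliases / iperf instances

for I in $(seq 1 "$COUNT"); do
    IFACE=$(printf "$IFACE_FORMAT" "$I")
    IP=$(printf "$IP_FORMAT" "$I")
    sudo ifconfig "$IFACE" "$IP"   # create the IP alias
    iperf -s -B "$IP" &            # bind an iperf server to it
done
wait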

Using server.sh, I can start as many iperf instances (and the necessary IP aliases for them) as I want. Next, I wrote client.sh as follows.
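Again, the actual script is in the gist; the sketch below is an assumed reconstruction based on the invocation shown in the workflow that follows.

#!/bin/sh
# Sketch of client.sh: launch N parallel iperf clients against the aliased
# server addresses and wait for them to finish.
# Usage: ./client.sh <ip-format> <count> <duration-seconds>, e.g.
#        ./client.sh 192.168.2.%d 32 30
IP_FORMAT=$1   # e.g. 192.168.2.%d
COUNT=$2       # number of parallel clients
DURATION=$3    # test duration in seconds

for I in $(seq 1 "$COUNT"); do
    IP=$(printf "$IP_FORMAT" "$I")
    iperf -c "$IP" -t "$DURATION" -f k &   # -f k reports Kbits/sec
done
wait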

Then the workflow becomes relatively simple.

server$ ./server.sh eth1:%d 192.168.2.%d 32
client$ ./client.sh 192.168.2.%d 32 30

Having come this far, nobody could stop me from writing a Gnuplot script to visualize the results.
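The original Gnuplot script is in the gist as well; a minimal sketch along the same lines is shown here, with the data file name and column layout (alias count, throughput in Kbits/sec) assumed to match the results table below.

#!/bin/sh
# Sketch: plot aggregate iperf throughput against the number of IP aliases.
gnuplot <<'EOF'
set terminal png size 800,600
set output "throughput.png"
set xlabel "Number of IP aliases"
set ylabel "Throughput (Kbits/sec)"
set grid
plot "results.dat" using 1:2 with linespoints title "aggregate iperf throughput"
EOF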

OK, too much talk so far. Let's get to the results. (The timeout is set to 60 seconds.)

# aliases    Kbits/sec
        1      615,216
        2      612,580
        4      616,071
        8      615,777
       16      616,686
       32      615,340
       64      618,838
       96      622,344
      128      654,269
      160      640,364
      192      662,783
      224      658,962
      254      670,788

As the numbers suggest, Linux IP aliasing does a fairly good job: the overhead imposed by the aliases is nearly negligible. (At least I hope that is what I succeeded in measuring.) But the strange thing is that throughput improves as the number of network interfaces increases. What might explain this observation? Is my setup mistaken? I will be happy to hear your ideas.