I need to simulate a very slow server in order to track down some issues. How can this be done?
The idea is to use the tc ("traffic control") interface of Linux to do some traffic shaping.
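Before going further, it is worth making sure that the netem scheduler is actually available; on most distributions it ships as the sch_netem kernel module (an assumption, check your own setup):
host1# modinfo sch_netem
host1# tc qdisc show dev eth0
The first command succeeds only if the module is installed; the second shows which qdisc is currently attached to the real device.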
First off, I can bind a virtual Ethernet device to my real NIC, and request a DHCP address for it.
host1# ip link add link eth0 name eth01 type macvlan
host1# dhclient eth01
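To confirm that the new interface actually got a lease (assuming the DHCP server answered), its address can be checked with:
host1# ip -4 addr show dev eth01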
Now we have a new network device with a different MAC address.
host1# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
3: eth01@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc netem state UNKNOWN mode DEFAULT group default qlen 1000
link/ether yy:yy:yy:yy:yy:yy brd ff:ff:ff:ff:ff:ff
The idea is to tune this new device, eth01, so that it shows some delay.
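As the example further below will show, this is done by attaching a netem qdisc to it; the general form is something like this, with the device name and delay value as placeholders:
host1# tc qdisc add dev <device> root netem delay <time>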
Before using it for real, let's see how it works in terms of throughput.
Let's mock up the server like this:
host1$ nc -l 9090 </dev/zero
And we can measure the bitrate on the client side like this:
host2$ nc host1 9090 | pv --rate >/dev/null
[ 263MiB/s]
Netcat (nc) will now retrieve an endless stream of data. The pv utility measures the transfer rate, periodically writing the measurement to stderr and passing the data (lots of zeros) through to /dev/null.
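If an average over a fixed amount of data is preferred, pv can also be told to stop after a given size (this needs a reasonably recent pv that supports --stop-at-size):
host2$ nc host1 9090 | pv --rate --timer --size 100m --stop-at-size >/dev/null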
And while this is running, let's apply our limit. For example:
host1# tc qdisc add dev eth01 root netem delay 40000
will introduce a 40 millisecond delay in transmissions (a bare value is interpreted by tc as microseconds). This results in the ongoing transmission being slowed down to...
host2$ nc host1 9090 | pv --rate >/dev/null
[ 5.9MiB/s]
Isn't this great stuff?
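For completeness, the delay can be adjusted on the fly or removed entirely with the usual tc operations, and netem can emulate other impairments too (the values below are just examples):
host1# tc qdisc change dev eth01 root netem delay 80ms
host1# tc qdisc change dev eth01 root netem delay 40ms loss 1%
host1# tc qdisc del dev eth01 root
The first command changes the delay (this time with an explicit unit), the second adds 1% packet loss on top of it, and the last one removes the shaping altogether.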
EDIT - I noticed that the introduced delay affected not only the macvlan device, but also the real device. I could not figure out why, so I decided to ask on Unix StackExchange.