
Emulating Bad Networks

Recently I've been spending time looking into methods of creating difficult environments for programs to run in. One of the most relevant environments is a bad network. Through a combination of readily-available tools, it's possible to emulate bad network conditions and run a process under these conditions in isolation.

# NETEM

The most crucial tool for creating this environment is called NETEM. It stands for "NETwork EMulation", and it's a kernel module that can be controlled through a tool called tc.

Here's an example command:

$ tc qdisc add dev eth0 root netem delay 100ms

You will have to run this as a privileged user.

This very simply adds a 100ms delay to any packet passing through your eth0 interface.
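
If you want to check that the rule took effect, or remove it once you're done, tc can do both. Note that these act on the root qdisc of eth0, so don't run them on an interface whose traffic shaping you care about:

$ tc qdisc show dev eth0
$ tc qdisc del dev eth0 root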

# Wait, I got an error running that...

Did you get a weird-looking message starting with "RTNETLINK answers"? Don't worry, these errors can be fixed, but it can get a little involved.

If the error said "Operation not permitted", that's because you need to run the command as root. Prepending "sudo" should do the trick.

If you got an error about "no such file or directory", the netem kernel module probably isn't available. See the next section.

# The NETEM kernel module

In order for NETEM to work, it needs a module loaded into your kernel. The module is called sch_netem (the sch bit is short for "scheduler") and you can check whether it's already loaded by running:

$ lsmod | grep netem

If that command has some output, the module is already loaded and you're good to go. If it printed nothing, try loading it into your kernel like so:

$ modprobe sch_netem

Again, this needs to be run as root for it to work.
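
As an aside, if you want to know whether the module exists on disk at all before trying to load it, modinfo should tell you (assuming your distro ships sch_netem as a loadable module):

$ modinfo sch_netem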

If modprobe complained that it couldn't find the module, you may need to compile a custom kernel. I run Arch Linux, and I couldn't find a better way than this: the sch_netem module wasn't available anywhere, so I took the nuclear option and compiled it into a custom kernel. The steps for doing this can be found at Kernels/Arch Build System [1].

[1]: Note that if you're doing this on a VirtualBox virtual machine, you'll also need to recompile the VirtualBox kernel modules: $ pacman -S virtualbox-guest-dkms.

# Network Namespaces

It's all well and good emulating really poor network conditions, but it doesn't do us a lot of good if we have to degrade our entire machine to do it.

Enter network namespaces.

Network namespaces are logical copies of Linux's networking stack. When you create a network namespace, you are creating a networking stack that you can modify in any way you want, without modifying the default networking stack. Then you can run programs in this network namespace instead of the default one, and they'll be using whatever network config you've applied to that namespace.

What we want to do is create a network namespace, assign it a virtual ethernet interface, then route packets through that virtual ethernet interface to our real ethernet interface in order to give it internet connectivity. When we've achieved this, we will be able to apply NETEM rules to the virtual ethernet interface without tampering with our real ethernet interface.

Note: all commands from here on out assume that you're running them as root, so prepend sudo to each of them if you're not.

# Step 1: creating the network namespace

$ ip netns add bad

We assign it the name "bad" to denote its purpose. We'll use this name in other commands.
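
You can confirm the namespace exists by listing them; "bad" should show up in the output:

$ ip netns list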

# Step 2: creating a virtual ethernet interface

$ ip link add veth-a type veth peer name veth-b

We've created a virtual ethernet interface called "veth-a" and we've peered it with another one called "veth-b". This allows the two interfaces to communicate with each other. The purpose of this veth pair will be to connect the "bad" namespace to the global one, and allow it to communicate to the Internet.
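
At this point both ends of the pair still live in the default namespace, which you can see with:

$ ip link show type veth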

# Step 3: move veth-a into the "bad" namespace

$ ip link set veth-a netns bad

This pushes veth-a into the "bad" namespace, a required step in getting the "bad" namespace talking to the Internet.

To demonstrate what I mean here, try to ping something from inside the "bad" namespace:

$ ip netns exec bad ping google.com

You'll notice that you can't: ping complains "Name or service not known". This is because there's nowhere for network packets to go in this namespace, so the initial DNS lookup of google.com fails. We'll fix that in the coming steps.

# Step 4: bring up the virtual interfaces with their own IP addresses

$ ip netns exec bad ifconfig veth-a up 192.168.163.1 netmask 255.255.255.0
$ ifconfig veth-b up 192.168.163.254 netmask 255.255.255.0

Interfaces need IP addresses to be useful, and virtual interfaces are no exception. These commands give each end of the pair its own IP address. Notice how we had to set the address of veth-a by running a command inside our "bad" namespace. This is because the interface simply doesn't exist in the default namespace:

$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:1d:f9:08 brd ff:ff:ff:ff:ff:ff
3: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:c0:c3:a9 brd ff:ff:ff:ff:ff:ff
10: veth-b@if11: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN mode DEFAULT group default qlen 1000
    link/ether da:51:a5:51:27:3a brd ff:ff:ff:ff:ff:ff link-netnsid 0

$ ip netns exec bad ip link
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
11: veth-a@if10: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether fe:79:84:95:57:80 brd ff:ff:ff:ff:ff:ff link-netnsid 0
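
If ifconfig isn't installed (it's part of the increasingly deprecated net-tools package), the same addressing can be done with iproute2. This is an equivalent sketch rather than exactly what I ran:

$ ip netns exec bad ip addr add 192.168.163.1/24 dev veth-a
$ ip netns exec bad ip link set veth-a up
$ ip addr add 192.168.163.254/24 dev veth-b
$ ip link set veth-b up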

# Step 5: adding a default gateway for veth-a

$ ip netns exec bad route add default gw 192.168.163.254 dev veth-a

Up until we run this command, packets leaving the "bad" namespace for anywhere outside 192.168.163.0/24 have nowhere to go. Adding a default gateway gives them somewhere to go when no other routing rule matches (and none do): veth-b's address. This is the command that links our "bad" namespace to the default namespace.
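
route is also part of net-tools; if you'd rather stay with iproute2, something along these lines should be equivalent, and listing the namespace's routing table afterwards will show the result:

$ ip netns exec bad ip route add default via 192.168.163.254 dev veth-a
$ ip netns exec bad ip route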

We still can't communicate with the Internet, though, because our connection to it goes through our physical ethernet interface. For me this is enp0s3: Arch Linux uses systemd, and systemd uses an interface naming scheme designed to be consistent across reboots on machines with many physical interfaces. You can read more in systemd's documentation on predictable network interface names. Your interface will likely be eth0, or wlan0 if you're using wireless networking.

# Step 6: forwarding packets from veth-b to your physical interface

$ echo 1 > /proc/sys/net/ipv4/ip_forward
$ IFACE=enp0s3
$ IP=$(ip addr show dev ${IFACE} | grep -Po 'inet \K[\d.]+')
$ iptables -t nat -A POSTROUTING -s 192.168.163.0/24 -o ${IFACE} -j SNAT --to-source ${IP}

There's a lot going on here, so let's break it down.

$ echo 1 > /proc/sys/net/ipv4/ip_forward

By default, most distro kernels don't have IP packet forwarding enabled. That's all this command does: enable packet forwarding, which we'll need to send packets between interfaces.
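
The echo only lasts until reboot. sysctl can do the same thing, and dropping the setting into /etc/sysctl.d/ makes it permanent (the file name here is just an example):

$ sysctl -w net.ipv4.ip_forward=1
$ echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/99-ip-forward.conf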

$ IFACE=enp0s3
$ IP=$(ip addr show dev ${IFACE} | grep -Po 'inet \K[\d.]+')

This sets up shell variables that describe the name of our physical interface and its IP address on the local network. Make sure that they're valid before continuing:

$ echo $IP
10.0.2.15

You'll likely see a different IP address here. As long as it's there, and the variable isn't empty, you should be safe to proceed.

$ iptables -t nat -A POSTROUTING -s 192.168.163.0/24 -o ${IFACE} -j SNAT --to-source ${IP}

This is the meat and potatoes of this step. The rule matches packets coming from the 192.168.163.0/24 subnet and leaving via our physical interface, and rewrites their source address to the physical interface's address (source NAT). Combined with the forwarding we enabled above, this is what lets packets from our "bad" namespace reach the Internet, and just as importantly, lets the replies find their way back to us.
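
If your machine gets its address over DHCP and it might change, MASQUERADE is a close relative of SNAT that looks the source address up at packet time instead of baking it into the rule. It's a reasonable alternative, though there's a small extra cost per packet that I haven't measured:

$ iptables -t nat -A POSTROUTING -s 192.168.163.0/24 -o ${IFACE} -j MASQUERADE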

This completes the setup, and you should now be able to ping through the "bad" namespace:

$ ip netns exec bad ping google.com
PING google.com (216.58.198.174) 56(84) bytes of data.
64 bytes from lhr25s10-in-f14.1e100.net (216.58.198.174): icmp_seq=1 ttl=55 time=14.3 ms
64 bytes from lhr25s10-in-f14.1e100.net (216.58.198.174): icmp_seq=2 ttl=55 time=14.4 ms
64 bytes from lhr25s10-in-f14.1e100.net (216.58.198.174): icmp_seq=3 ttl=55 time=14.2 ms
^C
--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 14.297/14.358/14.415/0.048 ms

# Using NETEM through tc

The NETEM website has a bunch of examples that I won't repeat here, but something they don't show is that you can chain multiple effects together to create some truly terrible networks. For example:

$ tc qdisc add dev veth-b root netem delay 200ms 400ms 25% loss 15.3% 25% duplicate 1% corrupt 0.1% reorder 5% 50%
$ ip netns exec bad ping google.com
PING google.com (216.58.206.110) 56(84) bytes of data.
64 bytes from lhr25s14-in-f14.1e100.net (216.58.206.110): icmp_seq=1 ttl=55 time=498 ms
64 bytes from lhr25s14-in-f14.1e100.net (216.58.206.110): icmp_seq=2 ttl=55 time=79.2 ms
64 bytes from lhr25s14-in-f14.1e100.net (216.58.206.110): icmp_seq=3 ttl=55 time=227 ms
64 bytes from lhr25s14-in-f14.1e100.net (216.58.206.110): icmp_seq=4 ttl=55 time=18.5 ms
64 bytes from lhr25s14-in-f14.1e100.net (216.58.206.110): icmp_seq=5 ttl=55 time=445 ms
64 bytes from lhr25s14-in-f14.1e100.net (216.58.206.110): icmp_seq=6 ttl=55 time=92.7 ms
64 bytes from lhr25s14-in-f14.1e100.net (216.58.206.110): icmp_seq=7 ttl=55 time=318 ms
64 bytes from lhr25s14-in-f14.1e100.net (216.58.206.110): icmp_seq=8 ttl=55 time=60.6 ms
64 bytes from lhr25s14-in-f14.1e100.net (216.58.206.110): icmp_seq=9 ttl=55 time=84.7 ms
64 bytes from lhr25s14-in-f14.1e100.net (216.58.206.110): icmp_seq=10 ttl=55 time=519 ms
64 bytes from lhr25s14-in-f14.1e100.net (216.58.206.110): icmp_seq=11 ttl=55 time=14.3 ms
64 bytes from lhr25s14-in-f14.1e100.net (216.58.206.110): icmp_seq=12 ttl=55 time=14.6 ms
64 bytes from lhr25s14-in-f14.1e100.net (216.58.206.110): icmp_seq=13 ttl=55 time=37.6 ms
^C
--- google.com ping statistics ---
13 packets transmitted, 13 received, 0% packet loss, time 12017ms
rtt min/avg/max/mdev = 14.355/185.570/519.629/186.193 ms
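
Once a netem qdisc is attached you don't have to tear everything down to adjust it; tc can show, change, and delete it in place (the parameters here are just an example):

$ tc qdisc show dev veth-b
$ tc qdisc change dev veth-b root netem delay 50ms loss 5%
$ tc qdisc del dev veth-b root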

# Wait, what's a "qdisc"?

It stands for "queuing discipline". The kernel maintains a queue of packets for each network interface on your machine, and there are various ways of queueing packets to allow for things like Quality of Service (QoS, which gives some packets higher priority than others). NETEM plugs into this architecture by exposing a qdisc that inserts latency, loss, and other fun things.
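
You can see which qdiscs are currently attached to your interfaces; on modern kernels you'll probably see fq_codel or similar as the default:

$ tc qdisc show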

# Putting all of this together

The full list of commands above is quite intricate and tricky, so I put it all together into one big script:

#!/usr/bin/env bash

IFACE="enp0s3"
IP=$(ip addr show dev ${IFACE} | grep -Po 'inet \K[\d.]+')
NSNAME="bad"

VETHA="veth-a"
VETHB="veth-b"
VETHPREFIX="192.168.163.0/24"
VETHNETMASK="255.255.255.0"
VETHAIP="192.168.163.1"
VETHBIP="192.168.163.254"

# Create the namespace and the veth pair, then move one end into the namespace.
ip netns add ${NSNAME}
ip link add ${VETHA} type veth peer name ${VETHB}
ip link set ${VETHA} netns ${NSNAME}

# Bring both ends up with their addresses, and route the namespace's traffic via veth-b.
ip netns exec ${NSNAME} ifconfig ${VETHA} up ${VETHAIP} netmask ${VETHNETMASK}
ifconfig ${VETHB} up ${VETHBIP} netmask ${VETHNETMASK}
ip netns exec ${NSNAME} route add default gw ${VETHBIP} dev ${VETHA}

# Forward packets from the namespace out through the physical interface, rewriting their source.
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -s ${VETHPREFIX} -o ${IFACE} -j SNAT --to-source ${IP}

# Finally, make the namespace's network terrible.
tc qdisc add dev ${VETHB} root netem delay 200ms 400ms 25% loss 15.3% 25% duplicate 1% corrupt 0.1% reorder 5% 50%

I'm sure it could be prettier and more robust, but for now it works for me.
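
When you're done, tearing it all down is a lot shorter. This is a sketch that assumes the same names and addresses as the script above:

#!/usr/bin/env bash

IFACE="enp0s3"
IP=$(ip addr show dev ${IFACE} | grep -Po 'inet \K[\d.]+')
NSNAME="bad"
VETHPREFIX="192.168.163.0/24"

# Deleting the namespace destroys veth-a, and destroying one end of a veth pair takes its peer
# (and any qdisc attached to it) with it.
ip netns del ${NSNAME}

# Remove the SNAT rule added by the setup script.
iptables -t nat -D POSTROUTING -s ${VETHPREFIX} -o ${IFACE} -j SNAT --to-source ${IP}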