Debian Virtualization: LXC Network Isolation

Linux containers (LXC) is a lightweight virtualization technology built into the Linux kernel. In my previous article, Debian Virtualization: LXC Application Containers, I detailed the steps to configure and run a simple application container using LXC. LXC application containers are very lean and consume only the resources the application requires. This is in sharp contrast with other virtualization technologies, which run a full Linux distribution in a VM.

The container uses its own file system, built by mounting the relevant directories from the host file system read-only. The host is an older computer running Debian 7 “wheezy”. The virtual machine is controlled through GNU screen if it was started automatically at boot time, or through a regular xterm otherwise.

One thing I left out was the networking stack. In my Lighttpd web server example, the VM uses the same networking stack as the host. This could become a problem if someone manages to compromise the web server: the intruder could then probe the networks connected to our host, in search of the next victim.

In this article I’ll modify the VM to run on a separate networking stack. I will place the VM on its own network segment, connected to the host through a Linux bridge interface. I will then set up the host firewall using iptables. This effectively isolates the VM and limits the potential damage that could be inflicted on the larger network. The final setup looks like this:

[Image: Network setup]

The virtual machine is connected to the bridge through a mechanism called a “veth pair”. This is a sort of tunnel between the br0 device on the host and the eth0 device in vm1. Every packet going in on one side is transferred to the other side without any modification. The network addresses used on the tunnel are 10.10.20.1 for br0 and 10.10.20.10 for eth0 in vm1.
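For illustration, the sketch below shows roughly what LXC does behind the scenes to set up such a connection. This is not something you need to run – LXC handles it automatically – and the interface names vm1-host/vm1-guest and the <pid> placeholder are hypothetical:

# create the veth pair; one end stays on the host, the other goes into the container
ip link add vm1-host type veth peer name vm1-guest

# attach the host end to the bridge and bring it up
brctl addif br0 vm1-host
ip link set vm1-host up

# move the guest end into the network namespace of the container's init process
ip link set vm1-guest netns <pid>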

Configuring the Linux bridge

For our purpose, the network bridge device br0 acts just like any other Ethernet interface. The bridge is managed using the brctl program from the bridge-utils package, and the IP address is set as usual with ifconfig. The setup sequence is as follows (I assume the root user):

# apt-get install bridge-utils
# brctl addbr br0
# ifconfig br0 10.10.20.1/24 up
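brctl and ifconfig do the job on wheezy, but both are considered legacy; on newer systems the same setup can be done with the ip tool from the iproute2 package:

# ip link add name br0 type bridge
# ip addr add 10.10.20.1/24 dev br0
# ip link set br0 up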

At this point the bridge should be up and running:

# ifconfig -a br0
br0       Link encap:Ethernet  HWaddr e6:55:ca:1c:29:4a  
          inet addr:10.10.20.1  Bcast:10.10.20.255  Mask:255.255.255.0
          inet6 addr: fe80::e455:caff:fe1c:294a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:468 (468.0 B)
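brctl can confirm that the bridge exists, and once the container is running it will also list the veth interface attached to it; the output should look something like this:

# brctl show br0
bridge name     bridge id               STP enabled     interfaces
br0             8000.e655ca1c294a       no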

Moving the virtual machine onto the bridged network

LXC supports “veth pair” connections. It also supports a number of other network connection types, but that is another discussion altogether. For now, I just edit my lxc.conf and instruct LXC to create the veth tunnel between br0 and the container’s eth0:

# cat ~/vm1/lxc.conf
lxc.utsname = vm1

# networking
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.hwaddr = 00:11:22:33:44:55
lxc.network.ipv4 = 10.10.20.10/24

# filesystem
lxc.rootfs = vm1/rootfs
lxc.mount.entry=/lib lib none ro,bind 0 0
lxc.mount.entry=/bin bin none ro,bind 0 0
lxc.mount.entry=/usr usr none ro,bind 0 0
lxc.mount.entry=/sbin sbin none ro,bind 0 0
lxc.mount.entry=/lib64 lib64 none ro,bind 0 0
lxc.mount.entry=proc /proc proc nodev,noexec,nosuid 0 0
lxc.mount.entry=tmpfs /dev/shm tmpfs  defaults 0 0
lxc.pts=1024

The configuration has a new # networking section specifying the connection type, the host bridge, and the IP and MAC addresses. For the IP we use 10.10.20.10 as discussed above, and something random for the MAC address – just start with 00 so you don’t set the multicast bit by mistake. With these modifications, it is time to open an xterm and start the virtual machine manually:

# cd ~
# lxc-execute -n vm1 -f vm1/lxc.conf -- /bin/bash
root@debian:/# reset
root@debian:/# export PS1="\e[01;31m\h:\W \u # \e[00m"
vm1:/ root# route add default gw 10.10.20.1

Once the VM has started, I check the interfaces (ifconfig) and the default gateway (netstat -rn):

[Image: Checking network setup in virtual machine]
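The same check can be run at the container prompt: ifconfig should show eth0 with the 10.10.20.10 address, and netstat -rn should list 10.10.20.1 as the default gateway:

vm1:/ root# ifconfig eth0
vm1:/ root# netstat -rn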

I can now start Lighttpd (/etc/init.d/lighttpd start) inside the container and verify it is working by pointing my browser to 10.10.20.10.

[Image: Default Lighttpd web page]
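If you prefer the command line, a quick check with curl from the host works just as well (assuming curl is installed); an HTTP/1.1 200 OK response means the web server is reachable:

# curl -I http://10.10.20.10/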

Enabling routing on the host

By default Linux workstations and servers have IPv4 forwarding disabled. This means no packets are moved from one interface to another. In our setup we need to be able to pass packets from our internal bridge interface br0 to our external Ethernet interface eth0 and the other way around. This is fixed by running the following command on our host:

# echo "1" > /proc/sys/net/ipv4/ip_forward
# cat /proc/sys/net/ipv4/ip_forward
1
# 

With forwarding enabled in the Linux kernel, our web server can now be accessed from the outside network.
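Note that the /proc setting is lost at reboot. To make forwarding persistent on Debian, the usual way is to set it in /etc/sysctl.conf (assuming the option is not already present there) and reload:

# echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
# sysctl -p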

Configuring the firewall

In many cases the setup we have so far would be sufficient. We can, however, improve the security of the setup by enabling the netfilter firewall in the Linux kernel. The user-space tool to control it is iptables, and it is probably already installed on your machine. In case it is not, the command is apt-get install iptables.

There are many tools you can use to build the configuration, some of them with a nice graphical interface. I prefer to do it manually, as I always seem to run into something new to learn. Without going into too much detail, I set up port forwarding so that traffic going to 192.168.254.19:80 is redirected to 10.10.20.10:80, and allow only the relevant HTTP traffic between eth0 and br0:

# iptables -A FORWARD -i eth0 -d 10.10.20.10 -p tcp --dport 80 -j ACCEPT
# iptables -A FORWARD -i eth0 -j DROP
# iptables -A FORWARD -i br0 -m state --state NEW,INVALID -j DROP
# iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to 10.10.20.10:80
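Return traffic from br0 still flows, because only NEW and INVALID connections coming from the bridge are dropped; everything else falls through to the default ACCEPT policy of the FORWARD chain. (On newer kernels, -m conntrack --ctstate is the preferred replacement for the deprecated -m state --state match.) To review the resulting rule set:

# iptables -L FORWARD -n -v
# iptables -t nat -L PREROUTING -n -v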

Wrapping up

I put all the network configuration into ~/vm1/network.sh. The script is run before the virtual machine is started.

#!/bin/bash

#
# Network configuration script for vm1
#

# bridge setup
brctl addbr br0
ifconfig br0 10.10.20.1/24 up

# enable ipv4 forwarding
echo "1" > /proc/sys/net/ipv4/ip_forward

# netfilter cleanup
iptables --flush
iptables -t nat -F
iptables -X
iptables -Z
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT

# external interface - accept only packets going to 10.10.20.10:80
iptables -A FORWARD -i eth0 -d 10.10.20.10 -p tcp --dport 80 -j ACCEPT
iptables -A FORWARD -i eth0 -j DROP

# internal interface - no new connections
iptables -A FORWARD -i br0 -m state --state NEW,INVALID -j DROP

# netfilter port forwarding
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to 10.10.20.10:80
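A typical session then looks like this, running the script as root before starting the container:

# chmod +x ~/vm1/network.sh
# ~/vm1/network.sh
# cd ~
# lxc-execute -n vm1 -f vm1/lxc.conf -- /bin/bash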

Configuring an LXC virtual machine is not difficult; it does, however, carry you through several Linux subsystems: LXC and cgroups, building up file systems, managing the controlling terminal, network bridges, and firewalls.

As mentioned in my first article, LXC is basically a chroot on steroids, and it doesn’t promise anything regarding security. On production boxes this is usually addressed by placing each virtual machine on its own isolated network, running mandatory access control (SELinux, AppArmor) on the applications inside the containers, adding grsecurity and PaX support to the Linux kernel, and probably a number of other things.

To be continued…

