Linux containers (LXC) is a lightweight virtualization technology built into the Linux kernel. Unlike other virtualization technologies, the virtual machines (VMs) are driven, without any overhead, by the kernel already running on the computer. In a VM you run only the processes you need, most of the time without even going through the regular SysV or Linux init. This means memory is used very conservatively. These lightweight containers are sometimes called application containers, as opposed to distribution containers, where you run a full distro starting with SysV/Linux init.
In this article I’ll take a look at installing and building a small web server application container using LXC on a Debian 7 workstation. Debian is a popular distribution for personal use among software developers, and the amount of software packaged for it far exceeds that of any other Linux distro out there. It also encourages tinkering, experimenting, and in a more general sense, learning about Linux and Free/Open Source Software.
The computer I am using is an old amd64 dual-core machine with a minimal LXDE desktop installed. For those of you interested, I have detailed the installation steps in my Lightweight Debian: LXDE Desktop From Scratch article. Since we are dealing with kernel utilities, all the commands in this article are run as the root user.
Installing LXC and activating Linux Control Groups (CGROUPS)
By default, a minimal Debian install doesn’t bring in the LXC user-space tools. The CGROUPS user-space tools also have to be installed, and CGROUPS enabled. The installation is as follows:
# apt-get install cgroup-bin libcgroup1 lxc
CGROUPS is a feature of modern Linux kernels that allows the allocation of resources such as CPU time, system memory, and network bandwidth among the various processes running on the system. To activate CGROUPS, add the following line to the /etc/fstab file:
cgroup /sys/fs/cgroup cgroup defaults 0 0
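If you prefer not to reboot right away, the same hierarchy can be mounted immediately; this is a sketch, assuming the /sys/fs/cgroup mount point already exists (it does on a stock Debian 7 system):

```shell
# Mount the cgroup hierarchy now, matching the /etc/fstab entry above
mount -t cgroup cgroup /sys/fs/cgroup

# Verify the mount took effect
grep cgroup /proc/mounts
```

The fstab entry still matters: it makes the mount persistent across reboots.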
Reboot the computer and check LXC and CGROUPS installation:
# lxc-checkconfig
Kernel config /proc/config.gz not found, looking in other places...
Found kernel config file /boot/config-3.2.0-4-amd64
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
Multiple /dev/pts instances: enabled
--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled
--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled
Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig
Creating a file system for your VM
We start by creating a new vm1 directory with a basic filesystem skeleton, with the intention of populating it by mounting, read-only, the main software directories from the host computer. The resulting filesystem has a very small footprint:
# cd ~
# mkdir -p vm1/rootfs
# cd vm1/rootfs
# mkdir bin
# mkdir home
# mkdir lib
# mkdir lib64
# mkdir opt
# mkdir proc
# mkdir root
# mkdir sbin
# mkdir sys
# mkdir tmp
# mkdir usr
# mkdir var
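The same skeleton can be created in one line using bash brace expansion, which mkdir -p expands into the full list of directories:

```shell
# Create the whole rootfs skeleton in one command (bash brace expansion)
mkdir -p vm1/rootfs/{bin,home,lib,lib64,opt,proc,root,sbin,sys,tmp,usr,var}
```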
We also need to create and populate /dev and /etc directories:
# cp -a /etc .
# cp -a /dev .
# rm -fr dev/pts
# mkdir dev/pts
# rm -fr dev/shm
# mkdir dev/shm
/etc can be further cleaned of unnecessary files: user entries can be removed from the /etc/passwd and /etc/shadow copies, a /home directory can be set up, and so on.
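For example, regular user accounts can be stripped from the copied files; this is a sketch run from inside vm1/rootfs, and "alice" is a hypothetical user name:

```shell
# Remove a regular user (hypothetical "alice") from the container's
# copies of passwd and shadow; the host files are untouched
sed -i '/^alice:/d' etc/passwd
sed -i '/^alice:/d' etc/shadow
```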
Creating LXC configuration file
Back in the vm1 directory, the next step is to create our virtual machine configuration file, lxc.conf:
# cd ~/vm1
# cat lxc.conf
lxc.utsname = vm1
lxc.rootfs = vm1/rootfs
lxc.mount.entry=/lib lib none ro,bind 0 0
lxc.mount.entry=/bin bin none ro,bind 0 0
lxc.mount.entry=/usr usr none ro,bind 0 0
lxc.mount.entry=/sbin sbin none ro,bind 0 0
lxc.mount.entry=/lib64 lib64 none ro,bind 0 0
lxc.mount.entry=proc /proc proc nodev,noexec,nosuid 0 0
lxc.mount.entry=tmpfs /dev/shm tmpfs defaults 0 0
lxc.pts=1024
It is a minimal configuration, mounting read-only a large part of the host filesystem. The configuration doesn’t touch networking; the container in this example uses the same networking stack as the host computer.
Starting the virtual machine
Use the lxc-execute command to start the VM and run applications in the isolated environment. When working as root with this kind of VM, it is a good idea to make the terminal running the virtual machine visibly different from your regular terminal. For example, if you run an LXDE desktop environment and lxterminal, start your virtual machines in xterm (apt-get install xterm). This terminal is the controlling terminal for your virtual machine.
I open an xterm, su as root and start the virtual machine. Once started, I reset the terminal and go for a more descriptive prompt:
# cd ~
# lxc-execute -n vm1 -f vm1/lxc.conf -- /bin/bash
root@vm1:/# reset
root@vm1:/# export PS1="\e[01;31m\h:\W \u\$ \e[00m"
To shut down the virtual machine, close the controlling terminal or type exit several times. Before that, however, you need to shut down or kill all the processes you started manually in the VM.
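Before exiting, a quick look at what is still running helps; because the container has its own PID namespace, ps inside it shows only the container's own processes (lighttpd is used here as an example service):

```shell
# Inside the container: list the processes still running
ps aux

# Stop anything you started by hand before exiting, for example:
/etc/init.d/lighttpd stop
```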
Installing and starting your application in VM
I use Lighttpd web server as an example. I installed it on my computer prior to starting the virtual machine as follows:
# apt-get install lighttpd
In Debian, Lighttpd is started by default when you install it. It needs to be stopped and removed from the boot sequence. The files are served from /var/www and the configuration is in /etc/lighttpd/lighttpd.conf. We need to make sure these files are copied into our VM filesystem, along with the other Lighttpd-specific directories under /var:
# update-rc.d -f lighttpd remove
# /etc/init.d/lighttpd stop
# cp -a /var/www ~/vm1/rootfs/var/.
# mkdir ~/vm1/rootfs/var/log
# cp -a /var/log/lighttpd ~/vm1/rootfs/var/log/.
# mkdir -p ~/vm1/rootfs/var/cache
# cp -a /var/cache/lighttpd ~/vm1/rootfs/var/cache/.
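A quick sanity check that everything Lighttpd needs is now present in the container root; this is a sketch, assuming the rootfs lives at ~/vm1/rootfs as above:

```shell
# Confirm the configuration, document root and log directory were copied
test -f ~/vm1/rootfs/etc/lighttpd/lighttpd.conf && echo "config OK"
test -d ~/vm1/rootfs/var/www && echo "docroot OK"
test -d ~/vm1/rootfs/var/log/lighttpd && echo "log dir OK"
```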
In the virtual machine, start Lighttpd with /etc/init.d/lighttpd start.
Starting the virtual machine at boot time
The last thing to do is to have the virtual machine started at boot time, if you so desire. In this case, using xterm as the controlling terminal will not work, so I will turn my attention to GNU Screen (apt-get install screen).
GNU Screen is a software application that can be used to multiplex several virtual consoles inside a single terminal window or remote terminal session. I run lxc-execute through screen, and I inject /etc/init.d/lighttpd start in the screen session together with the terminal reset. For simplicity, all the commands go into /etc/rc.local file:
#!/bin/sh -e
#
# /etc/rc.local file starting vm1
#
cd /root
screen -AmdS vm1 bash
sleep 1
screen -S vm1 -p 0 -X stuff "/usr/bin/lxc-execute -n vm1 -f vm1/lxc.conf -- /bin/bash"
screen -S vm1 -p 0 -X stuff '\r'
sleep 1
screen -S vm1 -p 0 -X stuff "reset"
screen -S vm1 -p 0 -X stuff '\r'
sleep 1
screen -S vm1 -p 0 -X stuff "/etc/init.d/lighttpd start"
screen -S vm1 -p 0 -X stuff '\r'
exit 0
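The repeated stuff/'\r'/sleep pattern can be factored into a small helper function; this is a sketch with the same semantics, and inject is a name of my own choosing:

```shell
# inject: type a command into the vm1 screen session and press Enter
inject() {
    screen -S vm1 -p 0 -X stuff "$1"
    screen -S vm1 -p 0 -X stuff '\r'
    sleep 1
}

# The tail of /etc/rc.local then becomes:
inject "/usr/bin/lxc-execute -n vm1 -f vm1/lxc.conf -- /bin/bash"
inject "reset"
inject "/etc/init.d/lighttpd start"
```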
To connect to the virtual machine console use screen -R vm1 as root. This is how the default Lighttpd page should look in your web browser:
In this article I’ve tried to describe the main steps for building a web server application container using LXC and a handful of other free software tools. The distribution I used for this example is Debian 7 “wheezy”; however, the example should work on most Linux distributions out there. Most distros have LXC and CGROUPS support compiled into the kernel by default, and installing the user-space utilities is just a matter of grabbing them with your favorite package manager.
I didn’t mention anything regarding security. At the current stage, LXC kernel implementation does not promise anything. As the implementation matures and new features are added, security features will start coming in. For now, think about LXC as a chroot technology on steroids, with all the problems chroot has. To add the necessary layer of security needed to run it in production, LXC is often used in conjunction with SELinux, AppArmor and other mandatory access control technologies.
The virtual machine described in this article uses the same TCP/IP network stack as the host. In case the VM is compromised, the intruder gains full access to the networks connected to the host. Typically, this is mitigated by isolating the VM on a different network segment, with a different TCP/IP network stack inside the virtual machine. This is a very common security measure in virtualization world. In my second article in this series, Debian Virtualization: LXC Network Isolation, I modify the VM to run its own separate networking stack, and move it out on a different network segment. Access to and from VM is then restricted using netfilter firewall in Linux kernel.
This is the script for VM creation, based on the instructions in this article:
#!/bin/bash
# build an LXC container based on your current filesystem
# mounted read-only

if [ $# -ne 1 ]
then
	echo "Error: directory missing"
	echo "Usage: $0 directory-name"
	exit 1
fi

# create rootfs
mkdir -p $1/rootfs
cd $1/rootfs
mkdir bin
mkdir home
mkdir lib
mkdir lib64
mkdir opt
mkdir proc
mkdir root
mkdir sbin
mkdir sys
mkdir tmp
mkdir usr
mkdir var
cp -a /etc .
cp -a /dev .
rm -fr dev/pts
mkdir dev/pts
rm -fr dev/shm
mkdir dev/shm
cd ..

# create lxc configuration
cat > lxc.conf << EOF
lxc.utsname = $1
lxc.rootfs = $1/rootfs
lxc.mount.entry=/lib lib none ro,bind 0 0
lxc.mount.entry=/bin bin none ro,bind 0 0
lxc.mount.entry=/usr usr none ro,bind 0 0
lxc.mount.entry=/sbin sbin none ro,bind 0 0
lxc.mount.entry=/lib64 lib64 none ro,bind 0 0
lxc.mount.entry=proc /proc proc nodev,noexec,nosuid 0 0
lxc.mount.entry=tmpfs /dev/shm tmpfs defaults 0 0
lxc.pts=1024
EOF

# print startup instruction
cd ..
PWD=`pwd`
echo "Virtual machine created, start it as:"
echo " # cd $PWD"
echo " # lxc-execute -n $1 -f $1/lxc.conf -- /bin/bash"
echo
Save it as vmscript.sh, and chmod +x vmscript.sh. Run it as:
# cd ~
# /path/to/vmscript.sh vm1