Debian Squeeze, PowerPC and the Linux Containers

Two kids, their really busy mother and my paid job leave me without much time to blog or do Debian-related work lately (well, at least in my free time; I do Debian-related things at work, but mostly as a user, not as a developer).

Anyway, a couple of weeks ago I decided it was time to upgrade my home servers to Squeeze and I did it, but it was harder than expected.

At home I'm using two old laptops as servers: an old Aluminium PowerBook and an Asus EeePC; the Asus was installed to replace an even older PowerBook (a really old one, BTW) that I had been using as a home server since my father gave it to me.

The plan was to use OpenVZ on the Asus to move all the PowerPC services to a couple of Virtual Environments, but as I wanted to migrate and change almost all the services I never found enough free time to finish the job. When the old PowerBook's hardware failed I replaced it with another PowerBook that I wasn't using anymore, but instead of reinstalling the machine I did a clean Lenny install using a kernel with linux-vserver support (OpenVZ does not work on PowerPC) and turned the old machine's installation (an Etch installation at the time) into a Virtual Private Server running on the new hardware.

With both systems running I upgraded the VPS to Lenny and, as usually happens, left things as they were, without consolidating the services onto a single machine as I had initially planned.

With this state of affairs I upgraded the Asus to Squeeze without much trouble (in fact I installed a kernel without OpenVZ support, as the services I use from this laptop were running on the host and not on a VE) and did the same with the PowerPC host. To my surprise, the linux-vserver VPS failed to start, with a message that seemed to imply that VServer support was not enabled.

I should have filed a bug on the BTS then, but while looking into how to solve the issue I found bug reports saying that the message meant exactly that: I had no linux-vserver support, and I needed to get the VPS running ASAP, as it is the machine that runs my SMTP server.

Before restoring my last backup I did some digging, found a lot of messages recommending moving OpenVZ and Linux-VServer virtual machines to LXC, and decided to give it a try.

First I built a container on the Asus and it worked OK; after that I did the same on the PowerPC, but the script failed. Luckily the fix was trivial: the problem is in the /usr/lib/lxc/templates/lxc-debian script, which uses arch to get the Debian architecture, but on powerpc that command returns ppc instead of powerpc, so the value has to be mapped inside the script, as sketched below (note to self: I have to submit a bug + patch to the lxc package to fix it).
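The change needed is roughly the following (a sketch of the idea, not the literal code from the template, which may look different in your lxc version):

    # Sketch for /usr/lib/lxc/templates/lxc-debian: map the machine
    # name reported by `arch` to the name that debootstrap expects.
    arch=$(arch)
    case "$arch" in
        i686)   arch="i386" ;;
        x86_64) arch="amd64" ;;
        ppc)    arch="powerpc" ;;   # the case missing on my PowerBook
    esac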

After creating and testing that container I tried to boot my old VPS with an LXC configuration of its own.
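I'm not reproducing my real file here, but it was a configuration of this kind (a minimal sketch; the container name, paths and bridge name are just examples):

    # /var/lib/lxc/vps/config  (example path and container name)
    lxc.utsname = vps
    lxc.tty = 4
    # root filesystem taken from the old linux-vserver VPS (example path)
    lxc.rootfs = /var/lib/lxc/vps/rootfs
    # networking through a bridge on the host (example bridge name)
    lxc.network.type = veth
    lxc.network.link = br0
    lxc.network.flags = up
    # deny all devices, then allow only the usual suspects
    lxc.cgroup.devices.deny = a
    lxc.cgroup.devices.allow = c 1:3 rwm    # /dev/null
    lxc.cgroup.devices.allow = c 1:5 rwm    # /dev/zero
    lxc.cgroup.devices.allow = c 1:8 rwm    # /dev/random
    lxc.cgroup.devices.allow = c 1:9 rwm    # /dev/urandom
    lxc.cgroup.devices.allow = c 5:0 rwm    # /dev/tty
    lxc.cgroup.devices.allow = c 5:1 rwm    # /dev/console
    lxc.cgroup.devices.allow = c 5:2 rwm    # /dev/ptmx
    lxc.cgroup.devices.allow = c 136:* rwm  # /dev/pts/*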

After a couple of tries I noticed that the system was not booting because it was missing the device files it needed; to fix it I copied the /dev directory from my first LXC test container and, using a chroot, also removed the udev packages from the container.
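In commands it was more or less this (paths and container names are examples, assuming the containers live under /var/lib/lxc as created by the Debian lxc package):

    # replace the VPS /dev with the one from the working test container
    rm -rf /var/lib/lxc/vps/rootfs/dev
    cp -a /var/lib/lxc/test/rootfs/dev /var/lib/lxc/vps/rootfs/dev
    # remove udev inside the container, as it can no longer manage a
    # dynamic /dev from inside LXC
    chroot /var/lib/lxc/vps/rootfs apt-get remove --purge udev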

After those last changes the machine booted as expected and all the services were running OK.

To summarize, I decided to make the move to LXC and fixed the configuration so that the virtual machines boot on each restart.
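One simple way of doing that (a sketch, not necessarily exactly what I ended up with; container names are examples, and it assumes lxc-start's -d daemonize option is available in your version) is to launch the containers from /etc/rc.local:

    # /etc/rc.local -- start the containers in the background at boot
    lxc-start -n test -d
    lxc-start -n vps  -d
    exit 0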

I know that LXC is still missing some functionality (I hate the way the container stop function kills everything instead of doing a run-level change; I guess I'll be using hacks until a newer kernel with proper support for that enters Debian), but having the code in the mainline kernel is a great bonus and the user-level utilities are good enough for my home needs... and I hope they'll reach a point where we'll be able to migrate the OpenVZ containers at work (we are using Proxmox and the support of the OpenVZ patchset is starting to worry us).
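For what it's worth, the kind of hack I mean is an orderly-shutdown wrapper along these lines (just an illustration, assuming the container is reachable over ssh; the name and the timing are made up):

    #!/bin/sh
    # clean_stop.sh -- ask the container to shut itself down and only
    # call lxc-stop afterwards, instead of letting it kill everything.
    NAME="vps"                      # example container name
    ssh "root@$NAME" halt || true   # trigger a real run-level change
    sleep 30                        # give the shutdown time to finish
    lxc-stop -n "$NAME"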

In my next post:

The Freak Firewall or The Story of an HA Firewall based on OpenBSD's pf running on Debian GNU/kFreeBSD hosts.