Debian, QEMU, libvirt, qcow2 and fstrim

After some discussion with colleagues on how to best approach fstrim for qcow2 on libvirt in Debian 10, I sat down one Sunday afternoon researching and applying fstrim to my libvirt VMs.

My hypervisors and VMs are mostly running vanilla Debian stable, which is why this post is not necessarily applicable to other distributions – but perhaps somewhat helpful nonetheless.

Directive

The goal was to have my libvirt VMs (around two dozen across two hypervisors) automatically discard unused space from their underlying qcow2 image files. Apart from saving space, I was hoping to shave some time off my online backup mechanism, which can take up to four hours for seven VMs on spinning disks. The two main approaches – as far as I can see – are either to add the discard option to a VM’s fstab, or to use the fstrim timer provided by Debian. Some more explanation here. I’ll be using a custom cronjob to invoke the fstrim command manually every few days – more on that later.
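For reference, the two stock approaches boil down to something like this – the fstab line is only an example (device and filesystem are placeholders), and the timer is the one shipped with util-linux:

# option 1: continuous discard via a mount option in /etc/fstab
/dev/vda1  /  ext4  defaults,discard  0  1

# option 2: periodic trim via the systemd timer from util-linux
systemctl enable --now fstrim.timer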

State of things

All of my VMs’ root filesystems are hosted inside qcow2 images, which I find to be more flexible than using LVM volumes. Some of these VMs have extra data partitions (e.g. blockchain data, apt-mirrors) which don’t need backups and are therefore arranged as LVM volume groups. That’s why I’ll only be looking at setting up fstrim for root partitions (but extending its functionality across all partitions is trivial). Debian 10 ships with QEMU 3.1. Additionally, there’s one Windows 10 VM.

Research

There’s a really helpful post regarding fstrim and KVM by Chris Irwin, who is (I’m guessing) running non-Debian hypervisors. I recommend reading it, but here’s a summary:

  • starting with QEMU 4.0, virtio supports the discard option natively
  • no need to add an additional virtio-scsi controller anymore
  • specific VM machine type has to be pc-q35-4.0 and upwards

Executing kvm -machine help on my hypervisor shows support only up to pc-q35-3.1, which is expected with QEMU 3.1:

root@atlas:~# kvm -machine help
Supported machines are:
pc                   Standard PC (i440FX + PIIX, 1996) (alias of pc-i440fx-3.1)
pc-i440fx-3.1        Standard PC (i440FX + PIIX, 1996) (default)
pc-i440fx-3.0        Standard PC (i440FX + PIIX, 1996)
pc-i440fx-2.9        Standard PC (i440FX + PIIX, 1996)
[...]
q35                  Standard PC (Q35 + ICH9, 2009) (alias of pc-q35-3.1)
pc-q35-3.1           Standard PC (Q35 + ICH9, 2009)
pc-q35-3.0           Standard PC (Q35 + ICH9, 2009)
pc-q35-2.9           Standard PC (Q35 + ICH9, 2009)
[...]

Setup part 1

Luckily, Debian offers QEMU 5.0 through buster-backports as of now (November 2020). After manually upgrading the respective packages, I’m now able to use pc-q35-5.0.

Note: at this point I recommend shutting down all VMs on the hypervisor that’s being worked on.
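If you don’t feel like clicking through virt-manager for that, something along these lines does the trick with virsh (it simply asks every running domain to shut down gracefully):

# ask all running domains to shut down
for vm in $(virsh list --name); do virsh shutdown "$vm"; done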

apt update; apt install qemu qemu-block-extra qemu-system-common qemu-system-data qemu-system-gui qemu-system-x86 -t buster-backports
root@atlas:~# kvm -machine help
Supported machines are:
[...]
pc                   Standard PC (i440FX + PIIX, 1996) (alias of pc-i440fx-5.0)
pc-i440fx-5.0        Standard PC (i440FX + PIIX, 1996) (default)
pc-i440fx-4.2        Standard PC (i440FX + PIIX, 1996)
pc-i440fx-4.1        Standard PC (i440FX + PIIX, 1996)
[...]
q35                  Standard PC (Q35 + ICH9, 2009) (alias of pc-q35-5.0)
pc-q35-5.0           Standard PC (Q35 + ICH9, 2009)
pc-q35-4.2           Standard PC (Q35 + ICH9, 2009)
pc-q35-4.1           Standard PC (Q35 + ICH9, 2009)
[...]

Note: depending on your setup and configuration management, it might be advisable to set up some kind of apt pinning for the qemu packages from buster-backports, so as not to miss any updates.
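A minimal pin could look like this – dropped into something like /etc/apt/preferences.d/qemu-backports (file name and package glob are up to you):

Package: qemu*
Pin: release a=buster-backports
Pin-Priority: 500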

Now, using virt-manager and enabling its XML editing setting, several things need to be taken care of:

  • for machine type, I’m using q35, which libvirt automatically extends to pc-q35-5.0
<type arch="x86_64" machine="pc-q35-5.0">hvm</type>
  • the discard option needs to be added to the qcow2 driver (see the fuller disk example below)
<driver name="qemu" type="qcow2" discard="unmap"/>
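For context, a complete disk stanza with the discard option then looks roughly like this (image path and target device are of course specific to the VM):

<disk type="file" device="disk">
  <driver name="qemu" type="qcow2" discard="unmap"/>
  <source file="/var/lib/libvirt/images/vm01.qcow2"/>
  <target dev="vda" bus="virtio"/>
</disk>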

By the way, WordPress won’t let me add angle brackets without selecting the ugly default code type, otherwise it thinks it’s HTML code. Can’t really be bothered to explore it further, but makes me think about converting my entire page to static content some time…

Little detour

At this point I had to apply a few more changes to VMs that were apparently created a while ago, with machine types like pc-i440fx-2.8. In order to apply q35 to their configs, libvirt wanted me to change the PCI controller type from pci-root to pcie-root.

<controller type="pci" index="0" model="pcie-root"/>

After booting these VMs, their network did not come up again. With the switch to pcie-root, Debian cheerfully renamed the network interfaces according to the new PCIe bus on the virtualized systems, breaking their network settings. The naming always went from the original ens0 to enp0s3. Predictable interface names, anyone?

It was quickly rectified by manually logging into each machine’s local root console through virt-manager’s VNC connection and editing the network config.
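For the Debian guests that meant little more than renaming the interface in /etc/network/interfaces – roughly like this, with the addressing obviously depending on the VM:

# /etc/network/interfaces: ens0 became enp0s3 after the switch to pcie-root
allow-hotplug enp0s3
iface enp0s3 inet dhcp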

Note: the Windows 10 VM was one of the Old Ones, but bravely handled the PCIe bus change by informing me that it’s now connected to “Network 2”. Whatever that means.

Setup part 2

With my VMs up and running again, fstrim should now be available:

root@proxy:~# fstrim -v /
/: 56.6 GiB (60766765056 bytes) trimmed

Success!

As mentioned earlier, I’ve opted for my own custom cronjob, with a small puppet module wrapped around it.

class js::module::fstrim_kvm {

  # virt-what backs the 'virtual' fact used below
  package { 'virt-what': ensure => installed }

  if $facts['virtual'] == 'kvm' {

    # trim the root filesystem twice a week, shortly before backups run
    cron { 'fstrim-root':
      ensure  => present,
      command => '/sbin/fstrim -v / >> /var/log/fstrim.log',
      user    => 'root',
      minute  => [fqdn_rand(30)],
      hour    => '23',
      weekday => [3,7],
      require => Package['virt-what'],
    }
  }
}

The cronjob requires the package virt-what, which puppet uses via its built-in fact virtual to determine whether the host is a KVM (QEMU) VM. The cronjob runs at a random minute (so the VMs don’t all trim at the same time) during the 23rd hour twice a week, shortly before my VM backups run. Also, if there’s a log server to pick up log data, having fstrim stats might be (mildly) interesting.
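On the VMs this ends up as an ordinary root crontab entry along the lines of the following (the minute differs per host thanks to fqdn_rand):

# m  h  dom mon dow  command
17 23 *  *   3,7     /sbin/fstrim -v / >> /var/log/fstrim.log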

Results

Comparing the qcow2 images on the hypervisor before and after fstrim, they now take up almost 70% less space. Very nice.

total 148G
 28G -rw-r--r--  1 libvirt-qemu libvirt-qemu  28G Nov 21 16:49 vm01.qcow2
 21G -rw-r--r--  1 libvirt-qemu libvirt-qemu 101G Nov 21 16:49 vm02.qcow2
 14G -rw-r--r--  1 libvirt-qemu libvirt-qemu  14G Nov 21 16:49 vm03.qcow2
 53G -rw-r--r--  1 libvirt-qemu libvirt-qemu  53G Nov 21 16:49 vm04.qcow2
 11G -rw-r--r--  1 libvirt-qemu libvirt-qemu  11G Nov 21 16:49 vm05.qcow2
6,6G -rw-r--r--  1 libvirt-qemu libvirt-qemu 6,7G Nov 21 16:49 vm06.qcow2
 17G -rw-r--r--  1 libvirt-qemu libvirt-qemu  17G Nov 21 16:49 vm07.qcow2
total 43G
9,4G -rw-r--r--  1 libvirt-qemu libvirt-qemu  28G Nov 22 13:10 vm01.qcow2
7,1G -rw-r--r--  1 libvirt-qemu libvirt-qemu 101G Nov 22 13:10 vm02.qcow2
5,2G -rw-r--r--  1 libvirt-qemu libvirt-qemu  14G Nov 22 13:10 vm03.qcow2
5,0G -rw-r--r--  1 libvirt-qemu libvirt-qemu  53G Nov 22 13:10 vm04.qcow2
6,0G -rw-r--r--  1 libvirt-qemu libvirt-qemu  11G Nov 22 13:10 vm05.qcow2
2,8G -rw-r--r--  1 libvirt-qemu libvirt-qemu 6,8G Nov 22 13:10 vm06.qcow2
6,5G -rw-r--r--  1 libvirt-qemu libvirt-qemu  17G Nov 22 13:10 vm07.qcow2
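If you prefer not to eyeball ls output, qemu-img info also reports both the virtual size and the actual disk usage per image (adjust the path to wherever your images live):

qemu-img info /var/lib/libvirt/images/vm01.qcow2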

To do

I’ve yet to implement fstrim on my Windows VM (if that’s even possible), mostly because it’s only one VM with maybe a couple of gigabytes to reclaim. Also, I’m too lazy to look into it. If you have a working solution, please drop a comment.

Persistent postfix config inside PHP docker container

One of my recent tasks included migrating an internal PHP-FPM application from a Debian 9 host (with a global PHP 7.0 installation) to a more flexible docker setup. One of the requirements was to retain the ability for the app to send mails to its users, which meant having a local SMTP server directly accessible to the PHP docker instance, and relaying any mails to a server on the outside.

I decided to set up a dockerized PHP-FPM environment through PHP’s official docker repo, using their image tagged as php:7.4-fpm-buster.

After some trial and error regarding proper RUN commands in the Dockerfile, this is what I came up with, which allows for a persistent mail server setup inside the PHP-FPM container.

FROM php:7.4-fpm-buster

# set the time zone for the container and PHP (verified via RUN date below)
ENV TZ="Europe/Berlin"
RUN echo "date.timezone = Europe/Berlin" > /usr/local/etc/php/conf.d/timezone.ini
RUN date

# preseed debconf so the postfix installation below runs without prompts
RUN echo "postfix postfix/mailname string internalapp.example.com" | debconf-set-selections
RUN echo "postfix postfix/main_mailer_type string 'Internet Site'" | debconf-set-selections

RUN apt-get update && apt-get install -y postfix libldap2-dev libbz2-dev \
    && docker-php-ext-install bcmath ldap bz2

# point postfix at the docker host, which acts as the mail relay
RUN postconf -e "myhostname = internalapp.example.com"
RUN postconf -e "relayhost = 172.18.0.1"
RUN /etc/init.d/postfix restart

Of course, “internalapp.example.com” is just a placeholder for the actual service URL. It’s important to set the postfix variables early through debconf-set-selections to allow for a promptless postfix installation later on, otherwise the container deployment gets stuck. I also had to manually set the time zone, confirming its correctness by echoing date during the build.

The relayhost is just the docker host itself, which is – in this case – running Postfix as well. Since I want it to act as a relay for my dockerized app, I had to edit /etc/postfix/main.cf to allow relay access from my docker network (which has been explicitly defined in its docker-compose.yml):

mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 172.18.0.0/24
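After changing mynetworks, a quick sanity check and reload on the host is all that’s needed (standard postfix tooling):

# verify the new value and reload postfix
postconf mynetworks
postfix reload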

One advantage of using the host mail server as a relay is that everything gets logged in its local mail.log, which might be helpful for further debugging or auditing.
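Watching the relay do its thing is then just a matter of tailing the log on the host:

tail -f /var/log/mail.log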

Keeping latest kernels in Debian with backports and puppet

I like running Debian stable as well as making use of recent kernels. Since I’m managing most of my infrastructure using puppet, I came up with a simple module which is included in my baseline role deployed on all systems.

The puppet apt module is needed here.

class js::module::kernel_update {

  # make sure apt metadata is refreshed daily
  class { 'apt':
    update => {
      frequency => 'daily',
    }
  }

  if $facts['os']['architecture'] == 'amd64' {

    # install the kernel metapackage from the matching backports suite
    if $facts['os']['distro']['codename'] == 'stretch' {
      package {
        ['linux-image-amd64']:
          ensure => latest,
          install_options => ['-t', 'stretch-backports']
      }
    }

    if $facts['os']['distro']['codename'] == 'buster' {
      package {
        ['linux-image-amd64']:
          ensure => latest,
          install_options => ['-t', 'buster-backports']
      }
    }
  }
}

Naturally the backports repo needs to be included for this to work. My sources.list.erb (also included in the baseline role) looks like this:

<% if @os['distro']['id'] == 'Debian' -%>

deb http://aptmirror/debian/ <%= @os['distro']['codename'] %> main contrib non-free
deb http://aptmirror/debian-security/ <%= @os['distro']['codename'] %>/updates main contrib non-free
deb http://aptmirror/debian/ <%= @os['distro']['codename'] %>-updates main contrib non-free
deb http://aptmirror/debian/ <%= @os['distro']['codename'] %>-backports main contrib non-free
deb http://apt.puppetlabs.com <%= @os['distro']['codename'] %> puppet

<% end -%>

Just replace ‘aptmirror’ with an apt mirror to your liking. Or run one yourself.
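Once the module and template are deployed, checking whether a machine actually tracks the backports kernel is straightforward:

# running kernel vs. available candidates
uname -r
apt policy linux-image-amd64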
