Moving from Firefox ESR to Firefox Quantum, or bye RequestPolicy

When Firefox Quantum was released last fall I switched to the ESR branch, currently on v52.7.3. My main – and pretty much only – reason for not using Quantum until now was its incompatibility with addons not written as native WebExtensions. It’s been over six months since Quantum’s initial release, and with more WebExtension addons available, I wanted to see if I’d be comfortable moving on as well.

First of all, Quantum feels much faster than the old Firefox, even with a dozen enabled addons. My main concern was RequestPolicy Continued, which I had used for years to build my own whitelist in order to keep out as much browser tracking as possible. Since there is still no WebExtension port, I started exploring other addons and found that uBlock Origin is capable of everything RequestPolicy can do. I’ve used UO on Firefox before, but only as a general adblock addon with default settings. By denying all 3rd-party resources globally while using the default filter lists to block undesired 1st-party content, uBlock Origin has broader capabilities than RequestPolicy. Here’s a nice explanation. But since there’s no way to export my RP whitelist to UO, I had to start over – which is not as painful as I initially feared. UO is a lot more effective for building a global whitelist in Firefox. The UO GitHub wiki has good explanations of its different blocking modes.


Here’s what RequestPolicy Continued on Firefox ESR (52.7.3) vs. uBlock Origin in Hard Mode with Firefox 59.0.2 looks like on heise.de.



UO now globally rejects any 3rd-party resource by default, and I can build my whitelist per website from the popup panel. Note the yellow indicator, which applies the common blocklists to all 1st-party resources. In addition, I disabled web fonts globally in UO (bottom right indicator), which renders websites a little less pretty, but works for me so far.
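For illustration, this setup boils down to a few lines in UO’s “My rules” pane – a sketch, where cdn.example.net stands in for whichever 3rd-party host a given site actually needs:

no-remote-fonts: * true
* * 3p block
* * 3p-frame block
www.heise.de cdn.example.net * noop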

I had no problem migrating my NoScript whitelist, since it already has a WebExtension port. A few other great privacy-related addons for Quantum include Cookie AutoDelete and Privacy Settings. There’s also an addon disabling Referrers globally, but it’s missing some functionality from RefControl, which I used before.


Overall, I’m happy with the move to Firefox Quantum. It’s faster, less resource-hungry, and I was able to transfer all of my privacy-related workflows.

Upgrading to Debian Stretch & fixing Cacti thold notification mails

With the upgrade from Debian Jessie to Stretch, the Cacti package went from 0.8.8b to 0.8.8h. A problem I had – and apparently a few other people, according to the Cacti forums – was that Cacti 0.8.8h in combination with the thold v0.5 plugin on Stretch refused to send “Downed device notifications”, or threshold warnings in general. Sending test emails with the Cacti settings plugin worked just fine, but that was it.

The issue lies with the split() function, which had been deprecated for a while and was finally removed in PHP 7. Cacti logged the following error:

PHP Fatal error:  Uncaught Error: Call to undefined function split() in /usr/share/cacti/site/plugins/thold/includes/polling.php:28

To fix the problem and have Cacti send mails again, simply replace split() with explode() in polling.php:

sed -i -e 's/split(/explode(/g' /usr/share/cacti/site/plugins/thold/includes/polling.php
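Since split() may be called in more than one thold file, it’s worth checking for leftovers – a quick sketch (matches like preg_split() are false positives and can be ignored):

grep -rn "split(" /usr/share/cacti/site/plugins/thold/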

Upgrading to Debian Stretch with dovecot, postfix & opendkim

Debian Stretch is about to be released. I’m already upgrading some of my systems, and want to document a few issues I encountered after upgrading my mail server from Debian Jessie to Stretch.


Dovecot forgot what SSLv2 is

Before the upgrade, dovecot was configured to reject login attempts with SSLv2 & SSLv3. The corresponding line in /etc/dovecot/dovecot.conf looked like this:

ssl_protocols = !SSLv3 !SSLv2

After upgrading, logging into the mail server failed. A look at the syslog:

dovecot: imap-login: Fatal: Invalid ssl_protocols setting: Unknown protocol 'SSLv2'

With the upgrade to Stretch and openssl 1.1.0, support for SSLv2 was dropped entirely. Dovecot simply doesn’t recognize the argument anymore. Editing dovecot.conf helped:

ssl_protocols = !SSLv3
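A quick sanity check – doveconf prints the setting as dovecot actually parsed it:

doveconf ssl_protocols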

opendkim using file-based sockets (Update 2017-10-13)

UPDATE – previous releases of opendkim on Stretch (v2.11.0) were affected by a bug that made the daemon ignore its own config file. See the Debian bug report.

The correct way to (re)configure the systemd daemon is to edit the default conf and regenerate the systemd config.

vi /etc/default/opendkim
# listen on loopback on port 12301:
SOCKET=inet:12301@localhost
/lib/opendkim/opendkim.service.generate
systemctl daemon-reload; systemctl restart opendkim

Tell postfix to use the TCP socket again, if necessary.

vi /etc/postfix/main.cf
# DKIM config
milter_protocol = 2
milter_default_action = accept
smtpd_milters = inet:localhost:12301
non_smtpd_milters = inet:localhost:12301
systemctl restart postfix

This should do it.
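To verify that opendkim is actually listening on the TCP socket again, a quick check with ss (or netstat) should show the milter on port 12301:

ss -lntp | grep 12301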

——————————————————–

Before the upgrade, opendkim (v2.9.2) was configured as an initd service using loopback to connect to postfix.

/etc/default/opendkim

SOCKET="inet:12301@localhost" # listen on loopback on port 12301

/etc/postfix/main.cf

# DKIM config
milter_protocol = 2
milter_default_action = accept
smtpd_milters = inet:localhost:12301
non_smtpd_milters = inet:localhost:12301
root@host:~# systemctl status opendkim
opendkim.service - LSB: Start the OpenDKIM service
   Loaded: loaded (/etc/init.d/opendkim)
   Active: active (running) since Mi 2017-05-31 15:23:34 CEST; 6 days ago
  Process: 715 ExecStart=/etc/init.d/opendkim start (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/opendkim.service
           ├─791 /usr/sbin/opendkim -x /etc/opendkim.conf -u opendkim -P /var/run/opendkim/opendkim.pid
           └─796 /usr/sbin/opendkim -x /etc/opendkim.conf -u opendkim -P /var/run/opendkim/opendkim.pid

During the system upgrade, the opendkim daemon was reconfigured as a native systemd service, which meant /etc/default/opendkim and /etc/init.d/opendkim became obsolete – even though I was asked to install the new package maintainer’s version of /etc/default/opendkim.

Now the opendkim (v2.11.0) systemd daemon looked like this:

opendkim.service - OpenDKIM DomainKeys Identified Mail (DKIM) Milter
   Loaded: loaded (/lib/systemd/system/opendkim.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/opendkim.service.d
           └─override.conf
   Active: active (running) since Wed 2017-06-07 13:10:15 CEST; 23s ago
 Main PID: 4806 (opendkim)
    Tasks: 7 (limit: 4915)
   CGroup: /system.slice/opendkim.service
           ├─4806 /usr/sbin/opendkim -P /var/run/opendkim/opendkim.pid -p local:/var/run/opendkim/opendkim.sock
           └─4807 /usr/sbin/opendkim -P /var/run/opendkim/opendkim.pid -p local:/var/run/opendkim/opendkim.sock

I tried editing /etc/postfix/main.cf & adding the postfix user to the opendkim group to reflect the changes:

# DKIM config
milter_protocol = 2
milter_default_action = accept
smtpd_milters = local:/var/run/opendkim/opendkim.sock
non_smtpd_milters = local:/var/run/opendkim/opendkim.sock
root@host:~# adduser postfix opendkim

Even after restarting opendkim & postfix, the connection still failed:

postfix/smtpd[4451]: warning: connect to Milter service local:/var/run/opendkim/opendkim.sock: No such file or directory

Some research revealed that postfix chroots its processes to /var/spool/postfix (I didn’t know that). To account for this, I created new subdirectories and edited the systemd unit:

root@host:~# mkdir -p /var/spool/postfix/var/run/opendkim
root@host:~# chown -R opendkim:opendkim /var/spool/postfix/var
root@host:~# systemctl edit opendkim
[Service]
ExecStart=
ExecStart=/usr/sbin/opendkim -P /var/run/opendkim/opendkim.pid -p local:/var/spool/postfix/var/run/opendkim/opendkim.sock

Note that the double ExecStart isn’t a typo – the empty ExecStart= clears the unit’s original command so the override can set a new one.

After restarting all affected services, my sent mails were getting a valid DKIM signature again.

opendkim[11357]: OpenDKIM Filter v2.11.0 starting (args: -P /var/run/opendkim/opendkim.pid -p local:/var/spool/postfix/var/run/opendkim/opendkim.sock)

Encrypt an existing Linux installation with LUKS and LVM

An issue I encountered recently – how to encrypt an existing Xubuntu setup. There are several ways to achieve this; here’s the process I used.

I’m working with the following assumptions:

  • The Linux installation to be encrypted is the only OS on disk.
  • The system is (X)Ubuntu or similar (Debian). Commands, paths to config files or package names might differ in other distributions.
  • The system is EFI-enabled. This means there is a 512 MiB FAT partition at the beginning of the disk, containing the EFI loader. This partition has to remain untouched. If your system is using legacy boot, ignore instructions regarding EFI later on.
  • A Live Linux USB stick (e.g. Xubuntu 16.10) and a separate hard disk with at least the same size as the system drive are available and ready. When in doubt, use a disk which is larger than the system drive.
  • The entire process takes time.
  • Mistakes happen. Be ready to lose data from the installed system! Ideally, there are multiple recent backups in place.

Before booting from the USB Linux, prepare the installed system by installing the necessary packages & latest updates.

root@host:~# apt update; apt upgrade; apt install cryptsetup pv lvm2 gparted

Remove old kernel images. This might take a while, depending on the age of the Linux installation.

root@host:~# apt autoclean; apt autoremove

Shut down the computer, connect the USB disk and the second hard drive. Boot into the live system. Make sure your keyboard layout is set accordingly.

root@live:~# dpkg-reconfigure keyboard-configuration

Install necessary packages on the live system as well.

root@live:~# apt update; apt install cryptsetup pv lvm2 gparted

Annoyingly, my live system auto-mounted the old system disk. Unmount if necessary.

Use fdisk -l to check the order of drives. In my case, sda is the old system disk, sdb is the USB stick, sdc is the second hard drive. Use dd to copy the entire system disk to the second drive, with pv monitoring progress. Don’t overwrite your system.

root@live:~# dd if=/dev/sda | pv --progress --eta --bytes --rate | dd of=/dev/sdc

When finished, open gparted and choose your system disk.

Delete the root and swap partition, create a new boot partition (512MiB, ext4, set boot / esp flags) and create a “cleared” partition from remaining available space. Leave the EFI partition untouched. Note: if there’s no EFI boot partition, format the entire disk and create partitions as described.


Consider securely erasing the old system partition. It takes time, but leaves no trace of unencrypted data on the system drive.

root@live:~# cryptsetup open --type plain /dev/sda3 container --key-file /dev/urandom
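Note that the open command only sets up a plain dm-crypt mapping keyed from /dev/urandom; the actual erase happens by writing zeros through that mapping, which land on disk as random-looking ciphertext. A sketch – dd will end with “No space left on device” once the partition is full, which is expected:

root@live:~# dd if=/dev/zero of=/dev/mapper/container bs=1M status=progress
root@live:~# cryptsetup close container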

Proceed to create the encrypted volume on the cleared partition and choose a strong password.

root@live:~# cryptsetup luksFormat -c aes-xts-plain64:sha512 -s 512 /dev/sda3

Open the encrypted volume.

root@live:~# cryptsetup luksOpen /dev/sda3 encrypted_system

Create a LVM volume group and logical volumes on top of the opened LUKS volume. Note: tempo is the name I chose. Feel free to use another name for the volume group, but keep it consistent.

root@live:~# pvcreate /dev/mapper/encrypted_system
root@live:~# vgcreate tempo /dev/mapper/encrypted_system
root@live:~# lvcreate -L 8G tempo -n swap
root@live:~# lvcreate -l 100%FREE tempo -n root

Set up the swap and root volume.

root@live:~# mkswap /dev/mapper/tempo-swap
root@live:~# mkfs.ext4 /dev/mapper/tempo-root

Mount the new root volume to /mnt.

root@live:~# mount /dev/mapper/tempo-root /mnt

Mount the old root partition, which has been copied to the second drive.

root@live:~# mkdir -p /media/old_root
root@live:~# mount /dev/sdc3 /media/old_root/

Navigate to the old root directory and use tar to copy the root system to the new LVM volume. The first tar doesn’t compress anything – it writes the archive to stdout, where the second tar reads it from stdin and unpacks it. This way, all file & system attributes are preserved.

root@live:~# cd /media/old_root/
root@live:~# tar cvf - . | tar xf - -C /mnt/

When finished, delete all contents of the boot directory, since it will be the mount point for the new boot partition. Use the piped tar command to copy the contents from the second drive, then mount the EFI partition as well.

root@live:~# rm -rf /mnt/boot/*
root@live:~# mount /dev/sda2 /mnt/boot
root@live:~# cd /media/old_root/boot/
root@live:~# tar cvf - . | tar xf - -C /mnt/boot/
root@live:~# mount /dev/sda1 /mnt/boot/efi

Get the UUID of the encrypted LUKS volume. We need this later on.

root@live:~# blkid /dev/sda3
/dev/sda3: UUID="0f348572-6937-410f-8e04-1b760d5d11fe" TYPE="crypto_LUKS" PARTUUID="85f58482-8b18-446a-8cb6-cfdfe30c7d55"

Prepare the new root system in /mnt for chroot.

root@live:~# for dir in /dev /dev/pts /proc /sys /run; do mount --bind $dir /mnt/$dir; done
root@live:~# chroot /mnt

In the chrooted environment, we need to create or edit several config files to tell Linux where to look for the LVM swap / root volumes and how to open them. Create /etc/crypttab with the name of the volume group (tempo in my case) and the LUKS UUID we got earlier.

# <target name>	<source device>				<key file>	<options>
encrypted_system  UUID=0f348572-6937-410f-8e04-1b760d5d11fe  none  luks,discard,lvm=tempo

Create a file named /etc/initramfs-tools/conf.d/cryptroot in the chrooted environment. Adjust the target (volume group & root volume, tempo-root in my case) and the source UUID of the LUKS partition.

CRYPTROOT=target=tempo-root,source=/dev/disk/by-uuid/0f348572-6937-410f-8e04-1b760d5d11fe

Run the following command in the chrooted environment. It should pass without issues.

root@live:~# update-initramfs -k all -c
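To double-check that cryptsetup actually made it into the new initramfs, list its contents – a sketch; the image name depends on your kernel version:

root@live:~# lsinitramfs /boot/initrd.img-* | grep cryptsetup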

Open /etc/default/grub in the chrooted environment. Find this line:

GRUB_CMDLINE_LINUX=""

Insert the appropriate values (volume group name, LUKS UUID):

GRUB_CMDLINE_LINUX="cryptopts=target=tempo-root,source=/dev/disk/by-uuid/0f348572-6937-410f-8e04-1b760d5d11fe,lvm=tempo"

Update grub in the chrooted environment. It will read arguments from /etc/default/grub and create new boot entries.

root@live:~# update-grub

Open /etc/fstab in the chrooted environment. Update the entry for the encrypted root and swap volume. Use blkid to find the UUID of the new boot partition. Leave the EFI partition entry untouched. My new fstab looks like this:

UUID=2886e598-0d5c-4576-87e7-a234011e7725	/boot		ext4	defaults		0	2
UUID=E2F4-2888					/boot/efi	vfat	umask=0077		0	3
/dev/mapper/tempo-root				/		ext4	errors=remount-ro	0	1
/dev/mapper/tempo-swap				none		swap	sw			0	0

That’s it. Close the chrooted environment and shut down the computer. Remove the USB stick and second hard drive. A password prompt should appear during boot. If everything goes well, the newly encrypted system will boot. Check if all partitions are mounted accordingly. Reboot again to check if recovery mode is working as well. Note that you still have an exact copy of your pre-encryption system on the second hard drive. After verifying the encrypted system works as intended, you might want to consider securely erasing the unencrypted disk.
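A quick way to confirm the layered setup after the first boot is lsblk, which should show the LUKS container and both LVM volumes stacked on the third partition (a sketch; adjust the device name):

root@host:~# lsblk -o NAME,TYPE,MOUNTPOINT /dev/sda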

PGP key generation – increase system entropy

While creating a new PGP key pair using Enigmail, the progress bar seems stuck, and there’s no CPU activity.

The problem – missing entropy for /dev/random. Take a look at the available kernel entropy:

user@host:~# watch -n 0.2 cat /proc/sys/kernel/random/entropy_avail

If the number stays below – say – 300, PGP can’t gather enough random data from /dev/random and won’t generate keys. There’s still /dev/urandom, which Enigmail/PGP apparently ignores. So in order to generate acceptable levels of entropy for /dev/random and Enigmail, I’m installing haveged, a “random number generator feeding Linux’s random device”.

user@host:~# sudo apt install haveged
user@host:~# sudo systemctl enable haveged.service
user@host:~# sudo systemctl start haveged.service

Now my system’s available entropy is at 1800, enough for Enigmail to generate my PGP keys.

Detach running processes from SSH shells

After starting some process in an SSH shell, you’ll notice it takes A LOT longer than anticipated. You want to close the session, but forgot to run the process within a screen or tmux session. Silly you.

But there’s a solution – suspend the process, put it in the background, and disown it from the current shell. More info about bash’s job control.

To suspend a running process, press Ctrl + Z:

user@remotehost:~# rsync -av /backup/ ~/restore
sending incremental file list
file1
file2
...

Ctrl + Z

[1]+  Stopped                 rsync -av /backup/ ~/restore

Now put the process in the background:

user@remotehost:~# bg
[1]+ rsync -av /backup/ ~/restore

And disown the job, with the job ID given by bg:

user@remotehost:~# disown -h %1

That’s it. Your process has been disassociated from its session, and will continue after you log out & close the shell. There’s one downside, though: stdin and stdout will be redirected to /dev/null, so manually double-check the results after your process finishes. Or use screen next time.

Accelerating MySQL file imports

Just a quick note on how to speed up the slow process of importing SQL files on Linux systems. I wanted to import a 50 MiB MySQL dump (and monitor it using pv), which just took way too long.

pv database.sql | mysql -u user -p  database
 
Enter password: 
 148KiB 0:00:13 [13.8KiB/s] [>         ]  0% ETA 1:13:21
 160KiB 0:00:14 [13.8KiB/s] [>         ]  0% ETA 1:13:03
 228KiB 0:00:19 [12.5KiB/s] [>         ]  0% ETA 1:09:29

I’m importing a 50 MiB file on an SSD here, so that can’t be right. Did I mention pv is great?

Anyway, the solution is to disable autocommit mode, which otherwise performs a log flush to disk for every insert. More information here.
Just open the SQL file with any text editor and add these statements at the very top and very bottom of the file (in nano, press Ctrl + W, then Ctrl + V to jump to the last line):

SET autocommit=0;
 
[content]
 
COMMIT;
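Alternatively, the statements can be wrapped around the dump on the fly, without editing the file – a sketch that keeps the pv progress monitoring:

( echo "SET autocommit=0;"; pv database.sql; echo "COMMIT;" ) | mysql -u user -p database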

That’s it. SQL imports should now be finished in no time.

49.1MiB 0:00:52 [ 960KiB/s] [=========>] 100%

Virtual machine backup without downtime

Using QCOW2 file-based VMs on Linux has lots of neat features. One of my favourites is the virsh blockcopy operation (assuming you are using libvirt, or are familiar with it – please read the blockcopy section in virsh’s man page before proceeding).

With libvirt, it’s possible to make use of a powerful snapshot toolkit. For now, I only want to copy an image for backup purposes without having to shut the virtualized guest down. This is where the blockcopy command comes into play. It’s simple enough; the only requirement is to temporarily undefine the guest during the blockcopy operation.

You can test it with a few commands – but be careful. Clone your VM and its configuration manually (shutdown & cp to somewhere else) beforehand, as files are easily overwritten by accident. Both the name of the guest and of the image are identical in my example (guest123). The target device is sda – yours might be vda or hda; take a look at the guest configuration, namely the disk section. Depending on your hardware and the size of the guest, the blockcopy process might take some time. I’m using htop / iotop to monitor activity during the operation.

virsh dumpxml --security-info guest123 > guest123.xml
virsh undefine guest123
virsh -q blockcopy guest123 sda guest123-backup.qcow2 --wait --finish
virsh define guest123.xml

That’s it. You now have a backup image of your running guest, without any downtime. Note that libvirt does not sparse the copied image, meaning it’s as large as the original image at the moment the operation finishes.
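If you want to reclaim that space locally (instead of relying on a sparse transfer later), the backup image can be re-sparsed with qemu-img – a sketch, with hypothetical file names:

qemu-img convert -O qcow2 guest123-backup.qcow2 guest123-backup-sparse.qcow2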

I’m using cron & a simple script to periodically pull backups of my VMs. It assumes the name of the guest and the image file are identical, as in the example above. It can be used as follows:

$ ./libvirt-backup guest123

With several VMs, each one gets its own cronjob. For the moment, my crontab looks similar to this:

# m h  dom mon dow   command
 05 00 * * 1 /vm/backup/libvirt-backup.sh guest1
 05 01 * * 1 /vm/backup/libvirt-backup.sh guest2
 05 02 * * 1 /vm/backup/libvirt-backup.sh guest3

I keep libvirt-backup.sh in the same directory as the images (/vm/backup). It works for now, but I might change that setup in the future. Don’t forget to set the executable flag.

$ chmod +x libvirt-backup.sh

And the script itself. It checks that the target file really is qcow2 and that the guest is running, logs the time & size of the VM, dumps the XML, undefines the VM, blockcopies the guest to /vm/backup (adding the current date to the file name), defines the VM again, transfers the copy to a “target-host” using rsync, and deletes the local copy. Important note – the “-S” flag makes rsync transfer sparse QCOW2 files efficiently, saving space & bandwidth.

#!/bin/bash

GUEST=$1

BACKUP_LOCATION=/vm/backup
XML_DUMP="${BACKUP_LOCATION}/xml"
GUEST_LOCATION=`virsh domstats $GUEST | grep block.0.path | cut -d = -f2-`
BLOCKDEVICE=`virsh domstats $GUEST | grep block.0.name | cut -d = -f2-`
DATE=`date +%F_%H-%M`
GUEST_SIZE=`du -sh $GUEST_LOCATION | awk '{ print $1 }'`

# Only handle qcow2 images
if [ `qemu-img info $GUEST_LOCATION | grep --count "file format: qcow2"` -eq 0 ]; then
	echo "Image file for $GUEST not in qcow2 format."
	exit 0;
fi

# Skip guests that are not running
if [ `virsh list | grep running | awk '{print $2}' | grep --count $GUEST` -eq 0 ]; then
	echo "$GUEST not active, skipping.."
	exit 0;
fi

logger "Guest backup for $GUEST starting - current image size at $GUEST_SIZE"

# Save the guest definition, then temporarily undefine it for blockcopy
virsh dumpxml --security-info $GUEST > $XML_DUMP/$GUEST-$DATE.xml
virsh undefine $GUEST > /dev/null 2>&1

# Copy the live disk image, blocking until the copy is consistent
virsh -q blockcopy $GUEST $BLOCKDEVICE $BACKUP_LOCATION/$GUEST-$DATE.qcow2 --wait --finish

# Re-define the guest from the saved XML
virsh define $XML_DUMP/$GUEST-$DATE.xml > /dev/null 2>&1

# Transfer image (sparse, -S) and XML dump, then remove the local image copy
rsync -S $BACKUP_LOCATION/$GUEST-$DATE.qcow2 target-host:/libvirt_daily_backups/$GUEST/
rsync -S $XML_DUMP/$GUEST-$DATE.xml target-host:/libvirt_daily_backups/$GUEST/
rm $BACKUP_LOCATION/$GUEST-$DATE.qcow2

logger "Guest backup for $GUEST done"

exit 0;

The blockcopy and rsync operations are rather I/O-heavy. If you are scheduling VM backups, it’s good practice to leave enough time between cronjobs and to avoid overlapping with other I/O-intensive tasks on your system, such as smartd scans.
Also, as mentioned before – development on KVM/QEMU and libvirt is ongoing & very active. For Debian-based systems, it might be worth pulling at least these packages from unstable APT sources.

Erasing hard disks fast & securely with OpenSSL

Erasing & overwriting disks with dd can take a very long time, both with /dev/zero and /dev/urandom. Most modern CPUs support AES-NI, which accelerates cryptographic operations while dramatically reducing system load. That’s why I’m using OpenSSL to erase my disk drives. The advantages are clear – encrypted pseudorandom data output at maximum I/O throughput. Studies have shown that one wipe is sufficient on magnetic HDDs.
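Whether a CPU supports AES-NI can be checked against the kernel’s CPU flags – a quick sketch that prints “aes” if the instruction set is available:

grep -m1 -o aes /proc/cpuinfo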


openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 </dev/null | base64)" -nosalt </dev/zero | pv --progress --eta --rate --bytes | dd of=/dev/sdX


Replace sdX with the target drive, and make sure pv is installed before executing. OpenSSL encrypts /dev/zero with a random password read from /dev/urandom. You should see a progress bar & ETA.