Apple Time Machine Network Backups with a Raspberry Pi 4

This guide will cover how to leverage a Raspberry Pi and a Linux software RAID 5 to perform automatic Time Machine backups for your Mac over the network. While creating a Time Machine backup with a Raspberry Pi has been covered before, I put my own spin on this as I had some old drives lying around from when I upgraded my Plex server and wanted a reference for setting up a Linux software RAID using mdadm.

The transition to working from home full time has forced a lot of us to re-evaluate our normal workflow. As my laptop now spends nearly 100% of its time plugged in and sitting on the desk, I decided to minimize the number of peripherals that need to be plugged into it. Specifically, I no longer need to carry around a back-up hard drive and I did not want it sitting on my desk 24/7.

This can be done using any model of Raspberry Pi; however, I strongly recommend using a Raspberry Pi 4 to take advantage of its support for full gigabit ethernet and USB 3.0. Backups can take a prolonged period of time and we want to maximize our throughput.

Hardware needed:

  • Raspberry Pi 4 (with a microSD card for the OS)
  • Five 2TB hard drives (any number and size of drives will work; five are used here)
  • A USB 3.0 multi-bay drive enclosure

RAID Configuration:

Begin with updating your Pi and installing mdadm.

pi@raspberrypi:~ $ sudo apt update && sudo apt upgrade -y
pi@raspberrypi:~ $ sudo apt install mdadm -y

Add your drives into the drive bays and plug the enclosure into the Pi. SSH to it remotely or open a terminal locally. Once plugged in, list out the available drives using lsblk. Note that the drives will appear as /dev/sdx and differ from the SD card at /dev/mmcblk0.

NAME        MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda           8:0    0  1.8T  0 disk
sdb           8:16   0  1.8T  0 disk
sdc           8:32   0  1.8T  0 disk
sdd           8:48   0  1.8T  0 disk
sde           8:64   0  1.8T  0 disk
mmcblk0     179:0    0 28.8G  0 disk
├─mmcblk0p1 179:1    0  256M  0 part  /boot
└─mmcblk0p2 179:2    0 28.5G  0 part  /

With the five available 2TB drives (reported as 1.8T by the OS), we will create the array in a RAID 5 configuration using mdadm.

pi@raspberrypi:~ $ sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=5 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde

The RAID array will start the build process and can be viewed with cat /proc/mdstat.

pi@raspberrypi:~ $ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sde1[5] sdc1[2] sda1[0] sdd1[3] sdb1[1]
      7813525504 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
      [=======>.............]  recovery = 26.3% (1924851/104038498) finish=7.3min speed=200808K/sec

unused devices: <none>
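
The detailed state of the array can also be checked at any point while it builds:

pi@raspberrypi:~ $ sudo mdadm --detail /dev/md0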

One nice perk of mdadm is that we can continue to interact with the RAID while it is building. Time Machine backups use Apple’s HFS+ (Mac OS Extended) file system, so we will need to install HFS+ utilities on the Raspberry Pi for interoperability, along with Netatalk to share the volume over AFP.

pi@raspberrypi:~ $ sudo apt install hfsutils hfsprogs netatalk -y

With HFS+ support installed, format the array as HFS+ using mkfs.hfsplus:

pi@raspberrypi:~ $ sudo mkfs.hfsplus /dev/md0 -v timeCapsule

If we run lsblk again, we should see our five drives configured in a RAID.

NAME        MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda           8:0    0  1.8T  0 disk
└─sda1        8:1    0  1.8T  0 part
  └─md0       9:127  0  7.3T  0 raid5 
sdb           8:16   0  1.8T  0 disk
└─sdb1        8:17   0  1.8T  0 part
  └─md0       9:127  0  7.3T  0 raid5 
sdc           8:32   0  1.8T  0 disk
└─sdc1        8:33   0  1.8T  0 part
  └─md0       9:127  0  7.3T  0 raid5 
sdd           8:48   0  1.8T  0 disk
└─sdd1        8:49   0  1.8T  0 part
  └─md0       9:127  0  7.3T  0 raid5 
sde           8:64   0  1.8T  0 disk
└─sde1        8:65   0  1.8T  0 part
  └─md0       9:127  0  7.3T  0 raid5 
mmcblk0     179:0    0 28.8G  0 disk
├─mmcblk0p1 179:1    0  256M  0 part  /boot
└─mmcblk0p2 179:2    0 28.5G  0 part  /

Now we need a directory to mount our new RAID volume to. Create it using the following command:

pi@raspberrypi:~ $ sudo mkdir /srv/TimeCapsule

Before mounting we need the UUID of the drive. Take note of the entry that points to ../../md0 as we will need to add this to /etc/fstab:

pi@raspberrypi:~ $ ls -alh /dev/disk/by-uuid/
total 0
drwxr-xr-x 2 root root 100 Apr 13 01:08 .
drwxr-xr-x 7 root root 140 Apr 13 01:08 ..
lrwxrwxrwx 1 root root  15 Mar  4 23:04 7581-8A48 -> ../../mmcblk0p1
lrwxrwxrwx 1 root root   9 Apr 13 01:08 e89b1b71-39e3-3632-b4cb-3eed298133e6 -> ../../md0
lrwxrwxrwx 1 root root  15 Mar  4 23:04 fa37d505-e741-4d35-bcec-4580aef395e1 -> ../../mmcblk0p2

Open up /etc/fstab with a text editor and add an entry for the new volume so it can be mounted at /srv/TimeCapsule. Note that with the noauto option shown here the volume will not be mounted automatically at boot; drop noauto from the options if you want it mounted on every boot or reboot:

proc            /proc           proc    defaults          0       0
PARTUUID=83c4223d-01  /boot           vfat    defaults          0       2
PARTUUID=83c4223d-02  /               ext4    defaults,noatime  0       1
UUID=e89b1b71-39e3-3632-b4cb-3eed298133e6 /srv/TimeCapsule hfsplus force,rw,user,noauto 0 0
# a swapfile is not a swap partition, no line here
#   use  dphys-swapfile swap[on|off]  for that

Mount the RAID disk at the directory:

pi@raspberrypi:~ $ sudo mount /srv/TimeCapsule/

If no output returns, the mount is successful. You can validate it by running lsblk to verify that md0 has a mount point set to /srv/TimeCapsule:

pi@raspberrypi:~ $ lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda           8:0    0  1.8T  0 disk
└─sda1        8:1    0  1.8T  0 part
  └─md127     9:127  0  7.3T  0 raid5 /srv/TimeCapsule
sdb           8:16   0  1.8T  0 disk
└─sdb1        8:17   0  1.8T  0 part
  └─md127     9:127  0  7.3T  0 raid5 /srv/TimeCapsule
sdc           8:32   0  1.8T  0 disk
└─sdc1        8:33   0  1.8T  0 part
  └─md127     9:127  0  7.3T  0 raid5 /srv/TimeCapsule
sdd           8:48   0  1.8T  0 disk
└─sdd1        8:49   0  1.8T  0 part
  └─md127     9:127  0  7.3T  0 raid5 /srv/TimeCapsule
sde           8:64   0  1.8T  0 disk
└─sde1        8:65   0  1.8T  0 part
  └─md127     9:127  0  7.3T  0 raid5 /srv/TimeCapsule
mmcblk0     179:0    0 28.8G  0 disk
├─mmcblk0p1 179:1    0  256M  0 part  /boot
└─mmcblk0p2 179:2    0 28.5G  0 part  /

With the drive mounted, configure permissions so that we can write to it without requiring root privileges:

pi@raspberrypi:~ $ sudo chown pi:pi /srv/TimeCapsule

Time Machine Configuration:

Configure /etc/nsswitch.conf and add mdns mdns4 to the end of the hosts: line:

pi@raspberrypi:~ $ sudo vim /etc/nsswitch.conf
# /etc/nsswitch.conf
#
# Example configuration of GNU Name Service Switch functionality.
# If you have the `glibc-doc-reference' and `info' packages installed, try:
# `info libc "Name Service Switch"' for information about this file.

passwd:         files
group:          files
shadow:         files
gshadow:        files

hosts:          files mdns4_minimal [NOTFOUND=return] dns mdns mdns4
networks:       files

protocols:      db files
services:       db files
ethers:         db files
rpc:            db files

netgroup:       nis

Next, configure Netatalk to mimic Apple’s Time Capsule in /etc/netatalk/afp.conf. Add the mimic model in the [Global] block and the path to our mounted Time Capsule volume in the [My Time Machine Volume] block. Be sure to remove the semicolon characters from the start of the lines in the [My Time Machine Volume] block as well.

pi@raspberrypi:~ $ sudo vim /etc/netatalk/afp.conf
;
; Netatalk 3.x configuration file
;

[Global]
; Global server settings
  mimic model = TimeCapsule6,106

; [Homes]
; basedir regex = /xxxx

; [My AFP Volume]
; path = /path/to/volume

[My Time Machine Volume]
  path = /srv/TimeCapsule
  time machine = yes

Finally, start up the two configured services to broadcast the drive as a Time Capsule and then enable them to start automatically at boot.

pi@raspberrypi:~ $ sudo systemctl start avahi-daemon.service
pi@raspberrypi:~ $ sudo systemctl start netatalk.service
pi@raspberrypi:~ $ sudo systemctl enable avahi-daemon.service
Synchronizing state of avahi-daemon.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable avahi-daemon
pi@raspberrypi:~ $ sudo systemctl enable netatalk.service
Synchronizing state of netatalk.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable netatalk

Note: Occasionally the HFS+ volume will be mounted in read-only mode if the Raspberry Pi restarts. The best explanation I have been able to find is that this is due to journaling with HFS+. I haven’t been able to find solid support for HFS+ journaling on Linux so the best way to fix this is to unmount and then remount the drive in the following way.

pi@raspberrypi:~ $ sudo systemctl stop avahi-daemon.service
pi@raspberrypi:~ $ sudo systemctl stop netatalk.service
pi@raspberrypi:~ $ sudo umount /srv/TimeCapsule
pi@raspberrypi:~ $ sudo mount -t hfsplus -o force,rw /srv/TimeCapsule
pi@raspberrypi:~ $ sudo systemctl start avahi-daemon.service
pi@raspberrypi:~ $ sudo systemctl start netatalk.service
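
If this happens often, the same sequence can be wrapped in a small script and run with sudo when needed. This is only a sketch based on the commands above; the fsck.hfsplus step is an optional extra from the hfsprogs package that may help clear an unclean volume state, and the UUID path matches the fstab entry from earlier.

#!/bin/bash
# remount-timecapsule.sh - recover the Time Machine volume if it comes back read-only
systemctl stop avahi-daemon.service
systemctl stop netatalk.service
umount /srv/TimeCapsule
# Optional: check the volume before remounting (fsck.hfsplus ships with hfsprogs)
fsck.hfsplus /dev/disk/by-uuid/e89b1b71-39e3-3632-b4cb-3eed298133e6
mount -t hfsplus -o force,rw /srv/TimeCapsule
systemctl start avahi-daemon.service
systemctl start netatalk.service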

Pivot back to your Mac and you should be able to connect directly to the Raspberry Pi from the GUI, using the Pi's user credentials, by navigating to Finder > Go > Connect to Server…
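
With the default hostname, the server address will look something like the following (adjust it if you have renamed the Pi or prefer to connect by IP address):

afp://raspberrypi.local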

Once connected, open up Time Machine Preferences, click “Select Disk” and choose your newly created Time Machine volume. For transparency, the screenshot shows my other time machine volume next to the newly created one.
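
If you prefer the command line on the Mac, the backup destination can also be set with tmutil. This is a sketch rather than part of the original setup: the share name matches the [My Time Machine Volume] block above, and the pi account and PASSWORD placeholder are assumptions about how you connect.

sudo tmutil setdestination -a "afp://pi:PASSWORD@raspberrypi.local/My%20Time%20Machine%20Volume"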

That’s it! You should be up and running now and you can forget about ever having to plug a backup drive into your computer again.

Final Thoughts:

This solution works pretty well for me and has been running stably for the last year. In the future, I may add more granular permissions for each user that will be backing up to this rather than having a single account for everyone to use to connect to the Pi. Additionally, I am interested to hear if anyone has a better solution than unmounting and remounting the volume and restarting the services when it becomes read-only. I’ve tried the cronjob approach without any luck.

GPU Pass-through with Linux

This post is intended to get you up and running with GPU passthrough using Arch Linux and Windows. There are a number of different guides out there including the Arch wiki; however, I found that some detail was lacking for beginners. I also wrote this for my own sanity in case I break something or need to do this again.

What is needed:

  • A motherboard that supports IOMMU
  • 2x GPUs
    • Anything newer than 2012 should work
    • One for the host to run and one for the guest that you will be passing through
    • The guest GPU must support UEFI
  • Enough RAM for the guest and the host (this particular build is 32GB RAM)
  • Two monitors or a monitor that has two inputs (one monitor or input for each GPU)
  • A Windows ISO

**Note: you must have two distinct GPUs.  This will not work if you have two identical GPUs.

For this guide, I will be using the following:

  • Intel Core i7 6700k
  • ASUS Maximus VIII Gene Motherboard
  • Nvidia GeForce GTX 1070 (to be passed through to the Windows VM)
  • Intel SSD 600p Series 256GB M.2 SSD (for the host OS)
  • SanDisk Ultra II 960GB SSD (to be passed through and dedicated to the Windows VM)
  • 32GB RAM
  • Arch Linux*
  • Windows 10 64-bit

*Side note: My Arch install is currently using the linux-lts kernel due to a bug related to Intel Integrated graphics and the newest linux kernel.

With this particular configuration, the i7-6700K integrated GPU will be configured as the primary GPU for the host OS, with the GTX 1070 being used for the Windows VM.

IOMMU Configuration:

Virtualization must be enabled via the BIOS.  You will want to look for settings that say VT-d, VT-x, AMD-V, virtualization, etc.

Next, enable IOMMU support via the bootloader.  Depending on your CPU, you will need to use:

  • intel_iommu=on
  • amd_iommu=on

If using systemd-boot, run:

sudo vim /boot/loader/entries/arch.conf

Add the parameter “intel_iommu=on” to the options line if you are using an Intel CPU, or “amd_iommu=on” if you are using an AMD CPU.

Example of an arch.conf entry with the parameter added (the kernel image names and PARTUUID below are placeholders and will differ on your system):
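
title   Arch Linux (LTS)
linux   /vmlinuz-linux-lts
initrd  /initramfs-linux-lts.img
options root=PARTUUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx rw intel_iommu=on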

Check that IOMMU is enabled by running:

sudo dmesg | grep -e DMAR -e IOMMU

You should see a line that says “IOMMU enabled”

To find the devices you will be passing through, list pci devices with:

lspci

Look for any devices resembling your GPU.  In this case there is an Nvidia GPU and its audio device; the relevant lspci lines look something like the following (bus addresses and exact names will vary):
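
01:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070]
01:00.1 Audio device: NVIDIA Corporation GP104 High Definition Audio Controller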

Keep in mind that we will need to pass both of these devices through as they are both a part of the video card.

Kernel Modules:

Next, we will need to add kernel modules to be loaded before boot.  Using your favorite editor, open up the file mkinitcpio.conf and add the modules.  For this, I am using vim as the editor:

sudo vim /etc/mkinitcpio.conf

Add the modules:

vfio vfio_iommu_type1 vfio_pci vfio_virqfd

An example of how they should be added to the MODULES line (keep any modules already listed there; on older mkinitcpio versions this is a quoted string rather than an array):
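
MODULES=(vfio vfio_iommu_type1 vfio_pci vfio_virqfd)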

Exit and save changes.

Regenerate the initramfs with:

sudo mkinitcpio -p linux-lts

**I have specified “linux-lts” since I am using the linux-lts kernel.  You can substitute “linux-lts” with “linux” or “linux-vfio” depending on the kernel you are using.  Tab-completing should give you the right one but chances are you will want the standard “linux”.

At this point you should reboot so the changes take effect.  After the reboot, use lspci to identify the devices you want to pass through and their hardware IDs.

lspci -nnk

Take note of the hardware IDs associated with your GPU and the GPU’s audio device, shown in brackets at the end of each line.  In this case, mine are labeled “10de:1b81” and “10de:10f0”.

The next step is to add these hardware IDs to the VFIO config file so that the vfio-pci driver claims both devices.  Use your favorite editor and open the file:

sudo vim /etc/modprobe.d/vfio.conf

Add the hardware IDs to options:

options vfio-pci ids=10de:1b81,10de:10f0

If the file vfio.conf does not exist, make sure to create it and add the options line at the top of the file.

**Note: If you have additional hardware you want to pass through, add its IDs here as well.  We will not be adding a keyboard and mouse here as we will be using evdev to toggle input from the keyboard and mouse between the host and guest VM later on in the startup script.

Save and exit the file, then reboot.  After rebooting, run lspci -nnk again and confirm that “Kernel driver in use: vfio-pci” is shown for both the GPU and its audio device.  If it is not, you may need to regenerate the initramfs once more so that the new modprobe options are picked up, then reboot again.

QEMU Setup:

Now that the hardware is configured, we will need to set up QEMU with OVMF as the UEFI firmware.  Start by installing qemu and rpmextract:

sudo pacman -S qemu rpmextract

Grab the compatible OVMF image from here: https://www.kraxel.org/repos/jenkins/edk2/

We will be looking for the x64 version as we will be running a 64bit OS.  Once the download has finished, run “rpmextract.sh” with the file you downloaded:

rpmextract.sh edk2.git-ovmf-x64-X-XXXXXX

This extracts a usr/share/ directory into your current working directory.  Copy its contents over into /usr/share/:

sudo cp -R usr/share/* /usr/share

Configuring the Guest VM:

At this point, you should be able to start QEMU from the command line or configure a GUI such as virt-manager.  For the purposes of simplicity, this guide will only cover the command line setup.  To test out that everything is working, this base script can be run:

#!/bin/bash
cp /usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd /tmp/my_vars.fd
qemu-system-x86_64 \
-enable-kvm \
-m 1024 \
-cpu host,kvm=off \
-vga none \
-device vfio-pci,host=01:00.0,multifunction=on \
-device vfio-pci,host=01:00.1 \
-drive if=pflash,format=raw,readonly,file=/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd \
-drive if=pflash,format=raw,file=/tmp/my_vars.fd \

Running this script should boot into a UEFI shell.  This will appear on the monitor that you have plugged into the GPU that is being passed through. If you are using a single monitor with two inputs, check the other input to see if it has booted.  

With a successful UEFI boot, the next step is to specify and mount the installation ISOs and drive that Windows is to be installed to.  Grab the latest or stable VirtIO drivers from Fedora: https://fedoraproject.org/wiki/Windows_Virtio_Drivers#Direct_download

A few new lines will need to be added to our Bash script to mount the ISOs we have now so that they can be accessed by the VM.  Additionally, we will need to add our virtual or physical disk to install Windows to.  I have opted to dedicate a 1TB SSD to the Windows VM so this guide will not go in depth on creating a virtual disk.

First, add a virtio-scsi controller that the disk and installation media will attach to:

-device virtio-scsi-pci,id=scsi \

Add the disk that will be dedicated to the VM:

-drive file=/dev/sda,id=disk,format=raw,if=none -device scsi-hd,drive=disk \

To find the path to the disk, use the “lsblk” command in the terminal to list your available disks.

Specify the path to the Windows 10 ISO and the VirtIO driver image that we downloaded from Fedora:

-drive file=/home/dcellular/ISOs/Windows10x64.iso,id=isocd,format=raw,if=none -device scsi-cd,drive=isocd \
-drive file=/home/dcellular/downloads/virtio-win.iso,id=virtiocd,if=none,format=raw -device ide-cd,bus=ide.1,drive=virtiocd

Finally, we need to address input devices to be shared with the VM. QEMU 2.6 saw the introduction of evdev input support which allows for switching input between the VM and host by hitting both CTRL keys at the same time.  

**Note: If you are not running at least QEMU 2.6, input device switching will not work and you will need to grab a newer version.  Use “qemu-system-x86_64 -version” to see which version you have installed.

List your connected devices:

ls /dev/input/by-id/

Your devices will vary and you may see more of them based on how many peripherals you have connected.
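
With the keyboard and mouse used later in this guide, the relevant entries would look something like this (your device names will differ):

usb-Matias_Ergo_Pro_Keyboard-event-kbd
usb-Logitech_USB_Receiver-if02-event-mouse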

Use the event-kbd and event-mouse IDs and create a line for each to specify them in the script:

-object input-linux,id=kbd,evdev=/dev/input/by-id/usb-Matias_Ergo_Pro_Keyboard-event-kbd,grab_all=yes \
-object input-linux,id=mouse,evdev=/dev/input/by-id/usb-Logitech_USB_Receiver-if02-event-mouse

Add one more line into the script to enable sound:

-soundhw hda \

**Note: This will allow you to reroute sound through the speakers connected to the host using a utility such as PulseAudio. Sound output can also be configured in Windows to output from the video card via a compatible HDMI or DisplayPort cable.
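
One way to route the guest audio to PulseAudio on the host is to set the audio driver environment variable before launching QEMU, for example near the top of the startup script. This is an assumption about your host audio setup rather than part of the original script, and it applies to QEMU versions that honor the QEMU_AUDIO_DRV variable:

export QEMU_AUDIO_DRV=pa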

By now, your script should look something like this:

#!/bin/bash
cp /usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd /tmp/my_vars.fd
qemu-system-x86_64 \
-enable-kvm \
-m 1024 \
-cpu host,kvm=off \
-vga none \
-soundhw hda \
-device vfio-pci,host=01:00.0,multifunction=on \
-device vfio-pci,host=01:00.1 \
-drive if=pflash,format=raw,readonly,file=/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd \
-drive if=pflash,format=raw,file=/tmp/my_vars.fd \
-device virtio-scsi-pci,id=scsi \
-drive file=/dev/sda,id=disk,format=raw,if=none -device scsi-hd,drive=disk \
-drive file=/home/dcellular/ISOs/Windows10x64.iso,id=isocd,format=raw,if=none -device scsi-cd,drive=isocd \
-drive file=/home/dcellular/downloads/virtio-win.iso,id=virtiocd,if=none,format=raw -device ide-cd,bus=ide.1,drive=virtiocd \
-object input-linux,id=kbd,evdev=/dev/input/by-id/usb-Matias_Ergo_Pro_Keyboard-event-kbd,grab_all=yes \
-object input-linux,id=mouse,evdev=/dev/input/by-id/usb-Logitech_USB_Receiver-if02-event-mouse

You are now ready to boot and install Windows.  Run your script from the command line and you should see the QEMU window pop up on the desktop and the “Press any key to continue…” dialog on your second monitor or input.  Continue through the setup process until you reach the screen asking you to select the disk to install Windows onto.  Follow the prompt or select the option to install the necessary driver.  Navigate through the virtio ISO to the vioscsi folder and find the Windows 10 x64 driver.  This will allow you to see the disk drive that you want to install Windows to.  Follow through and finish the install.

At this point, you should have a fully functional installation of Windows, so make sure to install all necessary updates before installing any drivers.

**Note: I had issues originally trying to install the Nvidia drivers and was receiving errors about compatibility.  I eventually found out that Windows needed to be updated before they would install properly.

There are some additional configurations you may want to make to further optimize your VM, such as allocating additional RAM, specifying processor cores and threads, or using xrandr to control display output.  The following is how I have configured my script to start my VM:

#!/bin/bash

xrandr --output HDMI1 --off
cp /usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd /tmp/my_vars.fd
qemu-system-x86_64 \
-enable-kvm \
-m 16384 \
-smp cores=4,threads=2 \
-cpu host,kvm=off \
-vga none \
-soundhw hda \
-device vfio-pci,host=01:00.0,multifunction=on \
-device vfio-pci,host=01:00.1 \
-drive if=pflash,format=raw,readonly,file=/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd \
-drive if=pflash,format=raw,file=/tmp/my_vars.fd \
-device virtio-scsi-pci,id=scsi \
-drive file=/dev/sda,id=disk,format=raw,if=none -device scsi-hd,drive=disk \
-object input-linux,id=kbd,evdev=/dev/input/by-id/usb-Matias_Ergo_Pro_Keyboard-event-kbd,grab_all=yes \
-object input-linux,id=mouse,evdev=/dev/input/by-id/usb-Logitech_USB_Receiver-if02-event-mouse
xrandr --output HDMI1 --mode "3440x1440" --rate 49.99 --right-of HDMI2

Additional resources and references:

https://bufferoverflow.io/gpu-passthrough/

http://dominicm.com/gpu-passthrough-qemu-arch-linux/

https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF

https://www.kraxel.org/blog/2016/04/linux-evdev-input-support-in-qemu-2-6/