GPU Pass-through with Linux

This post is intended to get you up and running with GPU passthrough using Arch Linux and Windows. There are a number of different guides out there including the Arch wiki; however, I found that some detail was lacking for beginners. I also wrote this for my own sanity in case I break something or need to do this again.

What is needed:

  • A motherboard that supports IOMMU
  • 2x GPUs
    • Anything newer than 2012 should work
    • One for the host to run and one for the guest that you will be passing through
    • The Guest GPU must support UEFI
    • Enough RAM for the guest and the host (this particular build is 32GB RAM)

**Note: you must have two distinct GPU models. This setup will not work with two identical GPUs, because vfio-pci claims devices by vendor:device ID and would bind both cards, leaving none for the host.

  • Two monitors or a monitor that has two inputs (one monitor or input for each GPU)
  • A Windows ISO

For this guide, I will be using the following:

  • Intel Core i7 6700k
  • ASUS Maximus VIII Gene Motherboard
  • Nvidia Geforce GTX 1070 (to be passed through to the Windows VM)
  • Intel SSD 600p Series 256GB M.2 SSD (for the host OS)
  • SanDisk Ultra II 960GB SSD (to be passed through and dedicated to the Windows VM)
  • 32 GB Ram
  • Arch Linux*
  • Windows 10 64-bit

*Side note: My Arch install is currently using the linux-lts kernel due to a bug related to Intel integrated graphics and the newest mainline kernel.

With this particular configuration, the i7-6700K's integrated GPU will be configured as the primary GPU for the host OS, with the GTX 1070 being used for the Windows VM.

IOMMU Configuration:

Virtualization and IOMMU support must be enabled in the BIOS.  Look for settings named VT-x and VT-d (Intel), AMD-V and AMD-Vi/IOMMU (AMD), or simply "virtualization".

Next, enable IOMMU support via a kernel parameter in the bootloader.  Depending on your CPU, you will need one of:

  • intel_iommu=on
  • amd_iommu=on

If using systemd-boot, run:

sudo vim /boot/loader/entries/arch.conf

Add the parameter “intel_iommu=on” to the options line if you are using an Intel CPU, or “amd_iommu=on” if you are using an AMD CPU.

Example of my arch.conf:
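Something along these lines, assuming the linux-lts kernel; the microcode initrd and the root PARTUUID below are placeholders and will differ on your system:

title Arch Linux (LTS)
linux /vmlinuz-linux-lts
initrd /intel-ucode.img
initrd /initramfs-linux-lts.img
options root=PARTUUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx rw intel_iommu=on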

Check that IOMMU is enabled by running:

sudo dmesg | grep -e DMAR -e IOMMU

You should see a line that says “IOMMU enabled”

Example output:
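On an Intel system the output should contain a line roughly like the following (the timestamp will vary):

[    0.000000] DMAR: IOMMU enabled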

To find the devices you will be passing through, list pci devices with:

lspci

Look for any devices resembling your GPU.  In this case I have an Nvidia GPU and its associated audio device:
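For a GTX 1070 the two entries should look something like this (the 01:00.x bus addresses depend on your slot layout):

01:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070]
01:00.1 Audio device: NVIDIA Corporation GP104 High Definition Audio Controller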

Keep in mind that we will need to pass both of these devices through as they are both a part of the video card.

Kernel Modules:

Next, we will need to add kernel modules to be loaded early in the boot process.  Using your favorite editor, open up the file mkinitcpio.conf and add the modules to the MODULES line.  For this, I am using vim as the editor:

sudo vim /etc/mkinitcpio.conf

Add the modules:

vfio vfio_iommu_type1 vfio_pci vfio_virqfd

An example of how they should be added:
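Depending on your mkinitcpio version, MODULES is either a quoted string or a bash array; with the array syntax the line should end up looking like this:

MODULES=(vfio vfio_iommu_type1 vfio_pci vfio_virqfd)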

Exit and save changes.

Regenerate the initramfs with:

sudo mkinitcpio -p linux-lts

**I have specified “linux-lts” since I am using the linux-lts kernel.  You can substitute “linux-lts” with “linux” or “linux-vfio” depending on the kernel you are using.  Tab-completing should give you the right one but chances are you will want the standard “linux”.

At this point you should reboot so that the regenerated initramfs and the new modules take effect.  After rebooting, list the PCI entries for the devices you want to pass through, including their hardware IDs:

lspci -nnk
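Trimmed example output for the GTX 1070 (exact names may differ; the bracketed vendor:device IDs at the end of each line are what we need):

01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1070] [10de:1b81]
01:00.1 Audio device [0403]: NVIDIA Corporation GP104 High Definition Audio Controller [10de:10f0]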

Take note of the vendor:device IDs associated with your GPU and its audio function.  In this case, mine are “10de:1b81” and “10de:10f0”.

The next step is to add these hardware IDs to the VFIO config file so that vfio-pci claims both devices at boot.  Use your favorite editor and open the file:

sudo vim /etc/modprobe.d/vfio.conf

Add the hardware IDs to options:

options vfio-pci ids=10de:1b81,10de:10f0

If the file vfio.conf does not exist, make sure to create it and add the options line at the top of the file.

**Note: If you have additional hardware you want to pass through, add its IDs here as well.  We will not be adding the keyboard and mouse here, as we will be using evdev later in the startup script to toggle their input between the host and the guest VM.

Save and exit the file, regenerate the initramfs once more, and reboot.  After rebooting, run “lspci -nnk” again and confirm that the kernel driver in use for both devices is now “vfio-pci”.

QEMU Setup:

Now that the hardware is configured, we will need to set up QEMU and OVMF as the UEFI firmware.  Start by installing qemu and rpmextract:

sudo pacman -S qemu rpmextract

Grab the compatible OVMF image from here: https://www.kraxel.org/repos/jenkins/edk2/

We are looking for the x64 version as we will be running a 64-bit guest.  Once the download has finished, run “rpmextract.sh” on the file you downloaded:

rpmextract.sh edk2.git-ovmf-x64-X-XXXXXX

The extraction produces a usr/share/ directory in your current working directory.  Copy its contents into /usr/share/:

sudo cp -R usr/share/* /usr/share

Configuring the Guest VM:

At this point, you should be able to start QEMU from the command line or configure a GUI such as virt-manager.  For the purposes of simplicity, this guide will only cover the command line setup.  To test out that everything is working, this base script can be run:

#!/bin/bash
cp /usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd /tmp/my_vars.fd
qemu-system-x86_64 \
-enable-kvm \
-m 1024 \
-cpu host,kvm=off \
-vga none \
-device vfio-pci,host=01:00.0,multifunction=on \
-device vfio-pci,host=01:00.1 \
-drive if=pflash,format=raw,readonly,file=/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd \
-drive if=pflash,format=raw,file=/tmp/my_vars.fd

Running this script should boot into a UEFI shell.  This will appear on the monitor that you have plugged into the GPU that is being passed through. If you are using a single monitor with two inputs, check the other input to see if it has booted.  

With a successful UEFI boot, the next step is to specify and mount the installation ISOs and drive that Windows is to be installed to.  Grab the latest or stable VirtIO drivers from Fedora: https://fedoraproject.org/wiki/Windows_Virtio_Drivers#Direct_download

A few new lines will need to be added to our Bash script so that the ISOs we now have can be accessed by the VM.  Additionally, we will need to add the virtual or physical disk that Windows will be installed to.  I have opted to dedicate the 960GB SanDisk SSD to the Windows VM, so this guide will not go in depth on creating a virtual disk; see the sketch below if you want one.
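If you do want a virtual disk instead of a dedicated drive, one can be created with qemu-img; the path and size here are only placeholders, and the resulting image path would take the place of /dev/sda in the -drive line further down:

qemu-img create -f qcow2 /home/dcellular/VMs/windows10.qcow2 100G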

Specify the devices to be accessed by QEMU:

-device virtio-scsi-pci,id=scsi \

Add the disk that will be dedicated to the VM:

-drive file=/dev/sda,id=disk,format=raw,if=none -device scsi-hd,drive=disk \

To find the path to the disk, use the “lsblk” command in the terminal to list your available disks.
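Adding the size and model columns makes it easier to spot the right drive:

lsblk -o NAME,SIZE,MODEL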

Specify the path to the Windows 10 ISO and the VirtIO driver image that we downloaded from Fedora:

-drive file=/home/dcellular/ISOs/Windows10x64.iso,id=isocd,format=raw,if=none -device scsi-cd,drive=isocd \
-drive file=/home/dcellular/downloads/virtio-win.iso,id=virtiocd,if=none,format=raw -device ide-cd,bus=ide.1,drive=virtiocd

Finally, we need to address input devices to be shared with the VM.  QEMU 2.6 introduced evdev input support, which allows switching input between the host and the guest by pressing both Ctrl keys at the same time.

**Note: If you are not running at least QEMU 2.6, input device switching will not work and you will need to grab a newer version.  Use “qemu-system-x86_64 -version” to see which version you have installed.

List your connected devices:

ls /dev/input/by-id/
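With the keyboard and mouse used in this build, the relevant entries are the following (yours will be named after your own peripherals):

usb-Matias_Ergo_Pro_Keyboard-event-kbd
usb-Logitech_USB_Receiver-if02-event-mouse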

Your devices will vary and you may see more of them based on how many peripherals you have connected.

Use the event-kbd and event-mouse IDs and create a line for each to specify them in the script:

-object input-linux,id=kbd,evdev=/dev/input/by-id/usb-Matias_Ergo_Pro_Keyboard-event-kbd,grab_all=yes \
-object input-linux,id=mouse,evdev=/dev/input/by-id/usb-Logitech_USB_Receiver-if02-event-mouse

Add one more line into the script to enable sound:

-soundhw hda \

**Note: This will allow you to reroute sound through the speakers connected to the host using a utility such as PulseAudio.  Sound output can also be configured in Windows to come from the video card over a compatible HDMI or DisplayPort cable.
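To have QEMU of this era hand its audio to PulseAudio directly, the legacy audio backend can be selected with an environment variable before the qemu-system-x86_64 call (newer QEMU releases use -audiodev instead):

export QEMU_AUDIO_DRV=pa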

By now, your script should look something like this:

#!/bin/bash
cp /usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd /tmp/my_vars.fd
qemu-system-x86_64 \
-enable-kvm \
-m 1024 \
-cpu host,kvm=off \
-vga none \
-soundhw hda \
-device vfio-pci,host=01:00.0,multifunction=on \
-device vfio-pci,host=01:00.1 \
-drive if=pflash,format=raw,readonly,file=/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd \
-drive if=pflash,format=raw,file=/tmp/my_vars.fd \
-device virtio-scsi-pci,id=scsi \
-drive file=/dev/sda,id=disk,format=raw,if=none -device scsi-hd,drive=disk \
-drive file=/home/dcellular/ISOs/Windows10x64.iso,id=isocd,format=raw,if=none -device scsi-cd,drive=isocd \
-drive file=/home/dcellular/downloads/virtio-win.iso,id=virtiocd,if=none,format=raw -device ide-cd,bus=ide.1,drive=virtiocd \
-object input-linux,id=kbd,evdev=/dev/input/by-id/usb-Matias_Ergo_Pro_Keyboard-event-kbd,grab_all=yes \
-object input-linux,id=mouse,evdev=/dev/input/by-id/usb-Logitech_USB_Receiver-if02-event-mouse

You are now ready to boot and install Windows.  Run your script from the command line and you should see the QEMU window pop up on the desktop and the “Press any key…” boot prompt on your second monitor or input.  Continue through the setup process until you reach the screen asking you to select the disk to install Windows onto.  The disk will not show up until the VirtIO SCSI driver is loaded, so choose the option to load a driver, browse the virtio ISO to the vioscsi folder, and select the Windows 10 x64 driver.  The disk you want to install Windows to should then appear.  Follow through and finish the install.

At this point, you should have a fully functional installation of Windows, so make sure to install all necessary updates before installing any drivers.

**Note: I had issues originally trying to install the Nvidia drivers and was receiving errors about compatibility.  I eventually found out that Windows needed to be updated before they would install properly.

There are some additional configurations you may want to make to further optimize your VM, such as allocating additional RAM, specifying processor cores and threads, or using xrandr to control display output.  The following is how I have configured my script to start my VM:

#!/bin/bash

xrandr --output HDMI1 --off
cp /usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd /tmp/my_vars.fd
qemu-system-x86_64 \
-enable-kvm \
-m 16384 \
-smp cores=4,threads=2 \
-cpu host,kvm=off \
-vga none \
-soundhw hda \
-device vfio-pci,host=01:00.0,multifunction=on \
-device vfio-pci,host=01:00.1 \
-drive if=pflash,format=raw,readonly,file=/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd \
-drive if=pflash,format=raw,file=/tmp/my_vars.fd \
-device virtio-scsi-pci,id=scsi \
-drive file=/dev/sda,id=disk,format=raw,if=none -device scsi-hd,drive=disk \
-object input-linux,id=kbd,evdev=/dev/input/by-id/usb-Matias_Ergo_Pro_Keyboard-event-kbd,grab_all=yes \
-object input-linux,id=mouse,evdev=/dev/input/by-id/usb-Logitech_USB_Receiver-if02-event-mouse
xrandr --output HDMI1 --mode "3440x1440" --rate 49.99 --right-of HDMI2

Additional resources and references:

https://bufferoverflow.io/gpu-passthrough/

http://dominicm.com/gpu-passthrough-qemu-arch-linux/

https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF

https://www.kraxel.org/blog/2016/04/linux-evdev-input-support-in-qemu-2-6/