
GPU Passthrough on Fedora 32

Time to blog again, it’s been some time already!

The concept of GPU Passthrough with VMs has been dear to me for years, though on and off, like many things.

I had previously managed to get this working nicely with a Linux distribution as the host (of course) and Windows 10 as the main guest, mainly for gaming.

It took a bit to get it to work, but now it’s looking pretty good. That’s why I’ve decided to write down a guide for myself (and for whoever out there has a similar configuration and wants to do the same thing), so that if I ever forget or get confused about how I got it to work, well… it won’t take much time to get it sorted.

First things first though, a short description of what GPU Passthrough is all about. Some of us technophiles have decided to run a Linux distro as our daily driver, and that’s fine. That said, there are things for which Linux is still not great: Office applications, Adobe programs and… gaming. Gaming did get better on Linux over the last few years, it’s true, but for some games you still need Windows for a good experience. That, or you’re OK with spending possibly hours of your free time figuring out how to get things to work with Wine. If you are, that’s fine, no judgement from my end.

In any case, it’s for those exceptions that having a Windows VM on the side can be helpful. Sometimes a simple VM will be enough, especially for low-resource apps like MS Word and the like. However, this becomes less adequate for applications that normally make full use of your actual GPU (if you have a dedicated one). That’s what this is all about: running a Windows 10 VM that actually uses the dedicated graphics card.

Before going into the meat of it, here’s my basic setup:

  • Motherboard: Asus Z97 Pro Gamer
  • CPU: Intel i7-5775c (with iGPU)
  • 16 GB of RAM (I’ll have to think about upgrading sometime soon…)
  • NVIDIA GTX 1070 (using the proprietary nvidia drivers) as the dedicated GPU that I want to offload to my Windows VM guest
  • Fedora 32 with kernel 5.11.11 and SDDM as the display manager on the host

Now, let’s get started. By the way, I’m not going to mention anything AMD-related in this blog post since I’m using Intel hardware.

  1. What you want to do as a first step is to make sure that VT-d is actually enabled in your BIOS. Without this, you won’t even be able to get this whole GPU Passthrough to work at all. There should be an option for this if your hardware is compatible with it.
  2. Then, still in your BIOS, you need to set your iGPU as the default graphics processor. This is because, with a setup similar to mine, you’ll boot your computer using the iGPU, and whenever you want to run a guest VM that uses the dedicated graphics card, all you’ll need to do is unload the graphics card drivers, then load the vfio drivers and bind them to your GPU. That’s the basic idea, at least.
  3. Moving on, you want to make sure that IOMMU is enabled as a kernel parameter. Doing this is easy: just add intel_iommu=on at the end of GRUB_CMDLINE_LINUX, regenerate the grub config so the change is applied (see the example just below), and reboot for it to take effect.
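In case it helps, this is roughly what that looks like on Fedora. The grub.cfg path is an assumption that depends on whether your install boots in BIOS or UEFI mode, so double-check which one applies to you:
# Append intel_iommu=on to GRUB_CMDLINE_LINUX in /etc/default/grub, e.g.:
#   GRUB_CMDLINE_LINUX="rhgb quiet intel_iommu=on"
# Then regenerate the grub config and reboot:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg              # BIOS install
# sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg   # UEFI install on Fedora 32
sudo reboot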
  4. Once you’ve rebooted, you should check your IOMMU groups. You can do this with the following script, which you should run as root:
#!/bin/bash
for d in /sys/kernel/iommu_groups/*/devices/*; do
  n=${d#*/iommu_groups/*}; n=${n%%/*}
  printf 'IOMMU Group %s ' "$n"
  lspci -nns "${d##*/}"
done
  5. If you get an output similar to the one below (with different IOMMU groups), it looks like you can proceed. Otherwise, make sure both IOMMU and VT-d are enabled:
[root@desk ~]# ./iommu.sh 
IOMMU Group 0 00:00.0 Host bridge [0600]: Intel Corporation Broadwell-U Host Bridge - DMI [8086:1610] (rev 0a)
IOMMU Group 10 00:1c.7 PCI bridge [0604]: Intel Corporation 9 Series Chipset Family PCI Express Root Port 8 [8086:8c9e] (rev d0)
IOMMU Group 11 00:1d.0 USB controller [0c03]: Intel Corporation 9 Series Chipset Family USB EHCI Controller #1 [8086:8ca6]
IOMMU Group 12 00:1f.0 ISA bridge [0601]: Intel Corporation Z97 Chipset LPC Controller [8086:8cc4]
IOMMU Group 12 00:1f.2 SATA controller [0106]: Intel Corporation 9 Series Chipset Family SATA Controller [AHCI Mode] [8086:8c82]
IOMMU Group 12 00:1f.3 SMBus [0c05]: Intel Corporation 9 Series Chipset Family SMBus Controller [8086:8ca2]
IOMMU Group 13 05:00.0 Network controller [0280]: Intel Corporation Wireless 8260 [8086:24f3] (rev 3a)
IOMMU Group 1 00:01.0 PCI bridge [0604]: Intel Corporation Broadwell-U PCI Express x16 Controller [8086:1601] (rev 0a)
IOMMU Group 1 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1070] [10de:1b81] (rev a1)
IOMMU Group 1 01:00.1 Audio device [0403]: NVIDIA Corporation GP104 High Definition Audio Controller [10de:10f0] (rev a1)
IOMMU Group 2 00:03.0 Audio device [0403]: Intel Corporation Broadwell-U Audio Controller [8086:160c] (rev 0a)
IOMMU Group 3 00:14.0 USB controller [0c03]: Intel Corporation 9 Series Chipset Family USB xHCI Controller [8086:8cb1]
IOMMU Group 4 00:16.0 Communication controller [0780]: Intel Corporation 9 Series Chipset Family ME Interface #1 [8086:8cba]
IOMMU Group 5 00:19.0 Ethernet controller [0200]: Intel Corporation Ethernet Connection (2) I218-V [8086:15a1]
IOMMU Group 6 00:1a.0 USB controller [0c03]: Intel Corporation 9 Series Chipset Family USB EHCI Controller #2 [8086:8cad]
IOMMU Group 7 00:1b.0 Audio device [0403]: Intel Corporation 9 Series Chipset Family HD Audio Controller [8086:8ca0]
IOMMU Group 8 00:1c.0 PCI bridge [0604]: Intel Corporation 9 Series Chipset Family PCI Express Root Port 1 [8086:8c90] (rev d0)
IOMMU Group 9 00:1c.3 PCI bridge [0604]: Intel Corporation 82801 PCI Bridge [8086:244e] (rev d0)
IOMMU Group 9 03:00.0 PCI bridge [0604]: ASMedia Technology Inc. ASM1083/1085 PCIe to PCI Bridge [1b21:1080] (rev 04)
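If the script prints nothing at all (or your devices don’t show up in any group), a quick sanity check is to grep the boot log; with intel_iommu=on active you should see something like “DMAR: IOMMU enabled” in there:
# Look for DMAR/IOMMU messages from the kernel
sudo dmesg | grep -i -e DMAR -e IOMMU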
  6. The next step is to create your guest VM. For this, it’s best to use virt-manager with QEMU/KVM as the hypervisor. I suggest you first create the VM normally and only move on to the next step once you’ve already booted into it. That said, one important thing before the first boot is to customize the settings by choosing Q35 as the chipset and UEFI OVMF as the firmware. From what I’ve read, this helps with compatibility.
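For reference, if you prefer the command line over the virt-manager wizard, a rough virt-install equivalent could look like the sketch below; the VM name, memory, vCPU count, disk size and ISO path are all placeholders to adapt to your own setup:
# Hypothetical example of a Q35 + OVMF Windows 10 guest
virt-install \
  --name win10 \
  --memory 8192 \
  --vcpus 6 \
  --cdrom /path/to/Win10.iso \
  --disk size=80 \
  --os-variant win10 \
  --machine q35 \
  --boot uefi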
  7. Once the VM works normally, install a program called barrier, which will basically let you share your host keyboard and mouse with the guest VM without having to pass them through as hardware like the dedicated GPU. It works on a server/client basis, with your host set up as the server and the guest as the client, which is why it needs to be installed on both. The setup is not particularly difficult so I won’t go into it here. Then, shut down the guest VM.
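On the Fedora host, barrier should be available straight from the standard repos (on the Windows guest you can grab an installer from the project’s releases page):
# Install barrier on the host
sudo dnf install barrier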
  8. To make sure the guest is not recognized as a VM (the NVIDIA drivers have historically refused to work properly when they detect one), we need to hide the hypervisor by editing the configuration file of the guest via sudo virsh edit VMname:
...  
<cpu mode='host-model' check='partial'>
    <feature policy='disable' name='hypervisor'/>
</cpu>
...
  9. Once that’s done, you can finally add the PCI devices specific to your dedicated GPU (typically one for video and another one for audio) and remove the virtual graphics (Display Spice, Video QXL and the like).
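For reference, once the two PCI devices are added (through virt-manager or virsh edit), the guest XML ends up with hostdev entries roughly like the ones below; the bus/slot/function values here match my 01:00.0 and 01:00.1 addresses, so adapt them to yours:
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
  </source>
</hostdev>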
  10. Don’t start it just yet though! If your dedicated GPU is currently bound to its normal drivers (the proprietary nvidia driver for video and snd_hda_intel for the card’s HDMI audio), you’ll first need to 1. unload and unbind them and 2. load and bind the vfio drivers. I created two scripts, heavily inspired by what I found on Level1Techs. The first one gives the card back to the normal host drivers (for when you’re done playing with your VM); the second one does the opposite and binds the card to vfio so it can be passed to the VM:

    First, run systemctl isolate multi-user.target before executing either of these scripts, and make sure to replace the IDs with your own!
#!/bin/bash
# Script 1: unbind vfio-pci and give the dedicated GPU back to the host
# drivers (run this once you're done with the VM).

## Unload vfio
modprobe -r vfio_pci
modprobe -r vfio_iommu_type1
modprobe -r vfio

## VGA Controller: unbind vfio-pci and bind nvidia
echo '0000:01:00.0' > "/sys/bus/pci/devices/0000:01:00.0/driver/unbind"
echo '10de 1b81'    > "/sys/bus/pci/drivers/nvidia/new_id"
echo '0000:01:00.0' > "/sys/bus/pci/devices/0000:01:00.0/driver/bind"
echo '10de 1b81'    > "/sys/bus/pci/drivers/nvidia/remove_id"

## Audio Controller: unbind vfio-pci and bind snd_hda_intel 
echo '0000:01:00.1' > "/sys/bus/pci/devices/0000:01:00.1/driver/unbind"
echo '10de 10f0'    > "/sys/bus/pci/drivers/snd_hda_intel/new_id"
echo '0000:01:00.1' > "/sys/bus/pci/devices/0000:01:00.1/driver/bind"
echo '10de 10f0'    > "/sys/bus/pci/drivers/snd_hda_intel/remove_id"

## Load nvidia
modprobe nvidia_modeset
modprobe nvidia_uvm
modprobe nvidia
modprobe nvidia_drm

systemctl isolate graphical.target
#!/bin/bash
# Script 2: unbind the host drivers and hand the dedicated GPU over to
# vfio-pci (run this before starting the VM).

GPU=01:00
GPU_ID="10de 1b81"
GPU_AUDIO_ID="10de 10f0"

modprobe -r nvidia_drm
modprobe -r nvidia_modeset
modprobe -r nvidia_uvm
modprobe -r nvidia

# Unbind the GPU from its drivers
#echo -n "0000:${GPU}.0" > /sys/bus/pci/drivers/nvidia/unbind || echo "Failed to unbind gpu from nvidia"
echo -n "0000:${GPU}.1" > /sys/bus/pci/drivers/snd_hda_intel/unbind || echo "Failed to unbind hdmi audio in gpu"

# Load vfio driver
modprobe vfio-pci

# Hand over GPU to vfio-pci
echo -n "$GPU_ID" > /sys/bus/pci/drivers/vfio-pci/new_id
echo -n "$GPU_AUDIO_ID" > /sys/bus/pci/drivers/vfio-pci/new_id


systemctl isolate graphical.target

  11. If everything went according to plan, you should be able to confirm that the driver in use for both of your dedicated GPU’s PCI functions is now vfio-pci instead of nvidia/snd_hda_intel. If that’s the case, you should be ready to actually start your VM, and it should show up on the monitor connected to your dedicated GPU.
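A quick way to check this from the host, assuming the same 01:00 PCI address as in the scripts above:
# "Kernel driver in use" should now read vfio-pci for both functions
lspci -nnk -s 01:00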

Voilà!

Note: thank you to all the participants in the pages I’ve read on r/vfio, the Level1Techs forums and Andryo Marzuki’s blog!
