To have an HVM for gaming, you must have:
A dedicated GPU. "Dedicated" means a secondary GPU, not the GPU used to display dom0. As of 2023, Nvidia and AMD GPUs work; Intel GPUs have not been tested. External GPUs connected over Thunderbolt also work (Create a Gaming HVM – #8 by solene).
A lot of patience. GPU passthrough is not trivial, and you will need to spend time debugging.
A screen available for the gaming HVM. It can be a dedicated physical monitor, or a single monitor with multiple cables connected so you can switch between input sources. [Optional]
A dedicated gaming mouse and keyboard. [Optional]
Goal
What
The goal of this step is to retrieve the default IOMMU groups (VFIO – "Virtual Function I/O" — The Linux Kernel documentation) of your hardware.
Why
It helps you understand potential issues with your setup (which devices live in the same IOMMU group as your GPU) and find potential workarounds.
If you feel lucky, skip this step.
How
You can't see your IOMMU groups while running Xen (the information is hidden from dom0).
Boot a live Linux distribution.
In GRUB, enable the IOMMU: add the parameter intel_iommu=on (Intel) or amd_iommu=on (AMD) to the Linux command line.
Once logged in to the live Linux distribution, you need to retrieve the folder structure of /sys/kernel/iommu_groups.
You can use the following script to do that:
#!/bin/bash
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
echo "IOMMU Group ${g##*/}:"
for d in $g/devices/*; do
echo -e "\t$(lspci -nns ${d##*/})"
done
done
You must hide your secondary GPU from dom0. To do that, you have to modify the GRUB configuration. In a dom0 terminal, type:
qvm-pci
Then find the device ID of your secondary GPU. In my case, it is dom0:0a_00.0. Edit /etc/default/grub and add the PCI hiding:
GRUB_CMDLINE_LINUX="… rd.qubes.hide_pci=0a:00.0"
Then regenerate the GRUB configuration:
grub2-mkconfig -o /boot/grub2/grub.cfg
If you are using UEFI and Qubes OS 4.1 or earlier, the file to override with grub2-mkconfig is /boot/efi/EFI/qubes/grub.cfg.
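The identifier printed by qvm-pci is not in the exact format that rd.qubes.hide_pci expects. A minimal sketch of the conversion, using the example device from this guide:

```shell
#!/bin/bash
# Convert a qvm-pci identifier (dom0:<bus>_<slot>.<func>) into the
# <bus>:<slot>.<func> form expected by rd.qubes.hide_pci.
qvm_id="dom0:0a_00.0"   # example value; substitute the output of `qvm-pci`
bdf="${qvm_id#dom0:}"   # strip the "dom0:" prefix -> 0a_00.0
bdf="${bdf/_/:}"        # replace "_" with ":"     -> 0a:00.0
echo "rd.qubes.hide_pci=$bdf"
```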
Note: if after this step you reboot and get stuck during Qubes OS startup, it means you are trying to use the GPU you just hid. Check your BIOS options. Also check the cables: BIOSes apply a GPU priority based on the type of cable. For example, DisplayPort can be favoured over HDMI.
Once you have rebooted, type sudo lspci -vvn in dom0. You should see "Kernel driver in use: pciback" for the GPU you just hid.
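The check above can be scripted. This sketch parses a captured sample of lspci output (the sample device line is illustrative); in dom0 you would pipe the real output of `sudo lspci -vvn -s 0a:00.0` instead:

```shell
#!/bin/bash
# Check whether a GPU is bound to the pciback driver, i.e. hidden from dom0.
# "sample" stands in for the output of: sudo lspci -vvn -s 0a:00.0
sample='0a:00.0 0300: 10de:2484 (rev a1)
	Kernel driver in use: pciback
	Kernel modules: nouveau'
driver=$(printf '%s\n' "$sample" | sed -n 's/.*Kernel driver in use: //p')
if [ "$driver" = "pciback" ]; then
  echo "OK: GPU is hidden from dom0"
else
  echo "WARNING: GPU still bound to: $driver"
fi
```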
Since the release of Xen 4.17.2-8 in Qubes OS (R4.2, 2024-01-03), no additional configuration is required.
Remove any existing "max-ram-below-4g" workaround.
If you are using an older version:
Why do we need to do that?
github.com/QubesOS/qubes-issues/issues/4321
Copy-paste of the comment:
This is caused by the default TOLUD (Top of Low Usable DRAM) of 3.75G provided by qemu not being large enough to accommodate the larger BARs that a graphics card typically has. The code to pass a custom max-ram-below-4g value to the qemu command line does exist in the libxl_dm.c file of xen, but there is no functionality in libvirt to add this parameter. It is possible to manually add this parameter to the qemu commandline by doing the following in a dom0 terminal.
("max-ram-below-4g" is not related to the amount of RAM you can assign to the VM, nor to VRAM. It controls how the guest's physical address space below 4 GB is split: usable RAM is only part of what must be mapped there, the rest being reserved for devices such as the GPU's BARs.)
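A back-of-the-envelope view of why this matters: the MMIO hole below 4 GB is roughly 4 GB minus max-ram-below-4g, and the GPU's BARs must fit inside it. A sketch of the arithmetic:

```shell
#!/bin/bash
# MMIO space left below 4G for each max-ram-below-4g value (in MB):
# 3840MB = qemu's default 3.75G TOLUD, 3584MB = 3.5G, 2048MB = 2G.
for val_mb in 3840 3584 2048; do
  hole_mb=$((4096 - val_mb))
  echo "max-ram-below-4g=${val_mb}MB -> ${hole_mb}MB of MMIO space below 4G"
done
```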
Finding the correct value for this parameter
Below, we set the "max-ram-below-4g" parameter to "3.5G".
For some GPUs this value needs to be "2G" (discovered here: Quick howto: GPU passthrough with lots of RAM). It is not currently well understood why the value needs to be exactly "2G" or exactly "3.5G", or possibly some other value for GPUs/configurations not seen yet. (AppVM with GPU pass-through crashes when more than 3.5 GB (3584MB) of RAM is assigned to it · Issue #4321 · QubesOS/qubes-issues · GitHub)
More investigation is required to understand what is going on with this parameter.
Current best guess is to run this command in dom0:
lspci -vvs GPU_IDENTIFIER | grep Region — for example: lspci -vvs 0a:00.0 | grep Region.
If the largest [size=XXXX] value is 256MB, try 3.5G for max-ram-below-4g. If the largest value is bigger, try 2G.
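This heuristic can be sketched as a small script. It parses a captured sample of `lspci -vvs ... | grep Region` output (pipe the real dom0 output instead); note it only handles BAR sizes printed with an M suffix — very large 64-bit BARs can be printed with a G suffix, which this sketch does not parse:

```shell
#!/bin/bash
# Pick a max-ram-below-4g value from the largest MB-sized BAR of the GPU.
# "sample" stands in for: lspci -vvs 0a:00.0 | grep Region
sample='	Region 0: Memory at fb000000 (32-bit, non-prefetchable) [size=16M]
	Region 1: Memory at c0000000 (64-bit, prefetchable) [size=256M]
	Region 3: Memory at d0000000 (64-bit, prefetchable) [size=32M]'
largest_mb=$(printf '%s\n' "$sample" \
  | sed -n 's/.*\[size=\([0-9]*\)M\].*/\1/p' | sort -n | tail -1)
if [ "$largest_mb" -le 256 ]; then
  echo "try 3.5G for max-ram-below-4g"
else
  echo "try 2G for max-ram-below-4g"
fi
```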
Update: I think I discovered the reason (AppVM with GPU pass-through crashes when more than 3.5 GB (3584MB) of RAM is assigned to it · Issue #4321 · QubesOS/qubes-issues · GitHub, Xen project mailing list). If you want to and have the skills required to compile the Xen package, try applying this patch (Fix guest memory corruption caused by hvmloader by neowutran · Pull Request #172 · QubesOS/qubes-vmm-xen · GitHub) and confirm whether it works as expected. With this patch, the part about "patching stubdom-linux-rootfs.gz" is not needed.
Patching stubdom-linux-rootfs.gz
I modified the original code to:
make it work with Qubes R4.1/R4.2
remove one of the original limitations by restricting the modification to VMs whose name starts with "gpu_"
add a way to set the "max-ram-below-4g" value per VM. For example, if you want to use 2G, name the VM "gpu_2G_YOURNAME"; if you want 3.5G, name it "gpu_3n5G_YOURNAME".
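As an illustration of that naming convention (not the actual code inside the patched stubdom), the value can be derived from the VM name like this; the 3.5G fallback for other "gpu_" names is an assumption based on the default value used in this guide:

```shell
#!/bin/bash
# Map a VM name to a max-ram-below-4g value per the gpu_* naming convention.
vm_to_value() {
  case "$1" in
    gpu_2G_*)   echo "2G" ;;
    gpu_3n5G_*) echo "3.5G" ;;
    gpu_*)      echo "3.5G" ;;  # assumed default for other gpu_ names
    *)          echo "" ;;      # not a gaming HVM: no override
  esac
}
vm_to_value "gpu_2G_win10"    # prints 2G
vm_to_value "gpu_3n5G_games"  # prints 3.5G
```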
mkdir stubroot
cp /usr/libexec/xen/boot/qemu-stubdom-linux-rootfs stubroot/qemu-stubdom-linux-rootfs.gz
cd stubroot
gunzip qemu-stubdom-linux-rootfs.gz
cpio -i -d -H newc --no-absolute-filenames < qemu-stubdom-linux-rootfs
rm qemu-stubdom-linux-rootfs
# edit the extracted "init" script here to add the "max-ram-below-4g" option
find . -print0 | cpio --null -ov --format=newc | gzip > ../qemu-stubdom-linux-rootfs
sudo mv ../qemu-stubdom-linux-rootfs /usr/libexec/xen/boot/
Note that this will apply the change to the HVM with a name starting with “gpu_”. So you need to name your gaming HVM “gpu_SOMETHING”.
Alternatively, the following dom0 script, "patch_stubdom.sh", performs all the previous steps:
#!/bin/bash
patch_rootfs(){
filename=${1?Filename is required}
cd ~/
sudo rm -R "patched_$filename"
mkdir "patched_$filename"
cp "/usr/libexec/xen/boot/$filename" "patched_$filename/$filename.gz"
cp "/usr/libexec/xen/boot/$filename" "$filename.original"
cd "patched_$filename"
gunzip "$filename.gz"
cpio -i -d -H newc --no-absolute-filenames < "$filename" || exit
rm "$filename"
# (insert here the commands that patch the extracted "init" script with
#  the "max-ram-below-4g" option chosen from the VM name)
find . -print0 | cpio --null -ov --format=newc | gzip > "../$filename.patched"
sudo cp "../$filename.patched" "/usr/libexec/xen/boot/$filename"
cd ~/
}
grep -i "max-ram-below-4g" /usr/share/qubes/templates/libvirt/xen.xml && echo "!!ERROR!! xen.xml is patched ! EXITING !" && exit
patch_rootfs “qemu-stubdom-linux-rootfs”
patch_rootfs “qemu-stubdom-linux-full-rootfs”
echo "stubdoms have been patched."
OUTDATED – DO NOT USE – Other method: Patching xen.xml instead of stubdom
Instead of patching stubdom-linux-rootfs, you could inject the command directly into the configuration template: the file "templates/libvirt/xen.xml" in the "qubes-core-admin" repository. In dom0, this file is at "/usr/share/qubes/templates/libvirt/xen.xml".
The modification adds the needed "max-ram-below-4g" option to the emulator command line defined in that template.
Pulseaudio
The sound was buggy/laggy on my computer, so I tried to find a workaround by playing with the Pulseaudio settings. It was more or less random trial and error, so I can't really explain it. In /etc/pulse/daemon.conf, add the following lines:
default-fragments = 60
default-fragment-size-msec = 1
high-priority = no
realtime-scheduling = no
nice-level = 18
In /etc/pulse/qubes-default.pa, change
load-module module-udev-detect
to
load-module module-udev-detect tsched=0
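One way to apply that change is with sed. A sketch, demonstrated on a temporary copy of the line (point the sed command at /etc/pulse/qubes-default.pa for real use):

```shell
#!/bin/bash
# Append tsched=0 to the module-udev-detect line of a pulseaudio config.
f=$(mktemp)
echo 'load-module module-udev-detect' > "$f"
sed -i 's/^load-module module-udev-detect$/load-module module-udev-detect tsched=0/' "$f"
result=$(cat "$f")
echo "$result"   # prints: load-module module-udev-detect tsched=0
rm -f "$f"
```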
You can launch your favorite window manager like this:
sudo ./xorgX1.sh /usr/bin/i3
I am trying to write a script to automate all the previous steps.
It is available here:
https://git.sr.ht/~yukikoo/gpu_template
Please be careful and read the code; not many tests have been done.
Issues and fixes
In one case, in a setup with an Intel iGPU + Nvidia dGPU, dom0's Xorg crashed.
The case was solved by adding an Xorg configuration to explicitly use the Intel GPU:
I was able to pass through my GPU without even hiding it using boot parameters, just by configuring the correct GPU for Xorg to boot on. Since I'm using my iGPU for Qubes, the config file is this:
vim /etc/X11/xorg.conf.d/20-intel.conf
Section "Device"
Identifier "Intel Graphics"
Driver "intel"
Option "DRI" "3"
EndSection
I think this might fix your situation. If it doesn't work, you can simply delete the 20-intel.conf file you created and go back to normal.
References
This document was migrated from the qubes-community project
Page archive
First commit: 18 Jan 2023. Last commit: 18 Jan 2023.
Applicable Qubes OS releases based on commit dates and supported releases: 4.1
Original author(s) (GitHub usernames): neowutran
Original author(s) (forum usernames): @neowutran
Document license: CC BY 4.0
Contributors

