Tesla M40 vGPU Proxmox 7.1


This guide covers the 24 GB version of the Tesla M40 and shows how to override the default vGPU profiles to run 3x 8 GB vGPUs.
(It also works with the 12 GB version; just adjust the framebuffer values accordingly.)


Host: Proxmox 7.1-10
Kernel: 5.13.19-2-pve
Driver (Host): NVIDIA-Linux-x86_64-510.47.03-vgpu-kvm.run from NVIDIA portal
Driver (Guest): 511.79-quadro-rtx-desktop-notebook-win10-win11-64bit-international-dch-whql Direct from NVIDIA driver download
Profile: nvidia-18 [framebuffer overridden to 8053063680 bytes (8 GB) using vgpu_unlock-rs]
Software: Parsec & VB Cable for audio


Download the host driver from the NVIDIA licensing portal, then configure it following the instructions below.

Installation - vGPU Unlock

Install Dependencies
nano /etc/apt/sources.list
deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription

apt update
apt -y upgrade
apt -y install git build-essential dkms jq

git clone https://github.com/DualCoder/vgpu_unlock

-NOTE- Cloning directly on the Proxmox host failed in my testing, while cloning on another system and transferring vgpu_unlock over via SCP worked. Makes no sense, I know, but if git clone fails, try that route.

If your kernel isn’t already in the 5.13.19 series (check with uname -r), install the matching kernel and headers using apt install pve-headers-5.13.19-*-pve pve-kernel-5.13.19-*-pve and reboot into it.
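If you want to script that check, something like the following works (kernel_ok is a made-up helper name, not part of any Proxmox tooling):

```shell
# Sketch: check whether the running kernel is in the 5.13.19 series.
# kernel_ok is a hypothetical helper, shown only for illustration.
kernel_ok() {
    case "$1" in
        5.13.19-*) return 0 ;;
        *)         return 1 ;;
    esac
}

if kernel_ok "$(uname -r)"; then
    echo "kernel is in the 5.13.19 series"
else
    echo "install pve-kernel-5.13.19 and reboot first"
fi
```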

wget http://ftp.br.debian.org/debian/pool/main/m/mdevctl/mdevctl_0.81-1_all.deb

chmod -R +x vgpu_unlock
dpkg -i mdevctl_0.81-1_all.deb

nano /etc/default/grub

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
- OR -
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"

Save the file and close it, then run update-grub
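If you’re not sure which of the two lines applies, the CPU vendor string in /proc/cpuinfo settles it. A minimal sketch (iommu_arg is a made-up helper, not part of any tool used in this guide):

```shell
# Sketch: choose the IOMMU kernel argument from the CPU vendor string
# reported in /proc/cpuinfo (GenuineIntel vs AuthenticAMD).
iommu_arg() {
    case "$1" in
        GenuineIntel) echo "intel_iommu=on iommu=pt" ;;
        AuthenticAMD) echo "amd_iommu=on iommu=pt" ;;
        *) echo "unknown CPU vendor: $1" >&2; return 1 ;;
    esac
}

# vendor_id line looks like: "vendor_id  : GenuineIntel"
vendor=$(awk '/vendor_id/ {print $3; exit}' /proc/cpuinfo)
echo "GRUB_CMDLINE_LINUX_DEFAULT=\"quiet $(iommu_arg "$vendor")\""
```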

Load VFIO modules on boot
nano /etc/modules

vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

Save the file and close

echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf
echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm.conf
echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf

update-initramfs -u

Verify IOMMU Enabled
dmesg | grep -e DMAR -e IOMMU

Then install the host driver:

chmod +x NVIDIA-Linux-x86_64-510.47.03-vgpu-kvm.run
./NVIDIA-Linux-x86_64-510.47.03-vgpu-kvm.run --dkms

Edit /usr/src/nvidia-510.47.03/nvidia/os-interface.c (e.g. nano /usr/src/nvidia-510.47.03/nvidia/os-interface.c) and add the line #include "/root/vgpu_unlock/vgpu_unlock_hooks.c" after #include "nv-time.h", so that it looks like the following:

#define  __NO_VERSION__

#include "os-interface.h"
#include "nv-linux.h"

#include "nv-time.h"

#include "/root/vgpu_unlock/vgpu_unlock_hooks.c"

Then add ldflags-y += -T /root/vgpu_unlock/kern.ld to the end of /usr/src/nvidia-510.47.03/nvidia/nvidia.Kbuild (e.g. nano /usr/src/nvidia-510.47.03/nvidia/nvidia.Kbuild). Save, quit, and run:

dkms remove -m nvidia -v 510.47.03 --all
dkms install -m nvidia -v 510.47.03

And lastly, reboot the host:

reboot


vGPU Profile Selection

When you log back in, run mdevctl types to list the vGPU profiles the card now offers.

Determine which vGPU profile you’d like to use and make note of its identifier, e.g. nvidia-18 for the 8 GB profile used in this guide.

Generate UUIDs for each vGPU you want to create. I used https://uuidgenerator.net, as it allows you to generate as many UUIDs as you’d like and download them in a text file, which makes the scripting in the next step easy.

vGPU Rust Unlock

Clone vgpu_unlock-rs, then download and install Rust:

git clone https://github.com/mbilker/vgpu_unlock-rs
curl https://sh.rustup.rs -sSf | sh -s --

Then run source $HOME/.cargo/env to reload your shell environment, cd vgpu_unlock-rs, and run cargo build --release

Create the required directories

mkdir /etc/systemd/system/nvidia-vgpud.service.d
mkdir /etc/systemd/system/nvidia-vgpu-mgr.service.d

Then create the following files

nano /etc/systemd/system/nvidia-vgpud.service.d/vgpu_unlock.conf
nano /etc/systemd/system/nvidia-vgpu-mgr.service.d/vgpu_unlock.conf

each with the following contents (adjust the path if vgpu_unlock-rs was cloned somewhere other than /root):

[Service]
Environment=LD_PRELOAD=/root/vgpu_unlock-rs/target/release/libvgpu_unlock_rs.so


Create the profile override using:

mkdir -p /etc/vgpu_unlock
nano /etc/vgpu_unlock/profile_override.toml

and populate with the below (for 8 GB vGPUs; the section header scopes the override to the nvidia-18 profile):

[profile.nvidia-18]
framebuffer = 8053063680

MDEVCTL Configuration

Take one of the UUIDs you generated earlier and use it in the commands below (repeat for as many VMs as you want). The PCI address can be found with nvidia-smi.

mdevctl start -u [UUID] -p [PCI Address] --type nvidia-18
mdevctl define --auto --uuid [UUID]
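With the UUID file from earlier, the two commands can be scripted per vGPU. A minimal sketch (gen_mdev_cmds is a made-up helper and the PCI address is a placeholder, take the real one from nvidia-smi; the sketch only prints the commands, pipe the output to sh to actually run them):

```shell
# Sketch: print the mdevctl start/define pair for one vGPU.
# gen_mdev_cmds is a hypothetical helper; the PCI address below is a
# placeholder - check nvidia-smi on your host for the real one.
gen_mdev_cmds() {
    uuid=$1; pci=$2; profile=$3
    printf 'mdevctl start -u %s -p %s --type %s\n' "$uuid" "$pci" "$profile"
    printf 'mdevctl define --auto --uuid %s\n' "$uuid"
}

# One command pair per UUID in the file generated earlier (if present):
if [ -f vgpu-uuids.txt ]; then
    while read -r u; do
        gen_mdev_cmds "$u" "0000:04:00.0" "nvidia-18"
    done < vgpu-uuids.txt
fi
```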

Set up a Windows VM in the GUI, making sure to set the CPU type to host and to disable ballooning in the memory options. Once done, modify the VM config by adding the line below, replacing [UUID] with one of your generated UUIDs:

nano /etc/pve/qemu-server/###.conf

or, if part of a cluster:

nano /etc/pve/nodes/[Node]/qemu-server/###.conf

args: -device 'vfio-pci,sysfsdev=/sys/bus/mdev/devices/[UUID],display=off,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,x-pci-vendor-id=0x10de,x-pci-device-id=0x17F0,x-pci-sub-vendor-id=0x10de,x-pci-sub-device-id=0x11A0' -uuid [UUID]

Install the Windows VM as normal. Once at the desktop, install the NVIDIA driver for a Quadro M6000.
Install Parsec as a service and login before shutting down the VM.

Once done, modify your VM’s display settings: either set the display to none in the GUI, or add vga: none to the config file.

Important notes

  1. Once configured, modify the Windows power plan so it never puts the computer to sleep or turns off the display, otherwise the VM will suspend and Parsec won’t connect
  2. If you can’t get all three VMs to boot on the host, run nvidia-smi -e 0 to disable ECC memory, freeing more VRAM for the VMs
  3. If you wish to run 2 or 4 VMs instead, change the framebuffer in profile_override.toml to 11.5 GB or 5.5 GB respectively, in bytes:
    6gb = 5905580032
    8gb = 8053063680
    12gb = 12348030976
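The three values above follow one pattern: each is exactly half a gibibyte short of the nominal size, presumably leaving headroom on the card. A quick shell sketch for computing further values (fb_bytes is a made-up helper, not part of any tool):

```shell
# Sketch: each framebuffer value above is (N - 0.5) GiB expressed in
# bytes. fb_bytes is a hypothetical helper for computing new values
# with integer arithmetic only.
fb_bytes() {
    gib=$((1024 * 1024 * 1024))      # bytes per GiB
    echo $(( $1 * gib - gib / 2 ))   # N GiB minus 0.5 GiB
}

fb_bytes 6    # 5905580032
fb_bytes 8    # 8053063680
fb_bytes 12   # 12348030976
```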