Infrastructure
“You Can’t Have It Both Ways”: The Hard Truth About Sharing a Single GPU Between VMs and LXC in Proxmox
February 26, 2026
8 min read
It always starts with momentum.
You finally get GPU passthrough working for a VM in Proxmox 9.1.1. The PCIe device shows up cleanly. The VM boots. Drivers install. CUDA works. You feel unstoppable.
Then the next thought hits: can I do this for LXC containers too?
That’s exactly the crossroads here. GPU passthrough is working perfectly for VMs, but the moment you ask whether the same GPU can also be used by LXC containers, the answer gets uncomfortable fast.
Because what looks like a small extension is actually a fundamental architectural conflict.
## Passthrough to a VM Means Letting Go
One commenter laid it out clearly: when you pass a PCIe device through to a VM, the Proxmox host no longer interacts with it.
That’s not just a technical detail. That’s the entire point of passthrough.
The PCIe communication is handed directly to the VM. The host doesn’t load drivers. It doesn’t manage the device. It steps aside.
And once that happens, nothing else can use it. Not the host. Not an LXC. Not another VM.
The GPU becomes exclusive property of that one guest.
So if your plan is “keep GPU passthrough to VM and also expose it to LXC containers,” you’re fighting the definition of passthrough itself.
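Concretely, passthrough means the GPU is claimed by vfio-pci before any host driver can touch it. A minimal sketch, where the PCI IDs, device address, and VM ID are placeholders you would replace with your own values from `lspci -nn`:

```
# /etc/modprobe.d/vfio.conf -- claim the GPU (and its audio function) for
# vfio-pci at boot. The IDs below are examples; substitute your own.
options vfio-pci ids=10de:2684,10de:22ba
softdep nvidia pre: vfio-pci

# /etc/pve/qemu-server/100.conf -- hand the whole PCIe device to one VM:
hostpci0: 0000:01:00,pcie=1
```

Once that binding is in place, `lspci -k` shows `Kernel driver in use: vfio-pci` for the card, and the host’s own NVIDIA stack never attaches.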
## LXC “Passthrough” Isn’t the Same Thing
Here’s where terminology trips people up.
Another user explained that passing a PCIe device to an LXC is kind of a misnomer. You’re not truly handing over the hardware bus like you do with a VM.
With LXC, the host still manages the GPU. The host loads kernel modules. Then you expose device nodes into the container so it can use the GPU’s capabilities.
That distinction matters.
VM passthrough = direct hardware control by the VM.
LXC GPU access = host-controlled device shared into container.
These are fundamentally different models.
And they don’t mix.
If the GPU is bound to vfio-pci for a VM, the host can’t manage it. If the host can’t manage it, it can’t expose it to LXC.
You have to choose which model you’re using.
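In the host-managed model, the sharing happens in the container config. A sketch of the classic approach, assuming container ID 101 as a placeholder; the device majors vary by driver version, so check `ls -l /dev/nvidia*` on your host first:

```
# /etc/pve/lxc/101.conf -- host keeps the driver; container gets device nodes.
# 195 is the usual major for /dev/nvidia0 and /dev/nvidiactl; nvidia-uvm's
# major is assigned dynamically, so verify it on your own host.
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 510:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
```

Note the precondition baked into every line: the host must have a working driver and live `/dev/nvidia*` nodes to bind from.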
## “Can I Share It Across Multiple LXCs?”
Now the conversation gets spicier.
One commenter said yes, it’s possible to pass the GPU to an LXC — but strongly advised against trying to use passthrough on multiple LXCs for the same GPU.
The reasoning is simple: passthrough is designed around one machine owning the entire device bus.
Trying to “share” it at that level leads to instability. Errors. Weird behavior. Time sinks.
If you want proper splitting, you need vGPU profiles — actual fractional GPU allocation at the driver level.
That’s a whole different beast.
Without vGPU, you’re basically asking multiple workloads to grab the same physical steering wheel at once.
It might move. It might crash.
It’s rarely elegant.
## The vGPU Temptation
Another comment hinted at the difference between passthrough and vGPU. With vGPU, the host facilitates communication rather than completely stepping away from the device bus.
That’s the clean way to carve a GPU into slices.
But now you’re in licensing territory. Driver constraints. Compatibility questions. NVIDIA policies. Suddenly what started as “can I share my GPU?” turns into “how deep into enterprise virtualization do I want to go?”
For home lab AI workloads across multiple LXCs, that complexity can spiral quickly.
## Kernel Mismatch and Driver Frustration
Then there’s the Proxmox 9.1.1 wrinkle.
The original poster ran into NVIDIA driver packages that didn’t match the newest kernel.
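A quick way to see the mismatch on the host (the second and third commands assume the NVIDIA driver and dkms are installed; they simply print nothing useful otherwise):

```
uname -r                          # the kernel every LXC shares with the host
cat /proc/driver/nvidia/version   # the loaded NVIDIA kernel module, if any
dkms status                       # whether packaged drivers built for this kernel
```

If the module version reported here lags the running kernel, packaged drivers won’t load and you’re in `.run`-installer territory.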
That’s a common Proxmox pain point. LXCs use the host kernel. If your host kernel is ahead of packaged drivers, you’re stuck compiling or using the NVIDIA .run installer.
And that’s exactly what others described doing:
- Install NVIDIA drivers on the host via the .run file.
- Let it disable nouveau.
- Reboot.
- In the LXC, install only user-space utilities with `--no-kernel-module`.
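The sequence above, sketched out. The installer filename is an example, not a specific version; the important detail is that the host and container use the same driver release:

```
# On the Proxmox host (downloads the kernel module too):
chmod +x NVIDIA-Linux-x86_64-550.142.run
./NVIDIA-Linux-x86_64-550.142.run      # offers to blacklist nouveau; reboot after

# Inside the LXC -- user-space components only, no kernel module:
./NVIDIA-Linux-x86_64-550.142.run --no-kernel-module
```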
It worked for some, including unprivileged LXCs.
But even that came with caveats. One user mentioned reboot timing issues where the LXC tried to initialize drivers before the host had finished loading them. The fix? Delay container startup by 90 seconds.
That’s the kind of workaround that works… until you forget why you set it up that way.
## Privileged vs Unprivileged LXC
There was also the question: does the LXC need to be privileged?
One response clarified that LXCs run on the host kernel. Another example showed successful GPU usage in an unprivileged LXC.
So no, you don’t necessarily need a privileged container. But you do need correct device mappings, cgroup permissions, and proper driver layering.
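Recent Proxmox releases also support `dev` entries in the container config, which take care of the uid/gid shift for unprivileged containers. A sketch, with the container ID again a placeholder:

```
# /etc/pve/lxc/101.conf -- device passthrough entries; Proxmox maps
# node ownership into the unprivileged container's shifted ID range.
dev0: /dev/nvidia0
dev1: /dev/nvidiactl
dev2: /dev/nvidia-uvm
```

If your Proxmox version predates these entries, the classic `lxc.cgroup2` and `lxc.mount.entry` lines do the same job with more manual bookkeeping.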
Which brings us back to the main tension.
## The Core Conflict: VM + LXC at the Same Time
If your GPU is passed through to a VM via vfio-pci, the host cannot use it.
If the host cannot use it, it cannot expose it to LXC.
That’s the hard boundary.
So your real options look like this:
1. **GPU passthrough to a single VM.**
Maximum isolation. Zero host access. No LXC usage.
2. **Host-managed GPU shared to multiple LXCs.**
No VM passthrough. Containers consume GPU via exposed device nodes.
3. **vGPU configuration.**
Advanced setup. Fractional GPU allocation. Licensing and driver complexity.
What you can’t do cleanly is bind the GPU to a VM and expect it to simultaneously serve multiple LXCs.
That’s not how PCIe passthrough works.
## The Licensing Anxiety
There was even concern about NVIDIA “charging” if multiple containers use the GPU.
That fear often comes from confusion between physical GPU usage and virtualized enterprise features. If the host is managing the GPU and exposing it to containers, it’s still one physical GPU on one machine.
Licensing complications usually come into play when you enter vGPU or enterprise virtualization features.
But technically speaking, the GPU doesn’t care how many containers access it — as long as the host manages it properly.
The constraint is architectural, not moral.
## So What Should You Do?
If you need multiple AI-focused LXCs using the GPU, unbind it from vfio-pci and let the host manage it.
Install drivers on the host. Expose `/dev/nvidia*` into containers. Handle kernel rebuilds when necessary. Accept the occasional inconvenience of driver maintenance.
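Reversing an existing passthrough setup is mostly a matter of undoing the vfio binding. A sketch of the usual steps:

```
# Remove (or comment out) the vfio-pci ids= line and any nvidia blacklist
# in /etc/modprobe.d/, then rebuild the initramfs and reboot:
update-initramfs -u -k all
reboot

# Afterwards the host driver should own the card again:
lspci -nnk | grep -iA3 nvidia     # expect "Kernel driver in use: nvidia"
nvidia-smi                        # host-side sanity check
```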
If you need one high-performance VM with full hardware ownership, stick with passthrough — and accept that the GPU belongs to that VM alone.
Trying to do both at the same time is where frustration lives.
The architecture doesn’t bend just because the hardware is powerful.
## The Bottom Line
Yes, GPU access in LXC is possible.
Yes, passthrough to a VM works.
No, you can’t have exclusive VM passthrough and host-managed LXC sharing simultaneously.
You’re choosing who owns the device bus.
And once you understand that, the question stops being “is it possible?”
It becomes: which ownership model fits your workload best?
Because in Proxmox, the GPU can be flexible.
It just can’t be in two places at once.