Tags: Proxmox, Windows 11, Virtualization, KVM, Performance
Windows 11 on Proxmox Is Broken for Some Power Users — And the Community Can't Agree Why
January 10, 2026
9 min read
There's a special kind of frustration that only shows up when you've done everything "right."
You buy a flagship CPU. You pair it with fast DDR5, bleeding-edge NVMe storage, a modern GPU, and a clean Proxmox install. On paper, this thing should scream. Linux VMs fly. Benchmarks on the host look perfect. Temperatures are fine. Boost clocks are there when you need them.
Then you boot into a fresh Windows 11 VM, open Task Manager, and everything feels… wrong.
Apps crawl. Simple UI actions lag. Cinebench scores look like they came from a laptop released a decade ago. Windows insists your many-core monster CPU is running at a suspiciously neat 2.0GHz — and no amount of tuning seems to move that number.
For a growing number of power users, this isn't a misconfiguration. It's a pattern. And right now, nobody fully agrees on what's actually broken.
## The Setup That Should Be Impossible to Mess Up
The affected systems all look roughly the same: modern Intel CPUs with hybrid architectures, Proxmox as the hypervisor, Windows 11 as the guest. Often with GPU passthrough. Often built as high-end workstations rather than homelab toys.
On the host side, everything behaves exactly as expected. CPU boost clocks hit 5GHz and beyond. Disk throughput is absurdly fast. GPU passthrough works. Network performance is fine. Linux guests feel snappy.
Inside Windows 11, though, the experience collapses.
Multi-threaded benchmarks land around 3,000 points when they should be closer to 30,000. Heavy apps take minutes to launch. File Explorer hesitates. Task Manager shows almost no CPU usage while the system feels like it's wading through mud.
And yes, Windows always reports a hypervisor is present. That part isn't surprising. What is surprising is how consistently bad the performance stays no matter what people change.
## The 2.0GHz Red Herring
One of the first things everyone notices is the clock speed.
Windows reports the CPU maxing out at roughly 1997MHz. That looks damning. But it's also misleading. Under KVM, Windows doesn't have reliable access to real-time frequency scaling, so it often shows the TSC base clock instead of actual boost behavior.
Normally, that doesn't matter. Even if the number looks wrong, performance should still be there.
Here, it isn't.
The low Cinebench scores and real-world sluggishness confirm this isn't just cosmetic. Something is actively preventing Windows from using the CPU the way it should.
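There's an easy way to see the disconnect for yourself. Run something CPU-heavy inside the guest, then watch the physical clocks from the Proxmox host. If the cores the VM is pinned to boost normally while Windows still reports 2.0GHz, the number is cosmetic; if they don't, something else is holding them back. A rough host-side sketch (turbostat is part of the kernel tools and may need installing separately):

```bash
# Run on the Proxmox host while something CPU-heavy runs inside the Windows guest.

# Per-core clocks, refreshed every second, highest first:
watch -n1 "grep 'cpu MHz' /proc/cpuinfo | sort -t: -k2 -rn | head -n 8"

# If available, turbostat shows per-core utilization and the real boost clock:
turbostat --quiet --interval 1 --show Core,Busy%,Bzy_MHz
```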
## E-Cores, P-Cores, and the Pinning Rabbit Hole
The first major theory centers on Intel's hybrid core layout.
Modern i9 chips mix performance cores and efficiency cores, and Linux doesn't always enumerate them in a clean, intuitive order. Pin the VM to the wrong logical CPUs and you might end up running a heavyweight Windows workload almost entirely on E-cores.
That explanation makes sense. It's happened before. A plain lscpu listing doesn't make the P-core/E-core split obvious, while hwloc's lstopo lays the real topology out clearly. Several users were convinced this had to be the issue.
Except many of the affected systems did their homework.
They verified core mappings with lstopo. They pinned only confirmed P-cores. They matched sockets, cores, and threads correctly. Windows reported the expected topology. And still, performance didn't budge.
If this were just bad pinning, fixing the affinity would've solved it. For a lot of people, it didn't.
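For anyone retracing those steps, the verification itself isn't the hard part. On recent kernels, hybrid Intel CPUs expose which logical CPUs are P-cores and which are E-cores directly in sysfs, and newer Proxmox releases can pin a VM with a single affinity option. A sketch, assuming VMID 100 and that the P-core threads turn out to be CPUs 0-15 (your numbering will differ):

```bash
# Recent kernels expose the hybrid split directly in sysfs:
cat /sys/devices/cpu_core/cpus   # P-core threads, e.g. 0-15
cat /sys/devices/cpu_atom/cpus   # E-cores, e.g. 16-31

# Cross-check with lscpu: P-cores report a higher MAXMHZ and come in SMT pairs:
lscpu --extended=CPU,CORE,MAXMHZ

# Pin the VM to the confirmed P-core threads (VMID 100 is an example;
# the affinity option needs a reasonably recent Proxmox release):
qm set 100 --affinity 0-15
qm set 100 --cores 16
```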
## Hyper-V Enlightenments: Optimization or Trap?
Another camp points to Hyper-V enlightenments.
Windows expects certain paravirtualization hints when it's running under a hypervisor. These aren't about detection — they're performance shortcuts. They reduce VM exits, improve interrupt handling, and generally keep Windows from doing dumb, expensive things.
In desperation, many users tried disabling all Hyper-V flags, hiding KVM, and stripping the VM down until it looked as close to bare metal as possible, hoping to trick Windows or GPU drivers.
What they may have done instead is kneecap performance.
Several experienced Proxmox users argue that removing those enlightenments, especially combined with kvm=off, effectively forces Windows into a worst-case compatibility path. You still have a VM, but without the optimizations that make it usable.
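To make that concrete, here's roughly the difference between the -cpu line Proxmox normally generates for a win11 guest and the stripped-down variant people ended up with. The exact flag set varies by Proxmox and QEMU version, so treat this as an illustration rather than a recipe:

```bash
# Roughly what Proxmox generates for ostype: win11 (exact flags vary by version):
#   -cpu host,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_vpindex,hv_runtime,
#        hv_synic,hv_stimer,hv_time,hv_frequencies,hv_reenlightenment,
#        hv_tlbflush,hv_ipi

# Roughly what many people ended up with after "hiding" the hypervisor:
#   -cpu host,kvm=off,-hypervisor

# Either way, you can inspect a VM's real command line without starting it:
qm showcmd 100 --pretty | grep -- '-cpu'
```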
That explanation fits the symptoms. Except some users restored all recommended Hyper-V flags, removed kvm=off, and still saw no improvement.
At that point, the blame starts drifting away from configuration.
## The Windows 11 24H2 Suspicion
This is where things get uncomfortable.
Multiple reports describe the same pattern: Linux VMs behave fine. Windows 10 behaves fine. Windows 11 23H2 behaves fine.
Windows 11 24H2 does not.
Fresh installs of 24H2 feel broken out of the gate. Rolling back to older ISOs suddenly restores normal performance on the same Proxmox host, with the same VM configuration.
That strongly suggests a regression — either in Windows itself or in how QEMU presents certain CPU features to newer Windows builds.
There's precedent for this kind of thing. Microsoft has been steadily tightening virtualization security, adding layers like VBS (virtualization-based security) and HVCI (hypervisor-protected code integrity), alongside deeper scheduler changes. Most of the time, hypervisors adapt quietly. Sometimes, they don't.
Right now, it looks like this might be one of those times.
## "It's Proxmox." "No, It's QEMU." "No, It's Windows."
Ask ten people what's broken and you'll get three confident answers.
Some argue Proxmox hasn't caught up with recent Windows changes yet. Others insist Proxmox is just a wrapper and the issue lives squarely in QEMU. A third group points the finger at Microsoft pushing assumptions that don't hold outside Hyper-V.
What makes this messier is that each explanation sounds plausible — and none of them fully explain why identical configurations behave so differently across Windows builds.
The lack of a clear smoking gun is why this issue feels so maddening. There's no kernel panic. No obvious error. No single toggle that fixes it for everyone.
Just slow Windows, low CPU usage, and a lot of wasted weekends.
## The "CPU Type" Debate That Won't Die
One workaround keeps coming up: don't use cpu: host.
Some users claim that switching to a generic x86-64-v2 or x86-64-v3 CPU model, and reinstalling Windows from scratch afterward, magically fixes everything. The theory is that exposing too many modern CPU features causes Windows to enable behaviors that perform badly under KVM.
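For anyone who wants to test that theory, the config change itself is a one-liner; the painful part is the reinstall that reportedly has to follow it. A sketch, using VMID 100 as an example (recent Proxmox releases ship the generic x86-64-v2 and x86-64-v3 models built in):

```bash
# Typical starting point in /etc/pve/qemu-server/100.conf:
#   cpu: host

# Switch to a generic model instead of passing through the host CPU:
qm set 100 --cpu x86-64-v3        # or x86-64-v2-AES on older CPUs

# Per the reports, a fresh Windows 11 install inside the VM is then needed;
# simply rebooting the existing guest reportedly isn't enough.
```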
The problem? Plenty of people tried exactly that and saw no change at all.
It might work for some setups. It clearly doesn't for others. Which makes it feel less like a solution and more like a dice roll.
## Why This Hurts More Than a Typical Bug
This isn't a niche lab problem. These are workstation-class machines being used for CAD, rendering, development, and creative work. The whole point of Proxmox here is consolidation — one powerful box doing everything.
When Windows inside that box suddenly performs like it's running on a ten-year-old CPU, the entire value proposition falls apart.
And because Linux guests are fine, it's hard to justify tearing down the host just to make Windows happy.
## Where Things Stand Right Now
As of today, there's no universally accepted fix.
The most reliable mitigation seems to be avoiding Windows 11 24H2 entirely, sticking to older builds, or falling back to Windows 10. That's not great. It's not future-proof. And it doesn't help people who already reinstalled multiple times chasing ghosts.
What's missing is coordination. A confirmed bug report. A reproducible test case that Proxmox, QEMU, or Microsoft can't ignore.
Until that happens, this lives in the worst possible space: real, painful, and just ambiguous enough that everyone thinks it's someone else's fault.
## The Quiet Lesson Here
If you're running Windows 11 on Proxmox today and everything feels fine, that's great. Truly.
But if you're planning a new high-end build, this is one of those moments where being an early adopter quietly backfires. Hybrid CPUs, fast-moving Windows releases, and complex virtualization stacks don't always play nicely together — even when each piece works perfectly on its own.
Right now, "Windows 11 on Proxmox" isn't broken for everyone.
But for some power users, it absolutely is. And until the community — or the vendors — pin down why, the safest workaround might be the least satisfying one of all: don't upgrade.