Proxmox
iSCSI
Storage
VMware
Performance
When Proxmox and iSCSI Just Don't Click: What Ex-VMware Users Need to Know
October 19, 2025
8 min read
If you're coming from VMware, the first thing you'll probably try in your new Proxmox setup is to get your iSCSI storage talking to your hypervisor. It makes sense. iSCSI is fast, reliable, and something you already know. But then everything breaks—or at least slows to a crawl—and you're stuck asking, "Why is my Proxmox iSCSI storage performance so bad?"
You're not the only one in this boat.
## Why Proxmox iSCSI Storage Isn't Like VMware
Let's get one thing straight: Proxmox doesn't treat iSCSI storage the same way VMware does. VMware's VMFS file system handles shared storage over iSCSI like a pro—it was designed that way. In Proxmox, though, when you use LVM over iSCSI, you're flying much closer to the metal. That means:
- No per-VM IO isolation
- No smart queuing
- No "friendly" buffering layer like VMFS
The result? One IO-heavy VM can clog the entire storage pipe.
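For context, here's roughly what that stack looks like when you build it by hand. A minimal sketch, with placeholder names throughout (portal IP, IQN, device node, volume group); on a cluster you'd create the volume group on one node only:

```bash
# 1. Register the iSCSI target so Proxmox sees the raw LUN:
pvesm add iscsi san-lun --portal 192.168.10.50 \
    --target iqn.2005-10.org.freenas.ctl:proxmox

# 2. Put LVM straight on the LUN (/dev/sdX is whatever
#    device node the LUN appears as):
pvcreate /dev/sdX
vgcreate vmdata /dev/sdX

# 3. Expose the volume group as shared VM storage:
pvesm add lvm vm-lvm --vgname vmdata --shared 1 --content images
```

Notice what's missing: no filesystem anywhere in that stack. Every VM disk becomes a raw logical volume, and nothing between the guests and the LUN arbitrates bandwidth.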
## Symptoms of Poor Proxmox iSCSI Storage Performance
This isn't just a theoretical problem. Here's what people are seeing in real Proxmox environments using iSCSI storage:
- Backups (especially with PBS) tank all other VM performance
- Windows Updates spike IO wait across every VM
- A single VM's disk load slows everything else to a crawl
- Massive latency spikes during random write workloads
If any of that sounds familiar, your Proxmox iSCSI setup probably needs some love—or a rethink entirely.
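If you want to confirm it's storage IO and not CPU or network, watch the host's disk stats while a heavy VM runs. A quick check, assuming the sysstat package (not installed by default on Proxmox):

```bash
apt install sysstat   # provides iostat on Debian-based Proxmox
iostat -x 2           # watch %iowait and per-device await climb during the stall
```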
## The Hardware Setup Isn't Always the Problem
You might think the fix is just beefier gear. Not always. One user outlined this config:
- TrueNAS Core backend
- 4 mirrored SSD vdevs
- 10Gb SFP+ networking
- Jumbo frames at 9000 MTU
- LVM over iSCSI exported to Proxmox
It looks solid on paper—but IO performance was still a disaster.
The culprit? Not hardware. It was how Proxmox handles iSCSI storage, and how it differs from VMware.
## So… Can IOThread Fix It?
This gets tossed around a lot: "Just enable IOThread on your VM disk."
Unfortunately, that's rarely a silver bullet. IOThread can help with specific workloads—like high-throughput VMs—but it won't patch deeper architectural issues in your Proxmox iSCSI storage setup.
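For completeness, here's what enabling it looks like. A sketch only: the VM ID and volume name are placeholders, and IOThread only gets a dedicated thread per disk with the virtio-scsi-single controller:

```bash
qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 vm-lvm:vm-100-disk-0,iothread=1
```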
## Top Ways to Fix Proxmox iSCSI Storage Bottlenecks
Let's cut to the chase. If your iSCSI performance in Proxmox sucks, here's what people have done to fix it:
### 1. Switch to NFS Instead of iSCSI
This one came up over and over again. If you're using TrueNAS, NFS is often faster, simpler, and more reliable for Proxmox storage than iSCSI.
Why? Because NFS abstracts away many of the block-level headaches and gives Proxmox a file-level datastore it actually likes: VM disks live as qcow2 files, so snapshots work natively, and backups and VM migration become smoother.
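Attaching a TrueNAS NFS export is a one-liner on the Proxmox side. A sketch, with the server IP, export path, and storage ID as placeholders:

```bash
pvesm add nfs truenas-nfs --server 192.168.10.50 \
    --export /mnt/tank/proxmox --content images,backup
```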
### 2. Rebuild Your ZFS Pool with Mirrors
RAIDZ2 is great for resilience—but terrible for random write IOPS. If you're using SSDs, consider setting up stripes of mirrored vdevs (RAID10 equivalent). That gives you the read/write performance VMs actually need.
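On TrueNAS you'd build this in the UI, but the equivalent ZFS command makes the layout obvious. A sketch with placeholder FreeBSD device names; it destroys whatever is on those disks:

```bash
# Two mirror vdevs striped together; each extra mirror adds its IOPS to the pool.
zpool create tank \
    mirror ada0 ada1 \
    mirror ada2 ada3
```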
### 3. Add a SLOG Device
TrueNAS users often overlook this: if you're not using a fast SLOG (like an NVMe drive), your sync writes are probably dragging you down. Adding a SLOG can dramatically improve performance in ZFS when paired with iSCSI or NFS.
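Attaching one is a single command once the pool exists. A sketch with a placeholder NVMe device name; in production, mirror the SLOG if you can:

```bash
zpool add tank log nvd0
zpool status tank   # the device should now appear under "logs"
```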
### 4. Use Multipath for iSCSI (If Possible)
If you have more than one NIC, multipath iSCSI can increase bandwidth and add redundancy. But even here, setup is tricky and doesn't always yield VMware-level smoothness. Still, it's a must-have for serious iSCSI deployments.
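If you go this route, the Proxmox node needs multipath-tools (`apt install multipath-tools`) and a config file. The minimal starting point below is a sketch to adapt, not a tuned recommendation:

```
# /etc/multipath.conf
defaults {
    user_friendly_names yes
    find_multipaths     yes
}
```

After `systemctl restart multipathd`, running `multipath -ll` should list every path to the LUN as active.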
### 5. Install VirtIO Drivers on Windows (Properly)
If Windows won't see your VirtIO disks, you're not alone. But don't give up—check your driver version, ISO source, and install method. Without VirtIO, you're leaving performance on the table.
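The usual route is to attach the official virtio-win ISO as a second CD-ROM and load the storage driver during Windows setup. A sketch, assuming VM ID 100 and an ISO already uploaded to the local storage:

```bash
qm set 100 --ide2 local:iso/virtio-win.iso,media=cdrom
```

During install, choose "Load driver" and point it at the vioscsi (or viostor, for VirtIO block) folder on that CD.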
## Common Pitfalls with Proxmox iSCSI Storage
Let's quickly hit some other things that often go wrong:
- **PBS on iSCSI-backed spinning disks** — recipe for pain
- **Using DRAM-less SSDs** — huge performance hits under load
- **No local storage fallback** — all IO goes over network, even temp files
- **MTU mismatch** — 9000 MTU won't help if any device in the chain is still at 1500
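The MTU one is easy to test from the Proxmox host: 8972 bytes of payload plus 28 bytes of ICMP and IP headers is exactly 9000, so the ping below fails if any hop is still at 1500. The target IP is a placeholder:

```bash
ping -M do -s 8972 192.168.10.50   # -M do forbids fragmentation
```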
## Final Word: Don't Force Proxmox to Be VMware
Here's the hard truth: Proxmox iSCSI storage just doesn't behave like VMware's iSCSI+VMFS model. If you try to replicate your old setup exactly, you'll likely end up disappointed—or worse, stuck with a system that falls over every time you run a backup.
Instead, rethink your architecture. Proxmox shines when you play to its strengths:
- ZFS local storage with snapshots and replication (see the sketch after this list)
- NFS rather than iSCSI when centralized storage is needed
- Smart VM placement to reduce IO bottlenecks
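That first point is more concrete than it sounds: Proxmox ships a built-in replication scheduler (pvesr) for guests on ZFS-backed storage. A sketch, with the VM ID, job number, target node, and 15-minute schedule all as placeholder choices:

```bash
pvesr create-local-job 100-0 pve2 --schedule "*/15"
```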
Making the jump from VMware? Then let go of the VMFS mindset. Once you embrace how Proxmox handles storage—quirks and all—you'll realize it's not worse. It's just different.
## TL;DR – Key Takeaways for Proxmox iSCSI Storage
- Proxmox doesn't handle iSCSI like VMware; expect issues without tuning.
- LVM over iSCSI lacks isolation; one VM can throttle the whole cluster.
- NFS often outperforms iSCSI for Proxmox + TrueNAS setups.
- RAID10-style mirrored vdevs improve ZFS performance drastically.
- Use a SLOG and consider multipath networking if sticking with iSCSI.
Want Proxmox to run like a dream? Start by rethinking how you're using Proxmox iSCSI storage. It just might be time to let go of what worked in VMware.