Proxmox · VMware · SAN · Storage · Migration
Proxmox Clusters and SANs: The VMware Exit Problem Nobody Warned You About
January 10, 2026
9 min read
Leaving VMware feels like ripping off a Band-Aid. Painful, sure, but everyone tells you it's worth it. Licensing fatigue, cost blowups, contracts that suddenly feel hostile — the reasons are familiar. Proxmox shows up looking calm, open, and refreshingly honest. It promises clustering, HA, live migration, and a future where you're not negotiating with a sales rep every renewal cycle.
So you do what seems obvious.
You take the architecture you've trusted for a decade — a small cluster backed by a SAN — and you plug Proxmox into it. Same storage, new hypervisor. Easy, right?
That's where things start to get weird.
Because the part nobody warns you about is this: SAN-backed Proxmox does not behave like SAN-backed VMware, even when everything technically "works." The gap isn't performance. It's expectations.
And that gap catches a lot of teams flat-footed.
## The VMware Muscle Memory Problem
Most VMware environments trained admins to think in a very specific way. Shared block storage is the center of the universe. Snapshots are cheap. Thin provisioning is invisible. You overcommit storage with confidence and deal with consequences later — usually never.
VMFS quietly absorbs complexity. The SAN does its magic. The hypervisor stays out of the way.
When people jump to Proxmox, they often assume that same division of labor still applies. The SAN handles storage intelligence. The hypervisor just consumes it.
But Proxmox doesn't hide storage decisions from you. It makes you own them.
## Shared LVM: The First Surprise
Most SAN-based Proxmox clusters start with shared LVM over iSCSI or Fibre Channel. It's supported, documented, and stable. On paper, it checks the VMware replacement box.
And yes, it works. Live migration works. HA works. Multipath works.
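A typical setup of this kind is a few lines in `/etc/pve/storage.cfg`: an iSCSI portal definition plus a shared LVM pool on top of it. The storage IDs, portal address, IQN, and volume group name below are placeholders, not a recommendation:

```
# /etc/pve/storage.cfg (excerpt) -- illustrative shared LVM over iSCSI
iscsi: san-portal
        portal 10.0.0.50
        target iqn.2001-05.com.example:vol1
        content none

lvm: san-lvm
        vgname vg_san
        shared 1
        content images,rootdir
```

The `shared 1` flag is what makes live migration and HA possible: every node sees the same volume group, so Proxmox only has to hand the disk over, not copy it.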
But then the questions start.
Why are disks thick provisioned?
Why are snapshots missing or dangerous?
Why does deleting a VM not give space back?
This is where VMware muscle memory breaks down.
Shared LVM in Proxmox is intentionally conservative. Volumes want guaranteed space. Snapshots aren't lightweight metadata tricks; they're full-on copy-on-write structures that can balloon instantly. A 1TB virtual disk snapshot can mean 1TB of space pressure — even if nothing changes.
SAN-level thin provisioning doesn't magically fix this. The hypervisor doesn't see it. Proxmox plans storage as if it's real, reserved, and finite — because from its point of view, it is.
Admins expect flexibility. Proxmox gives predictability instead.
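The difference is easiest to see as arithmetic. The toy model below is not Proxmox code; it just contrasts how space gets budgeted on a thick LVM volume versus a file-based qcow2 disk:

```python
# Toy accounting model -- not Proxmox code -- contrasting space
# budgeting on thick LVM versus a file-based qcow2 disk.

def lvm_thick_usage_gb(disk_gb: int, snapshots: int) -> int:
    """Plain LVM reserves the full disk size up front; each snapshot
    needs its own copy-on-write area sized for worst-case churn."""
    return disk_gb + snapshots * disk_gb

def qcow2_usage_gb(written_gb: int, snapshot_delta_gb: int) -> int:
    """qcow2 grows with actual writes; a snapshot only adds the blocks
    that change after it is taken."""
    return written_gb + snapshot_delta_gb

# A 1 TB disk holding 50 GB of real data, with one snapshot:
print(lvm_thick_usage_gb(1000, 1))   # 2000 -- two full terabytes budgeted
print(qcow2_usage_gb(50, 5))         # 55 -- roughly what was written
```

Same virtual machine, same data, a 36x difference in what the storage layer has to set aside. That is the gap VMware muscle memory doesn't prepare you for.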
## "But the SAN Does Thin Provisioning"
This comes up a lot, and it's technically true.
Many modern SANs handle thin provisioning, deduplication, and compression beautifully. They reclaim blocks efficiently. They lie, politely, about capacity.
The problem is visibility.
With shared LVM, Proxmox doesn't understand how the SAN is cheating physics on your behalf. It still sees a big, fixed LUN. When you snapshot a disk, Proxmox prepares for worst-case growth. When you delete a VM, the SAN might not reclaim anything unless discard is configured perfectly.
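Getting reclamation to work end-to-end means enabling discard on the virtual disk itself, so TRIM commands from the guest actually reach the SAN. The VMID, storage name, and disk name below are placeholders:

```
# /etc/pve/qemu-server/100.conf (excerpt) -- VMID 100 is an example
scsihw: virtio-scsi-pci
scsi0: san-lvm:vm-100-disk-0,discard=on,ssd=1
```

Even then, the guest OS has to issue the trims (for example via a periodic `fstrim` job on Linux), and the SAN has to honor them. Miss any link in that chain and deleted data quietly keeps its blocks.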
That mismatch creates a false sense of safety.
Things look fine… until they're not.
And when they go wrong, they go wrong fast.
## NFS: The Option Everyone Resists (Then Uses Anyway)
At some point in almost every migration, someone says it out loud: "What if we just use NFS?"
This is usually followed by silence, mild horror, and a few jokes about performance.
But here's the uncomfortable truth: NFS is often the least surprising way to run shared storage on Proxmox.
File-based storage unlocks qcow2 disks. That means thin provisioning that Proxmox actually understands. Snapshots live inside the disk file instead of consuming entire logical volumes. Space usage becomes visible, predictable, and recoverable.
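Wiring that up is again a short `storage.cfg` entry; server address, export path, and storage ID here are placeholders. The qcow2 format is then chosen per disk when you create or move a volume onto this storage:

```
# /etc/pve/storage.cfg (excerpt) -- illustrative NFS-backed storage
nfs: nas01
        server 10.0.0.20
        export /srv/proxmox
        content images,backup
        options vers=4.2
```

Because the storage is file-based, Proxmox can see exactly how big each qcow2 file really is, which is precisely the visibility that shared LVM denies it.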
It's not as fast as raw block storage. It's not as elegant. It doesn't do multipath the way Fibre Channel does.
But it behaves.
And when you're migrating off VMware, behavior matters more than benchmarks.
## The Snapshot Reality Check
Snapshots are where expectations really collide with reality.
In VMware, snapshots are dangerous but deceptively easy. Everyone knows they're not backups. Everyone still uses them. They're quick, flexible, and usually fine.
In Proxmox with shared LVM, snapshots feel like a trap. They exist, but they don't feel safe. They don't scale well. And they can blow up your storage math in ways VMware never exposed.
That forces a cultural shift.
Teams start treating VMs like cattle again. Immutable images. External backups. Short-lived snapshots, if any. Recovery through restore, not rewind.
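That "recovery through restore" posture usually gets codified as cluster-wide backup defaults in `/etc/vzdump.conf`. The storage name below is an assumption; the keys are standard vzdump options:

```
# /etc/vzdump.conf (excerpt) -- backup defaults, storage name assumed
storage: backup-nfs
mode: snapshot
compress: zstd
```

With defaults like these plus a scheduled backup job, the snapshot stops being the safety net and becomes what it should be: a short-lived convenience during maintenance windows.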
That's not worse. It's just different.
And different is uncomfortable during a migration.
## "Why Not Just Use Ceph?"
Once SAN pain sets in, Ceph enters the chat.
Ceph is native. Integrated. Designed for Proxmox clusters from the ground up. It scales, it self-heals, and it makes shared storage a first-class citizen instead of an external dependency.
But Ceph isn't free — not in hardware, not in networking, and definitely not in operational complexity.
For small 2–3 node clusters, Ceph often feels like overkill. It wants RAM. It wants fast disks. It wants clean networks. And it rewards discipline, not shortcuts.
Some teams love it. Others bounce off hard.
Ceph isn't the answer to SAN disappointment. It's a different philosophy entirely.
## The Real Problem Isn't Storage
Here's the part nobody says out loud during planning meetings:
The real problem isn't SANs. It's assumptions.
VMware trained admins to stop thinking about storage mechanics. Proxmox brings those mechanics back to the surface. It doesn't protect you from bad math. It doesn't mask risk behind friendly defaults.
That feels like a downgrade until you realize what you're gaining.
Transparency. Control. Fewer hidden cliffs.
But that only helps if you're ready for it.
## What Successful Migrations Have in Common
Teams that come out of SAN-backed Proxmox migrations happy tend to share a few traits:
- They accept that not everything maps one-to-one from VMware.
- They design storage around Proxmox's strengths, not VMware nostalgia.
- They treat snapshots as tools, not safety nets.
- They plan capacity like it actually matters — because it does again.
Some land on NFS. Some stick with shared LVM and adjust expectations. Some go all-in on Ceph. A few mix local storage with replication and stop chasing centralized perfection altogether.
None of those choices are wrong.
What's wrong is assuming Proxmox will quietly behave like VMware if you squint hard enough.
## The Exit Is Worth It — Just Not Painless
Proxmox isn't broken. SANs aren't obsolete. And migrating away from VMware doesn't mean lowering standards.
It does mean relearning where the guardrails are.
The teams that struggle aren't the ones lacking technical skill. They're the ones who expected the storage layer to stay invisible.
Proxmox doesn't do invisible. It does honest.
And once you stop fighting that, the whole stack starts to make a lot more sense.