Proxmox · Clusters · Quorum · High Availability
March 25, 2026
5 min read
# “Your Two-Node Cluster Is a Trap”: The Brutal Truth About Proxmox Quorum No One Explains
## The Excitement Phase: “I Built a Cluster—Now What?”
There’s a moment when everything clicks. You’ve got two machines, decent specs, maybe a NUC paired with a slightly beefier box, and suddenly you’ve built a cluster. It feels like leveling up. Not just running VMs anymore—you’re orchestrating infrastructure.
That excitement is real. You start thinking about best practices, redundancy, maybe even high availability. You already know the rule: three nodes is ideal. But two feels close enough, right?
That’s where the illusion begins.
Because what looks like a cluster… isn’t really behaving like one yet.
## The First Warning: “You Need a Third Node. Seriously.”
The most immediate response cuts through everything: you need a third node to avoid split brain.
It sounds like overkill at first. You’ve got two machines—why isn’t that enough? But clustering isn’t about having multiple machines. It’s about consensus.
In Proxmox, quorum decides whether the cluster is allowed to function, and quorum requires a strict majority of votes. With two nodes, one vote out of two is never a majority, and the survivor can’t tell a dead peer from a merely unreachable one. There’s no tiebreaker. There’s just… disagreement.
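The arithmetic behind that is worth seeing once. A minimal sketch of the majority rule corosync applies, in plain shell (not Proxmox code, just the math):

```shell
# Quorum requires a strict majority of votes: floor(votes / 2) + 1.
majority() { echo $(( $1 / 2 + 1 )); }

majority 2   # -> 2: a two-node cluster needs BOTH votes; lose one node, lose quorum
majority 3   # -> 2: a three-node cluster survives one failure
majority 5   # -> 3: five nodes survive two failures
```

Notice that two nodes tolerate exactly zero failures, which is worse than a single standalone host: you bought a second machine and gained a second way to go down.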
One user reinforces it even harder: a third vote—often via a QDevice—is “basically required.”
Not optional. Not nice-to-have. Required.
That’s the moment the excitement starts to shift into caution.
## The Lockout Scenario: When One Node Goes Down and Everything Stops
Here’s the part most people don’t expect.
If you’re running a two-node cluster and one node goes offline, even for something harmless like a reboot, the surviving node can lock you out of normal operation: the cluster filesystem at /etc/pve turns read-only, VMs won’t start, and configuration changes are refused. All because a single vote out of two can’t establish quorum.
Think about that for a second.
Your “cluster” becomes unusable… because one machine restarted.
That’s not a bug. That’s the system doing exactly what it’s designed to do: prevent split brain, where two nodes act independently and corrupt shared state.
But in a homelab, it feels brutal. You’re not thinking about distributed consensus—you’re just trying to keep things running.
And suddenly, the system is working against you.
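If you do end up stranded on a lone node, Proxmox’s standard `pvecm` tool can show the quorum state and, in an emergency, override it. The override is a documented escape hatch for regaining access, not a fix; it is unsafe if the other node could come back and diverge:

```shell
# Inspect quorum state on the surviving node.
pvecm status

# Emergency override: lower the expected vote count so the lone node
# becomes quorate again. Temporary by design -- use it to regain
# access and shut things down cleanly, never as a permanent setting.
pvecm expected 1
```
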
## The Workaround: QDevice and the “Fake Third Node”
This is where the clever workaround comes in: a QDevice.
Instead of adding a full third server, you can add a lightweight vote—something that participates in quorum without running workloads. It’s not a full node, but it tips the balance.
That’s why someone casually mentions forgetting to “resetup my qdevice” like it’s a critical piece of the setup.
Because it is.
With a QDevice, your two-node cluster becomes functionally stable. Without it, you’re always one reboot away from confusion.
It’s the smallest fix with the biggest impact—and somehow, it’s the one most people learn about too late.
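The setup itself is short. Roughly, following the Proxmox documentation (the IP address is a placeholder for whatever always-on box holds the third vote, a Raspberry Pi works fine):

```shell
# On the external vote-holder (does NOT need to run Proxmox):
apt install corosync-qnetd

# On every cluster node:
apt install corosync-qdevice

# From any one cluster node, register the external vote
# (requires root SSH access to the qnetd host):
pvecm qdevice setup 192.168.1.10
```

Afterward, `pvecm status` should show three expected votes, and either node can reboot without taking the cluster’s brain with it.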
## The Magic Moment: “Wait… I Can Move Running VMs?”
Despite all the warnings, there’s a reason people stick with clustering.
Because when it works, it feels incredible.
Live migration is the feature that flips everything. Moving a running VM from one node to another without downtime feels like cheating. One user described it as “truly magical” once you decouple workloads from hardware.
Another just says it straight: “Your mind will be blown away.”
And they’re not exaggerating.
This is the moment where your homelab stops feeling like a collection of machines and starts feeling like a platform.
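If you want to try it yourself, live migration is a single command (the VM ID 100 and node name pve2 are placeholders):

```shell
# Move a running VM to another node without shutting it down.
# With shared storage only RAM and device state cross the wire;
# with local disks, add --with-local-disks to stream the storage too.
qm migrate 100 pve2 --online
```
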
## The Reality Check: “Do You Actually Need a Cluster?”
Then comes the uncomfortable question.
Do you even need this?
One perspective pulls things back to earth: for most people, clusters in a homelab are about learning, not necessity. Real high availability setups—Ceph, HA, zero-downtime upgrades—are powerful, but they also add complexity fast.
Another user hints at an alternative: managing multiple standalone hosts without clustering, especially if some nodes aren’t always online.
That’s the divide:
- Clusters for experience and advanced features
- Standalone setups for simplicity and reliability
And depending on your goals, one might make way more sense than the other.
## The Hidden Constraint: Your Weakest Node Defines Everything
There’s another subtle detail that catches people off guard: your cluster is only as flexible as its weakest hardware.
One comment points out that the oldest CPU in your cluster determines what can migrate where.
That means your shiny new node doesn’t unlock new capabilities if your older node can’t support them. Migration compatibility becomes a lowest-common-denominator problem.
It’s not a dealbreaker. But it’s another reminder that clusters aren’t just about adding machines—they’re about aligning them.
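One practical mitigation: instead of the `host` CPU type, which exposes the local processor’s full feature set and blocks migration to anything older, give VMs a baseline model every node can provide (the VM ID is a placeholder):

```shell
# Pin the VM to a CPU model all nodes support, so migration works
# in every direction. x86-64-v2-AES is a reasonable modern baseline
# on Proxmox 8; kvm64 is the lowest common denominator for old gear.
qm set 100 --cpu x86-64-v2-AES
```

You trade a few CPU features for the freedom to move workloads anywhere, which is usually the point of the cluster in the first place.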
## The Bigger Truth: Clusters Solve Problems You Might Not Have Yet
What makes this whole situation so interesting is how quickly people jump into clustering—and how slowly they realize what it’s actually for.
Clusters shine when you need:
- High availability
- Seamless failover
- Zero-downtime maintenance
But if you’re just running a few services, those benefits might not outweigh the added complexity.
And that’s the quiet truth running through all of this: a two-node cluster feels like progress, but without quorum, it’s fragile. Add a third vote, and it becomes stable. Add shared storage and HA, and it becomes powerful.
But until then?
It’s a learning experience disguised as infrastructure.
And maybe that’s exactly what it’s supposed to be.