March 22, 2026
5 min read
# “You’re Mounting It Wrong”: The NFS Mistake That Keeps Breaking Homelabs
## The Confusion: “Why Can’t My VM Just See My Storage?”
It starts simple enough. You’ve got a Proxmox VM with limited storage—maybe 100GB—and a perfectly good NAS sitting there with terabytes of free space. The goal feels obvious: connect the two and move on.
But then you hit the wall. Where’s the path? Do you partition more space? Resize disks every time? Suddenly, something that should feel plug-and-play turns into a maze of mounting, permissions, and conflicting advice.
And that’s the first mistake: assuming NFS works like local storage. It doesn’t. It’s not about resizing disks—it’s about exposing storage correctly.
## The First Divide: VM vs LXC (And Why It Matters More Than You Think)
Almost immediately, the conversation splits into two camps: are you using a VM or an LXC?
Because the answer changes everything.
For VMs, the advice is refreshingly boring: just mount the NFS share inside the VM using `/etc/fstab` and call it a day. One commenter put it plainly: “VM you just do through fstab… then mount -a.”
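That fstab approach might look something like this inside the VM (the server IP, export path, and mount point are placeholders, not from the original discussion):

```shell
# /etc/fstab inside the VM — assumes the NAS exports /export/media
# and the mount point already exists (mkdir -p /mnt/media first)
192.168.1.50:/export/media  /mnt/media  nfs  defaults,_netdev,noatime  0  0
```

Then a `mount -a` picks it up without a reboot, and the share survives reboots from there on.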
But LXCs? That’s where things get weird.
Now you’re dealing with container boundaries, mount points, and permission mapping. It’s no longer just “connect to storage”—it’s “how do I safely pass storage through layers of abstraction?”
And that’s where most setups start to fall apart.
## The Clean Approach: Mount on the Host, Then Pass It Down
One of the most consistent recommendations cuts through the noise: don’t mount NFS inside the container. Mount it on the Proxmox host, then pass it into the LXC.
It sounds like extra work, but it simplifies everything.
One user laid it out step by step: mount the NAS share to something like `/mnt/proxmox_nfs` on the host, then add a mount point in the LXC config so the container can access it.
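As a rough sketch of those two steps (IP, paths, and container ID are example values):

```shell
# On the Proxmox host: mount the NAS share
mkdir -p /mnt/proxmox_nfs
mount -t nfs 192.168.1.50:/export/media /mnt/proxmox_nfs

# Then bind that host path into the LXC (CTID 101 here is an example):
pct set 101 -mp0 /mnt/proxmox_nfs,mp=/mnt/media
```

The `pct set` line writes an `mp0:` entry into `/etc/pve/lxc/101.conf`; you can also add that line to the config file by hand.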
This approach does two things:
- Keeps networking and storage logic centralized
- Avoids weird container permission issues
It’s not flashy. But it works consistently.
And in homelabs, consistency usually beats cleverness.
## The Permission Nightmare: “Why Can Plex See It but Nothing Else Can?”
This is where things really break.
You mount the share. It shows up. Plex can read it. But your other apps? Nothing. No access. No errors. Just silence.
That’s because of user permissions inside containers. One frustrated comment captured it perfectly: one LXC can access the NAS, but others “use a root user that can’t access” it—even with bind mounts.
This is the part no one tells you upfront: mounting is easy. Permissions are the real problem.
You’re dealing with UID/GID mismatches between:
- The NAS
- The Proxmox host
- The container users
If those don’t align, access breaks—even if everything looks correct.
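Concretely: unprivileged LXCs shift container IDs by 100000 on the host by default, so container UID 1000 appears as 101000 to the host and to the NAS. A quick way to see the mismatch (paths here are examples):

```shell
# What the host (and therefore the NAS) sees as the file's owner:
stat -c '%u:%g' /mnt/proxmox_nfs/somefile

# If the container app runs as UID 1000, the host-side owner must be
# 101000 under the default shift — one possible fix, adjust IDs to yours:
chown -R 101000:101000 /mnt/proxmox_nfs/media
```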
## The Security Debate: Privileged vs Unprivileged Containers
Then comes the argument that never really gets resolved.
Some people say: just use privileged containers. It’s easier. Less friction. Everything works.
Others push back hard: “Privileged LXC is a terrible idea.”
And they’re not wrong.
Privileged containers blur the isolation boundary. They make mounting easier, but at the cost of security. Unprivileged containers, on the other hand, are safer—but require more setup, especially around permissions.
One user defends unprivileged LXCs strongly, arguing they’re “safe af” if configured correctly and avoid the headaches of VMs for things like GPU sharing.
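The "configured correctly" part usually means UID mapping. A common pattern (not from the original thread, just a widely used recipe) passes one container UID through unchanged so it matches the NAS, while keeping everything else shifted:

```shell
# /etc/pve/lxc/<CTID>.conf — map container UID/GID 1000 straight through,
# leave the rest at the usual 100000 offset:
lxc.idmap: u 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 0 100000 1000
lxc.idmap: g 1000 1000 1
lxc.idmap: g 1001 101001 64535
```

The host also needs a matching `root:1000:1` entry in `/etc/subuid` and `/etc/subgid`, or the container will refuse to start.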
So now you’re choosing again: convenience or isolation.
## The Architecture Split: One Container vs Many
Just when you think you’ve figured it out, another debate appears.
Do you run everything in one container, or split services across multiple LXCs?
Some argue for separation: one container per app. Cleaner, safer, easier to manage long-term. Others prefer grouping services together to simplify networking and VPN setups.
One take sums it up nicely: separate containers are easier overall—but combining them can make sense depending on your workflow.
There’s no single right answer. Just tradeoffs.
## The Reality Check: NFS Isn’t the Hard Part
Here’s the twist: NFS itself is actually simple.
Define a share. Mount it. Done.
As one commenter pointed out, it’s basically just:
`IP:/path/to/share → /mnt/path`
The complexity comes from everything around it:
- Where you mount it (host vs VM vs LXC)
- How you expose it (bind mounts, config files)
- Who can access it (permissions, user mapping)
That’s why it feels harder than it should. You’re not just mounting storage—you’re integrating systems.
## The Takeaway: Stop Fighting the Wrong Layer
If there’s one lesson here, it’s this: most people struggle with NFS because they’re solving the wrong problem.
They try to:
- Resize VM disks instead of using network storage
- Mount directly inside containers instead of using the host
- Ignore permissions until everything breaks
The smoother path is almost always:
1. Mount NFS on the Proxmox host
2. Pass it into LXCs with mount points
3. Fix permissions properly
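Put together, the three steps are only a few lines on the host (all names and IDs below are illustrative):

```shell
# 1. Mount on the host, persistently — add to the host's /etc/fstab:
#    192.168.1.50:/export/media  /mnt/proxmox_nfs  nfs  defaults,_netdev  0  0
mount -a

# 2. Pass it into the LXC (container 101 as an example):
pct set 101 -mp0 /mnt/proxmox_nfs,mp=/mnt/media

# 3. Align ownership with the container's mapped UID
#    (container UID 1000 → host UID 101000 under the default shift):
chown -R 101000:101000 /mnt/proxmox_nfs
```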
It’s not the quickest route. But it’s the one that doesn’t come back to bite you later.
And in a homelab, that’s the difference between something that works today—and something that still works next month.