Tags: Veeam, Performance, Memory Usage, Backup Proxies


    April 8, 2026
    6 min read
# “128GB of RAM… and It Still Eats Everything” — When Backup Software Starts Fighting Your Infrastructure

## When “High Usage” Stops Feeling Normal

At first, it doesn’t seem like a crisis. A backup service using a lot of RAM? That’s expected. You’ve got a beefy server—128GB—and a workload that isn’t exactly tiny. So when the memory starts creeping up, you shrug it off. That’s what it’s there for.

But then it keeps going. And going. Until one service—`Veeam.Archiver.Proxy`—is basically consuming everything it can get its hands on. Not occasionally. Not during spikes. Constantly.

That’s when the tone shifts. What used to feel like “normal behavior” starts to feel like something else entirely—something you can’t quite control.

## The Weekend That Never Ended

The real breaking point isn’t the RAM usage—it’s what comes next. Jobs start failing. Not immediately, not cleanly, but slowly. They drag. They stall. They run for an entire weekend without finishing.

You split them into smaller chunks, hoping that’ll help. It doesn’t. You spin up a new proxy server, thinking maybe the old one is the issue. Same result.

That’s the moment every admin dreads. When you’ve tried the obvious fixes, and nothing changes. When the system isn’t just slow—it’s stuck.

## The Illusion of “More Resources Will Fix It”

There’s a natural instinct in situations like this: throw more resources at the problem. More RAM. More proxies. More separation between workloads.

And to be fair, that advice shows up quickly. “You probably need separate proxy servers,” one voice suggests, hinting that the current all-in-one setup might be part of the issue. Another leans toward architecture changes—dedicated systems, maybe even different OS choices.

But here’s the catch: the user already tried that. New proxy. Same behavior.

That’s what makes this frustrating. It’s not a clear scaling issue. It’s something deeper—something structural.

## The Design Choice Nobody Talks About

Then someone drops a detail that changes how you look at everything. The proxy service is *supposed* to use a lot of memory. It auto-scales, consuming up to around 80% of available RAM to process objects faster.

Suddenly, what looked like a bug starts to look like a feature.

And that’s where things get complicated. Because from one perspective, the system is working exactly as designed—using available resources aggressively to maximize throughput. From another perspective, it feels like it’s starving the rest of your infrastructure.

Two completely valid interpretations. Same behavior.

## “It’s Not Holding RAM… It’s Using It”

The debate doesn’t stop there. It gets more nuanced. One side worries that the service is reserving memory—holding onto it even when it’s not actively needed, effectively blocking other processes. The response comes back bluntly: “It doesn’t hold RAM unless it’s using it.”

That distinction matters, but it doesn’t always feel reassuring in practice. Because whether it’s “reserved” or “actively used,” the outcome is the same from an operational standpoint—other workloads feel squeezed, and performance becomes unpredictable.
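If you’d rather measure that distinction than argue about it, a few lines of Python make it concrete. This is a minimal sketch, not anything Veeam ships: it assumes the third-party `psutil` library (`pip install psutil`), the process name is the service named in the thread (matched loosely, since it may carry an `.exe` suffix on Windows), and the ~80% ceiling is simply the figure cited above. The exact meaning of the committed number is OS-dependent.

```python
# Minimal sketch: compare what the proxy process actually has resident
# in RAM (its working set) with what it has committed/reserved, and
# measure both against the ~80%-of-RAM ceiling cited in the thread.
# Assumes the third-party psutil library (pip install psutil).
import psutil

PROXY_NAME = "Veeam.Archiver.Proxy"  # service name from the original post
CEILING = 0.80                       # the ~80% auto-scaling figure cited above

total_ram = psutil.virtual_memory().total

for proc in psutil.process_iter(["name", "memory_info"]):
    name = proc.info["name"] or ""
    if PROXY_NAME.lower() in name.lower():
        mem = proc.info["memory_info"]
        resident = mem.rss   # pages actually occupying physical RAM
        committed = mem.vms  # committed/virtual size (semantics vary by OS)
        print(
            f"{name}: resident {resident / 2**30:.1f} GiB "
            f"({resident / total_ram:.0%} of RAM), "
            f"committed {committed / 2**30:.1f} GiB; "
            f"ceiling ~{CEILING:.0%} = {CEILING * total_ram / 2**30:.0f} GiB"
        )
```

If resident and committed track each other closely, the blunt reply holds: the service really is using what it holds. A large gap would suggest the opposite. Either way, from the outside the squeeze looks the same.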
## The Scale Problem Hiding in Plain Sight

Then you look at the numbers, and things start to click. Two proxy servers. 59 repositories. 59 jobs. Thousands of objects—4,000 for one customer alone.

That’s not a small setup. It’s not massive enterprise scale either, but it’s right in that uncomfortable middle ground—big enough to stress the system, small enough that architecture shortcuts (like all-in-one servers) still seem reasonable.

And that’s where friction tends to show up. Not at the extremes, but in the middle.

## Three Ways to See the Same Problem

What makes this situation so interesting is how differently people interpret it.

One perspective says this is a configuration issue. Too many jobs, not enough separation, proxies doing too much on a single system. The fix? Redesign the architecture.

Another perspective says this is expected behavior. High RAM usage isn’t the problem—it’s how the system is designed to operate. The real issue is elsewhere, maybe in job structure or workload distribution.

And then there’s the third, quieter view: this is just what happens when complexity builds up over time. More jobs, more repos, more customers—until the system hits a point where it’s technically functional, but practically difficult to manage.

None of these are wrong. They’re just looking at different layers of the same problem.

## The Real Frustration: No Clear Answer

What stands out most isn’t the technical details—it’s the uncertainty. Logs don’t point to anything definitive. Changes don’t produce clear improvements. Every fix feels like a guess.

“I feel truly cooked trying to find a solution,” the user admits.

That’s the part that resonates. Not the RAM usage, not the failed jobs—but the feeling of being stuck in a system that doesn’t give you clear feedback.

## So What’s Actually Going Wrong?

That’s the uncomfortable part: there’s no single, clean answer. It’s probably not just memory. Not just proxies. Not just job size. It’s the interaction of all of them—resource scaling, workload distribution, architecture decisions, and maybe even subtle inefficiencies that only show up at this scale.

And that’s what makes it hard to fix. Because you’re not solving one problem—you’re untangling a system.

## The Bigger Picture Nobody Mentions

This isn’t just about one setup or one product. It’s about a pattern.

Modern backup systems are powerful, but they’re also complex. They scale aggressively, assume certain architectures, and behave in ways that aren’t always intuitive. When everything lines up, they work beautifully. When something’s off, they become hard to reason about.

And when you’re in that gray area—where things “should” work but don’t—that’s where the real challenge begins. Because at that point, you’re not just managing backups. You’re debugging the system that’s supposed to protect everything else.