Tags: kubernetes, deployment, devops, infrastructure, best-practices, load-balancing

    Zero-Downtime Deployments Without Kubernetes: Proven Approaches

    November 29, 2025
    9 min read
## Zero-Downtime Deploys Without Kubernetes: The "Ancient Ways" Still Slap

For years, Kubernetes has been the industry's favorite hammer, and every software deployment problem has looked like a nail. One of the reasons folks reach for it is simple: it makes zero-downtime deployments feel as easy as flipping a toggle. Roll out a new version, keep the old one alive until the new one's steady, let the load balancer shuffle traffic around, and call it a day.

But here's the thing: people were shipping code without downtime long before Kubernetes strutted onto the scene, and they're still doing it today—often with far less complexity, fewer moving parts, and a much lighter cognitive tax. And if you hang out with engineers who've been in the trenches for a while, you'll hear a recurring theme: **Kubernetes is convenient, but it's not the only way. It's not even always the best way.**

In a world where "just use Kubernetes" has become the default refrain, the so-called "ancient ways" of deployment are making a comeback. Not out of nostalgia, but because they still work—shockingly well.

### The Load Balancer: The Original MVP

The most common answer from seasoned engineers comes with a shrug, like they're answering a question about gravity. **Run two servers. Let the load balancer handle it.**

This approach is older than some cloud engineers working today. You spin up two instances (or twenty), point a load balancer at them, and update one instance at a time. As long as your app knows how to politely bow out—more on that later—you can roll out new code without anyone noticing.

It's not flashy, but it's bulletproof. A blunt instrument, sure, but one that's been trusted for decades. "We've been doing this for 30+ years," one engineer said, and the crowd basically nodded in unison. If you're using NGINX or HAProxy, you already have the core features baked in: health checks, connection draining, and graceful cutover.
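A minimal sketch of that two-server setup in open-source NGINX might look like the following. The backend addresses are hypothetical, and the health checking here is passive (active checks are an NGINX Plus feature):

```nginx
upstream app_backend {
    # Two instances; update one at a time during a deploy.
    # Passive health check: after 3 failed requests, NGINX stops
    # sending traffic to a server for 30 seconds.
    server 10.0.0.11:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.12:8080 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_backend;
        # If one backend errors out mid-deploy, retry the other.
        proxy_next_upstream error timeout http_502 http_503;
    }
}
```

During a deploy you'd take one instance out (for example, mark its `server` line `down` and do a graceful `nginx -s reload`), update it, put it back, then repeat for the other.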
One commenter even reminded people that NGINX can swap upstreams without dropping packets. No tears, no drama.

### Blue-Green Deployments: The Classic Pattern

If Kubernetes deploys are the modern web framework of uptime, **blue-green deployments** are the HTML 1.0 of the technique—simple, direct, and still perfectly effective.

Here's the playbook:

- You deploy the new version into the *green* environment.
- You keep the *blue* environment running the stable version.
- When green passes all your checks, you switch traffic over.
- If everything goes sideways, you flip traffic back to blue.

It's the deployment equivalent of switching HDMI ports on your TV. Low effort. High reliability. Zero downtime unless you count the milliseconds needed for routing to catch up.

Engineers in the thread swear by it. Some even brag about how fast they can do it. One person said they used to switch entire fleets—over a thousand servers—in under a minute, just by swapping symlinks. Speaking of which…

### Symlinks: The UNIX Trick That Refuses to Die

If there were a Hall of Fame for simple deployment hacks, this would be first-ballot material. Before containers, before orchestration platforms, before modern config management, people were deploying apps by updating a **symlink** called `current` to point from the old release directory to the new one.

The flow goes like this:

1. Upload new code to `/var/www/releases/version-329320`
2. Update the `current` symlink to point at it
3. Restart (or gracefully reload) the web server if needed
4. Celebrate with something caffeinated

Because updating a symlink is an atomic operation, the switch is instant—literally. And if you want to roll back? Just point `current` back to the old directory. No need to rebuild anything, no need to redeploy to a container platform, no need to close your eyes and pray that nothing breaks.

People using this pattern today often pair it with tools like Ansible's `deploy_helper`, but you don't need special tooling.
It works just as well with a Bash script, as long as you clean up old releases once in a while.

### DNS: Technically Feasible, Practically Chaotic

A few engineers admitted they've tried using DNS to pull off zero-downtime deploys. The idea is simple: set a low TTL, flip your A record to the new server, and let propagation do the rest.

In practice, it sort of works. **Until it doesn't.** Caching behavior is unpredictable across ISPs, corporate networks, browsers, and devices. One engineer said you can do it "if complexity is a concern," but others quickly chimed in with horror stories about DNS changes taking minutes or hours or—yes—longer.

It's fine if you're operating on a small scale or running something informal. But using DNS as a load balancer? That's like using a microwave to roast a turkey. Technically possible. Shockingly inconsistent.

### Session Draining: The Secret Sauce of Graceful Shutdowns

Whether you're using Kubernetes, a homegrown script, or a pile of rusty servers in a broom closet, the golden rule of zero downtime is the same: **Your app has to be able to shut down without slamming the door on users.**

The trick is listening for **SIGTERM**, the signal that tells your process to start winding down instead of dying instantly. Engineers in the thread were practically yelling from the rooftops that this is the real key to smooth deployments.

Here's the high-level idea:

- App receives SIGTERM
- It stops accepting new requests
- It lets active requests finish
- After a grace period, the system kills it if it's still hanging around

Every web framework worth using supports this in one way or another. One engineer said that in Go it's "a 2 liner," Spring Boot has it built in, and even Python frameworks support it. If your app can handle SIGTERM gracefully, you can roll out updates with almost any infrastructure setup.
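The handshake above can be sketched in Python; the flag and function names are invented for illustration, and a real server would check the flag in its accept loop:

```python
import signal
import threading

# Set once SIGTERM arrives; checked before accepting new work.
shutting_down = threading.Event()

def handle_sigterm(signum, frame):
    # Don't exit here. Just stop taking new requests and let in-flight
    # ones finish; the supervisor's grace period (e.g. systemd's
    # TimeoutStopSec) is the backstop if the process hangs.
    shutting_down.set()

signal.signal(signal.SIGTERM, handle_sigterm)

def can_accept_request() -> bool:
    """Your accept loop asks this before taking on new work."""
    return not shutting_down.is_set()
```

A load balancer doing connection draining sees the instance stop accepting, lets the in-flight requests drain, and the deploy script moves on to the next server.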
### Cloud Platforms: Zero Downtime as a Service

If you prefer managed services, the good news is that pretty much every cloud has figured out how to handle zero-downtime deploys these days—without requiring you to understand the finer points of pod eviction.

- **Google Cloud Run** does it by spinning up new revisions and routing traffic over gradually.
- **AWS ECS and Fargate** have been doing rolling updates with load balancers for years.
- **Azure App Service** lets you deploy into staging slots and swap them like a magician switching cups.

All of these are built on the same old ideas people have been using for decades: multiple instances, load balancing, and health checks.

### Docker Without Kubernetes: Still Totally Possible

A lot of folks in the thread run everything in Docker but don't bother with Kubernetes. They're using:

- Docker Compose with manual scaling
- Traefik as a reverse proxy
- Docker Swarm for lightweight orchestration
- Simple rolling updates, one container at a time

The idea is the same as running multiple servers, except each "server" is a container. Traefik, in particular, makes life easy by routing traffic only to containers that report themselves as healthy. One engineer described a workflow where CI/CD boots up a second copy of the app image, waits for it to be healthy, routes traffic to it, updates the rest of the service, and then shuts down the staging copy. It sounds fancy, but the bones are very traditional.

### The Trickiest Part: Database Migrations

If there's a boogeyman haunting zero-downtime deployments, it's database changes. The problem: new code expects new columns or tables, old code is still running, and you can't break either one.
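The safe direction is easy to see with a scratch SQLite database: add the new column first, and queries written against the old schema keep working untouched. The table and column names here are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('ada')")

# Expand: add a nullable column. Old code never mentions it,
# so it keeps reading and writing exactly as before.
conn.execute("ALTER TABLE users ADD COLUMN display_name TEXT")

# A query shipped with the *old* app version still succeeds:
old_row = conn.execute("SELECT id, name FROM users").fetchone()

# New code can start backfilling and using the new column.
conn.execute("UPDATE users SET display_name = name WHERE display_name IS NULL")
```

Only after every old app version is gone do you drop or rename anything; that's the "contract" half of the expand/contract pattern.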
The real answer from the thread is the same advice you hear from experienced teams everywhere: **Write migrations that don't break old code.**

That usually means:

- Add new columns instead of renaming old ones
- Stop writing to old columns before dropping them
- Avoid destructive changes in the same deploy
- Clean up the schema later, after all old app versions are gone

Get good at this, and half your deployment anxiety disappears.

### So… Do You Actually Need Kubernetes?

If you're running at global scale with services talking to other services about more services? Sure, Kubernetes helps. If you're wrangling a large team that needs consistent tooling and guardrails, it absolutely earns its keep.

But if you're running a few web apps? If your fleet is measured in servers rather than continents? If you want something that works and doesn't require dedicating a portion of your life to YAML?

Then the old ways still slap. Load balancers. Two servers. Symlinks. Reverse proxies. SIGTERM. Blue-green. Health checks. These tools are simple, predictable, and rock-solid.

Zero-downtime deployments without Kubernetes aren't some lost art. They're standard practice for tons of teams—the kind of thing engineers have been quietly running in production for decades. And they're not going anywhere.