    Kubernetes
    Storage
    Production

    Looking for a replacement for Minio? S3 made easy with Garage

    February 9, 2026
    4 min read
## What started the discussion

**Update: garage-operator v0.1.x released, a Kubernetes operator for Garage (self-hosted S3 storage)**

About a month ago I shared a project I've been building: a Kubernetes operator for Garage (a lightweight, distributed, S3-compatible object store designed for self-hosting). Original post: [https://www.reddit.com/r/kubernetes/comments/1qeagkn/built_a_kubernetes_operator_for_garage_selfhosted/](https://www.reddit.com/r/kubernetes/comments/1qeagkn/built_a_kubernetes_operator_for_garage_selfhosted/)

Since then a few people tried it out, opened issues, and gave feedback, so I just shipped the **first minor release** 🎉

## What garage-operator does

It automates running Garage clusters in Kubernetes:

- Deploy Garage clusters with StatefulSets
- Automatic bootstrap + layout management
- Multi-cluster federation across Kubernetes clusters
- Bucket + quota management
- S3 access key generation
- GitOps-friendly CRDs

Garage itself is a **lightweight distributed S3 object store designed for self-hosting**, often used as an alternative to heavier systems like MinIO or Ceph.

## Improvements since the first post

Some things that came directly from community feedback:

- Added **COSI support**
- Improved cluster bootstrap reliability
- Better documentation
- More robust node discovery
- Cleanup of several CRD APIs
- Early work toward better multi-cluster federation

## Example CRDs

You can now declaratively manage things like:

- Garage clusters
- external nodes
- buckets
- access keys

so the whole storage system becomes **fully GitOps-managed**.
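To make the GitOps angle concrete, a declarative setup could look something like the following. This is a hypothetical sketch only: the API group, kinds, and field names here are illustrative and are not the operator's actual schema — check the repo's CRD definitions for the real API.

```yaml
# Hypothetical sketch: API group, kinds, and fields are illustrative,
# not garage-operator's actual schema.
apiVersion: garage.example.dev/v1alpha1
kind: GarageCluster
metadata:
  name: garage
spec:
  replicas: 3          # one Garage node per StatefulSet replica
  storage:
    size: 100Gi        # per-node persistent volume
---
apiVersion: garage.example.dev/v1alpha1
kind: Bucket
metadata:
  name: backups
spec:
  clusterRef: garage   # which GarageCluster owns this bucket
  quota:
    maxSize: 50Gi
---
apiVersion: garage.example.dev/v1alpha1
kind: AccessKey
metadata:
  name: backup-writer
spec:
  clusterRef: garage
  buckets:
    - name: backups
      permissions: [read, write]
```

Because everything is a Kubernetes resource, the whole storage layer can live in a Git repo and be reconciled by Flux or Argo CD like any other manifest.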
## Repo

[https://github.com/rajsinghtech/garage-operator](https://github.com/rajsinghtech/garage-operator)

If anyone here is running Garage in Kubernetes (or thinking about it), I'd love feedback:

- missing features
- weird edge cases
- ideas for better CRDs
- production usage stories

I personally manage 3 clusters and use Tailscale to connect them, which gives Garage cross-cluster distributed redundancy. Combined with VolSync restic backups, this removed my need for Ceph. Happy to answer questions about the operator or the architecture.

## What stood out in the comments

### Discussion point 1

I'm using Garage for an on-prem cluster, and since I didn't want to rewrite all my services that rely on S3, it's been immensely helpful, and it seems to be working great. I'm not in production yet, but we will be soon, so I'm hoping everything goes well. Congratulations on a job well done, and looking forward to what comes next.

### Discussion point 2

I used to configure my Garage manually. Then I saw the first post, asked questions, and swapped to the operator. Excellent decision. Excellent operator.

### Discussion point 3

I've been following this and really might have to migrate over. Right now I deploy using a self-made Helm chart, but this looks like a really useful way to get S3 inside Kubernetes. Hope this takes off.

### Discussion point 4

This looks really cool. Garage + a clean operator + GitOps control is a solid combo. Definitely interesting for people who want something lighter than MinIO or Ceph.

### Discussion point 5

Garage over MinIO is a solid move for self-hosted setups: far lighter on resources. The operator approach with GitOps CRDs is the right call; manually managing distributed storage in k8s gets old fast. Curious how the multi-cluster federation handles network partitions between your Tailscale-connected clusters.
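The multi-cluster setup described above (nodes in other clusters reachable over Tailscale, joined into one Garage layout for redundancy) could plausibly be expressed through the operator's external-node resource. This is a hypothetical sketch: the API group, kind, fields, and the tailnet hostname are all assumptions for illustration, not the operator's real schema.

```yaml
# Hypothetical sketch: API group, kind, fields, and hostname are
# illustrative assumptions. Models a Garage node running in another
# Kubernetes cluster, reachable over the tailnet, joined into the
# local layout for cross-cluster redundancy.
apiVersion: garage.example.dev/v1alpha1
kind: GarageNode
metadata:
  name: remote-site-b
spec:
  clusterRef: garage
  external: true
  # MagicDNS name of the remote node's RPC endpoint (assumed name);
  # 3901 is Garage's default RPC port
  endpoint: garage-b.tailnet-example.ts.net:3901
  zone: site-b      # distinct zone so replicas spread across sites
  capacity: 1Ti
```

Putting each cluster in its own zone matters here: Garage's layout uses zones to place replicas, so a partition of one Tailscale-connected site degrades only that zone's copies.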
## Thread snapshot

- Original subreddit: r/kubernetes
- Original author: u/BigCurryCook
- Reddit score: 56
- Comment count: 9
- Original thread: https://www.reddit.com/r/kubernetes/comments/1rscptj/looking_for_a_replacement_for_minio_s3_made_easy/
