Tags: Kubernetes, Security
March 8, 2026
4 min read
# YubiHSM 2 + cert-manager: Hardware-signed TLS certificates on Kubernetes
## What started the discussion
I built a cert-manager external issuer that signs TLS certificates using a private key inside a YubiHSM 2. The key never leaves the device. Is it overkill for a homelab? Absolutely. But if you're going to run your own CA, you might as well make the private key physically impossible to steal.
cert-manager's built-in CA issuer stores your signing key in a Kubernetes Secret, which leaves it one `kubectl get secret` away from being stolen. The fun part of this project was wiring the HSM into Go's `crypto.Signer` interface, so cert-manager doesn't even know the signature is coming from hardware; it works like any other issuer.
Write-up with the architecture and code: [https://charles.dev/blog/yubihsm-cert-manager](https://charles.dev/blog/yubihsm-cert-manager)
Next up I'm building a hardware-backed Bitcoin wallet with the same YubiHSM 2. Happy to answer questions in the meantime.
## What stood out in the comments
### Discussion point 1
Physically stealing it isn’t enough either. The key is stored as a non-exportable object, so there’s literally no way to read it out over USB or any API. The HSM will sign things, but it will never reveal the key itself. Actually extracting it would require invasive chip-level attacks with specialized lab equipment. At that point you’re basically attacking silicon, not software.
### Discussion point 2
Can you cluster YubiHSMs with real-time key distribution and load balancing? I.e., if one key dies, how do you get redundancy and failover with a guarantee of no data loss? Load balancing using multiple YubiHSMs?
### Discussion point 3
Isn’t physically stealing it the only way?
### Discussion point 4
I have a Go library called yubihsm-sync that replicates cryptographic objects across 2-5+ YubiHSMs using a background daemon. It continuously detects changes and syncs them across all devices in a group. If an HSM dies, the others already have all the keys. This is done using shared wrapping keys.

The same project includes a protocol-aware gateway that sits between your applications and the HSMs. It speaks native YubiHSM protocol, so apps don't need to know there are multiple devices behind it. It supports multiple routing modes:

- Primary/failover: stick to one HSM, auto-promote another if it goes down
- Round-robin: distribute requests across all healthy HSMs
- Content-based: route to the specific HSM that has the requested object

It handles session affinity, connection pooling, and health checking automatically. I'll be writing about these tools and what I learned building them in a future post in the series.
### Discussion point 5
No public repo at this point. This is a passion project that I'd love to turn into something commercial one day, so I'm keeping the source private for now. Right now I'm heads-down building out the underlying libraries and deep integrations across the stack. That said, nothing I'm doing is secret sauce conceptually. YubiHSM's wrap key export/import mechanism is what makes replication possible, and the protocol is well-documented by Yubico. Someone motivated could absolutely build this themselves, and I hope the posts are useful as a roadmap for that. My next blog post is about my BTC implementation and the TUI I built to go along with it. Part of the fun of writing these up is seeing what people are interested in, so questions like yours are genuinely appreciated. Feel free to reach out if you'd like more details or want to collaborate.
## Thread snapshot
- Original subreddit: r/kubernetes
- Original author: u/net_charlessullivan
- Reddit score: 63
- Comment count: 18
- Original thread: https://www.reddit.com/r/kubernetes/comments/1r6b2fg/yubihsm_2_certmanager_hardwaresigned_tls/
## Keep Exploring

- **CVE-2026-22039: How an admission controller vulnerability turned Kubernetes namespaces into a security illusion**
  Just saw this nasty Kyverno CVE that's a perfect example of why I'm skeptical of admission controllers with god-mode RBAC.
- **The Anatomy of Modern Kubernetes Data Protection: Securing KubeVirt Workloads at Enterprise Scale in 2026**
- **The End of kubernetes/ingress-nginx: Your March 2026 Migration Playbook**
  Hey everyone, sharing an article I wrote about the upcoming End-of-Life for the community-maintained kubernetes/ingress-nginx controller happening in March 2026.
- **Why Kubernetes 1.35 Feels Like a Security-First Release**
  Kubernetes 1.35 isn't your typical incremental update. With cgroup v1 dropped, hardened certificate validation, constrained impersonation, and user namespaces enabled by default, this release reads like the security overhaul the platform has needed for years.