This website runs on an autoscaling, European, self-hosted Kubernetes cluster

My personal website, simon-frey.com, now runs on a self-managed Kubernetes cluster on Hetzner Cloud. This is, by any reasonable measure, complete overkill for a personal website that gets a few hundred visitors a day. But hey, sometimes you've got to do things just for the fun of it.

For years, simon-frey.com lived on Uberspace, a pay-what-you-want shared hosting provider from Germany. I paid 5 euros a month, SSH’d into a box when I needed to change something, and it just worked. Uberspace is genuinely great at what they do and I have nothing bad to say about them. If you want simple, reliable hosting and you don’t need to overcomplicate things, they’re an excellent choice.

Why I Moved

The honest answer is that I wanted to dogfood my own infrastructure. I had already built a Kubernetes cluster on Hetzner Cloud (you can read about that setup here), and running my actual website on it felt like the natural next step. There is a big difference between running tutorial workloads on a cluster and running something you actually care about.

How It Works

The request flow for simon-frey.com goes through quite a few layers now, which is both the beauty and the absurdity of this setup.

graph LR
    User([User]) --> Cloudflare[Cloudflare DNS]
    Cloudflare --> LB[Hetzner Load Balancer]
    LB --> Traefik[Traefik Ingress]
    Traefik --> Cache[Nginx Cache]
    Traefik -->|/files| MinIO[MinIO]
    Cache -->|/| PHP[PHP + git-sync]
    Cache -->|/blog| WP[WordPress]
    WP --> MySQL[(MySQL)]
    PHP -.->|pulls every 60s| GitHub([GitHub Repo])

DNS is currently handled by Cloudflare, though I’m planning to move to a European provider like Bunny CDN at some point because I’d prefer to keep things closer to home. Cloudflare points to a Hetzner Load Balancer, which distributes traffic across the Kubernetes worker nodes. From there, Traefik acts as the ingress controller. It terminates TLS with certificates from Let’s Encrypt, which are automatically managed by cert-manager.
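The cert-manager part of that chain boils down to a single issuer resource. As a rough sketch (the issuer name, email, and ingress class here are assumptions, not my actual config), a Let's Encrypt ClusterIssuer for Traefik looks like this:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Let's Encrypt production ACME endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com          # assumption: placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-prod-key      # secret storing the ACME account key
    solvers:
      - http01:
          ingress:
            class: traefik            # solve HTTP-01 challenges via the Traefik ingress
```

Once this exists, cert-manager watches Ingress resources annotated with the issuer and keeps their TLS secrets renewed automatically.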

The Nginx cache sits in front of everything and is honestly the most important piece of the whole setup. It’s configured with proxy_cache_use_stale, which means it will serve cached content even when the backends are slow, throwing errors, or completely down. The cache also does background updates, so the first visitor after a cache expiry doesn’t have to wait for the backend response.
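The two directives doing the heavy lifting here are `proxy_cache_use_stale` and `proxy_cache_background_update`. A minimal sketch of that behavior (upstream names and cache sizing are my assumptions, not the actual config) could look like:

```nginx
# Shared cache zone: 10 MB of keys, up to 1 GB of cached responses,
# entries kept for 7 days even after they expire (so stale serving works).
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=site:10m
                 max_size=1g inactive=7d use_temp_path=off;

server {
    listen 80;

    location / {
        proxy_pass http://php-backend;          # assumption: upstream service name
        proxy_cache site;
        proxy_cache_valid 200 10m;

        # Serve stale content when the backend is slow, erroring, or down
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;

        # Refresh expired entries in the background instead of making
        # the first visitor after expiry wait on the backend
        proxy_cache_background_update on;

        # Collapse concurrent misses into a single backend request
        proxy_cache_lock on;
    }
}
```

The `updating` keyword in `proxy_cache_use_stale` is what pairs with `proxy_cache_background_update`: visitors get the stale copy immediately while Nginx revalidates behind the scenes.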

Behind the cache, the site is split into two backends. The main site at simon-frey.com is a custom PHP application that renders markdown files into HTML. The content lives in a GitHub repository and gets synced into the running pod via git-sync, a Kubernetes-native sidecar that polls every 60 seconds. When I push a commit to GitHub, the site updates within a minute, and git-sync automatically fires a webhook that purges the Nginx cache so visitors see the new content immediately.

There’s also a MinIO instance serving static files under simon-frey.com/files. This handles larger assets that don’t belong in a git repository.
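The path split between MinIO and the cached site is plain ingress routing. As a hedged sketch using a standard Kubernetes Ingress (service names and ports are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simon-frey-com
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod  # assumption: issuer name
spec:
  rules:
    - host: simon-frey.com
      http:
        paths:
          - path: /files                # static assets go straight to MinIO
            pathType: Prefix
            backend:
              service:
                name: minio             # hypothetical service name
                port: { number: 9000 }
          - path: /                     # everything else goes through the Nginx cache
            pathType: Prefix
            backend:
              service:
                name: nginx-cache       # hypothetical service name
                port: { number: 80 }
```

Traefik matches the longest prefix first, so `/files` requests never touch the cache or the PHP backend.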

Autoscaling

The cluster runs a cluster autoscaler that can spin up additional Hetzner worker nodes when resource pressure increases. In theory, this means the site can handle traffic spikes by scaling out. In practice, provisioning a new Hetzner server takes around 5 minutes, which is an eternity if your site just got posted on Hacker News. In day-to-day operation that's acceptable, because the Nginx cache can absorb most of a spike on its own, but it's worth being honest about the limitation.
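For context, the Hetzner provider of the cluster autoscaler is configured mostly through container args and an API token. This is a rough sketch from memory, and the node-pool sizing and server type are assumptions, not my actual values:

```yaml
# Fragment of the cluster-autoscaler container spec (Hetzner provider).
args:
  - --cloud-provider=hetzner
  # min:max:server-type:location:pool-name — assumed sizing for illustration
  - --nodes=1:5:CPX21:FSN1:pool1
env:
  - name: HCLOUD_TOKEN                 # Hetzner Cloud API token
    valueFrom:
      secretKeyRef: { name: hcloud, key: token }
```

The autoscaler then creates and deletes servers in that pool via the Hetzner Cloud API as pending pods demand.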

Monitoring and Alerting

The cluster runs Victoria Metrics for metrics collection, with KWatch monitoring pod states and sending alerts via Pushover directly to my phone. I also have UpDown.io doing external HTTP checks, so I get notified if the site is unreachable from the outside, not just from the cluster’s perspective.

Cost

I went from paying 5 euros a month on Uberspace to roughly 14 euros a month for the Hetzner infrastructure. That’s nearly three times the cost for hosting a personal website, which is hard to justify on pure economics. But the cluster doesn’t just run my website. It also runs monitoring, a few other small services, and serves as a general playground for testing Kubernetes features and tools. The marginal cost of adding my website to the existing cluster is essentially zero since the resources it consumes are tiny, so the real comparison is more like “5 euros for hosting only my website” versus “14 euros for a whole platform that also hosts my website.”

So yes, it was worth it. Not because it’s the smart choice for hosting a website, but because it’s the smart choice for becoming better at this kind of work. And the website still loads fast, so there’s that.
