Fully Automated Kubernetes Cluster: Proxmox, Talos, OpenTofu and Forgejo
Jon Archer
I’ve been running a homelab Kubernetes cluster for a while now, and I kept meaning to write up how it’s all wired together. This post is an overview of what will become a series: I’ll cover each area of the setup in turn, building it up as we go, code included.
The main tools I’ve used in this build:
- Proxmox as the hypervisor running the VMs that make up the Kubernetes cluster.
- Talos Linux, the immutable Linux distro I’ve favoured for the cluster nodes.
- ArgoCD, for a GitOps approach to deploying apps onto the cluster.
- OpenTofu, to deploy parts of the setup.
- Forgejo, to host the Git repos and container images and to run workflows as part of the deployment.
The Hardware
Firstly, let’s talk about the hardware that makes up my Kubernetes cluster and my wider home lab. I’ve got four dedicated physical hosts for the Kubernetes lab: pre-owned Lenovo Tiny PCs. They’re great little units, and if you pick the right version they’re extremely expandable: there’s room for an x8 PCIe gen 3 card, and the right model (the M920x) has a second M.2 slot, so plenty of storage options.

All of my units have a PCIe riser card fitted with a 10GbE NIC.

This leads me on to the networking side of my home lab. I won’t go into great detail here (probably a topic for another post), but I’m running Ubiquiti equipment, and these PCs are all hooked up to an aggregation switch for full-speed 10GbE networking.
I also run a separate Proxmox host on my network, which matters here only because it hosts a number of apps that help us out: namely Forgejo for Git repo hosting and Infisical for storing secrets.
Deployments
The deployment of my Kubernetes stack is split across two separate Git repos, and therefore two deployments/workflows. This lets me manage the core elements separately from the things that depend on them, without getting into dependency hell.
The core deployment will:
- Create virtual machines on the Proxmox hosts.
- Install Talos Linux on those virtual machines.
- Configure Talos to form a Kubernetes cluster.
- Set up Cilium for networking.
The post-Kubernetes deployment will install:
- cert-manager for certificate management.
- Kong as the ingress gateway.
- Reflector for replicating secrets between namespaces.
- Reloader to watch for updates to Secrets and trigger rolling restarts.
- Prometheus for observability.
- Longhorn for storage.
- ArgoCD for GitOps-style application deployment.
This secondary deployment consists of services that run on top of the Kubernetes cluster, but they are still deployed with OpenTofu driving Helm.
Everything in both repos is deployed using OpenTofu via Forgejo Actions workflows.
What’s in the stack
Pre-deployment: chicken or egg? Obviously there is going to be some level of setup that isn’t automated (although I’m working on that). For me this includes the Proxmox cluster itself: nothing special, just four nodes joined so you can see them via the same interface, with no HA or shared storage.
As previously mentioned, Forgejo and Infisical run as containers on my other Proxmox host.
Infisical is a really useful piece of software for the centralised storage of secrets. These can then be consumed as ephemeral resources within the OpenTofu code to configure providers, such as Proxmox, or in my case the UniFi API keys used to manage DNS. On the flip side, you can push secrets from OpenTofu, such as the kubeconfig, which can then be consumed elsewhere.
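As a rough sketch of that pattern (the ephemeral resource name, attribute names, and secret path here are assumptions; check the Infisical provider docs for the exact schema in your version):

```hcl
terraform {
  required_providers {
    infisical = {
      source = "infisical/infisical"
    }
  }
}

# Pull a Proxmox API token from Infisical at plan/apply time without
# it ever landing in the statefile (the point of ephemeral resources).
ephemeral "infisical_secret" "proxmox_token" {
  name         = "PROXMOX_API_TOKEN"   # hypothetical secret name
  env_slug     = "prod"                # placeholder environment
  workspace_id = var.infisical_workspace_id
  folder_path  = "/homelab"            # placeholder folder
}

provider "proxmox" {
  endpoint  = "https://pve.example.lan:8006" # placeholder endpoint
  api_token = ephemeral.infisical_secret.proxmox_token.value
}
```

The win over a plain data source is that ephemeral values are never persisted to state, which matters when the state itself lives on third-party storage.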
Forgejo is a Git hosting platform that was forked from Gitea over a licensing change (similar to Terraform/OpenTofu). Usefully, it also has its own CI/CD pipeline service, very similar to GitHub Actions, and a package hosting service that serves nicely as a container registry.
Core deployment — some detail
The Kubernetes compute is a series of Talos Linux Proxmox VMs deployed using OpenTofu with the BPG provider (more on this later). The VMs boot from an image generated by the Talos Image Factory, a service that produces images tailored to your use case: bare metal, a particular hypervisor, or with certain drivers or software included. The core-infra repo generates Talos machine configs, applies them, and installs Cilium at first boot via Talos inline manifests; after bootstrap it pushes the kubeconfig and talosconfig to Infisical. For CNI and LoadBalancer duties I use Cilium with L2 announcements and LB IPAM — the worker subnet is the LoadBalancer pool, so Kong takes an IP from that range.
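A minimal sketch of one such VM with the bpg/proxmox provider, assuming an Image Factory ISO (the schematic ID, node names, datastores, and sizes are all placeholders; block names follow the bpg provider but verify against its docs for your version):

```hcl
# Fetch a Talos Image Factory ISO onto a Proxmox node.
resource "proxmox_virtual_environment_download_file" "talos_iso" {
  content_type = "iso"
  datastore_id = "local"
  node_name    = "pve1"   # placeholder Proxmox node
  file_name    = "talos-nocloud-amd64.iso"
  # <schematic-id> comes from factory.talos.dev for your extensions.
  url = "https://factory.talos.dev/image/<schematic-id>/v1.8.0/nocloud-amd64.iso"
}

# A single control-plane VM booting from that ISO.
resource "proxmox_virtual_environment_vm" "controlplane" {
  name      = "talos-cp-1"
  node_name = "pve1"

  cpu {
    cores = 4
    type  = "host"
  }

  memory {
    dedicated = 8192
  }

  # Empty install disk; Talos installs itself here on first boot.
  disk {
    datastore_id = "local-lvm"
    interface    = "virtio0"
    size         = 64
  }

  cdrom {
    file_id = proxmox_virtual_environment_download_file.talos_iso.id
  }

  network_device {
    bridge = "vmbr0"
  }
}
```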
Once this deployment completes we have a fully operational Kubernetes cluster.
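The Cilium L2 announcement and LB IPAM setup mentioned above boils down to two small custom resources. A hedged sketch, applied here via the kubernetes provider (pool name, CIDR, and interface regex are placeholders; the `cilium.io/v2alpha1` CRDs shown are from recent Cilium releases, so check your version):

```hcl
# The LoadBalancer IP pool -- in my case the worker subnet.
resource "kubernetes_manifest" "lb_pool" {
  manifest = {
    apiVersion = "cilium.io/v2alpha1"
    kind       = "CiliumLoadBalancerIPPool"
    metadata = {
      name = "worker-pool"
    }
    spec = {
      blocks = [
        { cidr = "192.168.10.0/24" } # placeholder worker subnet
      ]
    }
  }
}

# Announce assigned LoadBalancer IPs on the LAN via ARP (L2).
resource "kubernetes_manifest" "l2_policy" {
  manifest = {
    apiVersion = "cilium.io/v2alpha1"
    kind       = "CiliumL2AnnouncementPolicy"
    metadata = {
      name = "default-l2"
    }
    spec = {
      loadBalancerIPs = true
      interfaces      = ["^eth[0-9]+"] # placeholder NIC pattern
    }
  }
}
```

With these in place, a `Service` of type `LoadBalancer` (Kong, in this setup) gets an IP from the pool and is reachable from the rest of the network without MetalLB.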
Kubernetes repo deployment
The second pipeline gets us ready to deploy applications using ArgoCD. The OpenTofu configuration deploys several Helm charts to bring up the previously mentioned services on top of Kubernetes. It also generates TLS certs, creates DNS entries on the UniFi router, sets up persistent storage using Longhorn, and creates any relevant Kong ingress rules (with associated certs). This step also sets up any secrets that may be required later down the line, either pulled from Infisical into Kubernetes Secrets or generated.
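Each of those charts is a `helm_release` resource. For example, cert-manager might look roughly like this (the version pin and values are illustrative, not the exact ones this repo uses):

```hcl
resource "helm_release" "cert_manager" {
  name             = "cert-manager"
  repository       = "https://charts.jetstack.io"
  chart            = "cert-manager"
  namespace        = "cert-manager"
  create_namespace = true
  version          = "v1.15.0" # placeholder: pin to a tested version

  # Let the chart manage its own CRDs (key name varies by chart version).
  set {
    name  = "crds.enabled"
    value = "true"
  }
}
```

Pinning chart versions is what keeps this repo reproducible: a re-run of the workflow converges to the same cluster state rather than whatever the chart repo currently serves.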
AWS Use
While this is all fully homelab and hosted off the cloud, I have had to rely on AWS S3 as a stopgap. There are a couple of reasons for this. S3 is currently where I host the statefiles for both of the above deployments. Initially I used MinIO, but they also changed their license (there’s a pattern emerging here), which meant a lot of features were gradually pulled from the open source version of the product. I then moved on to Garage, which I had a lot of success with once I got the configuration correct. However, problems followed when I found I needed to create an external DNS entry in Route 53, and the provider config conflicted with the statefile backend config, so I temporarily switched to S3. I say temporarily, as I know there is an upcoming feature for Forgejo to store statefile(s) as part of the packages service (issue 3606).
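For reference, the backend config in question is just the standard S3 block in each repo (bucket, key, and region here are placeholders):

```hcl
terraform {
  backend "s3" {
    bucket = "homelab-tofu-state"          # placeholder bucket name
    key    = "core-infra/terraform.tfstate" # one key per repo
    region = "eu-west-2"                    # placeholder region
  }
}
```

The Garage conflict mentioned above is the awkward part of this design: the S3-compatible backend and the AWS provider both read AWS-style credentials and endpoints, so pointing one at self-hosted storage and the other at Route 53 takes careful separation of configuration.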
Coming up in this series
- Part 2: Proxmox and Talos Linux Deployment — Deep-dive into setting up the Kubernetes cluster using Talos Linux on Proxmox.