## 📝 Author
Birat Aryal — birataryal.github.io
- Created Date: 2026-02-16
- Updated Date: 2026-02-16 22:14:28
- Website: birataryl.com.np
- Repository: Birat Aryal
- LinkedIn: Birat Aryal

DevSecOps Engineer | System Engineer | Cyber Security Analyst | Network Engineer
## CAPI + vSphere UAT Cluster
This documentation covers a complete, field-tested setup of a Kubernetes workload cluster on vSphere using Cluster API.
### High-level outcome
- One control-plane node (managed by a KubeadmControlPlane) and N worker nodes created via CAPI MachineDeployments.
- API server VIP provided by kube-vip (`192.168.35.100`) and a DNS record `kube-api-server.uat.local` (sketched below).
- Node IPs allocated from an InClusterIPPool (`tmsnw-pool`) via `addressesFromPools`.
- Guest networking stabilized by defining the default route (and metric) in the CAPV network device spec (see the machine-template sketch later in this section).
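For reference, a minimal sketch of the IP pool and the cluster endpoint is shown below. The pool name (`tmsnw-pool`), VIP (`192.168.35.100`), and DNS record come from this document; the address range, namespace, vCenter hostname, credentials Secret, and object names are illustrative assumptions, and API versions may differ in your environment.

```yaml
---
# InClusterIPPool backing addressesFromPools (range and gateway are assumed values)
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: tmsnw-pool
  namespace: default
spec:
  addresses:
    - 192.168.35.110-192.168.35.150   # assumed allocatable range for node IPs
  prefix: 24
  gateway: 192.168.35.1               # assumed subnet gateway
---
# VSphereCluster endpoint pointing at the kube-vip VIP
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereCluster
metadata:
  name: uat                           # hypothetical cluster name
  namespace: default
spec:
  server: vcenter.uat.local           # assumed vCenter hostname
  identityRef:
    kind: Secret
    name: uat-vcenter-credentials     # hypothetical credentials Secret
  controlPlaneEndpoint:
    host: 192.168.35.100              # kube-vip VIP; kube-api-server.uat.local resolves here
    port: 6443
```

For every network device that lists the pool in `addressesFromPools`, CAPV creates an `IPAddressClaim` that the in-cluster IPAM provider satisfies from this range.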
### Why this matters
Cluster API is extremely deterministic when three pillars are correct:
- vSphere primitives are accurate (template, folder, resource pool, datastore, network); see the machine-template sketch after this list
- Guest bootstrap is healthy (cloud-init runs, routes exist, kubeadm succeeds)
- Reachability exists between:
- management cluster -> workload VIP:6443
- workload nodes -> VIP:6443 (for kubeadm join)
- workload nodes -> DNS/NTP/package repos (if needed)
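The sketch below shows how the first pillar and the pool-backed node addressing typically come together in a CAPV `VSphereMachineTemplate`, assuming CAPV v1beta1 field names. Every placement value (datacenter, datastore, folder, resource pool, VM template, port group) is a placeholder and has to match the real vSphere inventory exactly, which is where this pillar most often breaks.

```yaml
# Worker machine template: vSphere placement primitives plus pool-backed static networking
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereMachineTemplate
metadata:
  name: uat-md-0                      # hypothetical name referenced by a MachineDeployment
  namespace: default
spec:
  template:
    spec:
      server: vcenter.uat.local       # assumed vCenter hostname
      datacenter: DC-UAT              # placeholders: must match inventory paths exactly
      datastore: ds-uat-01
      folder: /DC-UAT/vm/uat-k8s
      resourcePool: /DC-UAT/host/UAT-Cluster/Resources/uat-k8s
      template: ubuntu-2204-kube      # placeholder VM template with kubeadm/kubelet baked in
      cloneMode: linkedClone
      numCPUs: 4
      memoryMiB: 8192
      diskGiB: 40
      network:
        devices:
          - networkName: uat-vm-network        # placeholder port group
            dhcp4: false
            addressesFromPools:                # node IPs come from tmsnw-pool
              - apiGroup: ipam.cluster.x-k8s.io
                kind: InClusterIPPool
                name: tmsnw-pool
            nameservers:
              - 192.168.35.1                   # assumed DNS server
            # Device-level default route with an explicit metric, per the outcome list above.
            routes:
              - to: 0.0.0.0/0
                via: 192.168.35.1              # assumed gateway
                metric: 100
```

Declaring the default route and its metric here means cloud-init renders the route explicitly inside the guest, which is the stabilization the outcome list refers to.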
If any of those are missing, you’ll see the classic pattern:
- Control plane VM exists and is ready in CAPV
- `Machine`/`VSphereMachine` objects for workers exist but remain `Pending`
- Controllers log `cluster is not reachable: ... connect: no route to host`
Continue to Architecture to understand the data plane + control plane flows.