A lightweight and opinionated alternative to Kubespray for provisioning production-grade Kubernetes clusters locally using kubeadm. It automates cluster deployment with Ansible while maintaining production-ready configurations.
This project downloads and installs the following software at these versions:
| Software | Version |
|---|---|
| Kubernetes | 1.34.3 |
| Helm | 3.19.2 |
| Terraform | 1.14.3 |
| Kubeseal | 0.34.0 |
| Containerd | 2.2.0 |
| Cilium | 1.19.0-pre.3 |
| Hubble | 1.18.3 |
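After a deployment completes, you can spot-check the installed versions against this table; a minimal sketch (the client-side tools run on your workstation, while `containerd` lives on the nodes, so run that one over SSH):

```bash
kubectl version --client
helm version --short
terraform version
kubeseal --version
cilium version           # Cilium CLI; also reports the agent version
hubble version
containerd --version     # run on a node, e.g. via ssh
```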
Choose one of the following methods based on your environment:
- Vagrant on Hyper-V (using WSL)
- Vagrant on Libvirt (Linux distros with older libvirt versions)
- Pure Ansible (Recommended for newer Linux distros)
- ✅ Recommended workaround for newer systems: use the pure Ansible method instead, which works on newer distributions
- Ansible installed on your system
- Vagrant installed (if using Vagrant methods)
- Libvirt and QEMU installed (for libvirt-based deployments)
- Vagrant Libvirt provider installed (for Vagrant + Libvirt method)
- Hyper-V enabled
- Vagrant accessible from the WSL environment
- Vagrant 2.4.6 has known issues with its Hyper-V provider; avoid this version
- Vagrant 2.4.9 fixes these Hyper-V issues
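You can confirm which version you have before provisioning:

```bash
vagrant --version
```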
```bash
git clone https://github.com/nthskyradiated/ansible-k8s
cd ansible-k8s
```
- If you're using Hyper-V, ensure a bridge network is already created (you can do this from Hyper-V Manager)
- Adjust variables in the `Vagrantfile`:
  - `RAM_SIZE` - Memory allocation per VM
  - `CPU_CORES` - CPU cores per VM
  - `NUM_CONTROL_NODES` - Number of control plane nodes
  - `NUM_WORKER_NODE` - Number of worker nodes
  - `MASTER_IP_START` - Starting host address for your control plane nodes (e.g. 210)
  - `NODE_IP_START` - Same as `MASTER_IP_START` but for your worker nodes
  - `LB_IP_START` - Host address for your load balancer
- Variables are stored in `./pure-ansible/group_vars/all.yaml`
- Generate SSH keys for Ansible:
```bash
ssh-keygen -t ed25519 -f ./pure-ansible/id_ed25519 -N ""
```
- Don't forget to update the key's location in `ansible.cfg` (see the sketch below)
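For reference, a sketch of the relevant `ansible.cfg` entry; the section layout in this repo may differ, so adjust rather than copy blindly:

```ini
[defaults]
; Path to the key generated above
private_key_file = ./pure-ansible/id_ed25519
```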
Add the VM IP addresses to your hosts file before deployment:
```
# Add these entries to /etc/hosts if using Linux
192.168.100.211 controlplane01
192.168.100.221 node01
192.168.100.222 node02
# Adjust IPs based on your configuration
```
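The entries can be appended in one step; a sketch for Linux (on Windows, edit C:\Windows\System32\drivers\etc\hosts as Administrator instead):

```bash
# Append the VM entries to /etc/hosts; adjust IPs to match your configuration
sudo tee -a /etc/hosts <<'EOF'
192.168.100.211 controlplane01
192.168.100.221 node01
192.168.100.222 node02
EOF
```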
Run the appropriate command for your chosen method:

Vagrant on Hyper-V (from WSL):
```bash
cd hyperv-vagrant
export VAGRANT_WSL_ENABLE_WINDOWS_ACCESS="1"
vagrant up --provider=hyperv
```

Vagrant on Libvirt:
```bash
cd libvirt-vagrant
vagrant up --provider=libvirt
```

Pure Ansible:
```bash
cd pure-ansible
ansible-playbook site.yaml --ask-become-pass
```

Once provisioning finishes:
- Kubeconfig is automatically configured on your local machine at `~/.kube/config`
- Verify cluster access:
```bash
kubectl get nodes
kubectl get pods -A
```
- Retrieve the kubeadm join command:
```bash
ssh your-username@controlplane01 cat ~/kubeadm-init-output.txt
```
- Join additional nodes by SSHing to each node and running the kubeadm join command from the output file. Note that the `kubeadm join` command for adding worker nodes differs from the one for adding additional control plane nodes; verify both in the txt file (see the sketch below).
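For orientation, the two variants follow the standard kubeadm join forms; an illustrative sketch with placeholder values (always use the exact commands from your `kubeadm-init-output.txt`):

```bash
# Worker node join (values below are placeholders)
sudo kubeadm join <endpoint>:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>

# Additional control plane join: same flags plus the control-plane options
sudo kubeadm join <endpoint>:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --certificate-key <certificate-key>
```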
- VM/Vagrantbox: Ubuntu 24.04 LTS
- Container Runtime: Containerd
- CNI: Cilium
- Load Balancer: HAProxy (auto-provisioned for multi-master setups)
- Provisioning Tool: kubeadm
- Libvirt: Uses subnet `192.168.100.0/24`
- Hyper-V: Depends on your network configuration
- Libvirt deployment: Fully automated, no manual intervention required
- Hyper-V deployment: You must manually select a network interface for each VM*

*See Hyper-V limitations: https://developer.hashicorp.com/vagrant/docs/providers/hyperv/limitations
Cilium is provisioned via Helm. You can modify its values in `./tools/ubuntu/cilium-values.yaml` and add, remove, or adjust Cilium features according to your requirements.
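After editing the values file, the changes can be rolled into a running cluster with Helm; a minimal sketch (the release name `cilium` and namespace `kube-system` are the usual defaults and are assumptions here; check `helm list -A` for yours):

```bash
# Assumes the Cilium chart repo is configured:
#   helm repo add cilium https://helm.cilium.io
helm upgrade cilium cilium/cilium \
  --namespace kube-system \
  -f ./tools/ubuntu/cilium-values.yaml

# Wait for the agents to report healthy again
cilium status --wait
```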
If you encounter issues with vagrant-libvirt on newer RPM-based distributions (RHEL 10, AlmaLinux 10, Rocky Linux 10, etc.), use the pure Ansible method instead. This is a known compatibility issue with newer libvirt versions.
If VMs cannot reach the internet or communicate with the host:
- Verify NAT rules:
```bash
sudo iptables -t nat -L -n -v
```
- Check if a MASQUERADE rule exists for your VM subnet:
```bash
sudo iptables -t nat -L POSTROUTING -n -v | grep 192.168.100.0
```
- Add the MASQUERADE rule if missing (adjust interface name and subnet as needed):
```bash
# Replace wlp0s20f3 with your actual network interface (use 'ip a' to find it)
sudo iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o wlp0s20f3 -j MASQUERADE
```
- Enable IP forwarding:
```bash
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
# Make it persistent
echo "net.ipv4.ip_forward = 1" | sudo tee /etc/sysctl.d/99-ip-forward.conf
sudo sysctl -p /etc/sysctl.d/99-ip-forward.conf
```
- Verify the libvirt network is active:
```bash
sudo virsh net-list --all
sudo virsh net-start k8s-net   # If not active
```
- Check firewall rules:
```bash
# For firewalld (RHEL/Fedora)
sudo firewall-cmd --list-all
sudo firewall-cmd --zone=libvirt --add-masquerade --permanent
sudo firewall-cmd --reload

# For ufw (Ubuntu/Debian)
sudo ufw status
sudo ufw allow from 192.168.100.0/24
```

If VMs fail to start on Hyper-V, ensure you've created a virtual switch in Hyper-V Manager before running `vagrant up`.
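To see which virtual switches Hyper-V knows about without leaving the terminal, one option (assuming you are in WSL with Windows interop enabled and your user is allowed to query Hyper-V) is to call PowerShell directly:

```bash
# powershell.exe is reachable from WSL when interop is enabled;
# Get-VMSwitch is the built-in Hyper-V cmdlet that lists virtual switches
powershell.exe -Command "Get-VMSwitch | Select-Object Name, SwitchType"
```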
If SSH connections to the VMs fail, ensure:
- SSH keys are present and their location is set in `ansible.cfg`
- Firewall rules allow SSH connections
- VM IP addresses are reachable from your host
- SELinux is not blocking connections (check with `sudo ausearch -m avc -ts recent`)
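If Ansible still cannot connect, a quick manual check (assuming the generated key and the `controlplane01` hosts entry from above) is to attempt the same connection yourself:

```bash
# -v shows which key is offered and where the handshake fails
ssh -i ./pure-ansible/id_ed25519 -v your-username@controlplane01 true
```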
If `kubeadm init` fails:
```bash
# Check kubelet logs
sudo journalctl -xeu kubelet

# Verify the container runtime is running
sudo systemctl status containerd

# Check if the required ports are available
sudo ss -tulpn | grep -E ':(6443|2379|2380|10250|10251|10252)'
```

Issues and pull requests are welcome at: https://github.com/nthskyradiated/ansible-k8s
This project is licensed under the MIT License. You are free to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the software. The above copyright notice and this permission notice shall be included in all copies or substantial portions of the software.
The software is provided "as is", without warranty of any kind. For more details, see the full license text here