[Solved] Vagrant Kubeadm Cluster Crashes on Reboot
Issue: If you set up a Kubernetes cluster in Vagrant using kubeadm on an Ubuntu box, the master node services may fail to start after a VM reboot. If you check the kubelet logs, you will see connection refused errors for the etcd service.
This issue is primarily due to swap. Kubeadm needs swap to be off to function properly. Sometimes, even if you make an fstab entry to disable swap, it might not work as expected.
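For reference, one common way to make that fstab change is to comment out any swap entries so they are not mounted at boot. This is a sketch that assumes a standard Ubuntu /etc/fstab; back up the file before editing it:

```shell
# Back up /etc/fstab, then comment out any uncommented swap
# entries so swap is not re-enabled at the next boot.
sudo cp /etc/fstab /etc/fstab.bak
sudo sed -i '/\sswap\s/ s/^\([^#]\)/#\1/' /etc/fstab
```

The sed pattern only prefixes `#` on lines that contain a swap entry and are not already commented, so running it twice is harmless.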
When you set up the cluster, you might have run the following command. It disables swap only for the running session; if you restart the VM, swap gets enabled again.
sudo swapoff -a
To solve the swapoff issue, here is what you can do.
Create a crontab entry that disables swap on reboot. You can add the crontab entry using the following command.
(crontab -l 2>/dev/null; echo "@reboot /sbin/swapoff -a") | crontab -
If you restart the VM after setting up the crontab, your Kubernetes master services will come up automatically.
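To confirm the fix took effect after a reboot, you can check that swap is actually off. On a correctly configured node, SwapTotal reads 0 and /proc/swaps lists no active swap devices:

```shell
# SwapTotal should read 0 kB once swap is fully disabled.
grep -i swaptotal /proc/meminfo
# /proc/swaps shows only the header line when no swap is active.
cat /proc/swaps
```

If swap is still on, check that the crontab entry survived (`crontab -l`) and that no other unit re-enables swap at boot.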
If you want an automated kubeadm cluster on Vagrant, check out this Kubeadm Vagrant GitHub repository.
Also, check the Vagrant networking issue.
If you are using Vagrant for Kubernetes certification preparation, check out the Linux Foundation coupons to save up to 50% on certifications.