We have a small server and we’re going to set up some VMs in KVM for a Kubernetes homelab. I am planning to use OpenVSwitch for my virtual network, set up a proxy and a DNS, and finally install Rancher and create a cluster with Calico.
In this post I am sharing my notes about installing Rancher and setting up a Kubernetes cluster with two controllers and two workers.
In the post, I will go over:
- Installing Rancher in a container for testing purposes
- What you need to get started and create your first Kubernetes cluster
- How to clean your nodes up if you mess things up and need to start over
This is one of a series of posts about building this lab. Here you have the list of all of them:
Install Rancher in a container for testing purposes
Rancher in production should be installed on a cluster of three nodes. However, for testing purposes you have the option of running it in a single container with a self-signed certificate, which is what we will do here. I am assuming you already have Docker installed on your server. Also, make sure ports 80 and 443 are not being used by any other application. Let’s create our container:
sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:latest
You should see something like this:
# sudo docker ps
CONTAINER ID   IMAGE                    COMMAND           CREATED         STATUS         PORTS                                      NAMES
a3c4930525f4   rancher/rancher:latest   "entrypoint.sh"   2 minutes ago   Up 2 minutes   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   sad_poitras
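Optionally, before opening the browser, you can check that Rancher is already answering on port 443. A quick sanity check against its /ping health endpoint should do (the -k flag is needed because of the self-signed certificate):

curl -k https://localhost/ping
# it should answer with: pong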
Now is a good time for your first backup. You need to stop the container, create a data container from its volumes, and copy them to a tar file, like this:
sudo docker stop sad_poitras
sudo docker create --volumes-from sad_poitras --name rancher-data-20200816 rancher/rancher:latest
sudo docker run --volumes-from rancher-data-20200816 -v $PWD:/backup:z busybox tar pzcvf /backup/rancher-data-backup-2.4-20200816.tar.gz /var/lib/rancher
sudo docker start sad_poitras
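Before relying on that backup, it doesn’t hurt to confirm the tarball was actually written and is readable, for example by listing a few of its entries:

ls -lh rancher-data-backup-2.4-20200816.tar.gz
tar tzvf rancher-data-backup-2.4-20200816.tar.gz | head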
If you need to restore, you can do the following:
sudo docker stop sad_poitras
sudo docker run --volumes-from sad_poitras -v $PWD:/backup \
  busybox sh -c "rm /var/lib/rancher/* -rf && \
  tar pzxvf /backup/rancher-data-backup-2.4-20200816.tar.gz"
sudo docker start sad_poitras
Getting started with Rancher
Now it’s time to get into your Rancher portal. Just point your browser at your server on port 443. Remember the certificates are self-signed, so your browser will give you a security heads-up. Just accept the risk and go ahead. You should see something like this:
Installing rancher and creating k8s cluster: welcome page
Set the admin password and you’re ready to create your first cluster.
Cleaning up all K8s cluster nodes
If this is your first install and you haven’t created any cluster yet, skip this section. If you need to recreate your cluster, you have to properly clean everything up before starting over. It’s also important not to reuse the same cluster name when you recreate it. Even after cleaning up, something went wrong when I used the same name and I had to go back and create a new one with a different name. Maybe there’s a way to avoid this issue; I’ll let you figure that out by yourself.
First of all, clean up all containers, images and volumes by running these commands on every node:
docker rm -f $(docker ps -qa)
docker rmi -f $(docker images -q)
docker volume rm $(docker volume ls -q)
for mount in $(mount | grep tmpfs | grep '/var/lib/kubelet' | awk '{ print $3 }') /var/lib/kubelet /var/lib/rancher; do umount $mount; done
rm -rf /etc/ceph \
  /etc/cni \
  /etc/kubernetes \
  /opt/cni \
  /opt/rke \
  /run/secrets/kubernetes.io \
  /run/calico \
  /run/flannel \
  /var/lib/calico \
  /var/lib/etcd \
  /var/lib/cni \
  /var/lib/kubelet \
  /var/lib/rancher/rke/log \
  /var/log/containers \
  /var/log/kube-audit \
  /var/log/pods \
  /var/run/calico
And then, after that, restart all the nodes to flush out any leftover iptables rules, and we are ready to go.
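If you have SSH access to all the nodes, a small loop like this saves a few clicks; the hostnames here are just the ones from my lab, so adjust them to yours:

for node in control1 control2 worker1 worker2; do
  ssh root@$node reboot
done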
Let’s create our first cluster
Here you have a high-level view of the topology. Everything runs on one CentOS server with KVM instances and Linux/OVS bridges.
Installing rancher and creating k8s cluster: high level topology view
Back to the rancher portal:
- Hit “Cluster” and then “Add Cluster”
Installing rancher and creating k8s cluster: add cluster
- Then you need to choose “From existing nodes (Custom)”, because in this case we already have our KVM instances created.
Installing rancher and creating k8s cluster: select custom nodes
- Set the name of your cluster
Installing rancher and creating k8s cluster: name your cluster
- Select your network provider. In this case I will use Calico.
Installing rancher and creating k8s cluster: select network provider
- Now you can hit “Next”, unless you want to do something else (e.g. add labels)
We are now going to install the control servers. These servers will also run etcd. For high availability I should set up three controllers, since etcd needs a majority of members to keep quorum and two members still can’t survive a node failure. However, in this case I will only set up two, because I don’t have much capacity on the host, but I didn’t want just one either, so I get the experience of setting up more than one controller. Rancher gives us a really easy way to set up these controllers: just copy the docker command it generates to install the agent with the right options (a sketch of what that command looks like is shown after the list below):
- On the next screen, unselect “worker” and select “etcd” and “control plane”
Installing rancher and creating k8s cluster: installing controller
- Copy and run this command directly in the VM assigned as controller. In my case, it’s control1.
- Wait until the server is completely registered; that process can take several minutes.
- Then, when it’s done, run the same command on the second controller.
- Wait a few minutes for the registration to finish and the server to become active.
- Then, unselect “etcd” and “control plane” and select “worker”.
- Run that command on all the workers. You don’t need to wait for one to finish before starting the next.
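For reference, the registration command Rancher generates looks roughly like the sketch below. Don’t copy this one: the server URL, token and CA checksum are placeholders, and the role flags at the end change depending on the checkboxes you selected, so always use the command from your own portal.

sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:v2.4.x \
  --server https://<your-rancher-server> \
  --token <registration-token> \
  --ca-checksum <ca-checksum> \
  --etcd --controlplane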
If everything goes OK, you should see all the nodes in the cluster like this:
Installing rancher and creating k8s cluster: nodes status after installation
And we are done. You can go straight into the cluster and run any kubectl command, or use the Rancher catalog of apps to do your first deployment.
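If you prefer working from your own terminal instead of the kubectl shell built into the Rancher UI, you can download the kubeconfig file from the cluster page and point kubectl at it. Something like this, where the file path is just an example of wherever you saved it:

export KUBECONFIG=~/Downloads/homelab-cluster.yaml
kubectl get nodes -o wide
kubectl -n kube-system get pods   # the Calico pods should be Running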
Don’t forget to comment and share. See ya!