Calico is a pure layer-3 networking solution for interconnecting workloads, whether containers or virtual machines. It relies on BIRD (an open-source BGP daemon), iptables and the Linux IP stack, and it also brings additional security features such as micro-segmentation. Thanks to BGP it is very scalable, and it integrates tightly with Kubernetes as a CNI plugin.
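As a quick aside: once the cluster in this post is up, you can see that BGP machinery for yourself. This is only a sketch, and it assumes you have downloaded the calicoctl binary onto a node (the Calico manifest alone does not install it):

# Show the BGP sessions that bird maintains between the nodes (run as root on a node)
calicoctl node status
# List the IP pools Calico uses for pod addressing
calicoctl get ippools -o wide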
In a previous post I did this with just one node. Now I will build a three-node cluster: one controller and two workers. I will bootstrap it with kubeadm and install the Calico CNI plugin.
Prepare all your nodes
First, check that your servers meet all the requirements listed in the official documentation.
In my case, I am using three KVM instances with Fedora 34, each with 8GB of memory, 4 vCPUs and 25GB of disk, and no swap, so all the requirements are met. Also, all nodes need different MAC addresses; be certain about this.
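If you want to double-check those requirements quickly, a few standard commands on each node are enough. A minimal sketch (eth1 is the interface name from my lab; adjust it to yours):

# Memory, CPU and disk
free -h; nproc; df -h /
# Every node needs a unique MAC address and product_uuid
cat /sys/class/net/eth1/address
cat /sys/class/dmi/id/product_uuid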
I will start by preparing the servers. I used a clean qcow2 image of Fedora 34.
Preparing the image requires a reboot of the server along the way. I will use the CRI-O container runtime. If you prefer Docker, be my guest; however, it will be a little more challenging on Fedora 34.
Here is a picture of the lab topology. It uses SR Linux with EVPN. To make a long story short, the K8s nodes only see the subnet 192.168.101.0/24 (a Layer-2 domain). If you want to see more details of the SR Linux setup, check it out on GitHub.
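If you want to confirm what a node actually sees on the wire, a quick look at the interface and routing table is enough (eth1 is my lab's fabric-facing interface; adjust the name to your setup):

ip -4 addr show dev eth1
ip route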
Prepare your server
Basically, there are some steps you have to run, with a reboot in the middle. A step-by-step process can be found at Server World – Kubernetes : Install Kubeadm.
Before the reboot, you will have to run this script (replace the DNS IP and search domain with the ones you need):
# Install base packages for iptables and others
dnf -y install iptables ethtool ebtables

# Enable IP packet forwarding
cat > /etc/sysctl.d/99-k8s-cri.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-ip6tables=1
EOF
echo -e overlay\\nbr_netfilter > /etc/modules-load.d/k8s.conf

# Fix cgroup version (force cgroup v1) and disable zram swap
sed -E -i 's/^GRUB_CMDLINE_LINUX="/GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=0 /g' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
touch /etc/systemd/zram-generator.conf

# Disable firewalld and systemd-resolved (I am using DNS 172.20.20.254; replace it with the one you need)
sed -E -i 's/^(\[main\])$/\1\ndns=default/g' /etc/NetworkManager/NetworkManager.conf
systemctl disable --now firewalld
systemctl disable --now systemd-resolved
unlink /etc/resolv.conf
touch /etc/resolv.conf
echo "nameserver 172.20.20.254" > /etc/resolv.conf
echo "search srl.nokialab.net" >> /etc/resolv.conf

# Disable SELinux
sudo setenforce 0
sudo sed -i 's/^SELINUX=.*$/SELINUX=disabled/' /etc/selinux/config

####### Reboot server
echo "rebooting server in 10 seconds"
sleep 10
reboot
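Once the server comes back from the reboot, it's worth verifying that the prep actually stuck before installing anything else. A minimal check, assuming the script above ran cleanly:

# Kernel modules and sysctls for bridged traffic
lsmod | grep -E 'overlay|br_netfilter'
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables
# cgroup v1 forced (should print tmpfs; cgroup2fs means the GRUB change did not apply)
stat -fc %T /sys/fs/cgroup
# SELinux permissive/disabled and no active swap (no output from swapon means none)
getenforce
swapon --show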
After the server is back, run the following (replace the proxy IP if you need it):
## Install packages required for Kubernetes
dnf module -y install cri-o:1.20/default
systemctl enable --now cri-o
dnf -y install kubernetes-kubeadm kubernetes-node kubernetes-client cri-tools iproute-tc container-selinux
dnf -y update
dnf -y clean all

## Set the hostname; you may not need this line
hostnamectl set-hostname $(cat /etc/hosts | grep 192.168 | awk '{print $3}')

## Set up kubelet variables
sed -E -i 's/^KUBELET_ADDRESS=.*$/KUBELET_ADDRESS="--address=0.0.0.0"/g' /etc/kubernetes/kubelet
sed -E -i 's/^# (KUBELET_PORT=.*)$/\1/g' /etc/kubernetes/kubelet
sed -E -i "s/^KUBELET_HOSTNAME=.*$/KUBELET_HOSTNAME=\"--hostname-override=$(hostname)\"/g" /etc/kubernetes/kubelet
sed -E -i 's/^(Environment="KUBELET_EXTRA_ARGS=.*)"$/\1 --container-runtime=remote --container-runtime-endpoint=unix:\/\/\/var\/run\/crio\/crio.sock"/g' /etc/systemd/system/kubelet.service.d/kubeadm.conf
dnf -y install iptables

## Remove the next 3 lines if you don't have a proxy!!!
echo 'NO_PROXY="localhost,127.0.0.1,3.0.0.0/8,192.168.0.0/16,10.0.0.0/8"' >> /etc/sysconfig/crio
echo 'HTTP_PROXY="http://172.20.20.253:3128"' >> /etc/sysconfig/crio
echo 'HTTPS_PROXY="http://172.20.20.253:3128"' >> /etc/sysconfig/crio

## Enable and start services
systemctl restart cri-o
systemctl enable kubelet.service
kubeadm config images pull
echo "you can use kubeadm init or join now"
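Before running kubeadm, you can sanity-check that CRI-O is up and reachable over its socket. A quick hedged check (crictl comes from the cri-tools package installed above):

# Point crictl at the CRI-O socket and query the runtime
crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
crictl --runtime-endpoint unix:///var/run/crio/crio.sock info | head
# The kubelet will keep restarting until kubeadm init/join writes its config; that is expected
systemctl status crio kubelet --no-pager | grep -E 'service|Active'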
Initialize your controller node with kubeadm
Now it's time to initialize your controller node. You could pass the --pod-network-cidr argument to set the pod network segment you will use with Calico; it must not overlap with any other address space in your LAN. I won't do that in my case. I have two interfaces, and eth1 is the one facing the network, so I will use --apiserver-advertise-address to make sure the right IP address is used.
kubeadm init --apiserver-advertise-address=192.168.101.30 --cri-socket=unix:///var/run/crio/crio.sock --kubernetes-version=1.20.5
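For reference, if you do want to pin a dedicated pod CIDR (as mentioned above, and as I did in the older run shown at the end of this post), the init would look roughly like this; the 10.140.0.0/16 range is just the example value from that run:

kubeadm init --apiserver-advertise-address=192.168.101.30 \
  --cri-socket=unix:///var/run/crio/crio.sock \
  --kubernetes-version=1.20.5 \
  --pod-network-cidr=10.140.0.0/16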
If you get the message "Your Kubernetes control-plane has initialized successfully!", you are on the right path.
It's time to set up your kubectl config:
# Overwrite your previous config
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
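If you are only working as root and don't mind skipping the copy, pointing KUBECONFIG at the admin config achieves the same result for the current shell:

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes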
You should see something like this:
[root@k8s-node01 ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE    VERSION
k8s-node01   NotReady   master   2m6s   v1.17.9
[root@k8s-node01 ~]# journalctl -n 1 -u kubelet
-- Logs begin at Wed 2020-10-14 16:31:40 UTC, end at Wed 2020-10-14 18:18:37 UTC. --
Oct 14 18:18:37 k8s-node01 kubelet[8352]: E1014 18:18:37.304363    8352 kubelet.go:2184] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uni
You get this message because the CNI plugin is not installed yet, so it is normal. It means it's time to install your network solution: Calico.
Calico installation
I recommend not joining your other nodes until the Calico manifest is installed!
Download the Calico networking manifest for the Kubernetes API datastore.
curl https://docs.projectcalico.org/manifests/calico.yaml -O
In this case, the CIDR will be auto-detected. However, if all your nodes are in the same Layer-2 domain, you may want to disable encapsulation. In my case, I have disabled IP-in-IP:
            - name: IP
              value: "autodetect"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Never"
            # Enable or Disable VXLAN on the default IP pool.
            - name: CALICO_IPV4POOL_VXLAN
              value: "Never"
            # Set MTU for tunnel device used if ipip is enabled
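If you prefer to script the change instead of editing calico.yaml by hand, a small sed sketch like the following works. It assumes the default layout of the manifest, where the "Always" value sits on the line right after the CALICO_IPV4POOL_IPIP name, so check with grep before and after:

# Show the current setting
grep -n -A1 'CALICO_IPV4POOL_IPIP' calico.yaml
# Flip the value on the line that follows the env var name
sed -i '/CALICO_IPV4POOL_IPIP/{n;s/"Always"/"Never"/}' calico.yaml
# Confirm the change
grep -n -A1 'CALICO_IPV4POOL_IPIP' calico.yaml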
Now it's time to create the resources from the manifest:
kubectl create -f calico.yaml
You should see something like this:
kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS              RESTARTS   AGE
kube-system   calico-kube-controllers-7b4f58f565-pnn62   0/1     ContainerCreating   0          29s
kube-system   calico-node-glznw                          0/1     Init:2/3            0          29s
kube-system   coredns-6955765f44-2qc2k                   0/1     ContainerCreating   0          30m
kube-system   coredns-6955765f44-spzcn                   0/1     ContainerCreating   0          30m
kube-system   etcd-k8s-node01                            1/1     Running             0          30m
kube-system   kube-apiserver-k8s-node01                  1/1     Running             0          30m
kube-system   kube-controller-manager-k8s-node01         1/1     Running             0          30m
kube-system   kube-proxy-bj4b6                           1/1     Running             0          30m
kube-system   kube-scheduler-k8s-node01                  1/1     Running             0          30m
After a few minutes, all Calico pods and CoreDNS should be in Running status. Then you can check whether the node is Ready:
kubectl get nodes -o wide
Thu Oct 15 21:10:41 2020
NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8s-node01   Ready    master   87m   v1.17.9   10.10.10.101   <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://1.13.1
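If you'd rather not eyeball the pod list while waiting, kubectl can block until the pods report Ready. A hedged one-liner, assuming the manifest's default labels (k8s-app=calico-node for Calico, k8s-app=kube-dns for CoreDNS):

kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=300s
kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=300s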
Joining workers to the cluster
After you have initialized the controller, installed the Calico manifest and confirmed its pods are running, you can join the rest of the servers to the cluster. Do not forget to prep those servers with the scripts in the first section of this post. Just go to every server and run the kubeadm join command suggested by the kubeadm init output.
kubeadm join 10.10.10.101:6443 --token ih67lm.uy4g5skeokaqbwfu \
    --discovery-token-ca-cert-hash sha256:14d3dc6bb9b4d4b50c6909214fa9bf2a2ea16ef96fd3b9aad946ee5c1f9d50c8
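Note that the bootstrap token in that command expires after 24 hours by default. If you join a worker later and the token is gone, you can print a fresh join command from the controller:

kubeadm token create --print-join-command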
Then wait for the servers to be ready.
kubectl get nodes -o wide
Thu Oct 15 21:10:41 2020
NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8s-node01   Ready    master   87m   v1.17.9   10.10.10.101   <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://1.13.1
k8s-node02   Ready    <none>   74s   v1.17.9   10.10.10.102   <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://1.13.1
Time to test
This time we'll create a deployment with 4 replicas and expose it with a NodePort service.
# file name: hello-node-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node-deploy
spec:
  selector:
    matchLabels:
      run: hello-node-svc
  replicas: 4
  template:
    metadata:
      labels:
        run: hello-node-svc
    spec:
      containers:
      - name: hello-node-app
        image: pinrojas/hello-node:v1
        ports:
        - containerPort: 8080
          protocol: TCP
Let's start by creating the deployment:
kubectl create -f hello-node-deploy.yaml
You should see something like this:
[root@k8s-node01 ~]# kubectl get pods --all-namespaces --output=wide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE    IP               NODE         NOMINATED NODE   READINESS GATES
default       hello-node-deploy-8568b8dfb6-67f8x         1/1     Running   0          14m    10.140.135.129   k8s-node03   <none>           <none>
default       hello-node-deploy-8568b8dfb6-kltsv         1/1     Running   0          14m    10.140.135.130   k8s-node03   <none>           <none>
default       hello-node-deploy-8568b8dfb6-njvgc         1/1     Running   0          14m    10.140.58.193    k8s-node02   <none>           <none>
default       hello-node-deploy-8568b8dfb6-nxvtv         1/1     Running   0          14m    10.140.58.192    k8s-node02   <none>           <none>
kube-system   calico-kube-controllers-7b4f58f565-pnn62   1/1     Running   0          95m    10.140.85.194    k8s-node01   <none>           <none>
kube-system   calico-node-glznw                          1/1     Running   0          95m    10.10.10.101     k8s-node01   <none>           <none>
kube-system   calico-node-pvq8s                          1/1     Running   0          34m    10.10.10.103     k8s-node03   <none>           <none>
kube-system   calico-node-qgr5m                          1/1     Running   0          39m    10.10.10.102     k8s-node02   <none>           <none>
kube-system   coredns-6955765f44-2qc2k                   1/1     Running   0          125m   10.140.85.192    k8s-node01   <none>           <none>
kube-system   coredns-6955765f44-spzcn                   1/1     Running   0          125m   10.140.85.193    k8s-node01   <none>           <none>
kube-system   etcd-k8s-node01                            1/1     Running   0          124m   10.10.10.101     k8s-node01   <none>           <none>
kube-system   kube-apiserver-k8s-node01                  1/1     Running   0          124m   10.10.10.101     k8s-node01   <none>           <none>
kube-system   kube-controller-manager-k8s-node01         1/1     Running   0          124m   10.10.10.101     k8s-node01   <none>           <none>
kube-system   kube-proxy-bj4b6                           1/1     Running   0          125m   10.10.10.101     k8s-node01   <none>           <none>
kube-system   kube-proxy-bvx78                           1/1     Running   0          34m    10.10.10.103     k8s-node03   <none>           <none>
kube-system   kube-proxy-fmk8h                           1/1     Running   0          39m    10.10.10.102     k8s-node02   <none>           <none>
kube-system   kube-scheduler-k8s-node01                  1/1     Running   0          124m   10.10.10.101     k8s-node01   <none>           <none>
Now you can expose it with a NodePort service:
kubectl expose deployment hello-node-deploy --type=NodePort --name=hello-node-nport
Let's check which port it was published on:
[root@k8s-node01 ~]# kubectl describe services hello-node-nport
Name:                     hello-node-nport
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 run=hello-node-svc
Type:                     NodePort
IP:                       10.99.66.126
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30083/TCP
Endpoints:                10.140.135.129:8080,10.140.135.130:8080,10.140.58.192:8080 + 1 more...
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
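If you just want the port number (handy for scripting the curl test below), a jsonpath query is enough:

kubectl get service hello-node-nport -o jsonpath='{.spec.ports[0].nodePort}{"\n"}'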
And let's test it with curl:
[root@k8s-node01 ~]# curl http://10.10.10.102:30083
Hello World version ONE! Host/Pod: hello-node-deploy-8568b8dfb6-njvgc
[root@k8s-node01 ~]# curl http://10.10.10.102:30083
Hello World version ONE! Host/Pod: hello-node-deploy-8568b8dfb6-kltsv
[root@k8s-node01 ~]# curl http://10.10.10.102:30083
Hello World version ONE! Host/Pod: hello-node-deploy-8568b8dfb6-kltsv
[root@k8s-node01 ~]# curl http://10.10.10.102:30083
Hello World version ONE! Host/Pod: hello-node-deploy-8568b8dfb6-67f8x
[root@k8s-node01 ~]# curl http://10.10.10.102:30083
Hello World version ONE! Host/Pod: hello-node-deploy-8568b8dfb6-nxvtv
[root@k8s-node01 ~]# curl http://10.10.10.102:30083
Hello World version ONE! Host/Pod: hello-node-deploy-8568b8dfb6-nxvtv
And we’re done. See ya!
Additional info
Here is the output you should expect from the kubeadm init command:
[root@k8s-node01 ~]# kubeadm init --pod-network-cidr=10.140.0.0/16
W1015 19:40:51.331555   14905 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1015 19:40:51.332165   14905 version.go:102] falling back to the local client version: v1.17.9
W1015 19:40:51.332876   14905 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1015 19:40:51.332928   14905 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.9
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.10.101]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-node01 localhost] and IPs [10.10.10.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-node01 localhost] and IPs [10.10.10.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1015 19:42:32.993240   14905 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1015 19:42:32.999184   14905 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 44.012968 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-node01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-node01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ih67lm.uy4g5skeokaqbwfu
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.10.10.101:6443 --token ih67lm.uy4g5skeokaqbwfu \
    --discovery-token-ca-cert-hash sha256:14d3dc6bb9b4d4b50c6909214fa9bf2a2ea16ef96fd3b9aad946ee5c1f9d50c8
And here is the output you should expect from the kubeadm join command:
[root@k8s-node02 ~]# kubeadm join 10.10.10.101:6443 --token ih67lm.uy4g5skeokaqbwfu \
>     --discovery-token-ca-cert-hash sha256:14d3dc6bb9b4d4b50c6909214fa9bf2a2ea16ef96fd3b9aad946ee5c1f9d50c8
W1015 21:09:02.103753   22440 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.