Calico is a pure layer-3 networking solution to interconnect workloads such as containers or virtual servers. It relies on BIRD (an open source BGP project), iptables and the Linux IP stack. It also brings additional security features like micro-segmentation. Thanks to BGP it is very scalable, and it is tightly integrated with Kubernetes as a CNI plugin.
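As a quick taste of the micro-segmentation side (not needed for the rest of this post), here is a minimal sketch of a standard Kubernetes NetworkPolicy that Calico can enforce; the namespace and the app labels are hypothetical:

cat <<EOF | kubectl apply -f -
# Only pods labeled app=frontend may reach pods labeled app=backend on TCP 8080
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
EOF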
Now we’ll try the same setup using kubeadm: a one-node Kubernetes cluster with kubeadm and Calico. kubeadm is an open source, bare-bones tool to build Kubernetes clusters. It’s a more manual process, and we’ll build a one-node Kubernetes cluster.
Prepare your node
First, check that your server meets all the requirements in the official documentation.
In my case, I am using a KVM instance with 8GB of memory, 4 vCPUs and 25GB of disk, running CentOS 7.8 with no swap, which meets all the requirements. More details of the configuration are in my XML file uploaded to GitHub.
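Before installing anything, a couple of quick sanity checks help confirm the node matches those numbers (a sketch; adjust to your environment):

nproc        # expect 4 vCPUs
free -h      # expect around 8GB of memory and no swap in use
df -h /      # expect around 25GB of disk
# If swap is still enabled, turn it off (kubelet requires it off)
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab   # comment out swap entries so it stays off after reboot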
I’ve run the following script to prepare the node and install the required packages for Kubernetes 1.21.2: kubelet, kubeadm and kubectl. I am pinning this release because, in the Telco industry, versions normally have to be managed carefully. One more thing: in my case I am using a proxy to download packages and images; unset those variables in your profile to avoid issues.
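A quick way to clear them in the current shell looks like this (a sketch; match the variable names your profile actually exports):

# Clear common proxy variables for the current shell session only
unset http_proxy https_proxy no_proxy HTTP_PROXY HTTPS_PROXY NO_PROXY

With that done, here is the script: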
#!/bin/sh
# Remember to remove/unset proxy settings in your ENV before running

# Set up the Kubernetes yum repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
# exclude=kube*
EOF

# Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Enable iptables bridge routing
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

# Install kubelet, kubeadm and kubectl pinned to version 1.21.2
yum -y clean all
yum -y install kubelet-1.21.2 kubeadm-1.21.2 kubectl-1.21.2

# These args may be deprecated, but I had to set them to make kubelet work;
# otherwise I got an error message regarding the cgroups definition.
echo 'KUBELET_EXTRA_ARGS=--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice' > /etc/sysconfig/kubelet

systemctl restart kubelet
systemctl enable --now kubelet
systemctl status kubelet
You may want to check that those packages were actually installed:
# yum info kubeadm-1.21.2 kubelet-1.21.2 kubectl-1.21.2 | grep "Name\|Version\|Size"
Name        : kubeadm
Version     : 1.21.2
Size        : 43 M
Name        : kubectl
Version     : 1.21.2
Size        : 44 M
Name        : kubelet
Version     : 1.21.2
Size        : 113 M
Initialize your node with kubeadm
Now it’s time to initialize your node. Set the --pod-network-cidr argument to the network segment you will use with Calico. It must not overlap with any other network address space in your LAN.
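To make sure the range is actually free, you can take a quick look at the addresses and routes the node already knows about (a minimal sketch):

# Show IPv4 addresses and routes currently in use on the node
ip -4 addr show
ip route

Once you have confirmed there is no overlap with 10.140.0.0/16, run the init: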
kubeadm init --pod-network-cidr=10.140.0.0/16
If you get the message “Your Kubernetes control-plane has initialized successfully!”, you are on the right path.
It’s time to set up your kubectl configuration:
# overwrite your previous config
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
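With the kubeconfig in place, kubectl should be able to reach the API server; a quick check looks like this (a sketch):

kubectl cluster-info
kubectl get nodes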
You should see something like this:
[root@k8s-node01 ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE    VERSION
k8s-node01   NotReady   master   2m6s   v1.21.2
[root@k8s-node01 ~]# journalctl -n 1 -u kubelet
-- Logs begin at Wed 2020-10-14 16:31:40 UTC, end at Wed 2020-10-14 18:18:37 UTC. --
Oct 14 18:18:37 k8s-node01 kubelet[8352]: E1014 18:18:37.304363    8352 kubelet.go:2184] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uni
It’s normal to get this message, because the CNI plugin is not yet installed. It means it’s time to proceed with installing your network solution: Calico.
Calico installation
Let’s start by downloading the required files for the operator and resource definitions:
# Using a proxy in my case; remove or modify that argument
curl --proxy http://192.168.1.130:3128 https://docs.projectcalico.org/manifests/tigera-operator.yaml -o tigera-operator.yaml
curl --proxy http://192.168.1.130:3128 https://docs.projectcalico.org/manifests/custom-resources.yaml -o custom-resources.yaml
You must change the CIDR in the resources definition to match the one you entered when you initialized your node:
# This section includes base Calico installation configuration.
# For more information, see: https://docs.projectcalico.org/v3.16/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 10.140.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
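One way to make that change without opening an editor is a quick sed, assuming the downloaded custom-resources.yaml still carries Calico’s default 192.168.0.0/16 pool (a sketch):

# Swap the default pool CIDR for the one used at kubeadm init
sed -i 's#192.168.0.0/16#10.140.0.0/16#' custom-resources.yaml
grep cidr custom-resources.yaml   # confirm the change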
Now, create the operator and resources:
kubectl create -f tigera-operator.yaml
kubectl create -f custom-resources.yaml
Wait about 2 or 3 minutes for the pods to be in “running” status. You should see something like this:
[root@k8s-node01 ~]# kubectl get pods --all-namespaces
NAMESPACE         NAME                                       READY   STATUS    RESTARTS   AGE
calico-system     calico-kube-controllers-6b4855598b-lwc98   1/1     Running   0          2m22s
calico-system     calico-node-xsrkd                          1/1     Running   0          2m22s
calico-system     calico-typha-565684b598-95ldb              1/1     Running   0          2m23s
kube-system       coredns-6955765f44-n5nkr                   1/1     Running   0          5m34s
kube-system       coredns-6955765f44-vlkv9                   1/1     Running   0          5m34s
kube-system       etcd-k8s-node01                            1/1     Running   0          5m26s
kube-system       kube-apiserver-k8s-node01                  1/1     Running   0          5m26s
kube-system       kube-controller-manager-k8s-node01         1/1     Running   0          5m26s
kube-system       kube-proxy-pv8qf                           1/1     Running   0          5m35s
kube-system       kube-scheduler-k8s-node01                  1/1     Running   0          5m26s
tigera-operator   tigera-operator-5b466c74ff-9xlx5           1/1     Running   0          2m42s
[root@k8s-node01 ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-node01   Ready    master   5m58s   v1.21.2
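If you’d rather not poll by hand, you can watch the Calico pods settle (a sketch):

# Stream pod status changes in the calico-system namespace; Ctrl-C to stop
kubectl get pods -n calico-system -w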
And finally, you will need to untaint your node. Because this is a control-plane node, by default no pods can be scheduled on it, so we need to change that condition.
kubectl taint nodes --all node-role.kubernetes.io/master-
And you should see something like this:
[root@k8s-node01 ~]# kubectl taint nodes --all node-role.kubernetes.io/master- node/k8s-node01 untainted
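You can double-check that the taint is really gone (a sketch):

kubectl describe node k8s-node01 | grep -i taints
# expected output: Taints: <none>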
Time to test
I’ll create a simple pod based on my hello-node image and test it.
# hello-node-pod.yaml
# use :set paste in vi to turn off autoindent (depends on your .vimrc)
apiVersion: v1
kind: Pod
metadata:
  name: hello-node-app
  labels:
    app: hello-node-app
    type: front-end
spec:
  containers:
  - name: hello-node-app
    image: pinrojas/hello-node:v1
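Save the manifest (assumed here as hello-node-pod.yaml, matching the comment above) and create the pod:

kubectl apply -f hello-node-pod.yaml
kubectl get pod hello-node-app -o wide   # wait for STATUS Running and note the pod IP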
If you check the pod info, you should get something like this:
[root@k8s-node01 ~]# kubectl describe pod hello-node-app | grep podIP
Annotations:  cni.projectcalico.org/podIP: 10.140.85.197/32
              cni.projectcalico.org/podIPs: 10.140.85.197/32
Because Calico is a pure layer-3 solution and isn’t using an overlay here, you should be able to reach the pod IP directly. You can test the pod like this:
[root@k8s-node01 ~]# curl http://10.140.85.197:8080
Hello World version ONE!
Host/Pod: hello-node-app
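If you’re curious how this works inside the node, Calico programs a host route pointing the pod IP at its veth interface; a quick way to see it (a sketch, the interface name will differ on your system):

ip route | grep 10.140.85.197
# something like: 10.140.85.197 dev cali12345abcdef scope link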
And we’re done. See ya!
Source Info
Install a Kubernetes cluster with kubeadm, from the kubernetes.io documentation
Install Calico manually after kubeadm, from the projectcalico.org documentation
One-node k8s setup with kubeadm and Calico: Additional info
If you want to see the output you should get from the kubeadm init command, here it is:
[root@k8s-node01 ~]# kubeadm init --pod-network-cidr=10.140.0.0/16
W1014 20:44:49.805328   18596 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1014 20:44:49.806489   18596 version.go:102] falling back to the local client version: v1.17.9
W1014 20:44:49.807138   18596 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1014 20:44:49.807189   18596 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.9
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-node01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.10.101]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-node01 localhost] and IPs [10.10.10.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-node01 localhost] and IPs [10.10.10.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1014 20:45:19.685405   18596 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1014 20:45:19.690873   18596 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 42.513350 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-node01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-node01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: rw43y5.p0phmwb0mpyln8oi
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.10.10.101:6443 --token rw43y5.p0phmwb0mpyln8oi \
    --discovery-token-ca-cert-hash sha256:8a5daec06b1f74a7d02b4c09a97008c8090da0804c4fed7d3bf160e25ee946fe
This is the output you should get when you create the operator and resources for Calico:
[root@k8s-node01 ~]# kubectl create -f tigera-operator.yaml
namespace/tigera-operator created
podsecuritypolicy.policy/tigera-operator created
serviceaccount/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
[root@k8s-node01 ~]# kubectl create -f custom-resources.yaml
installation.operator.tigera.io/default created