MetalLB is a young project that provides a load-balancer implementation for bare-metal Kubernetes clusters, using standard routing protocols. Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
In this post, “Ingress and MetalLB,” we’ll show some configuration examples of how to use an ingress controller together with a load-balancer service built with MetalLB. In this case, I am using the NGINX Ingress Controller.
If you want to see how this service was implemented, read my previous post: Calico and MetalLB working together with BGP
The next picture shows you the topology of this lab using Nokia SRLinux:
Ingress Install
Just run the manifest in my repo as follows:
kubectl apply -f ingress-install.yml
The only difference from the manifest currently posted on the NGINX Ingress Controller site is that I am defining three replicas (check the next extract). Then, I will expose the service as a LoadBalancer.
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  replicas: 3
Wait for the controller pods to be in a ‘Running’ state:
[root@ctl-a1 ~]# kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-7k9wc        0/1     Completed   0          7d17h
ingress-nginx-admission-patch-6trtw         0/1     Completed   1          7d17h
ingress-nginx-controller-57ffff5864-g4trp   1/1     Running     0          7d2h
ingress-nginx-controller-57ffff5864-gpxnp   1/1     Running     0          7d17h
ingress-nginx-controller-57ffff5864-q5szk   1/1     Running     0          7d2h
Now, let’s create the service:
kubectl expose deploy ingress-nginx-controller -n ingress-nginx
kubectl patch service ingress-nginx-controller -p '{"spec":{"type": "LoadBalancer", "externalTrafficPolicy":"Local"}}' -n ingress-nginx
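If you prefer a declarative approach, the two commands above correspond roughly to applying a Service manifest like the following. This is a minimal sketch, not the official manifest: the port numbers assume the controller’s standard 80/443 listeners, and the selector labels match the deployment extract shown earlier.

```yaml
# Hypothetical declarative equivalent of "kubectl expose" plus the patch
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  # Local keeps the client source IP; only nodes hosting a controller pod answer
  externalTrafficPolicy: Local
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```

With MetalLB running, applying this manifest makes the controller reachable on an address from the MetalLB pool, just like the patched service.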
Let’s check this out:
[root@ctl-a1 ~]# kubectl get svc -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.111.229.10   10.254.254.243   80:32196/TCP,443:30661/TCP   7d17h
ingress-nginx-controller-admission   ClusterIP      10.100.46.67    <none>           443/TCP                      7d17h
Create hello-node deployment Example
To test our setup of Ingress and MetalLB, we’ll create two images to use as examples. Both return the hostname of the pod, but display either “Service A” or “Service B”. Check the following server.js:
var http = require('http');
var handleRequest = function(request, response) {
  console.log('Received request for URL: ' + request.url);
  response.writeHead(200);
  response.end('Hello Service B! Host/Pod: ' + process.env.HOSTNAME + '\n');
};
var www = http.createServer(handleRequest);
www.listen(8080);
And then, we use this Dockerfile:
## It'll display: "Hello World version ONE! Host/Pod: hello-node-3439300230-bg6ew"
## You can use it to test ReplicaSet and Ingress and how it's going over different containers.
## Details at cloud-native-everything.com
## Save it to a file named Dockerfile and build it, for example: "docker build -t gcr.io/k8s-helloworld-142719/hello-node:v1 ."
FROM node:4.4
EXPOSE 8080
COPY server.js .
CMD node server.js
Then, I built and pushed those as two different images in my registry:
- pinrojas/hello-svca:v1
- pinrojas/hello-svcb:v1
Then I applied this manifest to create the deployments and services, and got the following output:
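The actual manifest lives in my repo; as a reference, a deployment/service pair for one of the images would look roughly like the sketch below. This is illustrative only: the labels and service port are assumptions, the replica count of 4 matches the pod listing that follows, and the container port 8080 matches the server.js above.

```yaml
# Hedged sketch of one deployment/service pair (names and labels are illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-svcb
spec:
  replicas: 4
  selector:
    matchLabels:
      app: hello-svcb
  template:
    metadata:
      labels:
        app: hello-svcb
    spec:
      containers:
        - name: hello-svcb
          image: pinrojas/hello-svcb:v1
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-svcb
spec:
  selector:
    app: hello-svcb
  ports:
    - port: 80
      targetPort: 8080
```

A second pair with the hello-svca image would follow the same shape.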
[root@ctl-a1 ~]# kubectl get pods | grep svc
hello-svca-5dddf9b9bb-8sh5b   1/1   Running   0   7d2h
hello-svca-5dddf9b9bb-cjmpr   1/1   Running   0   7d2h
hello-svca-5dddf9b9bb-mkjlf   1/1   Running   0   7d2h
hello-svca-5dddf9b9bb-v7rkj   1/1   Running   0   7d2h
hello-svcb-5cb5649f58-2cmvw   1/1   Running   0   7d2h
hello-svcb-5cb5649f58-f6p45   1/1   Running   0   7d2h
hello-svcb-5cb5649f58-fjdp9   1/1   Running   0   7d2h
hello-svcb-5cb5649f58-qj6dk   1/1   Running   0   7d2h
Optionally, you can patch the services to use LoadBalancer instead of NodePort.
kubectl patch service hello-svcb -p '{"spec":{"type": "LoadBalancer", "externalTrafficPolicy":"Local"}}'
kubectl patch service hello-svca -p '{"spec":{"type": "LoadBalancer", "externalTrafficPolicy":"Local"}}'
Final Results
Finally, to test this setup of Ingress and MetalLB, I’ll curl my ingress via the LoadBalancer IP from a server connected to my border leaf (it must be an external host). Remember, my LoadBalancer service has been deployed using MetalLB with BGP.
I will add the LoadBalancer service IP to my /etc/hosts as follows (use the same IP for both hostnames; remember, Ingress must split the traffic and forward it to the correct deployment based on the host rule):
bash-5.1# cat /etc/hosts
127.0.0.1       localhost
::1             localhost ip6-localhost ip6-loopback
fe00::0         ip6-localnet
ff00::0         ip6-mcastprefix
ff02::1         ip6-allnodes
ff02::2         ip6-allrouters
172.20.20.3     client-1
10.254.254.243  myservicea.foo.org
10.254.254.243  myserviceb.foo.org
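For the host rule to split traffic this way, an Ingress resource along these lines must exist in the cluster. This is a sketch under assumptions, not my exact manifest: it assumes the networking.k8s.io/v1 API, the hello-svca/hello-svcb service names used above, service port 80, and an ingress class named nginx.

```yaml
# Hypothetical Ingress with one host rule per service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: myservicea.foo.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-svca
                port:
                  number: 80
    - host: myserviceb.foo.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-svcb
                port:
                  number: 80
```

NGINX matches the HTTP Host header against these rules, so both hostnames can share the single LoadBalancer IP while landing on different backends.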
And then, I will test it as follows:
bash-5.1# for i in {1..20}; do curl http://myservicea.foo.org; done
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-8sh5b
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-mkjlf
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-8sh5b
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-v7rkj
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-8sh5b
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-cjmpr
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-v7rkj
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-cjmpr
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-v7rkj
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-mkjlf
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-v7rkj
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-cjmpr
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-mkjlf
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-8sh5b
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-v7rkj
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-cjmpr
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-v7rkj
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-mkjlf
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-mkjlf
Hello Service A! Host/Pod: hello-svca-5dddf9b9bb-v7rkj
bash-5.1# for i in {1..20}; do curl http://myserviceb.foo.org; done
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-2cmvw
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-fjdp9
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-f6p45
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-f6p45
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-qj6dk
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-qj6dk
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-2cmvw
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-f6p45
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-qj6dk
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-f6p45
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-f6p45
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-2cmvw
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-2cmvw
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-f6p45
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-2cmvw
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-qj6dk
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-2cmvw
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-qj6dk
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-2cmvw
Hello Service B! Host/Pod: hello-svcb-5cb5649f58-fjdp9
See ya!