
kubernetes

Setting up a test cluster

This guide assumes a small test cluster consisting of one master node and two worker nodes. The basic source for this guide was : visit

The master node does not work with 1024MB of memory; it complains that it needs at least 1700MB, so I created a Vagrantfile with 2048MB as the default. The worker nodes do not need it, but I enlarged them as well. So this test cluster takes at least 6GB of the 32GB (on both the laptop and hoek).

This is the vagrantfile used :

vagrantfile for cluster
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Test cluster: one master node and two worker nodes on a private network.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"

  config.vm.define "master" do |master|
    master.vm.network "private_network", ip: "192.168.55.10"
    master.vm.network "forwarded_port", guest: 22, host:5510
    master.vm.provider "virtualbox" do |vb|
      vb.gui = false
      vb.name = "k8master"
      vb.memory = "2048"
    end
  end

  config.vm.define "node1" do |node1|
    node1.vm.network "private_network", ip: "192.168.55.20"
    node1.vm.network "forwarded_port", guest: 22, host:5520
    node1.vm.provider "virtualbox" do |vb|
      vb.gui = false
      vb.name = "k8nd1"
      vb.memory = "2048"
    end
  end
  config.vm.define "node2" do |node2|
    node2.vm.network "private_network", ip: "192.168.55.30"
    node2.vm.network "forwarded_port", guest: 22, host:5530
    node2.vm.provider "virtualbox" do |vb|
      vb.gui = false
      vb.name = "k8nd2"
      vb.memory = "2048"
    end
  end
end
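
To bring the machines up, the usual Vagrant workflow applies; a minimal sketch :

bring up the cluster
vagrant up            # boots master, node1 and node2
vagrant status        # all three machines should report "running"
vagrant ssh master    # get a shell on the master (or node1 / node2)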

setting up klopt cluster

This guide was made after the certificate for *.klopt.org expired in February 2021. Since the cluster never worked again, we rebuilt it from scratch. This guide covers doc, cheatsheets and planner. The backend was added later and needs storage, so there will be a separate guide for that.

This is a linode specific guide.

cluster

Setting up a cluster is done in the Linode dashboard : visit

  • click "Kubernetes" (left menu)
  • click "Create a Cluster" (right top)
  • "Cluster label" : klopt-linode
  • "Region" : Frankfurt, DE (or at least stick with the same location)
  • "Kubernetes Version" : Take the latest (1.19 in feb 2021)
  • "Add Node Pools" : choose small (2gb/50gb) 1 is enough.
  • click "Add"
  • click "Download kubeconfig"

You will now also have a new "Linode", which is your worker node; Linode provides the master itself.

The configuration should be put in (or merged with) ~/.kube/config. If you don't have any configuration yet, perform these steps :

configuration
cp ~/Downloads/klopt-linode-kubeconfig.yaml ~/.kube/config
chmod 600 ~/.kube/config

The chmod is for security reasons, and some tools will complain if you don't do it. If you already have e.g. a minikube setup, you can merge the configs with the --flatten option of kubectl config view.

merge with flatten
cp ~/.kube/config oldconfig # to not mess up the original 
KUBECONFIG=./oldconfig:~/Downloads/klopt-linode-kubeconfig.yaml kubectl config view --flatten > ~/.kube/config

Test it with :

get nodes
kubectl get nodes
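
If you want a bit more detail than the plain node list, these standard checks work against the same kubeconfig :

more detail
kubectl cluster-info        # shows the API server endpoint Linode provides
kubectl get nodes -o wide   # adds internal/external IPs and kubelet versions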

pods

Now create the endpoints; we do doc here, but the others are similar. We used devspace for this. Normally the devspace.yml is already there, so these steps won't be necessary, but for a reinstall these would be the steps :

reinstall
devspace purge  # remove the pods and services created by this devspace.yml
rm -rf .devspace devspace.yml # remove all old structure to be sure
devspace init   # mostly choose defaults, provide a free test port

Note that if you have a deploy.yml file you can specify that, but the default option does a perfect job as well. If you want more replicas deployed, however, you need to alter the devspace.yml file :

add more replicas
deployments:
- name: doc
  helm:
    componentChart: true
    values:
      replicas: 3

To test you can run :

test
devspace dev

To deploy :

deploy
devspace deploy

This command will install all endpoint pods and services for you. Typically these will be :

  • a pod for each replica requested
  • one service
  • one deployment
  • one (running) replicaset
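
You can check these objects in one go; the names will match what devspace.yml defines (doc in our case) :

verify what was created
kubectl get pods,services,deployments,replicasets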

Note that old replicasets stay present; this is done so you can roll back deployments, and it is called revision history. You can limit the history with the revisionHistoryLimit setting, but devspace seems to lack that setting.

So for now this will take care of the job :

keep only current replica sets
kubectl delete $(kubectl get all | grep replicaset.apps | grep "0         0         0" | cut -d' ' -f 1)
kubectl get rs # only the current replicasets remain
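
Alternatively you could cap the history instead of cleaning up by hand; a hedged sketch that patches the Deployment directly, assuming it is called doc (a later devspace deploy may overwrite this again) :

limit revision history
kubectl patch deployment doc -p '{"spec":{"revisionHistoryLimit":2}}'
kubectl rollout history deployment doc   # only the most recent revisions are kept now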

Terminating TLS

A quick word on TLS before we continue. There are different solutions for terminating TLS, depending on where it is done. From the outside in :

  • On the NodeBalancer : you do that in the Linode console by pasting the cert + key.
  • On the ingress controller : this is done by creating a secret from cert + key and specifying it in ingress.yml.
  • On the pods themselves : not tried, but you can imagine just running apache in the traditional way with SSL enabled.

Most guides will be about the first two, and it makes sense to do this on the outer layer since you only have to do it once and can run all pods as plain HTTP beneath that.

We have chosen to do it at the ingress level, because it can be done with yaml files and it is not Linode specific. This means you need to keep (or set) the NodeBalancer config for port 443 to plain TCP. This way you also don't have to provide the cert + key there.

ingress controller

To use ingress you need an implementation of a controller, and we chose the nginx version :

nginx controller
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx

The controller will take some time to be created; for progress status :

get services
kubectl --namespace default get services -o wide -w ingress-nginx-controller

When successful, you will now have a NodeBalancer added in the Linode dashboard. The backend nodes and ports are filled in automatically. You will have a new IP address to talk to, so DNS changes are now needed.
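
You can also read the new IPv4 address from the controller service instead of clicking through the dashboard; a hedged sketch (the IPv6 address is easiest to copy from the NodeBalancer page itself) :

get the external ip
kubectl get service ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}'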

dns

We have our DNS at TransIP, so log in to the dashboard at : visit. Now also log in to the Linode dashboard and navigate to the load balancer, because we need both the IPv4 and IPv6 addresses !! Click on the NodeBalancer and it will show a page with the IP addresses on the right. Next to the addresses is a 'copy' button that you can use.

  • search all places where the previous IP was used; you need to change them all.
  • go to the @ A record and change it to the new IP : 192.46.238.21 (last time)
  • do NOT forget to do this for the AAAA record as well : 2a01:7e01:1…c02e:ee15
  • Now do all other places that had the old addresses.
  • Press 'Opslaan' (Save)

Now you will have to wait until the DNS changes have propagated; however, you can already test by adding these values to /etc/hosts :

edit hosts
sudo vi /etc/hosts
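
What to add is essentially the new IPv4 address followed by the hostnames; a minimal sketch using the last known address from the DNS step above :

example hosts line
192.46.238.21   doc.klopt.org cheatsheet.klopt.org planner.klopt.org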

/etc/hosts takes precedence for many services. Try the browser on any of the URLs, or try openssl from the command line :

openssl client test
echo | openssl s_client -servername doc.klopt.org -connect doc.klopt.org:443

secret

We need to put the private key and certificate into the cluster as a secret, so we cannot just copy-paste them here. The secret is called klopt-tls in other references, so you should prepare the following files inside the same directory :

  • tls.key: put the private key here
  • tls.crt: this is where the xolphin certificate goes
  • secret.yml.tmpl: the format of this file is shown below
secrets
apiVersion: v1
kind: Secret
metadata:
    name: klopt-tls
    namespace: default
type: kubernetes.io/tls
data:
    tls.crt: SERVER_CRT
    tls.key: SERVER_KEY

If you named everything exactly the same, the next command will replace the SERVER_KEY and SERVER_CRT placeholders with the correct base64 encoded strings and apply the secret :

apply secret
sed "s/SERVER_CRT/`cat tls.crt|base64 -w0`/g" secret.yml.tmpl | \
sed "s/SERVER_KEY/`cat tls.key|base64 -w0`/g" | \
kubectl apply -f -

If something went wrong, you can change the kubectl apply part into a redirect to a file. Examine the file and apply it once fixed, but then be sure to remove it again.

Also remember to remove the private key file (and crt).
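
A hedged sketch of that debugging variant; secret.yml is just a scratch name used here :

debug the secret before applying
sed "s/SERVER_CRT/`cat tls.crt|base64 -w0`/g" secret.yml.tmpl | \
sed "s/SERVER_KEY/`cat tls.key|base64 -w0`/g" > secret.yml
# inspect secret.yml first, then :
kubectl apply -f secret.yml
rm secret.yml tls.key tls.crt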

ingress

Now apply the file in projects/klopt/base/Ingress.yml. It will most likely have changed since this document was written. At the time of writing the content was :

ingress
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: klopt-tls-svc
spec:
  tls:
  - hosts:
    - planner.klopt.org
    - cheatsheet.klopt.org
    - metrics.klopt.org
    secretName: klopt-tls
  rules:
  - host: planner.klopt.org
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: planner
            port:
              number: 80
  - host: cheatsheet.klopt.org
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: cheatsheets
            port:
              number: 80
  - host: doc.klopt.org
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: doc
            port:
              number: 80

Apply the latest version with :

apply config
kubectl apply -f Ingress.yml
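
To check that the ingress picked up the hosts, backends and the klopt-tls secret :

check the ingress
kubectl get ingress
kubectl describe ingress klopt-tls-svc   # lists the rules, backends and TLS section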

troubleshooting

Here are some problems I encountered and what caused them :

  • 400 Bad Request, "The plain HTTP request was sent to HTTPS port" : this means the NodeBalancer terminates TLS and the ingress does as well; don't terminate on the NodeBalancer but set port 443 to plain TCP.
  • The expired certificate seems to hang around : DNS was not yet propagated, or !! you forgot to do the IPv6 DNS changes !!
  • ... there were others...
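
When something does not add up, the controller itself is usually the best place to look; a hedged sketch using the names from the helm install above :

debug the ingress controller
kubectl get events --sort-by=.metadata.creationTimestamp   # recent cluster events
kubectl logs deploy/ingress-nginx-controller               # nginx controller logs (config reloads, TLS errors)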