kubernetes
Setting up a test cluster
This guide assumes a small test cluster consisting of one master node and two worker nodes.
The master node does not work with 1024MB of memory; it complains that it needs at least 1700MB, so I created a Vagrantfile with 2048MB as the default. The worker nodes do not need it, but I enlarged them as well. So this test cluster takes at least 6GB of the 32GB (on both the laptop and hoek).
This is the Vagrantfile used :
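A minimal sketch of such a Vagrantfile, assuming ubuntu/focal64 boxes, VirtualBox, and the names master/worker1/worker2 (the box name and IPs are assumptions, not the original file):

```ruby
# sketch only: one master and two workers, each with 2048MB
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"

  ["master", "worker1", "worker2"].each_with_index do |name, i|
    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.network "private_network", ip: "192.168.56.#{10 + i}"
      node.vm.provider "virtualbox" do |vb|
        vb.memory = 2048   # the master refuses to start below 1700MB
        vb.cpus = 2
      end
    end
  end
end
```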
Setting up the klopt cluster
This guide was written after the certificate for *.klopt.org expired in Feb 2021. Since the cluster never worked properly again, we rebuilt it from scratch. This guide covers doc, cheatsheets and planner. The backend was added later and needs storage, so there will be a separate guide for that.
This is a Linode-specific guide.
cluster
Setting up a cluster is done in the Linode dashboard :
- click "Kubernetes" (left menu)
- click "Create a Cluster" (right top)
- "Cluster label" : klopt-linode
- "Region" : Frankfurt, DE (or at least stick with the same location)
- "Kubernetes Version" : Take the latest (1.19 in feb 2021)
- "Add Node Pools" : choose small (2gb/50gb) 1 is enough.
- click "Add"
- click "Download kubeconfig"
You will now also have a new "Linode", which is your worker node; Linode provides the master itself.
The configuration should be put in (or merged with) ~/.kube/config. If you don't have any configuration yet, perform these steps :
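A sketch of those steps, assuming the kubeconfig was downloaded as klopt-linode-kubeconfig.yaml (the filename is an assumption):

```bash
mkdir -p ~/.kube
cp ~/Downloads/klopt-linode-kubeconfig.yaml ~/.kube/config
# restrict permissions; some tools refuse a world-readable kubeconfig
chmod 600 ~/.kube/config
```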
The chmod is for security reasons, and some tools will also complain if you don't do it. If you already have a minikube setup, you can merge the two configs with the --flatten option of kubectl config view.
| merge with flatten | |
|---|---|
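A sketch of such a merge, assuming the downloaded file name from above:

```bash
# list both configs, flatten them into one self-contained file, then swap it in
KUBECONFIG=~/.kube/config:~/Downloads/klopt-linode-kubeconfig.yaml \
  kubectl config view --flatten > /tmp/merged-config
mv /tmp/merged-config ~/.kube/config
chmod 600 ~/.kube/config
```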
Test it with :
| get nodes | |
|---|---|
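For example:

```bash
# should list the worker node(s) in Ready state
kubectl get nodes
```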
pods
Now create the endpoints; we do doc here, but the others are similar. We used devspace for this. Normally the devspace.yml is already there, so these steps won't be necessary, but for a reinstall these would be the steps.
| reinstall | |
|---|---|
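A sketch of the re-initialization, assuming the devspace CLI is installed and the project path shown here:

```bash
cd projects/klopt/doc   # project path is an assumption
devspace init           # the answers generate a fresh devspace.yml
```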
Note that if you have a deploy.yml file you can point to it, but the default option does a fine job as well. If you want more replicas deployed, however, you need to alter the devspace.yml file.
To test you can run :
| test | |
|---|---|
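Presumably the development mode:

```bash
# runs the project in dev mode (builds, deploys, streams logs)
devspace dev
```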
To deploy :
| deploy | |
|---|---|
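Most likely:

```bash
# builds the image and deploys it according to devspace.yml
devspace deploy
```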
This command will install all the endpoint pods and services for you. Typically these will be :
- a pod for each replica requested
- one service
- one deployment
- one (running) replicaset
Note that old ReplicaSets stay present; this is done so you can roll back deployments, which is why it is called revision history. You can limit the history with the revisionHistoryLimit setting on the Deployment, but devspace seems to lack that setting.
So for now this will take care of the job :
| keep only current replica sets | |
|---|---|
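A sketch of such a cleanup, deleting every ReplicaSet that has been scaled down to 0 (which is what an old revision looks like):

```bash
# delete ReplicaSets whose desired replica count is 0 (old revisions);
# note this throws away the rollback history on purpose
for rs in $(kubectl get rs -o name); do
  desired=$(kubectl get "$rs" -o jsonpath='{.spec.replicas}')
  if [ "$desired" = "0" ]; then
    kubectl delete "$rs"
  fi
done
```

If revisionHistoryLimit ever becomes settable through devspace, patching the Deployment directly would make this cleanup unnecessary.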
Terminating TLS
A quick word on TLS before we continue. There are different solutions for terminating TLS depending on where it is done. From the outside in :
- On the NodeBalancer : you do that in the Linode console by pasting cert + key.
- On the ingress controller : this is done by creating a secret from cert+key and specifying it in ingress.yml
- On the pods themselves : not tried, but you can imagine just running Apache in the traditional way with SSL enabled.
Most guides will be about the first two, and it makes sense to do this on the outer layers since you only have to do it once and can run all pods as plain HTTP beneath that.
We have chosen to do it at the ingress level, because it can be done with YAML files and it is not Linode-specific. This means you need to keep (or set) the NodeBalancer config for port 443 to plain TCP. This way you also don't have to provide cert+key to the NodeBalancer.
ingress controller
To use ingress you need an implementation of a controller, and we chose the nginx version :
| nginx controller | |
|---|---|
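A sketch of the install, assuming the upstream ingress-nginx manifest for cloud providers (pin the version to one matching your cluster; v0.44.0 was current around Feb 2021):

```bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/cloud/deploy.yaml
```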
The controller will take some time to be created; for progress status :
| get services | |
|---|---|
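For example:

```bash
# repeat (or watch with -w) until EXTERNAL-IP is no longer <pending>
kubectl get services -n ingress-nginx
```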
When successful, you will have a NodeBalancer added in the Linode dashboard. The backend nodes and ports are filled in automatically. You will have a new IP address to talk to, so DNS changes are now needed.
dns
We have our DNS at TransIP, so log in to their dashboard. Now also log in to the Linode dashboard and navigate to the load balancer, because we need both the IPv4 and IPv6 addresses !! Click on the NodeBalancer and it will show a page with the IP addresses on the right. Beside the addresses is a 'copy' button that you can use.
- search all places where the previous ip was used, you need to change them all.
- go to the @ A record and change it to the new IP : 192.46.238.21 (last time)
- do NOT forget to do this for the AAAA record as well : 2a01:7e01:1::c02e:ee15
- Now do all other places that had the old addresses.
- Press 'Opslaan' (Save)
Now you will have to wait until the DNS changes have propagated; however, you can test early by putting the new addresses in /etc/hosts :
| edit hosts | |
|---|---|
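For example (the hostnames are assumptions based on the endpoints mentioned above):

```bash
# append overrides; remove them again once DNS has propagated
sudo tee -a /etc/hosts <<'EOF'
192.46.238.21 doc.klopt.org cheatsheets.klopt.org planner.klopt.org
EOF
```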
/etc/hosts takes precedence for most services. Try the browser on any of the URLs, or try openssl from the command line :
| openssl client test | |
|---|---|
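A sketch, using doc.klopt.org as the example host:

```bash
# shows the certificate the server actually presents
openssl s_client -connect doc.klopt.org:443 -servername doc.klopt.org </dev/null \
  | openssl x509 -noout -subject -dates
```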
secret
We need to put the private key and certificate into the cluster as a secret, so we cannot just copy-paste here. The secret is called klopt-tls in other references, so prepare the following files inside the same directory :
- tls.key: put the private key here
- tls.crt: this is where the Xolphin certificate goes
- secret.yml.tmpl: the format of this file is shown below
| secrets | |
|---|---|
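A sketch of what secret.yml.tmpl looks like; SERVER_CRT and SERVER_KEY are the placeholder strings that the apply step below substitutes:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: klopt-tls
type: kubernetes.io/tls
data:
  # placeholders, replaced with base64-encoded file contents at apply time
  tls.crt: SERVER_CRT
  tls.key: SERVER_KEY
```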
If you named everything exactly the same, the next command will replace the SERVER_KEY and SERVER_CRT placeholders with the correct base64-encoded strings and apply the secret :
| apply secret | |
|---|---|
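A sketch of that command, assuming GNU base64 (-w0 disables line wrapping; macOS base64 does not wrap by default):

```bash
# substitute the placeholders and pipe the result straight into kubectl
sed -e "s|SERVER_CRT|$(base64 -w0 tls.crt)|" \
    -e "s|SERVER_KEY|$(base64 -w0 tls.key)|" \
    secret.yml.tmpl | kubectl apply -f -
```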
If something went wrong, you can change the kubectl apply part into a redirect to a file. Examine the file and apply it once fixed, but be sure to remove it again afterwards.
Also remember to remove the private key file (and crt).
ingress
Now apply the file projects/klopt/base/Ingress.yml. It will most likely have changed since this document was written. At the time of writing the content was :
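A sketch of what that file plausibly contained (networking.k8s.io/v1 was current on 1.19; the host and backend service name are assumptions):

```yaml
# sketch, not the original file
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: klopt-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
        - doc.klopt.org
      secretName: klopt-tls    # the secret created above
  rules:
    - host: doc.klopt.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: doc
                port:
                  number: 80
```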
Apply the latest version with :
| apply config | |
|---|---|
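Presumably:

```bash
kubectl apply -f projects/klopt/base/Ingress.yml
```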
troubleshooting
Here are some problems I encountered and what caused them :
- "400 Bad Request - The plain HTTP request was sent to HTTPS port" : it means the NodeBalancer strips TLS and the ingress tries to as well; don't terminate on the NodeBalancer but set it to 443:TCP.
- The expired certificate seems to hang around : DNS was not yet propagated, or !! you forgot to do the IPv6 DNS changes !!
- ... there were others...