
jenkins

This chapter is purely about building with Jenkins. Jenkins will also be mentioned in the installation section of klopt.

introduction

Jenkins is mainly a continuous integration tool. It can also be used for deployment, as will be shown here for the doc.klopt.org website.

Note that what we want is this: after a push to the master branch, Jenkins completely takes over and the production install at doc.klopt.org gets updated !!

Mainly, Jenkins just keeps compiling source code, testing it, and reporting the results to its user interface.

We will discuss the following steps :

  • Detecting a change in the master branch
  • Checking out and compiling the code
  • Testing the build with unit tests
  • Creating a docker image from the html part of the website
  • Pushing the image to the private klopt repository on docker hub
  • Deploying the docker image to linode using kubectl
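
These steps map directly onto the stages of the Jenkinsfile dissected below. As a bare outline (the step bodies here are just placeholders) :

pipeline
pipeline {
    agent any
    stages {
        stage('Build')         { steps { sh 'make html' } }
        stage('Test develop')  { steps { sh 'echo "nothing yet !"' } }
        stage('Push image')    { steps { echo 'docker push, see below' } }
        stage('Linode deploy') { steps { echo 'kubectl apply, see below' } }
    }
}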

Jenkins setup

First we need Jenkins running and these plugins installed. There might be too many here, but I will not uninstall any just to check.

  • Bitbucket Plugin
  • Credentials Plugin
  • Docker Pipeline
  • Git
  • Kubernetes-cli Plugin
  • Kubernetes Continuous Deploy Plugin
  • Kubernetes Plugin

Credentials

We need to authenticate to docker hub to push the image, and to linode to deploy the image.

docker

This has to be added to the Jenkins global store, otherwise you can only access it from the doc project. Go to

setting
Dashboard -> Credentials -> System -> Global Credentials (unrestricted)
# but you probably need this detour since Dashboard does not show "Credentials"
Dashboard -> People -> Kees -> Credentials -> "Click any user" -> Global Credentials (unrestricted)
# thanx Jenkins !!

There add the login you got from docker hub, and give it a name like "docker-hub" or "docker-creds" (the latter is what is used here).

The docker credentials can simply be of type 'Username with password'. Since this name is already referenced in the Jenkinsfile (see later), make sure to call it 'docker-creds'.

Just fill in the username (kees) and the password.

Note that Jenkins will still complain that storing a password like this is unsafe; see if there is a better way.

kubernetes

When installing a kubernetes cluster on linode you can download a file called klopt-kubeconfig.yaml. You can copy this file to ~/.kube/config so that it will be found by kubectl automatically.
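
For example (assuming the file landed in ~/Downloads) :

kubectl
cp ~/Downloads/klopt-kubeconfig.yaml ~/.kube/config
kubectl get nodes    # should now list the linode cluster nodes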

This file is also the one you need for the kubectl credentials, which we called kubernetes-config in the Jenkinsfile.

Note that you need to use 'Secret file' as the type for this credential. If you installed all kubernetes plugins you will probably have an extra option there that claims to do the trick, but that DOES NOT WORK. 'Secret file' does !!

You need to install the kubernetes-cli plugin for this to work.

Jenkinsfile

The Jenkinsfile is used when you work with pipelines. The execute shell option in the GUI on localhost:8080 seems to do the same as a Jenkinsfile, but that is merely for simple projects; when using pipelines (and you should) you need a Jenkinsfile.

Here is a simple test example :

pipeline
pipeline {
    agent { docker { image 'python:3.5.1' } }   // run every stage inside this container
    stages {
        stage('build') {
            steps {
                sh 'python --version'
            }
        }
    }
}

It simply prints the python version inside a docker container. Note the 'sh' step: these differ per OS; on windows for instance you can use 'bat'.
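
On a windows agent the same step would look like this (a sketch) :

pipeline
steps {
    bat 'python --version'
}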

The actual Jenkinsfile for doc will now be dissected :


doc pipeline
pipeline {
    environment { 
        imagename = "kees/klopt:doc"
        registryCredentialsId = "docker-creds"
        registryUrl = "https://hub.docker.com/"
        registry_url = "registry.hub.docker.com/"
        dockerImage = ""
    }
    agent any
    stages {
        ...

The first part states that this is a (multibranch) pipeline, and some variables we use later are declared. Note that these variables can be used inside 'script {}' sections, where they are interpreted as groovy (java-like) code.
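
For example, inside a script block they can be referenced like any groovy variable (a hypothetical echo step) :

pipeline
script {
    echo "building ${imagename} and pushing to ${registry_url}"
}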

The pipeline block requires an agent statement; it states where the execution of the stages will take place. Normally this is 'any', but you could also have the whole pipeline run in a docker container, in which case 'docker' or 'dockerfile' are possible. Inside each stage, the agent can be set to other values as well.
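
A sketch of the container variant: with 'dockerfile' the agent image is built from the Dockerfile in the repository :

pipeline
pipeline {
    agent { dockerfile true }
    ...
}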

pipeline
...
stages {
    stage('Build') {
        steps {
            sh 'make html'
            script {
               dockerImage = docker.build imagename
            }
        }
    }

The stage name is just a name you can make up, and inside a stage you specify which steps to take to execute the stage. In this case we use the shell to make the html tree with sphinx-doc, and then we build a docker image within the script block, so this is groovy code: docker.build returns an image object that we keep in dockerImage for the push stage later. It also uses the environment variables set earlier.

test dev
stage('Test develop') {
    when {
        branch 'develop'
    }
    agent { docker { image 'kees/klopt:doc' } }
    steps {
        sh 'echo "nothing yet !"'
    }
}

Here is where the term multibranch pipeline comes into play: the stage is only performed for the 'develop' branch of doc. I have no unit tests, but I have reserved a section here in case we make some later on. For instance we could do 'make spelling' here, but it will likely always fail and become a nuisance. Also, here is an example of running a command with a docker agent.
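
If we do add tests later, the steps could run the sphinx checks. A sketch ('make spelling' needs the sphinxcontrib-spelling extension and, as said, may be too strict) :

test dev
steps {
    sh 'make spelling'
    sh 'make linkcheck'
}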

push image
stage('Push image') {
    when {
        branch 'master'
    }
    steps {
        withCredentials([usernamePassword( credentialsId: 'docker-creds', usernameVariable: 'USER', passwordVariable: 'PASSWORD')]) {
            sh 'docker login -u $USER -p $PASSWORD ${registry_url}'
            script {
                docker.withRegistry("http://$registry_url", registryCredentialsId) {
                    dockerImage.push('doc')
                }
            }
        }
    }
}

Pushing the image to docker hub is a stage only performed on the master branch. So this will only run when we merge develop into master, or if we edit the master branch directly.
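
Once this stage has run, you can verify the push by hand from any machine that is logged in to the private repository :

push
docker pull kees/klopt:doc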

deploy to linode
stage('Linode deploy') {
    when {
        branch 'master'
    }
    steps {
        withKubeConfig([credentialsId: 'kubernetes-config']) {
            sh 'kubectl delete pod doc-pod'
            sh 'kubectl delete deploy doc-deployment'
            sh 'kubectl apply -f ssl_deploy.yml'
        }
    }
}

Deploying on kubernetes is also a master-only action. This deletes the pod and deployment first, because otherwise no change is detected. Then the ssl_deploy script is applied. Though this is more of a kubernetes subject, I will discuss this last step in the next chapter :

ssl_deploy.yml

As said in the previous chapter, this is a description of the ssl_deploy.yml script for deploying doc to doc.codewell.nl on linode/kubernetes. These scripts can also contain multiple steps/parts that are executed sequentially :

deploy
apiVersion: v1
kind: Pod
metadata:
  name: doc-pod
spec:
  containers:
  - name: doc
    image: kees/klopt:doc
    imagePullPolicy: Always
  imagePullSecrets:
  - name: klopt-secret
---

As you can see this is a yaml file. It first states the api version, which is almost always v1 or apps/v1. Then it defines a Pod with name 'doc-pod' whose image gets downloaded from docker hub every time. The credentials to log in have been prepared before, see 'access to the private repository of docker hub' in the virtualization/kubernetes chapter. You can list the secret and view details with :

kubectl
kubectl get secrets
kubectl describe secrets klopt-secret
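
For reference, an image pull secret like klopt-secret is typically created with 'kubectl create secret docker-registry' (the values below are placeholders; the real ones are in the kubernetes chapter) :

kubectl
kubectl create secret docker-registry klopt-secret \
    --docker-username=kees \
    --docker-password=<password> \
    --docker-server=https://index.docker.io/v1/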

So after this part the pod is created with the image from docker-hub, but it is not running yet.

deploy
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: doc-deployment
  labels:
    app: doc
spec:
  replicas: 2
  selector:
    matchLabels:
      app: doc
  template:
    metadata:
      labels:
        app: doc
    spec:
      containers:
      - name: doc
        image: kees/klopt:doc
        ports:
        - containerPort: 443
---

This is a deployment called doc-deployment; it runs the same image as doc-pod. The labels used throughout the example have no direct semantic value, but form key/value pairs that can be used, for instance, to match the doc app by name: matchLabels matches on the key app: doc.

The spec is the specification of this deployment object, and kubernetes will try to maintain the spec while it is applied. So if a running instance crashes, only 1 of the 2 replicas is left, and kubernetes will start up a new instance to meet the spec again.
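
You can watch kubernetes maintain the spec; scaling, for example, changes the spec and pods are created to match it :

kubectl
kubectl get deploy doc-deployment                  # READY should show 2/2
kubectl scale deploy doc-deployment --replicas=3   # change the spec
kubectl get pods -l app=doc                        # a third pod is being created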

Work instruction

Here I will describe the work instructions to install a complete project with Jenkins on kubernetes / linode. Since lang is up for automation next, that will be our test case. The order of actions :

  • Creating the Jenkins project
  • Setting up a Dockerfile for running the complete site
  • Constructing the Jenkinsfile

Jenkins config

  • Dashboard -> New Item
  • Item name is "lang"; select Multibranch Pipeline and push "Ok"
  • Note here that you can copy from another project, but we will not for now.
  • In the form presented, set :
  • display name "lang" and a suitable description
  • Branch Sources : add Git with repository git@bitbucket.org:keesklopt/lang
  • credentials can remain empty
  • Scan Multibranch Pipeline Triggers should be enabled; choose a high frequency in the beginning for testing.
  • In properties set Registry credentials to "docker-hub"
  • Save

Now the project will be scanned, but we have no Jenkinsfile and no Dockerfile yet.

Dockerfile

The currently running docker config looks like this :

docker
FROM debian:buster
RUN apt-get update
RUN apt-get install -y python3 sudo
COPY . /home
# note: RUN cd only affects this one layer; WORKDIR /home would be the idiomatic form
RUN cd /home
RUN ls /home
RUN /usr/bin/python3 /home/buster.py
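
Building it by hand looks like this (the tag kees/klopt:lang is an assumption, following the naming used for doc) :

docker
docker build -t kees/klopt:lang .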

Alpine tryout

Sadly we do NOT use ansible but dedicated python scripts for the debian releases. I could still try to build an alpine version, so that will be done here later.

I will present it as Dockerfile commands :

docker
FROM alpine
RUN apk update
RUN apk add --no-cache python3 musl-dev libffi-dev openssl-dev make gcc py3-pip python3-dev
#RUN apk add git openssh
RUN pip install cffi
RUN pip install ansible

Now we usually use the ssh key to get sources from git, so a mount will work better.
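
A sketch of such a run, mounting the source tree and the ssh key read-only (the image name klopt-alpine is made up here) :

docker
docker build -t klopt-alpine .
docker run --rm \
    -v "$HOME/.ssh":/root/.ssh:ro \
    -v "$PWD":/src \
    klopt-alpine ls /src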

troubleshooting

access to resource is denied

When pushing a docker image to docker hub, you need to get the name correct :

push
docker image push doc
The push refers to repository [docker.io/library/doc]
9283fd812c8c: Preparing 
bf4cb6a71436: Preparing 
5792ac1517fc: Preparing 
53c77568e9ed: Preparing 
d6e97adfe450: Preparing 
87c8a1d8f54f: Waiting 
denied: requested access to the resource is denied

This is what happens if you don't use the correct name. Log in to docker hub to see what images are stored there; it is also shown there how to push the image :

push
# on the main page it says :
docker push kees/klopt:tagname
# so for the doc container it is
docker push kees/klopt:doc
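
So if the image was built under a bare name like 'doc', retag it first :

push
docker tag doc kees/klopt:doc
docker push kees/klopt:doc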

permission denied docker

permission problem
sudo usermod -a -G docker jenkins   # give the jenkins user access to the docker socket
systemctl restart jenkins           # needed !
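
You can check that the group change took effect with :

check
groups jenkins   # should now include 'docker'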

can't find kubernetes plugin

This really is a nuisance inside jenkins :

Warning

The Available plugins table remains empty when you don't apply a filter !!

Choose "Dashboard" -> "Manage Jenkins" -> "Manage Plugins"

The Updates list shows many plugins, but when you click "Available" it says "Plugins loading" .. and remains empty.

When you type "kube" in the filter box, lots of plugins start to show up.

jenkins won't restart after update

Please wait while Jenkins is restarting ...

Your browser will reload automatically when Jenkins is ready

This will remain in the webpage forever. Some things I tried to get it going again :

  • systemctl stop jenkins
  • refresh
  • systemctl start jenkins
  • remove the pluginmanager part from the url
  • refresh a million times

can't find sphinx

error
make html
/bin/sh: 1: sphinx-build: not found

While as a normal user it works fine. You can reproduce it like this :

reproduce
sudo su 
su jenkins
sphinx-build 
/bin/sh: 1: sphinx-build: not found

If you do 'which sphinx-build' as user kees you will see it is here :

find
which sphinx-build
/home/kees/.local/bin/sphinx-build

And since jenkins does not have a home directory, it won't work for the jenkins user. So I ended up installing sphinx as root to have it available system-wide.
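
For example (assuming sphinx was originally installed with pip for the user; a root install puts it on the system path) :

install
sudo pip3 install sphinx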

ERROR: Could not find credentials entry with ID 'docker-creds'

This had all to do with creating the credentials under 'doc' and not under 'jenkins'. It has to be listed under "Global Credentials (unrestricted)" or other projects won't find it. You get there via this path (see also elsewhere in this document) :

option
Dashboard -> People -> Kees -> Credentials -> "Click any user" -> Global Credentials (unrestricted)

Now add the credentials here, and probably give it another name because docker-creds already exists under 'doc'; then also alter the references in the Jenkinsfile. Do not forget to also attach this credential to the project explicitly :

option
Dashboard -> cheatsheet -> Configure -> Properties -> Registry Credentials 

In the dropdown box you should be able to choose the credentials just added.

Important

In general these kinds of errors can be cut short by looking in the GUI: choose "your project" -> Credentials; if you can't see the credential there, it will not work.

This at least will enable you to fix the problem in the GUI. For instance, the next problem I encountered was :

ERROR: [kubernetes-cli] unable to find credentials with id 'kubernetes-config'

This worked for 'doc', and indeed the credentials for kubernetes-config are in the 'doc' store, so they have to be duplicated in the cheatsheet store, or better, in the global jenkins store. Sadly you cannot move them, so go to :

add credentials
Dashboard -> Manage Jenkins -> Manage Credentials -> Click a dropdownlist under "Domain" -> Add credentials.
Secret file -> choose ~/.kube/config

Now I chose a different name; choosing credentials under cheatsheet now shows "kubernetes-linode", and under 'doc' you will now see both, so the one in the 'doc' store can be removed if you change the Jenkinsfile for doc.

THIS... FINALLY... WORKS!!!!!

Permission denied @ dir_s_mkdir - /srv/jekyll/_site

There is also an access problem writing /srv/jekyll/Gemfile.lock.

This is caused by the fact that the docker image jekyll/builder runs as another user (jekyll) than the one owning the files (jenkins). The steps involved are :

  • The image jekyll/builder is called from jenkins and so runs as jenkins (uid 128:gid 137 in this case).
  • It checks out the code in /var/lib/jenkins/workspace/cheatsheet/
  • -rw-r--r-- 1 128 137 7194 Dec 28 10:16 Gemfile.lock
  • the command to build the code works by mounting this directory as a volume to /srv/jekyll
  • docker run --rm --volume=/var/lib/jenkins/workspace/cheatsheet_master:/srv/jekyll -it jekyll/builder jekyll build
  • you can see inside the image that it uses the same uid/gid (sh would also work)
  • docker run --rm --volume=/var/lib/jenkins/workspace/cheatsheet_master:/srv/jekyll -it jekyll/builder ls -l /srv/jekyll
  • so user jekyll (uid 1000) cannot write these files, and that's where the error comes from.

Solution: make the container run as jenkins, not as jekyll. I changed that inside the Jenkinsfile like this :

build
stage('Build') {
    steps {
        script {
            sh 'docker run --rm --volume="$PWD:/srv/jekyll" -e JEKYLL_UID=`id -u` -e JEKYLL_GID=`id -g` jekyll/builder jekyll build'
            dockerImage = docker.build imagename
        }
    }
}

Since all jenkins commands run as jenkins, id -u and id -g will give the correct ids on any installation.

apparmor failed to apply profile

In full the error was :

Error

OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: apply apparmor profile: apparmor failed to apply profile: open /proc/self/attr/exec: no such file or directory: unknown

Note that this was done without COPY inside the Dockerfile; I used VOLUME :

docker
VOLUME . /home
RUN cd /home
RUN python3 ./buster.py

At first this gave the error above, but when I changed it to :

docker
VOLUME /home/kees/projects/lang /home
RUN cd /home
RUN python3 ./buster.py

It changed to 'cannot find ./buster.py'. However, since we are building this image ourselves, it does not need a mount; it definitely needs the directory to be copied !! So change it to :

docker
COPY . /home
RUN cd /home
RUN python3 ./buster.py 

And now it will begin a Loooooonnngggg journey to build the lang modules.

Waiting for next available executor

A Jenkins job hangs with this message, forever. This seems to happen when you have multiple builds running at once. By default there are 2 executors set for the master node, but the job won't recover even when all executors are free again. So to reduce the chance of getting into this situation :

  • maybe vary the poll time for your projects
  • enlarge the number of executors, see below

"Manage Jenkins" -> "Manage Nodes and Clouds" -> "Master" -> Configure -> # of executors

In the help text (the ?) it says to add 1 for each core/cpu; I set it to 6.