azure pipeline agent
We can now create agents locally, with the extra rights granted. This document explains how to set up such an agent.
You have three options for pipeline agents:
- Azure agents
- Local machine agents
- Container agents
We already worked with the Azure-hosted agents, so I will attempt to run the other two in this document.
installation
We have to prepare a workstation to run the agent. Azure says it supports these operating systems for agents:
- Red Hat
- CentOS
- Ubuntu
But it immediately adds more flavours to that, including Debian 9. I have a sort of mix between Debian 10 and 11 because I added the bookworm repository earlier. We will see if that works.
For the software needed, in Azure go to:
- Project settings
- Pipelines: Agent pools
- Add pool
- Choose the new pool
- Agents
- New Agent
The software can be downloaded there, and there are also instructions on how to do it. You need to click on 'detailed instructions' to get any further, though. First you will need an access token.
token
- Go to your user settings: click on your avatar in the top right, then the ... menu and 'User settings'.
- Choose Security: Personal Access Tokens
- Add a token with "Agent Pools (read & manage)" access
- Remember this token; you can't view it again afterwards.
Agent code
Unpack the downloaded tar.gz file into a directory; it unpacks into many directories, so create the directory first.
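To illustrate why (the file and directory names below are stand-ins, not the real agent package): the agent archive has no top-level directory, so extracting it in place scatters files into the current directory.

```shell
# Simulate an archive without a top-level directory (like the agent tarball),
# then extract it inside a fresh directory so nothing spills into the cwd.
mkdir -p pkg/bin pkg/externals
echo listener > pkg/bin/Agent.Listener
tar -czf agent.tar.gz -C pkg .

mkdir myagent
tar -xzf agent.tar.gz -C myagent
ls myagent   # shows: bin  externals
```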
The server you have to supply is https://dev.azure.com/TilburgU/ and the token is the one you just made. The full ./config.sh session:
Enter server URL > https://dev.azure.com/TilburgU/
Enter a valid value for authentication type.
Enter authentication type (press enter for PAT) > PAT
Enter personal access token > ****************************************************
Connecting to server ...
>> Register Agent:
Enter agent pool (press enter for default) > test
Enter agent name (press enter for hoek) >
Scanning for tool capabilities.
Connecting to the server.
Successfully added the agent
Testing agent connection.
Enter work folder (press enter for _work) >
2022-08-18 09:11:41Z: Settings Saved.
Seems to work! Now assign this agent to the pipeline:
trigger:
- main
- PRM_5450

pool:
  name: test
  vmImage: ubuntu-latest

strategy:
  matrix:
    Python37:
      PYTHON_VERSION: '3.7'
  maxParallel: 3
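As an aside: vmImage only selects an image for Microsoft-hosted pools; for a self-hosted pool the name alone should be enough, so a trimmed-down sketch would be:

```yaml
pool:
  name: test   # self-hosted pool; vmImage is not used here
```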
Then run the agent:
./run.sh
Scanning for tool capabilities.
Connecting to the server.
2022-08-18 09:15:41Z: Listening for Jobs
2022-08-18 09:15:45Z: Running job: Job Python37
debugging
A problem with this runner is that it has Python 3.9 installed, and this error is reported:
##[error]Failed to download Python from the Github Actions python registry (https://github.com/actions/python-versions). Error: Error: Could not find Python matching spec 3.7 (x64) in the python-versions registry. Beware that only systems listed in the Github Actions python versions manifest (https://github.com/actions/python-versions/blob/main/versions-manifest.json) are fit for downloading python on-flight. Also, proxy is not supported.
##[error]Version spec 3.7 for architecture x64 did not match any version in Agent.ToolsDirectory.
Note that just changing the Python version does not work. What does work is simply removing the UsePythonVersion step.
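A sketch of that change, assuming the pipeline used the standard UsePythonVersion task (the script step shown is only a placeholder):

```yaml
# Removed, since the self-hosted agent cannot download Python on the fly:
# - task: UsePythonVersion@0
#   inputs:
#     versionSpec: '$(PYTHON_VERSION)'

# Instead rely on the Python already installed on the agent:
- script: python3 --version
  displayName: 'Show agent-provided Python'
```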
Since we are now running on hoek, be sure to kill any test runs, because they will keep port 3000 occupied. My first run failed because of that!
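A quick way to check whether something is still holding the port before starting a run (plain bash, no extra tools; the port number matches the one above):

```shell
# Try to open a TCP connection to localhost:3000; success means some process
# (e.g. a leftover test run) is occupying the port, failure means it is free.
if (exec 3<>/dev/tcp/127.0.0.1/3000) 2>/dev/null; then
  status=occupied
else
  status=free
fi
echo "port 3000 is $status"
```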
To prevent errors like that, a container solution is probably better, so read on.
container agent
Use this Dockerfile
FROM ubuntu:20.04
RUN DEBIAN_FRONTEND=noninteractive apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get upgrade -y
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y -qq --no-install-recommends \
        apt-transport-https \
        apt-utils \
        ca-certificates \
        curl \
        git \
        iputils-ping \
        jq \
        lsb-release \
        software-properties-common
RUN curl -sL https://aka.ms/InstallAzureCLIDeb | bash
# Can be 'linux-x64', 'linux-arm64', 'linux-arm', 'rhel.6-x64'.
ENV TARGETARCH=linux-x64
WORKDIR /azp
COPY ./start.sh .
RUN chmod +x start.sh
ENTRYPOINT [ "./start.sh" ]
start.sh should then also be created:
#!/bin/bash
set -e

if [ -z "$AZP_URL" ]; then
  echo 1>&2 "error: missing AZP_URL environment variable"
  exit 1
fi

if [ -z "$AZP_TOKEN_FILE" ]; then
  if [ -z "$AZP_TOKEN" ]; then
    echo 1>&2 "error: missing AZP_TOKEN environment variable"
    exit 1
  fi

  AZP_TOKEN_FILE=/azp/.token
  echo -n "$AZP_TOKEN" > "$AZP_TOKEN_FILE"
fi

unset AZP_TOKEN

if [ -n "$AZP_WORK" ]; then
  mkdir -p "$AZP_WORK"
fi

export AGENT_ALLOW_RUNASROOT="1"

cleanup() {
  if [ -e config.sh ]; then
    print_header "Cleanup. Removing Azure Pipelines agent..."

    # If the agent has some running jobs, the configuration removal process
    # will fail. So, give it some time to finish the job.
    while true; do
      ./config.sh remove --unattended --auth PAT --token "$(cat "$AZP_TOKEN_FILE")" && break

      echo "Retrying in 30 seconds..."
      sleep 30
    done
  fi
}

print_header() {
  lightcyan='\033[1;36m'
  nocolor='\033[0m'
  echo -e "${lightcyan}$1${nocolor}"
}

# Let the agent ignore the token env variables
export VSO_AGENT_IGNORE=AZP_TOKEN,AZP_TOKEN_FILE

print_header "1. Determining matching Azure Pipelines agent..."

AZP_AGENT_PACKAGES=$(curl -LsS \
    -u "user:$(cat "$AZP_TOKEN_FILE")" \
    -H 'Accept:application/json;' \
    "$AZP_URL/_apis/distributedtask/packages/agent?platform=$TARGETARCH&top=1")

AZP_AGENT_PACKAGE_LATEST_URL=$(echo "$AZP_AGENT_PACKAGES" | jq -r '.value[0].downloadUrl')

if [ -z "$AZP_AGENT_PACKAGE_LATEST_URL" -o "$AZP_AGENT_PACKAGE_LATEST_URL" == "null" ]; then
  echo 1>&2 "error: could not determine a matching Azure Pipelines agent"
  echo 1>&2 "check that account '$AZP_URL' is correct and the token is valid for that account"
  exit 1
fi

print_header "2. Downloading and extracting Azure Pipelines agent..."

curl -LsS "$AZP_AGENT_PACKAGE_LATEST_URL" | tar -xz & wait $!

source ./env.sh

print_header "3. Configuring Azure Pipelines agent..."

./config.sh --unattended \
  --agent "${AZP_AGENT_NAME:-$(hostname)}" \
  --url "$AZP_URL" \
  --auth PAT \
  --token "$(cat "$AZP_TOKEN_FILE")" \
  --pool "${AZP_POOL:-Default}" \
  --work "${AZP_WORK:-_work}" \
  --replace \
  --acceptTeeEula & wait $!

print_header "4. Running Azure Pipelines agent..."

trap 'cleanup; exit 0' EXIT
trap 'cleanup; exit 130' INT
trap 'cleanup; exit 143' TERM

chmod +x ./run-docker.sh

# To be aware of TERM and INT signals call run.sh
# Running it with the --once flag at the end will shut down the agent after the build is executed
./run-docker.sh "$@" & wait $!
Now build the image, presumably with something like docker build -t dockeragent . (the tag matching the name used below).
Well, this fails with an error like:
gpg: decryption failed: No secret key
------
> [internal] load metadata for docker.io/library/ubuntu:20.04:
------
There is not much info on that particular error, but some comments suggest pulling the ubuntu image by hand (docker pull ubuntu:20.04) and building again. And behold: that works!
Now you can run this agent with a rather long command, but you can also use environment variables:
export AZP_URL=https://dev.azure.com/TilburgU/
export AZP_TOKEN=fill in the secret
export AZP_AGENT_NAME=hoek_docker
export AZP_POOL=test
There is also AZP_WORK for the working directory but the default "_work" is fine.
However this fails with the following message:

error: missing AZP_URL environment variable

This means the variables are not accessible inside the container: start.sh only runs when the container starts, not in the current shell, so variables exported in the host shell never reach it. We need to declare the variables inside the Dockerfile as well.
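The effect can be shown without Docker: a process started with an empty environment (which is essentially what the container gives start.sh when nothing is passed in) does not see variables exported in the host shell. The URL below is a placeholder.

```shell
export AZP_URL=https://example.invalid/

# A normal child process inherits the exported variable...
sh -c 'echo "child sees: [$AZP_URL]"'

# ...but a process started with a scrubbed environment does not,
# just like a container run without -e/--env-file options.
env -i /bin/sh -c 'echo "empty env sees: [$AZP_URL]"'
```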
For most parameters you can simply put the non-secret variables in there:
ENV AZP_URL=https://dev.azure.com/TilburgU/
But the token should remain hidden, and you can use an .env file for that. Create a file called .env and add it to .gitignore as well. Set the environment variable in it like this.
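The file contents are missing above; presumably it was a single assignment (the value here is a placeholder, not the real token):

```
# .env -- read by docker run --env-file; plain KEY=value lines, no 'export'
AZP_TOKEN=<paste the personal access token here>
```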
Now start the dockeragent, e.g. with docker run --env-file .env dockeragent :
The output will be something like this if successful :
1. Determining matching Azure Pipelines agent...
2. Downloading and extracting Azure Pipelines agent...
3. Configuring Azure Pipelines agent...
___ ______ _ _ _
/ _ \ | ___ (_) | (_)
/ /_\ \_____ _ _ __ ___ | |_/ /_ _ __ ___| |_ _ __ ___ ___
| _ |_ / | | | '__/ _ \ | __/| | '_ \ / _ \ | | '_ \ / _ \/ __|
| | | |/ /| |_| | | | __/ | | | | |_) | __/ | | | | | __/\__ \
\_| |_/___|\__,_|_| \___| \_| |_| .__/ \___|_|_|_| |_|\___||___/
| |
agent v2.206.1 |_| (commit ef1261a)
>> End User License Agreements:
Building sources from a TFVC repository requires accepting the Team Explorer Everywhere End User License Agreement. This step is not required for building sources from Git repositories.
A copy of the Team Explorer Everywhere license agreement can be found at:
/azp/license.html
>> Connect:
Connecting to server ...
>> Register Agent:
Scanning for tool capabilities.
Connecting to the server.
Successfully added the agent
Testing agent connection.
2022-08-18 11:35:43Z: Settings Saved.
4. Running Azure Pipelines agent...
Starting Agent listener with startup type: service - to prevent running of an agent in a separate process after self-update
Scanning for tool capabilities.
Connecting to the server.
2022-08-18 11:35:45Z: Listening for Jobs
toolPath error
The next error encountered is a toolPath error. It means you have to set the correct Python path in azure-pipelines.yaml:
- task: PythonScript@0
  displayName: 'Export project path'
  inputs:
    pythonInterpreter: '/usr/bin/python3'
    scriptSource: 'inline'
    script: |
The pythonInterpreter line is the addition that fixes the error.
SyntaxError: Unexpected token =
Note first that the crash is preceded by these ERROR lines, but they are not the problem and seem to be present in every run:
[648224:0819/115851.573514:ERROR:sandbox_linux.cc(377)] InitializeSandbox() called with multiple threads in process gpu-process.
[648224:0819/115851.578394:ERROR:gpu_memory_buffer_support_x11.cc(44)] dri3 extension not supported.
It is the unexpected token error that crashes the run:
/root/.cache/Cypress/10.3.1/Cypress/resources/app/node_modules/@packages/server/lib/plugins/child/run_plugins.js:40
invoke = (eventId, args = []) => {
^
SyntaxError: Unexpected token =
at Module._compile (internal/modules/cjs/loader.js:723:23)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:789:10)
at Module.load (internal/modules/cjs/loader.js:653:32)
at tryModuleLoad (internal/modules/cjs/loader.js:593:12)
at Function.Module._load (internal/modules/cjs/loader.js:585:3)
at Module.require (internal/modules/cjs/loader.js:692:17)
at require (internal/modules/cjs/helpers.js:25:18)
at Object.<anonymous> (/root/.cache/Cypress/10.3.1/Cypress/resources/app/node_modules/@packages/server/lib/plugins/child/run_require_async_child.js:6:24)
at Module._compile (internal/modules/cjs/loader.js:778:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:789:10)
at Module.load (internal/modules/cjs/loader.js:653:32)
at tryModuleLoad (internal/modules/cjs/loader.js:593:12)
at Function.Module._load (internal/modules/cjs/loader.js:585:3)
at Module.require (internal/modules/cjs/loader.js:692:17)
at require (internal/modules/cjs/helpers.js:25:18)
at Object.<anonymous> (/root/.cache/Cypress/10.3.1/Cypress/resources/app/node_modules/@packages/server/lib/plugins/child/require_async_child.js:12:13)
^CGdk-Message: 08:26:07.537: Cypress: Fatal IO error 2 (No such file or directory) on X server :99.
It seems that this is a Node.js version problem, specifically about class instance fields. This example:

class Foo { bar = () => {} }

works perfectly fine with a recent Node.js, but:
node -v
v10.24.0
node try.js
/home/kees/try.js:1
class Foo { bar = () => {} }
^
SyntaxError: Unexpected token =
at Module._compile (internal/modules/cjs/loader.js:723:23)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:789:10)
at Module.load (internal/modules/cjs/loader.js:653:32)
at tryModuleLoad (internal/modules/cjs/loader.js:593:12)
at Function.Module._load (internal/modules/cjs/loader.js:585:3)
at Function.Module.runMain (internal/modules/cjs/loader.js:831:12)
at startup (internal/bootstrap/node.js:283:19)
at bootstrapNodeJSCore (internal/bootstrap/node.js:623:3)
So try to get Node.js up to at least v12.22! I recall that one test with a Docker image did report a v12 Node but still had this error, so it might be that early v12 releases (such as v12.1) do not work either.
deployment
For now we must assume that the deployment step will be done from a runner other than kairyu. This is because the runner needs a lot of resources, more than the app itself, so a separate runner is better:
- An install run would take much of the resources hampering the main app.
- A separate runner can be reused for other purposes.
But this means we are not on the same machine when deploying. The Ansible setup can be used to get this working.