Developer setup

Getting started

There are many ways to set up your local environment; this is just a basic, quick example of how to set up everything you’ll need to get started running and developing Armada.


To follow this section, it is assumed you have Go, Docker and kind installed (and Node.js/npm if you want to work on Lookout).

Running Armada locally

There are two options for developing Armada locally.


You can use kind Kubernetes clusters.

This is recommended if load doesn’t matter, or if you are working on features that rely on integration with Kubernetes functionality.


You can use the fake-executor.

This is recommended when working on features that are purely Armada-specific, or if you want to push a high load of jobs through the Armada components.

Setup Kind development

  1. Get kind (Installation help here)
     go install
  2. Create kind clusters (you can create any number of clusters)

    As this step uses Docker, it may require root to run

     kind create cluster --name demo-a --config ./example/kind-config.yaml
     kind create cluster --name demo-b --config ./example/kind-config.yaml
  3. Start Redis
     docker run -d -p 6379:6379 redis

    The following steps are shown in a terminal, but for development it is recommended to run them in your IDE

  4. Start server in one terminal
     go run ./cmd/armada/main.go --config ./e2e/setup/insecure-armada-auth-config.yaml
  5. Start executor for demo-a in a new terminal
     ARMADA_APPLICATION_CLUSTERID=kind-demo-a ARMADA_METRIC_PORT=9001 go run ./cmd/executor/main.go
  6. Start executor for demo-b in a new terminal
     ARMADA_APPLICATION_CLUSTERID=kind-demo-b ARMADA_METRIC_PORT=9002 go run ./cmd/executor/main.go
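With both executors running, a quick sanity check can confirm the moving parts are up. This is only a sketch: it assumes `kind`, `docker` and `curl` are on your PATH, that the Redis container started above is the only one running, and that the executors expose Prometheus metrics on the `ARMADA_METRIC_PORT` values set above.

```shell
# List the kind clusters created earlier; expect demo-a and demo-b.
kind get clusters || echo "kind not on PATH?"

# Redis should answer PONG on the default port.
docker exec "$(docker ps -q -f ancestor=redis)" redis-cli ping || echo "Redis not reachable"

# Each executor serves Prometheus metrics on its ARMADA_METRIC_PORT (9001/9002 above).
curl -sf localhost:9001/metrics >/dev/null && echo "demo-a executor up" || echo "demo-a executor not reachable"
curl -sf localhost:9002/metrics >/dev/null && echo "demo-b executor up" || echo "demo-b executor not reachable"
```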

Setup Fake-executor development

  1. Start Redis
     docker run -d -p 6379:6379 redis

    The following steps are shown in a terminal, but for development it is recommended to run them in your IDE

  2. Start server in one terminal
     go run ./cmd/armada/main.go --config ./e2e/setup/insecure-armada-auth-config.yaml
  3. Start executor for demo-a in a new terminal
     ARMADA_APPLICATION_CLUSTERID=demo-a ARMADA_METRIC_PORT=9001 go run ./cmd/fakeexecutor/main.go
  4. Start executor for demo-b in a new terminal
     ARMADA_APPLICATION_CLUSTERID=demo-b ARMADA_METRIC_PORT=9002 go run ./cmd/fakeexecutor/main.go

Optional components

NATS Streaming

Armada can be set up to use NATS Streaming as a message queue for events. To run NATS Streaming for development, you can use Docker:

docker run -d -p 4223:4223 -p 8223:8223 nats-streaming -p 4223 -m 8223

For the Armada configuration, check the end-to-end test setup:

go run ./cmd/armada/main.go --config ./e2e/setup/insecure-armada-auth-config.yaml --config ./e2e/setup/nats/armada-config.yaml
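To check that the NATS Streaming container is healthy, you can query its monitoring port (the `-m 8223` flag above); a small sketch, assuming `curl` is available:

```shell
# NATS Streaming serves monitoring endpoints on the -m port (8223 here);
# /streaming/serverz reports server state, client count and message totals.
curl -s http://localhost:8223/streaming/serverz || echo "NATS Streaming not reachable on 8223"
```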
Lookout - Armada UI

Lookout requires Armada to be configured with NATS Streaming. To run Lookout, first build the frontend:

cd ./internal/lookout/ui
npm install
npm run openapi
npm run build

Start NATS Streaming:

docker run -d -p 4223:4223 -p 8223:8223 nats-streaming -p 4223 -m 8223

Start a Postgres database:

docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=psw postgres
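Before migrating, it’s worth confirming the database accepts connections; a sketch using `pg_isready` inside the container just started (it assumes this is the only `postgres` container running):

```shell
# Ask the Postgres container whether it is accepting connections.
docker exec "$(docker ps -q -f ancestor=postgres)" pg_isready -U postgres || echo "Postgres is not ready yet"
```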

Migrate database:

go run ./cmd/lookout/main.go --migrateDatabase

Then run the Go application:

go run ./cmd/lookout/main.go 

For UI development, you can also use the React development server. Note that the Lookout API must still be running for this to work.

npm run start

Quick dev setup

Optionally, you can get a kind cluster and Redis, NATS and PostgreSQL containers up by running


The script will print out the commands to run each of the three Armada components: armada, armada-lookout and armada-executor.

If you want to run Armada components using NATS JetStream (as opposed to NATS Streaming), you can run the following command and copy the commands it prints:

./docs/dev/ jetstream

When you’re done, you can run


to tear down your development environment.

Testing your setup

  1. Create a queue and submit a job
    go run ./cmd/armadactl/main.go create queue test --priorityFactor 1
    go run ./cmd/armadactl/main.go submit ./example/jobs.yaml
    go run ./cmd/armadactl/main.go watch test job-set-1

For more details on submitting jobs to Armada, see here.

Once you submit jobs, you should be able to see pods appearing in your cluster(s), running what you submitted.
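With the kind setup, you can inspect those pods directly; kind registers a kubectl context named `kind-<cluster-name>` for each cluster, so for the clusters created earlier:

```shell
# Pods created by Armada for submitted jobs show up in the kind clusters.
kubectl --context kind-demo-a get pods -A || echo "kind-demo-a context not found"
kubectl --context kind-demo-b get pods -A || echo "kind-demo-b context not found"
```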

Note: Depending on your Docker setup, you might need to manually load images for jobs you plan to run:

kind load docker-image busybox:latest

Running tests

For unit tests, run:

make tests

For end-to-end tests, run:

make tests-e2e
# optionally stop kubernetes cluster which was started by test
make e2e-stop-cluster

Code Generation

This project uses code generation.

The Armada API is defined using proto files, which are used to generate Go source code (and the C# client) for our gRPC communication.

To generate source code from proto files:

make proto

Additionally, the moq tool is used to generate test mocks in internal/jobservice/events and internal/jobservice/repository (and possibly other places). If you need to update the mocks, do the following:

go install
cd internal/jobservice/events  # or wherever
go generate
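Rather than visiting each directory, all `//go:generate` directives in the repository can be re-run in one pass; a sketch, assuming `moq` is already on your PATH:

```shell
# Re-run every //go:generate directive (including the moq ones) across the repo.
go generate ./... || echo "go generate failed; is moq on PATH?"
```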

Usage metrics

The executor can also report how much CPU/memory jobs are actually using.

This is turned on by adding the following to the executor config file:

   exposeQueueUsageMetrics: true

The metrics are calculated from values provided by metrics-server.

When developing locally with kind, you will also need to deploy metrics-server for this to work.

The simplest way to do this is to apply the manifest to your kind cluster:

kubectl apply -f
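On kind, metrics-server typically also needs the `--kubelet-insecure-tls` flag, because kind’s kubelets serve self-signed certificates. A sketch of patching the deployment after applying the manifest (the deployment name and namespace below match the standard metrics-server manifest):

```shell
# Add --kubelet-insecure-tls to metrics-server so it can scrape kind's
# kubelets, which serve self-signed certificates.
kubectl -n kube-system patch deployment metrics-server --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"}]' \
  || echo "metrics-server deployment not found"
```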

Command-line tools

Our command-line tools use the cobra framework

You can use the cobra CLI to add new commands. The steps below describe how to add a new command to armadactl, but they apply to any of our command-line tools.


Get cobra cli tool:

go install

Change to the directory of the command-line tool you are working on:

cd ./cmd/armadactl

Use cobra to add a new command:

cobra add commandName

You should see a new file appear under ./cmd/armadactl/cmd with the name you specified.

Setting up OIDC for developers

Setting up OIDC can be an art. The Okta Developer Program provides a nice way to test the OAuth flow.

  1. Create an Okta developer account - I used my GitHub account.
  2. Create a new app in the Okta UI.
     - Select OIDC - OpenID Connect.
     - Select Web Application.
  3. In grant type, make sure to select Client Credentials. This has the advantage of requiring little interaction.
  4. Select ‘Allow Everyone to Access’.
  5. Deselect Federation Broker Mode.
  6. Click okay and generate a client secret.
  7. Navigate in the Okta settings to the API settings for your default authenticator.
  8. Set the Audience to your client ID.

Setting up OIDC for Armada requires two separate configs: one for the Armada server and one for the clients.

You can add this to your armada server config.

    anonymousAuth: false
    providerUrl: ""
    groupsClaim: "groups"
    clientId: "CLIENT_ID_FROM_UI"
    scopes: []

For client credentials, you can use the following config for the executor and other clients.

    providerUrl: ""
    clientId: "CLIENT_ID_FROM_UI"
    clientSecret: "CLIENT_SECRET"
    scopes: []

If you want to interact with Armada, you will have to use one of our client APIs; armadactl is not set up to work with OIDC at this time.