Developer Guide
Set up your development environment and start contributing to Armada
This guide helps you set up a development environment for contributing to Armada or customizing it with new features. For contribution guidelines, see the Contributor Guide.
Prerequisites
Install the following tools before you begin. These are verified requirements from the Armada source code:
- Go (version 1.25 or later) - Required for building Armada
- gcc (for Windows, see tdm-gcc) - Required for CGO compilation
- mage - Build tool used throughout the Armada project (similar to Make, written in Go)
- Docker - Container runtime for running dependencies
- kubectl - Kubernetes command-line tool
- protobuf (version 3.17.3 or later) - Protocol buffer compiler (required if you modify .proto files)
- kind - Kubernetes in Docker (bootstrapped via mage BootstrapTools)
Note: Additional tools are automatically installed via mage BootstrapTools from tools.yaml, including golangci-lint, sqlc, go-swagger, and others.
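Before bootstrapping, it can help to confirm the core tools are on your PATH. This is a minimal sketch; the exact versions you see will differ, and the mage install command shown is just one common way to get it:

# Confirm the core tools are installed and visible on PATH
go version        # expect go1.25 or later
gcc --version
docker --version
kubectl version --client
protoc --version  # expect libprotoc 3.17.3 or later

# Install mage if it is not already present
go install github.com/magefile/mage@latest

# Bootstrap the remaining tools (golangci-lint, sqlc, go-swagger, kind, ...)
mage BootstrapTools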
Development Environment Setup
Armada provides two main ways to run components locally for development. Choose the method that best fits your workflow.
Using mage LocalDev (Recommended)
LocalDev automates the setup process and is the recommended way to get started:
- Bootstraps required tools from tools.yaml
- Creates a local Kubernetes cluster using kind
- Starts dependencies (Pulsar, Redis, PostgreSQL)
- Builds and starts Armada components
Note: If you edit a proto file, run mage proto to regenerate the Go code.
LocalDev has several modes:
# Minimal setup - runs only core components (server, executor, scheduler)
# This is what CI uses for testing
mage localdev minimal
# Full setup - runs all components including Lookout UI
mage localdev full
# Skip build step - use existing Docker images
# Set ARMADA_IMAGE and ARMADA_TAG to choose the image
mage localdev no-build

We use mage localdev minimal to test the CI pipeline. Use it to test changes to core components.
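If you already have Armada images available (for example from a previous build or a registry), no-build mode can reuse them. A minimal sketch; the image name and tag below are placeholders you should adjust:

# Point LocalDev at an existing image instead of rebuilding (example values)
export ARMADA_IMAGE=gresearch/armada
export ARMADA_TAG=latest
mage localdev no-build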
To stop the local development environment:
mage LocalDevStop

Using Goreman
Goreman is a Go-based clone of Foreman that manages Procfile-based applications, allowing you to run multiple processes with a single command. Using the provided Procfiles, the Armada components are built from source and run locally, making it easy to test changes quickly.
- Install goreman:
  go install github.com/mattn/goreman@latest
- Start dependencies:
  docker-compose -f _local/docker-compose-deps.yaml up -d
  Note: Images can be overridden using the environment variables REDIS_IMAGE, POSTGRES_IMAGE, PULSAR_IMAGE and KEYCLOAK_IMAGE (see the sketch after this list).
- Initialize databases and Kubernetes resources:
  scripts/localdev-init.sh
- Start Armada components:
  goreman -f _local/procfiles/no-auth.Procfile start
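If you need to pin specific dependency images, export the overrides before bringing up the compose stack. A minimal end-to-end sketch; the image values are illustrative assumptions:

# Optionally override dependency images (example values)
export PULSAR_IMAGE=apachepulsar/pulsar:2.11.0
export POSTGRES_IMAGE=postgres:15

# Bring up dependencies, initialise them, then start Armada under goreman
docker-compose -f _local/docker-compose-deps.yaml up -d
scripts/localdev-init.sh
goreman -f _local/procfiles/no-auth.Procfile start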
Local Development with Authentication
To run Armada with OIDC authentication enabled using Keycloak:
- Start dependencies with the auth profile:
  docker-compose -f _local/docker-compose-deps.yaml --profile auth up -d
  This starts Redis, PostgreSQL, Pulsar, and Keycloak with a pre-configured realm.
- Initialize databases and Kubernetes resources:
  scripts/localdev-init.sh
- Start Armada components with auth configuration:
  goreman -f _local/procfiles/auth.Procfile start
- Use armadactl with OIDC authentication:
  armadactl --config _local/.armadactl.yaml --context auth-oidc get queues
Local Development with Fake Executor
For testing Armada without a real Kubernetes cluster, you can use the fake executor that simulates a Kubernetes environment:
goreman -f _local/procfiles/fake-executor.Procfile start

The fake executor simulates:
- 2 virtual nodes with 8 CPUs and 32Gi memory each
- Pod lifecycle management without actual container execution
- Resource allocation and job state transitions
This is useful for:
- Testing Armada's scheduling logic
- Development when Kubernetes is not available
- Integration testing of job flows
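To exercise the scheduling path end to end against the fake executor, you can create a queue and submit a small job. This is a hedged sketch: the queue name, job set ID and job.yaml file are placeholders, and the pod spec is only an example:

# Create a queue and submit a minimal job (job.yaml is a hypothetical file)
go run cmd/armadactl/main.go create queue dev-queue
cat > job.yaml <<'EOF'
queue: dev-queue
jobSetId: fake-executor-demo
jobs:
  - priority: 0
    podSpec:
      restartPolicy: Never
      containers:
        - name: sleep
          image: alpine:3.18
          command: ["sleep", "60"]
          resources:
            requests:
              cpu: 100m
              memory: 64Mi
            limits:
              cpu: 100m
              memory: 64Mi
EOF
go run cmd/armadactl/main.go submit job.yaml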
Configuration Options
You can set the ARMADA_COMPONENTS environment variable to choose which components to run:
export ARMADA_COMPONENTS="server,executor"

Testing Your Setup
Verify that your development environment is working:
# Run the test suite
mage testsuite

Or manually:
go run cmd/armadactl/main.go create queue e2e-test-queue
export ARMADA_EXECUTOR_INGRESS_URL="http://localhost"
export ARMADA_EXECUTOR_INGRESS_PORT=5001
go run cmd/testsuite/main.go test --tests "testsuite/testcases/basic/*" --junit junit.xml

Code Structure
Understanding Armada's codebase structure will help you navigate and contribute effectively.
Directory Layout
armada/
├── cmd/ # Main entry points for all components
│ ├── server/ # Armada server (API server)
│ ├── executor/ # Executor (runs in each K8s cluster)
│ ├── scheduler/ # Scheduler (job scheduling logic)
│ ├── lookout/ # Lookout (job monitoring/UI backend)
│ └── armadactl/ # Command-line interface
├── internal/ # Internal packages (not for external use)
│ ├── server/ # Server implementation
│ ├── executor/ # Executor implementation
│ ├── scheduler/ # Scheduler implementation
│ ├── lookout/ # Lookout implementation
│ └── common/ # Shared utilities
├── pkg/ # Public packages (for external use)
│ ├── api/ # gRPC API definitions
│ └── client/ # Client libraries
├── config/ # Configuration files for components
├── deployment/ # Helm charts and deployment configs
├── magefiles/ # Build automation (mage targets)
└── testsuite/ # Integration test cases

Key Components
- Server (cmd/server/, internal/server/): The main API server that accepts job submissions and manages queues
- Executor (cmd/executor/, internal/executor/): Runs in each Kubernetes cluster and executes jobs
- Scheduler (cmd/scheduler/, internal/scheduler/): Determines when and where jobs should run
- Lookout (cmd/lookout/, internal/lookout/): Provides job monitoring and UI backend
- armadactl (cmd/armadactl/): Command-line interface for interacting with Armada
Using mage
mage is the build tool used throughout the Armada project. To see all available commands:
mage -l

Common mage targets:
- mage localdev minimal - Start minimal local development environment
- mage localdev full - Start full local development environment
- mage buildDockers - Build Docker images
- mage proto - Generate Go code from proto files
- mage testsuite - Run the test suite
- mage ui - Build and run the Lookout UI
Debugging and Profiling
Profiling with pprof
Go provides a profiling tool called pprof. To use pprof with Armada, enable the profiling socket in your config.
profiling:
  port: 6060
  hostnames:
    - 'armada-scheduler-profiling.armada.my-k8s-cluster.com'
  clusterIssuer: 'k8s-cluster-issuer'
  auth:
    anonymousAuth: true
    permissionGroupMapping:
      pprof: ['everyone']
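Once the profiling socket is enabled, profiles can be pulled with the standard Go tooling. A minimal sketch, assuming the component's profiling port 6060 from the config above is reachable on localhost (for example via a port-forward):

# Grab a 30-second CPU profile and open the interactive pprof viewer
go tool pprof "http://localhost:6060/debug/pprof/profile?seconds=30"

# Heap and goroutine snapshots work the same way
go tool pprof http://localhost:6060/debug/pprof/heap
curl -s "http://localhost:6060/debug/pprof/goroutine?debug=2" | head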
Debugging with VS Code

Run server and executor in debug mode and generate a launch.json for VS Code:
mage debug vscode

After running this, attach to the running processes using VS Code. See the VS Code Debugging Guide for details.
Debugging with Delve
Run components in debug mode with Delve:
mage debug delve

This creates a docker-compose.dev.yaml file. You can also create it manually:
mage createDelveCompose
docker compose -f docker-compose.dev.yaml up -d server executor

Then attach to the running processes:
docker compose exec -it server bash
dlv connect :4000
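Inside the Delve session you then use the usual commands. A short sketch; the breakpoint location is purely illustrative:

# Typed at the dlv prompt after connecting
(dlv) break internal/server/server.go:100
(dlv) continue
(dlv) goroutines
(dlv) quit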
Debug Port Mappings

| Armada service | Debug host |
|---|---|
| server | localhost:4000 |
| executor | localhost:4001 |
| binoculars | localhost:4002 |
| eventingester | localhost:4003 |
| lookoutui | localhost:4004 |
| lookout | localhost:4005 |
| lookoutingester | localhost:4007 |
GoLand Run Configurations
Run configurations are available in the .run directory. When opening the project in GoLand, you can run Armada in both standard and debug mode.
Note: The executor requires a Kubernetes config in $PROJECT_DIR$/.kube/internal/config.
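If that file does not exist yet, you can export one from the local kind cluster. A hedged sketch, assuming the LocalDev kind cluster is named armada-test; check `kind get clusters` for the actual name:

# Export an in-cluster kubeconfig for the executor (cluster name is an assumption)
mkdir -p .kube/internal
kind get kubeconfig --internal --name armada-test > .kube/internal/config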
Other Debugging Methods
Run mage debug local to spin up only the dependencies, then run individual components yourself.
For required environmental variables, see the Environmental Variables guide.
Extending Armada
Armada can be extended and customized in several ways:
Custom Schedulers
The scheduler is designed to be extensible. You can implement custom scheduling algorithms by modifying the scheduler logic in internal/scheduler/. The scheduler handles:
- Job queuing and prioritization
- Resource allocation
- Gang scheduling
- Preemption logic
Custom Executors
While the standard executor works with Kubernetes, you could create custom executors for other platforms. The executor interface is defined in pkg/executorapi/ and communicates with the scheduler via gRPC.
Client Libraries
Armada provides client libraries for multiple languages that you can extend or use as reference:
- Python: client/python/
- Java: client/java/
- Scala: client/scala/
- .NET: client/DotNet/
These libraries provide programmatic access to Armada's APIs and can be used as reference for building custom clients.
Integration Examples
Armada has been integrated with various systems. These can serve as examples for creating your own integrations:
- Airflow: third_party/airflow/ - Airflow operator for Armada
- Metaflow: armada-metaflow repository - Metaflow decorator for Armada
- Jenkins: jenkins-plugin repository - Jenkins plugin for Armada
- Spark: armada-spark repository - Spark cluster manager for Armada
UI Development
The Lookout UI code is located in internal/lookoutui/. The UI is built with React and TypeScript.
When using Goreman, the UI runs automatically on http://localhost:3000 (frontend dev server).
To run the UI separately for development:
cd internal/lookoutui
yarn
yarn openapi
PROXY_TARGET=http://localhost:8089 yarn dev

This starts a development server on http://localhost:3000 that proxies API requests to the backend.
Alternatively, build a production version with:
mage ui

This builds the UI and makes it available at http://localhost:8089.
Troubleshooting
Port 6443 Already in Use
If port 6443 is already in use, modify e2e/setup/kind.yaml to use a different port:
- containerPort: 6443
  hostPort: 6444 # Change to an available port
  protocol: TCP

Arm/M1 Mac Issues
On Arm/M1 Macs, you may need to set:
export PULSAR_IMAGE=richgross/pulsar:2.11.0
Additional Resources
Website Resources
- API Reference - API reference for REST and gRPC
- Contributor Guide - Contribution guidelines and PR process
- Understanding Armada - Core concepts and architecture
- Community - Get help, connect with the community, and find support resources