Add runner binary serving via orchestrator, update configurations and documentation

- Extend `.env.sample` with `RUNNER_DIR_HOST` for serving workflow runner binaries.
- Update `compose.yml` with `RUNNER_DIR` and corresponding volume mount.
- Add instructions for runner binary setup and serving in `README.md`.
- Enhance `mise.toml` with new tooling dependencies for building runners.

Signed-off-by: Till Wegmueller <toasterson@gmail.com>
Till Wegmueller 2025-11-09 19:02:42 +01:00
parent f904cb88b2
commit 248885bdf8
4 changed files with 30 additions and 5 deletions

.env.sample

@@ -52,6 +52,10 @@ ORCH_IMAGES_DIR=/var/lib/solstice/images
# Host working directory for per-VM overlays and logs; mounted read-write
# The libvirt backend will use /var/lib/solstice-ci inside the container; map it to a persistent host path.
ORCH_WORK_DIR=/var/lib/solstice-ci
+# Host directory containing workflow runner binaries to be served by the orchestrator
+# Files in this directory are served read-only at http(s)://runner.svc.${DOMAIN}/runners/<filename>
+# Default points to the workspace target/runners where mise tasks may place built artifacts.
+RUNNER_DIR_HOST=../../target/runners
# Forge Integration secrets (set per deployment)
# Shared secret used to validate Forgejo/Gitea webhooks (X-Gitea-Signature HMAC-SHA256)
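The dev default above is a relative path; compose resolves relative bind-mount sources against the directory containing the compose file (deploy/podman/), so `../../target/runners` points at `target/runners` in the repository root. A minimal sketch of that resolution, using an illustrative `/tmp/repo` layout:

```shell
# Hypothetical repo layout; only the path resolution is being demonstrated.
mkdir -p /tmp/repo/deploy/podman /tmp/repo/target/runners
cd /tmp/repo/deploy/podman          # where compose.yml would live
realpath ../../target/runners       # resolves to /tmp/repo/target/runners
```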

README.md

@@ -45,11 +45,11 @@ Quick start
Services and routing
- Traefik dashboard: https://traefik.svc.${DOMAIN} (protect with TRAEFIK_DASHBOARD_AUTH in .env)
-- Orchestrator HTTP: https://api.svc.${DOMAIN}
-- Orchestrator gRPC (h2/TLS via SNI): grpc.svc.${DOMAIN}
-- Forge webhooks: https://forge.svc.${DOMAIN}
-- GitHub webhooks: https://github.svc.${DOMAIN}
-- Runner static server: https://runner.svc.${DOMAIN}
+- Orchestrator HTTP: https://api.${ENV}.${DOMAIN}
+- Orchestrator gRPC (h2/TLS via SNI): grpc.${ENV}.${DOMAIN}
+- Forge webhooks: https://forge.${ENV}.${DOMAIN}
+- GitHub webhooks: https://github.${ENV}.${DOMAIN}
+- Runner static server: https://runner.${ENV}.${DOMAIN}
- MinIO console: https://minio.svc.${DOMAIN}
- S3 API: s3.svc.${DOMAIN}
- RabbitMQ management: https://mq.svc.${DOMAIN}
@@ -141,6 +141,21 @@ Notes
- Sockets and configs: compose binds libvirt control sockets and common libvirt directories read-only so the orchestrator can read network definitions and create domains.
- If you change LIBVIRT_URI or LIBVIRT_NETWORK, update deploy/podman/.env and redeploy.
+Runner binaries (served by the orchestrator)
+- Purpose: Builder VMs download workflow runner binaries from the orchestrator over HTTP.
+- Host directory: Set RUNNER_DIR_HOST in deploy/podman/.env. This path is bind-mounted read-only into the orchestrator at /runners.
+- Example (prod default in .env): RUNNER_DIR_HOST=/var/lib/solstice/runners
+- Example (dev default in .env.sample): RUNNER_DIR_HOST=../../target/runners
+- URLs: Files are served at http(s)://runner.${ENV}.${DOMAIN}/runners/<filename>
+- Example: https://runner.prod.${DOMAIN}/runners/solstice-runner-linux
+- Orchestrator injection: The orchestrator auto-computes default runner URLs from its HTTP_ADDR and contact address and injects them into cloud-init.
+- You can override via env: SOLSTICE_RUNNER_URL (single) and SOLSTICE_RUNNER_URLS (space-separated list) to point VMs at specific filenames.
+- To build/place binaries:
+- Build the workflow-runner crate for your target(s) and place the resulting artifacts in RUNNER_DIR_HOST with stable filenames (e.g., solstice-runner-linux, solstice-runner-illumos).
+- Ensure file permissions allow read by the orchestrator user (world-readable is fine for static serving).
+- Traefik routing: runner.${ENV}.${DOMAIN} routes to the orchestrator's HTTP port (8081 by default).
Forge integration configuration
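The URL scheme described in the README hunk above can be sketched in shell; `prod` and `example.com` are placeholders for a real ENV/DOMAIN pair, and the filename follows the stable-name convention from the hunk:

```shell
# Construct a runner download URL as the builder VMs would see it.
# ENV and DOMAIN are deployment-specific; these values are illustrative.
ENV=prod
DOMAIN=example.com
echo "https://runner.${ENV}.${DOMAIN}/runners/solstice-runner-linux"
# prints: https://runner.prod.example.com/runners/solstice-runner-linux
```

Overriding SOLSTICE_RUNNER_URL (or SOLSTICE_RUNNER_URLS) with such a URL points VMs at a specific staged binary instead of the auto-computed default.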

compose.yml

@@ -190,6 +190,8 @@ services:
AMQP_ROUTING_KEY: jobrequest.v1
GRPC_ADDR: 0.0.0.0:50051
HTTP_ADDR: 0.0.0.0:8081
+# Directory inside the container to serve runner binaries from
+RUNNER_DIR: /runners
# Libvirt configuration for Linux/KVM
LIBVIRT_URI: ${LIBVIRT_URI:-qemu:///system}
LIBVIRT_NETWORK: ${LIBVIRT_NETWORK:-default}
@@ -208,6 +210,8 @@ services:
- ${ORCH_IMAGES_DIR:-/var/lib/solstice/images}:/var/lib/solstice/images:Z
# Writable bind for per-VM overlays and console logs (used by libvirt backend)
- ${ORCH_WORK_DIR:-/var/lib/solstice-ci}:/var/lib/solstice-ci:Z
+# Read-only bind for locally built workflow runner binaries to be served by the orchestrator
+- ${RUNNER_DIR_HOST:-../../target/runners}:/runners:ro,Z
# Libvirt control sockets (ro is sufficient for read-only, but write is needed to create domains)
- /var/run/libvirt/libvirt-sock:/var/run/libvirt/libvirt-sock:Z
- /var/run/libvirt/libvirt-sock-ro:/var/run/libvirt/libvirt-sock-ro:Z
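The `${RUNNER_DIR_HOST:-../../target/runners}` volume source above uses compose's `${VAR:-default}` interpolation, which follows shell parameter-expansion semantics: the default applies when the variable is unset or empty. A quick shell demonstration of the same expansion:

```shell
# Same ${VAR:-default} semantics compose.yml relies on for the volume source.
unset RUNNER_DIR_HOST
echo "${RUNNER_DIR_HOST:-../../target/runners}"   # prints: ../../target/runners
RUNNER_DIR_HOST=/var/lib/solstice/runners
echo "${RUNNER_DIR_HOST:-../../target/runners}"   # prints: /var/lib/solstice/runners
```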

mise.toml

@@ -1,5 +1,7 @@
[tools]
age = "latest"
"cargo:cross" = "latest"
fnox = "latest"
protoc = "latest"
python = "latest"
rust = "latest"