Add support for multi-OS VM builds with cross-built runners and improved local development tooling

This commit introduces:
- Flexible runner URL configuration via `SOLSTICE_RUNNER_URL(S)` for cloud-init.
- Automated detection of OS-specific runner binaries during VM boot.
- Tasks for cross-building, serving, and orchestrating Solstice runners.
- End-to-end VM build flows for Linux and Illumos environments.
- Enhanced orchestration with multi-runner HTTP serving and log streaming.
Till Wegmueller 2025-11-01 14:31:48 +01:00
parent 9bac2382fd
commit 0b54881558
21 changed files with 647 additions and 8 deletions

@@ -14,6 +14,22 @@ This document records project-specific build, test, and development conventions
- Top-level build:
- Build everything: `cargo build --workspace`
- Run individual binaries during development using `cargo run -p <crate>`.
- mise file tasks:
- Tasks live under `.mise/tasks/` and are discovered automatically by mise.
- List all available tasks: `mise tasks`
- Run tasks with namespace-style names (directory -> `:`). Examples:
- Build all (debug): `mise run build:all`
- Build all (release): `mise run build:release`
- Test all: `mise run test:all`
- Start local deps (RabbitMQ): `mise run dev:up`
- Stop local deps: `mise run dev:down`
- Run orchestrator with local defaults: `mise run run:orchestrator`
- Enqueue a sample job for the current repo/commit: `mise run run:forge-enqueue`
- Serve the workflow runner for VMs to download (local dev): `mise run run:runner-serve`
- End-to-end local flow (bring up deps, start orchestrator, enqueue one job, tail logs): `mise run ci:local`
- Notes for local VM downloads:
- The orchestrator injects a `SOLSTICE_RUNNER_URL` into cloud-init; `ci:local` sets this automatically by serving the runner from your host.
- You can set `ORCH_CONTACT_ADDR` to the host:port where the runner should stream logs back (defaults to `GRPC_ADDR`).
- Lints and formatting follow the default Rust style unless a crate specifies otherwise. Prefer `cargo fmt` and `cargo clippy --workspace --all-targets --all-features` before committing.
- Secrets and credentials are never committed. For local runs, use environment variables or a `.env` provider (do not add `.env` to VCS). In CI/deployments, use a secret store (e.g., Vault, KMS) — see the Integration layer notes.
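The local-download notes above can be reproduced by hand. A minimal sketch, assuming the `run:runner-serve` defaults (the host address is a placeholder; `ci:local` detects it for you):

```shell
#!/usr/bin/env bash
# Sketch: set by hand the env vars that ci:local wires up automatically.
# HOST_IP and PORT are illustrative values, not project requirements.
set -euo pipefail
HOST_IP=${HOST_IP:-127.0.0.1}        # address the VM can reach the host on
PORT=${SOL_RUNNER_PORT:-8089}        # run:runner-serve default port
export SOLSTICE_RUNNER_URL="http://$HOST_IP:$PORT/solstice-runner"
export ORCH_CONTACT_ADDR=${ORCH_CONTACT_ADDR:-"$HOST_IP:50051"}
echo "runner url:   $SOLSTICE_RUNNER_URL"
echo "contact addr: $ORCH_CONTACT_ADDR"
```

With these exported, `mise run run:orchestrator` picks them up from the environment just as the `ci:local` task does.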

7  .mise/tasks/build/all  Executable file
@@ -0,0 +1,7 @@
#!/usr/bin/env bash
set -euo pipefail
# Build all crates in the workspace (debug)
export RUSTFLAGS=${RUSTFLAGS:-}
export RUST_LOG=${RUST_LOG:-info}
command -v cargo >/dev/null 2>&1 || { echo "cargo is required" >&2; exit 127; }
exec cargo build --workspace

4  .mise/tasks/build/orchestrator  Executable file
@@ -0,0 +1,4 @@
#!/usr/bin/env bash
set -euo pipefail
# Build the orchestrator crate
exec cargo build -p orchestrator

7  .mise/tasks/build/release  Executable file
@@ -0,0 +1,7 @@
#!/usr/bin/env bash
set -euo pipefail
# Build all crates in the workspace (release)
export RUSTFLAGS=${RUSTFLAGS:-}
export RUST_LOG=${RUST_LOG:-info}
command -v cargo >/dev/null 2>&1 || { echo "cargo is required" >&2; exit 127; }
exec cargo build --workspace --release

24  .mise/tasks/build/runner-cross  Executable file
@@ -0,0 +1,24 @@
#!/usr/bin/env bash
set -euo pipefail
# Cross-build the workflow-runner for Linux and Illumos targets.
# Requires: cross (https://github.com/cross-rs/cross)
# Outputs:
# - target/x86_64-unknown-linux-gnu/release/solstice-runner
# - target/x86_64-unknown-illumos/release/solstice-runner
ROOT_DIR=$(cd "$(dirname "$0")/../../.." && pwd)
cd "$ROOT_DIR"
if ! command -v cross >/dev/null 2>&1; then
  echo "cross is required. Install with: cargo install cross" >&2
  exit 127
fi
# Build Linux runner
cross build -p workflow-runner --target x86_64-unknown-linux-gnu --release
# Build Illumos runner
cross build -p workflow-runner --target x86_64-unknown-illumos --release
echo "Built runner binaries:" >&2
ls -l "${ROOT_DIR}/target/x86_64-unknown-linux-gnu/release/solstice-runner" 2>/dev/null || true
ls -l "${ROOT_DIR}/target/x86_64-unknown-illumos/release/solstice-runner" 2>/dev/null || true

@@ -0,0 +1,4 @@
#!/usr/bin/env bash
set -euo pipefail
# Build the workflow-runner crate
exec cargo build -p workflow-runner

102  .mise/tasks/ci/local  Executable file
@@ -0,0 +1,102 @@
#!/usr/bin/env bash
set -euo pipefail
# End-to-end local run:
# - Start RabbitMQ via docker compose
# - Build workspace
# - Run orchestrator with local defaults
# - Enqueue a job for the current repo/commit
# - Stream logs briefly, then clean up
ROOT_DIR=$(cd "$(dirname "$0")/../../.." && pwd)
cd "$ROOT_DIR"
command -v cargo >/dev/null 2>&1 || { echo "cargo is required" >&2; exit 127; }
# Defaults
export RUST_LOG=${RUST_LOG:-info}
export ORCH_CONFIG=${ORCH_CONFIG:-examples/orchestrator-image-map.yaml}
export AMQP_URL=${AMQP_URL:-amqp://127.0.0.1:5672/%2f}
export AMQP_EXCHANGE=${AMQP_EXCHANGE:-solstice.jobs}
export AMQP_QUEUE=${AMQP_QUEUE:-solstice.jobs.v1}
export AMQP_ROUTING_KEY=${AMQP_ROUTING_KEY:-jobrequest.v1}
export AMQP_PREFETCH=${AMQP_PREFETCH:-2}
export GRPC_ADDR=${GRPC_ADDR:-0.0.0.0:50051}
# Will be used by orchestrator cloud-init to let runner call back
export ORCH_CONTACT_ADDR=${ORCH_CONTACT_ADDR:-$GRPC_ADDR}
# Bring up deps
"$ROOT_DIR/.mise/tasks/dev/up"
# Ensure cleanup
ORCH_PID=""
SERVE_PID=""
cleanup() {
  set +e
  if [[ -n "$ORCH_PID" ]] && kill -0 "$ORCH_PID" 2>/dev/null; then
    echo "Stopping orchestrator (pid=$ORCH_PID)" >&2
    kill "$ORCH_PID" 2>/dev/null || true
    # give it a moment
    sleep 1
    kill -9 "$ORCH_PID" 2>/dev/null || true
  fi
  if [[ -n "$SERVE_PID" ]] && kill -0 "$SERVE_PID" 2>/dev/null; then
    echo "Stopping runner server (pid=$SERVE_PID)" >&2
    kill "$SERVE_PID" 2>/dev/null || true
  fi
  "$ROOT_DIR/.mise/tasks/dev/down" || true
}
trap cleanup EXIT INT TERM
# Build required crates (debug)
cargo build -p orchestrator -p forge-integration -p workflow-runner
# Start static server to host the runner for VMs
SOL_RUNNER_PORT=${SOL_RUNNER_PORT:-8089}
# Detect a likely host IP for default libvirt network (virbr0), else fallback to 127.0.0.1
if command -v ip >/dev/null 2>&1 && ip addr show virbr0 >/dev/null 2>&1; then
  HOST_IP=$(ip -o -4 addr show virbr0 | awk '{print $4}' | cut -d/ -f1 | head -n1)
else
  HOST_IP=${HOST_IP_OVERRIDE:-127.0.0.1}
fi
# Orchestrator contact address for runner to stream logs back
export ORCH_CONTACT_ADDR=${ORCH_CONTACT_ADDR:-$HOST_IP:50051}
# Runner URL used by cloud-init bootstrap
export SOLSTICE_RUNNER_URL=${SOLSTICE_RUNNER_URL:-http://$HOST_IP:$SOL_RUNNER_PORT/solstice-runner}
(
  exec "$ROOT_DIR/.mise/tasks/run/runner-serve" >/dev/null 2>&1
) &
SERVE_PID=$!
# Start orchestrator in background
LOGFILE=${SOL_ORCH_LOG:-"$ROOT_DIR/target/orchestrator.local.log"}
echo "Starting orchestrator... (logs: $LOGFILE)" >&2
(
  exec "$ROOT_DIR/.mise/tasks/run/orchestrator" >"$LOGFILE" 2>&1
) &
ORCH_PID=$!
echo "Runner URL: $SOLSTICE_RUNNER_URL" >&2
echo "Orchestrator contact: $ORCH_CONTACT_ADDR" >&2
# Wait for it to start
sleep 3
# Enqueue a job for this repo/commit
"$ROOT_DIR/.mise/tasks/run/forge-enqueue"
# Tail logs for a short time (or override with SOL_TAIL_SECS)
TAIL_SECS=${SOL_TAIL_SECS:-15}
echo "Tailing orchestrator logs for ${TAIL_SECS}s..." >&2
if command -v timeout >/dev/null 2>&1; then
  (timeout "${TAIL_SECS}s" tail -f "$LOGFILE" || true) 2>/dev/null
elif command -v gtimeout >/dev/null 2>&1; then
  (gtimeout "${TAIL_SECS}s" tail -f "$LOGFILE" || true) 2>/dev/null
else
  # Fallback: background tail and sleep
  tail -f "$LOGFILE" &
  TAIL_PID=$!
  sleep "$TAIL_SECS" || true
  kill "$TAIL_PID" 2>/dev/null || true
fi
echo "Done. Artifacts/logs in $LOGFILE. Use RUST_LOG=debug for more detail." >&2

116  .mise/tasks/ci/vm-build  Executable file
@@ -0,0 +1,116 @@
#!/usr/bin/env bash
set -euo pipefail
# Build this repository inside Linux and Illumos VMs using the Solstice dev loop.
# - Cross-build runner for both targets
# - Serve both runner binaries locally
# - Start orchestrator and enqueue two jobs (ubuntu-22.04, illumos-latest)
# - Each VM will download the appropriate runner and execute .solstice/job.sh from this repo
ROOT_DIR=$(cd "$(dirname "$0")/../../.." && pwd)
cd "$ROOT_DIR"
# Requirements check
command -v cargo >/dev/null 2>&1 || { echo "cargo is required" >&2; exit 127; }
command -v cross >/dev/null 2>&1 || { echo "cross is required (cargo install cross)" >&2; exit 127; }
# Defaults
export RUST_LOG=${RUST_LOG:-info}
export ORCH_CONFIG=${ORCH_CONFIG:-examples/orchestrator-image-map.yaml}
export AMQP_URL=${AMQP_URL:-amqp://127.0.0.1:5672/%2f}
export AMQP_EXCHANGE=${AMQP_EXCHANGE:-solstice.jobs}
export AMQP_QUEUE=${AMQP_QUEUE:-solstice.jobs.v1}
export AMQP_ROUTING_KEY=${AMQP_ROUTING_KEY:-jobrequest.v1}
export AMQP_PREFETCH=${AMQP_PREFETCH:-2}
export GRPC_ADDR=${GRPC_ADDR:-0.0.0.0:50051}
# Detect host IP for guest access (virbr0 first)
if command -v ip >/dev/null 2>&1 && ip addr show virbr0 >/dev/null 2>&1; then
  HOST_IP=$(ip -o -4 addr show virbr0 | awk '{print $4}' | cut -d/ -f1 | head -n1)
else
  HOST_IP=${HOST_IP_OVERRIDE:-127.0.0.1}
fi
# Orchestrator contact address for gRPC log streaming from guests
export ORCH_CONTACT_ADDR=${ORCH_CONTACT_ADDR:-$HOST_IP:50051}
# Bring up RabbitMQ
"$ROOT_DIR/.mise/tasks/dev/up"
# Ensure cleanup
ORCH_PID=""
SERVE_PID=""
cleanup() {
  set +e
  if [[ -n "$ORCH_PID" ]] && kill -0 "$ORCH_PID" 2>/dev/null; then
    echo "Stopping orchestrator (pid=$ORCH_PID)" >&2
    kill "$ORCH_PID" 2>/dev/null || true
    sleep 1
    kill -9 "$ORCH_PID" 2>/dev/null || true
  fi
  if [[ -n "$SERVE_PID" ]] && kill -0 "$SERVE_PID" 2>/dev/null; then
    echo "Stopping runner servers (pid=$SERVE_PID)" >&2
    kill "$SERVE_PID" 2>/dev/null || true
  fi
  "$ROOT_DIR/.mise/tasks/dev/down" || true
}
trap cleanup EXIT INT TERM
# Cross-build runner for both targets
"$ROOT_DIR/.mise/tasks/build/runner-cross"
# Start multi-runner servers (background)
SOL_RUNNER_PORT_LINUX=${SOL_RUNNER_PORT_LINUX:-8090}
SOL_RUNNER_PORT_ILLUMOS=${SOL_RUNNER_PORT_ILLUMOS:-8091}
(
  exec "$ROOT_DIR/.mise/tasks/run/runner-serve-multi" >/dev/null 2>&1
) &
SERVE_PID=$!
# Compose URLs for both OSes and export for orchestrator cloud-init consumption
LINUX_URL="http://$HOST_IP:$SOL_RUNNER_PORT_LINUX/solstice-runner-linux"
ILLUMOS_URL="http://$HOST_IP:$SOL_RUNNER_PORT_ILLUMOS/solstice-runner-illumos"
export SOLSTICE_RUNNER_URLS="$LINUX_URL $ILLUMOS_URL"
# Start orchestrator in background (inherits env including SOLSTICE_RUNNER_URLS/ORCH_CONTACT_ADDR)
LOGFILE=${SOL_ORCH_LOG:-"$ROOT_DIR/target/orchestrator.vm-build.log"}
echo "Starting orchestrator... (logs: $LOGFILE)" >&2
(
  exec "$ROOT_DIR/.mise/tasks/run/orchestrator" >"$LOGFILE" 2>&1
) &
ORCH_PID=$!
echo "Runner URLs:" >&2
echo " Linux: $LINUX_URL" >&2
echo " Illumos: $ILLUMOS_URL" >&2
echo "Orchestrator contact: $ORCH_CONTACT_ADDR" >&2
# Give it a moment to start
sleep 3
# Enqueue two jobs: one Linux, one Illumos
SOL_REPO_URL=${SOL_REPO_URL:-$(git -C "$ROOT_DIR" remote get-url origin 2>/dev/null || true)}
SOL_COMMIT_SHA=${SOL_COMMIT_SHA:-$(git -C "$ROOT_DIR" rev-parse HEAD 2>/dev/null || true)}
if [[ -z "${SOL_REPO_URL}" || -z "${SOL_COMMIT_SHA}" ]]; then
  echo "Warning: could not detect repo URL/commit; forge-enqueue will attempt autodetect" >&2
fi
# Linux (Ubuntu image in example config)
SOL_RUNS_ON=ubuntu-22.04 "$ROOT_DIR/.mise/tasks/run/forge-enqueue"
# Illumos (default label / alias)
SOL_RUNS_ON=illumos-latest "$ROOT_DIR/.mise/tasks/run/forge-enqueue"
# Tail orchestrator logs for a while
TAIL_SECS=${SOL_TAIL_SECS:-30}
echo "Tailing orchestrator logs for ${TAIL_SECS}s..." >&2
if command -v timeout >/dev/null 2>&1; then
  (timeout "${TAIL_SECS}s" tail -f "$LOGFILE" || true) 2>/dev/null
elif command -v gtimeout >/dev/null 2>&1; then
  (gtimeout "${TAIL_SECS}s" tail -f "$LOGFILE" || true) 2>/dev/null
else
  tail -f "$LOGFILE" &
  TAIL_PID=$!
  sleep "$TAIL_SECS" || true
  kill "$TAIL_PID" 2>/dev/null || true
fi
echo "Done. Logs at $LOGFILE" >&2

12  .mise/tasks/dev/down  Executable file
@@ -0,0 +1,12 @@
#!/usr/bin/env bash
set -euo pipefail
# Stop local development dependencies (RabbitMQ)
if command -v docker >/dev/null 2>&1; then
  if command -v docker-compose >/dev/null 2>&1; then
    exec docker-compose down
  else
    exec docker compose down
  fi
else
  echo "Docker not found; nothing to do." >&2
fi

16  .mise/tasks/dev/up  Executable file
@@ -0,0 +1,16 @@
#!/usr/bin/env bash
set -euo pipefail
# Start local development dependencies (RabbitMQ) via docker compose
if command -v docker >/dev/null 2>&1; then
  if command -v docker-compose >/dev/null 2>&1; then
    exec docker-compose up -d rabbitmq
  else
    exec docker compose up -d rabbitmq
  fi
elif command -v podman >/dev/null 2>&1; then
  echo "Podman detected, but this project uses a docker-compose file; use Docker or translate it for podman-compose" >&2
  exit 1
else
  echo "Neither Docker nor Podman found. Install Docker to run dependencies." >&2
  exit 127
fi

30  .mise/tasks/run/forge-enqueue  Executable file
@@ -0,0 +1,30 @@
#!/usr/bin/env bash
set -euo pipefail
# Enqueue a sample job via the forge-integration crate.
# Detect repo URL and commit from the current git checkout unless overridden.
command -v cargo >/dev/null 2>&1 || { echo "cargo is required" >&2; exit 127; }
command -v git >/dev/null 2>&1 || { echo "git is required to autodetect repo and commit" >&2; exit 127; }
export RUST_LOG=${RUST_LOG:-info}
# AMQP defaults for local dev
export AMQP_URL=${AMQP_URL:-amqp://127.0.0.1:5672/%2f}
export AMQP_EXCHANGE=${AMQP_EXCHANGE:-solstice.jobs}
export AMQP_QUEUE=${AMQP_QUEUE:-solstice.jobs.v1}
export AMQP_ROUTING_KEY=${AMQP_ROUTING_KEY:-jobrequest.v1}
REPO_URL=${SOL_REPO_URL:-$(git remote get-url origin 2>/dev/null || true)}
COMMIT_SHA=${SOL_COMMIT_SHA:-$(git rev-parse HEAD 2>/dev/null || true)}
RUNS_ON=${SOL_RUNS_ON:-}
if [[ -z "${REPO_URL}" || -z "${COMMIT_SHA}" ]]; then
  echo "Failed to detect repo URL and/or commit. Set SOL_REPO_URL and SOL_COMMIT_SHA explicitly." >&2
  exit 2
fi
args=(enqueue --repo-url "${REPO_URL}" --commit-sha "${COMMIT_SHA}")
if [[ -n "${RUNS_ON}" ]]; then
  args+=(--runs-on "${RUNS_ON}")
fi
exec cargo run -p forge-integration -- "${args[@]}"

15  .mise/tasks/run/orchestrator  Executable file
@@ -0,0 +1,15 @@
#!/usr/bin/env bash
set -euo pipefail
# Run the Solstice Orchestrator with sensible local defaults
export RUST_LOG=${RUST_LOG:-info}
export ORCH_CONFIG=${ORCH_CONFIG:-examples/orchestrator-image-map.yaml}
export AMQP_URL=${AMQP_URL:-amqp://127.0.0.1:5672/%2f}
export AMQP_EXCHANGE=${AMQP_EXCHANGE:-solstice.jobs}
export AMQP_QUEUE=${AMQP_QUEUE:-solstice.jobs.v1}
export AMQP_ROUTING_KEY=${AMQP_ROUTING_KEY:-jobrequest.v1}
export AMQP_PREFETCH=${AMQP_PREFETCH:-2}
export GRPC_ADDR=${GRPC_ADDR:-0.0.0.0:50051}
# For Linux + libvirt users, customize via LIBVIRT_URI and LIBVIRT_NETWORK
exec cargo run -p orchestrator -- \
  --config "$ORCH_CONFIG" \
  --grpc-addr "$GRPC_ADDR"

44  .mise/tasks/run/runner-serve  Executable file
@@ -0,0 +1,44 @@
#!/usr/bin/env bash
set -euo pipefail
# Serve the built workflow-runner binary over HTTP for local VMs to download.
# This is intended for local development only.
#
# Env:
# SOL_RUNNER_PORT - port to bind (default: 8089)
# SOL_RUNNER_BIND - bind address (default: 0.0.0.0)
# SOL_RUNNER_BINARY - path to runner binary (default: target/debug/solstice-runner)
#
# The file will be exposed at http://HOST:PORT/solstice-runner
ROOT_DIR=$(cd "$(dirname "$0")/../../.." && pwd)
cd "$ROOT_DIR"
command -v cargo >/dev/null 2>&1 || { echo "cargo is required" >&2; exit 127; }
PYTHON=${PYTHON:-python3}
if ! command -v "$PYTHON" >/dev/null 2>&1; then
  echo "python3 is required to run a simple HTTP server" >&2
  exit 127
fi
# Build runner if not present
BINARY_DEFAULT="$ROOT_DIR/target/debug/solstice-runner"
export SOL_RUNNER_BINARY=${SOL_RUNNER_BINARY:-$BINARY_DEFAULT}
if [[ ! -x "$SOL_RUNNER_BINARY" ]]; then
  cargo build -p workflow-runner >/dev/null
  if [[ ! -x "$SOL_RUNNER_BINARY" ]]; then
    echo "runner binary not found at $SOL_RUNNER_BINARY after build" >&2
    exit 1
  fi
fi
# Prepare serve dir under target
SERVE_DIR="$ROOT_DIR/target/runner-serve"
mkdir -p "$SERVE_DIR"
cp -f "$SOL_RUNNER_BINARY" "$SERVE_DIR/solstice-runner"
chmod +x "$SERVE_DIR/solstice-runner" || true
PORT=${SOL_RUNNER_PORT:-8089}
BIND=${SOL_RUNNER_BIND:-0.0.0.0}
echo "Serving solstice-runner from $SERVE_DIR on http://$BIND:$PORT (Ctrl-C to stop)" >&2
exec "$PYTHON" -m http.server "$PORT" --bind "$BIND" --directory "$SERVE_DIR"

@@ -0,0 +1,53 @@
#!/usr/bin/env bash
set -euo pipefail
# Serve cross-built workflow-runner binaries for Linux and Illumos on two ports.
# Intended for local development only.
# Env:
# SOL_RUNNER_BIND - bind address (default: 0.0.0.0)
# SOL_RUNNER_PORT_LINUX - port for Linux runner (default: 8090)
# SOL_RUNNER_PORT_ILLUMOS - port for Illumos runner (default: 8091)
# PYTHON - python interpreter (default: python3)
#
# Exposes:
# http://HOST:PORT/solstice-runner-linux
# http://HOST:PORT/solstice-runner-illumos
ROOT_DIR=$(cd "$(dirname "$0")/../../.." && pwd)
cd "$ROOT_DIR"
PYTHON=${PYTHON:-python3}
command -v "$PYTHON" >/dev/null 2>&1 || { echo "python3 is required" >&2; exit 127; }
# Ensure cross-built artifacts exist
if [[ ! -x "$ROOT_DIR/target/x86_64-unknown-linux-gnu/release/solstice-runner" || ! -x "$ROOT_DIR/target/x86_64-unknown-illumos/release/solstice-runner" ]]; then
  echo "Cross-built runner binaries not found; building with cross..." >&2
  "$ROOT_DIR/.mise/tasks/build/runner-cross"
fi
SERVE_DIR="$ROOT_DIR/target/runner-serve-multi"
rm -rf "$SERVE_DIR"
mkdir -p "$SERVE_DIR"
cp -f "$ROOT_DIR/target/x86_64-unknown-linux-gnu/release/solstice-runner" "$SERVE_DIR/solstice-runner-linux"
cp -f "$ROOT_DIR/target/x86_64-unknown-illumos/release/solstice-runner" "$SERVE_DIR/solstice-runner-illumos"
chmod +x "$SERVE_DIR/solstice-runner-linux" "$SERVE_DIR/solstice-runner-illumos" || true
BIND=${SOL_RUNNER_BIND:-0.0.0.0}
PORT_LIN=${SOL_RUNNER_PORT_LINUX:-8090}
PORT_ILL=${SOL_RUNNER_PORT_ILLUMOS:-8091}
echo "Serving from $SERVE_DIR" >&2
set +e
"$PYTHON" -m http.server "$PORT_LIN" --bind "$BIND" --directory "$SERVE_DIR" &
PID_LIN=$!
"$PYTHON" -m http.server "$PORT_ILL" --bind "$BIND" --directory "$SERVE_DIR" &
PID_ILL=$!
set -e
trap 'kill $PID_LIN $PID_ILL 2>/dev/null || true' INT TERM EXIT
echo "Linux runner: http://$BIND:$PORT_LIN/solstice-runner-linux" >&2
echo "Illumos runner: http://$BIND:$PORT_ILL/solstice-runner-illumos" >&2
# Wait on background servers
wait

7  .mise/tasks/test/all  Executable file
@@ -0,0 +1,7 @@
#!/usr/bin/env bash
set -euo pipefail
# Run all tests in the workspace
export RUSTFLAGS=${RUSTFLAGS:-}
export RUST_LOG=${RUST_LOG:-info}
command -v cargo >/dev/null 2>&1 || { echo "cargo is required" >&2; exit 127; }
exec cargo test --workspace

87  .solstice/job.sh  Executable file
@@ -0,0 +1,87 @@
#!/usr/bin/env bash
set -euo pipefail
# Solstice CI VM job script: build this repository inside the guest.
# The runner clones the repo at the requested commit and executes this script.
# It attempts to ensure required tools (git, curl, protobuf compiler, Rust) exist.
log() { printf "[job] %s\n" "$*" >&2; }

detect_pm() {
  if command -v apt-get >/dev/null 2>&1; then echo apt; return; fi
  if command -v dnf >/dev/null 2>&1; then echo dnf; return; fi
  if command -v yum >/dev/null 2>&1; then echo yum; return; fi
  if command -v zypper >/dev/null 2>&1; then echo zypper; return; fi
  if command -v apk >/dev/null 2>&1; then echo apk; return; fi
  if command -v pacman >/dev/null 2>&1; then echo pacman; return; fi
  if command -v pkg >/dev/null 2>&1; then echo pkg; return; fi
  if command -v pkgin >/dev/null 2>&1; then echo pkgin; return; fi
  echo none
}

install_linux() {
  PM=$(detect_pm)
  case "$PM" in
    apt)
      sudo -n true 2>/dev/null || true
      sudo apt-get update -y || apt-get update -y || true
      sudo apt-get install -y --no-install-recommends curl ca-certificates git build-essential pkg-config libssl-dev protobuf-compiler || true
      ;;
    dnf)
      sudo dnf install -y curl ca-certificates git gcc gcc-c++ make pkgconf-pkg-config openssl-devel protobuf-compiler || true
      ;;
    yum)
      sudo yum install -y curl ca-certificates git gcc gcc-c++ make pkgconfig openssl-devel protobuf-compiler || true
      ;;
    zypper)
      sudo zypper --non-interactive install curl ca-certificates git gcc gcc-c++ make pkg-config libopenssl-devel protobuf || true
      ;;
    apk)
      sudo apk add --no-cache curl ca-certificates git build-base pkgconfig openssl-dev protoc || true
      ;;
    pacman)
      sudo pacman -Sy --noconfirm curl ca-certificates git base-devel pkgconf openssl protobuf || true
      ;;
    *)
      log "unknown package manager ($PM); skipping linux deps install"
      ;;
  esac
}

install_illumos() {
  if command -v pkg >/dev/null 2>&1; then
    # OpenIndiana IPS packages (best-effort)
    sudo pkg refresh || true
    sudo pkg install -v developer/build/gnu-make developer/gcc-13 git developer/protobuf || true
  elif command -v pkgin >/dev/null 2>&1; then
    sudo pkgin -y install git gcc gmake protobuf || true
  else
    log "no known package manager found on illumos"
  fi
}

ensure_rust() {
  if command -v cargo >/dev/null 2>&1; then return 0; fi
  log "installing Rust toolchain with rustup"
  curl -fsSL https://sh.rustup.rs | sh -s -- -y
  # shellcheck disable=SC1091
  source "$HOME/.cargo/env"
}

main() {
  OS=$(uname -s 2>/dev/null || echo unknown)
  case "$OS" in
    Linux) install_linux ;;
    SunOS) install_illumos ;;
  esac
  ensure_rust
  # Ensure protoc available in PATH
  if ! command -v protoc >/dev/null 2>&1; then
    log "WARNING: protoc not found; prost/tonic build may fail"
  fi
  # Build a representative subset to avoid known sea-orm-cli issues in full workspace builds
  log "building workflow-runner"
  cargo build -p workflow-runner --release || cargo build -p workflow-runner
  log "done"
}

main "$@"

@@ -68,6 +68,7 @@ Hardware hints for Linux/local VM testing:
2) Start RabbitMQ locally
- docker compose up -d rabbitmq
- Management UI: http://localhost:15672 (guest/guest)
- Or with mise: `mise run dev:up`
3) Build everything
- cargo build --workspace

@@ -231,6 +231,9 @@ fn parse_capacity_map(s: Option<&str>) -> HashMap<String, usize> {
}
fn make_cloud_init_userdata(repo_url: &str, commit_sha: &str, request_id: uuid::Uuid, orch_addr: &str) -> Vec<u8> {
// Allow local dev to inject one or more runner URLs that the VM can fetch.
let runner_url = std::env::var("SOLSTICE_RUNNER_URL").unwrap_or_default();
let runner_urls = std::env::var("SOLSTICE_RUNNER_URLS").unwrap_or_default();
let s = format!(r#"#cloud-config
write_files:
- path: /etc/solstice/job.yaml
@@ -247,16 +250,52 @@ write_files:
set -eu
echo "Solstice: bootstrapping workflow runner for {sha}" | tee /dev/console
RUNNER="/usr/local/bin/solstice-runner"
# Runner URL(s) provided by orchestrator (local dev) if set
export SOLSTICE_RUNNER_URL='{runner_url}'
export SOLSTICE_RUNNER_URLS='{runner_urls}'
if [ ! -x "$RUNNER" ]; then
mkdir -p /usr/local/bin
if command -v curl >/dev/null 2>&1 && [ -n "$SOLSTICE_RUNNER_URL" ]; then
curl -fSL "$SOLSTICE_RUNNER_URL" -o "$RUNNER" || true
elif command -v wget >/dev/null 2>&1 && [ -n "$SOLSTICE_RUNNER_URL" ]; then
wget -O "$RUNNER" "$SOLSTICE_RUNNER_URL" || true
# Helper to download from a URL to $RUNNER
fetch_runner() {{
U="$1"
[ -z "$U" ] && return 1
if command -v curl >/dev/null 2>&1; then
curl -fSL "$U" -o "$RUNNER" || return 1
elif command -v wget >/dev/null 2>&1; then
wget -O "$RUNNER" "$U" || return 1
else
echo 'runner URL not provided or curl/wget missing' | tee /dev/console
return 1
fi
chmod +x "$RUNNER" 2>/dev/null || true
return 0
}}
OS=$(uname -s 2>/dev/null || echo unknown)
# Prefer single URL if provided
if [ -n "$SOLSTICE_RUNNER_URL" ]; then
fetch_runner "$SOLSTICE_RUNNER_URL" || true
fi
# If still missing, iterate URLs with a basic OS-based preference
if [ ! -x "$RUNNER" ] && [ -n "$SOLSTICE_RUNNER_URLS" ]; then
for U in $SOLSTICE_RUNNER_URLS; do
case "$OS" in
Linux)
echo "$U" | grep -qi linux || continue ;;
SunOS)
echo "$U" | grep -qi illumos || continue ;;
*) ;;
esac
fetch_runner "$U" && break || true
done
fi
# As a final fallback, try all URLs regardless of OS tag
if [ ! -x "$RUNNER" ] && [ -n "$SOLSTICE_RUNNER_URLS" ]; then
for U in $SOLSTICE_RUNNER_URLS; do
fetch_runner "$U" && break || true
done
fi
if [ ! -x "$RUNNER" ]; then
echo 'runner URL(s) not provided or curl/wget missing' | tee /dev/console
fi
fi
export SOLSTICE_REPO_URL='{repo}'
export SOLSTICE_COMMIT_SHA='{sha}'
@@ -272,7 +311,7 @@ write_files:
(command -v poweroff >/dev/null 2>&1 && poweroff) || (command -v shutdown >/dev/null 2>&1 && shutdown -y -i5 -g0) || true
runcmd:
- [ /usr/local/bin/solstice-bootstrap.sh ]
"#, repo = repo_url, sha = commit_sha, req_id = request_id, orch_addr = orch_addr);
"#, repo = repo_url, sha = commit_sha, req_id = request_id, orch_addr = orch_addr, runner_url = runner_url, runner_urls = runner_urls);
s.into_bytes()
}
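The OS-based URL preference that the generated bootstrap applies can be illustrated as a standalone POSIX-sh sketch. `pick_runner_url` is a name invented here for clarity; the real cloud-init script inlines equivalent logic and also retries all URLs as a final fallback:

```shell
#!/bin/sh
# Sketch: given `uname -s` output and a space-separated URL list, prefer the
# URL whose name matches the guest OS (Linux -> "linux", SunOS -> "illumos"),
# falling back to the first URL when nothing matches.
pick_runner_url() {
  os="$1"; urls="$2"; first=""
  for u in $urls; do
    [ -z "$first" ] && first="$u"
    case "$os" in
      Linux) echo "$u" | grep -qi linux && { echo "$u"; return 0; } ;;
      SunOS) echo "$u" | grep -qi illumos && { echo "$u"; return 0; } ;;
    esac
  done
  echo "$first"
}

pick_runner_url SunOS "http://host:8090/solstice-runner-linux http://host:8091/solstice-runner-illumos"
# prints http://host:8091/solstice-runner-illumos
```

On an unrecognized OS the helper simply returns the first URL, mirroring the bootstrap's "try everything" fallback.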

@@ -0,0 +1,41 @@
### VM build scripts and cross-built runner serving
Summary
- Added mise tasks to cross-build the workflow runner and run builds inside both Linux and Illumos VMs.
- Enhanced orchestrator cloud-init to support multiple runner URLs (SOLSTICE_RUNNER_URLS) and auto-pick by OS.
New tasks
- build:runner-cross — cross-builds workflow-runner for:
- x86_64-unknown-linux-gnu (release)
- x86_64-unknown-illumos (release)
- run:runner-serve-multi — serves both binaries via simple HTTP servers:
- http://HOST:8090/solstice-runner-linux
- http://HOST:8091/solstice-runner-illumos
- ci:vm-build — end-to-end:
- Brings up RabbitMQ
- Cross-builds the runner
- Serves both binaries and exports SOLSTICE_RUNNER_URLS
- Starts the orchestrator with ORCH_CONTACT_ADDR set so VMs can stream logs back
- Enqueues two jobs for this repo/commit: ubuntu-22.04 and illumos-latest
- Tails logs briefly and cleans up
Guest job script
- Added .solstice/job.sh (executed by the runner in the VM) to:
- Best-effort install basic toolchain (curl/git/protoc/Rust) depending on OS
- Build the workflow-runner crate (release preferred)
- Avoids full workspace build to sidestep known sea-orm-cli issues during development
Usage
- End-to-end run across both VMs:
- mise run ci:vm-build
- Individual steps:
- mise run build:runner-cross
- mise run run:runner-serve-multi
- mise run run:orchestrator (in another terminal)
- SOL_RUNS_ON=ubuntu-22.04 mise run run:forge-enqueue
- SOL_RUNS_ON=illumos-latest mise run run:forge-enqueue
Notes
- Ensure the example orchestrator image map is configured and images are accessible. An Ubuntu entry is provided; the illumos entry points to the OpenIndiana Hipster cloud image.
- The runner uses system git and /bin/sh in the VM; ensure they exist in the base images.
- Known issue: a transitive sea-orm-cli build failure can break full workspace builds; the tasks avoid building that crate in the fast loop.
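The wiring ci:vm-build performs before starting the orchestrator can be condensed to a few lines. A sketch using the tasks' default ports (the host address is a placeholder; the task detects it from virbr0):

```shell
#!/usr/bin/env bash
# Sketch: compose and export the per-OS runner URLs the way ci:vm-build does.
# HOST_IP is illustrative; the task derives it from the libvirt bridge.
set -euo pipefail
HOST_IP=${HOST_IP:-127.0.0.1}
LINUX_URL="http://$HOST_IP:${SOL_RUNNER_PORT_LINUX:-8090}/solstice-runner-linux"
ILLUMOS_URL="http://$HOST_IP:${SOL_RUNNER_PORT_ILLUMOS:-8091}/solstice-runner-illumos"
export SOLSTICE_RUNNER_URLS="$LINUX_URL $ILLUMOS_URL"
export ORCH_CONTACT_ADDR=${ORCH_CONTACT_ADDR:-"$HOST_IP:50051"}
echo "$SOLSTICE_RUNNER_URLS"
```

The orchestrator inherits `SOLSTICE_RUNNER_URLS` and `ORCH_CONTACT_ADDR` from its environment and injects them into each VM's cloud-init user data.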

10  fnox.toml  Normal file
@@ -0,0 +1,10 @@
[providers.age]
type = "age"
recipients = ["age1u6rk762cysclfyvf0ysceee0a93hddgsp39wjrw9dqymyzd4w5vs3wfrve"]

[providers.onepass]
type = "1password"
vault = "Development"

[secrets]
OP_SERVICE_ACCOUNT_TOKEN= { provider = "age", value = "YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBKZnh1bW5rRzV5N0FNcTgrOWVIS1VSSmtqUmZ6b0k1RHJXR0Qyek14cEdBClZzdlY3Q3VWSFZydnJqMWNLSTc5V3BqUFZaUkRMSnFib3JUZHhxU2JpQXcKLS0tIGJvNnRrTHEvYTIxaEFMVFNxVkZlVTlYUEV5TmFCK0ExdDBZNFRpSkNUeUkKPKHB+as1NIejSD81EtZYsj2csqJ3hd9PHjQU39PBr5PZZD0efWeJmU67/Esen5xFfBJ9V0OY9Ola0hKZqMux4MAe+7DHuR7FKhaT9Ttfv25HA4QN/F8BLckA6kX42m0jcF8+IQBasiVmaLd+LtZXd+fNZm/S36pHBFvyZwbuCiW7ZSzi2cl27IbIxbo3bUr07p0JntqF+LOKt8Qu67il5C4T3eslaIs9QkFzrjXuVHsmrKRv+/b7LvSK8aCRJIxtDkXgppcH5CHPktIWuTeixwf1znW7UC5gm8w1I5FWQP7jRenjBrR3iV3erbSQPJk3RDAAXKTIptKVoVgikv0EMjI9Bn1K9Z8HSalc6gjyvZihOOsqnvLHsI3nFheuVVwl+G2p/lHwTrb74z+TWKZBrsR3jDlR56jwh4Au6nnv3IPa3lvd3nQ7SL6MRQfTknqyT0hDaH/2+rFv8hHA4dwFhV4nLrbfse3U1jsyLqE8EL5nLAFKOwaJfPfGnadmsaAq9xtuOffKHVcX3mBH9cKv6yvLXJldUZc+v3AFAu0N/KKdyfWe6I0q37GC1/0gJWymH5uJ59cYmSR3xJ/6mfwKg2y67m9se1o2q6qWzUe7ouuN3PNKM6NDKuAg7TUIcajZlylTyMIPUaWJR+RiZnbzAQB1BnMXQ0eAYcElfpOFP5baVc7v8nOZXycvBFXvY+IXYtN1FcvlxSCFv/icD3q4mMtWhtTcoEYpi8bmf40SEcFHXT4mM+gp57Fx6TakpwA9+r/avQoQwyi6Z3HZc6BaCUW3NMrDV39igbuNcOOF4rSE3ppZetkniZFq8apdCbj7Sy4yHp8zkczv7eJGaWHwOTjdcA2m3dOGBfraH6sYrddtvoLF7NPQQYprLsDTbp4j1sHwz1ZtwdH76cz5JmzaluHxUy8XirsmHX+Hw+GUGe/uIy0IYnQrjJuiKEIid1eptoTqfCk9olXM5lxbR50YZlgbtNxcH9E0gLm3TbmQ7quxfTS3f90RBaWPzz65DC4iFo9OBxj6dCK5ZYOQZrwK1OBuwdNlYoE+haZg6Ct0/ZcAolQQtN1AEGDfXIwocfe8IPcyEhHCKLTj6GBt4ayxD7Ajo/ZOktyLKVcNytA1vF44WjVBP3StZE0I+QDpupDJR19KHO03t9Sapq9GdpcGWA9IbO8=" }

4  mise.toml  Normal file
@@ -0,0 +1,4 @@
[tools]
age = "latest"
fnox = "latest"
python = "latest"