# Solstice CI — Production deployment with Podman Compose + Traefik

This stack deploys Solstice CI services behind Traefik with automatic TLS certificates from Let's Encrypt. System services run on Chainguard images (MinIO on upstream images), and the Rust services are built in multi-stage Containerfiles that rely on build-time cache mounts (no sccache) for fast, reproducible builds.

## Prerequisites

- Podman 4.9+ with podman-compose compatibility (`podman compose`)
- Public DNS records for subdomains pointing to the host running this stack
- Ports 80 and 443 open to the Internet
- An email address for ACME registration

## DNS

Create A/AAAA records for the following hostnames under your base domain (no environment in the hostname; environments are separated logically via databases, vhosts, and buckets):

- traefik.svc.DOMAIN
- api.svc.DOMAIN
- grpc.svc.DOMAIN
- runner.svc.DOMAIN
- forge.svc.DOMAIN (Forge/Forgejo webhooks)
- github.svc.DOMAIN (GitHub App/webhooks)
- minio.svc.DOMAIN (console UI)
- s3.svc.DOMAIN (S3 API, TLS via TCP SNI)
- mq.svc.DOMAIN (RabbitMQ management UI; AMQP remains internal)

## Quick start

1. Copy the env template and edit secrets and settings:

   ```shell
   cp .env.sample .env
   # Edit .env (ENV=staging|prod, DOMAIN, passwords, ACME email)
   ```

2. (Optional) Use the Let's Encrypt staging CA to test issuance without hitting rate limits by setting in `.env`:

   ```
   TRAEFIK_ACME_CASERVER=https://acme-staging-v02.api.letsencrypt.org/directory
   ```

3. Bring up the stack:

   ```shell
   podman compose -f compose.yml up -d --build
   ```
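For reference, a filled-in `.env` might look like the sketch below. Only `ENV`, `DOMAIN`, `TRAEFIK_ACME_CASERVER`, and `TRAEFIK_DASHBOARD_AUTH` are named in this README; the remaining variable names are assumptions for illustration, so check `.env.sample` for the authoritative list.

```shell
# Hypothetical .env — variable names other than ENV, DOMAIN,
# TRAEFIK_ACME_CASERVER, and TRAEFIK_DASHBOARD_AUTH are assumptions;
# .env.sample is authoritative.
ENV=staging                        # staging | prod
DOMAIN=example.com                 # base domain for the *.svc records
TRAEFIK_ACME_EMAIL=ops@example.com # assumed name for the ACME email
# Uncomment to test issuance against the Let's Encrypt staging CA:
# TRAEFIK_ACME_CASERVER=https://acme-staging-v02.api.letsencrypt.org/directory
# htpasswd-style user:hash protecting the Traefik dashboard:
TRAEFIK_DASHBOARD_AUTH='admin:$apr1$examplehash'
POSTGRES_PASSWORD=change-me        # assumed name
RABBITMQ_PASSWORD=change-me        # assumed name
MINIO_ROOT_PASSWORD=change-me      # assumed name
```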
Monitor the logs:

```shell
podman compose logs -f traefik
```

## Services and routing

- Traefik dashboard: https://traefik.svc.${DOMAIN} (protect with TRAEFIK_DASHBOARD_AUTH in .env)
- Orchestrator HTTP: https://api.svc.${DOMAIN}
- Orchestrator gRPC (h2/TLS via SNI): grpc.svc.${DOMAIN}
- Forge webhooks: https://forge.svc.${DOMAIN}
- GitHub webhooks: https://github.svc.${DOMAIN}
- Runner static server: https://runner.svc.${DOMAIN}
- MinIO console: https://minio.svc.${DOMAIN}
- S3 API: s3.svc.${DOMAIN}
- RabbitMQ management: https://mq.svc.${DOMAIN}

## Environment scoping (single infra, logical separation)

- RabbitMQ: a single broker with per-environment vhosts named solstice-${ENV} (staging/prod). Services connect to amqp://.../solstice-${ENV}.
- Postgres: a single cluster; the databases solstice_staging and solstice_prod are created by the postgres-setup job. Services use postgres://.../solstice_${ENV}.
- MinIO: a single server; the buckets solstice-logs-staging and solstice-logs-prod are created by the minio-setup job. Point each service's S3 bucket setting at the env-appropriate bucket.

## Security notes

- Secrets are provided via podman compose secrets referencing your environment variables. Do not commit real secrets.
- Only the management UIs are exposed publicly via Traefik. Data planes (Postgres, AMQP, S3 API) terminate TLS at Traefik and route internally. Adjust the exposure policy as needed.

## Images and builds

- System services use Chainguard images (postgres, rabbitmq). MinIO uses upstream images.
- Rust services are built with multi-stage Containerfiles using cgr.dev/chainguard/rust and run on cgr.dev/chainguard/glibc-dynamic.
- Build caches are mounted at build time for the cargo registry/git caches and the cargo target directory (via ~/.cargo/config with target-dir = /cargo/target).

## Maintenance

Upgrade images by editing tags in compose.yml and rebuilding:

```shell
podman compose build --pull
```

Renewals are automatic via Traefik ACME; certificates are stored in the traefik-acme volume.
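The environment-scoping rules above can be sketched as the way a service derives its per-environment endpoints from `ENV`. The internal hostnames (`rabbitmq`, `postgres`), the `solstice` user, and the placeholder password are assumptions for illustration; the vhost, database, and bucket names follow this README.

```shell
#!/bin/sh
# Derive per-environment endpoints from ENV, following the scoping rules
# above. Hostnames, the "solstice" user, and the password are assumed.
ENV=staging   # or: prod

AMQP_URL="amqp://solstice:secret@rabbitmq:5672/solstice-${ENV}"
DATABASE_URL="postgres://solstice:secret@postgres:5432/solstice_${ENV}"
S3_BUCKET="solstice-logs-${ENV}"

echo "$AMQP_URL"
echo "$DATABASE_URL"
echo "$S3_BUCKET"
```

Note the naming difference: the vhost and bucket use a hyphen (`solstice-${ENV}`, `solstice-logs-${ENV}`) while the database uses an underscore (`solstice_${ENV}`).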
Backups: persist the data volumes (postgres-data, rabbitmq-data, minio-data, traefik-acme).

## Tear down

Stop the stack:

```shell
podman compose down
```

Remove the volumes (DANGEROUS: destroys data):

```shell
podman volume rm solstice-ci_traefik-acme solstice-ci_postgres-data solstice-ci_rabbitmq-data solstice-ci_minio-data
```

## Troubleshooting

- Certificate issues: check the Traefik logs; verify DNS and that ports 80/443 are reachable. For testing, use the ACME staging server.
- No routes: verify the Traefik labels on the services and that Traefik can see the podman socket.
- Healthchecks failing: inspect service logs with `podman logs <container>`.
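The backup note above can be sketched as a dry run that builds one `podman volume export` command per data volume. The `solstice-ci_` prefix assumes the default compose project name shown in the tear-down commands; pipe the printed commands to `sh` to actually write the tarballs.

```shell
#!/bin/sh
# Dry-run backup sketch: build one "podman volume export" command per
# data volume and print them. The "solstice-ci_" volume prefix assumes
# the default compose project name.
backup_cmds=$(for vol in traefik-acme postgres-data rabbitmq-data minio-data; do
  echo "podman volume export solstice-ci_${vol} --output backup-${vol}.tar"
done)
echo "$backup_cmds"
```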