Compare commits

...

73 commits

Each entry below lists the author, commit SHA, message, and date.
Till Wegmueller
ed6bb8d28c
chore: Bump chart version to 0.2.0-beta.5
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-24 22:38:47 +01:00
Till Wegmueller
9aa018fc93
feat: Add scope-gated OIDC profile and email claims
Implement standard OIDC claims support for the userinfo endpoint and
ID token. Claims are stored in the properties table and returned based
on the access token's granted scopes:

- profile scope: preferred_username (falls back to username), name,
  given_name, family_name, nickname, picture, profile, website,
  gender, birthdate, zoneinfo, locale, updated_at
- email scope: email, email_verified (with user record fallback)

Adds bulk property retrieval, shared gather_claims() function used by
both userinfo and build_id_token, and updated discovery metadata.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-24 22:19:54 +01:00
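The scope gating described in this commit can be sketched in Rust. This is an illustrative model, not the project's actual `gather_claims()`; the claim names follow the commit message, and the property-map shape is an assumption:

```rust
use std::collections::{BTreeMap, BTreeSet};

/// Illustrative sketch: return only the claims whose governing scope
/// was granted on the access token. Property storage is modeled as a
/// flat string map (an assumption, not the project's schema).
pub fn gather_claims(
    props: &BTreeMap<String, String>,
    username: &str,
    scopes: &BTreeSet<String>,
) -> BTreeMap<String, String> {
    let mut claims = BTreeMap::new();
    if scopes.contains("profile") {
        // preferred_username falls back to the account's username
        let preferred = props
            .get("preferred_username")
            .cloned()
            .unwrap_or_else(|| username.to_string());
        claims.insert("preferred_username".to_string(), preferred);
        for key in ["name", "given_name", "family_name", "nickname",
                    "picture", "profile", "website", "gender",
                    "birthdate", "zoneinfo", "locale", "updated_at"] {
            if let Some(v) = props.get(key) {
                claims.insert(key.to_string(), v.clone());
            }
        }
    }
    if scopes.contains("email") {
        if let Some(v) = props.get("email") {
            claims.insert("email".to_string(), v.clone());
        }
        if let Some(v) = props.get("email_verified") {
            claims.insert("email_verified".to_string(), v.clone());
        }
    }
    claims
}
```

Sharing one function like this between the userinfo endpoint and ID-token building keeps the two claim sets from drifting apart.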
Till Wegmüller
86ba1da7bc
Merge pull request #1 from CloudNebulaProject/claude/implement-next-steps-TIoEy
Add token introspection endpoint (RFC 7662)
2026-03-19 22:31:47 +01:00
Claude
7b16f54223
feat: Add token introspection endpoint, docs, and validation scripts
Implement the remaining items from docs/next-iteration-plan.md:

- Add POST /introspect endpoint (RFC 7662) with client authentication,
  support for access and refresh tokens, and token_type_hint
- Add raw token lookup functions in storage for introspection
- Add revocation_endpoint and introspection_endpoint to discovery metadata
- Create docs/flows.md with end-to-end curl examples for all OIDC flows
- Create scripts/validate-oidc.sh to verify discovery, JWKS, registration,
  introspection, and revocation endpoints
- Update docs/oidc-conformance.md to reflect actual implementation status
- Update README.md and CLAUDE.md pending sections to be accurate

https://claude.ai/code/session_01JBxVy75XfwwZB8iBXjTxT3
2026-03-19 20:30:31 +00:00
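The core of an RFC 7662 introspection response can be sketched as follows. This is illustrative Rust; the field names (`revoked`, `expires_at`) are assumptions, not the project's storage schema. The RFC requires that an unknown, expired, or revoked token yield only `{"active": false}`:

```rust
/// Illustrative token record (field names assumed for this sketch).
pub struct StoredToken {
    pub revoked: bool,
    pub expires_at: i64, // unix seconds
    pub scope: String,
    pub client_id: String,
}

/// A token is "active" only if it exists, is not revoked, and has not
/// expired; otherwise the endpoint returns the minimal inactive body.
pub fn introspect(token: Option<&StoredToken>, now: i64) -> String {
    match token {
        Some(t) if !t.revoked && t.expires_at > now => format!(
            "{{\"active\":true,\"scope\":\"{}\",\"client_id\":\"{}\",\"exp\":{}}}",
            t.scope, t.client_id, t.expires_at
        ),
        _ => "{\"active\":false}".to_string(),
    }
}
```

Returning the same inactive body for "unknown" and "revoked" avoids leaking whether a token ever existed.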
Till Wegmueller
9e64ce6744
chore: Bump chart version to 0.2.0-beta.4
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-22 19:07:30 +01:00
Till Wegmueller
210a27ca02
fix: Change device_code interval from i64 to i32
The migration creates the interval column as integer (INT4) but the
entity and storage struct used i64 (INT8), causing a type mismatch
error on PostgreSQL.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-22 18:33:50 +01:00
Till Wegmueller
f6262b2128
fix: Pass env vars to user-sync init container
The init container was only getting RUST_LOG, not the main env block.
This caused it to connect to the config file's database URL (SQLite)
instead of the BARYCENTER__DATABASE__URL env var (PostgreSQL),
resulting in migrations and user-sync running against the wrong
database.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-22 18:15:23 +01:00
Till Wegmueller
dd3dd4ef31
fix: Rename device_code table to device_codes
The DeriveIden macro converted DeviceCode to device_code (singular),
but the SeaORM entity expects device_codes (plural). Adds a migration
to rename the table so queries match.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-22 18:02:22 +01:00
Till Wegmueller
204c2958a8
fix: Add prefix_separator to config env override
The config-rs crate uses '_' as the default prefix separator, so
BARYCENTER__DATABASE__URL was parsed as _database.url instead of
database.url. Adding prefix_separator("__") ensures double-underscore
env vars are correctly mapped to nested config keys.

Also makes the database section in the Helm ConfigMap conditional so
it can be omitted when the URL is provided via environment variable.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-22 17:31:40 +01:00
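The prefix-separator bug can be modeled in a few lines. This is a simplified stand-in for config-rs's parsing, not its actual code; it only shows why the separator choice changes the resulting key:

```rust
/// Simplified model: strip the prefix and its separator, then map the
/// remaining segments (joined by `sep`) to a nested dotted key.
pub fn nested_key(var: &str, prefix: &str, prefix_sep: &str, sep: &str) -> Option<String> {
    let stripped = var.strip_prefix(prefix)?.strip_prefix(prefix_sep)?;
    Some(stripped.to_lowercase().replace(sep, "."))
}
```

With `prefix_sep = "__"` the variable maps to `database.url`; with the default `"_"` a stray underscore survives the strip and the key becomes `_database.url`, which matches no config section.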
Till Wegmueller
8d835e240b
chore: Add book/build to gitignore
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-14 18:16:36 +01:00
Till Wegmueller
3f814408f5
fix: Add Mermaid diagram rendering support to mdbook
Include mermaid.min.js and a custom init script that converts
```mermaid code blocks to rendered diagrams at runtime. Supports
theme-aware rendering (light/dark). No preprocessor needed.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-14 18:15:20 +01:00
Till Wegmueller
2b9826f95f
fix: Remove unsupported git-repository-icon from book.toml
The fa-github icon font is not available in newer mdbook versions,
causing a "Missing font github" rendering error.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-14 18:06:32 +01:00
Till Wegmueller
22987c764e
ci: Add GitHub Actions workflow for deploying docs to Pages
Builds the mdbook documentation and deploys to GitHub Pages on
pushes to main that modify book/ files. Also supports manual
dispatch via workflow_dispatch.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-14 18:00:48 +01:00
Till Wegmueller
39eb8206a1
docs: Add comprehensive mdbook documentation
Complete documentation site covering all aspects of Barycenter:
Getting Started, Authentication, OAuth 2.0/OIDC, Authorization
Policy Engine, Administration, Deployment, Security, Development,
and Reference sections (96 markdown files).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-14 17:59:55 +01:00
Till Wegmueller
1e3bb668e8
chore: Release 2026-02-14 17:11:08 +01:00
Till Wegmueller
89a7902116
Run Clippy fix
Signed-off-by: Till Wegmueller <toasterson@gmail.com>
2026-02-08 20:31:48 +01:00
Till Wegmueller
4f0dac7645
Fix formatting
Signed-off-by: Till Wegmueller <toasterson@gmail.com>
2026-02-08 20:30:52 +01:00
Till Wegmueller
df57dda960
Add Claude settings
Signed-off-by: Till Wegmueller <toasterson@gmail.com>
2026-02-08 18:58:06 +01:00
Till Wegmueller
7bc8f513ac
Add Kubernetes deployment support for authorization policy service
Expose authz API port (8082) in Dockerfile and create /app/policies
directory. Extend Helm chart with configurable authz section: inline
KDL policy ConfigMap, existing ConfigMap reference, policies volume
mount, Service port, and a NetworkPolicy restricting the authz port
to same-namespace traffic while leaving the OIDC port unrestricted.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-08 18:55:54 +01:00
Till Wegmueller
1385403e1a
Add original research document and claude settings
Signed-off-by: Till Wegmueller <toasterson@gmail.com>
2026-02-08 18:34:42 +01:00
Till Wegmueller
e0ca87f867
Implement file-driven authorization policy service (ReBAC + ABAC)
Add a Zanzibar-style relationship-based access control engine with
OPA-style ABAC condition evaluation. Policies, roles, resources, and
grants are defined in KDL files loaded from a configured directory at
startup. Exposes a read-only REST API (POST /v1/check, /v1/expand,
GET /healthz) on a dedicated port when authz.enabled = true.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-08 18:34:14 +01:00
Till Wegmueller
95a55c5f24
chore: Release 2026-01-06 23:06:56 +01:00
Till Wegmueller
113eb2a211
Format
Signed-off-by: Till Wegmueller <toasterson@gmail.com>
2026-01-06 22:24:47 +01:00
Till Wegmueller
badb5dd18e
Implement device flow and client autoregistration
Signed-off-by: Till Wegmueller <toasterson@gmail.com>
2026-01-06 22:24:09 +01:00
Till Wegmueller
3cf557d310
chore: Release 2026-01-06 20:10:30 +01:00
Till Wegmueller
31423c2a7f
Update claude settings
Signed-off-by: Till Wegmueller <toasterson@gmail.com>
2026-01-06 20:09:57 +01:00
Till Wegmueller
782a319164
ci: Add Docker build check to catch Dockerfile issues early
- Add docker-build job that runs on every push/PR
- Builds only amd64 platform for speed (vs multi-platform in release)
- Uses GitHub Actions cache for faster builds
- Prevents Dockerfile issues from reaching release workflow

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-06 19:17:52 +01:00
Till Wegmueller
1fc229f582
fix(docker): Add missing client-wasm directory and update Rust version
- Add COPY client-wasm to Dockerfile to include workspace member
- Update Rust base image from 1.91 to 1.92
- Fixes CI build failure: "failed to load manifest for workspace member client-wasm"

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-06 18:44:46 +01:00
Till Wegmueller
2d14ef000c
chore: Release 2026-01-06 17:08:37 +01:00
Till Wegmueller
3f2a30cf97
format code
Signed-off-by: Till Wegmueller <toasterson@gmail.com>
2026-01-06 16:50:22 +01:00
Till Wegmueller
0fcd924105
Implement consent workflow
Signed-off-by: Till Wegmueller <toasterson@gmail.com>
2026-01-06 16:49:49 +01:00
Till Wegmueller
eb9c71a49f
Implement more tests
Signed-off-by: Till Wegmueller <toasterson@gmail.com>
2026-01-06 12:39:19 +01:00
Till Wegmueller
a949a3cbdb
Format
Signed-off-by: Till Wegmueller <toasterson@gmail.com>
2026-01-06 12:31:51 +01:00
Till Wegmueller
ecd6b00a1e
Implement Passkey classification features
Signed-off-by: Till Wegmueller <toasterson@gmail.com>
2026-01-06 12:31:22 +01:00
Till Wegmueller
d39c757be5
Fix tests
Signed-off-by: Till Wegmueller <toasterson@gmail.com>
2026-01-06 11:17:38 +01:00
Till Wegmueller
2b4922a69f
Fix tests
Signed-off-by: Till Wegmueller <toasterson@gmail.com>
2026-01-06 11:09:02 +01:00
Till Wegmueller
86c88d8aee
Commit work in progress
Signed-off-by: Till Wegmueller <toasterson@gmail.com>
2026-01-06 10:56:23 +01:00
Till Wegmueller
d7bdd51164
WIP Passkey implementation. Needs fixing storage.rs and more tests
Signed-off-by: Till Wegmueller <toasterson@gmail.com>
2025-12-07 13:18:22 +01:00
Till Wegmueller
47d9d24798
chore: bump chart version
Signed-off-by: Till Wegmueller <toasterson@gmail.com>
2025-12-02 22:08:01 +01:00
Till Wegmueller
304196ead9
chore: release 0.2.0-alpha.15 2025-12-02 21:47:11 +01:00
Till Wegmueller
629cfc1c92
fix: include migration directory in Docker build
Add COPY instruction for migration directory to Dockerfile to fix
build failure. The migration crate is a path dependency required
by the main barycenter package.

Fixes Docker build error:
  error: failed to get `migration` as a dependency of package `barycenter`

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-02 21:46:47 +01:00
Till Wegmueller
8e0107cd33
chore: release 0.2.0-alpha.14 2025-12-02 21:43:30 +01:00
Till Wegmueller
b6bf4ceee0
feat: migrate from raw SQL to SeaORM migrations
Replace raw SQL CREATE TABLE statements with proper SeaORM migration
system. This eliminates verbose SQL logs on startup and provides
proper migration tracking and rollback support.

Changes:
- Add sea-orm-migration dependency and migration crate
- Create initial migration (m20250101_000001) with all 8 tables
- Update storage::init() to only connect to database
- Run migrations automatically in main.rs on startup
- Remove unused detect_backend() function and imports

The migration system properly handles both SQLite and PostgreSQL
backends with appropriate type handling (e.g., BIGSERIAL vs INTEGER
for auto-increment columns).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-02 21:42:58 +01:00
Till Wegmueller
2a865b2ba4
feat: add full Kubernetes env var support to Helm chart
Add support for valueFrom in environment variables for both main
container and user-sync init container. This enables injecting
values from secrets, configMaps, fieldRefs, and resourceFieldRefs
instead of only hardcoded values.

Updated deployment template to use toYaml for env rendering,
allowing full Kubernetes env var specifications. Added comprehensive
documentation and examples in values.yaml.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-02 21:42:37 +01:00
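An illustrative `values.yaml` entry of the kind this commit enables (the secret and key names here are hypothetical, not taken from the chart):

```yaml
# Hypothetical example: plain value plus a valueFrom secret reference,
# both rendered verbatim into the container spec via toYaml.
env:
  - name: RUST_LOG
    value: info
  - name: BARYCENTER__DATABASE__URL
    valueFrom:
      secretKeyRef:
        name: barycenter-db   # assumed secret name
        key: url
```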
Till Wegmueller
be4e0f8e71
fix: set config path for Helm chart containers
Add --config flag to both main container and user-sync init container
to explicitly specify the mounted config file path at /app/config/config.toml.
This fixes deserialization errors when the application couldn't find the
config file in the default working directory.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-02 14:23:52 +01:00
Till Wegmueller
e8a060d7c3
chore: formatting
Signed-off-by: Till Wegmueller <toasterson@gmail.com>
2025-12-01 00:01:11 +01:00
Till Wegmueller
06bff60122
fix: enable public registration for tests and fix env prefix
- Enable public registration in integration tests via environment variable
  BARYCENTER__SERVER__ALLOW_PUBLIC_REGISTRATION=true
- Fix environment variable prefix from CRABIDP to BARYCENTER to match
  documentation in CLAUDE.md
- All 4 integration tests now pass successfully

Fixes:
- test_oauth2_authorization_code_flow
- test_openidconnect_authorization_code_flow
- test_security_headers
- test_token_endpoint_cache_control

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-30 18:47:51 +01:00
Till Wegmueller
57a0df9080
feat: add user sync init container support to Helm chart
- Add userSync configuration to values.yaml (existingSecret only)
- Add conditional init container to deployment.yaml
- Create comprehensive README.md with:
  - Installation and configuration instructions
  - User sync workflow and examples
  - Troubleshooting guide
  - Security best practices
- Add examples/user-sync-secret.yaml with sample users
- Support declarative user management for Kubernetes/GitOps

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-30 18:44:18 +01:00
Till Wegmueller
f2f7f4be00
chore: release 0.2.0-alpha.13 2025-11-30 18:13:48 +01:00
Till Wegmueller
a1056bb237
feat: add admin GraphQL API, background jobs, and user sync CLI
Major Features:
- Admin GraphQL API with dual endpoints (Seaography + custom)
- Background job scheduler with execution tracking
- Idempotent user sync CLI for Kubernetes deployments
- Secure PUT /properties endpoint with Bearer token auth

Admin GraphQL API:
- Entity CRUD via Seaography at /admin/graphql
- Custom job management API at /admin/jobs
- Mutations: triggerJob
- Queries: jobLogs, availableJobs
- GraphiQL playgrounds for both endpoints

Background Jobs:
- tokio-cron-scheduler integration
- Automated cleanup of expired sessions (hourly)
- Automated cleanup of expired refresh tokens (hourly)
- Job execution tracking in database
- Manual job triggering via GraphQL

User Sync CLI:
- Command: barycenter sync-users --file users.json
- Idempotent user synchronization from JSON
- Creates new users with hashed passwords
- Updates existing users (enabled, email_verified, email)
- Syncs custom properties per user
- Perfect for Kubernetes init containers

Security Enhancements:
- PUT /properties endpoint requires Bearer token
- Users can only modify their own properties
- Public registration disabled by default
- Admin API on separate port for network isolation

Database:
- New job_executions table for job tracking
- User update functions (update_user, update_user_email)
- PostgreSQL + SQLite support maintained

Configuration:
- allow_public_registration setting (default: false)
- admin_port setting (default: main port + 1)

Documentation:
- Comprehensive Kubernetes deployment guide
- User sync JSON schema and examples
- Init container and CronJob examples
- Production deployment patterns

Files Added:
- src/admin_graphql.rs - GraphQL schema builders
- src/admin_mutations.rs - Custom mutations and queries
- src/jobs.rs - Job scheduler and tracking
- src/user_sync.rs - User sync logic
- src/entities/ - SeaORM entities (8 entities)
- docs/kubernetes-deployment.md - K8s deployment guide
- users.json.example - User sync example

Dependencies:
- tokio-cron-scheduler 0.13
- seaography 1.1.4
- async-graphql 7.0
- async-graphql-axum 7.0

🤖 Generated with Claude Code (https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-30 18:06:50 +01:00
Till Wegmueller
06ff10dda9
chore: release 0.2.0-alpha.12 2025-11-29 20:52:42 +01:00
Till Wegmueller
0c9f8144bb
fix: add attestations write permission for manifest job 2025-11-29 20:52:06 +01:00
Till Wegmueller
3afdb6308e
chore: release 0.2.0-alpha.11 2025-11-29 20:46:00 +01:00
Till Wegmueller
80a56a137a
fix: extract manifest digest correctly for attestation 2025-11-29 20:45:04 +01:00
Till Wegmueller
6ef8f0b266
chore: release 0.2.0-alpha.10 2025-11-29 20:30:58 +01:00
Till Wegmueller
ececa59084
fix: use correct ARM64 runner label ubuntu-24.04-arm 2025-11-29 20:29:32 +01:00
Till Wegmueller
c381e00c37
chore: release 0.2.0-alpha.9 2025-11-29 17:36:59 +01:00
Till Wegmueller
656bdb5531
fix: move attestation to multi-platform manifest creation 2025-11-29 17:36:35 +01:00
Till Wegmueller
6e0fb3cb68
chore: release 0.2.0-alpha.8 2025-11-29 17:22:01 +01:00
Till Wegmueller
d3f6b47fdb
chore: add claude memory for the repo
Signed-off-by: Till Wegmueller <toasterson@gmail.com>
2025-11-29 17:21:40 +01:00
Till Wegmueller
609f39813f
feat: use native ARM64 runners with matrix strategy for faster builds 2025-11-29 17:21:07 +01:00
Till Wegmueller
876c659292
chore: release 0.2.0-alpha.7 2025-11-29 16:41:28 +01:00
Till Wegmueller
94767f5554
fix: use platform-specific build caches to avoid race conditions 2025-11-29 16:41:06 +01:00
Till Wegmueller
55a0141a2f
chore: release 0.2.0-alpha.6 2025-11-29 16:21:57 +01:00
Till Wegmueller
362b57d4c3
chore: update Dockerfile to Rust 1.91 for edition 2024 support 2025-11-29 16:21:24 +01:00
Till Wegmueller
6b388de790
chore: release 0.2.0-alpha.5 2025-11-29 16:15:02 +01:00
Till Wegmueller
0ce360f004
fix: commit Cargo.lock for reproducible builds
Cargo.lock should be committed for applications (not libraries) to ensure
reproducible builds across environments. This is required for Docker builds
and is the recommended practice per Rust guidelines.

Removed Cargo.lock from:
- .gitignore
- .dockerignore

This fixes the Docker build error:
  ERROR: "/Cargo.lock": not found
2025-11-29 16:14:39 +01:00
Till Wegmueller
bd42b06fff
chore: release 0.2.0-alpha.4 2025-11-29 16:11:31 +01:00
Till Wegmueller
7e7e672f65
fix(ci): use fixed prefix for SHA tags instead of branch name
The {{branch}} placeholder is empty for tag pushes, resulting in
invalid tags like '-f7184b4'. Changed to use 'sha-' prefix instead.

Tags will now be:
- ghcr.io/.../barycenter:0.2.0-alpha.3
- ghcr.io/.../barycenter:sha-f7184b4
2025-11-29 16:11:09 +01:00
Till Wegmueller
f7184b4c67
chore: release 0.2.0-alpha.3 2025-11-29 16:09:05 +01:00
Till Wegmueller
ea876be242
fix(ci): prevent invalid Docker tags for pre-release versions
Disable major and minor version tags for pre-release versions (alpha, beta, rc)
since semver pattern extraction doesn't work correctly with pre-release suffixes.

This fixes the error:
  ERROR: failed to build: invalid tag "ghcr.io/.../barycenter:-1171167"

Pre-release versions will now only get:
- Full version tag: v0.2.0-alpha.1
- SHA tag: main-<sha>

Stable releases will continue to get all tags:
- Full version: v1.0.0
- Major.minor: 1.0
- Major: 1
- SHA: main-<sha>

Also added missing id to build step for attestation.
2025-11-29 16:08:31 +01:00
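The tag-selection rule above can be sketched as follows. This is illustrative Rust; the real logic lives in docker/metadata-action's `enable` expressions, and the helper name is made up for the sketch:

```rust
/// Pre-release versions (alpha/beta/rc) get only the full-version and
/// SHA tags; stable releases additionally get major.minor and major.
pub fn docker_tags(version: &str, sha7: &str) -> Vec<String> {
    let prerelease = ["alpha", "beta", "rc"]
        .iter()
        .any(|p| version.contains(*p));
    let mut tags = vec![version.to_string(), format!("sha-{}", sha7)];
    if !prerelease {
        // "1.0.0" -> also tag "1.0" and "1"
        let parts: Vec<&str> = version.trim_start_matches('v').split('.').collect();
        if parts.len() >= 2 {
            tags.push(format!("{}.{}", parts[0], parts[1]));
            tags.push(parts[0].to_string());
        }
    }
    tags
}
```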
Till Wegmueller
11711677da
chore: release 0.2.0-alpha.2 2025-11-29 16:04:13 +01:00
Till Wegmueller
5189a18008
chore: fix formatting
Signed-off-by: Till Wegmueller <toasterson@gmail.com>
2025-11-29 16:03:52 +01:00
179 changed files with 39471 additions and 569 deletions


@@ -21,7 +21,37 @@
"Bash(gh run view:*)",
"Bash(cargo fmt:*)",
"Bash(cargo clippy:*)",
"Bash(rm:*)"
"Bash(rm:*)",
"WebSearch",
"Bash(cargo check:*)",
"Bash(cat:*)",
"Bash(cargo doc:*)",
"Bash(grep:*)",
"Bash(cargo run:*)",
"Bash(wasm-pack build:*)",
"Bash(find:*)",
"Bash(wc:*)",
"Bash(cargo fix:*)",
"Bash(tee:*)",
"mcp__context7__query-docs",
"Bash(cargo expand:*)",
"Bash(cargo tree:*)",
"Bash(cargo metadata:*)",
"Bash(ls:*)",
"Bash(sqlite3:*)",
"Bash(rustc:*)",
"Bash(docker build:*)",
"Bash(git commit -m \"$(cat <<''EOF''\nfix(docker): Add missing client-wasm directory and update Rust version\n\n- Add COPY client-wasm to Dockerfile to include workspace member\n- Update Rust base image from 1.91 to 1.92\n- Fixes CI build failure: \"failed to load manifest for workspace member client-wasm\"\n\n🤖 Generated with [Claude Code](https://claude.com/claude-code)\n\nCo-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>\nEOF\n)\")",
"WebFetch(domain:datatracker.ietf.org)",
"WebFetch(domain:docs.rs)",
"WebFetch(domain:github.com)",
"WebFetch(domain:kdl.dev)",
"Bash(git -C /home/toasty/ws/nebula/barycenter status)",
"Bash(git -C /home/toasty/ws/nebula/barycenter diff --stat)",
"Bash(git -C /home/toasty/ws/nebula/barycenter log --oneline -5)",
"Bash(git -C /home/toasty/ws/nebula/barycenter add Cargo.toml Cargo.lock src/lib.rs src/settings.rs src/web.rs src/authz/)",
"Bash(git -C /home/toasty/ws/nebula/barycenter commit -m \"$\\(cat <<''EOF''\nImplement file-driven authorization policy service \\(ReBAC + ABAC\\)\n\nAdd a Zanzibar-style relationship-based access control engine with\nOPA-style ABAC condition evaluation. Policies, roles, resources, and\ngrants are defined in KDL files loaded from a configured directory at\nstartup. Exposes a read-only REST API \\(POST /v1/check, /v1/expand,\nGET /healthz\\) on a dedicated port when authz.enabled = true.\n\nCo-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>\nEOF\n\\)\")",
"Bash(helm template:*)"
],
"deny": [],
"ask": []


@@ -12,7 +12,7 @@
# Rust
target/
Cargo.lock
# Note: Cargo.lock is needed for reproducible builds
# Build artifacts
*.db


@@ -73,6 +73,27 @@ jobs:
- name: Run tests
run: cargo nextest run --verbose
docker-build:
name: Docker Build Check
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Build Docker image (amd64)
uses: docker/build-push-action@v6
with:
context: .
platforms: linux/amd64
push: false
tags: barycenter:ci-test
cache-from: type=gha,scope=ci-docker-amd64
cache-to: type=gha,mode=max,scope=ci-docker-amd64
security:
name: Security Audit
runs-on: ubuntu-latest

.github/workflows/docs.yml (vendored, new file, 54 lines)

@@ -0,0 +1,54 @@
name: Deploy Documentation
on:
push:
branches:
- main
paths:
- 'book/**'
workflow_dispatch:
permissions:
contents: read
pages: write
id-token: write
concurrency:
group: pages
cancel-in-progress: false
jobs:
build:
name: Build mdbook
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Install mdbook
uses: taiki-e/install-action@v2
with:
tool: mdbook
- name: Build book
run: mdbook build book
- name: Upload Pages artifact
uses: actions/upload-pages-artifact@v3
with:
path: book/build
deploy:
name: Deploy to GitHub Pages
needs: build
runs-on: ubuntu-latest
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
steps:
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v4


@@ -10,12 +10,19 @@ env:
IMAGE_NAME: ${{ github.repository }}
jobs:
build-and-push:
runs-on: ubuntu-latest
build-platform:
runs-on: ${{ matrix.runner }}
permissions:
contents: read
packages: write
id-token: write
strategy:
matrix:
include:
- platform: linux/amd64
runner: ubuntu-latest
- platform: linux/arm64
runner: ubuntu-24.04-arm
steps:
- name: Checkout repository
@@ -38,38 +45,88 @@ jobs:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=sha,prefix={{branch}}-
type=semver,pattern={{major}}.{{minor}},enable=${{ !contains(github.ref_name, 'alpha') && !contains(github.ref_name, 'beta') && !contains(github.ref_name, 'rc') }}
type=semver,pattern={{major}},enable=${{ !contains(github.ref_name, 'alpha') && !contains(github.ref_name, 'beta') && !contains(github.ref_name, 'rc') }}
type=sha,prefix=sha-
labels: |
org.opencontainers.image.title=Barycenter
org.opencontainers.image.description=OpenID Connect Identity Provider with federation and auto-registration
org.opencontainers.image.vendor=${{ github.repository_owner }}
flavor: |
suffix=-${{ matrix.platform == 'linux/amd64' && 'amd64' || 'arm64' }}
- name: Build and push Docker image
- name: Build and push platform-specific image
id: build
uses: docker/build-push-action@v6
with:
context: .
platforms: linux/amd64,linux/arm64
platforms: ${{ matrix.platform }}
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
cache-from: type=gha,scope=${{ matrix.platform }}
cache-to: type=gha,mode=max,scope=${{ matrix.platform }}
build-args: |
VERSION=${{ github.ref_name }}
REVISION=${{ github.sha }}
create-manifest:
runs-on: ubuntu-latest
needs: build-platform
permissions:
contents: read
packages: write
id-token: write
attestations: write
steps:
- name: Log in to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata for manifest
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}},enable=${{ !contains(github.ref_name, 'alpha') && !contains(github.ref_name, 'beta') && !contains(github.ref_name, 'rc') }}
type=semver,pattern={{major}},enable=${{ !contains(github.ref_name, 'alpha') && !contains(github.ref_name, 'beta') && !contains(github.ref_name, 'rc') }}
type=sha,prefix=sha-
- name: Create and push multi-platform manifest
id: manifest
run: |
# Extract tags into an array
TAGS=$(echo '${{ steps.meta.outputs.tags }}' | tr '\n' ' ')
# For each tag, create a manifest combining both platform images
for TAG in $TAGS; do
echo "Creating manifest for $TAG"
docker buildx imagetools create -t $TAG \
${TAG}-amd64 \
${TAG}-arm64
done
# Get the digest of the first tag (version tag) for attestation
FIRST_TAG=$(echo '${{ steps.meta.outputs.tags }}' | head -n1)
DIGEST=$(docker buildx imagetools inspect ${FIRST_TAG} --raw | sha256sum | cut -d' ' -f1 | awk '{print "sha256:" $0}')
echo "digest=${DIGEST}" >> $GITHUB_OUTPUT
- name: Generate artifact attestation
uses: actions/attest-build-provenance@v1
with:
subject-name: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
subject-digest: ${{ steps.build.outputs.digest }}
subject-digest: ${{ steps.manifest.outputs.digest }}
push-to-registry: true
create-github-release:
runs-on: ubuntu-latest
needs: build-and-push
needs: create-manifest
permissions:
contents: write

.gitignore (vendored, 5 lines changed)

@@ -4,7 +4,7 @@
*.pdb
# Cargo
Cargo.lock
# Note: Cargo.lock should be committed for applications (not ignored)
# IDE and editor files
.idea/
@@ -24,6 +24,9 @@ Cargo.lock
*.db-shm
*.db-wal
# mdbook build output
/book/build/
# Environment and config (optional - uncomment if you want to ignore local configs)
# config.toml
# .env

CLAUDE.md (344 lines changed)

@@ -4,7 +4,7 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
## Project Overview
Barycenter is an OpenID Connect Identity Provider (IdP) implementing OAuth 2.0 Authorization Code flow with PKCE. The project is written in Rust using axum for the web framework, SeaORM for database access (SQLite), and josekit for JOSE/JWT operations.
Barycenter is an OpenID Connect Identity Provider (IdP) implementing OAuth 2.0 Authorization Code flow with PKCE. The project is written in Rust using axum for the web framework, SeaORM for database access (SQLite and PostgreSQL), and josekit for JOSE/JWT operations.
## Build and Development Commands
@@ -68,6 +68,27 @@ The application loads configuration from:
Environment variables use double underscores as separators for nested keys.
### Database Configuration
Barycenter supports both SQLite and PostgreSQL databases. The database backend is automatically detected from the connection string:
**SQLite (default):**
```toml
[database]
url = "sqlite://barycenter.db?mode=rwc"
```
**PostgreSQL:**
```toml
[database]
url = "postgresql://user:password@localhost/barycenter"
```
Or via environment variable:
```bash
export BARYCENTER__DATABASE__URL="postgresql://user:password@localhost/barycenter"
```
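A minimal sketch of scheme-based backend detection (illustrative Rust, not the project's code; the commit history notes an earlier `detect_backend()` was removed once SeaORM migrations took over, so treat this as a model of the rule):

```rust
/// Backend is inferred from the connection string's URL scheme.
#[derive(Debug, PartialEq)]
pub enum Backend {
    Sqlite,
    Postgres,
}

pub fn detect_backend(url: &str) -> Option<Backend> {
    if url.starts_with("sqlite:") {
        Some(Backend::Sqlite)
    } else if url.starts_with("postgres://") || url.starts_with("postgresql://") {
        Some(Backend::Postgres)
    } else {
        None // unsupported scheme
    }
}
```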
## Architecture and Module Structure
### Entry Point (`src/main.rs`)
@@ -81,14 +102,14 @@ The application initializes in this order:
### Settings (`src/settings.rs`)
Manages configuration with four main sections:
- `Server`: listen address and public base URL (issuer)
- `Database`: SQLite connection string
- `Database`: database connection string (SQLite or PostgreSQL)
- `Keys`: JWKS and private key paths, signing algorithm
- `Federation`: trust anchor URLs (future use)
The `issuer()` method returns the OAuth issuer URL, preferring `public_base_url` or falling back to `http://{host}:{port}`.
### Storage (`src/storage.rs`)
Database layer with raw SQL using SeaORM's `DatabaseConnection`. Tables:
Database layer with raw SQL using SeaORM's `DatabaseConnection`. Supports both SQLite and PostgreSQL backends, automatically detected from the connection string. Tables:
- `clients`: OAuth client registrations (client_id, client_secret, redirect_uris)
- `auth_codes`: Authorization codes with PKCE challenge, subject, scope, nonce
- `access_tokens`: Bearer tokens with subject, scope, expiration
@@ -113,15 +134,35 @@ Implements OpenID Connect and OAuth 2.0 endpoints:
**OAuth/OIDC Flow:**
- `GET /authorize` - Authorization endpoint (issues authorization code with PKCE)
- Currently uses fixed subject "demo-user" (pending login flow implementation per docs/next-iteration-plan.md)
- Validates client_id, redirect_uri, scope (must include "openid"), PKCE S256
- Checks 2FA requirements (admin-enforced, high-value scopes, max_age)
- Redirects to /login or /login/2fa if authentication needed
- Returns redirect with code and state
- `POST /token` - Token endpoint (exchanges code for tokens)
- Supports `client_secret_basic` (Authorization header) and `client_secret_post` (form body)
- Validates PKCE S256 code_verifier
- Returns access_token, id_token (JWT), token_type, expires_in
- Returns access_token, id_token (JWT with AMR/ACR claims), token_type, expires_in
- `GET /userinfo` - UserInfo endpoint (returns claims for Bearer token)
**Authentication:**
- `GET /login` - Login page with passkey autofill and password fallback
- `POST /login` - Password authentication, checks 2FA requirements
- `GET /login/2fa` - Two-factor authentication page
- `POST /logout` - End user session
**Passkey/WebAuthn Endpoints:**
- `POST /webauthn/register/start` - Start passkey registration (requires session)
- `POST /webauthn/register/finish` - Complete passkey registration
- `POST /webauthn/authenticate/start` - Start passkey authentication (public)
- `POST /webauthn/authenticate/finish` - Complete passkey authentication
- `POST /webauthn/2fa/start` - Start 2FA passkey verification (requires partial session)
- `POST /webauthn/2fa/finish` - Complete 2FA passkey verification
**Passkey Management:**
- `GET /account/passkeys` - List user's registered passkeys
- `DELETE /account/passkeys/:credential_id` - Delete a passkey
- `PATCH /account/passkeys/:credential_id` - Update passkey name
**Non-Standard:**
- `GET /properties/:owner/:key` - Get property value
- `PUT /properties/:owner/:key` - Set property value
@@ -147,12 +188,60 @@ Generated ID tokens include:
- Standard claims: iss, sub, aud, exp, iat
- Optional: nonce (if provided in authorize request)
- at_hash: hash of access token per OIDC spec (left 128 bits of SHA-256, base64url)
- auth_time: timestamp of authentication (from session)
- amr: Authentication Method References array (e.g., ["pwd"], ["hwk"], ["pwd", "hwk"])
- acr: Authentication Context Reference ("aal1" for single-factor, "aal2" for two-factor)
- Signed with RS256, includes kid header matching JWKS
### State Management
- Authorization codes: 5 minute TTL, single-use (marked consumed)
- Access tokens: 1 hour TTL, checked for expiration and revoked flag
- Sessions: Track authentication methods (AMR), context (ACR), and MFA status
- WebAuthn challenges: 5 minute TTL, cleaned up every 5 minutes by background job
- All stored in database with timestamps
### WebAuthn/Passkey Authentication
Barycenter supports passwordless authentication using WebAuthn/FIDO2 passkeys with the following features:
**Authentication Modes:**
- **Single-factor passkey login**: Passkeys as primary authentication method
- **Two-factor authentication**: Passkeys as second factor after password login
- **Password fallback**: Traditional password authentication remains available
**Client Implementation:**
- Rust WASM module (`client-wasm/`) compiled with wasm-pack
- Browser-side WebAuthn API calls via wasm-bindgen
- Conditional UI support for autofill in Chrome 108+, Safari 16+
- Progressive enhancement: falls back to explicit button if autofill unavailable
**Passkey Storage:**
- Full `Passkey` object stored as JSON in database
- Tracks signature counter for clone detection
- Records backup state (cloud-synced vs hardware-bound)
- Supports friendly names for user management
**AMR (Authentication Method References) Values:**
- `"pwd"`: Password authentication
- `"hwk"`: Hardware-bound passkey (YubiKey, security key)
- `"swk"`: Software/cloud-synced passkey (iCloud Keychain, password manager)
- Multiple values indicate multi-factor auth (e.g., `["pwd", "hwk"]`)
**2FA Enforcement Modes:**
1. **User-Optional 2FA**: Users can enable 2FA in account settings (future UI)
2. **Admin-Enforced 2FA**: Set `users.requires_2fa = 1` via GraphQL mutation
3. **Context-Based 2FA**: Triggered by:
- High-value scopes: "admin", "payment", "transfer", "delete"
- Fresh authentication required: `max_age < 300` seconds
- Can be configured per-scope or per-request
**2FA Flow:**
1. User logs in with password → creates partial session (`mfa_verified=0`)
2. If 2FA required, redirect to `/login/2fa`
3. User verifies with passkey
4. Session upgraded: `mfa_verified=1`, `acr="aal2"`, `amr=["pwd", "hwk"]`
5. Authorization proceeds, ID token includes full authentication context
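Illustratively, the session's authentication context changes across the passkey step like this (field names follow the session columns described above; values are examples for a password-then-security-key upgrade):

```
before step 3:  { "amr": ["pwd"],         "acr": "aal1", "mfa_verified": 0 }
after  step 4:  { "amr": ["pwd", "hwk"],  "acr": "aal2", "mfa_verified": 1 }
```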
## Current Implementation Status
See `docs/oidc-conformance.md` for detailed OIDC compliance requirements.
- Authorization Code flow with PKCE (S256)
- Dynamic client registration
- Token endpoint with client_secret_basic and client_secret_post
- ID Token signing (RS256) with at_hash, nonce, auth_time, AMR, and ACR claims
- UserInfo endpoint with Bearer token authentication
- Discovery and JWKS publication
- Property storage API
- User authentication with sessions
- Password authentication with argon2 hashing
- WebAuthn/passkey authentication (single-factor and two-factor)
- WASM client for browser-side WebAuthn operations
- Conditional UI/autofill for passkey login
- Three 2FA modes: user-optional, admin-enforced, context-based
- Background jobs for cleanup (sessions, tokens, challenges)
- Admin GraphQL API for user management and job triggering
- Refresh token grant with rotation
- Session-based AMR/ACR tracking
**Pending:**
- OpenID Federation trust chain validation
- User account management UI
- Key rotation and multi-key JWKS
## Admin GraphQL API
The admin API is served on a separate port (default: 9091) and provides GraphQL queries and mutations for management:
**Mutations:**
```graphql
mutation {
# Trigger background jobs manually
triggerJob(jobName: "cleanup_expired_sessions") {
success
message
}
# Enable 2FA requirement for a user
setUser2faRequired(username: "alice", required: true) {
success
message
requires2fa
}
}
```
**Queries:**
```graphql
query {
# Get job execution history
jobLogs(limit: 10, onlyFailures: false) {
id
jobName
startedAt
completedAt
success
recordsProcessed
}
# Get user 2FA status
user2faStatus(username: "alice") {
username
requires2fa
passkeyEnrolled
passkeyCount
passkeyEnrolledAt
}
# List available jobs
availableJobs {
name
description
schedule
}
}
```
Available job names:
- `cleanup_expired_sessions` (hourly at :00)
- `cleanup_expired_refresh_tokens` (hourly at :30)
- `cleanup_expired_challenges` (every 5 minutes)
## Building the WASM Client
The passkey authentication client is written in Rust and compiled to WebAssembly:
```bash
# Install wasm-pack if not already installed
cargo install wasm-pack
# Build the WASM module
cd client-wasm
wasm-pack build --target web --out-dir ../static/wasm
# The built files will be in static/wasm/:
# - barycenter_webauthn_client_bg.wasm
# - barycenter_webauthn_client.js
# - TypeScript definitions (.d.ts files)
```
The WASM module is automatically loaded by the login page and provides:
- `supports_webauthn()`: Check if WebAuthn is available
- `supports_conditional_ui()`: Check for autofill support
- `register_passkey(options)`: Create a new passkey
- `authenticate_passkey(options, mediation)`: Authenticate with passkey
## Testing and Validation
**1. Test Password Login:**
```bash
# Navigate to http://localhost:9090/login
# Enter username: admin, password: password123
# Should create session and redirect
```
**2. Test Passkey Registration:**
```javascript
// After logging in with password, navigate to
// http://localhost:9090/account/passkeys (future UI - currently use the
// browser JavaScript console) and call:
fetch('/webauthn/register/start', { method: 'POST' })
  .then(r => r.json())
  .then(data => {
    // Use the browser's navigator.credentials.create() with the returned options
  });
```
**3. Test Passkey Authentication:**
- Navigate to `/login`
- Click on username field
- Browser should show passkey autofill (Chrome 108+, Safari 16+)
- Select a passkey to authenticate
**4. Test Admin-Enforced 2FA:**
```graphql
# Via admin API (port 9091)
mutation {
setUser2faRequired(username: "admin", required: true) {
success
}
}
```
Then:
1. Log out
2. Log in with password
3. Should redirect to `/login/2fa`
4. Complete passkey verification
5. Should complete authorization with ACR="aal2"
**5. Test Context-Based 2FA:**
```bash
# Request authorization with max_age < 300
curl "http://localhost:9090/authorize?...&max_age=60"
# Should trigger 2FA even if not admin-enforced
```
### OIDC Flow Testing
```bash
# 1. Register a client
curl -X POST http://localhost:9090/connect/register \
-H "Content-Type: application/json" \
-d '{
"redirect_uris": ["http://localhost:8080/callback"],
"client_name": "Test Client"
}'
# 2. Generate PKCE
verifier=$(openssl rand -base64 32 | tr -d '=' | tr '+/' '-_')
challenge=$(echo -n "$verifier" | openssl dgst -binary -sha256 | base64 | tr -d '=' | tr '+/' '-_')
# 3. Navigate to authorize endpoint (in browser)
# http://localhost:9090/authorize?client_id=CLIENT_ID&redirect_uri=http://localhost:8080/callback&response_type=code&scope=openid&code_challenge=$challenge&code_challenge_method=S256&state=random
# 4. After redirect, exchange code for tokens
curl -X POST http://localhost:9090/token \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "grant_type=authorization_code&code=CODE&redirect_uri=http://localhost:8080/callback&client_id=CLIENT_ID&client_secret=SECRET&code_verifier=$verifier"
# 5. Decode ID token to verify AMR/ACR claims
# Use jwt.io or similar to inspect the token
```
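As an alternative to pasting the token into an external tool, the payload can be decoded locally. A minimal sketch (inspection only -- it does not verify the signature):

```shell
# Print the claims JSON of a compact JWT (header.payload.signature)
# without verifying the signature. For debugging only.
decode_jwt_payload() {
  payload=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  # base64url strips '=' padding; restore it before decoding
  case $(( ${#payload} % 4 )) in
    2) payload="${payload}==" ;;
    3) payload="${payload}=" ;;
  esac
  printf '%s' "$payload" | base64 -d
}

# Example (hypothetical variable holding the id_token from step 4):
# decode_jwt_payload "$id_token"
```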
### Expected ID Token Claims
After passkey authentication:
```json
{
"iss": "http://localhost:9090",
"sub": "user_subject_uuid",
"aud": "client_id",
"exp": 1234567890,
"iat": 1234564290,
"auth_time": 1234564290,
"amr": ["hwk"], // or ["swk"] for cloud-synced, ["pwd", "hwk"] for 2FA
"acr": "aal1", // or "aal2" for 2FA
"nonce": "optional_nonce"
}
```
## Migration Guide for Existing Deployments
If you have an existing Barycenter deployment, the database will be automatically migrated when you update:
1. **Backup your database** before upgrading
2. Run the application - migrations run automatically on startup
3. New tables will be created:
- `passkeys`: Stores registered passkeys
- `webauthn_challenges`: Temporary challenge storage
4. Existing tables will be extended:
- `sessions`: Added `amr`, `acr`, `mfa_verified` columns
- `users`: Added `requires_2fa`, `passkey_enrolled_at` columns
**Post-Migration Steps:**
1. Build the WASM client:
```bash
cd client-wasm
wasm-pack build --target web --out-dir ../static/wasm
```
2. Restart the application to serve static files
3. Users can now register passkeys via `/account/passkeys` (future UI)
4. Enable 2FA for specific users via admin API:
```graphql
mutation {
setUser2faRequired(username: "admin", required: true) {
success
}
}
```
**No Breaking Changes:**
- Password authentication continues to work
- Existing sessions remain valid
- ID tokens now include AMR/ACR claims (additive change)
- OIDC clients receiving new claims should handle gracefully

Cargo.lock (generated; diff too large to display)

Cargo.toml:
[workspace]
members = [".", "migration", "client-wasm"]
[package]
name = "barycenter"
version = "0.2.0-beta.1"
edition = "2021"
license = "MIT OR Apache-2.0"
description = "OpenID Connect IdP with federation, property storage, and auto-registration: the center of gravity between multiple objects."
documentation = "https://github.com/CloudNebulaProject/barycenter/blob/main/README.md"
keywords = ["openid", "oauth2", "identity", "authentication", "oidc"]
categories = ["authentication", "web-programming"]
[lib]
name = "barycenter"
path = "src/lib.rs"
[dependencies]
axum = { version = "0.8", features = ["json", "form"] }
tokio = { version = "1", features = ["full"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
serde_with = "3"
# SeaORM for SQLite and PostgreSQL
sea-orm = { version = "1", default-features = false, features = ["sqlx-sqlite", "sqlx-postgres", "runtime-tokio-rustls", "macros"] }
sea-orm-migration = { version = "1", features = ["sqlx-sqlite", "sqlx-postgres", "runtime-tokio-rustls"] }
migration = { path = "migration" }
# JOSE / JWKS & JWT
josekit = "0.10"
# WebAuthn / Passkeys
webauthn-rs = { version = "0.5", features = ["danger-allow-state-serialisation"] }
uuid = { version = "1", features = ["v4", "serde"] }
chrono = { version = "0.4", features = ["serde", "clock"] }
time = "0.3"
rand = "0.8"
argon2 = "0.5"
# Rate limiting
tower = "0.5"
tower_governor = "0.4"
tower-http = { version = "0.6", features = ["fs"] }
# Validation
regex = "1"
url = "2"
urlencoding = "2"
# GraphQL Admin API
seaography = { version = "1", features = ["with-decimal", "with-chrono", "with-uuid"] }
async-graphql = "7"
async-graphql-axum = "7"
# Background job scheduler
tokio-cron-scheduler = "0.13"
bincode = "2.0.1"
# Policy / authorization engine
kdl = "6"
[dev-dependencies]
# Existing OIDC/OAuth testing
openidconnect = { version = "4", features = ["reqwest-blocking"] }
oauth2 = "5"
reqwest = { version = "0.12", features = ["blocking", "json", "cookies"] }
urlencoding = "2"
# New test utilities
tempfile = "3" # Temp SQLite databases for test isolation
tokio-test = "0.4" # Async test utilities
assert_matches = "1" # Pattern matching assertions
pretty_assertions = "1" # Better assertion output with color diffs
test-log = "0.2" # Capture tracing logs in tests
serde_cbor = "0.11" # CBOR encoding for WebAuthn mocks
[profile.release]
debug = 1

Dockerfile:
# Multi-stage build for Barycenter OpenID Connect IdP
# Build stage
FROM rust:1.92-bookworm AS builder
WORKDIR /build
COPY Cargo.toml Cargo.lock ./
# Copy source code
COPY src ./src
COPY migration ./migration
COPY client-wasm ./client-wasm
# Build release binary with platform-specific caches to avoid race conditions
ARG TARGETPLATFORM
RUN --mount=type=cache,target=/usr/local/cargo/registry,id=cargo-registry-${TARGETPLATFORM} \
    --mount=type=cache,target=/build/target,id=build-target-${TARGETPLATFORM} \
cargo build --release && \
cp target/release/barycenter /barycenter
RUN apt-get update && \
# Create non-root user
RUN useradd -r -u 1000 -s /bin/false barycenter && \
mkdir -p /app/data /app/config /app/policies && \
chown -R barycenter:barycenter /app
WORKDIR /app
RUN chown -R barycenter:barycenter /app
# Switch to non-root user
USER barycenter
# Expose default ports (OIDC, admin GraphQL, authz API)
EXPOSE 8080 8081 8082
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \

This is an early-stage implementation. See `docs/next-iteration-plan.md` for planned work.
**Currently Implemented:**
- Authorization Code flow with PKCE (S256)
- Dynamic client registration
- Token issuance with RS256 ID Token signing (at_hash, nonce, auth_time, AMR, ACR)
- UserInfo endpoint
- Token endpoint with client_secret_basic and client_secret_post
- User authentication with sessions (password + passkey/WebAuthn)
- Two-factor authentication (admin-enforced, context-based)
- Consent flow with database persistence
- Refresh token grant with rotation
- Token revocation (RFC 7009) and introspection (RFC 7662)
- Device Authorization Grant (RFC 8628)
- Admin GraphQL API
**Pending Implementation:**
- OpenID Federation trust chain validation
- User account management UI
- Key rotation and multi-key JWKS
## Deployment

book/book.toml:
[book]
title = "Barycenter Documentation"
authors = ["CloudNebula Project"]
description = "Comprehensive documentation for the Barycenter OpenID Connect Identity Provider"
language = "en"
src = "src"
[build]
build-dir = "build"
create-missing = false
[output.html]
default-theme = "light"
preferred-dark-theme = "navy"
smart-punctuation = true
additional-js = ["mermaid.min.js", "mermaid-init.js"]
git-repository-url = "https://github.com/CloudNebulaProject/barycenter"
edit-url-template = "https://github.com/CloudNebulaProject/barycenter/edit/main/book/{path}"
site-url = "/"
no-section-label = false
[output.html.search]
enable = true
limit-results = 30
teaser-word-count = 30
use-boolean-and = true
boost-title = 2
boost-hierarchy = 1
boost-paragraph = 1
expand = true
heading-split-level = 3
copy-js = true

book/mermaid-init.js:
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this
// file, You can obtain one at https://mozilla.org/MPL/2.0/.
(() => {
// Convert ```mermaid code blocks to <pre class="mermaid"> elements
const codeBlocks = document.querySelectorAll('code.language-mermaid');
codeBlocks.forEach((block) => {
const pre = block.parentElement;
const div = document.createElement('pre');
div.className = 'mermaid';
div.textContent = block.textContent;
pre.parentElement.replaceChild(div, pre);
});
if (codeBlocks.length === 0 && document.querySelectorAll('.mermaid').length === 0) {
return;
}
const darkThemes = ['ayu', 'navy', 'coal'];
const lightThemes = ['light', 'rust'];
const classList = document.getElementsByTagName('html')[0].classList;
let lastThemeWasLight = true;
for (const cssClass of classList) {
if (darkThemes.includes(cssClass)) {
lastThemeWasLight = false;
break;
}
}
const theme = lastThemeWasLight ? 'default' : 'dark';
mermaid.initialize({ startOnLoad: true, theme });
for (const darkTheme of darkThemes) {
const el = document.getElementById(darkTheme);
if (el) {
el.addEventListener('click', () => {
if (lastThemeWasLight) {
window.location.reload();
}
});
}
}
for (const lightTheme of lightThemes) {
const el = document.getElementById(lightTheme);
if (el) {
el.addEventListener('click', () => {
if (!lastThemeWasLight) {
window.location.reload();
}
});
}
}
})();

book/mermaid.min.js (vendored; diff suppressed)

book/src/README.md:
# Barycenter
Barycenter is a lightweight, Rust-based OpenID Connect Identity Provider (IdP) that implements the OAuth 2.0 Authorization Code flow with PKCE, WebAuthn/passkey authentication, device authorization grants, and a KDL-based authorization policy engine.
Built on top of [axum](https://github.com/tokio-rs/axum) and [SeaORM](https://www.sea-ql.org/SeaORM/), Barycenter is designed to be fast, self-contained, and straightforward to operate -- whether you are deploying it as a standalone identity provider or integrating it into a larger distributed system.
## Who This Book Is For
- **Operators** looking to deploy and configure Barycenter in development or production environments.
- **Application Developers** integrating their services with Barycenter as an OIDC provider.
- **Identity Engineers** evaluating Barycenter's authentication and authorization capabilities.
- **Contributors** who want to understand the internals and extend the project.
## How This Book Is Organized
| Section | Description |
|---------|-------------|
| [Getting Started](./getting-started/overview.md) | Project overview, installation, configuration, and a quickstart guide to get tokens flowing. |
| Authentication | Password login, WebAuthn/passkey authentication, two-factor enforcement, and session management. |
| OpenID Connect | Client registration, authorization code flow, token exchange, ID token claims, and discovery. |
| Authorization | KDL-based policy engine combining Relationship-Based Access Control (ReBAC) and Attribute-Based Access Control (ABAC). |
| Admin | GraphQL admin API for user management, background jobs, and operational tasks. |
| Deployment | Docker images, Kubernetes manifests, database choices, and production hardening. |
| Security | Security headers, PKCE enforcement, key management, and threat model considerations. |
| Development | Building from source, running tests, WASM client compilation, and contributing guidelines. |
| Reference | Endpoint reference, configuration keys, entity schemas, and error codes. |

book/src/SUMMARY.md:
[Introduction](README.md)
# Getting Started
- [Overview](getting-started/overview.md)
- [Architecture](getting-started/architecture.md)
- [Key Concepts](getting-started/key-concepts.md)
- [Installation](getting-started/installation.md)
- [Prerequisites](getting-started/prerequisites.md)
- [Building from Source](getting-started/building-from-source.md)
- [Docker](getting-started/docker.md)
- [Quick Start](getting-started/quickstart.md)
- [Configuration](getting-started/configuration.md)
- [Configuration File](getting-started/config-file.md)
- [Environment Variables](getting-started/env-variables.md)
- [Database Setup](getting-started/database-setup.md)
# Authentication
- [Password Authentication](authentication/password.md)
- [Passkey / WebAuthn](authentication/passkeys.md)
- [How Passkeys Work](authentication/passkeys-how.md)
- [Registering a Passkey](authentication/passkey-registration.md)
- [Authenticating with a Passkey](authentication/passkey-authentication.md)
- [Conditional UI / Autofill](authentication/conditional-ui.md)
- [Two-Factor Authentication](authentication/two-factor.md)
- [Admin-Enforced 2FA](authentication/2fa-admin-enforced.md)
- [Context-Based 2FA](authentication/2fa-context-based.md)
- [User-Optional 2FA](authentication/2fa-user-optional.md)
- [2FA Flow Walkthrough](authentication/2fa-flow.md)
- [Sessions](authentication/sessions.md)
- [AMR and ACR Claims](authentication/amr-acr.md)
- [Session Lifecycle](authentication/session-lifecycle.md)
- [Consent Flow](authentication/consent.md)
# OAuth 2.0 & OpenID Connect
- [Authorization Code Flow with PKCE](oidc/authorization-code-flow.md)
- [Dynamic Client Registration](oidc/client-registration.md)
- [Token Endpoint](oidc/token-endpoint.md)
- [Authorization Code Grant](oidc/grant-authorization-code.md)
- [Refresh Token Grant](oidc/grant-refresh-token.md)
- [Device Authorization Grant](oidc/grant-device-authorization.md)
- [Client Authentication Methods](oidc/client-authentication.md)
- [ID Token](oidc/id-token.md)
- [UserInfo Endpoint](oidc/userinfo.md)
- [Discovery and JWKS](oidc/discovery-jwks.md)
- [Token Revocation](oidc/token-revocation.md)
# Authorization Policy Engine
- [Overview](authz/overview.md)
- [KDL Policy Language](authz/kdl-policy-language.md)
- [Resources and Permissions](authz/resources-permissions.md)
- [Roles and Inheritance](authz/roles-inheritance.md)
- [Grants and Relationship Tuples](authz/grants-tuples.md)
- [ABAC Rules and Conditions](authz/abac-rules.md)
- [Authz REST API](authz/rest-api.md)
- [Configuration and Deployment](authz/configuration.md)
# Administration
- [Admin GraphQL API](admin/graphql-api.md)
- [Entity CRUD (Seaography)](admin/entity-crud.md)
- [Job Management](admin/job-management.md)
- [User 2FA Management](admin/user-2fa.md)
- [GraphQL Playground](admin/playground.md)
- [User Management](admin/user-management.md)
- [Creating Users](admin/creating-users.md)
- [User Sync from JSON](admin/user-sync.md)
- [Public Registration](admin/public-registration.md)
- [Passkey Management](admin/passkey-management.md)
- [Background Jobs](admin/background-jobs.md)
- [Available Jobs](admin/available-jobs.md)
- [Job Scheduling](admin/job-scheduling.md)
- [Monitoring Job Executions](admin/job-monitoring.md)
# Deployment
- [Docker](deployment/docker.md)
- [Docker Compose](deployment/docker-compose.md)
- [Kubernetes with Helm](deployment/kubernetes-helm.md)
- [Helm Chart Values](deployment/helm-values.md)
- [Ingress Configuration](deployment/helm-ingress.md)
- [Gateway API](deployment/gateway-api.md)
- [User Sync in Kubernetes](deployment/k8s-user-sync.md)
- [Authorization Policies in Kubernetes](deployment/k8s-authz-policies.md)
- [Linux systemd](deployment/systemd.md)
- [FreeBSD rc.d](deployment/freebsd.md)
- [illumos / Solaris SMF](deployment/illumos-smf.md)
- [Reverse Proxy and TLS](deployment/reverse-proxy-tls.md)
- [Production Checklist](deployment/production-checklist.md)
- [Backup and Recovery](deployment/backup-recovery.md)
# Security
- [Security Model](security/security-model.md)
- [PKCE Enforcement](security/pkce.md)
- [Security Headers](security/headers.md)
- [Session Security](security/session-security.md)
- [Rate Limiting](security/rate-limiting.md)
- [File Permissions and Hardening](security/hardening.md)
# Development
- [Building from Source](development/building.md)
- [Running Tests](development/testing.md)
- [Building the WASM Client](development/wasm-client.md)
- [Architecture Deep Dive](development/architecture.md)
- [Module Structure](development/module-structure.md)
- [Database Schema and Migrations](development/database-schema.md)
- [Error Handling](development/error-handling.md)
- [Contributing](development/contributing.md)
- [Release Process](development/release-process.md)
# Reference
- [API Endpoint Reference](reference/api-endpoints.md)
- [Configuration Reference](reference/configuration.md)
- [Database Schema Reference](reference/database-schema.md)
- [OIDC Conformance Status](reference/oidc-conformance.md)
- [Glossary](reference/glossary.md)

book/src/admin/available-jobs.md:
# Available Jobs
Barycenter includes four built-in background jobs that perform periodic cleanup of expired database records. Each job targets a specific table and removes rows that have passed their expiration time.
## Job Reference
| Job Name | Description | Cron Schedule | Frequency |
|---|---|---|---|
| `cleanup_expired_sessions` | Clean up expired user sessions | `0 0 * * * *` | Hourly at :00 |
| `cleanup_expired_refresh_tokens` | Clean up expired refresh tokens | `0 30 * * * *` | Hourly at :30 |
| `cleanup_expired_challenges` | Clean up expired WebAuthn challenges | `0 */5 * * * *` | Every 5 minutes |
| `cleanup_expired_device_codes` | Clean up expired device authorization codes | `0 45 * * * *` | Hourly at :45 |
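Note: these are six-field cron expressions as used by tokio-cron-scheduler, with a leading seconds field. Reading the schedules from the table above:

```
# sec  min  hour  dom  mon  dow
  0    0    *     *    *    *     hourly, at minute 0, second 0
  0    */5  *     *    *    *     every 5 minutes, at second 0
```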
## Job Details
### cleanup_expired_sessions
**Schedule**: Hourly at the top of the hour.
Deletes rows from the `sessions` table where the `expires_at` timestamp is in the past. User sessions have a configurable lifetime; once expired, they cannot be used for authentication and serve no further purpose.
Keeping expired sessions in the database does not affect correctness (expired sessions are rejected at authentication time), but removing them reduces table size and improves query performance for session lookups.
**Records processed**: The number of expired sessions deleted in each run.
### cleanup_expired_refresh_tokens
**Schedule**: Hourly at 30 minutes past the hour.
Deletes rows from the `refresh_tokens` table where the expiration timestamp has passed. Refresh tokens have a longer lifetime than access tokens but still expire eventually. Expired refresh tokens cannot be used to obtain new access tokens.
This job also removes refresh tokens that have been rotated and are no longer the current token in the rotation chain, provided they have passed their grace period.
**Records processed**: The number of expired refresh tokens deleted in each run.
### cleanup_expired_challenges
**Schedule**: Every 5 minutes.
Deletes rows from the `webauthn_challenges` table where the challenge is older than 5 minutes. WebAuthn challenges are ephemeral -- they are created at the start of a registration or authentication ceremony and must be consumed within a short window. Unclaimed challenges (e.g., from abandoned login attempts) accumulate and should be cleaned up frequently.
This job runs more frequently than the others because challenges have a very short TTL and can accumulate rapidly in high-traffic deployments.
**Records processed**: The number of expired challenges deleted in each run.
### cleanup_expired_device_codes
**Schedule**: Hourly at 45 minutes past the hour.
Deletes rows from the device authorization codes table where the expiration timestamp has passed. Device authorization codes are issued during the [Device Authorization Grant](../oidc/grant-device-authorization.md) flow and have a limited lifetime for the user to complete the authorization on a secondary device. Codes that are not used within this window expire and should be removed.
**Records processed**: The number of expired device codes deleted in each run.
## Querying Available Jobs
You can retrieve this information programmatically via the admin API:
```graphql
{
availableJobs {
name
description
schedule
}
}
```
```bash
curl -s -X POST http://localhost:8081/admin/jobs \
-H "Content-Type: application/json" \
-d '{"query": "{ availableJobs { name description schedule } }"}' | jq .
```
## Triggering Jobs Manually
Any job can be triggered outside its normal schedule using the `triggerJob` mutation:
```graphql
mutation {
triggerJob(jobName: "cleanup_expired_sessions") {
success
message
jobName
}
}
```
See [Job Management](./job-management.md) for full details on triggering and monitoring.
## Further Reading
- [Job Scheduling](./job-scheduling.md) -- cron expression format and scheduler behavior
- [Monitoring Job Executions](./job-monitoring.md) -- querying execution history
- [Job Management](./job-management.md) -- admin API operations for jobs
- [Background Jobs](./background-jobs.md) -- overview of the job system

book/src/admin/background-jobs.md:
# Background Jobs
Barycenter runs a set of background jobs that perform periodic maintenance tasks such as cleaning up expired sessions, tokens, and challenges. These jobs start automatically when the server launches and run on a configurable schedule. The job system is built on [tokio-cron-scheduler](https://crates.io/crates/tokio-cron-scheduler) and integrates with the admin GraphQL API for on-demand triggering and monitoring.
## Overview
Background jobs handle housekeeping that would otherwise cause unbounded growth of expired records in the database. Without these jobs, the sessions, tokens, and challenges tables would accumulate stale rows over time, degrading query performance and consuming storage.
Each job:
- Runs on a cron schedule defined at compile time.
- Executes a database query that deletes records past their expiration time.
- Logs its execution result (success or failure, records processed) to the `job_execution` table.
- Can be triggered on demand via the admin API.
## Available Jobs
Barycenter ships with four built-in background jobs:
| Job | Schedule | Description |
|---|---|---|
| `cleanup_expired_sessions` | Hourly at :00 | Deletes user sessions past their expiration time |
| `cleanup_expired_refresh_tokens` | Hourly at :30 | Deletes refresh tokens past their expiration time |
| `cleanup_expired_challenges` | Every 5 minutes | Deletes WebAuthn challenges older than 5 minutes |
| `cleanup_expired_device_codes` | Hourly at :45 | Deletes expired device authorization codes |
See [Available Jobs](./available-jobs.md) for detailed descriptions of each job.
## Key Concepts
### Automatic Startup
All jobs are registered with the scheduler during server initialization and begin running immediately. No manual action is required to start the job scheduler -- it is an integral part of the server lifecycle.
### Execution Tracking
Every job execution is recorded in the `job_execution` table with a start time, completion time, success status, error message (if applicable), and a count of records processed. This provides a complete audit trail of maintenance operations.
### On-Demand Triggering
While jobs run automatically on their schedules, administrators can trigger any job immediately through the `triggerJob` mutation at `POST /admin/jobs`. This is useful for:
- Forcing a cleanup after a known batch of expirations.
- Verifying that a job executes correctly after a deployment.
- Clearing expired records before a maintenance window.
### Monitoring
Job execution logs can be queried through the admin API, filtered by job name and failure status. This supports operational monitoring and alerting on job failures.
## Architecture
```
Server Startup
|
v
Register Jobs with tokio-cron-scheduler
|
+---> cleanup_expired_sessions (0 0 * * * *)
+---> cleanup_expired_refresh_tokens (0 30 * * * *)
+---> cleanup_expired_challenges (0 */5 * * * *)
+---> cleanup_expired_device_codes (0 45 * * * *)
|
v
Scheduler runs in background (tokio task)
|
+---> On each trigger: execute cleanup query
+---> Record result in job_execution table
```
## Further Reading
- [Available Jobs](./available-jobs.md) -- detailed descriptions of each background job
- [Job Scheduling](./job-scheduling.md) -- cron expressions and scheduler internals
- [Monitoring Job Executions](./job-monitoring.md) -- querying execution logs and detecting failures
- [Job Management](./job-management.md) -- admin API for triggering jobs and querying logs

# Creating Users
Barycenter provides several mechanisms for creating user accounts, ranging from the automatic default admin user to programmatic creation via the GraphQL API.
## Default Admin User
On first startup, Barycenter creates a default administrator account if no users exist in the database:
| Field | Value |
|---|---|
| Username | `admin` |
| Password | `password123` |
This account is intended for initial setup and testing. In production deployments, you should either:
- Change the admin password immediately after first login.
- Use [user sync](./user-sync.md) to provision accounts with strong passwords as part of your deployment process, replacing the default admin.
The default admin user is only created when the users table is empty. If you have provisioned users through any other method before first startup, the default admin is not created.
## Creating Users via the GraphQL API
The Seaography entity CRUD schema at `POST /admin/graphql` supports creating user records directly. This is useful for ad-hoc user creation by administrators.
### Example Mutation
```graphql
mutation {
user {
createOne(
data: {
username: "alice"
email: "alice@example.com"
passwordHash: "$argon2id$v=19$m=19456,t=2,p=1$..."
}
) {
id
username
email
}
}
}
```
### curl Example
```bash
curl -s -X POST http://localhost:8081/admin/graphql \
-H "Content-Type: application/json" \
-d '{
"query": "mutation { user { createOne(data: { username: \"alice\", email: \"alice@example.com\", passwordHash: \"$argon2id$v=19$m=19456,t=2,p=1$...\" }) { id username email } } }"
}' | jq .
```
### Password Hashing
The GraphQL API expects a pre-computed password hash in argon2id format. You must hash the password before sending it to the API. The hash can be generated using any argon2 library or command-line tool:
```bash
# Using the argon2 command-line utility
echo -n "user_password" | argon2 $(openssl rand -base64 16) -id -e
```
> **Note**: For most provisioning scenarios, the [user sync CLI](./user-sync.md) handles password hashing automatically from plaintext passwords in the JSON file, making it a more practical choice than direct GraphQL mutations.
## Creating Users via User Sync
The `sync-users` CLI subcommand reads a JSON file containing user definitions and creates or updates accounts idempotently. This is the recommended method for production deployments where the set of users is known ahead of time.
```bash
barycenter sync-users --file users.json
```
See [User Sync from JSON](./user-sync.md) for the full file format and usage details.
## Creating Users via Public Registration
When enabled, the public registration endpoint allows users to create their own accounts:
```bash
curl -X POST http://localhost:8080/register \
-H "Content-Type: application/json" \
-d '{
"username": "newuser",
"email": "newuser@example.com",
"password": "secure_password"
}'
```
See [Public Registration](./public-registration.md) for configuration details.
## Choosing the Right Method
| Method | Password Handling | Best For |
|---|---|---|
| Default admin | Pre-set (`password123`) | Initial setup and development |
| GraphQL API | Pre-hashed (argon2id) required | Ad-hoc creation by administrators |
| User sync CLI | Plaintext in JSON, hashed automatically | Declarative production provisioning |
| Public registration | Plaintext in request, hashed automatically | Self-service account creation |
## Further Reading
- [User Sync from JSON](./user-sync.md) -- bulk provisioning with automatic password hashing
- [Public Registration](./public-registration.md) -- self-service account creation
- [Entity CRUD (Seaography)](./entity-crud.md) -- full CRUD operations for all entities
- [User 2FA Management](./user-2fa.md) -- enabling 2FA after user creation

# Entity CRUD (Seaography)
Barycenter uses [Seaography](https://www.sea-ql.org/Seaography/) to auto-generate a GraphQL schema from its SeaORM database entities. This schema is served at `POST /admin/graphql` on the admin port and provides full create, read, update, and delete operations for all registered entities without any hand-written resolver code.
## Registered Entities
The following eight entities are registered in the Seaography schema:
| Entity | Description |
|---|---|
| `user` | User accounts with credentials, 2FA settings, and metadata |
| `client` | OAuth 2.0 client registrations with secrets and redirect URIs |
| `session` | Active user sessions with authentication context (AMR, ACR, MFA status) |
| `access_token` | Issued access tokens with subject, scope, and expiration |
| `auth_code` | Authorization codes with PKCE challenge, scope, and nonce |
| `refresh_token` | Refresh tokens with rotation tracking and expiration |
| `property` | Key-value property store (owner, key, value) |
| `job_execution` | Background job execution history and results |
Each entity exposes `findMany` and `findOne` queries, as well as `createOne`, `updateOne`, and `deleteOne` mutations, all auto-generated by Seaography.
## Querying Entities
### List Users
```graphql
{
user {
findMany {
nodes {
id
username
email
requires2fa
passkeyEnrolledAt
createdAt
}
paginationInfo {
pages
current
offset
total
}
}
}
}
```
### Find a Single User
```graphql
{
user {
findOne(filter: { username: { eq: "admin" } }) {
id
username
email
requires2fa
}
}
}
```
### List Active Sessions
```graphql
{
session {
findMany(
filter: { expiresAt: { gt: "2026-02-14T00:00:00Z" } }
) {
nodes {
id
subject
amr
acr
mfaVerified
createdAt
expiresAt
}
}
}
}
```
### List Registered Clients
```graphql
{
client {
findMany {
nodes {
clientId
clientName
redirectUris
createdAt
}
}
}
}
```
### View Job Execution History
```graphql
{
jobExecution {
findMany(
orderBy: { startedAt: DESC }
) {
nodes {
id
jobName
startedAt
completedAt
success
errorMessage
recordsProcessed
}
}
}
}
```
## Creating Entities
### Create a New User
```graphql
mutation {
user {
createOne(
data: {
username: "alice"
email: "alice@example.com"
passwordHash: "$argon2id$..."
}
) {
id
username
email
}
}
}
```
> **Note**: Password hashes must be pre-computed in argon2 format. For user provisioning, the [user sync CLI](./user-sync.md) or the [public registration endpoint](./public-registration.md) are typically more convenient than direct entity creation.
## Updating Entities
### Update a Client's Redirect URIs
```graphql
mutation {
client {
updateOne(
filter: { clientId: { eq: "my_client_id" } }
data: {
redirectUris: "https://app.example.com/callback https://app.example.com/callback2"
}
) {
clientId
redirectUris
}
}
}
```
## Deleting Entities
### Delete an Expired Access Token
```graphql
mutation {
accessToken {
deleteOne(
filter: { id: { eq: "token_id_here" } }
) {
id
}
}
}
```
## Using curl
All queries and mutations can be sent as JSON POST requests:
```bash
# List all users
curl -s -X POST http://localhost:8081/admin/graphql \
-H "Content-Type: application/json" \
-d '{
"query": "{ user { findMany { nodes { id username email requires2fa } } } }"
}' | jq .
# Find a specific client
curl -s -X POST http://localhost:8081/admin/graphql \
-H "Content-Type: application/json" \
-d '{
"query": "{ client { findOne(filter: { clientId: { eq: \"my_client_id\" } }) { clientId clientName redirectUris } } }"
}' | jq .
```
## Filtering and Pagination
Seaography auto-generates filter types for each entity field. Common filter operators include:
| Operator | Description | Example |
|---|---|---|
| `eq` | Equals | `{ username: { eq: "admin" } }` |
| `ne` | Not equals | `{ success: { ne: true } }` |
| `gt` | Greater than | `{ createdAt: { gt: "2026-01-01T00:00:00Z" } }` |
| `lt` | Less than | `{ expiresAt: { lt: "2026-02-14T00:00:00Z" } }` |
| `gte` | Greater than or equal | `{ recordsProcessed: { gte: 10 } }` |
| `lte` | Less than or equal | `{ recordsProcessed: { lte: 100 } }` |
| `contains` | String contains | `{ email: { contains: "@example.com" } }` |
Pagination is available on `findMany` queries using `offset` and `limit` parameters. The response includes a `paginationInfo` object with total count and page information.
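The exact pagination input is generated by Seaography and varies slightly between versions; the sketch below assumes the offset form used in recent Seaography releases, so check the playground schema for your build:

```graphql
{
  user {
    findMany(pagination: { offset: { offset: 20, limit: 10 } }) {
      nodes {
        id
        username
      }
    }
  }
}
```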
## Further Reading
- [Admin GraphQL API](./graphql-api.md) -- overview of the admin API architecture
- [GraphQL Playground](./playground.md) -- interactive schema exploration
- [Job Management](./job-management.md) -- custom job operations at `/admin/jobs`

# Admin GraphQL API
Barycenter exposes an administration API on a dedicated port, separate from the public-facing OIDC endpoints. By default, the admin API listens on the main server port plus one (e.g., if the OIDC server runs on port 8080, the admin API runs on port 8081). This separation allows operators to restrict admin access at the network level using firewalls, Kubernetes NetworkPolicies, or reverse proxy rules without affecting public OIDC traffic.
## Two GraphQL Schemas
The admin API serves two independent GraphQL schemas, each at its own endpoint:
| Endpoint | Playground | Purpose |
|---|---|---|
| `POST /admin/graphql` | `GET /admin/playground` | Entity CRUD operations via [Seaography](./entity-crud.md) |
| `POST /admin/jobs` | `GET /admin/jobs/playground` | [Job management](./job-management.md) and [user 2FA management](./user-2fa.md) |
### Entity CRUD Schema (`/admin/graphql`)
This schema is auto-generated by Seaography from Barycenter's SeaORM entities. It provides full create, read, update, and delete operations for all registered database entities. Use this schema for tasks such as listing users, inspecting sessions, viewing registered clients, or manually managing tokens.
See [Entity CRUD (Seaography)](./entity-crud.md) for details and examples.
### Job and User Management Schema (`/admin/jobs`)
This schema provides custom queries and mutations for operational tasks that go beyond simple CRUD:
- **Trigger background jobs** on demand (e.g., force a cleanup cycle without waiting for the next scheduled run).
- **Query job execution logs** with filtering by job name and failure status.
- **List available jobs** and their schedules.
- **Manage user 2FA settings** (enable/disable 2FA requirements, query enrollment status).
See [Job Management](./job-management.md) and [User 2FA Management](./user-2fa.md) for details.
## Accessing the Admin API
The admin API is intended for operators and automation tooling, not end users. Access it using any GraphQL client, `curl`, or the built-in GraphQL playgrounds.
### Using curl
```bash
# Entity CRUD: list all users
curl -s -X POST http://localhost:8081/admin/graphql \
-H "Content-Type: application/json" \
-d '{"query": "{ user { findMany { nodes { id username email } } } }"}'
# Job management: list available jobs
curl -s -X POST http://localhost:8081/admin/jobs \
-H "Content-Type: application/json" \
-d '{"query": "{ availableJobs { name description schedule } }"}'
```
### Using the Playground
Open the appropriate playground URL in a browser for interactive query building:
- **Entity CRUD playground**: `http://localhost:8081/admin/playground`
- **Job management playground**: `http://localhost:8081/admin/jobs/playground`
See [GraphQL Playground](./playground.md) for more on using the playground interface.
## Configuration
The admin API port is derived from the main server configuration. If you set the main port via the configuration file or environment variable, the admin port adjusts accordingly:
```toml
[server]
port = 9090
# Admin API will be available on port 9091
```
Or via environment variable:
```bash
export BARYCENTER__SERVER__PORT=9090
# Admin API will be available on port 9091
```
## Security Considerations
The admin API provides unrestricted access to all database entities and operational controls. In production deployments:
- **Do not expose the admin port to the public internet.** Bind it to a loopback or internal network interface, or use firewall rules to restrict access.
- **Use a reverse proxy** with authentication if remote admin access is required.
- **In Kubernetes**, use a separate `Service` for the admin port and restrict access with `NetworkPolicy` resources.
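For the Kubernetes case, a minimal `NetworkPolicy` sketch (the policy name, pod labels, and port numbers below are illustrative, not Barycenter defaults):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: barycenter-admin-restrict
spec:
  podSelector:
    matchLabels:
      app: barycenter        # assumed pod label
  policyTypes:
    - Ingress
  ingress:
    # Public OIDC port: reachable from any source
    - ports:
        - port: 8080
    # Admin port: only reachable from pods labeled for operations tooling
    - from:
        - podSelector:
            matchLabels:
              role: ops
      ports:
        - port: 8081
```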
## Further Reading
- [Entity CRUD (Seaography)](./entity-crud.md) -- auto-generated CRUD operations for all entities
- [Job Management](./job-management.md) -- triggering jobs and querying execution logs
- [User 2FA Management](./user-2fa.md) -- managing two-factor authentication requirements
- [GraphQL Playground](./playground.md) -- interactive query building in the browser

# Job Management
The job management schema at `POST /admin/jobs` provides queries and mutations for controlling Barycenter's background job system. You can trigger jobs on demand, list available jobs and their schedules, and query execution history with filtering.
## Triggering a Job
Use the `triggerJob` mutation to run a background job immediately, without waiting for its next scheduled execution.
### Mutation
```graphql
mutation {
triggerJob(jobName: "cleanup_expired_sessions") {
success
message
jobName
}
}
```
### Response
```json
{
"data": {
"triggerJob": {
"success": true,
"message": "Job cleanup_expired_sessions triggered successfully",
"jobName": "cleanup_expired_sessions"
}
}
}
```
If the job name does not match any registered job, the mutation reports a failure in the response payload:
```json
{
"data": {
"triggerJob": {
"success": false,
"message": "Unknown job: nonexistent_job",
"jobName": "nonexistent_job"
}
}
}
```
### curl Example
```bash
curl -s -X POST http://localhost:8081/admin/jobs \
-H "Content-Type: application/json" \
-d '{
"query": "mutation { triggerJob(jobName: \"cleanup_expired_sessions\") { success message jobName } }"
}' | jq .
```
## Listing Available Jobs
The `availableJobs` query returns all registered background jobs with their descriptions and cron schedules.
### Query
```graphql
{
availableJobs {
name
description
schedule
}
}
```
### Response
```json
{
"data": {
"availableJobs": [
{
"name": "cleanup_expired_sessions",
"description": "Clean up expired user sessions",
"schedule": "0 0 * * * *"
},
{
"name": "cleanup_expired_refresh_tokens",
"description": "Clean up expired refresh tokens",
"schedule": "0 30 * * * *"
},
{
"name": "cleanup_expired_challenges",
"description": "Clean up expired WebAuthn challenges",
"schedule": "0 */5 * * * *"
},
{
"name": "cleanup_expired_device_codes",
"description": "Clean up expired device authorization codes",
"schedule": "0 45 * * * *"
}
]
}
}
```
### curl Example
```bash
curl -s -X POST http://localhost:8081/admin/jobs \
-H "Content-Type: application/json" \
-d '{"query": "{ availableJobs { name description schedule } }"}' | jq .
```
## Querying Job Execution Logs
The `jobLogs` query retrieves execution history for background jobs. Results are ordered by start time, most recent first.
### Query
```graphql
{
jobLogs(limit: 10) {
id
jobName
startedAt
completedAt
success
errorMessage
recordsProcessed
}
}
```
### Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `jobName` | `String` | all jobs | Filter logs to a specific job by name. |
| `limit` | `Int` | `100` | Maximum number of log entries to return. |
| `onlyFailures` | `Boolean` | `false` | When `true`, return only failed executions. |
### Filter by Job Name
```graphql
{
jobLogs(jobName: "cleanup_expired_sessions", limit: 5) {
id
jobName
startedAt
completedAt
success
recordsProcessed
}
}
```
### Filter for Failures Only
```graphql
{
jobLogs(onlyFailures: true, limit: 20) {
id
jobName
startedAt
completedAt
success
errorMessage
}
}
```
### Response Fields
| Field | Type | Description |
|---|---|---|
| `id` | `ID` | Unique identifier for this execution record. |
| `jobName` | `String` | Name of the job that was executed. |
| `startedAt` | `String` | ISO 8601 timestamp when execution began. |
| `completedAt` | `String` | ISO 8601 timestamp when execution finished. May be `null` if still running. |
| `success` | `Boolean` | Whether the execution completed successfully. |
| `errorMessage` | `String` | Error details if the execution failed. `null` on success. |
| `recordsProcessed` | `Int` | Number of records affected (e.g., expired sessions deleted). |
### curl Example
```bash
# Get the last 10 job executions
curl -s -X POST http://localhost:8081/admin/jobs \
-H "Content-Type: application/json" \
-d '{
"query": "{ jobLogs(limit: 10) { id jobName startedAt completedAt success errorMessage recordsProcessed } }"
}' | jq .
# Get only failures for a specific job
curl -s -X POST http://localhost:8081/admin/jobs \
-H "Content-Type: application/json" \
-d '{
"query": "{ jobLogs(jobName: \"cleanup_expired_sessions\", onlyFailures: true) { id startedAt errorMessage } }"
}' | jq .
```
## Combining Queries
GraphQL allows multiple queries in a single request:
```graphql
{
availableJobs {
name
schedule
}
jobLogs(limit: 5, onlyFailures: true) {
jobName
startedAt
errorMessage
}
}
```
## Further Reading
- [Available Jobs](./available-jobs.md) -- detailed descriptions of each background job
- [Job Scheduling](./job-scheduling.md) -- how the cron scheduler works
- [Monitoring Job Executions](./job-monitoring.md) -- strategies for monitoring job health
- [User 2FA Management](./user-2fa.md) -- the 2FA operations served by the same schema at `/admin/jobs`

# Monitoring Job Executions
Every background job execution in Barycenter is recorded in the `job_execution` table, providing a complete history of when jobs ran, whether they succeeded, how many records they processed, and what errors occurred. The admin API exposes this data through the `jobLogs` query at `POST /admin/jobs`.
## Querying Job Logs
### Basic Query
Retrieve the most recent job executions:
```graphql
{
jobLogs(limit: 10) {
id
jobName
startedAt
completedAt
success
errorMessage
recordsProcessed
}
}
```
### Response
```json
{
"data": {
"jobLogs": [
{
"id": "42",
"jobName": "cleanup_expired_sessions",
"startedAt": "2026-02-14T12:00:00Z",
"completedAt": "2026-02-14T12:00:01Z",
"success": true,
"errorMessage": null,
"recordsProcessed": 15
},
{
"id": "41",
"jobName": "cleanup_expired_challenges",
"startedAt": "2026-02-14T11:55:00Z",
"completedAt": "2026-02-14T11:55:00Z",
"success": true,
"errorMessage": null,
"recordsProcessed": 3
}
]
}
}
```
## Response Fields
| Field | Type | Description |
|---|---|---|
| `id` | `ID` | Unique identifier for this execution record. |
| `jobName` | `String` | Name of the job that was executed. |
| `startedAt` | `String` | ISO 8601 UTC timestamp when execution began. |
| `completedAt` | `String` | ISO 8601 UTC timestamp when execution finished. `null` if the job is still running. |
| `success` | `Boolean` | Whether the execution completed without error. |
| `errorMessage` | `String` | Error details if the execution failed. `null` on success. |
| `recordsProcessed` | `Int` | Number of records affected by the job (e.g., expired sessions deleted). |
## Filtering
### By Job Name
Narrow results to a specific job:
```graphql
{
jobLogs(jobName: "cleanup_expired_sessions", limit: 20) {
id
startedAt
completedAt
success
recordsProcessed
}
}
```
### Failures Only
Show only executions that failed:
```graphql
{
jobLogs(onlyFailures: true) {
id
jobName
startedAt
errorMessage
}
}
```
### Combined Filters
Filter by both job name and failure status:
```graphql
{
jobLogs(jobName: "cleanup_expired_refresh_tokens", onlyFailures: true, limit: 10) {
id
startedAt
completedAt
errorMessage
}
}
```
## Query Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `jobName` | `String` | `null` (all jobs) | Filter to a specific job by name. |
| `limit` | `Int` | `100` | Maximum number of entries to return. |
| `onlyFailures` | `Boolean` | `false` | When `true`, return only failed executions. |
Results are ordered by `startedAt` descending (most recent first).
## curl Examples
```bash
# Get the 10 most recent job executions
curl -s -X POST http://localhost:8081/admin/jobs \
-H "Content-Type: application/json" \
-d '{
"query": "{ jobLogs(limit: 10) { id jobName startedAt completedAt success errorMessage recordsProcessed } }"
}' | jq .
# Get failures only
curl -s -X POST http://localhost:8081/admin/jobs \
-H "Content-Type: application/json" \
-d '{
"query": "{ jobLogs(onlyFailures: true) { id jobName startedAt errorMessage } }"
}' | jq .
# Get execution history for a specific job
curl -s -X POST http://localhost:8081/admin/jobs \
-H "Content-Type: application/json" \
-d '{
"query": "{ jobLogs(jobName: \"cleanup_expired_challenges\", limit: 5) { startedAt success recordsProcessed } }"
}' | jq .
```
## Monitoring Strategies
### Health Checks
Verify that jobs are running on schedule by checking the most recent execution time. If the most recent execution of a job is significantly older than its schedule interval, the scheduler may have stalled.
```graphql
{
sessions: jobLogs(jobName: "cleanup_expired_sessions", limit: 1) {
startedAt
success
}
challenges: jobLogs(jobName: "cleanup_expired_challenges", limit: 1) {
startedAt
success
}
}
```
For example, if `cleanup_expired_challenges` normally runs every 5 minutes but the most recent execution was 30 minutes ago, investigate the server health.
### Failure Alerting
Periodically query for recent failures and feed the results into your alerting system:
```bash
# Check whether any failed executions are recorded (the jobLogs query has no
# date filter; use the Seaography schema for time-windowed checks)
FAILURES=$(curl -s -X POST http://localhost:8081/admin/jobs \
-H "Content-Type: application/json" \
-d '{"query": "{ jobLogs(onlyFailures: true, limit: 1) { id } }"}' \
| jq '.data.jobLogs | length')
if [ "$FAILURES" -gt 0 ]; then
echo "ALERT: Background job failures detected"
fi
```
### Tracking Cleanup Volume
Monitor `recordsProcessed` to understand how many expired records are being cleaned up. A sudden increase may indicate:
- A spike in user activity generating more sessions and tokens.
- A configuration change that shortened token lifetimes.
- An issue causing tokens to not be cleaned up on time (backlog).
A consistently zero `recordsProcessed` for a job is normal -- it means no records had expired since the last run.
### Alternative: Seaography Entity Query
The `job_execution` table is also available through the Seaography entity CRUD schema at `POST /admin/graphql`, which provides more advanced filtering options:
```graphql
{
jobExecution {
findMany(
filter: {
success: { eq: false }
startedAt: { gt: "2026-02-14T00:00:00Z" }
}
orderBy: { startedAt: DESC }
) {
nodes {
id
jobName
startedAt
errorMessage
}
}
}
}
```
This approach is useful when you need date-range filtering or more complex query logic than the `jobLogs` query provides.
## Further Reading
- [Job Management](./job-management.md) -- triggering jobs and the full `jobLogs` query reference
- [Available Jobs](./available-jobs.md) -- what each job does and its schedule
- [Job Scheduling](./job-scheduling.md) -- how the cron scheduler operates
- [Entity CRUD (Seaography)](./entity-crud.md) -- querying job_execution via the entity schema

# Job Scheduling
Barycenter's background job scheduler is built on [tokio-cron-scheduler](https://crates.io/crates/tokio-cron-scheduler), a cron-based job scheduling library that runs within the Tokio async runtime. Jobs are defined at compile time, registered during server startup, and execute automatically according to their cron expressions.
## How the Scheduler Works
### Startup
During server initialization, after the database connection is established, Barycenter:
1. Creates a `JobScheduler` instance.
2. Registers each background job with its cron expression and execution function.
3. Starts the scheduler, which runs as a background Tokio task for the lifetime of the server process.
No configuration is needed to enable the scheduler -- it starts automatically as part of the normal server boot sequence.
### Execution
When a job's cron expression matches the current time, the scheduler spawns a Tokio task that:
1. Records the start time.
2. Executes the job's cleanup query against the database.
3. Counts the number of records affected.
4. Writes an execution record to the `job_execution` table with the result.
Jobs execute asynchronously and do not block the main server or each other. If a job takes longer than expected (e.g., due to a large number of expired records), other jobs and request handling continue unaffected.
### Graceful Shutdown
When the server receives a shutdown signal, the scheduler stops accepting new job executions. Any currently running jobs are allowed to complete before the process exits.
## Cron Expression Format
Barycenter uses six-field cron expressions (with a seconds field), as supported by tokio-cron-scheduler:
```
┌──────── second (0-59)
│ ┌────── minute (0-59)
│ │ ┌──── hour (0-23)
│ │ │ ┌── day of month (1-31)
│ │ │ │ ┌ month (1-12)
│ │ │ │ │ ┌ day of week (0-6, 0 = Sunday)
│ │ │ │ │ │
* * * * * *
```
### Current Job Schedules
| Job | Cron Expression | Meaning |
|---|---|---|
| `cleanup_expired_sessions` | `0 0 * * * *` | At second 0, minute 0 of every hour |
| `cleanup_expired_refresh_tokens` | `0 30 * * * *` | At second 0, minute 30 of every hour |
| `cleanup_expired_challenges` | `0 */5 * * * *` | At second 0, every 5th minute |
| `cleanup_expired_device_codes` | `0 45 * * * *` | At second 0, minute 45 of every hour |
### Cron Expression Examples
For reference, here are common cron patterns in the six-field format:
| Expression | Meaning |
|---|---|
| `0 0 * * * *` | Every hour at :00 |
| `0 30 * * * *` | Every hour at :30 |
| `0 */5 * * * *` | Every 5 minutes |
| `0 */15 * * * *` | Every 15 minutes |
| `0 0 */2 * * *` | Every 2 hours |
| `0 0 0 * * *` | Once daily at midnight |
| `0 0 3 * * *` | Once daily at 03:00 |
| `0 0 0 * * 1` | Every Monday at midnight |
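To sanity-check which minutes of the hour a given expression fires at, the minute field can be expanded mechanically (a simplified sketch that handles only `*`, `*/n`, and literal values, not ranges or lists):

```python
def minute_matches(field: str, minute: int) -> bool:
    """True if a cron minute field ('30', '*/5', or '*') matches the given minute."""
    if field == "*":
        return True
    if field.startswith("*/"):
        return minute % int(field[2:]) == 0
    return minute == int(field)

def firing_minutes(expr: str) -> list[int]:
    """Minutes within an hour at which a six-field expression fires,
    assuming the remaining fields are all '*'."""
    minute_field = expr.split()[1]  # field order: sec min hour dom mon dow
    return [m for m in range(60) if minute_matches(minute_field, m)]

print(firing_minutes("0 */5 * * * *"))
print(firing_minutes("0 30 * * * *"))   # [30]
```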
## Job Execution Tracking
Every time a job runs -- whether triggered by the cron schedule or manually via the admin API -- an execution record is written to the `job_execution` table:
| Column | Type | Description |
|---|---|---|
| `id` | `integer` | Auto-incrementing primary key |
| `job_name` | `string` | Name of the executed job |
| `started_at` | `timestamp` | When execution began |
| `completed_at` | `timestamp` | When execution finished (null if still running) |
| `success` | `boolean` | Whether the job completed without error |
| `error_message` | `string` | Error details if the job failed (null on success) |
| `records_processed` | `integer` | Number of database records affected |
This table serves as both an audit log and a monitoring data source. Query it through the admin API's `jobLogs` query or directly through the Seaography entity CRUD schema.
## Staggered Schedules
The three hourly jobs are deliberately staggered across different minutes of the hour so that they never run at the same time:
```
:00 cleanup_expired_sessions + cleanup_expired_challenges
:05 cleanup_expired_challenges
:10 cleanup_expired_challenges
:15 cleanup_expired_challenges
:20 cleanup_expired_challenges
:25 cleanup_expired_challenges
:30 cleanup_expired_refresh_tokens + cleanup_expired_challenges
:35 cleanup_expired_challenges
:40 cleanup_expired_challenges
:45 cleanup_expired_device_codes + cleanup_expired_challenges
:50 cleanup_expired_challenges
:55 cleanup_expired_challenges
```
The challenge cleanup runs every 5 minutes due to the short TTL of WebAuthn challenges, so it necessarily coincides with each hourly job once per hour. The hourly jobs themselves run at distinct offsets (:00, :30, :45), which limits database contention from concurrent cleanup operations.
## Timezone
The cron scheduler operates in UTC. All timestamps in the `job_execution` table are recorded in UTC.
## Further Reading
- [Available Jobs](./available-jobs.md) -- what each job does
- [Monitoring Job Executions](./job-monitoring.md) -- querying the execution log
- [Job Management](./job-management.md) -- triggering jobs and querying logs via GraphQL
- [Background Jobs](./background-jobs.md) -- overview of the job system

# Passkey Management
Authenticated users can manage their registered passkeys through the account API. These endpoints allow listing, renaming, and deleting passkeys. All operations require an active user session -- they are user-facing endpoints, not part of the admin GraphQL API.
## Endpoints
| Method | Path | Description |
|---|---|---|
| `GET` | `/account/passkeys` | List all passkeys for the current user |
| `DELETE` | `/account/passkeys/:credential_id` | Delete a specific passkey |
| `PATCH` | `/account/passkeys/:credential_id` | Update a passkey's friendly name |
All endpoints require an active session. Unauthenticated requests receive a `401 Unauthorized` response.
## List Passkeys
```
GET /account/passkeys
```
Returns all passkeys registered for the currently authenticated user.
### Response
```http
HTTP/1.1 200 OK
Content-Type: application/json
```
```json
[
{
"credential_id": "dGhpcyBpcyBhIGNyZWRlbnRpYWwgaWQ",
"name": "YubiKey 5C",
"backup_eligible": false,
"backup_state": false,
"created_at": "2026-01-15T10:30:00Z",
"last_used_at": "2026-02-14T08:15:00Z"
},
{
"credential_id": "YW5vdGhlciBjcmVkZW50aWFsIGlk",
"name": "iCloud Keychain",
"backup_eligible": true,
"backup_state": true,
"created_at": "2026-01-20T14:00:00Z",
"last_used_at": "2026-02-13T19:45:00Z"
}
]
```
### Response Fields
| Field | Type | Description |
|---|---|---|
| `credential_id` | `string` | Base64url-encoded WebAuthn credential identifier. Used in DELETE and PATCH paths. |
| `name` | `string` | User-assigned friendly name for the passkey. |
| `backup_eligible` | `boolean` | Whether the passkey can be synced across devices. `false` for hardware-bound keys. |
| `backup_state` | `boolean` | Whether the passkey is currently backed up to a cloud provider. |
| `created_at` | `string` | ISO 8601 timestamp of when the passkey was registered. |
| `last_used_at` | `string` | ISO 8601 timestamp of the most recent authentication with this passkey. |
### curl Example
```bash
# List passkeys (requires a session cookie)
curl -s -b cookies.txt http://localhost:8080/account/passkeys | jq .
```
## Delete a Passkey
```
DELETE /account/passkeys/:credential_id
```
Permanently removes a passkey. The credential ID must be URL-encoded if it contains special characters.
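Base64url IDs like the ones in the list response contain only URL-safe characters, but an ID from another source can be percent-encoded defensively (a sketch using Python's standard library):

```python
from urllib.parse import quote

# ID taken from the list-passkeys example; already URL-safe, so quoting is a no-op
credential_id = "dGhpcyBpcyBhIGNyZWRlbnRpYWwgaWQ"
path = "/account/passkeys/" + quote(credential_id, safe="")
print(path)  # /account/passkeys/dGhpcyBpcyBhIGNyZWRlbnRpYWwgaWQ

# An ID containing standard-base64 characters does need encoding
print(quote("abc+def/g==", safe=""))  # abc%2Bdef%2Fg%3D%3D
```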
### Response
```http
HTTP/1.1 204 No Content
```
### Error Responses
| Status | Condition |
|---|---|
| `401 Unauthorized` | No active session. |
| `404 Not Found` | Credential ID does not exist or does not belong to the current user. |
### curl Example
```bash
# Delete a passkey
curl -s -X DELETE -b cookies.txt \
http://localhost:8080/account/passkeys/dGhpcyBpcyBhIGNyZWRlbnRpYWwgaWQ
```
> **Warning**: Deleting a user's last passkey removes their ability to use passkey authentication. If the user has admin-enforced 2FA enabled, they will need to re-enroll a passkey on their next login.
## Update a Passkey Name
```
PATCH /account/passkeys/:credential_id
Content-Type: application/json
```
Updates the friendly name associated with a passkey. This helps users distinguish between multiple registered passkeys (e.g., "Work YubiKey" vs "Phone").
### Request Body
```json
{
"name": "Work YubiKey 5C NFC"
}
```
### Response
```http
HTTP/1.1 200 OK
Content-Type: application/json
```
```json
{
"credential_id": "dGhpcyBpcyBhIGNyZWRlbnRpYWwgaWQ",
"name": "Work YubiKey 5C NFC"
}
```
### Error Responses
| Status | Condition |
|---|---|
| `400 Bad Request` | Missing or empty `name` field. |
| `401 Unauthorized` | No active session. |
| `404 Not Found` | Credential ID does not exist or does not belong to the current user. |
### curl Example
```bash
# Rename a passkey
curl -s -X PATCH -b cookies.txt \
http://localhost:8080/account/passkeys/dGhpcyBpcyBhIGNyZWRlbnRpYWwgaWQ \
-H "Content-Type: application/json" \
-d '{"name": "Work YubiKey 5C NFC"}' | jq .
```
## Admin Perspective
While passkey management endpoints are user-facing, administrators can view and manage passkeys through the admin GraphQL API:
- **View passkeys**: Query the Seaography entity schema to list all passkeys across all users.
- **Check enrollment status**: Use the `user2faStatus` query to check whether a specific user has passkeys enrolled. See [User 2FA Management](./user-2fa.md).
- **Enforce 2FA**: Use the `setUser2faRequired` mutation to require passkey-based 2FA for specific users.
### Admin Query Example
```graphql
# At POST /admin/graphql (Entity CRUD)
{
  user {
    findOne(filter: { username: { eq: "alice" } }) {
      username
      passkeyEnrolledAt
    }
  }
}
```
```graphql
# At POST /admin/jobs (Job Management)
{
  user2faStatus(username: "alice") {
    passkeyEnrolled
    passkeyCount
    passkeyEnrolledAt
  }
}
```
## Further Reading
- [Passkey / WebAuthn](../authentication/passkeys.md) -- overview of passkey authentication
- [Registering a Passkey](../authentication/passkey-registration.md) -- the registration ceremony
- [User 2FA Management](./user-2fa.md) -- admin-side 2FA enforcement

# GraphQL Playground
Barycenter ships with built-in GraphiQL playgrounds for both admin GraphQL schemas. These browser-based interfaces allow you to explore the schema, compose queries and mutations, view documentation, and test operations interactively -- without installing any external tooling.
## Playground URLs
| Playground | URL | Schema |
|---|---|---|
| Entity CRUD | `GET /admin/playground` | Seaography auto-generated CRUD for all entities |
| Job and User Management | `GET /admin/jobs/playground` | Custom queries and mutations for jobs and 2FA |
Both playgrounds are served on the admin port. If your admin API runs on port 8081:
- Entity CRUD playground: `http://localhost:8081/admin/playground`
- Job management playground: `http://localhost:8081/admin/jobs/playground`
## Using the Playground
### Opening the Playground
Navigate to the playground URL in any modern browser. The GraphiQL interface loads with three main panels:
1. **Query editor** (left) -- write your GraphQL queries and mutations here.
2. **Result panel** (right) -- displays the JSON response after executing a query.
3. **Documentation explorer** (accessible via the "Docs" button) -- browse the full schema, including all available types, queries, mutations, and their arguments.
### Exploring the Schema
Click the **Docs** button (or the book icon) in the upper-left area to open the documentation explorer. From here you can:
- Browse all available **queries** and **mutations**.
- Inspect **input types** and **filter types** to understand what arguments each operation accepts.
- View **return types** and their fields.
- Navigate the type hierarchy by clicking on type names.
This is particularly useful for the Seaography entity schema, where filter types and pagination parameters are auto-generated and may not be obvious without schema exploration.
### Writing and Executing Queries
Type your query in the left panel and press the play button (or use `Ctrl+Enter` / `Cmd+Enter`) to execute it.
**Example in the Entity CRUD playground:**
```graphql
{
  user {
    findMany {
      nodes {
        id
        username
        email
        requires2fa
      }
      paginationInfo {
        total
      }
    }
  }
}
```
**Example in the Job Management playground:**
```graphql
{
  availableJobs {
    name
    description
    schedule
  }
}
```
### Using Variables
The playground supports GraphQL variables. Click the **Variables** panel at the bottom of the query editor to define variables as JSON:
**Query:**
```graphql
query GetUser2FA($name: String!) {
  user2faStatus(username: $name) {
    username
    requires2fa
    passkeyEnrolled
    passkeyCount
  }
}
```
**Variables:**
```json
{
  "name": "alice"
}
```
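When sending the same operation with curl instead of the playground, building the request body with `jq` keeps the quoting manageable. A sketch, assuming `jq` is installed:

```shell
QUERY='query GetUser2FA($name: String!) { user2faStatus(username: $name) { username requires2fa } }'
# Assemble {"query": ..., "variables": {...}} without manual escaping
PAYLOAD=$(jq -cn --arg q "$QUERY" --arg name "alice" \
  '{query: $q, variables: {name: $name}}')
echo "$PAYLOAD"
```

The resulting document can then be POSTed with `curl -d "$PAYLOAD"` to the appropriate admin endpoint.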
### Request Headers
If you need to pass custom headers (for example, for authentication in a future release), use the **Headers** panel at the bottom of the query editor:
```json
{
  "Authorization": "Bearer your-admin-token"
}
```
## Choosing the Right Playground
The two playgrounds serve different purposes. Use this table to determine which one you need:
| Task | Playground |
|---|---|
| List, create, update, or delete users | Entity CRUD (`/admin/playground`) |
| List, create, update, or delete clients | Entity CRUD (`/admin/playground`) |
| Inspect sessions, tokens, or auth codes | Entity CRUD (`/admin/playground`) |
| View job execution history via entity query | Entity CRUD (`/admin/playground`) |
| Trigger a background job on demand | Job Management (`/admin/jobs/playground`) |
| Query job logs with filtering | Job Management (`/admin/jobs/playground`) |
| List available jobs and schedules | Job Management (`/admin/jobs/playground`) |
| Enable or disable 2FA for a user | Job Management (`/admin/jobs/playground`) |
| Check user 2FA and passkey enrollment status | Job Management (`/admin/jobs/playground`) |
## Browser Compatibility
The GraphiQL playground works in all modern browsers including Chrome, Firefox, Safari, and Edge. No browser extensions or plugins are required.
## Production Usage
In production environments where the admin port is not directly accessible from a developer workstation, you have several options:
- **Port forwarding**: Use SSH tunneling or `kubectl port-forward` to access the admin port locally.
- **curl**: Use `curl` or any HTTP client to send GraphQL requests directly. See [Job Management](./job-management.md) and [Entity CRUD](./entity-crud.md) for curl examples.
- **GraphQL clients**: Tools like Insomnia, Postman, or Altair GraphQL Client can connect to the admin endpoint.
```bash
# SSH tunnel to a remote server
ssh -L 8081:localhost:8081 user@server

# Kubernetes port-forward
kubectl port-forward svc/barycenter-admin 8081:8081
```
## Further Reading
- [Entity CRUD (Seaography)](./entity-crud.md) -- operations available in the entity CRUD schema
- [Job Management](./job-management.md) -- operations available in the job management schema
- [User 2FA Management](./user-2fa.md) -- 2FA operations in the job management schema

# Public Registration
Barycenter supports an optional public registration endpoint that allows users to create their own accounts without administrator intervention. This feature is disabled by default and must be explicitly enabled in the configuration.
## Configuration
Public registration is controlled by the `allow_public_registration` setting:
### Configuration File
```toml
[server]
allow_public_registration = true
```
### Environment Variable
```bash
export BARYCENTER__SERVER__ALLOW_PUBLIC_REGISTRATION=true
```
When set to `false` (the default), the `/register` endpoint returns a `403 Forbidden` response.
## Registration Endpoint
```
POST /register
Content-Type: application/json
```
### Request Body
```json
{
  "username": "newuser",
  "email": "newuser@example.com",
  "password": "secure_password"
}
```
### Fields
| Field | Type | Required | Description |
|---|---|---|---|
| `username` | `string` | Yes | Desired username. Must be unique across all accounts. |
| `email` | `string` | Yes | User's email address. |
| `password` | `string` | Yes | Plaintext password. Hashed with argon2id before storage. |
### Successful Response
```http
HTTP/1.1 201 Created
Content-Type: application/json
```
```json
{
  "username": "newuser",
  "email": "newuser@example.com",
  "subject": "550e8400-e29b-41d4-a716-446655440000"
}
```
### Error Responses
| Status | Condition |
|---|---|
| `400 Bad Request` | Missing required fields or invalid input. |
| `403 Forbidden` | Public registration is disabled. |
| `409 Conflict` | A user with the given username already exists. |
### curl Example
```bash
# Register a new user
curl -s -X POST http://localhost:8080/register \
-H "Content-Type: application/json" \
-d '{
"username": "newuser",
"email": "newuser@example.com",
"password": "secure_password"
}' | jq .
```
## When to Enable Public Registration
Public registration is appropriate when:
- **Self-service onboarding**: You want users to create accounts on their own, such as in a SaaS application or community service.
- **Development and testing**: Convenient for local development when you need to create test accounts quickly without using the admin API.
Public registration should remain **disabled** when:
- **Controlled environments**: Only known users should have accounts (use [user sync](./user-sync.md) instead).
- **Enterprise deployments**: User provisioning is handled by an external identity management system or HR workflow.
- **Security-sensitive deployments**: Open registration increases the attack surface by allowing anyone to create an account.
## Security Considerations
When public registration is enabled:
- **Rate limiting**: Consider configuring rate limiting on the `/register` endpoint to prevent abuse. See [Rate Limiting](../security/rate-limiting.md).
- **Password policy**: Barycenter hashes every password with argon2id but does not enforce a minimum password strength. Consider implementing password strength requirements in your application before submission.
- **Email verification**: Barycenter does not currently perform email verification on registration. The provided email is stored as-is.
- **Account enumeration**: The `409 Conflict` response reveals whether a username is taken. If this is a concern for your threat model, consider implementing a unified error response.
## Comparison with Other Methods
| | Public Registration | User Sync | GraphQL API |
|---|---|---|---|
| Self-service | Yes | No | No |
| Bulk provisioning | No | Yes | Possible but manual |
| Password handling | Auto-hashed | Auto-hashed | Pre-hashed required |
| Access required | None (public) | CLI access | Admin API access |
| Idempotent | No (409 on duplicate) | Yes | No (error on duplicate) |
## Further Reading
- [Creating Users](./creating-users.md) -- all user creation methods
- [User Sync from JSON](./user-sync.md) -- declarative user provisioning
- [Rate Limiting](../security/rate-limiting.md) -- protecting public endpoints

# User 2FA Management
The admin job management schema at `POST /admin/jobs` includes mutations and queries for managing two-factor authentication requirements on a per-user basis. Administrators can enforce 2FA for specific users and check their enrollment status.
## Setting 2FA Requirements
The `setUser2faRequired` mutation enables or disables the 2FA requirement for a specific user. When enabled, the user must complete a second authentication factor (passkey verification) after their initial password login before any authorization flow can proceed.
### Enable 2FA for a User
```graphql
mutation {
  setUser2faRequired(username: "alice", required: true) {
    success
    message
    username
    requires2fa
  }
}
```
### Response
```json
{
  "data": {
    "setUser2faRequired": {
      "success": true,
      "message": "2FA requirement updated for user alice",
      "username": "alice",
      "requires2fa": true
    }
  }
}
```
### Disable 2FA for a User
```graphql
mutation {
  setUser2faRequired(username: "alice", required: false) {
    success
    message
    username
    requires2fa
  }
}
```
### Error Handling
If the user does not exist, the mutation reports a failure:
```json
{
  "data": {
    "setUser2faRequired": {
      "success": false,
      "message": "User not found: nonexistent_user",
      "username": "nonexistent_user",
      "requires2fa": false
    }
  }
}
```
### curl Example
```bash
# Enable 2FA for user "alice"
curl -s -X POST http://localhost:8081/admin/jobs \
  -H "Content-Type: application/json" \
  -d '{
    "query": "mutation { setUser2faRequired(username: \"alice\", required: true) { success message username requires2fa } }"
  }' | jq .

# Disable 2FA for user "alice"
curl -s -X POST http://localhost:8081/admin/jobs \
  -H "Content-Type: application/json" \
  -d '{
    "query": "mutation { setUser2faRequired(username: \"alice\", required: false) { success message username requires2fa } }"
  }' | jq .
```
## Querying 2FA Status
The `user2faStatus` query returns the current 2FA configuration and passkey enrollment details for a user.
### Query
```graphql
{
  user2faStatus(username: "alice") {
    username
    subject
    requires2fa
    passkeyEnrolled
    passkeyCount
    passkeyEnrolledAt
  }
}
```
### Response
```json
{
  "data": {
    "user2faStatus": {
      "username": "alice",
      "subject": "550e8400-e29b-41d4-a716-446655440000",
      "requires2fa": true,
      "passkeyEnrolled": true,
      "passkeyCount": 2,
      "passkeyEnrolledAt": "2026-01-15T10:30:00Z"
    }
  }
}
```
### Response Fields
| Field | Type | Description |
|---|---|---|
| `username` | `String` | The queried username. |
| `subject` | `String` | The user's unique subject identifier (UUID). |
| `requires2fa` | `Boolean` | Whether 2FA is currently required for this user. |
| `passkeyEnrolled` | `Boolean` | Whether the user has at least one passkey registered. |
| `passkeyCount` | `Int` | Total number of passkeys registered for the user. |
| `passkeyEnrolledAt` | `String` | ISO 8601 timestamp of the user's first passkey registration. `null` if no passkeys are enrolled. |
### curl Example
```bash
curl -s -X POST http://localhost:8081/admin/jobs \
  -H "Content-Type: application/json" \
  -d '{
    "query": "{ user2faStatus(username: \"alice\") { username subject requires2fa passkeyEnrolled passkeyCount passkeyEnrolledAt } }"
  }' | jq .
```
## Operational Considerations
### Enabling 2FA Before Passkey Enrollment
If you enable 2FA for a user who has no passkeys enrolled, they will be prompted to register a passkey during their next login. The login flow redirects to the 2FA page, which requires a passkey verification step. Users without a passkey will need to register one first.
Check enrollment status before enabling:
```graphql
{
  user2faStatus(username: "alice") {
    requires2fa
    passkeyEnrolled
    passkeyCount
  }
}
```
### Bulk 2FA Enforcement
To enable 2FA for multiple users, issue multiple mutations. GraphQL allows batching in a single request using aliases:
```graphql
mutation {
  alice: setUser2faRequired(username: "alice", required: true) {
    success
    username
  }
  bob: setUser2faRequired(username: "bob", required: true) {
    success
    username
  }
  carol: setUser2faRequired(username: "carol", required: true) {
    success
    username
  }
}
```
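For a longer user list, the aliased mutation can be generated rather than hand-written. A shell sketch (aliases must be valid GraphQL names, so this assumes simple alphanumeric usernames):

```shell
USERS="alice bob carol"
MUTATION="mutation {"
for u in $USERS; do
  # Each alias must be a valid GraphQL name; plain usernames work as-is
  MUTATION="$MUTATION ${u}: setUser2faRequired(username: \"${u}\", required: true) { success username }"
done
MUTATION="$MUTATION }"
echo "$MUTATION"
```

The generated string can be wrapped into a POST body with `jq -n --arg q "$MUTATION" '{query: $q}'`.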
### Interaction with Context-Based 2FA
Admin-enforced 2FA and context-based 2FA are independent mechanisms. Even if a user does not have admin-enforced 2FA, they may still be required to complete 2FA when:
- The authorization request includes high-value scopes (`admin`, `payment`, `transfer`, `delete`).
- The `max_age` parameter is below 300 seconds.
See [Context-Based 2FA](../authentication/2fa-context-based.md) for details on these triggers.
## Further Reading
- [Admin-Enforced 2FA](../authentication/2fa-admin-enforced.md) -- how enforced 2FA works during the login flow
- [Context-Based 2FA](../authentication/2fa-context-based.md) -- 2FA triggered by scopes and max_age
- [Passkey Management](./passkey-management.md) -- user-facing passkey operations
- [Job Management](./job-management.md) -- the other operations available at `/admin/jobs`

# User Management
Barycenter provides multiple methods for managing user accounts, each suited to different operational contexts. This section covers how users are created, provisioned, and managed across the system.
## User Provisioning Methods
| Method | Use Case | Details |
|---|---|---|
| Default admin user | First-run bootstrap | Created automatically on startup. See [Creating Users](./creating-users.md). |
| Seaography GraphQL API | Ad-hoc user creation by administrators | Full CRUD via `POST /admin/graphql`. See [Creating Users](./creating-users.md). |
| User sync from JSON | Declarative provisioning from configuration | Idempotent CLI subcommand for bulk provisioning. See [User Sync from JSON](./user-sync.md). |
| Public registration | Self-service account creation | Optional endpoint for open registration. See [Public Registration](./public-registration.md). |
## User Lifecycle
### Account Creation
Users can be created through any of the methods listed above. Every user account includes:
- **Username**: unique identifier used for login.
- **Email**: contact address (used in OIDC claims).
- **Password hash**: argon2id hash of the user's password.
- **Subject**: a UUID assigned at creation, used as the `sub` claim in tokens.
- **2FA settings**: whether 2FA is required, and passkey enrollment status.
### Authentication
Once created, users authenticate via:
- **Password login** at `POST /login`.
- **Passkey login** via WebAuthn at `POST /webauthn/authenticate/start` and `POST /webauthn/authenticate/finish`.
- **Two-factor authentication** when required, via `POST /webauthn/2fa/start` and `POST /webauthn/2fa/finish`.
### Passkey Management
Authenticated users can manage their own passkeys through the account API. See [Passkey Management](./passkey-management.md).
### 2FA Enforcement
Administrators can require specific users to complete two-factor authentication on every login. See [User 2FA Management](./user-2fa.md).
## Choosing a Provisioning Method
- **Development and testing**: Rely on the default admin user. Create additional test users via the Seaography API.
- **Production with known users**: Use the [user sync CLI](./user-sync.md) to declare users in a JSON file and provision them as part of deployment (e.g., as a Kubernetes init container).
- **Production with self-service**: Enable [public registration](./public-registration.md) to let users create their own accounts.
- **Mixed environments**: Combine user sync for administrative accounts with public registration for end users.
## Further Reading
- [Creating Users](./creating-users.md) -- default admin user and GraphQL-based creation
- [User Sync from JSON](./user-sync.md) -- declarative bulk provisioning
- [Public Registration](./public-registration.md) -- self-service account creation
- [Passkey Management](./passkey-management.md) -- user-facing passkey operations
- [User 2FA Management](./user-2fa.md) -- admin-enforced two-factor authentication

# User Sync from JSON
The `sync-users` CLI subcommand provisions user accounts from a JSON file. It is designed for declarative, repeatable user management in production environments where the set of administrative or service accounts is known at deployment time.
## Usage
```bash
barycenter sync-users --file users.json
```
The command reads the specified JSON file, and for each user definition:
- **Creates** the user if the username does not already exist.
- **Updates** the user if the username already exists (updates email, password, and other fields).
- **Does not delete** users that are present in the database but absent from the JSON file.
This idempotent behavior means you can run the command repeatedly -- during every deployment, as a startup script, or as a Kubernetes init container -- without causing errors or duplicating accounts.
## JSON File Format
The users file is a JSON array of user objects:
```json
[
  {
    "username": "admin",
    "email": "admin@example.com",
    "password": "strong_admin_password"
  },
  {
    "username": "alice",
    "email": "alice@example.com",
    "password": "alice_secure_password"
  },
  {
    "username": "service-account",
    "email": "svc@example.com",
    "password": "service_account_password"
  }
]
```
### Fields
| Field | Type | Required | Description |
|---|---|---|---|
| `username` | `string` | Yes | Unique username for login. Used as the key for idempotent matching. |
| `email` | `string` | Yes | User's email address. Updated on subsequent syncs if changed. |
| `password` | `string` | Yes | Plaintext password. Automatically hashed with argon2id before storage. |
Passwords appear in plaintext in the JSON file and are hashed by the sync command before being stored. Protect the file with appropriate file system permissions and do not commit it to version control with real passwords.
## Idempotent Behavior
The sync operation uses the `username` field as the unique key:
| Scenario | Action |
|---|---|
| Username does not exist in database | Create new user with hashed password |
| Username already exists in database | Update email and password hash if changed |
| Username exists in database but not in JSON file | No action (user is preserved) |
This means:
- Running `sync-users` twice with the same file produces the same result as running it once.
- Adding a new user to the JSON file and re-running creates only that new user.
- Changing a password in the JSON file and re-running updates the password hash.
- Removing a user from the JSON file does **not** delete them from the database.
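Before handing a file to `sync-users`, a quick `jq` pre-flight check can confirm that every entry carries the three required string fields. This is a hypothetical helper, not part of the CLI; it writes a sample file first so the snippet is self-contained:

```shell
# Sample file -- replace with your real users.json
cat > users.json <<'EOF'
[
  {"username": "admin", "email": "admin@example.com", "password": "change-me"}
]
EOF

# Fail loudly if any entry is missing a required field
jq -e 'type == "array" and all(.[];
  (.username | type == "string") and
  (.email    | type == "string") and
  (.password | type == "string"))' users.json >/dev/null \
  && echo "users.json looks valid"
```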
## Kubernetes Init Container Pattern
A common deployment pattern is to run `sync-users` as an init container before the main Barycenter pod starts. This ensures administrative accounts exist before the server begins accepting requests.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: barycenter
spec:
  template:
    spec:
      initContainers:
        - name: sync-users
          image: your-registry/barycenter:latest
          command: ["barycenter", "sync-users", "--file", "/config/users.json"]
          volumeMounts:
            - name: users-config
              mountPath: /config
            - name: data
              mountPath: /data
      containers:
        - name: barycenter
          image: your-registry/barycenter:latest
          command: ["barycenter", "--config", "/config/config.toml"]
          volumeMounts:
            - name: users-config
              mountPath: /config
            - name: data
              mountPath: /data
      volumes:
        - name: users-config
          secret:
            secretName: barycenter-users
        - name: data
          persistentVolumeClaim:
            claimName: barycenter-data
```
The users JSON file is mounted from a Kubernetes Secret to keep passwords out of ConfigMaps:
```bash
kubectl create secret generic barycenter-users \
  --from-file=users.json=./users.json \
  --from-file=config.toml=./config.toml
```
## Security Considerations
- **File permissions**: The JSON file contains plaintext passwords. Set restrictive permissions (`chmod 600 users.json`) and limit access to the deployment system.
- **Secrets management**: In Kubernetes, store the file as a Secret rather than a ConfigMap. Consider using external secret managers (e.g., Vault, AWS Secrets Manager) that inject secrets at runtime.
- **Version control**: Never commit the users JSON file with real passwords to a repository. Use a template or placeholder file instead, and populate real values during deployment.
- **Audit trail**: The sync command logs which users were created or updated, providing a record of provisioning actions.
## Example Workflow
```bash
# 1. Create the users file
cat > users.json << 'EOF'
[
  {
    "username": "admin",
    "email": "admin@myorg.com",
    "password": "change-me-in-production"
  },
  {
    "username": "readonly-service",
    "email": "readonly@myorg.com",
    "password": "service-account-password"
  }
]
EOF

# 2. Restrict file permissions
chmod 600 users.json

# 3. Run the sync
barycenter sync-users --file users.json

# 4. Verify users were created
curl -s -X POST http://localhost:8081/admin/graphql \
  -H "Content-Type: application/json" \
  -d '{"query": "{ user { findMany { nodes { username email } } } }"}' | jq .
```
## Further Reading
- [Creating Users](./creating-users.md) -- all user creation methods
- [User Sync in Kubernetes](../deployment/k8s-user-sync.md) -- detailed Kubernetes deployment guide
- [Public Registration](./public-registration.md) -- self-service registration as an alternative

# Admin-Enforced 2FA
Admin-enforced two-factor authentication allows an administrator to mandate that a specific user must always complete a second authentication factor (passkey verification) after entering their password. This is the strongest 2FA enforcement mode -- the user cannot bypass it regardless of what they are accessing.
## How It Works
1. An administrator sets the `requires_2fa` flag for a user via the [Admin GraphQL API](../admin/graphql-api.md).
2. The flag is stored in the `users` table as `requires_2fa = 1`.
3. On the user's next login, after successful password authentication, Barycenter checks the flag.
4. If `requires_2fa = 1`, a **partial session** is created with `mfa_verified = 0` and the user is redirected to `/login/2fa`.
5. The user completes passkey verification.
6. The session is upgraded to `mfa_verified = 1` with `acr = "aal2"`.
## Enabling 2FA for a User
Use the `setUser2faRequired` GraphQL mutation on the admin job management API at `POST /admin/jobs` (default admin port 8081):
```graphql
mutation {
  setUser2faRequired(username: "alice", required: true) {
    success
    message
    requires2fa
  }
}
```
Example using `curl`:
```bash
curl -X POST http://localhost:8081/admin/jobs \
  -H "Content-Type: application/json" \
  -d '{
    "query": "mutation { setUser2faRequired(username: \"alice\", required: true) { success message requires2fa } }"
  }'
```
### Response
```json
{
  "data": {
    "setUser2faRequired": {
      "success": true,
      "message": "2FA requirement updated for user alice",
      "requires2fa": true
    }
  }
}
```
## Disabling 2FA for a User
Pass `required: false` to remove the enforcement:
```graphql
mutation {
  setUser2faRequired(username: "alice", required: false) {
    success
    message
    requires2fa
  }
}
```
This removes the mandatory 2FA requirement. The user will still be prompted for 2FA if a [context-based trigger](./2fa-context-based.md) applies to a specific authorization request.
## Checking a User's 2FA Status
Use the `user2faStatus` query to inspect whether a user has 2FA enabled and whether they have passkeys enrolled:
```graphql
query {
  user2faStatus(username: "alice") {
    username
    requires2fa
    passkeyEnrolled
    passkeyCount
    passkeyEnrolledAt
  }
}
```
### Response
```json
{
  "data": {
    "user2faStatus": {
      "username": "alice",
      "requires2fa": true,
      "passkeyEnrolled": true,
      "passkeyCount": 2,
      "passkeyEnrolledAt": "2026-01-15T10:30:00Z"
    }
  }
}
```
## Database Column
The enforcement flag is stored in the `users` table:
| Column | Type | Description |
|----------------|---------|---------------------------------------------|
| `requires_2fa` | Integer | `0` = not required (default), `1` = required|
## The Redirect to /login/2fa
When Barycenter detects that 2FA is required for a user who has just authenticated with a password, it:
1. Creates a partial session with `mfa_verified = 0`, `amr = ["pwd"]`, `acr = "aal1"`.
2. Preserves the pending authorization request parameters in the session.
3. Returns an HTTP redirect to `/login/2fa`.
The `/login/2fa` page presents the user with a passkey verification prompt. Upon successful verification, the session is upgraded and the user is redirected back to `/authorize` to continue the OAuth flow.
See [2FA Flow Walkthrough](./2fa-flow.md) for the complete sequence.
## Considerations
- **Passkey enrollment**: A user must have at least one registered passkey before admin-enforced 2FA can be completed. If the user has no passkeys, they will be unable to satisfy the 2FA requirement and will be stuck at the `/login/2fa` page. Use the `user2faStatus` query to verify enrollment before enabling the flag.
- **Existing sessions**: Enabling `requires_2fa` does not invalidate existing sessions. The flag is checked at the next login. To force re-authentication, expire the user's current sessions.
- **Admin access**: The GraphQL admin API should be protected and not exposed publicly. See [Admin GraphQL API](../admin/graphql-api.md) for access control guidance.

# Context-Based 2FA
Context-based two-factor authentication triggers the second factor based on properties of the authorization request rather than a per-user flag. This allows applications to require stronger authentication for sensitive operations while keeping routine access frictionless.
## Trigger Conditions
Two conditions can independently trigger context-based 2FA:
### 1. High-Value Scopes
If the authorization request includes any scope that Barycenter considers high-value, 2FA is required regardless of the user's `requires_2fa` setting.
The following scopes are classified as high-value:
| Scope | Rationale |
|------------|-----------------------------------------------------|
| `admin` | Administrative operations with broad system impact. |
| `payment` | Financial transactions. |
| `transfer` | Asset or data transfer operations. |
| `delete` | Destructive operations that remove data. |
#### Scope Matching Logic
Barycenter evaluates the requested scopes during the authorization flow using an `is_high_value_scope()` check. The match is performed against the exact scope string:
```
Requested scopes: ["openid", "profile", "payment"]
                                         ^^^^^^^
                   High-value scope detected --> 2FA required
```
If the authorization request contains `openid profile email`, no high-value scope is present and 2FA is not triggered by this condition.
#### Example Authorization Request
```
GET /authorize?
  client_id=abc123&
  redirect_uri=https://app.example.com/callback&
  response_type=code&
  scope=openid+payment&
  code_challenge=...&
  code_challenge_method=S256&
  state=xyz
```
Because the `payment` scope is included, Barycenter will require 2FA even if the user does not have `requires_2fa = 1`.
### 2. Fresh Authentication Requirement (max_age)
If the authorization request includes a `max_age` parameter with a value less than 300 seconds (5 minutes), Barycenter interprets this as a request for fresh, strong authentication and triggers 2FA.
This is useful when a relying party needs to ensure the user has recently proven their identity with a high level of assurance -- for example, before displaying sensitive account settings or confirming a critical action.
#### Evaluation Logic
```
max_age parameter present?
  |
  +-- No  --> max_age does not trigger 2FA
  |
  +-- Yes --> max_age < 300?
                |
                +-- No  --> max_age does not trigger 2FA
                |
                +-- Yes --> 2FA required
```
#### Example Authorization Request
```
GET /authorize?
  client_id=abc123&
  redirect_uri=https://app.example.com/callback&
  response_type=code&
  scope=openid+profile&
  max_age=60&
  code_challenge=...&
  code_challenge_method=S256&
  state=xyz
```
Even though no high-value scope is requested, the `max_age=60` parameter triggers 2FA because 60 < 300.
## Interaction with Admin-Enforced 2FA
Context-based 2FA is evaluated independently of [admin-enforced 2FA](./2fa-admin-enforced.md). The checks are additive:
| User `requires_2fa` | High-Value Scope | `max_age < 300` | 2FA Required? |
|----------------------|------------------|------------------|---------------|
| `0` | No | No | No |
| `0` | Yes | No | Yes |
| `0` | No | Yes | Yes |
| `1` | No | No | Yes |
| `1` | Yes | Yes | Yes |
If any condition evaluates to "2FA required" and the session does not already have `mfa_verified = 1`, the user is redirected to `/login/2fa`.
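The combined decision can be sketched as a small shell function. The name `needs_2fa` and its argument layout are illustrative only; the server performs this check internally:

```shell
# Returns success (0) when 2FA must be performed for this request.
needs_2fa() {
  requires_2fa=$1   # per-user flag: 0 or 1
  scopes=$2         # space-separated requested scopes
  max_age=$3        # empty when the parameter is absent
  [ "$requires_2fa" = "1" ] && return 0
  for s in $scopes; do
    case $s in admin|payment|transfer|delete) return 0 ;; esac
  done
  [ -n "$max_age" ] && [ "$max_age" -lt 300 ] && return 0
  return 1
}

needs_2fa 0 "openid profile email" "" && echo yes || echo no   # no
needs_2fa 0 "openid payment" ""       && echo yes || echo no   # yes
needs_2fa 0 "openid profile" 60       && echo yes || echo no   # yes
```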
## Session Handling
When context-based 2FA is triggered:
1. The user has already authenticated with a password, creating a session with `mfa_verified = 0`.
2. Barycenter evaluates the authorization request and determines 2FA is needed.
3. The authorization parameters are preserved in the session.
4. The user is redirected to `/login/2fa`.
5. After successful passkey verification, the session is upgraded to `mfa_verified = 1`, `acr = "aal2"`.
6. The user is redirected back to `/authorize` where the flow continues.
If the user already has a valid session with `mfa_verified = 1` (from a previous 2FA authentication in the same session), the second factor is not requested again.
## Use Cases
### Step-Up Authentication
A common pattern is to request basic scopes for normal operations and high-value scopes only when needed:
```
# Normal access -- no 2FA
scope=openid profile email

# Administrative action -- triggers 2FA
scope=openid admin

# Payment confirmation -- triggers 2FA
scope=openid payment
```
### Confirm Sensitive Action
A relying party can use `max_age` to require fresh authentication before displaying sensitive information:
```
# User is already logged in, but RP wants fresh strong auth
# before showing account deletion page
GET /authorize?...&scope=openid+delete&max_age=60
```
This ensures the user has authenticated within the last 60 seconds and has completed 2FA, providing high confidence that the current user is the account owner.

# 2FA Flow Walkthrough
This page provides a complete walkthrough of the two-factor authentication flow, from the initial authorization request through password login, passkey verification, session upgrade, and the final redirect back to the OAuth authorization endpoint.
## Complete Sequence
```mermaid
sequenceDiagram
participant RP as Relying Party
participant Browser
participant Server as Barycenter
participant DB as Database
RP->>Browser: Redirect to /authorize?scope=openid+admin&...
Browser->>Server: GET /authorize
Note over Server: No valid session found
Server-->>Browser: 302 Redirect to /login
Browser->>Server: GET /login
Server-->>Browser: Login page (password form + passkey autofill)
Browser->>Server: POST /login (username + password)
Note over Server: Verify password (Argon2)
Server->>DB: Create session (mfa_verified=0, amr=["pwd"], acr="aal1")
DB-->>Server: session_id
Note over Server: Check 2FA requirement:<br/>- User requires_2fa=1? OR<br/>- High-value scope (admin)? OR<br/>- max_age < 300?
Note over Server: 2FA is required!
Server-->>Browser: 302 Redirect to /login/2fa<br/>Set-Cookie: session=<session_id>
Browser->>Server: GET /login/2fa
Server-->>Browser: 2FA page (passkey verification prompt)
Browser->>Server: POST /webauthn/2fa/start
Note over Server: Generate challenge (5 min TTL)
Server->>DB: Store challenge
Server-->>Browser: PublicKeyCredentialRequestOptions
Browser->>Browser: navigator.credentials.get(options)
Note over Browser: User verifies with passkey<br/>(biometric, PIN, or touch)
Browser->>Server: POST /webauthn/2fa/finish (assertion response)
Note over Server: Validate challenge<br/>Verify signature<br/>Check counter<br/>Determine passkey type (hwk/swk)
Server->>DB: Delete consumed challenge
Server->>DB: Update session:<br/>mfa_verified=1<br/>acr="aal2"<br/>amr=["pwd", "hwk"]
Server-->>Browser: 200 OK (2FA complete)
Browser->>Server: GET /authorize (resume original request)
Note over Server: Valid session with mfa_verified=1<br/>ACR meets requirements
Server->>DB: Create authorization code
Server-->>Browser: 302 Redirect to RP callback?code=...&state=...
Browser->>RP: GET /callback?code=...&state=...
RP->>Server: POST /token (exchange code)
Note over Server: ID token includes:<br/>amr=["pwd","hwk"]<br/>acr="aal2"<br/>auth_time=<timestamp>
Server-->>RP: access_token + id_token
```
## Step-by-Step Breakdown
### 1. Authorization Request
The relying party redirects the user to Barycenter's authorization endpoint. The request may contain scopes or parameters that trigger 2FA:
```
GET /authorize?
client_id=abc123&
redirect_uri=https://app.example.com/callback&
response_type=code&
scope=openid+admin&
code_challenge=E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM&
code_challenge_method=S256&
state=af0ifjsldkj
```
Barycenter finds no valid session and redirects to `/login`.
### 2. Password Authentication
The user submits their username and password. Barycenter:
- Verifies the password against the stored Argon2 hash.
- Creates a **partial session** in the database:
| Field | Value |
|----------------|-------------|
| `amr` | `["pwd"]` |
| `acr` | `"aal1"` |
| `mfa_verified` | `0` |
| `auth_time` | now |
### 3. 2FA Requirement Check
Before redirecting back to `/authorize`, Barycenter checks whether 2FA is required by evaluating three conditions:
1. **User flag**: Is `users.requires_2fa = 1` for this user?
2. **Scope check**: Does the requested scope contain `admin`, `payment`, `transfer`, or `delete`?
3. **max_age check**: Is `max_age` present and less than 300?
In this example, the `admin` scope triggers condition 2. The user is redirected to `/login/2fa`.
### 4. Passkey 2FA Page
The `/login/2fa` page presents the user with a passkey verification prompt. This page uses the same WASM client as the login page but calls the 2FA-specific endpoints.
### 5. WebAuthn 2FA Start
The browser sends a request to begin the 2FA ceremony:
```
POST /webauthn/2fa/start
Cookie: session=<session_id>
```
This endpoint requires a **partial session** (a session with `mfa_verified = 0`). It generates a challenge, stores it with a 5-minute TTL, and returns `PublicKeyCredentialRequestOptions`.
The `allowCredentials` list is populated with the user's registered passkey IDs, since the user is already identified from the session.
### 6. Passkey Verification
The browser invokes the WebAuthn API:
```javascript
const assertion = await authenticate_passkey(options, "optional");
```
The user verifies their identity with a biometric, PIN, or security key touch.
### 7. WebAuthn 2FA Finish
The browser sends the signed assertion:
```
POST /webauthn/2fa/finish
Cookie: session=<session_id>
Content-Type: application/json
{ ... assertion response ... }
```
The server:
1. Retrieves and validates the challenge.
2. Identifies the passkey from the credential ID.
3. Verifies the signature against the stored public key.
4. Checks and updates the signature counter.
5. Determines the passkey type (`hwk` or `swk`) from the backup state.
6. Deletes the consumed challenge.
### 8. Session Upgrade
Upon successful verification, the existing session is **upgraded** (not replaced):
| Field | Before | After |
|----------------|-----------------|----------------------|
| `amr` | `["pwd"]` | `["pwd", "hwk"]` |
| `acr` | `"aal1"` | `"aal2"` |
| `mfa_verified` | `0` | `1` |
| `auth_time` | (unchanged) | (unchanged) |
The `auth_time` is not updated because it records when the session was first created (the initial password authentication). The upgrade only changes the authentication strength fields.
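The before/after transition can be sketched as an in-place update. Field names mirror the table; this is an illustrative sketch, not the server's storage code:

```python
# Illustrative in-place session upgrade after successful passkey 2FA.
def upgrade_session(session, passkey_amr):
    """Append the second factor and raise the assurance level."""
    session["amr"] = session["amr"] + [passkey_amr]  # e.g. ["pwd"] -> ["pwd", "hwk"]
    session["acr"] = "aal2"
    session["mfa_verified"] = 1
    # auth_time is deliberately untouched: it records the first-factor time
    return session
```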
### 9. Resume Authorization
The user is redirected back to `/authorize` with the original parameters. This time, Barycenter finds a valid session with `mfa_verified = 1` and proceeds to issue an authorization code.
### 10. Token Exchange
When the relying party exchanges the authorization code at the token endpoint, the resulting ID token includes the full authentication context:
```json
{
  "iss": "https://auth.example.com",
  "sub": "user_subject_uuid",
  "aud": "abc123",
  "exp": 1739560800,
  "iat": 1739557200,
  "auth_time": 1739557180,
  "amr": ["pwd", "hwk"],
  "acr": "aal2",
  "nonce": "..."
}
```
The relying party can verify that:
- `acr` is `"aal2"`, confirming two-factor authentication was performed.
- `amr` contains `"pwd"` and `"hwk"`, confirming the specific methods used.
- `auth_time` indicates when the authentication occurred.
## 2FA Endpoint vs Regular Authentication Endpoints
The 2FA endpoints differ from the regular passkey authentication endpoints:
| Aspect | Regular Auth | 2FA |
|----------------------|--------------------------------------|--------------------------------------|
| Start endpoint | `POST /webauthn/authenticate/start` | `POST /webauthn/2fa/start` |
| Finish endpoint | `POST /webauthn/authenticate/finish` | `POST /webauthn/2fa/finish` |
| Auth required | No (public) | Yes (partial session required) |
| Creates new session | Yes | No (upgrades existing session) |
| `allowCredentials` | Empty (discoverable) | Populated with user's passkey IDs |
| Resulting `acr` | `"aal1"` | `"aal2"` |

# User-Optional 2FA
> **Status**: This feature is planned but **not yet implemented**.
User-optional two-factor authentication will allow users to enable 2FA for their own accounts through a self-service interface, without requiring an administrator to set the flag.
## Planned Functionality
When implemented, this mode will provide:
- **Self-service enrollment**: Users will be able to enable 2FA from an account settings page, requiring them to register at least one passkey as part of the enrollment process.
- **Self-service disablement**: Users will be able to disable self-imposed 2FA from the same settings page, typically requiring a passkey verification to confirm the change.
- **Independent of admin enforcement**: User-optional 2FA will coexist with [admin-enforced 2FA](./2fa-admin-enforced.md). If an administrator has already mandated 2FA for a user, the user cannot disable it. If the user enables 2FA voluntarily, the administrator can still override the setting.
## How It Will Differ from Admin-Enforced 2FA
| Aspect | Admin-Enforced | User-Optional (Planned) |
|----------------------|---------------------------------------|---------------------------------|
| Who enables it | Administrator via GraphQL API | User via account settings UI |
| Who can disable it | Administrator only | User (unless admin also enforces) |
| Requires passkey | Passkey must be enrolled beforehand | Enrollment is part of the setup flow |
| Stored as | `requires_2fa = 1` in `users` table | Separate user-preference flag |
## Current Alternatives
Until user-optional 2FA is available, the same outcome can be achieved through:
1. **Admin-enforced 2FA**: An administrator can enable 2FA for individual users using the `setUser2faRequired` mutation. See [Admin-Enforced 2FA](./2fa-admin-enforced.md).
2. **Context-based 2FA**: Applications can require 2FA for specific operations by requesting [high-value scopes](./2fa-context-based.md) or setting a low `max_age`.
## Tracking
This feature is tracked in the project's pending work. Contributions are welcome -- see [Contributing](../development/contributing.md) for guidelines.

# AMR and ACR Claims
Barycenter tracks how a user authenticated and includes this information in the [ID Token](../oidc/id-token.md) as standard OpenID Connect claims. Relying parties can use these claims to make authorization decisions based on the strength of the authentication.
## AMR -- Authentication Methods Reference
The `amr` claim is a JSON array of strings identifying the authentication methods used during the session. It is defined in [RFC 8176](https://www.rfc-editor.org/rfc/rfc8176).
### Supported Values
| AMR Value | Method | Description |
|-----------|-----------------------------|--------------------------------------------------------------------|
| `pwd` | Password | The user entered a username and password. |
| `hwk` | Hardware-bound key | The user authenticated with a hardware-bound passkey (e.g., YubiKey, Titan Security Key, platform TPM) that cannot be synced or cloned. |
| `swk` | Software key | The user authenticated with a software or cloud-synced passkey (e.g., iCloud Keychain, Google Password Manager, 1Password). |
### How AMR Is Determined
The AMR array is built incrementally as the user authenticates:
| Authentication Event | AMR After Event |
|--------------------------------|---------------------|
| Password login | `["pwd"]` |
| Passkey login (hardware-bound) | `["hwk"]` |
| Passkey login (cloud-synced) | `["swk"]` |
| Password + 2FA (hardware key) | `["pwd", "hwk"]` |
| Password + 2FA (cloud key) | `["pwd", "swk"]` |
The passkey type (`hwk` vs `swk`) is determined by the authenticator's `backup_eligible` flag:
- **`backup_eligible = false`**: The credential is hardware-bound and cannot be transferred. AMR value: `hwk`.
- **`backup_eligible = true`**: The credential may be synced across devices via a cloud service. AMR value: `swk`.
This check is performed on every authentication, not just at registration, because the backup state can change over time.
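The mapping is a one-line decision; a minimal sketch (illustrative function name):

```python
# Minimal sketch of the mapping described above.
def passkey_amr(backup_eligible):
    """Map the credential's backup_eligible flag to an AMR value."""
    return "swk" if backup_eligible else "hwk"
```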
### AMR in the ID Token
The `amr` claim appears as a top-level array in the ID token:
```json
{
  "iss": "https://auth.example.com",
  "sub": "user-uuid-123",
  "aud": "client-abc",
  "amr": ["pwd", "hwk"],
  "acr": "aal2",
  "auth_time": 1739557200,
  "..."
}
```
## ACR -- Authentication Context Class Reference
The `acr` claim is a string that indicates the overall assurance level of the authentication. Barycenter uses the NIST Authentication Assurance Levels (AAL) defined in [SP 800-63B](https://pages.nist.gov/800-63-3/sp800-63b.html).
### Supported Values
| ACR Value | Assurance Level | Meaning |
|-----------|-------------------------------|-----------------------------------------------------|
| `aal1` | Authentication Assurance Level 1 | Single-factor authentication. The user proved their identity with one method (password alone or passkey alone). |
| `aal2` | Authentication Assurance Level 2 | Two-factor authentication. The user proved their identity with two distinct methods (password + passkey). |
### How ACR Is Determined
The ACR value is set based on the `mfa_verified` flag in the session:
| `mfa_verified` | ACR | Condition |
|----------------|---------|-----------------------------------------------------|
| `0` | `aal1` | Only one authentication method has been used. |
| `1` | `aal2` | Two authentication methods have been verified. |
The transition from `aal1` to `aal2` happens during the [2FA flow](./2fa-flow.md) when the passkey verification succeeds and the session is upgraded.
### ACR in the ID Token
```json
{
  "acr": "aal2"
}
```
Relying parties can check this value to enforce minimum assurance levels:
```python
# Example: reject tokens that don't meet aal2
id_token = decode_id_token(token_string)
if id_token["acr"] != "aal2":
    raise InsufficientAuthenticationError("This action requires two-factor authentication")
```
## auth_time Claim
The `auth_time` claim records when the user's session was first created (the time of initial authentication). It is a Unix timestamp (seconds since epoch).
```json
{
  "auth_time": 1739557200
}
```
Key behaviors:
- `auth_time` is set when the session is created (during password login or passkey login).
- `auth_time` is **not updated** when the session is upgraded during 2FA. It always reflects the time of the first factor.
- Relying parties can use `auth_time` together with the `max_age` parameter to determine whether the authentication is fresh enough for their needs.
### Relationship with max_age
When a relying party includes `max_age` in the authorization request, Barycenter checks whether the session's `auth_time` is within the specified window:
```
current_time - auth_time <= max_age
```
If the session is too old, the user is required to re-authenticate. If `max_age` is less than 300 seconds, [context-based 2FA](./2fa-context-based.md) is also triggered.
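The window check above can be sketched directly, assuming all times are Unix seconds (function name is illustrative):

```python
# Sketch of the freshness check; all times are Unix seconds.
def auth_is_fresh(current_time, auth_time, max_age):
    """True if auth_time falls within the RP's max_age window."""
    return current_time - auth_time <= max_age
```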
## Combining AMR, ACR, and auth_time
Together, these three claims give relying parties a complete picture of the authentication:
| Claim | Answers the Question |
|-------------|-----------------------------------------------------|
| `amr` | **How** did the user authenticate? (methods used) |
| `acr` | **How strong** is the authentication? (assurance) |
| `auth_time` | **When** did the user authenticate? (freshness) |
### Example: Enforcing Strong, Fresh Authentication
A relying party protecting a payment flow might check all three:
```python
import time

id_token = decode_id_token(token_string)
# Require two-factor authentication
assert id_token["acr"] == "aal2", "Payment requires 2FA"
# Require a hardware-bound key was used
assert "hwk" in id_token["amr"], "Payment requires hardware key"
# Require authentication within the last 5 minutes
assert time.time() - id_token["auth_time"] < 300, "Authentication too old"
```

# Conditional UI / Autofill
Conditional UI is a WebAuthn feature that integrates passkey authentication into the browser's native autofill mechanism. Instead of requiring users to click a dedicated "Sign in with passkey" button, passkey credentials appear alongside saved passwords in the username field's autofill dropdown.
## How It Works
When the login page loads, Barycenter's WASM client:
1. Calls `supports_conditional_ui()` to check if the browser supports the feature.
2. If supported, initiates a passkey authentication request with `mediation: "conditional"`.
3. The browser silently prepares available passkeys for the current RP ID.
4. When the user focuses the username field, passkey credentials appear in the autofill dropdown alongside any saved passwords.
5. If the user selects a passkey, the WebAuthn assertion ceremony completes automatically.
6. If the user types a username and password instead, the passkey request is silently abandoned.
This approach is called **progressive enhancement**: passkey users get a streamlined experience, while password users see no difference from a traditional login form.
## Browser Support
Conditional UI support varies across browsers. The `supports_conditional_ui()` function in the WASM client detects availability at runtime.
| Browser | Minimum Version | Status |
|---------------------|-----------------|---------------------------------|
| Google Chrome | 108+ | Fully supported |
| Microsoft Edge | 108+ | Fully supported (Chromium-based)|
| Apple Safari | 16+ | Fully supported |
| Mozilla Firefox | --- | Not yet supported |
| Safari (iOS) | 16+ | Fully supported |
| Chrome (Android) | 108+ | Fully supported |
> **Note**: Browser support is checked at runtime. The table above reflects the state at time of writing and may change as browsers add support. The `supports_conditional_ui()` function is the authoritative check.
## WASM Client Detection
The WASM client provides two capability-check functions:
```javascript
import init, {
  supports_webauthn,
  supports_conditional_ui,
  authenticate_passkey
} from '/static/wasm/barycenter_webauthn_client.js';

await init();

// Check basic WebAuthn support
if (!supports_webauthn()) {
  // Hide all passkey UI elements
  // Show password-only login form
}

// Check Conditional UI support
if (await supports_conditional_ui()) {
  // Start conditional mediation (autofill mode); `options` comes from
  // POST /webauthn/authenticate/start
  authenticate_passkey(options, "conditional");
} else {
  // Show explicit "Sign in with passkey" button
}
```
`supports_conditional_ui()` is an async function because it calls `PublicKeyCredential.isConditionalMediationAvailable()`, which returns a Promise.
## Autofill Integration
For Conditional UI to work, the login form's username input must include the `webauthn` autocomplete token:
```html
<input
  type="text"
  name="username"
  autocomplete="username webauthn"
  placeholder="Username"
/>
```
The `webauthn` token tells the browser to include passkey credentials in the autofill dropdown for this field. Without it, the browser will only show saved passwords.
Barycenter's built-in login page includes this attribute automatically.
## Mediation Modes
The WASM client's `authenticate_passkey()` function accepts a mediation parameter that controls how the browser presents credentials:
| Mode | Behavior | Use Case |
|-----------------|------------------------------------------------------------------|-----------------------------|
| `"conditional"` | Credentials appear in the autofill dropdown, no modal. | Default on page load. |
| `"optional"` | A modal dialog prompts the user to select a credential. | Explicit button click. |
### Conditional Mediation
```javascript
// Called on page load -- non-blocking, waits for autofill interaction
const assertion = await authenticate_passkey(options, "conditional");
```
The conditional request is initiated as soon as the page loads but does not block or show any UI. It remains pending until the user interacts with the autofill dropdown or the page navigates away.
### Optional Mediation (Fallback Button)
```javascript
// Called when user clicks "Sign in with passkey" button
const assertion = await authenticate_passkey(options, "optional");
```
This triggers the browser's standard modal credential picker. It is used as a fallback when Conditional UI is not supported or when the user explicitly requests passkey authentication.
## Fallback Strategy
Barycenter's login page implements a layered fallback strategy:
```
Browser supports Conditional UI?
|
+-- Yes --> Autofill mode (passkeys in dropdown + password form)
|
+-- No --> Browser supports WebAuthn?
|
+-- Yes --> Explicit "Sign in with passkey" button + password form
|
+-- No --> Password-only form
```
This ensures that every user can authenticate regardless of their browser's capabilities. The login page adapts its UI based on the detected support level without requiring any user configuration.

# Consent Flow
After a user authenticates, Barycenter presents a consent screen asking the user to approve or deny the relying party's request for access. The consent flow ensures that users are informed about what data and permissions they are granting to a third-party application.
## How Consent Works
The consent flow is part of the OAuth 2.0 authorization code flow and occurs after authentication but before the authorization code is issued:
1. The user authenticates (password, passkey, or 2FA).
2. Barycenter checks whether the user has already granted consent for this client and scope combination.
3. If **prior consent exists** and covers the requested scopes, the flow proceeds without prompting.
4. If **no prior consent exists** (or the requested scopes exceed what was previously granted), the user is shown the consent page.
5. The user approves or denies the request.
6. If approved, the consent is recorded and the authorization code is issued.
## Consent Page
### GET /consent
The consent page displays the following information to the user:
- **Client name**: The registered name of the application requesting access.
- **Requested scopes**: A human-readable list of the permissions being requested.
The user is presented with two actions:
- **Approve**: Grant the application access to the requested scopes.
- **Deny**: Reject the request. The user is redirected back to the relying party with an `access_denied` error.
### POST /consent
The consent decision is submitted as a form POST:
```
POST /consent
Cookie: session=<session_id>
Content-Type: application/x-www-form-urlencoded
decision=approve
```
| Parameter | Values | Description |
|------------|---------------------|------------------------------------|
| `decision` | `approve` or `deny` | The user's consent decision. |
## Consent Storage
Approved consent decisions are stored in the `consent` table:
| Column | Type | Description |
|--------------|-----------|------------------------------------------------------------|
| `client_id` | String | The client that received consent. |
| `subject` | String | The user who granted consent. |
| `scope` | String | Space-separated list of approved scopes. |
| `granted_at` | Timestamp | When the consent was granted. |
When a user approves consent, Barycenter records the client, user, and scope combination. On subsequent authorization requests from the same client with the same or a subset of the previously approved scopes, the consent page is skipped.
### Scope Matching
Consent is checked per-scope. If a client requests scopes that are a **subset** of previously granted scopes, consent is not re-prompted. If the client requests **additional scopes** beyond what was previously granted, the consent page is shown again with the full set of requested scopes.
Example:
| Previous Consent | New Request | Consent Prompted? |
|--------------------------|---------------------------|-------------------|
| `openid profile` | `openid profile` | No |
| `openid profile email` | `openid profile` | No (subset) |
| `openid profile` | `openid profile email` | Yes (new scope) |
| (none) | `openid` | Yes (first time) |
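The subset rule reduces to a set comparison over space-separated scope strings. A minimal sketch (function name is an assumption, not the server's code):

```python
# Sketch of the subset rule; scopes are space-separated strings.
def consent_covers(granted_scope, requested_scope):
    """True when a prior grant already covers the requested scopes."""
    if granted_scope is None:                  # no prior consent record
        return False
    return set(requested_scope.split()) <= set(granted_scope.split())
```

The four table rows above map directly onto this predicate: consent is re-prompted exactly when it returns `False`.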
## Forcing Re-Consent
### prompt=consent
A relying party can force the consent screen to be displayed by including `prompt=consent` in the authorization request, even if the user has previously granted consent for the requested scopes:
```
GET /authorize?
client_id=abc123&
redirect_uri=https://app.example.com/callback&
response_type=code&
scope=openid+profile&
prompt=consent&
code_challenge=...&
code_challenge_method=S256&
state=xyz
```
This is useful when an application wants to ensure the user is aware of and actively agrees to the permissions being granted -- for example, after a policy change or when requesting consent for a different purpose.
When `prompt=consent` is specified:
- The consent page is always shown, regardless of prior consent records.
- If the user approves, the consent record is updated with the new `granted_at` timestamp.
- If the user denies, the existing consent record is not modified.
## Skipping Consent for Development
For development and testing environments, the consent flow can be bypassed entirely using the `BARYCENTER_SKIP_CONSENT` environment variable:
```bash
export BARYCENTER_SKIP_CONSENT=true
```
When this variable is set to `true`:
- The consent page is never shown.
- All authorization requests are treated as if the user approved.
- No consent records are written to the database.
> **Warning**: Never enable `BARYCENTER_SKIP_CONSENT` in production. Skipping consent violates user expectations and may conflict with regulatory requirements (e.g., GDPR, which requires informed consent for data sharing).
This variable is intended solely for automated testing and local development where the consent prompt would be an obstacle.
## Consent and the Authorization Flow
The consent check is integrated into the authorization endpoint flow:
```
GET /authorize
|
+-- Valid session? --> No --> Redirect to /login
|
+-- 2FA required and not verified? --> Redirect to /login/2fa
|
+-- prompt=consent? --> Yes --> Show consent page
|
+-- Prior consent covers requested scopes? --> Yes --> Issue authorization code
|
+-- No prior consent --> Show consent page
|
+-- User approves --> Record consent, issue authorization code
|
+-- User denies --> Redirect to RP with error=access_denied
```
## Deny Response
If the user denies consent, Barycenter redirects back to the relying party's registered `redirect_uri` with an error:
```
HTTP/1.1 302 Found
Location: https://app.example.com/callback?error=access_denied&error_description=The+user+denied+the+request&state=xyz
```
The relying party should handle this error gracefully and inform the user that the requested permissions were not granted.
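A hypothetical RP-side callback handler might distinguish the two outcomes like this (names and URLs are illustrative):

```python
# Hypothetical RP-side callback handler; names and URLs are illustrative.
from urllib.parse import urlparse, parse_qs

def handle_callback(url, expected_state):
    """Classify an authorization callback as a code grant or a denial."""
    params = parse_qs(urlparse(url).query)
    if params.get("state", [None])[0] != expected_state:
        raise ValueError("state mismatch")  # possible CSRF
    if "error" in params:
        # e.g. error=access_denied when the user declined consent
        return "denied", params["error"][0]
    return "code", params["code"][0]
```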

# Authenticating with a Passkey
Passkey authentication allows users to log in without a password by proving possession of a previously registered WebAuthn credential. This corresponds to the WebAuthn **assertion** ceremony.
## Endpoints
Both endpoints are **public** -- no existing session is required.
| Step | Method | Path | Auth Required |
|--------|--------|-----------------------------------|---------------|
| Start | `POST` | `/webauthn/authenticate/start` | No |
| Finish | `POST` | `/webauthn/authenticate/finish` | No |
## Authentication Flow
```mermaid
sequenceDiagram
participant User
participant Browser as Browser (WASM)
participant Server as Barycenter
User->>Browser: Interact with login page
Browser->>Server: POST /webauthn/authenticate/start
Note over Server: Generate challenge (5 min TTL)<br/>Store in webauthn_challenges table<br/>Build PublicKeyCredentialRequestOptions
Server-->>Browser: 200 OK (request options JSON)
Browser->>Browser: navigator.credentials.get(options)
Note over Browser: User selects passkey<br/>(autofill or explicit prompt)<br/>Performs verification gesture
Browser->>User: Prompt for verification
User-->>Browser: Approve
Browser->>Server: POST /webauthn/authenticate/finish (assertion response)
Note over Server: Retrieve and validate challenge<br/>Identify user from credential ID<br/>Verify signature against stored public key<br/>Check and update signature counter<br/>Determine AMR (hwk or swk)<br/>Create session<br/>Delete consumed challenge
Server-->>Browser: 200 OK + Set-Cookie (session)
Browser->>User: Redirect to authorization flow
```
## Step 1: Start Authentication
The client initiates authentication by requesting a challenge:
```
POST /webauthn/authenticate/start
Content-Type: application/json
```
The server responds with `PublicKeyCredentialRequestOptions`:
```json
{
  "publicKey": {
    "challenge": "<base64url-encoded random challenge>",
    "timeout": 300000,
    "rpId": "auth.example.com",
    "userVerification": "preferred",
    "allowCredentials": []
  }
}
```
Note that `allowCredentials` is empty for discoverable credential flows. The authenticator itself determines which credentials are available for the given RP ID. This enables passkey autofill via [Conditional UI](./conditional-ui.md).
## Step 2: Browser Credential Assertion
The WASM client passes the options to the browser's WebAuthn API:
```javascript
const assertion = await authenticate_passkey(requestOptions, "conditional");
```
The second argument controls the mediation behavior:
| Mediation Value | Behavior |
|-----------------|---------------------------------------------------------|
| `"conditional"` | Credentials appear in the browser's autofill dropdown. |
| `"optional"` | A modal dialog prompts the user to select a credential. |
The authenticator signs the challenge with the private key corresponding to the selected credential.
## Step 3: Finish Authentication
The WASM client sends the signed assertion to the server:
```
POST /webauthn/authenticate/finish
Content-Type: application/json
{
  "id": "<credential ID>",
  "rawId": "<base64url-encoded raw credential ID>",
  "type": "public-key",
  "response": {
    "clientDataJSON": "<base64url-encoded>",
    "authenticatorData": "<base64url-encoded>",
    "signature": "<base64url-encoded>",
    "userHandle": "<base64url-encoded user handle>"
  }
}
```
The server performs the following steps:
1. **Retrieve the challenge** from the `webauthn_challenges` table and verify it has not expired.
2. **Identify the user** by looking up the credential ID in the `passkeys` table.
3. **Verify the signature** against the stored public key.
4. **Check the signature counter** to detect potential authenticator cloning. If the counter has not advanced as expected, authentication may be rejected.
5. **Update the signature counter** in the stored passkey record.
6. **Determine the AMR value** based on the passkey's backup state.
7. **Create a new session** and set the session cookie.
8. **Delete the consumed challenge**.
## AMR Assignment
The AMR (Authentication Methods Reference) value is determined by the passkey's `backup_eligible` flag:
| Backup Eligible | AMR Value | Meaning |
|-----------------|-----------|-----------------------------------------------|
| `false` | `hwk` | Hardware-bound key that cannot be synced. |
| `true` | `swk` | Software/cloud key that may be synced. |
This classification is checked on every authentication, not just at registration time, because the backup state can change (for example, when an OS update enables cloud sync for a previously local credential).
## Session Created
After successful passkey authentication, the new session contains:
| Field | Value |
|----------------|---------------------------------------|
| `amr` | `["hwk"]` or `["swk"]` |
| `acr` | `"aal1"` (single-factor) |
| `mfa_verified` | `0` |
| `auth_time` | Current UTC timestamp |
The session is single-factor (`aal1`) because only one authentication method was used. If the authorization request requires two-factor authentication, the user will be redirected to complete a second factor. See [Two-Factor Authentication](./two-factor.md).
## Signature Counter and Clone Detection
WebAuthn authenticators maintain a monotonically increasing signature counter. Each time the authenticator is used, the counter increments. Barycenter stores and checks this counter to detect cloned authenticators:
- If the assertion's counter is **greater than** the stored counter, authentication succeeds and the stored counter is updated.
- If the counter is **less than or equal to** the stored counter (and not zero), this may indicate the authenticator has been cloned. Barycenter logs a warning; the behavior is configurable.
Note that some authenticators (particularly cloud-synced passkeys) always report a counter of zero, which effectively disables clone detection for those credentials.
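The comparison can be sketched as follows. This is a hypothetical illustration; the real response to a regression is configurable and may reject the assertion:

```python
# Hypothetical sketch of the counter comparison; the real behavior on
# regression is configurable.
def check_counter(stored, asserted):
    """Return (accept, clone_suspected) for an assertion's counter."""
    if asserted == 0 and stored == 0:
        # Authenticator implements no counter (common for synced passkeys):
        # clone detection is effectively disabled.
        return True, False
    if asserted > stored:
        return True, False  # normal monotonic advance
    return False, True      # regression: possible cloned authenticator
```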
## Error Cases
| Scenario | HTTP Status | Description |
|------------------------------------|-------------|----------------------------------------------------|
| Challenge expired or not found | `400` | The 5-minute window has elapsed; restart the flow. |
| Credential not found | `400` | No passkey matches the provided credential ID. |
| Signature verification failed | `400` | The signed response did not validate. |
| Signature counter regression | `400` | Possible authenticator cloning detected. |

# Registering a Passkey
Passkey registration is the process of creating a new WebAuthn credential and associating it with the user's account. This corresponds to the WebAuthn **attestation** ceremony.
## Prerequisites
- The user must have an **active session** (i.e., they must be logged in). Registration endpoints are not public.
- The browser must support the WebAuthn API (`supports_webauthn()` returns `true`).
- The WASM client module must be loaded.
## Endpoints
| Step | Method | Path | Auth Required |
|--------|--------|-----------------------------|---------------|
| Start | `POST` | `/webauthn/register/start` | Yes (session) |
| Finish | `POST` | `/webauthn/register/finish` | Yes (session) |
## Registration Flow
```mermaid
sequenceDiagram
    participant User
    participant Browser as Browser (WASM)
    participant Server as Barycenter
    User->>Browser: Click "Register passkey"
    Browser->>Server: POST /webauthn/register/start
    Note over Server: Generate challenge (5 min TTL)<br/>Store in webauthn_challenges table<br/>Build PublicKeyCredentialCreationOptions
    Server-->>Browser: 200 OK (creation options JSON)
    Browser->>Browser: navigator.credentials.create(options)
    Note over Browser: User performs gesture<br/>(biometric, PIN, or touch)
    Browser->>User: Prompt for verification
    User-->>Browser: Approve
    Browser->>Server: POST /webauthn/register/finish (attestation response)
    Note over Server: Retrieve and validate challenge<br/>Verify attestation response<br/>Extract public key and metadata<br/>Store passkey in DB<br/>Delete consumed challenge
    Server-->>Browser: 200 OK (registration complete)
    Browser->>User: Display success
```
## Step 1: Start Registration
The client sends a POST request to begin registration. No request body is needed; the server identifies the user from the session cookie.
```
POST /webauthn/register/start
Cookie: session=<session_id>
```
The server responds with `PublicKeyCredentialCreationOptions` in JSON:
```json
{
  "publicKey": {
    "rp": {
      "name": "Barycenter",
      "id": "auth.example.com"
    },
    "user": {
      "id": "<base64url-encoded user handle>",
      "name": "alice",
      "displayName": "Alice"
    },
    "challenge": "<base64url-encoded random challenge>",
    "pubKeyCredParams": [
      { "type": "public-key", "alg": -7 },
      { "type": "public-key", "alg": -257 }
    ],
    "timeout": 300000,
    "authenticatorSelection": {
      "residentKey": "preferred",
      "userVerification": "preferred"
    },
    "excludeCredentials": [
      {
        "type": "public-key",
        "id": "<existing credential ID>"
      }
    ]
  }
}
```
Key fields:
| Field | Description |
|------------------------|--------------------------------------------------------------------------|
| `rp.id` | Relying Party ID, derived from the issuer URL host. |
| `challenge` | Random challenge stored with a 5-minute TTL. |
| `pubKeyCredParams` | Supported algorithms: ES256 (`-7`) and RS256 (`-257`). |
| `excludeCredentials` | IDs of the user's existing passkeys to prevent duplicate registration. |
| `authenticatorSelection` | Preferences for discoverable credentials and user verification. |
## Step 2: Browser Credential Creation
The WASM client passes the options to the browser's WebAuthn API:
```javascript
const credential = await register_passkey(creationOptions);
```
This triggers `navigator.credentials.create()`, which prompts the user to verify their identity with a biometric, PIN, or security key touch. The authenticator then:
1. Generates a new public/private key pair.
2. Stores the private key internally.
3. Returns the public key, credential ID, and attestation data.
## Step 3: Finish Registration
The WASM client sends the attestation response to the server:
```
POST /webauthn/register/finish
Cookie: session=<session_id>
Content-Type: application/json

{
  "id": "<credential ID>",
  "rawId": "<base64url-encoded raw credential ID>",
  "type": "public-key",
  "response": {
    "clientDataJSON": "<base64url-encoded>",
    "attestationObject": "<base64url-encoded>"
  }
}
```
The server performs the following validation:
1. **Retrieve the challenge** from the `webauthn_challenges` table and verify it has not expired.
2. **Verify the attestation response** including origin, RP ID, and challenge match.
3. **Extract credential data**: public key, credential ID, signature counter, and backup state.
4. **Store the passkey** as a full `Passkey` JSON object in the database, associated with the user.
5. **Delete the consumed challenge** to enforce single-use.
## Passkey Storage
Each registered passkey is stored with the following data:
| Field | Description |
|--------------------|------------------------------------------------------------------|
| `credential_id` | Unique identifier for the credential (base64url-encoded). |
| `user_subject` | The subject identifier of the owning user. |
| `passkey_json` | Full `Passkey` object serialized as JSON (public key, counters). |
| `friendly_name` | Optional user-assigned name (e.g., "Work YubiKey"). |
| `backup_eligible` | Whether the credential can be synced across devices. |
| `created_at` | Timestamp of registration. |
## Challenge Expiration
The challenge generated during the start step has a **5-minute TTL**. If the user does not complete the registration within this window, the challenge expires and the ceremony must be restarted. A background job cleans up expired challenges every 5 minutes.
## Error Cases
| Scenario | HTTP Status | Description |
|------------------------------------|-------------|----------------------------------------------------|
| No active session | `401` | User must log in before registering a passkey. |
| Challenge expired or not found | `400` | The 5-minute window has elapsed; restart the flow. |
| Attestation verification failed | `400` | The authenticator response did not pass validation.|
| Credential already registered | `409` | The credential ID is already associated with the user. |

# How Passkeys Work
This page explains the WebAuthn protocol ceremonies and how Barycenter's architecture implements them using a Rust WASM client and server-side endpoints.
## WebAuthn Ceremonies
The WebAuthn specification defines two core ceremonies:
### Registration (Attestation)
Registration creates a new credential and associates it with a user account. The WebAuthn specification calls this the **attestation** ceremony because the authenticator attests to the properties of the newly created key pair.
The registration ceremony involves three parties:
1. **Relying Party (Barycenter server)** -- generates a challenge and specifies credential creation parameters.
2. **Browser** -- mediates between the server and authenticator, enforcing origin checks.
3. **Authenticator** -- generates a new key pair, stores the private key, and returns the public key with an attestation statement.
During registration, the server receives:
- The **public key** of the newly created credential.
- The **credential ID** that uniquely identifies this key pair.
- An **attestation statement** proving the key was generated by a legitimate authenticator.
- Authenticator metadata including **backup eligibility** (used for [hwk vs swk classification](./passkeys.md#passkey-classification-hwk-vs-swk)) and the initial **signature counter**.
### Authentication (Assertion)
Authentication proves possession of a previously registered credential. The WebAuthn specification calls this the **assertion** ceremony because the authenticator asserts the user's identity by signing a challenge.
During authentication, the server:
1. Sends a random challenge and a list of acceptable credential IDs.
2. The authenticator signs the challenge with the private key.
3. The server verifies the signature against the stored public key.
4. The server checks the signature counter to detect cloned authenticators.
## WASM Client Architecture
Barycenter uses a Rust crate (`client-wasm/`) compiled to WebAssembly to handle the browser-side WebAuthn API calls. This design provides several benefits:
- **Type safety**: WebAuthn options are parsed and validated with Rust's type system before being passed to the browser API.
- **Consistent behavior**: the same Rust data structures are used on both client and server, reducing serialization mismatches.
- **Small footprint**: the compiled WASM module is loaded once by the login page.
### Client-Server Communication Flow
```
Browser (WASM)                          Barycenter Server
      |                                        |
      | 1. POST /webauthn/*/start              |
      | -------------------------------------->|
      |                                        | Generate challenge,
      |                                        | store in DB (5 min TTL)
      | 2. PublicKeyCredentialOptions          |
      | <--------------------------------------|
      |                                        |
      | 3. navigator.credentials.*()           |
      |    (user interaction: biometric/PIN)   |
      |                                        |
      | 4. POST /webauthn/*/finish             |
      | -------------------------------------->|
      |                                        | Verify response,
      |                                        | consume challenge
      | 5. Success / session                   |
      | <--------------------------------------|
      |                                        |
```
The WASM module calls `navigator.credentials.create()` for registration and `navigator.credentials.get()` for authentication, converting the server's JSON options into the `PublicKeyCredentialCreationOptions` or `PublicKeyCredentialRequestOptions` that the browser expects.
### WASM Module Functions
The compiled WASM module exposes four functions:
```javascript
import init, {
  supports_webauthn,
  supports_conditional_ui,
  register_passkey,
  authenticate_passkey
} from '/static/wasm/barycenter_webauthn_client.js';

// Initialize the WASM module
await init();

// Check browser capabilities
const hasWebAuthn = supports_webauthn();
const hasAutofill = await supports_conditional_ui();

// Register a new passkey (options from /webauthn/register/start)
const credential = await register_passkey(registrationOptions);

// Authenticate (options from /webauthn/authenticate/start)
// mediation: "conditional" for autofill, "optional" for explicit
const assertion = await authenticate_passkey(authOptions, "conditional");
```
## Relying Party ID
The WebAuthn Relying Party (RP) ID is a domain string that scopes credentials to a specific origin. Barycenter derives the RP ID from the **host component** of the configured issuer URL.
| Issuer URL | RP ID |
|-------------------------------------|----------------------|
| `https://auth.example.com` | `auth.example.com` |
| `https://example.com:8443/` | `example.com` |
| `http://localhost:9090` | `localhost` |
The RP ID determines which credentials are available during authentication:
- Credentials registered with RP ID `auth.example.com` are only usable on `auth.example.com` and its subdomains.
- The RP ID must be a registrable domain suffix of the origin. For example, `example.com` is valid for `auth.example.com`, but `other.com` is not.
> **Important**: Changing the issuer URL after passkeys have been registered will invalidate all existing passkey credentials because the RP ID will no longer match. Plan your domain structure before deploying passkey authentication in production.
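The host-extraction rule shown in the table above can be sketched with the WHATWG URL parser. `deriveRpId` is a hypothetical helper name, not part of Barycenter's API:

```javascript
// Sketch: derive the RP ID from the host component of the issuer URL.
// The URL parser strips the scheme, port, and path automatically.
function deriveRpId(issuerUrl) {
  return new URL(issuerUrl).hostname;
}
```

Note that `hostname` (not `host`) is used, so the port in `https://example.com:8443/` is dropped, matching the table above.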
## Challenge Lifecycle
Every WebAuthn ceremony requires a server-generated challenge to prevent replay attacks:
1. The server generates a random challenge and stores it in the `webauthn_challenges` table with a **5-minute TTL**.
2. The challenge is sent to the browser as part of the credential options.
3. The authenticator includes the challenge in its signed response.
4. The server retrieves and validates the challenge, then deletes it (single-use).
A background job runs every 5 minutes to clean up expired challenges that were never consumed (for example, if the user abandoned the ceremony).
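The lifecycle above can be sketched as a small in-memory store. This is illustrative only; Barycenter persists challenges in the `webauthn_challenges` table, and all names here are hypothetical:

```javascript
// Sketch of single-use challenges with a 5-minute TTL.
const CHALLENGE_TTL_MS = 5 * 60 * 1000;

class ChallengeStore {
  constructor() {
    this.challenges = new Map(); // challenge id -> expiry timestamp
  }

  issue(id, now = Date.now()) {
    this.challenges.set(id, now + CHALLENGE_TTL_MS);
  }

  // Succeeds at most once per challenge, and only within the TTL window.
  consume(id, now = Date.now()) {
    const expiresAt = this.challenges.get(id);
    if (expiresAt === undefined) return false; // unknown or already used
    this.challenges.delete(id);                // single-use: always delete
    return now < expiresAt;                    // reject if expired
  }

  // Mirrors the background job that removes abandoned challenges.
  cleanup(now = Date.now()) {
    for (const [id, expiresAt] of this.challenges) {
      if (expiresAt <= now) this.challenges.delete(id);
    }
  }
}
```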
## Further Reading
- [Registering a Passkey](./passkey-registration.md) -- the attestation ceremony in detail
- [Authenticating with a Passkey](./passkey-authentication.md) -- the assertion ceremony in detail
- [Building the WASM Client](../development/wasm-client.md) -- compilation instructions

# Passkey / WebAuthn
Barycenter supports passwordless authentication using [WebAuthn](https://www.w3.org/TR/webauthn-3/) (also known as FIDO2) passkeys. Passkeys provide phishing-resistant authentication tied to cryptographic key pairs stored on the user's device or in a cloud keychain.
## What Are Passkeys?
A passkey is a WebAuthn credential -- a public/private key pair where the private key never leaves the authenticator (device, security key, or cloud keychain). During authentication, the authenticator signs a challenge from the server, proving possession of the private key without transmitting any shared secret.
Passkeys offer several advantages over passwords:
- **Phishing-resistant**: credentials are bound to the relying party's origin, so they cannot be replayed on a different domain.
- **No shared secrets**: the server stores only the public key, so a database breach does not expose authentication credentials.
- **User-friendly**: on supported devices, authentication is a single biometric or PIN prompt.
## Authentication Modes
Barycenter supports passkeys in two distinct roles:
### Single-Factor Passkey Login
A passkey can serve as the sole authentication method. When a user authenticates with a passkey alone, the session is created with:
- `amr`: `["hwk"]` or `["swk"]` (depending on passkey type)
- `acr`: `"aal1"` (single-factor)
This mode is suitable for everyday access where the passkey itself provides sufficient assurance. See [Authenticating with a Passkey](./passkey-authentication.md).
### Two-Factor with Passkey as Second Factor
A passkey can serve as the second factor after a password login. When combined with password authentication, the session is upgraded to:
- `amr`: `["pwd", "hwk"]` or `["pwd", "swk"]`
- `acr`: `"aal2"` (two-factor)
This mode is triggered by [admin-enforced 2FA](./2fa-admin-enforced.md), [context-based 2FA](./2fa-context-based.md), or future user-optional 2FA settings. See [Two-Factor Authentication](./two-factor.md).
## Passkey Classification: hwk vs swk
Barycenter classifies passkeys based on their backup state, which indicates whether the credential can be synced across devices:
| AMR Value | Classification | Examples | Backup Eligible |
|-----------|--------------------|--------------------------------------------------|-----------------|
| `hwk` | Hardware-bound key | YubiKey, Titan Security Key, platform TPM | No |
| `swk` | Software/cloud key | iCloud Keychain, Google Password Manager, 1Password | Yes |
The distinction is determined by examining the `backup_eligible` flag reported by the authenticator during registration and authentication. Hardware-bound passkeys that cannot be cloned or synced receive the `hwk` designation, while cloud-synced passkeys receive `swk`.
Both types are valid for authentication and 2FA. The AMR value is included in the [ID Token](../oidc/id-token.md) to allow relying parties to make authorization decisions based on authenticator strength.
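The classification rule reduces to a single branch on the backup state. A minimal sketch (the function name is illustrative):

```javascript
// Sketch: map the authenticator's backup_eligible flag to an AMR value.
// Backup-eligible (cloud-synced) credentials are "swk";
// hardware-bound credentials are "hwk".
function classifyPasskey(backupEligible) {
  return backupEligible ? "swk" : "hwk";
}
```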
## WASM Client
Browser-side WebAuthn operations are handled by a Rust WASM module compiled from the `client-wasm/` crate. The WASM client provides:
| Function | Description |
|-------------------------------|-------------------------------------------------------|
| `supports_webauthn()` | Check if the browser supports WebAuthn |
| `supports_conditional_ui()` | Check if the browser supports passkey autofill |
| `register_passkey(options)` | Create a new passkey credential |
| `authenticate_passkey(options, mediation)` | Authenticate with an existing passkey |
The WASM module is loaded by the login page and abstracts the browser's `navigator.credentials` API into a clean interface that communicates with Barycenter's WebAuthn endpoints. See [How Passkeys Work](./passkeys-how.md) for architectural details.
## Passkey Management
Users can manage their registered passkeys through the account API:
| Endpoint | Method | Description |
|--------------------------------------------|----------|-------------------------------|
| `/account/passkeys` | `GET` | List all registered passkeys |
| `/account/passkeys/:credential_id` | `DELETE` | Remove a passkey |
| `/account/passkeys/:credential_id` | `PATCH` | Update passkey friendly name |
Each passkey record stores the full WebAuthn `Passkey` object as JSON, including the signature counter for clone detection and the backup state for classification. Friendly names help users identify which device or authenticator a credential belongs to.
## Further Reading
- [How Passkeys Work](./passkeys-how.md) -- WebAuthn ceremonies and WASM architecture
- [Registering a Passkey](./passkey-registration.md) -- step-by-step registration flow
- [Authenticating with a Passkey](./passkey-authentication.md) -- step-by-step authentication flow
- [Conditional UI / Autofill](./conditional-ui.md) -- browser autofill integration
- [Two-Factor Authentication](./two-factor.md) -- using passkeys as a second factor

# Password Authentication
Barycenter supports traditional username-and-password authentication as its foundational login method. Password authentication can be used standalone or as the first factor in a [two-factor authentication](./two-factor.md) flow.
## Login Page
The login page is served at `GET /login` and presents the user with two options:
1. **Passkey autofill** -- if the browser supports [Conditional UI](./conditional-ui.md), passkey credentials are offered in the username field's autofill dropdown.
2. **Password fallback** -- a standard username and password form that submits via `POST /login`.
When an OAuth authorization request requires authentication, Barycenter redirects the user to `/login` with the original authorization parameters preserved in the session. After successful authentication, the user is redirected back to the `/authorize` endpoint to continue the OAuth flow.
## Password Submission
Credentials are submitted as a standard HTML form POST:
```
POST /login
Content-Type: application/x-www-form-urlencoded

username=alice&password=correct-horse-battery-staple
```
### Request Parameters
| Parameter | Required | Description |
|------------|----------|---------------------------------|
| `username` | Yes | The user's login name. |
| `password` | Yes | The user's plaintext password. |
### Success Response
On successful authentication, the server:
1. Verifies the password against the stored Argon2 hash.
2. Creates a new [session](./sessions.md) in the database.
3. Sets an `HttpOnly` session cookie on the response.
4. Redirects the user to the original authorization endpoint (or a default landing page if no authorization request is pending).
The newly created session records:
- `amr` (Authentication Methods Reference): `["pwd"]`
- `acr` (Authentication Context Class Reference): `"aal1"`
- `mfa_verified`: `0` (single-factor only at this stage)
### Failure Response
If the username does not exist or the password is incorrect, the server returns the login page with an error message. Barycenter does not distinguish between "unknown user" and "wrong password" in the error response to prevent username enumeration.
## Password Hashing
All passwords are hashed using [Argon2](https://en.wikipedia.org/wiki/Argon2), the winner of the 2015 Password Hashing Competition. Argon2 is a memory-hard function designed to resist brute-force attacks on GPUs and ASICs.
- **Algorithm**: Argon2id (hybrid variant combining Argon2i and Argon2d)
- **Verification**: performed using constant-time comparison to prevent timing attacks
- **Storage**: the full Argon2 encoded string (algorithm, parameters, salt, and hash) is stored in the `users` table
Barycenter never stores or logs plaintext passwords. The password is consumed during verification and immediately dropped from memory.
## Default Admin User
For development and initial setup, Barycenter ships with a default administrator account:
| Field | Value |
|----------|---------------|
| Username | `admin` |
| Password | `password123` |
> **Warning**: Change or remove the default admin credentials before deploying to any non-development environment. See the [Production Checklist](../deployment/production-checklist.md) for hardening guidance.
## Session Creation
After successful password authentication, a new session row is inserted into the `sessions` table:
| Column | Value |
|----------------|------------------------------------------------|
| `session_id` | Random 24-byte base64url-encoded identifier |
| `subject` | The authenticated user's subject identifier |
| `auth_time` | Current UTC timestamp |
| `expires_at` | `auth_time` + session TTL |
| `amr` | `["pwd"]` |
| `acr` | `"aal1"` |
| `mfa_verified` | `0` |
The session ID is returned to the browser as a cookie. See [Session Lifecycle](./session-lifecycle.md) for details on cookie attributes and expiration behavior.
## Integration with Two-Factor Authentication
If the authenticated user has [two-factor authentication](./two-factor.md) enabled -- either through admin enforcement or because the authorization request triggers context-based 2FA -- the password login creates a **partial session** with `mfa_verified = 0`. The user is then redirected to `/login/2fa` to complete the second factor before the authorization flow can continue.
See [2FA Flow Walkthrough](./2fa-flow.md) for the complete sequence.

# Session Lifecycle
This page covers the technical details of how sessions are created, stored, transmitted, upgraded, and cleaned up in Barycenter.
## Session Cookie
When a session is created, Barycenter sends a `Set-Cookie` header to the browser with the following attributes:
| Attribute | Value | Purpose |
|-------------|----------------|------------------------------------------------------------|
| `HttpOnly` | Yes | Prevents JavaScript access, mitigating XSS-based session theft. |
| `SameSite` | `Lax` | Cookie is sent on top-level navigations and same-site requests, but not on cross-site sub-requests. Balances CSRF protection with OAuth redirect compatibility. |
| `Secure` | Yes (production) | Cookie is only sent over HTTPS connections. Disabled for `localhost` during development. |
| `Path` | `/` | Cookie is available for all paths on the domain. |
### Example Set-Cookie Header
```
Set-Cookie: session=abc123def456...; HttpOnly; SameSite=Lax; Secure; Path=/
```
The cookie value is the `session_id` -- a 24-byte random value encoded as base64url, providing 192 bits of entropy.
### SameSite=Lax and OAuth Redirects
The `Lax` setting is chosen specifically for compatibility with the OAuth authorization code flow. During the flow, the user is redirected from the relying party to Barycenter and back. `SameSite=Lax` allows the session cookie to be included on these top-level redirects while still blocking cross-site requests initiated by embedded resources (images, iframes, AJAX calls), which provides meaningful CSRF protection.
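Assembling the attributes above into a `Set-Cookie` value might look like this. An illustrative sketch, not Barycenter's code; the `secure` toggle mirrors the localhost exception:

```javascript
// Sketch: build the session Set-Cookie value described above.
function buildSessionCookie(sessionId, { secure = true } = {}) {
  const parts = [`session=${sessionId}`, "HttpOnly", "SameSite=Lax"];
  if (secure) parts.push("Secure"); // omitted for localhost development
  parts.push("Path=/");
  return parts.join("; ");
}
```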
## Session TTL
Sessions have a finite lifetime defined by the `expires_at` timestamp stored in the database. The TTL is configured in the application settings.
Once a session expires, it is no longer considered valid:
- Requests with an expired session cookie are treated as unauthenticated.
- The user must log in again.
- Expired session records remain in the database until the cleanup job removes them.
## Session States
A session transitions through the following states during its lifecycle:
```
   Created              Upgraded (optional)        Expired
      |                        |                       |
[mfa_verified=0]       [mfa_verified=1]       [expires_at < now]
[amr=["pwd"]]          [amr=["pwd","hwk"]]
[acr="aal1"]           [acr="aal2"]
```
### State: Created
A session enters the "Created" state after successful single-factor authentication (password or passkey). At this point:
- `mfa_verified = 0`
- `amr` contains a single method (e.g., `["pwd"]`)
- `acr = "aal1"`
If no 2FA is required, the session remains in this state for its entire lifetime and is fully usable for authorization.
### State: Upgraded
If two-factor authentication is completed, the session transitions to the "Upgraded" state:
- `mfa_verified = 1`
- `amr` contains both methods (e.g., `["pwd", "hwk"]`)
- `acr = "aal2"`
- `auth_time` remains unchanged (still reflects the initial authentication)
- `expires_at` remains unchanged (the upgrade does not extend the session)
The upgrade is performed as an in-place update to the existing session row. No new session is created.
### State: Expired
A session is considered expired when `expires_at < current_time`. Expired sessions are:
- Rejected by Barycenter on any request that checks for a valid session.
- Cleaned up by the `cleanup_expired_sessions` background job.
## MFA Upgrade During 2FA
When a user completes the [2FA flow](./2fa-flow.md), the session is upgraded in a single database update:
```sql
UPDATE sessions
SET mfa_verified = 1,
    acr = 'aal2',
    amr = '["pwd", "hwk"]'
WHERE session_id = ?
```
This atomic update ensures that the session is either fully upgraded or not changed at all. There is no intermediate state where `mfa_verified = 1` but `acr` still reads `"aal1"`.
## Session Cleanup
Expired sessions accumulate in the database until they are removed by the `cleanup_expired_sessions` background job.
| Job Name | Schedule | Action |
|--------------------------------|--------------|-------------------------------------------|
| `cleanup_expired_sessions` | Hourly (:00) | Deletes all sessions where `expires_at < now` |
The cleanup job runs as part of Barycenter's background job scheduler. It can also be triggered manually via the [Admin GraphQL API](../admin/graphql-api.md):
```graphql
mutation {
  triggerJob(jobName: "cleanup_expired_sessions") {
    success
    message
  }
}
```
## Logout
Users can explicitly terminate their session by sending a POST request to the logout endpoint:
```
POST /logout
Cookie: session=<session_id>
```
On logout, Barycenter:
1. Deletes the session record from the database.
2. Clears the session cookie by setting it with an expired `Max-Age`.
3. Redirects the user to the login page (or a configured post-logout URL).
## Session Validation Flow
On each request that requires authentication, Barycenter validates the session:
```
1. Extract session_id from cookie
   |
   +-- No cookie? --> 401 Unauthenticated
   |
2. Look up session in database
   |
   +-- Not found? --> 401 Unauthenticated (cookie is stale)
   |
3. Check expires_at > current_time
   |
   +-- Expired? --> 401 Unauthenticated
   |
4. Session is valid. Read subject, amr, acr, mfa_verified.
```
This validation is performed for every authenticated endpoint, including the authorization endpoint, passkey registration, passkey management, and the 2FA verification endpoints (which require a partial session).
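The four steps above can be sketched as a single function. Names (`validateSession`, the `sessions` map) are hypothetical; Barycenter performs the lookup against its database:

```javascript
// Sketch of the session validation steps described above.
function validateSession(sessionId, sessions, now = Date.now()) {
  if (!sessionId) return { status: 401 };                // 1. no cookie
  const session = sessions.get(sessionId);
  if (!session) return { status: 401 };                  // 2. stale cookie
  if (session.expires_at <= now) return { status: 401 }; // 3. expired
  // 4. valid: expose the fields authenticated endpoints need
  const { subject, amr, acr, mfa_verified } = session;
  return { status: 200, subject, amr, acr, mfa_verified };
}
```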
## Security Considerations
- **Session ID entropy**: 192 bits of cryptographic randomness makes brute-force guessing infeasible.
- **Server-side storage**: All session data is stored in the database, not in the cookie. The cookie contains only the opaque identifier.
- **HttpOnly**: JavaScript cannot read the session cookie, protecting against XSS.
- **SameSite=Lax**: Provides CSRF protection while remaining compatible with OAuth redirects.
- **Secure flag**: In production, the cookie is only transmitted over HTTPS.
- **No session fixation**: A new session ID is generated on every successful login. Pre-existing session IDs are never reused.

# Sessions
Barycenter uses server-side sessions to track authenticated users across requests. Sessions are created during login and persist until they expire or are explicitly terminated.
## What Is a Session?
A session represents an authenticated user's state on the server. When a user logs in (via password, passkey, or both), Barycenter creates a session record in the database and returns a session identifier to the browser as a cookie. Subsequent requests include this cookie, allowing Barycenter to identify the user without requiring re-authentication on every request.
## Session Data
Each session record in the database contains:
| Column | Type | Description |
|----------------|-----------|-----------------------------------------------------------------------|
| `session_id` | String | Random 24-byte base64url-encoded identifier. Used as the cookie value.|
| `subject` | String | The authenticated user's subject identifier (stable, unique ID). |
| `auth_time` | Timestamp | When the session was created (initial authentication time). |
| `expires_at` | Timestamp | When the session will expire and be cleaned up. |
| `amr` | JSON | Authentication Methods Reference -- array of method identifiers. |
| `acr` | String | Authentication Context Class Reference -- assurance level. |
| `mfa_verified` | Integer | Whether multi-factor authentication has been completed (`0` or `1`). |
## Session Identifiers
Session IDs are generated using `random_id()`, which produces 24 cryptographically random bytes encoded as a base64url string. This provides 192 bits of entropy, making session ID guessing infeasible.
The session ID is the only value sent to the browser. All other session data remains on the server and is looked up by the session ID on each request.
## Authentication Tracking
Sessions track how the user authenticated, which is propagated to ID tokens issued during the session:
### AMR (Authentication Methods Reference)
The `amr` field is a JSON array recording which authentication methods were used:
| Value | Method |
|--------|----------------------------------------|
| `pwd` | Password authentication |
| `hwk` | Hardware-bound passkey (YubiKey, etc.) |
| `swk` | Software/cloud passkey (iCloud, etc.) |
After a password-only login: `["pwd"]`
After a passkey-only login: `["hwk"]` or `["swk"]`
After password + passkey 2FA: `["pwd", "hwk"]` or `["pwd", "swk"]`
### ACR (Authentication Context Class Reference)
The `acr` field records the authentication assurance level:
| Value | Meaning |
|--------|---------------------------------------------|
| `aal1` | Single-factor authentication |
| `aal2` | Two-factor authentication (MFA verified) |
See [AMR and ACR Claims](./amr-acr.md) for a detailed explanation of how these values are determined and used.
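The mapping between the two tables above can be sketched as follows. A hedged illustration consistent with the documented values, not Barycenter's actual logic:

```javascript
// Sketch: derive acr and mfa_verified from the amr method list.
function deriveAcr(amr) {
  const twoFactor =
    amr.includes("pwd") && (amr.includes("hwk") || amr.includes("swk"));
  return {
    acr: twoFactor ? "aal2" : "aal1",
    mfa_verified: twoFactor ? 1 : 0,
  };
}
```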
## Session Lifecycle
Sessions follow a defined lifecycle from creation through potential upgrade to eventual expiration:
1. **Creation**: A session is created on successful login with initial `amr`, `acr`, and `mfa_verified` values.
2. **Upgrade** (optional): If 2FA is completed, `amr` gains a second method, `acr` becomes `"aal2"`, and `mfa_verified` becomes `1`.
3. **Use**: The session is validated on each request by checking the cookie against the database.
4. **Expiration**: Sessions expire based on `expires_at`. Expired sessions are cleaned up by a background job.
5. **Logout**: Users can explicitly end their session via `POST /logout`.
See [Session Lifecycle](./session-lifecycle.md) for details on cookie settings, TTL, and cleanup.
## Session and the OAuth Flow
During an OAuth authorization request, the session serves several purposes:
- **Authentication check**: If a valid session exists, the user does not need to re-authenticate (unless `prompt=login` or `max_age` requires it).
- **2FA state**: The `mfa_verified` flag determines whether the user needs to complete a second factor.
- **ID token claims**: `auth_time`, `amr`, and `acr` from the session are included in the issued ID token.
## Further Reading
- [AMR and ACR Claims](./amr-acr.md) -- detailed explanation of authentication method tracking
- [Session Lifecycle](./session-lifecycle.md) -- cookie settings, TTL, and cleanup jobs
- [Two-Factor Authentication](./two-factor.md) -- how sessions are upgraded during 2FA
- [Password Authentication](./password.md) -- how sessions are created during login

# Two-Factor Authentication
Barycenter supports two-factor authentication (2FA) to provide a higher level of assurance for sensitive operations. When 2FA is required, users must authenticate with both a password and a passkey before the authorization flow can proceed.
## Overview
Two-factor authentication in Barycenter means combining two distinct authentication methods:
1. **First factor**: Password authentication (`amr: "pwd"`)
2. **Second factor**: Passkey verification (`amr: "hwk"` or `"swk"`)
After both factors are verified, the session is upgraded to:
- `amr`: `["pwd", "hwk"]` or `["pwd", "swk"]`
- `acr`: `"aal2"` (Authentication Assurance Level 2)
- `mfa_verified`: `1`
These values are propagated to the [ID Token](../oidc/id-token.md) so that relying parties can make authorization decisions based on the strength of the authentication.
## Three Modes of 2FA
Barycenter provides three mechanisms for triggering two-factor authentication, each suited to different operational needs:
### 1. Admin-Enforced 2FA
An administrator sets a per-user flag requiring 2FA for every login. This is the strongest enforcement mode -- the user cannot bypass the second factor regardless of what they are accessing.
- Configured via the [Admin GraphQL API](../admin/graphql-api.md) using the `setUser2faRequired` mutation.
- Stored as `requires_2fa = 1` in the `users` table.
- Takes effect on the next login attempt.
See [Admin-Enforced 2FA](./2fa-admin-enforced.md) for details.
### 2. Context-Based 2FA
The authorization request itself triggers 2FA based on the sensitivity of the operation. This allows applications to require stronger authentication for high-risk actions without mandating 2FA for routine access.
Two conditions can trigger context-based 2FA:
- **High-value scopes**: Authorization requests that include scopes such as `admin`, `payment`, `transfer`, or `delete`.
- **Fresh authentication**: Authorization requests with `max_age` less than 300 seconds, indicating the relying party requires a recent, strong authentication.
See [Context-Based 2FA](./2fa-context-based.md) for details.
### 3. User-Optional 2FA
Users will be able to enable 2FA for their own accounts through a self-service settings page, independent of administrator policy.
> **Note**: This mode is not yet implemented. See [User-Optional 2FA](./2fa-user-optional.md) for the planned functionality.
## When Is 2FA Triggered?
During the authorization flow, Barycenter evaluates whether 2FA is required by checking the following conditions in order:
| Check | Trigger Condition |
|-------|-------------------|
| User has `requires_2fa = 1` | Admin-enforced 2FA is active for this user. |
| Requested scope includes a high-value scope | `admin`, `payment`, `transfer`, or `delete`. |
| `max_age` parameter is less than 300 | Relying party requires fresh strong auth. |
If any condition is met and the current session does not already have `mfa_verified = 1`, the user is redirected to `/login/2fa` to complete the second factor.
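The trigger checks in the table above can be condensed into a single predicate. This is a minimal sketch with illustrative names, not Barycenter's actual internals:

```rust
// Scopes the documentation lists as high-value.
const HIGH_VALUE_SCOPES: &[&str] = &["admin", "payment", "transfer", "delete"];

/// Returns true when the 2FA step must be completed before authorization.
fn requires_2fa(user_requires_2fa: bool, scopes: &[&str], max_age: Option<u64>) -> bool {
    user_requires_2fa                                            // admin-enforced
        || scopes.iter().any(|s| HIGH_VALUE_SCOPES.contains(s))  // high-value scope
        || max_age.map_or(false, |age| age < 300)                // fresh auth required
}

fn main() {
    assert!(requires_2fa(true, &["openid"], None));
    assert!(requires_2fa(false, &["openid", "payment"], None));
    assert!(requires_2fa(false, &["openid"], Some(60)));
    assert!(!requires_2fa(false, &["openid", "profile"], None));
    println!("ok");
}
```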
## 2FA Flow Summary
The high-level 2FA flow is:
1. User authenticates with password at `/login`.
2. A **partial session** is created with `mfa_verified = 0`.
3. Barycenter determines that 2FA is required.
4. User is redirected to `/login/2fa`.
5. User completes passkey verification via `/webauthn/2fa/start` and `/webauthn/2fa/finish`.
6. The session is **upgraded**: `mfa_verified = 1`, `acr = "aal2"`, `amr` includes both methods.
7. User is redirected back to `/authorize` to complete the OAuth flow.
See [2FA Flow Walkthrough](./2fa-flow.md) for a detailed sequence diagram.
## Passkey Enrollment Requirement
Two-factor authentication requires that the user has at least one registered passkey. If a user has `requires_2fa = 1` but no enrolled passkeys, the 2FA step cannot be completed.
Administrators should ensure that users enroll a passkey before enabling mandatory 2FA. The [user2faStatus](../admin/user-2fa.md) GraphQL query can check whether a user has passkeys enrolled.
## Further Reading
- [Admin-Enforced 2FA](./2fa-admin-enforced.md) -- per-user enforcement via GraphQL
- [Context-Based 2FA](./2fa-context-based.md) -- scope and max_age triggers
- [User-Optional 2FA](./2fa-user-optional.md) -- planned self-service enrollment
- [2FA Flow Walkthrough](./2fa-flow.md) -- complete sequence diagram
- [AMR and ACR Claims](./amr-acr.md) -- how authentication strength is represented in tokens

# ABAC Rules and Conditions
A `rule` node defines an Attribute-Based Access Control (ABAC) policy that can allow or deny access based on contextual attributes provided at evaluation time. Rules complement the relationship-based [grants](./grants-tuples.md) model by adding conditional logic that depends on dynamic properties such as time of day, IP address, or application-specific metadata.
## Syntax
```kdl
rule "<rule_name>" effect="<allow|deny>" {
    permissions {
        - "<resource_type>:<permission_name>"
        // ...
    }
    principals {
        - "<principal_pattern>"
        // ...
    }
    condition "<expression>"
}
```
### Components
| Component | Required | Description |
|-----------|----------|-------------|
| `rule_name` | Yes | A descriptive identifier for this rule. Used in logging and debugging. |
| `effect` | Yes | The action to take when the rule matches: `"allow"` or `"deny"`. |
| `permissions` | Yes | Fully-qualified permissions that this rule applies to. |
| `principals` | Yes | Principal patterns that this rule applies to. |
| `condition` | No | A boolean expression evaluated against the request context. If omitted, the rule applies unconditionally when the permissions and principals match. |
## Effect: Allow vs. Deny
The `effect` attribute determines what happens when a rule matches:
| Effect | Behavior |
|--------|----------|
| `allow` | Grants access to the specified permissions for the matched principals, subject to the condition. Acts as an alternative to grant-based access. |
| `deny` | Blocks access regardless of any matching grants or allow rules. Deny rules always take precedence. |
The authorization engine follows a **deny-overrides** evaluation strategy:
1. If any `deny` rule matches, the request is denied.
2. If a grant or an `allow` rule matches, the request is allowed.
3. If nothing matches, the request is denied (default deny).
```mermaid
flowchart TD
    A["Evaluate all matching rules"] --> B{"Any deny\nrule matches?"}
    B -- Yes --> C["DENIED"]
    B -- No --> D{"Grant found OR\nallow rule matches?"}
    D -- Yes --> E["ALLOWED"]
    D -- No --> F["DENIED\n(default)"]
    style C fill:#f8d7da,stroke:#721c24
    style E fill:#d4edda,stroke:#155724
    style F fill:#f8d7da,stroke:#721c24
```
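The deny-overrides strategy can be condensed into a small decision function. This is an illustrative sketch, not the engine's actual implementation:

```rust
#[derive(PartialEq)]
enum Effect { Allow, Deny }

/// `matched_rules`: effects of all rules whose permissions, principals,
/// and condition matched. `grant_found`: whether a relationship tuple matched.
fn decide(matched_rules: &[Effect], grant_found: bool) -> bool {
    if matched_rules.contains(&Effect::Deny) {
        return false;                                         // 1. any deny wins
    }
    grant_found || matched_rules.contains(&Effect::Allow)     // 2. allow, else 3. default deny
}

fn main() {
    assert!(!decide(&[Effect::Allow, Effect::Deny], true)); // deny overrides everything
    assert!(decide(&[], true));                             // grant alone allows
    assert!(decide(&[Effect::Allow], false));               // allow rule alone allows
    assert!(!decide(&[], false));                           // default deny
    println!("ok");
}
```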
## Permissions Block
The `permissions` block lists the fully-qualified permissions that this rule applies to. A rule is only evaluated for check requests whose requested permission appears in this list.
```kdl
rule "AllowReadDuringBusinessHours" effect="allow" {
    permissions {
        - "document:read"
        - "document:list"
    }
    // ...
}
```
The permission format is the same as in [role definitions](./roles-inheritance.md): `resource_type:permission_name`.
## Principals Block
The `principals` block specifies which principals the rule applies to. Each entry is a pattern that is matched against the principal in the check request.
```kdl
rule "DenyContractorDeleteAccess" effect="deny" {
    permissions {
        - "document:delete"
        - "project:delete"
    }
    principals {
        - "group:contractors"
    }
    // ...
}
```
### Principal Patterns
| Pattern | Matches | Example |
|---------|---------|---------|
| `"*"` | All principals | Applies to everyone |
| `"user:alice"` | A specific user | Only the principal `user/alice` |
| `"group:engineering"` | A specific group | Principals in the `engineering` group |
| `"service:*"` | All service accounts | Any principal with type `service` |
The wildcard `"*"` is useful for rules that should apply universally, such as time-based restrictions or global deny policies:
```kdl
rule "DenyAllAccessDuringMaintenance" effect="deny" {
    permissions {
        - "vm:start"
        - "vm:stop"
        - "vm:delete"
    }
    principals {
        - "*"
    }
    condition "environment.maintenance_mode == true"
}
```
## Condition Expressions
The `condition` attribute contains a boolean expression that is evaluated against the context JSON object provided in the check request. If the condition evaluates to `true`, the rule's effect is applied. If it evaluates to `false`, the rule is skipped.
### Context Object
The context is provided in the check request body:
```json
{
  "principal": "user/alice",
  "permission": "document:write",
  "resource": "document/q4-report",
  "context": {
    "request": {
      "time": {
        "hour": 14,
        "day_of_week": "Tuesday"
      },
      "ip": "10.0.1.42",
      "source": "internal"
    },
    "environment": {
      "maintenance_mode": false,
      "region": "us-east-1"
    }
  }
}
```
### Expression Syntax
Condition expressions support standard comparison and logical operators:
| Category | Operators | Example |
|----------|-----------|---------|
| Comparison | `==`, `!=`, `<`, `>`, `<=`, `>=` | `request.time.hour >= 9` |
| Logical | `&&`, `\|\|`, `!` | `request.time.hour >= 9 && request.time.hour < 17` |
| Grouping | `(` `)` | `(a > 1) \|\| (b < 2)` |
| String equality | `==`, `!=` | `request.source == "internal"` |
| Boolean | `== true`, `== false` | `environment.maintenance_mode == true` |
Expressions use dot notation to traverse the context JSON object. For example, `request.time.hour` accesses the `hour` field nested under `request.time` in the context.
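Dot-notation traversal can be sketched over a nested context structure. The types here are illustrative; the real evaluator operates on the JSON context from the check request:

```rust
use std::collections::HashMap;

// A minimal context value: numbers and nested maps are enough for the sketch.
enum Value {
    Num(i64),
    Map(HashMap<String, Value>),
}

// Walk a dot-separated path, descending through nested maps.
fn lookup<'a>(root: &'a Value, path: &str) -> Option<&'a Value> {
    path.split('.').try_fold(root, |v, key| match v {
        Value::Map(m) => m.get(key),
        _ => None, // tried to descend into a scalar
    })
}

fn main() {
    // context: { "request": { "time": { "hour": 14 } } }
    let mut time = HashMap::new();
    time.insert("hour".to_string(), Value::Num(14));
    let mut request = HashMap::new();
    request.insert("time".to_string(), Value::Map(time));
    let mut ctx = HashMap::new();
    ctx.insert("request".to_string(), Value::Map(request));
    let root = Value::Map(ctx);

    // "request.time.hour" resolves to 14, so the business-hours check holds
    match lookup(&root, "request.time.hour") {
        Some(Value::Num(h)) => assert!(*h >= 9 && *h < 17),
        _ => panic!("path not found"),
    }
    // Missing paths resolve to None rather than erroring
    assert!(lookup(&root, "request.ip").is_none());
    println!("ok");
}
```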
### Expression Examples
**Time-based restrictions:**
```kdl
// Only allow during business hours (9 AM to 5 PM)
condition "request.time.hour >= 9 && request.time.hour < 17"
// Deny on weekends
condition "request.time.day_of_week == \"Saturday\" || request.time.day_of_week == \"Sunday\""
```
**Network-based restrictions:**
```kdl
// Only allow from internal network
condition "request.source == \"internal\""
// Deny from specific regions
condition "environment.region != \"us-restricted-1\""
```
**Feature flags and operational controls:**
```kdl
// Block during maintenance
condition "environment.maintenance_mode == true"
// Only allow when feature is enabled
condition "environment.feature_flags.new_dashboard == true"
```
## Complete Rule Examples
### Business Hours Restriction
Allow members of the finance group to view invoices only during business hours:
```kdl
rule "AllowFinanceViewDuringBusinessHours" effect="allow" {
    permissions {
        - "invoice:view"
        - "invoice:list"
    }
    principals {
        - "group:finance"
    }
    condition "request.time.hour >= 9 && request.time.hour < 17"
}
```
### Maintenance Window Lockout
Deny all destructive operations during maintenance windows:
```kdl
rule "DenyDestructiveDuringMaintenance" effect="deny" {
    permissions {
        - "vm:stop"
        - "vm:delete"
        - "vm:resize"
        - "document:delete"
        - "project:delete"
    }
    principals {
        - "*"
    }
    condition "environment.maintenance_mode == true"
}
```
### Environment-Based Access Tiers
Restrict production access to senior engineers:
```kdl
rule "DenyProductionForJuniors" effect="deny" {
    permissions {
        - "vm:start"
        - "vm:stop"
        - "vm:delete"
    }
    principals {
        - "group:junior-engineers"
    }
    condition "request.resource_environment == \"production\""
}
```
### Unconditional Deny
A rule without a condition acts as an unconditional policy. This is useful for hard access boundaries:
```kdl
rule "DenyContractorDelete" effect="deny" {
    permissions {
        - "document:delete"
        - "project:delete"
        - "workspace:delete"
    }
    principals {
        - "group:contractors"
    }
}
```
This rule denies delete operations for contractors regardless of any context, grants, or other allow rules.
## Combining Rules with Grants
Rules and grants work together in the evaluation pipeline. A common pattern is to use grants for the baseline access model and rules for contextual restrictions:
```kdl
// Baseline: Alice is a VM admin (via grant)
grant "vm_admin" on="vm/prod-web-1" to="user/alice"

// Restriction: Nobody can delete VMs during maintenance (via rule)
rule "DenyDeleteDuringMaintenance" effect="deny" {
    permissions {
        - "vm:delete"
    }
    principals {
        - "*"
    }
    condition "environment.maintenance_mode == true"
}
```
Even though Alice has `vm_admin` (which includes `vm:delete`), the deny rule blocks deletion during maintenance. When maintenance mode is off, Alice's grant-based access works normally.
## Evaluation Order
Rules are evaluated in the order they appear in the policy files, but the order does not affect the final result because deny-overrides is applied as a set operation:
1. All rules matching the requested permission are collected.
2. Each rule's principal pattern is checked against the requesting principal.
3. Each matching rule's condition is evaluated against the provided context.
4. If **any** deny rule matches, the result is deny.
5. If **any** allow rule matches (and no deny rule matched), the result is allow.
This means you cannot "override" a deny rule with a later allow rule. Deny is always final.
## Further Reading
- [Overview](./overview.md) -- the complete evaluation pipeline
- [Grants and Relationship Tuples](./grants-tuples.md) -- the ReBAC grant model that rules augment
- [Authz REST API](./rest-api.md) -- passing context in check requests
- [Configuration and Deployment](./configuration.md) -- file organization and loading

# Configuration and Deployment
The authorization policy engine is an optional component of Barycenter that must be explicitly enabled. This page covers the configuration options, file layout, and deployment considerations for running the engine in development and production environments.
## Enabling the Engine
The authorization engine is disabled by default. To enable it, add the `[authz]` section to your configuration file:
```toml
[authz]
enabled = true
```
When `enabled` is `false` or the `[authz]` section is absent, Barycenter does not start the authorization server, does not load policy files, and does not bind the authorization port.
## Configuration Reference
All authorization configuration lives under the `[authz]` section:
```toml
[authz]
enabled = true
port = 8082
policies_dir = "policies"
```
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| `enabled` | boolean | `false` | Whether to start the authorization policy engine. |
| `port` | integer | Main port + 2 (e.g., `8082`) | The TCP port the authorization REST API listens on. |
| `policies_dir` | string | `"policies"` | Path to the directory containing `.kdl` policy files. Relative paths are resolved from the working directory. |
### Environment Variable Overrides
Like all Barycenter settings, authorization configuration can be overridden with environment variables using the `BARYCENTER__` prefix and double underscores as separators:
```bash
export BARYCENTER__AUTHZ__ENABLED=true
export BARYCENTER__AUTHZ__PORT=9082
export BARYCENTER__AUTHZ__POLICIES_DIR="/etc/barycenter/policies"
```
## Policy Directory Layout
The engine loads all files with the `.kdl` extension from the configured `policies_dir` directory. Files are loaded in alphabetical order, but load order does not affect evaluation semantics -- all nodes are merged into a single `AuthzState`.
### Recommended Structure
```
policies/
  01-resources.kdl        # Resource type definitions
  02-roles.kdl            # Role definitions with inheritance
  10-grants-infra.kdl     # Infrastructure team grants
  10-grants-platform.kdl  # Platform team grants
  10-grants-services.kdl  # Service account grants
  20-rules.kdl            # ABAC rules and conditions
```
Using numeric prefixes makes the load order predictable for humans reading the directory listing, even though it does not change evaluation behavior.
### Minimal Example
A minimal policy directory with a single file:
```
policies/
  policy.kdl
```
```kdl
// policies/policy.kdl
resource "api" {
    permissions {
        - "read"
        - "write"
    }
}

role "api_reader" {
    permissions {
        - "api:read"
    }
}

role "api_writer" {
    includes {
        - "api_reader"
    }
    permissions {
        - "api:write"
    }
}

grant "api_writer" on="api/v1" to="user/admin"
```
## Port Allocation
Barycenter uses a three-port architecture. The authorization engine occupies the third port:
| Server | Default Port | Configuration Key |
|--------|-------------|-------------------|
| Public OIDC | 8080 | `server.port` |
| Admin GraphQL | 8081 | (main port + 1, not separately configurable) |
| Authorization API | 8082 | `authz.port` |
If you change the main server port, the authorization port default follows unless you set `authz.port` explicitly:
```toml
[server]
port = 9090
[authz]
enabled = true
# port defaults to 9092 (9090 + 2)
```
To use a fixed port regardless of the main server port:
```toml
[authz]
enabled = true
port = 8082
```
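The port defaulting described above amounts to a one-line fallback. An illustrative sketch (field names are not Barycenter's actual config types):

```rust
// authz.port falls back to the main server port + 2 when unset.
fn authz_port(server_port: u16, configured: Option<u16>) -> u16 {
    configured.unwrap_or(server_port + 2)
}

fn main() {
    assert_eq!(authz_port(8080, None), 8082);       // default follows main port
    assert_eq!(authz_port(9090, None), 9092);
    assert_eq!(authz_port(9090, Some(8082)), 8082); // explicit pin wins
    println!("ok");
}
```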
## Immutability and Reloading
Policies are immutable after loading. The `AuthzState` is built once during startup and shared as read-only state for the lifetime of the process. To apply policy changes:
1. Edit the `.kdl` files in `policies_dir`.
2. Restart the Barycenter process.
This design provides several guarantees:
- **Consistency**: All authorization decisions within a single process lifetime use the same policy set.
- **Performance**: No locks, mutexes, or file watchers are needed during evaluation.
- **Auditability**: The policy set active at any given time is the set of files present when the process started.
In containerized deployments, this maps naturally to a rolling update: build a new container image (or update a ConfigMap), and let the orchestrator replace old pods with new ones.
## Docker Deployment
Mount the policy directory into the container:
```bash
docker run -d \
  -p 8080:8080 \
  -p 8081:8081 \
  -p 8082:8082 \
  -v ./policies:/app/policies:ro \
  -v ./config.toml:/app/config.toml:ro \
  barycenter:latest
```
The `:ro` (read-only) mount flag is recommended since the engine only reads policy files at startup.
### Docker Compose
```yaml
services:
  barycenter:
    image: barycenter:latest
    ports:
      - "8080:8080"
      - "8081:8081"
      - "8082:8082"
    volumes:
      - ./config.toml:/app/config.toml:ro
      - ./policies:/app/policies:ro
    environment:
      - BARYCENTER__AUTHZ__ENABLED=true
```
## Kubernetes Deployment
### Helm Chart Values
If you are using the Barycenter Helm chart, enable the authorization engine in your values file:
```yaml
authz:
  enabled: true
  port: 8082
  # Policy files are stored in a ConfigMap
  policies:
    configMapName: barycenter-authz-policies
```
### Policy ConfigMap
Store your policy files in a Kubernetes ConfigMap:
```bash
kubectl create configmap barycenter-authz-policies \
  --from-file=policies/
```
Or declare it in a manifest:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: barycenter-authz-policies
data:
  resources.kdl: |
    resource "vm" {
        relations {
            - "owner"
            - "viewer"
        }
        permissions {
            - "start"
            - "stop"
            - "view_console"
        }
    }
  roles.kdl: |
    role "vm_viewer" {
        permissions {
            - "vm:view_console"
        }
    }
    role "vm_admin" {
        includes {
            - "vm_viewer"
        }
        permissions {
            - "vm:start"
            - "vm:stop"
        }
    }
  grants.kdl: |
    grant "vm_admin" on="vm/prod-web-1" to="user/alice"
    grant "vm_viewer" on="vm/prod-web-1" to="group/sre#member"
```
Mount the ConfigMap as a volume in the pod spec:
```yaml
spec:
  containers:
    - name: barycenter
      volumeMounts:
        - name: authz-policies
          mountPath: /app/policies
          readOnly: true
  volumes:
    - name: authz-policies
      configMap:
        name: barycenter-authz-policies
```
### Service Configuration
Expose the authorization port as a separate Kubernetes Service so that backend services can reach it independently:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: barycenter-authz
spec:
  selector:
    app: barycenter
  ports:
    - name: authz
      port: 8082
      targetPort: 8082
```
Backend services can then call the authorization API at `http://barycenter-authz:8082/v1/check`.
### NetworkPolicy Considerations
The authorization API should typically be accessible only from backend services within the cluster, not from external traffic. A NetworkPolicy can enforce this:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: barycenter-authz-policy
spec:
  podSelector:
    matchLabels:
      app: barycenter
  policyTypes:
    - Ingress
  ingress:
    # Allow authorization API traffic only from pods with the "uses-authz" label
    - from:
        - podSelector:
            matchLabels:
              uses-authz: "true"
      ports:
        - protocol: TCP
          port: 8082
    # Allow public OIDC traffic from the ingress controller
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
      ports:
        - protocol: TCP
          port: 8080
```
This policy ensures that:
- Port 8082 (authorization) is only reachable from pods labeled `uses-authz: "true"`.
- Port 8080 (OIDC) is reachable from the ingress controller namespace.
- Port 8081 (admin) is not accessible from outside the pod (unless you add an explicit rule).
### Updating Policies in Kubernetes
Since policies are immutable at runtime, updating them requires a pod restart. The typical workflow is:
1. Update the ConfigMap with new policy files:
```bash
kubectl create configmap barycenter-authz-policies \
  --from-file=policies/ \
  --dry-run=client -o yaml | kubectl apply -f -
```
2. Trigger a rollout restart to pick up the new policies:
```bash
kubectl rollout restart deployment/barycenter
```
3. Verify the rollout:
```bash
kubectl rollout status deployment/barycenter
```
For automated policy deployments, consider adding the ConfigMap hash as a pod annotation so that Kubernetes automatically restarts pods when policies change:
```yaml
spec:
  template:
    metadata:
      annotations:
        checksum/authz-policies: {{ include (print $.Template.BasePath "/configmap-authz.yaml") . | sha256sum }}
```
## Monitoring
### Health Check
The `/healthz` endpoint is available for liveness and readiness probes:
```bash
curl http://localhost:8082/healthz
```
See [Authz REST API](./rest-api.md) for details on probe configuration.
### Logging
The authorization engine uses the same `RUST_LOG` environment variable as the rest of Barycenter. To see authorization-specific logs:
```bash
RUST_LOG=barycenter::authz=debug cargo run
```
At the `debug` level, the engine logs:
- Policy file loading and parsing results
- Number of resources, roles, grants, and rules loaded
- Individual check request evaluations (principal, permission, resource, result)
At the `trace` level, additional detail is logged:
- Role inheritance resolution
- Tuple index construction
- Condition expression evaluation steps
## Further Reading
- [Overview](./overview.md) -- what the authorization engine is and how it works
- [KDL Policy Language](./kdl-policy-language.md) -- writing policy files
- [Authz REST API](./rest-api.md) -- the HTTP endpoints exposed by the engine
- [Architecture](../getting-started/architecture.md) -- the three-port design

# Grants and Relationship Tuples
A `grant` node creates a relationship tuple that assigns a [role](./roles-inheritance.md) to a principal on a specific [resource](./resources-permissions.md) instance. Grants are the data layer of the authorization model -- they answer the question "who has what role on which object?"
## Syntax
```kdl
grant "<role_name>" on="<object_ref>" to="<subject_ref>"
```
A grant is a single-line KDL node with three components:
| Component | Format | Description |
|-----------|--------|-------------|
| `role_name` | String argument | The name of a role defined by a `role` node. |
| `on` | `type/id` (ObjectRef) | The resource instance that the grant applies to. |
| `to` | `type/id` or `type/id#relation` (SubjectRef) | The principal receiving the role, optionally with a userset qualifier. |
## Object References (ObjectRef)
The `on` attribute uses the **ObjectRef** format to identify a specific resource instance:
```
<resource_type>/<instance_id>
```
| ObjectRef | Resource Type | Instance ID |
|-----------|--------------|-------------|
| `vm/prod-web-1` | `vm` | `prod-web-1` |
| `document/q4-report` | `document` | `q4-report` |
| `project/barycenter` | `project` | `barycenter` |
| `workspace/acme-corp` | `workspace` | `acme-corp` |
The resource type must match a `resource` node declared in the policy files. The instance ID is an opaque string -- it can be a UUID, a slug, a numeric ID, or any identifier your application uses.
## Subject References (SubjectRef)
The `to` attribute identifies the principal receiving the role. Subject references come in two forms:
### Direct Subject
A direct subject reference identifies a single principal:
```
<subject_type>/<subject_id>
```
```kdl
grant "vm_admin" on="vm/prod-web-1" to="user/alice"
grant "vm_viewer" on="vm/prod-web-1" to="service/monitoring-agent"
```
The subject type is not restricted to `user` -- it can be any type that makes sense in your domain:
| Subject Type | Example | Use Case |
|-------------|---------|----------|
| `user` | `user/alice` | Individual human users |
| `service` | `service/monitoring` | Machine-to-machine principals |
| `bot` | `bot/deploy-agent` | Automated agents |
| `team` | `team/sre` | Organizational units (direct assignment) |
### Userset Subject
A userset subject reference delegates the grant to all members of a group or relation:
```
<subject_type>/<subject_id>#<relation>
```
```kdl
grant "vm_viewer" on="vm/prod-web-1" to="group/sre#member"
grant "document_editor" on="document/q4-report" to="team/finance#lead"
```
The `#relation` suffix means "all entities that have the specified relation to this subject." For example, `group/sre#member` means "all members of the SRE group."
This is a key building block for scaling authorization. Instead of granting roles to every individual user, you grant roles to a group's membership and manage who belongs to the group separately.
### Userset Example
Consider a scenario where you want all members of the engineering team to view a VM:
```kdl
// Grant the vm_viewer role to all members of the engineers group
grant "vm_viewer" on="vm/staging-1" to="group/engineers#member"
```
When evaluating a check request for `user/bob` on `vm/staging-1`, the engine will:
1. Look up grants on `vm/staging-1`.
2. Find the grant to `group/engineers#member`.
3. Check whether `user/bob` has the `member` relation to `group/engineers`.
4. If yes, the grant applies and Bob holds the `vm_viewer` role on `vm/staging-1`.
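Steps 3 and 4 amount to a subject-matching predicate. A minimal sketch with illustrative names, supplying the membership data as a simple set:

```rust
use std::collections::HashSet;

/// A grant subject is either a direct principal ("user/alice") or a
/// userset reference ("group/engineers#member").
fn subject_matches(
    subject: &str,
    principal: &str,
    memberships: &HashSet<(&str, &str, &str)>, // (group, relation, member)
) -> bool {
    match subject.split_once('#') {
        // Userset: does the principal have this relation to the subject?
        Some((object, relation)) => memberships.contains(&(object, relation, principal)),
        // Direct subject: exact match.
        None => subject == principal,
    }
}

fn main() {
    let mut memberships = HashSet::new();
    memberships.insert(("group/engineers", "member", "user/bob"));

    // grant "vm_viewer" on="vm/staging-1" to="group/engineers#member"
    assert!(subject_matches("group/engineers#member", "user/bob", &memberships));
    assert!(!subject_matches("group/engineers#member", "user/eve", &memberships));
    assert!(subject_matches("user/alice", "user/alice", &memberships));
    println!("ok");
}
```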
## Tuple Index
Grants are stored in an in-memory `TupleIndex` structure that supports efficient lookup from two directions:
```mermaid
graph LR
    subgraph "TupleIndex"
        OI["Object Index\n(object_type, object_id, relation)\n-> Vec&lt;SubjectRef&gt;"]
        SI["Subject Index\n(subject_type, subject_id)\n-> Vec&lt;(ObjectRef, relation)&gt;"]
    end
    Check["Check Request\n'Can alice start vm/prod-web-1?'"] --> OI
    Expand["Expand Request\n'Who can start vm/prod-web-1?'"] --> OI
    Reverse["Reverse Lookup\n'What can alice access?'"] --> SI
    style OI fill:#e8f4f8,stroke:#2980b9
    style SI fill:#eaf5ea,stroke:#27ae60
```
### Object Index
The primary index is keyed by `(object_type, object_id, relation)` and returns a list of subjects that hold that relation on that object. This index is used during permission checks:
**Lookup**: Given a permission and resource, the engine finds all roles that grant the permission (via the pre-computed `permission_roles` map), then queries the object index for each role on the specified resource to find matching subjects.
### Subject Index
The secondary index is keyed by `(subject_type, subject_id)` and returns a list of `(ObjectRef, relation)` pairs. This index supports reverse lookups -- finding all resources that a given subject has access to.
### Index Construction
Both indexes are built once during policy loading. Each `grant` node produces entries in both indexes:
```kdl
grant "vm_admin" on="vm/prod-web-1" to="user/alice"
```
Produces:
- **Object index entry**: `("vm", "prod-web-1", "vm_admin")` -> `[SubjectRef("user/alice")]`
- **Subject index entry**: `("user", "alice")` -> `[(ObjectRef("vm/prod-web-1"), "vm_admin")]`
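Building both entries from grant tuples can be sketched as follows. The types are illustrative; the real `TupleIndex` splits the ObjectRef into separate type and id components, collapsed into one string here:

```rust
use std::collections::HashMap;

type ObjectIndex<'a> = HashMap<(&'a str, &'a str), Vec<&'a str>>;      // (object, relation) -> subjects
type SubjectIndex<'a> = HashMap<&'a str, Vec<(&'a str, &'a str)>>;     // subject -> (object, relation)

// Each grant (role, object, subject) produces one entry in each index.
fn build_indexes<'a>(
    grants: &[(&'a str, &'a str, &'a str)],
) -> (ObjectIndex<'a>, SubjectIndex<'a>) {
    let mut object_index = ObjectIndex::new();
    let mut subject_index = SubjectIndex::new();
    for &(role, object, subject) in grants {
        object_index.entry((object, role)).or_default().push(subject);
        subject_index.entry(subject).or_default().push((object, role));
    }
    (object_index, subject_index)
}

fn main() {
    // grant "vm_admin" on="vm/prod-web-1" to="user/alice"
    let (obj, subj) = build_indexes(&[("vm_admin", "vm/prod-web-1", "user/alice")]);
    assert_eq!(obj[&("vm/prod-web-1", "vm_admin")], vec!["user/alice"]);
    assert_eq!(subj["user/alice"], vec![("vm/prod-web-1", "vm_admin")]);
    println!("ok");
}
```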
## Examples
### Per-User Access
Direct role assignment to individual users:
```kdl
grant "vm_admin" on="vm/prod-web-1" to="user/alice"
grant "vm_admin" on="vm/prod-web-2" to="user/alice"
grant "vm_viewer" on="vm/prod-web-1" to="user/bob"
grant "vm_operator" on="vm/staging-1" to="user/charlie"
```
### Group-Based Access
Using userset references to grant roles to group members:
```kdl
// All SRE team members can operate production VMs
grant "vm_operator" on="vm/prod-web-1" to="group/sre#member"
grant "vm_operator" on="vm/prod-web-2" to="group/sre#member"
grant "vm_operator" on="vm/prod-db-1" to="group/sre#member"
// All engineering members can view staging VMs
grant "vm_viewer" on="vm/staging-1" to="group/engineering#member"
grant "vm_viewer" on="vm/staging-2" to="group/engineering#member"
```
### Mixed Access Patterns
Combining direct and group-based grants for the same resource:
```kdl
// Alice owns the document
grant "document_owner" on="document/architecture-rfc" to="user/alice"
// The platform team can edit it
grant "document_editor" on="document/architecture-rfc" to="team/platform#member"
// Everyone in engineering can read it
grant "document_viewer" on="document/architecture-rfc" to="group/engineering#member"
```
### Service Accounts
Granting roles to non-human principals:
```kdl
// Monitoring service can view all production VMs
grant "vm_viewer" on="vm/prod-web-1" to="service/prometheus"
grant "vm_viewer" on="vm/prod-web-2" to="service/prometheus"
grant "vm_viewer" on="vm/prod-db-1" to="service/prometheus"
// Deploy agent can operate staging VMs
grant "vm_operator" on="vm/staging-1" to="service/deploy-agent"
```
## How Grants Are Evaluated
During a permission check, the engine follows this sequence:
1. **Map permission to roles**: Look up the requested permission (e.g., `vm:start`) in the `permission_roles` map to find all roles that grant it (e.g., `["vm_operator", "vm_admin"]`).
2. **Query the object index**: For each matching role, query the tuple index for the requested resource (e.g., all subjects with `vm_operator` or `vm_admin` on `vm/prod-web-1`).
3. **Match the principal**: Check whether the requesting principal appears in the returned subjects, either as a direct match or through a userset reference.
4. **Return result**: If a match is found, the grant contributes an "allow" signal to the evaluation pipeline. ABAC rules may still override the result (see [Overview](./overview.md) for the full pipeline).
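The four steps can be condensed into a compact check over the pre-computed structures. This sketch covers direct subjects only, without usersets or ABAC rules, and uses illustrative names:

```rust
use std::collections::HashMap;

fn check(
    permission_roles: &HashMap<&str, Vec<&str>>,          // permission -> granting roles
    object_index: &HashMap<(&str, &str), Vec<&str>>,      // (object, role) -> subjects
    principal: &str,
    permission: &str,
    object: &str,
) -> bool {
    permission_roles.get(permission).map_or(false, |roles| { // 1. map permission to roles
        roles.iter().any(|role| {
            object_index
                .get(&(object, *role))                       // 2. query the object index
                .map_or(false, |subs| subs.contains(&principal)) // 3. match the principal
        })
    })                                                       // 4. allow signal (or not)
}

fn main() {
    let permission_roles = HashMap::from([("vm:start", vec!["vm_operator", "vm_admin"])]);
    let object_index = HashMap::from([(("vm/prod-web-1", "vm_admin"), vec!["user/alice"])]);

    assert!(check(&permission_roles, &object_index, "user/alice", "vm:start", "vm/prod-web-1"));
    assert!(!check(&permission_roles, &object_index, "user/bob", "vm:start", "vm/prod-web-1"));
    println!("ok");
}
```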
## Further Reading
- [Resources and Permissions](./resources-permissions.md) -- declaring resource types referenced in ObjectRefs
- [Roles and Inheritance](./roles-inheritance.md) -- defining the roles referenced in grants
- [ABAC Rules and Conditions](./abac-rules.md) -- conditional policies that can override grant-based access
- [Authz REST API](./rest-api.md) -- the `/v1/check` and `/v1/expand` endpoints that evaluate grants

# KDL Policy Language
Barycenter's authorization policies are written in [KDL](https://kdl.dev/) (KDL Document Language), a node-based configuration language that provides a clean, readable syntax for expressing access control rules. Policy files use the `.kdl` extension and are loaded from the configured `policies_dir` directory at startup.
## Why KDL?
KDL strikes a balance between the simplicity of TOML and the expressiveness of HCL. Its node-based structure maps naturally to the concepts in an authorization policy: resources have child nodes for relations and permissions, roles contain permission lists, and grants are single-line declarations with named attributes.
Key advantages for policy authoring:
- **Readable**: Node names (`resource`, `role`, `grant`, `rule`) read as natural-language declarations.
- **Structured**: Child blocks group related configuration without deeply nested braces.
- **Comments**: KDL supports `//` line comments and `/* */` block comments for documenting policy intent.
- **Familiar**: The syntax will feel natural to anyone who has worked with CSS, HCL, or similar formats.
## The Four Node Types
Every policy file is composed of four types of top-level nodes. Each node type serves a distinct role in the authorization model:
```mermaid
graph LR
    R["resource\n(types & permissions)"] --> Role["role\n(permission groups)"]
    Role --> G["grant\n(role assignments)"]
    R --> Rule["rule\n(ABAC conditions)"]
    style R fill:#e8f4f8,stroke:#2980b9
    style Role fill:#eaf5ea,stroke:#27ae60
    style G fill:#fdf2e9,stroke:#e67e22
    style Rule fill:#f5eef8,stroke:#8e44ad
```
### resource
A `resource` node declares a type of object in your system, along with the relations and permissions that apply to it. Resources are the foundation of the policy model -- they define _what_ can be acted upon and _what actions_ exist.
```kdl
resource "document" {
    relations {
        - "owner"
        - "editor"
        - "viewer"
    }
    permissions {
        - "read"
        - "write"
        - "delete"
        - "share"
    }
}
```
See [Resources and Permissions](./resources-permissions.md) for full syntax and examples.
### role
A `role` node groups permissions together and optionally inherits from other roles. Roles use fully-qualified permission names in the format `type:permission` to reference the permissions declared on resources.
```kdl
role "document_viewer" {
    permissions {
        - "document:read"
    }
}

role "document_editor" {
    includes {
        - "document_viewer"
    }
    permissions {
        - "document:write"
        - "document:share"
    }
}
```
See [Roles and Inheritance](./roles-inheritance.md) for details on composition and inheritance chains.
### grant
A `grant` node creates a relationship tuple that assigns a role to a principal on a specific resource instance. Grants are the data layer of the authorization model -- they express _who_ has _what role_ on _which object_.
```kdl
grant "document_editor" on="document/quarterly-report" to="user/alice"
grant "document_viewer" on="document/quarterly-report" to="group/accounting#member"
```
See [Grants and Relationship Tuples](./grants-tuples.md) for the full reference syntax and tuple indexing.
### rule
A `rule` node defines an attribute-based policy with a condition expression. Rules can allow or deny access based on properties of the request context, such as the current time, IP address, or custom attributes.
```kdl
rule "RestrictEditingToBusinessHours" effect="allow" {
    permissions {
        - "document:write"
    }
    principals {
        - "group:contractors"
    }
    condition "request.time.hour >= 9 && request.time.hour < 17"
}
```
See [ABAC Rules and Conditions](./abac-rules.md) for the complete rule syntax and condition language.
## File Organization
All `.kdl` files in the `policies_dir` directory are loaded and merged at startup. There is no required file naming convention, but a common pattern is to organize by resource type or domain:
```
policies/
    resources.kdl       # resource type definitions
    roles.kdl           # role definitions with inheritance
    grants-team-a.kdl   # grants for team A
    grants-team-b.kdl   # grants for team B
    rules.kdl           # ABAC rules and conditions
```
You can also put everything in a single file, or split it however makes sense for your organization. The engine merges all files into a single `AuthzState` before building its indexes.
### Loading Order
Files are loaded in alphabetical order, but the order does not affect evaluation semantics. All nodes are collected and indexed together. However, there are dependencies between node types:
| Node Type | May Reference |
|-----------|--------------|
| `resource` | Nothing (standalone declarations) |
| `role` | Permissions from `resource` nodes, other `role` nodes via `includes` |
| `grant` | `role` names, resource types and IDs |
| `rule` | Permissions from `resource` nodes |
If a role references a permission that does not exist on any resource, or a grant references an undefined role, the engine will log a warning at startup. Malformed references do not prevent loading but will never match during evaluation.
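The loading behavior can be sketched in a few lines; the function names here are illustrative, not Barycenter's actual loader:

```python
from pathlib import Path

def collect_policy_files(policies_dir: str) -> list[Path]:
    # All .kdl files in the directory, in alphabetical order.
    return sorted(Path(policies_dir).glob("*.kdl"))

def dangling_grant_roles(granted_roles: list[str], defined_roles: set[str]) -> list[str]:
    # Grants naming an undefined role are only warned about at startup;
    # they load fine but can never match during evaluation.
    return [r for r in granted_roles if r not in defined_roles]

print(dangling_grant_roles(["vm_admin", "vm_ghost"], {"vm_admin", "vm_viewer"}))  # ['vm_ghost']
```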
## Immutability
Policies are immutable after loading. The `AuthzState` structure that holds all resources, roles, rules, and tuple indexes is built once during startup and shared as read-only state across all request handlers.
To change policies:
1. Edit the `.kdl` files in `policies_dir`.
2. Commit the changes to version control.
3. Restart (or reload) the Barycenter service.
This design ensures that policy evaluation is lock-free and that all authorization decisions during a given process lifetime are consistent. It also makes policy changes fully auditable through your version control system.
## Example: Complete Policy File
Here is a minimal but complete policy file that demonstrates all four node types working together:
```kdl
// Define a resource type for virtual machines
resource "vm" {
    relations {
        - "owner"
        - "viewer"
    }
    permissions {
        - "start"
        - "stop"
        - "view_console"
    }
}

// Define roles with inheritance
role "vm_viewer" {
    permissions {
        - "vm:view_console"
    }
}

role "vm_admin" {
    includes {
        - "vm_viewer"
    }
    permissions {
        - "vm:start"
        - "vm:stop"
    }
}

// Assign roles to users and groups
grant "vm_admin" on="vm/prod-web-1" to="user/alice"
grant "vm_viewer" on="vm/prod-web-1" to="group/sre#member"

// Restrict stop operations to business hours
rule "AllowStopDuringBusinessHoursOnly" effect="deny" {
    permissions {
        - "vm:stop"
    }
    principals {
        - "*"
    }
    condition "request.time.hour < 6 || request.time.hour >= 22"
}
```
With this policy loaded, the check request `{ principal: "user/alice", permission: "vm:start", resource: "vm/prod-web-1" }` would be allowed (Alice has `vm_admin` which includes `vm:start`), while `{ principal: "user/alice", permission: "vm:stop", resource: "vm/prod-web-1", context: { "request.time.hour": 23 } }` would be denied by the ABAC rule.
## Further Reading
- [Resources and Permissions](./resources-permissions.md)
- [Roles and Inheritance](./roles-inheritance.md)
- [Grants and Relationship Tuples](./grants-tuples.md)
- [ABAC Rules and Conditions](./abac-rules.md)

# Overview
Barycenter includes a built-in authorization policy engine that evaluates access control decisions at runtime. The engine combines Relationship-Based Access Control (ReBAC) with Attribute-Based Access Control (ABAC) into a single evaluation pipeline, all configured through a declarative KDL-based policy language.
## What It Does
The authorization engine answers one question: **is this principal allowed to perform this action on this resource?** It does so by loading policies from `.kdl` files at startup, building an in-memory index of grants, roles, and rules, and exposing a REST API that services can call to check permissions.
Unlike external policy engines such as Open Policy Agent or SpiceDB, Barycenter's authorization engine runs inside the same process as the identity provider. This eliminates network hops for authorization decisions and simplifies deployment for teams that need both authentication and authorization in a single binary.
## Design Principles
- **Declarative policies**: All access control logic is expressed in KDL files, not application code.
- **Immutable at runtime**: Policies are loaded once at startup and cannot be modified without restarting the service. This guarantees consistent evaluation and makes policy changes auditable through version control.
- **Hybrid model**: ReBAC (grants and roles) handles structural relationships such as "Alice is an admin of VM-123." ABAC (rules with conditions) handles contextual decisions such as "allow access only during business hours."
- **Separate network surface**: The authorization API runs on its own port, isolated from both the public OIDC endpoints and the admin GraphQL API.
## How It Fits Into Barycenter
The authorization engine is the third server in Barycenter's [three-port architecture](../getting-started/architecture.md):
| Server | Default Port | Purpose |
|--------|-------------|---------|
| Public OIDC | 8080 | User-facing authentication and OAuth flows |
| Admin GraphQL | 8081 | Internal management and job control |
| **Authorization** | **8082** | **Policy evaluation API** |
All three servers share the same process, database connection pool, and application state. The authorization engine is optional -- it is only started when explicitly enabled in the configuration.
## Policy Evaluation Flow
When a service sends a check request to the authorization engine, the following evaluation pipeline runs:
```mermaid
flowchart TD
    A["POST /v1/check\n{principal, permission, resource, context}"] --> B["Find matching grants"]
    B --> C{"Principal has\ngranted role with\nthis permission?"}
    C -- Yes --> D["Evaluate ABAC rules"]
    C -- No --> E["Evaluate ABAC rules"]
    D --> F{"Any deny\nrule matches?"}
    E --> F
    F -- Yes --> G["DENIED"]
    F -- No --> H{"Grant found OR\nallow rule matches?"}
    H -- Yes --> I["ALLOWED"]
    H -- No --> G
```
The evaluation proceeds in three stages:
1. **Grant lookup**: The engine searches the tuple index for grants that connect the principal to the requested resource with a role that includes the requested permission. Role inheritance is resolved transitively -- if `vm_admin` includes `vm_viewer`, a principal granted `vm_admin` also holds all `vm_viewer` permissions.
2. **ABAC rule evaluation**: All rules whose permissions list matches the requested permission are evaluated. Each rule specifies an effect (`allow` or `deny`), a set of principals, and an optional condition expression. The condition is evaluated against the context JSON provided in the request.
3. **Decision**: If any `deny` rule matches, the request is denied regardless of grants. If a matching grant was found or an `allow` rule matches, the request is allowed. Otherwise, it is denied by default.
This means the engine follows a **deny-overrides** strategy: explicit deny rules always take precedence over grants and allow rules.
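The decision logic condenses into a single function over the outcomes of the three stages; `grant_found`, `deny_matches`, and `allow_matches` stand in for the lookups described above (a sketch of the semantics, not the engine's code):

```python
def decide(grant_found: bool, deny_matches: bool, allow_matches: bool) -> bool:
    # Deny-overrides: an explicit deny beats grants and allow rules.
    if deny_matches:
        return False
    # Otherwise a matching grant or allow rule is sufficient; default is deny.
    return grant_found or allow_matches

print(decide(grant_found=True, deny_matches=False, allow_matches=False))   # True
print(decide(grant_found=True, deny_matches=True, allow_matches=True))     # False: deny wins
print(decide(grant_found=False, deny_matches=False, allow_matches=False))  # False: default deny
```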
## Enabling the Authorization Engine
The engine is disabled by default. To enable it, add the following to your configuration file:
```toml
[authz]
enabled = true
```
See [Configuration and Deployment](./configuration.md) for the full set of configuration options.
## Policy Language at a Glance
Policies are written in [KDL](https://kdl.dev/), a document language designed for configuration files. Barycenter's policy language defines four node types:
| Node Type | Purpose | Example |
|-----------|---------|---------|
| [`resource`](./resources-permissions.md) | Declares a resource type with its relations and permissions | `resource "vm" { ... }` |
| [`role`](./roles-inheritance.md) | Groups permissions and supports inheritance | `role "vm_admin" { ... }` |
| [`grant`](./grants-tuples.md) | Assigns a role to a principal on a specific resource | `grant "vm_admin" on="vm/vm-123" to="user/alice"` |
| [`rule`](./abac-rules.md) | Defines a conditional allow or deny policy | `rule "DenyAfterHours" effect="deny" { ... }` |
These four building blocks combine to express access control policies ranging from simple role assignments to fine-grained, context-aware authorization decisions. See [KDL Policy Language](./kdl-policy-language.md) for a complete introduction.
## Further Reading
- [KDL Policy Language](./kdl-policy-language.md) -- syntax and structure of policy files
- [Resources and Permissions](./resources-permissions.md) -- defining resource types
- [Roles and Inheritance](./roles-inheritance.md) -- role composition and permission grouping
- [Grants and Relationship Tuples](./grants-tuples.md) -- assigning roles to principals
- [ABAC Rules and Conditions](./abac-rules.md) -- attribute-based conditional policies
- [Authz REST API](./rest-api.md) -- HTTP endpoints for policy evaluation
- [Configuration and Deployment](./configuration.md) -- enabling and configuring the engine

# Resources and Permissions
A `resource` node declares a type of object in your system and defines the relations and permissions that apply to it. Resources are the foundation of the authorization model -- every permission check ultimately references a resource type and one of its declared permissions.
## Syntax
```kdl
resource "<type_name>" {
    relations {
        - "<relation_name>"
        // ...
    }
    permissions {
        - "<permission_name>"
        // ...
    }
}
```
### Components
| Component | Required | Description |
|-----------|----------|-------------|
| `type_name` | Yes | A unique identifier for this resource type (e.g., `"vm"`, `"document"`, `"invoice"`). Used in fully-qualified permission names and object references. |
| `relations` | No | A block listing the named relationships that can exist between principals and instances of this resource type. |
| `permissions` | No | A block listing the actions that can be performed on instances of this resource type. |
Both `relations` and `permissions` use KDL's list-child syntax where each item is a `-` node with a string argument.
## Relations
Relations define the types of relationships that can exist between a principal (user, group, service) and a resource instance. They are used in [grants](./grants-tuples.md) to specify the nature of a principal's connection to a resource.
```kdl
resource "repository" {
    relations {
        - "owner"
        - "maintainer"
        - "contributor"
        - "reader"
    }
}
```
Relation names are arbitrary strings. Common conventions include:
| Pattern | Examples | Use Case |
|---------|----------|----------|
| Role-like names | `owner`, `admin`, `viewer` | Hierarchical access levels |
| Organizational names | `member`, `manager`, `team_lead` | Team structure relationships |
| Functional names | `assignee`, `reviewer`, `approver` | Workflow-specific relationships |
Relations declared on a resource do not by themselves grant any permissions. They serve as labels for the relationship tuples created by `grant` nodes. The mapping from relations to permissions is handled by [roles](./roles-inheritance.md).
## Permissions
Permissions define the actions that can be performed on instances of a resource type. Each permission is a simple string that describes a single, atomic operation.
```kdl
resource "vm" {
    permissions {
        - "start"
        - "stop"
        - "restart"
        - "view_console"
        - "snapshot"
        - "delete"
    }
}
```
### Naming Conventions
Permission names should be short, descriptive verbs or verb phrases. Use underscores to separate words:
| Good | Avoid |
|------|-------|
| `read` | `can_read` |
| `write` | `has_write_access` |
| `view_console` | `console_viewing_permission` |
| `manage_members` | `member-management` |
## Fully-Qualified Permission Names
Outside of the `resource` block, permissions are referenced using the fully-qualified format:
```
<resource_type>:<permission_name>
```
For example, a resource defined as:
```kdl
resource "vm" {
    permissions {
        - "start"
        - "stop"
    }
}
```
produces the fully-qualified permissions:
- `vm:start`
- `vm:stop`
These fully-qualified names are used in three places:
1. **Role definitions**: `role` nodes reference permissions as `"vm:start"` in their `permissions` block.
2. **Rule definitions**: `rule` nodes reference permissions as `"vm:start"` in their `permissions` block.
3. **Check requests**: The `permission` field in a check request uses the fully-qualified format: `{ "permission": "vm:start" }`.
## Object References
When a check request or grant refers to a specific instance of a resource, it uses the **ObjectRef** format:
```
<resource_type>/<instance_id>
```
For example:
| ObjectRef | Resource Type | Instance ID |
|-----------|--------------|-------------|
| `vm/prod-web-1` | `vm` | `prod-web-1` |
| `document/quarterly-report` | `document` | `quarterly-report` |
| `repository/barycenter` | `repository` | `barycenter` |
The resource type in an ObjectRef must match a declared `resource` node. The instance ID is an opaque string that identifies a specific object in your system.
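Both reference formats split on a single separator; a pair of hypothetical parsing helpers (the names and error handling are illustrative, not Barycenter's API):

```python
def parse_object_ref(ref: str) -> tuple[str, str]:
    # "vm/prod-web-1" -> ("vm", "prod-web-1"); this helper lets the
    # opaque instance ID itself contain "/".
    resource_type, _, instance_id = ref.partition("/")
    if not resource_type or not instance_id:
        raise ValueError(f"malformed ObjectRef: {ref!r}")
    return resource_type, instance_id

def parse_permission(perm: str) -> tuple[str, str]:
    # "vm:start" -> ("vm", "start")
    resource_type, _, action = perm.partition(":")
    if not resource_type or not action:
        raise ValueError(f"malformed permission: {perm!r}")
    return resource_type, action

print(parse_object_ref("vm/prod-web-1"))  # ('vm', 'prod-web-1')
print(parse_permission("vm:start"))       # ('vm', 'start')
```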
## Examples
### Infrastructure Resources
```kdl
resource "vm" {
    relations {
        - "owner"
        - "viewer"
    }
    permissions {
        - "start"
        - "stop"
        - "restart"
        - "view_console"
        - "snapshot"
        - "resize"
        - "delete"
    }
}

resource "network" {
    relations {
        - "admin"
        - "member"
    }
    permissions {
        - "create_subnet"
        - "delete_subnet"
        - "attach_vm"
        - "detach_vm"
        - "view"
    }
}
```
### SaaS Application Resources
```kdl
resource "workspace" {
    relations {
        - "owner"
        - "admin"
        - "member"
        - "guest"
    }
    permissions {
        - "manage_billing"
        - "manage_members"
        - "create_project"
        - "view"
    }
}

resource "project" {
    relations {
        - "manager"
        - "contributor"
        - "viewer"
    }
    permissions {
        - "create_task"
        - "assign_task"
        - "delete"
        - "archive"
        - "view"
    }
}
```
### Document Management Resources
```kdl
resource "document" {
    relations {
        - "owner"
        - "editor"
        - "commenter"
        - "viewer"
    }
    permissions {
        - "read"
        - "write"
        - "comment"
        - "share"
        - "delete"
        - "manage_permissions"
    }
}

resource "folder" {
    relations {
        - "owner"
        - "editor"
        - "viewer"
    }
    permissions {
        - "list"
        - "create_document"
        - "delete"
        - "rename"
    }
}
```
## Relationship to Other Node Types
Resources are referenced by all other node types in the policy language:
```mermaid
graph TD
    Res["resource &quot;vm&quot;\npermissions: start, stop, view_console"]
    Role["role &quot;vm_admin&quot;\npermissions: vm:start, vm:stop"]
    Grant["grant &quot;vm_admin&quot;\non=&quot;vm/prod-web-1&quot;\nto=&quot;user/alice&quot;"]
    Rule["rule &quot;DenyLateStop&quot;\npermissions: vm:stop"]
    Res -- "type:permission" --> Role
    Res -- "type/id" --> Grant
    Res -- "type:permission" --> Rule
    style Res fill:#e8f4f8,stroke:#2980b9
    style Role fill:#eaf5ea,stroke:#27ae60
    style Grant fill:#fdf2e9,stroke:#e67e22
    style Rule fill:#f5eef8,stroke:#8e44ad
```
- **Roles** reference resource permissions using the fully-qualified `type:permission` format.
- **Grants** reference resource instances using the `type/id` ObjectRef format.
- **Rules** reference resource permissions using the fully-qualified `type:permission` format.
## Further Reading
- [Roles and Inheritance](./roles-inheritance.md) -- grouping permissions into reusable roles
- [Grants and Relationship Tuples](./grants-tuples.md) -- assigning roles to specific resource instances
- [ABAC Rules and Conditions](./abac-rules.md) -- conditional policies referencing resource permissions

# Authz REST API
The authorization policy engine exposes a REST API on a dedicated port (default: 8082) for evaluating access control decisions. The API provides three endpoints: a permission check endpoint, a permission expansion endpoint, and a health check.
## Base URL
The authorization API runs on its own port, separate from the public OIDC server and the admin GraphQL API. By default, the port is the main server port plus 2:
| Server | Default Port |
|--------|-------------|
| Public OIDC | 8080 |
| Admin GraphQL | 8081 |
| **Authorization API** | **8082** |
All endpoints are prefixed with `/v1/` except the health check.
## POST /v1/check
Evaluates whether a principal has a specific permission on a resource. This is the primary endpoint that backend services call to make authorization decisions.
### Request
```
POST /v1/check
Content-Type: application/json
```
```json
{
  "principal": "user/alice",
  "permission": "vm:start",
  "resource": "vm/prod-web-1",
  "context": {}
}
```
#### Request Fields
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `principal` | string | Yes | The subject requesting access, in `type/id` format (e.g., `"user/alice"`, `"service/deploy-agent"`). |
| `permission` | string | Yes | The fully-qualified permission being requested, in `type:action` format (e.g., `"vm:start"`). |
| `resource` | string | Yes | The target resource instance, in `type/id` format (e.g., `"vm/prod-web-1"`). |
| `context` | object | No | A JSON object containing contextual attributes for [ABAC rule](./abac-rules.md) evaluation. Defaults to an empty object if omitted. |
### Response
```json
{
  "allowed": true
}
```
#### Response Fields
| Field | Type | Description |
|-------|------|-------------|
| `allowed` | boolean | `true` if the principal has the requested permission on the resource, `false` otherwise. |
### Examples
**Basic permission check:**
```bash
curl -s -X POST http://localhost:8082/v1/check \
  -H "Content-Type: application/json" \
  -d '{
    "principal": "user/alice",
    "permission": "vm:start",
    "resource": "vm/prod-web-1"
  }' | jq .
```
```json
{
  "allowed": true
}
```
**Check with context for ABAC rule evaluation:**
```bash
curl -s -X POST http://localhost:8082/v1/check \
  -H "Content-Type: application/json" \
  -d '{
    "principal": "user/bob",
    "permission": "invoice:view",
    "resource": "invoice/inv-2024-001",
    "context": {
      "request": {
        "time": {
          "hour": 14,
          "day_of_week": "Wednesday"
        },
        "source": "internal"
      },
      "environment": {
        "maintenance_mode": false
      }
    }
  }' | jq .
```
```json
{
  "allowed": true
}
```
**Denied request:**
```bash
curl -s -X POST http://localhost:8082/v1/check \
  -H "Content-Type: application/json" \
  -d '{
    "principal": "user/mallory",
    "permission": "vm:delete",
    "resource": "vm/prod-db-1"
  }' | jq .
```
```json
{
  "allowed": false
}
```
### Integration Pattern
A typical integration pattern is to call the check endpoint from your application middleware before processing a request:
```python
import httpx

AUTHZ_URL = "http://localhost:8082/v1/check"

async def check_permission(
    principal: str,
    permission: str,
    resource: str,
    context: dict | None = None,
) -> bool:
    # Open a short-lived client so connections are closed cleanly;
    # high-traffic services should share one AsyncClient instead.
    async with httpx.AsyncClient() as client:
        response = await client.post(AUTHZ_URL, json={
            "principal": principal,
            "permission": permission,
            "resource": resource,
            "context": context or {},
        })
    response.raise_for_status()
    return response.json()["allowed"]

# In a request handler:
if not await check_permission(f"user/{current_user.id}", "vm:start", f"vm/{vm_id}"):
    raise PermissionDenied("You are not allowed to start this VM.")
```
## POST /v1/expand
Returns the set of all subjects (principals) that have a specific permission on a resource. This is useful for building UIs that show "who has access to this resource" or for auditing purposes.
### Request
```
POST /v1/expand
Content-Type: application/json
```
```json
{
  "permission": "vm:start",
  "resource": "vm/prod-web-1"
}
```
#### Request Fields
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `permission` | string | Yes | The fully-qualified permission to expand, in `type:action` format. |
| `resource` | string | Yes | The target resource instance, in `type/id` format. |
### Response
```json
{
  "subjects": [
    "user/alice",
    "group/sre#member"
  ]
}
```
#### Response Fields
| Field | Type | Description |
|-------|------|-------------|
| `subjects` | string[] | A list of subject references that have the requested permission on the resource. Includes both direct subjects and userset references. |
### Examples
**Expand all subjects with a permission:**
```bash
curl -s -X POST http://localhost:8082/v1/expand \
  -H "Content-Type: application/json" \
  -d '{
    "permission": "vm:start",
    "resource": "vm/prod-web-1"
  }' | jq .
```
```json
{
  "subjects": [
    "user/alice",
    "group/sre#member"
  ]
}
```
**Expand with no matching subjects:**
```bash
curl -s -X POST http://localhost:8082/v1/expand \
  -H "Content-Type: application/json" \
  -d '{
    "permission": "vm:delete",
    "resource": "vm/prod-web-99"
  }' | jq .
```
```json
{
  "subjects": []
}
```
> **Note**: The expand endpoint resolves grant-based access only. It does not evaluate ABAC rules, because rule evaluation depends on a specific principal and context that are not available in an expand query.
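Conceptually, expand is a pure lookup over the grant tuples plus the set of roles carrying the permission; a minimal sketch under assumed data shapes (not the engine's internals):

```python
PERMISSION_ROLES = {"vm:start": {"vm_admin"}}  # permission -> roles granting it
GRANTS = [
    ("vm_admin", "vm/prod-web-1", "user/alice"),
    ("vm_admin", "vm/prod-web-1", "group/sre#member"),  # userset reference
    ("vm_admin", "vm/prod-db-1", "user/carol"),
]

def expand(permission: str, resource: str) -> list[str]:
    # Every subject granted, on this resource, any role that carries the permission.
    roles = PERMISSION_ROLES.get(permission, set())
    return sorted(sub for role, res, sub in GRANTS if res == resource and role in roles)

print(expand("vm:start", "vm/prod-web-1"))    # ['group/sre#member', 'user/alice']
print(expand("vm:delete", "vm/prod-web-99"))  # []
```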
## GET /healthz
A simple health check endpoint that returns the status of the authorization engine.
### Request
```
GET /healthz
```
No request body or parameters are required.
### Response
A successful response indicates that the authorization engine is running and has loaded its policies:
```
HTTP/1.1 200 OK
Content-Type: application/json
```
```json
{
  "status": "ok"
}
```
### Example
```bash
curl -s http://localhost:8082/healthz | jq .
```
```json
{
  "status": "ok"
}
```
This endpoint is intended for use with container orchestrators (Kubernetes liveness/readiness probes), load balancers, and monitoring systems.
### Kubernetes Probe Configuration
```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8082
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /healthz
    port: 8082
  initialDelaySeconds: 3
  periodSeconds: 5
```
## Error Handling
The API returns standard HTTP status codes for error conditions:
| Status Code | Condition | Response Body |
|-------------|-----------|---------------|
| `200` | Request processed successfully | `{ "allowed": true/false }` or `{ "subjects": [...] }` |
| `400` | Malformed request body or missing required fields | `{ "error": "description of the problem" }` |
| `404` | Unknown endpoint | `{ "error": "not found" }` |
| `405` | Wrong HTTP method (e.g., GET on /v1/check) | `{ "error": "method not allowed" }` |
| `500` | Internal server error during evaluation | `{ "error": "internal error" }` |
### Error Response Format
Error responses include a descriptive message:
```json
{
  "error": "missing required field: principal"
}
```
### Common Errors
**Missing required field:**
```bash
curl -s -X POST http://localhost:8082/v1/check \
  -H "Content-Type: application/json" \
  -d '{"permission": "vm:start", "resource": "vm/prod-web-1"}' | jq .
```
```json
{
  "error": "missing required field: principal"
}
```
**Invalid JSON:**
```bash
curl -s -X POST http://localhost:8082/v1/check \
  -H "Content-Type: application/json" \
  -d 'not json' | jq .
```
```json
{
  "error": "invalid JSON in request body"
}
```
## Performance Considerations
The authorization engine is designed for low-latency evaluation:
- **In-memory evaluation**: All policies, grants, and indexes are held in memory. No database queries are made during check or expand operations.
- **Pre-computed indexes**: Role inheritance is resolved at load time. The `permission_roles` map and `TupleIndex` enable constant-time lookups for most checks.
- **No network hops**: The engine runs in the same process as Barycenter, so internal callers (such as future OIDC-to-authz integration) avoid network round-trips entirely.
- **Immutable state**: The `AuthzState` is read-only after loading, so no locks or synchronization are needed during evaluation.
For external callers, the primary latency factor is network round-trip time. Placing the authorization engine close to (or on the same host as) the calling service minimizes this overhead.
## Further Reading
- [Overview](./overview.md) -- the evaluation pipeline behind `/v1/check`
- [ABAC Rules and Conditions](./abac-rules.md) -- how the `context` field is used in rule evaluation
- [Configuration and Deployment](./configuration.md) -- setting the authorization port and policy directory

# Roles and Inheritance
A `role` node groups fully-qualified permissions together under a named identity and optionally inherits permissions from other roles. Roles are the bridge between the abstract permissions declared on [resources](./resources-permissions.md) and the concrete [grants](./grants-tuples.md) that assign those permissions to principals.
## Syntax
```kdl
role "<role_name>" {
    includes {
        - "<parent_role_name>"
        // ...
    }
    permissions {
        - "<resource_type>:<permission_name>"
        // ...
    }
}
```
### Components
| Component | Required | Description |
|-----------|----------|-------------|
| `role_name` | Yes | A unique identifier for this role (e.g., `"vm_admin"`, `"document_editor"`). Referenced by `grant` nodes. |
| `includes` | No | A block listing other role names whose permissions this role inherits. |
| `permissions` | No | A block listing fully-qualified permissions (`type:permission`) that this role directly grants. |
A role with neither `includes` nor `permissions` is valid but has no effect.
## Fully-Qualified Permissions
Permissions in a role block use the fully-qualified format `resource_type:permission_name`, which references a permission declared on a `resource` node:
```kdl
// The resource declaration
resource "vm" {
    permissions {
        - "start"
        - "stop"
        - "view_console"
    }
}

// The role references those permissions
role "vm_operator" {
    permissions {
        - "vm:start"
        - "vm:stop"
    }
}
```
A role can include permissions from multiple resource types:
```kdl
role "infrastructure_admin" {
    permissions {
        - "vm:start"
        - "vm:stop"
        - "vm:delete"
        - "network:create_subnet"
        - "network:delete_subnet"
    }
}
}
```
## Role Inheritance
The `includes` block allows a role to inherit all permissions from one or more parent roles. This creates a hierarchy where more privileged roles build on top of less privileged ones.
### Basic Inheritance
```kdl
role "vm_viewer" {
    permissions {
        - "vm:view_console"
    }
}

role "vm_operator" {
    includes {
        - "vm_viewer"
    }
    permissions {
        - "vm:start"
        - "vm:stop"
    }
}

role "vm_admin" {
    includes {
        - "vm_operator"
    }
    permissions {
        - "vm:delete"
        - "vm:resize"
        - "vm:snapshot"
    }
}
```
In this example, the effective permissions for each role are:
| Role | Direct Permissions | Inherited Permissions | Effective Permissions |
|------|-------------------|----------------------|----------------------|
| `vm_viewer` | `vm:view_console` | -- | `vm:view_console` |
| `vm_operator` | `vm:start`, `vm:stop` | `vm:view_console` | `vm:view_console`, `vm:start`, `vm:stop` |
| `vm_admin` | `vm:delete`, `vm:resize`, `vm:snapshot` | `vm:view_console`, `vm:start`, `vm:stop` | All six permissions |
### Transitive Inheritance
Inheritance is transitive. If role A includes role B, and role B includes role C, then role A inherits all permissions from both B and C:
```mermaid
graph BT
C["vm_viewer\nvm:view_console"] --> B["vm_operator\nvm:start, vm:stop"]
B --> A["vm_admin\nvm:delete, vm:resize, vm:snapshot"]
style A fill:#e8f4f8,stroke:#2980b9
style B fill:#eaf5ea,stroke:#27ae60
style C fill:#fdf2e9,stroke:#e67e22
```
The arrows point from each included (parent) role to the role that includes it, indicating the direction of permission flow: permissions flow from base roles up to more privileged roles.
### Multiple Inheritance
A role can include multiple parent roles, merging their permissions:
```kdl
role "vm_viewer" {
    permissions {
        - "vm:view_console"
    }
}

role "network_viewer" {
    permissions {
        - "network:view"
    }
}

role "infrastructure_viewer" {
    includes {
        - "vm_viewer"
        - "network_viewer"
    }
}
```
The `infrastructure_viewer` role effectively holds both `vm:view_console` and `network:view`.
### Diamond Inheritance
If multiple included roles share a common ancestor, the shared permissions are deduplicated. There is no double-counting or ambiguity:
```kdl
role "base" {
    permissions {
        - "vm:view_console"
    }
}

role "operator" {
    includes {
        - "base"
    }
    permissions {
        - "vm:start"
    }
}

role "auditor" {
    includes {
        - "base"
    }
    permissions {
        - "vm:snapshot"
    }
}

role "super_admin" {
    includes {
        - "operator"
        - "auditor"
    }
}
```
The `super_admin` role holds `vm:view_console`, `vm:start`, and `vm:snapshot` -- the `vm:view_console` permission from `base` appears only once.
## Pre-Computed Permission Map
At startup, the engine resolves all inheritance chains and builds a `permission_roles` map: a `HashMap<String, Vec<String>>` that maps each fully-qualified permission to the list of roles that grant it (directly or through inheritance).
For the basic inheritance example above, the pre-computed map would contain:
| Permission | Roles Granting It |
|------------|------------------|
| `vm:view_console` | `vm_viewer`, `vm_operator`, `vm_admin` |
| `vm:start` | `vm_operator`, `vm_admin` |
| `vm:stop` | `vm_operator`, `vm_admin` |
| `vm:delete` | `vm_admin` |
| `vm:resize` | `vm_admin` |
| `vm:snapshot` | `vm_admin` |
This pre-computation happens once during policy loading. At evaluation time, when a check request asks "does the principal have permission `vm:start` on resource `vm/prod-web-1`?", the engine looks up `vm:start` in the `permission_roles` map to find all roles that grant it, then checks the tuple index for grants matching any of those roles on the specified resource for the given principal.
This design means that role inheritance has zero cost at evaluation time -- all the resolution work is done upfront.
## Design Patterns
### Layered Access Levels
A common pattern is to define a hierarchy of access levels for each resource type:
```kdl
role "project_viewer" {
    permissions {
        - "project:view"
    }
}

role "project_contributor" {
    includes {
        - "project_viewer"
    }
    permissions {
        - "project:create_task"
    }
}

role "project_manager" {
    includes {
        - "project_contributor"
    }
    permissions {
        - "project:assign_task"
        - "project:archive"
    }
}

role "project_owner" {
    includes {
        - "project_manager"
    }
    permissions {
        - "project:delete"
        - "project:manage_members"
    }
}
```
### Cross-Resource Roles
For roles that span multiple resource types, use a descriptive name that indicates the broader scope:
```kdl
role "workspace_admin" {
    includes {
        - "project_owner"
        - "document_admin"
    }
    permissions {
        - "workspace:manage_billing"
        - "workspace:manage_members"
    }
}
```
### Separation of Duties
Roles can be designed to enforce separation of duties by keeping certain permission sets in distinct, non-overlapping roles:
```kdl
role "payment_initiator" {
    permissions {
        - "payment:create"
        - "payment:view"
    }
}

role "payment_approver" {
    permissions {
        - "payment:approve"
        - "payment:reject"
        - "payment:view"
    }
}

// No role includes both "create" and "approve"
```
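A separation-of-duties constraint like this can be checked mechanically over resolved role permissions; a hypothetical lint sketch (names and data shapes are illustrative):

```python
ROLE_PERMS = {
    "payment_initiator": {"payment:create", "payment:view"},
    "payment_approver": {"payment:approve", "payment:reject", "payment:view"},
}

def violates_sod(role_perms: dict[str, set[str]], conflicting: set[str]) -> list[str]:
    # Flag any role whose effective permissions contain the whole conflicting set.
    return [role for role, perms in role_perms.items() if conflicting <= perms]

print(violates_sod(ROLE_PERMS, {"payment:create", "payment:approve"}))  # []
```

Running such a check in CI against the resolved permission sets keeps the constraint from eroding as roles evolve.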
## Further Reading
- [Resources and Permissions](./resources-permissions.md) -- declaring the permissions that roles reference
- [Grants and Relationship Tuples](./grants-tuples.md) -- assigning roles to principals on resources
- [Overview](./overview.md) -- how roles fit into the evaluation pipeline

# Backup and Recovery
Barycenter stores critical data that must be backed up to recover from hardware failure, accidental deletion, or corruption. This page describes what to back up, how to perform backups for each storage backend, and how to restore from a backup.
## What to Back Up
Three categories of data are critical:
| Data | Location | Impact if Lost |
|------|----------|----------------|
| RSA private key | `private_key.pem` in the data directory | All issued ID tokens become unverifiable. Clients cannot validate existing tokens. A new key is generated on restart, but previously issued tokens are invalidated. |
| Database | SQLite file or PostgreSQL database | All client registrations, authorization codes, access tokens, refresh tokens, user accounts, passkey registrations, and session data are lost. |
| Configuration | `config.toml` | Must be recreated manually. Store in version control. |
The JWKS public key file (`jwks.json`) is derived from the private key and is regenerated automatically. It does not need to be backed up independently.
## Backup Procedures
### SQLite
The SQLite database is a single file, but copying it while Barycenter is running can produce a corrupt copy. Use SQLite's built-in backup command instead:
```bash
sqlite3 /var/lib/barycenter/data/barycenter.db ".backup '/var/backups/barycenter/barycenter-$(date +%Y%m%d-%H%M%S).db'"
```
This command creates a consistent snapshot even while the database is in use.
Alternatively, stop the service before copying:
```bash
systemctl stop barycenter
cp /var/lib/barycenter/data/barycenter.db /var/backups/barycenter/barycenter-$(date +%Y%m%d-%H%M%S).db
systemctl start barycenter
```
### PostgreSQL
Use `pg_dump` to create a logical backup:
```bash
pg_dump -U barycenter -h db-host -d barycenter -F custom -f /var/backups/barycenter/barycenter-$(date +%Y%m%d-%H%M%S).pgdump
```
For automated backups, consider using `pg_basebackup` for physical backups or a tool like pgBackRest for incremental backups with point-in-time recovery.
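As a sketch of the physical-backup route, a `pg_basebackup` invocation might look like this. The replication user, host, and output path are assumptions; your PostgreSQL setup must allow replication connections for this user in `pg_hba.conf`:

```bash
# Physical base backup with streamed WAL, gzip-compressed tar output.
# "replicator" and "db-host" are illustrative names.
pg_basebackup -U replicator -h db-host \
  -D /var/backups/barycenter/base-$(date +%Y%m%d) \
  -F tar -z -X stream -P
```

Unlike `pg_dump`, a base backup captures the whole cluster and, combined with archived WAL, supports point-in-time recovery.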
### Private Key
Copy the private key file:
```bash
cp /var/lib/barycenter/data/private_key.pem /var/backups/barycenter/private_key-$(date +%Y%m%d-%H%M%S).pem
```
The private key does not change after initial generation, so it only needs to be backed up once, and again whenever the key is rotated.
### Configuration
```bash
cp /etc/barycenter/config.toml /var/backups/barycenter/config-$(date +%Y%m%d-%H%M%S).toml
```
The recommended approach is to store the configuration file in version control (Git) so that changes are tracked and the file can be recovered from any commit.
### Docker Volumes
For Docker deployments, back up the named volume:
```bash
docker run --rm \
  -v barycenter-data:/data:ro \
  -v $(pwd)/backups:/backup \
  alpine tar czf /backup/barycenter-data-$(date +%Y%m%d-%H%M%S).tar.gz -C /data .
```
## Encrypting Backups
Backups contain the RSA private key and potentially database credentials. Encrypt them before storing off-site:
```bash
gpg --symmetric --cipher-algo AES256 \
  -o /var/backups/barycenter/backup-$(date +%Y%m%d).gpg \
  /var/backups/barycenter/barycenter-$(date +%Y%m%d-%H%M%S).db
```
To decrypt:
```bash
gpg --decrypt /var/backups/barycenter/backup-20260214.gpg > restored.db
```
For automated encryption, use GPG with a public key so that no passphrase is needed during backup creation:
```bash
gpg --encrypt --recipient backup@example.com \
  -o /var/backups/barycenter/backup-$(date +%Y%m%d).gpg \
  /var/backups/barycenter/barycenter-$(date +%Y%m%d-%H%M%S).db
```
## Automated Backup Script
A complete backup script that handles SQLite, the private key, and the configuration:
```bash
#!/bin/sh
set -e

BACKUP_DIR="/var/backups/barycenter"
DATA_DIR="/var/lib/barycenter/data"
CONFIG="/etc/barycenter/config.toml"
TIMESTAMP=$(date +%Y%m%d-%H%M%S)

mkdir -p "$BACKUP_DIR"

# Database
sqlite3 "$DATA_DIR/barycenter.db" ".backup '$BACKUP_DIR/db-$TIMESTAMP.db'"

# Private key
cp "$DATA_DIR/private_key.pem" "$BACKUP_DIR/key-$TIMESTAMP.pem"

# Configuration
cp "$CONFIG" "$BACKUP_DIR/config-$TIMESTAMP.toml"

# Create a single encrypted archive
tar czf - -C "$BACKUP_DIR" \
  "db-$TIMESTAMP.db" \
  "key-$TIMESTAMP.pem" \
  "config-$TIMESTAMP.toml" \
  | gpg --symmetric --cipher-algo AES256 --batch --passphrase-file /root/.backup-passphrase \
  > "$BACKUP_DIR/barycenter-$TIMESTAMP.tar.gz.gpg"

# Clean up unencrypted files
rm "$BACKUP_DIR/db-$TIMESTAMP.db"
rm "$BACKUP_DIR/key-$TIMESTAMP.pem"
rm "$BACKUP_DIR/config-$TIMESTAMP.toml"

# Prune backups older than 30 days
find "$BACKUP_DIR" -name "barycenter-*.tar.gz.gpg" -mtime +30 -delete

echo "Backup completed: $BACKUP_DIR/barycenter-$TIMESTAMP.tar.gz.gpg"
```
Schedule with cron (daily at 2 AM):
```cron
0 2 * * * /usr/local/bin/barycenter-backup.sh
```
## Recovery Procedures
### Restoring SQLite
1. Stop Barycenter.
2. Replace the database file with the backup.
3. Start Barycenter. Migrations run automatically if the backup is from an older version.
```bash
systemctl stop barycenter
cp /var/backups/barycenter/db-20260214-020000.db /var/lib/barycenter/data/barycenter.db
chown barycenter:barycenter /var/lib/barycenter/data/barycenter.db
chmod 600 /var/lib/barycenter/data/barycenter.db
systemctl start barycenter
```
### Restoring PostgreSQL
1. Stop Barycenter.
2. Drop and recreate the database, then restore from the dump.
3. Start Barycenter.
```bash
systemctl stop barycenter
dropdb -U postgres barycenter
createdb -U postgres -O barycenter barycenter
pg_restore -U barycenter -d barycenter /var/backups/barycenter/barycenter-20260214-020000.pgdump
systemctl start barycenter
```
### Restoring the Private Key
1. Stop Barycenter.
2. Copy the backed-up key to the data directory.
3. Set correct ownership and permissions.
4. Start Barycenter.
```bash
systemctl stop barycenter
cp /var/backups/barycenter/key-20260214-020000.pem /var/lib/barycenter/data/private_key.pem
chown barycenter:barycenter /var/lib/barycenter/data/private_key.pem
chmod 600 /var/lib/barycenter/data/private_key.pem
systemctl start barycenter
```
### Restoring from an Encrypted Archive
```bash
gpg --decrypt /var/backups/barycenter/barycenter-20260214-020000.tar.gz.gpg | tar xzf - -C /tmp/barycenter-restore/
```
Then follow the individual restoration steps above using the extracted files.
## Off-Site Storage
Backups should be stored in at least one location separate from the primary server. Options include:
- **Object storage** (S3, GCS, MinIO) -- upload the encrypted archive after each backup.
- **Remote server** -- transfer via rsync or scp.
- **Tape or cold storage** -- for long-term retention requirements.
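For the object-storage option, uploading the most recent encrypted archive might look like this. The bucket name is an example, and an installed, configured AWS CLI (or S3-compatible equivalent) is assumed:

```bash
# Upload the newest encrypted archive; "example-backups" is a placeholder bucket.
latest=$(ls -t /var/backups/barycenter/barycenter-*.tar.gz.gpg | head -n 1)
aws s3 cp "$latest" s3://example-backups/barycenter/
```

Because the archive is already GPG-encrypted, the storage provider never sees plaintext key material.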
## Backup Verification
Periodically verify that backups can be restored:
1. Decrypt the archive.
2. Restore the database to a test instance.
3. Start Barycenter against the test database.
4. Confirm the OIDC discovery endpoint responds.
5. Confirm that a known client registration exists.
An unverified backup is not a backup.
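The verification steps above can be sketched as a script. The archive name, passphrase file, and test-instance port are assumptions for illustration; adapt them to how your test instance is started:

```bash
#!/bin/sh
# Restore-test sketch: decrypt the archive, extract it to scratch space,
# then probe a test instance started against the restored database.
set -e

RESTORE_DIR=$(mktemp -d)

# 1. Decrypt and extract (archive name is illustrative).
gpg --decrypt --batch --passphrase-file /root/.backup-passphrase \
  /var/backups/barycenter/barycenter-20260214-020000.tar.gz.gpg \
  | tar xzf - -C "$RESTORE_DIR"

# 2-3. Point a disposable Barycenter instance at $RESTORE_DIR (not shown),
# then confirm the discovery endpoint responds:
curl -fsS http://localhost:8080/.well-known/openid-configuration > /dev/null \
  && echo "discovery OK"
```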
## Further Reading
- [Production Checklist](./production-checklist.md) -- includes backup verification steps
- [Linux systemd](./systemd.md) -- service management for backup scheduling

# Docker Compose
Docker Compose simplifies running Barycenter alongside supporting services such as PostgreSQL. This page provides ready-to-use Compose files for several common configurations.
## Standalone with SQLite
The simplest Compose setup uses the built-in SQLite database:
```yaml
services:
  barycenter:
    image: ghcr.io/cloudnebulaproject/barycenter:latest
    ports:
      - "8080:8080"
      - "8081:8081"
      - "8082:8082"
    volumes:
      - ./config.toml:/app/config/config.toml:ro
      - barycenter-data:/app/data
    environment:
      - RUST_LOG=info

volumes:
  barycenter-data:
```
Start it with:
```bash
docker compose up -d
```
## With PostgreSQL
For production use, PostgreSQL is recommended:
```yaml
services:
  barycenter:
    image: ghcr.io/cloudnebulaproject/barycenter:latest
    ports:
      - "8080:8080"
      - "8081:8081"
      - "8082:8082"
    volumes:
      - ./config.toml:/app/config/config.toml:ro
      - barycenter-data:/app/data
    environment:
      - RUST_LOG=info
      - BARYCENTER__DATABASE__URL=postgresql://barycenter:secret@postgres:5432/barycenter
      - BARYCENTER__SERVER__PUBLIC_BASE_URL=https://idp.example.com
    depends_on:
      postgres:
        condition: service_healthy

  postgres:
    image: postgres:17
    environment:
      POSTGRES_USER: barycenter
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: barycenter
    volumes:
      - postgres-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U barycenter"]
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  barycenter-data:
  postgres-data:
```
The `depends_on` condition ensures Barycenter waits for PostgreSQL to become healthy before starting.
## Production-Hardened
This configuration adds security options suitable for production:
```yaml
services:
  barycenter:
    image: ghcr.io/cloudnebulaproject/barycenter:latest
    ports:
      - "8080:8080"
    volumes:
      - ./config.toml:/app/config/config.toml:ro
      - barycenter-data:/app/data
    environment:
      - RUST_LOG=info
      - BARYCENTER__SERVER__PUBLIC_BASE_URL=https://idp.example.com
      - BARYCENTER__DATABASE__URL=postgresql://barycenter:secret@postgres:5432/barycenter
    security_opt:
      - no-new-privileges:true
    read_only: true
    tmpfs:
      - /tmp
    depends_on:
      postgres:
        condition: service_healthy
    restart: unless-stopped

  postgres:
    image: postgres:17
    environment:
      POSTGRES_USER: barycenter
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: barycenter
    volumes:
      - postgres-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U barycenter"]
      interval: 5s
      timeout: 5s
      retries: 5
    restart: unless-stopped

volumes:
  barycenter-data:
  postgres-data:
```
Key differences from the basic setup:
- **`security_opt: no-new-privileges:true`** -- prevents privilege escalation inside the container.
- **`read_only: true`** -- makes the root filesystem immutable. Only the `/app/data` volume and `/tmp` tmpfs are writable.
- **`tmpfs: /tmp`** -- provides a writable temporary directory backed by memory.
- **Only port 8080 is published** -- the admin (8081) and authorization (8082) ports are not exposed to the host network. Other containers on the same Compose network can still reach them by service name.
- **`restart: unless-stopped`** -- automatically restarts after crashes or host reboots.
## Accessing Internal Ports
If you need the admin or authorization APIs from the host (for example, during initial setup), you can temporarily add them to the `ports` list:
```yaml
ports:
  - "8080:8080"
  - "127.0.0.1:8081:8081"  # Admin API, localhost only
  - "127.0.0.1:8082:8082"  # Authz API, localhost only
```
Binding to `127.0.0.1` ensures these ports are only reachable from the host machine and not from the external network.
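You can confirm the binding from the host. The service name matches the Compose files above; the exact output format depends on your Docker version:

```bash
# Show the host address the admin port is published on
docker compose port barycenter 8081

# Confirm the listener is bound to loopback rather than 0.0.0.0
ss -ltn | grep 8081
```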
## Using an `.env` File
Sensitive values such as database credentials should not be committed to version control. Use a `.env` file alongside your Compose file:
```bash
# .env
POSTGRES_PASSWORD=a-strong-random-password
BARYCENTER_DB_URL=postgresql://barycenter:a-strong-random-password@postgres:5432/barycenter
```
Then reference these variables in the Compose file:
```yaml
services:
  barycenter:
    environment:
      - BARYCENTER__DATABASE__URL=${BARYCENTER_DB_URL}
    # ...
  postgres:
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    # ...
```
## Managing the Stack
```bash
# Start all services in the background
docker compose up -d
# View logs
docker compose logs -f barycenter
# Restart Barycenter after a configuration change
docker compose restart barycenter
# Stop and remove all services
docker compose down
# Stop and remove all services, including volumes (destroys data)
docker compose down -v
```
## Next Steps
- [Reverse Proxy and TLS](./reverse-proxy-tls.md) -- terminate TLS in front of the Compose stack
- [Backup and Recovery](./backup-recovery.md) -- back up the Docker volumes
- [Production Checklist](./production-checklist.md) -- verify your setup before going live

# Docker
Barycenter publishes multi-architecture container images to the GitHub Container Registry. This page covers pulling, running, and building the Docker image for standalone deployments. For multi-container setups see [Docker Compose](./docker-compose.md).
## Image Registry
```bash
docker pull ghcr.io/cloudnebulaproject/barycenter:latest
```
Tagged releases are also available:
```bash
docker pull ghcr.io/cloudnebulaproject/barycenter:0.2.0
```
Images are built for `linux/amd64` and `linux/arm64`.
## Ports
Barycenter exposes three ports corresponding to its [three-server architecture](../getting-started/architecture.md):
| Port | Purpose | Expose publicly? |
|------|---------|------------------|
| 8080 | Public OIDC server | Yes |
| 8081 | Admin GraphQL API | No -- internal only |
| 8082 | Authorization policy server | No -- internal only |
Only the public OIDC port should be reachable from the internet. The admin and authorization ports should be restricted to trusted networks or kept behind a firewall.
## Volumes
| Mount point | Purpose | Required |
|-------------|---------|----------|
| `/app/data` | SQLite database, RSA private key, JWKS public key set | Recommended for persistence |
| `/app/config/config.toml` | Configuration file (mount read-only) | Optional if using environment variables exclusively |
If no volume is mounted at `/app/data`, the database and key material live inside the container and are lost when the container is removed.
## Running the Container
### Minimal
```bash
docker run -d \
  --name barycenter \
  -p 8080:8080 \
  ghcr.io/cloudnebulaproject/barycenter:latest
```
This starts Barycenter with defaults: an in-container SQLite database and an auto-generated RSA key pair. Suitable for quick evaluation only.
### With Persistent Storage and Configuration
```bash
docker run -d \
  --name barycenter \
  -p 8080:8080 \
  -p 8081:8081 \
  -p 8082:8082 \
  -v $(pwd)/config.toml:/app/config/config.toml:ro \
  -v barycenter-data:/app/data \
  ghcr.io/cloudnebulaproject/barycenter:latest
```
### With Environment Variable Overrides
Any configuration value can be overridden through environment variables using the `BARYCENTER__` prefix with double-underscore separators for nested keys:
```bash
docker run -d \
  --name barycenter \
  -p 8080:8080 \
  -e RUST_LOG=info \
  -e BARYCENTER__SERVER__PUBLIC_BASE_URL=https://idp.example.com \
  -e BARYCENTER__DATABASE__URL=postgresql://user:pass@db-host/barycenter \
  -v barycenter-data:/app/data \
  ghcr.io/cloudnebulaproject/barycenter:latest
```
### With PostgreSQL
When using an external PostgreSQL database, the `/app/data` volume is still needed for key material but no longer stores the database:
```bash
docker run -d \
  --name barycenter \
  -p 8080:8080 \
  -v barycenter-data:/app/data \
  -e BARYCENTER__DATABASE__URL=postgresql://barycenter:secret@postgres:5432/barycenter \
  --network my-network \
  ghcr.io/cloudnebulaproject/barycenter:latest
```
## Security Hardening
For production containers, apply these options:
```bash
docker run -d \
  --name barycenter \
  -p 8080:8080 \
  -v $(pwd)/config.toml:/app/config/config.toml:ro \
  -v barycenter-data:/app/data \
  --security-opt no-new-privileges:true \
  --read-only \
  --tmpfs /tmp \
  -e RUST_LOG=info \
  ghcr.io/cloudnebulaproject/barycenter:latest
```
- `--security-opt no-new-privileges:true` prevents privilege escalation inside the container.
- `--read-only` makes the root filesystem immutable. Only `/app/data` and `/tmp` are writable.
- `--tmpfs /tmp` provides a writable temporary filesystem backed by memory.
## Building the Image Locally
From the repository root:
```bash
docker build -t barycenter:latest .
```
For a specific platform:
```bash
docker build --platform linux/amd64 -t barycenter:latest .
```
For multi-architecture builds using buildx:
```bash
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t barycenter:latest \
  .
```
## Environment Variable Reference
| Variable | Purpose | Example |
|----------|---------|---------|
| `RUST_LOG` | Log level filter | `info`, `barycenter=debug` |
| `BARYCENTER__SERVER__PORT` | Public server listen port | `8080` |
| `BARYCENTER__SERVER__PUBLIC_BASE_URL` | OAuth issuer URL | `https://idp.example.com` |
| `BARYCENTER__DATABASE__URL` | Database connection string | `sqlite://barycenter.db?mode=rwc` |
See [Configuration](../getting-started/configuration.md) for the full list of available environment variables.
## Next Steps
- [Docker Compose](./docker-compose.md) -- multi-container setups with PostgreSQL
- [Reverse Proxy and TLS](./reverse-proxy-tls.md) -- placing Barycenter behind nginx or another proxy
- [Production Checklist](./production-checklist.md) -- steps to verify before going live

# FreeBSD rc.d
This guide covers deploying Barycenter as an rc.d service on FreeBSD. An rc.d script is provided in the repository at `deploy/freebsd/barycenter`.
## Prerequisites
- FreeBSD 13 or later
- The Rust toolchain (to build from source) or a pre-built binary
- SQLite libraries (if using SQLite) or a reachable PostgreSQL instance
## Step 1: Build the Binary
```bash
cargo build --release
```
The release binary is located at `target/release/barycenter`.
## Step 2: Create a Service User
Create a dedicated user with no login shell:
```bash
pw useradd barycenter -d /var/db/barycenter -s /usr/sbin/nologin -c "Barycenter IdP"
mkdir -p /var/db/barycenter/data
chown -R barycenter:barycenter /var/db/barycenter
```
## Step 3: Install the Binary
```bash
cp target/release/barycenter /usr/local/bin/barycenter
chmod 755 /usr/local/bin/barycenter
```
## Step 4: Create Configuration Directory
```bash
mkdir -p /usr/local/etc/barycenter
```
## Step 5: Install the Configuration File
```bash
cp config.toml /usr/local/etc/barycenter/config.toml
chmod 640 /usr/local/etc/barycenter/config.toml
chown root:barycenter /usr/local/etc/barycenter/config.toml
```
Edit `/usr/local/etc/barycenter/config.toml` for your deployment:
```toml
[server]
public_base_url = "https://idp.example.com"

[database]
url = "sqlite:///var/db/barycenter/data/barycenter.db?mode=rwc"

[keys]
jwks_path = "/var/db/barycenter/data/jwks.json"
private_key_path = "/var/db/barycenter/data/private_key.pem"
```
## Step 6: Install the rc.d Script
```bash
install -m 755 deploy/freebsd/barycenter /usr/local/etc/rc.d/barycenter
```
## Step 7: Enable the Service
Add the following line to `/etc/rc.conf`:
```bash
barycenter_enable="YES"
```
Or use `sysrc`:
```bash
sysrc barycenter_enable="YES"
```
## Step 8: Start the Service
```bash
service barycenter start
```
## Managing the Service
```bash
# Check status
service barycenter status
# Start the service
service barycenter start
# Stop the service
service barycenter stop
# Restart after a configuration change
service barycenter restart
```
## Viewing Logs
If the rc.d script logs to syslog, view logs with:
```bash
grep barycenter /var/log/messages
```
To follow logs in real time:
```bash
tail -f /var/log/messages | grep barycenter
```
For more detailed logging, set the `RUST_LOG` environment variable in the rc.d configuration. Add to `/etc/rc.conf`:
```bash
barycenter_env="RUST_LOG=info"
```
Or with `sysrc`:
```bash
sysrc barycenter_env="RUST_LOG=info"
```
## Directory Layout
| Path | Owner | Mode | Purpose |
|------|-------|------|---------|
| `/usr/local/bin/barycenter` | `root:wheel` | `755` | Application binary |
| `/usr/local/etc/barycenter/config.toml` | `root:barycenter` | `640` | Configuration file |
| `/usr/local/etc/rc.d/barycenter` | `root:wheel` | `755` | rc.d service script |
| `/var/db/barycenter/data/` | `barycenter:barycenter` | `750` | Data directory |
| `/var/db/barycenter/data/private_key.pem` | `barycenter:barycenter` | `600` | RSA private key (created at first run) |
## Upgrading
```bash
# Build the new version
cargo build --release
# Stop the service
service barycenter stop
# Replace the binary
cp target/release/barycenter /usr/local/bin/barycenter
# Start the service
service barycenter start
# Verify
service barycenter status
```
Database migrations run automatically on startup.
## Jail Deployment
Barycenter works well inside a FreeBSD jail for additional isolation. The setup is identical to the steps above, performed inside the jail. Ensure the jail has network access to any external PostgreSQL instance if not using SQLite.
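For reference, a jail definition for this purpose might look like the following `jail.conf` fragment. All paths, hostnames, and addresses are illustrative:

```
# /etc/jail.conf.d/barycenter.conf -- illustrative values only
barycenter {
    path = "/usr/local/jails/barycenter";
    host.hostname = "barycenter.example.com";
    ip4.addr = "192.0.2.10";
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    exec.clean;
    mount.devfs;
}
```

With `barycenter_enable="YES"` set in the jail's own `/etc/rc.conf`, the rc.d service starts automatically when the jail boots.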
## Further Reading
- [Production Checklist](./production-checklist.md) -- steps to verify before going live
- [Reverse Proxy and TLS](./reverse-proxy-tls.md) -- placing Barycenter behind a reverse proxy
- [Backup and Recovery](./backup-recovery.md) -- backing up the data directory

# Gateway API
The Barycenter Helm chart supports the [Kubernetes Gateway API](https://gateway-api.sigs.k8s.io/) as an alternative to Ingress. Gateway API provides a more expressive and role-oriented model for routing traffic into the cluster.
When `gatewayAPI.enabled` is `true`, the chart creates an HTTPRoute resource instead of (or in addition to) an Ingress.
## Prerequisites
- A Gateway API implementation installed in the cluster (e.g., Envoy Gateway, Istio, Cilium, or nginx Gateway Fabric)
- A `Gateway` resource already deployed that the HTTPRoute can attach to
- Gateway API CRDs installed (typically bundled with the implementation)
## Basic HTTPRoute
```yaml
gatewayAPI:
  enabled: true
  parentRefs:
    - name: main-gateway
      namespace: gateway-system
  hostnames:
    - idp.example.com
```
This creates an HTTPRoute that:
1. Attaches to the Gateway named `main-gateway` in the `gateway-system` namespace.
2. Matches requests for the hostname `idp.example.com`.
3. Routes all matching traffic to the Barycenter Service on port 8080.
## With Filters
HTTPRoute filters can modify requests before they reach Barycenter. For example, to add request headers:
```yaml
gatewayAPI:
  enabled: true
  parentRefs:
    - name: main-gateway
      namespace: gateway-system
  hostnames:
    - idp.example.com
  filters:
    - type: RequestHeaderModifier
      requestHeaderModifier:
        set:
          - name: X-Forwarded-Proto
            value: https
```
## TLS with Gateway API
TLS termination in the Gateway API model is handled by the `Gateway` resource, not the HTTPRoute. The Gateway references a certificate Secret:
```yaml
# Gateway resource (managed separately from the Barycenter Helm chart)
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: main-gateway
  namespace: gateway-system
spec:
  gatewayClassName: envoy
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      hostname: "idp.example.com"
      tls:
        mode: Terminate
        certificateRefs:
          - kind: Secret
            name: idp-tls
      allowedRoutes:
        namespaces:
          from: Selector
          selector:
            matchLabels:
              gateway-access: "true"
```
The Barycenter HTTPRoute then attaches to this listener. No TLS configuration is needed in the chart's `gatewayAPI` section.
If your Gateway API implementation supports cert-manager integration, certificate issuance and renewal can be automated by annotating the Gateway resource.
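With cert-manager's Gateway API support enabled, that annotation might look like the following fragment; the issuer name is an example, and cert-manager populates the Secret referenced by the listener's `certificateRefs`:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: main-gateway
  namespace: gateway-system
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  # listeners as shown above; cert-manager issues and renews the
  # certificate stored in the referenced Secret (e.g., idp-tls)
```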
## Multiple Parent References
An HTTPRoute can attach to multiple Gateways. This is useful when you have separate Gateways for internal and external traffic:
```yaml
gatewayAPI:
  enabled: true
  parentRefs:
    - name: external-gateway
      namespace: gateway-system
      sectionName: https
    - name: internal-gateway
      namespace: gateway-system
      sectionName: https
  hostnames:
    - idp.example.com
    - idp.internal.example.com
```
## Combining with Ingress
The `ingress.enabled` and `gatewayAPI.enabled` flags are independent. You can enable both if your cluster uses a mix of Ingress and Gateway API, though in most cases you will choose one or the other.
## Verifying the HTTPRoute
After deploying, check the HTTPRoute status:
```bash
kubectl get httproute -n barycenter
```
Inspect the route details:
```bash
kubectl describe httproute barycenter -n barycenter
```
Look for the `Accepted` and `ResolvedRefs` conditions under the `status.parents` section. Both should be `True`.
Test the OIDC discovery endpoint:
```bash
curl https://idp.example.com/.well-known/openid-configuration
```
## Comparison: Ingress vs. Gateway API
| Feature | Ingress | Gateway API |
|---------|---------|-------------|
| TLS termination | Configured on Ingress resource | Configured on Gateway resource |
| Header manipulation | Via controller-specific annotations | Native `RequestHeaderModifier` filter |
| Traffic splitting | Limited, controller-specific | Native `BackendRef` weights |
| Role separation | Single resource | Gateway (infra team) + HTTPRoute (app team) |
| Multi-cluster | Controller-specific | Standardized across implementations |
For new clusters, Gateway API is the recommended approach. For existing clusters with established Ingress controllers, the Ingress path remains fully supported.
## Further Reading
- [Helm Chart Values](./helm-values.md) -- full reference of `gatewayAPI.*` values
- [Ingress Configuration](./helm-ingress.md) -- alternative Ingress-based setup
- [Kubernetes Gateway API documentation](https://gateway-api.sigs.k8s.io/)

# Ingress Configuration
The Barycenter Helm chart can create a Kubernetes Ingress resource to expose the public OIDC server (port 8080) through an Ingress controller. This page covers common configurations using the nginx Ingress controller and cert-manager for automatic TLS certificates.
## Basic Ingress
Enable Ingress in your `values.yaml`:
```yaml
ingress:
  enabled: true
  className: nginx
  hosts:
    - host: idp.example.com
      paths:
        - path: /
          pathType: Prefix
```
This creates an Ingress resource that routes all traffic for `idp.example.com` to the Barycenter Service on port 8080.
## Ingress with TLS
### Manual TLS Secret
If you manage TLS certificates yourself, create a Kubernetes Secret and reference it:
```bash
kubectl create secret tls idp-tls \
  --cert=tls.crt \
  --key=tls.key \
  -n barycenter
```
```yaml
ingress:
  enabled: true
  className: nginx
  hosts:
    - host: idp.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: idp-tls
      hosts:
        - idp.example.com
```
### Automatic TLS with cert-manager
[cert-manager](https://cert-manager.io/) can automatically provision and renew TLS certificates from Let's Encrypt or other ACME-compatible CAs.
**Prerequisites:**
1. cert-manager installed in the cluster
2. A ClusterIssuer configured (e.g., `letsencrypt-prod`)
**Values:**
```yaml
ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
    - host: idp.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: idp-tls
      hosts:
        - idp.example.com
```
The `cert-manager.io/cluster-issuer` annotation tells cert-manager to issue a certificate using the named ClusterIssuer and store it in the Secret `idp-tls`. cert-manager handles renewal automatically.
## Ingress Annotations
Common nginx Ingress annotations for an identity provider:
```yaml
ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "1m"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "30"
  hosts:
    - host: idp.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: idp-tls
      hosts:
        - idp.example.com
```
| Annotation | Purpose |
|------------|---------|
| `ssl-redirect: "true"` | Redirects HTTP to HTTPS |
| `proxy-body-size: "1m"` | Limits request body size. OIDC requests are small; 1 MB is generous |
| `proxy-read-timeout: "30"` | Timeout in seconds for reading a response from Barycenter |
| `proxy-send-timeout: "30"` | Timeout in seconds for sending a request to Barycenter |
## Multiple Hosts
To serve multiple domains (for example, a production domain and a staging alias):
```yaml
ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
    - host: idp.example.com
      paths:
        - path: /
          pathType: Prefix
    - host: idp-staging.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: idp-tls
      hosts:
        - idp.example.com
    - secretName: idp-staging-tls
      hosts:
        - idp-staging.example.com
```
> **Note:** The `config.server.publicBaseUrl` value must match the primary domain used as the OAuth issuer. OIDC clients validate the `iss` claim in ID tokens against this URL.
## Verifying the Ingress
After deploying, verify the Ingress was created and has an address assigned:
```bash
kubectl get ingress -n barycenter
```
Expected output:
```
NAME         CLASS   HOSTS             ADDRESS        PORTS     AGE
barycenter   nginx   idp.example.com   203.0.113.10   80, 443   2m
```
Test the OIDC discovery endpoint:
```bash
curl https://idp.example.com/.well-known/openid-configuration
```
## Alternative: Gateway API
If your cluster uses the Gateway API instead of Ingress, see [Gateway API](./gateway-api.md) for HTTPRoute configuration.

# Helm Chart Values
This page documents all configurable values for the Barycenter Helm chart. Values are set in a `values.yaml` file or passed directly with `--set` on the Helm command line.
## Image
| Key | Default | Description |
|-----|---------|-------------|
| `image.repository` | `ghcr.io/cloudnebulaproject/barycenter` | Container image repository |
| `image.tag` | Chart `appVersion` | Image tag to deploy |
| `image.pullPolicy` | `IfNotPresent` | Kubernetes image pull policy |
Example:
```yaml
image:
  repository: ghcr.io/cloudnebulaproject/barycenter
  tag: "0.2.0"
  pullPolicy: IfNotPresent
```
## Application Configuration
These values are rendered into the `config.toml` ConfigMap that the pod mounts at startup.
| Key | Default | Description |
|-----|---------|-------------|
| `config.server.publicBaseUrl` | `""` | OAuth issuer URL. Must be the externally-reachable URL (e.g., `https://idp.example.com`) |
| `config.database.url` | `sqlite:///app/data/barycenter.db?mode=rwc` | Database connection string. Supports `sqlite://` and `postgresql://` |
| `config.authz.enabled` | `false` | Enable the authorization policy engine |
Example:
```yaml
config:
  server:
    publicBaseUrl: "https://idp.example.com"
  database:
    url: "postgresql://barycenter:secret@postgres.db.svc:5432/barycenter"
  authz:
    enabled: true
```
## Ingress
| Key | Default | Description |
|-----|---------|-------------|
| `ingress.enabled` | `false` | Create an Ingress resource |
| `ingress.className` | `""` | Ingress class name (e.g., `nginx`) |
| `ingress.annotations` | `{}` | Annotations for the Ingress resource |
| `ingress.hosts` | `[]` | List of host rules with paths |
| `ingress.tls` | `[]` | TLS configuration with secret names and hosts |
See [Ingress Configuration](./helm-ingress.md) for detailed examples.
## Gateway API
| Key | Default | Description |
|-----|---------|-------------|
| `gatewayAPI.enabled` | `false` | Create an HTTPRoute resource |
| `gatewayAPI.parentRefs` | `[]` | Gateway references the HTTPRoute attaches to |
| `gatewayAPI.hostnames` | `[]` | Hostnames the HTTPRoute matches |
| `gatewayAPI.filters` | `[]` | Optional HTTPRoute filters |
See [Gateway API](./gateway-api.md) for detailed examples.
## Persistence
| Key | Default | Description |
|-----|---------|-------------|
| `persistence.enabled` | `false` | Create a PersistentVolumeClaim for `/app/data` |
| `persistence.size` | `1Gi` | Storage request size |
| `persistence.storageClass` | `""` | Storage class name. Empty uses the cluster default |
| `persistence.accessModes` | `["ReadWriteOnce"]` | PVC access modes |
Example:
```yaml
persistence:
  enabled: true
  size: 5Gi
  storageClass: fast-ssd
```
When `persistence.enabled` is `false`, the data directory uses an `emptyDir` volume. Data is lost when the pod is rescheduled.
> **Note:** If you use PostgreSQL as your database, the PVC is still needed for RSA key material and JWKS files. You can reduce the size to the minimum (e.g., `100Mi`).
## User Sync
| Key | Default | Description |
|-----|---------|-------------|
| `userSync.enabled` | `false` | Run a user-sync init container before the main application starts |
| `userSync.users` | `""` | Inline JSON array of user objects |
| `userSync.existingSecret` | `""` | Name of an existing Secret containing user data under the key `users.json` |
See [User Sync in Kubernetes](./k8s-user-sync.md) for detailed examples.
## Authorization Policies
| Key | Default | Description |
|-----|---------|-------------|
| `authz.policies` | `""` | Inline KDL policy content |
| `authz.existingConfigMap` | `""` | Name of an existing ConfigMap containing policy files |
| `authz.networkPolicy.enabled` | `false` | Create a NetworkPolicy restricting access to port 8082 |
See [Authorization Policies in Kubernetes](./k8s-authz-policies.md) for detailed examples.
## Resources
| Key | Default | Description |
|-----|---------|-------------|
| `resources.requests.cpu` | (not set) | CPU request |
| `resources.requests.memory` | (not set) | Memory request |
| `resources.limits.cpu` | (not set) | CPU limit |
| `resources.limits.memory` | (not set) | Memory limit |
Example:
```yaml
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi
```
Setting resource requests is recommended for production deployments to ensure the scheduler places pods appropriately and to prevent resource contention.
## Autoscaling
| Key | Default | Description |
|-----|---------|-------------|
| `autoscaling.enabled` | `false` | Create a HorizontalPodAutoscaler |
| `autoscaling.minReplicas` | `1` | Minimum number of replicas |
| `autoscaling.maxReplicas` | `10` | Maximum number of replicas |
| `autoscaling.targetCPUUtilizationPercentage` | `80` | Target CPU utilization for scaling |
| `autoscaling.targetMemoryUtilizationPercentage` | (not set) | Target memory utilization for scaling |
Example:
```yaml
autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 8
  targetCPUUtilizationPercentage: 70
```
> **Important:** When autoscaling is enabled with SQLite, only a single replica can safely write to the database. Use PostgreSQL for multi-replica deployments.
## Complete Example
A production-ready `values.yaml`:
```yaml
image:
  tag: "0.2.0"
config:
  server:
    publicBaseUrl: "https://idp.example.com"
  database:
    url: "postgresql://barycenter:secret@postgres.db.svc:5432/barycenter"
  authz:
    enabled: true
ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
    - host: idp.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: idp-tls
      hosts:
        - idp.example.com
persistence:
  enabled: true
  size: 1Gi
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi
autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 6
  targetCPUUtilizationPercentage: 75
userSync:
  enabled: true
  existingSecret: barycenter-users
authz:
  policies: |
    resource "document" {
      permission "read"
      permission "write"
    }
    role "editor" {
      permission "document:read"
      permission "document:write"
    }
  networkPolicy:
    enabled: true
```

# illumos / Solaris SMF
This guide covers deploying Barycenter as a Service Management Facility (SMF) service on illumos distributions such as SmartOS, OmniOS, and OpenIndiana. An SMF manifest is provided in the repository at `deploy/illumos/barycenter.xml`.
## Prerequisites
- An illumos-based system (SmartOS, OmniOS, OpenIndiana, or similar)
- The Rust toolchain (to build from source) or a pre-built binary
- SQLite libraries (if using SQLite) or a reachable PostgreSQL instance
## Step 1: Build the Binary
```bash
cargo build --release
```
The release binary is located at `target/release/barycenter`.
## Step 2: Create a Service User
```bash
useradd -d /var/barycenter -s /usr/bin/false -c "Barycenter IdP" barycenter
mkdir -p /var/barycenter/data
chown -R barycenter:barycenter /var/barycenter
```
## Step 3: Install the Binary
```bash
mkdir -p /opt/barycenter/bin
cp target/release/barycenter /opt/barycenter/bin/barycenter
chmod 755 /opt/barycenter/bin/barycenter
```
## Step 4: Install the Configuration File
```bash
mkdir -p /etc/barycenter
cp config.toml /etc/barycenter/config.toml
chmod 640 /etc/barycenter/config.toml
chown root:barycenter /etc/barycenter/config.toml
```
Edit `/etc/barycenter/config.toml` for your deployment:
```toml
[server]
public_base_url = "https://idp.example.com"
[database]
url = "sqlite:///var/barycenter/data/barycenter.db?mode=rwc"
[keys]
jwks_path = "/var/barycenter/data/jwks.json"
private_key_path = "/var/barycenter/data/private_key.pem"
```
## Step 5: Import the SMF Manifest
```bash
svccfg import deploy/illumos/barycenter.xml
```
This registers the service with SMF. You can verify the import:
```bash
svcs -a | grep barycenter
```
## Step 6: Enable the Service
```bash
svcadm enable barycenter
```
## Managing the Service
```bash
# Check service status
svcs barycenter
# Check detailed status (includes process ID)
svcs -p barycenter
# View service properties
svccfg -s barycenter listprop
# Restart the service
svcadm restart barycenter
# Disable the service
svcadm disable barycenter
# Re-enable the service
svcadm enable barycenter
```
## Viewing Logs
SMF services log to files managed by the framework. Find the log file path:
```bash
svcs -L barycenter
```
View the log:
```bash
less $(svcs -L barycenter)
```
Follow the log in real time:
```bash
tail -f $(svcs -L barycenter)
```
## Troubleshooting Service Failures
If the service enters a `maintenance` state, it means SMF detected a persistent failure:
```bash
# Check the service state
svcs -xv barycenter
# Read the service log for error details
less $(svcs -L barycenter)
# After fixing the issue, clear the maintenance state
svcadm clear barycenter
```
Common causes:
- **Configuration error** -- Invalid `config.toml` syntax or unreachable database.
- **Permission denied** -- The `barycenter` user cannot read the config file or write to the data directory.
- **Port in use** -- Another process is already listening on port 8080, 8081, or 8082.
## Setting Environment Variables
To set the log level or other environment variables, modify the SMF service properties:
```bash
svccfg -s barycenter setenv RUST_LOG info
svcadm restart barycenter
```
## Directory Layout
| Path | Owner | Mode | Purpose |
|------|-------|------|---------|
| `/opt/barycenter/bin/barycenter` | `root:root` | `755` | Application binary |
| `/etc/barycenter/config.toml` | `root:barycenter` | `640` | Configuration file |
| `/var/barycenter/data/` | `barycenter:barycenter` | `750` | Data directory |
| `/var/barycenter/data/private_key.pem` | `barycenter:barycenter` | `600` | RSA private key (created at first run) |
## Upgrading
```bash
# Build the new version
cargo build --release
# Disable the service
svcadm disable barycenter
# Replace the binary
cp target/release/barycenter /opt/barycenter/bin/barycenter
# Enable the service
svcadm enable barycenter
# Verify
svcs barycenter
```
Database migrations run automatically on startup.
## Zone Deployment
On SmartOS and other illumos distributions that support zones, Barycenter can be deployed inside a zone for additional isolation. The setup is identical to the steps above, performed inside the zone. Ensure the zone has network access to any external PostgreSQL instance if not using SQLite.
## Further Reading
- [Production Checklist](./production-checklist.md) -- steps to verify before going live
- [Reverse Proxy and TLS](./reverse-proxy-tls.md) -- placing Barycenter behind a reverse proxy
- [Backup and Recovery](./backup-recovery.md) -- backing up the data directory

# Authorization Policies in Kubernetes
The Barycenter Helm chart provides two ways to deploy [KDL authorization policies](../authz/kdl-policy-language.md) into Kubernetes: inline in `values.yaml` or via an existing ConfigMap.
## Prerequisites
The authorization engine must be enabled in the chart configuration:
```yaml
config:
  authz:
    enabled: true
```
Without this, the authorization server on port 8082 does not start and policy files are ignored.
## Inline Policies
For simple policy sets, define the KDL content directly in `values.yaml`:
```yaml
authz:
  policies: |
    resource "document" {
      permission "read"
      permission "write"
      permission "delete"
    }
    resource "project" {
      permission "read"
      permission "manage"
    }
    role "viewer" {
      permission "document:read"
      permission "project:read"
    }
    role "editor" {
      include "viewer"
      permission "document:write"
    }
    role "admin" {
      include "editor"
      permission "document:delete"
      permission "project:manage"
    }
    grant "admin" on="project/proj-1" to="user/alice"
    grant "editor" on="project/proj-1" to="user/bob"
```
The chart renders this content into a ConfigMap and mounts it into the pod at the path the authorization engine expects.
## Using an Existing ConfigMap
For larger policy sets or when policies are managed through a separate GitOps pipeline, create a ConfigMap containing one or more `.kdl` files:
```bash
kubectl create configmap barycenter-policies \
  --from-file=policies.kdl=./policies.kdl \
  -n barycenter
```
Reference it in your values:
```yaml
authz:
  existingConfigMap: barycenter-policies
```
The ConfigMap can contain multiple files. All `.kdl` files in the ConfigMap are loaded by the authorization engine.
### Managing Policies with Kustomize
If you use Kustomize, you can generate the ConfigMap from a directory of policy files:
```yaml
# kustomization.yaml
configMapGenerator:
  - name: barycenter-policies
    files:
      - policies/base.kdl
      - policies/teams.kdl
      - policies/projects.kdl
```
Then set `authz.existingConfigMap` to the generated ConfigMap name (Kustomize appends a hash suffix by default).
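If you prefer a stable name without the hash suffix, Kustomize's generator options can disable it (at the cost of losing the automatic rollout that a changed name triggers):

```yaml
configMapGenerator:
  - name: barycenter-policies
    options:
      disableNameSuffixHash: true
    files:
      - policies/base.kdl
```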
## Network Policy
The authorization API (port 8082) should not be exposed to the public internet. By default, the chart's Service makes it reachable from anywhere within the cluster. To restrict access to only pods in the same namespace:
```yaml
authz:
  networkPolicy:
    enabled: true
```
This creates a NetworkPolicy that allows ingress to port 8082 only from pods in the same namespace as the Barycenter release. Pods in other namespaces and external traffic are denied.
If your services that need to call the authorization API are in a different namespace, you will need to customize the NetworkPolicy. The generated policy can be used as a starting point:
```bash
kubectl get networkpolicy -n barycenter -o yaml
```
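A hand-written policy for that case might look like the following sketch. The pod labels and the `apps` namespace name are assumptions; match them to your release's actual labels and the namespace of your callers:

```yaml
# Hypothetical example: also allow pods from the "apps" namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: barycenter-authz-cross-ns
  namespace: barycenter
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/instance: barycenter
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: apps
      ports:
        - protocol: TCP
          port: 8082
```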
## Updating Policies
KDL policies are loaded once at startup and are immutable at runtime. To apply policy changes:
1. Update the inline `authz.policies` content or the ConfigMap contents.
2. Run `helm upgrade` to update the ConfigMap.
3. Restart the Barycenter pods to reload policies:
```bash
kubectl rollout restart deployment barycenter -n barycenter
```
The restart is necessary because policy files are read at process startup. A ConfigMap change alone does not trigger a reload.
> **Tip:** To automate restarts on ConfigMap changes, consider using a tool like [Reloader](https://github.com/stakater/Reloader) or adding a checksum annotation to the Deployment template that changes when the ConfigMap content changes.
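The checksum-annotation approach is a common Helm pattern; a minimal sketch for the Deployment template (the template path `authz-configmap.yaml` is an assumption):

```yaml
# templates/deployment.yaml (excerpt)
spec:
  template:
    metadata:
      annotations:
        checksum/policies: {{ include (print $.Template.BasePath "/authz-configmap.yaml") . | sha256sum }}
```

Because the annotation value changes whenever the rendered ConfigMap changes, `helm upgrade` produces a new pod template and triggers a rollout automatically.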
## Verifying Policies
After deployment, verify the authorization engine is running and policies are loaded:
```bash
# Forward the authz port (8082) to your workstation
kubectl port-forward svc/barycenter 8082:8082 -n barycenter

# In another terminal, send a test check request
curl -X POST http://localhost:8082/v1/check \
  -H "Content-Type: application/json" \
  -d '{
    "principal": "user/alice",
    "permission": "document:read",
    "resource": "project/proj-1"
  }'
```
A successful response indicates the policies are loaded and the engine is evaluating requests.
## Further Reading
- [Helm Chart Values](./helm-values.md) -- full reference of `authz.*` values
- [KDL Policy Language](../authz/kdl-policy-language.md) -- syntax and structure of policy files
- [Authorization Overview](../authz/overview.md) -- how the authorization engine works
- [Authz REST API](../authz/rest-api.md) -- the check endpoint and request format

# User Sync in Kubernetes
Barycenter supports provisioning users at startup through an init container that runs before the main application. This is useful for seeding an initial set of users in automated deployments where interactive user creation is not practical.
## How It Works
When `userSync.enabled` is `true`, the Helm chart adds an init container to the Barycenter pod. This init container:
1. Reads a JSON array of user objects from a file.
2. Inserts or updates each user in the database.
3. Exits. The main Barycenter container then starts with the users already present.
The init container runs every time the pod starts, so it is safe to add new users to the list and redeploy. Existing users are updated to match the provided data.
## Inline User Definitions
For small deployments or development environments, define users directly in `values.yaml`:
```yaml
userSync:
  enabled: true
  users: |
    [
      {
        "username": "alice",
        "password": "correct-horse-battery-staple",
        "email": "alice@example.com"
      },
      {
        "username": "bob",
        "password": "another-strong-passphrase",
        "email": "bob@example.com"
      }
    ]
```
> **Warning:** Passwords in `values.yaml` are stored as plaintext in the Kubernetes ConfigMap. For production deployments, use the `existingSecret` method described below.
## Using an Existing Secret
For production, store user data in a Kubernetes Secret:
```bash
kubectl create secret generic barycenter-users \
  --from-file=users.json=./users.json \
  -n barycenter
```
Where `users.json` contains:
```json
[
  {
    "username": "alice",
    "password": "correct-horse-battery-staple",
    "email": "alice@example.com"
  },
  {
    "username": "bob",
    "password": "another-strong-passphrase",
    "email": "bob@example.com"
  }
]
```
Reference the Secret in your values:
```yaml
userSync:
  enabled: true
  existingSecret: barycenter-users
```
The chart mounts the Secret into the init container and reads the `users.json` key.
## User Object Schema
Each user object in the JSON array supports the following fields:
| Field | Required | Description |
|-------|----------|-------------|
| `username` | Yes | Unique username for login |
| `password` | Yes | Plaintext password (hashed with Argon2 on import) |
| `email` | No | User email address |
Passwords are always hashed before being stored in the database. The plaintext password in the JSON is never persisted.
## Updating Users
To add or modify users:
1. Update the JSON array (in `values.yaml` or in the Secret).
2. Redeploy with `helm upgrade`.
The init container runs again and applies the changes. Existing users whose data has not changed are left untouched.
## Combining with Other User Sources
User sync is additive. Users created through other means -- such as the [Admin GraphQL API](../admin/graphql-api.md), [public registration](../admin/public-registration.md), or direct database access -- are not affected by the sync process. The init container only manages the users present in the JSON array.
## Disabling User Sync
To stop running the init container, set `userSync.enabled` to `false` and redeploy. Previously synced users remain in the database; they are not deleted.
```yaml
userSync:
  enabled: false
```
## Troubleshooting
If the pod is stuck in `Init:0/1` status, check the init container logs:
```bash
kubectl logs <pod-name> -c user-sync -n barycenter
```
Common issues:
- **Database not reachable** -- If using PostgreSQL, verify that the database is accessible from the pod and that the connection string in `config.database.url` is correct.
- **Invalid JSON** -- Validate the JSON syntax before deploying. A missing comma or bracket will prevent the init container from completing.
- **Secret not found** -- Ensure the Secret referenced by `existingSecret` exists in the same namespace as the Barycenter release.
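A quick local syntax check before deploying, using Python's standard library (any JSON validator works equally well):

```shell
python3 -m json.tool users.json > /dev/null && echo "users.json is valid JSON"
```

If the file is malformed, `json.tool` prints the line and column of the first error and exits non-zero.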
## Further Reading
- [Helm Chart Values](./helm-values.md) -- full reference of `userSync.*` values
- [User Sync from JSON](../admin/user-sync.md) -- the underlying user-sync mechanism
- [Creating Users](../admin/creating-users.md) -- other methods for provisioning users

# Kubernetes with Helm
Barycenter ships a Helm chart for deploying to Kubernetes clusters. The chart is located at `deploy/helm/barycenter/` in the repository and is currently at version `0.2.0-beta.5`.
## Prerequisites
- Kubernetes 1.26 or later
- Helm 3.12 or later
- `kubectl` configured to access your cluster
## Quick Install
```bash
helm install barycenter ./deploy/helm/barycenter \
  --namespace barycenter \
  --create-namespace
```
This creates a namespace called `barycenter` and deploys Barycenter with default values: SQLite storage, a single replica, and no Ingress.
## Install with Custom Values
Create a `values.yaml` file to override defaults:
```yaml
config:
  server:
    publicBaseUrl: "https://idp.example.com"
  database:
    url: "postgresql://barycenter:secret@postgres.db.svc:5432/barycenter"
ingress:
  enabled: true
  className: nginx
  hosts:
    - host: idp.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: idp-tls
      hosts:
        - idp.example.com
persistence:
  enabled: true
  size: 1Gi
```
Then install:
```bash
helm install barycenter ./deploy/helm/barycenter \
  --namespace barycenter \
  --create-namespace \
  -f values.yaml
```
## Upgrade
After changing values or pulling a new chart version:
```bash
helm upgrade barycenter ./deploy/helm/barycenter \
  --namespace barycenter \
  -f values.yaml
```
## Uninstall
```bash
helm uninstall barycenter --namespace barycenter
```
This removes the Deployment, Services, and related resources. Persistent Volume Claims are retained by default. To delete them as well:
```bash
kubectl delete pvc -l app.kubernetes.io/instance=barycenter -n barycenter
```
## What the Chart Creates
The Helm chart creates the following Kubernetes resources:
| Resource | Purpose |
|----------|---------|
| Deployment | Runs the Barycenter pod(s) |
| Service | Exposes ports 8080, 8081, and 8082 within the cluster |
| ConfigMap | Stores the generated `config.toml` |
| PersistentVolumeClaim | Provides persistent storage for keys and SQLite (when `persistence.enabled`) |
| Ingress or HTTPRoute | Exposes the public OIDC port externally (when enabled) |
| HorizontalPodAutoscaler | Scales pods based on CPU/memory (when `autoscaling.enabled`) |
| ServiceAccount | Identity for the Barycenter pods |
| NetworkPolicy | Restricts access to the authorization port (when `authz.networkPolicy.enabled`) |
## Security Defaults
The chart applies these security settings by default:
- **`runAsNonRoot: true`** -- the container does not run as root.
- **`readOnlyRootFilesystem: true`** -- the container filesystem is immutable.
- **`allowPrivilegeEscalation: false`** -- prevents privilege escalation.
These defaults follow Kubernetes Pod Security Standards at the `restricted` level.
## Architecture in Kubernetes
```
                           +-----------+
Internet --> Ingress or -->| Service   |
             HTTPRoute     | port 8080 |---> Barycenter Pod(s)
                           +-----------+

                           +-----------+
Internal ----------------->| Service   |
(cluster)                  | port 8081 |---> (Admin API)
                           +-----------+

                           +-----------+
Internal --NetworkPolicy-->| Service   |
(same ns)                  | port 8082 |---> (Authz API)
                           +-----------+
```
Only port 8080 is exposed through the Ingress or Gateway API route. The admin and authorization ports are reachable only within the cluster, with optional NetworkPolicy restrictions on the authorization port.
## Further Reading
- [Helm Chart Values](./helm-values.md) -- complete reference of all configurable values
- [Ingress Configuration](./helm-ingress.md) -- setting up nginx Ingress with cert-manager
- [Gateway API](./gateway-api.md) -- using HTTPRoute instead of Ingress
- [User Sync in Kubernetes](./k8s-user-sync.md) -- provisioning users via init containers
- [Authorization Policies in Kubernetes](./k8s-authz-policies.md) -- deploying KDL policies

# Production Checklist
Use this checklist to verify your Barycenter deployment before serving production traffic. Each item includes the rationale and how to verify or fix it.
## Configuration
- [ ] **Set `public_base_url` to the externally-reachable HTTPS URL.**
This value becomes the `iss` (issuer) claim in ID tokens and the `issuer` field in the OpenID discovery document. OIDC clients validate tokens against this URL.
```toml
[server]
public_base_url = "https://idp.example.com"
```
Verify:
```bash
curl https://idp.example.com/.well-known/openid-configuration | jq .issuer
```
- [ ] **Use HTTPS.** TLS must be terminated either by a [reverse proxy](./reverse-proxy-tls.md) or a Kubernetes Ingress/Gateway. Barycenter does not terminate TLS natively. Never expose the HTTP port directly to the internet.
- [ ] **Configure the database connection string.** For production, PostgreSQL is recommended for multi-replica deployments. SQLite is suitable for single-instance setups.
```toml
[database]
url = "postgresql://barycenter:secret@db-host:5432/barycenter"
```
## Logging
- [ ] **Set the log level to `info` or `warn`.** Avoid running `debug` or `trace` in production as these levels produce high log volume and may expose sensitive data in logs.
```bash
RUST_LOG=info
```
- [ ] **Forward logs to a centralized logging system.** Use journald, Docker log drivers, or Kubernetes log aggregation to collect and retain logs.
## Persistent Storage
- [ ] **Persist the data directory.** The data directory contains the RSA private key and (for SQLite) the database. Losing the private key invalidates all issued tokens.
| Deployment | Mount point |
|------------|-------------|
| Docker | `/app/data` volume |
| systemd | `/var/lib/barycenter/data/` |
| FreeBSD | `/var/db/barycenter/data/` |
| illumos | `/var/barycenter/data/` |
| Kubernetes | PVC at `/app/data` |
- [ ] **Verify the data directory is writable by the Barycenter process.**
## Backups
- [ ] **Back up the RSA private key.** This key signs all ID tokens. If lost, every client must re-validate or re-authenticate. See [Backup and Recovery](./backup-recovery.md).
- [ ] **Back up the database.** Both SQLite and PostgreSQL databases should be backed up regularly.
- [ ] **Back up the configuration file.** Store it in version control or a configuration management system.
- [ ] **Test backup restoration.** Periodically verify that backups can be restored to a working state.
## File Permissions
- [ ] **Private key file: mode `600`.** Only the Barycenter service user should be able to read the RSA private key.
```bash
chmod 600 /var/lib/barycenter/data/private_key.pem
```
- [ ] **Configuration file: mode `640`.** The config file may contain database credentials. Restrict access to root and the Barycenter group.
```bash
chmod 640 /etc/barycenter/config.toml
chown root:barycenter /etc/barycenter/config.toml
```
- [ ] **Data directory: mode `750`.** Only the Barycenter user and group should access the directory.
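You can verify all three in one pass (GNU `stat` shown; the `-c` format flags differ on BSD and illumos):

```shell
# Prints mode, owner:group, and path for each file
stat -c '%a %U:%G %n' \
  /var/lib/barycenter/data/private_key.pem \
  /etc/barycenter/config.toml \
  /var/lib/barycenter/data
```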
## Run as Non-Root
- [ ] **The Barycenter process does not run as root.** All deployment methods (systemd, rc.d, SMF, Docker, Kubernetes) should run the process as a dedicated unprivileged user.
| Deployment | User |
|------------|------|
| Docker | Container default (non-root) |
| systemd | `barycenter` |
| FreeBSD | `barycenter` |
| illumos | `barycenter` |
| Kubernetes | `runAsNonRoot: true` in security context |
## Container Security (Docker / Kubernetes)
- [ ] **Enable `no-new-privileges`.** Prevents the process from gaining additional privileges.
- [ ] **Use a read-only root filesystem.** Mount the root filesystem as read-only and provide writable volumes only where needed.
- [ ] **Drop all capabilities.** The Barycenter process does not require any Linux capabilities.
- [ ] **In Kubernetes, apply the `restricted` Pod Security Standard.**
## Network
- [ ] **Only expose port 8080 publicly.** The admin API (8081) and authorization API (8082) should not be reachable from the internet.
- [ ] **Firewall rules.** Restrict inbound traffic to only the ports and source networks required.
| Port | Access |
|------|--------|
| 8080 | Public (through reverse proxy) |
| 8081 | Management network only |
| 8082 | Application network only |
- [ ] **In Kubernetes, enable the authz NetworkPolicy** if using the authorization engine:
```yaml
authz:
  networkPolicy:
    enabled: true
```
## Monitoring and Health Checks
- [ ] **Set up health checks.** Monitor the OIDC discovery endpoint to confirm the service is responsive:
```bash
curl -f https://idp.example.com/.well-known/openid-configuration
```
- [ ] **Monitor disk usage.** For SQLite deployments, the database grows over time. Set alerts for low disk space on the data volume.
- [ ] **Monitor certificate expiration.** Set alerts for TLS certificates nearing expiry. Automated renewal (certbot, cert-manager) should be verified periodically.
- [ ] **Monitor background jobs.** Query the admin API to check that cleanup jobs are running successfully:
```graphql
query {
  jobLogs(limit: 5, onlyFailures: true) {
    jobName
    startedAt
    success
  }
}
```
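The certificate-expiry check can be scripted with `openssl`; a minimal sketch against a saved certificate file (the path is an assumption — for a live check, pipe `openssl s_client` output into the same `x509` command):

```shell
# Print the notAfter date of the certificate
openssl x509 -noout -enddate -in /etc/letsencrypt/live/idp.example.com/fullchain.pem
```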
## Client Registration
- [ ] **Review registered clients.** Ensure only expected clients are registered. Remove test clients that should not exist in production.
- [ ] **Verify redirect URIs.** Each registered client's redirect URIs should use HTTPS and match the actual callback URLs of the client application.
## Summary
Completing every item on this checklist ensures that Barycenter is deployed with appropriate security, reliability, and operational visibility for production use. Revisit this checklist after infrastructure changes, upgrades, or scaling events.

# Reverse Proxy and TLS
Barycenter does not terminate TLS natively. In production, it should be placed behind a reverse proxy that handles TLS termination and forwards requests to Barycenter over HTTP on the local network or loopback interface.
## Why a Reverse Proxy
- **TLS termination** -- The proxy handles certificate management and encryption.
- **HTTP/2 and HTTP/3** -- Most reverse proxies support modern HTTP protocols transparently.
- **Rate limiting and request filtering** -- Additional protection before requests reach the application.
- **Static asset serving** -- The proxy can serve static files (CSS, JavaScript, WASM) directly if needed.
- **Centralized logging** -- Access logs in a standardized format.
## Port Mapping
Only the public OIDC server (port 8080) should be exposed through the reverse proxy. The admin (8081) and authorization (8082) ports should remain on the internal network.
```
Internet
    |
    v
[Reverse Proxy :443] --> [Barycenter :8080]  (public OIDC)
                         [Barycenter :8081]  (admin, internal only)
                         [Barycenter :8082]  (authz, internal only)
```
## nginx
### Basic Configuration
```nginx
server {
    listen 443 ssl http2;
    server_name idp.example.com;

    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
}
```
### With HTTP-to-HTTPS Redirect
```nginx
server {
    listen 80;
    server_name idp.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name idp.example.com;

    ssl_certificate /etc/letsencrypt/live/idp.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/idp.example.com/privkey.pem;

    # TLS hardening
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    # HSTS - instruct browsers to always use HTTPS
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains" always;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Timeouts
        proxy_read_timeout 30s;
        proxy_send_timeout 30s;
        proxy_connect_timeout 5s;

        # Buffer settings
        proxy_buffering on;
        proxy_buffer_size 4k;
        proxy_buffers 8 4k;
    }
}
```
### With Let's Encrypt (certbot)
Install certbot and obtain a certificate:
```bash
sudo certbot --nginx -d idp.example.com
```
Certbot modifies the nginx configuration to add TLS settings and sets up automatic renewal via a systemd timer or cron job.
## Caddy
Caddy provides automatic HTTPS with Let's Encrypt out of the box:
```
idp.example.com {
    reverse_proxy localhost:8080
}
```
This is the entire configuration needed. Caddy automatically obtains and renews TLS certificates from Let's Encrypt and redirects HTTP to HTTPS.
For more control:
```
idp.example.com {
    reverse_proxy localhost:8080 {
        header_up X-Real-IP {remote_host}
        header_up X-Forwarded-Proto {scheme}
    }
    header {
        Strict-Transport-Security "max-age=63072000; includeSubDomains"
        X-Content-Type-Options "nosniff"
        X-Frame-Options "DENY"
    }
}
```
## HAProxy
```
frontend https
    bind *:443 ssl crt /etc/haproxy/certs/idp.example.com.pem
    default_backend barycenter

backend barycenter
    server barycenter1 127.0.0.1:8080 check
    http-request set-header X-Real-IP %[src]
    http-request set-header X-Forwarded-Proto https
```
## Apache httpd
```apache
<VirtualHost *:443>
    ServerName idp.example.com

    SSLEngine on
    SSLCertificateFile /path/to/cert.pem
    SSLCertificateKeyFile /path/to/key.pem

    ProxyPreserveHost On
    ProxyPass / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/

    RequestHeader set X-Forwarded-Proto "https"
    RequestHeader set X-Real-IP "%{REMOTE_ADDR}e"
</VirtualHost>
```
## Important: Set the Public Base URL
Regardless of which reverse proxy you use, you must configure Barycenter's `public_base_url` to match the external URL:
```toml
[server]
public_base_url = "https://idp.example.com"
```
Or via environment variable:
```bash
BARYCENTER__SERVER__PUBLIC_BASE_URL=https://idp.example.com
```
This URL is used as the `iss` (issuer) claim in ID tokens and as the `issuer` in the OpenID discovery document. If it does not match the URL that clients use to reach Barycenter, token validation will fail.
## TLS Best Practices
- **Use TLS 1.2 or 1.3 only.** Disable TLS 1.0 and 1.1.
- **Enable HSTS.** The `Strict-Transport-Security` header prevents protocol downgrade attacks.
- **Use strong cipher suites.** Prefer AEAD ciphers (AES-GCM, ChaCha20-Poly1305).
- **Automate certificate renewal.** Use Let's Encrypt with certbot, Caddy's built-in ACME, or cert-manager in Kubernetes.
- **Monitor certificate expiration.** Set up alerts for certificates approaching their expiry date.
## Restricting Admin and Authz Access
If you need to expose the admin or authorization ports through the proxy (for example, from a management network), use separate server blocks with IP-based access control:
```nginx
server {
    listen 443 ssl http2;
    server_name admin.idp.internal.example.com;

    ssl_certificate /path/to/internal-cert.pem;
    ssl_certificate_key /path/to/internal-key.pem;

    allow 10.0.0.0/8;
    deny all;

    location / {
        proxy_pass http://localhost:8081;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```
## Further Reading
- [Production Checklist](./production-checklist.md) -- complete list of pre-launch checks
- [Docker Compose](./docker-compose.md) -- reverse proxy in front of a Compose stack
- [Ingress Configuration](./helm-ingress.md) -- TLS termination in Kubernetes

# Linux systemd
This guide covers deploying Barycenter as a systemd service on Linux distributions such as Debian, Ubuntu, Fedora, RHEL, and Arch Linux. A systemd unit file is provided in the repository at `deploy/systemd/barycenter.service`.
## Prerequisites
- A Linux system with systemd
- The Rust toolchain (to build from source) or a pre-built binary
- SQLite development libraries (if using SQLite) or a reachable PostgreSQL instance
## Step 1: Build the Binary
```bash
cargo build --release
```
The release binary is located at `target/release/barycenter`.
## Step 2: Create a Service User
Create a dedicated system user with no login shell and a home directory for data:
```bash
sudo useradd -r -s /bin/false -d /var/lib/barycenter barycenter
```
## Step 3: Install the Binary
```bash
sudo cp target/release/barycenter /usr/local/bin/barycenter
sudo chmod 755 /usr/local/bin/barycenter
```
## Step 4: Create Directories
```bash
sudo mkdir -p /etc/barycenter
sudo mkdir -p /var/lib/barycenter/data
sudo chown -R barycenter:barycenter /var/lib/barycenter
```
| Directory | Purpose |
|-----------|---------|
| `/etc/barycenter/` | Configuration file |
| `/var/lib/barycenter/data/` | Database (SQLite), RSA private key, JWKS |
## Step 5: Install the Configuration File
Copy and edit the configuration file:
```bash
sudo cp config.toml /etc/barycenter/config.toml
sudo chmod 640 /etc/barycenter/config.toml
sudo chown root:barycenter /etc/barycenter/config.toml
```
Edit `/etc/barycenter/config.toml` to set the correct values for your deployment. At a minimum, configure the `public_base_url` and database path:
```toml
[server]
public_base_url = "https://idp.example.com"

[database]
url = "sqlite:///var/lib/barycenter/data/barycenter.db?mode=rwc"

[keys]
jwks_path = "/var/lib/barycenter/data/jwks.json"
private_key_path = "/var/lib/barycenter/data/private_key.pem"
```
## Step 6: Install the systemd Unit
```bash
sudo cp deploy/systemd/barycenter.service /etc/systemd/system/barycenter.service
sudo systemctl daemon-reload
```
The unit file runs Barycenter as the `barycenter` user, reads the configuration from `/etc/barycenter/config.toml`, and restarts the service on failure.
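The file in `deploy/systemd/barycenter.service` is authoritative. As an illustrative sketch only, a unit with that behavior has this general shape (the hardening directives below are assumptions added for illustration, not a copy of the repository file -- verify each against your deployment before enabling):

```ini
[Unit]
Description=Barycenter OpenID Connect Identity Provider
After=network-online.target
Wants=network-online.target

[Service]
User=barycenter
Group=barycenter
ExecStart=/usr/local/bin/barycenter --config /etc/barycenter/config.toml
Restart=on-failure
RestartSec=5
# Illustrative hardening (not from the shipped unit):
NoNewPrivileges=true
ProtectSystem=strict
ReadWritePaths=/var/lib/barycenter

[Install]
WantedBy=multi-user.target
```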
## Step 7: Enable and Start
```bash
sudo systemctl enable --now barycenter
```
This enables Barycenter to start automatically on boot and starts it immediately.
## Managing the Service
```bash
# Check status
sudo systemctl status barycenter
# View logs
sudo journalctl -u barycenter
# Follow logs in real time
sudo journalctl -u barycenter -f
# Restart after a configuration change
sudo systemctl restart barycenter
# Stop the service
sudo systemctl stop barycenter
# Disable automatic start on boot
sudo systemctl disable barycenter
```
## Log Level
Set the log level through the `RUST_LOG` environment variable. You can override it in the unit file by creating a drop-in:
```bash
sudo systemctl edit barycenter
```
Add the following content:
```ini
[Service]
Environment=RUST_LOG=info
```
Save and restart:
```bash
sudo systemctl restart barycenter
```
Common log level values:
| Value | Description |
|-------|-------------|
| `error` | Only errors |
| `warn` | Warnings and errors |
| `info` | Informational messages (recommended for production) |
| `debug` | Detailed debugging output |
| `barycenter=debug` | Debug output for Barycenter only, info for dependencies |
## File Permissions Summary
| Path | Owner | Mode | Purpose |
|------|-------|------|---------|
| `/usr/local/bin/barycenter` | `root:root` | `755` | Application binary |
| `/etc/barycenter/config.toml` | `root:barycenter` | `640` | Configuration file |
| `/var/lib/barycenter/data/` | `barycenter:barycenter` | `750` | Data directory |
| `/var/lib/barycenter/data/private_key.pem` | `barycenter:barycenter` | `600` | RSA private key (created at first run) |
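The layout above can be rehearsed in a scratch directory to confirm the modes before touching the real paths (the temporary directory here stands in for `/etc/barycenter` and `/var/lib/barycenter`):

```shell
# Recreate the permission layout in a throwaway directory and verify each mode.
tmp=$(mktemp -d)
mkdir -p "$tmp/data"
touch "$tmp/config.toml" "$tmp/data/private_key.pem"

chmod 640 "$tmp/config.toml"          # config: owner rw, group r
chmod 750 "$tmp/data"                 # data dir: owner + group only
chmod 600 "$tmp/data/private_key.pem" # private key: owner only

stat -c '%a %n' "$tmp/config.toml" "$tmp/data" "$tmp/data/private_key.pem"
```

`stat -c '%a'` prints the octal mode, so the output should show `640`, `750`, and `600` for the three paths.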
## Upgrading
To upgrade Barycenter to a new version:
```bash
# Build the new version
cargo build --release
# Stop the service
sudo systemctl stop barycenter
# Replace the binary
sudo cp target/release/barycenter /usr/local/bin/barycenter
# Start the service
sudo systemctl start barycenter
# Verify
sudo systemctl status barycenter
sudo journalctl -u barycenter --since "1 minute ago"
```
Database migrations run automatically on startup.
## Further Reading
- [Production Checklist](./production-checklist.md) -- steps to verify before going live
- [Reverse Proxy and TLS](./reverse-proxy-tls.md) -- placing Barycenter behind nginx
- [Backup and Recovery](./backup-recovery.md) -- backing up the data directory

# Architecture
Barycenter is an OpenID Connect Identity Provider built in Rust. This page provides a high-level overview of the system architecture and links to detailed documentation for each subsystem.
## System Overview
Barycenter runs as a single binary that serves two HTTP interfaces:
- **Main server** (default port 9090): Handles all OpenID Connect, OAuth 2.0, WebAuthn, and user-facing endpoints.
- **Admin server** (default port 9091): Serves the GraphQL management API for administrative operations.
Both servers share the same application state, database connection, and background job scheduler.
```text
                   ┌──────────────────────────────────────────┐
                   │                Barycenter                │
                   │                                          │
Clients ─────────► │ :9090 Main Server (OIDC/OAuth/WebAuthn)  │
                   │                                          │
Admins ──────────► │ :9091 Admin Server (GraphQL)             │
                   │                                          │
                   │       Background Jobs (scheduled)        │
                   │                                          │
                   │           ┌──────────────────┐           │
                   │           │     AppState     │           │
                   │           │  ┌────────────┐  │           │
                   │           │  │  Settings  │  │           │
                   │           │  │  Database  │  │           │
                   │           │  │ JwksManager│  │           │
                   │           │  │  WebAuthn  │  │           │
                   │           │  └────────────┘  │           │
                   │           └────────┬─────────┘           │
                   │                    │                     │
                   │           ┌────────▼─────────┐           │
                   │           │     Database     │           │
                   │           │ SQLite / Postgres│           │
                   │           └──────────────────┘           │
                   └──────────────────────────────────────────┘
```
## Application State
The `AppState` struct is shared across all request handlers via Axum's state extraction. It contains:
| Field | Type | Purpose |
|-------|------|---------|
| `settings` | `Arc<Settings>` | Application configuration (server, database, keys, federation) |
| `db` | `DatabaseConnection` | SeaORM database connection (SQLite or PostgreSQL) |
| `jwks` | `JwksManager` | RSA key management, JWT signing, JWKS publication |
| `webauthn` | `WebAuthnManager` | WebAuthn/passkey operations (registration, authentication, 2FA) |
## Startup Sequence
The application initializes in a fixed order; each step depends on the one before it:
1. **Parse CLI arguments** -- Read the config file path from command-line arguments.
2. **Load settings** -- Merge configuration from the config file, environment variables, and defaults.
3. **Initialize database** -- Connect to SQLite or PostgreSQL and run pending migrations via `Migrator::up()`.
4. **Initialize JWKS** -- Generate or load RSA keys for JWT signing.
5. **Initialize WebAuthn** -- Configure the WebAuthn manager with the application's origin and relying party ID.
6. **Start background jobs** -- Schedule cleanup jobs for sessions, tokens, and WebAuthn challenges.
7. **Start HTTP servers** -- Bind the main server and admin server to their configured ports.
## Key Subsystems
### Module Structure
The codebase is organized into focused modules, each handling a specific concern. See [Module Structure](module-structure.md) for the complete list with descriptions and a dependency graph.
### Database Schema
Barycenter uses SeaORM with 12 entity tables covering clients, users, tokens, sessions, passkeys, and administrative records. See [Database Schema](database-schema.md) for table definitions.
### Error Handling
Errors are handled through the `CrabError` enum with miette diagnostics for developer-facing messages and OAuth-compliant error responses for client-facing errors. See [Error Handling](error-handling.md) for details.
### Security
Security is enforced at multiple layers: transport (TLS), browser (security headers), protocol (PKCE, nonce), and infrastructure (hardening). See the [Security](../security/security-model.md) section for comprehensive documentation.
## Request Flow
A typical OpenID Connect Authorization Code flow passes through these components:
```text
1. GET /authorize
   └─► web module
        └─► Validate client_id, redirect_uri, scope, PKCE
             └─► Check session (session module)
                  └─► If not authenticated: redirect to /login
             └─► Store auth code (storage module)
                  └─► Redirect to client with code + state

2. POST /token
   └─► web module
        └─► Authenticate client (Basic auth or POST body)
        └─► Validate authorization code + PKCE verifier (storage module)
        └─► Generate access token (storage module)
        └─► Sign ID token (jwks module)
        └─► Return JSON response

3. GET /userinfo
   └─► web module
        └─► Validate Bearer token (storage module)
        └─► Return user claims as JSON
```
## Technology Stack
| Component | Technology |
|-----------|-----------|
| Language | Rust |
| Web framework | Axum |
| Database ORM | SeaORM |
| Database backends | SQLite, PostgreSQL |
| JWT/JOSE | josekit |
| Password hashing | argon2 |
| WebAuthn | webauthn-rs |
| GraphQL | async-graphql |
| WASM tooling | wasm-pack, wasm-bindgen |
| Serialization | serde, serde_json |
| Configuration | config-rs |

# Building
Barycenter is built with Cargo, the Rust package manager and build system. The project is organized as a Cargo workspace with multiple crates.
## Quick Start
```bash
# Check the code compiles without producing a binary
cargo check
# Build in debug mode (faster compilation, unoptimized)
cargo build
# Build in release mode (slower compilation, optimized)
cargo build --release
# Run directly in debug mode
cargo run
# Run with a custom configuration file
cargo run -- --config path/to/config.toml
# Run in release mode
cargo run --release
```
## Workspace Structure
The repository is organized as a Cargo workspace with three crates:
```
barycenter/
├── Cargo.toml          # Workspace root
├── src/                # Main application crate
│   ├── main.rs         # Entry point
│   ├── lib.rs          # Library root (module declarations)
│   └── ...
├── client-wasm/        # WebAssembly client crate
│   ├── Cargo.toml
│   └── src/
│       └── lib.rs
├── migration/          # SeaORM database migrations
│   ├── Cargo.toml
│   └── src/
│       ├── lib.rs
│       └── m*.rs       # Individual migration files
└── static/             # Static assets served by the web server
    └── wasm/           # Built WASM output (generated)
```
### Main Crate (`barycenter`)
The primary application crate containing all server-side logic: HTTP endpoints, authentication, session management, database operations, JWKS handling, and the admin GraphQL API.
### Client WASM Crate (`client-wasm`)
A Rust library compiled to WebAssembly that runs in the browser. It provides the client-side WebAuthn/passkey functionality used by the login and account management pages. This crate is built separately with `wasm-pack` and is not part of the normal `cargo build`. See [WASM Client](wasm-client.md) for build instructions.
### Migration Crate (`migration`)
Contains SeaORM database migration files that define and evolve the database schema. Migrations run automatically on application startup. This crate is a dependency of the main crate.
## Build Profiles
### Debug Build
```bash
cargo build
```
The debug build is intended for development. It compiles faster but produces larger, slower binaries. Debug builds include:
- Debug symbols for debugger support
- Overflow checks on arithmetic operations
- Debug assertions
- No optimizations
The output binary is located at `target/debug/barycenter`.
### Release Build
```bash
cargo build --release
```
The release build is intended for production deployment. It takes longer to compile but produces optimized binaries. Release builds include:
- Full optimization (opt-level 3 by default)
- No debug assertions
- Smaller binary size (with strip if configured)
The output binary is located at `target/release/barycenter`.
## Checking Code
To verify that the code compiles without producing a binary:
```bash
cargo check
```
This is significantly faster than `cargo build` and is useful during development for catching compilation errors quickly.
## Logging
Barycenter uses the `RUST_LOG` environment variable to control log verbosity:
```bash
# Enable debug logging for all crates
RUST_LOG=debug cargo run
# Enable trace logging for Barycenter only
RUST_LOG=barycenter=trace cargo run
# Combine different levels for different crates
RUST_LOG=barycenter=debug,sea_orm=info cargo run
```
## Cross-Compilation
For building Docker images targeting different architectures (e.g., building an ARM64 image on an AMD64 host), see the Docker build configuration in the CI pipeline. The release process produces multi-platform images for both `amd64` and `arm64`.
## Next Steps
- [Testing](testing.md) -- Running the test suite
- [WASM Client](wasm-client.md) -- Building the WebAssembly client
- [Architecture](architecture.md) -- Understanding the codebase structure

# Contributing
This page describes the development workflow, branching strategy, commit conventions, and CI pipeline for contributing to Barycenter.
## Branching Strategy
Barycenter follows the Gitflow workflow:
| Branch | Purpose | Merges Into |
|--------|---------|-------------|
| `main` | Production-ready code. Every commit on `main` is a release or release candidate. | -- |
| `develop` | Integration branch for features. Contains the latest development changes. | `main` (via release branch) |
| `feature/*` | New features and enhancements. One branch per feature. | `develop` |
| `release/*` | Release preparation. Version bumps, changelog updates, final fixes. | `main` and `develop` |
| `hotfix/*` | Urgent production fixes. Branched from `main`. | `main` and `develop` |
### Feature Branch Workflow
1. Create a feature branch from `develop`:
```bash
git checkout develop
git pull origin develop
git checkout -b feature/my-feature
```
2. Make changes, commit with conventional commit messages (see below).
3. Push and open a pull request targeting `develop`:
```bash
git push -u origin feature/my-feature
```
4. After code review and CI checks pass, merge the PR into `develop`.
### Release Workflow
1. Create a release branch from `develop`:
```bash
git checkout -b release/1.2.0 develop
```
2. Update version numbers, finalize changelog, apply last-minute fixes.
3. Merge into `main` and tag:
```bash
git checkout main
git merge release/1.2.0
git tag v1.2.0
git push origin main --tags
```
4. Merge back into `develop`:
```bash
git checkout develop
git merge release/1.2.0
```
### Hotfix Workflow
1. Create a hotfix branch from `main`:
```bash
git checkout -b hotfix/1.2.1 main
```
2. Apply the fix and update the version.
3. Merge into both `main` (with tag) and `develop`.
## Conventional Commits
All commit messages must follow the [Conventional Commits](https://www.conventionalcommits.org/) format:
```
<type>[optional scope]: <description>
[optional body]
[optional footer(s)]
```
### Commit Types
| Type | Usage |
|------|-------|
| `feat` | A new feature or capability |
| `fix` | A bug fix |
| `docs` | Documentation changes only |
| `chore` | Build process, dependencies, or auxiliary tool changes |
| `refactor` | Code change that neither fixes a bug nor adds a feature |
| `test` | Adding or updating tests |
| `perf` | Performance improvement |
| `ci` | CI/CD pipeline changes |
| `style` | Code style changes (formatting, whitespace) that do not affect logic |
### Examples
```
feat: add refresh token rotation
Implement refresh token rotation per RFC 6749. When a refresh token
is used, the old token is revoked and a new one is issued.
Closes #42
```
```
fix: prevent double consumption of authorization codes
Authorization codes were not atomically marked as consumed, allowing
a race condition where the same code could be exchanged twice.
```
```
docs: add PKCE security documentation
```
```
chore: update sea-orm to 1.1.0
```
### Breaking Changes
Breaking changes must include a `BREAKING CHANGE` footer or a `!` after the type:
```
feat!: change token endpoint to require PKCE for all clients
BREAKING CHANGE: Public clients without PKCE will now receive an
invalid_request error. All clients must include code_challenge and
code_challenge_method=S256 in authorization requests.
```
## Pull Request Process
1. **Create a feature branch** following the naming convention `feature/descriptive-name`.
2. **Write tests** for new functionality. All new features must include tests.
3. **Run the full CI check locally** before pushing:
```bash
cargo fmt --check
cargo clippy -- -D warnings
cargo nextest run
```
4. **Open a PR** targeting `develop` (or `main` for hotfixes).
5. **Fill in the PR template** with a description of changes, testing steps, and any breaking changes.
6. **Address review feedback** by pushing additional commits (do not force-push during review).
7. **CI must pass** before the PR can be merged.
## CI Pipeline
Every pull request runs the following checks. All must pass before merging.
### Formatting Check
```bash
cargo fmt --check
```
Verifies that all code is formatted according to the project's `rustfmt` configuration. Run `cargo fmt` locally to auto-format before committing.
### Clippy Lints
```bash
cargo clippy -- -D warnings
```
Runs the Clippy linter with all warnings treated as errors. Clippy catches common mistakes, non-idiomatic code, and potential bugs. If Clippy produces a false positive, suppress it with an `#[allow]` attribute and a comment explaining why.
### Test Suite
```bash
cargo nextest run
```
Runs the full test suite using cargo-nextest. Tests must pass on both SQLite and PostgreSQL backends (if applicable). See [Testing](testing.md) for details on why nextest is required.
### Docker Build
The CI pipeline builds the Docker image to verify that the application compiles and packages correctly. This catches issues like missing dependencies or build script errors that might not appear in a local `cargo build`.
### Security Audit
```bash
cargo audit
```
Checks dependencies for known security vulnerabilities using the RustSec Advisory Database. Any crate with a known vulnerability must be updated or the advisory must be explicitly acknowledged with a justification.
## Code Style
### Formatting
The project uses the default `rustfmt` configuration. Run `cargo fmt` before committing.
### Error Handling
- Use `CrabError` for all errors that propagate through the application.
- Add miette diagnostics with actionable help text for configuration and runtime errors.
- See [Error Handling](error-handling.md) for patterns and guidelines.
### Database Access
- Always use SeaORM entities for database access. Never write raw SQL.
- Define new entities in `src/entities/` and add corresponding migrations in `migration/src/`.
### Testing
- Use `cargo nextest run`, not `cargo test`.
- Write unit tests in `#[cfg(test)]` modules within the source file.
- Write integration tests in the `tests/` directory.
- Use `#[tokio::test]` for async tests.
## Setting Up the Development Environment
1. **Install Rust** (stable toolchain):
```bash
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```
2. **Install development tools**:
```bash
cargo install cargo-nextest
cargo install wasm-pack
cargo install sea-orm-cli
```
3. **Clone the repository**:
```bash
git clone https://github.com/your-org/barycenter.git
cd barycenter
```
4. **Build and run**:
```bash
cargo build
cargo run
```
5. **Run tests**:
```bash
cargo nextest run
```

# Database Schema
Barycenter uses SeaORM for database access and supports both SQLite and PostgreSQL backends. The schema is defined through 12 entity tables managed by SeaORM migrations.
## Database Backend Detection
The database backend is automatically detected from the connection string:
- **SQLite**: `sqlite://barycenter.db?mode=rwc`
- **PostgreSQL**: `postgresql://user:password@localhost/barycenter`
No code changes are needed to switch backends. SeaORM generates compatible queries for both.
## Entity Tables
### `user`
Stores registered user accounts.
| Column | Type | Description |
|--------|------|-------------|
| `id` | TEXT (PK) | Unique user identifier (base64url-encoded random bytes) |
| `username` | TEXT (UNIQUE) | Login username |
| `password_hash` | TEXT | Argon2 password hash |
| `requires_2fa` | INTEGER | Whether admin-enforced 2FA is enabled (0 or 1) |
| `passkey_enrolled_at` | TIMESTAMP | When the user first enrolled a passkey (NULL if none) |
| `created_at` | TIMESTAMP | Account creation timestamp |
| `updated_at` | TIMESTAMP | Last modification timestamp |
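Several tables use the same identifier format: base64url-encoded random bytes. Barycenter generates these in Rust; as an illustration only (the 32-byte length is an assumption, not taken from the source), the format can be sketched in shell:

```shell
# Generate a 32-byte random value and base64url-encode it without padding.
random_id() {
  head -c 32 /dev/urandom | base64 | tr '+/' '-_' | tr -d '=\n'
}
id=$(random_id)
echo "$id"
```

32 random bytes encode to 43 base64url characters once the single `=` pad is stripped, and the alphabet is limited to `A-Z a-z 0-9 - _`, which makes the IDs safe to use in URLs and as primary keys.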
### `client`
Stores OAuth 2.0 client registrations created through dynamic client registration.
| Column | Type | Description |
|--------|------|-------------|
| `id` | TEXT (PK) | Client ID (base64url-encoded random bytes) |
| `client_secret` | TEXT | Client secret for confidential clients |
| `redirect_uris` | TEXT | JSON array of registered redirect URIs |
| `client_name` | TEXT | Human-readable client name |
| `token_endpoint_auth_method` | TEXT | Authentication method (`client_secret_basic` or `client_secret_post`) |
| `grant_types` | TEXT | JSON array of allowed grant types |
| `response_types` | TEXT | JSON array of allowed response types |
| `created_at` | TIMESTAMP | Client registration timestamp |
### `auth_code`
Stores authorization codes issued during the authorization flow. Codes are single-use and short-lived.
| Column | Type | Description |
|--------|------|-------------|
| `id` | TEXT (PK) | Authorization code value (base64url-encoded random bytes) |
| `client_id` | TEXT (FK) | Client that requested the code |
| `subject` | TEXT | Authenticated user's subject identifier |
| `redirect_uri` | TEXT | Redirect URI used in the authorization request |
| `scope` | TEXT | Granted scope string |
| `code_challenge` | TEXT | PKCE S256 code challenge |
| `code_challenge_method` | TEXT | Always "S256" |
| `nonce` | TEXT | OpenID Connect nonce (if provided) |
| `consumed` | INTEGER | Whether the code has been used (0 or 1) |
| `created_at` | TIMESTAMP | Code issuance timestamp |
| `expires_at` | TIMESTAMP | Code expiration (5 minutes after creation) |
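The stored `code_challenge` is the base64url-encoded SHA-256 digest of the client's `code_verifier`. A shell sketch of the derivation, using the test vector from RFC 7636 Appendix B:

```shell
# Derive the S256 code_challenge from a code_verifier (RFC 7636 Appendix B values).
verifier="dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk"
challenge=$(printf '%s' "$verifier" \
  | openssl dgst -sha256 -binary \
  | base64 | tr '+/' '-_' | tr -d '=\n')
echo "$challenge"   # E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM
```

At the token endpoint, Barycenter repeats this computation on the submitted `code_verifier` and compares the result against the stored `code_challenge`.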
### `access_token`
Stores issued access tokens for API access.
| Column | Type | Description |
|--------|------|-------------|
| `id` | TEXT (PK) | Access token value (base64url-encoded random bytes) |
| `client_id` | TEXT (FK) | Client the token was issued to |
| `subject` | TEXT | User's subject identifier |
| `scope` | TEXT | Granted scope string |
| `revoked` | INTEGER | Whether the token has been revoked (0 or 1) |
| `created_at` | TIMESTAMP | Token issuance timestamp |
| `expires_at` | TIMESTAMP | Token expiration (1 hour after creation) |
### `refresh_token`
Stores refresh tokens for obtaining new access tokens without re-authentication.
| Column | Type | Description |
|--------|------|-------------|
| `id` | TEXT (PK) | Refresh token value (base64url-encoded random bytes) |
| `client_id` | TEXT (FK) | Client the token was issued to |
| `subject` | TEXT | User's subject identifier |
| `scope` | TEXT | Granted scope string |
| `access_token_id` | TEXT | Associated access token |
| `revoked` | INTEGER | Whether the token has been revoked (0 or 1) |
| `created_at` | TIMESTAMP | Token issuance timestamp |
| `expires_at` | TIMESTAMP | Token expiration timestamp |
### `session`
Stores server-side session data for authenticated users.
| Column | Type | Description |
|--------|------|-------------|
| `id` | TEXT (PK) | Session identifier (base64url-encoded random bytes) |
| `subject` | TEXT | Authenticated user's subject identifier |
| `amr` | TEXT | JSON array of Authentication Method References (e.g., `["pwd"]`, `["pwd", "hwk"]`) |
| `acr` | TEXT | Authentication Context Class Reference (`"aal1"` or `"aal2"`) |
| `mfa_verified` | INTEGER | Whether multi-factor authentication was completed (0 or 1) |
| `auth_time` | TIMESTAMP | When the user authenticated |
| `created_at` | TIMESTAMP | Session creation timestamp |
| `expires_at` | TIMESTAMP | Session expiration timestamp |
### `passkey`
Stores registered WebAuthn/FIDO2 passkey credentials.
| Column | Type | Description |
|--------|------|-------------|
| `id` | TEXT (PK) | Credential ID (base64url-encoded) |
| `user_id` | TEXT (FK) | User who owns this passkey |
| `name` | TEXT | User-assigned friendly name |
| `passkey_json` | TEXT | Full `Passkey` object serialized as JSON (includes public key, counter, backup state) |
| `created_at` | TIMESTAMP | Passkey registration timestamp |
| `last_used_at` | TIMESTAMP | Last successful authentication timestamp |
### `webauthn_challenge`
Temporary storage for WebAuthn challenge data during registration and authentication ceremonies.
| Column | Type | Description |
|--------|------|-------------|
| `id` | TEXT (PK) | Challenge identifier |
| `challenge_type` | TEXT | Type of ceremony (`registration`, `authentication`, `2fa`) |
| `challenge_data` | TEXT | Serialized challenge state (JSON) |
| `user_id` | TEXT | Associated user (NULL for authentication start) |
| `created_at` | TIMESTAMP | Challenge creation timestamp |
| `expires_at` | TIMESTAMP | Challenge expiration (5 minutes after creation) |
### `property`
Key-value store for arbitrary user properties.
| Column | Type | Description |
|--------|------|-------------|
| `id` | INTEGER (PK) | Auto-incrementing row ID |
| `owner` | TEXT | Property owner identifier |
| `key` | TEXT | Property key |
| `value` | TEXT | Property value |
The combination of `owner` and `key` is unique.
### `job_execution`
Tracks background job execution history for monitoring and debugging.
| Column | Type | Description |
|--------|------|-------------|
| `id` | INTEGER (PK) | Auto-incrementing row ID |
| `job_name` | TEXT | Name of the executed job |
| `started_at` | TIMESTAMP | Job start timestamp |
| `completed_at` | TIMESTAMP | Job completion timestamp |
| `success` | INTEGER | Whether the job succeeded (0 or 1) |
| `records_processed` | INTEGER | Number of records affected |
| `error_message` | TEXT | Error message if the job failed (NULL on success) |
### `consent`
Stores user consent decisions for OAuth client access.
| Column | Type | Description |
|--------|------|-------------|
| `id` | INTEGER (PK) | Auto-incrementing row ID |
| `subject` | TEXT | User's subject identifier |
| `client_id` | TEXT (FK) | Client that consent was granted to |
| `scope` | TEXT | Consented scope string |
| `created_at` | TIMESTAMP | Consent grant timestamp |
| `expires_at` | TIMESTAMP | Consent expiration timestamp (NULL for permanent) |
### `device_code`
Stores device authorization grant codes for the device flow (RFC 8628).
| Column | Type | Description |
|--------|------|-------------|
| `id` | TEXT (PK) | Device code value |
| `user_code` | TEXT (UNIQUE) | User-facing code for device verification |
| `client_id` | TEXT (FK) | Client that requested the device code |
| `scope` | TEXT | Requested scope string |
| `subject` | TEXT | User's subject identifier (NULL until user authorizes) |
| `status` | TEXT | Flow status (`pending`, `authorized`, `denied`, `expired`) |
| `created_at` | TIMESTAMP | Device code issuance timestamp |
| `expires_at` | TIMESTAMP | Device code expiration timestamp |
## SeaORM Patterns
### Entity Definitions
Entities are defined in the `src/entities/` directory. Each entity has a `Model` struct (representing a row), an `Entity` struct (representing the table), and `ActiveModel` (for inserts and updates):
```rust
// Example: Querying a client by ID
let client = client::Entity::find_by_id(client_id)
    .one(&state.db)
    .await?;

// Example: Inserting a new access token
let token = access_token::ActiveModel {
    id: Set(random_id()),
    client_id: Set(client_id.to_string()),
    subject: Set(subject.to_string()),
    scope: Set(scope.to_string()),
    revoked: Set(false),
    created_at: Set(now),
    expires_at: Set(now + Duration::hours(1)),
    ..Default::default()
};
token.insert(&state.db).await?;
```
### Database Connection
The `DatabaseConnection` type from SeaORM abstracts over both SQLite and PostgreSQL. The connection is established once at startup and shared via `AppState`.
## Migrations
Database migrations are located in `migration/src/` and run automatically on application startup via `Migrator::up()`. Each migration file defines an `up` method (apply the migration) and a `down` method (revert the migration).
Migration files follow the naming convention:
```
m20240101_000001_create_users_table.rs
m20240102_000001_create_clients_table.rs
...
```
To create a new migration:
```bash
cd migration
sea-orm-cli migrate generate create_new_table
```
Migrations are applied in lexicographic order by filename. Never modify an existing migration that has been deployed -- always create a new migration to make schema changes.
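Because the timestamp prefix sorts lexicographically, the application order can be checked with a plain `sort` over the filenames (the names below are the illustrative ones from this page):

```shell
# Filenames sort by timestamp prefix, which is the order migrations are applied.
printf '%s\n' \
  m20240102_000001_create_clients_table.rs \
  m20240101_000001_create_users_table.rs \
  | sort
```

The users-table migration sorts first even though it was listed second, which is exactly why the prefix must never be reused or backdated.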

# Error Handling
Barycenter uses a centralized error type, `CrabError`, combined with miette diagnostics to provide clear, actionable error messages for developers and operators. Client-facing errors follow the OAuth 2.0 error response specification.
## CrabError Enum
The `CrabError` enum is the primary error type used throughout the application. It provides automatic conversion from common error types and can carry diagnostic metadata.
```rust
pub enum CrabError {
    /// SeaORM database errors
    DbErr(sea_orm::DbErr),
    /// File system I/O errors
    IoErr(std::io::Error),
    /// JSON serialization/deserialization errors
    JsonErr(serde_json::Error),
    /// Generic errors with a descriptive message
    Other(String),
}
```
### Automatic Conversions
`CrabError` implements `From` for common error types, allowing the `?` operator to work seamlessly:
```rust
// Database errors are automatically converted
let client = client::Entity::find_by_id(client_id)
    .one(&state.db)
    .await?; // DbErr -> CrabError::DbErr

// I/O errors are automatically converted
let key_data = std::fs::read_to_string(&key_path)?; // io::Error -> CrabError::IoErr

// JSON errors are automatically converted
let parsed: Value = serde_json::from_str(&body)?; // serde_json::Error -> CrabError::JsonErr
```
### The Other Variant
For errors that do not fit the specific variants, `CrabError::Other(String)` provides a catch-all:
```rust
// Using Other for custom error conditions
if scope.is_empty() {
    return Err(CrabError::Other("Scope must not be empty".to_string()));
}
```
## Miette Diagnostics
Barycenter uses [miette](https://docs.rs/miette) to annotate errors with diagnostic information that helps operators understand what went wrong and what to do about it. Miette provides structured error reports with:
- **Error code**: A unique identifier for the error type.
- **Help text**: Actionable guidance on how to resolve the error.
- **Labels**: Source code spans or context pointing to the problematic input.
- **Related errors**: Additional errors that may be relevant.
### Diagnostic Pattern
When creating errors, follow this pattern to provide thorough diagnostics:
```rust
use miette::Diagnostic;
use thiserror::Error;

#[derive(Error, Diagnostic, Debug)]
#[error("Failed to load private key from {path}")]
#[diagnostic(
    code(barycenter::jwks::key_load_failed),
    help("Ensure the private key file exists at the configured path and has 600 permissions. \
          The file should contain a JSON-encoded RSA private key. \
          If the file is missing, delete the JWKS file as well and restart to regenerate both.")
)]
pub struct KeyLoadError {
    pub path: String,
    #[source]
    pub source: std::io::Error,
}
```
The `help` text should inform the user exactly what they need to do to resolve the issue. Include specific file paths, permission values, or configuration keys when relevant.
### Diagnostic Guidelines
When writing diagnostic messages:
1. **Be specific**: Instead of "Configuration error", say "Database URL is not a valid connection string".
2. **Be actionable**: Instead of "Key file not found", say "Create the key file at /var/lib/barycenter/private_key.json or set keys.private_key_path in config.toml".
3. **Include context**: Reference the configuration key, file path, or environment variable that needs to change.
4. **Suggest recovery**: If the error is recoverable (e.g., regenerating keys), explain the steps.
## OAuth Error Responses
Client-facing errors in the OAuth and OpenID Connect flows follow the specifications defined in RFC 6749 (OAuth 2.0) and OpenID Connect Core.
### Authorization Endpoint Errors
Errors at the authorization endpoint are communicated by redirecting the user agent back to the client's redirect URI with error parameters:
```
HTTP/1.1 302 Found
Location: https://app.example.com/callback?error=invalid_request&error_description=Missing+code_challenge+parameter&state=abc123
```
The error is appended as query parameters to the redirect URI:
| Parameter | Description |
|-----------|-------------|
| `error` | An error code from the OAuth 2.0 specification |
| `error_description` | A human-readable description of the error |
| `state` | The `state` value from the authorization request (if provided) |
**Common authorization endpoint errors:**
| Error Code | When It Occurs |
|------------|----------------|
| `invalid_request` | Missing required parameter, unsupported parameter value, or malformed request |
| `unauthorized_client` | Client is not authorized for the requested grant type or redirect URI |
| `invalid_scope` | The requested scope is invalid or missing the required `openid` scope |
| `access_denied` | The user denied the authorization request |
**Important**: If the `redirect_uri` or `client_id` is invalid, Barycenter does **not** redirect. Instead, it displays an error page directly, because redirecting to an unvalidated URI would be a security risk (open redirect).
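A minimal sketch of assembling such an error redirect (the helper name is hypothetical, and the space-only `+` encoding is a simplification; a real implementation must fully percent-encode all parameters and, per the note above, validate the redirect URI first):

```rust
// Sketch: build the Location header value for an authorization-endpoint
// error redirect. Only spaces are encoded here; production code should
// percent-encode all reserved characters.
fn error_redirect(redirect_uri: &str, error: &str, description: &str, state: Option<&str>) -> String {
    let desc = description.replace(' ', "+");
    let mut loc = format!("{redirect_uri}?error={error}&error_description={desc}");
    if let Some(s) = state {
        loc.push_str("&state=");
        loc.push_str(s);
    }
    loc
}

fn main() {
    let loc = error_redirect(
        "https://app.example.com/callback",
        "invalid_request",
        "Missing code_challenge parameter",
        Some("abc123"),
    );
    println!("{loc}");
}
```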
### Token Endpoint Errors
Errors at the token endpoint are returned as JSON in the response body with an appropriate HTTP status code:
```json
{
  "error": "invalid_grant",
  "error_description": "Authorization code has expired"
}
```
| HTTP Status | Error Code | When It Occurs |
|-------------|------------|----------------|
| 400 | `invalid_request` | Missing required parameter or unsupported grant type |
| 400 | `invalid_grant` | Authorization code is expired, consumed, or PKCE verification failed |
| 401 | `invalid_client` | Client authentication failed (bad credentials) |
| 400 | `unsupported_grant_type` | The grant type is not supported |
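The status mapping in the table can be sketched as a small function (illustrative only; Barycenter's actual handlers use richer response types):

```rust
// Sketch: map token-endpoint OAuth error codes to HTTP status codes,
// following the table above (and RFC 6749 §5.2, where only
// invalid_client uses 401).
fn token_error_status(error: &str) -> u16 {
    match error {
        "invalid_client" => 401, // client authentication failed
        "invalid_request" | "invalid_grant" | "unsupported_grant_type" => 400,
        _ => 400, // other token errors default to 400
    }
}

fn main() {
    assert_eq!(token_error_status("invalid_client"), 401);
    assert_eq!(token_error_status("invalid_grant"), 400);
}
```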
### UserInfo Endpoint Errors
Errors at the userinfo endpoint use the `WWW-Authenticate` header per RFC 6750 (Bearer Token Usage):
```
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Bearer error="invalid_token", error_description="Access token has expired"
```
## Internal vs. Client-Facing Errors
Barycenter distinguishes between internal errors (for operators) and client-facing errors (for OAuth clients):
| Aspect | Internal Errors | Client-Facing Errors |
|--------|----------------|---------------------|
| Audience | Operators and developers | OAuth clients and end users |
| Detail level | Full stack traces, file paths, configuration details | Generic error codes with safe descriptions |
| Format | Miette diagnostic output to logs | OAuth error response (redirect or JSON) |
| Sensitive info | May include database details, file paths | Never includes internal details |
Internal errors are logged with full diagnostic information. Client-facing errors expose only the OAuth error code and a safe description that does not leak implementation details.
```rust
// Internal: logged with full context
tracing::error!("Failed to verify PKCE: stored_challenge={}, computed={}", stored, computed);
// Client-facing: safe error response
return Ok(Json(TokenErrorResponse {
    error: "invalid_grant".to_string(),
    error_description: Some("PKCE verification failed".to_string()),
}));
```
# Module Structure
Barycenter's source code is organized into focused modules declared in `src/lib.rs`. Each module handles a specific concern, and together they compose the complete identity provider.
## Module Overview
| Module | Description |
|--------|-------------|
| `admin_graphql` | GraphQL schema building and admin API router. Defines the GraphQL schema, attaches it to a dedicated Axum router, and serves the admin API on a separate port. |
| `admin_mutations` | Custom GraphQL mutations and queries for administrative operations. Includes job triggering, user 2FA management, and job log queries. |
| `authz` | Authorization policy engine. Evaluates access control policies with a modular architecture of sub-modules. |
| `entities` | SeaORM entity definitions for all 12 database tables. Auto-generated by `sea-orm-cli` and customized as needed. |
| `errors` | `CrabError` enum for centralized error handling. Provides conversions from common error types and miette diagnostic annotations. |
| `jobs` | Background job scheduler. Manages periodic cleanup tasks for expired sessions, refresh tokens, and WebAuthn challenges. |
| `jwks` | JWKS and JWT key management. Handles RSA key generation, persistence, public key publication, and JWT signing with RS256. |
| `session` | Session cookie handling. Manages session creation, validation, expiration, and the session cookie attributes (HttpOnly, SameSite, Secure). |
| `settings` | Configuration loading. Merges defaults, config file values, and environment variables into the `Settings` struct. |
| `storage` | Database operations. Provides functions for CRUD operations on all entity types using SeaORM's `DatabaseConnection`. |
| `user_sync` | User synchronization from JSON. Imports or updates user records from an external JSON source. |
| `web` | HTTP endpoints and middleware. Defines all Axum routes, request handlers, security header middleware, and static file serving. |
| `webauthn_manager` | WebAuthn/passkey operations. Manages passkey registration, authentication, and two-factor verification flows. |
## Authorization Policy Engine (`authz`)
The `authz` module has its own internal structure with the following sub-modules:
| Sub-Module | Description |
|------------|-------------|
| `condition` | Defines conditions that can be evaluated against a request context |
| `engine` | The policy evaluation engine that processes policies against requests |
| `errors` | Authorization-specific error types |
| `loader` | Loads policy definitions from configuration or storage |
| `policy` | Policy data structures and definitions |
| `types` | Shared type definitions used across the authorization module |
| `web` | HTTP endpoints for the authorization policy service |
## AppState
The `AppState` struct is the central shared state passed to all request handlers:
```rust
pub struct AppState {
    pub settings: Arc<Settings>,
    pub db: DatabaseConnection,
    pub jwks: JwksManager,
    pub webauthn: WebAuthnManager,
}
```
| Field | Description |
|-------|-------------|
| `settings` | Application configuration wrapped in `Arc` for shared access across threads |
| `db` | SeaORM database connection (connection pool for PostgreSQL, single connection for SQLite) |
| `jwks` | Manages RSA keys for JWT signing and JWKS endpoint serving |
| `webauthn` | Manages WebAuthn ceremonies (registration, authentication, 2FA) |
## Dependency Graph
The following diagram shows how modules depend on each other. Arrows point from the dependent module to the module it uses.
```mermaid
graph TD
main[main.rs] --> settings
main --> storage
main --> jwks
main --> web
main --> jobs
main --> webauthn_manager[webauthn_manager]
web --> storage
web --> jwks
web --> session
web --> errors
web --> entities
web --> webauthn_manager
web --> authz
admin_graphql[admin_graphql] --> admin_mutations[admin_mutations]
admin_graphql --> storage
admin_mutations --> storage
admin_mutations --> jobs
admin_mutations --> entities
storage --> entities
storage --> errors
jobs --> storage
session --> storage
session --> entities
session --> errors
jwks --> settings
jwks --> errors
webauthn_manager --> storage
webauthn_manager --> entities
webauthn_manager --> errors
user_sync[user_sync] --> storage
user_sync --> entities
authz --> errors
authz --> entities
settings --> errors
```
## Key Patterns
### Shared State via Axum Extractors
All request handlers receive `AppState` through Axum's `State` extractor:
```rust
async fn handle_token(
    State(state): State<Arc<AppState>>,
    Form(params): Form<TokenRequest>,
) -> Result<Json<TokenResponse>, CrabError> {
    // Access state.db, state.jwks, state.settings, etc.
}
```
### Module Encapsulation
Each module exposes a public API that other modules consume. Internal implementation details are kept private. For example:
- `storage` exposes functions like `store_auth_code()` and `get_client()` but hides the SQL queries.
- `jwks` exposes `sign_jwt_rs256()` but hides the key loading and caching logic.
- `session` exposes `create_session()` and `validate_session()` but hides cookie parsing.
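A minimal sketch of this pattern (module and function names here are hypothetical, not Barycenter's actual API):

```rust
// Sketch: a module exposes a narrow public API while keeping its
// implementation details private, as the examples above describe.
mod session {
    // Private detail: cookie parsing is invisible outside the module.
    fn parse_cookie(header: &str) -> Option<&str> {
        header.split(';').find_map(|part| part.trim().strip_prefix("session="))
    }

    // Public API: callers never see how cookies are parsed.
    pub fn session_id_from_header(header: &str) -> Option<&str> {
        parse_cookie(header)
    }
}

fn main() {
    let id = session::session_id_from_header("theme=dark; session=abc123");
    assert_eq!(id, Some("abc123"));
}
```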
### Entity-Based Data Access
All database operations go through SeaORM entities defined in the `entities` module. Direct SQL queries are avoided in favor of SeaORM's query builder and entity model patterns. This provides type safety, database-agnostic queries, and compile-time validation of column references.
# Release Process
Barycenter uses tag-triggered releases. Pushing a Git tag matching the `v*.*.*` pattern triggers the CI pipeline to build artifacts, publish container images, and create a GitHub release.
## Triggering a Release
### 1. Prepare the Release
Ensure all changes for the release are merged into `main`. Update version numbers in `Cargo.toml` and finalize the changelog.
```bash
git checkout main
git pull origin main
```
### 2. Create and Push the Tag
```bash
git tag v1.2.0
git push origin v1.2.0
```
The tag name must match the `v*.*.*` pattern (e.g., `v1.0.0`, `v1.2.3`, `v2.0.0-rc.1`). The CI pipeline triggers automatically on tag push.
### 3. Monitor the Pipeline
The release pipeline performs several steps in sequence. Monitor it through the GitHub Actions UI or the CLI:
```bash
gh run list --workflow=release
gh run watch
```
## Release Pipeline Steps
### Step 1: Build Multi-Platform Docker Images
The pipeline builds Docker images for two architectures:
| Platform | Architecture | Use Case |
|----------|-------------|----------|
| `linux/amd64` | x86_64 | Standard servers, cloud instances |
| `linux/arm64` | AArch64 | ARM servers, AWS Graviton, Apple Silicon |
Both images are built from the same Dockerfile using Docker's `--platform` build argument. Platform-specific build caches are used to optimize parallel builds and avoid cache conflicts between architectures.
The images are tagged with:
- The version tag (e.g., `v1.2.0`)
- `latest` (for the most recent stable release)
### Step 2: Publish to GitHub Container Registry
Built images are published to GitHub Container Registry (GHCR):
```
ghcr.io/your-org/barycenter:v1.2.0
ghcr.io/your-org/barycenter:latest
```
A multi-architecture manifest is created so that `docker pull ghcr.io/your-org/barycenter:v1.2.0` automatically selects the correct image for the host platform.
### Step 3: Create GitHub Release
The pipeline creates a GitHub release with:
- **Release title**: The tag name (e.g., `v1.2.0`)
- **Changelog**: Auto-generated from commits since the previous release tag, organized by conventional commit type
- **Binary artifacts**: The compiled `barycenter` binary for each platform (if configured)
The changelog groups commits by type:
```markdown
## What's Changed
### Features
- feat: add refresh token rotation (#42)
- feat: add device code flow support (#45)
### Bug Fixes
- fix: prevent double consumption of authorization codes (#43)
### Other Changes
- chore: update sea-orm to 1.1.0 (#44)
- docs: add PKCE security documentation (#46)
```
### Step 4: Generate Artifact Attestation
The pipeline generates [artifact attestation](https://docs.github.com/en/actions/security-guides/using-artifact-attestations-and-reusable-workflows-to-verify-builds) for the published container images. Attestation provides a cryptographic proof that the container image was built by the CI pipeline from a specific commit in the repository.
This allows consumers to verify the provenance of the image:
```bash
gh attestation verify oci://ghcr.io/your-org/barycenter:v1.2.0 \
--owner your-org
```
## Version Numbering
Barycenter follows [Semantic Versioning](https://semver.org/):
| Component | When to Increment | Example |
|-----------|-------------------|---------|
| **Major** (X.0.0) | Breaking changes to the public API, configuration format, or database schema that require manual migration | `v1.0.0` to `v2.0.0` |
| **Minor** (0.X.0) | New features, non-breaking additions to the API or configuration | `v1.0.0` to `v1.1.0` |
| **Patch** (0.0.X) | Bug fixes, security patches, documentation updates | `v1.0.0` to `v1.0.1` |
Pre-release versions use a suffix: `v1.2.0-rc.1`, `v1.2.0-beta.1`.
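A sketch of parsing a release tag into its semver components (illustrative only; the CI pipeline matches the `v*.*.*` pattern itself):

```rust
// Sketch: validate a v*.*.* release tag and split out an optional
// pre-release suffix such as "rc.1" or "beta.1".
fn parse_tag(tag: &str) -> Option<(u32, u32, u32, Option<&str>)> {
    let rest = tag.strip_prefix('v')?;
    let (core, pre) = match rest.split_once('-') {
        Some((c, p)) => (c, Some(p)),
        None => (rest, None),
    };
    let mut parts = core.splitn(3, '.');
    let major = parts.next()?.parse().ok()?;
    let minor = parts.next()?.parse().ok()?;
    let patch = parts.next()?.parse().ok()?;
    Some((major, minor, patch, pre))
}

fn main() {
    assert_eq!(parse_tag("v1.2.0"), Some((1, 2, 0, None)));
    assert_eq!(parse_tag("v1.2.0-rc.1"), Some((1, 2, 0, Some("rc.1"))));
    assert_eq!(parse_tag("1.2.0"), None); // missing the v prefix
}
```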
## Hotfix Releases
For urgent production fixes:
1. Create a hotfix branch from the latest release tag:
```bash
git checkout -b hotfix/1.2.1 v1.2.0
```
2. Apply the fix and commit with conventional commit format.
3. Merge into `main` and tag:
```bash
git checkout main
git merge hotfix/1.2.1
git tag v1.2.1
git push origin main --tags
```
4. Merge back into `develop`:
```bash
git checkout develop
git merge hotfix/1.2.1
git push origin develop
```
The tag push triggers the same release pipeline.
## Post-Release Checklist
After a release is published:
- [ ] Verify the GitHub release page has the correct changelog and artifacts
- [ ] Verify the Docker image is pullable from GHCR: `docker pull ghcr.io/your-org/barycenter:vX.Y.Z`
- [ ] Verify the multi-arch manifest works on both amd64 and arm64
- [ ] Verify artifact attestation: `gh attestation verify oci://ghcr.io/your-org/barycenter:vX.Y.Z --owner your-org`
- [ ] Update deployment configurations to reference the new version
- [ ] Announce the release to stakeholders if it contains user-facing changes
- [ ] Merge the release branch (or `main`) back into `develop` if not already done
# Testing
## Use cargo-nextest, Not cargo test
Barycenter uses [cargo-nextest](https://nexte.st/) as its test runner. **Do not use `cargo test`.**
This is a firm project requirement, not a suggestion. The standard `cargo test` runner executes tests as threads within a single process. Barycenter's integration tests start HTTP servers on specific ports, and running multiple such tests in the same process leads to port conflicts and flaky test failures.
cargo-nextest runs each test in its own process, providing:
- **Process isolation**: Each test gets its own address space, preventing port conflicts.
- **Reliable integration tests**: Tests that bind to ports cannot interfere with each other.
- **Better output**: Cleaner, more readable test output with per-test timing.
- **Faster execution**: Tests run in parallel across processes, with configurable concurrency.
## Installation
Install cargo-nextest if you do not already have it:
```bash
cargo install cargo-nextest
```
Verify the installation:
```bash
cargo nextest --version
```
## Running Tests
### Run All Tests
```bash
cargo nextest run
```
### Run with Verbose Output
```bash
cargo nextest run --verbose
```
### Run a Specific Test
```bash
cargo nextest run test_name
```
The test name can be a substring match. For example, `cargo nextest run token` runs all tests with "token" in their name.
### Run Tests in a Specific Module
```bash
cargo nextest run -E 'test(web::tests::)'
```
### Run Tests with Output Capture Disabled
To see `println!` and log output during test execution:
```bash
cargo nextest run --no-capture
```
### Run Ignored Tests as Well
```bash
cargo nextest run --run-ignored all
```
This includes tests marked `#[ignore]`; use `--run-ignored ignored-only` to run only the ignored tests.
## Test Organization
Tests in Barycenter follow standard Rust conventions:
- **Unit tests**: Located in `#[cfg(test)]` modules within source files. These test individual functions and modules in isolation.
- **Integration tests**: Located in the `tests/` directory. These start the full HTTP server and make requests against it.
Integration tests are the primary reason cargo-nextest is required. Each integration test may start its own Barycenter server instance on a different port, and process isolation ensures these do not collide.
## Writing New Tests
When writing new tests, follow these guidelines:
1. **Use unique ports for integration tests** that start an HTTP server. Avoid hardcoding port numbers when possible.
2. **Do not rely on test execution order**. Each test must be independent.
3. **Clean up database state** in tests that modify the database. Use temporary SQLite databases or transactions that roll back.
4. **Use `#[tokio::test]`** for async tests, as the application uses the Tokio runtime.
Example integration test structure:
```rust
#[tokio::test]
async fn test_token_endpoint_requires_pkce() {
    // Set up a test server instance
    // Make HTTP requests to the server
    // Assert on the response
}
```
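For guideline 1, one common approach (a sketch, not Barycenter's actual test helpers) is to bind port 0 and let the operating system assign a free port:

```rust
use std::net::TcpListener;

// Sketch: binding to port 0 asks the OS for any free port, so parallel
// tests never collide on a hardcoded port number.
fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let port = listener.local_addr()?.port();
    // Pass `port` to the test server instead of a hardcoded value.
    println!("test server would listen on 127.0.0.1:{port}");
    Ok(())
}
```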
## Continuous Integration
The CI pipeline runs `cargo nextest run` as part of every pull request check. Tests must pass before a PR can be merged. See [Contributing](contributing.md) for the full list of CI checks.
# WASM Client
Barycenter includes a Rust-based WebAssembly client that provides browser-side WebAuthn/passkey functionality. The client is compiled from Rust to WebAssembly using `wasm-pack` and loaded by the login and account management pages.
## Building
### Prerequisites
Install `wasm-pack` if you do not already have it:
```bash
cargo install wasm-pack
```
### Build Command
```bash
cd client-wasm
wasm-pack build --target web --out-dir ../static/wasm
```
The `--target web` flag generates ES module output suitable for loading directly in a browser with `<script type="module">`.
The `--out-dir ../static/wasm` flag places the output in the `static/wasm/` directory, where Barycenter's web server serves static files from.
### Output Files
After building, the following files are generated in `static/wasm/`:
| File | Description |
|------|-------------|
| `barycenter_webauthn_client_bg.wasm` | The compiled WebAssembly binary |
| `barycenter_webauthn_client.js` | JavaScript glue code (ES module) that loads and initializes the WASM binary |
| `barycenter_webauthn_client.d.ts` | TypeScript type definitions for the exported API |
| `barycenter_webauthn_client_bg.wasm.d.ts` | TypeScript type definitions for the WASM binary |
## Module API
The WASM module exports four functions that the browser-side JavaScript calls:
### `supports_webauthn()`
Checks whether the browser supports the WebAuthn API.
```javascript
import init, { supports_webauthn } from '/static/wasm/barycenter_webauthn_client.js';
await init();
if (supports_webauthn()) {
  // Browser supports WebAuthn, enable passkey features
}
```
Returns `true` if `navigator.credentials` is available and supports the `create` and `get` operations.
### `supports_conditional_ui()`
Checks whether the browser supports conditional UI (autofill) for passkeys. This is an async check because the capability detection requires querying the browser.
```javascript
import init, { supports_conditional_ui } from '/static/wasm/barycenter_webauthn_client.js';
await init();
if (await supports_conditional_ui()) {
  // Browser supports passkey autofill (Chrome 108+, Safari 16+)
}
```
Conditional UI allows passkeys to appear in the browser's autofill dropdown when the user focuses on a username field, providing a seamless authentication experience without a separate "Sign in with passkey" button.
### `register_passkey(options)`
Creates a new passkey credential. Called during the passkey registration flow after the server provides creation options.
```javascript
import init, { register_passkey } from '/static/wasm/barycenter_webauthn_client.js';
await init();
// 1. Start registration on the server
const response = await fetch('/webauthn/register/start', { method: 'POST' });
const options = await response.json();
// 2. Create the credential in the browser
const credential = await register_passkey(options);
// 3. Send the credential back to the server
await fetch('/webauthn/register/finish', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(credential)
});
```
This function wraps the browser's `navigator.credentials.create()` API, handling the conversion between the server's JSON format and the browser's `PublicKeyCredentialCreationOptions`.
### `authenticate_passkey(options, mediation)`
Authenticates using an existing passkey. The `mediation` parameter controls whether to use conditional UI (autofill) or a modal prompt.
```javascript
import init, { authenticate_passkey } from '/static/wasm/barycenter_webauthn_client.js';
await init();
// 1. Start authentication on the server
const response = await fetch('/webauthn/authenticate/start', { method: 'POST' });
const options = await response.json();
// 2. Authenticate with a passkey
// mediation: "conditional" for autofill, "optional" for modal prompt
const assertion = await authenticate_passkey(options, "conditional");
// 3. Send the assertion back to the server
await fetch('/webauthn/authenticate/finish', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(assertion)
});
```
**Mediation values:**
| Value | Behavior |
|-------|----------|
| `"conditional"` | Passkeys appear in the browser's autofill dropdown. Non-blocking -- the user can choose to type a password instead. |
| `"optional"` | Shows a modal browser dialog prompting the user to select a passkey. Used for explicit "Sign in with passkey" buttons. |
## Browser Integration
### Loading the Module
The WASM module is loaded as an ES module in the login page:
```html
<script type="module">
  import init, {
    supports_webauthn,
    supports_conditional_ui,
    authenticate_passkey
  } from '/static/wasm/barycenter_webauthn_client.js';

  async function setup() {
    await init();
    if (!supports_webauthn()) {
      // Hide passkey UI elements
      return;
    }
    if (await supports_conditional_ui()) {
      // Start conditional UI (autofill) authentication
      startConditionalAuth();
    } else {
      // Show explicit "Sign in with passkey" button
      showPasskeyButton();
    }
  }

  setup();
</script>
```
### Content-Security-Policy Requirement
The WASM module requires the `wasm-unsafe-eval` CSP directive. Barycenter's security headers include this:
```
script-src 'self' 'wasm-unsafe-eval'
```
Without `wasm-unsafe-eval`, the browser blocks WebAssembly compilation. This directive is narrowly scoped and does not permit JavaScript `eval()`.
### Browser Compatibility
| Browser | WebAuthn | Conditional UI (Autofill) |
|---------|----------|--------------------------|
| Chrome 108+ | Yes | Yes |
| Safari 16+ | Yes | Yes |
| Firefox 119+ | Yes | No |
| Edge 108+ | Yes | Yes |
On browsers that do not support conditional UI, the login page falls back to showing an explicit "Sign in with passkey" button. On browsers without WebAuthn support, passkey features are hidden entirely and password authentication remains available.
## Development Workflow
During development, rebuild the WASM module whenever you change the `client-wasm/` source:
```bash
cd client-wasm
wasm-pack build --target web --out-dir ../static/wasm
```
The WASM output files are not checked into version control. They must be built locally or generated as part of the CI/CD pipeline before deploying.
# Architecture
Barycenter uses a three-port architecture where each port serves a distinct role. All three servers share a single database connection pool, JWKS manager, and application state.
## Three-Port Design
```mermaid
graph TB
subgraph "Barycenter Process"
subgraph "Public Server (port 8080)"
OIDC["OIDC Endpoints"]
Auth["Authentication"]
WebAuthn["WebAuthn/Passkeys"]
Device["Device Authorization"]
UserInfo["UserInfo"]
end
subgraph "Admin Server (port 8081)"
GQL["GraphQL API"]
Jobs["Job Management"]
UserMgmt["User Management"]
end
subgraph "Authz Server (port 8082)"
Policy["Policy Evaluation"]
KDL["KDL Policy Engine"]
end
State["Shared Application State"]
JWKS["JWKS Manager"]
Scheduler["Background Job Scheduler"]
end
DB[(Database<br/>SQLite or PostgreSQL)]
OIDC --> State
Auth --> State
WebAuthn --> State
Device --> State
GQL --> State
Policy --> State
State --> DB
State --> JWKS
Scheduler --> DB
Client["OIDC Clients"] --> OIDC
Browser["User Browsers"] --> Auth
Browser --> WebAuthn
Admin["Admin Tools"] --> GQL
Services["Backend Services"] --> Policy
```
### Public Server (default port 8080)
The public server handles all user-facing and client-facing OIDC operations:
- **Discovery**: `/.well-known/openid-configuration` and `/.well-known/jwks.json`
- **Client Registration**: `POST /connect/register`
- **Authorization**: `GET /authorize` with PKCE enforcement
- **Token Exchange**: `POST /token` (authorization code, refresh token, device code grants)
- **Token Revocation**: `POST /revoke`
- **UserInfo**: `GET /userinfo`
- **Authentication**: Login pages, password verification, WebAuthn flows
- **Device Authorization**: `POST /device/authorize` and `GET /device` verification page
- **Consent**: User consent approval and tracking
### Admin Server (default port 8081)
The admin server exposes a GraphQL API on the port immediately following the public port. It is intended for internal management and should not be exposed to the public internet.
- **User Management**: Query users, set 2FA requirements
- **Job Control**: Trigger background jobs manually, view execution history
- **System Queries**: List available jobs, check user status
### Authorization Policy Server (default port 8082)
The authorization policy server runs two ports above the public port (port+2). It evaluates access control decisions using KDL-defined policies.
- **Policy Evaluation**: HTTP API for checking authorization decisions
- **ReBAC + ABAC**: Combines relationship-based and attribute-based access control
## Technology Stack
| Component | Technology | Purpose |
|-----------|-----------|---------|
| Language | Rust (stable) | Systems language with memory safety |
| Web framework | [axum](https://github.com/tokio-rs/axum) | Async HTTP framework built on tokio and hyper |
| Database ORM | [SeaORM](https://www.sea-ql.org/SeaORM/) | Async ORM supporting SQLite and PostgreSQL |
| JWT/JOSE | [josekit](https://github.com/hidekatsu-izuno/josekit-rs) | JSON Web Token creation and signing |
| WebAuthn | [webauthn-rs](https://github.com/kanidm/webauthn-rs) | FIDO2/WebAuthn server implementation |
| GraphQL | [async-graphql](https://github.com/async-graphql/async-graphql) | GraphQL server for the admin API |
| Scheduling | [tokio-cron-scheduler](https://github.com/mvniekerk/tokio-cron-scheduler) | Cron-based background job scheduling |
| Password hashing | argon2 | Memory-hard password hashing |
| Configuration | config-rs + TOML | Layered configuration with file and environment support |
| WASM client | wasm-pack + wasm-bindgen | Browser-side WebAuthn operations |
## Startup Sequence
When Barycenter starts, it follows this initialization order:
```mermaid
sequenceDiagram
participant CLI as CLI Parser
participant Cfg as Settings
participant DB as Database
participant Mig as Migrations
participant JWKS as JWKS Manager
participant WA as WebAuthn
participant GQL as GraphQL Schemas
participant Sched as Scheduler
participant Srv as Servers
CLI->>Cfg: 1. Parse --config flag and subcommands
Cfg->>Cfg: 2. Load defaults + config.toml + env vars
Cfg->>DB: 3. Initialize database connection pool
DB->>Mig: 4. Run pending migrations
Mig->>JWKS: 5. Initialize JWKS (generate or load RSA keys)
JWKS->>WA: 6. Initialize WebAuthn configuration
WA->>GQL: 7. Build GraphQL schemas (admin + authz)
GQL->>Sched: 8. Start background job scheduler
Sched->>Srv: 9. Start all three servers concurrently
```
1. **Parse CLI**: Read `--config` path and any subcommands (e.g., `sync-users --file`)
2. **Load settings**: Merge default values, configuration file, and environment variables
3. **Initialize database**: Create connection pool to SQLite or PostgreSQL
4. **Run migrations**: Apply any pending schema migrations automatically
5. **Initialize JWKS**: Generate a 2048-bit RSA key pair on first run, or load existing keys from disk
6. **Initialize WebAuthn**: Configure the WebAuthn relying party based on the server's public URL
7. **Build GraphQL schemas**: Construct the async-graphql schemas for admin and authorization APIs
8. **Start scheduler**: Register cron jobs for cleanup of sessions, tokens, and challenges
9. **Start servers**: Launch all three HTTP servers concurrently on their respective ports
All three servers share the same tokio runtime and application state. If any server fails to bind its port, the entire process exits with an error.
# Building from Source
## Clone the Repository
```bash
git clone https://github.com/cloudnebulaproject/barycenter.git
cd barycenter
```
## Build
For a development build:
```bash
cargo build
```
For an optimized release build:
```bash
cargo build --release
```
The resulting binary is located at:
- Development: `target/debug/barycenter`
- Release: `target/release/barycenter`
## Run
Start the server with the default configuration:
```bash
cargo run
```
Or with a specific configuration file:
```bash
cargo run -- --config path/to/config.toml
```
In release mode:
```bash
cargo run --release
```
Or run the compiled binary directly:
```bash
./target/release/barycenter --config config.toml
```
## Workspace Structure
Barycenter is organized as a Cargo workspace with the following crates:
```
barycenter/
├── Cargo.toml # Workspace root
├── src/ # Main application crate
│ ├── main.rs # Entry point and CLI parsing
│ ├── settings.rs # Configuration loading
│ ├── storage.rs # Database layer
│ ├── web.rs # HTTP endpoints and routing
│ ├── jwks.rs # JWKS and JWT signing
│ ├── errors.rs # Error types
│ └── ...
├── client-wasm/ # WebAuthn WASM client (browser-side)
│ ├── Cargo.toml
│ └── src/
├── migration/ # SeaORM database migrations
│ ├── Cargo.toml
│ └── src/
├── static/ # Static assets (HTML, CSS, JS, WASM)
├── book/ # mdbook documentation (this book)
├── config.toml # Default configuration file
└── data/ # Runtime data (keys, database) -- created on first run
```
### Main Crate
The root crate (`src/`) contains the Barycenter server application. This is where the OIDC endpoints, authentication logic, admin API, and authorization policy engine live.
### client-wasm
The `client-wasm/` crate compiles to WebAssembly and runs in the browser. It handles WebAuthn API calls for passkey registration and authentication. See the [Prerequisites](./prerequisites.md) page for wasm-pack installation instructions.
To build the WASM module:
```bash
cd client-wasm
wasm-pack build --target web --out-dir ../static/wasm
```
The build output is placed in `static/wasm/` and served by the Barycenter web server automatically.
### migration
The `migration/` crate contains SeaORM migration definitions. Migrations run automatically when Barycenter starts -- there is no separate migration command to run. The migration crate handles creating and updating all database tables: clients, auth codes, access tokens, refresh tokens, sessions, users, passkeys, WebAuthn challenges, device codes, consents, job executions, and properties.
## CLI Arguments
```
barycenter [OPTIONS] [SUBCOMMAND]

Options:
    --config <PATH>             Path to configuration file (default: config.toml)

Subcommands:
    sync-users --file <PATH>    Sync users from a YAML/JSON file
## Verify the Build
After building, you can verify the binary runs correctly:
```bash
./target/release/barycenter --config config.toml
```
You should see log output indicating the three servers are starting on their respective ports. The default test user credentials are `admin` / `password123`.
# Configuration File
Barycenter reads its configuration from a TOML file. By default it looks for `config.toml` in the working directory, or you can specify a path with `--config`.
## Full Annotated Example
```toml
# =============================================================================
# Server Configuration
# =============================================================================
[server]
# Address to bind to. Use "0.0.0.0" to listen on all interfaces.
# Default: "127.0.0.1"
host = "0.0.0.0"
# Port for the public OIDC server.
# The admin GraphQL server runs on port+1 (e.g., 8081).
# The authorization policy server runs on port+2 (e.g., 8082).
# Default: 8080
port = 8080
# The public URL where this server is reachable by clients and browsers.
# Used as the OIDC issuer identifier. Must not include a trailing slash.
# If not set, the issuer is constructed as http://{host}:{port}.
# Default: not set
public_base_url = "https://id.example.com"
# Whether to allow unauthenticated dynamic client registration at
# POST /connect/register. Set to false in production if you want to
# control client registration through other means.
# Default: true
allow_public_registration = true
# Port for the admin GraphQL API server. If not set, defaults to port+1.
# Default: not set (auto-derived from port)
# admin_port = 8081
# =============================================================================
# Database Configuration
# =============================================================================
[database]
# Database connection string. Barycenter auto-detects the backend from the URL.
#
# SQLite: sqlite://path/to/database.db?mode=rwc
# PostgreSQL: postgresql://user:password@host:port/dbname
#
# The ?mode=rwc flag for SQLite means read-write-create: the file is created
# if it does not already exist.
#
# Default: "sqlite://data/barycenter.db?mode=rwc"
url = "sqlite://data/barycenter.db?mode=rwc"
# =============================================================================
# Key Configuration
# =============================================================================
[keys]
# Path to the JWKS (JSON Web Key Set) file containing the public key(s).
# This file is published at /.well-known/jwks.json.
# Default: "data/jwks.json"
jwks_path = "data/jwks.json"
# Path to the RSA private key in PEM format. Generated automatically on
# first run if it does not exist. Used for signing ID tokens.
# Default: "data/private_key.pem"
private_key_path = "data/private_key.pem"
# Key ID included in the JWT header. Must match the kid in the JWKS.
# Default: "barycenter-key-1"
key_id = "barycenter-key-1"
# Signing algorithm. Currently only RS256 is supported.
# Default: "RS256"
alg = "RS256"
# =============================================================================
# Federation Configuration
# =============================================================================
[federation]
# List of OpenID Federation trust anchor URLs.
# Used for future trust chain validation. Currently informational.
# Default: []
trust_anchors = []
# =============================================================================
# Authorization Policy Configuration
# =============================================================================
[authz]
# Enable or disable the authorization policy server.
# When disabled, the authz port is not opened.
# Default: false
enabled = false
# Port for the authorization policy server. If not set, defaults to port+2.
# Default: not set (auto-derived from server port)
# port = 8082
# Directory containing KDL policy definition files.
# Policies are loaded from all .kdl files in this directory.
# Default: "policies/"
policies_dir = "policies/"
```
## Section Reference
### `[server]`
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| `host` | String | `"127.0.0.1"` | Bind address |
| `port` | Integer | `8080` | Public server port |
| `public_base_url` | String | *none* | Public URL / OIDC issuer |
| `allow_public_registration` | Boolean | `true` | Allow unauthenticated client registration |
| `admin_port` | Integer | `port + 1` | Admin GraphQL server port |
### `[database]`
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| `url` | String | `"sqlite://data/barycenter.db?mode=rwc"` | Database connection string |
### `[keys]`
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| `jwks_path` | String | `"data/jwks.json"` | Path to public JWKS file |
| `private_key_path` | String | `"data/private_key.pem"` | Path to RSA private key |
| `key_id` | String | `"barycenter-key-1"` | Key ID for JWT header |
| `alg` | String | `"RS256"` | Signing algorithm |
### `[federation]`
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| `trust_anchors` | Array of Strings | `[]` | Trust anchor URLs |
### `[authz]`
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| `enabled` | Boolean | `false` | Enable the authz policy server |
| `port` | Integer | `port + 2` | Authz server port |
| `policies_dir` | String | `"policies/"` | KDL policy files directory |
## Issuer URL Logic
The OIDC issuer identifier is determined as follows:
1. If `server.public_base_url` is set, use that value (with any trailing slash removed).
2. Otherwise, construct the issuer as `http://{host}:{port}`.
The issuer appears in:
- The `iss` claim of all ID tokens
- The `issuer` field in `/.well-known/openid-configuration`
- Various OIDC metadata endpoints
For production deployments, always set `public_base_url` to the externally-reachable HTTPS URL of your Barycenter instance.
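The resolution rule above can be sketched in shell (illustrative only; the actual logic lives in Barycenter's Rust code):

```shell
# Sketch of the issuer resolution rule described above.
resolve_issuer() {
  public_base_url="$1"; host="$2"; port="$3"
  if [ -n "$public_base_url" ]; then
    # use the configured public URL, with a trailing slash removed
    printf '%s\n' "${public_base_url%/}"
  else
    printf 'http://%s:%s\n' "$host" "$port"
  fi
}

resolve_issuer "https://id.example.com/" "" ""   # → https://id.example.com
resolve_issuer "" "127.0.0.1" "8080"             # → http://127.0.0.1:8080
```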
## Minimal Production Configuration
```toml
[server]
host = "0.0.0.0"
port = 8080
public_base_url = "https://id.example.com"
allow_public_registration = false
[database]
url = "postgresql://barycenter:secret@db.internal:5432/barycenter"
[keys]
jwks_path = "/var/lib/barycenter/jwks.json"
private_key_path = "/var/lib/barycenter/private_key.pem"
```

# Configuration
Barycenter uses a layered configuration system. Values are resolved in the following order, where later sources override earlier ones:
1. **Built-in defaults** -- Sensible defaults defined in the application code
2. **Configuration file** -- A TOML file (default: `config.toml`)
3. **Environment variables** -- Variables prefixed with `BARYCENTER__`
This means an environment variable will always take precedence over a value in the configuration file, and a value in the configuration file will override the built-in default.
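The resolution order can be illustrated with plain shell parameter expansion (the variable names mirror Barycenter's convention, but this is only a sketch of the precedence rule, not the actual implementation):

```shell
# Illustrative precedence: environment variable > config file > built-in default.
default_port=8080               # built-in default
file_port=8081                  # pretend this value came from config.toml
BARYCENTER__SERVER__PORT=9090   # environment override

# The first non-empty source wins, from highest precedence to lowest.
effective_port="${BARYCENTER__SERVER__PORT:-${file_port:-$default_port}}"
echo "$effective_port"   # → 9090
```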
## Configuration Sources
- **[Configuration File](./config-file.md)** -- Full annotated reference for `config.toml` with all available sections and keys.
- **[Environment Variables](./env-variables.md)** -- How to override configuration values using environment variables, with examples and precedence rules.
- **[Database Setup](./database-setup.md)** -- Choosing between SQLite and PostgreSQL, connection string formats, and migration behavior.
## Quick Example
A minimal `config.toml`:
```toml
[server]
host = "0.0.0.0"
port = 8080
[database]
url = "sqlite://data/barycenter.db?mode=rwc"
```
The same values via environment variables:
```bash
export BARYCENTER__SERVER__HOST=0.0.0.0
export BARYCENTER__SERVER__PORT=8080
export BARYCENTER__DATABASE__URL="sqlite://data/barycenter.db?mode=rwc"
```
## Specifying a Config File
By default, Barycenter looks for `config.toml` in the current working directory. Use the `--config` flag to specify a different path:
```bash
barycenter --config /etc/barycenter/config.toml
```
If the specified file does not exist, Barycenter will exit with an error. If no `--config` flag is given and `config.toml` does not exist in the current directory, Barycenter will start with built-in defaults and any environment variable overrides.
## Logging
Logging is controlled by the `RUST_LOG` environment variable, not the configuration file. Barycenter uses the standard Rust `tracing` ecosystem.
```bash
# Info-level logging for the barycenter crate only
RUST_LOG=barycenter=info cargo run
# Verbose debug output
RUST_LOG=debug cargo run
# Trace-level for the Barycenter crate only
RUST_LOG=barycenter=trace cargo run
```

# Database Setup
Barycenter supports two database backends: **SQLite** for development and small deployments, and **PostgreSQL** for production workloads. The backend is automatically detected from the connection string -- no additional configuration flags are needed.
## SQLite
SQLite is the default backend and requires no external database server. It is well-suited for development, testing, and single-instance deployments.
### Connection String Format
```
sqlite://path/to/database.db?mode=rwc
```
The `?mode=rwc` flag means **read-write-create**: the database file is created automatically if it does not exist. This is the recommended mode for most use cases.
### Examples
```toml
# Relative path (relative to working directory)
[database]
url = "sqlite://data/barycenter.db?mode=rwc"
# Absolute path
[database]
url = "sqlite:///var/lib/barycenter/barycenter.db?mode=rwc"
```
Via environment variable:
```bash
export BARYCENTER__DATABASE__URL="sqlite://data/barycenter.db?mode=rwc"
```
### SQLite Considerations
- **Single-writer**: SQLite uses a file-level lock for writes. Only one Barycenter instance can write to a given database file at a time.
- **No network access**: The database file must be on local or network-attached storage accessible from the Barycenter process.
- **Backup**: Copy the database file while Barycenter is stopped, or use SQLite's `.backup` command for online backups.
- **Performance**: SQLite handles moderate request loads well. For high-throughput production deployments, consider PostgreSQL.
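The `.backup` dot-command can be tried in a scratch directory first; the paths below are illustrative, and in practice you would point the first argument at your configured database file (e.g. `data/barycenter.db`):

```shell
# Demonstrate SQLite's online .backup command in a scratch directory.
mkdir -p /tmp/bary-backup-demo
db=/tmp/bary-backup-demo/app.db
rm -f "$db" /tmp/bary-backup-demo/app-backup.db

sqlite3 "$db" "CREATE TABLE t(x); INSERT INTO t VALUES (1);"
# .backup takes a consistent snapshot even while other connections write
sqlite3 "$db" ".backup '/tmp/bary-backup-demo/app-backup.db'"
sqlite3 /tmp/bary-backup-demo/app-backup.db "SELECT count(*) FROM t;"   # → 1
```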
## PostgreSQL
PostgreSQL is the recommended backend for production deployments. It supports concurrent connections, replication, and standard database administration tools.
### Connection String Format
```
postgresql://user:password@host:port/database
```
### Examples
```toml
[database]
url = "postgresql://barycenter:secret@localhost:5432/barycenter"
```
With SSL:
```toml
[database]
url = "postgresql://barycenter:secret@db.example.com:5432/barycenter?sslmode=require"
```
Via environment variable:
```bash
export BARYCENTER__DATABASE__URL="postgresql://barycenter:secret@db.internal:5432/barycenter"
```
### PostgreSQL Setup
Create a database and user for Barycenter:
```sql
CREATE USER barycenter WITH PASSWORD 'your-secure-password';
CREATE DATABASE barycenter OWNER barycenter;
```
Or using `createdb`:
```bash
createuser barycenter --pwprompt
createdb barycenter --owner=barycenter
```
Barycenter only needs standard privileges on its own database. It does not require superuser access.
### PostgreSQL Considerations
- **Connection pooling**: Barycenter maintains a connection pool internally via SeaORM. For very large deployments, consider placing PgBouncer or a similar connection pooler in front of PostgreSQL.
- **Replication**: You can use PostgreSQL streaming replication for high availability. Barycenter should always connect to the primary (writable) instance.
- **SSL/TLS**: Use `?sslmode=require` or `?sslmode=verify-full` in the connection string for encrypted database connections in production.
## Automatic Migrations
Barycenter runs database migrations automatically on startup. There is no separate migration command or manual step required.
When the application starts:
1. It connects to the configured database.
2. It checks for pending migrations.
3. It applies any new migrations in order.
4. It logs which migrations were applied (if any).
This applies to both SQLite and PostgreSQL. The migration system is idempotent -- running migrations on an already-up-to-date database is a no-op.
### Managed Tables
Migrations create and manage the following tables:
| Table | Purpose |
|-------|---------|
| `clients` | OAuth client registrations |
| `auth_codes` | Authorization codes with PKCE challenges |
| `access_tokens` | Bearer tokens with scope and expiration |
| `refresh_tokens` | Refresh tokens with rotation tracking |
| `sessions` | User sessions with AMR, ACR, and MFA state |
| `users` | User accounts with password hashes and 2FA settings |
| `passkeys` | WebAuthn credential storage |
| `webauthn_challenges` | Temporary WebAuthn challenge data |
| `device_codes` | Device authorization grant codes |
| `consents` | Per-client, per-scope consent records |
| `job_executions` | Background job execution history |
| `properties` | Key-value property store |
### Migration Safety
- **Non-destructive**: Migrations only add tables and columns; they do not drop or alter existing data.
- **Backup first**: Before upgrading Barycenter to a new version, back up your database. While migrations are designed to be safe, having a backup provides a rollback path.
- **Version tracking**: Applied migrations are tracked in a `seaql_migrations` table managed by SeaORM.
## Switching Databases
To migrate from SQLite to PostgreSQL (or vice versa):
1. Set up the target database (create the PostgreSQL database and user, or prepare the SQLite path).
2. Update the `database.url` in your configuration or environment.
3. Start Barycenter -- migrations will create the schema in the new database.
4. Export data from the old database and import it into the new one using standard tools.
There is no built-in data migration tool between backends. Use `sqlite3` and `psql` (or equivalent tools) for data transfer.
## High Availability
For high-availability deployments:
- **PostgreSQL** is required. SQLite does not support concurrent writers from multiple processes.
- Run multiple Barycenter instances pointing to the same PostgreSQL database.
- Use a load balancer in front of the public server ports.
- Ensure all instances share the same RSA key material (via shared storage or identical `private_key_path` contents).
- PostgreSQL handles concurrent access and locking automatically.

# Docker
Barycenter publishes container images to the GitHub Container Registry. You can also build the image locally from the repository.
## Pull the Pre-Built Image
```bash
docker pull ghcr.io/cloudnebulaproject/barycenter:latest
```
Tagged versions are also available:
```bash
docker pull ghcr.io/cloudnebulaproject/barycenter:0.1.0
```
The images are built for both `linux/amd64` and `linux/arm64` architectures.
## Run the Container
Barycenter exposes three ports corresponding to its [three-server architecture](./architecture.md):
| Port | Purpose |
|------|---------|
| 8080 | Public OIDC server |
| 8081 | Admin GraphQL API |
| 8082 | Authorization policy server |
### Basic Usage
```bash
docker run -d \
--name barycenter \
-p 8080:8080 \
-p 8081:8081 \
-p 8082:8082 \
ghcr.io/cloudnebulaproject/barycenter:latest
```
This starts Barycenter with the default SQLite database and an auto-generated RSA key pair. Data is stored inside the container and will be lost when the container is removed.
### With Persistent Storage
To persist the database and key material across container restarts, mount the `data/` directory and provide a configuration file:
```bash
docker run -d \
--name barycenter \
-p 8080:8080 \
-p 8081:8081 \
-p 8082:8082 \
-v $(pwd)/data:/app/data \
-v $(pwd)/config.toml:/app/config.toml:ro \
ghcr.io/cloudnebulaproject/barycenter:latest
```
The `data/` directory will contain:
- The SQLite database file (if using SQLite)
- The RSA private key (PEM format)
- The JWKS public key set
### With Environment Variables
Configuration values can be overridden via environment variables:
```bash
docker run -d \
--name barycenter \
-p 8080:8080 \
-p 8081:8081 \
-p 8082:8082 \
-e BARYCENTER__SERVER__PORT=8080 \
-e BARYCENTER__SERVER__PUBLIC_BASE_URL=https://id.example.com \
-e BARYCENTER__DATABASE__URL=postgresql://user:pass@db-host/barycenter \
ghcr.io/cloudnebulaproject/barycenter:latest
```
### With PostgreSQL
For production deployments using PostgreSQL:
```bash
docker run -d \
--name barycenter \
-p 8080:8080 \
-p 8081:8081 \
-p 8082:8082 \
-v $(pwd)/data:/app/data \
-e BARYCENTER__DATABASE__URL=postgresql://barycenter:secret@postgres:5432/barycenter \
--network my-network \
ghcr.io/cloudnebulaproject/barycenter:latest
```
## Build the Image Locally
From the repository root:
```bash
docker build -t barycenter:local .
```
For a specific platform:
```bash
docker build --platform linux/amd64 -t barycenter:local .
```
For multi-architecture builds:
```bash
docker buildx build \
--platform linux/amd64,linux/arm64 \
-t barycenter:local \
.
```
## Docker Compose Example
A minimal `docker-compose.yml` for development:
```yaml
services:
barycenter:
image: ghcr.io/cloudnebulaproject/barycenter:latest
ports:
- "8080:8080"
- "8081:8081"
- "8082:8082"
volumes:
- ./data:/app/data
- ./config.toml:/app/config.toml:ro
environment:
RUST_LOG: barycenter=info
```
With PostgreSQL:
```yaml
services:
barycenter:
image: ghcr.io/cloudnebulaproject/barycenter:latest
ports:
- "8080:8080"
- "8081:8081"
- "8082:8082"
volumes:
- barycenter-data:/app/data
environment:
BARYCENTER__DATABASE__URL: postgresql://barycenter:secret@postgres:5432/barycenter
BARYCENTER__SERVER__PUBLIC_BASE_URL: http://localhost:8080
RUST_LOG: barycenter=info
depends_on:
postgres:
condition: service_healthy
postgres:
image: postgres:17
environment:
POSTGRES_USER: barycenter
POSTGRES_PASSWORD: secret
POSTGRES_DB: barycenter
volumes:
- postgres-data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U barycenter"]
interval: 5s
timeout: 5s
retries: 5
volumes:
barycenter-data:
postgres-data:
```
## Volume Reference
| Mount point | Purpose | Required |
|-------------|---------|----------|
| `/app/data` | Database file (SQLite), RSA keys, JWKS | Recommended for persistence |
| `/app/config.toml` | Configuration file (read-only mount) | Optional (can use env vars instead) |
| `/app/policies` | KDL authorization policy files | Only if using the authz engine |

# Environment Variables
All Barycenter configuration values can be overridden using environment variables. This is particularly useful for containerized deployments and CI/CD pipelines where you want to avoid mounting configuration files.
## Naming Convention
Environment variables use the prefix `BARYCENTER__` (with double underscores) and map to the TOML configuration hierarchy using `__` as a separator for nested keys.
The pattern is:
```
BARYCENTER__{SECTION}__{KEY}
```
For example, the TOML key `server.port` becomes `BARYCENTER__SERVER__PORT`.
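The mapping is mechanical -- uppercase the key path and replace each `.` with `__`. A throwaway helper makes the rule concrete (this is a sketch for illustration, not part of Barycenter):

```shell
# Map a TOML key path to its Barycenter environment variable name.
to_env_var() {
  printf 'BARYCENTER__%s\n' "$(printf '%s' "$1" | tr 'a-z' 'A-Z' | sed 's/\./__/g')"
}

to_env_var server.port    # → BARYCENTER__SERVER__PORT
to_env_var database.url   # → BARYCENTER__DATABASE__URL
```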
## Precedence
Configuration values are resolved in this order (later sources override earlier ones):
```
Built-in defaults < config.toml < Environment variables
```
An environment variable **always wins** over the same key in the configuration file. This lets you set base configuration in a file and override specific values per environment.
## Common Examples
### Server Settings
```bash
# Change the listen port
export BARYCENTER__SERVER__PORT=9090
# Set the public-facing URL (used as OIDC issuer)
export BARYCENTER__SERVER__PUBLIC_BASE_URL=https://id.example.com
# Bind to all interfaces
export BARYCENTER__SERVER__HOST=0.0.0.0
# Disable public client registration
export BARYCENTER__SERVER__ALLOW_PUBLIC_REGISTRATION=false
```
### Database
```bash
# Use PostgreSQL
export BARYCENTER__DATABASE__URL="postgresql://barycenter:secret@localhost:5432/barycenter"
# Use SQLite with an explicit path
export BARYCENTER__DATABASE__URL="sqlite:///var/lib/barycenter/data.db?mode=rwc"
```
### Key Paths
```bash
export BARYCENTER__KEYS__JWKS_PATH="/var/lib/barycenter/jwks.json"
export BARYCENTER__KEYS__PRIVATE_KEY_PATH="/var/lib/barycenter/private_key.pem"
export BARYCENTER__KEYS__KEY_ID="my-custom-key-id"
```
### Authorization Policy Engine
```bash
export BARYCENTER__AUTHZ__ENABLED=true
export BARYCENTER__AUTHZ__POLICIES_DIR="/etc/barycenter/policies"
```
## Logging
Logging is controlled by the standard `RUST_LOG` environment variable, **not** through the `BARYCENTER__` prefix. Barycenter uses the Rust `tracing` ecosystem.
```bash
# Show info-level logs from the barycenter crate only
export RUST_LOG=barycenter=info
# Verbose debug output for everything
export RUST_LOG=debug
# Trace-level logging for the Barycenter crate only
export RUST_LOG=barycenter=trace
# Multiple filters
export RUST_LOG=barycenter=debug,sea_orm=info,axum=warn
```
## Complete Mapping Reference
| Environment Variable | TOML Key | Default |
|---------------------|----------|---------|
| `BARYCENTER__SERVER__HOST` | `server.host` | `127.0.0.1` |
| `BARYCENTER__SERVER__PORT` | `server.port` | `8080` |
| `BARYCENTER__SERVER__PUBLIC_BASE_URL` | `server.public_base_url` | *none* |
| `BARYCENTER__SERVER__ALLOW_PUBLIC_REGISTRATION` | `server.allow_public_registration` | `true` |
| `BARYCENTER__SERVER__ADMIN_PORT` | `server.admin_port` | `port + 1` |
| `BARYCENTER__DATABASE__URL` | `database.url` | `sqlite://data/barycenter.db?mode=rwc` |
| `BARYCENTER__KEYS__JWKS_PATH` | `keys.jwks_path` | `data/jwks.json` |
| `BARYCENTER__KEYS__PRIVATE_KEY_PATH` | `keys.private_key_path` | `data/private_key.pem` |
| `BARYCENTER__KEYS__KEY_ID` | `keys.key_id` | `barycenter-key-1` |
| `BARYCENTER__KEYS__ALG` | `keys.alg` | `RS256` |
| `BARYCENTER__AUTHZ__ENABLED` | `authz.enabled` | `false` |
| `BARYCENTER__AUTHZ__PORT` | `authz.port` | `port + 2` |
| `BARYCENTER__AUTHZ__POLICIES_DIR` | `authz.policies_dir` | `policies/` |
## Tips
- **Boolean values**: Use `true` or `false` (case-insensitive).
- **Integer values**: Provide plain numbers without quotes (e.g., `8080`).
- **String values with special characters**: Quote the value in the shell if it contains characters like `?`, `&`, or spaces.
- **Docker**: Use the `-e` flag or an `env_file` to pass variables to containers.
- **Systemd**: Use `Environment=` directives in the service unit file, or `EnvironmentFile=` to load from a file.

# Installation
Barycenter can be installed by building from source or by running the pre-built Docker image. Choose the method that best fits your environment.
## Installation Methods
- **[Prerequisites](./prerequisites.md)** -- Required and optional tooling for building and running Barycenter.
- **[Building from Source](./building-from-source.md)** -- Clone the repository, compile with Cargo, and locate the output binary.
- **[Docker](./docker.md)** -- Pull the container image or build it locally, with guidance on port mapping and volume mounts.
## Which Method to Choose
| Method | Best for | Requirements |
|--------|----------|--------------|
| Build from source | Development, customization, contribution | Rust toolchain, SQLite or PostgreSQL dev libs |
| Docker | Quick evaluation, CI/CD, production deployment | Docker or compatible container runtime |
After installation, proceed to the [Quickstart](./quickstart.md) guide to run Barycenter and issue your first tokens.

# Key Concepts
This page defines the core terminology used throughout the Barycenter documentation. If you are already familiar with OAuth 2.0, OIDC, and WebAuthn, you can skip ahead to the [installation guide](./installation.md).
## OpenID Connect (OIDC)
OpenID Connect is an identity layer built on top of OAuth 2.0. While OAuth 2.0 handles *authorization* (granting access to resources), OIDC adds *authentication* (verifying who a user is). OIDC introduces the concept of an ID Token -- a signed JWT that contains claims about the authenticated user. Barycenter implements OIDC Core 1.0 as the identity provider (also called the OpenID Provider or OP).
## OAuth 2.0
OAuth 2.0 is the industry-standard authorization framework that allows third-party applications to obtain limited access to a user's resources without exposing their credentials. It defines grant types (ways to obtain tokens), scopes (permissions), and token types. Barycenter uses OAuth 2.0 as the foundation for its token issuance and access control.
## Authorization Code Flow
The Authorization Code flow is the most secure OAuth 2.0 grant type for server-side and native applications. The flow works in two steps: first, the user authenticates and the authorization server returns a short-lived authorization code to the client via a redirect; second, the client exchanges that code for tokens by calling the token endpoint directly. This two-step process keeps tokens out of the browser's URL bar and history. Barycenter uses this as its primary flow and requires PKCE for all authorization code requests.
## PKCE (Proof Key for Code Exchange)
PKCE (pronounced "pixy") is an extension to the Authorization Code flow that prevents authorization code interception attacks. The client generates a random `code_verifier`, derives a `code_challenge` from it using SHA-256, and sends the challenge with the authorization request. When exchanging the code for tokens, the client sends the original verifier, and the server verifies it matches the stored challenge. Barycenter only supports the S256 challenge method -- the plain method is rejected.
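A verifier/challenge pair can be generated with standard tooling; this sketch uses `openssl`, but any CSPRNG and SHA-256 implementation works. The challenge is `base64url(SHA-256(verifier))` with padding stripped:

```shell
# Generate a PKCE code_verifier and derive its S256 code_challenge.
code_verifier=$(openssl rand -base64 48 | tr -d '\n' | tr '+/' '-_' | tr -d '=')
code_challenge=$(printf '%s' "$code_verifier" \
  | openssl dgst -sha256 -binary \
  | base64 | tr -d '\n' | tr '+/' '-_' | tr -d '=')

echo "verifier:  $code_verifier"    # 64 characters (43-128 allowed by RFC 7636)
echo "challenge: $code_challenge"   # 43 characters: base64url of a 32-byte hash
```

The client sends `code_challenge` (plus `code_challenge_method=S256`) with the authorization request, keeps `code_verifier` secret, and presents the verifier at the token endpoint.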
## WebAuthn / Passkeys
WebAuthn (Web Authentication) is a W3C standard that enables passwordless authentication using public-key cryptography. A *passkey* is a WebAuthn credential that can be either hardware-bound (e.g., a YubiKey) or cloud-synced (e.g., iCloud Keychain, Google Password Manager). Barycenter supports passkeys for both single-factor authentication (replacing passwords entirely) and as a second factor after password login.
## AMR (Authentication Method References)
AMR is a claim in the ID Token that indicates which authentication methods were used during the login session. Barycenter tracks the following AMR values:
- `pwd` -- password authentication
- `hwk` -- hardware-bound passkey (e.g., YubiKey, security key)
- `swk` -- software/cloud-synced passkey (e.g., iCloud Keychain, password manager)
Multiple values indicate multi-factor authentication. For example, `["pwd", "hwk"]` means the user authenticated with both a password and a hardware security key.
## ACR (Authentication Context Class Reference)
ACR is a claim in the ID Token that indicates the overall assurance level of the authentication. Barycenter uses two levels:
- `aal1` -- single-factor authentication (password only, or passkey only)
- `aal2` -- two-factor authentication (password plus passkey, or equivalent)
Relying parties can request a minimum ACR level by including the `acr_values` parameter in the authorization request.
## JWT (JSON Web Token)
A JWT is a compact, URL-safe token format consisting of three Base64url-encoded parts separated by dots: a header, a payload (claims), and a signature. Barycenter issues ID Tokens as signed JWTs using RS256 (RSA with SHA-256). The header includes a `kid` (Key ID) that maps to the corresponding public key in the JWKS endpoint.
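Each segment is just base64url-encoded data. The round trip below uses an illustrative claim set (the values are made up) to show the encoding; a real ID token additionally carries a signed header and signature segment:

```shell
# Encode a claim set as a base64url payload segment, then decode it back.
claims='{"iss":"https://id.example.com","sub":"user-1","acr":"aal2"}'
payload=$(printf '%s' "$claims" | base64 | tr -d '\n' | tr '+/' '-_' | tr -d '=')

# To decode: restore padding, map base64url back to standard base64.
p="$payload"
while [ $(( ${#p} % 4 )) -ne 0 ]; do p="${p}="; done
printf '%s' "$p" | tr '_-' '/+' | base64 -d
echo
```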
## ID Token
The ID Token is a JWT issued by Barycenter that contains claims about the authentication event and the authenticated user. Standard claims include `iss` (issuer), `sub` (subject), `aud` (audience), `exp` (expiration), and `iat` (issued at). Barycenter also includes `auth_time`, `amr`, `acr`, `at_hash` (access token hash), and optionally `nonce`.
## Access Token
An access token is an opaque bearer token that grants the holder access to protected resources. In Barycenter, access tokens are random strings (24 random bytes, Base64url-encoded) stored in the database with an associated subject, scope, and expiration. They are used to call the `/userinfo` endpoint and can be presented to resource servers that validate them against Barycenter.
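A token of the same shape can be produced with `openssl` (this only illustrates the format; Barycenter generates its tokens internally in Rust):

```shell
# 24 random bytes, base64url-encoded: 32 characters, no padding needed.
token=$(openssl rand -base64 24 | tr -d '\n' | tr '+/' '-_' | tr -d '=')
echo "$token"
```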
## Refresh Token
A refresh token is a long-lived credential that allows a client to obtain new access tokens without requiring the user to re-authenticate. Barycenter implements refresh token rotation: each time a refresh token is used, a new one is issued and the old one is invalidated. This limits the window of exposure if a refresh token is compromised.
## JWKS (JSON Web Key Set)
A JWKS is a JSON document containing the public keys used to verify JWT signatures. Barycenter publishes its JWKS at `/.well-known/jwks.json`. Relying parties fetch this endpoint to obtain the public key matching the `kid` in the ID Token header, then use it to verify the token's RS256 signature. Barycenter generates a 2048-bit RSA key pair on first startup and persists it to disk.
## KDL (KDL Document Language)
KDL is a document language designed to be a more human-friendly alternative to XML, JSON, or TOML for configuration files. Barycenter uses KDL to define authorization policies in its built-in policy engine. Policy files are stored in a configurable directory and evaluated by the authorization server.
## ReBAC (Relationship-Based Access Control)
ReBAC is an access control model where authorization decisions are based on the relationships between entities. For example, "user A can edit document B because user A is a member of group C, and group C has edit access to document B." Barycenter's policy engine supports ReBAC patterns in its KDL policy definitions.
## ABAC (Attribute-Based Access Control)
ABAC is an access control model where authorization decisions are based on attributes of the subject (user), the resource, the action, and the environment. For example, "allow access if the user's department is 'engineering' and the resource is tagged 'internal' and the current time is within business hours." Barycenter's policy engine combines ABAC with ReBAC for flexible authorization rules.

# Overview
Barycenter is a lightweight OpenID Connect Identity Provider written in Rust. It provides a complete OAuth 2.0 and OIDC implementation suitable for organizations that need a self-hosted identity provider without the operational overhead of larger platforms like Keycloak or the limited scope of token-only services like Dex.
## Purpose
Barycenter exists to fill a gap between minimal token issuers and full-featured identity management suites. It provides a standards-compliant OIDC implementation with modern authentication methods, a built-in authorization policy engine, and a small operational footprint -- all in a single statically-compiled binary.
## Capabilities
- **OpenID Connect Authorization Code Flow** with mandatory PKCE (S256) for all clients
- **Dynamic Client Registration** via the `/connect/register` endpoint
- **WebAuthn/Passkey Authentication** supporting single-factor passwordless login and two-factor verification
- **Device Authorization Grant** (RFC 8628) for input-constrained devices such as smart TVs and CLI tools
- **Token Management** including access tokens, ID tokens with full claim sets, and refresh tokens with rotation
- **Token Revocation** for access and refresh tokens
- **Consent Flow** with per-client, per-scope tracking and remembering user decisions
- **KDL-Based Authorization Policy Engine** combining Relationship-Based Access Control (ReBAC) and Attribute-Based Access Control (ABAC)
- **Admin GraphQL API** for user management, 2FA enforcement, and operational job control
- **Background Job Scheduler** for automatic cleanup of expired sessions, tokens, and challenges
- **Dual Database Support** with SQLite for development and PostgreSQL for production, with automatic migrations
- **Security Headers** applied to all responses (CSP, X-Frame-Options, referrer policy, and more)
## Positioning
| | Barycenter | Keycloak | Dex |
|---|---|---|---|
| Language | Rust | Java | Go |
| Binary size | Small, single binary | Large (JVM) | Small, single binary |
| Database | SQLite or PostgreSQL | PostgreSQL (required) | Various connectors |
| Authentication | Passwords, passkeys, 2FA | Passwords, OTP, WebAuthn, social | Delegates to upstream IdPs |
| Authorization | Built-in policy engine (ReBAC + ABAC) | Role-based, fine-grained authz | None |
| Admin interface | GraphQL API | Web console + REST API | gRPC API |
| Memory footprint | Low | High | Low |
| Target use case | Self-hosted IdP with authz | Enterprise IAM | Federated connector |
Barycenter is best suited for teams that want a self-contained identity provider they can compile, configure, and deploy without managing a JVM runtime, external policy engines, or complex clustering setups.
## Key Features at a Glance
- **Standards compliant**: OIDC Core, OAuth 2.0, PKCE (RFC 7636), Device Authorization (RFC 8628)
- **Modern authentication**: WebAuthn/FIDO2 passkeys with conditional UI and autofill support
- **Three-port architecture**: Public OIDC endpoints, admin GraphQL API, and authorization policy service each on dedicated ports
- **Configuration layers**: Defaults, TOML configuration file, and environment variable overrides
- **Automatic key management**: RSA key pair generated on first run and persisted for subsequent starts
- **Zero-downtime migrations**: Database schema migrations run automatically on startup

# Prerequisites
## Required
### Rust Toolchain
Barycenter requires a stable Rust toolchain. Install it via [rustup](https://rustup.rs/):
```bash
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```
Verify the installation:
```bash
rustc --version
cargo --version
```
Any current stable release of Rust will work. The project does not require nightly features.
### Database Development Libraries
Barycenter supports SQLite and PostgreSQL. You need the development libraries for at least one of them.
**SQLite (default for development):**
```bash
# Debian / Ubuntu
sudo apt install libsqlite3-dev
# Fedora / RHEL
sudo dnf install sqlite-devel
# macOS (included with Xcode Command Line Tools)
xcode-select --install
# Arch Linux
sudo pacman -S sqlite
```
**PostgreSQL (recommended for production):**
```bash
# Debian / Ubuntu
sudo apt install libpq-dev
# Fedora / RHEL
sudo dnf install postgresql-devel
# macOS
brew install libpq
# Arch Linux
sudo pacman -S postgresql-libs
```
### Build Essentials
A C compiler and linker are required for building native dependencies (SQLite, argon2, etc.):
```bash
# Debian / Ubuntu
sudo apt install build-essential pkg-config
# Fedora / RHEL
sudo dnf groupinstall "Development Tools"
# macOS (included with Xcode Command Line Tools)
xcode-select --install
```
## Optional
These tools are not required to build or run Barycenter but are useful for development and testing.
### wasm-pack
Required only if you need to build the WebAuthn client WASM module:
```bash
cargo install wasm-pack
```
The WASM client handles browser-side WebAuthn API calls for passkey registration and authentication. Pre-built WASM artifacts may be included in releases.
### cargo-nextest
The project uses [cargo-nextest](https://nexte.st/) for running tests. Tests run in separate processes, which prevents port conflicts in integration tests.
```bash
cargo install cargo-nextest
```
### mdbook
Required only if you want to build this documentation locally:
```bash
cargo install mdbook
```
Then from the repository root:
```bash
mdbook serve book
```
## Operating System Support
Barycenter builds and runs on:
- **Linux** (primary development and deployment target) -- x86_64 and aarch64
- **macOS** -- x86_64 (Intel) and aarch64 (Apple Silicon)
- **Windows** -- Builds with MSVC toolchain; SQLite backend recommended
Linux is the recommended platform for production deployments. The Docker images are built for both `linux/amd64` and `linux/arm64`.

# Quickstart
This guide walks through a complete OIDC Authorization Code flow with Barycenter -- from starting the server to obtaining an ID token. By the end, you will have registered a client, authenticated a user, and exchanged an authorization code for tokens.
## Prerequisites
- Barycenter is [installed](./installation.md) and ready to run
- `curl` and `openssl` are available on your system
- A web browser for the authentication step
## 1. Start the Server
```bash
cargo run
```
Or with the release binary:
```bash
./target/release/barycenter --config config.toml
```
Barycenter starts three servers:
```
INFO Public server listening on 0.0.0.0:8080
INFO Admin server listening on 0.0.0.0:8081
INFO Authz server listening on 0.0.0.0:8082
```
A default test user is available: username `admin`, password `password123`.
## 2. Verify the Server Is Running
Check the OIDC discovery document:
```bash
curl -s http://localhost:8080/.well-known/openid-configuration | python3 -m json.tool
```
You should see the provider metadata including the authorization, token, and userinfo endpoints.
## 3. Register a Client
Use dynamic client registration to create an OAuth client:
```bash
curl -s -X POST http://localhost:8080/connect/register \
-H "Content-Type: application/json" \
-d '{
"redirect_uris": ["http://localhost:8080/callback"],
"client_name": "Quickstart Client"
}' | python3 -m json.tool
```
Save the `client_id` and `client_secret` from the response:
```json
{
"client_id": "aBcDeFgHiJkLmNoPqRsTuVwX",
"client_secret": "sEcReTvAlUeHeRe...",
"redirect_uris": ["http://localhost:8080/callback"],
"client_name": "Quickstart Client",
"token_endpoint_auth_method": "client_secret_basic"
}
```
Set them as shell variables for the following steps:
```bash
CLIENT_ID="<your client_id>"
CLIENT_SECRET="<your client_secret>"
```
## 4. Generate PKCE Parameters
Barycenter requires PKCE (S256) for all authorization requests.
```bash
# Generate a random code verifier
CODE_VERIFIER=$(openssl rand -base64 32 | tr -d '=' | tr '+/' '-_')
# Derive the code challenge (S256)
CODE_CHALLENGE=$(echo -n "$CODE_VERIFIER" | openssl dgst -binary -sha256 | base64 | tr -d '=' | tr '+/' '-_')
# Generate a random state parameter
STATE=$(openssl rand -hex 16)
echo "Code Verifier: $CODE_VERIFIER"
echo "Code Challenge: $CODE_CHALLENGE"
echo "State: $STATE"
```
## 5. Open the Authorization URL
Construct the authorization URL and open it in your browser:
```bash
AUTH_URL="http://localhost:8080/authorize?\
client_id=${CLIENT_ID}&\
redirect_uri=http://localhost:8080/callback&\
response_type=code&\
scope=openid&\
code_challenge=${CODE_CHALLENGE}&\
code_challenge_method=S256&\
state=${STATE}"
echo "$AUTH_URL"
```
Open the printed URL in your browser. You will be presented with the login page.
## 6. Authenticate
Log in with the default test credentials:
- **Username:** `admin`
- **Password:** `password123`
After successful authentication and consent, the browser will redirect to the callback URL with an authorization code in the query string:
```
http://localhost:8080/callback?code=AUTH_CODE_HERE&state=YOUR_STATE
```
Copy the `code` parameter from the URL bar. The redirect will likely show an error page since there is no application listening at `/callback` -- that is expected. The authorization code in the URL is what you need.
```bash
AUTH_CODE="<code from the redirect URL>"
```
## 7. Exchange the Code for Tokens
Exchange the authorization code for an access token and ID token:
```bash
curl -s -X POST http://localhost:8080/token \
-H "Content-Type: application/x-www-form-urlencoded" \
-u "${CLIENT_ID}:${CLIENT_SECRET}" \
-d "grant_type=authorization_code" \
-d "code=${AUTH_CODE}" \
-d "redirect_uri=http://localhost:8080/callback" \
-d "code_verifier=${CODE_VERIFIER}" | python3 -m json.tool
```
The response contains your tokens:
```json
{
"access_token": "eyJhbGci...",
"token_type": "Bearer",
"expires_in": 3600,
"id_token": "eyJhbGci...",
"refresh_token": "dGhpcyBp..."
}
```
## 8. Inspect the ID Token
The ID token is a JWT. Set it from the token response, then decode the payload to see the claims. The payload is base64url-encoded without padding, so it is re-padded before decoding:
```bash
ID_TOKEN="<id_token from the token response>"
payload=$(echo "$ID_TOKEN" | cut -d'.' -f2)
python3 -c "import base64, json, sys; p = sys.argv[1]; print(json.dumps(json.loads(base64.urlsafe_b64decode(p + '=' * (-len(p) % 4))), indent=2))" "$payload"
```
Expected claims:
```json
{
"iss": "http://localhost:8080",
"sub": "user-subject-id",
"aud": "your-client-id",
"exp": 1234567890,
"iat": 1234564290,
"auth_time": 1234564290,
"amr": ["pwd"],
"acr": "aal1",
"at_hash": "base64url-encoded-hash"
}
```
## 9. Call the UserInfo Endpoint
Use the access token to retrieve user claims:
```bash
ACCESS_TOKEN="<access_token from the token response>"
curl -s http://localhost:8080/userinfo \
-H "Authorization: Bearer ${ACCESS_TOKEN}" | python3 -m json.tool
```
## Complete Flow Diagram
```mermaid
sequenceDiagram
participant User as User (Browser)
participant Client as Client App
participant Bary as Barycenter
Client->>Client: Generate code_verifier and code_challenge (S256)
Client->>User: Redirect to /authorize with code_challenge
User->>Bary: GET /authorize?client_id=...&code_challenge=...
Bary->>User: Show login page
User->>Bary: POST /login (username + password)
Bary->>Bary: Validate credentials, create session
Bary->>User: Show consent page
User->>Bary: Approve consent
Bary->>User: Redirect to redirect_uri with code + state
User->>Client: Follow redirect with authorization code
Client->>Bary: POST /token (code + code_verifier + client credentials)
Bary->>Bary: Verify PKCE, validate code, generate tokens
Bary->>Client: Return access_token + id_token + refresh_token
Client->>Bary: GET /userinfo (Bearer access_token)
Bary->>Client: Return user claims
```
## What's Next
- [Configuration](./configuration.md) -- Customize ports, database, key paths, and more.
- Authentication -- Learn about passkey login and two-factor authentication.
- OpenID Connect -- Explore client registration options, token claims, and discovery metadata.
- Admin API -- Manage users and background jobs via GraphQL.

# Authorization Code Flow with PKCE
Barycenter implements the OAuth 2.0 Authorization Code flow as defined in [RFC 6749 Section 4.1](https://datatracker.ietf.org/doc/html/rfc6749#section-4.1), extended with Proof Key for Code Exchange (PKCE) as defined in [RFC 7636](https://datatracker.ietf.org/doc/html/rfc7636). PKCE with the S256 challenge method is **required** for all authorization requests.
## Flow Overview
```mermaid
sequenceDiagram
participant User as User Agent
participant Client as Client Application
participant Authz as Barycenter /authorize
participant Login as Barycenter /login
participant Token as Barycenter /token
Client->>Client: Generate code_verifier (random)
Client->>Client: code_challenge = BASE64URL(SHA256(code_verifier))
Client->>User: Redirect to /authorize
User->>Authz: GET /authorize?client_id=...&code_challenge=...
Authz->>Authz: Validate client_id, redirect_uri, PKCE params
alt No active session
Authz->>User: Redirect to /login
User->>Login: GET /login
Login->>User: Login form (passkey autofill + password)
User->>Login: POST /login (credentials)
Login->>Authz: Redirect back to /authorize
end
alt 2FA required
Authz->>User: Redirect to /login/2fa
User->>User: Complete second-factor verification
end
alt Consent required
Authz->>User: Redirect to /consent
User->>User: Approve requested scopes
end
Authz->>Authz: Generate authorization code (5 min TTL)
Authz->>User: Redirect to redirect_uri?code=...&state=...
User->>Client: Follow redirect with code
Client->>Token: POST /token (code + code_verifier)
Token->>Token: SHA256(code_verifier) == stored code_challenge?
Token->>Client: { access_token, id_token, token_type, expires_in }
```
## Authorization Endpoint
```
GET /authorize
```
### Required Parameters
| Parameter | Type | Description |
|---|---|---|
| `client_id` | string | The client identifier issued during [registration](./client-registration.md). |
| `redirect_uri` | string | Must exactly match one of the URIs registered for the client. |
| `response_type` | string | The requested response type. See [supported values](#response-types) below. |
| `scope` | string | Space-delimited list of scopes. **Must include `openid`**. |
| `code_challenge` | string | The PKCE code challenge, derived from the code verifier. |
| `code_challenge_method` | string | **Must be `S256`**. The plain method is not supported. |
### Optional Parameters
| Parameter | Type | Description |
|---|---|---|
| `state` | string | Opaque value to maintain state between request and callback. Returned unchanged in the redirect. Strongly recommended for CSRF protection. |
| `nonce` | string | Value to associate with the ID Token. Included as a claim in the issued ID Token for replay protection. |
| `prompt` | string | Controls the authentication UX. One of: `none`, `login`, `consent`, `select_account`. |
| `display` | string | How the authorization page is displayed. Informational; Barycenter serves standard HTML. |
| `ui_locales` | string | Space-separated list of preferred locales (BCP 47). |
| `claims_locales` | string | Space-separated list of preferred locales for claims. |
| `max_age` | integer | Maximum authentication age in seconds. If the user's last authentication is older than this value, re-authentication is required. Values below 300 trigger 2FA. |
| `acr_values` | string | Space-separated list of requested Authentication Context Class Reference values. |
### Response Types
| Value | Description |
|---|---|
| `code` | Authorization Code flow. Returns an authorization code in the redirect URI query parameters. This is the primary and recommended flow. |
| `id_token` | Implicit flow returning only an ID Token in the fragment. |
| `id_token token` | Implicit flow returning both an ID Token and an access token in the fragment. |
For nearly all use cases, `response_type=code` with PKCE is the correct choice. The implicit response types are provided for compatibility but are not recommended for new integrations.
## PKCE (Proof Key for Code Exchange)
PKCE prevents authorization code interception attacks. Barycenter enforces PKCE on every authorization request.
### How S256 Works
1. **Generate a code verifier**: a cryptographically random string between 43 and 128 characters, using the unreserved character set (`[A-Za-z0-9\-._~]`).
2. **Derive the code challenge**: compute `BASE64URL(SHA-256(code_verifier))`.
3. **Send the challenge** in the authorization request via `code_challenge` and `code_challenge_method=S256`.
4. **Send the verifier** when exchanging the authorization code at the token endpoint via `code_verifier`.
5. **Server verifies**: Barycenter computes `BASE64URL(SHA-256(code_verifier))` and compares it to the stored `code_challenge`. The token is issued only if they match.
### Example: Generating PKCE Values
```bash
# Generate a random code verifier (43+ chars, base64url, no padding)
code_verifier=$(openssl rand -base64 32 | tr -d '=' | tr '+/' '-_')
echo "code_verifier: $code_verifier"
# Derive the code challenge (SHA-256, base64url, no padding)
code_challenge=$(printf '%s' "$code_verifier" \
| openssl dgst -binary -sha256 \
| base64 \
| tr -d '=' \
| tr '+/' '-_')
echo "code_challenge: $code_challenge"
```
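As a sanity check, the verifier produced above always lands inside the required window: 32 random bytes encode to 44 base64 characters, and stripping the single padding character leaves exactly 43.

```shell
# Same generation recipe as above
code_verifier=$(openssl rand -base64 32 | tr -d '=' | tr '+/' '-_')
len=${#code_verifier}
echo "verifier length: $len"
# RFC 7636 requires 43-128 characters from the unreserved set
[ "$len" -ge 43 ] && [ "$len" -le 128 ] && echo "verifier OK"
```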
## Validation Rules
When Barycenter receives a request to `/authorize`, it performs the following checks in order:
1. **`client_id` exists** -- the client must be registered. If not, an error is returned directly (not via redirect).
2. **`redirect_uri` exact match** -- the provided URI must exactly match one of the client's registered redirect URIs. If not, an error is returned directly.
3. **`scope` includes `openid`** -- the `openid` scope is mandatory for all OIDC requests.
4. **`code_challenge_method` is `S256`** -- the `plain` method is rejected.
5. **`code_challenge` is present** -- PKCE is not optional.
6. **Session state** -- if the user has no active session, they are redirected to `/login`. After authentication, the flow resumes.
7. **2FA requirement** -- if the user has admin-enforced 2FA, if the requested scopes include high-value scopes (`admin`, `payment`, `transfer`, `delete`), or if `max_age` is below 300 seconds, the user is redirected to `/login/2fa`.
8. **Consent** -- if consent has not been previously granted for the requested scopes, the user is redirected to `/consent`.
## Authorization Code
When all checks pass, Barycenter generates an authorization code:
- **Format**: 24 cryptographically random bytes, base64url-encoded.
- **TTL**: 5 minutes. Codes that are not exchanged within this window expire.
- **Single-use**: once exchanged at the token endpoint, the code is marked as consumed. Attempting to reuse it will fail.
- **Bound data**: the code is associated with the client_id, redirect_uri, scope, PKCE challenge, subject, and nonce.
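For illustration, the same shape can be produced with openssl. This mirrors only the code's format, not the server's internal generator:

```shell
# 24 random bytes, base64url-encoded -- 32 characters, no padding needed
code=$(openssl rand -base64 24 | tr -d '=' | tr '+/' '-_')
echo "example code: $code"
```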
The user agent is redirected to the client's `redirect_uri` with the code and (if provided) the state:
```
HTTP/1.1 302 Found
Location: https://client.example.com/callback?code=abc123...&state=xyz
```
## Error Responses
Errors that occur **before** redirect_uri validation (invalid client_id, missing or non-matching redirect_uri) are displayed directly to the user. All other errors are returned as query parameters on the redirect URI per [RFC 6749 Section 4.1.2.1](https://datatracker.ietf.org/doc/html/rfc6749#section-4.1.2.1).
| Error Code | Condition |
|---|---|
| `invalid_request` | Missing required parameter, unsupported parameter value, or malformed request. |
| `unauthorized_client` | The client is not authorized for the requested response type or grant type. |
| `access_denied` | The user denied the authorization request. |
| `unsupported_response_type` | The `response_type` value is not supported. |
| `invalid_scope` | The requested scope is invalid, unknown, or missing `openid`. |
| `server_error` | An unexpected internal error occurred. |
| `temporarily_unavailable` | The server is temporarily unable to handle the request. |
Example error redirect:
```
HTTP/1.1 302 Found
Location: https://client.example.com/callback?error=invalid_scope&error_description=The+openid+scope+is+required&state=xyz
```
## Complete Example
```bash
# 1. Generate PKCE values
code_verifier=$(openssl rand -base64 32 | tr -d '=' | tr '+/' '-_')
code_challenge=$(printf '%s' "$code_verifier" \
| openssl dgst -binary -sha256 | base64 | tr -d '=' | tr '+/' '-_')
# 2. Direct the user to the authorization endpoint
# (open this URL in a browser)
echo "https://idp.example.com/authorize?\
client_id=my_client_id&\
redirect_uri=https://app.example.com/callback&\
response_type=code&\
scope=openid%20profile%20email&\
code_challenge=${code_challenge}&\
code_challenge_method=S256&\
state=random_state_value&\
nonce=random_nonce_value"
# 3. After the user authenticates and consents, extract the code from
# the redirect: https://app.example.com/callback?code=AUTH_CODE&state=random_state_value
# 4. Exchange the code for tokens (see Token Endpoint documentation)
curl -X POST https://idp.example.com/token \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "grant_type=authorization_code" \
-d "code=AUTH_CODE" \
-d "redirect_uri=https://app.example.com/callback" \
-d "client_id=my_client_id" \
-d "client_secret=my_client_secret" \
-d "code_verifier=${code_verifier}"
```
## Next Steps
- [Token Endpoint](./token-endpoint.md) -- exchanging the authorization code for tokens.
- [Client Authentication](./client-authentication.md) -- how to authenticate at the token endpoint.
- [ID Token](./id-token.md) -- structure and claims of the issued ID Token.

# Client Authentication
When making requests to the [token endpoint](./token-endpoint.md), clients must authenticate themselves to prove they are the legitimate holder of the `client_id`. Barycenter supports two authentication methods defined in [RFC 6749 Section 2.3](https://datatracker.ietf.org/doc/html/rfc6749#section-2.3) and [OpenID Connect Core Section 9](https://openid.net/specs/openid-connect-core-1_0.html#ClientAuthentication).
## Supported Methods
| Method | Description | Credential Location |
|---|---|---|
| `client_secret_basic` | HTTP Basic authentication | `Authorization` header |
| `client_secret_post` | Form-encoded credentials | Request body |
Both methods are equally supported. Choose the one that best fits your HTTP client library or framework.
## client_secret_basic
The client sends its credentials using HTTP Basic authentication. The `client_id` and `client_secret` are combined with a colon separator, base64-encoded, and sent in the `Authorization` header.
### Format
```
Authorization: Basic base64(client_id:client_secret)
```
### Encoding Steps
1. Concatenate the `client_id`, a colon (`:`), and the `client_secret`.
2. Base64-encode the resulting string.
3. Set the `Authorization` header to `Basic ` followed by the encoded string.
### Example
Given:
- `client_id`: `my_client_id`
- `client_secret`: `my_client_secret`
```bash
# The base64 encoding of "my_client_id:my_client_secret"
echo -n "my_client_id:my_client_secret" | base64
# Output: bXlfY2xpZW50X2lkOm15X2NsaWVudF9zZWNyZXQ=
```
```bash
curl -X POST https://idp.example.com/token \
-H "Authorization: Basic bXlfY2xpZW50X2lkOm15X2NsaWVudF9zZWNyZXQ=" \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "grant_type=authorization_code" \
-d "code=AUTH_CODE" \
-d "redirect_uri=https://app.example.com/callback" \
-d "code_verifier=VERIFIER"
```
Most HTTP libraries handle Basic authentication natively. For example, `curl` provides the `-u` flag:
```bash
curl -X POST https://idp.example.com/token \
-u "my_client_id:my_client_secret" \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "grant_type=authorization_code" \
-d "code=AUTH_CODE" \
-d "redirect_uri=https://app.example.com/callback" \
-d "code_verifier=VERIFIER"
```
### Special Characters
If the `client_id` or `client_secret` contains special characters (such as `:`, `@`, or non-ASCII characters), they must be percent-encoded per [RFC 6749 Section 2.3.1](https://datatracker.ietf.org/doc/html/rfc6749#section-2.3.1) before base64 encoding. In practice, Barycenter generates credentials using base64url-safe characters, so this is typically not a concern.
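A sketch of the encoding order for such credentials (the colon-containing `client_id` here is hypothetical; credentials generated by Barycenter never need this step): percent-encode each component first, then join with a colon and base64-encode the pair.

```shell
# Hypothetical credentials containing reserved characters
client_id='my:client'
client_secret='s3cret@value'
# Form-urlencode each component per RFC 6749 Section 2.3.1
enc_id=$(python3 -c "import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1], safe=''))" "$client_id")
enc_secret=$(python3 -c "import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1], safe=''))" "$client_secret")
echo "encoded pair: ${enc_id}:${enc_secret}"
# Then base64-encode the joined pair for the Authorization header
printf '%s:%s' "$enc_id" "$enc_secret" | base64
```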
## client_secret_post
The client sends its credentials as form parameters in the request body alongside the other token request parameters.
### Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `client_id` | string | Yes | The client identifier. |
| `client_secret` | string | Yes | The client secret. |
### Example
```bash
curl -X POST https://idp.example.com/token \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "grant_type=authorization_code" \
-d "code=AUTH_CODE" \
-d "redirect_uri=https://app.example.com/callback" \
-d "code_verifier=VERIFIER" \
-d "client_id=my_client_id" \
-d "client_secret=my_client_secret"
```
## Choosing a Method
| Consideration | client_secret_basic | client_secret_post |
|---|---|---|
| Credential exposure in logs | Credentials in header, less likely to be logged | Credentials in body, may appear in access logs if the server logs POST bodies |
| Framework support | Most HTTP libraries support Basic auth natively | Requires manual parameter inclusion |
| Specification preference | Preferred by OAuth 2.0 spec | Acceptable alternative |
| Simplicity | Requires base64 encoding | Straightforward form parameters |
The OAuth 2.0 specification expresses a preference for `client_secret_basic`, but both methods are fully supported and provide equivalent security.
## Error Handling
If client authentication fails, the token endpoint returns HTTP 401 with a `WWW-Authenticate` header:
```
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Basic
Content-Type: application/json
{
"error": "invalid_client",
"error_description": "Client authentication failed"
}
```
Common causes of authentication failure:
- **Unknown `client_id`**: The client has not been [registered](./client-registration.md).
- **Incorrect `client_secret`**: The secret does not match the one issued during registration.
- **Malformed `Authorization` header**: The base64 encoding is incorrect or the header format is wrong.
- **Missing credentials**: Neither the `Authorization` header nor body parameters contain client credentials.
## Related
- [Client Registration](./client-registration.md) -- obtaining client credentials.
- [Token Endpoint](./token-endpoint.md) -- using credentials to request tokens.

# Client Registration
Barycenter supports Dynamic Client Registration as defined in [RFC 7591](https://datatracker.ietf.org/doc/html/rfc7591). Clients can register themselves programmatically by sending a POST request to the registration endpoint.
## Endpoint
```
POST /connect/register
Content-Type: application/json
```
## Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| `redirect_uris` | array of strings | Yes | One or more redirect URIs for the client. Each URI must be an absolute URI. These are used for exact-match validation during [authorization requests](./authorization-code-flow.md). |
| `client_name` | string | No | Human-readable name for the client application. Displayed to users during consent. |
### Example Request
```bash
curl -X POST https://idp.example.com/connect/register \
-H "Content-Type: application/json" \
-d '{
"redirect_uris": [
"https://app.example.com/callback",
"https://app.example.com/auth/redirect"
],
"client_name": "My Application"
}'
```
## Response
A successful registration returns `201 Created` with the client credentials.
| Field | Type | Description |
|---|---|---|
| `client_id` | string | Unique identifier for the client. Generated as 24 random bytes, base64url-encoded. |
| `client_secret` | string | Client secret for authentication at the [token endpoint](./token-endpoint.md). Generated as 24 random bytes, base64url-encoded. |
| `redirect_uris` | array of strings | The registered redirect URIs, echoed back from the request. |
| `client_name` | string | The client name, echoed back from the request (if provided). |
### Example Response
```json
{
"client_id": "dG9hc3R5LWNsaWVudC1pZC1leGFtcGxl",
"client_secret": "c2VjcmV0LXZhbHVlLWhlcmUtZXhhbXBsZQ",
"redirect_uris": [
"https://app.example.com/callback",
"https://app.example.com/auth/redirect"
],
"client_name": "My Application"
}
```
## Validation
The registration endpoint validates the following:
- **`redirect_uris` must be present** and contain at least one URI.
- **Each URI must be a valid absolute URI**. Fragment components (`#fragment`) are not allowed per the OAuth 2.0 specification.
If validation fails, the endpoint returns an error response:
```json
{
"error": "invalid_client_metadata",
"error_description": "At least one redirect_uri is required"
}
```
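A client-side sanity check mirroring these two rules before submitting a registration (a sketch only; Barycenter performs its own validation server-side):

```shell
# Reject URIs that are not absolute or that carry a fragment component
check_redirect_uri() {
  python3 -c "
import sys
from urllib.parse import urlsplit
u = urlsplit(sys.argv[1])
sys.exit(0 if (u.scheme and u.netloc and not u.fragment) else 1)
" "$1"
}
check_redirect_uri "https://app.example.com/callback" && echo "valid"
check_redirect_uri "https://app.example.com/cb#frag" || echo "rejected: fragment not allowed"
```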
## Client Credentials
After registration, the client must store both the `client_id` and `client_secret` securely. The `client_secret` is needed to authenticate at the token endpoint using either [client_secret_basic or client_secret_post](./client-authentication.md).
> **Important**: The `client_secret` is returned only once at registration time. Barycenter does not provide a mechanism to retrieve it later. If the secret is lost, the client must re-register.
## Usage After Registration
Once registered, the client can initiate the [Authorization Code flow](./authorization-code-flow.md):
```bash
# 1. Register
response=$(curl -s -X POST https://idp.example.com/connect/register \
-H "Content-Type: application/json" \
-d '{
"redirect_uris": ["https://app.example.com/callback"],
"client_name": "Demo Client"
}')
# 2. Extract credentials
client_id=$(echo "$response" | jq -r '.client_id')
client_secret=$(echo "$response" | jq -r '.client_secret')
echo "Client ID: $client_id"
echo "Client Secret: $client_secret"
# 3. Use in authorization request
echo "Authorize URL: https://idp.example.com/authorize?client_id=${client_id}&redirect_uri=https://app.example.com/callback&response_type=code&scope=openid&code_challenge=...&code_challenge_method=S256"
```
## Related
- [Authorization Code Flow](./authorization-code-flow.md) -- using the registered client to initiate authorization.
- [Client Authentication](./client-authentication.md) -- authenticating at the token endpoint with the issued credentials.
- [Discovery](./discovery-jwks.md) -- finding the registration endpoint via OpenID Connect Discovery.

# Discovery and JWKS
Barycenter publishes its configuration and public keys through two standard endpoints, enabling relying parties to automatically configure themselves without manual setup.
## OpenID Connect Discovery
### Endpoint
```
GET /.well-known/openid-configuration
```
This endpoint returns a JSON document describing the OpenID Provider's configuration, as defined in [OpenID Connect Discovery 1.0](https://openid.net/specs/openid-connect-discovery-1_0.html).
### Example Request
```bash
curl https://idp.example.com/.well-known/openid-configuration
```
### Response
```json
{
"issuer": "https://idp.example.com",
"authorization_endpoint": "https://idp.example.com/authorize",
"token_endpoint": "https://idp.example.com/token",
"userinfo_endpoint": "https://idp.example.com/userinfo",
"jwks_uri": "https://idp.example.com/.well-known/jwks.json",
"registration_endpoint": "https://idp.example.com/connect/register",
"revocation_endpoint": "https://idp.example.com/revoke",
"device_authorization_endpoint": "https://idp.example.com/device_authorization",
"scopes_supported": [
"openid",
"profile",
"email",
"offline_access"
],
"response_types_supported": [
"code",
"id_token",
"id_token token"
],
"grant_types_supported": [
"authorization_code",
"refresh_token",
"urn:ietf:params:oauth:grant-type:device_code"
],
"subject_types_supported": [
"public"
],
"id_token_signing_alg_values_supported": [
"RS256"
],
"token_endpoint_auth_methods_supported": [
"client_secret_basic",
"client_secret_post"
],
"claims_supported": [
"iss",
"sub",
"aud",
"exp",
"iat",
"auth_time",
"nonce",
"at_hash",
"amr",
"acr",
"name",
"given_name",
"family_name",
"email",
"email_verified"
],
"code_challenge_methods_supported": [
"S256"
],
"ui_locales_supported": [],
"claims_locales_supported": []
}
```
### Metadata Fields
#### Endpoints
| Field | Description |
|---|---|
| `issuer` | The identifier for the OpenID Provider. This value is used as the `iss` claim in [ID Tokens](./id-token.md) and must exactly match. |
| `authorization_endpoint` | URL for the [authorization endpoint](./authorization-code-flow.md). |
| `token_endpoint` | URL for the [token endpoint](./token-endpoint.md). |
| `userinfo_endpoint` | URL for the [UserInfo endpoint](./userinfo.md). |
| `jwks_uri` | URL for the [JWKS endpoint](#json-web-key-set-jwks) containing the public signing keys. |
| `registration_endpoint` | URL for [dynamic client registration](./client-registration.md). |
| `revocation_endpoint` | URL for [token revocation](./token-revocation.md). |
| `device_authorization_endpoint` | URL for the [device authorization endpoint](./grant-device-authorization.md). |
#### Supported Features
| Field | Description |
|---|---|
| `scopes_supported` | The scopes that Barycenter recognizes. `openid` is required for all OIDC requests. `profile` and `email` control which claims are returned by the [UserInfo endpoint](./userinfo.md). `offline_access` enables [refresh tokens](./grant-refresh-token.md). |
| `response_types_supported` | Supported `response_type` values for the authorization endpoint. `code` is the recommended Authorization Code flow. |
| `grant_types_supported` | Grant types accepted at the token endpoint. See individual grant type documentation for details. |
| `subject_types_supported` | Subject identifier types. Barycenter uses `public` subject identifiers (the same `sub` value is returned to all clients for a given user). |
| `id_token_signing_alg_values_supported` | Algorithms used for signing ID Tokens. Barycenter uses `RS256` exclusively. |
| `token_endpoint_auth_methods_supported` | [Client authentication methods](./client-authentication.md) accepted at the token endpoint. |
| `claims_supported` | Claims that may appear in [ID Tokens](./id-token.md) or [UserInfo responses](./userinfo.md). |
| `code_challenge_methods_supported` | PKCE challenge methods. Only `S256` is supported; `plain` is rejected. |
| `ui_locales_supported` | Supported UI locales. Currently empty (default locale only). |
| `claims_locales_supported` | Supported claims locales. Currently empty (default locale only). |
## JSON Web Key Set (JWKS)
### Endpoint
```
GET /.well-known/jwks.json
```
This endpoint returns the public keys used by Barycenter to sign ID Tokens, formatted as a JSON Web Key Set per [RFC 7517](https://datatracker.ietf.org/doc/html/rfc7517).
### Example Request
```bash
curl https://idp.example.com/.well-known/jwks.json
```
### Response
```json
{
"keys": [
{
"kty": "RSA",
"use": "sig",
"alg": "RS256",
"kid": "key-1",
"n": "0vx7agoebGcQSuuPiLJXZptN9nndrQmbXEps2aiAFbWhM78LhWx4cbbfAAtVT86zwu1RK7aPFFxuhDR1L6tSoc_BJECPebWKRXjBZCiFV4n3oknjhMstn64tZ_2W-5JsGY4Hc5n9yBXArwl93lqt7_RN5w6Cf0h4QyQ5v-65YGjQR0_FDW2QvzqY368QQMicAtaSqzs8KJZgnYb9c7d0zgdAZHzu6qMQvRL5hajrn1n91CbOpbISD08qNLyrdkt-bFTWhAI4vMQFh6WeZu0fM4lFd2NcRwr3XPksINHaQ-G_xBniIqbw0Ls1jF44-csFCur-kEgU8awapJzKnqDKgw",
"e": "AQAB"
}
]
}
```
### Key Fields
| Field | Type | Description |
|---|---|---|
| `kty` | string | Key type. Always `RSA` for Barycenter's signing keys. |
| `use` | string | Key usage. `sig` indicates the key is used for digital signatures. |
| `alg` | string | Algorithm. `RS256` (RSASSA-PKCS1-v1_5 using SHA-256). |
| `kid` | string | Key ID. Matches the `kid` in the JWT header of [ID Tokens](./id-token.md), allowing relying parties to select the correct key when multiple keys are published. |
| `n` | string | RSA modulus, base64url-encoded. |
| `e` | string | RSA public exponent, base64url-encoded. Typically `AQAB` (65537). |
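A relying party (or a JOSE library on its behalf) decodes `n` and `e` into the RSA public-key integers and selects the key by `kid`. A minimal Python sketch, assuming a parsed JWKS dict; the helper names are illustrative, not part of Barycenter:

```python
import base64

def b64url_to_int(value: str) -> int:
    # JWK integers are big-endian byte strings, base64url-encoded without padding.
    padded = value + "=" * (-len(value) % 4)
    return int.from_bytes(base64.urlsafe_b64decode(padded), "big")

def find_signing_key(jwks: dict, kid: str) -> dict:
    # Pick the published key whose kid matches the JWT header's kid.
    return next(k for k in jwks["keys"] if k["kid"] == kid)
```

Decoding the usual exponent `"AQAB"` yields the integer 65537.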
### Key Lifecycle
- Barycenter generates a **2048-bit RSA key pair** on first startup if no existing key is found.
- The private key is persisted to the path configured in `keys.private_key_path`.
- The public key set is written to `keys.jwks_path` and served by this endpoint.
- The `kid` value is stable across restarts, ensuring that previously issued ID Tokens can still be verified.
## Using Discovery for Client Configuration
Most OIDC client libraries can auto-configure themselves using the discovery endpoint. The typical flow is:
1. Fetch `/.well-known/openid-configuration`.
2. Extract the relevant endpoint URLs (`authorization_endpoint`, `token_endpoint`, `jwks_uri`, etc.).
3. Fetch the JWKS from `jwks_uri` to obtain the public signing keys.
4. Cache the JWKS with appropriate TTL and refresh periodically.
### Example: Discovering and Verifying
```bash
# 1. Fetch the provider configuration
config=$(curl -s https://idp.example.com/.well-known/openid-configuration)
# 2. Extract the JWKS URI
jwks_uri=$(echo "$config" | jq -r '.jwks_uri')
# 3. Fetch the public keys
jwks=$(curl -s "$jwks_uri")
# 4. Display the signing key
echo "$jwks" | jq '.keys[0]'
```
## Caching Recommendations
- **Discovery document**: Cache for at least 24 hours. The configuration changes infrequently (only on server reconfiguration).
- **JWKS**: Cache based on HTTP cache headers. Refresh when encountering an unknown `kid` in a JWT header, as this may indicate key rotation.
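The refresh-on-unknown-`kid` pattern can be sketched as a small cache. This is an illustrative sketch, not Barycenter client code; `fetch_jwks` is an assumed callable that retrieves and parses the JWKS document:

```python
class JwksCache:
    """Cache JWKS keys by kid; refetch once when an unknown kid appears."""

    def __init__(self, fetch_jwks):
        self._fetch = fetch_jwks  # callable returning the parsed JWKS dict
        self._keys = {}

    def get_key(self, kid: str) -> dict:
        if kid not in self._keys:
            # An unknown kid may indicate key rotation: refresh the cache.
            jwks = self._fetch()
            self._keys = {k["kid"]: k for k in jwks["keys"]}
        return self._keys[kid]  # raises KeyError if still unknown
```

Known `kid` values are served from the cache without a network round trip; only an unfamiliar `kid` triggers a refetch.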
## Related
- [ID Token](./id-token.md) -- verifying tokens with the JWKS public key.
- [Client Registration](./client-registration.md) -- the registration endpoint advertised in discovery.
- [Token Endpoint](./token-endpoint.md) -- the token endpoint advertised in discovery.

# Authorization Code Grant
The authorization code grant type exchanges an authorization code for an access token and ID token. This is the second half of the [Authorization Code flow](./authorization-code-flow.md), occurring after the user has authenticated and consented.
## Request
```
POST /token
Content-Type: application/x-www-form-urlencoded
```
### Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `grant_type` | string | Yes | Must be `authorization_code`. |
| `code` | string | Yes | The authorization code received from the authorization endpoint redirect. |
| `redirect_uri` | string | Yes | Must exactly match the `redirect_uri` used in the original authorization request. |
| `client_id` | string | Yes | The client identifier. Sent in the request body with `client_secret_post`; with `client_secret_basic` it is taken from the `Authorization` header. |
| `code_verifier` | string | Yes | The original PKCE code verifier that was used to generate the `code_challenge` sent to the authorization endpoint. |
Client authentication (via [client_secret_basic or client_secret_post](./client-authentication.md)) is also required.
### Example Request (client_secret_basic)
```bash
curl -X POST https://idp.example.com/token \
-u "my_client_id:my_client_secret" \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "grant_type=authorization_code" \
-d "code=Rk1hWUxPdXotbXk2UGFmQndPTEVMWWpIZVhR" \
-d "redirect_uri=https://app.example.com/callback" \
-d "code_verifier=dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk"
```
### Example Request (client_secret_post)
```bash
curl -X POST https://idp.example.com/token \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "grant_type=authorization_code" \
-d "code=Rk1hWUxPdXotbXk2UGFmQndPTEVMWWpIZVhR" \
-d "redirect_uri=https://app.example.com/callback" \
-d "client_id=my_client_id" \
-d "client_secret=my_client_secret" \
-d "code_verifier=dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk"
```
## PKCE Verification
Barycenter verifies the PKCE code verifier against the stored code challenge using the S256 method:
1. Compute `SHA-256(code_verifier)` to produce a 32-byte hash.
2. Encode the hash with base64url (no padding).
3. Compare the result to the `code_challenge` stored with the authorization code.
If the values do not match, the token request is rejected with `invalid_grant`.
```
SHA-256(code_verifier) --> base64url --> compare with stored code_challenge
```
This ensures that the party exchanging the authorization code is the same party that initiated the authorization request, even if the code was intercepted in transit.
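The three verification steps above can be sketched in Python; this is an illustrative sketch, not Barycenter's implementation. The verifier below is the RFC 7636 Appendix B test vector (also used in the example requests on this page):

```python
import base64
import hashlib
import hmac

def s256_challenge(code_verifier: str) -> str:
    # Steps 1-2: SHA-256 the verifier, then base64url-encode without padding.
    digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

def verify_pkce(code_verifier: str, stored_challenge: str) -> bool:
    # Step 3: constant-time comparison against the stored code_challenge.
    return hmac.compare_digest(s256_challenge(code_verifier), stored_challenge)
```

For the verifier `dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk`, the computed challenge is `E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM`.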
## Response
A successful exchange returns HTTP 200 with a JSON body:
```json
{
"access_token": "VGhpcyBpcyBhbiBleGFtcGxlIGFjY2VzcyB0b2tlbg",
"token_type": "bearer",
"expires_in": 3600,
"id_token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCIsImtpZCI6ImtleS0xIn0.eyJpc3MiOiJodHRwczovL2lkcC5leGFtcGxlLmNvbSIsInN1YiI6InVzZXJfMTIzIiwiYXVkIjoibXlfY2xpZW50X2lkIiwiZXhwIjoxNzA5MzE1MjAwLCJpYXQiOjE3MDkzMTE2MDAsImF1dGhfdGltZSI6MTcwOTMxMTU5MCwibm9uY2UiOiJhYmMxMjMiLCJhdF9oYXNoIjoiSGsyYUtCeXpEcERReU4tR2VBN196dyIsImFtciI6WyJwd2QiXSwiYWNyIjoiYWFsMSJ9.signature",
"refresh_token": "cmVmcmVzaC10b2tlbi1leGFtcGxl"
}
```
| Field | Type | Present | Description |
|---|---|---|---|
| `access_token` | string | Always | Bearer token, valid for 1 hour. |
| `token_type` | string | Always | `"bearer"`. |
| `expires_in` | integer | Always | `3600` (seconds). |
| `id_token` | string | Always | Signed JWT with identity claims. See [ID Token](./id-token.md). |
| `refresh_token` | string | Conditional | Present only if `offline_access` was included in the authorized scope. See [Refresh Token Grant](./grant-refresh-token.md). |
## Validation and Error Conditions
The token endpoint performs several checks before issuing tokens:
| Check | Error Code | Description |
|---|---|---|
| Client authentication | `invalid_client` | The client_id/client_secret pair is invalid. HTTP 401. |
| Code exists | `invalid_grant` | The authorization code is not recognized. |
| Code not expired | `invalid_grant` | The code's 5-minute TTL has elapsed. |
| Code not consumed | `invalid_grant` | The code has already been exchanged. |
| Client ID match | `invalid_grant` | The `client_id` in the token request does not match the client that obtained the code. |
| Redirect URI match | `invalid_grant` | The `redirect_uri` does not match the one used in the authorization request. |
| PKCE verification | `invalid_grant` | `SHA-256(code_verifier)` does not match the stored `code_challenge`. |
All error responses use HTTP 400 (except `invalid_client` which uses HTTP 401):
```json
{
"error": "invalid_grant",
"error_description": "The authorization code has been consumed"
}
```
## Security Considerations
- **Authorization codes are single-use.** After a successful exchange, the code is permanently marked as consumed. If an attacker replays a code, the request is rejected.
- **Codes expire after 5 minutes.** This limits the window for interception.
- **PKCE binds the code to the original requester.** Even if an authorization code is intercepted, it cannot be exchanged without the original `code_verifier`.
- **Redirect URI is validated twice** -- once at the authorization endpoint and once at the token endpoint -- ensuring consistency.
## Related
- [Authorization Code Flow](./authorization-code-flow.md) -- the full flow from authorization to token exchange.
- [Client Authentication](./client-authentication.md) -- authenticating the token request.
- [ID Token](./id-token.md) -- understanding the issued ID Token.
- [Token Endpoint](./token-endpoint.md) -- overview of all grant types.

# Device Authorization Grant
Barycenter implements the OAuth 2.0 Device Authorization Grant as defined in [RFC 8628](https://datatracker.ietf.org/doc/html/rfc8628). This grant type enables authorization on input-constrained devices such as smart TVs, CLI tools, IoT devices, and other environments where the user cannot easily interact with a browser on the same device.
## Flow Overview
The device authorization grant involves three participants: the **device** (client application running on the constrained device), the **user** (who authorizes on a separate browser), and the **authorization server** (Barycenter).
```mermaid
sequenceDiagram
participant Device as Device Client
participant Server as Barycenter
participant User as User (Browser)
Device->>Server: POST /device_authorization
Note right of Device: client_id + scope
Server->>Device: device_code, user_code,<br/>verification_uri, interval
Device->>User: Display user_code and<br/>verification_uri
par User authorizes in browser
User->>Server: GET /device
Server->>User: Code entry page
User->>Server: POST /device/verify (user_code)
Server->>User: Login page (if not authenticated)
User->>Server: Authenticate
Server->>User: Consent page
User->>Server: POST /device/consent (approve)
Server->>User: Success confirmation
and Device polls for token
loop Poll every {interval} seconds
Device->>Server: POST /token<br/>grant_type=urn:ietf:params:oauth:grant-type:device_code
Server->>Device: authorization_pending (waiting)
end
Device->>Server: POST /token<br/>grant_type=urn:ietf:params:oauth:grant-type:device_code
Server->>Device: access_token, id_token
end
```
## Step 1: Device Requests Authorization
The device initiates the flow by requesting a device code.
### Endpoint
```
POST /device_authorization
Content-Type: application/x-www-form-urlencoded
```
### Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `client_id` | string | Yes | The registered client identifier. |
| `scope` | string | No | Space-delimited list of requested scopes. Must include `openid` for OIDC. |
### Example Request
```bash
curl -X POST https://idp.example.com/device_authorization \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "client_id=my_client_id" \
-d "scope=openid%20profile"
```
### Response
```json
{
"device_code": "ZGV2aWNlLWNvZGUtZXhhbXBsZS12YWx1ZQ",
"user_code": "ABCD-1234",
"verification_uri": "https://idp.example.com/device",
"verification_uri_complete": "https://idp.example.com/device?user_code=ABCD-1234",
"expires_in": 600,
"interval": 5
}
```
| Field | Type | Description |
|---|---|---|
| `device_code` | string | The device verification code. Used by the device when polling the token endpoint. |
| `user_code` | string | A short, human-readable code displayed to the user. The user enters this on the verification page. |
| `verification_uri` | string | The URI the user should visit to enter the code. Display this to the user. |
| `verification_uri_complete` | string | The verification URI with the user_code pre-filled as a query parameter. Can be displayed as a QR code for convenience. |
| `expires_in` | integer | Lifetime of the device_code and user_code in seconds. |
| `interval` | integer | Minimum polling interval in seconds. The device must wait at least this long between token requests. |
## Step 2: User Authorizes in Browser
The device displays the `user_code` and `verification_uri` (or a QR code of `verification_uri_complete`) to the user. The user then:
### 2a. Opens the Verification Page
```
GET /device
```
The user navigates to the verification URI in a browser on any device (phone, laptop, etc.). Barycenter displays a page with a code entry form.
### 2b. Enters the User Code
```
POST /device/verify
Content-Type: application/x-www-form-urlencoded
```
| Parameter | Type | Required | Description |
|---|---|---|---|
| `user_code` | string | Yes | The code displayed on the device. |
If the user arrived via `verification_uri_complete`, the code is pre-filled.
After submitting the code, the user is redirected to authenticate (if not already logged in) and then to approve the authorization.
### 2c. Approves the Request
```
POST /device/consent
```
After authentication, the user reviews the requested scopes and approves or denies the request. On approval, the device code is marked as authorized, and the user sees a success confirmation page.
## Step 3: Device Polls for Token
While the user is authorizing in their browser, the device polls the token endpoint at regular intervals.
### Endpoint
```
POST /token
Content-Type: application/x-www-form-urlencoded
```
### Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `grant_type` | string | Yes | Must be `urn:ietf:params:oauth:grant-type:device_code`. |
| `device_code` | string | Yes | The device code received from the device authorization endpoint. |
| `client_id` | string | Yes | The client identifier. |
Client authentication (via [client_secret_basic or client_secret_post](./client-authentication.md)) is also required.
### Example Polling Request
```bash
curl -X POST https://idp.example.com/token \
-u "my_client_id:my_client_secret" \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "grant_type=urn:ietf:params:oauth:grant-type:device_code" \
-d "device_code=ZGV2aWNlLWNvZGUtZXhhbXBsZS12YWx1ZQ" \
-d "client_id=my_client_id"
```
### Polling Responses
The device will receive one of the following responses during polling:
#### Authorization Pending
The user has not yet completed authorization. Continue polling.
```json
{
"error": "authorization_pending",
"error_description": "The user has not yet authorized this device"
}
```
**HTTP Status: 400**
#### Slow Down
The device is polling too frequently. Increase the interval by 5 seconds.
```json
{
"error": "slow_down",
"error_description": "Polling too frequently, slow down"
}
```
**HTTP Status: 400**
#### Expired Token
The device code has expired. The device must restart the flow from Step 1.
```json
{
"error": "expired_token",
"error_description": "The device code has expired"
}
```
**HTTP Status: 400**
#### Access Denied
The user denied the authorization request.
```json
{
"error": "access_denied",
"error_description": "The user denied the authorization request"
}
```
**HTTP Status: 400**
#### Success
The user has authorized the device. Tokens are returned.
```json
{
"access_token": "ZXhhbXBsZS1hY2Nlc3MtdG9rZW4",
"token_type": "bearer",
"expires_in": 3600,
"id_token": "eyJhbGciOiJSUzI1NiIs...",
"refresh_token": "ZXhhbXBsZS1yZWZyZXNoLXRva2Vu"
}
```
**HTTP Status: 200**
## Implementation Notes for Device Clients
### Polling Best Practices
A runnable sketch of the polling loop (assuming the Python `requests` library; `device_auth` holds the Step 1 response, and `token_url`, `client_id`, `client_secret` come from the client's configuration):

```python
import time

import requests  # third-party HTTP client, assumed available

interval = device_auth["interval"]  # e.g., 5 seconds
while True:
    time.sleep(interval)  # always wait before (re)polling
    resp = requests.post(
        token_url,
        auth=(client_id, client_secret),
        data={
            "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
            "device_code": device_auth["device_code"],
            "client_id": client_id,
        },
    )
    body = resp.json()
    if resp.status_code == 200:
        tokens = body  # store access_token, id_token, refresh_token
        break
    error = body.get("error")
    if error == "authorization_pending":
        continue  # the user has not finished authorizing yet
    if error == "slow_down":
        interval += 5  # back off by 5 seconds, then keep polling
        continue
    if error == "expired_token":
        raise TimeoutError("device code expired; restart from Step 1")
    # access_denied or another terminal error: surface it to the user
    raise PermissionError(body.get("error_description", error))
```
### Displaying the Code
The device should display the `user_code` and `verification_uri` prominently:
```
To sign in, visit:
https://idp.example.com/device
and enter code:
ABCD-1234
```
If the device can display a QR code, encode `verification_uri_complete` as a QR code to allow the user to scan instead of typing.
## Related
- [Token Endpoint](./token-endpoint.md) -- overview of all grant types.
- [Client Registration](./client-registration.md) -- registering the device client.
- [Client Authentication](./client-authentication.md) -- authenticating token requests.

# Refresh Token Grant
The refresh token grant allows clients to obtain a new access token without requiring the user to re-authenticate. Barycenter implements refresh token rotation as recommended by [OAuth 2.0 Security Best Current Practice](https://datatracker.ietf.org/doc/html/draft-ietf-oauth-security-topics): each time a refresh token is used, it is revoked and a new one is issued.
## Prerequisites
Refresh tokens are only issued when the `offline_access` scope is included in the original authorization request:
```
GET /authorize?...&scope=openid%20offline_access&...
```
If `offline_access` is not requested, the token response will not contain a `refresh_token` field, and this grant type cannot be used.
## Request
```
POST /token
Content-Type: application/x-www-form-urlencoded
```
### Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `grant_type` | string | Yes | Must be `refresh_token`. |
| `refresh_token` | string | Yes | The refresh token previously issued to the client. |
Client authentication (via [client_secret_basic or client_secret_post](./client-authentication.md)) is also required.
### Example Request
```bash
curl -X POST https://idp.example.com/token \
-u "my_client_id:my_client_secret" \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "grant_type=refresh_token" \
-d "refresh_token=cmVmcmVzaC10b2tlbi1leGFtcGxl"
```
## Response
A successful refresh returns HTTP 200 with a new token set:
```json
{
"access_token": "bmV3LWFjY2Vzcy10b2tlbi1leGFtcGxl",
"token_type": "bearer",
"expires_in": 3600,
"id_token": "eyJhbGciOiJSUzI1NiIs...",
"refresh_token": "bmV3LXJlZnJlc2gtdG9rZW4tZXhhbXBsZQ"
}
```
| Field | Type | Description |
|---|---|---|
| `access_token` | string | A new bearer access token, valid for 1 hour. |
| `token_type` | string | `"bearer"`. |
| `expires_in` | integer | `3600` (seconds). |
| `id_token` | string | A new signed [ID Token](./id-token.md). |
| `refresh_token` | string | A **new** refresh token. The previous refresh token is no longer valid. |
## Token Rotation
Barycenter implements refresh token rotation to limit the impact of token theft:
1. **On each use**, the presented refresh token is marked as revoked in the database.
2. **A new refresh token** is issued and returned in the response.
3. **The new token tracks its parent.** Barycenter records which refresh token was used to obtain the new one (`parent_token` tracking).
```
Original refresh_token (RT1)
--> Exchange --> RT1 revoked, RT2 issued
--> Exchange --> RT2 revoked, RT3 issued
--> Exchange --> RT3 revoked, RT4 issued
```
### Replay Detection
If a revoked refresh token is presented (indicating potential theft), Barycenter rejects the request with `invalid_grant`. The parent_token chain allows Barycenter to identify token reuse.
```
RT1 (revoked) --> Exchange attempt --> Rejected (invalid_grant)
```
This protects against a scenario where an attacker obtains a refresh token after the legitimate client has already rotated it.
## Error Conditions
| Error Code | Condition |
|---|---|
| `invalid_client` | Client authentication failed. HTTP 401. |
| `invalid_grant` | The refresh token is not recognized, has been revoked, or has expired. |
| `invalid_request` | Missing `refresh_token` parameter. |
### Example Error
```json
{
"error": "invalid_grant",
"error_description": "The refresh token has been revoked"
}
```
## Token Lifecycle
Refresh tokens follow this lifecycle:
| State | Description |
|---|---|
| **Active** | Valid and can be exchanged for new tokens. |
| **Consumed** | Has been used in a token exchange. Replaced by a new refresh token. |
| **Revoked** | Explicitly revoked via the [revocation endpoint](./token-revocation.md) or detected as replayed. |
| **Expired** | Past its expiration time. Cleaned up by background jobs. |
The `cleanup_expired_refresh_tokens` background job runs every hour (at :30) to remove expired refresh tokens from the database.
## Security Recommendations
- **Store refresh tokens securely.** They are long-lived credentials. Use secure storage mechanisms appropriate to your platform (e.g., encrypted storage on mobile, HTTP-only cookies for web).
- **Always use the latest refresh token.** After each refresh, discard the old token and use the newly issued one.
- **Handle `invalid_grant` gracefully.** If a refresh fails, redirect the user through the full authorization flow to obtain new tokens.
- **Request `offline_access` only when needed.** If your application does not need to refresh tokens (e.g., single-page apps with short sessions), omit the `offline_access` scope.
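The rotation-handling recommendations above can be sketched client-side. This is an illustrative sketch, not Barycenter client code; the HTTP transport is injected as a `post` callable so the logic stays testable:

```python
def refresh_access_token(post, token_url: str, auth: tuple, store: dict) -> dict:
    """post: callable(url, data, auth) -> (status_code, parsed_json_body)."""
    status, body = post(
        token_url,
        data={"grant_type": "refresh_token",
              "refresh_token": store["refresh_token"]},
        auth=auth,
    )
    if status == 200:
        # Rotation: the presented refresh token is now revoked server-side,
        # so overwrite the stored one with the newly issued token immediately.
        store["access_token"] = body["access_token"]
        store["refresh_token"] = body["refresh_token"]
        return store
    if body.get("error") == "invalid_grant":
        # Revoked, expired, or replayed: the only recovery is to send the
        # user through the full authorization flow again.
        raise PermissionError("re-authorization required")
    raise RuntimeError(body.get("error", "unknown token error"))
```

A real client would pass `requests.post` (or similar) wrapped to return `(status, json)`.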
## Related
- [Token Endpoint](./token-endpoint.md) -- overview of all grant types.
- [Authorization Code Flow](./authorization-code-flow.md) -- obtaining the initial refresh token.
- [Token Revocation](./token-revocation.md) -- explicitly revoking refresh tokens.

# ID Token
The ID Token is the core artifact of OpenID Connect. It is a JSON Web Token (JWT) that contains claims about the authentication event and the authenticated user. Barycenter signs ID Tokens using the RS256 algorithm (RSASSA-PKCS1-v1_5 with SHA-256).
## JWT Structure
A JWT consists of three base64url-encoded parts separated by dots:
```
header.payload.signature
```
### Header
```json
{
"alg": "RS256",
"typ": "JWT",
"kid": "key-1"
}
```
| Field | Description |
|---|---|
| `alg` | The signing algorithm. Always `RS256`. |
| `typ` | The token type. Always `JWT`. |
| `kid` | The key identifier. Matches the `kid` in the [JWKS](./discovery-jwks.md) endpoint, allowing relying parties to select the correct public key for verification. |
### Payload (Claims)
The payload contains the identity and authentication claims.
### Signature
The signature is computed over the base64url-encoded header and payload using the RSA private key:
```
RSASSA-PKCS1-V1_5-SIGN(SHA-256, base64url(header) + "." + base64url(payload))
```
Relying parties verify the signature using the public key from the [JWKS endpoint](./discovery-jwks.md).
## Claims Reference
| Claim | Type | Required | Description |
|---|---|---|---|
| `iss` | string | Yes | Issuer identifier. The URL of the Barycenter instance (e.g., `https://idp.example.com`). |
| `sub` | string | Yes | Subject identifier. A unique, stable identifier for the authenticated user. |
| `aud` | string | Yes | Audience. The `client_id` of the relying party that requested the token. |
| `exp` | integer | Yes | Expiration time as a Unix timestamp. The token must not be accepted after this time. |
| `iat` | integer | Yes | Issued-at time as a Unix timestamp. When the token was created. |
| `auth_time` | integer | Yes | Time of the authentication event as a Unix timestamp. When the user last actively authenticated. |
| `nonce` | string | Conditional | Present if a `nonce` was provided in the authorization request. Used by the client to associate the ID Token with its request and mitigate replay attacks. |
| `at_hash` | string | Yes | Access Token hash. Binds the ID Token to the access token. See [computation details](#at_hash-computation) below. |
| `amr` | array of strings | Yes | Authentication Methods References. Describes how the user authenticated. See [AMR values](#amr-values). |
| `acr` | string | Yes | Authentication Context Class Reference. Indicates the assurance level of the authentication. See [ACR values](#acr-values). |
## at_hash Computation
The `at_hash` claim binds the ID Token to the co-issued access token, preventing token substitution attacks. Barycenter computes it as follows:
1. Compute the SHA-256 hash of the ASCII representation of the `access_token` value.
2. Take the **left-most 128 bits** (16 bytes) of the hash.
3. Base64url-encode the result (no padding).
```
at_hash = base64url(SHA-256(access_token)[0..16])
```
### Example
Given an access token `VGhpcyBpcyBhbiBleGFtcGxl`:
```
SHA-256("VGhpcyBpcyBhbiBleGFtcGxl")
= 1e4d6b... (32 bytes)
Left 128 bits = first 16 bytes
at_hash = base64url(first_16_bytes)
= "Hk2aKByzDpDQyN-GeA7_zw"
```
Relying parties should verify the `at_hash` by performing the same computation on the received `access_token` and comparing the result.
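The computation above is short enough to write out directly; this is an illustrative sketch of the relying-party side:

```python
import base64
import hashlib

def compute_at_hash(access_token: str) -> str:
    # SHA-256 over the ASCII token, left-most 128 bits (16 bytes),
    # base64url-encoded without padding.
    digest = hashlib.sha256(access_token.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest[:16]).rstrip(b"=").decode("ascii")
```

A relying party compares `compute_at_hash(received_access_token)` with the `at_hash` claim; since the input to the final step is always 16 bytes, the result is always a 22-character base64url string.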
## AMR Values
The `amr` (Authentication Methods References) claim is an array of strings indicating which authentication methods were used during the session.
| Value | Meaning |
|---|---|
| `pwd` | Password authentication. The user provided a username and password. |
| `hwk` | Hardware-bound key. The user authenticated with a hardware-bound passkey (e.g., YubiKey, platform authenticator without cloud sync). |
| `swk` | Software key. The user authenticated with a cloud-synced passkey (e.g., iCloud Keychain, Google Password Manager). |
Multiple values indicate that more than one authentication method was used (multi-factor authentication):
| AMR | Scenario |
|---|---|
| `["pwd"]` | Password-only authentication. |
| `["hwk"]` | Single-factor passkey login (hardware-bound). |
| `["swk"]` | Single-factor passkey login (cloud-synced). |
| `["pwd", "hwk"]` | Password + hardware passkey (two-factor). |
| `["pwd", "swk"]` | Password + cloud-synced passkey (two-factor). |
## ACR Values
The `acr` (Authentication Context Class Reference) claim indicates the authentication assurance level.
| Value | Meaning | Condition |
|---|---|---|
| `aal1` | Authentication Assurance Level 1. Single-factor authentication. | One authentication method was used (password or passkey). |
| `aal2` | Authentication Assurance Level 2. Multi-factor authentication. | Two or more authentication methods were used (e.g., password + passkey). |
## Example Decoded Token
### Header
```json
{
"alg": "RS256",
"typ": "JWT",
"kid": "key-1"
}
```
### Payload
```json
{
"iss": "https://idp.example.com",
"sub": "550e8400-e29b-41d4-a716-446655440000",
"aud": "dG9hc3R5LWNsaWVudC1pZC1leGFtcGxl",
"exp": 1709315200,
"iat": 1709311600,
"auth_time": 1709311590,
"nonce": "n-0S6_WzA2Mj",
"at_hash": "Hk2aKByzDpDQyN-GeA7_zw",
"amr": ["pwd", "hwk"],
"acr": "aal2"
}
```
This token represents a user who authenticated with a password and then verified with a hardware-bound passkey (two-factor authentication at AAL2).
## Verifying an ID Token
Relying parties must validate the ID Token before trusting its claims. The verification steps are:
1. **Decode the JWT** into its three parts (header, payload, signature).
2. **Retrieve the public key** from the [JWKS endpoint](./discovery-jwks.md) using the `kid` from the header.
3. **Verify the signature** using the RS256 algorithm and the public key.
4. **Validate the `iss` claim** matches the expected issuer URL.
5. **Validate the `aud` claim** contains the client's own `client_id`.
6. **Check the `exp` claim** to ensure the token has not expired. Allow for a small clock skew (e.g., 30 seconds).
7. **Validate the `nonce`** matches the nonce sent in the authorization request (if one was sent).
8. **Verify `at_hash`** by computing it from the received access token and comparing.
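Steps 4 through 7 above are plain claim checks and can be sketched in Python; steps 1-3 (decoding and signature verification) are best delegated to a JOSE library. This sketch is illustrative, with hypothetical function and parameter names:

```python
import time

def validate_id_token_claims(claims: dict, *, issuer: str, client_id: str,
                             nonce=None, leeway: int = 30) -> None:
    # Step 4: issuer must match the expected issuer URL exactly.
    if claims["iss"] != issuer:
        raise ValueError("iss mismatch")
    # Step 5: the audience must contain our own client_id.
    aud = claims["aud"]
    audiences = aud if isinstance(aud, list) else [aud]
    if client_id not in audiences:
        raise ValueError("aud mismatch")
    # Step 6: reject expired tokens, allowing a small clock skew.
    if claims["exp"] < time.time() - leeway:
        raise ValueError("token expired")
    # Step 7: the nonce must round-trip if one was sent.
    if nonce is not None and claims.get("nonce") != nonce:
        raise ValueError("nonce mismatch")
```

Any raised error means the token must be rejected and its claims discarded.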
## Key Management
- Barycenter generates a **2048-bit RSA key pair** on first startup.
- The private key is persisted to disk (configured via `keys.private_key_path`) and reused across restarts.
- The public key is published via the [JWKS endpoint](./discovery-jwks.md).
- The `kid` in the JWT header corresponds to the `kid` in the JWKS, enabling key rotation.
## Related
- [Discovery and JWKS](./discovery-jwks.md) -- retrieving the public key for verification.
- [Token Endpoint](./token-endpoint.md) -- obtaining ID Tokens.
- [UserInfo Endpoint](./userinfo.md) -- retrieving additional user claims.

# Token Endpoint
The token endpoint is used to exchange an authorization grant for an access token, ID token, and optionally a refresh token.
## Endpoint
```
POST /token
Content-Type: application/x-www-form-urlencoded
```
All requests to the token endpoint must use `application/x-www-form-urlencoded` encoding for the request body. JSON request bodies are not accepted.
## Client Authentication
Every token request must include client authentication. Barycenter supports two methods:
- **`client_secret_basic`** -- HTTP Basic authentication with the client_id and client_secret.
- **`client_secret_post`** -- client_id and client_secret sent as form parameters in the request body.
See [Client Authentication](./client-authentication.md) for details and examples.
## Supported Grant Types
Barycenter supports three grant types at the token endpoint:
### [Authorization Code Grant](./grant-authorization-code.md)
The primary grant type for web and native applications. Exchanges an authorization code (obtained from the [authorization endpoint](./authorization-code-flow.md)) for tokens. Requires PKCE verification.
```
grant_type=authorization_code
```
### [Refresh Token Grant](./grant-refresh-token.md)
Obtains a new access token using a previously issued refresh token. Implements token rotation for security -- each use of a refresh token invalidates it and issues a new one.
```
grant_type=refresh_token
```
### [Device Authorization Grant](./grant-device-authorization.md)
Enables input-constrained devices (smart TVs, CLI tools, IoT devices) to obtain tokens by having the user authorize on a separate device with a browser.
```
grant_type=urn:ietf:params:oauth:grant-type:device_code
```
## Common Response Format
All successful token responses share the same structure:
```json
{
"access_token": "eyJhbGciOiJS...",
"token_type": "bearer",
"expires_in": 3600,
"id_token": "eyJhbGciOiJS...",
"refresh_token": "dGhpcyBpcyBh..."
}
```
| Field | Type | Description |
|---|---|---|
| `access_token` | string | Bearer token for accessing protected resources such as the [UserInfo endpoint](./userinfo.md). |
| `token_type` | string | Always `"bearer"`. |
| `expires_in` | integer | Token lifetime in seconds. Default is `3600` (1 hour). |
| `id_token` | string | A signed [ID Token](./id-token.md) (JWT) containing identity claims. Present when the `openid` scope was requested. |
| `refresh_token` | string | A refresh token for obtaining new access tokens. Present only when the `offline_access` scope was requested. |
## Error Responses
Token endpoint errors are returned as JSON with an HTTP 400 status code (or 401 for authentication failures):
```json
{
"error": "invalid_grant",
"error_description": "The authorization code has expired"
}
```
| Error Code | Condition |
|---|---|
| `invalid_request` | Missing required parameter or malformed request. |
| `invalid_client` | Client authentication failed (wrong secret, unknown client_id). Returns HTTP 401. |
| `invalid_grant` | The authorization code, refresh token, or device code is invalid, expired, or already consumed. |
| `unauthorized_client` | The client is not authorized for the requested grant type. |
| `unsupported_grant_type` | The grant type is not supported. |
| `invalid_scope` | The requested scope is invalid or exceeds the originally granted scope. |
| `slow_down` | Device code grant only: the client is polling too frequently. |
| `authorization_pending` | Device code grant only: the user has not yet completed authorization. |
| `expired_token` | Device code grant only: the device code has expired. |
## Related
- [Client Authentication](./client-authentication.md) -- how to authenticate requests to this endpoint.
- [ID Token](./id-token.md) -- structure of the returned ID Token.
- [Token Revocation](./token-revocation.md) -- revoking issued tokens.

# Token Revocation
Barycenter provides a token revocation endpoint that allows clients to invalidate access tokens and refresh tokens when they are no longer needed. This is defined in [RFC 7009](https://datatracker.ietf.org/doc/html/rfc7009).
## Endpoint
```
POST /revoke
Content-Type: application/x-www-form-urlencoded
```
## Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `token` | string | Yes | The token to revoke. This can be either an access token or a refresh token. |
Client authentication (via [client_secret_basic or client_secret_post](./client-authentication.md)) is also required.
## Example Request
### Using client_secret_basic
```bash
curl -X POST https://idp.example.com/revoke \
-u "my_client_id:my_client_secret" \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "token=VGhpcyBpcyBhbiBleGFtcGxlIGFjY2VzcyB0b2tlbg"
```
### Using client_secret_post
```bash
curl -X POST https://idp.example.com/revoke \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "token=VGhpcyBpcyBhbiBleGFtcGxlIGFjY2VzcyB0b2tlbg" \
-d "client_id=my_client_id" \
-d "client_secret=my_client_secret"
```
## Response
### Successful Revocation
A successful revocation returns HTTP 200 with an empty body:
```
HTTP/1.1 200 OK
Content-Length: 0
```
Per RFC 7009, the server responds with HTTP 200 regardless of whether the token was found, was already revoked, or was never valid. This prevents token scanning attacks -- a client cannot determine whether a token exists by observing the response.
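Because the server answers HTTP 200 in every case, client code can treat revocation as fire-and-forget. A minimal client-side sketch using only the Python standard library (`build_revocation_request` is an illustrative helper, not part of Barycenter):

```python
import base64
import urllib.parse
import urllib.request

def build_revocation_request(endpoint, client_id, client_secret, token):
    """Build an RFC 7009 revocation request using client_secret_basic."""
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    body = urllib.parse.urlencode({"token": token}).encode()
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={
            "Authorization": f"Basic {creds}",
            "Content-Type": "application/x-www-form-urlencoded",
        },
        method="POST",
    )

req = build_revocation_request(
    "https://idp.example.com/revoke", "my_client_id", "my_client_secret", "token123"
)
# Per RFC 7009, treat any HTTP 200 response to this request as success,
# regardless of whether the token was still active.
```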
### Behavior Matrix
| Token State | Server Action | Response |
|---|---|---|
| Active token | Sets `revoked` flag in database | HTTP 200 |
| Already revoked | No change | HTTP 200 |
| Expired token | No change (already unusable) | HTTP 200 |
| Unknown token | No action | HTTP 200 |
### Error Responses
Errors are only returned for problems with the request itself, not with the token:
| Error Code | HTTP Status | Condition |
|---|---|---|
| `invalid_client` | 401 | Client authentication failed. |
| `invalid_request` | 400 | The `token` parameter is missing. |
```json
{
"error": "invalid_client",
"error_description": "Client authentication failed"
}
```
## Effect of Revocation
Once a token is revoked:
- **Access tokens**: Any subsequent request to a protected resource (such as the [UserInfo endpoint](./userinfo.md)) using the revoked token will be rejected with HTTP 401.
- **Refresh tokens**: Any attempt to use the revoked refresh token at the [token endpoint](./token-endpoint.md) will be rejected with `invalid_grant`.
Revocation is immediate. There is no grace period or propagation delay.
> **Note**: Revoking an access token does **not** automatically revoke the associated refresh token, and vice versa. If you need to invalidate both, send two separate revocation requests.
## When to Revoke Tokens
Common scenarios where token revocation is appropriate:
- **User logout**: Revoke the access token and refresh token when the user explicitly signs out.
- **Session termination**: When an administrator terminates a user's session.
- **Security incident**: If a token may have been compromised, revoke it immediately.
- **Application uninstall**: When a user removes or disconnects a client application.
## Related
- [Token Endpoint](./token-endpoint.md) -- obtaining tokens.
- [Refresh Token Grant](./grant-refresh-token.md) -- refresh token rotation also revokes old tokens.
- [Client Authentication](./client-authentication.md) -- authenticating revocation requests.
- [Discovery](./discovery-jwks.md) -- the revocation endpoint is advertised in the discovery document.

# UserInfo Endpoint
The UserInfo endpoint returns claims about the authenticated user. It is an OAuth 2.0 protected resource that requires a valid access token obtained through the [token endpoint](./token-endpoint.md).
## Endpoint
```
GET /userinfo
Authorization: Bearer <access_token>
```
## Authentication
The access token must be provided as a Bearer token in the `Authorization` header per [RFC 6750](https://datatracker.ietf.org/doc/html/rfc6750):
```
Authorization: Bearer VGhpcyBpcyBhbiBleGFtcGxlIGFjY2VzcyB0b2tlbg
```
The token must be:
- **Valid** -- recognized by Barycenter as a previously issued token.
- **Not expired** -- within its 1-hour TTL.
- **Not revoked** -- not flagged as revoked in the database.
## Example Request
```bash
curl -X GET https://idp.example.com/userinfo \
-H "Authorization: Bearer VGhpcyBpcyBhbiBleGFtcGxlIGFjY2VzcyB0b2tlbg"
```
## Response
The response is a JSON object containing claims about the user. The claims returned depend on the scopes that were authorized during the original authorization request.
```json
{
"sub": "550e8400-e29b-41d4-a716-446655440000",
"preferred_username": "alice",
"name": "Alice Johnson",
"given_name": "Alice",
"family_name": "Johnson",
"email": "alice@example.com",
"email_verified": true
}
```
## Scope-Based Claims
The set of claims returned is determined by the scopes granted to the access token.
### `openid` (required)
The `openid` scope is mandatory for all OIDC requests. It grants access to the subject identifier.
| Claim | Type | Description |
|---|---|---|
| `sub` | string | Subject identifier. A unique, stable identifier for the user. Always present. |
### `profile`
The `profile` scope grants access to the user's profile information. Only claims that have a value stored for the user are included in the response.
| Claim | Type | Description |
|---|---|---|
| `preferred_username` | string | Short name the user prefers. **Defaults to the login username** if not explicitly set. |
| `name` | string | Full display name of the user. |
| `given_name` | string | First name / given name. |
| `family_name` | string | Last name / surname / family name. |
| `nickname` | string | Casual name or alias. |
| `picture` | string | URL of the user's profile picture. |
| `profile` | string | URL of the user's profile page. |
| `website` | string | URL of the user's website or blog. |
| `gender` | string | Gender of the user (e.g., `"female"`, `"male"`, or other values). |
| `birthdate` | string | Birthday in `YYYY-MM-DD` format (or `YYYY` for year only). |
| `zoneinfo` | string | Time zone from the [IANA Time Zone Database](https://www.iana.org/time-zones) (e.g., `"Europe/Zurich"`). |
| `locale` | string | Locale as a BCP47 language tag (e.g., `"en-US"`, `"de-CH"`). |
| `updated_at` | number | Unix timestamp of when the profile was last updated. |
### `email`
The `email` scope grants access to the user's email address and verification status.
| Claim | Type | Description |
|---|---|---|
| `email` | string | The user's email address. Falls back to the `email` field on the user record if not set as a property. |
| `email_verified` | boolean | Whether the email address has been verified. Falls back to the `email_verified` field on the user record. |
### Summary Table
| Scope | Claims Returned |
|---|---|
| `openid` | `sub` |
| `openid profile` | `sub`, `preferred_username`, `name`, `given_name`, `family_name`, ... (all profile claims that have values) |
| `openid email` | `sub`, `email`, `email_verified` |
| `openid profile email` | `sub`, all profile claims, `email`, `email_verified` |
> **Note**: Claims are only included in the response if values exist for the user. For example, if a user has no `picture` stored, that claim will be absent from the response even if the `profile` scope was granted. The exception is `preferred_username`, which always falls back to the login username.
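The scope gating and fallback rules above can be sketched as follows. This is a simplified illustration, not Barycenter's actual implementation; the user record and the properties store are stubbed as plain dicts:

```python
# All claims unlocked by the `profile` scope, per OIDC Core standard claims.
PROFILE_CLAIMS = [
    "preferred_username", "name", "given_name", "family_name", "nickname",
    "picture", "profile", "website", "gender", "birthdate", "zoneinfo",
    "locale", "updated_at",
]

def gather_claims(scopes, user, properties):
    """Return userinfo claims gated by the token's granted scopes.

    `user` is a dict with `sub`, `username`, `email`, `email_verified`;
    `properties` is the per-user key-value claim store.
    """
    claims = {"sub": user["sub"]}  # always present with the openid scope
    if "profile" in scopes:
        for name in PROFILE_CLAIMS:
            if name in properties:  # absent properties are simply omitted
                claims[name] = properties[name]
        # preferred_username always falls back to the login username
        claims.setdefault("preferred_username", user["username"])
    if "email" in scopes:
        # property values win; otherwise fall back to the user record fields
        claims["email"] = properties.get("email", user["email"])
        claims["email_verified"] = properties.get("email_verified", user["email_verified"])
    return claims
```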
## Setting User Claims
User claims are stored in the **properties table** as key-value pairs. They can be set in two ways:
### Via User Sync (JSON file)
Include claims in the `properties` field of the user definition:
```json
{
"users": [
{
"username": "alice",
"email": "alice@example.com",
"password": "secure-password",
"properties": {
"name": "Alice Johnson",
"given_name": "Alice",
"family_name": "Johnson",
"preferred_username": "alice",
"picture": "https://example.com/photos/alice.jpg",
"locale": "en-US",
"zoneinfo": "America/New_York"
}
}
]
}
```
### Via Properties API
```bash
# Set a single property
curl -X PUT https://idp.example.com/properties/<subject>/name \
-H "Content-Type: application/json" \
-d '"Alice Johnson"'
```
## ID Token Claims
The same scope-gated claims are also included in the **ID Token** (JWT) when the corresponding scopes are requested. This means clients can access profile and email claims directly from the ID token without making a separate call to the userinfo endpoint.
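To inspect these claims during development, the ID Token payload can be decoded without verifying the signature. A debugging sketch; production clients must first verify the JWT signature against the provider's JWKS:

```python
import base64
import json

def decode_jwt_payload(token):
    """Decode a JWT's payload segment WITHOUT verifying the signature.

    Useful for inspecting claims while debugging; never trust the result
    until the signature has been checked against the provider's JWKS.
    """
    payload_b64 = token.split(".")[1]
    # restore the base64url padding that JWS strips
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```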
## Error Responses
### Missing or Invalid Token
If no token is provided or the token is malformed:
```
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Bearer error="invalid_token", error_description="No access token provided"
```
### Expired Token
```
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Bearer error="invalid_token", error_description="The access token has expired"
```
### Revoked Token
```
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Bearer error="invalid_token", error_description="The access token has been revoked"
```
### Insufficient Scope
If the token does not have the `openid` scope:
```
HTTP/1.1 403 Forbidden
WWW-Authenticate: Bearer error="insufficient_scope", scope="openid"
```
## Relationship to the ID Token
Both the ID Token and the UserInfo endpoint provide identity claims, but they serve different purposes:
| Aspect | ID Token | UserInfo Endpoint |
|---|---|---|
| **When obtained** | At token exchange time | On-demand, any time the access token is valid |
| **Format** | Signed JWT (verifiable offline) | Plain JSON (requires server call) |
| **Freshness** | Snapshot at authentication time | Current values from the database |
| **Use case** | Authentication proof for the client | Retrieving up-to-date profile information |
The `sub` claim is guaranteed to be consistent between the ID Token and the UserInfo response for the same user.
## Related
- [Token Endpoint](./token-endpoint.md) -- obtaining the access token.
- [ID Token](./id-token.md) -- claims available in the JWT.
- [Authorization Code Flow](./authorization-code-flow.md) -- requesting specific scopes.

# API Endpoints
Barycenter exposes three independent HTTP servers, each serving a distinct purpose. The public server handles all OIDC and OAuth 2.0 protocol traffic. The admin server provides GraphQL-based management interfaces. The optional authorization policy server evaluates permission checks.
## Public Server
**Default port:** `8080` (configurable via `server.port`)
### Discovery and Registration
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `GET` | `/.well-known/openid-configuration` | None | Returns OpenID Provider metadata as JSON. Includes all supported endpoints, response types, signing algorithms, and grant types. |
| `GET` | `/.well-known/jwks.json` | None | Returns the JSON Web Key Set containing the provider's public signing keys. Clients use these keys to verify ID Token signatures. |
| `POST` | `/connect/register` | None | Dynamic client registration per [RFC 7591](https://www.rfc-editor.org/rfc/rfc7591). Accepts a JSON body with client metadata and returns credentials. |
#### `POST /connect/register`
**Request:**
```http
POST /connect/register HTTP/1.1
Content-Type: application/json

{
"redirect_uris": ["https://app.example.com/callback"],
"client_name": "My Application"
}
```
**Response:**
```http
HTTP/1.1 201 Created
Content-Type: application/json

{
"client_id": "...",
"client_secret": "...",
"redirect_uris": ["https://app.example.com/callback"],
"client_name": "My Application"
}
```
---
### OAuth 2.0 / OIDC Protocol
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `GET` | `/authorize` | None | Authorization endpoint. Initiates the Authorization Code flow with PKCE. Redirects the user to login if no session exists. |
| `POST` | `/token` | Client auth | Token endpoint. Exchanges authorization codes, refresh tokens, or device codes for access tokens and ID tokens. |
| `POST` | `/revoke` | Client auth | Token revocation endpoint. Invalidates an access token or refresh token. |
| `GET` | `/userinfo` | Bearer token | UserInfo endpoint. Returns claims about the authenticated user based on the granted scopes. |
#### `GET /authorize`
Initiates the authorization flow. For the `code` response type, PKCE with the S256 challenge method is required; the implicit response types (`id_token` and `id_token token`) are also supported.
**Query Parameters:**
| Parameter | Required | Description |
|-----------|----------|-------------|
| `client_id` | Yes | The registered client identifier. |
| `redirect_uri` | Yes | Must match a URI registered for the client. |
| `response_type` | Yes | `code`, `id_token`, or `id_token token`. |
| `scope` | Yes | Space-separated scopes. Must include `openid`. |
| `code_challenge` | Yes (for `code`) | Base64url-encoded SHA-256 hash of the code verifier. |
| `code_challenge_method` | Yes (for `code`) | Must be `S256`. Plain is rejected. |
| `state` | Recommended | Opaque value returned in the redirect for CSRF protection. |
| `nonce` | Optional | Opaque value included in the ID Token for replay protection. |
| `max_age` | Optional | Maximum authentication age in seconds. Values below 300 trigger 2FA. |
| `prompt` | Optional | `none`, `login`, or `consent`. Controls the authentication UX. |
**Success Response:** HTTP 302 redirect to `redirect_uri` with `code` and `state` query parameters.
**Error Response:** HTTP 302 redirect to `redirect_uri` with `error`, `error_description`, and `state` query parameters.
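A client can generate the `code_verifier`/`code_challenge` pair required by the S256 rule above as follows (a sketch per RFC 7636; `make_pkce_pair` is an illustrative helper, not part of Barycenter):

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    # 32 random bytes -> 43-char base64url verifier, within the 43..128 limit
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge
```

Send the `challenge` in the `/authorize` request and keep the `verifier` to present later as `code_verifier` at the token endpoint.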
#### `POST /token`
**Client Authentication Methods:**
- **`client_secret_basic`**: HTTP Basic authentication with `client_id:client_secret` base64-encoded in the `Authorization` header.
- **`client_secret_post`**: `client_id` and `client_secret` sent as form parameters in the request body.
**Grant Type: `authorization_code`**
```http
POST /token HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Authorization: Basic base64(client_id:client_secret)

grant_type=authorization_code
&code=AUTH_CODE
&redirect_uri=https://app.example.com/callback
&code_verifier=PKCE_VERIFIER
```
**Grant Type: `refresh_token`**
```http
POST /token HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Authorization: Basic base64(client_id:client_secret)

grant_type=refresh_token
&refresh_token=REFRESH_TOKEN
```
**Grant Type: `urn:ietf:params:oauth:grant-type:device_code`**
```http
POST /token HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Authorization: Basic base64(client_id:client_secret)

grant_type=urn:ietf:params:oauth:grant-type:device_code
&device_code=DEVICE_CODE
```
**Success Response:**
```json
{
"access_token": "...",
"token_type": "Bearer",
"expires_in": 3600,
"refresh_token": "...",
"id_token": "..."
}
```
#### `POST /revoke`
Revokes an access token or refresh token. The request uses the same client authentication methods as the token endpoint.
```http
POST /revoke HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Authorization: Basic base64(client_id:client_secret)

token=TOKEN_VALUE
&token_type_hint=access_token
```
Returns HTTP 200 with an empty body on success, regardless of whether the token was valid.
#### `GET /userinfo`
```http
GET /userinfo HTTP/1.1
Authorization: Bearer ACCESS_TOKEN
```
**Response:**
```json
{
"sub": "user-subject-uuid",
"name": "alice",
"email": "alice@example.com"
}
```
---
### Device Authorization Grant
Implements [RFC 8628](https://www.rfc-editor.org/rfc/rfc8628) for input-constrained devices such as smart TVs and CLI tools.
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `POST` | `/device_authorization` | None | Initiates the device flow. Returns a `device_code`, `user_code`, and `verification_uri`. |
| `GET` | `/device` | None | Renders the device verification page where the user enters the `user_code`. |
| `POST` | `/device/verify` | Session | Verifies the user code and associates it with the authenticated session. |
| `POST` | `/device/consent` | Session | Records the user's consent decision for the device flow. |
#### `POST /device_authorization`
```http
POST /device_authorization HTTP/1.1
Content-Type: application/x-www-form-urlencoded

client_id=CLIENT_ID
&scope=openid profile
```
**Response:**
```json
{
"device_code": "...",
"user_code": "ABCD-1234",
"verification_uri": "https://idp.example.com/device",
"verification_uri_complete": "https://idp.example.com/device?user_code=ABCD-1234",
"expires_in": 600,
"interval": 5
}
```
The client polls `POST /token` with `grant_type=urn:ietf:params:oauth:grant-type:device_code` at the specified `interval` until the user completes verification.
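The polling loop, including the `authorization_pending` and `slow_down` handling listed in the token endpoint's error table, might look like this sketch. `request_token` is a caller-supplied function that performs one `POST /token` request with the device-code grant type and returns the parsed JSON response:

```python
import time

def poll_for_token(request_token, interval=5, timeout=600):
    """Poll the token endpoint for a device-flow result (RFC 8628)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        response = request_token()
        error = response.get("error")
        if error is None:
            return response  # token response: the user approved the device
        if error == "authorization_pending":
            pass  # user has not finished verification yet; keep polling
        elif error == "slow_down":
            interval += 5  # server asked the client to back off
        else:
            # expired_token, access_denied, etc. are terminal
            raise RuntimeError(f"device flow failed: {error}")
        time.sleep(interval)
    raise TimeoutError("device code expired before the user approved")
```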
---
### Authentication
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `GET` | `/login` | None | Renders the login page with passkey autofill and password fallback. |
| `POST` | `/login` | None | Processes password authentication. Creates a session on success. |
| `GET` | `/login/2fa` | Session (partial) | Renders the second-factor authentication page. |
| `GET` | `/logout` | Session | Ends the user session and clears the session cookie. |
| `POST` | `/register` | None | Public user self-registration. Only available when `server.allow_public_registration` is `true`. |
#### Consent
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `GET` | `/consent` | Session | Renders the consent page showing requested scopes and client information. |
| `POST` | `/consent` | Session | Records the user's consent decision. On approval, redirects back to the authorization flow. |
---
### WebAuthn / Passkey
All WebAuthn endpoints exchange JSON payloads conforming to the [Web Authentication API](https://www.w3.org/TR/webauthn-3/).
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `POST` | `/webauthn/register/start` | Session | Begins passkey registration. Returns `PublicKeyCredentialCreationOptions`. |
| `POST` | `/webauthn/register/finish` | Session | Completes passkey registration with the authenticator response. |
| `POST` | `/webauthn/authenticate/start` | None | Begins passkey authentication. Returns `PublicKeyCredentialRequestOptions`. |
| `POST` | `/webauthn/authenticate/finish` | None | Completes passkey authentication and creates a session. |
| `POST` | `/webauthn/2fa/start` | Session (partial) | Begins second-factor passkey verification. Requires a partial session from password login. |
| `POST` | `/webauthn/2fa/finish` | Session (partial) | Completes second-factor verification. Upgrades the session to `mfa_verified=1`. |
---
### Passkey Management
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `GET` | `/account/passkeys` | Session | Lists all passkeys registered to the authenticated user. |
| `DELETE` | `/account/passkeys/{credential_id}` | Session | Deletes a specific passkey by its credential ID. |
| `PATCH` | `/account/passkeys/{credential_id}` | Session | Updates the friendly name of a passkey. |
#### `PATCH /account/passkeys/{credential_id}`
```http
PATCH /account/passkeys/abc123 HTTP/1.1
Content-Type: application/json

{
"friendly_name": "YubiKey 5 NFC"
}
```
---
### Properties
A simple key-value store for arbitrary metadata associated with an owner.
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `GET` | `/properties/{owner}/{key}` | None | Returns the value for the given owner and key. |
| `PUT` | `/properties/{owner}/{key}` | None | Creates or updates the value for the given owner and key. |
---
### Federation
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `GET` | `/federation/trust-anchors` | None | Returns the list of configured OpenID Federation trust anchor URLs. |
---
## Admin Server
**Default port:** `8081` (main port + 1, configurable via `server.admin_port`)
The admin server provides two independent GraphQL APIs served on the same port. Neither requires authentication; restrict access at the network level.
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `POST` | `/admin/graphql` | None | Seaography entity CRUD API. Supports queries and mutations on all database entities. |
| `GET` | `/admin/playground` | None | GraphiQL interactive explorer for the entity CRUD API. |
| `POST` | `/admin/jobs` | None | Job management GraphQL API. Trigger background jobs and query execution history. |
| `GET` | `/admin/jobs/playground` | None | GraphiQL interactive explorer for the job management API. |
### Entity CRUD API (`/admin/graphql`)
The Seaography-generated API provides full CRUD operations on all database entities. Use the GraphiQL playground at `/admin/playground` to explore available queries and mutations.
### Job Management API (`/admin/jobs`)
**Available Jobs:**
| Job Name | Schedule | Description |
|----------|----------|-------------|
| `cleanup_expired_sessions` | Hourly at :00 | Removes sessions past their `expires_at` timestamp. |
| `cleanup_expired_refresh_tokens` | Hourly at :30 | Removes expired and revoked refresh tokens. |
| `cleanup_expired_challenges` | Every 5 minutes | Removes expired WebAuthn challenge records. |
**Example: Trigger a Job**
```graphql
mutation {
triggerJob(jobName: "cleanup_expired_sessions") {
success
message
}
}
```
**Example: Query Job History**
```graphql
query {
jobLogs(limit: 10, onlyFailures: false) {
id
jobName
startedAt
completedAt
success
recordsProcessed
}
}
```
**Example: Query User 2FA Status**
```graphql
query {
user2faStatus(username: "alice") {
username
requires2fa
passkeyEnrolled
passkeyCount
passkeyEnrolledAt
}
}
```
**Example: Enforce 2FA for a User**
```graphql
mutation {
setUser2faRequired(username: "alice", required: true) {
success
message
requires2fa
}
}
```
---
## Authorization Policy Server
**Default port:** `8082` (main port + 2, configurable via `authz.port`)
This server is only started when `authz.enabled = true`. It provides a Relationship-Based Access Control (ReBAC) evaluation API.
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `POST` | `/v1/check` | None | Evaluates whether a subject has a specific permission on a resource. |
| `POST` | `/v1/expand` | None | Expands a permission to enumerate all subjects that hold it on a resource. |
| `GET` | `/healthz` | None | Returns HTTP 200 if the authorization server is healthy. |
#### `POST /v1/check`
```http
POST /v1/check HTTP/1.1
Content-Type: application/json

{
"namespace": "documents",
"object": "doc:readme",
"relation": "viewer",
"subject": "user:alice"
}
```
**Response:**
```json
{
"allowed": true
}
```
#### `POST /v1/expand`
```http
POST /v1/expand HTTP/1.1
Content-Type: application/json

{
"namespace": "documents",
"object": "doc:readme",
"relation": "viewer"
}
```
**Response:**
```json
{
"subjects": ["user:alice", "user:bob", "group:engineering#member"]
}
```
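A caller might construct the check request like this (an illustrative sketch using only the Python standard library; `build_check_request` is not part of Barycenter):

```python
import json
import urllib.request

def build_check_request(base_url, namespace, obj, relation, subject):
    """Build a POST /v1/check request for the ReBAC policy server."""
    body = json.dumps({
        "namespace": namespace,
        "object": obj,
        "relation": relation,
        "subject": subject,
    }).encode()
    return urllib.request.Request(
        f"{base_url}/v1/check",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Sending the request and reading `allowed` from the JSON response body completes the permission check.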
