Introduction
vmctl is a command-line tool for creating, managing, and provisioning virtual machines on Linux (QEMU/KVM) and illumos (Propolis/bhyve). It offers both imperative commands for one-off tasks and a declarative configuration format (VMFile.kdl) for reproducible VM environments.
Why vmctl?
Managing VMs with raw QEMU commands is tedious and error-prone. vmctl handles the plumbing: disk overlays, cloud-init ISOs, SSH key generation, network configuration, and process lifecycle. You describe what you want; vmctl figures out how.
Think of it like this:
| Docker world | vmctl world |
|---|---|
| docker run | vmctl create --start |
| docker-compose.yml | VMFile.kdl |
| docker compose up | vmctl up |
| docker compose down | vmctl down |
A Taste
Create a VMFile.kdl:
vm "dev" {
image-url "https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img"
vcpus 2
memory 2048
cloud-init {
hostname "dev"
}
ssh {
user "ubuntu"
}
provision "shell" {
inline "sudo apt-get update && sudo apt-get install -y build-essential"
}
}
Then:
vmctl up # download image, create VM, boot, provision
vmctl ssh # connect over SSH
vmctl down # shut it down
Platform Support
| Platform | Backend | Status |
|---|---|---|
| Linux | QEMU/KVM | Fully supported |
| illumos | Propolis/bhyve | Experimental |
Project Structure
vmctl is split into two crates:
- vm-manager - Library crate with the hypervisor abstraction, image management, SSH, provisioning, and VMFile parsing.
- vmctl - CLI binary built on top of vm-manager.
Both live in a Cargo workspace under crates/.
Installation
vmctl is built from source using Rust's Cargo build system.
Requirements
- Rust 1.85 or later (edition 2024)
- A working C compiler (for native dependencies like libssh2)
Building from Source
Clone the repository and build the release binary:
git clone https://github.com/user/vm-manager.git
cd vm-manager
cargo build --release -p vmctl
The binary will be at target/release/vmctl. Copy it somewhere in your $PATH:
sudo cp target/release/vmctl /usr/local/bin/
Feature Flags
The vm-manager library crate has one optional feature:
| Feature | Description |
|---|---|
| pure-iso | Use a pure-Rust ISO 9660 generator (isobemak) instead of shelling out to genisoimage/mkisofs. Useful in minimal or containerized environments. |
To build with it:
cargo build --release -p vmctl --features vm-manager/pure-iso
Verify Installation
vmctl --help
You should see the list of available subcommands.
Prerequisites
vmctl requires several system tools depending on the backend and features you use.
Linux (QEMU/KVM)
Required
| Tool | Purpose | Install (Debian/Ubuntu) |
|---|---|---|
| qemu-system-x86_64 | VM hypervisor | sudo apt install qemu-system-x86 |
| qemu-img | Disk image operations | sudo apt install qemu-utils |
| /dev/kvm | Hardware virtualization | Kernel module (usually built-in) |
Cloud-Init ISO Generation (one of)
| Tool | Purpose | Install |
|---|---|---|
| genisoimage | ISO 9660 image creation | sudo apt install genisoimage |
| mkisofs | Alternative ISO tool | sudo apt install mkisofs |
Or build with the pure-iso feature to avoid needing either.
Verify Everything
# QEMU
qemu-system-x86_64 --version
# qemu-img
qemu-img --version
# KVM access
ls -la /dev/kvm
# ISO tools (one of these)
genisoimage --version 2>/dev/null || mkisofs --version 2>/dev/null
# Your user should be in the kvm group
groups | grep -q kvm && echo "kvm: OK" || echo "kvm: add yourself to the kvm group"
If /dev/kvm is not present, enable KVM in your BIOS/UEFI settings (look for "VT-x" or "AMD-V") and ensure the kvm kernel module is loaded:
sudo modprobe kvm
sudo modprobe kvm_intel # or kvm_amd
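If you are unsure whether the CPU supports hardware virtualization at all, check the flags the kernel reports. This is a generic Linux check, not a vmctl command:

```shell
# Count CPU entries advertising hardware virtualization.
# vmx = Intel VT-x, svm = AMD-V. Zero matches means KVM won't work.
grep -Ec 'vmx|svm' /proc/cpuinfo || echo "no virtualization extensions found"
```

A nonzero count means the hardware side is fine and any remaining problem is the kernel module or BIOS setting.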
illumos (Propolis)
For the experimental Propolis backend:
- A running propolis-server instance
- A ZFS pool (default: rpool)
- The nebula-vm zone brand installed
- VNIC networking configured
Quick Start
This guide walks you through creating your first VM in under a minute.
Imperative (One-Off)
Create and start a VM from an Ubuntu cloud image:
vmctl create \
--name demo \
--image-url https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img \
--vcpus 2 \
--memory 2048 \
--start
Wait a moment for the image to download and the VM to boot, then connect:
vmctl ssh demo
When you're done:
vmctl destroy demo
Declarative (Reproducible)
Create a VMFile.kdl in your project directory:
vm "demo" {
image-url "https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img"
vcpus 2
memory 2048
cloud-init {
hostname "demo"
}
ssh {
user "ubuntu"
}
}
Bring it up:
vmctl up
vmctl will download the image (cached for future use), create a QCOW2 overlay, generate an Ed25519 SSH keypair, build a cloud-init ISO, and boot the VM.
Connect:
vmctl ssh
Tear it down:
vmctl down
Or destroy it completely (removes all VM files):
vmctl down --destroy
Next Steps
- Concepts: How vmctl Works for an understanding of what happens under the hood.
- Tutorials: Declarative Workflow for a complete walkthrough with provisioning.
- VMFile.kdl Reference for the full configuration format.
How vmctl Works
State Directory
vmctl stores all VM state under $XDG_DATA_HOME/vmctl/ (typically ~/.local/share/vmctl/):
~/.local/share/vmctl/
vms.json # VM registry (name -> handle mapping)
images/ # Downloaded image cache
vms/
<vm-name>/ # Per-VM working directory
overlay.qcow2 # Copy-on-write disk overlay
seed.iso # Cloud-init NoCloud ISO
qmp.sock # QEMU Machine Protocol socket
console.sock # Serial console socket
console.log # Boot/cloud-init log
provision.log # Provisioning output log
id_ed25519_generated # Auto-generated SSH private key
id_ed25519_generated.pub # Auto-generated SSH public key
pidfile # QEMU process PID
QCOW2 Overlays
vmctl never modifies the base image directly. Instead, it creates a QCOW2 copy-on-write overlay on top of the original. This means:
- Multiple VMs can share the same base image.
- The base image stays clean in the cache.
- Destroying a VM just deletes the overlay.
If you specify disk in your VMFile, the overlay is resized to that size and the guest filesystem can be grown.
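Conceptually, the overlay is what you would get by running qemu-img by hand. This is a sketch, not vmctl's literal invocation, and the paths are illustrative:

```shell
# Create a 20G copy-on-write overlay backed by a cached base image.
# All writes go to overlay.qcow2; the base image is never modified.
qemu-img create -f qcow2 \
  -b ~/.local/share/vmctl/images/noble-server-cloudimg-amd64.img -F qcow2 \
  overlay.qcow2 20G

# Inspect the backing chain to see the overlay -> base relationship
qemu-img info --backing-chain overlay.qcow2
```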
RouterHypervisor
All hypervisor operations go through a RouterHypervisor that dispatches to the appropriate backend based on the platform:
- Linux -> QemuBackend
- illumos -> PropolisBackend
- Testing -> NoopBackend
Each backend implements the same Hypervisor trait, so the CLI code is platform-agnostic.
The Up Flow
When you run vmctl up, the following happens for each VM defined in VMFile.kdl:
- Parse - Read and validate the VMFile.
- Resolve - Download images (if URL), generate SSH keys (if cloud-init enabled), resolve paths.
- Prepare - Create work directory, QCOW2 overlay, cloud-init seed ISO, allocate MAC address and SSH port.
- Start - Launch QEMU with the correct arguments, wait for QMP socket.
- Provision - Wait for SSH to become available (up to 120 seconds), then run each provisioner in order.
If a VM is already running, vmctl up skips it. If it's stopped, it restarts and re-provisions.
Imperative vs Declarative
vmctl supports two workflows for managing VMs.
Imperative
Use individual commands to create, configure, and manage VMs step by step:
vmctl create --name myvm --image-url https://example.com/image.img --vcpus 2 --memory 2048 --start
vmctl ssh myvm
vmctl stop myvm
vmctl destroy myvm
This is useful for:
- Quick one-off VMs
- Experimenting with different images
- Scripting custom workflows
Declarative
Define your VMs in a VMFile.kdl and let vmctl converge to the desired state:
vm "myvm" {
image-url "https://example.com/image.img"
vcpus 2
memory 2048
cloud-init {
hostname "myvm"
}
ssh {
user "ubuntu"
}
provision "shell" {
inline "echo hello"
}
}
vmctl up # create + start + provision
vmctl down # stop
vmctl reload # destroy + recreate + provision
vmctl provision # re-run provisioners only
This is useful for:
- Reproducible development environments
- Multi-VM setups
- Checked-in VM definitions alongside your project
- Complex provisioning workflows
When to Use Which
| Scenario | Approach |
|---|---|
| "I need a quick VM to test something" | Imperative |
| "My project needs a build VM with specific packages" | Declarative |
| "I want to script VM lifecycle in CI" | Either, depending on complexity |
| "Multiple VMs that work together" | Declarative |
VM Lifecycle
Every VM in vmctl moves through a set of well-defined states.
States
| State | Description |
|---|---|
| Preparing | Backend is allocating resources (overlay, ISO, sockets) |
| Prepared | Resources allocated, ready to boot |
| Running | VM is booted and executing |
| Suspended | vCPUs paused; VM state held in memory |
| Stopped | VM has been shut down (gracefully or forcibly) |
| Failed | An error occurred during a lifecycle operation |
| Destroyed | VM and all its resources have been cleaned up |
Transitions
[new] ── prepare() ──> Prepared ── start() ──> Running

Running ── suspend() ──> Suspended ── resume() ──> Running
Running ── stop(timeout) ──> Stopped ── start() ──> Running

(any state) ── destroy() ──> Destroyed
Commands and Transitions
| Command | From State | To State |
|---|---|---|
| vmctl create | (none) | Prepared |
| vmctl start | Prepared, Stopped | Running |
| vmctl stop | Running | Stopped |
| vmctl suspend | Running | Suspended (paused vCPUs) |
| vmctl resume | Suspended | Running |
| vmctl destroy | Any | Destroyed |
| vmctl up | (none), Stopped | Running (auto-creates if needed) |
| vmctl down | Running | Stopped |
| vmctl reload | Any | Running (destroys + recreates) |
Graceful Shutdown
vmctl stop sends an ACPI power-down signal via QMP. If the guest doesn't shut down within the timeout (default 30 seconds), vmctl sends SIGTERM, and finally SIGKILL as a last resort.
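The QMP exchange behind a graceful stop is small. QMP requires a qmp_capabilities handshake before any other command is accepted, and system_powerdown is the standard ACPI power-button event:

```shell
# The two QMP messages sent for a graceful stop, printed as JSON lines.
printf '%s\n' \
  '{"execute": "qmp_capabilities"}' \
  '{"execute": "system_powerdown"}'
# To deliver them by hand, you could pipe the output into a tool like
# socat pointed at the VM's qmp.sock (vmctl speaks QMP over the socket directly).
```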
Networking Modes
vmctl supports several networking modes depending on your needs and permissions.
User Mode (SLIRP) - Default
network "user"
QEMU's built-in user-mode networking. No root or special permissions required.
How it works:
- QEMU emulates a full TCP/IP stack in userspace.
- The guest gets a private IP (typically 10.0.2.x).
- Outbound connections from the guest are NAT'd through the host.
- SSH access is provided via host port forwarding (ports 10022-10122, deterministically assigned per VM name).
Pros: Zero setup, no root needed. Cons: No inbound connections (except forwarded ports), lower performance than TAP.
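"Deterministically assigned per VM name" means hashing the name into the port range. The exact algorithm vmctl uses is not documented here, so treat this as a hypothetical sketch of the idea:

```shell
# Hypothetical: map a VM name to a stable SSH port in 10022-10122.
# cksum is just one possible stable hash; vmctl's actual scheme may differ.
name="demo"
hash=$(printf '%s' "$name" | cksum | cut -d' ' -f1)
port=$((10022 + hash % 101))   # 101 slots: 10022..10122 inclusive
echo "$port"
```

The useful property is that the same name always yields the same port, so `ssh -p` invocations stay stable across VM rebuilds.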
TAP Mode
network "tap" {
bridge "br0"
}
Creates a TAP device and attaches it to a host bridge. The guest appears as a real machine on the bridge's network.
How it works:
- vmctl creates a TAP interface and bridges it.
- The guest gets an IP via DHCP from whatever serves the bridge network.
- Full Layer 2 connectivity.
Pros: Real network presence, full inbound/outbound, better performance. Cons: Requires bridge setup, may need root or appropriate capabilities.
If no bridge name is specified, it defaults to br0.
VNIC Mode (illumos only)
network "vnic" {
name "vnic0"
}
Uses an illumos VNIC for exclusive-IP zone networking. Only available on the Propolis backend.
None
network "none"
No networking at all. Useful for isolated compute tasks or testing.
IP Discovery
vmctl discovers the guest IP differently depending on the network mode:
| Mode | IP Discovery Method |
|---|---|
| User | Returns 127.0.0.1 (SSH via forwarded port) |
| TAP | Parses ARP table (ip neigh show), falls back to dnsmasq lease files by MAC address |
| VNIC | Zone-based discovery |
| None | Not available |
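For TAP mode, the ARP-table lookup amounts to matching the VM's MAC in the neighbor table. Sample input is shown here; in practice the real output of ip neigh show is piped in:

```shell
# Extract the IP for a known guest MAC from `ip neigh show`-style output.
sample='192.168.1.50 dev br0 lladdr 52:54:00:12:34:56 REACHABLE'
printf '%s\n' "$sample" |
  awk -v mac='52:54:00:12:34:56' 'tolower($0) ~ mac { print $1 }'
```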
Image Management
vmctl can work with local disk images or download them from URLs. Downloaded images are cached for reuse.
Image Cache
Downloaded images are stored in ~/.local/share/vmctl/images/. If an image already exists in the cache, it won't be re-downloaded.
Supported Formats
vmctl uses qemu-img to detect and convert image formats. Common formats:
- qcow2 - QEMU's native format, supports snapshots and compression.
- raw - Plain disk image.
The format is auto-detected from the file header.
Zstd Decompression
If a URL ends in .zst or .zstd, vmctl automatically decompresses the image after downloading. This is common for distribution cloud images.
Overlay System
vmctl never boots from the base image directly. Instead:
- The base image is stored in the cache (or at a local path you provide).
- A QCOW2 overlay is created on top, pointing to the base as a backing file.
- All writes go to the overlay. The base stays untouched.
- Destroying a VM just removes the overlay.
This means multiple VMs can share the same base image efficiently.
Disk Resizing
If you specify disk (in GB) in your VMFile or --disk on the CLI, the overlay is created with that size. The guest OS can then grow its filesystem to fill the available space (most cloud images do this automatically via cloud-init's growpart module).
Managing Images with the CLI
# Download an image to the cache
vmctl image pull https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img
# List cached images
vmctl image list
# Inspect a local image
vmctl image inspect ./my-image.qcow2
Cloud-Init and SSH Keys
vmctl uses cloud-init to configure guests on first boot. It generates a NoCloud seed ISO containing user-data and meta-data, which the guest's cloud-init agent picks up automatically.
SSH Key Modes
There are three ways to get SSH access to a VM:
1. Auto-Generated Keypair (Recommended)
When you define a cloud-init block without an explicit ssh-key, vmctl generates a per-VM Ed25519 keypair:
cloud-init {
hostname "myvm"
}
ssh {
user "ubuntu"
}
The keys are stored in the VM's work directory:
- ~/.local/share/vmctl/vms/<name>/id_ed25519_generated (private)
- ~/.local/share/vmctl/vms/<name>/id_ed25519_generated.pub (public)
This is the simplest option. No key management required.
2. Explicit SSH Key
Point to your own public key file:
cloud-init {
ssh-key "~/.ssh/id_ed25519.pub"
}
ssh {
user "ubuntu"
private-key "~/.ssh/id_ed25519"
}
3. Raw User-Data
Provide a complete cloud-config YAML file for full control:
cloud-init {
user-data "./my-cloud-config.yaml"
}
In this mode, you're responsible for setting up SSH access yourself in the user-data.
SSH Key Resolution
When vmctl needs to SSH into a VM (for vmctl ssh or provisioning), it searches for a private key in this order:
1. The generated key in the VM's work directory (id_ed25519_generated)
2. The key specified with the --key flag or private-key in the VMFile ssh block
3. Standard keys in ~/.ssh/: id_ed25519, id_ecdsa, id_rsa
SSH User Resolution
The SSH username is resolved in this order:
1. The --user CLI flag
2. user in the VMFile ssh block
3. Default: "vm"
Cloud-Init User Setup
When vmctl generates the cloud-config, it creates a user with:
- The specified username
- Passwordless sudo access
- The SSH public key in authorized_keys
- Bash as the default shell
- Root login disabled
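Put together, the generated user-data resembles the following cloud-config. This is a sketch of the shape, not vmctl's literal output; the key line stands in for the VM's generated public key:

```yaml
#cloud-config
users:
  - name: ubuntu
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... vmctl-generated
disable_root: true
```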
Creating a VM Imperatively
This tutorial walks through the full lifecycle of a VM using individual vmctl commands.
Create a VM
vmctl create \
--name tutorial \
--image-url https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img \
--vcpus 2 \
--memory 2048 \
--ssh-key ~/.ssh/id_ed25519.pub
This downloads the image (cached for future use), creates a QCOW2 overlay, generates a cloud-init ISO with your SSH key, and registers the VM.
Start It
vmctl start tutorial
Check Status
vmctl list
NAME BACKEND VCPUS MEM NETWORK PID SSH
tutorial qemu 2 2048 user 12345 10042
For detailed info:
vmctl status tutorial
Connect via SSH
vmctl ssh tutorial
vmctl waits for SSH to become available (cloud-init needs a moment to set up the user), then drops you into a shell.
Suspend and Resume
Pause the VM without shutting it down:
vmctl suspend tutorial
Resume it:
vmctl resume tutorial
The VM continues from exactly where it was, no reboot needed.
Stop the VM
vmctl stop tutorial
This sends an ACPI power-down signal. If the guest doesn't shut down within 30 seconds, vmctl sends SIGTERM.
To change the timeout:
vmctl stop tutorial --timeout 60
Restart
A stopped VM can be started again:
vmctl start tutorial
Destroy
When you're done, clean up everything:
vmctl destroy tutorial
This stops the VM (if running), removes the overlay, cloud-init ISO, and all work directory files, and unregisters the VM from the store.
Declarative Workflow with VMFile.kdl
This tutorial shows how to define VMs in a configuration file and manage them with vmctl up/down.
Write a VMFile
Create VMFile.kdl in your project directory:
vm "webserver" {
image-url "https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img"
vcpus 2
memory 2048
disk 20
cloud-init {
hostname "webserver"
}
ssh {
user "ubuntu"
}
provision "shell" {
inline "sudo apt-get update && sudo apt-get install -y nginx"
}
provision "shell" {
inline "echo 'Hello from vmctl!' | sudo tee /var/www/html/index.html"
}
}
Bring It Up
vmctl up
vmctl will:
- Discover VMFile.kdl in the current directory.
- Download the Ubuntu image (or use the cached copy).
- Generate an Ed25519 SSH keypair for this VM.
- Create a QCOW2 overlay with 20GB disk.
- Build a cloud-init ISO with the hostname and generated SSH key.
- Boot the VM.
- Wait for SSH to become available.
- Run the provision steps in order, streaming output to your terminal.
Connect
vmctl ssh
When there's only one VM in the VMFile, you don't need to specify the name.
Make Changes
Edit VMFile.kdl to add another provisioner, then reload:
vmctl reload
This destroys the existing VM and recreates it from scratch with the updated definition.
To re-run just the provisioners without recreating:
vmctl provision
Bring It Down
Stop the VM:
vmctl down
Or stop and destroy:
vmctl down --destroy
Filtering by Name
If your VMFile defines multiple VMs, use --name to target a specific one:
vmctl up --name webserver
vmctl ssh --name webserver
vmctl down --name webserver
Provisioning
Provisioners run commands and upload files to a VM after it boots. They execute in order and stop on the first failure.
Provision Types
Shell with Inline Command
Execute a command directly on the guest:
provision "shell" {
inline "sudo apt-get update && sudo apt-get install -y curl"
}
Shell with Script File
Upload and execute a local script:
provision "shell" {
script "scripts/setup.sh"
}
The script is uploaded to /tmp/vmctl-provision-<step>.sh on the guest, made executable, and run. Paths are relative to the directory containing VMFile.kdl.
File Upload
Upload a file to the guest via SFTP:
provision "file" {
source "config/nginx.conf"
destination "/tmp/nginx.conf"
}
Execution Details
- Shell provisioners stream stdout/stderr to your terminal in real-time.
- A non-zero exit code aborts the entire provisioning sequence.
- Output is logged to provision.log in the VM's work directory.
- vmctl waits up to 120 seconds for SSH to become available before provisioning starts.
Multi-Stage Example
A common pattern is to combine file uploads with shell commands:
vm "builder" {
image-url "https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img"
vcpus 4
memory 4096
cloud-init {
hostname "builder"
}
ssh {
user "ubuntu"
}
// Stage 1: Install dependencies
provision "shell" {
inline "sudo apt-get update && sudo apt-get install -y build-essential"
}
// Stage 2: Upload source code
provision "file" {
source "src.tar.gz"
destination "/tmp/src.tar.gz"
}
// Stage 3: Build
provision "shell" {
inline "cd /tmp && tar xzf src.tar.gz && cd src && make"
}
}
Re-Running Provisioners
To re-run provisioners on an already-running VM:
vmctl provision
Or for a specific VM:
vmctl provision --name builder
Viewing Provision Logs
vmctl log builder --provision
Real-World: OmniOS Builder VM
This tutorial walks through the real-world VMFile.kdl used in the vm-manager project itself to build software on OmniOS (an illumos distribution).
The Goal
Build a Rust binary (forger) on OmniOS. This requires:
- An OmniOS cloud VM with development tools.
- Uploading the source code.
- Compiling on the guest.
The VMFile
vm "omnios-builder" {
image-url "https://downloads.omnios.org/media/stable/omnios-r151056.cloud.qcow2"
vcpus 4
memory 4096
disk 20
cloud-init {
hostname "omnios-builder"
}
ssh {
user "smithy"
}
provision "shell" {
script "scripts/bootstrap-omnios.sh"
}
provision "file" {
source "scripts/forger-src.tar.gz"
destination "/tmp/forger-src.tar.gz"
}
provision "shell" {
script "scripts/install-forger.sh"
}
}
Stage 1: Bootstrap (bootstrap-omnios.sh)
This script installs system packages and the Rust toolchain:
- Sets up PATH for GNU tools (OmniOS ships BSD-style tools by default).
- Installs gcc14, gnu-make, pkg-config, openssl, curl, git, and other build dependencies via IPS (pkg install).
- Installs Rust via rustup.
- Verifies all tools are available.
Stage 2: Upload Source
The file provisioner uploads a pre-packed tarball of the forger source code. This tarball is created beforehand with:
./scripts/pack-forger.sh
The pack script:
- Copies crates/forger, crates/spec-parser, and images/ into a staging directory.
- Generates a minimal workspace Cargo.toml.
- Includes Cargo.lock for reproducible builds.
- Creates scripts/forger-src.tar.gz.
Stage 3: Build and Install (install-forger.sh)
- Extracts the tarball to $HOME/forger.
- Runs cargo build -p forger --release.
- Copies the binary to /usr/local/bin/forger.
The Full Workflow
# Pack the source on the host
./scripts/pack-forger.sh
# Bring up the VM, provision, and build
vmctl up
# SSH in to test the binary
vmctl ssh
forger --help
# Tear it down when done
vmctl down --destroy
Key Takeaways
- Multi-stage provisioning separates concerns: system setup, source upload, build.
- File provisioners transfer artifacts to the guest.
- Script provisioners are easier to iterate on than inline commands for complex logic.
- Streaming output lets you watch the build progress in real-time.
VMFile.kdl Overview
VMFile.kdl is the declarative configuration format for vmctl. It uses KDL (KDL Document Language), a human-friendly configuration language.
Discovery
vmctl looks for VMFile.kdl in the current directory by default. You can override this with --file:
vmctl up --file path/to/MyVMFile.kdl
Basic Structure
A VMFile contains one or more vm blocks, each defining a virtual machine:
vm "name" {
// image source (required)
// resources
// networking
// cloud-init
// ssh config
// provisioners
}
Path Resolution
All paths in a VMFile are resolved relative to the directory containing the VMFile. Tilde (~) is expanded to the user's home directory.
// Relative to VMFile directory
image "images/ubuntu.qcow2"
// Absolute path
image "/opt/images/ubuntu.qcow2"
// Home directory expansion
cloud-init {
ssh-key "~/.ssh/id_ed25519.pub"
}
Validation
vmctl validates the VMFile on parse and provides detailed error messages with hints:
- VM names must be unique.
- Each VM must have exactly one image source (image or image-url, not both).
- Shell provisioners must have exactly one of inline or script.
- File provisioners must have both source and destination.
- Network type must be "user", "tap", or "none".
VM Block
The vm block is the top-level element in a VMFile. It defines a single virtual machine.
Syntax
vm "name" {
// configuration nodes
}
The name is a required string argument. It must be unique across all vm blocks in the file.
Example
vm "dev-server" {
image-url "https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img"
vcpus 2
memory 2048
disk 20
cloud-init {
hostname "dev-server"
}
ssh {
user "ubuntu"
}
}
Name Requirements
- Must be a non-empty string.
- Must be unique within the VMFile.
- Used as the VM identifier in vmctl list, vmctl ssh, --name filtering, etc.
- Used as the work directory name under ~/.local/share/vmctl/vms/.
Image Sources
Every VM must specify exactly one image source. The two options are mutually exclusive.
Local Image
image "path/to/image.qcow2"
Points to a disk image on the host filesystem. The path is resolved relative to the VMFile directory, with tilde expansion.
The file must exist at parse time. Supported formats are auto-detected by qemu-img (qcow2, raw, etc.).
Remote Image
image-url "https://example.com/image.qcow2"
Downloads the image and caches it in ~/.local/share/vmctl/images/. If the image is already cached, it won't be re-downloaded.
URLs ending in .zst or .zstd are automatically decompressed after download.
Validation
- Exactly one of image or image-url must be specified.
- Specifying both is an error.
- Specifying neither is an error.
Resources
Resource nodes control the VM's CPU, memory, and disk allocation.
vcpus
vcpus 2
Number of virtual CPUs. Must be greater than 0.
Default: 1
memory
memory 2048
Memory in megabytes. Must be greater than 0.
Default: 1024 (1 GB)
disk
disk 20
Disk size in gigabytes. When specified, the QCOW2 overlay is created with this size, allowing the guest to use more space than the base image provides. Most cloud images auto-grow the filesystem via cloud-init.
Default: not set (overlay matches base image size)
Network Block
The network node configures VM networking.
Syntax
network "mode"
// or
network "mode" {
// mode-specific attributes
}
Modes
User (Default)
network "user"
QEMU's SLIRP user-mode networking. No root required. SSH access is via a forwarded host port.
TAP
network "tap"
// or with explicit bridge:
network "tap" {
bridge "br0"
}
TAP device attached to a Linux bridge. The guest appears on the bridge's network with a real IP.
Default bridge: "br0"
None
network "none"
No networking.
Default
If no network node is specified, user-mode networking is used.
Cloud-Init Block
The cloud-init block configures guest initialization via cloud-init's NoCloud datasource.
Syntax
cloud-init {
hostname "myvm"
ssh-key "~/.ssh/id_ed25519.pub"
user-data "path/to/cloud-config.yaml"
}
All fields are optional.
Fields
hostname
hostname "myvm"
Sets the guest hostname via cloud-init metadata.
ssh-key
ssh-key "~/.ssh/id_ed25519.pub"
Path to an SSH public key file. The key is injected into the cloud-config's authorized_keys for the SSH user. Path is resolved relative to the VMFile directory.
user-data
user-data "cloud-config.yaml"
Path to a raw cloud-config YAML file. When this is set, vmctl passes the file contents directly as user-data without generating its own cloud-config. You are responsible for user creation and SSH setup.
Mutually exclusive with ssh-key in practice - if you provide raw user-data, vmctl won't inject any SSH keys.
Auto-Generated SSH Keys
When a cloud-init block is present but neither ssh-key nor user-data is specified, vmctl automatically:
- Generates a per-VM Ed25519 keypair.
- Injects the public key into the cloud-config.
- Stores both keys in the VM's work directory.
This is the recommended approach for most use cases.
SSH Block
The ssh block tells vmctl how to connect to the guest for provisioning and vmctl ssh.
Syntax
ssh {
user "ubuntu"
private-key "~/.ssh/id_ed25519"
}
Fields
user
user "ubuntu"
The SSH username to connect as. This should match the user created by cloud-init.
Default: "vm" (used when the ssh block exists but user is omitted)
private-key
private-key "~/.ssh/id_ed25519"
Path to the SSH private key for authentication. Path is resolved relative to the VMFile directory.
Default: When omitted, vmctl uses the auto-generated key if available, or falls back to standard keys in ~/.ssh/.
When to Include
The ssh block is required if you want to:
- Use vmctl ssh with VMFile-based name inference.
- Run provisioners (they connect via SSH).
If you only use imperative commands and don't need provisioning, the ssh block is optional.
Provision Blocks
Provision blocks define steps to run on the guest after boot. They execute in order and abort on the first failure.
Shell Provisioner
Inline Command
provision "shell" {
inline "sudo apt-get update && sudo apt-get install -y nginx"
}
Executes the command directly on the guest via SSH.
Script File
provision "shell" {
script "scripts/setup.sh"
}
The script file is uploaded to /tmp/vmctl-provision-<step>.sh on the guest, made executable with chmod +x, and executed. The path is resolved relative to the VMFile directory.
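The upload-and-run sequence is roughly what you would do by hand over SSH. This is an illustrative sketch; the key path, port, user, and step index are examples, not vmctl's literal values:

```shell
# Manual equivalent of a script provisioner against a user-mode-networking VM.
KEY=~/.local/share/vmctl/vms/demo/id_ed25519_generated
scp -P 10022 -i "$KEY" scripts/setup.sh ubuntu@127.0.0.1:/tmp/vmctl-provision-0.sh
ssh -p 10022 -i "$KEY" ubuntu@127.0.0.1 \
  'chmod +x /tmp/vmctl-provision-0.sh && /tmp/vmctl-provision-0.sh'
```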
Validation
A shell provisioner must have exactly one of inline or script. Specifying both or neither is an error.
File Provisioner
provision "file" {
source "config/app.conf"
destination "/etc/app/app.conf"
}
Uploads a local file to the guest via SFTP.
Required Fields
| Field | Description |
|---|---|
| source | Local file path (relative to VMFile directory) |
| destination | Absolute path on the guest |
Execution Behavior
- Provisioners run sequentially in the order they appear.
- Shell provisioners stream stdout and stderr to your terminal in real-time.
- A non-zero exit code from any shell provisioner aborts the sequence.
- All output is also logged to provision.log in the VM's work directory.
- vmctl waits up to 120 seconds for SSH to become available before starting provisioners.
Multi-VM Definitions
A VMFile can define multiple VMs. Each vm block is independent.
Example
vm "web" {
image-url "https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img"
vcpus 2
memory 2048
cloud-init {
hostname "web"
}
ssh {
user "ubuntu"
}
provision "shell" {
inline "sudo apt-get update && sudo apt-get install -y nginx"
}
}
vm "db" {
image-url "https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img"
vcpus 2
memory 4096
disk 50
cloud-init {
hostname "db"
}
ssh {
user "ubuntu"
}
provision "shell" {
inline "sudo apt-get update && sudo apt-get install -y postgresql"
}
}
Behavior with Multi-VM
- vmctl up brings up all VMs in order.
- vmctl down stops all VMs.
- vmctl ssh requires --name when multiple VMs are defined (or it will error).
- Use --name with any command to target a specific VM.
Filtering
vmctl up --name web # only bring up "web"
vmctl provision --name db # re-provision only "db"
vmctl down --name web # stop only "web"
Constraints
- VM names must be unique within the file.
- Each VM is fully independent (no shared networking or cross-references).
Full Example
A complete VMFile.kdl demonstrating every available feature:
// Development VM with all options specified
vm "full-example" {
// Image source: URL (auto-cached)
image-url "https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img"
// Resources
vcpus 4
memory 4096
disk 40
// Networking: user-mode (default, no root needed)
network "user"
// Cloud-init guest configuration
cloud-init {
hostname "full-example"
// ssh-key and user-data are omitted, so vmctl auto-generates an Ed25519 keypair
}
// SSH connection settings
ssh {
user "ubuntu"
// private-key is omitted, so vmctl uses the auto-generated key
}
// Provisioners run in order after boot
provision "shell" {
inline "sudo apt-get update && sudo apt-get install -y build-essential curl git"
}
provision "file" {
source "config/bashrc"
destination "/home/ubuntu/.bashrc"
}
provision "shell" {
script "scripts/setup-dev-tools.sh"
}
}
// Second VM demonstrating TAP networking and explicit keys
vm "tap-example" {
image "~/images/debian-12-generic-amd64.qcow2"
vcpus 2
memory 2048
network "tap" {
bridge "br0"
}
cloud-init {
hostname "tap-vm"
ssh-key "~/.ssh/id_ed25519.pub"
}
ssh {
user "debian"
private-key "~/.ssh/id_ed25519"
}
}
// Minimal VM: just an image and defaults
vm "minimal" {
image-url "https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img"
}
What Happens
Running vmctl up with this VMFile:
- full-example: Downloads the Ubuntu image, creates a 40GB overlay, auto-generates SSH keys, boots with 4 vCPUs / 4GB RAM, runs three provisioners.
- tap-example: Uses a local Debian image, sets up TAP networking on br0, injects your existing SSH key.
- minimal: Downloads the same Ubuntu image (cache hit), boots with defaults (1 vCPU, 1GB RAM, user networking), no cloud-init, no provisioning.
Use --name to target specific VMs:
vmctl up --name full-example
vmctl ssh --name tap-example
vmctl down --name minimal
vmctl
The main entry point for the vmctl CLI.
Synopsis
vmctl <COMMAND>
Commands
| Command | Description |
|---|---|
| create | Create a new VM |
| start | Start an existing VM |
| stop | Stop a running VM |
| destroy | Destroy a VM and clean up resources |
| list | List all VMs |
| status | Show detailed VM status |
| console | Attach to serial console |
| ssh | SSH into a VM |
| suspend | Suspend (pause) a running VM |
| resume | Resume a suspended VM |
| image | Manage VM images |
| up | Bring up VMs from VMFile.kdl |
| down | Bring down VMs from VMFile.kdl |
| reload | Destroy and recreate VMs from VMFile.kdl |
| provision | Re-run provisioners from VMFile.kdl |
| log | Show VM logs |
Environment Variables
| Variable | Description |
|---|---|
RUST_LOG | Control log verbosity (e.g., RUST_LOG=debug vmctl up) |
XDG_DATA_HOME | Override data directory (default: ~/.local/share) |
vmctl create
Create a new VM and optionally start it.
Synopsis
vmctl create [OPTIONS] --name <NAME>
Options
| Option | Type | Default | Description |
|---|---|---|---|
--name | string | required | VM name |
--image | path | | Path to a local disk image |
--image-url | string | | URL to download an image from |
--vcpus | integer | 1 | Number of virtual CPUs |
--memory | integer | 1024 | Memory in MB |
--disk | integer | | Disk size in GB (overlay resize) |
--bridge | string | | Bridge name for TAP networking |
--cloud-init | path | | Path to cloud-init user-data file |
--ssh-key | path | | Path to SSH public key file |
--start | flag | false | Start the VM after creation |
Details
One of --image or --image-url must be provided. If --image-url is given, the image is downloaded and cached.
When --bridge is specified, TAP networking is used. Otherwise, user-mode (SLIRP) networking is used.
When --ssh-key is provided, a cloud-init ISO is generated that injects the public key. The SSH user defaults to "vm".
Examples
# Create from a URL with defaults
vmctl create --name myvm --image-url https://example.com/image.img
# Create with custom resources and start immediately
vmctl create --name myvm \
--image-url https://example.com/image.img \
--vcpus 4 --memory 4096 --disk 40 \
--ssh-key ~/.ssh/id_ed25519.pub \
--start
# Create from local image with TAP networking
vmctl create --name myvm --image ./ubuntu.qcow2 --bridge br0
vmctl start
Start an existing VM.
Synopsis
vmctl start <NAME>
Arguments
| Argument | Description |
|---|---|
NAME | VM name (positional) |
Details
Starts a VM that is in the Prepared or Stopped state. The VM must have been previously created with vmctl create or vmctl up.
Examples
vmctl start myvm
vmctl stop
Stop a running VM.
Synopsis
vmctl stop [OPTIONS] <NAME>
Arguments
| Argument | Description |
|---|---|
NAME | VM name (positional) |
Options
| Option | Type | Default | Description |
|---|---|---|---|
--timeout | integer | 30 | Graceful shutdown timeout in seconds |
Details
Sends an ACPI power-down signal via QMP. If the guest doesn't shut down within the timeout, vmctl sends SIGTERM to the QEMU process, then SIGKILL as a last resort.
Examples
# Stop with default 30-second timeout
vmctl stop myvm
# Give it more time to shut down gracefully
vmctl stop myvm --timeout 120
vmctl destroy
Destroy a VM and clean up all associated resources.
Synopsis
vmctl destroy <NAME>
Arguments
| Argument | Description |
|---|---|
NAME | VM name (positional) |
Details
Stops the VM if it's running, then removes all associated files: QCOW2 overlay, cloud-init ISO, log files, SSH keys, sockets, and the work directory. Unregisters the VM from the store.
This action is irreversible.
Examples
vmctl destroy myvm
See Also
vmctl down (declarative equivalent)
vmctl list
List all registered VMs.
Synopsis
vmctl list
Output
NAME BACKEND VCPUS MEM NETWORK PID SSH
webserver qemu 2 2048 user 12345 10042
database qemu 4 4096 tap 12346 -
| Column | Description |
|---|---|
NAME | VM name |
BACKEND | Hypervisor backend (qemu, propolis, noop) |
VCPUS | Number of virtual CPUs |
MEM | Memory in MB |
NETWORK | Networking mode (user, tap, vnic, none) |
PID | QEMU process PID (or - if not running) |
SSH | SSH host port (or - if not available) |
Examples
vmctl list
vmctl status
Show detailed status of a VM.
Synopsis
vmctl status <NAME>
Arguments
| Argument | Description |
|---|---|
NAME | VM name (positional) |
Output
Displays all known information about the VM:
- Name, ID, Backend, State
- vCPUs, Memory, Disk
- Network configuration (mode, bridge name)
- Work directory path
- Overlay path, Seed ISO path
- PID, VNC address
- SSH port, MAC address
Examples
vmctl status myvm
vmctl console
Attach to a VM's serial console.
Synopsis
vmctl console <NAME>
Arguments
| Argument | Description |
|---|---|
NAME | VM name (positional) |
Details
Connects to the VM's serial console via a Unix socket (QEMU) or WebSocket (Propolis). You'll see the same output as a physical serial port: boot messages, kernel output, and a login prompt.
Press Ctrl+] (0x1d) to detach from the console.
Examples
vmctl console myvm
vmctl ssh
SSH into a VM.
Synopsis
vmctl ssh [OPTIONS] [NAME]
Arguments
| Argument | Description |
|---|---|
NAME | VM name (optional; inferred from VMFile.kdl if only one VM is defined) |
Options
| Option | Type | Description |
|---|---|---|
--user | string | SSH username (overrides VMFile) |
--key | path | Path to SSH private key |
--file | path | Path to VMFile.kdl (for reading ssh user) |
Key Resolution
vmctl searches for a private key in this order:
- Auto-generated key in the VM's work directory (id_ed25519_generated)
- Key specified with --key
- ~/.ssh/id_ed25519
- ~/.ssh/id_ecdsa
- ~/.ssh/id_rsa
User Resolution
- --user CLI flag
- user field in the VMFile's ssh block
- Default: "vm"
Details
vmctl first verifies SSH connectivity using libssh2 (with a 30-second retry timeout), then hands off to the system ssh binary for full interactive terminal support. SSH options StrictHostKeyChecking=no and UserKnownHostsFile=/dev/null are set automatically.
For user-mode networking, vmctl connects to 127.0.0.1 on the forwarded host port. For TAP networking, it discovers the guest IP via ARP.
Examples
# SSH into the only VM in VMFile.kdl
vmctl ssh
# SSH into a specific VM
vmctl ssh myvm
# Override user and key
vmctl ssh myvm --user root --key ~/.ssh/special_key
vmctl suspend
Suspend (pause) a running VM.
Synopsis
vmctl suspend <NAME>
Arguments
| Argument | Description |
|---|---|
NAME | VM name (positional) |
Details
Pauses the VM's vCPUs via QMP. The VM remains in memory but stops executing. Use vmctl resume to continue.
Examples
vmctl suspend myvm
vmctl resume
Resume a suspended VM.
Synopsis
vmctl resume <NAME>
Arguments
| Argument | Description |
|---|---|
NAME | VM name (positional) |
Details
Resumes a VM that was paused with vmctl suspend. The VM continues from exactly where it left off.
Examples
vmctl resume myvm
vmctl image
Manage VM disk images.
Synopsis
vmctl image <SUBCOMMAND>
Subcommands
vmctl image pull
Download an image to the local cache.
vmctl image pull [OPTIONS] <URL>
| Argument/Option | Type | Description |
|---|---|---|
URL | string | URL to download (positional) |
--name | string | Name to save as in the cache |
vmctl image list
List cached images.
vmctl image list
Output:
NAME SIZE PATH
noble-server-cloudimg-amd64.img 0.62 GB /home/user/.local/share/vmctl/images/noble-server-cloudimg-amd64.img
vmctl image inspect
Show image format and details.
vmctl image inspect <PATH>
| Argument | Description |
|---|---|
PATH | Path to image file (positional) |
Examples
# Download and cache an image
vmctl image pull https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img
# List what's cached
vmctl image list
# Check format of a local image
vmctl image inspect ./my-image.qcow2
vmctl up
Bring up VMs defined in VMFile.kdl.
Synopsis
vmctl up [OPTIONS]
Options
| Option | Type | Default | Description |
|---|---|---|---|
--file | path | | Path to VMFile.kdl (auto-discovered if omitted) |
--name | string | | Only bring up a specific VM |
--no-provision | flag | false | Skip provisioning steps |
Details
For each VM in the VMFile:
- If the VM is already running, it is skipped.
- If the VM exists but is stopped, it is restarted and re-provisioned.
- If the VM doesn't exist, it is created, started, and provisioned.
Images are downloaded and cached as needed. SSH keys are auto-generated when cloud-init is configured without an explicit key.
Examples
# Bring up all VMs in ./VMFile.kdl
vmctl up
# Bring up a specific VM
vmctl up --name webserver
# Bring up without provisioning
vmctl up --no-provision
# Use a specific VMFile
vmctl up --file path/to/VMFile.kdl
vmctl down
Bring down VMs defined in VMFile.kdl.
Synopsis
vmctl down [OPTIONS]
Options
| Option | Type | Default | Description |
|---|---|---|---|
--file | path | | Path to VMFile.kdl (auto-discovered if omitted) |
--name | string | | Only bring down a specific VM |
--destroy | flag | false | Destroy VMs instead of just stopping |
Details
Without --destroy, VMs are stopped gracefully (30-second timeout). They can be restarted with vmctl up or vmctl start.
With --destroy, VMs are fully destroyed: all files removed, unregistered from the store. This is irreversible.
Examples
# Stop all VMs in VMFile.kdl
vmctl down
# Stop a specific VM
vmctl down --name webserver
# Destroy all VMs
vmctl down --destroy
vmctl reload
Destroy and recreate VMs from VMFile.kdl.
Synopsis
vmctl reload [OPTIONS]
Options
| Option | Type | Default | Description |
|---|---|---|---|
--file | path | | Path to VMFile.kdl (auto-discovered if omitted) |
--name | string | | Only reload a specific VM |
--no-provision | flag | false | Skip provisioning after reload |
Details
For each VM: destroys the existing instance (if any), then creates, starts, and provisions a fresh VM from the current VMFile definition. Useful when you've changed the VMFile and want a clean slate.
Examples
# Reload all VMs
vmctl reload
# Reload a specific VM
vmctl reload --name webserver
# Reload without provisioning
vmctl reload --no-provision
vmctl provision
Re-run provisioners on running VMs from VMFile.kdl.
Synopsis
vmctl provision [OPTIONS]
Options
| Option | Type | Default | Description |
|---|---|---|---|
--file | path | | Path to VMFile.kdl (auto-discovered if omitted) |
--name | string | | Only provision a specific VM |
Details
Re-runs all provision steps defined in the VMFile on already-running VMs. The VM must be running and have an ssh block in the VMFile.
vmctl waits up to 120 seconds for SSH to become available, then runs each provisioner in sequence, streaming output to the terminal and logging to provision.log.
Useful for iterating on provision scripts without recreating the VM.
Examples
# Re-provision all VMs
vmctl provision
# Re-provision a specific VM
vmctl provision --name builder
vmctl log
Show VM console and provision logs.
Synopsis
vmctl log [OPTIONS] <NAME>
Arguments
| Argument | Description |
|---|---|
NAME | VM name (positional) |
Options
| Option | Type | Default | Description |
|---|---|---|---|
--console | flag | false | Show only console log (boot / cloud-init output) |
--provision | flag | false | Show only provision log |
--tail, -n | integer | 0 | Show the last N lines (0 = all) |
Details
By default (no flags), both console and provision logs are shown. The console log captures serial output (boot messages, cloud-init output). The provision log captures stdout/stderr from provisioner runs.
Log files are located in the VM's work directory:
- console.log - serial console output
- provision.log - provisioning output
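The --tail semantics (last N lines, 0 meaning everything) can be sketched with an illustrative helper; this is not vmctl's actual code:

```rust
/// Return the last `n` lines of a log; `n == 0` means all lines,
/// mirroring the documented `vmctl log --tail` behavior. Illustrative sketch.
fn tail(text: &str, n: usize) -> Vec<&str> {
    let lines: Vec<&str> = text.lines().collect();
    if n == 0 || n >= lines.len() {
        lines
    } else {
        lines[lines.len() - n..].to_vec()
    }
}

fn main() {
    let log = "boot\ncloud-init\nlogin";
    // n = 0 returns everything; n = 2 keeps only the last two lines
    assert_eq!(tail(log, 0), vec!["boot", "cloud-init", "login"]);
    assert_eq!(tail(log, 2), vec!["cloud-init", "login"]);
    println!("tail works");
}
```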
Examples
# Show all logs
vmctl log myvm
# Show only provision output
vmctl log myvm --provision
# Show last 50 lines of console log
vmctl log myvm --console --tail 50
Architecture Overview
vm-manager is structured as a two-crate Cargo workspace.
High-Level Design
┌─────────────────────────────────────────┐
│ vmctl CLI │
│ (crates/vmctl) │
│ │
│ Commands → VMFile parser → Hypervisor │
└──────────────────┬──────────────────────┘
│
┌──────────────────┴──────────────────────┐
│ vm-manager library │
│ (crates/vm-manager) │
│ │
│ ┌─────────────┐ ┌──────────────────┐ │
│ │ Hypervisor │ │ Image Manager │ │
│ │ Trait │ │ │ │
│ └──────┬──────┘ └──────────────────┘ │
│ │ │
│ ┌──────┴──────────────────────┐ │
│ │ RouterHypervisor │ │
│ │ ┌──────┐ ┌────────┐ ┌────┐│ │
│ │ │ QEMU │ │Propolis│ │Noop││ │
│ │ └──────┘ └────────┘ └────┘│ │
│ └─────────────────────────────┘ │
│ │
│ ┌───────────┐ ┌──────────────────┐ │
│ │ SSH │ │ Cloud-Init │ │
│ │ Module │ │ Generator │ │
│ └───────────┘ └──────────────────┘ │
│ │
│ ┌───────────┐ ┌──────────────────┐ │
│ │ Provision │ │ VMFile │ │
│ │ Runner │ │ Parser │ │
│ └───────────┘ └──────────────────┘ │
└─────────────────────────────────────────┘
Async Runtime
vmctl uses Tokio with the multi-threaded runtime. Most operations are async, with one exception: SSH operations use ssh2 (libssh2 bindings), which is blocking. These are wrapped in tokio::task::spawn_blocking to avoid blocking the async executor.
Platform Abstraction
The Hypervisor trait defines a platform-agnostic interface. The RouterHypervisor dispatches calls to the correct backend based on the BackendTag stored in each VmHandle:
- Linux builds include QemuBackend.
- illumos builds include PropolisBackend.
- All platforms include NoopBackend for testing.
Conditional compilation (#[cfg(target_os = ...)]) ensures only the relevant backend is compiled.
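A minimal sketch of the tag-based dispatch pattern with cfg-gated backends. All names here are simplified stand-ins (the real trait methods are async and operate on handles), and the panic stands in for the backend::not_available error:

```rust
// Simplified stand-in for BackendTag; the real enum also has Propolis.
#[derive(Clone, Copy)]
enum BackendTag { Noop, Qemu }

trait Backend {
    fn start(&self, name: &str) -> String;
}

struct NoopBackend;
impl Backend for NoopBackend {
    fn start(&self, name: &str) -> String { format!("noop: {name} started") }
}

#[cfg(target_os = "linux")]
struct QemuBackend;
#[cfg(target_os = "linux")]
impl Backend for QemuBackend {
    fn start(&self, name: &str) -> String { format!("qemu: {name} started") }
}

// Only the backends compiled for this platform exist as fields.
struct Router {
    noop: NoopBackend,
    #[cfg(target_os = "linux")]
    qemu: QemuBackend,
}

impl Router {
    fn new() -> Self {
        Router {
            noop: NoopBackend,
            #[cfg(target_os = "linux")]
            qemu: QemuBackend,
        }
    }

    // Dispatch on the handle's tag; unsupported tags fail at runtime.
    fn start(&self, tag: BackendTag, name: &str) -> String {
        match tag {
            BackendTag::Noop => self.noop.start(name),
            #[cfg(target_os = "linux")]
            BackendTag::Qemu => self.qemu.start(name),
            #[cfg(not(target_os = "linux"))]
            BackendTag::Qemu => panic!("backend not available on this platform"),
        }
    }
}

fn main() {
    let r = Router::new();
    println!("{}", r.start(BackendTag::Noop, "testvm"));
}
```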
Crate Structure
Workspace Layout
vm-manager/
Cargo.toml # Workspace root
crates/
vm-manager/ # Library crate
Cargo.toml
src/
lib.rs # Re-exports
traits.rs # Hypervisor trait, ConsoleEndpoint
types.rs # VmSpec, VmHandle, VmState, NetworkConfig, etc.
error.rs # VmError with miette diagnostics
vmfile.rs # VMFile.kdl parser and resolver
image.rs # ImageManager (download, cache, overlay)
ssh.rs # SSH connect, exec, streaming, upload
provision.rs # Provisioner runner
cloudinit.rs # NoCloud seed ISO generation
backends/
mod.rs # RouterHypervisor
qemu.rs # QEMU/KVM backend (Linux)
qmp.rs # QMP client
propolis.rs # Propolis/bhyve backend (illumos)
noop.rs # No-op backend (testing)
vmctl/ # CLI binary crate
Cargo.toml
src/
main.rs # CLI entry point, clap App
commands/
create.rs # vmctl create
start.rs # vmctl start, suspend, resume
stop.rs # vmctl stop
destroy.rs # vmctl destroy
list.rs # vmctl list
status.rs # vmctl status
console.rs # vmctl console
ssh.rs # vmctl ssh
image.rs # vmctl image (pull, list, inspect)
up.rs # vmctl up
down.rs # vmctl down
reload.rs # vmctl reload
provision_cmd.rs # vmctl provision
log.rs # vmctl log
vm-manager Crate
The library crate. Contains all business logic and can be used as a dependency by other Rust projects.
Public re-exports from lib.rs:
- RouterHypervisor (from backends)
- Hypervisor, ConsoleEndpoint (from traits)
- VmError, Result (from error)
- All types from types: BackendTag, VmSpec, VmHandle, VmState, NetworkConfig, CloudInitConfig, SshConfig
vmctl Crate
The CLI binary. Depends on vm-manager and adds:
- Clap-based argument parsing
- Store persistence (vms.json)
- Terminal I/O (console bridging, log display)
- VMFile discovery and command dispatch
Hypervisor Backends
The Hypervisor Trait
All backends implement the Hypervisor trait defined in crates/vm-manager/src/traits.rs:
pub trait Hypervisor: Send + Sync {
    fn prepare(&self, spec: &VmSpec) -> impl Future<Output = Result<VmHandle>>;
    fn start(&self, vm: &VmHandle) -> impl Future<Output = Result<VmHandle>>;
    fn stop(&self, vm: &VmHandle, timeout: Duration) -> impl Future<Output = Result<VmHandle>>;
    fn suspend(&self, vm: &VmHandle) -> impl Future<Output = Result<VmHandle>>;
    fn resume(&self, vm: &VmHandle) -> impl Future<Output = Result<VmHandle>>;
    fn destroy(&self, vm: VmHandle) -> impl Future<Output = Result<()>>;
    fn state(&self, vm: &VmHandle) -> impl Future<Output = Result<VmState>>;
    fn guest_ip(&self, vm: &VmHandle) -> impl Future<Output = Result<String>>;
    fn console_endpoint(&self, vm: &VmHandle) -> Result<ConsoleEndpoint>;
}
QEMU Backend (Linux)
Located in crates/vm-manager/src/backends/qemu.rs.
Prepare:
- Creates the work directory under ~/.local/share/vmctl/vms/<name>/.
- Creates a QCOW2 overlay on top of the base image.
- Generates cloud-init seed ISO (if configured).
- Allocates a deterministic SSH port (10022-10122 range, hash-based).
- Generates a locally-administered MAC address.
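The deterministic port allocation can be sketched with the standard hasher; the hash function and exact mapping below are illustrative assumptions, not vmctl's actual scheme:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Map a VM name to a stable port in the 10022-10122 range.
/// Hypothetical sketch: vmctl's real hash and mapping may differ.
fn ssh_port_for(name: &str) -> u16 {
    let mut h = DefaultHasher::new();
    name.hash(&mut h);
    // 101 ports: 10022 through 10122 inclusive
    10022 + (h.finish() % 101) as u16
}

fn main() {
    let p = ssh_port_for("myvm");
    // Same name always yields the same port, so restarts reuse it
    assert_eq!(p, ssh_port_for("myvm"));
    assert!((10022..=10122).contains(&p));
    println!("myvm -> port {p}");
}
```

A hash-based port means the forwarded SSH port survives VM restarts without persisting extra state (collisions between names are possible in any scheme with a 101-port range).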
Start:
- Launches qemu-system-x86_64 with KVM acceleration.
- CPU type: host (passthrough).
- Machine type: q35, accel=kvm.
- Devices: virtio-blk for disk, virtio-rng for entropy.
- Console: Unix socket + log file.
- VNC: localhost, auto-port.
- Networking: User-mode (SLIRP with port forwarding) or TAP (bridged).
- Daemonizes with PID file.
- Connects via QMP to verify startup and retrieve VNC address.
Stop:
- ACPI power-down via QMP (system_powerdown).
- Polls for process exit (500ms intervals) up to the timeout.
- SIGTERM if timeout exceeded.
- SIGKILL as last resort.
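The escalation sequence above can be sketched as a generic loop. The function and callback names are hypothetical; the real implementation is async and signals the actual QEMU process:

```rust
use std::cell::{Cell, RefCell};
use std::time::{Duration, Instant};

/// Poll `is_alive` until `timeout`, escalating ACPI -> SIGTERM -> SIGKILL.
/// Illustrative sketch of the documented stop behavior.
fn stop_with_escalation(
    mut is_alive: impl FnMut() -> bool,
    send_acpi: impl Fn(),
    send_sigterm: impl Fn(),
    send_sigkill: impl Fn(),
    timeout: Duration,
    poll: Duration,
) {
    send_acpi();
    let deadline = Instant::now() + timeout;
    while is_alive() && Instant::now() < deadline {
        std::thread::sleep(poll);
    }
    if is_alive() {
        send_sigterm();
        std::thread::sleep(poll); // brief grace period after SIGTERM
        if is_alive() {
            send_sigkill();
        }
    }
}

fn main() {
    // Simulate a guest that ignores ACPI and SIGTERM.
    let alive = Cell::new(true);
    let log = RefCell::new(Vec::new());
    stop_with_escalation(
        || alive.get(),
        || log.borrow_mut().push("acpi"),
        || log.borrow_mut().push("sigterm"),
        || { log.borrow_mut().push("sigkill"); alive.set(false); },
        Duration::from_millis(20),
        Duration::from_millis(5),
    );
    assert_eq!(*log.borrow(), ["acpi", "sigterm", "sigkill"]);
}
```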
IP Discovery:
- User-mode: returns 127.0.0.1 (SSH via the forwarded port).
- TAP: parses the ARP table (ip neigh show), falling back to dnsmasq lease files keyed by MAC address.
QMP Client
Located in crates/vm-manager/src/backends/qmp.rs. Async JSON-over-Unix-socket client implementing the QEMU Machine Protocol.
Commands: system_powerdown, quit, stop, cont, query_status, query_vnc.
Propolis Backend (illumos)
Located in crates/vm-manager/src/backends/propolis.rs.
- Uses ZFS clones for VM disks.
- Manages zones with the nebula-vm brand.
- Communicates with propolis-server via REST API.
- Networking via illumos VNICs.
- Suspend/resume not yet implemented.
Noop Backend
Located in crates/vm-manager/src/backends/noop.rs. All operations succeed immediately. Used for testing.
RouterHypervisor
Located in crates/vm-manager/src/backends/mod.rs. Dispatches Hypervisor trait calls to the correct backend based on the VmHandle's BackendTag.
Construction:
- RouterHypervisor::new(bridge, zfs_pool) - Platform-aware; creates the appropriate backend.
- RouterHypervisor::noop_only() - Testing mode.
State Management
VM Store
vmctl persists VM state in a JSON file at $XDG_DATA_HOME/vmctl/vms.json (typically ~/.local/share/vmctl/vms.json), falling back to /tmp if no data directory can be determined.
The store is a simple mapping from VM name to VmHandle.
VmHandle Serialization
VmHandle is serialized to JSON with all fields. Fields added in later versions have #[serde(default)] annotations, so older JSON files are deserialized without errors (missing fields get defaults).
Example stored handle:
{
"id": "abc123",
"name": "myvm",
"backend": "qemu",
"work_dir": "/home/user/.local/share/vmctl/vms/myvm",
"overlay_path": "/home/user/.local/share/vmctl/vms/myvm/overlay.qcow2",
"seed_iso_path": "/home/user/.local/share/vmctl/vms/myvm/seed.iso",
"pid": 12345,
"qmp_socket": "/home/user/.local/share/vmctl/vms/myvm/qmp.sock",
"console_socket": "/home/user/.local/share/vmctl/vms/myvm/console.sock",
"vnc_addr": "127.0.0.1:5900",
"vcpus": 2,
"memory_mb": 2048,
"disk_gb": 20,
"network": {"type": "User"},
"ssh_host_port": 10042,
"mac_addr": "52:54:00:ab:cd:ef"
}
Write Safety
The store uses an atomic write pattern:
- Write to a .tmp file.
- Rename (atomic on most filesystems) to the final path.
This prevents corruption if the process is interrupted during a write.
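A minimal sketch of the pattern using only the standard library (`atomic_write` is an illustrative name, not vmctl's API):

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

/// Write `data` to `path` atomically: write a sibling .tmp file,
/// flush it, then rename over the final path. Illustrative sketch.
fn atomic_write(path: &Path, data: &[u8]) -> std::io::Result<()> {
    let tmp = path.with_extension("tmp");
    let mut f = fs::File::create(&tmp)?;
    f.write_all(data)?;
    f.sync_all()?; // ensure bytes hit disk before the rename
    fs::rename(&tmp, path) // atomic on most filesystems
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("vms.json");
    atomic_write(&path, b"{\"vms\":{}}")?;
    assert_eq!(fs::read_to_string(&path)?, "{\"vms\":{}}");
    println!("store written atomically");
    Ok(())
}
```

Readers of the store never observe a half-written file: they see either the old contents or the new, because the rename replaces the path in a single step.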
State vs Process State
The store records the last known state but doesn't actively monitor QEMU processes. When vmctl queries a VM's state, it:
- Checks if the PID file exists.
- Sends kill(pid, 0) to verify the process is alive.
- If alive, queries QMP for detailed status (running, paused, etc.).
- If dead, reports Stopped.
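On Linux, the kill(pid, 0) liveness probe is equivalent to checking for the process's /proc entry. vmctl itself sends signal 0; this /proc variant is an illustrative substitute that avoids unsafe code:

```rust
use std::path::Path;

/// Liveness check equivalent to kill(pid, 0) on Linux:
/// a running process has a directory under /proc.
/// (Illustrative substitute; vmctl sends signal 0 instead.)
fn pid_alive(pid: u32) -> bool {
    Path::new(&format!("/proc/{pid}")).exists()
}

fn main() {
    // Our own PID is certainly alive.
    assert!(pid_alive(std::process::id()));
    println!("pid {} is alive", std::process::id());
}
```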
SSH Subsystem
Library
vmctl uses the ssh2 crate (Rust bindings to libssh2) for SSH operations. The SSH module is at crates/vm-manager/src/ssh.rs.
Core Functions
connect
Establishes a TCP connection and authenticates via public key.
Supports two authentication modes:
- In-memory PEM: Private key stored as a string (used for auto-generated keys).
- File path: Reads key from disk.
exec
Executes a command and collects the full stdout/stderr output. Blocking.
exec_streaming
Executes a command and streams stdout/stderr in real-time to provided writers. Uses non-blocking I/O:
- Opens a channel and calls exec().
- Switches the session to non-blocking mode.
- Polls stdout and stderr in a loop with 8KB buffers.
- Flushes output after each read.
- Sleeps 50ms when no data is available.
- Switches back to blocking mode to read the exit status.
This is used by the provisioner to show build output live.
upload
Transfers a file to the guest via SFTP. Creates the SFTP subsystem, opens a remote file, and writes the local file contents.
connect_with_retry
Attempts to connect repeatedly until a timeout (typically 120 seconds for provisioning, 30 seconds for vmctl ssh). Uses exponential backoff starting at 1 second, capped at 5 seconds. Runs the blocking connect on tokio::task::spawn_blocking.
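The retry loop generalizes to any fallible operation. A synchronous sketch follows (the real code is async and wraps blocking SSH connects in spawn_blocking; the function name and signature here are illustrative):

```rust
use std::time::{Duration, Instant};

/// Retry `op` until it succeeds or `timeout` elapses, with exponential
/// backoff starting at `initial` and capped at `cap`. Illustrative sketch
/// of the documented connect_with_retry behavior.
fn retry_with_backoff<T, E>(
    mut op: impl FnMut() -> Result<T, E>,
    timeout: Duration,
    initial: Duration,
    cap: Duration,
) -> Result<T, E> {
    let deadline = Instant::now() + timeout;
    let mut delay = initial;
    loop {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) => {
                // Give up if the next attempt would start past the deadline.
                if Instant::now() + delay > deadline {
                    return Err(e);
                }
                std::thread::sleep(delay);
                delay = (delay * 2).min(cap); // 1s, 2s, 4s, 5s, 5s, ...
            }
        }
    }
}

fn main() {
    // Simulate a guest whose sshd comes up on the third attempt.
    let mut attempts = 0u32;
    let r: Result<u32, &str> = retry_with_backoff(
        || {
            attempts += 1;
            if attempts < 3 { Err("not ready") } else { Ok(attempts) }
        },
        Duration::from_secs(1),
        Duration::from_millis(1),
        Duration::from_millis(4),
    );
    assert_eq!(r, Ok(3));
    println!("connected after {attempts} attempts");
}
```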
Why Not Native SSH?
libssh2 is used for programmatic operations (provisioning, connectivity checks) because it can be controlled from Rust code. For interactive sessions (vmctl ssh), vmctl hands off to the system ssh binary for proper terminal handling (PTY allocation, signal forwarding, etc.).
Error Handling
Approach
vm-manager uses miette for rich diagnostic error reporting. Every error variant includes:
- A human-readable message.
- A diagnostic code (e.g., vm_manager::qemu::spawn_failed).
- A help message telling the user what to do.
Errors are defined with #[derive(thiserror::Error, miette::Diagnostic)].
Error Variants
| Code | Trigger | Help |
|---|---|---|
vm_manager::qemu::spawn_failed | QEMU process failed to start | Ensure qemu-system-x86_64 is installed, in PATH, and KVM is available (/dev/kvm) |
vm_manager::qemu::qmp_connect_failed | Can't connect to QMP socket | QEMU may have crashed before QMP socket ready; check work directory logs |
vm_manager::qemu::qmp_command_failed | QMP command returned an error | (varies) |
vm_manager::image::overlay_creation_failed | QCOW2 overlay creation failed | Ensure qemu-img is installed and base image exists and is readable |
vm_manager::network::ip_discovery_timeout | Guest IP not found | Guest may not have DHCP lease; check network config and cloud-init |
vm_manager::propolis::unreachable | Can't reach propolis-server | Ensure propolis-server is running and listening on expected address |
vm_manager::cloudinit::iso_failed | Seed ISO generation failed | Ensure genisoimage or mkisofs installed, or enable pure-iso feature |
vm_manager::ssh::failed | SSH connection or command failed | Check SSH key, guest reachability, and sshd running |
vm_manager::ssh::keygen_failed | Ed25519 key generation failed | Internal error; please report it |
vm_manager::image::download_failed | Image download failed | Check network connectivity and URL correctness |
vm_manager::image::format_detection_failed | Can't detect image format | Ensure qemu-img installed and file is valid disk image |
vm_manager::image::conversion_failed | Image format conversion failed | Ensure qemu-img installed and sufficient disk space |
vm_manager::vm::not_found | VM not in store | Run vmctl list to see available VMs |
vm_manager::vm::invalid_state | Operation invalid for current state | (varies) |
vm_manager::backend::not_available | Backend not supported on platform | Backend not supported on current platform |
vm_manager::vmfile::not_found | VMFile.kdl not found | Create VMFile.kdl in current directory or specify path with --file |
vm_manager::vmfile::parse_failed | KDL syntax error | Check VMFile.kdl syntax; see https://kdl.dev |
vm_manager::vmfile::validation | VMFile validation error | (custom hint per error) |
vm_manager::provision::failed | Provisioner step failed | Check provisioner config and VM SSH reachability |
vm_manager::io | General I/O error | (transparent) |
Type Alias
The library defines pub type Result<T> = std::result::Result<T, VmError> for convenience. CLI commands return miette::Result<()> for rich terminal output.
Using vm-manager as a Crate
The vm-manager library can be used as a Rust dependency for building custom VM management tools.
Add the Dependency
[dependencies]
vm-manager = { path = "crates/vm-manager" }
# or from a git repository:
# vm-manager = { git = "https://github.com/user/vm-manager.git" }
Re-Exports
The crate root re-exports the most commonly used types:
use vm_manager::{
    // Hypervisor abstraction
    Hypervisor, ConsoleEndpoint, RouterHypervisor,
    // Error handling
    VmError, Result,
    // Core types
    BackendTag, VmSpec, VmHandle, VmState,
    NetworkConfig, CloudInitConfig, SshConfig,
};
Minimal Example
use vm_manager::{RouterHypervisor, Hypervisor, VmSpec, NetworkConfig};
use std::time::Duration;

#[tokio::main]
async fn main() -> vm_manager::Result<()> {
    // Create a hypervisor (platform-detected)
    let hyp = RouterHypervisor::new(None, "rpool".into());

    // Define a VM
    let spec = VmSpec {
        name: "example".into(),
        image_path: "/path/to/image.qcow2".into(),
        vcpus: 2,
        memory_mb: 2048,
        disk_gb: Some(20),
        network: NetworkConfig::User,
        cloud_init: None,
        ssh: None,
    };

    // Lifecycle
    let handle = hyp.prepare(&spec).await?;
    let handle = hyp.start(&handle).await?;
    // ... use the VM ...
    hyp.stop(&handle, Duration::from_secs(30)).await?;
    hyp.destroy(handle).await?;
    Ok(())
}
Feature Flags
| Feature | Effect |
|---|---|
pure-iso | Use pure-Rust ISO generation instead of genisoimage/mkisofs |
Hypervisor Trait
The Hypervisor trait is the core abstraction for VM lifecycle management. All backends implement it.
Definition
pub trait Hypervisor: Send + Sync {
    fn prepare(&self, spec: &VmSpec) -> impl Future<Output = Result<VmHandle>>;
    fn start(&self, vm: &VmHandle) -> impl Future<Output = Result<VmHandle>>;
    fn stop(&self, vm: &VmHandle, timeout: Duration) -> impl Future<Output = Result<VmHandle>>;
    fn suspend(&self, vm: &VmHandle) -> impl Future<Output = Result<VmHandle>>;
    fn resume(&self, vm: &VmHandle) -> impl Future<Output = Result<VmHandle>>;
    fn destroy(&self, vm: VmHandle) -> impl Future<Output = Result<()>>;
    fn state(&self, vm: &VmHandle) -> impl Future<Output = Result<VmState>>;
    fn guest_ip(&self, vm: &VmHandle) -> impl Future<Output = Result<String>>;
    fn console_endpoint(&self, vm: &VmHandle) -> Result<ConsoleEndpoint>;
}
Methods
prepare
Allocates resources for a VM based on the provided VmSpec. Creates the work directory, QCOW2 overlay, cloud-init ISO, and networking configuration. Returns a VmHandle in the Prepared state.
start
Boots the VM. Returns an updated VmHandle with runtime information (PID, VNC address, etc.).
stop
Gracefully shuts down the VM. Tries ACPI power-down first, then force-kills after the timeout. Returns the handle in Stopped state.
suspend / resume
Pauses and unpauses VM vCPUs without shutting down.
destroy
Stops the VM (if running) and removes all associated resources. Takes ownership of the handle.
state
Queries the current VM state by checking the process and QMP status.
guest_ip
Discovers the guest's IP address. Method varies by network mode and backend.
console_endpoint
Returns the console connection details. Synchronous (not async).
ConsoleEndpoint
pub enum ConsoleEndpoint {
    UnixSocket(PathBuf), // QEMU serial console
    WebSocket(String),   // Propolis console
    None,                // Noop backend
}
Implementing a Custom Backend
To add a new hypervisor backend:
- Create a struct implementing Hypervisor.
- Add it to RouterHypervisor with appropriate #[cfg] gates.
- Add a new variant to BackendTag.
- Implement dispatch in RouterHypervisor's Hypervisor impl.
Core Types
All types are defined in crates/vm-manager/src/types.rs and re-exported from the crate root.
VmSpec
The input specification for creating a VM.
pub struct VmSpec {
    pub name: String,
    pub image_path: PathBuf,
    pub vcpus: u16,
    pub memory_mb: u64,
    pub disk_gb: Option<u32>,
    pub network: NetworkConfig,
    pub cloud_init: Option<CloudInitConfig>,
    pub ssh: Option<SshConfig>,
}
VmHandle
A runtime handle to a managed VM. Serializable to JSON for persistence.
pub struct VmHandle {
    pub id: String,
    pub name: String,
    pub backend: BackendTag,
    pub work_dir: PathBuf,
    pub overlay_path: Option<PathBuf>,
    pub seed_iso_path: Option<PathBuf>,
    pub pid: Option<u32>,
    pub qmp_socket: Option<PathBuf>,
    pub console_socket: Option<PathBuf>,
    pub vnc_addr: Option<String>,
    pub vcpus: u16,      // default: 1
    pub memory_mb: u64,  // default: 1024
    pub disk_gb: Option<u32>,
    pub network: NetworkConfig,
    pub ssh_host_port: Option<u16>,
    pub mac_addr: Option<String>,
}
All optional fields default to None and numeric fields have sensible defaults for backward-compatible deserialization.
VmState
pub enum VmState {
    Preparing,
    Prepared,
    Running,
    Stopped,
    Failed,
    Destroyed,
}
Implements Display with lowercase names.
NetworkConfig
pub enum NetworkConfig {
    Tap { bridge: String },
    User, // default
    Vnic { name: String },
    None,
}
Serialized with #[serde(tag = "type")] for clean JSON representation.
CloudInitConfig
pub struct CloudInitConfig {
    pub user_data: Vec<u8>,
    pub instance_id: Option<String>,
    pub hostname: Option<String>,
}
user_data is the raw cloud-config YAML content.
SshConfig
pub struct SshConfig {
    pub user: String,
    pub public_key: Option<String>,
    pub private_key_path: Option<PathBuf>,
    pub private_key_pem: Option<String>,
}
Supports both file-based keys (private_key_path) and in-memory keys (private_key_pem).
BackendTag
pub enum BackendTag {
    Noop,
    Qemu,
    Propolis,
}
Serialized as lowercase strings. Implements Display.
Image Management API
The image module handles downloading, caching, format detection, and overlay creation. Located in crates/vm-manager/src/image.rs.
ImageManager
pub struct ImageManager {
    client: reqwest::Client,
    cache: PathBuf, // default: ~/.local/share/vmctl/images/
}
new
ImageManager::new() -> Self
Creates an ImageManager with the default cache directory.
download
async fn download(&self, url: &str, destination: &Path) -> Result<()>
Downloads an image from a URL to a local path. Skips if the destination already exists. Auto-decompresses .zst/.zstd files. Logs progress every 5%.
pull
async fn pull(&self, url: &str, name: Option<&str>) -> Result<PathBuf>
Downloads an image to the cache directory and returns the cached path. If name is None, extracts the filename from the URL.
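The filename-from-URL fallback can be sketched in one line (`cache_name` is an illustrative helper; vmctl's real extraction may differ):

```rust
/// Derive a cache filename from an image URL when no name is given:
/// take everything after the last '/'. Illustrative sketch.
fn cache_name(url: &str) -> &str {
    url.rsplit('/').next().unwrap_or(url)
}

fn main() {
    let url = "https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img";
    assert_eq!(cache_name(url), "noble-server-cloudimg-amd64.img");
    println!("cached as {}", cache_name(url));
}
```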
list
fn list(&self) -> Result<Vec<CachedImage>>
Lists all images in the cache with their names, sizes, and paths.
detect_format
async fn detect_format(path: &Path) -> Result<String>
Runs qemu-img info --output=json and returns the format string (e.g., "qcow2", "raw").
create_overlay
async fn create_overlay(base: &Path, overlay: &Path, size_gb: Option<u32>) -> Result<()>
Creates a QCOW2 overlay with the given base image as a backing file. Optionally resizes to size_gb.
convert
async fn convert(src: &Path, dst: &Path, format: &str) -> Result<()>
Converts an image between formats using qemu-img convert.
SSH and Provisioning API
SSH Module
Located in crates/vm-manager/src/ssh.rs.
connect
pub fn connect(ip: &str, port: u16, config: &SshConfig) -> Result<Session>
Establishes an SSH connection and authenticates. Supports in-memory PEM keys and file-based keys.
exec
pub fn exec(sess: &Session, cmd: &str) -> Result<(String, String, i32)>
Executes a command and returns (stdout, stderr, exit_code).
exec_streaming
pub fn exec_streaming<W1: Write, W2: Write>(
    sess: &Session,
    cmd: &str,
    stdout_writer: &mut W1,
    stderr_writer: &mut W2,
) -> Result<(String, String, i32)>
Executes a command with real-time output streaming. Uses non-blocking I/O with 8KB buffers and 50ms polling interval. Both writes to the provided writers and collects the full output.
upload
pub fn upload(sess: &Session, local: &Path, remote: &str) -> Result<()>
Uploads a file via SFTP.
connect_with_retry
```rust
pub async fn connect_with_retry(
    ip: &str,
    port: u16,
    config: &SshConfig,
    timeout: Duration,
) -> Result<Session>
```
Retries the connection with exponential backoff (delays from 1s up to 5s). Runs the blocking SSH calls on tokio::task::spawn_blocking.
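The exact schedule is an implementation detail; the doubling-with-cap pattern implied by the documented 1s-5s range can be sketched as (the doubling itself is an assumption):

```shell
# Sketch of a doubling backoff capped at 5s (the precise schedule inside
# connect_with_retry is an assumption beyond the documented 1s-5s range)
delay=1
for attempt in 1 2 3 4 5; do
    echo "attempt $attempt: waiting ${delay}s before retry"
    delay=$(( delay * 2 ))
    [ "$delay" -gt 5 ] && delay=5
done
```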
Provisioning Module
Located in crates/vm-manager/src/provision.rs.
run_provisions
```rust
pub fn run_provisions(
    sess: &Session,
    provisions: &[ProvisionDef],
    base_dir: &Path,
    vm_name: &str,
    log_dir: Option<&Path>,
) -> Result<()>
```
Runs all provisioners in sequence:
- Shell (inline): executes the command via exec_streaming.
- Shell (script): uploads the script to /tmp/vmctl-provision-<step>.sh, makes it executable, and runs it.
- File: uploads via SFTP.
Output is streamed to the terminal and appended to provision.log if log_dir is provided.
Aborts on the first non-zero exit code with VmError::ProvisionFailed.
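The script variant can be simulated locally. This sketch mirrors the upload/chmod/execute sequence with the SFTP upload replaced by a cp; the step index 0 is an assumption for illustration:

```shell
# Simulate the script-provision flow locally (SFTP upload replaced by cp)
step=0
src=$(mktemp)
cat > "$src" <<'EOF'
#!/bin/sh
echo "provisioned"
EOF
dst="/tmp/vmctl-provision-${step}.sh"
cp "$src" "$dst"   # vmctl uploads via SFTP instead
chmod +x "$dst"    # make it executable
"$dst"             # run it; a non-zero exit would abort provisioning
```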
VMFile Parsing API
The VMFile module parses and resolves VMFile.kdl configuration files. Located in crates/vm-manager/src/vmfile.rs.
Types
VmFile
```rust
pub struct VmFile {
    pub base_dir: PathBuf,
    pub vms: Vec<VmDef>,
}
```
VmDef
```rust
pub struct VmDef {
    pub name: String,
    pub image: ImageSource,
    pub vcpus: u16,
    pub memory_mb: u64,
    pub disk_gb: Option<u32>,
    pub network: NetworkDef,
    pub cloud_init: Option<CloudInitDef>,
    pub ssh: Option<SshDef>,
    pub provisions: Vec<ProvisionDef>,
}
```
ImageSource
```rust
pub enum ImageSource {
    Local(String),
    Url(String),
}
```
ProvisionDef
```rust
pub enum ProvisionDef {
    Shell(ShellProvision),
    File(FileProvision),
}

pub struct ShellProvision {
    pub inline: Option<String>,
    pub script: Option<String>,
}

pub struct FileProvision {
    pub source: String,
    pub destination: String,
}
```
Functions
discover
```rust
pub fn discover(explicit: Option<&Path>) -> Result<PathBuf>
```
Finds the VMFile. If explicit is provided, uses that path. Otherwise, looks for VMFile.kdl in the current directory.
parse
```rust
pub fn parse(path: &Path) -> Result<VmFile>
```
Parses a VMFile.kdl into a VmFile struct. Validates:
- At least one vm block.
- No duplicate VM names.
- Each VM has a valid image source.
- Provisioner blocks are well-formed.
resolve
```rust
pub async fn resolve(def: &VmDef, base_dir: &Path) -> Result<VmSpec>
```
Converts a VmDef into a VmSpec ready for the hypervisor:
- Downloads images from URLs.
- Resolves local image paths.
- Generates Ed25519 SSH keypairs if needed.
- Reads cloud-init user-data files.
- Resolves all relative paths against base_dir.
Utility Functions
```rust
pub fn expand_tilde(s: &str) -> PathBuf
```
Expands ~ to the user's home directory.
```rust
pub fn resolve_path(raw: &str, base_dir: &Path) -> PathBuf
```
Expands tilde and makes relative paths absolute against base_dir.
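A shell analogue of the documented resolution rules (expand a leading tilde, keep absolute paths, join relative paths with base_dir) - a sketch for illustration, not the crate's code:

```shell
# Shell analogue of resolve_path (sketch; the real implementation is Rust)
resolve_path() {
    raw=$1 base=$2
    case "$raw" in
        "~"|"~/"*) printf '%s\n' "$HOME${raw#\~}" ;;  # expand leading tilde
        /*)        printf '%s\n' "$raw" ;;            # already absolute
        *)         printf '%s\n' "$base/$raw" ;;      # relative: join with base_dir
    esac
}
resolve_path "images/disk.qcow2" "/srv/project"   # -> /srv/project/images/disk.qcow2
```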
```rust
pub fn generate_ssh_keypair(vm_name: &str) -> Result<(String, String)>
```
Generates an Ed25519 keypair. Returns (public_key_openssh, private_key_pem).
Running in Docker/Podman
vmctl can run inside a container for CI/CD pipelines or isolated environments. The key requirement is access to /dev/kvm.
Dockerfile
FROM rust:1.85-bookworm AS builder
WORKDIR /build
COPY . .
RUN cargo build --release -p vmctl --features vm-manager/pure-iso
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
qemu-system-x86 \
qemu-utils \
openssh-client \
&& rm -rf /var/lib/apt/lists/*
COPY --from=builder /build/target/release/vmctl /usr/local/bin/vmctl
ENV XDG_DATA_HOME=/data
ENTRYPOINT ["vmctl"]
The pure-iso feature eliminates the need for genisoimage in the container.
Docker
docker build -t vmctl .
docker run --rm \
--device /dev/kvm \
-v vmctl-data:/data \
vmctl list
The --device /dev/kvm flag passes through KVM access. No --privileged or special capabilities are needed for user-mode networking.
For TAP networking, you'll need --cap-add NET_ADMIN and appropriate bridge configuration.
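Before running the container it can help to confirm the host actually exposes KVM. A minimal host-side check (a sketch; it only tests permissions on /dev/kvm):

```shell
# Host-side sanity check for KVM access (assumption: Linux host)
if [ -r /dev/kvm ] && [ -w /dev/kvm ]; then
    echo "kvm: available"
else
    echo "kvm: not available"
fi
```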
Podman
podman build -t vmctl .
podman run --rm \
--device /dev/kvm \
-v vmctl-data:/data \
vmctl list
Podman works identically for user-mode networking.
Persistent Data
Mount a volume at the XDG_DATA_HOME path (/data in the Dockerfile above) to persist VM state and cached images across container runs.
Using VMFiles
Mount your project directory to use VMFile.kdl:
docker run --rm \
--device /dev/kvm \
-v vmctl-data:/data \
-v $(pwd):/workspace \
-w /workspace \
vmctl up
TAP Networking and Bridges
TAP networking gives VMs a real presence on a host network, with full Layer 2 connectivity.
Creating a Bridge
# Create bridge
sudo ip link add br0 type bridge
sudo ip link set br0 up
# Assign an IP to the bridge (optional, for host-to-guest communication)
sudo ip addr add 10.0.0.1/24 dev br0
DHCP with dnsmasq
Run dnsmasq on the bridge to provide IP addresses to guests:
sudo dnsmasq \
--interface=br0 \
--bind-interfaces \
--dhcp-range=10.0.0.100,10.0.0.200,12h \
--no-daemon
IP Forwarding and NAT
If you want guests to reach the internet:
# Enable forwarding
sudo sysctl -w net.ipv4.ip_forward=1
# NAT outbound traffic
sudo iptables -t nat -A POSTROUTING -s 10.0.0.0/24 ! -o br0 -j MASQUERADE
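To undo the setup above, the inverse commands are (a sketch; adjust if you have added other rules on br0):

```shell
# Tear down the NAT rule and bridge created above
sudo iptables -t nat -D POSTROUTING -s 10.0.0.0/24 ! -o br0 -j MASQUERADE
sudo ip link set br0 down
sudo ip link del br0
```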
Using with vmctl
Imperative
vmctl create --name myvm --image ./image.qcow2 --bridge br0
Declarative
vm "myvm" {
image "image.qcow2"
network "tap" {
bridge "br0"
}
cloud-init {
hostname "myvm"
}
ssh {
user "ubuntu"
}
}
IP Discovery
vmctl discovers TAP-networked guest IPs by:
- Checking the ARP table (ip neigh show) for the guest's MAC address on the bridge.
- Falling back to dnsmasq lease files.
This happens automatically when you run vmctl ssh or provisioners.
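The ARP-table step can be reproduced by hand. This sketch runs awk over sample ip neigh output; the MAC and addresses are illustrative:

```shell
# Find a guest IP by MAC in (sample) `ip neigh show dev br0` output
mac="52:54:00:12:34:56"   # illustrative guest MAC
neigh='10.0.0.101 lladdr 52:54:00:12:34:56 REACHABLE
10.0.0.102 lladdr 52:54:00:aa:bb:cc STALE'
printf '%s\n' "$neigh" | awk -v mac="$mac" '$0 ~ mac { print $1 }'
# prints 10.0.0.101
```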
Security Considerations
- TAP interfaces may bypass host firewall rules.
- Guests on the bridge can see other devices on the network.
- Use iptables rules on the bridge to restrict traffic if needed.
illumos / Propolis Backend
vmctl includes experimental support for running VMs on illumos using the Propolis hypervisor (bhyve-based).
Requirements
- illumos-based OS (OmniOS, SmartOS, etc.)
- propolis-server installed and runnable
- ZFS pool (default: rpool)
- nebula-vm zone brand installed
How It Works
The Propolis backend manages VMs as illumos zones:
- Prepare: Creates a ZFS clone from {pool}/images/{vm}@latest to {pool}/vms/{vm}.
- Start: Boots the zone with zoneadm -z {vm} boot, waits for propolis-server on 127.0.0.1:12400, then sends the instance spec and run command via REST API.
- Stop: Sends a stop command to propolis-server, then halts the zone.
- Destroy: Stops the VM, uninstalls the zone (zoneadm uninstall -F), deletes the zone config (zonecfg delete -F), and destroys the ZFS dataset.
Networking
Uses illumos VNICs for exclusive-IP zone networking:
network "vnic" {
name "vnic0"
}
Limitations
- Suspend/resume not yet implemented.
- Console endpoint (WebSocket) is defined but not fully integrated.
- VNC address not yet exposed.
Building for illumos
cargo build --release -p vmctl --target x86_64-unknown-illumos
Custom Cloud-Init User Data
For advanced guest configuration, you can provide a complete cloud-config YAML file instead of using vmctl's built-in cloud-init generation.
Raw User-Data
vm "custom" {
image-url "https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img"
cloud-init {
user-data "cloud-config.yaml"
}
}
Example cloud-config.yaml
#cloud-config
users:
- name: deploy
groups: sudo
shell: /bin/bash
sudo: ALL=(ALL) NOPASSWD:ALL
ssh_authorized_keys:
- ssh-ed25519 AAAA... your-key
package_update: true
packages:
- nginx
- certbot
- python3-certbot-nginx
write_files:
- path: /etc/nginx/sites-available/default
content: |
server {
listen 80;
server_name _;
root /var/www/html;
}
runcmd:
- systemctl enable nginx
- systemctl start nginx
growpart:
mode: auto
devices: ["/"]
The pure-iso Feature
By default, vmctl generates the NoCloud seed ISO by shelling out to genisoimage or mkisofs. If neither is available, you can build with the pure-iso feature:
cargo build --release -p vmctl --features vm-manager/pure-iso
This uses the isobemak crate to generate ISO 9660 images entirely in Rust.
What vmctl Generates
When you don't provide raw user-data, vmctl generates a cloud-config that:
- Creates a user with the specified name.
- Grants passwordless sudo.
- Sets bash as the default shell.
- Injects the SSH public key into authorized_keys.
- Disables root login.
- Sets the hostname (from the hostname field or the VM name).
- Sets a unique instance-id in the metadata.
If you need more control than this, use raw user-data.
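As an illustration of the shape of that output (not the literal bytes vmctl emits), the generated user-data for a VM named myvm with user ubuntu looks roughly like:

```yaml
#cloud-config
# Illustrative shape only -- the exact generated output may differ
hostname: myvm
disable_root: true
users:
  - name: ubuntu
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... vmctl-generated
```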
Debugging and Logs
Log Verbosity
vmctl uses the tracing crate with RUST_LOG environment variable support:
# Default (info level)
vmctl up
# Debug logging
RUST_LOG=debug vmctl up
# Trace logging (very verbose)
RUST_LOG=trace vmctl up
# Target specific modules
RUST_LOG=vm_manager::ssh=debug vmctl ssh myvm
VM Logs
Console Log
The serial console output is captured to console.log in the VM's work directory. This includes boot messages and cloud-init output:
vmctl log myvm --console
Provision Log
Provisioner stdout/stderr is captured to provision.log:
vmctl log myvm --provision
Tail Recent Output
vmctl log myvm --console --tail 50
Work Directory
Each VM's files are in ~/.local/share/vmctl/vms/<name>/:
ls ~/.local/share/vmctl/vms/myvm/
Contents:
- overlay.qcow2 - Disk overlay
- seed.iso - Cloud-init ISO
- console.log - Serial output
- provision.log - Provisioner output
- qmp.sock - QMP control socket
- console.sock - Console socket
- pidfile - QEMU PID
- id_ed25519_generated - Auto-generated SSH key
- id_ed25519_generated.pub - Public key
QMP Socket
You can interact with the QEMU Machine Protocol directly for advanced debugging:
# Using socat
socat - UNIX-CONNECT:$HOME/.local/share/vmctl/vms/myvm/qmp.sock
After connecting, send {"execute": "qmp_capabilities"} to initialize, then commands like:
{"execute": "query-status"}
{"execute": "query-vnc"}
{"execute": "human-monitor-command", "arguments": {"command-line": "info network"}}
Common Issues
"QEMU spawn failed"
- Verify qemu-system-x86_64 is in your PATH.
- Check that /dev/kvm exists and is accessible.
- Ensure your user is in the kvm group.
"Cloud-init ISO failed"
- Install genisoimage or mkisofs.
- Or rebuild with --features vm-manager/pure-iso.
"SSH failed"
- Check the console log for cloud-init errors: vmctl log myvm --console
- Verify the guest is reachable (check vmctl status myvm for the SSH port).
- Ensure sshd is running in the guest.
- Try connecting manually: ssh -p <port> -i <key> user@127.0.0.1
"IP discovery timeout" (TAP networking)
- Verify the bridge exists and has DHCP.
- Check ip neigh show for the guest's MAC address.
- Ensure the guest has obtained a DHCP lease (check the console log).
VM stuck in "Stopped" state but QEMU still running
- Check vmctl status myvm for the PID.
- Verify with kill -0 <pid>; if the process is alive, the QMP socket may be stale.
- Destroy and recreate: vmctl destroy myvm.