fixup catalog locations and import. Add end2end test for that. (ignored by default)

This commit is contained in:
Till Wegmueller 2025-08-13 19:09:22 +02:00
parent f2922dab11
commit 287b692920
6 changed files with 452 additions and 169 deletions

IMPLEMENTATION_STATUS.md Normal file

@@ -0,0 +1,151 @@
### Continuation: What's still missing to actually install into a User or Partial Image
Below is a concrete, actionable continuation that builds on the current libips Image/catalog/installed DB work and outlines what remains to support real installs into User or Partial images. It follows the project's error-handling guidelines (thiserror + miette; no “fancy” in lib crates) and suggests minimal APIs and milestones.
---
### What's implemented today (recap)
- Image structure and metadata
- Full vs Partial images and metadata paths (var/pkg vs .pkg)
- Publisher add/remove/default, persisted in pkg6.image.json
- Catalogs
- Download per-publisher catalogs via RestBackend
- Build a merged image catalog (ImageCatalog) and query packages
- Retrieve manifests reconstructed from catalogs
- Installed DB
- Installed-packages redb database with add/remove/query/get-manifest operations
- Keys are full FMRI strings with publisher
- Errors
- ImageError, InstalledError, Catalog errors use thiserror + miette (no fancy)
This is a solid foundation for discovery and state, but it doesn't yet apply manifests to the filesystem or fetch payloads, both of which are required for actual installs.
---
### Missing components for real installation
1) Dependency resolution and planning
- Need a solver that, given requested specs, picks package versions, resolves require dependencies, excludes obsolete/renamed where appropriate, and produces an InstallPlan.
2) Payload fetching
- RestBackend currently fetches catalogs only; it needs a method to fetch content payloads (files) by digest/hash to a local cache, with verification.
3) Action executor (filesystem apply)
- Implement an installer that interprets Manifest actions (Dir, File, Link, etc.) relative to the image root, writes files atomically, sets modes/owners/groups, and updates the Installed DB upon success.
4) Transaction/locking and rollback
- Image-level lock to serialize operations; minimal rollback with temp files or a small journal.
5) Uninstall/update planning and execution
- Compute diffs vs installed manifests; remove safely; preserve config files where appropriate; perform updates atomically.
6) Partial/User image policy
- Define which actions are permitted in partial/user images (likely restrict Users/Groups/Drivers, etc.) and enforce with clear diagnostics.
7) Security and verification (future)
- TLS settings for repos, signature verification for catalogs and payloads.
8) CLI wiring
- pkg6 install/uninstall/update subcommands calling into libips high-level APIs.
9) Tests
- Unit: executor; Integration: mock repo + payloads; E2E: cargo xtask setup-test-env.
---
### Proposed modules and APIs
#### 1. Solver
- Location: libips/src/solver/mod.rs
- Types
- ResolvedPkg { fmri: Fmri, manifest: actions::Manifest }
- Constraint { stem: String, version_req: Option<String>, publisher: Option<String> }
- InstallPlan { add: Vec<ResolvedPkg>, remove: Vec<ResolvedPkg>, update: Vec<(ResolvedPkg, ResolvedPkg)>, reasons: Vec<String> }
- Error
- SolverError (thiserror + miette, no fancy), code prefix ips::solver_error
- Functions
- fn resolve_install(image: &Image, constraints: &[Constraint]) -> Result<InstallPlan, SolverError>
- MVP behavior
- Choose highest non-obsolete version matching constraints; fetch manifests via Image::get_manifest_from_catalog; perform require dependency closure; error on missing deps.
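The MVP behavior above can be sketched with std-only stand-ins. This is an illustration, not the real libips API: the String-keyed catalog, String versions, and String error stand in for Fmri, actions::Manifest, proper version ordering, and SolverError.

```rust
use std::collections::BTreeMap;

// Stand-in for the proposed ResolvedPkg (the real one carries an actions::Manifest).
#[derive(Debug, Clone)]
pub struct ResolvedPkg {
    pub fmri: String,
}

// Mirrors the proposed Constraint shape.
#[derive(Debug, Clone)]
pub struct Constraint {
    pub stem: String,
    pub version_req: Option<String>,
    pub publisher: Option<String>,
}

// Mirrors the proposed InstallPlan shape.
#[derive(Debug, Default)]
pub struct InstallPlan {
    pub add: Vec<ResolvedPkg>,
    pub remove: Vec<ResolvedPkg>,
    pub update: Vec<(ResolvedPkg, ResolvedPkg)>,
    pub reasons: Vec<String>,
}

/// MVP resolution: for each constraint, pick the highest version of the stem
/// from a stem -> ascending-sorted versions map. Obsolete filtering, version
/// requirements, and dependency closure are omitted in this sketch.
pub fn resolve_install(
    catalog: &BTreeMap<String, Vec<String>>,
    constraints: &[Constraint],
) -> Result<InstallPlan, String> {
    let mut plan = InstallPlan::default();
    for c in constraints {
        let versions = catalog
            .get(&c.stem)
            .ok_or_else(|| format!("missing_dependency: {}", c.stem))?;
        let best = versions
            .last()
            .ok_or_else(|| format!("no versions for {}", c.stem))?;
        plan.add.push(ResolvedPkg {
            fmri: format!("{}@{}", c.stem, best),
        });
    }
    Ok(plan)
}
```

The real resolve_install would additionally walk each chosen manifest's depend actions and recurse until the closure is complete.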
#### 2. Payload fetching
- Extend repository API
- trait ReadableRepository add:
- fn fetch_payload(&mut self, publisher: &str, digest: &str, dest: &Path) -> Result<(), RepositoryError>
- Or introduce a small RepositorySource used by installer to abstract fetching/caching.
- RestBackend implementation
- Derive URL for payloads by digest; download to temp; verify with crate::digest; move into cache.
- Image helpers
- fn content_cache_dir(&self) -> PathBuf (e.g., metadata_dir()/content)
- fn ensure_payload(&self, digest: &Digest) -> Result<PathBuf, ImageError>
Note: Ensure file actions in manifests include digest/hash attributes. If current catalog->manifest synthesis drops them, extend it so actions::File carries digest, size, mode, owner, group, path.
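A minimal sketch of the proposed content cache layout, assuming digest-sharded storage under the content directory (the hash-sharding proposal from the open items below); payload_cache_path and cached_payload are hypothetical helper names, not existing libips APIs:

```rust
use std::path::{Path, PathBuf};

/// Map a payload digest to a sharded path under the image content cache,
/// e.g. "<cache>/ab/abcdef...". Sharding by the first two hex characters
/// keeps individual directories small.
pub fn payload_cache_path(cache_dir: &Path, digest: &str) -> PathBuf {
    let shard = &digest[..2.min(digest.len())];
    cache_dir.join(shard).join(digest)
}

/// Return the cached payload path if it is already present; `None` tells the
/// caller (e.g. the proposed ensure_payload) to fetch, verify, and move the
/// payload into the cache first.
pub fn cached_payload(cache_dir: &Path, digest: &str) -> Option<PathBuf> {
    let p = payload_cache_path(cache_dir, digest);
    if p.exists() { Some(p) } else { None }
}
```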
#### 3. Action executor
- Location: libips/src/apply/mod.rs
- Types
- ApplyOptions { dry_run: bool, preserve_configs: bool, no_backup: bool }
- InstallerError (thiserror + miette), code ips::installer_error
- Functions
- fn apply_install_plan(image: &Image, plan: &InstallPlan, repo_src: &mut impl RepositorySource, opts: &ApplyOptions) -> Result<(), InstallerError>
- Handling (MVP)
- Dir: create with mode/owner/group
- File: fetch payload; write to temp; fsync; set metadata; rename atomically
- Link: create symlink/hardlink
- Attr/License: metadata only (store or ignore initially)
- Policy for Partial images
- Forbid user/group creation and other privileged actions; return ValidationError (ips::validation_error::forbidden_action)
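The atomic File step (write to temp, fsync, rename) might look like this std-only sketch; payload verification and mode/owner/group handling are omitted, and install_file_atomic is an illustrative name, not an existing API:

```rust
use std::fs::{self, File};
use std::io::Write;
use std::path::Path;

/// Install `contents` at `dest` atomically: write to a temp sibling, fsync,
/// then rename into place so readers never observe a partially written file.
pub fn install_file_atomic(dest: &Path, contents: &[u8]) -> std::io::Result<()> {
    if let Some(parent) = dest.parent() {
        // Rough equivalent of the Dir action (mode handling omitted).
        fs::create_dir_all(parent)?;
    }
    let tmp = dest.with_extension("pkg6-tmp");
    let mut f = File::create(&tmp)?;
    f.write_all(contents)?;
    f.sync_all()?; // fsync before rename for crash safety
    fs::rename(&tmp, dest)?; // atomic on the same filesystem
    Ok(())
}
```

The rename-into-place pattern is also what makes rollback cheap: an aborted transaction only leaves behind temp files that can be swept up.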
#### 4. High-level Image orchestration
- New APIs on Image
- fn plan_install(&self, specs: &[String]) -> Result<InstallPlan, ImageError>
- fn apply_plan(&self, plan: &InstallPlan, opts: &ApplyOptions) -> Result<(), ImageError>
- fn install(&self, specs: &[String], opts: &ApplyOptions) -> Result<(), ImageError>
- Behavior
- Acquire per-image lock (metadata_dir()/image.lock)
- Resolve plan; ensure payloads; apply; on success, update Installed DB via existing methods
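The per-image lock can be as simple as an exclusive create of image.lock, released on drop. This is an illustrative std-only sketch under that assumption; a robust version would also detect and break stale locks:

```rust
use std::fs::OpenOptions;
use std::path::{Path, PathBuf};

/// Guard holding the per-image lock; dropping it releases the lock.
pub struct ImageLock {
    path: PathBuf,
}

/// Take the image lock by creating `image.lock` exclusively. A concurrent
/// caller gets an error until the first guard is dropped.
pub fn acquire_image_lock(metadata_dir: &Path) -> std::io::Result<ImageLock> {
    let path = metadata_dir.join("image.lock");
    // create_new fails if the file already exists, serializing operations.
    OpenOptions::new().write(true).create_new(true).open(&path)?;
    Ok(ImageLock { path })
}

impl Drop for ImageLock {
    fn drop(&mut self) {
        // Best-effort release; a leftover lock would need stale-lock handling.
        let _ = std::fs::remove_file(&self.path);
    }
}
```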
#### 5. Uninstall and update
- Plan functions similar to install; compute diffs using old vs new manifests (actions::Diff exists to help)
- Track a per-package installed-file list for precise removal; for the MVP this can be derived from the manifest.
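Computing what to remove reduces to a set difference over file paths: anything the old manifest installed that the new one no longer mentions. A sketch (removal_candidates is a hypothetical helper operating on plain path strings):

```rust
use std::collections::BTreeSet;

/// Paths present in the installed (old) manifest but absent from the new one
/// are candidates for removal during update or uninstall. Config-preservation
/// policy would filter this list further.
pub fn removal_candidates(old_files: &[&str], new_files: &[&str]) -> Vec<String> {
    let new_set: BTreeSet<&str> = new_files.iter().copied().collect();
    old_files
        .iter()
        .copied()
        .filter(|p| !new_set.contains(*p))
        .map(|p| p.to_string())
        .collect()
}
```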
---
### Minimal milestone sequence (practical path)
- Milestone A: “Hello-world” install into a temp Partial image
1) Ensure file actions include digest in manifests
2) Add RestBackend::fetch_payload + Image cache
3) Implement executor for Dir/File/Link
4) Image::install that resolves a single package without deps and applies
5) Update Installed DB only after filesystem success
- Milestone B: Basic dependency closure and uninstall
1) MVP solver for require deps
2) Per-package file tracking; uninstall using that
3) Image lock; dry-run flag
4) Tests for partial image policy and path isolation
- Milestone C: Updates and diagnostics
1) Diff-based updates; safe replacement
2) Improved miette diagnostics with codes and help
3) CLI commands in pkg6 with fancy feature
---
### Error handling alignment
- New error enums:
- SolverError: ips::solver_error::{missing_dependency, conflict, …}
- InstallerError: ips::installer_error::{io, forbidden_action, payload_missing, …}
- ValidationError (if separate): ips::validation_error::{forbidden_action, invalid_spec, …}
- Library code uses specific error types; app code (pkg6) may use miette::Result with fancy.
Example variant:
- #[diagnostic(code(ips::installer_error::forbidden_action), help("Remove this package or use a full image"))]
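As a std-only illustration of the code scheme (the real enums would use thiserror + miette derives, carrying these codes in #[diagnostic(code(...))] and help text as shown above):

```rust
use std::fmt;

/// Stand-in for the proposed InstallerError; variants and codes follow the
/// ips::installer_error::* scheme from the list above.
#[derive(Debug)]
pub enum InstallerError {
    ForbiddenAction { action: String },
    PayloadMissing { digest: String },
}

impl InstallerError {
    /// Diagnostic code, mirroring what #[diagnostic(code(...))] would emit.
    pub fn code(&self) -> &'static str {
        match self {
            InstallerError::ForbiddenAction { .. } => "ips::installer_error::forbidden_action",
            InstallerError::PayloadMissing { .. } => "ips::installer_error::payload_missing",
        }
    }
}

impl fmt::Display for InstallerError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            InstallerError::ForbiddenAction { action } => {
                write!(f, "action '{}' is not permitted in this image type", action)
            }
            InstallerError::PayloadMissing { digest } => {
                write!(f, "payload {} not found in cache or repository", digest)
            }
        }
    }
}

impl std::error::Error for InstallerError {}
```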
---
### Open items to confirm
- Exact allowed action set for Partial/User images?
- Payload cache location and retention policy; proposed metadata_dir()/content with hash sharding
- REST payload URL structure (by digest) for your repos; adjust RestBackend accordingly
If you can confirm the above policy and repository layout for payloads, I can draft precise function signatures and a skeleton module structure next.


@@ -7,7 +7,7 @@ use serde::{Deserialize, Serialize};
use std::fs;
use std::path::{Path, PathBuf};
use thiserror::Error;
use tracing::{info, warn};
use tracing::{info, warn, trace};
/// Table definition for the catalog database
/// Key: stem@version
@@ -319,90 +319,86 @@ impl ImageCatalog {
/// Build the catalog from downloaded catalogs
pub fn build_catalog(&self, publishers: &[String]) -> Result<()> {
println!("Building catalog with publishers: {:?}", publishers);
println!("Catalog directory: {:?}", self.catalog_dir);
println!("Catalog database path: {:?}", self.db_path);
info!("Building catalog (publishers: {})", publishers.len());
trace!("Catalog directory: {:?}", self.catalog_dir);
trace!("Catalog database path: {:?}", self.db_path);
if publishers.is_empty() {
println!("No publishers provided");
return Err(CatalogError::NoPublishers);
}
// Open the database
println!("Opening database at {:?}", self.db_path);
trace!("Opening database at {:?}", self.db_path);
let db = Database::open(&self.db_path)
.map_err(|e| CatalogError::Database(format!("Failed to open database: {}", e)))?;
// Begin a writing transaction
println!("Beginning write transaction");
trace!("Beginning write transaction");
let tx = db.begin_write()
.map_err(|e| CatalogError::Database(format!("Failed to begin transaction: {}", e)))?;
// Open the catalog table
println!("Opening catalog table");
trace!("Opening catalog table");
let mut catalog_table = tx.open_table(CATALOG_TABLE)
.map_err(|e| CatalogError::Database(format!("Failed to open catalog table: {}", e)))?;
// Open the obsoleted table
println!("Opening obsoleted table");
trace!("Opening obsoleted table");
let mut obsoleted_table = tx.open_table(OBSOLETED_TABLE)
.map_err(|e| CatalogError::Database(format!("Failed to open obsoleted table: {}", e)))?;
// Process each publisher
for publisher in publishers {
println!("Processing publisher: {}", publisher);
trace!("Processing publisher: {}", publisher);
let publisher_catalog_dir = self.catalog_dir.join(publisher);
println!("Publisher catalog directory: {:?}", publisher_catalog_dir);
trace!("Publisher catalog directory: {:?}", publisher_catalog_dir);
// Skip if the publisher catalog directory doesn't exist
if !publisher_catalog_dir.exists() {
println!("Publisher catalog directory not found: {}", publisher_catalog_dir.display());
warn!("Publisher catalog directory not found: {}", publisher_catalog_dir.display());
continue;
}
// Create a catalog manager for this publisher
// The catalog parts are in a subdirectory: publisher/<publisher>/catalog/
let catalog_parts_dir = publisher_catalog_dir.join("publisher").join(publisher).join("catalog");
println!("Creating catalog manager for publisher: {}", publisher);
println!("Catalog parts directory: {:?}", catalog_parts_dir);
// Determine where catalog parts live. Support both legacy nested layout
// (publisher/<publisher>/catalog) and flat layout (directly under publisher dir).
let nested_dir = publisher_catalog_dir.join("publisher").join(publisher).join("catalog");
let flat_dir = publisher_catalog_dir.clone();
// Check if the catalog parts directory exists
let catalog_parts_dir = if nested_dir.exists() { &nested_dir } else { &flat_dir };
trace!("Creating catalog manager for publisher: {}", publisher);
trace!("Catalog parts directory: {:?}", catalog_parts_dir);
// Check if the catalog parts directory exists (either layout)
if !catalog_parts_dir.exists() {
println!("Catalog parts directory not found: {}", catalog_parts_dir.display());
warn!("Catalog parts directory not found: {}", catalog_parts_dir.display());
continue;
}
let mut catalog_manager = CatalogManager::new(&catalog_parts_dir, publisher)
let mut catalog_manager = CatalogManager::new(catalog_parts_dir, publisher)
.map_err(|e| CatalogError::Repository(crate::repository::RepositoryError::Other(format!("Failed to create catalog manager: {}", e))))?;
// Get all catalog parts
println!("Getting catalog parts for publisher: {}", publisher);
trace!("Getting catalog parts for publisher: {}", publisher);
let parts = catalog_manager.attrs().parts.clone();
println!("Catalog parts: {:?}", parts.keys().collect::<Vec<_>>());
trace!("Catalog parts: {:?}", parts.keys().collect::<Vec<_>>());
// Load all catalog parts
for part_name in parts.keys() {
println!("Loading catalog part: {}", part_name);
trace!("Loading catalog part: {}", part_name);
catalog_manager.load_part(part_name)
.map_err(|e| CatalogError::Repository(crate::repository::RepositoryError::Other(format!("Failed to load catalog part: {}", e))))?;
}
// Process each catalog part
for (part_name, _) in parts {
println!("Processing catalog part: {}", part_name);
trace!("Processing catalog part: {}", part_name);
if let Some(part) = catalog_manager.get_part(&part_name) {
println!("Found catalog part: {}", part_name);
println!("Packages in part: {:?}", part.packages.keys().collect::<Vec<_>>());
if let Some(publisher_packages) = part.packages.get(publisher) {
println!("Packages for publisher {}: {:?}", publisher, publisher_packages.keys().collect::<Vec<_>>());
} else {
println!("No packages found for publisher: {}", publisher);
}
trace!("Found catalog part: {}", part_name);
trace!("Packages in part: {:?}", part.packages.keys().collect::<Vec<_>>());
self.process_catalog_part(&mut catalog_table, &mut obsoleted_table, part, publisher)?;
} else {
println!("Catalog part not found: {}", part_name);
trace!("Catalog part not found: {}", part_name);
}
}
}
@@ -427,109 +423,101 @@ impl ImageCatalog {
part: &CatalogPart,
publisher: &str,
) -> Result<()> {
println!("Processing catalog part for publisher: {}", publisher);
trace!("Processing catalog part for publisher: {}", publisher);
// Get packages for this publisher
if let Some(publisher_packages) = part.packages.get(publisher) {
println!("Found {} package stems for publisher {}", publisher_packages.len(), publisher);
let total_versions: usize = publisher_packages.values().map(|v| v.len()).sum();
let mut processed: usize = 0;
let mut obsolete_count: usize = 0;
let progress_step: usize = 500; // report every N packages
trace!(
"Found {} package stems ({} versions) for publisher {}",
publisher_packages.len(),
total_versions,
publisher
);
// Process each package stem
for (stem, versions) in publisher_packages {
println!("Processing package stem: {}", stem);
println!("Found {} versions for stem {}", versions.len(), stem);
trace!("Processing package stem: {} ({} versions)", stem, versions.len());
// Process each package version
for version_entry in versions {
println!("Processing version: {}", version_entry.version);
println!("Actions: {:?}", version_entry.actions);
trace!("Processing version: {} | actions: {:?}", version_entry.version, version_entry.actions);
// Create the FMRI
let version = if !version_entry.version.is_empty() {
match crate::fmri::Version::parse(&version_entry.version) {
Ok(v) => {
println!("Parsed version: {:?}", v);
Some(v)
},
Ok(v) => Some(v),
Err(e) => {
println!("Failed to parse version '{}': {}", version_entry.version, e);
warn!("Failed to parse version '{}': {}", version_entry.version, e);
continue;
}
}
} else {
println!("Empty version string");
None
};
let fmri = Fmri::with_publisher(publisher, stem, version);
println!("Created FMRI: {}", fmri);
// Create the key for the catalog table (stem@version)
let catalog_key = format!("{}@{}", stem, version_entry.version);
println!("Catalog key: {}", catalog_key);
// Create the key for the obsoleted table (full FMRI including publisher)
let obsoleted_key = fmri.to_string();
println!("Obsoleted key: {}", obsoleted_key);
// Check if we already have this package in the catalog
let existing_manifest = if let Ok(bytes) = catalog_table.get(catalog_key.as_str()) {
if let Some(bytes) = bytes {
println!("Found existing manifest for {}", catalog_key);
Some(serde_json::from_slice::<Manifest>(bytes.value())?)
} else {
println!("No existing manifest found for {}", catalog_key);
None
}
} else {
println!("Error getting manifest for {}", catalog_key);
None
let existing_manifest = match catalog_table.get(catalog_key.as_str()) {
Ok(Some(bytes)) => Some(serde_json::from_slice::<Manifest>(bytes.value())?),
_ => None,
};
// Create or update the manifest
println!("Creating or updating manifest");
let manifest = self.create_or_update_manifest(existing_manifest, version_entry, stem, publisher)?;
// Check if the package is obsolete
let is_obsolete = self.is_package_obsolete(&manifest);
println!("Package is obsolete: {}", is_obsolete);
if is_obsolete { obsolete_count += 1; }
// Serialize the manifest
let manifest_bytes = serde_json::to_vec(&manifest)?;
println!("Serialized manifest size: {} bytes", manifest_bytes.len());
// Store the package in the appropriate table
if is_obsolete {
println!("Storing obsolete package in obsoleted table");
// Store obsolete packages in the obsoleted table with the full FMRI as key
// We don't store any meaningful values in the obsoleted table as per requirements,
// but we need to provide a valid byte slice
let empty_bytes: &[u8] = &[0u8; 0];
match obsoleted_table.insert(obsoleted_key.as_str(), empty_bytes) {
Ok(_) => println!("Successfully inserted into obsoleted table"),
Err(e) => {
println!("Failed to insert into obsoleted table: {}", e);
return Err(CatalogError::Database(format!("Failed to insert into obsoleted table: {}", e)));
}
}
obsoleted_table
.insert(obsoleted_key.as_str(), empty_bytes)
.map_err(|e| CatalogError::Database(format!("Failed to insert into obsoleted table: {}", e)))?;
} else {
println!("Storing non-obsolete package in catalog table");
// Store non-obsolete packages in the catalog table with stem@version as a key
match catalog_table.insert(catalog_key.as_str(), manifest_bytes.as_slice()) {
Ok(_) => println!("Successfully inserted into catalog table"),
Err(e) => {
println!("Failed to insert into catalog table: {}", e);
return Err(CatalogError::Database(format!("Failed to insert into catalog table: {}", e)));
}
}
catalog_table
.insert(catalog_key.as_str(), manifest_bytes.as_slice())
.map_err(|e| CatalogError::Database(format!("Failed to insert into catalog table: {}", e)))?;
}
processed += 1;
if processed % progress_step == 0 {
info!(
"Import progress (publisher {}): {}/{} packages ({} obsolete)",
publisher,
processed,
total_versions,
obsolete_count
);
}
}
}
// Final summary for this part/publisher
info!(
"Finished import for publisher {}: {} packages processed ({} obsolete)",
publisher,
processed,
obsolete_count
);
} else {
println!("No packages found for publisher: {}", publisher);
trace!("No packages found for publisher: {}", publisher);
}
println!("Finished processing catalog part for publisher: {}", publisher);
Ok(())
}


@@ -2152,7 +2152,7 @@ impl FileBackend {
pub fn get_catalog_manager(
&mut self,
publisher: &str,
) -> Result<std::cell::RefMut<crate::repository::catalog::CatalogManager>> {
) -> Result<std::cell::RefMut<'_, crate::repository::catalog::CatalogManager>> {
if self.catalog_manager.is_none() {
let publisher_dir = self.path.join("publisher");
let manager = crate::repository::catalog::CatalogManager::new(&publisher_dir, publisher)?;
@@ -2170,7 +2170,7 @@ impl FileBackend {
/// It uses interior mutability with RefCell to allow mutation through an immutable reference.
pub fn get_obsoleted_manager(
&mut self,
) -> Result<std::cell::RefMut<crate::repository::obsoleted::ObsoletedPackageManager>> {
) -> Result<std::cell::RefMut<'_, crate::repository::obsoleted::ObsoletedPackageManager>> {
if self.obsoleted_manager.is_none() {
let manager = crate::repository::obsoleted::ObsoletedPackageManager::new(&self.path);
let refcell = std::cell::RefCell::new(manager);


@@ -612,17 +612,13 @@ impl RestBackend {
None => return Err(RepositoryError::Other("No local cache path set".to_string())),
};
// Create publisher directory if it doesn't exist
let publisher_dir = cache_path.join("publisher").join(publisher);
fs::create_dir_all(&publisher_dir)?;
// The local cache path is expected to already point to the per-publisher directory
// Ensure the directory exists
fs::create_dir_all(cache_path)?;
// Create catalog directory if it doesn't exist
let catalog_dir = publisher_dir.join("catalog");
fs::create_dir_all(&catalog_dir)?;
// Get or create the catalog manager
// Get or create the catalog manager pointing at the per-publisher directory directly
if !self.catalog_managers.contains_key(publisher) {
let catalog_manager = CatalogManager::new(&catalog_dir, publisher)?;
let catalog_manager = CatalogManager::new(cache_path, publisher)?;
self.catalog_managers.insert(publisher.to_string(), catalog_manager);
}
@@ -656,10 +652,22 @@ impl RestBackend {
// Use a no-op reporter if none was provided
let progress = progress.unwrap_or(&NoopProgressReporter);
// Construct the URL for the catalog file
let url = format!("{}/catalog/1/{}", self.uri, file_name);
// Prepare candidate URLs to support both modern and legacy pkg5 depotd layouts
let mut urls: Vec<String> = vec![
format!("{}/catalog/1/{}", self.uri, file_name),
format!("{}/publisher/{}/catalog/1/{}", self.uri, publisher, file_name),
];
if file_name == "catalog.attrs" {
// Some older depots expose catalog.attrs at the root or under publisher path
urls.insert(1, format!("{}/catalog.attrs", self.uri));
urls.push(format!("{}/publisher/{}/catalog.attrs", self.uri, publisher));
}
debug!("Downloading catalog file: {}", url);
debug!(
"Attempting to download '{}' via {} candidate URL(s)",
file_name,
urls.len()
);
// Create progress info for this operation
let mut progress_info = ProgressInfo::new(format!("Downloading {}", file_name))
@@ -668,49 +676,56 @@ impl RestBackend {
// Notify that we're starting the download
progress.start(&progress_info);
// Make the HTTP request
let response = self.client.get(&url)
.send()
.map_err(|e| {
// Report failure
progress.finish(&progress_info);
RepositoryError::Other(format!("Failed to download catalog file: {}", e))
})?;
let mut last_error: Option<String> = None;
// Check if the request was successful
if !response.status().is_success() {
// Report failure
progress.finish(&progress_info);
return Err(RepositoryError::Other(format!(
"Failed to download catalog file: HTTP status {}",
response.status()
)));
for url in urls {
debug!("Trying URL: {}", url);
match self.client.get(&url).send() {
Ok(resp) => {
if resp.status().is_success() {
// Update total if server provided content length
if let Some(content_length) = resp.content_length() {
progress_info = progress_info.with_total(content_length);
progress.update(&progress_info);
}
// Read the response body
let body = resp.bytes().map_err(|e| {
progress.finish(&progress_info);
RepositoryError::Other(format!("Failed to read response body: {}", e))
})?;
// Update progress with the final size
progress_info = progress_info.with_current(body.len() as u64);
if progress_info.total.is_none() {
progress_info = progress_info.with_total(body.len() as u64);
}
// Report completion
progress.finish(&progress_info);
return Ok(body.to_vec());
} else {
last_error = Some(format!("HTTP status {} for {}", resp.status(), url));
}
}
Err(e) => {
last_error = Some(format!("{} for {}", e, url));
}
}
}
// Get the content length if available
if let Some(content_length) = response.content_length() {
progress_info = progress_info.with_total(content_length);
progress.update(&progress_info);
}
// Read the response body
let body = response.bytes()
.map_err(|e| {
// Report failure
progress.finish(&progress_info);
RepositoryError::Other(format!("Failed to read response body: {}", e))
})?;
// Update progress with the final size
progress_info = progress_info.with_current(body.len() as u64);
if progress_info.total.is_none() {
progress_info = progress_info.with_total(body.len() as u64);
}
// Report completion
// Report failure after exhausting all URLs
progress.finish(&progress_info);
Ok(body.to_vec())
Err(RepositoryError::Other(match last_error {
Some(s) => format!(
"Failed to download '{}' from any known endpoint: {}",
file_name, s
),
None => format!(
"Failed to download '{}' from any known endpoint",
file_name
),
}))
}
/// Download and store a catalog file
@@ -744,13 +759,8 @@ impl RestBackend {
None => return Err(RepositoryError::Other("No local cache path set".to_string())),
};
// Create publisher directory if it doesn't exist
let publisher_dir = cache_path.join("publisher").join(publisher);
fs::create_dir_all(&publisher_dir)?;
// Create catalog directory if it doesn't exist
let catalog_dir = publisher_dir.join("catalog");
fs::create_dir_all(&catalog_dir)?;
// Ensure the per-publisher directory (local cache path) exists
fs::create_dir_all(cache_path)?;
// Download the catalog file
let content = self.download_catalog_file(publisher, file_name, progress)?;
@@ -767,8 +777,8 @@ impl RestBackend {
// Notify that we're starting to store the file
progress.start(&progress_info);
// Store the file
let file_path = catalog_dir.join(file_name);
// Store the file directly under the per-publisher directory
let file_path = cache_path.join(file_name);
let mut file = File::create(&file_path)
.map_err(|e| {
// Report failure


@@ -0,0 +1,85 @@
// End-to-end network test against OpenIndiana Hipster repository.
//
// This test is ignored by default to avoid network usage during CI runs.
// To run manually:
//   cargo test -p libips --test e2e_openindiana -- --ignored --nocapture
// Optionally set IPS_E2E_NET=1 to annotate that network is expected.
//
// What it does:
// - Creates a temporary Image (Full)
// - Adds publisher "openindiana.org" with origin https://pkg.openindiana.org/hipster
// - Downloads the publisher catalog via RestBackend
// - Builds the image-wide merged catalog
// - Asserts that we discover at least one package and can retrieve a manifest
use std::env;
use tempfile::tempdir;

use libips::image::{Image, ImageType};

fn should_run_network_tests() -> bool {
    // Even when ignored, provide an env switch to document intent
    env::var("IPS_E2E_NET")
        .map(|v| v == "1" || v.to_lowercase() == "true")
        .unwrap_or(false)
}

#[test]
#[ignore]
fn e2e_download_and_build_catalog_openindiana() {
    // If the env var is not set, just return early (test is ignored by default anyway)
    if !should_run_network_tests() {
        eprintln!(
            "Skipping e2e_download_and_build_catalog_openindiana (set IPS_E2E_NET=1 and run with --ignored to execute)"
        );
        return;
    }

    // Create a temporary directory for the image
    let temp = tempdir().expect("failed to create temp dir");
    let img_path = temp.path().join("image");

    // Create the image
    let mut image = Image::create_image(&img_path, ImageType::Full).expect("failed to create image");

    // Add OpenIndiana publisher
    let publisher = "openindiana.org";
    let origin = "https://pkg.openindiana.org/hipster";
    image
        .add_publisher(publisher, origin, vec![], true)
        .expect("failed to add publisher");

    // Download catalog and build merged catalog
    image
        .download_publisher_catalog(publisher)
        .expect("failed to download publisher catalog");
    image.build_catalog().expect("failed to build merged catalog");

    // Query catalog; we expect at least one package
    let packages = image.query_catalog(None).expect("failed to query catalog");
    assert!(
        !packages.is_empty(),
        "expected at least one package from OpenIndiana catalog"
    );

    // Attempt to get a manifest for the first package
    let some_pkg = &packages[0];
    let manifest_opt = image
        .get_manifest_from_catalog(&some_pkg.fmri)
        .expect("failed to get manifest from catalog");
    assert!(
        manifest_opt.is_some(),
        "expected to retrieve a manifest for at least one package"
    );

    // Optional debugging output
    eprintln!(
        "Fetched {} packages; example FMRI: {} (obsolete: {})",
        packages.len(),
        some_pkg.fmri,
        some_pkg.obsolete
    );
}

run_openindiana_image_import.sh Executable file

@@ -0,0 +1,49 @@
#!/usr/bin/env bash
# Create an image under sample_data/test-image and import OpenIndiana catalogs
# so you can inspect the results locally.
#
# Usage:
# ./run_openindiana_image_import.sh
#
# Notes:
# - Requires network access to https://pkg.openindiana.org/hipster
# - You can change RUST_LOG below to control verbosity (error|warn|info|debug|trace)
set -euo pipefail
set -x
export RUST_LOG=info
IMG_PATH="sample_data/test-image"
PUBLISHER="openindiana.org"
ORIGIN="https://pkg.openindiana.org/hipster"
PKG6_BIN="./target/debug/pkg6"
# Ensure sample_data exists and reset image dir for a clean run
mkdir -p "$(dirname "$IMG_PATH")"
if [ -d "$IMG_PATH" ]; then
  rm -rf "$IMG_PATH"
fi
# Build pkg6 (and dependencies)
cargo build -p pkg6
# 1) Create image and add publisher (this also downloads the per-publisher catalog files)
"$PKG6_BIN" image-create \
-F "$IMG_PATH" \
-p "$PUBLISHER" \
-g "$ORIGIN"
# 2) Build the merged image-wide catalog database (also refreshes per-publisher catalogs)
"$PKG6_BIN" -R "$IMG_PATH" refresh "$PUBLISHER"
# 3) Print database statistics so you can inspect counts quickly
"$PKG6_BIN" -R "$IMG_PATH" debug-db --stats
# Optional: show configured publishers
"$PKG6_BIN" -R "$IMG_PATH" publisher -o table
echo "Done. Image created at: $IMG_PATH"
echo "Per-publisher catalog files under: $IMG_PATH/var/pkg/catalog/$PUBLISHER"
echo "Merged catalog database at: $IMG_PATH/var/pkg/catalog.redb"