Beta Release Version v2.2.2

Version v2.2.2 is a major feature release. Headline additions: Volume Snapshots and Backups (on-storage rollback marks and disaster-recovery exports for block storage volumes), Orphan VM Import (adopt KVM domains that already exist on a hypervisor as Hypervisor-managed instances without downtime or disk migration), and User Self-Registration with built-in CAPTCHA support for Cloudflare Turnstile and Google reCAPTCHA v2 / v3, signed email verification, and a post-verification onboarding top-up flow. Alongside the headliners: a comprehensive overhaul of the instance backup experience, a redesigned Forge tab, a polished tasks progress view, and the carry-over polish from earlier 2.2.x - RustFS as the new self-hosted object storage default, a fully reworked hypervisor self-update flow, and richer AI Assistant diagnostics for admins.

  • [Feature] Volume Snapshots and Backups - Block storage volumes now have a dedicated Snapshots & Backups tab on every volume detail page. Take instant on-storage snapshots for rollback, or export full backups to your existing remote backup storage (S3-compatible, NFS, or local) for disaster recovery. Both support in-place rollback and restore-to-new-volume modes.
  • [Feature] Per-Volume Safety Controls - Volume operations enforce per-volume serialization (one operation at a time), a 5-minute cooldown between operations, per-plan caps on retained snapshots and backups, and a per-user inflight limit so a single user cannot saturate the queue.
  • [Feature] Volume Backup Billing - Volume plans now expose per-GB / per-month credit pricing for retained snapshots and retained backups, plus configurable caps on how many of each a customer can keep on a single volume. Defaults bill nothing for retention but allow up to 5 snapshots and 10 backups per volume.
  • [Feature] Orphan VM Import - A new admin workflow on every hypervisor manage page surfaces KVM domains that exist on libvirt but are not tracked by Hypervisor - for example, VMs from before the slave was attached, or VMs created out-of-band. Eligible (qcow2 file-backed) domains can be adopted as Hypervisor-managed instances in place. MAC, IPs, and disk paths are preserved; per-disk storage assignment in the import dialog means no mv of qcow2 files.
  • [Feature] User Self-Registration with CAPTCHA - A complete public signup pipeline with three CAPTCHA providers (Cloudflare Turnstile, Google reCAPTCHA v2, Google reCAPTCHA v3), per-form toggles for Login and Register, signed email-verification links, a Verify-Your-Email page that auto-redirects via WebSocket the moment the user clicks the link in another tab, and an optional post-verification top-up onboarding modal.
  • [Improvement] Disk-Grouped Backup Tree - The Backups tab on instance manage pages now groups every backup under its source disk, with the full incremental chain rooted under each full backup. Disk cards are collapsible so long-running production instances with dozens of chains are no longer overwhelming.
  • [Improvement] Cross-Disk Restore Picker - When a backup's original disk has been deleted, the restore flow shows a target-disk picker with format and size validation, plus a clear warning about partition-layout and bootloader differences.
  • [Improvement] Type-to-Confirm Safeguards - Every destructive backup action now requires typing a confirmation value before it proceeds: the instance hostname and the backup name to restore, the backup name to delete. When restoring to a different disk, the target disk must also be chosen explicitly.
  • [Improvement] Live Backup Queue Banner - The new queue card at the top of the backup tab streams pending and running jobs in real time. Admins and end users see active backups without leaving the instance manage page.
  • [Improvement] Backup Notification Email Formatting - The "Backup Completed" and "Backup Failed" notification emails now render their detail block as a clean bullet list. Previously single newlines collapsed into one wall-of-text paragraph; now each detail (instance, IP, type, size, duration, completed-at) appears on its own line.
  • [Improvement] Forge Tab Redesign - The Forge (live instance snapshot) tab on both admin and user instance manage pages has been rebuilt with an active-session hero card, animated flame icon, transition timeline, semantic action buttons (green Commit, red Discard with descriptive subtext), and visual disk-selection tiles with primary and detached badges.
  • [Improvement] Tasks Tab Progress Bars - Instance task progress bars now use elegant gradient fills tuned for both light and dark modes: emerald for completed, rose for failed, primary-color shimmer for running, muted slate for pending. Status icons pick up matching colors so each row reads at a glance.
  • [Improvement] Backup Storage Type Simplification - FTP and SFTP have been removed as standalone backup storage types. Existing deployments using them should migrate to NFS or S3-compatible storage. The admin form no longer offers an rclone-specific provider dropdown; standard S3 / RustFS / Ceph S3 covers the same ground with simpler configuration.
  • [Improvement] Object Storage - RustFS Replaces MinIO - With MinIO no longer actively maintained on GitHub, the platform's self-hosted object storage stack now standardizes on RustFS (a drop-in replacement) and Ceph S3 for production workloads. Existing MinIO deployments continue to work without changes - RustFS is API-compatible.
  • [Improvement] Hypervisor Self-Update - Async Streaming - Hypervisor updates triggered from the admin panel are now handled by a background job on the slave instead of running inside the HTTP request. Output streams back to the admin panel line by line, so you can watch the full update in real time without holding a browser tab open. Updates also survive the supervisor restart that happens mid-update.
  • [Fix] Hypervisor Self-Update - Permission Denied - Fixed a recurring Permission denied error when re-running an update whose previous attempt left files in the staging directory owned by a different system user. The staging directory is now wiped and recreated cleanly on each run.
  • [Fix] Hypervisor Self-Update - Duplicate Completion Message - The "Update completed successfully" line no longer shows twice in the live output stream.
  • [Fix] Admin Reinstall - Image Picker Scope - The admin reinstall flow now scopes the image picker to images accessible to the instance's owner instead of the full library.
  • [Fix] Hypervisor Manage Page in Dark Mode - Pie ring SVGs render correctly with explicit sizing, legend dot colors are inlined so they paint reliably, the ID chip displays cleanly, and the GPU section shows a proper empty state when no devices are present.
  • [Fix] Slave Backup Failure Logging - When a backup job flips to failed without a structured error payload, the admin queue UI now shows a clear "Backup failed (no error detail reported by slave)" sentinel instead of the last successful step (which was misleading).
  • [Fix] Duplicate Admin Backup Failure Emails - A fresh mailable is now built for each admin recipient, so the To: header no longer accumulates addresses across recipients (which previously caused duplicate notification emails).

Volume Snapshots and Backups

Block storage volumes are first-class entities in the platform: they live independently of instances, can be detached and re-attached, and survive instance deletion. Until v2.2.2, however, there was no way to capture a point-in-time copy of a volume or roll one back to a known-good state. v2.2.2 introduces both, as two distinct primitives:

Snapshots are on-storage rollback marks. They are created using the volume's native backend - Ceph RBD snapshots for Ceph-backed volumes, qcow2 internal snapshots for file-backed volumes - so they are nearly instant and consume only the differential storage that diverges from the snapshot point. Snapshots live on the same storage as the source volume, which makes them fast and cheap, but also means they are not disaster recovery - if you lose the underlying storage, you lose the snapshot too.
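
To make the backend split concrete, here is a minimal sketch of what snapshot creation can look like at the storage layer, assuming a hypothetical volume record that carries its backend type and location (the platform's actual data model and service code are not shown here):

```python
import subprocess

def create_volume_snapshot(volume: dict, snap_name: str) -> None:
    """Create an on-storage snapshot with the volume's native backend.

    Hypothetical sketch: `volume` is assumed to carry a `backend` key
    plus whatever fields each backend needs.
    """
    if volume["backend"] == "ceph":
        # Ceph RBD snapshots are copy-on-write, so they complete almost
        # instantly regardless of volume size.
        subprocess.run(
            ["rbd", "snap", "create",
             f"{volume['pool']}/{volume['image']}@{snap_name}"],
            check=True,
        )
    elif volume["backend"] == "file":
        # qcow2 internal snapshots live inside the image file itself.
        subprocess.run(
            ["qemu-img", "snapshot", "-c", snap_name, volume["path"]],
            check=True,
        )
    else:
        raise ValueError(f"unsupported backend: {volume['backend']}")
```

Both `rbd snap create` and `qemu-img snapshot -c` are copy-on-write operations, which is why snapshots are nearly instant and only consume differential storage afterwards.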

Backups are full exports to remote backup storage. They use the same BackupStorage configuration that already powers instance backups (S3-compatible buckets like RustFS / Ceph S3 / Wasabi / B2, NFS exports, or local paths). A backup survives complete loss of the source storage and is the right tool for disaster recovery and long-term retention.

Both operations are full-only - there are no incremental chains for volumes - and both support two restore modes:

  • Roll back this volume (in-place) - overwrite the source volume's contents. The volume must be detached, or the instance it's attached to must be stopped, before in-place restore can run.
  • Restore to new volume - provision a fresh volume from the snapshot or backup. The source volume is untouched. The new volume picks up the same plan and hypervisor group as the source by default; both are configurable in the restore dialog.

The Snapshots & Backups tab streams progress in real time. When an operation is in flight, an active-operation banner with a gradient progress bar appears at the top of the tab. Status pills next to each row update as the work moves through pending → creating / restoring → available (or failed).

Safety controls

Volume operations enforce multiple guard rails to prevent corruption and runaway usage; a minimal sketch of these checks follows the list:

  • Per-volume serialization - only one operation (snapshot, backup, or restore) can run on a given volume at a time. Attempting a second operation while one is in flight is rejected.
  • 5-minute cooldown between operations on the same volume. The UI shows a countdown so you know exactly when you can act again.
  • Per-plan caps on retained snapshots (max_snapshots) and backups (max_backups), configurable from the admin Volume Plans page. Defaults are 5 snapshots and 10 backups per volume.
  • Per-user inflight limit - a user may have at most three volume operations in flight at once across their account.
  • Live-volume protection - in-place restore is blocked when the volume is attached to a running instance. Stop the instance or detach the volume first.
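
Here is a minimal sketch of how the first three guard rails might be enforced, using in-memory state for illustration (the platform presumably tracks this in its database; all names here are hypothetical):

```python
import time

COOLDOWN_SECONDS = 5 * 60
MAX_INFLIGHT_PER_USER = 3

# Illustrative in-memory state, not the platform's actual schema.
inflight_ops: dict[int, str] = {}        # volume_id -> running operation
last_op_finished: dict[int, float] = {}  # volume_id -> unix timestamp
user_inflight: dict[int, int] = {}       # user_id -> in-flight count

def check_volume_operation(volume_id: int, user_id: int) -> None:
    """Raise if any guard rail blocks a new snapshot/backup/restore."""
    # Per-volume serialization: one operation at a time.
    if volume_id in inflight_ops:
        raise RuntimeError(f"volume busy: {inflight_ops[volume_id]} in progress")
    # 5-minute cooldown between operations on the same volume.
    finished = last_op_finished.get(volume_id)
    if finished is not None:
        remaining = COOLDOWN_SECONDS - (time.time() - finished)
        if remaining > 0:
            raise RuntimeError(f"cooldown: retry in {int(remaining)}s")
    # Per-user inflight limit across the whole account.
    if user_inflight.get(user_id, 0) >= MAX_INFLIGHT_PER_USER:
        raise RuntimeError("too many volume operations in flight for this user")
```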

Billing

Volume plans now expose per-GB / per-month credit pricing for retained snapshots and retained backups, configured from the admin Volume Plans page. Defaults bill nothing for retention but allow customers to take snapshots and backups within their plan's caps. Set the credit values above zero to start metering - snapshot and backup line items will appear alongside compute charges in customer usage reports on the next billing tick.
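
As a worked example of the metering, assume an admin sets hypothetical prices of 0.02 credits per GB-month for retained snapshots and 0.01 for retained backups:

```python
# Hypothetical prices - the defaults ship at zero, so nothing is
# billed until an admin raises them.
SNAPSHOT_CREDIT_PER_GB_MONTH = 0.02
BACKUP_CREDIT_PER_GB_MONTH = 0.01

retained_snapshot_gb = 3 * 100   # e.g. three 100 GB snapshots retained
retained_backup_gb = 2 * 100     # e.g. two 100 GB backups retained

monthly_charge = (
    retained_snapshot_gb * SNAPSHOT_CREDIT_PER_GB_MONTH
    + retained_backup_gb * BACKUP_CREDIT_PER_GB_MONTH
)
print(monthly_charge)  # 8.0 credits per month
```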

The full guide - including how snapshots and backups differ in practice, when to use each, and how restore modes work - is in the Volume Snapshots and Backups feature guide.

A Better Instance Backup Experience

The Backups tab on instance manage pages received a significant overhaul. The old flat list of every backup ever taken made it hard to see chain relationships, encouraged accidental deletions, and gave no visibility into in-flight backup or restore operations.

In v2.2.2:

  1. Backups are grouped by source disk. Each disk gets its own card. Inside the card, every full backup is rooted at the top with its dependent incremental backups indented underneath in chain order. Chains load unpaginated so you always see the complete picture.
  2. Disk cards are collapsible. Click the chevron to fold a disk down to one line. On instances with many disks this keeps the page short and scannable.
  3. A new active queue card sits at the top of the tab and streams every pending and running backup or restore job in real time. You no longer need to navigate to the global backup queue page to see what's happening for the instance you're already looking at.
  4. All destructive actions require type-to-confirm. Restore prompts ask you to type the instance's hostname and the backup name. Delete prompts ask for the backup name. When the original source disk has been deleted (orphan backups), the restore prompt also requires you to choose a target disk explicitly. This catches both accidental clicks and copy-paste mistakes when working with multiple instances side-by-side.
  5. Cross-disk restore. When the original disk no longer exists, a target-disk picker shows only disks of the same storage type and at least the size of the backup, with a clear warning that partition layouts and bootloader paths can differ across disks.

Cleaner backup notification emails

The "Backup Completed Successfully" and "Backup Failed" emails are now formatted as proper markdown bullet lists. Each detail row - instance, IP address, backup type, size, duration, completed-at - renders on its own line in the customer's mail client instead of collapsing into a single dense paragraph. The change applies automatically to existing installations through the included migration.

Forge Tab Redesigned

Forge - the live instance snapshot feature that lets you commit or discard changes after a snapshot point - now has a tab UI that matches the importance of what it does. The active-session hero card uses an ember/orange gradient with an animated flame icon that pulses while a snapshot is being created or committed. A transition timeline bar shows progress through the create → commit / discard lifecycle. The disk picker for "Enable Forge" is now a grid of selectable visual tiles with primary and detached badges, replacing the bare checkbox list. The action buttons are clearly differentiated: green "Commit Changes" with a "Merge into base disks · permanent" subcaption, red "Discard Changes" with a "Roll back to checkpoint" subcaption.

Both light and dark mode are supported with carefully tuned tonal variants so the tab feels at home in either theme.

Tasks Tab Progress Bars

Instance task progress bars now use elegant gradient fills:

  • Done - emerald with a soft glow
  • Failed - rose with a soft glow
  • Running - primary-color gradient with a continuous shimmer animation
  • Pending - muted slate

The status icon column also picks up matching text-success / text-danger / text-primary colors, so each row's outcome is readable at a glance even on dense task lists.

RustFS Replaces MinIO

Following MinIO's transition away from active GitHub maintenance, the supported self-hosted object storage backends are now RustFS (a fully API- and admin-compatible drop-in) and Ceph S3 for larger deployments. All product documentation, setup guides, and admin UI text have been updated to reflect this. There are no breaking changes for existing customers running MinIO - RustFS speaks the same protocol and uses the same mc admin client, so your stack continues to work unchanged.

If you're setting up object storage for the first time, the Object Storage Setup guide walks through the new RustFS-based deployment.

Hypervisor Self-Update: Real-Time Streaming, Reliable Completion

Triggering a hypervisor update from the admin panel previously kept the browser request open while the slave executed the entire update inline. This caused two recurring issues: the update output could appear stuck halfway through if the supervisor restart killed the streaming connection, and re-running an update sometimes failed with Permission denied errors on staging files left behind by a previous run.

In v2.2.2, the update flow has been redesigned (a sketch of the detached streaming step follows the list):

  1. The admin panel triggers the update on the master.
  2. The master tells the slave to start, and returns immediately.
  3. The slave dispatches a background job that runs the actual update as a fully detached process.
  4. The detached process streams every line of output back to the master in real time.
  5. The admin panel receives those lines via WebSocket and displays them live.
  6. Final completion (or failure) status flows back the same way.
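
For illustration, a minimal Python-style sketch of step 3, with a hypothetical report_line callback standing in for whatever mechanism relays output to the master (the slave's real implementation is not documented here):

```python
import subprocess
from typing import Callable

def run_update_detached(script: str, report_line: Callable[[str], None]) -> int:
    """Run the update in its own session and stream each output line.

    start_new_session=True detaches the child from the supervisor's
    process group, so the mid-update supervisor restart does not take
    the update down with it.
    """
    proc = subprocess.Popen(
        ["bash", script],
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,  # interleave stderr into the stream
        text=True,
        start_new_session=True,
    )
    assert proc.stdout is not None
    for line in proc.stdout:
        report_line(line.rstrip("\n"))  # relay each line as it arrives
    return proc.wait()
```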

The end result: you see a live, line-by-line view of the update from "Connecting to hypervisor..." through "Update completed successfully", even across the supervisor restart that happens partway through. And re-running an update no longer trips over leftovers from a prior attempt - the staging area is always wiped clean before download begins.

Orphan VM Import

Hypervisor now lets you adopt KVM virtual machines that already exist on a hypervisor but were never tracked by the platform - VMs from before the slave was attached, VMs created with virsh directly, or VMs migrated in from another control panel. The adoption is non-destructive: nothing on the hypervisor is moved, renamed, copied, or restarted, and the guest VM keeps running through the import.

A new Orphan VMs card appears on every hypervisor's manage page. Click Scan for Orphan VMs to inspect every libvirt domain that Hypervisor doesn't already manage. The discovery pipeline reads each domain's libvirt XML, runs qemu-img info on each disk, and runs virsh domifaddr to collect any guest-reported IPs. Results stream back to the admin panel via WebSocket and are cached for one hour.
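
For illustration, the same inspection steps can be sketched with the stock libvirt and QEMU CLI tools (the real pipeline presumably uses libvirt bindings and does far more validation; inspect_domain is a hypothetical name):

```python
import json
import subprocess
import xml.etree.ElementTree as ET

def inspect_domain(name: str) -> dict:
    """Inspect one libvirt domain: XML, per-disk probe, guest IPs."""
    xml = subprocess.run(["virsh", "dumpxml", name],
                         capture_output=True, text=True, check=True).stdout
    root = ET.fromstring(xml)

    disks = []
    for disk in root.findall("./devices/disk[@device='disk']"):
        source = disk.find("source")
        if source is None or "file" not in source.attrib:
            continue  # non-file disks (LVM, RBD, ...) classified elsewhere
        path = source.attrib["file"]
        # qemu-img info --output=json reports format and virtual size.
        info = json.loads(subprocess.run(
            ["qemu-img", "info", "--output=json", path],
            capture_output=True, text=True, check=True).stdout)
        disks.append({"path": path, "format": info["format"],
                      "virtual_size": info["virtual-size"]})

    # Guest-reported IPs; output is tabular, so real code would parse it.
    ips_raw = subprocess.run(["virsh", "domifaddr", name, "--source", "agent"],
                             capture_output=True, text=True).stdout
    return {"name": name, "disks": disks, "domifaddr": ips_raw}
```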

Each discovered domain shows up with its runtime state (running / paused / shut off), vCPU count, RAM size, per-disk format/classification badges, and either an Import button (eligible) or a "Skipped: …" reason (ineligible). Click a domain name to open a side panel with the full inspected payload - every disk including unsupported ones with their reason, every NIC with detected guest IPs, and the libvirt VNC port.

Eligibility

A domain is eligible when every disk is a file-backed qcow2 image. Anything else - LVM, ZFS, raw block, or Ceph RBD - is reported with an explicit reason. Cloud-init / config ISOs (commonly attached as sdX devices, .iso files, or paths containing cidata/cloudinit) are auto-excluded from eligibility and listed under "CDROMs (excluded)". The runtime state of the VM does not affect eligibility.
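
A hypothetical predicate capturing the documented rule (field names are illustrative, not the platform's actual schema):

```python
def is_eligible(disks: list[dict]) -> tuple[bool, str | None]:
    """Eligible iff every non-excluded disk is a file-backed qcow2 image."""
    for d in disks:
        if d.get("is_cdrom") or d["path"].endswith(".iso") or "cidata" in d["path"]:
            continue  # cloud-init / config ISOs are excluded, not disqualifying
        if d["type"] != "file":
            return False, f"{d['path']}: not file-backed ({d['type']})"
        if d["format"] != "qcow2":
            return False, f"{d['path']}: format {d['format']}, need qcow2"
    return True, None
```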

Per-disk storage assignment, no mv

When you click Import, a dialog opens listing every disk. Disks whose source directory matches an existing storage row on the hypervisor are tagged Matched and require no action. Disks whose directory does not match prompt you to either pick another existing storage from a Select2 dropdown or register the directory as a new storage row in one click. Either way, no qcow2 files are moved - the Storage row's path already points where the disks live.
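
The Matched tag boils down to a directory comparison; a hypothetical sketch:

```python
import os

def match_storage(disk_path: str, storages: list[dict]) -> dict | None:
    """Return the storage row whose path equals the disk's directory."""
    disk_dir = os.path.normpath(os.path.dirname(disk_path))
    for storage in storages:
        if os.path.normpath(storage["path"]) == disk_dir:
            return storage  # tagged Matched in the import dialog
    return None  # prompts for a storage pick or one-click registration
```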

Pending Assignment banner and Assign User flow

Imported instances are owned by a system "Orphan Imports" user (suspended, never logs in) and flagged as pending assignment. A yellow banner on the admin instance manage page prompts you to assign a real owner - and optionally a real plan - via a new Assign User modal. Once assigned, the synthetic per-VM plan that backed the import is soft-deleted automatically. Customers never see this banner because the orphan-imports user can't log in.

Slave compatibility hardening

The slave runtime now reads interface names, VNC port, cloud-init device, and disk paths from instance state rather than reconstructing them from the hostname. This is what lets imported VMs (with libvirt-assigned interface names like vnet0 instead of vir{name}, and disk paths under /var/lib/libvirt/images/ instead of /home/hypervisor/disks/) participate in suspend/resume, firewall accounting, network attach/detach, VNC enable/disable, traffic accounting, and destroy without behavioural drift. Imported VMs also keep their original libvirt domain XML across power cycles until adopted, so any custom configuration the VM had before adoption survives.

The full guide - including troubleshooting and what's out of scope in v1 - is in the Orphan VM Import feature guide.

User Self-Registration with CAPTCHA and Email Verification

Hypervisor now ships a complete self-service signup pipeline alongside the existing admin-driven user creation flow. Once you turn it on, your /register route accepts new customers, runs the CAPTCHA challenge of your choice, sends an email-verification link, and (optionally) drops the verified user straight into a top-up modal so they can fund their account in one step.

Three CAPTCHA providers

Pick one provider; configure it once; choose which forms it protects:

  • Cloudflare Turnstile - privacy-friendly, no third-party cookies, no Google dependency. Most users see a small "I'm verifying you're human..." pill that resolves automatically without interaction.
  • Google reCAPTCHA v2 (checkbox) - the familiar "I'm not a robot" checkbox. Reliable, broadly compatible.
  • Google reCAPTCHA v3 (invisible scoring) - no user interaction; assigns each submission a score from 0.0 to 1.0. Hypervisor compares the score against a configurable threshold (default 0.5) and rejects below it.

CAPTCHA can be applied to Login, Register, or both, independently. There's also a Test CAPTCHA button in admin Settings → Security that runs a server-side validation against your secret key and surfaces the provider's response, so you can confirm your configuration is correct without going through the registration form.
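
For reference, the server-side v3 check described above maps onto Google's public siteverify endpoint; here is a minimal sketch (the platform's actual validation code is not shown):

```python
import json
import urllib.parse
import urllib.request

SITEVERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def verify_recaptcha_v3(secret: str, token: str, threshold: float = 0.5) -> bool:
    """POST the client token to siteverify and compare the returned score.

    `threshold` mirrors the configurable default of 0.5 described above.
    """
    data = urllib.parse.urlencode({"secret": secret, "response": token}).encode()
    with urllib.request.urlopen(SITEVERIFY_URL, data=data) as resp:
        result = json.load(resp)
    return bool(result.get("success")) and result.get("score", 0.0) >= threshold
```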

Self-registration toggle

A single switch under admin Settings → Security enables the public /register page. With self-registration off, attempting to visit /register redirects users away - admin-only provisioning continues to work as before. With it on, the public registration form is reachable. CAPTCHA on Register is recommended as soon as self-registration goes live.

Email verification flow

Every new self-registered customer receives a signed verification link valid for 60 minutes. Until they click it, they land on a Verify your email page with the email the link was sent to, a Resend button (rate-limited to once per minute), and a live status pill that flips from "Pending" to "Verified" the instant they click the link in another tab - the page subscribes to a private WebSocket channel for that user and auto-redirects, no manual refresh needed.
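
A signed, time-limited link of this kind can be illustrated with a generic HMAC scheme - an assumption for illustration, not necessarily how the platform signs its URLs:

```python
import hashlib
import hmac
import time

SECRET = b"app-secret-key"  # hypothetical application key
TTL = 60 * 60               # links are valid for 60 minutes

def sign_verification_url(user_id: int) -> str:
    expires = int(time.time()) + TTL
    payload = f"{user_id}:{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"/verify-email?user={user_id}&expires={expires}&sig={sig}"

def check_signature(user_id: int, expires: int, sig: str) -> bool:
    if time.time() > expires:
        return False  # link expired after 60 minutes
    payload = f"{user_id}:{expires}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)  # constant-time comparison
```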

Post-verification onboarding

If you've configured a top-up link in admin Settings → Security, the first dashboard load after verification opens a one-time modal:

Welcome! Add credits to start deploying. [ Add Credits → ] [ Skip for now ]

Customers can dismiss it. It only triggers on the post-verification redirect - navigating to the dashboard normally never opens it.

Profile fields

Signup collects country (with flag-emoji dropdown and full country names) and phone number. Both are visible on the admin user edit page so support can identify accounts. A configurable Terms of Service link can also appear under the form per deployment.

The full guide - including provider-specific tuning notes for reCAPTCHA v3, troubleshooting for "verification failed" errors, and security posture recommendations - is in the CAPTCHA and Self-Registration feature guide.