Instance Backups
Overview
Instance Backups let you capture and restore the full state of your virtual machines to a wide range of storage destinations. The backup pipeline streams data directly from the VM disk to the destination with no local staging, supporting local mounts, S3-compatible object storage, SFTP, FTP, and any rclone-supported backend.
Key capabilities:
- Remote destinations -- Stream backups directly to AWS S3, MinIO, Wasabi, Backblaze B2, DigitalOcean Spaces, SFTP servers, FTP servers, or any of rclone's 40+ supported backends
- Zero local staging -- Full and incremental backups pipe straight from the VM disk to the destination. No temporary disk space is required on the hypervisor.
- External snapshots -- Running VMs use atomic external snapshots so backups read from a frozen base image while writes continue to an overlay. There is no downtime; the overlay is committed back via live pivot.
- Incremental chains -- Dirty bitmaps track changed blocks since the last backup. Incrementals are sparse qcow2 files containing only the changed regions.
- Encrypted credentials -- S3 keys, SFTP passwords, and rclone configuration are stored encrypted on the master and pushed to hypervisors only when needed.
- Concurrent safe -- Each backup uses its own qemu-nbd Unix socket, so multiple backups can run in parallel on the same hypervisor without device conflicts.
- Policy-driven or manual -- Use backup policies for automated scheduled backups, or trigger one-off backups from the instance Backups tab.
How It Works
The backup pipeline runs on the hypervisor in six stages:
1. External snapshot (running VMs only) -- An atomic disk-only snapshot freezes the base image. All subsequent writes go to a thin overlay file. The base image is now safe to read without coordination with the VM.
2. NBD mount -- `qemu-nbd` exposes the frozen base image over a Unix socket with read-only access. A dirty bitmap is attached for incremental backups.
3. Stream -- `qemu-img convert` reads from the NBD socket and writes a qcow2 file to stdout. For incrementals, only blocks marked dirty by the bitmap are included (sparse qcow2).
4. Upload -- For remote destinations, `rclone rcat` pipes stdin directly to the storage backend. For local destinations, qemu-img writes straight to the mount path.
5. Bitmap clear -- After a successful backup, the dirty bitmap is cleared so the next incremental starts from this point.
6. Commit + pivot -- The overlay is merged back into the base image via `virsh block-commit --active --pivot`. The VM's active disk switches back to the base file with a brief I/O pause (milliseconds).
Stopped VMs skip steps 1 and 6 -- the pipeline mounts the disk directly via qemu-nbd and streams it out. Ceph RBD disks use rbd export and rbd export-diff piped through rclone.
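The streaming stage can be sketched as a command-assembly helper. This is a minimal illustration of the pipeline described above, not the platform's actual code: the function name, socket path, and exact flags are assumptions, but the shape (qemu-img reading over a per-backup NBD Unix socket, piping to `rclone rcat` with no local staging) follows the stages listed.

```python
import shlex

def full_backup_cmd(nbd_socket: str, rclone_target: str) -> str:
    """Assemble the streaming pipeline for a full backup to a remote
    destination: qemu-img reads the frozen base image over a per-backup
    NBD Unix socket (so parallel backups never collide on a device) and
    writes qcow2 to stdout, which rclone rcat uploads without touching
    local disk. Illustrative sketch -- flags/paths are assumptions."""
    src = shlex.quote(f"nbd+unix:///?socket={nbd_socket}")
    dest = shlex.quote(rclone_target)
    return f"qemu-img convert -O qcow2 {src} /dev/stdout | rclone rcat {dest}"
```

Because each backup gets its own socket path, two of these pipelines can run concurrently on one hypervisor without conflicting.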
Prerequisites
Before creating backups:
- A hypervisor with backup enabled -- Hypervisors must have backup storage assigned and enabled.
- A backup storage target -- Admins configure one or more backup storage entries via the Admin Backup Storage page.
- Provisioned dependencies on each hypervisor (installed automatically by `hypervisor:deploy`):
  - `rclone` for remote transfers
  - `qemu-utils` / `qemu-img` for the streaming pipeline
  - `libnbd-bin` / `libnbd` for bitmap extent mapping
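If you need to verify a hypervisor by hand, a quick check of the required binaries can look like this (a sketch; the platform's own provisioning check may differ):

```python
import shutil

def missing_backup_deps(binaries=("rclone", "qemu-img", "qemu-nbd")) -> list:
    """Return the backup-pipeline binaries not found on PATH, so a
    hypervisor can be sanity-checked before its first backup runs."""
    return [b for b in binaries if shutil.which(b) is None]
```

An empty result means the streaming pipeline's tools are all installed.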
Configuring Backup Storage (Admin)
Navigate to Backup Storage in the Admin sidebar. Click Add Storage to create a new backup destination.
Storage Types
| Type | Use Case |
|---|---|
| Local | Traditional local mount point on the hypervisor (NFS, SMB, local disk). Backward compatible with older setups. |
| S3 | Any S3-compatible endpoint -- AWS S3, MinIO, Wasabi, Backblaze B2 (S3 API), DigitalOcean Spaces, etc. |
| SFTP | SSH/SFTP server. Password authentication. |
| FTP | FTP server. Password authentication. |
| Rclone | Reference a pre-configured rclone remote by name. Useful for backends not exposed directly in the UI (Backblaze B2 native, Azure Blob, Google Cloud Storage, etc.). |
Fields by Type
Local:
| Field | Description |
|---|---|
| Name | Descriptive label (e.g., "Local NFS Backup") |
| Path | Absolute mount path on the hypervisor (e.g., /mnt/backup) |
| Enabled | Toggle the storage on/off |
S3:
| Field | Description |
|---|---|
| Name | Descriptive label |
| Endpoint | Full HTTPS URL of the S3 endpoint (e.g., https://s3.us-east-1.amazonaws.com or https://minio.example.com) |
| Bucket | Target bucket name |
| Region | Bucket region (e.g., us-east-1) |
| Access Key | IAM access key with read/write/delete permissions on the bucket |
| Secret Key | Corresponding secret key |
| Path Prefix | Optional prefix within the bucket (e.g., backups/production) |
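As an illustration of how these fields reach rclone without being written to disk, rclone supports configuring remotes through `RCLONE_CONFIG_<NAME>_<OPTION>` environment variables. The mapping below is a sketch of that scheme, not the platform's actual mechanism:

```python
def rclone_s3_env(name, endpoint, region, access_key, secret_key):
    """Map the stored S3 fields onto rclone's environment-variable
    config scheme (RCLONE_CONFIG_<NAME>_<OPTION>), one way a hypervisor
    could invoke rclone without persisting credentials. Sketch only."""
    p = f"RCLONE_CONFIG_{name.upper()}_"
    return {
        p + "TYPE": "s3",
        p + "ENDPOINT": endpoint,
        p + "REGION": region,
        p + "ACCESS_KEY_ID": access_key,
        p + "SECRET_ACCESS_KEY": secret_key,
    }
```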
SFTP / FTP:
| Field | Description |
|---|---|
| Name | Descriptive label |
| Host | Server hostname or IP |
| Port | SSH/FTP port (default 22 for SFTP, 21 for FTP) |
| Username | Login username |
| Password | Password (stored encrypted) |
| Path Prefix | Optional subdirectory on the remote (e.g., /backups) |
Rclone:
| Field | Description |
|---|---|
| Name | Descriptive label |
| Remote Name | The name of the pre-configured rclone remote on the hypervisor (e.g., my-b2-backup) |
| Path Prefix | Optional subdirectory within the remote |
For the Rclone type, the remote must be pre-configured on each hypervisor using rclone config. The platform does not manage these remotes -- use this type when you need a backend the built-in types don't support.
Credential Encryption
All configuration fields (access keys, passwords, remote settings) are stored using Laravel's encrypted array cast. Credentials are only decrypted on the hypervisor when preparing a backup.
Automated Backups via Policies
Scheduled, automated backups run through Backup Policies -- see the Instance Backup Policies guide for full documentation.
Briefly: you create a policy with a schedule (daily/weekly full backup time, incremental frequency, retention), then attach instances to the policy. The policy scheduler runs every 5 minutes and queues backups for attached instances when their schedule window is reached.
Policies work with any backup storage type -- the instance inherits the backup storage from its hypervisor, and the scheduler writes to that destination.
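The scheduler's per-instance decision on each 5-minute tick can be sketched as a simple window check. This is a hypothetical simplification of the logic, for illustration only:

```python
from datetime import datetime, timedelta

def backup_due(last_backup: datetime, now: datetime, interval: timedelta) -> bool:
    """Hypothetical due-check evaluated on each scheduler tick: an
    attached instance is queued once the policy's configured interval
    has elapsed since its last successful backup."""
    return now - last_backup >= interval
```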
Manual Backups
You can trigger a one-off backup from the instance Backups tab.
User Panel
1. Navigate to the instance you want to back up
2. Open the Backups tab
3. Click Create Backup
4. Choose:
   - Backup Type: Full or Incremental (incremental requires a prior full backup with an active dirty bitmap)
   - Backup Device: Primary disk only, or all attached disks
5. Click Start Backup
The backup is queued immediately. Progress shows in the Backup Queue subtab with the current percentage and transfer rate.
Admin Panel
Same flow on the Admin instance manage page. Admins can trigger backups regardless of the user's backup policy attachment, as long as the hypervisor has enabled backup storage.
Restoring Backups
From the instance Backups tab, click the Restore icon next to any backup. The restore flow:
1. Powers off the VM (required for consistent disk state)
2. Streams the backup from the destination back to the VM disk:
   - Full backup: a single `qemu-img convert` operation -- for remote storage, `rclone cat` pipes into `qemu-img convert`
   - Incremental backup: rebuilds the chain by restoring the parent full backup first, then applying each incremental via `qemu-img rebase` + `qemu-img commit`
3. Starts the VM after restore completes
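The incremental restore sequence can be sketched as a list of commands in chain order. Paths and the helper name are illustrative, not the platform's actual code:

```python
def restore_commands(disk, full, incrementals):
    """Sketch of the restore sequence: restore the parent full backup
    to the VM disk, then for each incremental in chain order, point it
    at the disk with qemu-img rebase and fold its changed blocks in
    with qemu-img commit. Exact flags are assumptions."""
    cmds = [f"qemu-img convert -O qcow2 {full} {disk}"]
    for inc in incrementals:
        cmds.append(f"qemu-img rebase -u -b {disk} -F qcow2 {inc}")
        cmds.append(f"qemu-img commit {inc}")
    return cmds
```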
Progress and any errors appear in the task log visible on the instance Tasks tab.
Restore overwrites the current disk state. All changes made since the backup was taken will be lost. Make a fresh backup before restoring if you want to preserve current state.
Incremental Backup Chains
An incremental backup contains only the blocks that changed since the last backup. Incrementals reference a parent backup, forming a chain:
full (base) ← incremental 1 ← incremental 2 ← incremental 3
To restore an incremental, the system must restore the full base first, then apply each incremental in order. This is handled automatically -- you select the incremental you want to restore, and the system walks the chain.
Chain length is controlled by the backup policy's max_incremental_chain setting. When the chain reaches this length, the next scheduled backup will be a new full backup, starting a fresh chain.
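The rollover rule reduces to a one-line decision (a sketch of the policy behavior described above):

```python
def next_backup_type(chain_length: int, max_incremental_chain: int) -> str:
    """Decide the next scheduled backup's type: once the current chain
    holds max_incremental_chain incrementals, roll over to a fresh
    full backup; otherwise keep extending the chain."""
    return "full" if chain_length >= max_incremental_chain else "incremental"
```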
Deleting Backups
Click the Delete icon next to a backup in the list. The deletion:
1. Removes the file from the remote destination via `rclone deletefile` (or `unlink` for local storage)
2. Cleans up any Ceph RBD snapshot if applicable
3. Removes the backup record from the database
You cannot delete a full backup while incrementals in its chain still exist. Delete the chain's incrementals first (or let the policy retention clean them up), then the parent full backup becomes deletable.
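The guard behind this rule is a parent-reference check, sketched here with an assumed record shape (`id`/`parent` fields are illustrative):

```python
def can_delete(backup_id, backups):
    """A backup is deletable only when no other backup still names it
    as parent -- this blocks deleting a full while its chain's
    incrementals exist, and frees it once they are gone."""
    return all(b.get("parent") != backup_id for b in backups)
```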
Ceph RBD Backups
For instances stored on Ceph RBD datastores, the pipeline uses rbd export (full) and rbd export-diff (incremental) instead of qemu-nbd. The output still pipes through rclone for remote destinations, so S3/SFTP/FTP backups work identically to qcow2-backed instances.
RBD snapshots are created automatically before each backup and kept in a rolling window (last 2 snapshots retained) for incremental chain support.
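The Ceph variant of the streaming command can be sketched the same way as the qcow2 path. Pool, image, and snapshot names are illustrative:

```python
def rbd_backup_cmd(pool, image, snap, from_snap, rclone_target):
    """Sketch of the Ceph path: rbd export streams a full image, while
    rbd export-diff with --from-snap streams only changes since the
    prior snapshot; either way '-' sends the data to stdout for
    rclone rcat to upload. Flags are assumptions, not platform code."""
    if from_snap:
        dump = f"rbd export-diff --from-snap {from_snap} {pool}/{image}@{snap} -"
    else:
        dump = f"rbd export {pool}/{image}@{snap} -"
    return f"{dump} | rclone rcat {rclone_target}"
```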
Monitoring and Troubleshooting
Backup Queue
The Backup Queue subtab on the instance Backups tab shows:
- Pending backups waiting to start
- In-progress backups with live percentage
- Recently completed or failed backups
Task Log
Every backup creates a Task record with:
- Full command execution log (including qemu-img, rclone, virsh output)
- Exit codes
- Error messages
- Duration
Access the task log from the instance Tasks tab.
Common Issues
"Hypervisor backup not enabled"
The hypervisor has no enabled backup storage assigned. Configure backup storage via the Admin Backup Storage page and assign it to the hypervisor.
"Bitmap not found, falling back to full backup"
The dirty bitmap was missing or cleared. The system automatically runs a full backup instead. Subsequent incrementals will work normally.
"NBD socket did not appear"
qemu-nbd failed to start. Check the task log for details -- usually indicates a missing or corrupt base image file, or permission issues.
"rclone rcat: failed to upload"
Network or authentication failure. Verify the storage credentials, endpoint, and network connectivity from the hypervisor.
"Block commit failed"
The overlay could not be merged back into the base image. The VM continues running on the overlay (no data loss). Investigate the disk state and contact support -- do not run another backup until resolved.
Architecture Summary
| Component | Role |
|---|---|
| `BackupPipeline` (slave) | Orchestrates external snapshot, NBD mount, streaming, bitmap clear, commit |
| `RcloneHelper` (slave) | Manages rclone remote configuration and stream commands |
| `HypervisorBackupStorage` (master) | Stores storage type and encrypted configuration |
| `InstanceBackupPolicy` (master) | Defines automated schedules and retention |
| `PolicyBackupScheduler` (master cron) | Queues scheduled backups every 5 minutes |
| `BackupInitiator` (master cron) | Sends queued backup requests to hypervisors |
| `BackupInstance` command (slave) | Executes the backup pipeline on the hypervisor |
| `RestoreBackupInstance` command (slave) | Streams backup data back to the VM disk |
Related Guides
- Instance Backup Policies -- Automated scheduled backups
- Object Storage -- S3-compatible storage for backup targets
- Database Backup Policies -- Backups for Managed Databases