Orphan VM Import
Overview
Orphan VM Import is an admin-only workflow for adopting KVM virtual machines that already exist on a hypervisor but are not tracked by Hypervisor. Typical sources: VMs that pre-date the slave attachment, VMs created directly with `virsh`, or VMs migrated in from another control panel. Eligible VMs become Hypervisor-managed instances in place, without downtime, disk migration, or guest reconfiguration.
Key characteristics:
- Non-destructive. Nothing on the hypervisor is moved, renamed, copied, or restarted. The VM keeps running through the import.
- MAC and IP preserved. Each NIC keeps its libvirt-assigned MAC address. Detected guest IPs are linked to a matching subnet when one exists; otherwise the import is logged as IP-unmatched and you assign the IP later.
- Per-disk storage assignment. Each disk is attached to an existing storage row on the hypervisor, or you register the disk's directory as a new storage row in one click. No `mv` of qcow2 files.
- Two-step adoption. Imported VMs land in a "Pending Assignment" state owned by a system user. An admin then assigns a real owner (and optionally a real plan) from the instance manage page.
Eligibility
The Orphan VM Import scanner inspects every libvirt domain on the hypervisor that Hypervisor does not already track. For each domain it reads the libvirt XML, runs `qemu-img info` against each disk, and runs `virsh domifaddr` to collect any guest-reported IPs.
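For reference, a rough manual equivalent of that per-domain inspection can be run on the hypervisor itself (the domain name `h1631` and the disk path below are illustrative):

```bash
# Manual equivalent of the scanner's per-domain inspection
# (domain name and disk path are illustrative):
virsh dumpxml h1631                                              # full libvirt XML
qemu-img info --output=json /var/lib/libvirt/images/h1631.qcow2  # per-disk format and backend
virsh domifaddr h1631                                            # guest-reported IPs
```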
A domain is eligible for import when every disk meets both of these conditions:
| Property | Required value |
|---|---|
| Disk format | qcow2 |
| Backend | regular file on the host filesystem |
Anything else (LVM, ZFS, raw block devices, Ceph RBD) is reported with an explicit "Skipped: ..." reason in the discovery table. Cloud-init / config ISOs, commonly attached as `sdX` devices, `.iso` files, or paths containing `cidata`/`cloudinit`, are auto-excluded from the eligibility check and listed separately under "CDROMs (excluded)".
The runtime state of the VM (running / paused / shut off) does not affect eligibility.
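If you want to pre-check a single disk against both conditions from the shell, a minimal sketch (the path is illustrative and `jq` is assumed to be installed):

```bash
# Both checks must pass for the disk to be eligible:
disk=/var/lib/libvirt/images/h1631.qcow2
test -f "$disk" && echo "regular file: ok"           # file-backed, not a block device
qemu-img info --output=json "$disk" | jq -r .format  # must print: qcow2
```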
Discovering Orphan VMs
- Navigate to Hypervisors > Hypervisor.
- Locate the Orphan VMs card below the GPU Devices section.
- Click Scan for Orphan VMs. A gradient progress bar appears while libvirt domains are inspected; the panel updates in real time over WebSocket.
Each row in the results table shows:
- Domain name (libvirt name, e.g. `h1631`)
- Runtime state (running / paused / shut off)
- vCPU count and RAM size
- Per-disk format/classification badges
- An Import button (eligible) or a "Skipped: ..." reason (ineligible)
Click a domain name to open the side panel with the full inspected payload: every disk including unsupported ones with their reason, every NIC with detected guest IPs, and the libvirt VNC port.
The scan result is cached for one hour. Click Re-scan to refresh.
Importing a Single VM
Click Import on an eligible row. The import dialog opens and lists every disk:
- Disks whose source directory matches an existing storage row on this hypervisor are tagged with a green **Matched: StorageName** badge. No action needed.
- Disks whose directory does not match any existing storage row are tagged Needs assignment (the match rule is sketched after this list). For each, choose one of:
- Use existing. Pick an existing storage row from the dropdown.
- Create new. Register the disk's directory as a new storage row in one click. Edit the suggested name and path before confirming.
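A sketch of the match rule, assuming (per the storage troubleshooting note below) that a disk auto-matches when its directory falls under an existing storage row's path; all paths here are hypothetical:

```bash
# Assumed semantics: "Matched" when the disk's directory is under a storage row's path.
disk_dir=$(dirname /var/lib/libvirt/images/h1631.qcow2)
storage_path=/var/lib/libvirt/images
case "$disk_dir" in
  "$storage_path"*) echo "Matched" ;;
  *)                echo "Needs assignment" ;;
esac
```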
Confirm the dialog. The VM is imported as a new Hypervisor instance:
- Owner is set to a system "Orphan Imports" user (suspended; never logs in).
- Plan is a synthetic per-VM plan named `imported-hypervisor-domain` with the libvirt-extracted CPU, RAM, and disk size. The plan is hidden from customer plan listings.
- Status reflects the libvirt runtime state.
- The imported instance carries a Pending Assignment flag.
Bulk Import
When at least two rows can be imported without storage assignment (every disk on every row is already matched to an existing storage), an Import All Eligible (n) button appears below the table. Clicking it imports each eligible row sequentially. Rows that need storage assignment are skipped with a warning toast; open them individually to choose storage.
Pending Assignment Banner
Imported instances render a banner at the top of the admin instance manage page: "Assign a user, then optionally pick a real plan and add IPs." Click Assign User in the banner to open the assignment modal.
Customers never see this banner because the synthetic owner is suspended.
Assigning a User and Plan
Click Assign User on the banner. The Assign User modal opens with two pickers:
- User (required). Picks any real user on the platform.
- Plan (optional). Defaults to the synthetic per-VM plan. Pick a real plan to replace it.
When you submit:
- The instance's owner is updated to the chosen user.
- The Pending Assignment flag clears and the banner disappears.
- If you picked a real plan, the synthetic `imported-...` plan is soft-deleted automatically (it has no other instances by construction). If you kept the synthetic plan, it stays attached.
Limitations
The v1 Orphan VM Import feature does not support:
- Non-qcow2 disks. LVM, ZFS, raw block, and Ceph RBD orphan VMs are reported but not importable. Convert to qcow2 first if you need to adopt them (a conversion sketch follows this list).
- Conversion-on-import. Hypervisor will not run `qemu-img convert` for you.
- Bulk multi-hypervisor scan. Scan one hypervisor at a time.
- Auto-IP allocation. Only IPs already detected on the guest (via `qemu-guest-agent` or a DHCP lease) are linked, and only when they fall inside a subnet you already have. IPs that don't match any subnet are logged to the import task; assign them manually afterwards.
- Snapshot history adoption. Only the active disk state is imported. Existing libvirt snapshots are not enumerated.
- CDROM and floppy adoption. These devices are listed informationally and excluded from the imported disk list.
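If you do need to adopt a non-qcow2 VM, a minimal conversion sketch (source and destination paths are illustrative; shut the VM down first and repoint its domain XML, e.g. via `virsh edit`, at the new file before re-scanning):

```bash
# Convert an LVM-backed disk to a file-backed qcow2 image (run with the VM shut off):
qemu-img convert -p -O qcow2 /dev/vg0/h1631-disk0 /var/lib/libvirt/images/h1631.qcow2
```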
If a domain you expect to see is missing from the discovery list, check that the slave on that hypervisor is online and that the domain's libvirt UUID and name don't already match an instance Hypervisor manages.
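To rule out the second cause from the shell (domain name illustrative):

```bash
# Confirm the domain exists in libvirt, then compare its UUID
# against the instances Hypervisor already tracks:
virsh list --all | grep h1631
virsh domuuid h1631
```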
Re-importing After Hard Delete
If you destroy an imported instance and want to re-adopt the same domain, the destroy flow cleans up Hypervisor's records but leaves the actual qcow2 disks and the libvirt domain in place. Run Scan for Orphan VMs again. The domain reappears in the list and can be re-imported.
The re-import guard refuses to re-adopt a domain that is currently tracked as a non-trashed instance on the same hypervisor.
Troubleshooting
Discovery completes but the table is empty
Hypervisor already manages every domain on this hypervisor. Confirm by listing instances on the hypervisor manage page.
A specific domain shows as "Skipped: ..." but should be importable
Open the side panel by clicking the domain name to see the full per-disk classification. The skip reason is shown next to each unsupported disk. The most common cause is one disk on a non-qcow2 backend (LVM volume, raw block); convert to qcow2 file-backed before re-scanning.
The import dialog shows "No existing storages, switch to Create new"
The hypervisor has no storage rows registered with a path that contains the disk's directory. Either register the directory as a new storage row from the dialog (recommended) or add a storage row first via Hypervisors > Hypervisor > Storage.
Detected IP doesn't link to my instance
The IP either wasn't reported by qemu-guest-agent / DHCP lease, or no subnet on the hypervisor contains it. Check the import task log for entries like `Found 192.0.2.10 on MAC 52:54:00:aa:bb:01 but no matching subnet on this hypervisor` (with the actual IP and MAC), then assign the IP manually after the import finishes.
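To verify what the guest actually reports, you can query both detection sources directly (domain name illustrative):

```bash
virsh domifaddr h1631 --source agent   # requires qemu-guest-agent inside the guest
virsh domifaddr h1631 --source lease   # falls back to DHCP lease information
```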
Imported instance fails on power actions, network suspend, or destroy
Ensure the slave has been updated to a release that includes the Orphan VM Import compatibility hardening. Older slaves expect interface names of the form `virh1631` (a literal `vir` prefix concatenated with the instance name), but imported VMs have libvirt-assigned names like `vnet0` that the older code paths do not recognise. Update the slave from Hypervisors > Hypervisor > Update Agent.
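To see which interface names libvirt actually assigned to an imported VM (domain name illustrative):

```bash
# Imported VMs typically show vnet-style names rather than
# the vir<instance> form older slaves expect:
virsh domiflist h1631
```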