docs: Convert 'migration' doc to RST
After conversion the table doesn't have the custom colors, but otherwise seems
to hold up well.

Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
This commit is contained in:
parent
f88d0ea2cd
commit
8841302e3d
docs/meson.build
@@ -77,7 +77,6 @@ docs_html_in_files = [
   'internals',
   'java',
   'logging',
-  'migration',
   'nss',
   'pci-hotplug',
   'php',
@@ -121,6 +120,7 @@ docs_rst_files = [
   'hacking',
   'libvirt-go',
   'libvirt-go-xml',
+  'migration',
   'newreposetup',
   'pci-addresses',
   'platforms',
docs/migration.html.in
@@ -1,688 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
  <body>
    <h1>Guest migration</h1>

    <ul id="toc"></ul>

    <p>
      Migration of guests between hosts is a complicated problem with many possible
      solutions, each with their own positive and negative points. For maximum
      flexibility of both hypervisor integration, and administrator deployment,
      libvirt implements several options for migration.
    </p>

    <h2><a id="transport">Network data transports</a></h2>

    <p>
      There are two options for the data transport used during migration, either
      the hypervisor's own <strong>native</strong> transport, or <strong>tunnelled</strong>
      over a libvirtd connection.
    </p>

    <h3><a id="transportnative">Hypervisor native transport</a></h3>
    <p>
      <em>Native</em> data transports may or may not support encryption, depending
      on the hypervisor in question, but will typically have the lowest computational costs
      by minimising the number of data copies involved. The native data transports will also
      require extra hypervisor-specific network configuration steps by the administrator when
      deploying a host. For some hypervisors, it might be necessary to open up a large range
      of ports on the firewall to allow multiple concurrent migration operations.
    </p>

    <p>
      Modern hypervisors support TLS for encryption and authentication of the
      migration connections which can be enabled using the
      <code>VIR_MIGRATE_TLS</code> flag. The <em>qemu</em> hypervisor driver
      allows users to force use of TLS via the <code>migrate_tls_force</code>
      knob configured in <code>/etc/libvirt/qemu.conf</code>.
    </p>

    <p>
      <img class="diagram" src="migration-native.png" alt="Migration native path"/>
    </p>
    <h3><a id="transporttunnel">libvirt tunnelled transport</a></h3>
    <p>
      <em>Tunnelled</em> data transports will always be capable of strong encryption
      since they are able to leverage the capabilities built in to the libvirt RPC protocol.
      The downside of a tunnelled transport, however, is that there will be extra data copies
      involved on both the source and destination hosts as the data is moved between libvirtd
      and the hypervisor. This is likely to be a more significant problem for guests with
      very large RAM sizes, which dirty memory pages quickly. On the deployment side, tunnelled
      transports do not require any extra network configuration over and above what's already
      required for general libvirtd <a href="remote.html">remote access</a>, and only a single
      port needs to be open on the firewall to support multiple concurrent migration operations.
    </p>

    <p>
      <em>Note:</em> Certain features such as migration of non-shared storage
      (<code>VIR_MIGRATE_NON_SHARED_DISK</code>), the multi-connection migration
      (<code>VIR_MIGRATE_PARALLEL</code>), or post-copy migration
      (<code>VIR_MIGRATE_POSTCOPY</code>) may not be available when using
      libvirt's tunnelling.
    </p>

    <p>
      <img class="diagram" src="migration-tunnel.png" alt="Migration tunnel path"/>
    </p>
    <h2><a id="flow">Communication control paths/flows</a></h2>

    <p>
      Migration of virtual machines requires close co-ordination of the two
      hosts involved, as well as the application invoking the migration,
      which may be on the source, the destination, or a third host.
    </p>

    <h3><a id="flowmanageddirect">Managed direct migration</a></h3>

    <p>
      With <em>managed direct</em> migration, the libvirt client process
      controls the various phases of migration. The client application must
      be able to connect and authenticate with the libvirtd daemons on both
      the source and destination hosts. There is no need for the two libvirtd
      daemons to communicate with each other. If the client application
      crashes, or otherwise loses its connection to libvirtd during the
      migration process, an attempt will be made to abort the migration and
      restart the guest CPUs on the source host. There may be scenarios
      where this cannot be safely done, in which cases the guest will be
      left paused on one or both of the hosts.
    </p>

    <p>
      <img class="diagram" src="migration-managed-direct.png" alt="Migration direct, managed"/>
    </p>

    <h3><a id="flowpeer2peer">Managed peer to peer migration</a></h3>

    <p>
      With <em>peer to peer</em> migration, the libvirt client process only
      talks to the libvirtd daemon on the source host. The source libvirtd
      daemon controls the entire migration process itself, by directly
      connecting to the destination host libvirtd. If the client application
      crashes, or otherwise loses its connection to libvirtd, the migration
      process will continue uninterrupted until completion. Note that the
      source libvirtd uses its own credentials (typically root) to
      connect to the destination, rather than the credentials used
      by the client to connect to the source; if these differ, it is
      common to run into a situation where a client can connect to the
      destination directly but the source cannot make the connection to
      set up the peer-to-peer migration.
    </p>

    <p>
      <img class="diagram" src="migration-managed-p2p.png" alt="Migration peer-to-peer"/>
    </p>

    <h3><a id="flowunmanageddirect">Unmanaged direct migration</a></h3>

    <p>
      With <em>unmanaged direct</em> migration, neither the libvirt client
      nor the libvirtd daemon controls the migration process. Control is
      instead delegated to the hypervisor's own management services (if any).
      The libvirt client merely initiates the migration via the hypervisor's
      management layer. If the libvirt client or libvirtd crashes, the
      migration process will continue uninterrupted until completion.
    </p>

    <p>
      <img class="diagram" src="migration-unmanaged-direct.png" alt="Migration direct, unmanaged"/>
    </p>
    <h2><a id="security">Data security</a></h2>

    <p>
      Since the migration data stream includes a complete copy of the guest
      OS RAM, snooping of the migration data stream may allow compromise
      of sensitive guest information. If the virtualization hosts have
      multiple network interfaces, or if the network switches support
      tagged VLANs, then it is very desirable to separate guest network
      traffic from migration or management traffic.
    </p>

    <p>
      In some scenarios, even a separate network for migration data may
      not offer sufficient security. In this case it is possible to apply
      encryption to the migration data stream. If the hypervisor does not
      itself offer encryption, then the libvirt tunnelled migration
      facility should be used.
    </p>

    <h2><a id="offline">Offline migration</a></h2>

    <p>
      Offline migration transfers the inactive definition of a domain
      (which may or may not be active). After successful completion, the
      domain remains in its current state on the source host and is defined
      but inactive on the destination host. It's a bit more clever than
      <code>virsh dumpxml</code> on the source host followed by
      <code>virsh define</code> on the destination host, as offline migration
      will run the pre-migration hook to update the domain XML on the
      destination host. Currently, copying non-shared storage or other
      file-based storage (e.g. UEFI variable storage) is not supported
      during offline migration.
    </p>
    <h2><a id="uris">Migration URIs</a></h2>

    <p>
      Initiating a guest migration requires the client application to
      specify up to three URIs, depending on the choice of control
      flow and/or APIs used. The first URI is that of the libvirt
      connection to the source host, where the virtual guest is
      currently running. The second URI is that of the libvirt
      connection to the destination host, where the virtual guest
      will be moved to (and in peer-to-peer migrations, this is from
      the perspective of the source, not the client). The third URI is
      a hypervisor specific URI used to control how the guest will be
      migrated. With any managed migration flow, the first and second
      URIs are compulsory, while the third URI is optional. With the
      unmanaged direct migration mode, the first and third URIs are
      compulsory and the second URI is not used.
    </p>

    <p>
      Ordinarily management applications only need to care about the
      first and second URIs, which are both in the normal libvirt
      connection URI format. Libvirt will then automatically determine
      the hypervisor specific URI, by looking up the target host's
      configured hostname. There are a few scenarios where the management
      application may wish to have direct control over the third URI.
    </p>

    <ol>
      <li>The configured hostname is incorrect, or DNS is broken. If a
        host has a hostname which will not resolve to match one of its
        public IP addresses, then libvirt will generate an incorrect
        URI. In this case the management application should specify the
        hypervisor specific URI explicitly, using an IP address, or a
        correct hostname.</li>
      <li>The host has multiple network interfaces. If a host has multiple
        network interfaces, it might be desirable for the migration data
        stream to be sent over a specific interface for either security
        or performance reasons. In this case the management application
        should specify the hypervisor specific URI, using an IP address
        associated with the network to be used.</li>
      <li>The firewall restricts what ports are available. When libvirt
        generates a migration URI it will pick a port number using hypervisor
        specific rules. Some hypervisors only require a single port to be
        open in the firewalls, while others require a whole range of port
        numbers. In the latter case the management application may wish
        to choose a specific port number outside the default range in order
        to comply with local firewall policies.</li>
      <li>The second URI uses the UNIX transport method. In this advanced case
        libvirt should not guess a <code>migrateuri</code> and it should be
        specified using a UNIX socket path URI:
        <code>unix:///path/to/socket</code>.</li>
    </ol>
    <h2><a id="config">Configuration file handling</a></h2>

    <p>
      There are two types of virtual machines known to libvirt. A <em>transient</em>
      guest only exists while it is running, and has no configuration file stored
      on disk. A <em>persistent</em> guest maintains a configuration file on disk
      even when it is not running.
    </p>

    <p>
      By default, a migration operation will not attempt to modify any configuration
      files that may be stored on either the source or destination host. It is the
      responsibility of the administrator, or management application, to manage
      distribution of configuration files (if desired). It is important to note
      that the <code>/etc/libvirt</code> directory
      <strong>MUST NEVER BE SHARED BETWEEN HOSTS</strong>. There are some
      typical scenarios that might be applicable:
    </p>

    <ul>
      <li>Centralized configuration files outside libvirt, in shared storage. A cluster
        aware management application may maintain all the master guest configuration
        files in a cluster filesystem. When attempting to start a guest, the config
        will be read from the cluster FS and used to deploy a persistent guest.
        For migration the configuration will need to be copied to the destination
        host and removed on the original.
      </li>
      <li>Centralized configuration files outside libvirt, in a database. A data center
        management application may not store configuration files at all. Instead it
        may generate libvirt XML on the fly when a guest is booted. It will typically
        use transient guests, and thus not have to consider configuration files during
        migration.
      </li>
      <li>Distributed configuration inside libvirt. The configuration file for each
        guest is copied to every host where the guest is able to run. Upon migration
        the existing config merely needs to be updated with any changes.
      </li>
      <li>Ad-hoc configuration management inside libvirt. Each guest is tied to a
        specific host and rarely migrated. When migration is required, the config
        is moved from one host to the other.
      </li>
    </ul>

    <p>
      As mentioned above, libvirt will not modify configuration files during
      migration by default. The <code>virsh</code> command has two flags to
      influence this behaviour. The <code>--undefinesource</code> flag
      will cause the configuration file to be removed on the source host
      after a successful migration. The <code>--persistent</code> flag will
      cause a configuration file to be created on the destination host
      after a successful migration. The following table summarizes the
      configuration file handling in all possible state and flag
      combinations.
    </p>
    <table>
      <thead>
        <tr class="head">
          <th colspan="3">Before migration</th>
          <th colspan="2">Flags</th>
          <th colspan="3">After migration</th>
        </tr>
        <tr class="subhead">
          <th>Source type</th>
          <th>Source config</th>
          <th>Dest config</th>
          <th>--undefinesource</th>
          <th>--persistent</th>
          <th>Dest type</th>
          <th>Source config</th>
          <th>Dest config</th>
        </tr>
      </thead>
      <tbody>
        <!-- src:N, dst:N -->
        <tr><td>Transient</td><td class="n">N</td><td class="n">N</td><td class="n">N</td><td class="n">N</td><td>Transient</td><td class="n">N</td><td class="n">N</td></tr>
        <tr><td>Transient</td><td class="n">N</td><td class="n">N</td><td class="y">Y</td><td class="n">N</td><td>Transient</td><td class="n">N</td><td class="n">N</td></tr>
        <tr><td>Transient</td><td class="n">N</td><td class="n">N</td><td class="n">N</td><td class="y">Y</td><td>Persistent</td><td class="n">N</td><td class="y">Y</td></tr>
        <tr><td>Transient</td><td class="n">N</td><td class="n">N</td><td class="y">Y</td><td class="y">Y</td><td>Persistent</td><td class="n">N</td><td class="y">Y</td></tr>

        <!-- src:N, dst:Y -->
        <tr><td>Transient</td><td class="n">N</td><td class="y">Y</td><td class="n">N</td><td class="n">N</td><td>Persistent</td><td class="n">N</td><td class="y">Y<br/>(unchanged dest config)</td></tr>
        <tr><td>Transient</td><td class="n">N</td><td class="y">Y</td><td class="y">Y</td><td class="n">N</td><td>Persistent</td><td class="n">N</td><td class="y">Y<br/>(unchanged dest config)</td></tr>
        <tr><td>Transient</td><td class="n">N</td><td class="y">Y</td><td class="n">N</td><td class="y">Y</td><td>Persistent</td><td class="n">N</td><td class="y">Y<br/>(replaced with source)</td></tr>
        <tr><td>Transient</td><td class="n">N</td><td class="y">Y</td><td class="y">Y</td><td class="y">Y</td><td>Persistent</td><td class="n">N</td><td class="y">Y<br/>(replaced with source)</td></tr>

        <!-- src:Y, dst:N -->
        <tr><td>Persistent</td><td class="y">Y</td><td class="n">N</td><td class="n">N</td><td class="n">N</td><td>Transient</td><td class="y">Y</td><td class="n">N</td></tr>
        <tr><td>Persistent</td><td class="y">Y</td><td class="n">N</td><td class="y">Y</td><td class="n">N</td><td>Transient</td><td class="n">N</td><td class="n">N</td></tr>
        <tr><td>Persistent</td><td class="y">Y</td><td class="n">N</td><td class="n">N</td><td class="y">Y</td><td>Persistent</td><td class="y">Y</td><td class="y">Y</td></tr>
        <tr><td>Persistent</td><td class="y">Y</td><td class="n">N</td><td class="y">Y</td><td class="y">Y</td><td>Persistent</td><td class="n">N</td><td class="y">Y</td></tr>

        <!-- src:Y, dst:Y -->
        <tr><td>Persistent</td><td class="y">Y</td><td class="y">Y</td><td class="n">N</td><td class="n">N</td><td>Persistent</td><td class="y">Y</td><td class="y">Y<br/>(unchanged dest config)</td></tr>
        <tr><td>Persistent</td><td class="y">Y</td><td class="y">Y</td><td class="y">Y</td><td class="n">N</td><td>Persistent</td><td class="n">N</td><td class="y">Y<br/>(unchanged dest config)</td></tr>
        <tr><td>Persistent</td><td class="y">Y</td><td class="y">Y</td><td class="n">N</td><td class="y">Y</td><td>Persistent</td><td class="y">Y</td><td class="y">Y<br/>(replaced with source)</td></tr>
        <tr><td>Persistent</td><td class="y">Y</td><td class="y">Y</td><td class="y">Y</td><td class="y">Y</td><td>Persistent</td><td class="n">N</td><td class="y">Y<br/>(replaced with source)</td></tr>
      </tbody>
    </table>
    <h2><a id="scenarios">Migration scenarios</a></h2>

    <h3><a id="scenarionativedirect">Native migration, client to two libvirtd servers</a></h3>

    <p>
      At an API level this requires use of virDomainMigrate, without the
      VIR_MIGRATE_PEER2PEER flag set. The destination libvirtd server
      will automatically determine the native hypervisor URI for migration
      based off the primary hostname. To force migration over an alternate
      network interface the optional hypervisor specific URI must be provided.
    </p>

    <pre>
syntax: virsh migrate GUESTNAME DEST-LIBVIRT-URI [HV-URI]


eg using default network interface

virsh migrate web1 qemu+ssh://desthost/system
virsh migrate web1 xen+tls://desthost/system


eg using secondary network interface

virsh migrate web1 qemu://desthost/system tcp://10.0.0.1/
    </pre>

    <p>
      Supported by Xen, QEMU, VMware and VirtualBox drivers.
    </p>

    <h3><a id="scenarionativepeer2peer">Native migration, client to and peer2peer between, two libvirtd servers</a></h3>

    <p>
      virDomainMigrate, with the VIR_MIGRATE_PEER2PEER flag set,
      using the libvirt URI format for the 'uri' parameter. The
      destination libvirtd server will automatically determine
      the native hypervisor URI for migration, based off the
      primary hostname. The optional uri parameter controls how
      the source libvirtd connects to the destination libvirtd,
      in case it is not accessible using the same address that
      the client uses to connect to the destination, or a different
      encryption/auth scheme is required. There is no
      scope for forcing an alternative network interface for the
      native migration data with this method.
    </p>

    <p>
      This mode cannot be invoked from virsh.
    </p>

    <p>
      Supported by the QEMU driver.
    </p>

    <h3><a id="scenariotunnelpeer2peer1">Tunnelled migration, client and peer2peer between two libvirtd servers</a></h3>

    <p>
      virDomainMigrate, with the VIR_MIGRATE_PEER2PEER &amp; VIR_MIGRATE_TUNNELLED
      flags set, using the libvirt URI format for the 'uri' parameter. The
      destination libvirtd server will automatically determine
      the native hypervisor URI for migration, based off the
      primary hostname. The optional uri parameter controls how
      the source libvirtd connects to the destination libvirtd,
      in case it is not accessible using the same address that
      the client uses to connect to the destination, or a different
      encryption/auth scheme is required. The native hypervisor URI
      format is not used at all.
    </p>

    <p>
      This mode cannot be invoked from virsh.
    </p>

    <p>
      Supported by the QEMU driver.
    </p>

    <h3><a id="nativedirectunmanaged">Native migration, client to one libvirtd server</a></h3>

    <p>
      virDomainMigrateToURI, without the VIR_MIGRATE_PEER2PEER flag set,
      using a hypervisor specific URI format for the 'uri' parameter.
      There is no use or requirement for a destination libvirtd instance
      at all. This is typically used when the hypervisor has its own
      native management daemon available to handle incoming migration
      attempts on the destination.
    </p>

    <pre>
syntax: virsh migrate GUESTNAME HV-URI


eg using same libvirt URI for all connections

    </pre>

    <h3><a id="nativepeer2peer">Native migration, peer2peer between two libvirtd servers</a></h3>

    <p>
      virDomainMigrateToURI, with the VIR_MIGRATE_PEER2PEER flag set,
      using the libvirt URI format for the 'uri' parameter. The
      destination libvirtd server will automatically determine
      the native hypervisor URI for migration, based off the
      primary hostname. There is no scope for forcing an alternative
      network interface for the native migration data with this
      method. The destination URI must be reachable using the source
      libvirtd credentials (which are not necessarily the same as the
      credentials of the client in connecting to the source).
    </p>

    <pre>
syntax: virsh migrate GUESTNAME DEST-LIBVIRT-URI [ALT-DEST-LIBVIRT-URI]


eg using same libvirt URI for all connections

virsh migrate --p2p web1 qemu+ssh://desthost/system


eg using different libvirt URI auth scheme for peer2peer connections

virsh migrate --p2p web1 qemu+ssh://desthost/system qemu+tls://desthost/system


eg using different libvirt URI hostname for peer2peer connections

virsh migrate --p2p web1 qemu+ssh://desthost/system qemu+ssh://10.0.0.1/system
    </pre>

    <p>
      Supported by the QEMU driver.
    </p>

    <h3><a id="scenariotunnelpeer2peer2">Tunnelled migration, peer2peer between two libvirtd servers</a></h3>

    <p>
      virDomainMigrateToURI, with the VIR_MIGRATE_PEER2PEER &amp; VIR_MIGRATE_TUNNELLED
      flags set, using the libvirt URI format for the 'uri' parameter. The
      destination libvirtd server will automatically determine
      the native hypervisor URI for migration, based off the
      primary hostname. The optional uri parameter controls how
      the source libvirtd connects to the destination libvirtd,
      in case it is not accessible using the same address that
      the client uses to connect to the destination, or a different
      encryption/auth scheme is required. The native hypervisor URI
      format is not used at all. The destination URI must be
      reachable using the source libvirtd credentials (which are not
      necessarily the same as the credentials of the client in
      connecting to the source).
    </p>

    <pre>
syntax: virsh migrate GUESTNAME DEST-LIBVIRT-URI [ALT-DEST-LIBVIRT-URI]


eg using same libvirt URI for all connections

virsh migrate --p2p --tunnelled web1 qemu+ssh://desthost/system


eg using different libvirt URI auth scheme for peer2peer connections

virsh migrate --p2p --tunnelled web1 qemu+ssh://desthost/system qemu+tls://desthost/system


eg using different libvirt URI hostname for peer2peer connections

virsh migrate --p2p --tunnelled web1 qemu+ssh://desthost/system qemu+ssh://10.0.0.1/system
    </pre>

    <p>
      Supported by the QEMU driver.
    </p>

    <h3><a id="scenariounixsocket">Migration using only UNIX sockets</a></h3>

    <p>
      In niche scenarios where the libvirt daemon does not have access to the
      network (e.g. when running in a restricted container on a host that does
      have network access), when a management application wants to have
      complete control over the transfer, or when migrating between two
      containers on the same host, all the communication can be done using
      UNIX sockets. This includes connecting to a non-standard socket path
      for the destination daemon, and using UNIX sockets for the hypervisor's
      communication or for the NBD data transfer. All of that can be used with
      both peer2peer and direct migration options.
    </p>

    <p>
      Example using <code>/tmp/migdir</code> as a directory representing the
      same path visible from both libvirt daemons. That can be achieved by
      bind-mounting the same directory to different containers running separate
      daemons or by forwarding connections to these sockets manually
      (using <code>socat</code>, <code>netcat</code> or a custom piece of
      software):
    </p>
    <pre>
virsh migrate --domain web1 [--p2p] --copy-storage-all \
      --desturi 'qemu+unix:///system?socket=/tmp/migdir/test-sock-driver' \
      --migrateuri 'unix:///tmp/migdir/test-sock-qemu' \
      --disks-uri unix:///tmp/migdir/test-sock-nbd
    </pre>

    <p>
      One caveat is that on SELinux-enabled systems all the sockets that the
      hypervisor is going to connect to need to have the proper context, and
      that is chosen before the socket's creation by the process that creates
      it. That is usually done by using the
      <code>setsockcreatecon{,raw}()</code> functions. Generally
      <code>system_r:system_u:svirt_socket_t:s0</code> should do the trick,
      but check the SELinux rules and settings of your system.
    </p>

    <p>
      Supported by the QEMU driver.
    </p>

  </body>
</html>
458
docs/migration.rst
Normal file
@@ -0,0 +1,458 @@
===============
Guest migration
===============

.. contents::

Migration of guests between hosts is a complicated problem with many possible
solutions, each with their own positive and negative points. For maximum
flexibility of both hypervisor integration, and administrator deployment,
libvirt implements several options for migration.

Network data transports
-----------------------

There are two options for the data transport used during migration, either the
hypervisor's own **native** transport, or **tunnelled** over a libvirtd
connection.

Hypervisor native transport
~~~~~~~~~~~~~~~~~~~~~~~~~~~

*Native* data transports may or may not support encryption, depending on the
hypervisor in question, but will typically have the lowest computational costs
by minimising the number of data copies involved. The native data transports
will also require extra hypervisor-specific network configuration steps by the
administrator when deploying a host. For some hypervisors, it might be necessary
to open up a large range of ports on the firewall to allow multiple concurrent
migration operations.

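For example, with the QEMU driver libvirt picks migration ports from a fixed
range (49152-49215 by default, tunable via ``migration_port_min`` and
``migration_port_max`` in ``qemu.conf``), so a deployment can open the whole
range up front. A sketch assuming a firewalld-based destination host; adjust to
your firewall tooling:

::

   # allow the default QEMU migration port range through the firewall
   firewall-cmd --permanent --add-port=49152-49215/tcp
   firewall-cmd --reload
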
Modern hypervisors support TLS for encryption and authentication of the
migration connections which can be enabled using the ``VIR_MIGRATE_TLS`` flag.
The *qemu* hypervisor driver allows users to force use of TLS via the
``migrate_tls_force`` knob configured in ``/etc/libvirt/qemu.conf``.

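A minimal sketch of both mechanisms (``web1`` and ``desthost`` are
placeholders):

::

   # /etc/libvirt/qemu.conf on the migration hosts: refuse non-TLS migrations
   migrate_tls_force = 1

   # client side: virsh requests VIR_MIGRATE_TLS with the --tls flag
   virsh migrate --live --tls web1 qemu+ssh://desthost/system
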
|Migration native path|

libvirt tunnelled transport
~~~~~~~~~~~~~~~~~~~~~~~~~~~

*Tunnelled* data transports will always be capable of strong encryption since
they are able to leverage the capabilities built in to the libvirt RPC protocol.
The downside of a tunnelled transport, however, is that there will be extra data
copies involved on both the source and destination hosts as the data is moved
between libvirtd and the hypervisor. This is likely to be a more significant
problem for guests with very large RAM sizes, which dirty memory pages quickly.
On the deployment side, tunnelled transports do not require any extra network
configuration over and above what's already required for general libvirtd
`remote access <remote.html>`__, and only a single port needs to be open on the
firewall to support multiple concurrent migration operations.

*Note:* Certain features such as migration of non-shared storage
(``VIR_MIGRATE_NON_SHARED_DISK``), the multi-connection migration
(``VIR_MIGRATE_PARALLEL``), or post-copy migration (``VIR_MIGRATE_POSTCOPY``)
may not be available when using libvirt's tunnelling.

|Migration tunnel path|

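Such features are instead requested on the native transport; a sketch with
virsh (availability still depends on the hypervisor and its version):

::

   # native transport with four parallel migration connections
   virsh migrate --live --parallel --parallel-connections 4 \
         web1 qemu+tls://desthost/system
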
Communication control paths/flows
---------------------------------

Migration of virtual machines requires close co-ordination of the two hosts
involved, as well as the application invoking the migration, which may be on the
source, the destination, or a third host.

Managed direct migration
~~~~~~~~~~~~~~~~~~~~~~~~

With *managed direct* migration, the libvirt client process controls the various
phases of migration. The client application must be able to connect and
authenticate with the libvirtd daemons on both the source and destination hosts.
There is no need for the two libvirtd daemons to communicate with each other. If
the client application crashes, or otherwise loses its connection to libvirtd
during the migration process, an attempt will be made to abort the migration and
restart the guest CPUs on the source host. There may be scenarios where this
cannot be safely done, in which cases the guest will be left paused on one or
both of the hosts.

|Migration direct, managed|

Managed peer to peer migration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

With *peer to peer* migration, the libvirt client process only talks to the
libvirtd daemon on the source host. The source libvirtd daemon controls the
entire migration process itself, by directly connecting to the destination host
libvirtd. If the client application crashes, or otherwise loses its connection
to libvirtd, the migration process will continue uninterrupted until completion.
Note that the source libvirtd uses its own credentials (typically root) to
connect to the destination, rather than the credentials used by the client to
connect to the source; if these differ, it is common to run into a situation
where a client can connect to the destination directly but the source cannot
make the connection to set up the peer-to-peer migration.

|Migration peer-to-peer|

Unmanaged direct migration
~~~~~~~~~~~~~~~~~~~~~~~~~~

With *unmanaged direct* migration, neither the libvirt client nor the libvirtd
daemon controls the migration process. Control is instead delegated to the
hypervisor's own management services (if any). The libvirt client merely
initiates the migration via the hypervisor's management layer. If the libvirt
client or libvirtd crashes, the migration process will continue uninterrupted
until completion.

|Migration direct, unmanaged|

Data security
-------------

Since the migration data stream includes a complete copy of the guest OS RAM,
snooping of the migration data stream may allow compromise of sensitive guest
information. If the virtualization hosts have multiple network interfaces, or if
the network switches support tagged VLANs, then it is very desirable to separate
guest network traffic from migration or management traffic.

In some scenarios, even a separate network for migration data may not offer
sufficient security. In this case it is possible to apply encryption to the
migration data stream. If the hypervisor does not itself offer encryption, then
the libvirt tunnelled migration facility should be used.

Offline migration
-----------------

Offline migration transfers the inactive definition of a domain (which may or
may not be active). After successful completion, the domain remains in its
current state on the source host and is defined but inactive on the destination
host. It's a bit more clever than ``virsh dumpxml`` on the source host followed
by ``virsh define`` on the destination host, as offline migration will run the
pre-migration hook to update the domain XML on the destination host. Currently,
copying non-shared storage or other file-based storage (e.g. UEFI variable
storage) is not supported during offline migration.

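A sketch with virsh (``--offline`` has to be combined with ``--persistent`` so
that the definition is actually created on the destination):

::

   virsh migrate --offline --persistent --undefinesource \
         web1 qemu+ssh://desthost/system
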
Migration URIs
--------------

Initiating a guest migration requires the client application to specify up to
three URIs, depending on the choice of control flow and/or APIs used. The first
URI is that of the libvirt connection to the source host, where the virtual
guest is currently running. The second URI is that of the libvirt connection to
the destination host, where the virtual guest will be moved to (and in
peer-to-peer migrations, this is from the perspective of the source, not the
client). The third URI is a hypervisor specific URI used to control how the
guest will be migrated. With any managed migration flow, the first and second
URIs are compulsory, while the third URI is optional. With the unmanaged direct
migration mode, the first and third URIs are compulsory and the second URI is
not used.

Ordinarily management applications only need to care about the first and second
URIs, which are both in the normal libvirt connection URI format. Libvirt will
then automatically determine the hypervisor specific URI, by looking up the
target host's configured hostname. There are a few scenarios where the
management application may wish to have direct control over the third URI.

#. The configured hostname is incorrect, or DNS is broken. If a host has a
   hostname which will not resolve to match one of its public IP addresses, then
   libvirt will generate an incorrect URI. In this case the management
   application should specify the hypervisor specific URI explicitly, using an
   IP address, or a correct hostname.
#. The host has multiple network interfaces. If a host has multiple network
   interfaces, it might be desirable for the migration data stream to be sent
   over a specific interface for either security or performance reasons. In this
   case the management application should specify the hypervisor specific URI,
   using an IP address associated with the network to be used.
#. The firewall restricts what ports are available. When libvirt generates a
   migration URI it will pick a port number using hypervisor specific rules.
   Some hypervisors only require a single port to be open in the firewalls,
   while others require a whole range of port numbers. In the latter case the
   management application may wish to choose a specific port number outside the
   default range in order to comply with local firewall policies (see the
   sketch after this list).
#. The second URI uses the UNIX transport method. In this advanced case libvirt
   should not guess a ``migrateuri`` and it should be specified using a UNIX
   socket path URI: ``unix:///path/to/socket``.

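A sketch of overriding the third URI with virsh (the address and port are
placeholders; the ``tcp://`` scheme applies to QEMU native migration):

::

   # force the migration data onto a specific interface and port
   virsh migrate web1 qemu+ssh://desthost/system tcp://10.0.0.1:49200/
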
Configuration file handling
---------------------------

There are two types of virtual machines known to libvirt. A *transient* guest
only exists while it is running, and has no configuration file stored on disk. A
*persistent* guest maintains a configuration file on disk even when it is not
running.

By default, a migration operation will not attempt to modify any configuration
files that may be stored on either the source or destination host. It is the
responsibility of the administrator, or management application, to manage
distribution of configuration files (if desired). It is important to note that
the ``/etc/libvirt`` directory **MUST NEVER BE SHARED BETWEEN HOSTS**. There are
some typical scenarios that might be applicable:

- Centralized configuration files outside libvirt, in shared storage. A cluster
  aware management application may maintain all the master guest configuration
  files in a cluster filesystem. When attempting to start a guest, the config
  will be read from the cluster FS and used to deploy a persistent guest. For
  migration the configuration will need to be copied to the destination host
  and removed on the original.
- Centralized configuration files outside libvirt, in a database. A data center
  management application may not store configuration files at all. Instead it
  may generate libvirt XML on the fly when a guest is booted. It will typically
  use transient guests, and thus not have to consider configuration files
  during migration.
- Distributed configuration inside libvirt. The configuration file for each
  guest is copied to every host where the guest is able to run. Upon migration
  the existing config merely needs to be updated with any changes.
- Ad-hoc configuration management inside libvirt. Each guest is tied to a
  specific host and rarely migrated. When migration is required, the config is
  moved from one host to the other.

As mentioned above, libvirt will not modify configuration files during migration
by default. The ``virsh`` command has two flags to influence this behaviour. The
``--undefinesource`` flag will cause the configuration file to be removed on the
source host after a successful migration. The ``--persistent`` flag will cause a
configuration file to be created on the destination host after a successful
migration. The following table summarizes the configuration file handling in all
possible state and flag combinations.

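For instance, moving a persistent guest together with its definition (a sketch
using the flags described above):

::

   # remove the config on the source, create it on the destination
   virsh migrate --live --undefinesource --persistent \
         web1 qemu+ssh://desthost/system
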
+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+
| Before migration                                          | Flags                                 | After migration                                           |
+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+
| Source type       | Source config     | Dest config       | --undefinesource  | --persistent      | Dest type         | Source config     | Dest config       |
+===================+===================+===================+===================+===================+===================+===================+===================+
| Transient         | N                 | N                 | N                 | N                 | Transient         | N                 | N                 |
+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+
| Transient         | N                 | N                 | Y                 | N                 | Transient         | N                 | N                 |
+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+
| Transient         | N                 | N                 | N                 | Y                 | Persistent        | N                 | Y                 |
+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+
| Transient         | N                 | N                 | Y                 | Y                 | Persistent        | N                 | Y                 |
+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+
| Transient         | N                 | Y                 | N                 | N                 | Persistent        | N                 | Y                 |
|                   |                   |                   |                   |                   |                   |                   | (unchanged dest   |
|                   |                   |                   |                   |                   |                   |                   | config)           |
+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+
| Transient         | N                 | Y                 | Y                 | N                 | Persistent        | N                 | Y                 |
|                   |                   |                   |                   |                   |                   |                   | (unchanged dest   |
|                   |                   |                   |                   |                   |                   |                   | config)           |
+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+
| Transient         | N                 | Y                 | N                 | Y                 | Persistent        | N                 | Y                 |
|                   |                   |                   |                   |                   |                   |                   | (replaced with    |
|                   |                   |                   |                   |                   |                   |                   | source)           |
+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+
| Transient         | N                 | Y                 | Y                 | Y                 | Persistent        | N                 | Y                 |
|                   |                   |                   |                   |                   |                   |                   | (replaced with    |
|                   |                   |                   |                   |                   |                   |                   | source)           |
+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+
| Persistent        | Y                 | N                 | N                 | N                 | Transient         | Y                 | N                 |
+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+
| Persistent        | Y                 | N                 | Y                 | N                 | Transient         | N                 | N                 |
+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+
| Persistent        | Y                 | N                 | N                 | Y                 | Persistent        | Y                 | Y                 |
+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+
| Persistent        | Y                 | N                 | Y                 | Y                 | Persistent        | N                 | Y                 |
+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+
| Persistent        | Y                 | Y                 | N                 | N                 | Persistent        | Y                 | Y                 |
|                   |                   |                   |                   |                   |                   |                   | (unchanged dest   |
|                   |                   |                   |                   |                   |                   |                   | config)           |
+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+
| Persistent        | Y                 | Y                 | Y                 | N                 | Persistent        | N                 | Y                 |
|                   |                   |                   |                   |                   |                   |                   | (unchanged dest   |
|                   |                   |                   |                   |                   |                   |                   | config)           |
+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+
| Persistent        | Y                 | Y                 | N                 | Y                 | Persistent        | Y                 | Y                 |
|                   |                   |                   |                   |                   |                   |                   | (replaced with    |
|                   |                   |                   |                   |                   |                   |                   | source)           |
+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+
| Persistent        | Y                 | Y                 | Y                 | Y                 | Persistent        | N                 | Y                 |
|                   |                   |                   |                   |                   |                   |                   | (replaced with    |
|                   |                   |                   |                   |                   |                   |                   | source)           |
+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+

Migration scenarios
-------------------

Native migration, client to two libvirtd servers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

At an API level this requires use of virDomainMigrate, without the
VIR_MIGRATE_PEER2PEER flag set. The destination libvirtd server will
automatically determine the native hypervisor URI for migration based off the
primary hostname. To force migration over an alternate network interface the
optional hypervisor specific URI must be provided.

::

   syntax: virsh migrate GUESTNAME DEST-LIBVIRT-URI [HV-URI]


   eg using default network interface

   virsh migrate web1 qemu+ssh://desthost/system
   virsh migrate web1 xen+tls://desthost/system


   eg using secondary network interface

   virsh migrate web1 qemu://desthost/system tcp://10.0.0.1/

Supported by Xen, QEMU, VMware and VirtualBox drivers.

Native migration, client to and peer2peer between, two libvirtd servers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

virDomainMigrate, with the VIR_MIGRATE_PEER2PEER flag set, using the libvirt URI
format for the 'uri' parameter. The destination libvirtd server will
automatically determine the native hypervisor URI for migration, based off the
primary hostname. The optional uri parameter controls how the source libvirtd
connects to the destination libvirtd, in case it is not accessible using the
same address that the client uses to connect to the destination, or a different
encryption/auth scheme is required. There is no scope for forcing an alternative
network interface for the native migration data with this method.

This mode cannot be invoked from virsh.

Supported by the QEMU driver.

Tunnelled migration, client and peer2peer between two libvirtd servers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

virDomainMigrate, with the VIR_MIGRATE_PEER2PEER & VIR_MIGRATE_TUNNELLED flags
set, using the libvirt URI format for the 'uri' parameter. The destination
libvirtd server will automatically determine the native hypervisor URI for
migration, based off the primary hostname. The optional uri parameter controls
how the source libvirtd connects to the destination libvirtd, in case it is not
accessible using the same address that the client uses to connect to the
destination, or a different encryption/auth scheme is required. The native
hypervisor URI format is not used at all.

This mode cannot be invoked from virsh.

Supported by the QEMU driver.

Native migration, client to one libvirtd server
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

virDomainMigrateToURI, without the VIR_MIGRATE_PEER2PEER flag set, using a
hypervisor specific URI format for the 'uri' parameter. There is no use or
requirement for a destination libvirtd instance at all. This is typically used
when the hypervisor has its own native management daemon available to handle
incoming migration attempts on the destination.

::

   syntax: virsh migrate GUESTNAME HV-URI


   eg using same libvirt URI for all connections

Native migration, peer2peer between two libvirtd servers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

virDomainMigrateToURI, with the VIR_MIGRATE_PEER2PEER flag set, using the
libvirt URI format for the 'uri' parameter. The destination libvirtd server will
automatically determine the native hypervisor URI for migration, based off the
primary hostname. There is no scope for forcing an alternative network interface
for the native migration data with this method. The destination URI must be
reachable using the source libvirtd credentials (which are not necessarily the
same as the credentials of the client in connecting to the source).

::

   syntax: virsh migrate GUESTNAME DEST-LIBVIRT-URI [ALT-DEST-LIBVIRT-URI]


   eg using same libvirt URI for all connections

   virsh migrate --p2p web1 qemu+ssh://desthost/system


   eg using different libvirt URI auth scheme for peer2peer connections

   virsh migrate --p2p web1 qemu+ssh://desthost/system qemu+tls://desthost/system


   eg using different libvirt URI hostname for peer2peer connections

   virsh migrate --p2p web1 qemu+ssh://desthost/system qemu+ssh://10.0.0.1/system

Supported by the QEMU driver.

Tunnelled migration, peer2peer between two libvirtd servers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

virDomainMigrateToURI, with the VIR_MIGRATE_PEER2PEER & VIR_MIGRATE_TUNNELLED
flags set, using the libvirt URI format for the 'uri' parameter. The destination
libvirtd server will automatically determine the native hypervisor URI for
migration, based off the primary hostname. The optional uri parameter controls
how the source libvirtd connects to the destination libvirtd, in case it is not
accessible using the same address that the client uses to connect to the
destination, or a different encryption/auth scheme is required. The native
hypervisor URI format is not used at all. The destination URI must be reachable
using the source libvirtd credentials (which are not necessarily the same as the
credentials of the client in connecting to the source).

::

   syntax: virsh migrate GUESTNAME DEST-LIBVIRT-URI [ALT-DEST-LIBVIRT-URI]


   eg using same libvirt URI for all connections

   virsh migrate --p2p --tunnelled web1 qemu+ssh://desthost/system


   eg using different libvirt URI auth scheme for peer2peer connections

   virsh migrate --p2p --tunnelled web1 qemu+ssh://desthost/system qemu+tls://desthost/system


   eg using different libvirt URI hostname for peer2peer connections

   virsh migrate --p2p --tunnelled web1 qemu+ssh://desthost/system qemu+ssh://10.0.0.1/system

Supported by the QEMU driver.

Migration using only UNIX sockets
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In niche scenarios where the libvirt daemon does not have access to the network
(e.g. when running in a restricted container on a host that does have network
access), when a management application wants to have complete control over the
transfer, or when migrating between two containers on the same host, all the
communication can be done using UNIX sockets. This includes connecting to a
non-standard socket path for the destination daemon, and using UNIX sockets for
the hypervisor's communication or for the NBD data transfer. All of that can be
used with both peer2peer and direct migration options.

Example using ``/tmp/migdir`` as a directory representing the same path visible
from both libvirt daemons. That can be achieved by bind-mounting the same
directory to different containers running separate daemons or by forwarding
connections to these sockets manually (using ``socat``, ``netcat`` or a custom
piece of software; see the sketch after the example):

::

   virsh migrate --domain web1 [--p2p] --copy-storage-all \
         --desturi 'qemu+unix:///system?socket=/tmp/migdir/test-sock-driver' \
         --migrateuri 'unix:///tmp/migdir/test-sock-qemu' \
         --disks-uri unix:///tmp/migdir/test-sock-nbd

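The manual forwarding mentioned above could look like the following sketch
(``socat`` must keep running for the whole migration; the second socket path is
a hypothetical location of the destination daemon's socket):

::

   # relay connections from the shared directory to the destination's socket
   socat UNIX-LISTEN:/tmp/migdir/test-sock-qemu,fork \
         UNIX-CONNECT:/run/dest-container/test-sock-qemu
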
One caveat is that on SELinux-enabled systems all the sockets that the
hypervisor is going to connect to need to have the proper context, and that is
chosen before the socket's creation by the process that creates it. That is
usually done by using the ``setsockcreatecon{,raw}()`` functions. Generally
``system_r:system_u:svirt_socket_t:s0`` should do the trick, but check the
SELinux rules and settings of your system.

Supported by the QEMU driver.

.. |Migration native path| image:: migration-native.png
   :class: diagram
.. |Migration tunnel path| image:: migration-tunnel.png
   :class: diagram
.. |Migration direct, managed| image:: migration-managed-direct.png
   :class: diagram
.. |Migration peer-to-peer| image:: migration-managed-p2p.png
   :class: diagram
.. |Migration direct, unmanaged| image:: migration-unmanaged-direct.png
   :class: diagram