When blockdev is used we should always use the blockdev mode for
non-shared storage migration.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Pass the backing store as an argument rather than extracting it locally,
and fix the callers.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
When QEMU supports flushing caches at the end of migration, we can
safely allow migration even if disk/driver/@cache is neither 'none' nor
'directsync'.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Acked-By: Peter Krempa <pkrempa@redhat.com>
The original message was logically incorrect: cache != none or cache !=
directsync is always true. But even replacing "or" with "and" doesn't
make it more readable for humans.
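To illustrate the logic issue, a minimal sketch (the enum and function names are made up for this example only):

    #include <stdbool.h>

    typedef enum { CACHE_NONE, CACHE_DIRECTSYNC, CACHE_WRITEBACK } cacheMode;

    /* The old wording corresponds to this check, which is a tautology:
     * for any value of 'cache' at least one of the inequalities holds. */
    static bool unsafeWrong(cacheMode cache)
    {
        return cache != CACHE_NONE || cache != CACHE_DIRECTSYNC;
    }

    /* The intended condition requires both inequalities to hold. */
    static bool unsafe(cacheMode cache)
    {
        return cache != CACHE_NONE && cache != CACHE_DIRECTSYNC;
    }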
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Acked-By: Peter Krempa <pkrempa@redhat.com>
Since qemuDomainDefPostParse callback requires qemuCaps, we need to make
sure it gets the capabilities stored in the domain's private data if the
domain is running. Passing NULL may cause QEMU capabilities probing to
be triggered in case the QEMU binary changed in the meantime. When this
happens while a running domain object is locked, a QMP event delivered to
the domain before QEMU capabilities probing finishes will deadlock the
event loop.
Several general functions from domain_conf.c were lazily passing NULL as
the parseOpaque pointer instead of letting their callers pass the right
data. This patch fixes all paths leading to virDomainDefCopy to do the
right thing.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Since qemuDomainDefPostParse callback requires qemuCaps, we need to make
sure it gets the capabilities stored in the domain's private data if the
domain is running. Passing NULL may cause QEMU capabilities probing to
be triggered in case the QEMU binary changed in the meantime. When this
happens while a running domain object is locked, a QMP event delivered to
the domain before QEMU capabilities probing finishes will deadlock the
event loop.
This patch fixes all paths leading to qemuMigrationAnyPrepareDef.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Since qemuDomainDefPostParse callback requires qemuCaps, we need to make
sure it gets the capabilities stored in the domain's private data if the
domain is running. Passing NULL may cause QEMU capabilities probing to
be triggered in case the QEMU binary changed in the meantime. When this
happens while a running domain object is locked, a QMP event delivered to
the domain before QEMU capabilities probing finishes will deadlock the
event loop.
This patch fixes all paths leading to qemuDomainDefFormatBufInternal.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Since qemuDomainDefPostParse callback requires qemuCaps, we need to make
sure it gets the capabilities stored in the domain's private data if the
domain is running. Passing NULL may cause QEMU capabilities probing to
be triggered in case the QEMU binary changed in the meantime. When this
happens while a running domain object is locked, a QMP event delivered to
the domain before QEMU capabilities probing finishes will deadlock the
event loop.
This patch fixes all paths leading to qemuDomainDefCopy.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Now that block job data is stored in the status XML portion we need to
make sure that everything which changes the state also saves the status
XML. The job registering function is used while parsing the status XML
so in that case we need to skip the XML saving.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Add the job structure to the table when instantiating a new job and
remove it when it terminates/fails.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
To allow using -blockdev with blockjobs QEMU needs to reopen files in
read-write mode when modifying the backing chain. To achieve this we
need to use 'auto-read-only' for the backing files rather than the
normal 'read-only' property. That way qemu knows that the files need to
be reopened.
Note that the format drivers (e.g. qcow2) are still opened with the
read-only property enabled when being a member of the backing chain
since they are supposed to be immutable unless a block job is started.
QEMU v4.0 (since commit 23dece19da4) also allows dynamic behaviour for
auto-read-only, which lets us keep using sVirt since we only grant write
permissions to files when doing a blockjob.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Allow using the delayed dismiss of the job so that we can reap the state
even if libvirtd was not running when qemu emitted the job completion
event.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Now that we no longer support the HMP monitor, remove some dead code.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Acked-by: Peter Krempa <pkrempa@redhat.com>
The hash table returned by qemuMonitorGetAllBlockJobInfo is organized by
the frontend name (which skips the 'drive-' prefix). While our code
properly matches the jobs to the disks, qemu needs the full job name
including the 'drive-' prefix to be able to identify jobs.
Fix this by adding an argument to qemuMonitorGetAllBlockJobInfo so that
the job name is not modified before filling the hash.
This fixes a regression where users would not be able to cancel/pivot
block jobs after restarting libvirtd while a blockjob is running.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The upcoming virDomainBackup() API needs to take advantage of the
ability to expose a bitmap as part of nbd-server-add for a pull-mode
backup (this is the recently-added QEMU_CAPS_NBD_BITMAP capability).
Signed-off-by: Eric Blake <eblake@redhat.com>
Acked-by: Peter Krempa <pkrempa@redhat.com>
Migration always uses a TCP socket for NBD servers, because we don't
support same-host migration. But upcoming pull-mode incremental backup
needs to also support a Unix socket, for retrieving the backup from
the same host. Support this by plumbing virStorageNetHostDef through
the monitor calls, since that is a nice reusable struct that can track
both TCP and Unix sockets.
Update qemumonitorjsontest to verify both forms of the QMP command.
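A rough sketch of how one struct can describe both listen types; the field and enum names are written from memory and meant only as illustration:

    /* TCP listen address, as used by migration */
    virStorageNetHostDef tcpHost = {
        .transport = VIR_STORAGE_NET_HOST_TRANS_TCP,
        .name = "dst.example.org",
        .port = 10809,
    };

    /* Unix socket listen address, as needed by same-host pull-mode backup */
    virStorageNetHostDef unixHost = {
        .transport = VIR_STORAGE_NET_HOST_TRANS_UNIX,
        .socket = "/var/lib/libvirt/qemu/backup.sock",
    };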
Signed-off-by: Eric Blake <eblake@redhat.com>
Acked-by: Peter Krempa <pkrempa@redhat.com>
Split out the 'shallow' and 'reuse' flags as booleans rather than passing
in flags and constructing them in irrelevant APIs.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Split out the 'shallow' flag as a boolean argument rather than passing
in flags and constructing them in irrelevant APIs.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The NBD migration code uses drive/blockdev-mirror internally. In those
APIs we pass around flags for the monitor commands which are based on
the flags for the virDomainBlockRebase API. Since there's only one flag
which changes, pass it around explicitly rather than obscuring it in a
bitfield.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Standardize on putting the _LAST enum value on the second line
of VIR_ENUM_IMPL invocations. Later patches that add string labels
to VIR_ENUM_IMPL will push most of these to the second line anyway,
so this saves some noise.
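For example (using an existing enum purely as an illustration; the exact string labels are quoted from memory):

    /* before */
    VIR_ENUM_IMPL(virDomainBoot, VIR_DOMAIN_BOOT_LAST,
                  "fd", "cdrom", "hd", "network");

    /* after: the _LAST value moves to the second line */
    VIR_ENUM_IMPL(virDomainBoot,
                  VIR_DOMAIN_BOOT_LAST,
                  "fd", "cdrom", "hd", "network");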
Signed-off-by: Cole Robinson <crobinso@redhat.com>
Since the STOP event handler can use the pausedReason as sent to
qemuProcessStopCPUs, we no longer need to send duplicate suspended
lifecycle events because we know what caused the stop along with extra
details. This also allows us to remove the duplicated state
change from qemuProcessStopCPUs.
Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Callers of qemuMigrationSrcPerform expect it to call virDomainObjEndAPI
in all cases, but the call is missed on some error paths.
To fix this, let's make the callers of qemuMigrationSrcPerform
responsible for the virDomainObjEndAPI call.
ACKed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
qemu added the 'drive-mirror' command in v1.3.0 (d9b902db3fb71fdc)
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The VIR_MIGRATE_PARALLEL flag is implemented using QEMU's multifd
migration capability and the corresponding multifd-channels migration
parameter.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
snapshot_conf.h was mixing three separate types: the snapshot
definition, the snapshot object, and the snapshot object list.
Separate out the snapshot object list code into its own file, and
update includes for affected clients.
This is just code motion, but done in preparation of sharing a lot of
the object list code with checkpoints.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Now that virStorageSource is a subclass of virObject we can use
virObjectUnref and remove virStorageSourceFree which was a thin wrapper.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Since virStorageSource is now a subclass of virObject, we can use
VIR_AUTOUNREF instead.
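An illustrative sketch of the resulting pattern (the wrapper function is made up; virStorageSourceCopy is an existing helper returning a new reference):

    static int
    cloneSourceExample(virStorageSourcePtr src)
    {
        VIR_AUTOUNREF(virStorageSourcePtr) copy = NULL;

        if (!(copy = virStorageSourceCopy(src, false)))
            return -1;

        /* ... work with 'copy' ...; it is unreferenced automatically when it
         * goes out of scope, so no explicit virObjectUnref() in cleanup paths */
        return 0;
    }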
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Add virStorageSourceNew and refactor places allocating that structure to
use the helper.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Let's make use of the automatic __cleanup capabilities, removing the
now unnecessary goto paths.
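A minimal sketch of the kind of transformation this enables (function names are made up for the example):

    /* before: explicit cleanup label */
    static int
    exampleOld(void)
    {
        char *tmp = NULL;
        int ret = -1;

        if (VIR_STRDUP(tmp, "value") < 0)
            goto cleanup;

        ret = 0;
     cleanup:
        VIR_FREE(tmp);
        return ret;
    }

    /* after: __cleanup__-based auto free, no goto needed */
    static int
    exampleNew(void)
    {
        VIR_AUTOFREE(char *) tmp = NULL;

        if (VIR_STRDUP(tmp, "value") < 0)
            return -1;

        return 0;
    }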
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
If a domain has a disk that is type='network' we require a specific
cache mode to allow migration with it (either 'directsync' or
'none'). This doesn't make much sense since network disks are
supposed to be safe to migrate by default.
At the same time, we should be checking for the actual source
type, not the apparent type set in the domain XML.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
We have this very handy macro called VIR_STEAL_PTR() which steals
one pointer into another and sets the original to NULL. The
following coccinelle patch was used to create this commit:
@ rule1 @
identifier a, b;
@@
- b = a;
...
- a = NULL;
+ VIR_STEAL_PTR(b, a);
Some places were cleaned up afterwards to make syntax-check happy
(e.g. some curly braces were removed where the body became a
one-liner).
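For reference, the macro expands to roughly the following (reproduced from memory, so treat it as a sketch):

    #define VIR_STEAL_PTR(lval, ptr) \
        do { \
            (lval) = (ptr); \
            (ptr) = NULL; \
        } while (0)

    /* so the coccinelle transformation above turns
     *     dst = src;
     *     src = NULL;
     * into
     *     VIR_STEAL_PTR(dst, src);
     */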
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Currently the job name corresponds to the disk the job belongs to. For
jobs which will not correspond to disks we'll need to track the name
separately.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Now that the data is per-job, we don't really need to bother with
finishing the synchronous job handling if the job is already terminated.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Instead of passing in the disk information, pass in the job and name the
function accordingly.
A few callers needed to be modified to have the job pointer handy.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The job error can be safely accessed in the job structure, so we don't
need to propagate it through qemuBlockJobUpdateDisk.
Drop the propagation and refactor any caller that passed a non-NULL error.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The same message is reported in 3 distinct places. Move it out into a
single function.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Add a field tracking the current state of the job so that it can be
queried later. Until now the job state (e.g. that the job is _READY for
finalizing) was tracked only for mirror jobs. Add tracking of the state
for all jobs.
Similarly to 'qemuBlockJobType' this maps the existing states of the
blockjob from virConnectDomainEventBlockJobStatus to
'qemuBlockJobState' so that we can track some internal states as well.
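A hedged sketch of what such a mapping can look like; the internal enumerator names are illustrative, only the public virConnectDomainEventBlockJobStatus values are taken as given:

    typedef enum {
        /* mirrors virConnectDomainEventBlockJobStatus */
        QEMU_BLOCKJOB_STATE_COMPLETED = VIR_DOMAIN_BLOCK_JOB_COMPLETED,
        QEMU_BLOCKJOB_STATE_FAILED = VIR_DOMAIN_BLOCK_JOB_FAILED,
        QEMU_BLOCKJOB_STATE_CANCELLED = VIR_DOMAIN_BLOCK_JOB_CANCELED,
        QEMU_BLOCKJOB_STATE_READY = VIR_DOMAIN_BLOCK_JOB_READY,

        /* internal-only states follow */
        QEMU_BLOCKJOB_STATE_NEW,
        QEMU_BLOCKJOB_STATE_RUNNING,
        QEMU_BLOCKJOB_STATE_LAST
    } qemuBlockJobState;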
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Modify qemuBlockJobSyncBeginDisk to operate on qemuBlockJobDataPtr and
rename it accordingly.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
We can properly track the job type when starting the job so that we
don't have to infer it later.
This patch also adds an enum of block job types specific to qemu
(qemuBlockJobType) which mirrors the public block job types
(virDomainBlockJobType) but allows for other types to be added later
which will not be public.
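A hedged sketch of such an enum; the values mirroring the public virDomainBlockJobType are shown, and the exact internal enumerator names are illustrative:

    typedef enum {
        QEMU_BLOCKJOB_TYPE_NONE = VIR_DOMAIN_BLOCK_JOB_TYPE_UNKNOWN,
        QEMU_BLOCKJOB_TYPE_PULL = VIR_DOMAIN_BLOCK_JOB_TYPE_PULL,
        QEMU_BLOCKJOB_TYPE_COPY = VIR_DOMAIN_BLOCK_JOB_TYPE_COPY,
        QEMU_BLOCKJOB_TYPE_COMMIT = VIR_DOMAIN_BLOCK_JOB_TYPE_COMMIT,
        QEMU_BLOCKJOB_TYPE_ACTIVE_COMMIT = VIR_DOMAIN_BLOCK_JOB_TYPE_ACTIVE_COMMIT,

        /* qemu-internal job types which are never exposed go below */
        QEMU_BLOCKJOB_TYPE_LAST
    } qemuBlockJobType;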
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
If the job wasn't started, we don't need to end the synchronous job. Add
a note and drop the unnecessary calls.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Rather than directly modifying fields in the qemuBlockJobDataPtr
structure, add a set of helpers which perform the transitions.
This will help later when adding more complexity to the job handling.
APIs introduced in this patch are:
qemuBlockJobDiskNew - prepare for starting a new blockjob on a disk
qemuBlockJobDiskGetJob - get the block job data structure for a disk
For individual job state manipulation the following APIs are added:
qemuBlockJobStarted - marks the job as started with qemu; until then
                      the job can be cancelled without asking qemu
qemuBlockJobStartupFinalize - finalizes job startup; if the job was
                              started in qemu already, just releases the
                              reference to the job object, otherwise
                              clears everything as if the job was never
                              started
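A rough usage sketch of the new helpers; the prototypes are simplified here and the job type constant is only illustrative:

    static int
    startCopyJobExample(virDomainDiskDefPtr disk)
    {
        qemuBlockJobDataPtr job = NULL;
        int ret = -1;

        if (!(job = qemuBlockJobDiskNew(disk, QEMU_BLOCKJOB_TYPE_COPY)))
            goto cleanup;

        /* ... issue the QMP command that actually starts the job ... */

        qemuBlockJobStarted(job);   /* from now on cancelling must go via qemu */
        ret = 0;

     cleanup:
        qemuBlockJobStartupFinalize(job);   /* drops the reference; if the job
                                             * never started, also undoes the
                                             * registration */
        return ret;
    }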
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Extract the disk mirroring startup code from the loop into a separate
function to allow cleaner cleanup paths.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
When cancelling a job after a reconnect we can now use the disk block job
state rather than having to re-detect it in the migration code.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Internally we do a 'block-copy' to accommodate non-shared storage
migration, but the code did not record that the block job was active on
the disk when starting the copy job. Since we handle block job completion
regardless of whether the job is registered, this is not a problem yet,
but it soon will become one.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>