Although these functions, and the ones in the following two patches,
are for now used only by the qemu driver, it makes sense to have all
begin-job functions in the same file.
Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
This patch moves qemuDomainObjEndJob() into
src/conf/virdomainjob as the universal virDomainObjEndJob().
Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
This patch moves qemuDomainObjBeginJob() into
src/conf/virdomainjob as the universal virDomainObjBeginJob().
Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
This patch uses the job object directly in the domain object and
removes the job object from the private data of all drivers that use
it, along with other related code (initializing and freeing the
structure).
Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
We have always considered the "migrate_cancel" QMP command to return
only after successfully cancelling the migration. But this is no
longer true (to be honest, I'm not sure it ever was): it merely
changes the migration state to "cancelling". In most cases the
migration is cancelled quickly and we don't really notice anything,
but sometimes it takes so long that we even get to clearing migration
capabilities before the migration is actually cancelled, which fails
because capabilities can only be changed when no migration is running.
To avoid this issue, we can wait for the migration to be really
cancelled after sending migrate_cancel. The only place where we don't
need the synchronous behavior is when we're cancelling a migration on
the user's request while it is actively watched by another thread.
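A rough sketch of the synchronous variant (qemuMonitorMigrateCancel()
is real; the polling helper and loop below are illustrative, not the
actual libvirt code):

    static int
    migrationCancelAndWait(qemuMonitor *mon)
    {
        if (qemuMonitorMigrateCancel(mon) < 0)
            return -1;

        /* "migrate_cancel" merely switches the state to "cancelling";
         * keep polling until QEMU reports the migration is gone */
        while (migrationStillRunning(mon))  /* hypothetical helper */
            g_usleep(50 * 1000);

        return 0;
    }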
https://bugzilla.redhat.com/show_bug.cgi?id=2114866
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
We will need a little bit more code around qemuMonitorMigrateCancel to
make sure it works as expected. The new qemuMigrationSrcCancel helper
will avoid repeating the code in several places.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Just before pushing my earlier commit, I switched the order of two
arguments of virProcessGetStatInfo() (as suggested in review).
However, I forgot to swap the arguments in qemuDomainGetStatsCpuProc(),
which leads to userTime and sysTime being swapped.
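The change was along these lines (abbreviated and illustrative, not
the verbatim patch):

    - virProcessGetStatInfo(&cpuTime, &sysTime, &userTime, ...);
    + virProcessGetStatInfo(&cpuTime, &userTime, &sysTime, ...);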
Fixes: 044b8744d65f8571038f85685b3c4b241162977b
Reported-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
For domains started under the session URI, we don't set up CGroups
(well, how could we, since we're not running as root anyway).
Consequently, fetching CPU statistics exits early because of the
missing cpuacct controller. But with the recent extension to
virProcessGetStatInfo() we can get the values we need from the proc
filesystem. Implement this fallback for the session URI, as some virt
tools (virt-top, virt-manager) rely on the cpu.* stats being reported.
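The shape of the fallback, as a hedged sketch (the exact calls and
argument order are assumptions, not the verbatim patch):

    if (!virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_CPUACCT)) {
        /* no cgroups under the session URI: fall back to /proc */
        if (virProcessGetStatInfo(&cpuTime, &userTime, &sysTime,
                                  NULL, NULL, vm->pid, 0) < 0)
            return -1;
    }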
Resolves: https://gitlab.com/libvirt/libvirt/-/issues/353
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1693707
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The virProcessGetStatInfo() helper parses the /proc stat file for a
given PID and/or TID and reports the cumulative cpuTime, which is just
the sum of the user and sys times. But in the near future we'll need
those times separately, so make the function return them too (if the
caller desires).
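For reference, the user and sys times come from fields 14 (utime) and
15 (stime) of /proc/<pid>/stat. A minimal standalone parser (an
illustration, not libvirt's implementation) could look like:

    #include <stdio.h>
    #include <unistd.h>

    /* Read utime/stime (clock ticks) for a PID and convert them to
     * milliseconds.  NB: the comm field may contain spaces, which a
     * robust parser must handle; ignored here for brevity. */
    static int
    getProcTimes(pid_t pid, unsigned long long *userMs,
                 unsigned long long *sysMs)
    {
        char path[64];
        unsigned long long utime, stime;
        long hz = sysconf(_SC_CLK_TCK);
        FILE *fp;

        snprintf(path, sizeof(path), "/proc/%lld/stat", (long long) pid);
        if (!(fp = fopen(path, "r")))
            return -1;

        /* skip pid, comm, state and ten more fields; utime and stime
         * are the 14th and 15th fields */
        if (fscanf(fp, "%*d %*s %*c %*d %*d %*d %*d %*d"
                   " %*u %*u %*u %*u %*u %llu %llu",
                   &utime, &stime) != 2) {
            fclose(fp);
            return -1;
        }
        fclose(fp);

        *userMs = utime * 1000ULL / hz;
        *sysMs = stime * 1000ULL / hz;
        return 0;
    }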
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Introduced back in 2012 by QEMU commit
783e9b4826b95e53e33c42db6b4bd7d89bdff147 ("introduce a new monitor
command 'dump-guest-memory' to dump guest's memory"), released in
QEMU 1.2.0.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
This patch uses qemuMonitorQueryStats to query "halt_poll_success_ns"
and "halt_poll_fail_ns" for every vCPU. The respective values for each
vCPU are then added together.
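In outline, the aggregation is simply (a self-contained illustration;
the real code walks the result of qemuMonitorQueryStats() rather than
plain arrays):

    /* sum the per-vCPU "halt_poll_success_ns" / "halt_poll_fail_ns"
     * counters into domain-wide totals */
    static void
    sumHaltPollStats(const unsigned long long *successNs,
                     const unsigned long long *failNs,
                     size_t nvcpus,
                     unsigned long long *totalSuccess,
                     unsigned long long *totalFail)
    {
        size_t i;

        *totalSuccess = *totalFail = 0;
        for (i = 0; i < nvcpus; i++) {
            *totalSuccess += successNs[i];
            *totalFail += failNs[i];
        }
    }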
Signed-off-by: Amneesh Singh <natto@weirdnatto.in>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
All callers pass 'true'.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The event was introduced in qemu-2.3.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The operation makes no sense regardless of how we specify disks.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The 'persistjob' argument is always true and 'top' and 'base' are
always NULL. Adjust the functions to drop these arguments.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
This function and its callees were a bit more entangled, so remove the
pre-blockdev code separately.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Previous patches removed the job submission for the handler, so now
even the handler itself can be removed.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The code which fills 'qomName' does so only when the blockdev
capability is enabled, so we don't have to check the capability
separately: 'qomName' can only be non-NULL when blockdev is used.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Active layer block commit is unconditionally supported since qemu-2.0.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The 'change-backing-file' command was added in qemu-2.1 and doesn't
have any dependencies. We use it as a witness for using blockjobs with
relative backing paths. Always assume it's supported.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The qemu code will need to check additional qemu-private conditions
when reporting success for waiting. Thus we must replace all uses of
virDomainObjWait() with a qemu-specific helper. For now the helper
forwards directly to virDomainObjWait().
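The initial wrapper is thus trivial (a sketch of the idea as described
above):

    int
    qemuDomainObjWait(virDomainObj *vm)
    {
        /* later commits add qemu-specific checks here */
        return virDomainObjWait(vm);
    }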
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The function would fail to release the job in case
qemuMigrationSrcIsAllowed failed.
Fixes: v8.5.0-157-g69e0e33873
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Commit 62627524607f added the acquiring of a job, but the job is not
always VIR_ASYNC_JOB_MIGRATION_OUT, so the code fails when doing a
save or anything else. Correct the async job by passing it from the
caller as another parameter.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
The 'xbzrle-cache-size' parameter was added in qemu-2.11 thus all
supported qemu versions now use the new code path.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The 'downtime-limit' field of 'migrate-set-parameters' was introduced in
qemu-2.8, thus all qemu versions supported by libvirt use the new code.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The 'max-bandwidth' field was added as an argument of
'migrate-set-parameters' in qemu-2.8, thus all qemu versions supported
by libvirt already use the new code path.
This patch assumes its presence and removes the legacy code paths.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
It was always possible to modify the inactive XML, because
VIR_DOMAIN_AFFECT_CURRENT (= 0) is accepted implicitly. But now that
the logic for changing both the config and live XMLs is more robust,
we can accept the VIR_DOMAIN_AFFECT_CONFIG flag too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
There are three APIs that allow changing IOThreads:
virDomainAddIOThread()
virDomainDelIOThread()
virDomainSetIOThreadParams()
In the case of the QEMU driver these are handled by
qemuDomainChgIOThread(), which attempts to be versatile enough to work
on both the inactive and live domain definitions at the same time.
However, it's a bit clumsy: when a change to the live definition
succeeds but fails in the inactive definition, there is no rollback.
And somewhat rightfully so, because changes to the live definition are
in general harder to roll back. Therefore, do what we do elsewhere
(qemuDomainAttachDeviceLiveAndConfig(),
qemuDomainDetachDeviceAliasLiveAndConfig(), ...), as sketched below:
1) do the change to the inactive XML first,
2) in fact, do the change to a copy of the inactive XML,
3) swap the inactive XML and its copy only after everything
succeeded.
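In outline (hypothetical helper names, a sketch of the new ordering
rather than the real function):

    static int
    chgIOThreadOutline(virDomainObj *vm, virDomainDef *persistentDef)
    {
        g_autoptr(virDomainDef) defCopy = NULL;

        /* 1) + 2): apply the change to a *copy* of the inactive XML */
        if (!(defCopy = copyDef(persistentDef)) ||
            applyChangeToConfig(defCopy) < 0)
            return -1;

        /* only then touch the live definition, which is hard to
         * roll back */
        if (applyChangeToLive(vm) < 0)
            return -1;

        /* 3): the swap steals the copy only after everything
         * succeeded */
        swapPersistentDef(vm, &defCopy);
        return 0;
    }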
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The domain post-parse functions currently live in domain_conf.c, which
thus keeps growing ever larger. Mimic what we've done for the
validation code and move the post-parse code into a separate file:
domain_postparse.c.
I started by moving every function with PostParse in its name into the
new file and then hunted down the helper functions the compiler
complained about, moving them as well.
In the end, I moved the virDomainDefPostParse symbol in
libvirt_private.syms into a new section. And while
virDomainDeviceDefPostParseOne() is made 'public' in
domain_postparse.h too, I'm not exporting it because it has no
callers outside src/conf/ and it's unlikely it ever will.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The flags will later be used to determine which parameters should
actually be applied.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Currently, libvirt allows only local file paths to specify the
location of the 'nvram' image. Changing it to the virStorageSource
type will allow supporting remote storage for nvram.
Signed-off-by: Prerna Saxena <prerna.saxena@nutanix.com>
Signed-off-by: Florian Schmidt <flosch@nutanix.com>
Signed-off-by: Rohit Kumar <rohit.kumar3@nutanix.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Rohit Kumar <rohit.kumar3@nutanix.com>
The QEMU driver needs to be taught how to set the
VIR_DOMAIN_IOTHREAD_THREAD_POOL_MIN and
VIR_DOMAIN_IOTHREAD_THREAD_POOL_MAX parameters (introduced in the
previous commit) on a given IOThread. Fortunately, this is fairly
trivial to do, and since these two parameters are also exposed in the
domain XML, the update of the inactive XML can be wired up too.
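From the public API side, setting the new parameters might look like
this (a usage sketch against the libvirt API; error handling trimmed):

    #include <libvirt/libvirt.h>

    /* set an IOThread's worker pool to 1..8 threads on both the live
     * and inactive definitions */
    static int
    setPoolLimits(virDomainPtr dom, unsigned int iothreadId)
    {
        virTypedParameterPtr params = NULL;
        int nparams = 0;
        int maxparams = 0;
        int rc;

        virTypedParamsAddInt(&params, &nparams, &maxparams,
                             VIR_DOMAIN_IOTHREAD_THREAD_POOL_MIN, 1);
        virTypedParamsAddInt(&params, &nparams, &maxparams,
                             VIR_DOMAIN_IOTHREAD_THREAD_POOL_MAX, 8);

        rc = virDomainSetIOThreadParams(dom, iothreadId, params, nparams,
                                        VIR_DOMAIN_AFFECT_LIVE |
                                        VIR_DOMAIN_AFFECT_CONFIG);

        virTypedParamsFree(params, nparams);
        return rc;
    }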
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
When a domain has a guest agent channel enabled and the agent is
running in the guest, we will get a VSERPORT_CHANGE event on the
destination host as soon as we start vCPUs there. This is not an issue
for normal migration, but a post-copy migration will remain running
after we started vCPUs on the destination. If it runs for more than
30s, the VSERPORT_CHANGE event handler will fail to get a job and log
the following error message:
Timed out during operation: cannot acquire state change lock (held
by monitor=remoteDispatchDomainMigrateFinish3Params)
and of course we will think the guest agent is not connected, so all
APIs talking to it will fail until the agent or libvirt daemon is
restarted.
Luckily, we only need to update the channel state (to mark it as
connected) and connect to the agent, neither of which conflicts with
migration. Thus we can safely enable processing this event during
migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
So far a migration could only be completed while a migration API was
running and waiting for the migration to finish. In case such an API
could not be called (because the connection that initiated the
migration is broken), the migration would just be aborted or left in a
"don't know what to do" state. But this will change soon and we will
be able to successfully complete such a migration once we get the
corresponding event from QEMU. This is specific to post-copy
migration, when vCPUs are already running on the destination and we're
only waiting for all memory pages to be transferred. Such a post-copy
migration (which no one is actively watching) is called an unattended
migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
When the connection breaks during a post-copy migration, QEMU enters
the 'postcopy-paused' state. We need to handle this state and make the
situation visible to upper layers.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>