When reconnecting to a running QEMU process, we construct the
per-domain path in all hugetlbfs mounts. This is a relic from
the past (v3.4.0-100-g5b24d25062) when we switched to a
per-domain path and wanted those paths to be created when
libvirtd restarts on upgrade.
With namespaces enabled there is one corner case where the
path is not created; in fact, an error is reported and the
reconnect fails. Ideally, all mount events are propagated into
QEMU's namespace. And they probably are, except when the
target path does not exist inside the namespace. Now, it's pretty
common for users to mount hugetlbfs under /dev (e.g.
/dev/hugepages), but if a domain is started without hugepages (or
more specifically, if the private hugetlbfs path wasn't created on
domain startup), then the reconnect code tries to create it.
But it fails to do so; more precisely, it fails to set seclabels
on the path because the path does not exist in the private
namespace. And it doesn't exist because we deliberately create
only a subset of all possible /dev nodes. Therefore the mount
event, whilst propagated, is not successful and hence the
filesystem is not mounted. We have to do the mount ourselves.
If hugetlbfs is mounted anywhere else there's no problem and this
is effectively dead code.
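Conceptually, the fallback amounts to something like this
self-contained sketch (the helper name and error handling are
illustrative, not libvirt's actual code):

    #include <errno.h>
    #include <sys/mount.h>
    #include <sys/stat.h>

    /* Illustrative fallback: if the hugetlbfs mount event did not
     * propagate into the QEMU namespace because the target path was
     * missing there, create the path and do the mount ourselves. */
    static int
    ensureHugetlbfsMounted(const char *path)
    {
        if (mkdir(path, 0777) < 0 && errno != EEXIST)
            return -1;

        if (mount("hugetlbfs", path, "hugetlbfs", 0, NULL) < 0)
            return -1;

        return 0;
    }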
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2123196
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
The aim of qemuProcessNeedHugepagesPath() is to determine whether
a hugetlbfs mount point is required for a given domain (as in
whether qemuBuildMemoryBackendProps() picks up a
memory-backend-file pointing to a hugetlbfs mount point). Well,
when a domain is configured to use the memfd backend that
condition can never be true. Therefore, skip creating the
domain's private path under hugetlbfs mount points.
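Reduced to a self-contained illustration, the shape of the check
is roughly this (types and names simplified, not the actual
libvirt definitions):

    #include <stdbool.h>
    #include <stddef.h>

    /* Simplified stand-in for libvirt's memory source enum. */
    typedef enum {
        MEM_SOURCE_NONE,
        MEM_SOURCE_FILE,
        MEM_SOURCE_ANONYMOUS,
        MEM_SOURCE_MEMFD,
    } MemSource;

    static bool
    needHugepagesPath(MemSource source, size_t nhugepages)
    {
        /* memory-backend-memfd never points at a hugetlbfs mount
         * point, so no per-domain path is required */
        if (source == MEM_SOURCE_MEMFD)
            return false;

        return nhugepages > 0;
    }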
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
This patch moves qemuDomainObjEndJob() into
src/conf/virdomainjob as the universal virDomainObjEndJob().
Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
This patch moves qemuDomainObjBeginJob() into
src/conf/virdomainjob as the universal virDomainObjBeginJob().
Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
This patch uses the job object directly in the domain object and
removes the job object from the private data of all drivers that
used it, along with other relevant code (initializing and
freeing the structure).
Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
This patch adds the generalized job object into the domain object
so that it can be used by all drivers without the need to extract
it from the private data.
Because of this, the job object needs to be created and set
during the creation of the domain object. This patch also extends
xmlopt with an optional job config containing virDomainJobObj
callbacks, its private data callbacks, and one variable
(maxQueuedJobs).
This patch includes:
* addition of virDomainJobObj into virDomainObj (used in the
following patches)
* extending xmlopt with job config structure
* new function for freeing the virDomainJobObj
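Conceptually, the result looks roughly like this (a sketch only;
surrounding members and exact placement elided):

    struct _virDomainObj {
        /* ... existing members ... */

        virDomainJobObj *job;  /* generalized job object, created
                                * and set when the domain object
                                * itself is created, freed together
                                * with it */

        /* ... existing members ... */
    };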
Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
All supported QEMU versions have iothread support
on virtio-scsi controllers if they are compiled in.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
We have always considered the "migrate_cancel" QMP command to
return only after successfully cancelling the migration. But this
is no longer true (to be honest I'm not sure it ever was) as it
just changes the migration state to "cancelling". In most cases
the migration is cancelled pretty quickly and we don't really
notice anything, but sometimes it takes so long that we even get
to clearing migration capabilities before the migration is
actually cancelled, which fails as capabilities can only be
changed when no migration is running. To avoid this issue, we can
wait for the migration to really be cancelled after sending
migrate_cancel. The only place where we don't need the
synchronous behavior is when we're cancelling the migration on
the user's request while it is actively watched by another
thread.
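In other words, the synchronous variant boils down to a wait loop
of roughly this shape (helper names made up for illustration; the
real code drives this through the QEMU monitor):

    typedef enum { MIG_NONE, MIG_CANCELLING, MIG_CANCELLED } MigStatus;

    /* Made-up stand-ins for the monitor calls. */
    extern int sendMigrateCancel(void *mon);        /* migrate_cancel */
    extern MigStatus queryMigrateStatus(void *mon); /* query-migrate  */

    static int
    cancelMigrationSync(void *mon)
    {
        if (sendMigrateCancel(mon) < 0)
            return -1;

        /* migrate_cancel merely flips the state to "cancelling";
         * wait until the migration is really gone before touching
         * migration capabilities */
        while (queryMigrateStatus(mon) == MIG_CANCELLING)
            ; /* real code sleeps or waits for a MIGRATION event */

        return 0;
    }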
https://bugzilla.redhat.com/show_bug.cgi?id=2114866
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
We will need a little bit more code around qemuMonitorMigrateCancel to
make sure it works as expected. The new qemuMigrationSrcCancel helper
will avoid repeating the code in several places.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Let's call this qemuMigrationSrcCancelUnattended as the function is
supposed to be used when no other thread is watching the migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
When commit bac6b266fb added this "functionality", this was the
only naming I could think of, but after a discussion with Dan we
found that the name 'null' fits a bit better, so change it before
we make a release with the old name.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
After converting virNetworkDef * to g_autoptr(virNetworkDef),
the cleanup codepath was empty, so it has been removed.
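For reference, this is GLib's automatic cleanup pattern; a
minimal sketch of what such a conversion looks like (the parser
call is hypothetical):

    g_autoptr(virNetworkDef) def = NULL;

    if (!(def = parseNetworkXML(xml)))  /* hypothetical helper */
        return -1;

    /* use def ...; it is freed automatically when it goes out of
     * scope, so no cleanup label or explicit virNetworkDefFree()
     * is needed on any return path */
    return 0;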
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
This represents an interface connected to a VMWare Distributed Switch,
previously obscured as a dummy interface.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
All callers pass 'true'.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Set it the same way we set throttling for other disks in
qemuProcessSetupDiskThrottling.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The only instance in this file can be simplified to avoid checking the
capability.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The code which fills 'qomName' does so only when the blockdev
capability is enabled, so we don't have to check it separately:
it can only be non-NULL when blockdev is used.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The cleanup of the code to always assume support for
QEMU_CAPS_BLOCKDEV will not be simple, so for now we hardcode the
support and the code will be cleaned up gradually.
We also disallow users from clearing the flags via the namespace
property or the qemu.conf configuration.
The change to the PPC64 test data stems from the fact that the
capability dump is not from the release version and lacks one of
the flags necessary to enable -blockdev.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
This patch moves qemuDomainObjPreserveJob() into hypervisor/ as
virDomainObjPreserveJob() so that it can be used by other
hypervisor drivers as well.
Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
It does not make sense to pass virDomainObj and extract
qemuDomainObjPrivate from it when the private data is already
accessible in the only function qemuDomainObjPreserveJob() is
called from. Instead, we can pass virDomainJobObj directly and
avoid using the qemu private structure.
Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
After QEMU is killed in qemuProcessStop() its mount namespace
doesn't exist anymore, because it was the only process running
there. Thus we should clear our internal flag that the domain has
namespaces enabled so that the seclabel restore code does not try
to enter the namespace. We do the same in
qemuProcessHandleMonitorEOF(), but when it is us who decides to
kill QEMU, rather than QEMU quitting on its own, we haven't seen
the EOF by the time qemuProcessStop() is called.
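In sketch form (illustrative ordering and helper names, arguments
trimmed):

    /* qemuProcessStop(): QEMU is dead, its namespace died with it */
    qemuProcessKill(vm, ...);           /* kill the QEMU process    */
    clearNamespaceFlag(vm);             /* hypothetical: drop flag  */
    qemuSecurityRestoreAllLabels(...);  /* must not try to enter
                                         * the now-gone namespace   */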
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
All callers now pass false for 'retry', so we are guaranteed to
have a monitor socket present. This means that the retry code can
be removed.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Jonathon Jongsma <jjongsma@redhat.com>
In 'qemuProcessQMPLaunch' qemu is very deliberately launched using
its internal '-daemonize' flag (see the comment in the function)
to ensure that the monitor socket is ready and open prior to
attempting the monitor connection.
This means we don't have to retry the connection to the monitor
in qemuMonitorOpen as the socket will already be there.
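Since the socket is guaranteed to exist by then, opening the
monitor reduces to a single connect attempt; a self-contained
sketch:

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* One connect attempt, no retry loop: with -daemonize QEMU's
     * parent only exits after the monitor socket is listening. */
    static int
    connectMonitor(const char *path)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        if (fd < 0)
            return -1;

        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *) &addr, sizeof(addr)) < 0) {
            close(fd);
            return -1;
        }

        return fd;
    }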
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Jonathon Jongsma <jjongsma@redhat.com>
The 'timeout' argument is used by 'qemuMonitorOpenUnix' only when
the 'retry' argument is true. The callers of 'qemuMonitorOpen'
only pass '0' for timeout when they call it with 'retry' set to
true, and use other values only when 'retry' is false, in which
case the timeout is ignored.
This means we can remove the argument and simply have it set to the
default value of QEMU_DEFAULT_MONITOR_WAIT.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Jonathon Jongsma <jjongsma@redhat.com>
Both callers pass 'false' as the argument via a variable which is not
modified. Remove the argument and pass 'false' directly.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Jonathon Jongsma <jjongsma@redhat.com>
Commit v8.4.0-287-gd4d3bb8130 tried to make sure the original
pre-migration memory locking limit is restored at the end of
migration, but it missed the case when the libvirt daemon is
restarted during a migration which then needs to be aborted on
reconnect.
And if this was not enough, I forgot to actually save the status
XML after setting the field in priv (in the commit mentioned
above and also in v8.4.0-291-gd375993ab3).
https://bugzilla.redhat.com/show_bug.cgi?id=2107424
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
This patch moves qemuDomainJobObj into hypervisor/ as the
generalized virDomainJobObj, along with the generalized private
job callbacks as virDomainObjPrivateJobCallbacks.
Signed-off-by: Kristina Hanicova <khanicov@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The domain post parse functions currently live in domain_conf.c,
which thus keeps growing larger. Mimic what we've done for the
validation code and move the post parse code into a separate
file: domain_postparse.c.
I started by moving every function with PostParse in its name
into the new file and then went compile-error hunting for helper
functions, moving them as well.
In the end, I moved the virDomainDefPostParse symbol in
libvirt_private.syms into a new section. And while
virDomainDeviceDefPostParseOne() is made 'public' in
domain_postparse.h too, I'm not exporting it because it has no
caller outside src/conf/ and it's unlikely it ever will.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Set up all fields for use with -blockdev.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Rohit Kumar <rohit.kumar3@nutanix.com>
We only really use it when starting up the VM, so this step is
totally pointless when reconnecting.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Rohit Kumar <rohit.kumar3@nutanix.com>
Currently, libvirt allows only local file paths to specify the
location of the 'nvram' image. Changing it to the virStorageSource
type will allow supporting remote storage for nvram.
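In sketch form the type change looks like this (surrounding
members elided; illustrative rather than the exact diff):

    struct _virDomainLoaderDef {
        /* ... */
        /* was: char *nvram;    -- a local path only */
        virStorageSource *nvram;  /* can now also describe remote
                                   * storage */
        /* ... */
    };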
Signed-off-by: Prerna Saxena <prerna.saxena@nutanix.com>
Signed-off-by: Florian Schmidt <flosch@nutanix.com>
Signed-off-by: Rohit Kumar <rohit.kumar3@nutanix.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Rohit Kumar <rohit.kumar3@nutanix.com>
This is a special job for operations that need to modify domain state
during an active migration. The modification must not affect any state
that could conflict with the migration code. This is useful mainly for
event handlers that need to be processed during migration and which
could otherwise time out on acquiring a normal MODIFY job.
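A hypothetical usage sketch from an event handler (error handling
trimmed; the job name is the one introduced here):

    /* Take the migration-safe modify job instead of a plain MODIFY
     * job so the handler does not block on, or conflict with, the
     * active migration job. */
    if (virDomainObjBeginJob(vm, VIR_JOB_MODIFY_MIGRATION_SAFE) < 0)
        return;

    /* ... modify only state the migration code does not touch ... */

    virDomainObjEndJob(vm);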
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
This phase marks a migration protocol as broken in a post-copy
phase. Libvirt is no longer actively watching the migration in
this phase as the migration API that started the migration
failed.
This may happen either when post-copy migration really fails
(QEMU enters the postcopy-paused migration state) or when the
migration still progresses between both QEMU processes but
libvirt has lost control of it because the connection between the
libvirt daemons (in p2p migration) or between a daemon and the
client (non-p2p migration) was closed, for example when one of
the daemons was restarted.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
When recovering from a failed post-copy migration, we need to go through
all migration phases again, but don't need to repeat all the steps in
each phase. Let's create a new set of migration phases dedicated to
post-copy recovery so that we can easily distinguish between normal and
recovery code.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
When the libvirt daemon is restarted during an active post-copy
migration, we do not always mark the migration as broken. In this
phase libvirt is not really needed for the migration to finish
successfully. In fact, the migration could even have finished
while libvirt was not running, or it may still be happily
running.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
So far migration could only be completed while a migration API
was running and waiting for the migration to finish. In case such
an API could not be called (the connection that initiated the
migration is broken), the migration would just be aborted or left
in a "don't know what to do" state. But this will change soon and
we will be able to successfully complete such a migration once we
get the corresponding event from QEMU.
This is specific to post-copy migration, when vCPUs are already
running on the destination and we're only waiting for all memory
pages to be transferred. Such a post-copy migration (which no one
is actively watching) is called unattended migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Normally migrationPort is released in the Finish phase, but we
need to make sure it is properly released also in case
qemuMigrationDstFinish is not called at all. Currently the only
callback called in this situation is
qemuMigrationDstPrepareCleanup, which already releases
migrationPort. This patch adds similar handling to additional
callbacks which will be used in the future.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
When connection breaks during post-copy migration, QEMU enters
'postcopy-paused' state. We need to handle this state and make the
situation visible to upper layers.
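A sketch of the handling (enum and helper names illustrative, not
the exact libvirt identifiers):

    switch (migStatus) {
    case MIG_STATUS_POSTCOPY_PAUSED:
        /* The guest is split across both hosts; neither side may
         * simply be killed. Mark the migration as broken and let
         * upper layers recover or abort it. */
        markPostcopyFailed(vm);
        break;

    /* other states elided */
    default:
        break;
    }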
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>