The function now has no users and there's no need for it, as the common
pattern is to look up the whole disk object anyway.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
There are more places which need to look up the topmost nodename so it
can be passed to qemu. Separate the lookup into a new function.
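For illustration, a minimal sketch of such a helper (field names follow
the existing disk private data conventions and are assumptions here):
the topmost node is the copy-on-read filter when it is enabled,
otherwise the format node of the disk source.

    const char *
    qemuDomainDiskGetTopNodename(virDomainDiskDefPtr disk)
    {
        qemuDomainDiskPrivatePtr priv = QEMU_DOMAIN_DISK_PRIVATE(disk);

        /* the copy-on-read filter sits on top of the chain when enabled */
        if (disk->copy_on_read == VIR_TRISTATE_SWITCH_ON)
            return priv->nodeCopyOnRead;

        return disk->src->nodeformat;
    }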
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
This function potentially grabs both a monitor job and an agent job at
the same time. This is problematic because it means that a malicious (or
just buggy) guest agent can cause a denial of service on the host. The
presence of this function makes it easy to do the wrong thing and hold
both jobs at the same time. All existing uses have already been removed
by previous commits.
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Wire up the allocation and disposal of private data.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
If we use fake reboot, the domain goes through running->shutdown->running
state changes, with the shutdown state lasting only a short period of
time. At the very least this is an implementation detail leaking into
the API, and there is also one real case where it is inconvenient. I'm
doing a backup with the help of a temporary block snapshot (using the
qemu API that backs the newly created libvirt backup API). If the guest
is shut down I want the backup to continue, so I don't kill the process
and the domain stays in the shutdown state. Later, when the backup is
finished, I want to destroy the qemu process. So I check whether the
domain is in the shutdown state and destroy it if it is. Now, if
instead of shutting down the domain got a fake reboot, I can end up
destroying the process in the middle of the fake reboot.
After the shutdown event we also get a stop event, and since the domain
state is now running, it would be transitioned to paused and back to
running later. Though this is not critical for the described case, I
guess it is better not to leak these details to the user either. So
let's leave the domain in the running state on the stop event if a fake
reboot is in progress.
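A rough sketch of the stop-event handling described above (not the
verbatim code; the flag name follows libvirt's existing fakeReboot
usage):

    /* inside the STOP event handler */
    qemuDomainObjPrivatePtr priv = vm->privateData;

    if (priv->fakeReboot) {
        /* qemu briefly stops before the system_reset of a fake reboot;
         * don't surface a running -> paused -> running bounce to users */
        VIR_DEBUG("ignoring STOP event for domain %s during fake reboot",
                  vm->def->name);
        return;
    }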
The reconnection code handles this change without modification. It
detects that qemu is not running due to shutdown and then calls
qemuProcessShutdownOrReboot(), which reboots because the fake reboot
flag is set.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
This will allow us to use g_autoptr with qemuDomainLogContext pointers
in the following patch.
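Assuming qemuDomainLogContext stays a virObject, this boils down to the
usual GLib idiom (illustrative):

    G_DEFINE_AUTOPTR_CLEANUP_FUNC(qemuDomainLogContext, virObjectUnref);

    /* callers can then drop their explicit unref calls: */
    {
        g_autoptr(qemuDomainLogContext) logCtxt = NULL;

        logCtxt = qemuDomainLogContextNew(driver, vm,
                                          QEMU_DOMAIN_LOG_CONTEXT_MODE_START);
        /* ... logCtxt is unreffed automatically on scope exit */
    }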
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
With NVMe disks, one can start a blockjob with an NVMe disk
that is not visible in the domain XML (at least not right away).
Usually it's fairly easy to work around this limitation of
qemuDomainGetMemLockLimitBytes() - for instance, for hostdevs we
temporarily add the device to the domain def, let the function
calculate the limit and then remove the device. But it's not so
easy with virStorageSourcePtr - in some cases they aren't
necessarily attached to a disk, and even if they are, that
happens later in the process; frankly, I find it too complicated
to use the simple trick we use with hostdevs.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Now that all callers of qemuDomainGetHostdevPath() handle
/dev/vfio/vfio on their own, we can safely drop that handling
from this function. In the near future the decision whether a
domain needs the VFIO file is going to include more device types
than just virDomainHostdev.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Store the data of a backup job, along with the index counter for new
backup jobs, in the status XML. Currently we support only one backup
job at a time, so there's no need to add arrays of jobs.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
We will want to use the async job infrastructure along with all the APIs
and events for the backup job, so add backup as a new async job type.
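For illustration, the core of the change is one new enumerator next to
the existing async job types (the placement shown here is illustrative):

    typedef enum {
        QEMU_ASYNC_JOB_NONE = 0,
        QEMU_ASYNC_JOB_MIGRATION_OUT,
        QEMU_ASYNC_JOB_MIGRATION_IN,
        QEMU_ASYNC_JOB_SAVE,
        QEMU_ASYNC_JOB_DUMP,
        QEMU_ASYNC_JOB_SNAPSHOT,
        QEMU_ASYNC_JOB_START,
        QEMU_ASYNC_JOB_BACKUP,    /* new */

        QEMU_ASYNC_JOB_LAST
    } qemuDomainAsyncJob;

The matching VIR_ENUM string and the allowed-jobs masks need the same
one-line additions.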
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Introduce QEMU_DOMAIN_JOB_STATS_TYPE_BACKUP and the converters and other
plumbing needed to report statistics for the backup job.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Now that the domain XML APIs don't use virCapsPtr, we can stop passing
it around in many QEMU driver methods.
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The function is now used only in qemu_process.c, so move it there and
rename it to 'qemuProcessPrepareQEMUCaps', which better matches what it
does.
The reworded comment now also mentions that the function post-processes
the caps for VM startup.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Add a helper which converts the PFLASH code file and variable file
into virStorageSource objects stored in private data, so that we can
use them with -blockdev while keeping the infrastructure to determine
the paths to the loaders intact.
This is a temporary solution until we want to support snapshots of the
pflash, at which point we will be forced to track the full backing
chain in the XML.
In the meanwhile, converting it partially lets us stop using -drive.
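A condensed sketch of what the helper does for the code file (the
priv->pflash0 field name is an assumption; the variables file is
handled analogously):

    virStorageSourcePtr src = virStorageSourceNew();

    src->type = VIR_STORAGE_TYPE_FILE;
    src->format = VIR_STORAGE_FILE_RAW;
    src->path = g_strdup(def->os.loader->path);
    src->readonly = def->os.loader->readonly == VIR_TRISTATE_BOOL_YES;

    priv->pflash0 = src;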
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
To allow converting the pflash drives to -blockdev we will need a
virStorageSource so we can use our helpers. Prior to converting the
loader data to a proper virStorageSource, temporarily add private data
which will house it.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Some layered products such as oVirt have requested a way to avoid being
blocked by guest agent commands when querying a loaded VM. For example,
many guest agent commands are polled periodically to monitor for
changes, and rather than blocking the calling process, they'd prefer to
simply time out when an agent query takes too long.
This patch adds a way for the user to specify a custom agent timeout
that is applied to all agent commands.
One special case to note here is the 'guest-sync' command. 'guest-sync'
is issued internally prior to calling any other command. (For example,
when libvirt wants to call 'guest-get-fsinfo', we first call
'guest-sync' and then call 'guest-get-fsinfo').
Previously, the 'guest-sync' command used a 5-second timeout
(VIR_DOMAIN_QEMU_AGENT_COMMAND_DEFAULT), whereas the actual command that
followed always blocked indefinitely
(VIR_DOMAIN_QEMU_AGENT_COMMAND_BLOCK). As part of this patch, if a
custom timeout is specified that is shorter than
5 seconds, this new timeout is also used for 'guest-sync'. If there is
no custom timeout or if the custom timeout is longer than 5 seconds, we
will continue to use the 5-second timeout.
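The resulting selection logic for 'guest-sync' is roughly the following
(a sketch; the exact field holding the configured timeout is an
assumption):

    /* QEMU_AGENT_WAIT_TIME is the historic 5-second default */
    int timeout = QEMU_AGENT_WAIT_TIME;

    /* a custom timeout shorter than the default also governs the
     * internal 'guest-sync'; longer ones keep the 5-second cap */
    if (mon->timeout >= 0 && mon->timeout < timeout)
        timeout = mon->timeout;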
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The pconfig feature was enabled in QEMU 3.1.0 by accident. No other
(newer) version supports it, and it has since been removed from the
Icelake-Server CPU model in QEMU.
We don't normally change our CPU models even when QEMU does so to avoid
breaking migrations between different versions of libvirt. But we can
safely do so in this specific case. QEMU never supported enabling
pconfig so any domain which was able to start has pconfig disabled.
With a small compatibility hack which explicitly disables pconfig when
CPU model equals Icelake-Server in migratable domain definition, only
one migration scenario stays broken (and there's nothing we can do about
it): from any host to a host with libvirt < 5.10.0 and QEMU > 3.1.0.
https://bugzilla.redhat.com/show_bug.cgi?id=1749672
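The hack itself is small; roughly (a sketch using the generic CPU
helpers, applied when producing the migratable definition):

    /* force pconfig off for Icelake-Server so that the migratable
     * definition matches what any QEMU could actually start */
    if (def->cpu &&
        STREQ_NULLABLE(def->cpu->model, "Icelake-Server") &&
        virCPUDefUpdateFeature(def->cpu, "pconfig",
                               VIR_CPU_FEATURE_DISABLE) < 0)
        goto error;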
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
The function does not do anything that could fail. Remove the return
value.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
qemuDomainPrepareDiskSourceData historically prepared everything, but
we've since split out the majority of the functionality, so it now sets
up data predominantly according to the configuration of the disk. One
leftover bit was setting the gluster debug level from the driver
config.
Split this out into a separate function so that
qemuDomainPrepareDiskSourceData prepares based on the disk alone.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
ACKed-by: Eric Blake <eblake@redhat.com>
When undefining a UEFI domain, its nvram file has to be properly handled
as well. It is mandatory to use one of the --nvram and --keep-nvram
options when 'virsh undefine <domain>' is issued for a UEFI domain. To
fix the bug as reported, virsh should return an error message if neither
option is used, and the nvram file should be removed when --nvram is
given.
The cause of the problem is that when qemuDomainUndefineFlags() is
invoked on an inactive domain, the path to its nvram file is empty. This
commit fixes that by formatting and filling in the path in time for the
nvram removal code to run properly.
https://bugzilla.redhat.com/show_bug.cgi?id=1751596
Signed-off-by: Pavel Mores <pmores@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Rather than having to fix five places once we support the combination,
add a single function called by all the blockjob/snapshot APIs.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Finish the refactor by moving and renaming functions from qemu_domain.c
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Move it to qemu_domain.c and rename it to qemuDomainObjFromDomain. This
will allow reusing it after splitting out checkpoint code from
qemu_driver.c.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Similarly to qemu_tpm.c, add a unit with a few functions to start/stop
the external vhost-user-gpu process and set up its cgroup. See the
function documentation for details.
The vhost-user connection fd is stored on the qemuDomainVideoPrivate
struct.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
The same validation should be done for both static network devices and
hotplugged devices, but they are currently inconsistent. Move all the
relevant validation from qemuBuildInterfaceCommandLine() into the new
function qemuDomainValidateActualNetDef() and call the latter from
the former.
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Let's pull this hunk out into a function, so it can be reused
in another codepath that needs to do the same thing.
Signed-off-by: Eric Farman <farman@linux.ibm.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
For VMs started and then migrated/saved without slirp helpers, prevent
the automatic setup (as migration would otherwise fail).
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Save and restore the slirp helper PID associated with a network
interface, along with the probed features.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Add dbusVMStates to keep a list of the dbus-vmstate objects needed for
migration. They are either put on the command line during startup, or
hotplugged/unplugged as needed via
qemuDBusVMStateAdd()/qemuDBusVMStateRemove().
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Since libvirt stores the backing chain in the XML in a nested way, it
is a prime candidate for hitting libxml2's parsing limit of 256 layers.
Introduce code which crawls the backing chain and verifies that it's
not too deep. The maximum nesting is set to 200 layers so that there's
still some space left for additional properties or for nesting into
snapshot XMLs.
The check is applied to all disk use cases (starting, hotplug, media
change) as well as to block copy, which changes the image, and to
snapshots. We simply report an error and refuse the operation.
Without this check, a restart of libvirtd would result in the status
XML failing to parse and thus in losing the VM.
https://bugzilla.redhat.com/show_bug.cgi?id=1524278
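A sketch of the check (function and constant names are illustrative):

    #define VIR_STORAGE_MAX_CHAIN_DEPTH 200

    static int
    qemuDomainStorageSourceValidateDepth(virStorageSourcePtr src)
    {
        virStorageSourcePtr n;
        size_t depth = 0;

        for (n = src; virStorageSourceIsBacking(n); n = n->backingStore)
            depth++;

        if (depth > VIR_STORAGE_MAX_CHAIN_DEPTH) {
            virReportError(VIR_ERR_OPERATION_UNSUPPORTED,
                           _("backing chains deeper than %d layers are not supported"),
                           VIR_STORAGE_MAX_CHAIN_DEPTH);
            return -1;
        }

        return 0;
    }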
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
In addition to the data that libvirt needs and extracts internally,
copy and store the whole 'props' JSON sub-object of the data returned by
query-hotpluggable-cpus for future use.
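For illustration, the copy itself is a one-liner using the JSON helpers
(the entry field name is an assumption):

    /* keep the raw 'props' object for later reuse, e.g. device_add */
    if (props &&
        !(entry->props = virJSONValueCopy(props)))
        return -1;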
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Now that virDomainXMLNamespace matches virXMLNamespace,
we no longer need to keep both around.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Since the qemuDomainDefPostParse callback requires qemuCaps, we need to
make sure it gets the capabilities stored in the domain's private data
if the domain is running. Passing NULL may cause QEMU capabilities
probing to be triggered in case the QEMU binary changed in the meantime.
When this happens while a running domain object is locked, a QMP event
delivered to the domain before QEMU capabilities probing finishes will
deadlock the event loop.
This patch fixes all paths leading to qemuDomainDefFormatBufInternal.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Since the qemuDomainDefPostParse callback requires qemuCaps, we need to
make sure it gets the capabilities stored in the domain's private data
if the domain is running. Passing NULL may cause QEMU capabilities
probing to be triggered in case the QEMU binary changed in the meantime.
When this happens while a running domain object is locked, a QMP event
delivered to the domain before QEMU capabilities probing finishes will
deadlock the event loop.
This patch fixes all paths leading to qemuDomainDefCopy.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Qemu bitmap operations require knowing the node name associated with
the format layer (the qcow2 file); as upcoming patches will be
grabbing that information frequently, add a helper function to access
it.
Another potential benefit of this function is that it gives us a single
place where we could insert a QMP node-name scraping call for the case
where we don't currently know the node name (when -blockdev is not
supported); however, the goal is that we hopefully never have to do
that, because we instead scrape node names only at the point where they
change.
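The helper itself can be nearly trivial (a sketch; the function name is
illustrative):

    /* Return the node name of the format layer of @src, if known. */
    static const char *
    qemuBlockStorageSourceGetFormatNodename(virStorageSourcePtr src)
    {
        return src->nodeformat;
    }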
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
A lot of this work heavily copies from the existing snapshot APIs.
What's more, this patch is (intentionally) very similar to the
checkpoint code just added in the test driver, to the point that qemu
checkpoints are not fully usable in this patch, but it at least
bisects and builds cleanly. The separation between patches is done
because the grunt work of saving and restoring XML and tracking
relations between checkpoints is common to the test driver, while the
later patch adding integration with QMP is specific to qemu.
Also note that the interlocking to prevent checkpoints and snapshots
from existing at the same time will be a separate patch, to make it
easier to revert that restriction when we finally round out the design
for supporting interaction between the two concepts.
Signed-off-by: Eric Blake <eblake@redhat.com>
The PR manager is a property of the format layer in qemu, so we need to
be able to track it also in the chains of orphaned block jobs.
Add a qemu helper which also looks into the blockjob state.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Add support for handling the event either synchronously or
asynchronously using the event thread.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Block jobs currently belong only to disks, so we can look up the block
job data for them in the corresponding disk. This won't be the case
when using -blockdev, as certain jobs don't even correspond to a disk,
and most of them can run on just a part of the backing chain.
Add a global table of blockjobs which can be used to look up the data
for a blockjob when its job events need to be processed.
The table is a hash table organized by job name and holds a reference
to the job. New and running jobs will later be added to this table.
Reference counting will allow reaping the job state for synchronous
callers.
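Roughly, the table and its usage look like this (a sketch; names are
assumptions and the table is expected to live in the domain private
data):

    /* set up the name -> job table; entries hold a reference */
    if (!(priv->blockjobs = virHashCreate(10, virObjectFreeHashData)))
        return -1;

    /* registering a new job */
    if (virHashAddEntry(priv->blockjobs, job->name, virObjectRef(job)) < 0) {
        virObjectUnref(job);
        return -1;
    }

    /* event handlers can then find the job without going through a disk */
    job = virHashLookup(priv->blockjobs, jobname);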
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Similarly to qemuDomainSaveStatus, add a helper named
qemuDomainSaveConfig to save the config XML.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Rename qemuDomainObjSaveJob and create a wrapper for it which does not
require 'driver' to be passed, and export it so that other places can
easily save the status XML without having to invoke virDomainSaveStatus,
which has unwieldy parameters.
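The wrapper can then be as simple as (a sketch; the renamed function's
name is an assumption, and the driver is assumed reachable via the
domain's private data):

    void
    qemuDomainSaveStatus(virDomainObjPtr obj)
    {
        qemuDomainObjPrivatePtr priv = obj->privateData;

        /* the renamed qemuDomainObjSaveStatus still takes the driver */
        qemuDomainObjSaveStatus(priv->driver, obj);
    }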
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>