Switch from old VIR_ allocation APIs to glib equivalents.
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
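As a rough illustration of the conversion described above (a sketch using a
hypothetical struct, not the exact hunks from the commit), VIR_ALLOC/VIR_FREE
style code maps onto the GLib calls like this:

  #include <glib.h>

  typedef struct { char *name; } demoItem;   /* hypothetical example struct */

  /* g_new0() aborts on OOM, so the VIR_ALLOC() error check disappears;
   * g_strdup() and g_free() replace VIR_STRDUP() and VIR_FREE(). */
  static demoItem *
  demoItemNew(const char *name)
  {
      demoItem *item = g_new0(demoItem, 1);   /* zero-filled allocation */
      item->name = g_strdup(name);
      return item;
  }

  static void
  demoItemFree(demoItem *item)
  {
      if (!item)
          return;
      g_free(item->name);
      g_free(item);
  }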
This function potentially grabs both a monitor job and an agent job at
the same time. This is problematic because it means that a malicious (or
just buggy) guest agent can cause a denial of service on the host. The
presence of this function makes it easy to do the wrong thing and hold
both jobs at the same time. All existing uses have already been removed
by previous commits.
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
In order to avoid holding an agent job and a normal job at the same
time, we want to avoid accessing the domain's definition while holding
the agent job. To achieve this, qemuAgentGetFSInfo() only returns the
raw information from the agent query to the caller. The caller can then
release the agent job and then proceed to look up the disk alias from
the vm definition. This necessitates moving a few helper functions to
qemu_driver.c and exposing the agent data structure (qemuAgentFSInfo) in
the header.
In addition, because the agent function no longer returns the looked-up
disk alias, we can't test the alias within qemuagenttest. Instead we
simply test that we parse and return the raw agent data correctly.
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
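A rough sketch of the resulting caller pattern (names and signatures are
illustrative, not copied verbatim from the patch): the agent job is held only
for the raw query, and the lookup against the domain definition happens after
it is released:

  /* Hypothetical sketch: hold the agent job only while talking to the
   * agent, then match the raw data against vm->def without it. */
  static int
  exampleGetFSInfo(virQEMUDriverPtr driver, virDomainObjPtr vm)
  {
      qemuAgentFSInfo **agentinfo = NULL;
      int nfs = -1;

      if (qemuDomainObjBeginAgentJob(driver, vm, QEMU_AGENT_JOB_QUERY) < 0)
          return -1;

      if (qemuDomainAgentAvailable(vm, true))
          nfs = qemuAgentGetFSInfo(qemuDomainGetAgent(vm), &agentinfo);

      qemuDomainObjEndAgentJob(vm);

      if (nfs < 0)
          return -1;

      /* Only here, with no agent job held, translate the raw agent data
       * (device target, bus/unit, PCI address, ...) into disk aliases by
       * walking vm->def->disks. */
      return nfs;
  }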
The qemuAgentDiskInfo structure is filled with information received from
the agent command response, except for the 'alias' field, which is
retrieved from the vm definition. Limit this structure only to data that
was received from the agent message.
This is another intermediate step in moving the responsibility for
searching the vmdef from qemu_agent.c to qemu_driver.c so that we can
avoid holding an agent job and a normal job at the same time.
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
In an effort to avoid holding both an agent and normal job at the same
time, we shouldn't access the vm definition from within qemu_agent.c
(i.e. while the agent job is being held). In preparation, we need to
store the full filesystem disk information in qemuAgentDiskInfo. In a
following commit, we can pass this information back to the caller and
the caller can search the vm definition to match the filesystem disk to
an alias.
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The function name doesn't give a good idea of what the function does.
Rename it to qemuAgentGetFSInfoFillDisks() to make it more obvious that it
is filling in the disk information in the fsinfo struct.
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
QEMU since 4.1.0 supports the "dies" parameter for -smp
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Recently CPU hardware vendors have started to support a new structure
inside the CPU package topology known as a "die". Thus the hierarchy
is now:
sockets > dies > cores > threads
This adds support for "dies" in the XML parser, with the value
defaulting to 1 if not specified for backwards compatibility.
For example, a system with 64 logical CPUs (4 sockets × 2 dies × 4 cores × 2 threads) might report
<topology sockets="4" dies="2" cores="4" threads="2"/>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
When the pause-before-switchover QEMU capability is enabled, we get the
STOP event before the MIGRATION event with the postcopy-active state. To
properly handle post-copy migration and emit correct events, commit
v4.10.0-rc1-4-geca9d21e6c added a hack to
qemuProcessHandleMigrationStatus which translates the paused state
reason to VIR_DOMAIN_PAUSED_POSTCOPY and emits the
VIR_DOMAIN_EVENT_SUSPENDED_POSTCOPY event when the migration state
changes to post-copy.
However, the code was effective on both sides of the migration, resulting
in a confusing VIR_DOMAIN_EVENT_SUSPENDED_POSTCOPY event on the
destination host, where entering post-copy mode is already properly
advertised by the VIR_DOMAIN_EVENT_RESUMED_POSTCOPY event.
https://bugzilla.redhat.com/show_bug.cgi?id=1791458
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
When resuming a domain from a save file, we read the domain XML
from the file, add it onto our internal list of domains, start
the qemu process, let it load the incoming migration stream and
resume its vCPUs afterwards. If anything goes wrong, the domain
object is removed from the list of domains and an error is returned
to the caller. However, the qemu process might be left behind:
if resuming vCPUs fails (e.g. because qemu is unable to acquire
a write lock on a disk), then due to a bug the qemu process is not
killed even though the domain object is removed from the list.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1718707
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
We have to keep the default - querying the agent if no flag is
set.
Signed-off-by: Julio Faracco <jcfaracco@gmail.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
libvirt currently always reports that USB is available as a bus subsystem
type when running "virsh domcapabilities". However, this is not always
true, for example the qemu-system-s390x binary normally never has support
for USB. Thus we should only report that USB is available if there is
also a USB host controller available where we can attach USB devices.
Reported-by: Sebastian Mitterle <smitterl@redhat.com>
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1759849
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Historically there are two places where we format authentication and
encryption for a disk. The logic which formats it for backing files was
flawed though and didn't format it at all. This worked if the image
became a backing file through the means of a snapshot, but not directly.
Force formatting of the source and encryption for any non-disk case to
fix the issue.
This caused problems in many places as we use the formatter to copy the
definition. Effectively any copy lost the secret definition.
https://bugzilla.redhat.com/show_bug.cgi?id=1789310
https://bugzilla.redhat.com/show_bug.cgi?id=1788898
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Internal snapshots of a non-running domain do not carry any memory state
and restoring such a snapshot will not replace existing saved memory
state. This allows a scenario where a user first suspends a domain into
managedsave, restores a non-running snapshot and then resumes the domain
from managedsave. After that, the guest system will run with its
previous memory state atop a different disk state. The most obvious
possible fallout from this is extensive file system corruption. Swap
content and RAID bitmaps might also be off.
This has been discussed[1] and fixed[2] from the end-user perspective for
virt-manager.
This patch marks the restore operation as risky at the libvirt level,
requiring the user to remove the saved memory state first or force the
operation.
[1] https://www.redhat.com/archives/virt-tools-list/2019-November/msg00011.html
[2] https://www.redhat.com/archives/virt-tools-list/2019-December/msg00049.html
Signed-off-by: Michael Weiser <michael.weiser@gmx.de>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Commit v5.10.0-290-g3a4787a301 refactored qemuDomainGetHostdevPath to
return a single path rather than an array of paths. When the function is
called on a missing device, it will now return NULL in @path rather than
a NULL array with zero items, and the callers need to be adapted
properly.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
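The caller-side adaptation looks roughly like this (a hedged sketch; the exact
signature of qemuDomainGetHostdevPath is assumed here):

  /* Sketch only: callers now check for a NULL path instead of iterating
   * a possibly empty array of paths. */
  static int
  exampleSetupHostdev(virDomainHostdevDefPtr dev)
  {
      g_autofree char *path = NULL;

      if (qemuDomainGetHostdevPath(dev, &path, NULL) < 0)
          return -1;

      if (!path)
          return 0;   /* missing device: nothing to set up */

      /* ... act on the single path ... */
      return 0;
  }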
src/qemu/qemu_domain_address.c:680:13: error: this statement may fall through [-Werror=implicit-fallthrough=]
switch ((virDomainFSModel) dev->data.fs->model) {
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Fixes: f363af7e35
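The usual shape of such a fix is to enumerate every enum value as an explicit
case and terminate each branch; a generic sketch (not the literal hunk from
this commit):

  switch ((virDomainFSModel) dev->data.fs->model) {
  case VIR_DOMAIN_FS_MODEL_VIRTIO:
  case VIR_DOMAIN_FS_MODEL_VIRTIO_TRANSITIONAL:
  case VIR_DOMAIN_FS_MODEL_VIRTIO_NON_TRANSITIONAL:
      /* ... handle the virtio models ... */
      break;
  case VIR_DOMAIN_FS_MODEL_DEFAULT:
  case VIR_DOMAIN_FS_MODEL_LAST:
      break;
  }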
The point of this function is to translate the virDomainOsDefFirmware
enum to the qemuFirmwareOSInterface enum. However, with my commit
v5.10.0-507-g8e1804f9f6 we are passing a variable of the
virDomainLoader enum type. Make the function accept both enums and
make the enum members correspond to each other.
This fixes the clang build.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
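The technique is simply to give the shared members the same numeric values so
that one lookup function can accept either enum; an illustration with
hypothetical names:

  /* Illustration only (hypothetical enum names): members describing the
   * same firmware interface share numeric values, so a single function
   * can take either enum as an int. */
  typedef enum {
      EXAMPLE_OS_FIRMWARE_NONE = 0,
      EXAMPLE_OS_FIRMWARE_BIOS,
      EXAMPLE_OS_FIRMWARE_EFI,
  } exampleOsDefFirmware;

  typedef enum {
      EXAMPLE_FW_INTERFACE_NONE = EXAMPLE_OS_FIRMWARE_NONE,
      EXAMPLE_FW_INTERFACE_BIOS = EXAMPLE_OS_FIRMWARE_BIOS,
      EXAMPLE_FW_INTERFACE_UEFI = EXAMPLE_OS_FIRMWARE_EFI,
  } exampleFWInterface;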
Split the formatting by fsdriver type to allow adding a new type.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Split the switch by fsdriver type to allow adding a new one.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
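The shape of the two refactors above is an explicit switch over the fsdriver
enum, so that a later driver type only needs a new case; roughly (a sketch,
not the actual hunk):

  switch ((virDomainFSDriverType) fs->fsdriver) {
  case VIR_DOMAIN_FS_DRIVER_TYPE_DEFAULT:
  case VIR_DOMAIN_FS_DRIVER_TYPE_PATH:
  case VIR_DOMAIN_FS_DRIVER_TYPE_HANDLE:
      /* existing fsdev/virtio-9p handling */
      break;
  case VIR_DOMAIN_FS_DRIVER_TYPE_LOOP:
  case VIR_DOMAIN_FS_DRIVER_TYPE_NBD:
  case VIR_DOMAIN_FS_DRIVER_TYPE_PLOOP:
  case VIR_DOMAIN_FS_DRIVER_TYPE_LAST:
      break;
  }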
Wire up the allocation and disposal of private data.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Remove the 'gluster' part and decouple the return from
the gluster_debug_level parsing to allow adding more options
to this section.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
An inactive VM doesn't have qemuCaps set, thus we'd never properly report
that VM backups are supported only for running VMs.
Move the capability check after the active check.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
While we discourage people from using the old style of specifying
UEFI for their domains (the old style is putting the path to the FW
image under /domain/os/loader/ whilst the new one is using
/domain/os/@firmware), some applications might not have adapted
yet. They still rely on libvirt autofilling the NVRAM path and
figuring out the NVRAM template when using the old way (notably
virt-install does this). We must preserve backwards compatibility for
this previously supported config approach. However, since we really
want distro maintainers to drop the --with-loader-nvram configure
option and rely on JSON descriptors, we need to implement
autofilling of the NVRAM template for the old way too.
Fedora: https://bugzilla.redhat.com/show_bug.cgi?id=1782778
RHEL: https://bugzilla.redhat.com/show_bug.cgi?id=1776949
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
These functions are meant to replace the verbose check for the old
style of specifying UEFI with a simple function call.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
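The kind of helper being introduced can be pictured like this (a hedged
sketch; the actual helper names in the patch may differ):

  /* Hedged sketch: wrap the verbose "old style UEFI" check (an explicit
   * pflash loader under <os><loader>) in one call. */
  static bool
  exampleDomainHasOldStyleUEFI(const virDomainDef *def)
  {
      return def->os.loader &&
             def->os.loader->type == VIR_DOMAIN_LOADER_TYPE_PFLASH;
  }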
This simplifies the condition when matching the FW interface by having a
single-line condition instead of a multiline one. Also, it prepares
the code for future expansion.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
This function really needs only the domain definition; we don't need to
pass the whole domain object. This saves a couple of dereferences
and characters, especially in more checks to come.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
We insert the checkpoint metadata into the list of checkpoints prior to
actually creating the on-disk bits. If the 'transaction' or any other
steps done between inserting the checkpoint and creating the on-disk
data fail, we'd end up with an unusable checkpoint that would vanish
after libvirtd restart.
Prevent this by rolling back the metadata if we didn't actually take and
record the checkpoint.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
If we are certain that the checkpoint creation failed, we remove the
metadata from the list. To allow reusing this in the backup code, add a
new helper and export it.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
The underlying resctrl monitoring actually uses 64-bit counters,
not 32-bit ones. Correct this by using a 64-bit data type for reading
the hardware value.
To keep the interface consistent, the amount of CPU last level cache
occupied by the vCPUs of a specific resctrl monitor group is
still reported with a truncated 32-bit data type, because in the
silicon world CPU cache sizes will never exceed 4GB.
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Wang Huaqiang <huaqiang.wang@intel.com>
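The gist of the change, as a hedged sketch with hypothetical names: read the
kernel counter into a 64-bit variable and truncate only at the reporting
boundary:

  static int
  exampleReadLLCOccupancy(const char *grouppath, unsigned int *reported)
  {
      unsigned long long hwval = 0;   /* previously read into a 32-bit type */

      if (virFileReadValueUllong(&hwval, "%s/llc_occupancy", grouppath) < 0)
          return -1;

      /* The exposed statistic stays 32-bit: a last level cache will not
       * exceed 4GB. */
      *reported = (unsigned int) hwval;
      return 0;
  }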
Commit d75f865fb9 caused a job deadlock when a VM running the backup job
is destroyed, as it removed the cleanup of the async job type and there
was nothing left to clean up the backup job.
Add an explicit cleanup of the backup job when destroying a VM.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
When cancelling the blockjobs as part of the recovery from a failed
backup job startup, we didn't pass in the correct async job type.
Luckily, the block job handler and cancellation code paths currently use
no block job at all, so those were correct.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Now that we delete the images elsewhere, it's not required. Additionally,
it's safe to do as we never released an upstream version which required
this to be in place.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
While qemu is running, both locations are semantically identical, but the
move will allow us to fix the scenario where the VM is destroyed or
crashes and we'd leak the images.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
In contrast to snapshots, the backup job does not complain when the
backup job's store file has a backing image pre-configured. It's
actually required so that the NBD server exposes all the data properly.
Remove our fake termination and use the existing disk source as the
backing image.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
qemuDomainObjPrivateDataClear clears state which becomes invalid after
the VM stops running, and the node name allocator belongs there.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
The waiting loop used QEMU_ASYNC_JOB_NONE rather than 'asyncJob' passed
from the caller.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
A few places were importing dirname.h without actually using it.
Reviewed-by: Fabiano Fidêncio <fidencio@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
g_get_real_time() returns the time since the epoch in microseconds.
It uses gettimeofday() internally, while libvirt used clock_gettime()
because it is declared async-signal-safe. In practice gettimeofday()
is also async-signal-safe *provided* the timezone parameter is
NULL. This is indeed the case in g_get_real_time().
Reviewed-by: Fabiano Fidêncio <fidencio@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
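For reference, a minimal usage example of the GLib call:

  #include <glib.h>

  int
  main(void)
  {
      /* Microseconds since the Unix epoch; equivalent precision to the
       * previous clock_gettime(CLOCK_REALTIME, ...) based code. */
      gint64 now_us = g_get_real_time();
      gint64 now_ms = now_us / 1000;

      g_print("now: %" G_GINT64_FORMAT " ms since the epoch\n", now_ms);
      return 0;
  }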
The g_pattern_match_simple() function is an acceptably close
approximation of fnmatch() for libvirt's needs.
In contrast to fnmatch(), the '/' character can be matched
by the wildcards, there are no '[...]' character ranges, and
'*' and '?' cannot be escaped to include them literally in
a pattern.
Reviewed-by: Fabiano Fidêncio <fidencio@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
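A minimal example of the replacement call (illustrative patterns only):

  #include <glib.h>

  int
  main(void)
  {
      /* '*' and '?' wildcards only; '/' is matched by wildcards too and
       * there are no '[...]' ranges. */
      gboolean a = g_pattern_match_simple("qemu:///*", "qemu:///system"); /* TRUE */
      gboolean b = g_pattern_match_simple("eth?", "eth0");                /* TRUE */
      gboolean c = g_pattern_match_simple("eth[0-9]", "eth0");            /* FALSE */

      g_print("%d %d %d\n", a, b, c);
      return 0;
  }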
The GLib g_lstat() function provides a portable implementation for
Win32.
Reviewed-by: Fabiano Fidêncio <fidencio@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
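A minimal usage example (GStatBuf and g_lstat() come from glib/gstdio.h):

  #include <glib.h>
  #include <glib/gstdio.h>

  int
  main(void)
  {
      GStatBuf sb;

      /* g_lstat() behaves like lstat() on POSIX and provides a working
       * equivalent on Win32, where plain lstat() is not available. */
      if (g_lstat("/etc/hostname", &sb) < 0) {
          g_printerr("g_lstat failed\n");
          return 1;
      }

      g_print("size: %lld bytes\n", (long long) sb.st_size);
      return 0;
  }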