At the moment, any issue detected in any of the firmware
descriptor files will result in the entire process being
aborted.
In particular, installing a build of edk2 for an architecture
that libvirt doesn't yet know about, for example loongarch64,
will break most firmware-related functionality: it will no
longer be possible to define new EFI VMs, start existing ones,
or even just obtain the domcapabilities for any architecture.
This is obviously unnecessarily harsh. Adopt a more relaxed
approach and simply ignore the firmware descriptors that we
are unable to parse correctly.
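A rough sketch of the relaxed loading loop (names and structure are
illustrative, not the actual implementation):

    for (i = 0; paths[i]; i++) {
        qemuFirmware *fw = NULL;

        /* previously a parse failure here aborted the whole load;
         * now the offending descriptor is simply skipped */
        if (!(fw = qemuFirmwareParse(paths[i]))) {
            VIR_WARN("Ignoring unparsable firmware description file '%s'",
                     paths[i]);
            continue;
        }

        VIR_APPEND_ELEMENT(firmwares, nfirmwares, fw);
    }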
https://bugzilla.redhat.com/show_bug.cgi?id=2258946
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Instead of returning the list of paths exactly as obtained
from qemuFirmwareFetchConfigs(), and allocating the list of
firmwares to be exactly that size right away, start with two
empty lists and add elements to them one by one.
At the moment this only makes things more verbose, but later
we're going to change things so that it's possible that some
of the paths/firmwares are not included in the lists returned
to the caller, and at that point the changes will pay off.
Note that we can't use g_auto() for the new list of paths,
because until the very last moment it's not null-terminated,
so g_strfreev() wouldn't be able to handle it correctly.
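The resulting shape of the loop is roughly this (sketch; variable
names are illustrative):

    char **paths = NULL;   /* not g_auto(GStrv): not yet NULL-terminated */
    size_t npaths = 0;
    size_t i;

    for (i = 0; fetched[i]; i++) {
        /* later patches will skip some entries at this point */
        paths = g_renew(char *, paths, npaths + 1);
        paths[npaths++] = g_strdup(fetched[i]);
    }

    /* NULL-terminate only once the final number of elements is known */
    paths = g_renew(char *, paths, npaths + 1);
    paths[npaths] = NULL;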
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
In a couple of cases, we were reporting an error without
actually terminating the parse process.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The current implementation sets the guest-sync timeout to the
smaller of the default value (QEMU_AGENT_WAIT_TIME) and
agent->timeout, without considering the timeout passed via the
qga command.
This patch enhances the guest-sync timeout logic to use the
minimum value among the default value, agent->timeout, and
the timeout passed via the qga command.
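In other words, the effective sync timeout becomes something like
this (sketch; variable names are illustrative and unit conversions
are omitted):

    int timeout = QEMU_AGENT_WAIT_TIME;   /* built-in default */

    if (agent->timeout > 0 && agent->timeout < timeout)
        timeout = agent->timeout;         /* daemon-level setting */

    if (seconds > 0 && seconds < timeout)
        timeout = seconds;                /* timeout passed via the qga command */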
Resolves: https://gitlab.com/libvirt/libvirt/-/issues/590
Signed-off-by: ray <honglei.wang@smartx.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Move the assumption from the code pre-creating the storage to
qemuMigrationDstPrepareStorage where it's checked for other cases.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Migrating into a 'directory' won't ever work, as we ask qemu to emulate
a FAT filesystem, so restoring the files won't be possible. The same
goes for 'vhost-user' disks, which don't support blockjobs as there's
no block backend used in qemu.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Check the existence of storage per-type rather than trying to come up
with a common "path".
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Now that we have a switch statement, the code adding the 'slice' for
block devices of non-equal sizes can be moved to the appropriate location.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Automatically free helper variables, remove the 'cleanup' label and
use virBufferCurrentContent() to take the XML from the buffer rather
than extracting it to a separate variable.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Allow storage migration of VDPA devices by properly checking that they
exist on the destination. Pre-creation is not supported, but if the
device exists the migration should be able to succeed.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Decrease the likelihood that the addition of a new storage type will be
forgotten.
This patch also unifies the type check to consult the 'actual' type of
the storage in both cases: the NVMe check looked at the XML-declared
type, while virStorageSourceIsLocalStorage() looks at the
actual/translated type.
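Conceptually, the check turns into an exhaustive switch over the actual
type, so the compiler flags any newly added virStorageType value
(sketch only; the case grouping is illustrative):

    switch (virStorageSourceGetActualType(src)) {
    case VIR_STORAGE_TYPE_FILE:
    case VIR_STORAGE_TYPE_BLOCK:
    case VIR_STORAGE_TYPE_DIR:
        /* local storage */
        break;

    case VIR_STORAGE_TYPE_NETWORK:
    case VIR_STORAGE_TYPE_NVME:
    case VIR_STORAGE_TYPE_VHOST_USER:
    case VIR_STORAGE_TYPE_VHOST_VDPA:
        /* not local */
        break;

    case VIR_STORAGE_TYPE_VOLUME:
    case VIR_STORAGE_TYPE_NONE:
    case VIR_STORAGE_TYPE_LAST:
        break;
    }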
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
This option controls whether the sysctl config for enabling unprivileged
userfaultfd will be installed.
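For reference, the installed config amounts to a one-line sysctl.d
drop-in along these lines (file name illustrative):

    # /usr/lib/sysctl.d/60-qemu-postcopy-migration.conf (name illustrative)
    vm.unprivileged_userfaultfd = 1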
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The /dev/userfaultfd device is preferred over the userfaultfd syscall
for post-copy migrations. Unless the qemu driver is configured to
disable the mount namespace or to forbid access to /dev/userfaultfd in
cgroup_device_acl, we will copy it to the limited /dev filesystem QEMU
will have access to and label it appropriately. So in the default
configuration post-copy migration will be allowed even without enabling
the vm.unprivileged_userfaultfd sysctl.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Previously we were only starting or stopping nbdkit when the guest was
started or stopped, or when hotplugging/unplugging a disk. But when doing
block operations, the disk backing store sources can also be added or
removed independently of the disk device. When this happens, the nbdkit
backend was not being handled properly. For example, when doing a
blockcopy from an nbdkit-backed disk to a new disk and pivoting to that
new location, the nbdkit process did not get cleaned up properly. Add
some functionality to qemuDomainStorageSourceAccessModify() to handle
this scenario.
Since we're now starting nbdkit from the ChainAccessAllow/Revoke()
functions, we no longer need to explicitly start nbdkit in hotplug code
paths because the hotplug functions already call these allow/revoke
functions and will start/stop nbdkit if necessary.
Add a check to qemuNbdkitProcessStart() to report an error if we
are trying to start nbdkit for a disk source that already has a running
nbdkit process. This shouldn't happen, and if it does it indicates an
error in another part of our code.
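The guard amounts to something like this (sketch; field and helper
names are assumed):

    qemuDomainStorageSourcePrivate *srcpriv =
        QEMU_DOMAIN_STORAGE_SOURCE_PRIVATE(src);

    if (srcpriv && srcpriv->nbdkitProcess) {
        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                       _("an nbdkit process is already running for this disk source"));
        return -1;
    }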
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
When starting nbdkit processes for the backing store of a disk, we were
returning an error if any backing store failed, but we were not cleaning
up processes that succeeded higher in the chain. Make sure that if we
return a failure status from qemuNbdkitStartStorageSource() we roll
back any processes that had been started.
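Roughly (sketch; helper names are illustrative):

    virStorageSource *n;

    for (n = disk->src; virStorageSourceIsBacking(n); n = n->backingStore) {
        if (qemuNbdkitStartStorageSourceOne(driver, vm, n) < 0) {
            /* roll back any nbdkit processes started for this chain */
            qemuNbdkitStopStorageSource(disk->src, vm, true);
            return -1;
        }
    }

    return 0;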
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
This will allow us to start or stop nbdkit for just a single disk source
or for every source in the backing chain. This will be used in following
patches.
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
After previous cleanups, qemuMonitorIOWriteWithFD() is but a thin wrapper
over virSocketSendMsgWithFDs(). Replace the body of the former
with a call to the latter.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The 'raw' driver without any special configuration is not needed and
creates overhead in qemu.
Stop using the 'raw' format driver in cases when it's not needed. A
special case where it is still needed is FD-passed images with only a
single writable FD, where we need an overlay driver to properly
reflect the 'read-only' flag.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Store whether qemu supports the appropriate option for block-stream and
block-commit commands and always use it if available.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The capability is asserted when both block-stream and block-commit QMP
commands support the 'backing-mask-protocol' argument.
The argument causes qemu to record 'raw' as the backing file format in
cases when a protocol node is used directly. This is needed to preserve
compatibility of images, after a block-commit or block-pull libvirt
operation, with older libvirt versions in case we later want to remove
the unneeded 'raw' format drivers from the block graph.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Move the domain interface management methods from qemu to hypervisor.
This refactoring allows the domain management methods to be shared
between the CH and qemu drivers.
This commit does not introduce any functional changes.
Signed-off-by: Praveen K Paladugu <prapal@linux.microsoft.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Drop unused parameter from virDomainNetReleaseActualDevice method.
Signed-off-by: Praveen K Paladugu <prapal@linux.microsoft.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Instead of open-coding a partial version of it.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Jonathon Jongsma <jjongsma@redhat.com>
The qemuDomainGetSCSIControllerModel() function, which is
responsible for choosing a model for a SCSI controller that
didn't have one provided by the user, considers values >0 to
mean "model has been set".
Since MODEL_SCSI_AUTO == 0, this means that such a value is
considered the same as MODEL_SCSI_DEFAULT (-1). This makes
sense, as not specifying a model name or explicitly asking for
one to be automatically chosen intuitively should result in
the same behavior.
Specifically, there is no case in which a value of
MODEL_SCSI_AUTO or MODEL_SCSI_DEFAULT is encountered after the
initial controller creation: it is either replaced with an
actual model, or an error is raised.
Despite this, there are a few places in the QEMU driver where
we incorrectly treat these values as if they were actual
model names. To reduce confusion, make sure that no longer
happens.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Jonathon Jongsma <jjongsma@redhat.com>
'virXPathNodeSet' returns -1 only when 'ctxt' or 'xpath' are NULL or
when the 'xpath' string is invalid. Both are programming errors. It
doesn't make sense for the code to overwrite the error message with
something supposedly more relevant.
The majority of calls to 'virXPathNodeSet' already didn't do this, so
this patch fixes the rest to prevent it from spreading again.
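The preferred pattern is therefore simply (sketch):

    g_autofree xmlNodePtr *nodes = NULL;
    int n;

    /* a negative return means a programming error and already has an
     * error message set, so don't overwrite it */
    if ((n = virXPathNodeSet("./devices/disk", ctxt, &nodes)) < 0)
        return -1;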
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
There's nothing under the 'cleanup:' label thus the whole code can be
simplified.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Originally the migration code didn't register the NBD disk port with the
port allocator when it was manually specified. Later, when commit
e74d627bb3 refactored the code and started registering it, the old logic
clearing 'priv->nbdPort' when the port was manually specified was not
removed.
This caused the following problems:
- the port was not released after a successful migration
- the port was released on failures to start the NBD server even when
  it had not been allocated
- the port was not released on other migration failures occurring after
  the NBD server startup
To address this, remove the assumption that 'priv->nbdPort' is used
only for an auto-allocated port: fill it only once the port is actually
allocated, and make the caller of qemuMigrationDstStartNBDServer
responsible for releasing it.
Fixes: e74d627bb3
Resolves: https://issues.redhat.com/browse/RHEL-21543
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Historically, creating an offline external snapshot required the
disk-only flag as well. Now, when the user requests a new snapshot for
an offline VM and at least one disk is specified to use an external
snapshot, we no longer require the disk-only flag, as all other
unspecified disks will use external snapshots as well.
Resolves: https://issues.redhat.com/browse/RHEL-22797
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Introduce a new function, qemuSnapshotCreateUseExternal(), that returns
true if external snapshots will be used as the default location.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
The condition was completely wrong. As per the comment for the function
virDomainMomentIsAncestor(), it checks that the first argument is a
descendant of the second argument.
Consider the following snapshot tree for a VM:
s1
|
+- s2
| |
| +- s3
|
+- s4
|
+- s5 (current)
When deleting s2 with the original code we checked
virDomainMomentIsAncestor(s2, s5), which would return false for
basically any snapshot, as s5 is a leaf snapshot and has no children.
When deleting s2 with the fixed code we check
virDomainMomentIsAncestor(s5, s2), which still returns false, but when
deleting s4 it will correctly return true.
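In code, the fix boils down to swapping the arguments (sketch; variable
names are illustrative):

    /* before: is the snapshot being deleted a descendant of the current
     * snapshot?  False for both s2 and s4 in the tree above. */
    bool wrong = virDomainMomentIsAncestor(snap, current);

    /* after: is the current snapshot a descendant of the snapshot being
     * deleted?  False for s2, true for s4. */
    bool correct = virDomainMomentIsAncestor(current, snap);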
Before this fix it fails with the following error:
error: Failed to delete snapshot s2
error: invalid argument: could not find base disk source in disk source chain
After the fix it fails with the correct error:
error: Failed to delete snapshot s2
error: unsupported configuration: deletion of non-leaf external snapshot that is not in active chain is not supported
Resolves: https://issues.redhat.com/browse/RHEL-23212
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
It has nothing to do with assigning addresses, so it makes more
sense to have it in qemu_domain.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
qemuDomainGetSCSIControllerModel() can return -1 on failure,
but qemuDomainFindOrCreateSCSIDiskController() didn't implement
any handling for this scenario.
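A minimal sketch of the added handling (argument list and return value
of the surrounding function are assumed from context):

    int model;

    if ((model = qemuDomainGetSCSIControllerModel(vmdef, cont, qemuCaps)) < 0)
        return NULL;   /* error already reported */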
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Group things together where it makes sense, avoid unnecessary
uses of 'else if', plus other tweaks.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
The current defaults, which can be altered on a per-architecture
basis, are derived from the historical x86 behavior.
Every time support for a new architecture is added to libvirt,
care must be taken to override these defaults: if that doesn't
happen, guests will end up with additional hardware, which is
generally undesirable.
Turn things around, and require architectures to explicitly
ask for the devices to be created by default instead. The
behavior for existing architectures is preserved.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
They reference functions that have since been renamed.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
This is pretty straightforward.
Resolves: https://issues.redhat.com/browse/RHEL-15316
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
The QEMU_CAPS_DEVICE_VIRTIO_MEM_PCI_DYNAMIC_MEMSLOTS capability
reflects whether QEMU supports .dynamic-memslots for virtio-mem.
Use it when validating the domain configuration.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Starting from v8.2.0-rc0~74^2~2 QEMU has a .dynamic-memslots
attribute for the virtio-mem-pci device. Introduce a capability which
reflects that.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Encryption secrets are considered a format dependency, even
when being used by the storage node itself, as in the case of
using encryption engine=librbd.
Currently, the storage node is created (blockdev-add) before
creating the format dependencies (including encryption secrets).
As a result, when trying to perform a blockcopy when the target
disk uses librbd encryption, an error of this form is returned:
"error: internal error: unable to execute QEMU command 'blockdev-add': No secret with id 'libvirt-5-format-encryption-secret0'"
To overcome this error, we change the order of commands so that
format dependencies are created BEFORE creating the storage node.
Signed-off-by: Or Ozeri <oro@il.ibm.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Since v9.1.0-rc1~116 we track whether it was us who created a
macvtap device. But when updating a vNIC its definition might be
replaced with a new one (though the ifname is not allowed to
change), e.g. to reflect new QoS, link state, etc.
Now, whether we created the macvtap for a given vNIC is stored
in net->privateData->created, and replacing the definition is done by
simply freeing the old definition and making the pointer point to
the new one. But this does not preserve the 'created' flag, which
in turn means that when the domain is shut off, the macvtap is not
removed (see the loop inside qemuProcessStop()).
Copy this flag into the new definition and leave a note in the
_qemuDomainNetworkPrivate struct.
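The copy amounts to something like this (sketch; variable names are
illustrative):

    qemuDomainNetworkPrivate *oldpriv = QEMU_DOMAIN_NETWORK_PRIVATE(olddev);
    qemuDomainNetworkPrivate *newpriv = QEMU_DOMAIN_NETWORK_PRIVATE(newdev);

    /* preserve the fact that libvirt created this macvtap device */
    newpriv->created = oldpriv->created;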
Fixes: 61d1b9e659
Resolves: https://issues.redhat.com/browse/RHEL-22714
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
After a guest is started, or when we reconnect to an already running
one (after a daemon restart), qemuProcessRefreshRxFilters() is
called to refresh the rx-filters (basically the MAC addresses of guest
NICs), as they might have changed while we were not running (when
reconnecting to an already running guest), or because we need to enable
them by running a command (for a freshly started guest - see
processNicRxFilterChangedEvent()).
Now, our XML parser allowed the trustGuestRxFilters attribute for all
types and models of <interface/>, while in reality only the virtio
model combined with a TUN/TAP based type can see MAC address changes.
For other combinations, QEMU reports an error.
This all means that when the daemon is restarted and reconnects to a
guest with such an invalid configuration, or when such a guest is
restored from a saved image or migrated, we issue the monitor command,
to which QEMU replies with an error which is then propagated to users:
error: internal error: unable to execute QEMU command 'query-rx-filter': invalid net client name: hostdev0
While on one hand users should fix their configuration (and after
v10.0.0-rc1~123 they can do that even on live domains), libvirt
can also have some logic built in that prevents issuing the command
in the first place (for obviously wrong cases).
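A sketch of such a guard (the helper name is hypothetical and the exact
set of qualifying interface types is an assumption):

    static bool
    qemuDomainNetSupportsRxFilters(const virDomainNetDef *net)
    {
        if (!virDomainNetIsVirtioModel(net))
            return false;

        switch (virDomainNetGetActualType(net)) {
        case VIR_DOMAIN_NET_TYPE_NETWORK:
        case VIR_DOMAIN_NET_TYPE_BRIDGE:
        case VIR_DOMAIN_NET_TYPE_DIRECT:
        case VIR_DOMAIN_NET_TYPE_ETHERNET:
            return true;    /* TUN/TAP (or macvtap) backed */
        default:
            return false;
        }
    }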
Fixes: 060d4c83ef
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
When trying to start nbdkit-backed disks in backing chains, we were
accidentally always checking the private data of the top of the chain
instead of using the loop variable.
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Rewrite the function so that it's more compact and easier to
extend as new architectures, which will likely come with
multibus support right out of the gate, are introduced.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
It belongs next to qemuDomainSupportsPCI().
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The way the function is currently written sort of obscures this
fact, but ultimately we already unconditionally assume PCI
support on most architectures.
Arm and RISC-V need some additional checks to maintain
compatibility with existing configurations, but for all future
architectures, such as the upcoming LoongArch64, we expect PCI
support to come out of the box.
Last but not least, the function is made const-correct.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
It's no longer used anywhere.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>