Introduce the capabilities XML and replies files for QEMU 10.0.0 on s390x.
Notable changes:
- new s390-ccw-virtio-10.0 machine type
- old machine types (2.4 - 2.8) dropped
- new CPU models
- new devices:
  - virtio-mem-ccw
- chardev now supports qemu-vdagent
Signed-off-by: Shalini Chellathurai Saroja <shalini@linux.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.ibm.com>
Suggested-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
When a migration is canceled very late, once the virtual CPUs are
already stopped, QEMU will automatically resume them. If this happens
after we have exited the waiting loop in
qemuMigrationSrcWaitForCompletion, but before the loop that tries to
make sure the CPUs are stopped by waiting for the appropriate event, we
may end up waiting forever: the CPUs are running (they were resumed by
migrate_cancel), but the STOP event is already gone.
This is possible because we enter the monitor to fetch migration
statistics, at which point other APIs can be processed and the migration
may change its state. We should recheck the state when we get back from
the monitor code.
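A rough sketch of the idea (the names below are stand-ins, not the
actual qemuMigrationSrc* code): the domain lock is dropped while in the
monitor, so the job state observed before that call may be stale and
must be re-read afterwards.

  #include <stdio.h>

  /* Illustrative stand-ins only. */
  typedef enum { JOB_MIGRATING, JOB_CANCELLED } JobState;

  static JobState job_state = JOB_MIGRATING;

  /* Entering the monitor drops the domain lock, so other APIs (such as a
   * late migrate_cancel that resumes the vCPUs) may change the job state
   * while the statistics are being fetched. */
  static void fetch_migration_stats(void)
  {
      job_state = JOB_CANCELLED;   /* pretend the job got cancelled meanwhile */
  }

  int main(void)
  {
      fetch_migration_stats();

      /* The fix: recheck the state after leaving the monitor instead of
       * unconditionally waiting for a STOP event that may never arrive. */
      if (job_state != JOB_MIGRATING) {
          printf("migration no longer active, not waiting for STOP\n");
          return 0;
      }

      printf("waiting for the STOP event\n");
      return 0;
  }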
https://issues.redhat.com/browse/RHEL-52493
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Inactive domain XML can be wildly different from the live XML. For
instance, it can contain the VSOCK CID of another (running) domain.
Since the domain state is not checked, attempting to ssh into an
inactive domain may in fact open a connection to a different, live
domain that currently listens on said CID.
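As a hedged illustration using the public API (this shows only the
shape of the check, not the actual helper code): refuse to use the CID
unless the domain is actually running.

  #include <stdio.h>
  #include <libvirt/libvirt.h>

  /* Sketch: never take the VSOCK CID from inactive XML; bail out unless
   * the domain is running. */
  int main(int argc, char **argv)
  {
      virConnectPtr conn;
      virDomainPtr dom;

      if (argc < 2 || !(conn = virConnectOpen(NULL)))
          return 1;

      if (!(dom = virDomainLookupByName(conn, argv[1]))) {
          virConnectClose(conn);
          return 1;
      }

      if (virDomainIsActive(dom) != 1) {
          fprintf(stderr, "domain is not running, refusing to use its CID\n");
      } else {
          /* ... only now fetch the live XML and read the VSOCK CID ... */
      }

      virDomainFree(dom);
      virConnectClose(conn);
      return 0;
  }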
Resolves: https://gitlab.com/libvirt/libvirt/-/issues/737
Resolves: https://issues.redhat.com/browse/RHEL-75577
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
The JSON parser isn't called when reading empty files, so `jerr` is used
uninitialized in the original code. Empty files appear when a network
has no DHCP clients.
This patch checks for such files and skips them.
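A minimal sketch of the check (illustrative names, not libvirt's actual
lease-handling code): only hand the buffer to the JSON parser when the
file actually has content.

  #include <stdio.h>

  /* Sketch: an empty status file simply means there are no DHCP leases
   * yet, so skip parsing instead of consuming an uninitialized error
   * object from the JSON parser. */
  static int load_leases(const char *content)
  {
      if (!content || *content == '\0')
          return 0;               /* nothing to parse, no leases */

      /* ... pass the non-empty buffer to the JSON parser here ... */
      return 1;
  }

  int main(void)
  {
      printf("%d\n", load_leases(""));    /* skipped */
      printf("%d\n", load_leases("[{\"mac-address\": \"52:54:00:12:34:56\"}]"));
      return 0;
  }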
Fixes: a8d828c88bbdaf83ae78dc06cdd84d5667fcc424
Signed-off-by: Jiang XueQian <jiangxueqian@gmail.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
This matches the cpu_family() check in `meson.build`.
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
Commit f8558a87ac8525b16f4cbba4f24e0885fde2b79e de-modularized the
'storage-file' backend for local files, so the backend directory is now
present only when compiling with gluster.
This breaks RPM builds without gluster, as the backend directory no
longer exists in such a case. Move the stanza requiring the directory
under the gluster driver declarations.
Fixes: f8558a87ac8525b16f4cbba4f24e0885fde2b79e
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
In upstream commit [1], qemu dropped the s390-2.7 machine type, and in
commit [2] the s390-2.8 machine type was dropped. As Thomas Huth pointed
out, any machine type older than 6 years is subject to removal [3]. This
means any machine type older than 4.1 is going to be removed eventually.
We have two test cases that assume the existence of the 2.7 machine
type. While they could be switched to the 4.1 machine type, we already
have another test case that checks the 4.2 machine type.
Therefore, just drop the 2.7 ones.
1: 3199c7ee76
2: 66924fe369
3: ce80c4fa6f
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.ibm.com>
The 'storage_file' infrastructure serves as an abstraction on top of
file-like storage technologies. Apart from local files, it currently
also implements a backend for 'gluster'.
Historically it was all modularized and the local file module was
usually packaged with the 'core' part of the storage driver. Now with
split daemons one can install e.g. 'virqemud' without the storage driver
core, which contains the 'fs' backend module. Since the qemu driver uses
the storage file backends to e.g. create storage for snapshots and
backups, this allows users to create a deployment where some things will
not work properly.
As the 'fs' backend doesn't use any code that wouldn't be linked in
directly anyway, there's no point in actually shipping it as a module.
Let's compile it in so that all deployments can use it.
To achieve that, compile the source directly into the
'virt_storage_file_lib' static library and remove the loading code. Also
adjust the spec file.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Add an example image formatted by:
qemu-img create -f qcow2 -o data_file=nbd+unix:///datafile?socket=/tmp/nbd,data_file_raw=true /tmp/nbddatastore.qcow2 10M -u
demonstrating the case when qemu records an empty string in the
'data_file' field.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Under certain buggy conditions qemu can create an image which has an
empty string stored as 'data_file'. When probing, libvirt would consider
the empty string a relative file name and construct the path from the
path of the parent image, stripping the last component and appending the
empty string. This results in attempting to use a directory as an image
and thus the following error when attempting to start a VM with such an
image:
error: unsupported configuration: storage type 'dir' requires use of storage format 'fat'
Reject empty strings passed in as 'data_file'.
Note that we do not have the same problem with 'backing store', as an
empty string there is interpreted as no backing file by both qemu and
libvirt.
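A minimal sketch of the validation (hypothetical function name, not
libvirt's actual parser code):

  #include <stdbool.h>
  #include <stdio.h>

  /* Sketch: an absent 'data_file' (NULL) is fine, a non-empty path is
   * fine, but an empty string must be rejected instead of being resolved
   * relative to the parent image's directory. */
  static bool check_data_file(const char *data_file)
  {
      if (data_file && *data_file == '\0') {
          fprintf(stderr, "invalid empty 'data_file' in qcow2 header\n");
          return false;
      }
      return true;
  }

  int main(void)
  {
      printf("%d %d %d\n",
             check_data_file(NULL),            /* 1 */
             check_data_file(""),              /* 0 */
             check_data_file("datafile.raw")); /* 1 */
      return 0;
  }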
Resolves: https://issues.redhat.com/browse/RHEL-70627
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The assigned string is 17 chars long once the trailing nul is taken
into account. This triggers a warning with GCC 15:
src/util/virsystemd.c: In function ‘virSystemdEscapeName’:
src/util/virsystemd.c:59:38: error: initializer-string for array of ‘char’ is too long [-Werror=unterminated-string-initialization]
59 | static const char hextable[16] = "0123456789abcdef";
| ^~~~~~~~~~~~~~~~~~
Switch to a dynamically sized array, as is done in all the other places
where we have a hextable array.
See also: https://gcc.gnu.org/PR115185
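The change itself is tiny; a minimal sketch (the hextable line is the
one from the warning above, the rest is just a demo):

  #include <stdio.h>

  /* Before (warns with GCC 15, the literal needs 17 bytes):
   *     static const char hextable[16] = "0123456789abcdef";
   * After, letting the compiler size the array (17 bytes incl. the NUL): */
  static const char hextable[] = "0123456789abcdef";

  int main(void)
  {
      unsigned char byte = 0x5a;
      /* typical use: translate one byte into two hex characters */
      printf("%c%c\n", hextable[byte >> 4], hextable[byte & 0xf]);
      return 0;
  }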
Reported-by: Yaakov Selkowitz <yselkowi@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The list of CPU features we probe from various MSRs grew significantly
over time and the CPU map currently mentions 11 distinct MSR indexes.
But the code for directly probing host CPU features was still reading
only the original 0x10a index. Thus the CPU model in host capabilities
was missing a lot of features.
Instead of specifying a static list of indexes to read (which we would
forget to update in the future), let's just read all indexes found in
the CPU map.
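Illustrative sketch of the approach (stand-in names and a placeholder
index list; the real list of indexes comes from the CPU map XML):

  #include <stdint.h>
  #include <stdio.h>

  /* Stand-in for the indexes parsed out of the CPU map; 0x10a was the
   * only one read before this change. */
  static const uint32_t msr_indexes[] = { 0x10a /* , ...every other index
                                                   found in the CPU map */ };

  /* Stand-in for the real "read this MSR on the host" helper. */
  static int read_msr(uint32_t index, uint64_t *value)
  {
      (void)index;
      *value = 0;
      return 0;
  }

  int main(void)
  {
      size_t i;
      for (i = 0; i < sizeof(msr_indexes) / sizeof(msr_indexes[0]); i++) {
          uint64_t value;
          if (read_msr(msr_indexes[i], &value) == 0)
              printf("MSR 0x%x = 0x%llx\n", (unsigned)msr_indexes[i],
                     (unsigned long long)value);
      }
      return 0;
  }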
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
When an I/O error happens (causing a domain to be paused) during a live
migration which is later cancelled by a user, trying to resume the
domain doesn't make sense.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
None of the callers really care about the return value so we can drop it
and simplify the code a bit.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
When migration fails in the Perform phase, we call Finish on the
destination host with cancelled=1, get the error from there, and report
it to the user. This works well if the error on the destination caused
the migration to fail. But in other cases the main error may be reported
by the source and the destination would just be complaining about a
broken migration stream.
In other words, we don't really know which error caused the migration to
fail and we have no way of detecting that. So instead of choosing one
error, this patch combines the error messages from both sides of the
migration into a single message and reports it to the user. The result
would be, for example:
operation failed: migration failed. Message from the source host:
operation failed: job 'migration out' failed: Certificate does not
match the hostname ble.bla. Message from the destination host:
operation failed: job 'migration in' failed: load of migration
failed: Invalid argument
And yes, this is ugly, but I wasn't able to come up with a better way of
fixing this issue.
https://issues.redhat.com/browse/RHEL-58933
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
There are some features/improvements/bug fixes I've either contributed
or reviewed/merged. Document them for the upcoming release.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The schema does not allow that anyway, and we then format them all back,
which leads to libvirt producing invalid XML.
Resolves: https://issues.redhat.com/browse/RHEL-70656
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The source_root() method is deprecated in 0.56.0 and we're recommended
to use project_source_root() instead.
This is similar to commit v8.9.0-rc1~70, but somehow the old method
sneaked in.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
The postcopy-recover migration state in QEMU means a connection for the
migration stream was established. Depending on the schedulers on both
hosts, the relative timing of the corresponding MIGRATION events on the
source host and the destination host may differ. Specifically it's
possible that the source sees postcopy-recover while the destination is
still in postcopy-paused.
Currently the Perform phase on the source host ends when we get the
postcopy-recover event and the Finish phase on the destination host is
called. If this is fast enough, we can still see the postcopy-paused
state when the Finish phase starts waiting for the migration to
complete. This is interpreted as a failure and reported back to the
caller, even though the recovery may actually start just a few moments
later.
To avoid this race we now don't consider post-copy migration active in
the postcopy-recover state and keep waiting for the postcopy-active
event (in the success path). Thus the Finish phase is entered only after
the migration switches to postcopy-active. In this state QEMU guarantees
the destination has already switched at least to postcopy-recover and we
won't be confused by seeing an old postcopy-failed state.
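A schematic view of the new waiting rule (the enum values are stand-ins,
not QEMU's or libvirt's actual identifiers):

  #include <stdbool.h>
  #include <stdio.h>

  typedef enum {
      MIG_POSTCOPY_PAUSED,
      MIG_POSTCOPY_RECOVER,
      MIG_POSTCOPY_ACTIVE
  } MigState;

  /* Before the fix postcopy-recover already counted as active, which
   * ended the Perform phase too early; now only postcopy-active does. */
  static bool postcopy_is_active(MigState state)
  {
      return state == MIG_POSTCOPY_ACTIVE;
  }

  int main(void)
  {
      printf("recover counts as active? %d\n",
             postcopy_is_active(MIG_POSTCOPY_RECOVER));
      printf("active  counts as active? %d\n",
             postcopy_is_active(MIG_POSTCOPY_ACTIVE));
      return 0;
  }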
https://issues.redhat.com/browse/RHEL-73085
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.
Translation: libvirt/libvirt
Translate-URL: https://translate.fedoraproject.org/projects/libvirt/libvirt/
Signed-off-by: Fedora Weblate Translation <i18n@lists.fedoraproject.org>
The generated org.libvirt.api.policy.in file was recently added to the
POTFILES list as it contains translatable messages.
It is only generated when WITH_POLKIT && WITH_LIBVIRTD is satisfied
though, resulting in the 'po_check' syntax rule failing if either of
those conditions is not met.
It is harmless to generate this file unconditionally, as a separate
rule takes care of installing it, and that rule remains under the
build conditions.
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Since we previously only supported vlan tagging for interfaces
connected to an OVS bridge [*], the code in qemuChangeNet() (used by
the update-device API) assumed an interface with modified vlan config
was on an OVS bridge, and would call the OVS-specific
virNetDevOpenvswitchUpdateVlan().
Now that we support vlan tagging for interfaces connected to a
standard Linux host bridge, we must check the type of connection and
only call the OVS function when connected to an OVS bridge *both
before and after the update*. Otherwise we just set the flag to
re-connect to the bridge, which has the side effect of redoing the
vlan setup.
([*] or an SRIOV VF assigned using VFIO, but we don't support *any*
runtime changes to that type of netdev, so it's irrelevant here.)
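Schematically (stand-in types; the real logic lives in qemuChangeNet()
and uses libvirt's internal device types):

  #include <stdio.h>

  typedef enum { CONN_OVS_BRIDGE, CONN_LINUX_BRIDGE } ConnType;

  static void update_vlan(ConnType before, ConnType after)
  {
      if (before == CONN_OVS_BRIDGE && after == CONN_OVS_BRIDGE) {
          /* only in this case is the OVS-specific call applicable */
          printf("virNetDevOpenvswitchUpdateVlan()\n");
      } else {
          /* standard Linux bridge: reconnecting the tap device to the
           * bridge redoes the vlan setup as a side effect */
          printf("set the reconnect flag\n");
      }
  }

  int main(void)
  {
      update_vlan(CONN_OVS_BRIDGE, CONN_OVS_BRIDGE);
      update_vlan(CONN_LINUX_BRIDGE, CONN_LINUX_BRIDGE);
      return 0;
  }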
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Update the domain XML and network XML documentation to describe how
standard Linux bridges support VLAN configuration.
Signed-off-by: Leigh Brown <leigh@solinno.co.uk>
Reviewed-by: Laine Stump <laine@redhat.com>
When we are deleting an external snapshot that is not active, we only
need to delete the overlay disk image of the parent snapshot. This works
correctly even if the parent snapshot is external and active, as it will
have another overlay created when the user reverted to that snapshot.
In case the parent snapshot is internal, there are no overlay disk
images created, as everything is stored internally within the disk
image. In this case we would delete the actual disk image storing the
internal snapshots, and most likely the original disk image as well,
resulting in data loss once the VM is shut off.
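Schematically (stand-in names, not the actual snapshot-deletion code):

  #include <stdio.h>

  typedef enum { SNAP_INTERNAL, SNAP_EXTERNAL } SnapLocation;

  static void delete_inactive_snapshot(SnapLocation parent)
  {
      /* Only an external parent has a separate overlay image that can be
       * removed safely. An internal parent stores its snapshots inside
       * the original disk image, so deleting that file would destroy the
       * internal snapshots and most likely the base image as well. */
      if (parent == SNAP_EXTERNAL)
          printf("delete the parent snapshot's overlay image\n");
      else
          printf("keep the disk image holding the internal snapshots\n");
  }

  int main(void)
  {
      delete_inactive_snapshot(SNAP_EXTERNAL);
      delete_inactive_snapshot(SNAP_INTERNAL);
      return 0;
  }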
Fixes: https://gitlab.com/libvirt/libvirt/-/issues/734
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
The host-model CPU mode was described as similar to copying the host CPU
definition from capabilities, which has not been the case for ages. The
host-model definition from domain capabilities is used instead.
Only the first sentence changed, but it required reformatting
essentially the whole paragraph so I used this as an opportunity to
reformat it a little bit more and split the long paragraph into several
smaller ones for better readability.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
When VIR_INHIBITOR_WHAT_NONE is passed to virInhibitorNew, it is
an indication that daemon shutdown should be inhibited, but no
OS-level inhibitors should be acquired. This is done by the virtnetworkd
daemon, for example, to prevent shutdown while running virtual
machines are present, without blocking / delaying OS shutdown.
Unfortunately the code forgot to skip the DBus call in this case,
resulting in errors being logged.
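The shape of the fix, roughly (illustrative stand-ins; the real code is
in the virInhibitor helpers):

  #include <stdio.h>

  typedef enum { INHIBITOR_WHAT_NONE = 0, INHIBITOR_WHAT_SHUTDOWN = 1 } InhibitorWhat;

  static int acquire_inhibitor(InhibitorWhat what)
  {
      /* With WHAT_NONE only the daemon-internal shutdown inhibition is
       * wanted, so skip the DBus/logind call entirely. */
      if (what == INHIBITOR_WHAT_NONE)
          return 0;

      /* ... otherwise call org.freedesktop.login1's Inhibit() over DBus ... */
      printf("calling logind Inhibit()\n");
      return 0;
  }

  int main(void)
  {
      return acquire_inhibitor(INHIBITOR_WHAT_NONE);
  }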
Reviewed-by: Laine Stump <laine@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
When debugging, it is useful to know which signals are being received
and the metadata related to them. Log this data before calling the
signal handling callbacks.
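A rough, self-contained illustration (not libvirt's event-loop code,
which defers work out of the handler; fprintf here is just for the demo
and is not async-signal-safe):

  #define _POSIX_C_SOURCE 200809L
  #include <signal.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  /* Demo: record the signal number and sender metadata before doing the
   * handler's real work. */
  static void handler(int sig, siginfo_t *info, void *ctx)
  {
      (void)ctx;
      fprintf(stderr, "received signal %d from pid %ld uid %ld\n",
              sig, (long)info->si_pid, (long)info->si_uid);
      /* ... dispatch to the registered callback here ... */
  }

  int main(void)
  {
      struct sigaction act;
      memset(&act, 0, sizeof(act));
      act.sa_sigaction = handler;
      act.sa_flags = SA_SIGINFO;
      sigaction(SIGUSR1, &act, NULL);

      kill(getpid(), SIGUSR1);    /* trigger the handler once */
      return 0;
  }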
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>