The data is gathered only once, so we can move the whole block which
fetches the data out of the loop and get rid of the logic which
prevents multiple calls.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Refactor the logic to skip the body of the function if there's nothing
to do.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
There are two calls to virHashNew which check the return value. This is
no longer necessary, as virHashNew always returns a valid pointer.
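Schematically (call sites and argument names illustrative):

    /* before: the return value was checked even though it can't be NULL */
    if (!(table = virHashNew(dataFree)))
        return -1;

    /* after */
    table = virHashNew(dataFree);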
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Aside from spurious error (actually warning) messages due to an
unrecognized response from qemu, this isn't even necessary - the
migration proceeds successfully to completion anyway.
(I'm not sure where to see this status reported in the API though - do
we need to add an extra state, or recognition of a new event somewhere?)
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Normally a PCI hostdev can't be migrated, so
qemuMigrationSrcIsAllowedHostdev() won't permit it. In the case of a
hostdev network interface that has <teaming type='transient'/> set,
QEMU will automatically unplug the device prior to migration, and
re-plug a corresponding device on the destination. This patch modifies
qemuMigrationSrcIsAllowedHostdev() to allow domains with those devices
to be migrated.
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
The QEMU driver uses the <teaming type='persistent|transient'
persistent='blah'/> element to set up a "failover" pair of devices -
the persistent device must be a virtio emulated NIC, with the only
extra configuration being the addition of ",failover=on" to the device
commandline, and the transient device must be a hostdev NIC
(<interface type='hostdev'> or <interface type='network'> with a
network that is a pool of SRIOV VFs) where the extra configuration is
the addition of ",failover_pair_id=$aliasOfVirtio" to the device
commandline. These new options are supported in QEMU 4.2.0 and later.
Extra qemu-specific validation is added to ensure that the device
type/model is appropriate and that the qemu binary supports these
commandline options.
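For illustration, such a pair might look like this in the domain XML
(MAC address, alias, and network names are placeholders):

    <interface type='network'>
      <mac address='00:11:22:33:44:55'/>
      <source network='default'/>
      <model type='virtio'/>
      <teaming type='persistent'/>
      <alias name='ua-backup0'/>
    </interface>
    <interface type='network'>
      <mac address='00:11:22:33:44:55'/>
      <source network='hostdev-pool'/>
      <teaming type='transient' persistent='ua-backup0'/>
    </interface>

qemu then gets ",failover=on" appended to the virtio device and
",failover_pair_id=ua-backup0" appended to the hostdev device.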
The result of this will be:
1) The virtio device presented to the guest will have an extra bit set
in its PCI capabilities indicating that it can be used as a failover
backup device. The virtio guest driver will need to be equipped to do
something with this information - this is included in the Linux
virtio-net driver in kernel 4.18 and above (and also backported to
some older distro kernels). Unfortunately there is no way for libvirt
to learn whether or not the guest driver supports failover - if it
doesn't then the extra PCI capability will be ignored and the guest OS
will just see two independent devices. (NB: the current virtio guest
driver also requires that the MAC addresses of the two NICs match in
order to pair them into a bond).
2) When a migration is requested, QEMU will automatically unplug the
transient/hostdev NIC from the guest on the source host before
starting migration, and automatically re-plug a similar device after
restarting the guest CPUs on the destination host. While the transient
NIC is unplugged, all network traffic will go through the
persistent/virtio device, but when the hostdev NIC is plugged in, it
will get all the traffic. This means that in normal circumstances the
guest gets the performance advantage of vfio-assigned "real hardware"
networking, but it can still be migrated with the only downside being
a performance penalty (due to using an emulated NIC) during the
migration.
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Presence of the virtio-net-pci option called "failover" indicates
support in a qemu binary for a simplistic bonding of a virtio-net
device with another PCI device. This feature allows migration of
guests that have a network device assigned to a guest with VFIO, by
creating a network bond device in the guest consisting of the
VFIO-assigned device and a virtio-net-pci device, then temporarily
(and automatically) unplugging the VFIO net device prior to migration
(and hotplugging an equivalent device on the migration
destination). (The feature is called "failover" because the bond
device uses the vfio-pci netdev for normal guest networking, but
"fails over" to the virtio-net-pci netdev once the vfio-pci device is
unplugged for migration.)
Full functioning of the feature also requires support in the
virtio-net driver in the guest OS (since that is where the bond device
resides), but if the "failover" commandline option is present for the
virtio-net-pci device in qemu, at least the qemu part of the feature
is available, and libvirt can add the proper options to both the
virtio-net-pci and vfio-pci device commandlines to indicate qemu
should attempt doing the failover during migration.
This patch just adds the qemu capabilities flag "virtio-net.failover".
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
There are a large number of different header files that
are related to the sockets APIs. The virsocket.h header
includes all of the relevant headers for Windows and UNIX
in one convenient place. If virsocketaddr.h is already
included, then there's no need for virsocket.h.
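Schematically, a file using the sockets APIs directly then only needs:

    /* pulls in the relevant socket headers for both Windows and UNIX */
    #include "virsocket.h"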
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
This is a simplified variant of gnulib's passfd module
without the portability code that we do not require.
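At its core, such a module wraps fd passing via SCM_RIGHTS ancillary
data; a minimal sketch of the send side (helper name illustrative,
not the module's actual API):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    static int
    sketch_send_fd(int sock, int fd)
    {
        char byte = 0;
        struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
        char buf[CMSG_SPACE(sizeof(int))];
        struct msghdr msg = { 0 };
        struct cmsghdr *cmsg;

        memset(buf, 0, sizeof(buf));
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = buf;
        msg.msg_controllen = sizeof(buf);

        /* the fd travels as SCM_RIGHTS ancillary data */
        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

        /* a real implementation would retry on EINTR */
        return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
    }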
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Currently, when a disk is removed from an iotune group (by setting
all tunables to zero), the group name is left in the config. Let's
fix it.
Since iotune defaults are taken from the destination group, setting
the tunables to zero may require a different set of zero settings in
the API call. For sanity's sake, let's prohibit removing a disk from
a group while specifying a group name different from the current one.
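For example, with this fix a removal like the following (tunable set
illustrative; which tunables must be zeroed depends on the group's
current settings) also clears the group name from the config:

    virsh blkdeviotune guest vda --total-bytes-sec 0 --total-iops-sec 0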
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
For example, if a disk is not in the group and we want to move it
there, then it makes sense to specify only the group name in the API
call. Currently the destination group's iotune settings would be
overwritten with the disk's settings, which I would say is not what
one would expect. Thus let's take the defaults from the group we are
moving to. And if we are moving to a brand new group, then it makes
sense to copy the current disk iotune settings to the group.
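With this change, moving a disk into an existing group needs nothing
more than (domain, disk, and group names are placeholders):

    virsh blkdeviotune guest vda --group-name limits1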
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
virDomainSetBlockIoTune does not simply set the iotune params given
in the API call, but uses the current settings for all omitted
params. Unfortunately, it uses the current settings of the active
config even when setting inactive params. Let's fix it.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Currently it is possible to start a domain which has disks in the
same iotune group but with different iotune params. Both param sets
are passed to qemu on the command line, and the one that appears
later on the command line is the one that actually takes effect.
Let's prohibit such configurations.
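For example, a configuration like this (values illustrative) is now
rejected, because both disks claim group 'limits1' with different
settings:

    <disk type='file' device='disk'>
      ...
      <iotune>
        <total_bytes_sec>10000000</total_bytes_sec>
        <group_name>limits1</group_name>
      </iotune>
    </disk>
    <disk type='file' device='disk'>
      ...
      <iotune>
        <total_bytes_sec>20000000</total_bytes_sec>
        <group_name>limits1</group_name>
      </iotune>
    </disk>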
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Currently, upon a successful call to qemu's implementation of
virDomainSetBlockIoTune, the iotune settings are changed only for the
disk given in the API call even if the disk is in an iotune group,
while we need to change the settings for all disks in the group.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
And introduce virDomainBlockIoTuneInfoHasAny.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
All the callees return either 0 or -1, so there is no need
to propagate the value. And we bail on the first error.
Remove the variable to make the function simpler.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
We have a helper variable to make the code more concise;
use it consistently.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Now that the cleanup section is empty, eliminate the cleanup
label as well as the 'ret' variable.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Use the g_auto macros wherever possible to eliminate the cleanup
section.
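Schematically (illustrative):

    /* before */
    char *tmp = NULL;
    int ret = -1;
    ...
 cleanup:
    VIR_FREE(tmp);
    return ret;

    /* after: tmp is freed automatically when it goes out of scope */
    g_autofree char *tmp = NULL;
    ...
    return 0;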
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
The template still references libvirt-qemu-shim, which was at one
point the name used to refer to what we now know as virt-qemu-run.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
A recent commit added an error check for too-nested backing chains
followed by a return, even though errors above jump to cleanup.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Fixes: b168fa88b8
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Remove bogus G_GNUC_UNUSED attribute and add a missing space.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Fixes: d600667278
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
The algorithm is used in two places to find the parent checkpoint
object which contains the given disk, and then uses data from the
disk. Additionally, the code is written in a very non-obvious way.
Factor out the lookup of the disk into a function, which also
simplifies the callers.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
The function has no users now and there's no need for it, as the
common pattern is to look up the whole disk object anyway.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
If a disk is unplugged and then the user tries to delete a
checkpoint, the code would try to use a NULL node name, as it was not
checked.
Fix this by fetching the whole disk definition object and verifying
that it was found.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Look up the whole disk definition rather than just the node name.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Upcoming patches will also use the domain disk definition. Rename disk
to chkdisk for clarity.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Upcoming patches will also use the domain disk definition. Rename disk
to chkdisk for clarity.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
qemuCheckpointDiscard is a massive function that can be separated into
smaller bits. Extract the part that actually modifies the disk from the
metadata handling.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Test code will need to know whether the virQEMUCaps object already
contains any machine types. Add a helper and expose it via
'qemu_capspriv.h'.
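A sketch of the helper's shape (name and exact signature
illustrative):

    /* returns true if any machine types have been loaded */
    bool virQEMUCapsHasMachines(virQEMUCapsPtr qemuCaps);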
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The previous approach of just purging the alias, combined with the
fact that we filled in fake machine types in the test data, meant
that if a test case used an alias machine type such as 'pc' or 'q35'
it would not properly resolve to the actual data returned by qemu.
This started to be a problem since the CPU driver now looks at the
default CPU reported with the machine type.
This patch replaces the original approach of just removing the
alias: instead, the alias is replaced with a copy of the machine type
data it resolves to. This means that we use the real data without
having to modify the test output after every qemu upgrade.
Additionally this change will allow us to drop adding the fake machine
types later.
The test fallout is from actually exercising the CPU driver with
actual data.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Separate out the internals as they will become more complex soon.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Every supported qemu is able to return the list of machine types it
supports so we can start validating it against that list. The advantage
is a better error message, and the change will also prevent having stale
test data.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The linkers on Debian/Ubuntu are stricter than on other distros,
requiring glib to be linked explicitly.
macOS needs -export-dynamic instead of -Wl,--export-dynamic
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Similarly to 510d154a0b, we need to prevent overly deeply nested
backing chains and reject them with a sane error message.
message.
Add a loop to go through the snapshots prior to actually attempting
to create them, to prevent some possible inconsistent scenarios.
We don't need to do it when reusing backing chains, as we'll be
re-detecting the backing chain in that case anyway.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Don't adopt the backing store data when reusing images provided by
the user. This will force a backing chain re-probe, as users might
have passed in something unexpected in the overlay, where our view of
the backing chain would not correspond.
This is done only for inactive snapshots, as we have far less
verification there.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
The previous "QEMU shim" proof of concept was taking an approach of only
caring about initial spawning of the QEMU process. It was then
registered with the libvirtd daemon who took over management of it. The
intent was that later libvirtd would be refactored so that the shim
retained control over the QEMU monitor and libvirt just forwarded APIs
to each shim as needed. This forwarding of APIs would require quite alot
of significant refactoring of libvirtd to achieve.
This impl thus takes a quite different approach, explicitly deciding to
keep the VMs completely separate from those seen & managed by libvirtd.
Instead it uses the new "qemu:///embed" URI scheme to embed the entire
QEMU driver in the shim, running with a custom root directory.
Once the driver is initialized, the shim starts a VM and then waits,
shutting down automatically when QEMU shuts down, and killing QEMU if
the shim itself is terminated. This ought to use the AUTO_DESTROY
feature, but that is not yet available in embedded mode, so we rely
on installing a few signal handlers to gracefully kill QEMU. This
isn't reliable if we crash, of course, but you can restart with the
same root dir.
Note this program does not expose any way to manage the QEMU process,
since there's no RPC interface enabled. It merely starts the VM and
cleans up when the guest shuts down at the end. This program is
installed to /usr/bin/virt-qemu-run, enabling direct use by end users.
Most use cases will probably want to integrate the concept directly
into their respective application codebases. This standalone binary
serves as a nice demo though, and also provides a way to measure
performance of the startup process quite simply.
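Typical usage is minimal (root dir and XML file are placeholders;
option syntax shown here is illustrative, see --help for the exact
form):

    virt-qemu-run --root=/tmp/demo guest.xml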
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
This enables support for running QEMU embedded in the calling
application process using a URI:
qemu:///embed?root=/some/path
Note that it is important to keep the path reasonably short to
avoid the risk of hitting the limit on UNIX socket path names,
which is 108 characters.
When using the embedded mode with root=/var/tmp/embed, the
driver will use the following paths:
logDir: /var/tmp/embed/log/qemu
swtpmLogDir: /var/tmp/embed/log/swtpm
configBaseDir: /var/tmp/embed/etc/qemu
stateDir: /var/tmp/embed/run/qemu
swtpmStateDir: /var/tmp/embed/run/swtpm
cacheDir: /var/tmp/embed/cache/qemu
libDir: /var/tmp/embed/lib/qemu
swtpmStorageDir: /var/tmp/embed/lib/swtpm
defaultTLSx509certdir: /var/tmp/embed/etc/pki/qemu
These are identical whether the embedded driver is privileged
or unprivileged.
This compares with the system instance, which uses:
logDir: /var/log/libvirt/qemu
swtpmLogDir: /var/log/swtpm/libvirt/qemu
configBaseDir: /etc/libvirt/qemu
stateDir: /run/libvirt/qemu
swtpmStateDir: /run/libvirt/qemu/swtpm
cacheDir: /var/cache/libvirt/qemu
libDir: /var/lib/libvirt/qemu
swtpmStorageDir: /var/lib/libvirt/swtpm
defaultTLSx509certdir: /etc/pki/qemu
At this time all features present in the QEMU driver are available when
running in embedded mode, availability matching whether the embedded
driver is privileged or unprivileged.
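For example, an application can open the embedded driver like any
other connection (root path is a placeholder):

    #include <libvirt/libvirt.h>

    int main(void)
    {
        /* spins up a private QEMU driver instance inside this process */
        virConnectPtr conn = virConnectOpen("qemu:///embed?root=/var/tmp/embed");

        if (!conn)
            return 1;
        /* ... define & start domains as with any other connection ... */
        virConnectClose(conn);
        return 0;
    }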
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The intent here is to allow the virt drivers to be run directly embedded
in an arbitrary process without interfering with libvirtd. To achieve
this they need to store all their configuration & state in a separate
directory tree from the main system or session libvirtd instances.
This can be useful for doing testing of the virt drivers in "make check"
without interfering with the user's own libvirtd instances.
It can also be used for applications using KVM/QEMU as a piece of
infrastructure to build a service, rather than for general purpose
OS hosting. A long-standing example is libguestfs, which would prefer
that its temporary VMs did not show up in the main libvirtd VM list,
because this confuses apps such as OpenStack Nova. A more recent
example would be Kata, which is using KVM as a technology to build
containers.
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
If a domain is configured to have an egl-headless display and a virtio
video device, virgl will be enabled automatically within the guest, even
if the video device is configured with accel3d='no'.
In this case we should explicitly pass 'virgl=off' to qemu.
See https://bugzilla.redhat.com/show_bug.cgi?id=1791236 for more
information.
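For example, a configuration like this previously got accelerated
graphics in the guest despite accel3d='no':

    <graphics type='egl-headless'/>
    <video>
      <model type='virtio'>
        <acceleration accel3d='no'/>
      </model>
    </video>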
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Since v4.2-rc0, QEMU has a builtin rng backend that uses the
getrandom() syscall to generate random data. Add it to libvirt with
the backend model 'builtin'.
https://bugzilla.redhat.com/show_bug.cgi?id=1785091
Signed-off-by: Han Han <hhan@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The 'builtin' rng backend model can be used as follows:
  <rng model='virtio'>
    <backend model='builtin'/>
  </rng>
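On the qemu command line this maps to something like (ids
illustrative):

  -object rng-builtin,id=objrng0 -device virtio-rng-pci,rng=objrng0,id=rng0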
Signed-off-by: Han Han <hhan@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
It is used to check whether qemu supports the rng-builtin object.
The object was added in qemu-4.2.0-rc0, commit 6c4e9d48.
Signed-off-by: Han Han <hhan@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>