Allow swtpm (0.7.0 or later) to fsync the directory into which it
writes its state files, so that "the entry in the directory containing
the file has also reached disk" (fsync(2)).
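For illustration, a minimal sketch of the directory-fsync pattern
(plain POSIX; not swtpm's actual code):

  #include <fcntl.h>
  #include <unistd.h>

  /* After creating or renaming a state file in 'dir', also flush the
   * directory entry itself to disk. */
  static int
  sync_dir(const char *dir)
  {
      int fd = open(dir, O_RDONLY | O_DIRECTORY);

      if (fd < 0)
          return -1;
      if (fsync(fd) < 0) {
          close(fd);
          return -1;
      }
      return close(fd);
  }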
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Neal Gompa <ngompa13@gmail.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Tim Wiederhake <twiederh@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
`virHashNew` cannot return NULL, so the check is not needed.
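A sketch of the pattern being removed (illustrative only; g_free
stands in for the real value free function):

  /* before: redundant NULL check */
  if (!(table = virHashNew(g_free)))
      return -1;

  /* after: virHashNew() aborts on allocation failure rather than
   * returning NULL, so the result can be used directly */
  table = virHashNew(g_free);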
Signed-off-by: Tim Wiederhake <twiederh@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
According to the definition of virDomainCoreDumpFormat, "elf" should
be the first string argument in the VIR_ENUM_* macros.
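A hedged sketch of the required ordering (the string list must match
the enum declaration order, which starts with the raw/elf format):

  VIR_ENUM_IMPL(virDomainCoreDumpFormat,
                VIR_DOMAIN_CORE_DUMP_FORMAT_LAST,
                "elf",           /* VIR_DOMAIN_CORE_DUMP_FORMAT_RAW */
                "kdump-zlib",
                "kdump-lzo",
                "kdump-snappy",
                "win-dmp",
  );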
Fixes: 84cc4543be
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1981625
Signed-off-by: Han Han <hhan@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
The same pattern of retrieving the domXML, running the hook script, and
checking for errors is used throughout the libxl driver. Remove some
repetitive code by adding a helper function to perform these tasks.
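A hypothetical shape of such a helper (virHookCall and
VIR_HOOK_DRIVER_LIBXL are real; the helper name and exact error
handling are illustrative):

  static int
  libxlDomainHookRun(libxlDriverPrivate *driver,
                     virDomainDef *def,
                     int op, int subop)
  {
      g_autofree char *xml = virDomainDefFormat(def, driver->xmlopt, 0);

      if (!xml)
          return -1;

      /* virHookCall() returns -1 if the hook script fails */
      return virHookCall(VIR_HOOK_DRIVER_LIBXL, def->name, op, subop,
                         NULL, xml, NULL);
  }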
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Introduce libxlDomainStartPerform as part of decomposing libxlDomainStart.
Perform all operations that are part of starting a domain. On error the
domain is destroyed from libxl's perspective, but the operations performed
in libxlDomainStartPrepare must be unwound by libxlDomainStart.
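An illustrative sketch of the resulting error contract (function names
from this series; signatures simplified for the sketch):

  /* simplified prototypes for the sketch */
  int libxlDomainStartPrepare(void *driver, void *vm); /* hostdevs, netdevs, ... */
  int libxlDomainStartPerform(void *driver, void *vm); /* libxl domain creation */

  int
  libxlDomainStart(void *driver, void *vm)
  {
      if (libxlDomainStartPrepare(driver, vm) < 0)
          return -1;

      if (libxlDomainStartPerform(driver, vm) < 0) {
          /* Perform already destroyed the libxl domain; only the work
           * done by Prepare must be unwound here. */
          return -1;
      }
      return 0;
  }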
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Introduce libxlDomainStartPrepare as part of decomposing libxlDomainStart.
Perform all preparatory operations such as hostdevs, network devs, etc.
Also ensure all such operations are properly unwound on error.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Move network device cleanup code from libxlDomainCleanup to a helper
function for use in a subsequent patch.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The logic to check for the existence of a managed save image and use it to
start the VM can be moved to libxlDomainStartNew. libxlDomainStart has
become unwieldy and this is a small step to make it more manageable.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Wrong flag use could have user-visible implications. Mention the fix.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The 'storageMigration' flag is supposed to be true if storage migration
is requested, which is based on VIR_MIGRATE_NON_SHARED_DISK or
VIR_MIGRATE_NON_SHARED_INC flags. The assignment to the variable used
QEMU_MONITOR_MIGRATE_NON_SHARED_INC (0x04) instead of
VIR_MIGRATE_NON_SHARED_INC (0x80), which caused libvirtd to skip the
actual copy of the data.
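A minimal illustration of the fix (flag values from the public and
monitor headers; this is a sketch of the mistake, not the verbatim
diff):

  /* buggy: tests the monitor-internal flag value (0x04) against the
   * public API flags */
  storageMigration = flags & QEMU_MONITOR_MIGRATE_NON_SHARED_INC;

  /* fixed: tests the public API flags (0x40 | 0x80) */
  storageMigration = flags & (VIR_MIGRATE_NON_SHARED_DISK |
                              VIR_MIGRATE_NON_SHARED_INC);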
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1978526
Fixes: da69f4b208
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Libvirt started emitting two threshold events, once with the index and
once without it, when the index isn't registered. Document this caveat.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Remember whether the user passed an explicit index when registering the
event so that we can avoid the top-level event when it isn't needed.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
When users register the threshold event for the top-level image with an
explicit index (e.g. vda[3]), they are clearly expecting the index in the
event.
This flag will help avoid emitting the second event without the
index when the client clearly requested one with the index.
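For illustration, registering via the public API with an explicit
index (virDomainSetBlockThreshold is the real entry point; the device
string and threshold value are examples):

  /* Request the event for the image at index 3 of vda; the delivered
   * event should then carry "vda[3]", with no duplicate plain "vda"
   * event. */
  if (virDomainSetBlockThreshold(dom, "vda[3]",
                                 100ULL * 1024 * 1024, 0) < 0)
      return -1;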
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
When qos is set or deleted, we have to check whether the port is an ovs
managed port. If it is, call the virNetDevOpenvswitchInterfaceSetQos
function when qos is set, and call the
virNetDevOpenvswitchInterfaceClearQos function when the interface is to
be destroyed.
Signed-off-by: Jinsheng Zhang <zhangjl02@inspur.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Return 0 directly if the port is ovs managed. When the ovs port is set
to noqueue, qos configuration on this port will not work.
Signed-off-by: Jinsheng Zhang <zhangjl02@inspur.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Introduce qos setting and cleaning methods. Use ovs commands to set qos
parameters on a specific interface of a qemu virtual machine.
When an ovs port is created, we add 'ifname' to external-ids. When
setting qos on an ovs port, query its qos and queue. If found, change
qos on the queried queue and qos; otherwise create a new queue and qos.
When cleaning qos, query and clean queues and qos in the ovs table
records by 'ifname' and 'vmid'.
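A hedged sketch of the lookup step (ovs-vsctl and the libvirt command
helpers are real; the interface name and surrounding logic are
illustrative, not the verbatim patch):

  /* look up an existing qos record tagged with this interface name */
  g_autofree char *out = NULL;
  g_autoptr(virCommand) cmd = virCommandNewArgList("ovs-vsctl",
                                                   "find", "qos",
                                                   "external-ids:ifname=vnet0",
                                                   NULL);

  virCommandSetOutputBuffer(cmd, &out);
  if (virCommandRun(cmd, NULL) < 0)
      return -1;

  /* non-empty output: update the queried qos/queue records;
   * empty output: create a new queue and qos and attach them to the
   * port */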
Signed-off-by: Jinsheng Zhang <zhangjl02@inspur.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Tell whether a port definition is an ovs managed virtual port.
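A hypothetical sketch of such a predicate
(virDomainNetGetActualVirtPortProfile and the profile type constant
are real; the helper name and exact check are illustrative):

  static bool
  netDefIsOvsPort(const virDomainNetDef *net)
  {
      const virNetDevVPortProfile *vport =
          virDomainNetGetActualVirtPortProfile(net);

      /* an ovs managed port carries an openvswitch virtualport profile */
      return vport &&
             vport->virtPortType == VIR_NETDEV_VPORT_PROFILE_OPENVSWITCH;
  }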
Signed-off-by: Jinsheng Zhang <zhangjl02@inspur.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
When seeing a guest with a sound device, and no audio backend, we
automatically add an audio backend XML element based on the historical
QEMU driver behaviour. Unfortunately when we live migrate back to an
old libvirt, it may not understand the audio driver type we configured.
We thus need to strip the default audio backend when migrating.
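For illustration, the kind of autofilled element that needs stripping
(a hedged example; the backend type actually chosen depends on the
guest's graphics configuration):

  <audio id='1' type='spice'/>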
Fixes: https://gitlab.com/libvirt/libvirt/-/issues/179
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
There might be a misunderstanding [1] about when libvirt permits domain
redefinition and whether it's a valid case at all.
1. b973d7c4b4/plugins/modules/virt.py (L533)
Signed-off-by: Roman Bolshakov <r.bolshakov@yadro.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
It all started as a simple bug: trying to move domain memory
between NUMA nodes (e.g. via virsh numatune) did not work. I've
traced the problem to qemuProcessHook() because that's where we
decide whether to rely on CGroups or use numactl APIs to satisfy
<numatune/>. The problem was that virCgroupControllerAvailable()
was telling us that the cpuset controller was unavailable. This was
CGroupsV2, and pretty weird, because CGroupsV2 definitely does
support the cpuset controller and I had it mounted in a standard
way. What I found out (with Pavel's help) was that
virCgroupNewSelf() was looking into the following path to detect
supported controllers:
/sys/fs/cgroup/system.slice/cgroup.controllers
However, if there is no other VM running, then the system.slice
only has the 'memory' and 'pids' controllers. Therefore, we saw
'cpuset' as not available. The fix is to look at the topmost
path, which has the full set of controllers:
/sys/fs/cgroup/cgroup.controllers
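A minimal sketch of the corrected detection (virFileReadAll is the
real helper; the buffer size and surrounding logic are illustrative):

  g_autofree char *contents = NULL;

  /* read the top-level file, which lists every controller the kernel
   * offers, e.g. "cpuset cpu io memory hugetlb pids" */
  if (virFileReadAll("/sys/fs/cgroup/cgroup.controllers",
                     1024, &contents) < 0)
      return -1;

  /* then check for "cpuset" in 'contents' */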
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1976690
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>