Now that nothing uses this capability, it can be retired.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
All supported QEMUs have this capability. Stop detecting it.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Introduced in QEMU's commit v2.12.0-rc0~48^2~25, the
qom-list-properties command is always available in all QEMU
versions we support (currently 4.2.0). Therefore, we can assume
the capability is always set and thus doesn't need to be checked
for.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Now that nothing uses this capability, it can be retired.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
All supported QEMUs have this capability. Stop detecting it.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Introduced in QEMU's commit v2.6.0-rc0~74^2~6, the
DUMP_COMPLETED event is always available in all QEMU versions we
support (currently 4.2.0). Therefore, we can assume the
capability is always set and thus doesn't need to be checked for.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Now that nothing uses this capability, it can be retired.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
All supported QEMUs have this capability. Stop detecting it.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Historically, before sending any guest agent command we would
send the 'guest-sync' command to make the guest agent reset its
internal state and flush any partially read command (JSON). This
was because there was no event emitted when the agent
(dis-)connected.
But now that we have the event we can execute the sync command
just once - the first time after we've connected. Should the
agent disconnect in the middle of reading a command and then
connect back again, we would get the event, disconnect and
reconnect ourselves, and the sync command would be executed
again.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Introduced in QEMU's commit v2.1.0-rc0~18^2~2, the
VSERPORT_CHANGE event is always available in all QEMU versions
we support (currently 4.2.0). Therefore, we can assume the
capability is always set and thus doesn't need to be checked for.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Now that nothing uses this capability, it can be retired.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
All supported QEMUs have this capability. Stop detecting it.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Introduced in QEMU's commit v3.0.0-rc0~124^2~1, the
set-numa-node command is always available in all QEMU versions
we support (currently 4.2.0). Therefore, we can assume the
capability is always set and thus doesn't need to be checked for.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The qemuDomainQueryWakeupSuspendSupport() function does not
change the state of the domain as it just runs the
'query-current-machine' QMP command. Therefore, there's no need
for it to acquire a MODIFY job; a QUERY job is perfectly okay.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
There was an attempt to document the retvals for
qemuDomainQueryWakeupSuspendSupport(). However, it's misleading
because in reality the function can return nothing but 0 or -1,
while the comment implies a retval of 1 too.
Since the set of possible return values complies with our
unwritten rule (0 for success, -1 for error), there's no real
value in having the comment and as such it can be dropped.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Historically, we had no idea whether the qemu-ga inside the
guest was running or not, or whether it had crashed in the
middle of reading a command. That's why we issued guest-sync
prior to any intended command, to make the agent flush any
partially read JSON and reset its state machine.
But with the VSERPORT_CHANGE event we know when the guest agent
(dis-)connects and thus can issue the sync command just once per
'connection'. Whether the agent is synced is tracked in the
agent->inSync member, which used to be set to true upon
successful sync. But after the rework in v8.0.0-rc1~361 that
line is gone, effectively leaving us with the historic approach.
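A minimal sketch of the intended sync-once logic (the helper name
and exact control flow are illustrative, not the actual code):

  /* Issue guest-sync only on the first command after (re)connecting;
   * the VSERPORT_CHANGE disconnect handler is assumed to reset
   * inSync back to false. */
  if (!agent->inSync) {
      if (qemuAgentSync(agent) < 0)   /* illustrative helper name */
          return -1;
      agent->inSync = true;   /* the assignment lost in the rework */
  }
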
Fixes: cad84fd51eaac5e3bfdf441f9986e1f2639a0828
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Now that nothing uses this capability, it can be retired.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
All supported QEMUs have this capability. Stop detecting it.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Introduced in QEMU's commit v2.12.0-rc0~148^2~4, the .align
attribute of memory-backend-file is always available in all QEMU
versions we support (currently 4.2.0). Therefore, we can assume
the capability is always set and thus doesn't need to be checked
for.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Now that nothing uses this capability, it can be retired.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
All supported QEMUs have this capability. Stop detecting it.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Introduced in QEMU's commit v2.11.0-rc0~95^2~9, the .discard
attribute of memory-backend-file is always available in all QEMU
versions we support (currently 4.2.0). Therefore, we can assume
the capability is always set and thus doesn't need to be checked
for.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Now that nothing uses this capability, it can be retired.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
All supported QEMUs have this capability. Stop detecting it.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Introduced in QEMU's commit v2.1.0-rc0~41^2~26 (Linux only), and
later in v3.1.0-rc0~71^2~10 for all POSIX systems, the
memory-backend-file backend is present in all QEMU versions we
support (currently 4.2.0). Therefore, we can assume the
capability is always set and thus doesn't need to be checked for.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Now that nothing uses this capability, it can be retired.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
All supported QEMUs have this capability. Stop detecting it.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Introduced in QEMU's commit v2.1.0-rc0~41^2~104, the
memory-backend-ram backend is present in all QEMU versions we
support (currently 4.2.0). Therefore, we can assume the
capability is always set and thus doesn't need to be checked for.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The g_slist_free_full() function is perfectly capable of
handling NULL (in which case it's a no-op), therefore there's no
need to check passed pointers for NULL. We have such checks
though in a couple of places. Drop them.
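A standalone illustration of the GLib behaviour this relies on
(not libvirt code):

  #include <glib.h>

  int main(void)
  {
      GSList *list = NULL;   /* empty list */

      /* g_slist_free_full() is a no-op when given NULL, so no
       * 'if (list)' guard is needed around it. */
      g_slist_free_full(list, g_free);

      return 0;
  }
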
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
GLib can internally convert only UNIX timestamps up to
9999-12-31T23:59:59 (253402300799). Validate that the user
doesn't pass a bigger value, as otherwise we trigger an
assertion failure:
(process:1183396): GLib-CRITICAL **: 14:25:00.906: g_date_time_format: assertion 'datetime != NULL' failed
Additionally adjust the schema to allow bigger values as we use
'unsigned long long' to parse the value.
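A standalone sketch of the guard (the constant name is ours, not
libvirt's):

  #include <glib.h>
  #include <stdio.h>

  /* 9999-12-31T23:59:59 UTC expressed as a UNIX timestamp. */
  #define TS_MAX 253402300799ULL

  int main(void)
  {
      unsigned long long value = 253402300800ULL; /* one second too far */

      if (value > TS_MAX) {
          fprintf(stderr, "timestamp %llu exceeds GLib's limit\n", value);
          return 1;
      }

      g_autoptr(GDateTime) dt = g_date_time_new_from_unix_utc(value);
      g_autofree char *str = g_date_time_format(dt, "%Y-%m-%dT%H:%M:%S");
      printf("%s\n", str);
      return 0;
  }
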
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2128993
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
In the rare case when virHashAddEntry() fails we would just leak
the structure we wanted to add to the hash table.
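A sketch of the fixed ownership pattern (the item type and free
function are hypothetical; only virHashAddEntry() is the real
API):

  /* On failure the entry is not consumed by the table, so the caller
   * still owns 'item' and must free it; on success the table owns it. */
  if (virHashAddEntry(table, item->name, item) < 0) {
      itemFree(item);    /* hypothetical free function */
      return -1;
  }
  item = NULL;           /* ownership transferred to the hash table */
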
Fixes: e89acdbc3bbada2f3c1a591278bc975ddee2d5a9
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The callers store only an 'unsigned int' in the field. Convert it to the
proper type including parser/formatter.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Adjust the parser and add missing switch cases to make the
compiler happy.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Convert the field, adjust the XML parser to use virXMLPropEnum and add
the VIR_DOMAIN_TIMER_TICKPOLICY_LAST enum case to all appropriate
'switch' statements.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The libvirt version is stored in an 'unsigned int'; use the
proper XPath query function for the type and remove the
temporary variable.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The aim of qemuDomainGetPreservedMounts() is to get a list of
filesystems mounted under /dev and optionally generate a path for
each one where they are moved temporarily when building the
namespace. And if the given domain is also running, it looks into
the domain's mount table rather than the host one. But when it
does look at the domain's private mount table, it finds /dev
mounted twice: the first time by udev, the second time the tmpfs
mounted by us.
Now, later in the function there's a "sorting" algorithm that
tries to reduce the number of mount points needing preservation
by identifying nested mount points. And if we keep the second
occurrence of /dev on the list, well, after the "sorting" we are
left with nothing but "/dev" because all other mount points are
nested.
Fixes: 46b03819ae8d833b11c2aaccb2c2a0361727f51b
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The aim of qemuDomainGetPreservedMounts() is to get a list of
filesystems mounted under /dev and optionally generate a path for
each one where they are moved temporarily when building the
namespace. And the function tries to be a bit clever about it.
For instance, if the /dev/shm mount point exists, there's no need
to consider /dev/shm/a nor /dev/shm/b, as preserving just the
'top level' /dev/shm gives the same result. To achieve this, the
function iterates over the list of filesystems as returned by
virFileGetMountSubtree() and removes the nested ones. However, it
does so in a somewhat clumsy way: plain VIR_DELETE_ELEMENT() is
used without freeing the string itself. Therefore, if all three
aforementioned example paths appeared on the list, the /dev/shm/a
and /dev/shm/b strings would be leaked.
And when I think about it more, there's no real need to shrink
the array down (realloc()). It's going to be free()-d when
returning from the function. Switch to
VIR_DELETE_ELEMENT_INPLACE() then.
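A minimal sketch of the changed pattern (variable names
simplified; VIR_FREE and VIR_DELETE_ELEMENT_INPLACE are the real
libvirt macros):

  /* Free the nested path before dropping it from the list and skip
   * the realloc(): the array itself is freed when the function
   * returns. */
  VIR_FREE(mounts[j]);
  VIR_DELETE_ELEMENT_INPLACE(mounts, j, nmounts);
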
Fixes: cdd9205dfffa3aaed935446a41f0d2dd1357c268
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Use virTristateSwitchFromBool to fill in the default if the user
didn't request it explicitly.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
libvirt-guests has an After= dependency on all the sockets and
that is enough. With the extra Before= in the service file,
systemd postpones the start of the socket-activated service (when
libvirt-guests is trying to connect to the socket) until after
libvirt-guests is stopped, effectively making `systemctl stop
libvirt-guests` deadlock. The reason for that is that all stop
jobs are scheduled before any start job. Removing the redundant
Before= specification fixes this behaviour.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
On normal VM startup, we open a file descriptor
for the vsock device in qemuProcessPrepareHost.
However, when doing domxml-to-native, no file descriptors are
opened.
Only pass the fd if it's not -1, to make domxml-to-native work.
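A sketch of the kind of guard involved (the call site and field
names are illustrative; virCommandPassFD() is the real helper):

  /* Only hand the vhost fd over to the QEMU command when one was
   * actually opened; domxml-to-native leaves it at -1. */
  if (vsockPriv->vhostfd != -1)
      virCommandPassFD(cmd, vsockPriv->vhostfd,
                       VIR_COMMAND_PASS_FD_CLOSE_PARENT);
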
https://bugzilla.redhat.com/show_bug.cgi?id=1777212
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
When libvirtd is restarted during an active outgoing migration
(or snapshot, save, or dump, which are internally implemented as
migration) it wants to cancel the migration. But by a mistake in
commit v8.7.0-57-g2d7b22b561 the qemuMigrationSrcCancel function
is called with wait == true, which leads to an instant crash by
dereferencing a NULL pointer stored in priv->job.current.
When canceling migration to file (snapshot, save, dump), we don't need
to wait until it is really canceled as no migration capabilities or
parameters need to be restored.
On the other hand we need to wait when canceling outgoing migration and
since we don't have virDomainJobData at this point, we have to
temporarily restore the migration job to make sure we can process
MIGRATION events from QEMU.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
In my commit v8.7.0-57-g2d7b22b561 I attempted to make
qemuMigrationSrcCancel synchronous, but failed. When we are
canceling migration after some kind of error which is detected
in qemuMigrationSrcWaitForCompletion, jobData->status will be set
to VIR_DOMAIN_JOB_STATUS_FAILED regardless of the QEMU state. So
instead of relying on the translated jobData->status in
qemuMigrationSrcIsCanceled we need to check the migration status
we get from the QEMU MIGRATION event.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
As advertised in the previous commit, the QEMU_SCHED_CORE_VCPUS
case is implemented for the hotplug case. The implementation is
very similar to the cold boot case, except here we fork off for
every vCPU (because the implementation is done in
qemuProcessSetupVcpu(), which is also the function that's called
from the hotplug code). But that's okay because our hotplug APIs
allow hotplugging one device at a time.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2074559
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
For the QEMU_SCHED_CORE_VCPUS case, all the vCPU threads should
be placed into one scheduling group, but not the emulator or any
of its threads. Therefore, as soon as vCPU TIDs are detected,
fork off a child which then creates a separate scheduling group
and adds all vCPU threads into it.
Please note, this commit only handles the cold boot case. Hotplug
is going to be implemented in the next commit.
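A standalone sketch of the underlying kernel interface (plain
prctl(2) usage on Linux >= 5.14, not libvirt's actual helpers):

  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>
  #include <sys/prctl.h>
  #include <sys/wait.h>

  /* Fallback definitions matching linux/prctl.h for older headers. */
  #ifndef PR_SCHED_CORE
  # define PR_SCHED_CORE           62
  # define PR_SCHED_CORE_CREATE     1
  # define PR_SCHED_CORE_SHARE_TO   2
  # define PR_SCHED_CORE_SHARE_FROM 3
  #endif

  int main(void)
  {
      pid_t vcpuTid = getpid();   /* stand-in for a detected vCPU TID */
      pid_t child = fork();

      if (child == 0) {
          /* Child: create a brand new core scheduling group (cookie)
           * for itself only (pid_type 0 == PIDTYPE_PID)... */
          if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE, 0, 0, 0) < 0) {
              perror("PR_SCHED_CORE_CREATE");
              _exit(EXIT_FAILURE);
          }
          /* ...and push that cookie onto the target thread. */
          if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_SHARE_TO,
                    vcpuTid, 0, 0) < 0) {
              perror("PR_SCHED_CORE_SHARE_TO");
              _exit(EXIT_FAILURE);
          }
          _exit(EXIT_SUCCESS);
      }

      waitpid(child, NULL, 0);
      return 0;
  }
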
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
For the QEMU_SCHED_CORE_FULL case, all helper processes should be
placed into the same scheduling group as the QEMU process they
serve. It may happen, though, that a helper process is started
before QEMU (cold start of a domain). But we have the dummy
process running from which the QEMU process will inherit the
scheduling group, so we can use the dummy process PID as an
argument to virCommandSetRunAmong().
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
For QEMU_SCHED_CORE_EMULATOR or QEMU_SCHED_CORE_FULL the QEMU
process (and its vCPU threads) should be placed into its own
scheduling group. Since we have the dummy process running for
exactly this purpose, use its PID as an argument to
virCommandSetRunAmong().
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
The aim of this helper function is to spawn a child process in
which a new scheduling group is created. This dummy process will
then be used to distribute the scheduling group from (e.g. when
starting helper processes or QEMU itself). The process is not
needed for the QEMU_SCHED_CORE_NONE case (obviously), nor for the
QEMU_SCHED_CORE_VCPUS case (because in that case a slightly
different child will be forked off).
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Ideally, we would just pick the best default and users wouldn't
have to intervene at all. But in some cases it may be handy not
to bother with SCHED_CORE at all or to place helper processes
into the same group as QEMU. Introduce a knob in qemu.conf to
allow users to control this behaviour.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>