All 'script' blocks run with 'set -e', so a single failing command
means we won't collect some of the logs. Given the nature of the
original job's failure, some of the log sources might not be
available, which is fine; however, the GitLab after_script section
must not finish prematurely.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
It could be quite confusing to look at the job log artifacts and
find an empty coredump log in there; IOW it doesn't really give much
confidence that the reporting mechanism actually works.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
It's a directory, so -d should be used with 'test'.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Both log filters and log outputs expect string values; however, augeas
apparently requires an extra level of quotes on top of the ones we
pass via the shell (see comment [1]) to work properly, otherwise augeas
ignores the value and returns 0.
Without this fix we don't set libvirt's log level to debug, we don't
set up logging to a file, and hence we don't include the logs in the CI
artifacts when the test suite fails.
[1] https://github.com/hercules-team/augeas/issues/301#issuecomment-143699880
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
It was missing from the set. While at it, order the daemon set
alphabetically.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
We document that a commit fixing an issue tracked in GitLab
should put just "Fixes: #NNN" into its commit message. But when
viewing git log, having the full URL, which is directly clickable, is
more developer friendly, and GitLab is capable of handling both.
Therefore, document that users should put the full URL, just like
when fixing a bug tracked on other sites.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
On normal VM startup, we open a file descriptor
for the vsock device in qemuProcessPrepareHost.
However, when doing domxml-to-native, no file descriptors are opened.
Only pass the fd if it's not -1, to make domxml-to-native work.
https://bugzilla.redhat.com/show_bug.cgi?id=1777212
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
When libvirtd is restarted during an active outgoing migration (or
snapshot, save, or dump, which are internally implemented as migration)
it wants to cancel the migration. But due to a mistake in commit
v8.7.0-57-g2d7b22b561 the qemuMigrationSrcCancel function is called with
wait == true, which leads to an instant crash by dereferencing a NULL
pointer stored in priv->job.current.
When canceling migration to a file (snapshot, save, dump), we don't need
to wait until it is really canceled as no migration capabilities or
parameters need to be restored.
On the other hand, we need to wait when canceling an outgoing migration,
and since we don't have virDomainJobData at this point, we have to
temporarily restore the migration job to make sure we can process
MIGRATION events from QEMU.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
In my commit v8.7.0-57-g2d7b22b561 I attempted to make
qemuMigrationSrcCancel synchronous, but failed. When we are canceling
migration after some kind of error which is detected in
qemuMigrationSrcWaitForCompletion, jobData->status will be set to
VIR_DOMAIN_JOB_STATUS_FAILED regardless of the QEMU state. So instead of
relying on the translated jobData->status in qemuMigrationSrcIsCanceled,
we need to check the migration status we get from the QEMU MIGRATION
event.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
systemd in hybrid mode uses v1 hierarchies for controllers and v2 for
process tracking.
The LXC code uses virCgroupAddMachineProcess() to move processes into
the appropriate cgroup by manipulating cgroupfs directly. (Note that
libvirt also supports talking to systemd directly via the
org.freedesktop.machine1 API.)
If this path is taken, libvirt/lxc must convince systemd that the
processes really belong to the new cgroup, i.e. the tracking v2
hierarchy must undergo the migration too.
The current check would evaluate the v2 backend as unavailable in hybrid
mode (because there are no available controllers). Simplify the
condition and consider a mounted cgroup2 filesystem sufficient to touch
the v2 hierarchy.
This consequently creates an issue with binding the v2 mount. In hybrid
mode the v2 filesystem may be mounted on top of the v1 filesystem. By
reversing the order in which backends are mounted in virCgroupBindMount
this problem is circumvented.
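For illustration, the simplified availability check boils down to "is a
cgroup2 filesystem mounted at all". A minimal sketch follows; the mount
point and helper name are illustrative, not libvirt's actual code (in
hybrid mode systemd usually mounts cgroup2 at /sys/fs/cgroup/unified):

  #include <stdbool.h>
  #include <stdio.h>
  #include <sys/statfs.h>
  #include <linux/magic.h>   /* CGROUP2_SUPER_MAGIC */

  /* Illustrative helper: report whether a cgroup2 filesystem is mounted
   * at the given path, even if no v2 controllers are available there
   * (which is exactly the hybrid-mode situation). */
  static bool
  cgroupV2Mounted(const char *path)
  {
      struct statfs fs;

      if (statfs(path, &fs) < 0)
          return false;

      return fs.f_type == CGROUP2_SUPER_MAGIC;
  }

  int
  main(void)
  {
      printf("cgroup2 mounted: %d\n",
             cgroupV2Mounted("/sys/fs/cgroup/unified"));
      return 0;
  }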
Resolves: https://gitlab.com/libvirt/libvirt/-/issues/182
Signed-off-by: Eric van Blokland <mail@ericvanblokland.nl>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
A recent merge request from Weblate adding a new file fails syntax-check
because it adds a new language at the end of LINGUAS, instead of sorting
it alphabetically. Rather than trying to work around it, drop this
pointless rule.
Reverts: 8d160b7979
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
The removal of the special internal flag for '-netdev' validation now
allows us to use the same virCommand object for validation against the
schema.
Pass it into the validator instead of re-parsing and re-generating
everything.
This improves the runtime of qemuxml2argvtest by ~25% on my box.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
As advertised in the previous commit, the QEMU_SCHED_CORE_VCPUS case
is now implemented for the hotplug case. The implementation is very
similar to the cold boot case, except here we fork off for every
vCPU (because the implementation is done in
qemuProcessSetupVcpu(), which is also the function that's called
from the hotplug code). But that's okay because our hotplug APIs
allow hotplugging one device at a time.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2074559
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
For the QEMU_SCHED_CORE_VCPUS case, the vCPU threads should all be
placed into one scheduling group, but not the emulator or any of its
threads. Therefore, as soon as the vCPU TIDs are detected, fork off a
child which then creates a separate scheduling group and adds all
vCPU threads into it.
Please note that this commit only handles the cold boot case. Hotplug
is going to be implemented in the next commit.
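The idea can be sketched with raw prctl() calls. This is only an
illustrative sketch with fallback constants for older headers; the real
code goes through libvirt's fork/process helpers:

  #include <sys/prctl.h>
  #include <sys/types.h>
  #include <sys/wait.h>
  #include <unistd.h>
  #include <stddef.h>

  #ifndef PR_SCHED_CORE
  # define PR_SCHED_CORE            62
  # define PR_SCHED_CORE_CREATE      1
  # define PR_SCHED_CORE_SHARE_TO    2
  #endif

  /* Illustrative: place the given vCPU TIDs into one new scheduling
   * group without changing the caller's (emulator's) own cookie. */
  int
  groupVcpus(const pid_t *tids, size_t ntids)
  {
      pid_t child = fork();
      int status;

      if (child < 0)
          return -1;

      if (child == 0) {
          /* Child: create a new unique cookie for itself... */
          if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE,
                    0, 0 /* PIDTYPE_PID */, 0) < 0)
              _exit(1);

          /* ...and push it onto every vCPU thread. */
          for (size_t i = 0; i < ntids; i++) {
              if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_SHARE_TO,
                        tids[i], 0 /* PIDTYPE_PID */, 0) < 0)
                  _exit(1);
          }
          _exit(0);
      }

      if (waitpid(child, &status, 0) < 0)
          return -1;
      return WIFEXITED(status) && WEXITSTATUS(status) == 0 ? 0 : -1;
  }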
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
For the QEMU_SCHED_CORE_FULL case, all helper processes should be
placed into the same scheduling group as the QEMU process they
serve. It may happen, though, that a helper process is started
before QEMU (cold start of a domain). But we have the dummy
process running from which the QEMU process will inherit the
scheduling group, so we can use the dummy process PID as an
argument to virCommandSetRunAmong().
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
For QEMU_SCHED_CORE_EMULATOR or QEMU_SCHED_CORE_FULL, the QEMU
process (and its vCPU threads) should be placed into its own
scheduling group. Since we have the dummy process running for
exactly this purpose, use its PID as an argument to
virCommandSetRunAmong().
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
The aim of this helper function is to spawn a child process in
which a new scheduling group is created. This dummy process will
then be used to distribute the scheduling group from (e.g. when
starting helper processes or QEMU itself). The process is not
needed for the QEMU_SCHED_CORE_NONE case (obviously), nor for the
QEMU_SCHED_CORE_VCPUS case (because in that case a slightly
different child will be forked off).
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Ideally, we would just pick the best default and users wouldn't
have to intervene at all. But in some cases it may be handy to
not bother with SCHED_CORE at all or place helper processes into
the same group as QEMU. Introduce a knob in qemu.conf to allow
users to control this behaviour.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
There are two modes of core scheduling that are handy wrt
virCommand:
1) create a new trusted group when executing a virCommand,
2) place a freshly executed virCommand into the trusted group of
another process.
Therefore, implement these two operations as new APIs:
virCommandSetRunAlone() and virCommandSetRunAmong(),
respectively.
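A hedged usage sketch; the exact signatures are assumptions based on
this series (a virCommand plus, for the "among" case, the PID whose
trusted group the child should join):

  /* Sketch only -- assumed signatures, not a verified example. */
  g_autoptr(virCommand) cmd = virCommandNew(binary);

  if (isolate)
      virCommandSetRunAlone(cmd);           /* new trusted group for the child */
  else
      virCommandSetRunAmong(cmd, peerPid);  /* join peerPid's trusted group */

  if (virCommandRun(cmd, NULL) < 0)
      return -1;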
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Since its 5.14 release the Linux kernel allows userspace to
define trusted groups of processes/threads that can run on
sibling Hyper-Threads (HT) at the same time. This is to mitigate
side channel attacks like L1TF or MDS. If there are no tasks to
fully utilize all HTs, then a HT will idle instead of running a
task from another (un-)trusted group.
At a low level, this is implemented by cookies (effectively a UL
value): processes in the same trusted group share the same cookie
and the cookie is unique to the group. There are four basic
operations:
1) PR_SCHED_CORE_GET -- get the cookie of a given PID,
2) PR_SCHED_CORE_CREATE -- create a new unique cookie for a PID,
3) PR_SCHED_CORE_SHARE_TO -- push the cookie of the caller onto
another PID,
4) PR_SCHED_CORE_SHARE_FROM -- pull the cookie of another PID into
the caller.
Since the system where the code is built can be different from the
one where the code is run, let's provide declarations of some
values ourselves. It's not unusual for distros to ship older
linux-headers than the actual kernel.
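A minimal sketch of both the fallback declarations and the low-level
calls, assuming the values documented in prctl(2) for Linux >= 5.14
(PIDTYPE_PID is spelled out as 0 because it is not always exposed by
userspace headers):

  #include <stdio.h>
  #include <sys/prctl.h>

  /* Fallback declarations for builds against older linux-headers. */
  #ifndef PR_SCHED_CORE
  # define PR_SCHED_CORE              62
  # define PR_SCHED_CORE_GET           0
  # define PR_SCHED_CORE_CREATE        1
  # define PR_SCHED_CORE_SHARE_TO      2
  # define PR_SCHED_CORE_SHARE_FROM    3
  #endif

  int
  main(void)
  {
      unsigned long long cookie = 0;

      /* 2) create a new unique cookie for the calling process */
      if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE,
                0, 0 /* PIDTYPE_PID */, 0) < 0)
          perror("PR_SCHED_CORE_CREATE");

      /* 1) read the cookie back; 3) and 4) take another PID instead of 0 */
      if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_GET,
                0, 0 /* PIDTYPE_PID */, (unsigned long)&cookie) < 0)
          perror("PR_SCHED_CORE_GET");

      printf("cookie: 0x%llx\n", cookie);
      return 0;
  }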
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
There are a couple of scenarios where we need to reflect a MAC change
done in the guest:
1) domain restore from a file (here, we don't store the updated MAC
in the save file and thus on restore create the macvtap with
the original MAC),
2) reconnecting to a running domain (here, the guest might have
changed the MAC while we were not running),
3) migration (here, the guest might change the MAC address but we
fail to respond to it).
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
When restoring a domain from a save image, we need to query QEMU
for some runtime information that is not stored in the status XML, or
even if it is, it's not parsed (e.g. virtio-mem actual size, or
soon rx-filters for macvtaps).
During migration, this is done in qemuMigrationDstFinishFresh(),
or in the case of a newly started domain in qemuProcessStart(). Except,
the way the code is written, when restoring from a save
image (which is effectively a migration), the state is never
refreshed, because qemuProcessStart() sees an incoming migration, so
it does not refresh the state, thinking it'll be done in the
finish phase. But restoring from a save image has no finish
phase. Therefore, refresh the state explicitly after the domain
has been restored but before the vCPUs are resumed.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
We are not updating the domain XML to the new MAC address, merely
setting the host side of the macvtap. We don't need a MODIFY job for
that; QUERY is just fine.
This allows us to process the event should it occur during
migration.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Parts of the code that respond to the NIC_RX_FILTER_CHANGED
event are going to be re-used. Separate them into a function
(qemuDomainSyncRxFilter()) and move the code into qemu_domain.c
so that it can be re-used from other places in the driver.
There's one slight change though: instead of passing the device alias
from the just-received event to qemuMonitorQueryRxFilter(), I've
switched to using the alias stored in our domain definition. But
these two are guaranteed to be equal; virDomainDefFindDevice()
made sure of that, if nothing else.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
There's no need to call virNetDevRxFilterFree() explicitly when the
corresponding variables can be declared as
g_autoptr(virNetDevRxFilter).
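For illustration, this is GLib's g_autoptr() pattern; a minimal sketch
with a hypothetical stand-in type (the real change merely declares the
existing virNetDevRxFilter cleanup function the same way):

  #include <glib.h>

  /* Hypothetical type standing in for virNetDevRxFilter. */
  typedef struct {
      char *name;
  } DemoRxFilter;

  static void
  demoRxFilterFree(DemoRxFilter *f)
  {
      if (!f)
          return;
      g_free(f->name);
      g_free(f);
  }

  /* Teach g_autoptr() which free function to call for this type. */
  G_DEFINE_AUTOPTR_CLEANUP_FUNC(DemoRxFilter, demoRxFilterFree);

  int
  main(void)
  {
      g_autoptr(DemoRxFilter) filter = g_new0(DemoRxFilter, 1);

      filter->name = g_strdup("vnet0");
      /* No explicit free: demoRxFilterFree() runs automatically when
       * 'filter' goes out of scope, on every return path. */
      return 0;
  }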
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The test driver has xmlns support for overriding objects' default
state. Demo it by pausing a VM.
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
With the introduction of the `libvirt` sub-directory into the cgroup
topology, some of the cgroup configuration was moved into that
sub-directory together with the VM processes.
LXC uses virCgroupNewSelf() in the container process to detect cgroups
in order to report various data from cgroups inside the container.
We need to properly detect the new `libvirt` sub-directory here,
otherwise LXC will report incorrect data.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The `legacy` mode is also valid, so we need to take it into account as
well.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Due to the setup of the modular daemon service files, reverting to
non-socket-activated daemons could never have worked. The reason is that
masking the socket files prevents starting the daemons, since they
require (as in Requires= rather than Wants= in the service file) the
sockets. On top of that it creates issues with some libvirt-guests
setups and needlessly increases our support matrix.
Nothing prevents users from modifying their setup in a way that will
still work without socket activation, but supporting such a setup only
creates a burden on our part.
This technically reverts most of commit 59d30adacd, except the change
made to the libvirtd manpage, since the monolithic daemon still supports
the traditional mode of starting even on systemd.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
This patch adds a new worker, qemuDomainGetStatsVm, which reports the
stats returned by "query-stats" via qemuMonitorQueryStats for the VM
target.
Signed-off-by: Amneesh Singh <natto@weirdnatto.in>
This patch adds the stats queried by qemuMonitorQueryStats for vCPUs
and adds them according to their QOM device path.
Signed-off-by: Amneesh Singh <natto@weirdnatto.in>
This patch adds a hashtable for storing the stats schema and a function
to refresh it by querying "query-stats-schemas" using
qemuMonitorQueryStatsSchema.
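As a rough illustration of the data structure (all names below are
hypothetical, not the actual monitor code), a GHashTable owning both
the stat names and the schema entries could look like:

  #include <glib.h>

  /* Hypothetical stand-in for one "query-stats-schemas" entry. */
  typedef struct {
      char *name;
      char *unit;
  } DemoStatsSchemaEntry;

  static void
  demoEntryFree(gpointer data)
  {
      DemoStatsSchemaEntry *e = data;

      g_free(e->name);
      g_free(e->unit);
      g_free(e);
  }

  int
  main(void)
  {
      /* Keys are stat names, values are owned entries; both are freed
       * automatically when the table is destroyed or refreshed. */
      GHashTable *schema = g_hash_table_new_full(g_str_hash, g_str_equal,
                                                 g_free, demoEntryFree);
      DemoStatsSchemaEntry *e = g_new0(DemoStatsSchemaEntry, 1);

      e->name = g_strdup("example-stat");
      e->unit = g_strdup("bytes");
      g_hash_table_insert(schema, g_strdup(e->name), e);

      g_hash_table_unref(schema);
      return 0;
  }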
Signed-off-by: Amneesh Singh <natto@weirdnatto.in>
Commit 5c17a7ba41 introduced a new feature (ibrs) but did not update
existing cputestdata.
Signed-off-by: Tim Wiederhake <twiederh@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
As qemu becomes more modularized, it is important for libvirt to advertise
availability of the modularized functionality through capabilities. This
change adds channel devices to domain capabilities, allowing clients such
as virt-install to avoid using spicevmc channel devices when not supported
by the target qemu.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The error message doesn't really convey the information that 3D
acceleration works only with the 'virtio' model; similarly, the same
error would be reported if qemu doesn't support acceleration, which is
hard to debug.
Split and clarify the errors.
Noticed in https://gitlab.com/libvirt/libvirt/-/issues/388
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Use g_autofree in capabilities.c for some pointers still using manual cleanup,
and remove unnecessary cleanup.
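For illustration, g_autofree frees the string automatically when it
goes out of scope, so explicit g_free() calls and cleanup labels can go
away (a minimal sketch, not the actual capabilities.c code):

  #include <glib.h>
  #include <stdio.h>

  int
  main(void)
  {
      /* Freed automatically on every return path once it leaves scope. */
      g_autofree char *path = g_strdup_printf("/sys/devices/system/node/node%d", 0);

      printf("%s\n", path);
      return 0;   /* no g_free(path) and no 'cleanup:' label needed */
  }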
Signed-off-by: Jiang Jiacheng <jiangjiacheng@huawei.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Change strings to use g_autofree.
Signed-off-by: Maxim Kostin <ttxinee@outlook.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Users can play all sorts of games with mount points. For
instance, they can unmount and remount a hugetlbfs and only
after that attempt to hotplug memory.
This has an unfortunate consequence, though. During memory
hotplug, when qemuProcessBuildDestroyMemoryPaths() is called, the
path is created with a very restrictive mode (0700) because under
the hood g_mkdir_with_parents(path, 0700) is called.
Therefore, create the driver-generic portion of the path
separately.
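The problem and the fix can be sketched like this (paths and modes are
illustrative only): creating the whole path in one call gives every
missing parent the restrictive mode, so the generic prefix has to be
created first with a sane mode:

  #include <glib.h>

  int
  main(void)
  {
      const char *base = "/tmp/demo/hugepages2M/libvirt/qemu";        /* illustrative */
      const char *dom  = "/tmp/demo/hugepages2M/libvirt/qemu/1-vm";   /* illustrative */

      /* Problematic pattern: one call creates *all* missing parents with
       * mode 0700, locking other users out of the shared prefix:
       *     g_mkdir_with_parents(dom, 0700);
       */

      /* Fix: create the generic, shared portion with a permissive mode
       * first, then the per-domain directory with the restrictive one. */
      if (g_mkdir_with_parents(base, 0777) < 0)
          return 1;
      if (g_mkdir_with_parents(dom, 0700) < 0)
          return 1;

      return 0;
  }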
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2134009
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>