This allows users to restrict memory nodes without setting any
specific memory policy; that is where the 'restrictive' mode is useful.
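For illustration, restricting a domain's memory to host NUMA nodes 0-1
without imposing a strict policy could then look roughly like this in
the domain XML (the nodeset value here is arbitrary):

<numatune>
  <memory mode='restrictive' nodeset='0-1'/>
</numatune>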
Signed-off-by: Luyao Zhong <luyao.zhong@intel.com>
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The command line argument is called --handshakefd (check out
lxc_controller.c:main()). But the command line builder passes only
--handshake. This works because there is no other argument
sharing the prefix.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
I've encountered the following bug, but only on Gentoo with
systemd and CGroupsV2. I started an LXC container successfully,
but destroying it reported the following error:
error: Failed to destroy domain 'amd64'
error: internal error: failed to get cgroup backend for 'pathOfController'
Debugging showed that the CGroup hierarchy is full of surprises:
/sys/fs/cgroup/machine.slice/machine-lxc\x2d861\x2damd64.scope/
└── libvirt
├── dev-hugepages.mount
├── dev-mqueue.mount
├── init.scope
├── sys-fs-fuse-connections.mount
├── sys-kernel-config.mount
├── sys-kernel-debug.mount
├── sys-kernel-tracing.mount
├── system.slice
│ ├── console-getty.service
│ ├── dbus.service
│ ├── system-getty.slice
│ ├── system-modprobe.slice
│ ├── systemd-journald.service
│ ├── systemd-logind.service
│ └── tmp.mount
└── user.slice
For comparison, here's the same container on recent Rawhide:
/sys/fs/cgroup/machine.slice/machine-lxc\x2d13550\x2damd64.scope/
└── libvirt
Anyway, those nested directories should not be a problem, because
virCgroupKillRecursiveInternal() removes them recursively, right?
Sort of. The function really does remove nested directories, but
it assumes that every directory has the same controller as the
rest. Just take a look at virCgroupV2KillRecursive() - it gets
the 'Any' controller (the first one it finds in ".scope") and
then passes it to virCgroupKillRecursiveInternal().
This assumption is not true though. The controllers found in
".scope" are the following:
cpuset cpu io memory pids
while "libvirt" has fewer:
cpuset cpu io memory
Up until now this was not a problem, because of how we order
controllers internally - "cpu" is the first and thus picking the
"Any" controller returns just that. But the rest of the
directories have no controllers; their "cgroup.controllers" file
is just empty.
What fixes the bug is dropping the @controller argument from
virCgroupKillRecursiveInternal() and letting each iteration
pick its own controller.
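To illustrate the idea with a standalone sketch (this is not the
actual libvirt code): on cgroup v2, each directory can be processed
on its own, and killing the tasks listed in cgroup.procs does not
depend on any particular controller:

#include <dirent.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

/* Standalone sketch, not libvirt's implementation: kill everything
 * under a cgroup v2 directory.  No controller picked at the top
 * level is reused; each directory is handled on its own. */
static int
kill_cgroup_recursive(const char *path)
{
    char child[4096];
    FILE *procs;
    DIR *dir;
    struct dirent *ent;

    /* Kill the processes listed in this directory's cgroup.procs. */
    snprintf(child, sizeof(child), "%s/cgroup.procs", path);
    if ((procs = fopen(child, "r"))) {
        int pid;
        while (fscanf(procs, "%d", &pid) == 1)
            kill((pid_t)pid, SIGKILL);
        fclose(procs);
    }

    /* Recurse into child directories; each child decides for itself
     * what (if any) controllers it has. */
    if (!(dir = opendir(path)))
        return -1;
    while ((ent = readdir(dir))) {
        if (ent->d_type != DT_DIR ||
            strcmp(ent->d_name, ".") == 0 ||
            strcmp(ent->d_name, "..") == 0)
            continue;
        snprintf(child, sizeof(child), "%s/%s", path, ent->d_name);
        kill_cgroup_recursive(child);
    }
    closedir(dir);
    return 0;
}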
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The VIR_CGROUP_BACKEND_CALL() macro gets a backend for a
controller and calls the corresponding callback in it. If either
is NULL then an error message is printed out. However, the error
message contains only the intended callback function and not the
controller or the backend found.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Currently, only a subset of virCgroupKillRecursiveInternal()
arguments is printed into debug logs. Print all of them.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
virDomainGetDiskErrors uses the weird semantics where the caller
first queries the number of elements and then passes a
pre-allocated array.
The cleanup section erroneously used the 'count' variable to free
the allocated elements, but 'count' can be '-1' when the API
returns failure, thus attempting to free beyond the end of the
array.
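For reference, a hedged sketch of safe caller-side usage (connection
and domain setup omitted); the key point is that a possibly negative
return value is never reused as a loop bound for cleanup:

#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

/* Sketch of safe usage of the public API; not the code that was fixed. */
static void
print_disk_errors(virDomainPtr dom)
{
    virDomainDiskErrorPtr errors;
    int ndisks;
    int rc;
    int i;

    /* First call: ask how many array elements are needed. */
    if ((ndisks = virDomainGetDiskErrors(dom, NULL, 0, 0)) <= 0)
        return;

    if (!(errors = calloc(ndisks, sizeof(*errors))))
        return;

    /* Second call fills the pre-allocated array, but may return -1. */
    rc = virDomainGetDiskErrors(dom, errors, ndisks, 0);

    for (i = 0; i < rc; i++)
        printf("%s: error code %d\n", errors[i].disk, errors[i].error);

    /* Free only what was actually filled; with rc kept as a signed
     * int, a -1 return keeps both loops from running at all. */
    for (i = 0; i < rc; i++)
        free(errors[i].disk);
    free(errors);
}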
Resolves: https://gitlab.com/libvirt/libvirt/-/issues/155
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Use the appropriate type for the variable and refactor the XML parser to
parse it correctly using virXMLPropEnum.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Tim Wiederhake <twiederh@redhat.com>
Use the appropriate type for the variable and refactor the XML parser to
parse it correctly using virXMLPropEnum.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Tim Wiederhake <twiederh@redhat.com>
Commit 95f8e3237e, which introduced XML schema validation for
snapshot XMLs, always enforced validation for the XML generated
by 'virsh snapshot-create-as' on the basis that it is
libvirt-generated and thus valid.
This unfortunately isn't true as users can influence certain bits of the
XML such as the disk image path which must be a full path. Thus if a
user tries to invoke virsh as:
$ virsh snapshot-create-as upstream --diskspec vda,file=relative.qcow2
error: XML document failed to validate against schema: Unable to validate doc against /path/to/domainsnapshot.rng
Extra element disks in interleave
Element domainsnapshot failed to validate content
They get a rather useless error from the libxml2 RNG validator.
With this fix applied, we get to the XML parser in libvirtd which has a
more reasonable error:
$ virsh snapshot-create-as upstream --diskspec vda,file=relative.qcow2
error: XML error: disk snapshot image path 'relative.qcow2' must be absolute
Users can instead force validation of the XML generated by 'virsh
snapshot-create-as' by passing the '--validate' flag.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Convenience function to return the value of an enum XML attribute.
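As a rough, hypothetical illustration of the pattern such a helper
follows (this is not the actual libvirt signature), the caller
supplies a string-to-enum converter and the helper distinguishes a
missing attribute from an invalid one:

#include <libxml/tree.h>
#include <libxml/xmlmemory.h>

/* Hypothetical illustration of the pattern, not libvirt's API. */
static int
xml_prop_enum(xmlNodePtr node, const char *name,
              int (*str_to_enum)(const char *), int *result)
{
    xmlChar *val = xmlGetProp(node, BAD_CAST name);
    int tmp;

    if (!val)
        return 0;            /* attribute not present */

    tmp = str_to_enum((const char *)val);
    xmlFree(val);

    if (tmp < 0)
        return -1;           /* present but not a recognized value */

    *result = tmp;
    return 1;                /* parsed successfully */
}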
Signed-off-by: Tim Wiederhake <twiederh@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Convenience function to return the value of an unsigned integer XML attribute.
Signed-off-by: Tim Wiederhake <twiederh@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Convenience function to return the value of an integer XML attribute.
Signed-off-by: Tim Wiederhake <twiederh@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Convenience function to return the value of an on / off XML attribute.
Signed-off-by: Tim Wiederhake <twiederh@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Convenience function to return the value of a yes / no XML attribute.
Signed-off-by: Tim Wiederhake <twiederh@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Switch @xml and @pctxt to g_autofree and get rid of the "error" and
"cleanup" labels.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Tim Wiederhake <twiederh@redhat.com>
Move the reporting of the parsing error to the error path of the
parser, as other code paths already report their own errors.
Additionally, prefer printing the 'url' as the document name, if
provided, instead of "[inline data]", as that usually gives at
least a hint of which kind of XML is being parsed.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Tim Wiederhake <twiederh@redhat.com>
Remove the "block" formatting of function declarations and use uniform
spacing.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Tim Wiederhake <twiederh@redhat.com>
Note that the comment for virStoragePoolSourceDevice::part_separator was wrong.
Signed-off-by: Tim Wiederhake <twiederh@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Note that the comment for virStorageAdapterFCHost::managed was wrong.
Signed-off-by: Tim Wiederhake <twiederh@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Note that the wrong "VIR_TRISTATE_*_ABSENT" was used in qemuDomainChangeNet.
Signed-off-by: Tim Wiederhake <twiederh@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Notable changes:
* cross-building container images are smaller because they
no longer include the native compilers;
* ccache is enabled for clang builds.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Coverity reported that this function can return NULL, so it should be
handled properly.
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Currently virMediatedDeviceGetIOMMUGroupDev() looks up the iommu group
number and uses that to construct a path to the iommu group device.
virMediatedDeviceGetIOMMUGroupNum() then uses that device path and takes
the basename to get the group number. That's unnecessary extra string
manipulation for *GroupNum(). Reverse the implementations and make
*GroupDev() call *GroupNum().
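Roughly, the reversed relationship could look like the following
simplified sketch (not libvirt's actual implementation; it assumes
the usual sysfs layout where the device's 'iommu_group' symlink
points at the group directory and the group device node lives under
/dev/vfio/):

#include <glib.h>
#include <stdlib.h>

/* Simplified sketch: the group number comes from the basename of
 * the device's iommu_group symlink ... */
static int
mdev_get_iommu_group_num(const char *sysfs_path)
{
    g_autofree char *group_link = g_strdup_printf("%s/iommu_group", sysfs_path);
    g_autofree char *target = g_file_read_link(group_link, NULL);
    g_autofree char *base = NULL;

    if (!target)
        return -1;
    base = g_path_get_basename(target);
    return atoi(base);
}

/* ... and the group device path is then built from that number,
 * instead of deriving the number from the path. */
static char *
mdev_get_iommu_group_dev(const char *sysfs_path)
{
    int num = mdev_get_iommu_group_num(sysfs_path);

    if (num < 0)
        return NULL;
    return g_strdup_printf("/dev/vfio/%d", num);
}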
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Laine Stump <laine@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
virMediatedDeviceGetSysfsPath() (via g_strdup_printf()) is guaranteed to
return a non-NULL value, so remove the unnecessary checks for NULL.
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Laine Stump <laine@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Fixes: 8fe30b2167
Signed-off-by: Tim Wiederhake <twiederh@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Over several years of debugging reports related to VM shutdown,
destruction, and cleanup, I've found that logging all events
received from libxl and logging entry to libxlDomainCleanup has
proven useful. Add these debug messages upstream to aid in future
debugging.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
When querying guest info via virDomainGetGuestInfo(), the
'guest-get-disks' agent command is called. It may report the disk
serial number, which we parse but never report nor use for
anything else.
As it turns out, it may help management applications find the
matching disk in their internals.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-By: Tomáš Golembiovský <tgolembi@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
When running on a systemd host the cgroup itself is removed by
machined, so when we reach this code the directory no longer
exists. If libvirtd was running the whole time between starting
and destroying the VM, the detection is skipped because we still
have both FDs in memory. But if libvirtd was restarted and no
operation requiring cgroup devices was executed, the FDs would be
0 and libvirt would try to detect them using the cgroup
directory. This results in the following errors being reported:
libvirtd[955]: unable to open '/sys/fs/cgroup/machine.slice/machine-qemu\x2d1\x2dguest.scope/': No such file or directory
libvirtd[955]: Failed to remove cgroup for guest
When running on a non-systemd host, where we handle cgroups
manually, this would not happen.
When destroying a VM it is not necessary to detect the BPF prog
and map because the following code only closes the FDs without
doing anything else. We could run code that would try to detach
the BPF prog from the cgroup, but that is not necessary either.
If the cgroup is removed and there is no other FD open to the
prog, the kernel will eventually clean up the prog and map.
Reported-by: Eric Farman <farman@linux.ibm.com>
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Tested-by: Eric Farman <farman@linux.ibm.com>
Reviewed-by: Eric Farman <farman@linux.ibm.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
When the nested cgroup was introduced it did not properly free
the file descriptors for the BPF prog and map. With nested
cgroups we create the BPF bits in the nested cgroup instead of
the VM root cgroup.
This would leak the FDs, which are the last reference to the prog
and map, so the kernel would not remove those resources either;
they would only be released once the libvirtd process exits.
Fixes: 184245f53b
Reported-by: Eric Farman <farman@linux.ibm.com>
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Tested-by: Eric Farman <farman@linux.ibm.com>
Reviewed-by: Eric Farman <farman@linux.ibm.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
After all devices are enumerated, the enumeration thread calls
nodeDeviceUpdateMediatedDevices() to refresh the state of
mediated devices. This means that 'mdevctl' will be executed. But
it may be missing on some systems (e.g. mine), in which case we
should just skip the update of mdevs instead of failing the whole
device enumeration.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
To speed up nodedev driver initialization, the device enumeration
is done in a separate thread. Once finished, the thread sets a
boolean variable that allows public APIs to be called (instead of
waiting for the thread to finish).
However, if there's an error in the device enumeration thread
then control jumps to the 'error' label and the boolean is never
set. This means that any virNodeDev*() API is stuck forever. Mark
the initialization as complete (the thread is quitting anyway)
and let the APIs proceed.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Although I have not experienced this in real life, there is a
possible race condition when creating a new device, getting its
XML or parent, or listing its capabilities. If the nodedev driver
is still enumerating devices (in a separate thread) and one of
virNodeDeviceGetXMLDesc(), virNodeDeviceGetParent(),
virNodeDeviceNumOfCaps(), virNodeDeviceListCaps() or
virNodeDeviceCreate() is called, then it can lead to spurious
results because the device enumeration thread is removing devices
from or adding them to the internal list of devices (along with
their states).
Therefore, wait for things to settle down before proceeding with
any of the APIs.
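An illustrative sketch of the waiting side, using raw pthread
primitives rather than libvirt's internal wrappers (all names are
hypothetical):

#include <pthread.h>
#include <stdbool.h>

/* Hypothetical names; libvirt uses its own mutex/condition wrappers. */
static pthread_mutex_t driver_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t init_cond = PTHREAD_COND_INITIALIZER;
static bool init_done;

/* Called at the start of each affected public API: block until the
 * enumeration thread has announced that the device list is stable. */
static void
wait_for_init(void)
{
    pthread_mutex_lock(&driver_lock);
    while (!init_done)
        pthread_cond_wait(&init_cond, &driver_lock);
    pthread_mutex_unlock(&driver_lock);
}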
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
This is more of an academic dispute than a real bug, but the
following is taken from the pthread_cond_broadcast(3p) man page:
The pthread_cond_broadcast() or pthread_cond_signal() functions
may be called by a thread whether or not it currently owns the
mutex that threads calling pthread_cond_wait() or
pthread_cond_timedwait() have associated with the condition
variable during their waits; however, if predictable scheduling
behavior is required, then that mutex shall be locked by the
thread calling pthread_cond_broadcast() or
pthread_cond_signal().
Therefore, broadcast the initCond while the nodedev driver is
still locked.
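Continuing the hypothetical sketch from the waiting side above, the
signalling side then broadcasts while the mutex is still held:

/* Enumeration thread, on both the success and the error path:
 * set the flag and broadcast while still holding the mutex so
 * waiters are scheduled predictably. */
static void
mark_init_complete(void)
{
    pthread_mutex_lock(&driver_lock);
    init_done = true;
    pthread_cond_broadcast(&init_cond);   /* still holding the lock */
    pthread_mutex_unlock(&driver_lock);
}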
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>