Declare the function with G_GNUC_WARN_UNUSED_RESULT as we always want to
use the returned value.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Tim Wiederhake <twiederh@redhat.com>
In one of my previous patches I tried to postpone dropping
CAP_SETPCAP until the very end because it's needed for
capng_apply(). What I did not realize back then was that we might
not have the capability to begin with. For unknown reasons,
capng_apply() pollutes the logs only for CAPNG_SELECT_BOUNDS and not
for CAPNG_SELECT_CAPS.
The reproducer is really simple: run libvirtd as a regular user.
During its initialization, libvirtd will spawn some binaries
(dnsmasq, qemu-*, etc.) and while doing so it will try to drop
capabilities.
Anyway, let's call capng_apply(CAPNG_SELECT_BOUNDS) only if we
have CAP_SETPCAP (which is tracked in the need_setpcap variable).
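A minimal sketch of the guarded call (assuming the surrounding error
handling in virSetUIDGIDWithCaps(); not a verbatim copy of the patch):

  if (need_setpcap &&
      capng_apply(CAPNG_SELECT_BOUNDS) < 0) {
      virReportSystemError(errno, "%s",
                           _("cannot apply process capabilities"));
      return -1;
  }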
Fixes: 438b50dda8
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1924218
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
After all capabilities were set (except for CAP_SETGID,
CAP_SETUID and CAP_SETPCAP) and after the UID:GID was changed, we
drop the aforementioned capabilities (we couldn't drop them
before because we needed them for the UID:GID and capabilities
change). Therefore, there's a final capng_apply() call. However,
it is wrapped in one more layer of parentheses than needed.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If the function is called with maxlen equal to `INT_MAX`, adding
one will trigger a signed integer overflow.
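Illustration only (the affected function is not named here and the
actual fix may differ); the point is that the addition itself is
undefined behaviour and must be avoided, e.g. by widening first:

  #include <limits.h>
  #include <stddef.h>

  size_t grow(int maxlen)
  {
      /* return maxlen + 1;          <-- UB when maxlen == INT_MAX */
      return (size_t) maxlen + 1;    /* widen before adding instead */
  }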
Signed-off-by: Tim Wiederhake <twiederh@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
This appears to be a copy-paste mistake from the check directly above.
Signed-off-by: Tim Wiederhake <twiederh@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
On libvirtd startup, the list of priority worker threads is uninitialized
(`pool->prioWorkers` is NULL), and then "expanded" to zero (`prioWorkers`)
entries.
This causes `virThreadPoolExpand` to call `VIR_EXPAND_N` with a null pointer
and an increment of zero. The zero increment makes `virReallocN` not
actually allocate any memory and leave the pointer NULL, which eventually
causes `memset(NULL, 0, 0)` to be called in `virExpandN`.
`memset` is declared `__attribute__((__nonnull__(1)))`, which triggers the
following warning when libvirt is compiled with address sanitizing enabled:
$ meson -Dbuildtype=debug -Db_lundef=false -Db_sanitize=address,undefined
build && ninja -C build
$ ./build/run build/src/libvirtd
src/util/viralloc.c:82:5: runtime error: null pointer passed as
argument 1, which is declared to never be null
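A standalone illustration (not libvirt code) of the undefined behaviour
being flagged; memset() carries a nonnull attribute on its first argument,
so even a zero-length call with a NULL pointer is invalid:

  #include <string.h>

  int main(void)
  {
      char *p = NULL;   /* corresponds to the never-allocated prioWorkers list */
      memset(p, 0, 0);  /* UBSan: "null pointer passed as argument 1 ..." */
      return 0;
  }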
Signed-off-by: Tim Wiederhake <twiederh@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
This was bothering someone, as the debug message looked like there was an
issue despite it being just a debug message. Change it to state what is
actually happening and why the name is being skipped.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
We have an example in the virDirRead() documentation on how to use
the function. In there, the directory variable is a plain DIR, but
that won't work anymore. Switch over to g_autoptr(DIR), which is
what we use now.
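For reference, a sketch of the updated usage pattern (the directory path
and error handling are illustrative, not the exact documentation example):

  g_autoptr(DIR) dir = NULL;
  struct dirent *ent;
  int rc;

  if (virDirOpen(&dir, "/path/to/dir") < 0)
      return -1;

  while ((rc = virDirRead(dir, &ent, "/path/to/dir")) > 0) {
      /* handle ent->d_name */
  }

  if (rc < 0)
      return -1;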
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Introduce QoS setting and cleaning methods. Use ovs commands to set QoS
parameters on a specific interface of a QEMU virtual machine.
When an OVS port is created, we add 'ifname' to its external-ids. When
setting QoS on an OVS port, query its qos and queue records. If found,
change the QoS on the queried queue and qos records, otherwise create a
new queue and qos. When cleaning QoS, query and clean the queue and qos
records in the OVS tables by 'ifname' and 'vmid'.
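As an illustration of the external-ids tagging described above, a minimal
sketch using libvirt's virCommand helpers (the literal ovs-vsctl binary and
the variable names are assumptions, and the QoS/queue handling follows the
same pattern with further ovs-vsctl arguments; this is not a verbatim copy
of the patch):

  g_autoptr(virCommand) cmd =
      virCommandNewArgList("ovs-vsctl", "set", "Interface", ifname, NULL);

  virCommandAddArgFormat(cmd, "external-ids:ifname=%s", ifname);

  if (virCommandRun(cmd, NULL) < 0)
      return -1;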
Signed-off-by: Jinsheng Zhang <zhangjl02@inspur.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
It all started as a simple bug: trying to move domain memory
between NUMA nodes (e.g. via virsh numatune) did not work. I've
traced the problem to qemuProcessHook() because that's where we
decide whether to rely on CGroups or use numactl APIs to satisfy
<numatune/>. The problem was that virCgroupControllerAvailable()
was telling us that the cpuset controller is unavailable. This was
CGroupsV2, which is pretty weird because CGroupsV2 definitely does
support the cpuset controller and I had it mounted in a standard
way. What I found out (with Pavel's help) was that
virCgroupNewSelf() was looking into the following path to detect
supported controllers:
/sys/fs/cgroup/system.slice/cgroup.controllers
However, if there's no other VM running then the system.slice
only has 'memory' and 'pids' controllers. Therefore, we saw
'cpuset' as not available. The fix is to look at the topmost
path, which has the full set of controllers:
/sys/fs/cgroup/cgroup.controllers
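For illustration (this is not the patch itself), the full controller list
can be read from the cgroup v2 root with libvirt's file helper:

  g_autofree char *controllers = NULL;

  if (virFileReadAll("/sys/fs/cgroup/cgroup.controllers",
                     1024, &controllers) < 0)
      return -1;

  /* controllers now holds e.g. "cpuset cpu io memory hugetlb pids" */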
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1976690
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
In 'virResctrlAllocUpdateMask', the mask is updated only if the 'previous
mask' is NULL. By default, the bitmask for a cache resource of a VM is
initialized with the 'default-resctrl-group' bitmask, so the 'previous
mask' is not NULL and the mask won't get updated if cachetune is configured
for the VM. This causes libvirt to use the same bitmask as the
'default-resctrl-group' for the VM's cache resource. This patch fixes the
issue.
Fixes: d8a354954a
Signed-off-by: Vinayak Kale <vkale@nvidia.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Bounding set capabilities were introduced in kernel commit
v2.6.25-rc1~912. I guess it is safe to assume that all Linux
hosts we run on have at least that version or newer.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
There are a few cases where we execute a virCommand with all caps
cleared (virCommandClearCaps()). For instance,
dnsmasqCapsRefreshInternal() does just that. This means that
after fork() and before exec(), virSetUIDGIDWithCaps() is
called. But since the caller did not want to change anything,
just drop capabilities, these are the values of its arguments:
virSetUIDGIDWithCaps (uid=-1, gid=-1, groups=0x0, ngroups=0,
capBits=0, clearExistingCaps=true)
This means that indeed all capabilities will be dropped,
including CAP_SETPCAP. But this capability controls whether
capabilities can be set, IOW whether capng_apply() succeeds.
There are two calls to capng_apply() in the function.
CAP_SETPCAP is dropped after the first call and thus the second
call (capng_apply(CAPNG_SELECT_BOUNDS)) fails.
The solution is to keep the capability for as long as needed and
drop it only at the very end (just like CAP_SETGID and
CAP_SETUID).
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1949388
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
When removing the check for the return value of VIR_EXPAND_N, this place
was incorrectly modified, causing a failure to start a VM with cputune
memorytune configured, with a useless error message:
error: Failed to start domain 'vm1'
error: An error occurred, but the cause is unknown
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1973094
Fixes: 7d2fd6ef01
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
If the given file is not found in $PATH then g_find_program_in_path()
returns NULL. However, g_canonicalize_filename() does not accept
NULL as input.
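A minimal sketch of the guarded pattern (the variable names are
illustrative, not the exact libvirt code):

  g_autofree char *path = g_find_program_in_path(cmd);

  if (!path)
      return NULL;   /* not found in $PATH, nothing to canonicalize */

  return g_canonicalize_filename(path, NULL);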
Fixes: 65c2901906
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
The new version allows passing a virBuffer to format the string into.
This will be helpful in solving a memory leak caused by wrong usage of
virCommandToString and also in tests where we need to add a newline
after the command in certain cases.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Connecting a tap device to an Open vSwitch is done by adding a "port"
to the switch with the ovs-vsctl "add-port" command. The port will
have the same name as the tap device, but it is a separate entity, and
can survive beyond the destruction of the tap device (although under
normal circumstances the port will be deleted around the same time the
tap device is deleted).
This makes it possible for a port of a particular name to already
exist at the time libvirt calls ovs-vsctl to add that port. The
original commit of Open vSwitch support (commit df81004632, libvirt
0.9.10, Feb. 2012) used the "--may-exist" option to the add-port
command to indicate that a port of the desired name might already
exist, and that it was okay to simply re-use this port (rather than
failing with an error message).
Then in commit 33445ce844 (libvirt 1.2.7, April 2014) the command
was changed to use "--if-exists del-port blah" instead of
"--may-exist". The reason given was that there was a bug in OVS where
a stale port would be unusable even though it still existed; the
workaround was to forcibly delete any existing port prior to adding
the new port (of the same name). This is the ovs-vsctl command still
in use by libvirt today.
It recently came up in the discussion of a bug concerning guest packet
loss during OpenStack upgrades (https://bugzilla.redhat.com/1963164)
that the bug in OVS that necessitated the del-port workaround was
fixed quite a long time ago (August 2015):
e21c6643a0
thus rendering the workaround in libvirt unnecessary. The assertion in
that discussion is that this workaround is now the cause of the packet
loss being experienced during OpenStack upgrades. I'm not convinced
this is the case, but it does appear that there is no reason to carry
this workaround in libvirt any longer, so this patch reverts the code
back to the original behavior (using "--may-exist" instead of
"--if-exists del-port").
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Let g_build_filename() decide which separator to use instead of
hardcoding it in g_strdup_printf().
Related issue: https://gitlab.com/libvirt/libvirt/-/issues/12
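An illustrative before/after (the variable names are made up):

  /* before: separator hardcoded */
  path = g_strdup_printf("%s/%s", dir, name);

  /* after: g_build_filename() picks the right separator */
  path = g_build_filename(dir, name, NULL);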
Signed-off-by: Luke Yue <lukedyue@gmail.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
Cloud-Hypervisor is a hypervisor that uses KVM virtualization. It
functions similarly to QEMU, and the libvirt Cloud-Hypervisor driver
uses a structure very similar to the libvirt QEMU driver.
The biggest difference from the libvirt perspective is that the
"monitor" socket is separated into two sockets: one that commands are
issued to and one that events are notified from. The current
implementation only uses the command socket (running over a REST API
with JSON-encoded data), with future changes to add support for the
event socket (to better handle shutdowns initiated from inside the VM).
This patch adds support for the following initial VM actions using the
Cloud-Hypervisor API:
* vm.create
* vm.delete
* vm.boot
* vm.shutdown
* vm.reboot
* vm.pause
* vm.resume
To use the Cloud-Hypervisor driver, the v15.0 release of
Cloud-Hypervisor must be installed.
Some additional notes:
* The curl handle is persistent but not useful to detect ch process
shutdown/crash (a future patch will address this shortcoming)
* On a 64-bit host Cloud-Hypervisor needs to support PVH and so can
emulate 32-bit mode but it isn't fully tested (a 64-bit kernel and
32-bit userspace is fine, a 32-bit kernel isn't validated)
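As an illustration of the command socket interaction, a minimal sketch of
issuing a command over the REST API with libcurl (the socket path handling
and the vm.boot endpoint URL are assumptions, not code from this patch):

  #include <curl/curl.h>

  int ch_vm_boot(const char *sockpath)
  {
      CURL *handle = curl_easy_init();
      CURLcode rc;

      if (!handle)
          return -1;

      curl_easy_setopt(handle, CURLOPT_UNIX_SOCKET_PATH, sockpath);
      curl_easy_setopt(handle, CURLOPT_URL, "http://localhost/api/v1/vm.boot");
      curl_easy_setopt(handle, CURLOPT_CUSTOMREQUEST, "PUT");

      rc = curl_easy_perform(handle);
      curl_easy_cleanup(handle);
      return rc == CURLE_OK ? 0 : -1;
  }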
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: William Douglas <william.douglas@intel.com>
libacl is Linux-only, so we don't need to explicitly check for
either the target platform or header availability, and we can
simply rely on cc.find_library() instead. The corresponding
preprocessor define is renamed to more accurately reflect the
nature of the check.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
There was a recent change in libxml2 that caused trouble for
us. To us, <metadata/> in domain or network XMLs is just an opaque
value where a management application can store whatever data it
sees fit. At the XML parser/formatter level, we just make a copy of
the element during parsing and then format it back. For
formatting we use xmlNodeDump() which allows the caller to specify
the level of indentation. Previously, the indentation was not
applied to the very first line, but as of v2.9.12-2-g85b1792e
libxml2 applies indentation also to the first line.
This does not work well with our virBuffer because as soon as we
call virBufferAsprintf() to append the <metadata/> element,
virBufferAsprintf() will apply another level of indentation.
Instead of version checking, let's skip any indentation added by
libxml2 before virBufferAsprintf() is called.
Note, the problem occurs only when telling xmlNodeDump() to use
indentation, i.e. when the level argument is not zero. Therefore,
virXMLNodeToString(), which also calls xmlNodeDump(), is safe as it
passes zero.
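A minimal sketch of the workaround (the libxml2 and virBuffer calls are
real APIs; the surrounding variables and error handling are illustrative,
not the actual patch):

  xmlBufferPtr xmlbuf = xmlBufferNew();
  const char *content;

  if (xmlNodeDump(xmlbuf, doc, node, level, 1) < 0)
      goto error;   /* cleanup of xmlbuf omitted in this sketch */

  content = (const char *) xmlBufferContent(xmlbuf);

  /* Skip the leading indentation which newer libxml2 also applies to
   * the first line; virBufferAsprintf() adds its own indentation. */
  while (*content == ' ')
      content++;

  virBufferAsprintf(buf, "%s\n", content);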
Tested-by: Bjoern Walk <bwalk@linux.ibm.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
I guess this is more of an academic problem, because if the
<metadata/> content was problematic we would have caught the
error during parsing. Anyway, as is, this function returns -1
without reporting any error. Fix it by reporting one.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
So far, we have two places where we format <metadata/> into XML:
domain and network. Both places share the same code. Move it into
a helper function and just call it from those places.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
None of them are currently needed to pass our upstream CI; most were
either for ancient clang versions or for silencing Coverity false
positives.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
They were added mostly randomly and we don't really want to keep working
around false positives.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
virNetDevOpenvswitchInterfaceGetMaster is declared twice in
src/util/virnetdevopenvswitch.h. Remove the second declaration.
Signed-off-by: Peng Liang <liangpeng10@huawei.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The basic use case of VIR_IDENTITY_AUTORESTORE() is in
conjunction with virIdentityElevateCurrent(). What happens is
that virIdentityElevateCurrent() gets the current identity (which
increases the refcounter of the thread-local virIdentity object) and
returns a pointer to it. Later, when the variable goes out of
scope, virIdentityRestoreHelper() is called, which calls
virIdentitySetCurrent() with the old identity. But this means
that the refcounter is increased again.
Therefore, we have to explicitly decrease the refcounter by
calling g_object_unref().
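A minimal sketch of the fix described above (the helper's exact signature
is an assumption; this is not a verbatim copy of the patch):

  void virIdentityRestoreHelper(virIdentity **identptr)
  {
      virIdentity *ident = *identptr;

      if (!ident)
          return;

      virIdentitySetCurrent(ident);
      /* virIdentitySetCurrent() takes its own reference, so drop the
       * one acquired by virIdentityElevateCurrent(). */
      g_object_unref(ident);
  }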
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
A secret can be marked with the "private" attribute. The intent was that
it is not possible for any libvirt client to read the secret
value; it would only be accessible from within libvirtd, e.g. the QEMU
driver can read the value to launch a guest.
With the modular daemons, the QEMU, storage and secret drivers are all
running in separate daemons. The QEMU and storage drivers thus appear to
be normal libvirt clients from the POV of the secret driver, and thus
they are not able to read a private secret. This is unhelpful.
With the previous patches that introduced a "system token" to the
identity object, we can now distinguish APIs invoked by libvirt daemons
from those invoked by client applications.
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
This is essentially a way to determine if the current identity
is that of another libvirt daemon.
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
When talking to the secret driver, the callers inside libvirt daemons
need to be able to run with elevated privileges that prove the API
calls are made by a libvirt daemon, not an end user application.
The virIdentityElevateCurrent method will take the current identity
and, if not already present, add the system token. The old current
identity is returned to the caller. With the VIR_IDENTITY_AUTORESTORE
annotation, the old current identity will be restored upon leaving
the codeblock scope.
  ... early work with regular privileges ...

  if (something needing elevated privs) {
      VIR_IDENTITY_AUTORESTORE virIdentity *oldident =
          virIdentityElevateCurrent();

      if (!oldident)
          return -1;

      ... do something with elevated privileges ...
  }

  ... later work with regular privileges ...
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
When creating the system identity, set the system token. The system
token is currently stored in a local path:
/var/run/libvirt/common/system.token
Obviously with only traditional UNIX DAC in effect, this is largely
security through obscurity, if the client is running at the same
privilege level as the daemon. It does, however, reliably distinguish
an unprivileged client from the system daemons.
With a MAC system like SELinux though, or possible use of containers,
access can be further restricted.
A possible future improvement for Linux would be to populate the
kernel keyring with a secret for libvirt daemons to share.
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>