This function doesn't have an overly verbose cleanup section as there
isn't any error code path. Unify it with the rest of the functions,
which will simplify adding a possible error path.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
This is similar to one of the previous patches.
When receiving a stream (on virStorageVolUpload() and the subsequent
virStreamSparseSendAll()) we may receive a hole. If the volume we
are saving the incoming data into is a regular file, we just
lseek() and ftruncate() to create the hole. But this won't work
if the file is a block device. In that case we must write
zeroes so that any subsequent reader reads nothing but zeroes
(just like they would from a hole in a regular file).
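Roughly, the idea looks like this (a standalone sketch; the helper name
and error handling are made up, this is not the actual patch):

#include <stdbool.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/* Hypothetical helper: create a "hole" of @length bytes at the current
 * offset of @fd.  For a regular file we seek forward and extend the
 * file; for a block device we overwrite the range with zeroes so any
 * reader sees zeroes there. */
static int
skipHole(int fd, long long length, bool isBlock)
{
    if (!isBlock) {
        off_t off = lseek(fd, length, SEEK_CUR);

        if (off < 0)
            return -1;
        return ftruncate(fd, off);
    }

    char zeroes[4096];
    memset(zeroes, 0, sizeof(zeroes));

    while (length > 0) {
        size_t want = length > (long long)sizeof(zeroes)
                      ? sizeof(zeroes) : (size_t)length;
        ssize_t done = write(fd, zeroes, want);

        if (done < 0)
            return -1;
        length -= done;
    }
    return 0;
}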
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1852528
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
This is very similar to the previous commit.
The virshStreamInData() callback is used by virStreamSparseSendAll()
to detect whether the file the data is read from is in a data or a hole
section. The SendAll() then sends the corresponding type of virStream
message to make the server create a hole or write actual data. But the
callback uses virFileInData() even for block devices, which results in
an error. Just like in the previous commit, emulate a DATA section
for block devices.
Partially resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1852528
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
When handling a sparse stream, a thread is spawned. This thread
runs a read() or write() loop (depending on which API is called; in
this case it's virStorageVolDownload() and thus the thread runs a
read() loop). The read() is handled in virFDStreamThreadDoRead(),
which is data/hole section aware, meaning it uses
virFileInData() to detect data and hole sections and sends
TYPE_DATA or TYPE_HOLE virStream messages accordingly.
However, virFileInData() does not work with block devices, simply
because block devices don't have data and hole sections. What we
can do, though, is mimic being always in a DATA section.
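For illustration, a standalone sketch of the emulation (the function
name and reporting convention are assumptions, not the real
virFDStreamThreadDoRead() code):

#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* Hypothetical "in data?" probe: for block devices, pretend we are in
 * one large DATA section stretching to the end of the device.  For
 * regular files the real hole detection (virFileInData()) would be
 * used instead. */
static int
streamInData(int fd, int *inData, long long *length)
{
    struct stat sb;
    off_t cur, end;

    if (fstat(fd, &sb) < 0)
        return -1;

    if (!S_ISBLK(sb.st_mode))
        return -1;   /* placeholder: defer to virFileInData() here */

    if ((cur = lseek(fd, 0, SEEK_CUR)) < 0 ||
        (end = lseek(fd, 0, SEEK_END)) < 0 ||
        lseek(fd, cur, SEEK_SET) < 0)
        return -1;

    *inData = 1;          /* always report a DATA section ... */
    *length = end - cur;  /* ... covering the rest of the device */
    return 0;
}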
Partially resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1852528
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
This callback is called when the server sends us STREAM_HOLE,
meaning there is no real data, only zeroes. For regular files
we would just lseek() beyond EOF and ftruncate() to create the
hole. But for block devices this won't work: not only can we not
seek beyond EOF (and ftruncate() will fail), this approach also
won't fill the device with zeroes. We have to do it manually.
Partially resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1852528
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
We can't use virFileInData() with block devices, but we can
emulate being in a data section all the time (vol-upload case).
Similarly, we can't just lseek() beyond EOF with block
devices to create a hole; we have to write zeroes instead
(vol-download case). But to decide, we need to know whether the FD we
are reading data from / writing data to is a block device. Store
this information in _virshStreamCallbackData.
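For illustration only, the structure could carry something along these
lines (fields other than the new flag are omitted/assumed):

#include <stdbool.h>

/* In addition to the FD, remember whether it refers to a block device
 * so the upload/download callbacks can pick the right strategy
 * (emulate DATA vs. write zeroes). */
struct _virshStreamCallbackData {
    int fd;          /* the local file or device being read/written */
    bool isBlock;    /* true if fd refers to a block device */
};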
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
These callbacks will need to know more than just the FD they are
working on. Pass the structure that is already passed to other stream
callbacks (e.g. virshStreamSource() or virshStreamSourceSkip())
instead of inventing a new one.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
For libvirt, a volume is just a binary blob and libvirt doesn't
interpret the data on volume upload/download. But as it turns out,
this unspoken assumption is not clear to our users. Document it
explicitly.
Suggested in: https://bugzilla.redhat.com/show_bug.cgi?id=1851023#c17
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
When netdev-add was QAPIfied it took us by surprise and we had to
scramble to fix the internals to format conformant monitor arguments.
Add a last-resort early warning system in case the same happens to
object-add or device_add. Hopefully qemu developers will notify us
sooner than that.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
We'll need to verify that a certain part of the qemu schema hasn't grown
new properties unexpectedly. Add a helper which matches an 'object' QMP
schema entry against a template and reports errors if the expected types
don't match or new entries were added.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Right now we're unconditionally adding RPATH information to the
installed binaries and libraries, but that's not always desired.
autotools seem to be smart enough to only include that information
when targeting a non-standard prefix, so most distro packages
don't actually contain it; moreover, both Debian and Fedora have
wiki pages encouraging packagers to avoid setting RPATH:
https://wiki.debian.org/RpathIssue
https://fedoraproject.org/wiki/RPath_Packaging_Draft
Implement RPATH logic that Does The Right Thing™ in the most
common cases, while still offering users the ability to override
the default behavior if they have specific needs.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The ABOUT-NLS symlink pointing to po/README.rst is a leftover
from when we were using autotools as the build system, and now
that we're using Meson we can drop it.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The timeout argument of guest-agent-timeout is optional but it did not
have a proper default value specified. Also update the virsh man page
accordingly.
Signed-off-by: Tomáš Golembiovský <tgolembi@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Having a README file called "README" is necessary when using
autotools, and for quite some time now we've been complying with
that requirement by having a symlink with that name pointing to
README.rst, where the actual contents live. Now that we've moved
to Meson, we can drop the symlink.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Having a ChangeLog file is necessary when using autotools, but
now that we've moved to Meson we are no longer required to keep
it around.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Some commands were improperly converted from the original POD file. Their
names were truncated after the first dash.
Fixes: ab06dd9db3
Signed-off-by: Tomáš Golembiovský <tgolembi@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
src/security/apparmor/meson.build builds this path dynamically
based on the value of sysconfdir, so we should do the same here
or the code and the filesystem might end up disagreeing.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
This variable is used in src/security/meson.build to decide
whether to install the AppArmor profiles, and at the moment
even when the user specifies -Dapparmor_profiles=true they
don't get installed.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
In the very distant past, we came across machines that had
non-contiguous node IDs. This made us error out when constructing
the capabilities XML. We resolved it by relying on the strange
behaviour of numa_node_to_cpus(), which returns a mask with all bits
set for a non-existent node. However, that is not the only case
in which it returns an all-ones mask: the same happens if the node
exists and has enough CPUs to fill the mask up (e.g. 128 CPUs).
The fix consists of using nodemask_isset(&numa_all_nodes, ..)
prior to calling numa_node_to_cpus() to determine whether the node
exists.
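A minimal sketch of the check, assuming libnuma's legacy nodemask API;
not the exact libvirt code:

#include <stdbool.h>

/* libnuma's legacy nodemask API (numa_all_nodes, nodemask_isset) */
#define NUMA_VERSION1_COMPATIBILITY 1
#include <numa.h>

/* Ask libnuma whether the node exists at all before requesting its CPU
 * mask, instead of inferring non-existence from an all-ones mask
 * returned by numa_node_to_cpus(). */
static bool
nodeExists(int node)
{
    return nodemask_isset(&numa_all_nodes, node);
}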
Fixes: 628c935747
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1860231
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
There was a report on libvirt-users [1] about the domxml to/from
native converter in the Xen driver not handling PCI addresses
without a domain specification. This patch improves parsing of PCI
addresses in the converter and allows PCI addresses with only
bb:ss.f. xl.cfg(5) also allows either the dddd:bb:ss.f or bb:ss.f
format. A test has been added to check the conversion from xl.cfg
to domXML.
[1] https://www.redhat.com/archives/libvirt-users/2020-August/msg00040.html
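Roughly, accepting both formats boils down to something like this (the
helper name is made up and the real converter is more thorough):

#include <stdio.h>

/* Accept either "dddd:bb:ss.f" or the shorter "bb:ss.f"; when the
 * domain part is omitted it defaults to 0. */
static int
parsePCIAddr(const char *str,
             unsigned int *domain, unsigned int *bus,
             unsigned int *slot, unsigned int *func)
{
    if (sscanf(str, "%x:%x:%x.%x", domain, bus, slot, func) == 4)
        return 0;

    *domain = 0;
    if (sscanf(str, "%x:%x.%x", bus, slot, func) == 3)
        return 0;

    return -1;
}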
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Update the events stap example because the event loop implementation
was replaced by a GLib-based one in commit 55fe8110.
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Han Han <hhan@redhat.com>
In libvirt 6.6 stopping guests with libvirt-guests.sh is broken.
As soon as there is more than one guest, `systemctl stop libvirt-guests`
fails and in the log we see:
libvirt-guests.sh[2455]: Running guests on default URI:
libvirt-guests.sh[2457]: /usr/lib/libvirt/libvirt-guests.sh: 120:
local: 2a49cb0f-1ff8-44b5-a61d-806b9e52dae2: bad variable name
libvirt-guests.sh[2462]: no running guests.
That is due to multiple guests becoming a list of UUIDs. Without
recognizing this as one single string, the assignment breaks when using
'local' (which was recently added in 6.3.0). This is because local is
defined as
local [option] [name[=value] ... | - ]
which makes the shell try to handle the rest of the string as
variable names. In the error above that string isn't a valid variable
name, triggering the issue that is seen.
This depends on the shell being used. POSIX does not specify 'local'
yet, and behaviour varies among common shells: it worked in bash and
bash-in-POSIX-mode, but dash in POSIX mode, for example, triggers the
issue. To resolve that, 'textify' (quote) all assignments that are
strings or can potentially become such lists (even if they do not use
the local qualifier).
Fixes: 08071ec0 "tools: variables clean-up in libvirt-guests script"
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
After previous cleanups, some labels in some functions have
nothing but a 'return' statement in them. Drop the labels and
replace the 'goto'-s with the respective return statements.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Again, instead of closing FDs explicitly, we can automatically
close them when they go out of their respective scopes.
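For illustration, a standalone equivalent of the pattern based on the
compiler's cleanup attribute (libvirt wraps this in a macro; the names
below are placeholders):

#include <fcntl.h>
#include <unistd.h>

/* Close the FD a variable refers to when the variable goes out of scope. */
static void
autoClose(int *fdptr)
{
    if (*fdptr >= 0)
        close(*fdptr);
}

#define AUTO_CLOSE __attribute__((cleanup(autoClose))) int

static int
readConfig(const char *path)
{
    AUTO_CLOSE fd = -1;

    if ((fd = open(path, O_RDONLY)) < 0)
        return -1;

    /* ... read from fd ... */

    return 0;   /* fd is closed automatically on every return path */
}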
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
A cleanup function can be declared for the virFDStreamMsg type so
that the structure doesn't have to be freed explicitly.
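A self-contained sketch of the GLib pattern this relies on; the struct
and free function below are placeholders, not the real virFDStreamMsg:

#include <glib.h>
#include <string.h>

typedef struct _DemoMsg {
    char *data;
    size_t len;
} DemoMsg;

static void
demoMsgFree(DemoMsg *msg)
{
    if (!msg)
        return;
    g_free(msg->data);
    g_free(msg);
}

/* Declaring the cleanup function once ... */
G_DEFINE_AUTOPTR_CLEANUP_FUNC(DemoMsg, demoMsgFree);

static void
example(void)
{
    /* ... lets every user rely on automatic freeing on scope exit. */
    g_autoptr(DemoMsg) msg = g_new0(DemoMsg, 1);
    msg->data = g_strdup("payload");
    msg->len = strlen(msg->data);
}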
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
All callers of virFDStreamMsgQueuePush() follow the same pattern:
they explicitly set the @msg they passed to NULL to avoid freeing it
later on. Well, the function can take the address of the pointer and
clear it for them.
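A toy illustration of the changed calling convention (using a GSList as
the queue; the names are made up):

#include <glib.h>

/* The push helper takes the address of the caller's pointer and clears
 * it once the queue owns the element, so callers no longer have to set
 * it to NULL themselves. */
static void
queuePush(GSList **queue, char **msg)
{
    *queue = g_slist_append(*queue, *msg);
    *msg = NULL;
}

static void
example(void)
{
    GSList *queue = NULL;
    char *msg = g_strdup("hello");

    queuePush(&queue, &msg);   /* msg is NULL afterwards */

    g_slist_free_full(queue, g_free);
}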
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
The buffer that is allocated in virFDStreamThreadDoRead() can be
automatically freed, or, if saved into the message structure, it
can be stolen.
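A short sketch of the auto-free/steal combination assumed here (GLib;
not the actual stream code):

#include <glib.h>

/* The buffer frees itself when it goes out of scope, unless ownership
 * is explicitly transferred to the caller with g_steal_pointer(). */
static char *
readChunk(gsize len, gboolean keep)
{
    g_autofree char *buf = g_new0(char, len);

    /* ... fill buf from the stream ... */

    if (keep)
        return g_steal_pointer(&buf);   /* caller now owns the buffer */

    return NULL;                        /* buf is freed automatically */
}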
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
So far, only ENOENT is ignored (to deal with kernels without
devmapper). However, as reported on the list, under certain
scenarios a different error can occur; for instance, when libvirt
is running inside a container which doesn't have permission to
talk to devmapper. If that is the case, then open() returns
-1 and sets errno=EPERM.
Assuming that multipath devices are a fairly narrow use case, and
using them in a restricted container is even narrower, the best
fix seems to be to ignore all open errors BUT produce a warning
on failure. To avoid flooding the logs with warnings on kernels
without devmapper, the level for that case is reduced to a plain
debug message.
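A standalone sketch of the new policy (plain stderr logging stands in
for libvirt's debug/warning macros; the function name is made up):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Ignore every failure to open the device-mapper control node, but log
 * it: quietly (debug) when the kernel simply lacks devmapper (ENOENT),
 * loudly (warning) for anything else, e.g. EPERM in a container. */
static int
openDMControl(const char *path)
{
    int fd = open(path, O_RDWR);

    if (fd < 0) {
        if (errno == ENOENT)
            fprintf(stderr, "debug: no device mapper support: %s\n",
                    strerror(errno));
        else
            fprintf(stderr, "warning: unable to open %s: %s\n",
                    path, strerror(errno));
        return -1;   /* callers treat this as "no multipath targets" */
    }

    return fd;
}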
Reported-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Reviewed-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When adding support for HMAT, in f0611fe883 I introduced a
check which aims to validate /domain/cpu/numa/interconnects. As a
part of that, there is a loop which checks whether all <latency/>
elements with a @cache attribute refer to an existing cache level.
For instance:
<cpu mode='host-model' check='partial'>
  <numa>
    <cell id='0' cpus='0-5' memory='512000' unit='KiB' discard='yes'>
      <cache level='1' associativity='direct' policy='writeback'>
        <size value='8' unit='KiB'/>
        <line value='5' unit='B'/>
      </cache>
    </cell>
    <interconnects>
      <latency initiator='0' target='0' cache='1' type='access' value='5'/>
      <bandwidth initiator='0' target='0' type='access' value='204800' unit='KiB'/>
    </interconnects>
  </numa>
</cpu>
This XML defines that accessing the L1 cache of node #0 from node #0
has a latency of 5 ns.
However, the loop (or rather the check in it) was not written
properly: it was always checking only the first cache of the target
node and never the rest. Therefore, the following example errors
out:
<cpu mode='host-model' check='partial'>
  <numa>
    <cell id='0' cpus='0-5' memory='512000' unit='KiB' discard='yes'>
      <cache level='3' associativity='direct' policy='writeback'>
        <size value='10' unit='KiB'/>
        <line value='8' unit='B'/>
      </cache>
      <cache level='1' associativity='direct' policy='writeback'>
        <size value='8' unit='KiB'/>
        <line value='5' unit='B'/>
      </cache>
    </cell>
    <interconnects>
      <latency initiator='0' target='0' cache='1' type='access' value='5'/>
      <bandwidth initiator='0' target='0' type='access' value='204800' unit='KiB'/>
    </interconnects>
  </numa>
</cpu>
This errors out even though it is a valid configuration. The L1
cache under node #0 is still present.
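For illustration, the corrected shape of the check (the types and field
names are placeholders for the real NUMA cell definition):

#include <stdbool.h>
#include <stddef.h>

typedef struct {
    unsigned int level;
    /* ... other cache attributes elided ... */
} DemoCache;

typedef struct {
    DemoCache *caches;
    size_t ncaches;
} DemoCell;

/* Walk *all* caches of the target cell instead of comparing only
 * against the first one. */
static bool
cacheLevelExists(const DemoCell *cell, unsigned int level)
{
    size_t i;

    for (i = 0; i < cell->ncaches; i++) {
        if (cell->caches[i].level == level)
            return true;
    }
    return false;
}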
Fixes: f0611fe883
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Laine Stump <laine@redhat.com>
Both parsing and formatting of NBD migration jobs are QEMU-specific and
since we're trying to create a hypervisor-agnostic module out of
qemu_domainjob.c, move the NBD XML handling bits to the qemu_domain
module instead. Additionally, move the respective NBD XML calls to
the 'parseJob'/'formatJob' callbacks of the
qemuDomainObjPrivateJobCallbacks structure.
Signed-off-by: Prathamesh Chavan <pc44800@gmail.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Functions `qemuDomainRemoveInactiveJob` and
`qemuDomainRemoveInactiveJobLocked` had their declarations misplaced in
`qemu_domainjob` and were moved to `qemu_domain`, where their definitions
reside.
Signed-off-by: Prathamesh Chavan <pc44800@gmail.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Most of our augeas files are generated during meson setup into the
build directory and we were running the augeas tests only for those
files. However, we have some other augeas and config files that are
not modified during meson setup and exist only in the source
directories. In order to run tests for these files we need to provide
paths to both the source and build directories.
Reported-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
This will be used later to specify different include directories for
the augparse binary when running the augeas tests.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Commit 862cf2ace4 modified the generator
to base edit links in the root of the repository but forgot to add the
'docs/' prefix to the code generating kbase articles, manpages and the
internals documentation.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
In case qemuDumpToFd() returns zero and the subsequent VIR_CLOSE(fd)
fails, we'd jump to the "cleanup" label with ret == 0, potentially
resulting in an unexpected success return value.
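A minimal standalone illustration of the bug class (the helper names
are not the real libvirt ones):

#include <unistd.h>

static int
doDump(int fd)
{
    /* stand-in for qemuDumpToFd(); pretend the dump succeeded */
    (void)fd;
    return 0;
}

static int
dumpAndClose(int fd)
{
    int ret = doDump(fd);

    if (close(fd) < 0)
        ret = -1;   /* the fix: a failed close must not be reported as success */

    return ret;
}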
Signed-off-by: Hao Wang <wanghao232@huawei.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
In one of my latest patches (v6.6.0~30) I was trying to remove
the libdevmapper use in favor of our own implementation. However, the
code did not take into account that device mapper may not be
compiled into the kernel (e.g. it may be a separate module that's not
loaded), in which case /proc/devices won't have the device-mapper
major number and thus virDevMapperGetTargets() and/or
virIsDevMapperDevice() fail.
However, such a failure is safe to ignore, because if device mapper
is missing then there can't be any multipath devices, and thus we
don't need to allow the dependencies in CGroups, nor create them in the
domain private namespace, etc.
Fixes: 2249455654
Reported-by: Andrea Bolognani <abologna@redhat.com>
Reported-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Tested-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
The device mapper major number is needed in virIsDevMapperDevice(), which
determines whether a given device is managed by device-mapper. This
number is obtained by parsing /proc/devices and then stored in a
global variable so that the file doesn't have to be parsed again.
However, as it turns out, this logic is flawed: the major number
is not static and can change, as it can be specified as a
parameter when loading the dm-mod module.
Unfortunately, I was not able to come up with a better solution, and
thus the /proc/devices file is now parsed every time we need
the device mapper major number.
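A hedged sketch of looking the number up on demand (standalone C, not
the libvirt implementation):

#include <stdio.h>
#include <string.h>

/* Scan /proc/devices for the "device-mapper" entry and return its major
 * number; the file is parsed on every call because the number can
 * change when dm-mod is reloaded with a different parameter. */
static int
getDeviceMapperMajor(unsigned int *major)
{
    FILE *fp = fopen("/proc/devices", "r");
    char line[256];
    int ret = -1;

    if (!fp)
        return -1;

    while (fgets(line, sizeof(line), fp)) {
        unsigned int maj;
        char name[64];

        if (sscanf(line, "%u %63s", &maj, name) == 2 &&
            strcmp(name, "device-mapper") == 0) {
            *major = maj;
            ret = 0;
            break;
        }
    }

    fclose(fp);
    return ret;
}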
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Tested-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>