As of 542899168c we taught libvirt to use UEFI for domains.
However, management applications may first want to query whether
libvirt supports it, and this is where the virConnectGetDomainCapabilities()
API comes in handy.
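A minimal sketch of such a query; the emulator path, arch, machine and
virt type below are placeholder values, not anything mandated by the API:

#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

static void
print_domain_caps(virConnectPtr conn)
{
    /* The returned XML describes, among other things, which loader
     * (firmware) setups libvirt can drive for this emulator. */
    char *caps = virConnectGetDomainCapabilities(conn,
                                                 "/usr/bin/qemu-system-x86_64",
                                                 "x86_64", "pc", "kvm", 0);
    if (caps) {
        printf("%s\n", caps);
        free(caps);
    }
}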
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
virStorageSourceInitChainElement initializes a new storage chain element
for use as a new disk source. If the new element doesn't contain a
driver name, copy it from the old source.
This fixes an issue where a disk would forget its driver after a snapshot.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1140984
Commit b606bbb4 broke reporting of errors when setting the guest time
via the guest agent fails: the return value is not checked and is later
overwritten by the return value of qemuMonitorRTCResetReinjection().
Fix this by checking the return value before resetting the RTC
reinjection.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1142294
For tuning the network, alternative device nodes for creating tap and
vhost devices can be specified via:
<backend tap='/dev/net/tun' vhost='/dev/net-vhost'/>
Instead of checking up front, in one big condition, whether the <driver>
element will be needed, just format all the attributes into a string
and output the <driver> element only if the string is not empty.
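A minimal sketch of the pattern in plain C; the attribute names and the
helper are illustrative, not the actual libvirt formatting code:

#include <stdio.h>

static void
format_driver(FILE *out, const char *name, const char *cache)
{
    char attrs[256] = "";
    size_t off = 0;

    /* Collect whatever attributes happen to be set... */
    if (name)
        off += snprintf(attrs + off, sizeof(attrs) - off, " name='%s'", name);
    if (cache)
        off += snprintf(attrs + off, sizeof(attrs) - off, " cache='%s'", cache);

    /* ...and emit <driver .../> only when something was formatted. */
    if (attrs[0] != '\0')
        fprintf(out, "<driver%s/>\n", attrs);
}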
We already check for a negative value and report an error, but we use
the wrong function, so the check only succeeds when a value that cannot
be converted to a number is encountered. This patch is just a minor
change to call the right variant of virStrToLong.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1138539
The first leak occurs in openvzDomainMigratePrepare3Params(): when no
remote URI is given, the remote hostname is used. The name is
obtained via virGetHostname(), which requires callers to free the
returned value.
The second leak lies in openvzDomainMigratePerform3Params(), where a
virCommand is created and used later. However, virCheckFlags() is
called at the beginning of the function and may return early, in which
case the command is leaked.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Currently, the setns() wrapper is supported only for x86_64 and i686,
which leaves us failing to build on other platforms like arm, aarch64
and so on. This means the wrapper needs to be extended to those
platforms and made to fail at runtime rather than at compile time.
The syscall numbers for the other platforms were fetched using this
command:
kernel.git $ git grep "define.*__NR_setns" | grep -e arm -e powerpc -e s390
arch/arm/include/uapi/asm/unistd.h:#define __NR_setns (__NR_SYSCALL_BASE+375)
arch/arm64/include/asm/unistd32.h:#define __NR_setns 375
arch/powerpc/include/uapi/asm/unistd.h:#define __NR_setns 350
arch/s390/include/uapi/asm/unistd.h:#define __NR_setns 339
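A sketch of the resulting shape of the wrapper; the arm, powerpc and s390
numbers come from the grep output above, while the x86_64/i686 numbers and
the function name are assumptions made for illustration:

#define _GNU_SOURCE
#include <errno.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef __NR_setns
# if defined(__x86_64__)
#  define __NR_setns 308
# elif defined(__i386__)
#  define __NR_setns 346
# elif defined(__arm__)
#  define __NR_setns (__NR_SYSCALL_BASE + 375)
# elif defined(__powerpc__)
#  define __NR_setns 350
# elif defined(__s390__)
#  define __NR_setns 339
# endif
#endif

static int
setns_compat(int fd, int nstype)
{
#ifdef __NR_setns
    return syscall(__NR_setns, fd, nstype);
#else
    /* Unknown architecture: fail at runtime instead of breaking the build. */
    errno = ENOSYS;
    return -1;
#endif
}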
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
A backing store string location offset of 0 means that no backing file
is present; in that case the string size should not be checked:
from qemu.git/docs/specs/qcow2.txt
== Header ==
The first cluster of a qcow2 image contains the file header:
    Byte  0 -  3:   magic
                    QCOW magic string ("QFI\xfb")
          4 -  7:   version
                    Version number (valid values are 2 and 3)
          8 - 15:   backing_file_offset
                    Offset into the image file at which the backing file name
                    is stored (NB: The string is not null terminated). 0 if the
                    image doesn't have a backing file.
         16 - 19:   backing_file_size
                    Length of the backing file name in bytes. Must not be
                    longer than 1023 bytes. Undefined if the image doesn't have
                                            ^^^^^^^^^
                    a backing file.
This patch intentionally leaves the backing file string size check in
place in case a malformed file is presented to libvirt. Also,
according to the docs the string size is at most 1023 bytes, so this
patch adds a check to verify that.
I was also able to verify that the check is done the same way for the
legacy qcow format (in qemu's code).
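A minimal sketch of the intended logic; the field names follow the spec
excerpt above, and this is not the actual libvirt parser:

#include <stdint.h>
#include <stdbool.h>

#define QCOW2_BACKING_FILE_MAX 1023  /* per qcow2.txt quoted above */

static bool
qcow2_backing_file_fields_valid(uint64_t backing_file_offset,
                                uint32_t backing_file_size)
{
    /* An offset of 0 means "no backing file": the size field is
     * undefined and must not be checked. */
    if (backing_file_offset == 0)
        return true;

    /* With a backing file present, reject malformed headers. */
    return backing_file_size > 0 &&
           backing_file_size <= QCOW2_BACKING_FILE_MAX;
}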
If there are no iothreads, then return from qemuProcessDetectIOThreadPIDs
without error; otherwise, the following occurs:
error: Failed to start domain $dom
error: An error occurred, but the cause is unknown
I noticed this with the recent iothread pinning code, but the
problem has existed for longer than that. The XML validation required
users to supply <cputune> children in a strict order, even though
there was no conceptual reason why they couldn't occur in any order.
docs/ changes best viewed with -w
* docs/schemas/domaincommon.rng (cputune): Add interleave.
* tests/qemuxml2argvdata/qemuxml2argv-cputune-iothreads.xml: Swap
up order, copying canonical form...
* tests/qemuxml2xmloutdata/qemuxml2xmlout-cputune-iothreads.xml:
...here.
* tests/qemuxml2xmltest.c (mymain): Mark the difference.
Signed-off-by: Eric Blake <eblake@redhat.com>
This is a follow-up to commit 5f719596, which checks for a route
conflicting with the standard libvirt default network subnet
(192.168.122.0/24). It turns out that $() strips the trailing newline
from the output of "ip route show", so there would be no match if the
route we were looking for was the final line of output. This can be
solved by adding ${nl} to the end of the output (just as we were
already adding it at the beginning).
https://bugzilla.redhat.com/show_bug.cgi?id=1101574
Add an 'iothreadpin' option to <cputune> to allow setting the
CPU affinity for each IOThread.
The iothreadpin element mimics vcpupin with respect to being able to
assign each IOThread to a specific CPU, although IOThread IDs start
at 1 while vCPU IDs start at 0. This matches the IOThread naming scheme.
Modify qemuProcessStart() to allow setting affinity to
specific CPUs for IOThreads. The process followed is similar to
that for the vCPUs.
This involves adding a function that fetches the IOThread IDs via
qemuMonitorGetIOThreads() and adds them to the iothreadpids[] list,
then making sure all the cgroup data has been properly set up, and
finally assigning affinity.
In order to support cpuset setting, introduce qemuSetupCgroupIOThreadsPin
and qemuSetupCgroupForIOThreads to mimic the existing Vcpu APIs.
These support having an 'iothreadpin' element in <cputune> in
order to pin named IOThreads to specific CPUs. The IOThread pin names
follow the IOThread naming scheme starting at 1 (e.g. "iothread1")
up through and including the def->iothreads value.
Add an iothread parameter to allow attaching to an IOThread, such as:
virsh attach-disk $dom $source $target --live --config --iothread 2 \
--targetbus virtio --driver qemu --subdriver raw --type disk
When spanning tree protocol is enabled in the bridge settings, the forward
delay value is set as well (defaulting to 0 if omitted). Until now, there
was no check of the delay value's validity. Delay only makes sense as a
positive numerical value.
Note: however, even if you provide a positive numerical value, the brctl
utility only uses values in the range [2, 30], so the number provided may
be adjusted (most likely by the kernel) to fall within this range.
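A minimal, plain-C sketch of the kind of validation described (not the
libvirt helper itself): the delay string must parse cleanly as a number
and must not be negative.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

static int
parse_forward_delay(const char *str, unsigned long *delay)
{
    char *end = NULL;
    long val;

    errno = 0;
    val = strtol(str, &end, 10);
    if (errno != 0 || end == str || *end != '\0' || val < 0) {
        fprintf(stderr, "invalid forward delay '%s'\n", str);
        return -1;
    }

    *delay = (unsigned long)val;
    return 0;
}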
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1125764
Coverity complains that the comparison:
if (nfds && nfds > ((int)!!sock_path + (int)!!sock_path_ro))
could mean 'sock_path' is NULL. Later in virNetSocketNewListenUNIX
there's a direct dereference of path in the error path:
if (path[0] != '@')
A bit of sleuthing proves that upon entry to daemonSetupNetworking
there is no way for 'sock_path' to be NULL, since daemonUnixSocketPaths
will set up 'sock_file' (although it may not set up 'sock_file_ro')
in all 3 paths.
Adjust the code to add ATTRIBUTE_NONNULL(3) on the incoming path parameter
and then fix up the comparison of nfds to be a comparison against 2 or 1,
depending on whether sock_path_ro is NULL or not.
Coverity complains about the calculation of buf & len within
the PROBE macro. So to quiet things down, do the calculation prior
to use in either the write() or qemuMonitorIOWriteWithFD() call and
then have the PROBE use the calculated values - which works.
Coverity complained that checking the return of virDomainCreate()
was not consistent amongst the callers - so add the return check
to objecteventtest.c and adjust virt-login-shell to compare
< 0 rather than just non-zero for the failure condition.
Coverity complains that on the first pass through the for loop
'params' cannot be true, thus the ternary setting to "&" cannot be
done. Since we can only ever get to this point once, drop the ternary.
It seems that when commit id 'ea130e3b' added the checks to ensure each of
the hard_limit, soft_limit, and swap_hard_limit wasn't set to
VIR_DOMAIN_MEMORY_PARAM_UNLIMITED, a copy/paste error resulted in
'hard_limit' being used in each comparison. Adjust the code.
Coverity complains that because 'offset' is initialized to
0 (zero), the resulting math and comparison on rem are pointless.
According to the original commit id '3ec128989', the code is a
replacement for gmtime(), but without the localtime() or GMT
calculations - so just remove this code and add a comment
indicating the removal.
Since 98b9acf5aa
This was a false positive where Coverity was complaining that
remoteDeserializeTypedParameters() could allocate 'params', but
none of the callers could return the allocated memory back to their
caller, since on input the param was passed by value. Additionally,
the flow of the code was that if params was NULL on entry, then each
function would return 'nparams' as the number of params entries the
caller would need to allocate in order to call the function again
with 'nparams' and 'params' set; by the time the deserialize
routine was called, params would have something. For other callers,
where 'params' was passed by reference as NULL because it's expected
that the deserialize code allocates the memory and passes it
back to the original caller to dispose of, there was no Coverity issue.
As it turns out, Coverity didn't quite seem to understand the
relationship between 'nparams' and 'params'; however, if the
!userAllocated path of the deserialize code compared against
'limit' in any manner, then the Coverity error went away, which
was quite strange, but useful.
As it turns out, one code path, remoteDomainGetJobStats, had a
comparison against 'limit', while another, remoteConnectGetAllDomainStats,
did not, assuming that the limit would be checked. So I refactored the
code a bit to make the limit check occur in deserialize for
both conditions, and then only made the check of the current returned
size against the incoming *nparams fail in the non-allocation case.
This means the job code doesn't need to check the limit any more,
while the stats code now does check the limit.
Additionally, to help perhaps decipher which of the various
callers of the deserialize code caused the failure, I used
a #define to pass the caller's __FUNCNAME__ along so that
error messages could contain something like:
error: remoteConnectGetAllDomainStats: too many parameters '2' for nparams '0'
error: Reconnected to the hypervisor
(it's a contrived error just to show the funcname in the error)
The manufacturer and product strings from the USB device itself are usually
not particularly useful -- they tend to be missing, or ugly (all-uppercase,
padded with spaces, etc.). Prefer what's in the USB ID database and fall back
to the descriptors only if the device is too new to be in the database.
https://bugzilla.redhat.com/show_bug.cgi?id=1138887
This patch adds initial migration support to the OpenVZ driver,
using the VIR_DRV_FEATURE_MIGRATION_PARAMS family of migration
functions.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Sometimes libvirt is installed on a host that is already using the
network 192.168.122.0/24. If the libvirt-daemon-config-network package
is installed, this creates a conflict, since that package has been
hard-coded to create a virtual network that also uses
192.168.122.0/24. In the past libvirt has attempted to warn of /
remediate this situation by checking for conflicting routes when the
network is started, but it turns out that isn't always useful (for
example in the case that the *other* interface/network creating the
conflict hasn't yet been started at the time libvirtd starts its own
networks).
This patch attempts to catch the problem earlier - at install
time. During the %post install script for
libvirt-daemon-config-network, we use a case statement to look through
the output of "ip route show" for a route that exactly matches
192.168.122.0/24, and if found we search for a similar route that
*doesn't* match (e.g. 192.168.124.0/24) (note that the search starts
with "124" instead of 123 because of reports of people already
modifying their L1 host's network to 192.168.123.0/24 in an attempt to
solve exactly the problem we are also trying to solve). When we find
an available route, we just replace all occurrences of "122" in the
default.xml that is being created with the newly found 192.168
subnet. This could obviously be made more complicated - examining the
template default.xml to automatically determine the existing network
address and mask rather than hard-coding it in the specfile, etc. - but
this scripting is simpler and gets the job done as long as we continue
to use 192.168.122.0/24 in the template. (If anyone with mad bash
skillz wants to suggest something to do that, by all means please do.)
This is intended to at least "further reduce" occurrence of the
problems detailed in:
https://bugzilla.redhat.com/show_bug.cgi?id=811967
We stupidly modeled block job bandwidth after migration
bandwidth, which in turn was an 'unsigned long' and therefore
subject to 32-bit vs. 64-bit interpretations. To work around
the fact that 10-gigabit interfaces are possible but don't fit
within 32 bits, the original interface took the number scaled
as MiB/sec. But this scaling is rather coarse, and it might
be nice to tune bandwidth finer than in megabyte chunks.
Several of the block job calls that can set speed are fed
through a common interface, so it was easier to adjust them all
at once. Note that there is intentionally no flag for the new
virDomainBlockCopy; there, since the API already uses a 64-bit
type always, instead of a possible 32-bit type, and is brand
new, it was easier to just avoid scaling issues. As with the
previous patch that adjusted the query side (commit db33cc24),
omitting the new flag preserves old behavior, and the
documentation now mentions limits of what happens when a 32-bit
machine is on either client or server side.
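As an illustration of how a caller might opt into byte granularity (the
domain, disk name and rate are placeholders; without the flag the value
is still interpreted as MiB/s, preserving old behavior):

#include <libvirt/libvirt.h>

static int
set_job_speed_in_bytes(virDomainPtr dom, const char *disk,
                       unsigned long bytes_per_sec)
{
    /* With the new flag the bandwidth argument is a bytes/s value;
     * omitting the flag keeps the historical MiB/s scaling. */
    return virDomainBlockJobSetSpeed(dom, disk, bytes_per_sec,
                                     VIR_DOMAIN_BLOCK_JOB_SPEED_BANDWIDTH_BYTES);
}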
* include/libvirt/libvirt.h.in (virDomainBlockJobSetSpeedFlags)
(virDomainBlockPullFlags)
(VIR_DOMAIN_BLOCK_REBASE_BANDWIDTH_BYTES)
(VIR_DOMAIN_BLOCK_COMMIT_BANDWIDTH_BYTES): New enums.
* src/libvirt.c (virDomainBlockJobSetSpeed, virDomainBlockPull)
(virDomainBlockRebase, virDomainBlockCommit): Document them.
* src/qemu/qemu_driver.c (qemuDomainBlockJobSetSpeed)
(qemuDomainBlockPull, qemuDomainBlockRebase)
(qemuDomainBlockCommit, qemuDomainBlockJobImpl): Support new flag.
Signed-off-by: Eric Blake <eblake@redhat.com>
Upstream qemu 1.4 added some drive-mirror tunables not present
when it was first introduced in 1.3. Management apps may want
to set these in some cases (for example, without tuning
granularity down to sector size, a copy may end up occupying
more bytes than the original because an entire cluster is
copied even when only a sector within the cluster is dirty,
although tuning it down results in more CPU time to do the
copy). I haven't personally needed to use the parameters, but
since they exist, and since the new API supports virTypedParams,
we might as well expose them.
Since the tuning parameters aren't often used, and omitted from
the QMP command when unspecified, I think it is safe to rely on
qemu 1.3 to issue an error about them being unsupported, rather
than trying to create a new capability bit in libvirt.
Meanwhile, all versions of qemu from 1.4 to 2.1 have a bug where
a bad granularity (such as non-power-of-2) gives a poor message:
error: internal error: unable to execute QEMU command 'drive-mirror': Invalid parameter 'drive-virtio-disk0'
because of abuse of QERR_INVALID_PARAMETER (which is supposed to
name the parameter that was given a bad value, rather than the
value passed to some other parameter). I don't see that a
capability check will help, so we'll just live with it (and it
has since been improved in upstream qemu).
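As an illustration, a management app could pass the tunables through the
new typed-parameter interface roughly like this; the values, disk name and
destination XML are placeholders, and error handling is omitted:

#include <libvirt/libvirt.h>

static int
start_copy_with_tuning(virDomainPtr dom, const char *disk,
                       const char *destxml)
{
    virTypedParameterPtr params = NULL;
    int nparams = 0;
    int maxparams = 0;
    int ret;

    /* 64 KiB granularity and a 16 MiB copy buffer, purely as examples */
    virTypedParamsAddUInt(&params, &nparams, &maxparams,
                          VIR_DOMAIN_BLOCK_COPY_GRANULARITY, 64 * 1024);
    virTypedParamsAddULLong(&params, &nparams, &maxparams,
                            VIR_DOMAIN_BLOCK_COPY_BUF_SIZE, 16 * 1024 * 1024);

    ret = virDomainBlockCopy(dom, disk, destxml, params, nparams, 0);
    virTypedParamsFree(params, nparams);
    return ret;
}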
* src/qemu/qemu_monitor.h (qemuMonitorDriveMirror): Add
parameters.
* src/qemu/qemu_monitor.c (qemuMonitorDriveMirror): Likewise.
* src/qemu/qemu_monitor_json.h (qemuMonitorJSONDriveMirror):
Likewise.
* src/qemu/qemu_monitor_json.c (qemuMonitorJSONDriveMirror):
Likewise.
* src/qemu/qemu_driver.c (qemuDomainBlockCopyCommon): Likewise.
(qemuDomainBlockRebase, qemuDomainBlockCopy): Adjust callers.
* src/qemu/qemu_migration.c (qemuMigrationDriveMirror): Likewise.
* tests/qemumonitorjsontest.c (qemuMonitorJSONDriveMirror): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
The hard part of managing the disk copy is already coded; all
this had to do was convert the XML and virTypedParameters into
the internal representation.
With this patch, all blockcopy operations that used the old
API should also work via the new API. Additional extensions,
such as supporting the granularity tunable or a network rather
than file destination, will be added as later patches.
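For reference, a call through the new API might look roughly like this;
the destination XML below, describing a local raw file, is only an
illustrative placeholder:

#include <libvirt/libvirt.h>

static int
start_block_copy(virDomainPtr dom)
{
    const char *destxml =
        "<disk type='file'>"
        "  <source file='/var/lib/libvirt/images/copy.img'/>"
        "  <driver type='raw'/>"
        "</disk>";

    /* No typed parameters yet; tunables come with later patches. */
    return virDomainBlockCopy(dom, "vda", destxml, NULL, 0, 0);
}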
* src/qemu/qemu_driver.c (qemuDomainBlockCopy): New function.
Signed-off-by: Eric Blake <eblake@redhat.com>
In order to implement the new virDomainBlockCopy, the existing
block copy internal implementation needs to be adjusted. The
new function will parse XML into a storage source, and parse
typed parameters into integers, then call into the same common
backend. For now, it's easier to keep the same implementation
limits that only local file destinations are supported, but now
the check needs to be explicit. Similar to qemuDomainBlockJobImpl
consuming 'vm', this code also consumes the caller's 'mirror'
description of the destination.
* src/qemu/qemu_driver.c (qemuDomainBlockCopy): Rename...
(qemuDomainBlockCopyCommon): ...and adjust parameters.
(qemuDomainBlockRebase): Adjust caller.
Signed-off-by: Eric Blake <eblake@redhat.com>
At the beginning, when I was inventing the <loader/> attributes and
<nvram/>, I introduced the @readonly attribute on the loader
element. It accepted the values 'on' and 'off'. However, later, during the
review process, that changed to 'yes' and 'no', but the example
XML snippet wasn't updated, so while the description is correct, the
example isn't.
Reported-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When a domain is undefined, there are options to remove its
managed save state or snapshots. However, there's another file
that libvirt creates per domain: the NVRAM variable store file.
Make sure that the file is not left behind when the domain is
undefined.
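A sketch of how a client might request this, assuming the usual
undefine-flags mechanism is the knob involved (that wiring is an
assumption of this example, not spelled out above):

#include <libvirt/libvirt.h>

static int
undefine_with_nvram(virDomainPtr dom)
{
    /* Assumption: VIR_DOMAIN_UNDEFINE_NVRAM asks libvirt to also remove
     * the per-domain NVRAM variable store file, alongside the existing
     * managed-save handling. */
    return virDomainUndefineFlags(dom,
                                  VIR_DOMAIN_UNDEFINE_MANAGED_SAVE |
                                  VIR_DOMAIN_UNDEFINE_NVRAM);
}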
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If we end up at the cleanup label before we've VIR_EXPAND_N'd the list,
then calling virQEMUCapsFreeStringList() with a NULL proplist could
theoretically deref proplist if nproplist was set. Coverity doesn't
seem to acknowledge the relationship between proplist and nproplist,
assuming in virQEMUCapsFreeStringList that nproplist could be at
least 1 and thus there would be a NULL deref; it only follows the
NULL proplist.
Signed-off-by: John Ferlan <jferlan@redhat.com>