The virDomainSendProcessSignal documentation says the flags values
come from virDomainProcessSignalFlag, but this enum has
never existed. No flags are needed for this method.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Almost none of our virJSONValue*Get* functions accept const virJSONValue
pointers, and it wouldn't even make sense since we sometimes modify what
we get. And because there is no reason to prevent callers of
virJSONValueObjectForeachKeyValue from modifying the values they get in
each iteration, we can just stop doing it.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Using a variable named 'stat' clashes with the system function
'stat()', causing compiler warnings on some platforms:
cc1: warnings being treated as errors
../../src/qemu/qemu_monitor_text.c: In function 'parseMemoryStat':
../../src/qemu/qemu_monitor_text.c:604: error: declaration of 'stat' shadows a global declaration [-Wshadow]
/usr/include/sys/stat.h:455: error: shadowed declaration is here [-Wshadow]
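The fix is a simple rename of the local variable; a minimal sketch (the
type and replacement name here are illustrative, not the actual patch):

    /* before: 'stat' shadows the global stat(2) declaration */
    virDomainMemoryStatPtr stat;
    /* after: any non-clashing name silences -Wshadow */
    virDomainMemoryStatPtr stats;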
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
If the cpuset cgroup controller is disabled in /etc/libvirt/qemu.conf,
QEMU virtual machines can in principle use all host CPUs, even if they
are hot plugged, if they have no explicit CPU affinity defined.
However, there is libvirt code that is supposed to handle the situation
where the libvirt daemon itself is not using all host CPUs. The code in
qemuProcessInitCpuAffinity attempts to set an affinity mask including
all defined host CPUs. Unfortunately, the resulting affinity mask for
the process will not contain the offline CPUs. See also the
sched_setaffinity(2) man page.
That means that even if the host CPUs come online again, they won't be
used by the QEMU process anymore. The same is true for newly hot
plugged CPUs. So we are effectively preventing QEMU from using all
processors instead of enabling it to use them.
It only makes sense to set the QEMU process affinity if we're able
to actually grow the set of usable CPUs, i.e. if the process affinity
is a subset of the online host CPUs.
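A minimal sketch of that subset test, using plain sched.h types for
illustration (the actual code uses libvirt's bitmap helpers):

    #include <sched.h>
    #include <stdbool.h>

    /* Widening the affinity only makes sense when every CPU we are
     * currently bound to is online; otherwise keep the mask as is. */
    static bool
    affinityIsSubsetOfOnline(cpu_set_t *affinity, cpu_set_t *online)
    {
        int i;
        for (i = 0; i < CPU_SETSIZE; i++) {
            if (CPU_ISSET(i, affinity) && !CPU_ISSET(i, online))
                return false;
        }
        return true;
    }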
There's still the chance that for some reason the deliberately chosen
libvirtd affinity matches the online host CPU mask by accident. In this
case the behavior remains as it was before (CPUs offline while setting
the affinity will not be used if they show up later on).
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
Tested-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
The functions to retrieve online and present host CPU information
are only supported on Linux for the time being.
This leads to runtime errors if these functions are used on other
platforms. To avoid that, higher-level code using these functions
must replicate the conditional compilation, which is error prone
(and, plainly spoken, ugly).
Add a function virHostCPUHasBitmap that can be used to check
for host CPU bitmap support.
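A sketch of the intended call pattern (only virHostCPUHasBitmap is from
this patch; the surrounding caller is assumed):

    if (virHostCPUHasBitmap()) {
        /* safe: the platform can deliver CPU bitmaps */
        virBitmapPtr online = virHostCPUGetOnlineBitmap();
        /* ... consume the bitmap ... */
        virBitmapFree(online);
    }
    /* else: skip the bitmap-based path */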
NB: there are other functions, including the host CPU count, that
are not supported on all platforms, but they are too essential
to be bypassed.
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
virQEMUCapsFindTarget is supposed to find an alternative QEMU binary if
qemu-system-$GUEST_ARCH doesn't exist. The alternative is to use the host
architecture when it is compatible with $GUEST_ARCH. But special
treatment has to be applied for ppc64le, since the QEMU binary is always
called qemu-system-ppc64.
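The special case, sketched (the virArch constants are libvirt's; the
exact placement inside virQEMUCapsFindTarget is approximated):

    /* there is no qemu-system-ppc64le; the ppc64 binary handles both
     * endiannesses, so map the target before composing the name */
    if (target == VIR_ARCH_PPC64LE)
        target = VIR_ARCH_PPC64;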
Broken by me in v2.2.0-171-gf2e71550d.
https://bugzilla.redhat.com/show_bug.cgi?id=1403745
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
It seems commit id '0257d06b' forgot to include formatstorage when updating
the docs to describe allowing zfs as a pool type and, furthermore, to note
that the pool's target path element will be generated rather than read.
Similarly, commit id 'efab27afb' neglected to indicate that the target path
for a logical pool will now be generated by libvirt.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Almost all XML examples use <tag .../> rather than <tag ...></tag> if
the element is empty. Let's remove the two instances of the latter.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuAgentNotifyEvent accesses the monitor structure and is called on qemu
reset/shutdown/suspend events under the domain lock. Other monitor
functions, on the other hand, take the monitor lock and don't hold the
domain lock. Thus it is possible to have risky simultaneous access to the
structure from two threads. Let's take the monitor lock here to make the
access exclusive.
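A minimal sketch of the pattern, assuming the agent monitor is a
lockable virObject as elsewhere in the qemu driver:

    virObjectLock(mon);
    /* ... touch the monitor structure exclusively ... */
    virObjectUnlock(mon);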
In the case of 0 filesystems, *info is not set, while according to the
virDomainGetFSInfo contract the user should call free on it even in the
case of 0 filesystems. Thus we need to set it properly. NULL will be
enough, as free() handles NULL just fine.
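The fix, sketched (variable names are illustrative):

    if (nfs == 0) {
        *info = NULL;   /* callers free() this even for 0 filesystems */
        ret = 0;
        goto cleanup;
    }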
The libvirt-domain.h documentation indicates that a qcow2 file in a
filesystem being used as a backing store should report the disk
space occupied by the file; however, commit id '15fa84ac' altered the
code to trust that wr_highest_offset should be used whenever
wr_highest_offset_valid was set.
As it turns out, this leads to indeterminate results. For an active
domain, when qemu hasn't yet had the need to find the wr_highest_offset
value, qemu will report 0 even though qemu-img will report the proper
disk size. This causes reporting of the following XML:
  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2'/>
    <source file='/path/to/test-1g.qcow2'/>
to be as follows:
Capacity: 1073741824
Allocation: 0
Physical: 1074139136
with qemu-img indicating:
image: /path/to/test-1g.qcow2
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: 1.0G
Once the backing source file is opened on the guest, wr_highest_offset
is updated, but only to the high-water mark and not the size of the file.
This patch adjusts the logic to check for a file-backed qcow2 image
and enforce setting the allocation to the returned 'physical' value, which
is the 'actual-size' value from a 'query-block' operation.
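A hedged sketch of the adjusted check; the field and enum names follow
libvirt conventions, but the exact condition is an approximation of the
patch:

    /* file-backed qcow2: the high-water mark understates disk usage,
     * so report 'actual-size' (physical) from query-block instead */
    if (entry->wr_highest_offset_valid &&
        src->format == VIR_STORAGE_FILE_QCOW2 &&
        virStorageSourceGetActualType(src) == VIR_STORAGE_TYPE_FILE)
        info->allocation = entry->physical;
    else if (entry->wr_highest_offset_valid)
        info->allocation = entry->wr_highest_offset;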
NB: The other consumer of the wr_highest_offset output (GetAllDomainStats)
has a contract that indicates 'allocation' is the offset of the highest
written sector, so it doesn't need adjustment.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Instead of having duplicated code in qemuStorageLimitsRefresh and
virStorageBackendUpdateVolTargetInfo to get capacity-specific data
about the storage backing source or volume, create a common API
to handle the details for both.
As a side effect, virStorageFileProbeFormatFromBuf returns to being
a local/static helper to virstoragefile.c.
For the QEMU code, if the probe is done, the format is saved so
as to avoid future such probes.
For the storage backend code, there is no need to deal with the probe
since we cannot call the new API if target->format == NONE.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Instead of having duplicated code in qemuStorageLimitsRefresh and
virStorageBackendUpdateVolTargetInfoFD to fill in the storage backing
source or volume allocation, capacity, and physical values, create a
common API that will handle the details for both.
The common API will fill in "default" capacity values as well, although
those will more than likely be overridden by subsequent code. Having just
one place to determine what the values should be will make things
more consistent.
For the QEMU code, the data filled in will be for inactive domains
for the GetBlockInfo and DomainGetStatsOneBlock APIs. For the storage
backend code, the data will be filled in during the volume updates.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Commit id '8dc27259' introduced virStorageSourceUpdateBlockPhysicalSize
in order to retrieve the physical size for a block backed source device
for an active domain, since commit id '15fa84ac' changed to use the
qemuMonitorGetAllBlockStatsInfo and qemuMonitorBlockStatsUpdateCapacity
APIs to (essentially) retrieve the "actual-size" from a 'query-block'
operation for the source device.
However, the code was only made functional for a BLOCK backing type,
and it neglected to use qemuOpenFile, instead using a bare open. After
the open, lseek would find the end of the block device and set the
physical value, close the fd, and return.
Since the code would return 0 immediately if the source device wasn't
a BLOCK backed device, the physical value would be displayed incorrectly,
such as follows in domblkinfo for a file backed source device:
Capacity: 1073741824
Allocation: 0
Physical: 0
This patch modifies the algorithm to get the physical size for other
backing types and makes use of the qemuDomainStorageOpenStat
helper in order to open/stat the source file depending on its type.
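The core of the size probe, sketched (error handling elided; fd and sb
are assumed to come back from the open/stat helper):

    /* block devices report st_size == 0, so seek to the end instead;
     * regular files can use the stat result directly */
    if (S_ISBLK(sb.st_mode))
        src->physical = lseek(fd, 0, SEEK_END);
    else
        src->physical = sb.st_size;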
qemuDomainGetStatsOneBlock will no longer inhibit printing errors,
but it will still ignore them, leaving the physical value set to 0.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Split out the opening of the file and the fetch of the stat buffer into a
helper, qemuDomainStorageOpenStat. This will handle opening either
local or remote storage.
Additionally, split out the corresponding cleanup into a separate helper,
qemuDomainStorageCloseStat, which will either close the file or
call the virStorageFileDeinit function.
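A rough usage sketch; the parameter lists here are assumptions, not the
actual signatures:

    if (qemuDomainStorageOpenStat(driver, cfg, vm, src, &fd, &sb) < 0)
        goto cleanup;
    /* ... consume fd / sb ... */
    qemuDomainStorageCloseStat(src, &fd);  /* close() or virStorageFileDeinit() */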
Signed-off-by: John Ferlan <jferlan@redhat.com>
Originally added by commit id '89646e69', prior to commit ids '15fa84ac'
and '71d2c172', which ensured that qemuStorageLimitsRefresh was only called
for inactive domains.
Adjust the comment describing the need for the FIXME and move all the text
to the function description.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Commit 1ec22be5 added code that detects the maximum CPU count according to
domain capabilities. The code fell back to the old approach only if the
API was not supported. If the API failed for other reasons, the command
would fail. There's no point in not trying the old API in such a case.
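Sketched fallback (the capabilities-based helper name is hypothetical;
virConnectGetMaxVcpus is assumed to be the old query here):

    if ((vcpus = getMaxVcpusFromDomCaps(conn, type)) < 0) {
        vshResetLibvirtError();   /* any failure triggers the fallback */
        vcpus = virConnectGetMaxVcpus(conn, type);
    }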
https://bugzilla.redhat.com/show_bug.cgi?id=1402690
This flag is used implicitly in the Virtuozzo backend, thus
we need to support it and not fail if it's set.
Signed-off-by: Pavel Glushchak <pglushchak@virtuozzo.com>
This flag tells the backend not to create instance
disks, making the behavior the same as in the qemu driver.
Disk files have to be created beforehand on the target
host, either manually or by an upper management layer,
e.g. OpenStack Nova.
Signed-off-by: Pavel Glushchak <pglushchak@virtuozzo.com>
When saving or migrating a domain for which we autogenerated a websocket
port, write out a -1 for the socket value when printing the inactive
domain config; otherwise, it's possible that the subsequent start will
fail if the autogenerated websocket conflicts with an existing running
config that also used autogenerated websockets.
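With the fix, the inactive XML carries the 'auto-assign' sentinel
instead of the concrete port, e.g. (illustrative snippet):

    <graphics type='vnc' port='-1' autoport='yes' websocket='-1'/>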
Examples:
== A. Cannot restore a domain with an autogenerated websocket.
domains 1 and 2 have autogenerated websockets.
1. domain 1 is started, then saved
2. domain 2 is started
3. restoring domain 1 fails:
error: internal error: qemu unexpectedly closed the monitor: 2016-11-21T10:23:11.356687Z
qemu-kvm: -vnc 0.0.0.0:2,websocket=5700: Failed to start VNC server on `(null)':
Failed to bind socket: Address already in use
== B. Cannot migrate a domain with an autogenerated websocket.
domain 1 on host A, domain 2 on host B, both have autogenerated websockets.
1. domain 1 is started, domain 2 is started
2. migrating domain 1 to host B fails with the above error.
Some arguments of the vshErrorHandler, vshReadlineCompletion and
cmdSelfTest functions are not used. Mark them as such.
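Sketch using libvirt's annotation macro (the parameter list is
approximated, not copied from the patch):

    static void
    vshErrorHandler(void *opaque ATTRIBUTE_UNUSED,
                    virErrorPtr error ATTRIBUTE_UNUSED)
    {
        /* the body deliberately ignores its arguments */
    }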
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
In preparation for the code move to virnetdevtap.c, this change:
* renames virNetInterfaceStats to virNetDevTapInterfaceStats
* changes 'path' to 'ifname', to use the same vocabulary as other
methods in virnetdevtap.c
* adds the attributes checker
When vhostuser interfaces are used, the interface statistics
are not available in /proc/net/dev.
This change looks at the openvswitch interface statistics
tables to provide this information for vhostuser interfaces.
Note that in the openvswitch world drop/error doesn't always make sense
for some interface types. When this information is not available, we
set the fields to 0 in the virDomainInterfaceStats.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If 'nleases < 0' on return, then the subsequent call to
findLeaseInJSON will not produce the expected results (nleases is
a ssize_t, but it is passed in as a size_t). So check whether the
returned value is < 0 and, if so, goto cleanup.
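The added guard, sketched:

    /* nleases is a ssize_t; letting a negative value be converted to
     * size_t would wrap to a huge count, so bail out early */
    if (nleases < 0)
        goto cleanup;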
Found by Coverity as a NEGATIVE_RETURNS event
If the allocation in DO_TEST_FLUSH_PROLOGUE fails, then 'mgr == NULL',
but the code continues on, which won't be good. So modify the macro
to cause an immediate failure and jump to a cleanup label.
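A sketch of the hardened macro (the constructor call and label are
illustrative, not the real test code):

    #define DO_TEST_FLUSH_PROLOGUE                                    \
        do {                                                          \
            if (!(mgr = makeSecurityManager()))   /* hypothetical */  \
                goto cleanup;   /* fail fast, never use a NULL mgr */ \
        } while (0)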
Found by Coverity as FORWARD_NULL event.
There's nothing to compress if the requested snapshot memory format is
set to 'raw' explicitly. After commit 9e14689ea, libvirt would try to
run /sbin/raw to process the memory stream if the qemu.conf option
snapshot_image_format is set.
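The guard, sketched (the enum value follows the qemu driver's
save-format naming but is an assumption here):

    /* 'raw' means uncompressed: never exec a compressor program,
     * even when snapshot_image_format is configured */
    if (format != QEMU_SAVE_FORMAT_RAW)
        compressor = getCompressionProgram(format);   /* hypothetical */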
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1402726
Since its introduction in 2012, this internal API has done nothing.
Moreover, we have another API that does exactly the same thing:
virSecurityManagerDomainSetPathLabel.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>