Now that we have some qemuSecurity wrappers over
virSecurityManager APIs, let's make sure everybody sticks with
them. We have them for a reason: calling a virSecurityManager
API directly instead of the wrapper may lead to accidentally
labelling a file on the host instead of in the namespace.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Previously the code called virStorageSourceUpdateBlockPhysicalSize which
did not do anything on empty drives since it worked only on block
devices. After the refactor in c5f6151390 it's called for all devices
and thus attempts to deref the NULL path of empty drives.
Add a check that skips the update of the physical size if the storage
source is empty.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1420718
This follows the same check as for disks: we cannot remove an
iothread if it is used by a disk or by a controller, since doing
so could crash QEMU.
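For illustration (domain name, iothread id and exact error wording are
only indicative), an attempt to drop an iothread that is still in use
is now rejected instead of potentially crashing QEMU:
$ virsh iothreaddel fedora 2 --live
error: invalid argument: cannot remove IOThread 2 since it is being used by disk 'vda'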
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
If the virDomainDelIOThread API was called with both
VIR_DOMAIN_AFFECT_LIVE and VIR_DOMAIN_AFFECT_CONFIG and the two
XML definitions already differed, it could result in removing the
iothread from the config XML even if there was a disk using that
iothread.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
The bare fact that the mount namespace is available is not enough
for us to allow/enable the qemu namespaces feature. There are other
requirements: we must copy all the ACL & SELinux labels, otherwise
we might grant access that is administratively forbidden or vice
versa.
At the same time, the check for namespace prerequisites is moved
from domain startup time to qemu.conf parser as it doesn't make
much sense to allow users to start misconfigured libvirt just to
find out they can't start a single domain.
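With the check done at parse time, a misconfiguration is reported when
libvirtd reads qemu.conf rather than when the first domain is started.
A minimal fragment using the existing "namespaces" knob looks like this:
# /etc/libvirt/qemu.conf
# Explicitly request the mount namespace feature for qemu domains;
# libvirtd now refuses to start if the prerequisites are not met.
namespaces = [ "mount" ]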
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
These functions do not need to see the whole virDomainDiskDef.
Moreover, they are going to be called from places where we don't
have access to the full disk definition. Sticking with
virStorageSource is more than enough.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Now that we have a function for properly assigning the blockdeviotune
info, let's use it instead of dropping the group name on every
assignment. Otherwise it will not work with both --live and --config
options.
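An illustrative invocation (domain, disk and group names made up) that
should now preserve the group name for both the live and persistent
definitions:
$ virsh blkdeviotune fedora vda --total-bytes-sec 10000000 \
    --group-name limit0 --live --config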
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
For example, when both total_bytes_sec and total_bytes_sec_max are
set, but the former gets cleared because a new call sets, let's say,
read_bytes_sec, we end up with this confusing message for the command:
$ virsh blkdeviotune fedora vda --read-bytes-sec 3000
error: Unable to change block I/O throttle
error: unsupported configuration: value 'total_bytes_sec_max' cannot be set if 'total_bytes_sec' is not set
So let's make it more descriptive. This is how it looks after the change:
$ virsh blkdeviotune fedora vda --read-bytes-sec 3000
error: Unable to change block I/O throttle
error: unsupported configuration: cannot reset 'total_bytes_sec' when 'total_bytes_sec_max' is set
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1344897
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
We were setting it based on whether it was supported and that led to
setting it to NULL, which our JSON code caught. However it ended up
producing the following result:
$ virsh blkdeviotune fedora vda --total-bytes-sec-max 2000
error: Unable to change block I/O throttle
error: internal error: argument key 'group' must not have null value
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Extract the call to qemuDomainSelectHotplugVcpuEntities outside of
qemuDomainSetVcpusLive and decide whether to hotplug or unplug the
entities specified by the cpumap using a boolean flag.
This will allow using qemuDomainSetVcpusLive in cases where we prepare
the list of vcpus to enable or disable by other means.
For block jobs, where libvirt is able to track the state internally,
we can fix the locking of images and drop the appropriate locks.
Also, when doing a pivot operation we should not acquire the lock on
any of those images since both are actually locked already.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1302168
Images that became part of the backing chain of the current image due
to the snapshot need to be unlocked in the lock manager. Also, if qemu
was paused during the snapshot, the current top level images need to be
released until qemu is resumed so that they can then be acquired properly.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1191901
The code at first changed the definition and then rolled it back in case
of failure. This was ridiculous. Refactor the code so that the image in
the definition is changed only when the snapshot is successful.
The refactor will also simplify further fix of image locking when doing
snapshots.
Libvirt is able to properly model what happens to the backing chain
after a snapshot so there's no real need to redetect the data.
Additionally, with the _REUSE_EXT flag this might end up redetecting
wrong data if the user puts a wrong backing chain reference into the
snapshot image.
The code at the very bottom of the DAC secdriver that calls
chown() should be fine with read-only data. If something needs to
be prepared it should have been done beforehand.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When coldplugging vcpus to a VM that already has a few hotpluggable
vcpus the code might generate an invalid configuration, as
non-hotpluggable cpus need to be clustered starting from vcpu 0.
This fix forces the added vcpus to be hotpluggable in such a case.
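A sketch of the scenario (domain name and vcpu counts are illustrative):
a domain with 4 vcpus whose last two are hotpluggable, coldplugged up to
6 vcpus with
$ virsh setvcpus fedora 6 --config
now gets the newly added vcpus 4 and 5 marked hotpluggable='yes' in the
persistent XML, keeping the non-hotpluggable vcpus clustered from vcpu 0.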
Fixes a corner case described in:
https://bugzilla.redhat.com/show_bug.cgi?id=1370357
This patch adds support and documentation for the generalized
hardware cache perf event cache_l1d.
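Usage sketch (domain name illustrative); the event maps to
<event name='cache_l1d' enabled='yes'/> under <perf> in the domain XML
and can also be toggled and queried at runtime:
$ virsh perf fedora --enable cache_l1d
$ virsh domstats fedora --perf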
Signed-off-by: Nitesh Konkar <nitkon12@linux.vnet.ibm.com>
When changing the metadata via virDomainSetMetadata, we now
emit an event to notify the app of changes. This is useful
when co-ordinating different applications' reads/writes of
custom metadata.
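For example (URI, key and payload are arbitrary), a change made with
$ virsh metadata fedora http://example.org/myapp --key myapp \
    --set '<tags><tag>production</tag></tags>'
now results in a VIR_DOMAIN_EVENT_ID_METADATA_CHANGE event that other
clients listening for domain events can react to.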
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Commit 4b951d1e38 missed the fact that the
VM needs to be resumed after a live external checkpoint (memory
snapshot) where the cpus would be paused by the migration rather than
libvirt.
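An illustrative way to exercise this path (domain and file names made
up) is a live external checkpoint, after which the guest cpus must be
resumed again:
$ virsh snapshot-create-as fedora checkpoint1 --live \
    --memspec file=/var/lib/libvirt/images/fedora.mem,snapshot=external \
    --diskspec vda,snapshot=external,file=/var/lib/libvirt/images/fedora.chk1.qcow2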
According to commit id '0282ca45a' the 'physical' value should
essentially be the last offset of the image or the host physical
size in bytes of the image container. However, commit id '15fa84ac'
refactored the GetBlockInfo to use the same returned data as the
GetStatsBlock API for an active domain. For the 'entry->physical'
that would end up being the "actual-size" as set through the
qemuMonitorJSONBlockStatsUpdateCapacityOne (commit '7b11f5e5').
Digging deeper into QEMU code one finds that actual_size is
filled in using the same algorithm as GetBlockInfo has used for
setting the 'allocation' field when the domain is inactive.
The difference in values is seen primarily in sparse raw files
and other container type files (such as qcow2), which will return
a smaller value via the stat API for 'st_blocks'. Additionally
for container files, the 'capacity' field (populated via the
QEMU "virtual-size" value) may be slightly different (smaller)
in order to accommodate the overhead for the container. For
sparse files, the stat 'st_size' field is returned.
This patch thus alters the allocation and physical values for
sparse backed storage files to be more appropriate to the API
contract. The result for GetBlockInfo is the following:
capacity:   logical size in bytes of the image (how much storage
            the guest will see)
allocation: host storage in bytes occupied by the image (such
            as highest allocated extent if there are no holes,
            similar to 'du')
physical:   host physical size in bytes of the image container
            (last offset, similar to 'ls')
NB: The GetStatsBlock API allows a different contract for the
values:
"block.<num>.allocation" - offset of the highest written sector
as unsigned long long.
"block.<num>.capacity" - logical size in bytes of the block device
backing image as unsigned long long.
"block.<num>.physical" - physical size in bytes of the container
of the backing image as unsigned long long.
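As a worked illustration of the contract (all numbers and names made
up), a 1 GiB sparse qcow2 image with only part of its space written
would now report along these lines:
$ virsh domblkinfo fedora vda
Capacity:       1073741824
Allocation:     332595200
Physical:       1074139136
i.e. capacity is the guest-visible size, allocation roughly matches what
'du' reports for the host file, and physical matches the last offset as
shown by 'ls -l'.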
External disk-only snapshots with recent enough qemu don't require
libvirt to pause the VM. The logic determining when to resume cpus was
slightly flawed and attempted to resume them even if they were not
paused by the snapshot code. This normally was not a problem, but with
locking enabled the code would attempt to acquire the lock twice.
The fallout of this bug would be an error from the API even though the
actual snapshot was created. The bug was introduced when adding support
for external snapshots with memory (checkpoints) in commit f569b87.
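For reference (names illustrative), the affected operation is a plain
external disk-only snapshot of a running domain:
$ virsh snapshot-create-as fedora snap1 --disk-only \
    --diskspec vda,snapshot=external,file=/var/lib/libvirt/images/fedora.snap1.qcow2
With locking enabled this used to report an error even though the
snapshot had been created.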
Resolves problems described by:
https://bugzilla.redhat.com/show_bug.cgi?id=1403691
When attaching a device to a domain that's using a separate mount
namespace we must maintain the /dev entries in order for the qemu
process to see them.
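For example (device and target names illustrative), hotplugging a block
device now also creates the corresponding /dev node inside the domain's
mount namespace:
$ virsh attach-disk fedora /dev/sdb vdb --live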
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The libvirt-domain.h documentation indicates that a qcow2 file in a
filesystem being used as a backing store should report the disk space
occupied by the file; however, commit id '15fa84ac' altered the code
to trust that wr_highest_offset should be used whenever
wr_highest_offset_valid was set.
As it turns out this will lead to indeterminate results. For an active
domain when qemu hasn't yet had the need to find the wr_highest_offset
value, qemu will report 0 even though qemu-img will report the proper
disk size. This causes reporting of the following XML:
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/path/to/test-1g.qcow2'/>
to be as follows:
Capacity: 1073741824
Allocation: 0
Physical: 1074139136
with qemu-img indicating:
image: /path/to/test-1g.qcow2
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: 1.0G
Once the backing source file is opened on the guest, then wr_highest_offset
is updated, but only to the high water mark and not the size of the file.
This patch will adjust the logic to check for the file backed qcow2 image
and enforce setting the allocation to the returned 'physical' value, which
is the 'actual-size' value from a 'query-block' operation.
NB: The other consumer of the wr_highest_offset output (GetAllDomainStats)
has a contract that indicates 'allocation' is the offset of the highest
written sector, so it doesn't need adjustment.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Instead of having duplicated code in qemuStorageLimitsRefresh and
virStorageBackendUpdateVolTargetInfo to get capacity specific data
about the storage backing source or volume -- create a common API
to handle the details for both.
As a side effect, virStorageFileProbeFormatFromBuf returns to being
a local/static helper to virstoragefile.c
For the QEMU code - if the probe is done, then the format is saved so
as to avoid future such probes.
For the storage backend code, there is no need to deal with the probe
since we cannot call the new API if target->format == NONE.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Instead of having duplicated code in qemuStorageLimitsRefresh and
virStorageBackendUpdateVolTargetInfoFD to fill in the storage backing
source or volume allocation, capacity, and physical values - create a
common API that will handle the details for both.
The common API will fill in "default" capacity values as well - although
those more than likely will be overridden by subsequent code. Having just
one place to make the determination of what the values should be will
make things more consistent.
For the QEMU code - the data filled in will be for inactive domains
for the GetBlockInfo and DomainGetStatsOneBlock API's. For the storage
backend code - the data will be filled in during the volume updates.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Commit id '8dc27259' introduced virStorageSourceUpdateBlockPhysicalSize
in order to retrieve the physical size for a block backed source device
for an active domain since commit id '15fa84ac' changed to use the
qemuMonitorGetAllBlockStatsInfo and qemuMonitorBlockStatsUpdateCapacity
API's to (essentially) retrieve the "actual-size" from a 'query-block'
operation for the source device.
However, the code was only made functional for a BLOCK backing type
and it neglected to use qemuOpenFile, instead using plain open. After
the open, an lseek to the end of the block device would find its size
and set the physical value, then the fd was closed and the function
returned.
Since the code would return 0 immediately if the source device wasn't
a BLOCK backed device, the physical would be displayed incorrectly,
such as follows in domblkinfo for a file backed source device:
Capacity: 1073741824
Allocation: 0
Physical: 0
This patch will modify the algorithm to get the physical size for other
backing types and it will make use of the qemuDomainStorageOpenStat
helper in order to open/stat the source file depending on its type.
The qemuDomainGetStatsOneBlock will no longer inhibit printing errors,
but it will still ignore them leaving the physical value set to 0.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Split out the opening of the file and fetch of the stat buffer into a
helper qemuDomainStorageOpenStat. This will handle either opening the
local or remote storage.
Additionally split out the cleanup of that into a separate helper
qemuDomainStorageCloseStat which will either close the file or
call the virStorageFileDeinit function.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Originally added by commit id '89646e69' prior to commit ids '15fa84ac'
and '71d2c172', which ensured that qemuStorageLimitsRefresh was only
called for inactive domains.
Adjust the comment describing the need for FIXME and move all the text
to the function description.
Signed-off-by: John Ferlan <jferlan@redhat.com>
In preparation for the code move to virnetdevtap.c, this change:
* renames virNetInterfaceStats to virNetDevTapInterfaceStats
* changes 'path' to 'ifname', to use the same vocabulary as other
methods in virnetdevtap.c
* adds the attributes checker
When vhostuser interfaces are used, the interface statistics
are not available in /proc/net/dev.
This change looks at the openvswitch interface statistics
tables to provide this information for vhostuser interfaces.
Note that in the openvswitch world drop/error counters don't always
make sense for some interface types. When this information is not
available we set the values to 0 in the virDomainInterfaceStats struct.
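Illustrative usage (domain and interface names made up): with a
vhostuser interface backed by openvswitch, statistics can now be
fetched the usual way, with any counters openvswitch does not provide
reported as 0:
$ virsh domifstat fedora vhostuser0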
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>