External disk-only snapshots with recent enough qemu don't require
libvirt to pause the VM. The logic determining when to resume cpus was
slightly flawed and attempted to resume them even if they were not
paused by the snapshot code. This normally was not a problem, but with
locking enabled the code would attempt to acquire the lock twice.
The fallout of this bug was an error reported from the API, even
though the actual snapshot was created successfully. The bug was
introduced when adding support for external snapshots with memory
(checkpoints) in commit f569b87.
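The fix boils down to a condition of roughly this shape (a sketch;
the variable and helper names mirror the surrounding qemu snapshot
code but are assumptions here, not the literal patch):

    /* Resume vCPUs only if the snapshot code paused them itself;
     * "resume" is set solely on the path that stopped the CPUs. */
    if (resume && virDomainObjIsActive(vm) &&
        virDomainObjGetState(vm, NULL) == VIR_DOMAIN_PAUSED) {
        if (qemuProcessStartCPUs(driver, vm, conn,
                                 VIR_DOMAIN_RUNNING_UNPAUSED,
                                 QEMU_ASYNC_JOB_SNAPSHOT) < 0)
            goto cleanup;
    }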
Resolves problems described by:
https://bugzilla.redhat.com/show_bug.cgi?id=1403691
After qemu delivers the resume event the VM is already running, and
thus it's too late to enter lockspaces, since qemu may already have
modified the disk. Currently the code only creates false log entries
when locking is enabled. The lockspace needs to be acquired prior to
starting the CPUs.
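In other words, the ordering needs to be roughly (a hedged sketch;
names illustrative):

    /* 1) enter the lockspace while the CPUs are still stopped ... */
    if (virDomainLockProcessResume(driver->lockManager, cfg->uri,
                                   vm, priv->lockState) < 0)
        goto cleanup;

    /* 2) ... and only then let qemu run and touch the disks */
    if (qemuProcessStartCPUs(driver, vm, conn, reason, asyncJob) < 0)
        goto cleanup;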
Given how intrusive previous patches are, it might happen that
there's a bug or imperfection. Let's give users a way out: if they
set 'namespaces' to an empty array in qemu.conf, the feature is
suppressed.
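For illustration, the qemu.conf override would look like this
(hedged; the exact default spelling lives in the shipped qemu.conf):

    # An empty list suppresses the namespace feature; the default
    # is expected to be along the lines of
    #   namespaces = [ "mount" ]
    namespaces = [ ]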
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When attaching a device to a domain that's using a separate mount
namespace, we must maintain the /dev entries in order for the qemu
process to see them.
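Maintaining an entry amounts to something like this (an illustrative
sketch; seclabels and ACLs are omitted and the helper structure in
the patch may differ):

    /* In the host: look up the device node's identity ... */
    struct stat sb;
    if (stat(path, &sb) < 0)
        return -1;

    /* ... then, from a child that entered the domain's mount
     * namespace, recreate it with the same identity. */
    if (mknod(path, sb.st_mode, sb.st_rdev) < 0 && errno != EEXIST)
        return -1;
    if (chown(path, sb.st_uid, sb.st_gid) < 0)
        return -1;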
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Instead of trying to fix our security drivers, we can use a simple
trick to relabel paths in both the namespace and the host. After
all, if we enter the namespace, some paths are still shared with the
host, so any change done to them is visible from the host too.
Therefore, we can just enter the namespace and call
SetAllLabel()/RestoreAllLabel() from there. Yes, this has a slight
overhead, because we have to fork in order to enter the namespace.
But on the other hand, no complexity is added to our code.
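Roughly (a condensed sketch; the security manager call reflects the
API of the time and error handling is trimmed):

    char nspath[64];
    pid_t child;

    snprintf(nspath, sizeof(nspath),
             "/proc/%lld/ns/mnt", (long long) vm->pid);

    if ((child = fork()) == 0) {
        int fd = open(nspath, O_RDONLY);

        /* join qemu's mount namespace; relabeled paths that are
         * still shared with the host change on both sides */
        if (fd < 0 || setns(fd, CLONE_NEWNS) < 0)
            _exit(EXIT_FAILURE);

        if (virSecurityManagerSetAllLabel(driver->securityManager,
                                          vm->def, NULL) < 0)
            _exit(EXIT_FAILURE);
        _exit(EXIT_SUCCESS);
    }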
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When starting a domain that uses a separate mount namespace, we have
to create all the /dev entries that are configured for the domain.
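Conceptually (a hedged sketch; qemuDomainCreateDevice stands in for
whatever helper this series introduces):

    size_t i;

    for (i = 0; i < vm->def->ndisks; i++) {
        virDomainDiskDefPtr disk = vm->def->disks[i];

        /* only local block devices have a /dev node to replicate */
        if (disk->src->type != VIR_STORAGE_TYPE_BLOCK)
            continue;
        if (qemuDomainCreateDevice(disk->src->path, devPath, false) < 0)
            goto cleanup;
    }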
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Prime time. When it comes to spawning the qemu process and
relabelling all the devices it's going to touch, there's an inherent
race with other applications in the system (e.g. udev). Instead of
trying to convince udev not to touch libvirt-managed devices, we can
create a separate mount namespace for qemu and mount our own /dev
there. Of course this puts more work onto us, as we have to maintain
the /dev files on each domain start and device hot(un-)plug. On the
other hand, it also enhances security.
From a technical POV, on domain startup the parent (libvirtd)
creates:
/var/lib/libvirt/qemu/$domain.dev
/var/lib/libvirt/qemu/$domain.devpts
The child (which is going to be qemu eventually) calls unshare() to
create a new mount namespace. From now on, anything the child does
is invisible to the parent. The child then mounts a tmpfs on
$domain.dev (so that it still sees the original /dev from the host)
and creates some devices (as explained in one of the previous
patches). The devices have to be created exactly as they are in the
host (including perms, seclabels, ACLs, ...). After that it moves
the $domain.dev mount to /dev.
What is the $domain.devpts mount there for then, you ask? QEMU can
create PTYs for some chardevs, and historically we have exposed the
host ends in our domain XML, allowing users to connect to them.
Therefore we must preserve the devpts mount shared with the host's
one.
To keep this patch as small as possible, creating the devices
configured for the domain in question is implemented in the next
patches.
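Condensed, the child's part is (a sketch with illustrative flags;
devPath is the $domain.dev path from above):

    /* child, pre-exec: build a private /dev and swap it in */
    if (unshare(CLONE_NEWNS) < 0)
        return -1;

    /* mount a tmpfs on the staging dir; the original /dev is still
     * visible, so device nodes can be replicated from it */
    if (mount("devfs", devPath, "tmpfs", MS_NOSUID, "mode=755") < 0)
        return -1;

    /* ... create the needed device nodes in devPath here ... */

    /* atomically swap the populated tmpfs in as the new /dev */
    if (mount(devPath, "/dev", NULL, MS_MOVE, NULL) < 0)
        return -1;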
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This is a list of devices that qemu needs at runtime (apart from
what's configured for the domain). The devices on the list are
enabled in the CGroups by default, so they are good candidates for
the initial /dev of a new qemu.
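The list is roughly the same set that the default cgroup_device_acl
in qemu.conf allows (a sketch; the exact contents are defined by the
patch):

    static const char *const defaultDeviceACL[] = {
        "/dev/null", "/dev/full", "/dev/zero",
        "/dev/random", "/dev/urandom",
        "/dev/ptmx", "/dev/kvm", "/dev/rtc", "/dev/hpet",
        NULL,
    };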
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
We will need this function in the near future so that we know what
/dev device corresponds to the SCSI device.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
We will need this function in the near future so that we know what
/dev device corresponds to the USB device.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Namely, virFileGetACLs, virFileSetACLs, virFileFreeACLs and
virFileCopyACLs. These functions are going to be required when we
are creating /dev for qemu. We have to copy anything that's in the
host's /dev exactly as is, including ACLs.
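These are thin wrappers over libacl; copying an ACL then amounts to
(a hedged sketch of virFileCopyACLs' core):

    acl_t acl = acl_get_file(src, ACL_TYPE_ACCESS);

    if (!acl)
        return -1;
    if (acl_set_file(dst, ACL_TYPE_ACCESS, acl) < 0) {
        acl_free(acl);
        return -1;
    }
    acl_free(acl);
    return 0;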
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
libvirt's libxl driver picks its own default with respect to the NIC
type to use. libxlMakeNic is responsible for this, and on boot it
picks LIBXL_NIC_TYPE_VIF_IOEMU for HVM domains so that it
accommodates both PV and emulated NICs. A well-behaved guest will
then select the PV NIC at boot and unplug the emulated device.
Now, on HVM, attaching an interface will pick the same default, that
is LIBXL_NIC_TYPE_VIF_IOEMU, which as a result will fail the attach
(see xen commit 32e9d0f ("libxl: nic type defaults to vif in hotplug
for hvm guest")). Xen doesn't yet support hotplug of emulated
devices, but we don't want to rule out that case either, as it might
gain support in the future. Hence we simply reverse the defaults
when attaching the interface, which allows libvirt to prefer the PV
NIC first without adding "model='netfront'", following the same
pattern as the above commit. Also, to avoid ruling out the emulated
case, we set LIBXL_NIC_TYPE_VIF_IOEMU when a model type other than
'netfront' is set.
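Schematically, the resulting default selection is (a hedged sketch;
structure names approximate libxlMakeNic):

    if (hvm) {
        if (l_nic->model && STRNEQ(l_nic->model, "netfront")) {
            /* an emulated model was requested explicitly */
            x_nic->nictype = LIBXL_NIC_TYPE_VIF_IOEMU;
        } else if (attach) {
            /* hotplug: prefer PV, emulated hotplug is unsupported */
            x_nic->nictype = LIBXL_NIC_TYPE_VIF;
        } else {
            /* boot: offer both, the guest unplugs the emulated one */
            x_nic->nictype = LIBXL_NIC_TYPE_VIF_IOEMU;
        }
    }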
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
The virDomainSendProcessSignal method says the flags values
come from virDomainProcessSignalFlag, but this enum has
never existed. No flags are needed for this method.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Almost none of our virJSONValue*Get* functions accept const virJSONValue
pointers and it wouldn't even make sense since we sometimes modify what
we get. And because there is no reason for preventing callers of
virJSONValueObjectForeachKeyValue from modifying the values they get in
each iteration we can just stop doing it.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Using a variable named 'stat' clashes with the system function
'stat()', causing compiler warnings on some platforms:
cc1: warnings being treated as errors
../../src/qemu/qemu_monitor_text.c: In function 'parseMemoryStat':
../../src/qemu/qemu_monitor_text.c:604: error: declaration of 'stat' shadows a global declaration [-Wshadow]
/usr/include/sys/stat.h:455: error: shadowed declaration is here [-Wshadow]
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
If the cpuset cgroup controller is disabled in /etc/libvirt/qemu.conf
QEMU virtual machines can in principle use all host CPUs, even if they
are hot plugged, if they have no explicit CPU affinity defined.
However, there's libvirt code supposed to handle the situation where
the libvirt daemon itself is not using all host CPUs. The code in
qemuProcessInitCpuAffinity attempts to set an affinity mask including
all defined host CPUs. Unfortunately, the resulting affinity mask for
the process will not contain the offline CPUs. See also the
sched_setaffinity(2) man page.
That means that even if the host CPUs come online again, they won't be
used by the QEMU process anymore. The same is true for newly hot
plugged CPUs. So we are effectively preventing QEMU from using all
processors instead of enabling it to use them.
It only makes sense to set the QEMU process affinity if we're able
to actually grow the set of usable CPUs, i.e. if the process affinity
is a subset of the online host CPUs.
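The check boils down to (a sketch; virBitmapIsSubset is an
illustrative helper name, true iff every bit of mask is also set in
online):

    virBitmapPtr online = virHostCPUGetOnlineBitmap();
    virBitmapPtr mask = virProcessGetAffinity(vm->pid);

    if (online && mask && virBitmapIsSubset(mask, online)) {
        /* widening the affinity can actually add usable CPUs */
        if (virProcessSetAffinity(vm->pid, cpumapToSet) < 0)
            goto cleanup;
    }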
There's still the chance that for some reason the deliberately chosen
libvirtd affinity matches the online host CPU mask by accident. In this
case the behavior remains as it was before (CPUs offline while setting
the affinity will not be used if they show up later on).
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
Tested-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
The functions to retrieve online and present host CPU information
are only supported on Linux for the time being.
This leads to runtime errors if these functions are used on other
platforms. To avoid that, higher-level code using these functions
must replicate the conditional compilation, which is error prone
(and, plainly spoken, ugly).
Adding a function virHostCPUHasBitmap that can be used to check
for host CPU bitmap support.
NB: There are other functions, including the host CPU count, that
are not supported on all platforms, but they are too essential to be
bypassed.
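The new helper can be as simple as (a sketch mirroring the existing
conditional compilation):

    bool
    virHostCPUHasBitmap(void)
    {
    #ifdef __linux__
        return true;    /* online/present maps come from sysfs */
    #else
        return false;   /* no host CPU bitmap support here */
    #endif
    }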
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
virQEMUCapsFindTarget is supposed to find an alternative QEMU binary if
qemu-system-$GUEST_ARCH doesn't exist. The alternative is using the
host architecture when it is compatible with $GUEST_ARCH. But
special treatment has to be applied for ppc64le, since the QEMU
binary is always called qemu-system-ppc64.
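The special case amounts to (a hedged sketch):

    /* there is no qemu-system-ppc64le; the LE variant is handled
     * by the qemu-system-ppc64 binary */
    if (target == VIR_ARCH_PPC64LE)
        target = VIR_ARCH_PPC64;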
Broken by me in v2.2.0-171-gf2e71550d.
https://bugzilla.redhat.com/show_bug.cgi?id=1403745
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
It seems commit id '0257d06b' forgot to include formatstorage when
updating the docs to describe allowing zfs as a pool type and,
furthermore, to note that the pool's target path element will be
generated rather than read.
Similarly commit id 'efab27afb' neglected to indicate that the target path
for a logical pool will now be generated by libvirt.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Almost all XML examples use <tag .../> rather than <tag ...></tag> if
the element is empty. Let's remove the two instances of the latter.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuAgentNotifyEvent accesses the monitor structure and is called on
qemu reset/shutdown/suspend events under the domain lock. Other
monitor functions, on the other hand, take the monitor lock and
don't hold the domain lock. Thus it is possible to have risky
simultaneous access to the structure from two threads. Let's take
the monitor lock here to make the access exclusive.
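Condensed, the fix looks like (a sketch; the function body is
abbreviated):

    void
    qemuAgentNotifyEvent(qemuAgentPtr agent, qemuAgentEvent event)
    {
        virObjectLock(agent);   /* previously missing */
        if (agent->await_event == event)
            agent->await_event = QEMU_AGENT_EVENT_NONE;
        virObjectUnlock(agent);
    }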
In the case of 0 filesystems, *info is not set, while according to
the virDomainGetFSInfo contract the user should call free on it even
in that case. Thus we need to set it properly; NULL is enough, as
free() handles NULL just fine.
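I.e. (a sketch; 'ndata' is an illustrative name for the filesystem
count):

    /* honor the contract even with no filesystems: the caller
     * frees *info unconditionally and free(NULL) is a no-op */
    *info = NULL;
    if (ndata == 0)
        return 0;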
The libvirt-domain.h documentation indicates that for a qcow2 file
in a filesystem being used as a backing store, the reported
allocation should be the disk space occupied by the file; however,
commit id '15fa84ac' altered the code to trust that
wr_highest_offset should be used whenever wr_highest_offset_valid
was set.
As it turns out, this leads to indeterminate results. For an active
domain, when qemu hasn't yet had the need to find the
wr_highest_offset value, qemu will report 0 even though qemu-img
will report the proper disk size. This causes the block info for
the following XML:
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/path/to/test-1g.qcow2'/>
to be reported as:
Capacity: 1073741824
Allocation: 0
Physical: 1074139136
with qemu-img indicating:
image: /path/to/test-1g.qcow2
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: 1.0G
Once the backing source file is opened in the guest,
wr_highest_offset is updated, but only to the high-water mark and
not to the size of the file.
This patch will adjust the logic to check for the file-backed qcow2 image
and enforce setting the allocation to the returned 'physical' value, which
is the 'actual-size' value from a 'query-block' operation.
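Sketch of the adjusted logic (the entry/info field names are
illustrative, loosely following qemu's query-block output):

    /* a file-backed qcow2 must not trust wr_highest_offset for
     * allocation; report the on-disk ('actual-size') value instead */
    if (entry->physical_valid &&
        disk->src->format == VIR_STORAGE_FILE_QCOW2 &&
        virStorageSourceGetActualType(disk->src) == VIR_STORAGE_TYPE_FILE)
        info->allocation = entry->physical;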
NB: The other consumer of the wr_highest_offset output (GetAllDomainStats)
has a contract that indicates 'allocation' is the offset of the highest
written sector, so it doesn't need adjustment.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Instead of having duplicated code in qemuStorageLimitsRefresh and
virStorageBackendUpdateVolTargetInfo to get capacity specific data
about the storage backing source or volume -- create a common API
to handle the details for both.
As a side effect, virStorageFileProbeFormatFromBuf returns to being
a local/static helper to virstoragefile.c.
For the QEMU code - if the probe is done, then the format is saved so
as to avoid future such probes.
For the storage backend code, there is no need to deal with the probe
since we cannot call the new API if target->format == NONE.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Instead of having duplicated code in qemuStorageLimitsRefresh and
virStorageBackendUpdateVolTargetInfoFD to fill in the storage backing
source or volume allocation, capacity, and physical values - create a
common API that will handle the details for both.
The common API will fill in "default" capacity values as well - although
those more than likely will be overridden by subsequent code. Having just
one place to make the determination of what the values should be will
make things be more consistent.
For the QEMU code - the data filled in will be for inactive domains
for the GetBlockInfo and DomainGetStatsOneBlock APIs. For the storage
backend code - the data will be filled in during the volume updates.
Signed-off-by: John Ferlan <jferlan@redhat.com>