Now that all the pieces are in place (hopefully), let's enable -blockdev.
We base the capability on the presence of the fix for 'auto-read-only' on
files so that blockdev works properly, and mandate that qemu supports
explicit SCSI id strings (to avoid an ABI regression) and that the fix
for 'savevm' is present (so that internal snapshots work).
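A minimal sketch of how this gating might look in the capability probing
code (the flag names are assumptions used for illustration, not
necessarily the exact identifiers):

    /* Sketch: enable QEMU_CAPS_BLOCKDEV only when the prerequisite
     * fixes are known to be present. */
    if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_BLOCK_FILE_AUTO_READONLY_DYNAMIC) &&
        virQEMUCapsGet(qemuCaps, QEMU_CAPS_SCSI_DISK_DEVICE_ID) &&
        virQEMUCapsGet(qemuCaps, QEMU_CAPS_SAVEVM_MONITOR_NODES))
        virQEMUCapsSet(qemuCaps, QEMU_CAPS_BLOCKDEV);
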
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
The 'savevm' HMP command didn't work properly with blockdev as it tried
to snapshot everything, including the protocol nodes accessing files,
which are not snapshottable. QEMU fixed this bug, so now we need to
detect the fix to allow enabling blockdev.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Top level commands can now have 'feature' flags for fixes, so add
support for querying those as well.
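A command entry in the query-qmp-schema reply carrying such a flag then
looks roughly like this (the command and feature names below are just
placeholders for illustration):

    { "name": "some-command",
      "meta-type": "command",
      "arg-type": "123",
      "ret-type": "124",
      "features": [ "some-bugfix-flag" ] }
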
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
The initial implementation of 'auto-read-only' didn't reopen the backing
files when needed. For '-blockdev' to work we need to be able to tell
qemu to open a file read-only and change that during block jobs, as we
label backing chains with an sVirt label which does not allow writing.
Dynamic auto-read-only supports this as it reopens files when write
access is demanded.
Add a capability detecting that the POSIX file based backends support
the dynamic behaviour.
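With the dynamic behaviour present, the file protocol node can simply be
added with 'auto-read-only' enabled; an illustrative -blockdev argument
(node name and path are made up):

    -blockdev '{"driver":"file","filename":"/path/to/backing.qcow2",
                "node-name":"libvirt-1-storage","auto-read-only":true}'
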
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
The qemu driver will obey <backingStore> when we support blockdev.
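For example, an explicitly provided backing chain such as the following
will then be honoured rather than re-detected by libvirt (paths are
illustrative):

    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/top.qcow2'/>
      <backingStore type='file'>
        <format type='qcow2'/>
        <source file='/var/lib/libvirt/images/base.qcow2'/>
        <backingStore/>
      </backingStore>
      <target dev='vda' bus='virtio'/>
    </disk>
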
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
If the user starts a blockcommit or a blockcopy, we modify the access
for qemu on both images and leave it like that until the job terminates.
So far so good. The problem arises when, instead of terminating the job
(at which point we would modify the access again to restore the pre-job
state), the user destroys the domain, or qemu dies while executing the
block job. In that case we never clear the access we granted at the
beginning.
To fix this, a perhaps slightly harsh approach is used, but it works:
after all labels have been restored (that is, after
qemuSecurityRestoreAllLabel() was called), we iterate over each disk in
the domain and remove the XATTRs from the whole backing chain and also
from any file the disk is being mirrored to.
This would have been done at the time of the pivot, but it wasn't
because the user decided to kill the domain instead. If we don't do this
and leave some XATTRs behind, the domain might be unable to start again.
Also, the secdriver can't do this itself because it doesn't know whether
any job is running. That is outside of its scope - the hypervisor driver
is responsible for calling the secdriver's APIs.
Moreover, this is safe to call because we don't remember labels for any
member of a backing chain except for the top layer, and that one was
already restored by the earlier qemuSecurityRestoreAllLabel() call.
Therefore, not only do we not remember labels (which makes this
basically a NOP for the other images in the backing chain), it is also
safe to call this when no blockjob was started in the first place, or
when parts of the backing chain are shared with other domains - it is a
NOP unless a block job is active at the time the domain is destroyed.
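In rough terms, the cleanup boils down to a loop like this after the
labels were restored (a sketch only; qemuBlockRemoveImageMetadata stands
in for the per-chain removal helper):

    size_t i;

    for (i = 0; i < vm->def->ndisks; i++) {
        virDomainDiskDefPtr disk = vm->def->disks[i];

        /* drop XATTRs from the whole backing chain of the disk */
        qemuBlockRemoveImageMetadata(driver, vm, disk->dst, disk->src);

        /* ... and from the mirror destination of an active job, if any */
        if (disk->mirror)
            qemuBlockRemoveImageMetadata(driver, vm, disk->dst, disk->mirror);
    }
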
https://bugzilla.redhat.com/show_bug.cgi?id=1741456#c19
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
There are four places where we remove image XATTRs, and in all of them
we have the same for() loop with the same body. Move it into a separate
function because I'm about to introduce a fifth place where the same
needs to be done.
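A sketch of what the extracted helper might look like (the function and
helper names are assumptions based on the description above, not
necessarily the exact identifiers):

    static void
    qemuBlockRemoveImageMetadata(virQEMUDriverPtr driver,
                                 virDomainObjPtr vm,
                                 const char *diskTarget,
                                 virStorageSourcePtr src)
    {
        virStorageSourcePtr n;

        /* walk the backing chain and drop remembered label XATTRs */
        for (n = src; virStorageSourceIsBacking(n); n = n->backingStore) {
            if (qemuSecurityMoveImageMetadata(driver, vm, n, NULL) < 0)
                VIR_WARN("Unable to remove disk metadata on vm %s from %s "
                         "(disk target %s)",
                         vm->def->name, NULLSTR(n->path), diskTarget);
        }
    }
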
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Install the converter function which enables the internals that will
use -blockdev to make qemu open the firmware image, and stop using
-drive.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
The old way to instantiate a pflash device via -drive was a hack since
it's a platform device.
The modern approach calls for configuring it via -machine and takes the
node name as an argument.
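The resulting command line then looks roughly like this (node names,
paths and the machine type are illustrative; the variable store gets an
analogous pflash1 node):

    -blockdev '{"driver":"file","filename":"/usr/share/OVMF/OVMF_CODE.fd",
                "node-name":"libvirt-pflash0-storage","auto-read-only":true}'
    -blockdev '{"node-name":"libvirt-pflash0-format","read-only":true,
                "driver":"raw","file":"libvirt-pflash0-storage"}'
    -machine pc-q35-4.2,pflash0=libvirt-pflash0-format
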
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
As a first step we build the blockdevs which will back the pflash
drives when moving away from -drive.
This code is similar to the way we build the blockdevs for the disk, but
skips the copy-on-read layer and doesn't implement any legacy approach.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Add a helper which converts the PFLASH code file and variable file into
virStorageSource objects stored in the private data so that we can use
them with -blockdev while keeping the infrastructure for determining the
path to the loaders intact.
This is a temporary solution until we want to support snapshots of the
pflash, at which point we will be forced to track the full backing chain
in the XML.
In the meantime, just convert it partially so that we can stop using
-drive.
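A rough sketch of the conversion (field and helper names here are
illustrative assumptions, not the exact implementation):

    /* wrap the loader path in a virStorageSource kept in private data
     * so that the -blockdev generators can consume it */
    if (def->os.loader && def->os.loader->path) {
        if (!(priv->pflash0 = virStorageSourceNew()))
            return -1;

        priv->pflash0->type = VIR_STORAGE_TYPE_FILE;
        priv->pflash0->format = VIR_STORAGE_FILE_RAW;
        priv->pflash0->path = g_strdup(def->os.loader->path);
        priv->pflash0->readonly = def->os.loader->readonly;
    }
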
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
To allow converting the pflash drives to blockdev we need a
virStorageSource so that we can use our helpers. Until the loader data
is converted to a virStorageSource, add private data which will house
it.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Extract the old way of instantiating pflash devices holding the
firmware via -drive into a separate function, so that it can later be
conditionally disabled when -blockdev is used.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Commit 5751a0b6b1968bb2354b2ac21cc5938b93009590 added a helper function
called virDomainCapsFeaturesInitUnsupported which initialized all domain
capability features as unsupported.
When adding a new feature this would initialize it as unsupported even
for hypervisor drivers which the original author possibly didn't intend
to modify. To prevent a wrong value from being accidentally reported in
such cases, revert back to initializing the individual features in the
hypervisor drivers themselves.
This is not a straight revert as additional patches modified how we
store the capabilities.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
When starting a domain without a CPU model specified in the domain XML,
QEMU will choose a default one. This is fine unless the domain gets
migrated to another host, because libvirt doesn't perform any CPU ABI
checks and the virtual CPU provided by QEMU on the destination host can
differ from the one on the source host.
With QEMU 4.2.0 we can probe for the default CPU model used by QEMU for
a particular machine type and store it in the domain XML. This way the
chosen CPU model is more visible to users and libvirt will make sure
the guest will see the exact same CPU after migration.
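The domain XML might then end up with the probed model recorded
explicitly, along the lines of (the exact attributes may differ):

    <cpu mode='custom' match='exact' check='none'>
      <model fallback='forbid'>qemu64</model>
    </cpu>
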
Architecture-specific notes:
- aarch64: We only set the default CPU for TCG domains as KVM requires
explicit "-cpu host" to work.
- ppc64: The default CPU for KVM is "host" thanks to some hacks in
QEMU; we will translate the default model to the model corresponding to
the host CPU ("POWER8" on a Power8 host, "POWER9" on a Power9 host,
etc.). This is not a problem as the corresponding CPU model is in fact
an alias for "host". This is probably not ideal, but it's not wrong, and
the default virtual CPU configured by libvirt is the same one QEMU would
use. TCG uses various CPU models depending on the machine type and its
version.
- s390x: The default CPU for KVM is "host" while TCG defaults to "qemu".
- x86_64: The default CPU model (qemu64) is not runnable on any host
with KVM, but QEMU just disables unavailable features and starts
happily.
https://bugzilla.redhat.com/show_bug.cgi?id=1598151
https://bugzilla.redhat.com/show_bug.cgi?id=1598162
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
QEMU 4.2.0 will report the default CPU types used by each machine type
and we will want to start using this information.
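An illustrative (trimmed) exchange showing the new field in the
query-machines reply:

    -> { "execute": "query-machines" }
    <- { "return": [
           { "name": "pc-q35-4.2",
             "cpu-max": 288,
             "default-cpu-type": "qemu64-x86_64-cpu" },
           ... ] }
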
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Almost all TCG query-machines replies match the KVM ones. The only
exceptions are the 4.2.0 replies on s390x, which differ in the reported
default CPU type.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Some specifics of machine types may depend on the accelerator and thus
the data should be moved to virQEMUCapsAccel. The TCG machine types are
just copied from the ones probed for KVM to simplify the changes to
qemucapabilitiestest data files.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The function copies machine type data from one QEMU caps structure to
another.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
In preparation for making machine types dependent on the accelerator,
the <machine> elements are formatted between <cpu type='kvm'> and
<cpu type='tcg'>.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
All the code for formatting machine type data was moved to a standalone
virQEMUCapsFormatMachines function.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
All the code for loading machine type data was moved to a standalone
virQEMUCapsLoadMachines function.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
To avoid duplicating code which selects the right virQEMUCapsAccel data
to be filled during probing.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
It is a tiny wrapper around virQEMUCapsProbeQMPCPUDefinitions which will
soon get private parameters and thus it cannot be exposed outside
qemu_capabilities.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
And make it use virQEMUCapsGetAccel once rather than repeating the same
code in all functions called from virQEMUCapsFormatAccel.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
And make it use virQEMUCapsGetAccel once rather than repeating the same
code in all functions called from virQEMUCapsLoadAccel.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The function can be used to get the pointer to all data which depend on
the accelerator.
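A plausible shape for such an accessor (a sketch; the 'kvm' and 'tcg'
member names are assumptions):

    static virQEMUCapsAccelPtr
    virQEMUCapsGetAccel(virQEMUCapsPtr qemuCaps,
                        virDomainVirtType type)
    {
        /* return the accelerator-specific data for KVM or TCG */
        if (type == VIR_DOMAIN_VIRT_KVM)
            return &qemuCaps->kvm;

        return &qemuCaps->tcg;
    }
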
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
This is a container for capabilities data that depend on the
accelerator.
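Conceptually, the container could look like this (field names and types
are illustrative; per the related patches it holds the CPU model data
and the host CPU expansion, with machine types moved in later):

    typedef struct _virQEMUCapsAccel virQEMUCapsAccel;
    typedef virQEMUCapsAccel *virQEMUCapsAccelPtr;
    struct _virQEMUCapsAccel {
        virQEMUCapsHostCPUData hostCPU;  /* host CPU expansion data */
        qemuMonitorCPUDefsPtr cpuModels; /* CPU models reported by QEMU */
    };
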
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The new functions are designed to load and format capabilities which
depend on the accelerator (host CPU expansion and CPU models).
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
We need to create a mapping between CPU model names and their
corresponding QOM types.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
We will need to keep some QEMU-specific data for each CPU model
supported by a QEMU binary. Instead of complicating the generic
virDomainCapsCPUModelsPtr, we can just directly store
qemuMonitorCPUDefsPtr returned by the capabilities probing code.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Most of the code moved to a new virQEMUCapsFetchCPUDefinitions function
and the existing virQEMUCapsFetchCPUModels just becomes a small wrapper
around virQEMUCapsFetchCPUDefinitions and virQEMUCapsCPUDefsToModels.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The functions return virDomainCapsCPUModelsPtr and thus they should be
called *CPUModels for consistency. Functions called *CPUDefinitions will
work on qemuMonitorCPUDefsPtr.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The function translates qemuMonitorCPUDefsPtr (used by QEMU caps probing
code) into virDomainCapsCPUModelsPtr used by domain capabilities.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
While the virDomainCapsCPUModel structure contains a 'usable' field of
the virDomainCapsCPUUsable type, the lower level structure specific to
the QEMU driver used virTristateBool for the same thing and we had to
translate between them.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Let's store qemuMonitorCPUDefInfo directly in the array of CPUs in
qemuMonitorCPUDefs rather than using an array of pointers.
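After this change the container conceptually looks like (a sketch, not
necessarily the exact definition):

    struct _qemuMonitorCPUDefs {
        size_t ncpus;
        qemuMonitorCPUDefInfo *cpus;  /* flat array of ncpus entries,
                                       * not an array of pointers */
    };
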
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
It is a container for a CPU models list (qemuMonitorCPUDefInfo) and the
number of elements in that list.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The function would return a valid virDomainCapsCPUModelsPtr with an
empty CPU models list if query-cpu-definitions exists in QEMU but
returns GenericError, meaning it's not in fact implemented. This
behaviour is a bit strange, especially since such a
virDomainCapsCPUModels structure stored in the capabilities XML and
parsed back results in a NULL virDomainCapsCPUModelsPtr rather than a
structure containing nothing.
Let's just keep virDomainCapsCPUModelsPtr NULL if the QMP command is not
implemented and change the return value to int so that callers can
easily check for failure or success.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>