The 'savevm' HMP command didn't work properly with blockdev as it
tried to snapshot everything, including the protocol nodes accessing
files, which are not snapshottable. QEMU fixed this bug, so now we
need to detect the fix to allow enabling blockdev.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
The top-level commands can now have 'feature' flags for fixes, so add
support for querying those as well.
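As an illustration (not the actual libvirt code), a client that has
already fetched the 'query-qmp-schema' reply could look up a command's
feature flags roughly like this; the command and feature names in the
usage note are hypothetical:

    # Minimal sketch, assuming 'schema' is the parsed list of SchemaInfo
    # dicts returned by QMP 'query-qmp-schema'.
    def command_features(schema, command_name):
        """Return the 'features' flags advertised for a top-level command."""
        for entry in schema:
            if entry.get("meta-type") == "command" and \
               entry.get("name") == command_name:
                return entry.get("features", [])
        return []

    # Hypothetical usage once the reply is parsed:
    #   if "some-fix" in command_features(schema, "some-command"):
    #       ...  # the fixed behaviour can be relied upon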
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
The initial implementation of 'auto-read-only' didn't reopen the
backing files when needed. For '-blockdev' to work we need to be able
to tell qemu to open a file read-only and change that during block
jobs, as we label backing chains with an sVirt label which does not
allow writing. The dynamic auto-read-only supports this as it reopens
files when write access is required.
Add a capability to detect that the POSIX file based backends support
the dynamic behaviour.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
The data is captured from qemu v4.2.0-rc2-19-g2061735ff0
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
The qemu driver will obey <backingStore> when we support blockdev.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Until now we've only supported <backingStore> as an output-only
element. The documentation for the element states that hypervisor
drivers may start to obey it in the future.
Update the documentation so that it mentions the recently added
'backingStoreInput' domain capability and explains what happens if it
is supported and <backingStore> is present on input.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Historically we've only supported the <backingStore> as an output-only
element for domain disks. The documentation states that it may become
supported on input. To allow management apps to detect when that
happens, add a domain capability which will be asserted once the
hypervisor driver is able to obey <backingStore> as configured on input.
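For illustration only, a management application could probe for it via
the Python bindings along these lines (a rough sketch; it assumes the
capability shows up as <features><backingStoreInput supported='yes'/>
in the domain capabilities XML):

    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.open("qemu:///system")
    caps = ET.fromstring(conn.getDomainCapabilities())
    feat = caps.find("./features/backingStoreInput")
    if feat is not None and feat.get("supported") == "yes":
        print("driver obeys <backingStore> provided on input")
    conn.close()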
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
The XSL generator loads included HTML files relative to the source dir
but we need to tell it to load them from the build dir instead.
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Some of the web content is only present in the source tree, so when
viewing pages from the build tree that content appears to be missing.
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The API cross reference files are not used since
commit d3043afe5c
Author: Daniel Veillard <veillard@redhat.com>
Date: Mon Jan 21 08:08:33 2008 +0000
Remove docs/API*.html
* docs/API* docs/api.xsl docs/site.xsl docs/Makefile.am: remove the
generation of the API*.html files as it's not really useful here
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Define automake variables for all the data we need built and installed
and let automake generate the install rules normally.
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Use a simple
if "substr" in line
before running a regular expression, which saves time,
especially if the regex has a capture group.
This reduces the runtime of the check by almost 78 % for me.
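A rough sketch of the pattern (the regex itself is made up here; the
real check in the script differs):

    import re

    pattern = re.compile(r'\bVIR_ENUM_IMPL\(\s*(\w+)')  # illustrative only

    def check_line(line):
        if "VIR_ENUM_IMPL" not in line:   # cheap substring test, fast path
            return None
        match = pattern.search(line)      # expensive regex only for candidates
        return match.group(1) if match else None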
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
For checking whether a substring is present in a string,
using the pattern:
"str" in string
is slightly faster than:
string.find("str") != -1
Use it to shave off 4 % of the runtime of this script, which
processes quite a few long source files.
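The difference can be reproduced with a quick timing sketch (the sample
line is made up; absolute numbers will of course vary):

    import timeit

    line = "x" * 200 + "virDomainDefFree" + "y" * 200  # stand-in for a source line

    t_in = timeit.timeit('"virDomainDefFree" in line',
                         globals=globals(), number=1000000)
    t_find = timeit.timeit('line.find("virDomainDefFree") != -1',
                           globals=globals(), number=1000000)
    print(f"'in' operator: {t_in:.3f}s, str.find(): {t_find:.3f}s")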
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Running regular expressions with capture groups is expensive.
Bail out early if the line does not start with a '#'.
This reduces the runtime of the check by two thirds.
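In other words, something along these lines (a sketch only; the actual
script and regex differ):

    import re

    include_re = re.compile(r'#\s*include\s*[<"]([^>"]+)[>"]')  # illustrative

    def extract_include(line):
        stripped = line.lstrip()
        if not stripped.startswith("#"):  # cheap rejection before the regex
            return None
        m = include_re.match(stripped)
        return m.group(1) if m else None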
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Getting the hostname of a guest usually requires an in-guest agent,
and generally it can be determined only on active domains.
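For example, with the Python bindings (the domain name is hypothetical
and the call will typically fail unless the guest agent is running in
an active domain):

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("mydomain")  # hypothetical domain name
    try:
        print(dom.hostname())
    except libvirt.libvirtError as e:
        print(f"hostname not available: {e}")
    conn.close()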
Signed-off-by: Pino Toscano <ptoscano@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
If a user starts a blockcommit or a blockcopy then we modify the
access for qemu on both images and leave it like that until the job
terminates. So far so good. The problem is that the user, instead of
terminating the job (where we would modify the access again so that
the pre-job state is restored), may call destroy on the domain, or
qemu may die whilst executing the block job. In these cases we never
clear the access we granted at the beginning.
To fix this a maybe slightly harsh, but working, approach is used:
after all labels were restored (that is, after
qemuSecurityRestoreAllLabel() was called), we iterate over each
disk in the domain and remove XATTRs from the whole backing chain
and also from any file the disk is being mirrored to.
This would have been done at the time of the pivot, but it wasn't
because the user decided to kill the domain instead. If we don't do
this and leave some XATTRs behind, the domain might be unable to
start again.
Also, the secdriver can't do this itself because it doesn't know
whether any job is running. That's outside of its scope - the
hypervisor driver is responsible for calling the secdriver's APIs.
Moreover, this is safe to call because we don't remember labels
for any member of a backing chain except for the top layer, and
that one was restored in the earlier qemuSecurityRestoreAllLabel()
call. Therefore, not only do we not remember labels (which makes
this basically a no-op for the other images in the backing chain),
it is also safe to call this when no block job was started in the
first place, or when some parts of the backing chain are shared
with other domains - it remains a no-op unless a block job is
active at the time of the domain destroy.
https://bugzilla.redhat.com/show_bug.cgi?id=1741456#c19
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
There are four places where we remove image XATTRs and in all of
them we have the same for() loop with the same body. Move it into
a separate function because I'm about to introduce a fifth place
where the same needs to be done.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Install the converter function which enables the internals that will use
-blockdev to make qemu open the firmware image and stop using -drive.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
The old way to instantiate a pflash device via -drive was a hack since
it's a platform device.
The modern approach calls for configuring it via -machine, which takes
the node name as an argument.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
As a first step we will build the blockdevs which are supposed to
back the pflash drives when moving away from -drive.
This code is similar to the way we build the blockdevs for disks, but
it skips the copy-on-read layer and doesn't implement any legacy approach.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Add a helper which will convert the PFLASH code file and variable file
into the virStorageSource objects stored in private data so that we can
use them with -blockdev while keeping the infrastructure to determine
the path to the loaders intact.
This is a temporary solution until we want to do snapshots of the
pflash, at which point we will be forced to track the full backing
chain in the XML.
In the meantime just convert it partially so that we can stop using
-drive.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
To allow converting the pflash drives to blockdev we will need a
virStorageSource to allow using our helpers. Temporarily, prior to
converting the loader data to a virStorageSource, add private data
which will house it.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Extract the old way of instantiating pflash devices holding the
firmware via -drive into a separate function so that it can later be
conditionally disabled when -blockdev is used.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Commit 5751a0b6b1 added a helper function
called virDomainCapsFeaturesInitUnsupported which initialized all domain
capability features as unsupported.
When adding a new feature this would initialize it as unsupported also
for hypervisor drivers which the original author possibly didn't intend
to modify. To prevent an accidentally wrong value from being reported
in such a case, revert back to initializing individual features in the
hypervisor drivers themselves.
This is not a straight revert as additional patches modified how we
store the capabilities.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
In the pre-GLib era, which used malloc, the size of the client-side
buffers could be declared as 0, because malloc documents that it can
return either NULL or a unique pointer for 0-size allocations.
With GLib this doesn't work anymore, because GLib documents that NULL
is always returned for such allocation requests, which results in an
error from our public API checks on the server side.
This patch complements the fix in the RPC layer by explicitly erroring
out on the following combination of arguments used by our legacy APIs
(their modern equivalents don't suffer from this):

    function(caller-allocated-array, size, ...) {
        if (!caller-allocated-array && size > 0)
            return error;
    }

treating everything else as valid input and potentially letting it
fail on the server side rather than the client side.
https://bugzilla.redhat.com/show_bug.cgi?id=1772842
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
The qemu_domain_monitor_event_msg struct in qemu_protocol.x
defines event as a nonnull_string and qemuMonitorJSONIOProcessEvent
also errors out on a NULL event.
Drop the check to fix the build with static analysis.
This essentially reverts commit d343e8203d
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Add build-time self tests for the basic (deprecated), doorbell and plain modes.
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Shared memory devices need qemu to be able to access certain paths
either for the shared memory directly (mostly ivshmem-plain) or for a
socket (mostly ivshmem-doorbell).
Add logic to virt-aa-helper to render those apparmor rules based
on the domain configuration.
https://bugzilla.redhat.com/show_bug.cgi?id=1761645
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Acked-by: Jamie Strandboge <jamie@canonical.com>
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
There are currently broken use cases, e.g. snapshotting more than one disk at
once like:
$ virsh snapshot-create-as --domain eoan --disk-only --atomic
--diskspec vda,snapshot=no --diskspec vdb,snapshot=no
--diskspec vdc,file=/test/disk1.snapshot1.qcow,snapshot=external
--diskspec vdd,file=/test/disk2.snapshot1.qcow,snapshot=external
The command above will iterate from qemuDomainSnapshotCreateDiskActive
and first add /test/disk1.snapshot1.qcow (it appears in the rules),
then later add /test/disk2.snapshot1.qcow, and in doing so throw away
the former rule, causing the snapshot to fail.
All other calls to (re)load_profile already use append=true when adding
rules; append=false is only used when restoring rules [1].
Fix this by letting AppArmorSetSecurityImageLabel use append=true as well.
Since this removes an (unintentional) trigger to revoke all rules
appended so far, we agreed during review to do some tests, and in the
tests no rules came back on:
- hot-plug
- hot-unplug
- snapshotting
Bugs:
https://bugs.launchpad.net/libvirt/+bug/1845506
https://bugzilla.redhat.com/show_bug.cgi?id=1746684
[1]: https://bugs.launchpad.net/libvirt/+bug/1845506/comments/13
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Acked-by: Jamie Strandboge <jamie@canonical.com>
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
A lot of the code in AppArmorSetSecurityImageLabel duplicates what is
in reload_profile; refactor AppArmorSetSecurityImageLabel to use
reload_profile instead.
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Acked-by: Jamie Strandboge <jamie@canonical.com>
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
reload_profile calls get_profile_name for no particular gain, so let's
remove that call. The string isn't used later in that function and
isn't registered or passed anywhere.
get_profile_name can only fail if it either can't allocate or if the
virDomainDefPtr has no uuid set (which isn't allowed).
Thereby the only "check" it really provides is whether it can allocate
the string only to free it again.
This was initially added in [1] when the code was still in
AppArmorRestoreSecurityImageLabel (later moved) and even back then it
had no further effect than described above.
[1]: https://libvirt.org/git/?p=libvirt.git;a=blob;f=src/security/security_apparmor.c;h=16de0f26f41689e0c50481120d9f8a59ba1f4073;hb=bbaecd6a8f15345bc822ab4b79eb0955986bb2fd#l487
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Acked-by: Jamie Strandboge <jamie@canonical.com>
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
While only used internally by libvirt, the options are still misleading
enough to cause issues every now and then.
Group the modes, the options and the two ways of adding an extra file,
and extend the wording of the latter, which lacked clarity the most.
Both add a file to the end of the rules, but one re-generates the
rules from the XML while the other keeps the existing rules as-is, not
considering the XML content.
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Acked-by: Jamie Strandboge <jamie@canonical.com>
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
When starting a domain without a CPU model specified in the domain XML,
QEMU will choose a default one. This is fine unless the domain gets
migrated to another host because libvirt doesn't perform any CPU ABI
checks and the virtual CPU provided by QEMU on the destination host can
differ from the one on the source host.
With QEMU 4.2.0 we can probe for the default CPU model used by QEMU for
a particular machine type and store it in the domain XML. This way the
chosen CPU model is more visible to users and libvirt will make sure
the guest will see the exact same CPU after migration.
Architecture-specific notes:
- aarch64: We only set the default CPU for TCG domains as KVM requires
explicit "-cpu host" to work.
- ppc64: The default CPU for KVM is "host" thanks to some hacks in QEMU;
we will translate the default model to the model corresponding to the
host CPU ("POWER8" on a Power8 host, "POWER9" on a Power9 host, etc.).
This is not a problem as the corresponding CPU model is in fact an
alias for "host". This is probably not ideal, but it's not wrong and
the default virtual CPU configured by libvirt is the same QEMU would
use. TCG uses various CPU models depending on machine type and its
version.
- s390x: The default CPU for KVM is "host" while TCG defaults to "qemu".
- x86_64: The default CPU model (qemu64) is not runnable on any host
with KVM, but QEMU just disables unavailable features and starts
happily.
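For reference, the probing described above can be reproduced by hand
over a raw QMP socket (a rough sketch; the socket path is an assumption
and libvirt of course goes through its own monitor code instead):

    import json
    import socket

    QMP_SOCKET = "/tmp/qmp.sock"  # assumes -qmp unix:/tmp/qmp.sock,server

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(QMP_SOCKET)
        qmp = sock.makefile("rw")

        def command(name, **args):
            qmp.write(json.dumps({"execute": name, "arguments": args}) + "\n")
            qmp.flush()
            while True:  # skip asynchronous events, wait for the result
                reply = json.loads(qmp.readline())
                if "return" in reply:
                    return reply["return"]

        json.loads(qmp.readline())   # consume the QMP greeting
        command("qmp_capabilities")  # leave capabilities negotiation mode
        for machine in command("query-machines"):
            # 'default-cpu-type' is only reported by QEMU >= 4.2.0
            print(machine["name"], machine.get("default-cpu-type"))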
https://bugzilla.redhat.com/show_bug.cgi?id=1598151
https://bugzilla.redhat.com/show_bug.cgi?id=1598162
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
QEMU 4.2.0 will report default CPU types used by each machine type and
we will want to start using it.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Almost all TCG query-machines replies match KVM. The only exceptions are
4.2.0 replies on s390x which differ in the reported default CPU type.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Some specifics of machine types may depend on the accelerator and thus
the data should be moved to virQEMUCapsAccel. The TCG machine types are
just copied from the ones probed for KVM to simplify the changes to
qemucapabilitiestest data files.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The function copies machine type data from one QEMU caps structure to
another.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>