Now that we have a separate job type which will not trigger the normal
code paths for a terminating job, we can remove the ad-hoc handling.
This possibly fixes the issue of a broken job inheriting the disk and
then finishing, in which case we'd not detach the backing chain.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
To better track jobs we couldn't parse, let's introduce a new job type
which will clarify the semantics internally in a few places.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
We will need to clear per-job-type data when marking a blockjob as
broken in the new way. Extract the code for future reuse.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Both failure to refresh the job and failure to dismiss it are very
unlikely, but if they happen there's not much we can do about the
blockjob.
The concluded-job handlers treat it as if the job failed if we don't
update the state to 'QEMU_BLOCKJOB_STATE_COMPLETED', which is probably
the safest thing to do here.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Otherwise it would get dropped later on as untracked despite us knowing
about it. Additionally, since we cancelled it, we must wait to dismiss
it, which would not be possible if we unregistered it. This also opened
a window for a race condition, since the job state change event of the
just-cancelled job might be delivered before we unregistered the job,
in which case everything would work properly.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Since we don't know what happened to the job we can't do much about it,
but we can at least log that this happened.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
We must exit the monitor prior to refusing other work, otherwise the VM
object will become unusable.
This bug was introduced in commit v5.5.0-244-gc412383796 but thankfully
the code path was not exercised without QEMU_CAPS_BLOCKDEV.
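Roughly, the ordering the fix restores looks like this (illustrative
fragment; the monitor call in the middle is a placeholder, not the
actual function touched by the commit):

    qemuDomainObjEnterMonitor(driver, vm);
    rc = qemuMonitorDoSomething(priv->mon);         /* placeholder monitor call */
    if (qemuDomainObjExitMonitor(driver, vm) < 0)   /* always leave the monitor first */
        return -1;
    if (rc < 0)                                     /* only then refuse further work */
        return -1;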
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Block jobs may be members of async jobs, so it makes more sense to
refresh the block job state after we do the steps for async job recovery.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
qemu returns an error message in the job statistics even if the job was
cancelled, to emphasize that it was not successful. Libvirt didn't
properly transform it into QEMU_BLOCKJOB_STATE_CANCELLED though.
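The intended mapping is roughly the following (illustrative fragment;
the ABORTING state name and the surrounding condition are assumptions,
only COMPLETED and CANCELLED come from the text above):

    /* a job that concluded with an error after we asked for cancellation
     * should be recorded as cancelled, not failed */
    if (jobInfo->error) {
        if (job->state == QEMU_BLOCKJOB_STATE_ABORTING)
            job->newstate = QEMU_BLOCKJOB_STATE_CANCELLED;
        else
            job->newstate = QEMU_BLOCKJOB_STATE_FAILED;
    } else {
        job->newstate = QEMU_BLOCKJOB_STATE_COMPLETED;
    }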
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Commit ed56851f1b didn't wire up fetching of the statistics for the
job, which are reported by 'query-jobs'.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
The magic number is taken from the coreutils stat.c file since
there is no constant for it in normal system headers.
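The number is the kind of filesystem magic constant that statfs()
reports in f_type; a minimal sketch of how such a constant is typically
consulted (the helper name and the constant value are placeholders, not
the ones added by the commit):

    #include <stdbool.h>
    #include <sys/vfs.h>

    static bool
    virFileIsOnExampleFS(const char *path)      /* hypothetical helper */
    {
        struct statfs sb;

        if (statfs(path, &sb) < 0)
            return false;

        /* example value only; the real constant is copied from coreutils stat.c */
        return sb.f_type == 0x01021994;
    }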
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Fabiano Fidêncio <fidencio@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
This reverts commit 421c9550f5.
qemuDomainBlockPullCommon calls virDomainObjEndAPI internally, so the
original commit made us shed two references of @vm instead of one,
getting us into a premature free of @vm.
This is not a straight revert as qemuDomainBlockPull was modified in the
meantime. I've also added a warning comment that @vm is consumed.
https://bugzilla.redhat.com/show_bug.cgi?id=1777230
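A simplified sketch of the calling pattern after the revert (error
handling trimmed; not the exact libvirt code):

    static int
    qemuDomainBlockPull(virDomainPtr dom, const char *path,
                        unsigned long bandwidth, unsigned int flags)
    {
        virDomainObjPtr vm = NULL;

        virCheckFlags(VIR_DOMAIN_BLOCK_PULL_BANDWIDTH_BYTES, -1);

        if (!(vm = qemuDomainObjFromDomain(dom)))
            return -1;

        if (virDomainBlockPullEnsureACL(dom->conn, vm->def) < 0) {
            virDomainObjEndAPI(&vm);
            return -1;
        }

        /* qemuDomainBlockPullCommon consumes the reference to @vm:
         * it calls virDomainObjEndAPI() itself, so the caller must not. */
        return qemuDomainBlockPullCommon(vm, path, NULL, bandwidth, flags);
    }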
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
There are two daemons that wait to acquire their pid files:
virtnetworkd and virtstoraged. This is undesirable as the idea is to
quit early if unable to acquire the pid file.
Fixes: v5.6.0-rc1~207.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Commit d30a1ad044 translated the symbol file checker from Perl to
Python by doing a literal translation in most cases. Unfortunately, one
string formatting operation was not really translated into Python,
leaving users with an unhelpful error:
'Symbol $1 is listed twice'
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
In the past the network driver was (mistakenly) being called for all
interfaces, not just those of type='network', and so it had a chance
to validate all interface configs after the actual type of the
interface was known.
But since the network driver has been more completely/properly
separated from qemu, the network driver isn't called during the
startup of any interfaces except those with type='network', so this
validation no longer takes place for, e.g. <interface type='bridge'>
(or direct, etc). This in turn meant that a config could erroneously
specify a vlan tag, or bandwidth settings, for a type of interface
that didn't support it, and the domain would start without complaint,
just silently ignoring those settings.
This patch moves those validation checks out of the network driver,
and into virDomainActualNetDefValidate() so they will be done for all
interfaces, not just type='network'.
https://bugzilla.redhat.com/1741121
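An illustrative sketch of the kind of check that now runs for every
actual interface type (the per-type helper is hypothetical; the getters
are the usual libvirt accessors):

    int
    virDomainActualNetDefValidate(const virDomainNetDef *net)
    {
        virDomainNetType actualType = virDomainNetGetActualType(net);
        const virNetDevVlan *vlan = virDomainNetGetActualVlan(net);

        /* hypothetical helper standing in for the real per-type checks */
        if (vlan && !actualTypeSupportsVlan(actualType)) {
            virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
                           _("interface %s: vlan tag not supported for this connection type"),
                           net->ifname ? net->ifname : "(unnamed)");
            return -1;
        }

        return 0;
    }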
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
<interface> devices (virDomainNetDef) are a bit different from other
types of devices in that their actual type may come from a network (in
the form of a port connection), and that doesn't happen until the
domain is started. This means that any validation of an <interface> at
parse time needs to be a bit liberal in what it accepts - when
type='network', you could think that something is/isn't allowed, but
once the domain is started and a port is created by the configured
network, the opposite might be true.
To solve this problem hypervisor drivers need to do an extra
validation step when the domain is being started. I recently (commit
3cff23f7, libvirt 5.7.0) added a function to perform such validation
for all interfaces to the QEMU driver -
qemuDomainValidateActualNetDef() - but while that function is a good
single point to call for the multiple places that need to "start" an
interface (domain startup, device hotplug, device update), it can't be
called by the other hypervisor drivers, since 1) it's in the QEMU
driver, and 2) it contains some checks specific to QEMU. For
validation that applies to network devices on *all* hypervisors, we
need yet another interface validation function that can be called by
any hypervisor driver (not just QEMU) right after its network port has
been created during domain startup or hotplug. This patch adds that
function - virDomainActualNetDefValidate(), in the conf directory,
and calls it in appropriate places in the QEMU, lxc, and libxl
drivers.
This new function is the place to put all network device validation
that 1) is hypervisor agnostic, and 2) can't be done until we know the
"actual type" of an interface.
There is no framework for validation at domain startup as there is for
post-parse validation, but I don't want to create a whole elaborate
system that will only be used by one type of device. For that reason,
I just made a single function that should be called directly from the
hypervisors, when they are initializing interfaces to start a domain,
right after conditionally allocating the network port (and regardless
of whether or not that was actually needed). In the case of the QEMU
driver, qemuDomainValidateActualNetDef() is already called in all the
appropriate places, so we can just call the new function from
there. In the case of the other hypervisors, we search for
virDomainNetAllocateActualDevice() (which is the hypervisor-agnostic
function that calls virNetworkPortCreateXML()), and add the call to our
new function right after that.
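Roughly, the call site in those drivers ends up looking like this
(simplified fragment; variable names are assumptions):

    /* allocate the network port if the interface is type='network' ... */
    if (virDomainNetAllocateActualDevice(conn, vm->def, net) < 0)
        goto cleanup;

    /* ... then validate the now-known actual type, whatever it is */
    if (virDomainActualNetDefValidate(net) < 0)
        goto cleanup;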
The new function itself could be plunked down into many places in the
code, but we already have 3 validation functions for network devices
in 2 different places (not counting any basic validation done in
virDomainNetDefParseXML() itself):
1) post-parse hypervisor-agnostic
(virDomainNetDefValidate() - domain_conf.c:6145)
2) post-parse hypervisor-specific
(qemuDomainDeviceDefValidateNetwork() - qemu_domain.c:5498)
3) domain-start hypervisor-specific
(qemuDomainValidateActualNetDef() - qemu_domain.c:5390)
I placed (3) right next to (2) when I added it, specifically to avoid
spreading validation all over the code. For the same reason, I decided
to put this new function right next to (1) - this way if someone needs
to add validation specific to qemu, they go to one location, and if
they need to add validation applying to everyone, they go to the
other. It looks a bit strange to have a public function in between a
bunch of statics, but I think it's better than the alternative of
further fragmentation. (I'm open to other ideas though, of course.)
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
These all just return a scalar value, so there's no daisy-chained
fallout from changing them, and they can easily be combined in a
single patch.
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
This also isn't required (due to the vportprofile being stored in the
NetDef as a pointer rather than being directly contained), but it
seemed dishonest to not mark it as const (and thus permit users to
modify its contents).
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
In this case, the virNetDevBandwidthPtr that is returned does not point
to a region within the virDomainNetDef arg, but points elsewhere (the
NetDef has the pointer, not the entire object), so technically it's
not necessary to make the return value a const, but it's a bit
disingenuous to *not* do it.
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
This is needed if we want to call the function when the
virDomainNetDef* we have is a const.
Since virDomainNetGetActualVlan returns a pointer to memory that is
within the virDomainNetDefPtr arg, the returned pointer must also be
made const. This leads to a cascade of other virNetDevVlanPtr's that
must be changed to "const virNetDevVlan *".
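The resulting prototypes look roughly like this (reconstructed from the
description above, not copied from the headers):

    /* before: both the argument and the returned vlan were mutable */
    virNetDevVlanPtr virDomainNetGetActualVlan(virDomainNetDefPtr iface);

    /* after: a const argument forces a const return, which then cascades
     * into the consumers of the returned pointer */
    const virNetDevVlan *virDomainNetGetActualVlan(const virDomainNetDef *iface);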
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
This makes it easier to understand which interface's config caused the
error.
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Cole Robinson <crobinso@redhat.com>
The cpuModels member of the _virQEMUCapsAccel struct is not a
virObject but a regular struct with a free function defined:
qemuMonitorCPUDefsFree(). Use that when clearing the parent structure
instead of virObjectUnref() to avoid a memleak:
==212322== 57,275 (48 direct, 57,227 indirect) bytes in 3 blocks are definitely lost in loss record 623 of 627
==212322== at 0x4838B86: calloc (vg_replace_malloc.c:762)
==212322== by 0x554A158: g_malloc0 (in /usr/lib64/libglib-2.0.so.0.6000.6)
==212322== by 0x17B14BF5: qemuMonitorCPUDefsNew (qemu_monitor.c:3587)
==212322== by 0x17B27BA7: qemuMonitorJSONGetCPUDefinitions (qemu_monitor_json.c:5616)
==212322== by 0x17B14B0B: qemuMonitorGetCPUDefinitions (qemu_monitor.c:3559)
==212322== by 0x17A6AFBB: virQEMUCapsFetchCPUDefinitions (qemu_capabilities.c:2571)
==212322== by 0x17A6B2CC: virQEMUCapsProbeQMPCPUDefinitions (qemu_capabilities.c:2629)
==212322== by 0x17A70C00: virQEMUCapsInitQMPMonitorTCG (qemu_capabilities.c:4769)
==212322== by 0x17A70DDF: virQEMUCapsInitQMPSingle (qemu_capabilities.c:4820)
==212322== by 0x17A70E99: virQEMUCapsInitQMP (qemu_capabilities.c:4848)
==212322== by 0x17A71044: virQEMUCapsNewForBinaryInternal (qemu_capabilities.c:4891)
==212322== by 0x17A7119C: virQEMUCapsNewData (qemu_capabilities.c:4923)
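A minimal sketch of the fix (the surrounding clear function is
illustrative; the point is only the switch from virObjectUnref() to
qemuMonitorCPUDefsFree()):

    static void
    virQEMUCapsAccelClear(virQEMUCapsAccelPtr accel)
    {
        /* cpuModels is a plain struct with its own free function,
         * so virObjectUnref() would silently leak it */
        qemuMonitorCPUDefsFree(accel->cpuModels);
        accel->cpuModels = NULL;
    }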
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
On s390 machines host-passthrough and host-model CPUs result in the same
guest ABI (with QEMU new enough to be able to tell us what "host" CPU is
expanded to, which was implemented around 2.9.0). So instead of using
host-passthrough CPU when there's no CPU specified in a domain XML we
can safely use host-model and benefit from CPU compatibility checks
during migration, snapshot restore and similar operations.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The match attribute is only relevant for custom mode CPUs. Reporting
failure when match == 'minimum' regardless of the CPU mode can cause
unexpected failures. We should only report the error for custom CPUs. In
fact, calling virCPUs390Update on a custom mode CPU should always report
an error as optional features are not supported on s390 either.
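An illustrative sketch of the adjusted logic (the exact function body
and error message differ; only the mode check is the point):

    static int
    virCPUs390Update(virCPUDefPtr guest,
                     const virCPUDef *host)
    {
        (void) host;   /* unused in this sketch */

        /* only custom mode CPUs are rejected; host-model and
         * host-passthrough pass through without an error */
        if (guest->mode == VIR_CPU_MODE_CUSTOM) {
            virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
                           _("optional CPU features are not supported on s390"));
            return -1;
        }

        return 0;
    }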
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Most likely for historical reasons, our CPU def formatting code happily
adds a useless <model fallback='allow'/> for host-model CPUs. We can
just drop it.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Commit v0.8.4-66-g95ff6b18ec (9 years ago) changed the default value for
the cpu/@match attribute to 'exact' in a rather complicated way. It did
so only if a <model> subelement was present and set -1 otherwise (which
is not expected to ever happen). Thus the following two equivalent XML
elements:
<cpu mode='host-model'/>
and
<cpu mode='host-model'>
<model/>
</cpu>
would be parsed differently. The former would end up with match == -1
while the latter would have match == 1 ('exact'). This is not a big deal
since the match attribute is ignored for host-model CPUs, but we can
simplify the code and make it a little bit saner anyway.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The test case for x86_64 with neither the cirrus nor the vga capability
is of the xml2argv type because it actually fails to parse the XML at
all [*], which is something that xml2xml tests don't seem to handle. The
xml2argv test fails to produce a qemu argv for this case, which xml2argv
tests can handle.
[*] This is a consequence of the decision not to have a fallback if the
obvious choices (cirrus and vga) aren't viable due to missing QEMU caps.
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Signed-off-by: Pavel Mores <pmores@redhat.com>
If a graphics device was added to XML that had no video device, libvirt
automatically added a video device which was always of type 'cirrus' on
x86_64, even if the underlying qemu didn't support cirrus.
This patch refines the decision about the type of the video device a bit.
Based on QEMU capabilities, cirrus is still preferred but only added if
QEMU supports it, otherwise VGA is used if supported by QEMU. There is now
no fallback as libvirt only aspires to generate a basic working config and
leaves anything more specific up to higher-level management tools.
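A sketch of the selection logic being added (the capability and enum
names follow libvirt conventions, but treat the exact set as an
assumption):

    static int
    qemuDomainDefaultVideoDevice(const virDomainDef *def,
                                 virQEMUCapsPtr qemuCaps)
    {
        /* returns a VIR_DOMAIN_VIDEO_TYPE_* value */
        if (ARCH_IS_X86(def->os.arch)) {
            if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_DEVICE_CIRRUS_VGA))
                return VIR_DOMAIN_VIDEO_TYPE_CIRRUS;
            if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_DEVICE_VGA))
                return VIR_DOMAIN_VIDEO_TYPE_VGA;
        }

        /* no further fallback: leave anything fancier to management apps */
        return VIR_DOMAIN_VIDEO_TYPE_DEFAULT;
    }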
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Signed-off-by: Pavel Mores <pmores@redhat.com>
The test relied implicitly on the default video device being cirrus. As
we're about to change that, the test would start failing. To avoid this,
just make the test's requirement explicit.
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Signed-off-by: Pavel Mores <pmores@redhat.com>
The default video device type selection algorithm we're about to deploy
will increase the amount of code dedicated to the task enough to warrant
factoring the whole thing into its own function, so as not to pollute
the caller qemuDomainDeviceVideoDefPostParse(). Do it now so that the
actual algorithm change later on is in a clean commit by itself and easy
to review.
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Signed-off-by: Pavel Mores <pmores@redhat.com>
The CentOS7 distro is quite old and the Ubuntu 18.04 distro
is already a year and a half old. Adding a Fedora 31 image gives
us coverage of the newest stable distro release, and
fedora-rawhide gives us the cutting edge.
Reviewed-by: Fabiano Fidêncio <fidencio@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
This reverts commit f4db846c32.
The reverted patch results in the following error when trying to start
essentially any VM with the default network:
unsupported configuration: QOS must be defined for network 'default'
Coverity didn't see that the bandwidth == NULL case it complained about
in virNetDevBandwidthPlug was already checked properly in
networkCheckBandwidth, thus causing networkPlugBandwidth to return 0
and finish before a call to virNetDevBandwidthPlug would even have been
made.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
A previous commit introduced a simpler free callback for
hash data with only one arg, the value to free:
commit 49288fac96
Author: Peter Krempa <pkrempa@redhat.com>
Date: Wed Oct 9 15:26:37 2019 +0200
util: hash: Add possibility to use simpler data free function in virHash
It missed two functions in the hash table code which need
to call the alternate data free function: virHashRemoveEntry
and virHashRemoveSet.
After the previous patch though, there is no code that
makes functional use of the 2nd key arg in the data
free function. There is merely one log message that can
be dropped.
We can thus purge the current virHashDataFree callback
entirely, and rename virHashDataFreeSimple to replace
it.
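The signature change amounts to roughly this (the old form is
reconstructed from the description, not copied from virhash.h, and is
renamed here only to keep the snippet valid):

    /* old two-argument callback: the key was passed along with the payload */
    typedef void (*virHashDataFreeOld)(void *payload, const void *name);

    /* new single-argument form (previously virHashDataFreeSimple),
     * which now replaces the two-argument callback entirely */
    typedef void (*virHashDataFree)(void *payload);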
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The virChrdevHashEntryFree method uses the hash 'key'
as the name of the logfile it has to remove. By storing
a struct as the value which contains the stream and
the dev path, we can avoid relying on the hash key
when free'ing entries.
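A sketch of the per-entry value described above (struct and field names
are assumptions):

    typedef struct _virChrdevHashEntry virChrdevHashEntry;
    struct _virChrdevHashEntry {
        virStreamPtr st;   /* the open console stream */
        char *dev;         /* device path, so the hash key is no longer needed */
    };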
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Now that all the pieces are in place (hopefully), let's enable -blockdev.
We base the capability on the presence of the fix for 'auto-read-only'
on files so that blockdev works properly, mandate that qemu supports
explicit SCSI id strings to avoid an ABI regression, and require that
the fix for 'savevm' is present so that internal snapshots work.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
The 'savevm' HMP command didn't work properly with blockdev as it tried
to snapshot everything, including the protocol nodes accessing files,
which are not snapshottable. Qemu fixed this bug, so now we need to
detect the fix to allow enabling blockdev.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Top-level commands can now have 'feature' flags for fixes, so add
support for querying those as well.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
The initial implementation of 'auto-read-only' didn't reopen the backing
files when needed. For '-blockdev' to work we need to be able to tell
qemu to open a file read-only and change that during blockjobs, as we
label backing chains with an sVirt label which does not allow writing.
The dynamic auto-read-only supports this as it reopens files when
writing is demanded.
Add a capability to detect that the POSIX file-based backends support
the dynamic part.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
The data is captured from qemu v4.2.0-rc2-19-g2061735ff0.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
The qemu driver will obey <backingStore> when we support blockdev.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Until now we've only supported <backingStore> as an output-only element.
The documentation for the element states that hypervisor drivers may
start to obey it in the future.
Update the documentation so that it mentions the recently added
'backingStoreInput' domain capability and explains what happens if it is
supported and <backingStore> is present on input.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Historically we've only supported <backingStore> as an output-only
element for domain disks. The documentation states that it may become
supported on input. To allow management apps to detect once that
happens, add a domain capability which will be asserted if the
hypervisor driver is able to obey <backingStore> as configured on input.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
The XSL generator loads included HTML files relative to the source dir
but we need to tell it to load them from the build dir instead.
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Some of the web content is only present in the source tree, thus when
viewing pages from the build tree they appear to be missing.
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The API cross reference files are not used since
commit d3043afe5c
Author: Daniel Veillard <veillard@redhat.com>
Date: Mon Jan 21 08:08:33 2008 +0000
Remove docs/API*.html
* docs/API* docs/api.xsl docs/site.xsl docs/Makefile.am: remove the
generation of the API*.html files as it's not really useful here
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Define automake variables for all the data we need built and installed
and let automake generate the install rules normally.
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>