Commit fb64e176f4 forgot to delete the check that short-circuits the
disk alias creation if the alias is already present. The side effect
of this is that the creation of the qomName, which is necessary to
refer to disk frontends when -blockdev is used, was skipped when
user-provided aliases are used.
Fix it by deleting the check. Also prevent any potential memory leaks
from calling this function repeatedly by creating the qomName only when
it's not present.
https://bugzilla.redhat.com/show_bug.cgi?id=1741838
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
After previous commits, the function is not used anymore.
Remove it.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Tested-by: Daniel Henrique Barboza <danielhb413@gmail.com>
KVM-style PCI device assignment was dropped from the kernel in
favor of VFIO PCI (see kernel commit v4.12-rc1~68^2~65). Since
VFIO has been around for quite some time now and is far superior,
discourage people from using the KVM style.
Ideally, I'd make QEMU_CAPS_VFIO_PCI implicitly assumed, but it turns
out qemu-3.0.0 doesn't support the vfio-pci device for RISC-V.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Tested-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Simplify the command line formatter by complicating the validator.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Store the namespace URI as const char*, instead of in a function.
Suggested-by: Jiri Denemark <jdenemar@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Now that virDomainXMLNamespace matches virXMLNamespace,
we no longer need to keep both around.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Neither the xmlDocPtr nor the root xmlNode (also passed
in the XPathContext) are interesting to the callees.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
QEMU-4.1 supports 'Direct Mode' for Hyper-V synthetic timers
(hv-stimer-direct CPU flag): Windows guests can request that timer
expiration notifications are delivered as normal interrupts (and not
VMBus messages). This is used by Hyper-V on KVM.
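For reference, a minimal sketch of the domain XML that is expected to
enable this (assuming the <direct> subelement of <stimer> introduced
for this feature):
  <features>
    <hyperv>
      <stimer state='on'>
        <direct state='on'/>
      </stimer>
    </hyperv>
  </features>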
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Now that we support blockdev for qemuDomainBlockCopy we can allow
copying to remote destinations as well.
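As a rough illustration, the destination XML passed to the block copy
API could then describe a network destination; a minimal sketch
(protocol, host, port and export name are placeholders):
  <disk type='network'>
    <driver name='qemu' type='raw'/>
    <source protocol='nbd' name='export'>
      <host name='example.org' port='10809'/>
    </source>
  </disk>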
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Implement job handling for the block copy job (drive/blockdev-mirror)
when using -blockdev. In contrast to the previously implemented
blockjobs, the block copy job introduces new images to the running qemu
instance and thus requires a bit more handling.
When copying to new images the code now makes use of blockdev-create to
format the images explicitly rather than depending on automagic qemu
behaviour.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
QEMU finally exposes an interface which allows us to instruct it to
format or create arbitrary images. This is required for blockdev
integration of block copy and snapshots as we need to pre-format images
prior to use with blockdev-add.
This patch introduces job handling and also helpers for formatting and
attaching a whole image described by a virStorageSource.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Rather than copying just the top level image, let's copy the full user
provided backing chain.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
In commit 3f93884a4d where the job handling of commit jobs with
blockdev was added I forgot to add a 'break' in the switch formatting
the status XML. Thankfully this would not be a problem as the cases
where this fell through didn't have any code.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The utility of the function is extremely limited as for block copy
we need to register the mirror chain earlier than when it's set with the
disk. This means that it would be open-coded in that case.
Avoid any weird usage and just open-code the only current usage, remove
the function, and reword the docs.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Commit 16ca234b56 refactored how the 'shallow' and 'reuse' flags
are accessed but neglected to fix the clearing of 'shallow' in the case
when the disk has no backing chain. This means that we'd request a
shallow copy even without a backing chain and a few checks would behave
incorrectly.
Fix it by using the extracted variable everywhere.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Allow reusing the original backing chain when doing a shallow copy
without reuse of an external image. The existing logic didn't allow it,
but it will become possible. Also add a note explaining that logic.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Rename qemuDomainObjPrivateXMLFormatBlockjobFormatChain to
qemuDomainObjPrivateXMLFormatBlockjobFormatSource and add a 'chain'
parameter which allows controlling whether the backing chain is
formatted.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The function ignores all errors from qemuStorageLimitsRefresh by calling
virResetLastError. This still logs them. Since qemuStorageLimitsRefresh
allows suppressing some, do so.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
qemuStorageLimitsRefresh uses qemuDomainStorageOpenStat internally and
there are callers which don't care about the error. Propagate the
skipInaccessible flag so that we can log fewer errors.
Callers currently don't care about the return value change.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
None of the callers of qemuDomainStorageUpdatePhysical care about
errors.
Use the new flag for qemuDomainStorageOpenStat which suppresses some
errors and move the reset of the rest of the uncommon errors into this
function. Document what is happening in a comment for the function.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Some callers of this function actually don't care about errors and
reset them. The message is still logged, which might irritate users in
this case.
Add a boolean flag which will do a few checks on whether it actually
makes sense to even try opening the storage file. For local files we
check whether the file exists and for remote files we first check
whether we even have a storage driver backend for it before trying to
open it.
Other problems will still report errors but these are the most common
scenarios which can happen here.
This patch changes the return value of the function so that the caller
is able to differentiate the possibilities.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
When QEMU supports flushing caches at the end of migration, we can
safely allow migration even if disk/driver/@cache is neither none nor
directsync.
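For reference, the cache mode in question is the disk driver's cache
attribute; a minimal sketch (paths and device names are placeholders):
  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2' cache='writeback'/>
    <source file='/var/lib/libvirt/images/guest.qcow2'/>
    <target dev='vda' bus='virtio'/>
  </disk>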
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Acked-By: Peter Krempa <pkrempa@redhat.com>
QEMU 4.0.0 and newer automatically drops caches at the end of migration.
Let's check for this capability so that we can allow migration when disk
cache is turned on.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Acked-By: Peter Krempa <pkrempa@redhat.com>
The original message was logically incorrect: cache != none or cache !=
directsync is always true. But even replacing "or" with "and" doesn't
make it more readable for humans.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Acked-By: Peter Krempa <pkrempa@redhat.com>
In the first stage of incoming migration (qemuMigrationDstPrepareAny) we
call qemuMigrationEatCookie when there's no vm object created yet and
thus we don't have any private data to pass.
Broken by me in commit v5.6.0-109-gbf15b145ec.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Back in July 2010, commit 6ea90b84 (meant to resolve
https://bugzilla.redhat.com/571991 ) added code to set the MAC address
of any tap device to the associated guest interface's MAC, but with
the first byte replaced with 0xFE. This was done in order to assure
that
1) the tap MAC and guest interface MAC were different (otherwise L2
forwarding through the tap would not work, and the kernel would
repeatedly issue a warning stating as much).
2) any bridge device that had one of these taps attached would *not*
take on the MAC of the tap (leading to network instability as
guests started and stopped)
A couple years later, https://bugzilla.redhat.com/798467 was filed,
complaining that a user could configure a tap-based guest interface to
have a MAC address that itself had a first byte of 0xFE, silently
(other than the kernel warning messages) resulting in a non-working
configuration. This was fixed by commit 5d571045, which logged an
error and failed the guest start / interface attach if the MAC's first
byte was 0xFE.
Although this restriction only reduces the potential pool of MAC
addresses from 2^46 (the lowest two bits of byte 1 must be set to 10)
by 2^40 (the remainder is still 4 orders of magnitude larger than the
entire IPv4 address space), it also means that management software that
autogenerates MAC addresses must have special code to avoid an 0xFE
prefix. Now, after 7 years, someone has noticed this restriction and
requested that we remove it.
So instead of failing when 0xFE is found as the first byte, this patch
removes the restriction by just replacing the first byte in the tap
device MAC with 0xFA if the first byte in the guest interface is
0xFE. 0xFA is the next-highest value that still has 10 as the lowest
two bits, and it still
1) meets the requirement of "tap MAC must be different from guest
interface MAC", and
2) is high enough that there should never be an issue of the attached
bridge device taking on the MAC of the tap.
The result is that *any* MAC can be chosen by management software
(although it would still not work correctly if a multicast MAC (lowest
bit of the first byte set to 1) was chosen, but that's a different
issue).
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
QEMU version 2.12.1 introduced a performance feature under commit
be7773268d98 ("target-i386: add KVM_HINTS_DEDICATED performance hint").
This patch adds a new KVM feature 'hint-dedicated' to set this performance
hint for KVM guests. The feature is off by default.
To enable this hint and have libvirt add "-cpu host,kvm-hint-dedicated=on"
to the QEMU command line, the following XML code needs to be added to the
guest's domain description in conjunction with CPU mode='host-passthrough'.
  <features>
    <kvm>
      <hint-dedicated state='on'/>
    </kvm>
  </features>
  ...
  <cpu mode='host-passthrough' ... />
Signed-off-by: Wim ten Have <wim.ten.have@oracle.com>
Signed-off-by: Menno Lageman <menno.lageman@oracle.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
QEMU shows a warning message if a partial NUMA mapping is set. This patch
adds a warning message in libvirt when editing the XML. It should become
an error in the future, once QEMU removes this ability.
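For illustration, a partial mapping is a <numatune> that pins only some
of the guest NUMA cells; a minimal sketch (cell ids and nodesets are
placeholders):
  <numatune>
    <memnode cellid='0' mode='strict' nodeset='0'/>
    <!-- guest cell 1 intentionally has no memnode element -->
  </numatune>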
Signed-off-by: Maxiwell S. Garcia <maxiwell@linux.ibm.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The virtqemud daemon will be responsible for providing the qemu API
driver functionality. The qemu driver is still loaded by the main
libvirtd daemon at this stage, so virtqemud must not be running at
the same time.
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
When running in libvirtd, we are happy for any of the drivers to simply
skip their initialization in virStateInitialize, as other drivers are
still potentially useful.
When running in per-driver daemons though, we want the daemon to abort
startup if the driver cannot initialize itself, as the daemon will be
useless without it.
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Using @VARNAME@ is a normal style of automake, so let's match that.
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Instead of each subdir containing its own custom rule for checking the
augeas tests, use a common rule for all.
The new rule searches both src + build dirs for include files, since
some augeas files will be auto-generated very shortly.
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The current make rules are inconsistent about which directory the
augeas test files are created in. Put them all in the same dir as
their source.
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
We already have a variable that lists all augeas test files, so we can
add everything to CLEANFILES at once.
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The augeas-gentest.pl program merges a config file into an augeas
file, saving the output to a new file. It is going to be useful
to further process the output file, and it would be easier if this can
be done with a pipeline, so change augeas-gentest.pl to write to stdout
instead of a file.
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Redefining a domain via virDomainDefineXML should not give different results
based on an already existing definition.
Also, there's a crasher somewhere in the code:
https://bugzilla.redhat.com/show_bug.cgi?id=1739338
This reverts commit 94b3aa55f8
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Since the qemuDomainDeviceDefPostParse callback requires qemuCaps, we
need to make sure it gets the capabilities stored in the domain's
private data if the domain is running. Passing NULL may cause QEMU
capabilities probing to be triggered in case the QEMU binary changed in
the meantime. When this happens while a running domain object is locked,
a QMP event delivered to the domain before QEMU capabilities probing
finishes will deadlock the event loop.
QEMU capabilities lookup (via domainPostParseDataAlloc callback) is
hidden inside virDomainDeviceDefPostParseOne with no way to pass
qemuCaps to virDomainDeviceDef* functions. This patch fixes all
remaining paths leading to virDomainDeviceDefPostParse.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Since the qemuDomainDefPostParse callback requires qemuCaps, we need to
make sure it gets the capabilities stored in the domain's private data
if the domain is running. Passing NULL may cause QEMU capabilities
probing to be triggered in case the QEMU binary changed in the meantime.
When this happens while a running domain object is locked, a QMP event
delivered to the domain before QEMU capabilities probing finishes will
deadlock the event loop.
Several general snapshot and checkpoint APIs were lazily passing NULL as
the parseOpaque pointer instead of letting their callers pass the right
data. This patch fixes all paths leading to virDomainDefParseNode.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Since the qemuDomainDefPostParse callback requires qemuCaps, we need to
make sure it gets the capabilities stored in the domain's private data
if the domain is running. Passing NULL may cause QEMU capabilities
probing to be triggered in case the QEMU binary changed in the meantime.
When this happens while a running domain object is locked, a QMP event
delivered to the domain before QEMU capabilities probing finishes will
deadlock the event loop.
This patch fixes all paths leading to virDomainDefPostParse.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Since the qemuDomainDefPostParse callback requires qemuCaps, we need to
make sure it gets the capabilities stored in the domain's private data
if the domain is running. Passing NULL may cause QEMU capabilities
probing to be triggered in case the QEMU binary changed in the meantime.
When this happens while a running domain object is locked, a QMP event
delivered to the domain before QEMU capabilities probing finishes will
deadlock the event loop.
Several general functions from domain_conf.c were lazily passing NULL as
the parseOpaque pointer instead of letting their callers pass the right
data. This patch fixes all paths leading to virDomainDefCopy to do the
right thing.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Since the qemuDomainDefPostParse callback requires qemuCaps, we need to
make sure it gets the capabilities stored in the domain's private data
if the domain is running. Passing NULL may cause QEMU capabilities
probing to be triggered in case the QEMU binary changed in the meantime.
When this happens while a running domain object is locked, a QMP event
delivered to the domain before QEMU capabilities probing finishes will
deadlock the event loop.
This patch fixes all paths leading to qemuMigrationCookieXMLParse.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Since the qemuDomainDefPostParse callback requires qemuCaps, we need to
make sure it gets the capabilities stored in the domain's private data
if the domain is running. Passing NULL may cause QEMU capabilities
probing to be triggered in case the QEMU binary changed in the meantime.
When this happens while a running domain object is locked, a QMP event
delivered to the domain before QEMU capabilities probing finishes will
deadlock the event loop.
This patch fixes all paths leading to virDomainDefParseString.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Since the qemuDomainDefPostParse callback requires qemuCaps, we need to
make sure it gets the capabilities stored in the domain's private data
if the domain is running. Passing NULL may cause QEMU capabilities
probing to be triggered in case the QEMU binary changed in the meantime.
When this happens while a running domain object is locked, a QMP event
delivered to the domain before QEMU capabilities probing finishes will
deadlock the event loop.
This patch fixes all paths leading to qemuMigrationAnyPrepareDef.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Since the qemuDomainDefPostParse callback requires qemuCaps, we need to
make sure it gets the capabilities stored in the domain's private data
if the domain is running. Passing NULL may cause QEMU capabilities
probing to be triggered in case the QEMU binary changed in the meantime.
When this happens while a running domain object is locked, a QMP event
delivered to the domain before QEMU capabilities probing finishes will
deadlock the event loop.
This patch fixes all paths leading to qemuDomainSaveImageOpen.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Since the qemuDomainDefPostParse callback requires qemuCaps, we need to
make sure it gets the capabilities stored in the domain's private data
if the domain is running. Passing NULL may cause QEMU capabilities
probing to be triggered in case the QEMU binary changed in the meantime.
When this happens while a running domain object is locked, a QMP event
delivered to the domain before QEMU capabilities probing finishes will
deadlock the event loop.
This patch fixes all paths leading to qemuDomainDefFormatBufInternal.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Since the qemuDomainDefPostParse callback requires qemuCaps, we need to
make sure it gets the capabilities stored in the domain's private data
if the domain is running. Passing NULL may cause QEMU capabilities
probing to be triggered in case the QEMU binary changed in the meantime.
When this happens while a running domain object is locked, a QMP event
delivered to the domain before QEMU capabilities probing finishes will
deadlock the event loop.
This patch fixes all paths leading to qemuDomainDefCopy.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Now that 100% of libvirt code is forbidden in a SUID environment,
we no longer need to worry about whether env variables are
trustworthy or not. The virt-login-shell setuid program, which
does not link to any libvirt code, will purge all environment
variables, except $TERM, before invoking the virt-login-shell-helper
program which uses libvirt.
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Now that 100% of libvirt code is forbidden in a SUID environment,
we no longer need to worry about whether env variables are
trustworthy or not. The virt-login-shell setuid program, which
does not link to any libvirt code, will purge all environment
variables, except $TERM, before invoking the virt-login-shell-helper
program which uses libvirt.
Thus we only need one API for env passthrough in virCommand.
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
If virQEMUDriverGetCapabilities returns NULL, then a subsequent
deref of @caps would cause an error, so we just return failure.
Found by Coverity
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The qemu driver already does some <rng> model validation, based on
qemuCaps. However, the logic for exposing <rng> model values in domcaps
is basically identical. This drops the qemuCaps checking and compares
against the domCaps data directly.
This approach makes it basically impossible to add a new <rng> model to
the qemu driver without extending domcaps. The validation can also
be shared with other drivers eventually.
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
Fill in virDomainCaps at Validate time and use it to call
virDomainCapsDeviceDefValidate
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
qemuCaps is tied to a binary on disk. domCaps is tied to a combo
of binary+machine+arch+virttype values. For the qemu driver this almost
entirely translates to a permutation of qemuCaps, though.
Upcoming patches want to use the domCaps data store at XML validate
time, but we need to cache the data so we aren't repeatedly
regenerating it.
Add a domCapsCache hash table to qemuCaps. This ensures that the domCaps
cache is blown away whenever qemuCaps needs to be regenerated. Similarly
when qemuCaps is invalidated, the next call to virQEMUCapsCacheLookup
will unref qemuCaps and free our cache as well.
Adjust virQEMUDriverGetDomainCapabilities to search the cache and add
to it if we don't find a hit.
Signed-off-by: Cole Robinson <crobinso@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
For now it's just a helper for building a qemu virDomainCapsPtr,
used in qemuConnectGetDomainCapabilities
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
The model logic is taken from qemuDomainRNGDefValidate
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
This way it is obvious when adding a new resource control type
that the stats helper function needs to be updated too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
If the host doesn't have resctrl then the monitor is going to be
NULL and we must avoid dereferencing it.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The capabilities object must be unrefed when no longer needed.
Use VIR_AUTOUNREF() for that.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
I find this function more readable if checks for the passed storage
source are done first and the backing chain check is done last. Mixing
them together does not hurt, but is less readable.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
ACKed-by: Peter Krempa <pkrempa@redhat.com>
The format string for a PCI address is copied over and over
again, often with slight adjustments. Introduce a global
VIR_PCI_DEVICE_ADDRESS_FMT macro that holds the formatting string
and use it wherever possible.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
While it's true that older QEMUs were not able to deal with PCI
domains, we don't support those versions anymore (see
4a42ece13a). Therefore it is safe to always format a fully
expanded PCI address. Always format the PCI domain, as it will
simplify the next commits.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Export virResctrlMonitorGetStats and make
virResctrlMonitorGetCacheOccupancy obsolete.
Signed-off-by: Wang Huaqiang <huaqiang.wang@intel.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Refactor 'virResctrlMonitorStats' to track multiple statistical
records.
Signed-off-by: Wang Huaqiang <huaqiang.wang@intel.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Refactor and rename 'virResctrlMonitorFreeStats' to
'virResctrlMonitorStatsFree' so that it frees a single
'virResctrlMonitorStatsPtr' object.
Signed-off-by: Wang Huaqiang <huaqiang.wang@intel.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The 'default monitor of an allocation' is defined as the resctrl
monitor group that is created along with a resctrl allocation by the
resctrl file system. If the monitor group specified in the domain
configuration file happens to be a default monitor group of an
allocation, then it is not necessary to create the monitor group since
it already exists. But if a monitor group is not an allocation's
default group, you should create the group under the
'/sys/fs/resctrl/mon_groups' directory and write the vCPU PIDs into its
'tasks' file.
Signed-off-by: Wang Huaqiang <huaqiang.wang@intel.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
hv-spinlocks is not a CPUID feature and should not be checked as such.
When starting a domain with hv-spinlocks enabled, we would report a
warning about an unsupported hyperv spinlocks feature even though it was
set properly.
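For context, hv-spinlocks is configured in the domain XML with a retry
count rather than as a plain on/off flag; a minimal sketch (the retries
value is just an example):
  <features>
    <hyperv>
      <spinlocks state='on' retries='8191'/>
    </hyperv>
  </features>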
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Originally the names of the KVM CPU features were only used internally
for looking up their CPUID bits. So we used a "__kvm_" prefix for them
to make sure the names do not collide with normal CPU features stored in
our CPU map.
But with QEMU 4.1 we check which features were enabled or disabled by a
freshly started QEMU process using their names rather than their CPUID
bits (mostly because of MSR features). Thus we need to change our
made-up internal names into the actual names used by QEMU.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Tested-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
All the features are hyperv features even though they are provided by
KVM with QEMU. The "KVM" part in the macro names does not make a lot of
sense.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Tested-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Starting with QEMU 4.1, we're using the canonical feature names on the
command line and avoiding aliases to prepare for possible deprecation of
all aliases in QEMU. But we did so only for features from our CPU map;
hyperv features defined in the code were left unchanged and this patch
fixes that. Some features use the "hv-" prefix unconditionally because
they were introduced recently enough to always support spelling with a
dash.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Tested-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Originally the names of the hyperv CPU features were only used
internally for looking up their CPUID bits. So we used a "__kvm_hv_"
prefix for them to make sure the names do not collide with normal CPU
features stored in our CPU map.
But with QEMU 4.1 we check which features were enabled or disabled by a
freshly started QEMU process using their names rather than their CPUID
bits (mostly because of MSR features). Thus we need to change our
made-up internal names into the actual names used by QEMU. Most of the
names are only used with QEMU 4.1 and newer and the rest were introduced
in QEMU recently enough to already support spelling with "-". Thus we
don't need to define them as "hv_*" with a translation to "hv-*" for new
QEMU.
Without this patch libvirt would mistakenly report all hyperv features
as unavailable and refuse to start any domain using them with QEMU 4.1.
Reported-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Tested-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Earlier patches mentioned that the initial implementation will prevent
snapshots and checkpoints from being used on the same domain at once.
However, the actual restriction is done in this separate patch to make
it easier to lift that restriction via a revert, when we are finally
ready to tackle that integration in the future.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Time to actually issue the QMP transactions that create and delete
persistent checkpoints, resolving TODOs intentionally left earlier in
the series. For create, we only need one transaction: inside, we
visit all disks affected by the checkpoint, and create a new enabled
bitmap, as well as disabling the bitmap of the first ancestor
checkpoint (if any) that also had a bitmap. For deletion, we need
multiple QMP calls: for each disk, if there is an ancestor checkpoint
with a bitmap, then the bitmap must be merged (including activating
the ancestor bitmap if the leaf node changes), all before deleting the
bitmap from the checkpoint being removed.
Signed-off-by: Eric Blake <eblake@redhat.com>
Qemu bitmap operations require knowing the node name associated with
the format layer (the qcow2 file); as upcoming patches will be
grabbing that information frequently, make a helper function to access
it.
Another potential benefit of this function is that we have a single
place where we could insert a QMP node-name scraping call if we don't
currently know the node name, when -blockdev is not supported;
however, the goal is that we hopefully don't ever have to do that
because we instead scrape node names only at the point where they
change.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
A lot of this work heavily copies from the existing snapshot APIs.
What's more, this patch is (intentionally) very similar to the
checkpoint code just added in the test driver, to the point that qemu
checkpoints are not fully usable in this patch, but it at least
bisects and builds cleanly. The separation between patches is done
because the grunt work of saving and restoring XML and tracking
relations between checkpoints is common to the test driver, while the
later patch adding integration with QMP is specific to qemu.
Also note that the interlocking to prevent checkpoints and snapshots
from existing at the same time will be a separate patch, to make it
easier to revert that restriction when we finally round out the design
for supporting interaction between the two concepts.
Signed-off-by: Eric Blake <eblake@redhat.com>
This is similar to the existing directory for snapshots; the domain
will save one xml file per checkpoint, for reloading on the next
libvirtd restart. Fortunately, since checkpoints mandate RNG
validation, we are assured that the checkpoint name will be usable as
a file name (no abuse of '../escape' as a checkpoint name, for
example).
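For illustration, a checkpoint XML file stored in this directory might
look roughly like the following (the names and bitmap value are
hypothetical):
  <domaincheckpoint>
    <name>checkpoint-1</name>
    <description>example checkpoint</description>
    <disks>
      <disk name='vda' checkpoint='bitmap' bitmap='checkpoint-1'/>
      <disk name='vdb' checkpoint='no'/>
    </disks>
  </domaincheckpoint>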
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Introduce the handler for finalizing a block commit and active block
commit job which will allow using it with blockdev.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Introduce the handler for finalizing a block pull job which will allow
using it with blockdev.
This patch also contains some additional machinery which is required to
store all the relevant job data in the status XML which will also be
reused with other block job types.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
In case of an incoming migration we do not need to run swtpm_setup
with all the parameters but only want to get the benefit of it
creating a TPM state file for us that we can then label with an
SELinux label. The actual state will be overwritten by the incoming
state. So we have to pass an indicator for incomingMigration
all the way to the command line parameter generation for swtpm_setup.
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
If we are using -blockdev, then node names are always available
(because we set them). But when not using it, we have to scrape node
names from QMP, and want to do so as infrequently as possible. We
were scraping node names after reconnecting a new libvirtd to an
existing guest (see qemuProcessReconnect), and after any block job
that may have changed the set of node names we care about (legacy
block jobs), but forgot to scrape the names when first starting a
guest. Do so now in order to allow the checkpoint code to always have
access to a node name without having to repeat a node name scrape
itself.
Future patches may need to clean up qemuDomainSetBlockThreshold (if
node names are always available, then it doesn't need to repeat a
scrape) and/or hotplug and media changes (if the addition of new nodes
can result in a null node name, then scraping at that point in time
would be appropriate). But for now, this patch addresses only the
most common instance of a missing node name.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Use the existing variables rather than calling virTPMSwtpmXYZ().
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Message-Id: <20190726205633.2041912-2-stefanb@linux.vnet.ibm.com>
Create an empty log file if the log file was removed, otherwise the
transaction to set the security labels on the file will fail.
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20190726210706.24440-3-stefanb@linux.ibm.com>
Set the transactionStarted to false if the commit failed. If this is not
done, then the failure path will report 'no transaction is set' and hide
more useful error reports.
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20190726210706.24440-2-stefanb@linux.ibm.com>
Starting with QEMU 4.1 the qemuMonitorCPUModelInfo structure in
virQEMUCaps stores only canonical feature names, which may differ from
the names used by libvirt. We need to translate these canonical names
into libvirt names for further consumption.
This fixes a bug in qemuConnectBaselineHypervisorCPU which would remove
all features for which libvirt's spelling differs from QEMU's
preferred name. For example, the following result of
qemuConnectBaselineHypervisorCPU on my host with QEMU 4.1 is wrong:
  <cpu mode='custom' match='exact'>
    <model fallback='forbid'>Skylake-Client</model>
    <vendor>Intel</vendor>
    <feature policy='require' name='ss'/>
    <feature policy='require' name='vmx'/>
    <feature policy='require' name='hypervisor'/>
    <feature policy='require' name='clflushopt'/>
    <feature policy='require' name='umip'/>
    <feature policy='require' name='arch-capabilities'/>
    <feature policy='require' name='xsaves'/>
    <feature policy='require' name='pdpe1gb'/>
    <feature policy='require' name='invtsc'/>
    <feature policy='disable' name='pclmuldq'/>
    <feature policy='disable' name='lahf_lm'/>
  </cpu>
The 'pclmuldq' and 'lahf_lm' features should not be disabled in the
baseline CPU as they are supported by QEMU on this host.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Since swtpm does not support getting started without a password
once it was created with encryption enabled, we don't allow
encryption to be removed. Similarly, we do not allow encryption
to be added once swtpm has run. We also prevent changing the type
of the TPM backend since the encrypted state is still around, and
if one later switched back to the emulator backend and forgot the
encryption, the TPM would not work.
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
This patch passes the passphrase as a migration key to swtpm, which
encrypts the state of the TPM while a VM is migrated between hosts or
suspended into a file. Since the migration key secret
is the same as the state encryption secret, this now requires that
the migration destination host has the same secret value.
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Allow vTPM state encryption when swtpm_setup and swtpm support
passing a passphrase using a file descriptor.
This patch enables the encryption of the vTPM state only. It does
not encrypt the state during migration, so the destination secret
does not need to have the same password at this point.
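A minimal sketch of the resulting domain XML (the secret UUID is a
placeholder referring to a libvirt secret holding the passphrase):
  <tpm model='tpm-crb'>
    <backend type='emulator' version='2.0'>
      <encryption secret='6dd3e4a5-1d76-44ce-961f-f119f5aad935'/>
    </backend>
  </tpm>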
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Check whether previously found executables were updated and if
so look for them again. This makes it possible to use new features of
swtpm and its tools after they are updated.
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Move qemuTPMEmulatorInit to virTPMEmulatorInit in virtpm.c and introduce
a few functions to query the executables needed for virCommands.
Add locking to protect the tool paths and return a copy of the tool paths
to callers wanting to access them so that we can run the initialization
function multiple times later on and detect when the executable gets updated.
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
qemuBlockJobRewriteConfigDiskSource rewrites the disk source only
according to the 'target'. This means that if someone changed the
inactive config of the VM to refer to a different disk, a block job
would rewrite it when finishing a job which modifies the disk source.
Make sure that this does not happen by verifying that the source of the
config disk is the same.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
ACKed-by: Eric Blake <eblake@redhat.com>
Since we copy everything from the original storage source, including some
runtime data which are not relevant for the config, we should clear them.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
ACKed-by: Eric Blake <eblake@redhat.com>
Both active block commit and block copy modify the disk source of the
active definition and thus also must modify the corresponding inactive
definition source so that the VM starts up later. This is currently
implemented in the legacy block job handler but the logic will be useful
also for the new handlers. Split it out which also simplifies it.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
ACKed-by: Eric Blake <eblake@redhat.com>
The <mirror> subelement is used in two ways: in a commit job to point to
existing storage, and in a block-copy job to point to additional
storage. We need a way to track only the distinct storage.
This patch introduces qemuBlockJobDiskRegisterMirror which registers the
mirror chain separately only for jobs which require it. This also comes
with remembering that in the status XML.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
ACKed-by: Eric Blake <eblake@redhat.com>
Commit c412383796 used a value from the wrong enum when setting the disk's
mirrorState variable. This meant that a 'READY' job would show up as
'PIVOTING'.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
ACKed-by: Eric Blake <eblake@redhat.com>
When returning to asynchronous block job handling the flag which
determines the handling method should be reset prior to flushing
outstanding events. If there's an event to process the handler may
invoke the monitor and another event may be received. We'd not process
that one. Reset the flag earlier.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
qemuDomainSnapshotDiskDataCollect copies the source of the disk from the
live config into the inactive config. Move this operation earlier so
that if we initialize it for use for the particular instance the
run-time-only data is not copied.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
In cases when the backing store can be represented by something
simpler, such as a URI, we can use that rather than falling back to the
json: pseudo-protocol.
In cases when it's not worth it (e.g. with the old ugly NBD or RBD
strings) let's switch to json.
The function is exported as we'll need it when overwriting the ugly
strings qemu would come up with during blockjobs.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The block commit API checked 'disk->src->path' to see whether there
is a reasonable disk source to be committed. As the top image can be
e.g. backed by NBD the check is not good enough. Replace it by
virStorageSourceIsEmpty.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
For the modern use cases we are going to use 'blockdev-snapshot' instead
of 'blockdev-snapshot-sync'.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
qemuBuildStorageSourceChainAttachPrepareBlockdev prepares the full
backing chain for attachment via blockdev. For snapshots we'll need to
prepare one image only as it needs to be plugged on top of the existing
chain.
This patch introduces qemuBuildStorageSourceChainAttachPrepareBlockdevTop
which prepares only @top similarly to the original function by splitting
out the functionality into an internal function so that the API does not
change.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
In commit 042c95bd19 qemuBuildStorageSourceChainAttachPrepareBlockdev
was added but the comment for the function mentions
qemuBuildStorageSourceChainAttachPrepareDrive. Fix the mistake.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Now that we track the job separately we watch only for the abort of the
one single block job so the comment is no longer accurate. Also
describing asynchronous operation is not really necessary.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
With -blockdev:
- we track the job and check it after restart
- we can ask qemu to persist the job so that we can collect its result
- we can report errors
This solves all the points the comment outlined, so remove it. Also, all
jobs handle the disk state modification along with the event, so there's
nothing special left for the comment to say.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Use job-complete/job-abort instead of the blockjob-* variants for
blockdev.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
As the error message is now available and we know whether the job failed
we can report an error straight away rather than having the user check
the event.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Set the correct job states after the operation is requested in qemu.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
When initiating a pivot or abort of a block job we need to track which
one was initiated. Currently this is done via data stashed in
virDomainDiskDef. Add the possibility to track this together with the
job itself.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Make decisions based on the configuration of the job rather than the
data stored with the disk.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Use the stored job name rather than passing in the disk alias when
referring to the job, which allows the same code to work also when
-blockdev is used.
Note that this API does not require the change to use 'query-job' as it
will only ever work with blockjobs bound to disks due to the arguments,
which allow referring only to a disk. For disk-less jobs we'll need
to add a separate API later.
The change to qemuMonitorGetBlockJobInfo is required as the API was
stripping the 'drive-' prefix when returning the data which is not
desired any more.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
virDomainSnapshotFindByName(list, NULL) should return NULL, rather
than the internal-use-only metaroot. Most existing callers pass in a
non-NULL name; the few external callers that don't are immediately
calling virDomainMomentSetParent (which indeed needs the metaroot
rather than NULL if the parent name is NULL); but as the leaky
abstraction is ugly, it is worth instead making
virDomainMomentSetParent static and adding a new function for
resolving the parent link of a brand new moment within its list. The
existing external uses of virDomainMomentSetParent always succeed
(either the new moment has parent_name of NULL to become a new root,
or has parent_name set to a strdup of the previous current moment);
hence, our new function does not need a return value (but it still has
a VIR_WARN in case future uses break our assumptions about failure
being impossible).
Missed when commit 02c4e24d refactored things to attempt to remove
direct metaroot manipulations out of the qemu and test drivers into
internal-only details, and made more obvious when commit dc8d3dc6
factored it out into a separate file.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Some VM configurations may result in a large number of threads created by
the associated qemu process which can exceed the system default limit. The
maximum number of threads allowed per process is controlled by the pids
cgroup controller and is set to 16k when creating VMs with systemd's
machined service. The maximum number of threads per process is recorded
in the pids.max file under the machine's pids controller cgroup hierarchy,
e.g.
$cgrp-mnt/pids/machine.slice/machine-qemu\\x2d1\\x2dtest.scope/pids.max
Maximum threads per process is controlled with the TasksMax property of
the systemd scope for the machine. This patch adds an option to qemu.conf
which can be used to override the maximum number of threads allowed per
qemu process. If the value of the option is greater than zero, it will be set
in the TasksMax property of the machine's scope after creating the machine.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
It's already dereffed in the initialization and shouldn't be NULL
unless virJSONValueArraySize after a virJSONValueObjectGetArray
could return a NULL data entry.
Found by Coverity
Signed-off-by: John Ferlan <jferlan@redhat.com>
ACKed-by: Peter Krempa <pkrempa@redhat.com>
Report in logs when we don't find existing block job data and create it
just to handle the job.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
While this function does start a block job in the case when we'd not be
able to get our internal data for it, the handler sets the job state to
QEMU_BLOCKJOB_STATE_RUNNING anyway, thus qemuBlockJobStartupFinalize
would just unref the job.
Since the other usage of qemuBlockJobStartupFinalize in the other part
of the event handler was a bug, replace this one anyway even though it
would not cause problems.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The block job event handler qemuProcessHandleBlockJob looks at the block
job data to see whether the job requires synchronous handling. Since the
block job event may arrive before we continue the job handling (if the
job has no data to copy) we could hit the state when the job is still
set as QEMU_BLOCKJOB_STATE_NEW (as we move it to the
QEMU_BLOCKJOB_STATE_RUNNING state only after returning from the monitor).
If the event handler uses qemuBlockJobStartupFinalize it would
unregister and free the job. Thankfully this is not a big problem for
legacy blockjobs as we don't need much data for them, but since we'd
re-instantiate the job data structure we'd report the wrong job type for
active commit, as qemu reports it as a regular commit job.
Fix it by not using the qemuBlockJobStartupFinalize function in
qemuProcessHandleBlockJob as it is not starting the job anyway.
https://bugzilla.redhat.com/show_bug.cgi?id=1721375
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Commit 5ff46aaa7f added a new parameter but neglected to fix the NONNULL
declarations.
Reported-by: Julio Faracco <jcfaracco@gmail.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
When removing the disk frontend while any block job is still active we
need to transfer the ownership of the backing chain to the job itself as
the job still holds the reference to the chain members and thus attempts
to remove them would fail.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
In cases when the disk frontend was unplugged while a blockjob was
running the blockjob inherits the backing chain. When the blockjob is
then terminated we need to unplug the chain as it will not be used any
more.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The PR manager is a property of the format layer in qemu so we need to
be able to track it also in the chains of orphaned block jobs.
Add a helper for qemu to look also into the blockjob state.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
When the guest unplugs the disk frontend libvirt is responsible for
deleting the backend. Since a blockjob may still have a reference to the
backing chain when it is running we'll have to store the metadata for
the unplugged disk for future reference.
This patch adds 'chain' and 'mirrorChain' fields to 'qemuBlockJobData'
to keep them around with the job along with status XML machinery and
tests. Later patches will then add code to change the ownership of the
chain when unplugging the disk backend.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Refresh the state of the jobs and process any events that might have
happened while libvirt was not running.
The job state processing requires some care to figure out if a job
needs to be bumped.
For any invalid job, try our best to cancel it.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Add the infrastructure to handle block job events in the -blockdev era.
Some complexity is required as qemu does not bother to notify whether
the job was concluded successfully or failed. Thus it's necessary to
re-query the monitor.
To minimize the possibility of stuck jobs save the state into the XML
prior to handling everything so that the reconnect code can potentially
continue with the cleanup.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Add support for handling the event either synchronously or
asynchronously using the event thread.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
With blockdev we'll need to use the JOB_STATUS_CHANGE event, so gate the
old events by the blockdev capability.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
This new state is entered when qemu finished the job but libvirt does
not know whether it was successful or not.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Now that the blockjob handling code deals with the status XML we don't
need to save it explicitly when starting blockjobs.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Now that block job data is stored in the status XML portion we need to
make sure that everything which changes the state also saves the status
XML. The job registering function is used while parsing the status XML
so in that case we need to skip the XML saving.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
We need to store the block job state in the status XML so that we can
properly recover any data when reconnecting after startup and also in
the end to be able to do any transition of the backing chain that
happened while libvirt was not connected to the monitor.
The first step is to record the name, type, state and corresponding disk
in the status XML.
We also need to make sure that a broken blockjob does not make libvirt
lose the VM, thus many of the errors just mark the job as invalid.
Later on we'll cancel all invalid jobs.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The job data saved in the XML may be partially invalid e.g. if something
is missing. To prevent losing a domain with such a job add a flag to the
job data so that job APIs can ignore such a job and we can just cancel
it.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
When parsing the status XML we need to register all existing jobs.
Export the functions so that they are usable in other modules.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Later on we'll format these values into the status XML so the from/to
string functions will come in handy. The implementation also notes that
these will be used in the status XML to avoid somebody changing the
values.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Add the job structure to the table when instantiating a new job and
remove it when it terminates/fails.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Block jobs currently belong to disks only so we can look up the block
job data for them in the corresponding disks. This won't be the case
when using blockdev as certain jobs don't even correspond to a disk and
most of them can run on a part of the backing chain.
Add a global table of blockjobs which can be used to look up the data
for the blockjobs when the job events need to be processed.
The table is a hash table organized by job name and has a reference to
the job. New and running jobs will later be added to this table.
Reference counting will allow reaping the job state for synchronous callers.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The legacy job handler does not look at the old job state so we can
update it earlier.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
There's no need to do it if the job is not completed. The new helper
allows doing this with much less hassle in the correct place.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Rename and move qemuBlockJobTerminate to qemuBlockJobUnregister and
separate the bits from qemuBlockJobDiskNew which register the job with
the disk. This creates a unified interface for other APIs to use.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Similarly to qemuDomainSaveStatus add a helper to save the config XML
named qemuDomainSaveConfig.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Rename qemuDomainObjSaveJob and create a wrapper for it which does not
require 'driver' to be passed, and export it so that other places can
easily save the status XML without having to invoke virDomainSaveStatus,
which has unpleasant parameters.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
QEMU allows us to create storage on certain network protocols which
allow image creation through their API. Wire up the generator for using
it with libvirt as well as for local files.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
'blockdev-create' allows us to use qemu to format images to our desired
format. This patch implements helpers which convert a
virStorageSourcePtr into JSON objects describing the required
configuration.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>