The gotShutdown bool has been redundant since we started setting
the VIR_DOMAIN_SHUTDOWN state after receiving the SHUTDOWN event
from QEMU.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
When qemuProcessReconnectHelper was introduced (commit d38897a5d),
reconnection failure used VIR_DOMAIN_SHUTOFF_FAILED; however, that
was changed in commit bda2f17d to either VIR_DOMAIN_SHUTOFF_CRASHED
or VIR_DOMAIN_SHUTOFF_UNKNOWN.
When QEMU_CAPS_NO_SHUTDOWN checking was removed in commit fe35b1ad6,
the conditional state was just left at VIR_DOMAIN_SHUTOFF_CRASHED.
So introduce qemuDomainIsUsingNoShutdown, which tracks whether the
domain was started with -no-shutdown, so that if reconnection failure
occurs we can restore the decision point used to determine whether
CRASHED or UNKNOWN is reported.
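As a rough sketch (the exact private field consulted is an assumption
here), the helper and its use on reconnection failure could look like:

    bool
    qemuDomainIsUsingNoShutdown(qemuDomainObjPrivatePtr priv)
    {
        /* -no-shutdown is passed whenever reboots are allowed */
        return priv->allowReboot == VIR_TRISTATE_BOOL_YES;
    }

    /* on reconnection failure: */
    virDomainShutoffReason reason =
        qemuDomainIsUsingNoShutdown(priv) ? VIR_DOMAIN_SHUTOFF_CRASHED
                                          : VIR_DOMAIN_SHUTOFF_UNKNOWN;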
Signed-off-by: John Ferlan <jferlan@redhat.com>
ACKed-by: Michal Privoznik <mprivozn@redhat.com>
This function updates the used QEMU capabilities of @vm by querying
the QEMU capabilities cache.
Signed-off-by: Marc Hartmayer <mhartmay@linux.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.ibm.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Thanks to the previous commit the RESUME event handler knows what reason
should be used when changing the domain state to VIR_DOMAIN_RUNNING, but
the emitted VIR_DOMAIN_EVENT_RESUMED event still uses a generic
VIR_DOMAIN_EVENT_RESUMED_UNPAUSED detail. Luckily, the event detail can
be easily deduced from the running reason, which saves us from having to
pass one more value to the handler.
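As a sketch (function name hypothetical, cases abbreviated), the
mapping could look like:

    static int
    qemuDomainRunningReasonToResumeEvent(virDomainRunningReason reason)
    {
        switch (reason) {
        case VIR_DOMAIN_RUNNING_RESTORED:
        case VIR_DOMAIN_RUNNING_FROM_SNAPSHOT:
            return VIR_DOMAIN_EVENT_RESUMED_FROM_SNAPSHOT;
        case VIR_DOMAIN_RUNNING_MIGRATED:
        case VIR_DOMAIN_RUNNING_MIGRATION_CANCELED:
            return VIR_DOMAIN_EVENT_RESUMED_MIGRATED;
        case VIR_DOMAIN_RUNNING_POSTCOPY:
            return VIR_DOMAIN_EVENT_RESUMED_POSTCOPY;
        default:
            return VIR_DOMAIN_EVENT_RESUMED_UNPAUSED;
        }
    }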
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Whenever we get the RESUME event from QEMU, we change the state of the
affected domain to VIR_DOMAIN_RUNNING with VIR_DOMAIN_RUNNING_UNPAUSED
reason. This is fine if the domain is resumed unexpectedly, but when
we send "cont" to QEMU we usually have a better reason for the state
change. The better reason is used in qemuProcessStartCPUs which also
sets the domain state to running if qemuMonitorStartCPUs reports
success. Thus we may end up with two state updates in a row, but the
final reason is correct.
This patch is a preparation for dropping the state change done in
qemuMonitorStartCPUs for which we need to pass the actual running reason
to the RESUME event handler and use it there instead of
VIR_DOMAIN_RUNNING_UNPAUSED.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Create qemuDomainRemoveInactiveJobLocked, which copies
qemuDomainRemoveInactiveJob except that it calls another new
helper, qemuDomainRemoveInactiveLocked. That helper is in turn a
copy of qemuDomainRemoveInactive, except that it calls
virDomainObjListRemoveLocked instead of virDomainObjListRemove.
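A sketch of the new helpers (cleanup details elided):

    /* caller must hold the lock on the domain object list */
    static void
    qemuDomainRemoveInactiveLocked(virQEMUDriverPtr driver,
                                   virDomainObjPtr vm)
    {
        /* ... same cleanup as qemuDomainRemoveInactive ... */
        virDomainObjListRemoveLocked(driver->domains, vm);
    }

    void
    qemuDomainRemoveInactiveJobLocked(virQEMUDriverPtr driver,
                                      virDomainObjPtr vm)
    {
        bool haveJob = qemuDomainObjBeginJob(driver, vm,
                                             QEMU_JOB_MODIFY) >= 0;

        qemuDomainRemoveInactiveLocked(driver, vm);

        if (haveJob)
            qemuDomainObjEndJob(driver, vm);
    }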
Signed-off-by: Wang Yechao <wang.yechao255@zte.com.cn>
Reviewed-by: John Ferlan <jferlan@redhat.com>
The copy-on-read feature is expressed by adding a new node layer in
qemu when using -blockdev. Since we will keep these per-disk (as opposed
to per storage source) we need to store the appropriate node names in
the disk definition.
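For illustration, the extra layer on the qemu command line would look
something like this (node names made up):

    -blockdev '{"driver":"file","filename":"/path/to/image",
                "node-name":"storage-node"}'
    -blockdev '{"driver":"qcow2","file":"storage-node",
                "node-name":"format-node"}'
    -blockdev '{"driver":"copy-on-read","file":"format-node",
                "node-name":"cor-node"}'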
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
When using -blockdev you need to use the QOM path to refer to the disk
frontends. Add means for storing the path and retrieving it after
restart.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Node names for block objects in qemu need to be unique for an instance
of the qemu process. Add a counter so that node names are generated
sequentially and store it in the status XML so that we can restore it.
The helpers added allow creating new node names and resetting the
counter after the VM process terminates.
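A sketch of such a helper (names hypothetical):

    char *
    qemuDomainGetBlockNodeName(qemuDomainObjPrivatePtr priv)
    {
        char *ret = NULL;

        /* monotonically increasing counter, preserved in the
         * status XML across libvirtd restarts */
        ignore_value(virAsprintf(&ret, "libvirt-%llu",
                                 priv->nodenameindex++));
        return ret;
    }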
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
We don't use it for anything useful so it does not make much sense to
extract it.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Now that we have a saner replacement for checking whether the disk
source is the same, use it instead of formatting qemu command-line
chunks.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The disk backend alias was historically the alias of the -drive backing
the storage. For setups with -blockdev this will become more complex as
it will depend on other configs and generally will differ.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
In some cases the backing chain needs to be cleared prior to
re-detection. Move this step out of qemuDomainDetermineDiskChain, as
only certain places need it and the function itself is able to skip
to the end of the chain to perform detection.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
This event is emitted on the monitor if one of the pr-managers loses
its connection to the pr-helper process. What libvirt needs to do is
restart the pr-helper process iff it corresponds to the managed
pr-manager.
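On the monitor the event looks roughly like this:

    {"event": "PR_MANAGER_STATUS_CHANGED",
     "data": {"id": "pr-helper0", "connected": false},
     "timestamp": {"seconds": 1528379460, "microseconds": 123456}}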
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Once we called qemuDomainObjEnterRemote to talk to the destination
daemon during a peer to peer migration, the vm lock is released and we
only hold an async job. If the source domain dies at this point the
monitor EOF callback is allowed to do its job and (among other things)
clear all private data irrelevant for a stopped domain. Thus when we call
qemuDomainObjExitRemote, the domain may already be gone and we should
avoid touching runtime private data (such as current job info).
In other words, after acquiring the lock in qemuDomainObjExitRemote
we need to check that the domain is still alive, unless we're doing
offline migration.
https://bugzilla.redhat.com/show_bug.cgi?id=1589730
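In a sketch, the call sites change along these lines (the exact
signature of the exit helper may differ):

    qemuDomainObjEnterRemote(vm);
    /* ... talk to the destination daemon; vm is unlocked here ... */
    qemuDomainObjExitRemote(vm);

    /* the domain may have died while we did not hold its lock */
    if (!offline && !virDomainObjIsActive(vm)) {
        virReportError(VIR_ERR_OPERATION_FAILED, "%s",
                       _("domain is no longer running"));
        goto error;
    }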
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The point is to break QEMU_JOB_* into smaller pieces, which enables
us to achieve higher throughput. For instance, if there are two
threads, one trying to query something on the qemu monitor while the
other is trying to query something on the agent monitor, these two
threads would serialize. There is not much reason for that.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Introduce guest agent specific job categories to allow threads to
run agent monitor specific jobs while normal monitor jobs can
also be running.
Alter _qemuDomainJobObj in order to duplicate certain fields that
will be used for guest agent specific tasks to increase
concurrency and throughput and reduce serialization.
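A sketch of the duplicated fields (names approximate):

    struct _qemuDomainJobObj {
        virCond cond;                   /* both job types signal this */
        qemuDomainJob active;           /* currently running monitor job */
        unsigned long long owner;       /* thread id which owns the job */
        /* ... */
        qemuDomainAgentJob agentActive; /* currently running agent job */
        unsigned long long agentOwner;  /* thread id owning the agent job */
        /* ... */
    };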
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
The aim of this API is to allow the caller to make a best-effort
attempt. Some functions can work even when acquiring the job fails
(e.g. qemuConnectGetAllDomainStats()). But what they can't bear is
the delay of waiting up to 30 seconds for each domain that is
processing some other job.
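Usage is then along these lines (the helper name is an assumption):

    /* best effort: skip a busy domain instead of blocking on it */
    if (qemuDomainObjBeginJobNowait(driver, vm, QEMU_JOB_QUERY) < 0)
        continue;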
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
And replace all calls with virObjectEventStateQueue such that:

    qemuDomainEventQueue(driver, event);

becomes:

    virObjectEventStateQueue(driver->domainEventState, event);
And remove NULL checking from all callers.
Signed-off-by: Anya Harter <aharter@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Remove the call to the validating function from the function which sets
stuff up.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Convert the function to just prepare data for the disk. Callers need to
do the looping since there's more to do than just copy the data around.
The code path in qemuDomainPrepareDiskSource doesn't need to loop over
the chain yet, since there currently is no chain at this point. This
will be addressed later in the blockdev series where we will setup much
more stuff.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
It's desirable to keep the alias around to allow referencing the
secret object used with qemu. Add a set of APIs which will destroy
all data except the alias.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The function checks whether the storage source requires authentication
secret setup. Rename it accordingly.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
This helper checks that the vm has the master key set up and that libvirt
supports the given encryption algorithm.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Create a new vsock endpoint by opening /dev/vhost-vsock,
set the requested CID via ioctl (or assign a free one if auto='yes'),
pass the file descriptor to QEMU and build the command line.
https://bugzilla.redhat.com/show_bug.cgi?id=1291851
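In outline (error handling mostly omitted), the setup looks like:

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <linux/vhost.h>

    int fd = open("/dev/vhost-vsock", O_RDWR);
    uint64_t cid = 3; /* for auto='yes', retry with other CIDs until
                       * the ioctl succeeds */

    if (fd < 0 || ioctl(fd, VHOST_VSOCK_SET_GUEST_CID, &cid) < 0)
        return -1;

    /* fd is then passed to qemu, e.g.:
     * -device vhost-vsock-pci,id=vsock0,guest-cid=3,vhostfd=<fd> */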
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Allow saving various aspects necessary to do NBD migration via blockdev
by storing a 'virStorageSource' in the disk private data meant to store
the NBD target of migration. Along with this add code to parse and
format it into the status XML.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Rather than always checking which path to use, pre-assign it when
preparing the storage source.
This reduces the need to pass 'vm' around too much. For later use,
the path can be retrieved from the status XML.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Before we exec() qemu we have to spawn pr-helper processes for
all managed reservations (well, technically there can be only one).
The only caveat there is that we should place the process into
the same namespace and cgroup as qemu (so that it shares the same
view of the system). But we can do that only after we've forked.
That means calling the setup function between fork() and exec().
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
For command line we need two things:
1) -object pr-manager-helper,id=$alias,path=$socketPath
2) -drive file.pr-manager=$alias
In -object pr-manager-helper we tell qemu which socket to connect
to, then in -drive file.pr-manager we just reference the object
the drive in question should use.
For managed PR helper the alias is always "pr-helper0" and socket
path "${vm->priv->libDir}/pr-helper0.sock".
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Add a helper which will map values of the disk cache mode to the
flags accepted by various parts of the qemu block layer.
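A possible shape for the helper, following qemu's cache mode table
(enum spellings from libvirt's disk cache enum):

    int
    qemuDomainDiskCachemodeFlags(int cachemode,
                                 bool *writeback,
                                 bool *direct,
                                 bool *noflush)
    {
        switch (cachemode) {
        case VIR_DOMAIN_DISK_CACHE_DISABLE: /* 'none' */
            *writeback = true;  *direct = true;  *noflush = false;
            break;
        case VIR_DOMAIN_DISK_CACHE_V_WRITETHRU:
            *writeback = false; *direct = false; *noflush = false;
            break;
        case VIR_DOMAIN_DISK_CACHE_V_WB: /* 'writeback' */
            *writeback = true;  *direct = false; *noflush = false;
            break;
        case VIR_DOMAIN_DISK_CACHE_DIRECTSYNC:
            *writeback = false; *direct = true;  *noflush = false;
            break;
        case VIR_DOMAIN_DISK_CACHE_UNSAFE:
            *writeback = true;  *direct = false; *noflush = true;
            break;
        default:
            return -1;
        }
        return 0;
    }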
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
We store the flags passed to the API which started the migration. Let's
use them instead of a separate bool to check if post-copy migration was
requested.
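The check then becomes something like (field name assumed, as
introduced by the apiFlags commit below):

    /* was post-copy requested when the migration job started? */
    bool postcopy = !!(priv->job.apiFlags & VIR_MIGRATE_POSTCOPY);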
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
We store the flags passed to the API which started QEMU_ASYNC_JOB_DUMP
and we can use them to check whether a memory-only dump is running.
There's no need for a specific bool flag.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
When an async job is running, we sometimes need to know how it was
started to distinguish between several types of the job, e.g., post-copy
vs. normal migration. So far we have added a specific bool item to
qemuDomainJobObj for each such case, which doesn't scale very well;
storing such bools in the status XML would also be painful, so we
didn't do it.
A better approach is to store the flags passed to the API which started
the async job, which can be easily stored in status XML.
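A sketch of the new field and where it gets set (details elided):

    struct _qemuDomainJobObj {
        /* ... */
        unsigned long apiFlags; /* flags passed to the API which started
                                 * the current async job; also written
                                 * to and parsed from the status XML */
    };

    /* in qemuDomainObjBeginAsyncJob(): */
    priv->job.apiFlags = apiFlags;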
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The function checks whether QEMU supports TLS migration and stores the
original value of tls-creds parameter to priv->migTLSAlias. This is no
longer needed because we already have the original value stored in
priv->migParams.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Any job which touches migration parameters will first store their
original values (i.e., QEMU defaults) to qemuDomainJobObj to make it
easier to reset them back once the job finishes.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Since the function is tightly connected to migration, it was renamed
to qemuMigrationCapsCheck and moved to qemu_migration_params.c.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Now that we assume QEMU_CAPS_NETDEV, the only thing left to check
is whether we need to use the legacy -net syntax because of
a non-conforming architecture.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
It will be necessary to initialize various aspects for the detected
members of the backing chain. Add a function that will handle it and
call it from qemuDomainPrepareDiskSource and qemuDomainDetermineDiskChain.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
During domain startup there are many places where we need to acquire
secrets. Currently the code passes around a virConnectPtr, except in
the places where we pass in NULL, so there are a few codepaths where
the ability to start guests using secrets will fail. Change the code
to acquire a handle to the secret driver when needed.
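A sketch of acquiring the handle on demand, assuming a helper like
virGetConnectSecret() that opens a connection to the secret driver:

    virConnectPtr conn = virGetConnectSecret();
    if (!conn)
        return -1;

    /* ... look up the secret value via conn ... */

    virObjectUnref(conn);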
Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Since the state may be unknown in some cases, we should store it as
a tristate so that other code using it can determine whether the
state was actually updated.
Handle the DUMP_COMPLETED event, processing the status, stats, and
error string. Use the @status to synthesize an error while processing
the @stats data; if an error was provided by QEMU, then use that
instead.
If there's no async job, we can just ignore the data.
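The event carries the dump status and statistics, roughly:

    {"event": "DUMP_COMPLETED",
     "data": {"result": {"total": 4294967296,
                         "status": "completed",
                         "completed": 4294967296}},
     "timestamp": {"seconds": 1528379460, "microseconds": 654321}}

On failure, @data additionally contains an "error" string.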
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Define qemuMonitorDumpStats as a new JobStatsType in order to be
able to get memory dump statistics. For now, do nothing with the
new TYPE_MEMDUMP.
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Add a TYPE_SAVEDUMP so that when coalescing stats for a save or
dump we don't needlessly try to get the mirror stats for a migration.
Other conditions can still use MIGRATION and SAVEDUMP interchangeably,
including usage of the @migStats field to fetch/store the data.
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>