When commit 6ac402c456 added the API, the code always checked whether
the domain was active whenever VIR_DOMAIN_MEM_MAXIMUM was passed, and
therefore failed with an error even though only a config change was
requested. Fix the issue by replacing virDomainObjGetOneDef with
virDomainObjGetOneDefState, which tells us which definition we're
performing the change on.
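A minimal sketch of the corrected pattern (the surrounding check is
illustrative, not copied from the patch):

    bool live;
    virDomainDefPtr def;

    if (!(def = virDomainObjGetOneDefState(vm, flags, &live)))
        return -1;

    if ((flags & VIR_DOMAIN_MEM_MAXIMUM) && live) {
        virReportError(VIR_ERR_OPERATION_INVALID, "%s",
                       _("cannot change maximum memory of an active domain"));
        return -1;
    }
    /* a config-only change proceeds without touching the live state */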
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Acked-by: Peter Krempa <pkrempa@redhat.com>
Commit d5572f62e3 forgot to add maxthreads to the non-Linux definition
of the function, thus breaking the MinGW build.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Acked-by: Peter Krempa <pkrempa@redhat.com>
Changes noticed while copying code over to the similar aspects of
checkpoints.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Any message that is easy to trigger (as evidenced by the testsuite
update) should not use 'internal error' as its category.
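For instance, a hedged sketch of the preferred pattern (the message
text is illustrative):

    /* not VIR_ERR_INTERNAL_ERROR: a user can trigger this directly */
    virReportError(VIR_ERR_INVALID_ARG,
                   _("invalid name '%s'"), name);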
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
virDomainSnapshotFindByName(list, NULL) should return NULL, rather
than the internal-use-only metaroot. Most existing callers pass in a
non-NULL name; the few external callers that don't are immediately
calling virDomainMomentSetParent (which indeed needs the metaroot
rather than NULL if the parent name is NULL); but as the leaky
abstraction is ugly, it is worth instead making
virDomainMomentSetParent static and adding a new function for
resolving the parent link of a brand new moment within its list. The
existing external uses of virDomainMomentSetParent always succeed
(either the new moment has parent_name of NULL to become a new root,
or has parent_name set to a strdup of the previous current moment);
hence, our new function does not need a return value (but it still has
a VIR_WARN in case future uses break our assumptions about failure
being impossible).
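A hypothetical sketch of the new helper (name, signature and warning
text are assumptions, not taken verbatim from the patch):

    /* Resolve the parent link of a brand new moment within its list.
     * Failure is not expected, hence no return value; just warn. */
    void
    virDomainMomentLinkParent(virDomainMomentObjListPtr moments,
                              virDomainMomentObjPtr moment)
    {
        if (virDomainMomentSetParent(moments, moment) < 0)
            VIR_WARN("moment %s lost its parent", moment->def->name);
    }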
Missed when commit 02c4e24d refactored things in an attempt to move
direct metaroot manipulations out of the qemu and test drivers into
internal-only details, and made more obvious when commit dc8d3dc6
factored the code out into a separate file.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Some VM configurations may result in a large number of threads created by
the associated qemu process which can exceed the system default limit. The
maximum number of threads allowed per process is controlled by the pids
cgroup controller and is set to 16k when creating VMs with systemd's
machined service. The maximum number of threads per process is recorded
in the pids.max file under the machine's pids controller cgroup hierarchy,
e.g.
    $cgrp-mnt/pids/machine.slice/machine-qemu\x2d1\x2dtest.scope/pids.max
Maximum threads per process is controlled with the TasksMax property of
the systemd scope for the machine. This patch adds an option to qemu.conf
which can be used to override the maximum number of threads allowed per
qemu process. If the value of the option is greater than zero, it will
be set in the TasksMax property of the machine's scope after creating
the machine.
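A hedged qemu.conf example (assuming the option is spelled
max_threads_per_process):

    # /etc/libvirt/qemu.conf
    # Override systemd's TasksMax for each machine scope;
    # 0 keeps the system default.
    max_threads_per_process = 32768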
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Update the current or maximum memory on the persistent or live
definition depending on the flags, which were previously ignored.
Signed-off-by: Ilias Stamatis <stamatis.iliass@gmail.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Make the test driver only support the VIR_TYPED_PARAM_STRING flag for
now.
Signed-off-by: Ilias Stamatis <stamatis.iliass@gmail.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
If the dup2 fails, then we aren't going to get the correct result.
Found by Coverity
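A minimal sketch of the guard:

    /* dup2 returns the target fd on success, -1 on failure */
    if (dup2(newfd, fd) != fd)
        goto cleanup;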
Signed-off-by: John Ferlan <jferlan@redhat.com>
ACKed-by: Peter Krempa <pkrempa@redhat.com>
Avoid the chance that sysconf(_SC_OPEN_MAX) returns -1 and thus would
cause virBitmapNew to attempt to allocate a very large bitmap.
Found by Coverity
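A sketch of the check, assuming the usual libvirt error reporting
helpers:

    int max = sysconf(_SC_OPEN_MAX);

    if (max < 0) {
        virReportSystemError(errno, "%s",
                             _("unable to determine the maximum number of open files"));
        return -1;
    }

    if (!(map = virBitmapNew(max)))
        return -1;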
Signed-off-by: John Ferlan <jferlan@redhat.com>
ACKed-by: Peter Krempa <pkrempa@redhat.com>
Avoid the chance that qemuMonitorTestNewSimple could return NULL
Found by Coverity
Signed-off-by: John Ferlan <jferlan@redhat.com>
ACKed-by: Peter Krempa <pkrempa@redhat.com>
It's already dereferenced in the initialization and shouldn't be NULL,
unless virJSONValueArraySize after a virJSONValueObjectGetArray could
return a NULL data entry.
Found by Coverity
Signed-off-by: John Ferlan <jferlan@redhat.com>
ACKed-by: Peter Krempa <pkrempa@redhat.com>
Ever since the --parallel-connections option for virsh migrate was
introduced we did not properly check the return value of
vshCommandOptInt. We would set the VIR_MIGRATE_PARAM_PARALLEL_CONNECTIONS
parameter even if vshCommandOptInt returned 0 (which means
--parallel-connections was not specified), as long as another int option
checked earlier was specified with a nonzero value.
Specifically, running virsh migrate with either
--auto-converge-increment, --auto-converge-initial, --comp-mt-dthreads,
--comp-mt-threads, or --comp-mt-level would set the
VIR_MIGRATE_PARAM_PARALLEL_CONNECTIONS parameter and, if the --parallel
option was not used, libvirt would complain

    error: invalid argument: Turn parallel migration on to tune it

even though the --parallel-connections option was not used at all.
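A sketch of the corrected check (labels and variable names are
illustrative):

    int rv;
    int connections;

    if ((rv = vshCommandOptInt(ctl, cmd, "parallel-connections",
                               &connections)) < 0)
        goto out;

    if (rv > 0 &&   /* only when the option was actually present */
        virTypedParamsAddInt(&params, &nparams, &maxparams,
                             VIR_MIGRATE_PARAM_PARALLEL_CONNECTIONS,
                             connections) < 0)
        goto save_error;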
https://bugzilla.redhat.com/show_bug.cgi?id=1726643
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Report in the logs when we don't find existing block job data and have
to create it just to handle the job.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
While this function does start a block job in the case when we would
not be able to get our internal data for it, the handler sets the job
state to QEMU_BLOCKJOB_STATE_RUNNING anyway, thus
qemuBlockJobStartupFinalize would just unref the job.
Since the other usage of qemuBlockJobStartupFinalize in the other part
of the event handler was a bug, replace this one as well even though it
would not cause problems.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The block job event handler qemuProcessHandleBlockJob looks at the block
job data to see whether the job requires synchronous handling. Since the
block job event may arrive before we continue the job handling (if the
job has no data to copy) we could hit a state where the job is still
set as QEMU_BLOCKJOB_STATE_NEW (as we move it to the
QEMU_BLOCKJOB_STATE_RUNNING state only after returning from the monitor).
If the event handler uses qemuBlockJobStartupFinalize it would
unregister and free the job. Thankfully this is not a big problem for
legacy blockjobs as we don't need much data for them, but since we'd
re-instantiate the job data structure we'd report a wrong job type for
active commit, as qemu reports it as a regular commit job.
Fix it by not using the qemuBlockJobStartupFinalize function in
qemuProcessHandleBlockJob as it is not starting the job anyway.
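A sketch of the change (assuming the handler holds exactly one
reference to the job at this point):

     cleanup:
    -    qemuBlockJobStartupFinalize(vm, job);
    +    virObjectUnref(job);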
https://bugzilla.redhat.com/show_bug.cgi?id=1721375
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
virCgroupRemove returns -1 when removing the cgroup fails. But there is
retry code to remove the cgroup in qemuProcessStop:

 retry:
    if ((ret = qemuRemoveCgroup(vm)) < 0) {
        if (ret == -EBUSY && (retries++ < 5)) {
            usleep(200*1000);
            goto retry;
        }
        VIR_WARN("Failed to remove cgroup for %s",
                 vm->def->name);
    }

The return value of qemuRemoveCgroup will never be equal to -EBUSY, so
change virCgroupRemove to return a negative errno on failure.
Signed-off-by: Wang Yechao <wang.yechao255@zte.com.cn>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Shutting down the daemon after 30 seconds of being idle is a little bit
too aggressive. Especially when using 'virsh' in single-shot mode, as
opposed to interactive shell mode, it would not be unusual to have
more than 30 seconds between commands. This will lead to the daemon
shutting down and starting up between a series of commands.
Increasing the shutdown timer to 2 minutes will make it less likely that
the daemon will shut down while the user is in the middle of a series of
commands.
Reviewed-by: Jim Fehlig <jfehlig@suse.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Instead of having each caller pass in the desired logfile name, pass in
the binary name instead. The logging code can then just derive a logfile
name by appending ".log".
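A sketch of the derivation using libvirt's allocation helper:

    char *logfile = NULL;

    if (virAsprintf(&logfile, "%s.log", binary) < 0)
        return -1;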
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
This adds detection of Quobyte as a shared file system for live
migration.
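Illustrative sketch only: shared file system detection in libvirt
compares the statfs f_type against known magic constants; the value
below is an assumption, not copied from the patch:

    #define QB_MAGIC 0x51626673  /* hypothetical "Qbfs" magic */

    if ((long)sb.f_type == QB_MAGIC)
        return 1;  /* shared, safe for live migration */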
Signed-off-by: Silvan Kaiser <silvan@quobyte.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
We have two functions: virPCIDeviceAddressIsEqual(), defined only
on Linux, and virPCIDeviceAddressEqual(), defined everywhere. Both
of them do the same thing. Drop the former in favour of the
latter.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Commit 5ff46aaa7f added a new parameter but neglected to fix the NONNULL
declarations.
Reported-by: Julio Faracco <jcfaracco@gmail.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
When removing the disk frontend while any block job is still active we
need to transfer ownership of the backing chain to the job itself, as
the job still holds a reference to the chain members and thus attempts
to remove them would fail.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
In cases when the disk frontend was unplugged while a blockjob was
running, the blockjob inherits the backing chain. When the blockjob then
terminates we need to unplug the chain as it will not be used any more.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The PR manager is a property of the format layer in qemu, so we need to
be able to track it in the chains of orphaned block jobs as well.
Add a helper for qemu that also looks into the blockjob state.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
When the guest unplugs the disk frontend libvirt is responsible for
deleting the backend. Since a blockjob may still have a reference to the
backing chain when it is running we'll have to store the metadata for
the unplugged disk for future reference.
This patch adds 'chain' and 'mirrorChain' fields to 'qemuBlockJobData'
to keep them around with the job along with status XML machinery and
tests. Later patches will then add code to change the ownership of the
chain when unplugging the disk backend.
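A sketch of the new fields (member types assume the usual backing
chain representation, virStorageSource):

    struct _qemuBlockJobData {
        /* ... existing members omitted ... */
        virStorageSourcePtr chain;        /* backing chain owned by the job */
        virStorageSourcePtr mirrorChain;  /* chain of the mirror/target */
    };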
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Refresh the state of the jobs and process any events that might have
happened while libvirt was not running.
The job state processing requires some care to figure out if a job
needs to be bumped.
For any invalid job we try our best to cancel it.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Add the infrastructure to handle block job events in the -blockdev era.
Some complexity is required as qemu does not bother to notify whether
the job was concluded successfully or failed. Thus it's necessary to
re-query the monitor.
To minimize the possibility of stuck jobs, save the state into the XML
prior to handling everything so that the reconnect code can potentially
continue with the cleanup.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Add support for handling the event either synchronously or
asynchronously using the event thread.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
With blockdev we'll need to use the JOB_STATUS_CHANGE event, so gate
the old events on the blockdev capability.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
This new state is entered when qemu has finished the job but libvirt
does not yet know whether it was successful or not.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Now that the blockjob handling code deals with the status XML we don't
need to save it explicitly when starting blockjobs.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Now that block job data is stored in the status XML portion we need to
make sure that everything which changes the state also saves the status
XML. The job registering function is used while parsing the status XML
so in that case we need to skip the XML saving.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
We need to store the block job state in the status XML so that we can
properly recover any data when reconnecting after startup, and also so
that in the end we are able to perform any transition of the backing
chain that happened while libvirt was not connected to the monitor.
The first step is to note the name, type, state and corresponding disk
in the status XML.
We also need to make sure that a broken blockjob does not make libvirt
lose the VM, thus many of the errors just mark the job as invalid.
Later on we'll cancel all invalid jobs.
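An illustrative shape of the resulting status XML fragment (element
and attribute names are assumptions):

    <blockjob name='commit-vda-1' type='commit' state='running'>
      <disk dst='vda'/>
    </blockjob>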
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The job data saved in the XML may be partially invalid, e.g. if
something is missing. To prevent losing a domain with such a job, add a
flag to the job data so that the job APIs can ignore such a job and we
can just cancel it.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
When parsing the status XML we need to register all existing jobs.
Export the functions so that they are usable in other modules.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Later on we'll format these values into the status XML, so the from/to
string functions will come in handy. The implementation also notes that
these will be used in the status XML, to avoid somebody changing the
values.
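A sketch using libvirt's usual enum helpers (the set of string values
is an assumption):

    VIR_ENUM_DECL(qemuBlockjobState);
    VIR_ENUM_IMPL(qemuBlockjobState, QEMU_BLOCKJOB_STATE_LAST,
                  "completed", "failed", "cancelled", "ready",
                  "new", "running", "concluded", "aborting");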
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Add the job structure to the table when instantiating a new job and
remove it when it terminates/fails.
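A sketch, assuming the jobs are tracked in a virHashTable keyed by
the job name:

    if (virHashAddEntry(priv->blockjobs, job->name,
                        virObjectRef(job)) < 0)
        return -1;

    /* ... and on termination/failure: */
    virHashRemoveEntry(priv->blockjobs, job->name);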
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>