The code to add the device type to the command line was identical for
lsi and other models of SCSI controllers, but was duplicated (with the
exception of a minor ordering difference in the if-else clauses) for
the two cases. This patch replaces those two copies with a single
instance of the code just before the if().
This patch makes qemuValidateDevicePCISlotsChipsets() more consistent
in appearance by replacing several clauses of an if with the
equivalent call to qemuDomainMachineIsI440FX. The if was checking
exactly the same items, just in a slightly different order.
Since libvirt doesn't call qemu to update the new balloon size, add
code that handles tweaking the current balloon size statistic until
qemu reports the new size via the event.
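As a toy model of that tweak (all names below are invented for
illustration; this is not the actual libvirt code): cache the
requested size immediately, then let the event overwrite it with the
authoritative value.

    #include <stdio.h>

    /* Hypothetical stand-in for the domain's cached balloon statistic. */
    struct exampleDomain {
        unsigned long long cur_balloon;  /* what statistics queries report */
    };

    /* After qemu accepts the balloon command, no immediate size update
     * arrives, so tweak the cached statistic ourselves. */
    static void
    exampleSetBalloon(struct exampleDomain *dom, unsigned long long newsize)
    {
        dom->cur_balloon = newsize;      /* provisional value */
    }

    /* The balloon change event carries the size qemu actually settled on. */
    static void
    exampleBalloonChangeEvent(struct exampleDomain *dom,
                              unsigned long long actual)
    {
        dom->cur_balloon = actual;
    }

    int main(void)
    {
        struct exampleDomain dom = { .cur_balloon = 1048576 };
        exampleSetBalloon(&dom, 524288);
        printf("balloon: %llu KiB\n", dom.cur_balloon);  /* 524288 */
        exampleBalloonChangeEvent(&dom, 524288);
        return 0;
    }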
Use the new domain list collection helpers to avoid going through
virDomainPtrs.
This additionally implements filtering capability when called through
the API that accepts domain list filters.
Extend it into a universal helper for clearing lists of any object
type. Note that the argument type is deliberately void * to allow
implicit typecasting.
Additionally, add a helper that works on arrays that are not
NULL-terminated, once the length is known.
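A minimal self-contained sketch of the pattern (helper names here are
invented; the real libvirt helpers differ):

    #include <stdlib.h>

    /* Stand-in for the per-object cleanup the real helper performs. */
    static void
    exampleUnref(void *obj)
    {
        free(obj);
    }

    /* Clear a NULL-terminated list.  Taking void * lets any
     * pointer-to-pointer type convert implicitly at the call site. */
    static void
    exampleListFree(void *list)
    {
        void **objs = list;
        size_t i;

        if (!objs)
            return;
        for (i = 0; objs[i]; i++)
            exampleUnref(objs[i]);
        free(objs);
    }

    /* Variant for arrays that are not NULL-terminated, once the
     * length is known. */
    static void
    exampleListFreeCount(void *list, size_t count)
    {
        void **objs = list;
        size_t i;

        if (!objs)
            return;
        for (i = 0; i < count; i++)
            exampleUnref(objs[i]);
        free(objs);
    }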
https://bugzilla.redhat.com/show_bug.cgi?id=890648
So, imagine you've issued an API call that involves the guest agent.
For instance, you want to query the guest's IP addresses. So the API
acquires QUERY_JOB, locks the guest agent and issues the agent
command. However, for some reason, the guest agent replies to the
initial ping correctly, but then crashes tragically while executing
the real command (in this case guest-network-get-interfaces). Since
the initial ping went well, libvirt thinks the guest agent is
accessible and awaits a reply to the real command. But it will never
come. What will come is a monitor event. Our handler
(processSerialChangedEvent) will try to acquire MODIFY_JOB, which
will obviously fail because the other thread executing the API call
already holds a job. So the event handler exits early, and the
QUERY_JOB is never released nor ended.
The way to solve this is to put a flag in the monitor internals. The
flag is called @running and agent commands are issued iff the flag is
set. The flag is set when we connect to the agent socket, and unset
whenever we see a DISCONNECT event from the agent. Moreover, we must
wake up all the threads waiting on the agent. This is done by
signaling the condition they're waiting on.
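In outline, the mechanism looks roughly like this condensed sketch
using plain pthreads (the real agent monitor code and its names
differ):

    #include <pthread.h>
    #include <stdbool.h>

    struct exampleAgent {
        pthread_mutex_t lock;
        pthread_cond_t notify;   /* threads waiting for agent replies */
        bool running;            /* set on connect, cleared on DISCONNECT */
    };

    /* Called when we connect to the agent socket. */
    void exampleAgentConnect(struct exampleAgent *agent)
    {
        pthread_mutex_lock(&agent->lock);
        agent->running = true;
        pthread_mutex_unlock(&agent->lock);
    }

    /* Called on the DISCONNECT event: clear the flag and wake up all
     * threads blocked on the condition so they can fail gracefully. */
    void exampleAgentDisconnect(struct exampleAgent *agent)
    {
        pthread_mutex_lock(&agent->lock);
        agent->running = false;
        pthread_cond_broadcast(&agent->notify);
        pthread_mutex_unlock(&agent->lock);
    }

    /* Commands are issued iff the flag is set. */
    bool exampleAgentIsRunning(struct exampleAgent *agent)
    {
        bool running;
        pthread_mutex_lock(&agent->lock);
        running = agent->running;
        pthread_mutex_unlock(&agent->lock);
        return running;
    }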
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Running shutdown with mode agent on a shutoff domain gives a cryptic
error message:
virsh # shutdown --mode agent gentoo
error: Failed to shutdown domain gentoo
error: Guest agent is not responding: QEMU guest agent is not connected
After this patch, the error is more clear:
virsh # shutdown --mode agent gentoo
error: Failed to shutdown domain gentoo
error: Requested operation is not valid: domain is not running
Reported-by: Martin Kletzander <mkletzan@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Allow ccw devices to be used with multiqueue. ccw provides a
one-to-one relation of fds to queues and does not support the vectors
option.
Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Reviewed-by: Daniel Hansel <daniel.hansel@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Coverity points out that qemuMonitorGetAllBlockStatsInfo could return a
-1 and thus not fill in 'stats' (leaving it NULL). Then the call to
qemuMonitorBlockStatsUpdateCapacity will dereference it.
Coverity complains about the [n]values pairing in
virQEMUCapsFreeStringList. Rather than adding a bunch of "if values"
checks prior to each call, putting the values check inside the free
function avoids the chance that somehow nvalues > 0 while
values == NULL.
Coverity points out that it was possible to have a zero return from
qemuBuildRNGBackendProps without filling in 'props', which would then
cause a NULL dereference on the next call.
Coverity notes that ->ifname is used after the VIR_FREE in the code
path following the call to virNetDevMacVLanDeleteWithVPortProfile, by
a call to virNetDevOpenvswitchRemovePort.
Since ->ifname will eventually be VIR_FREE()'d in virDomainNetDefFree,
just remove the extraneous VIR_FREE here.
When this code was originally added, the Openvswitch call wasn't
present and checks were made for non-NULL prior to use.
Coverity complains that in the error path both the < 0 condition and
the success path after a qemuDomainObjExitMonitor failure end up
going to cleanup. So just use ignore_value in this error path to
resolve the complaint.
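For reference, ignore_value() is the gnulib-style macro used to
discard a return value explicitly; a self-contained illustration of
the idiom (the monitor helper below is an invented stand-in):

    #include <stdio.h>

    #define ignore_value(x) ((void)(x))

    /* Hypothetical stand-in for qemuDomainObjExitMonitor. */
    static int exampleExitMonitor(void)
    {
        return -1;   /* pretend the domain died while in the monitor */
    }

    int main(void)
    {
        /* Failure and success both fall through to the same cleanup,
         * so the return value is deliberately discarded. */
        ignore_value(exampleExitMonitor());
        /* cleanup: ... */
        return 0;
    }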
Currently, the only version that QEMU supports is version 2.
Fortunately, it is enabled automatically for aarch64, so there's
nothing we need to put onto the command line.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When migrating a domain while changing its name and using
VIR_MIGRATE_PERSIST_DEST flag, libvirt would fail to properly change the
name in the persistent definition. The inconsistency results in weird
behavior when dumping domain XML, destroying the domain, restarting
libvirtd and likely in several other situations.
Since the new name is already stored in vm->def->name, we just need to
make sure the persistent definition uses this new name too.
https://bugzilla.redhat.com/show_bug.cgi?id=1076354
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Set the capability based on the QMP query, or on the qemu version.
The QMP query includes vmport with 2.2, but no longer with 2.3. It
lists only non-machine-specific capabilities, so check the qemu
version too until a machine-specific query is supported.
Now that we have macros for exclusive flags and flag requirements, we
can use them to clean up the code for setvcpus and error out for all
wrong flag combinations.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Found by Laine and discussed a bit on internal IRC.
Commit id c56fe7f1d6 added support for creating a command line to support
scsi-disk.channel.
Series was here:
http://www.redhat.com/archives/libvir-list/2012-February/msg01052.html
Which pointed to a design proposal here:
http://permalink.gmane.org/gmane.comp.emulators.libvirt/50428
Which states (in part):
Libvirt should check for the QEMU "scsi-disk.channel" property. If it
is unavailable, QEMU will only support channel=lun=0 and 0<=target<=7.
However, the check added was ensuring that bus != lun *and* bus != 0,
so if bus == lun and both were non-zero, we'd never make the second
check. Changing this to an *or* fixes the check, but is still less
readable than just checking each for 0.
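A self-contained demonstration of the difference, with values chosen
to hit the broken case:

    #include <stdbool.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned bus = 3, lun = 3;   /* equal and non-zero */

        /* Original check: false when bus == lun, so non-zero values
         * slipped through without hitting the second comparison. */
        bool old_flagged = bus != lun && bus != 0;

        /* Fixed logic, equivalent to checking each for 0. */
        bool new_flagged = bus != 0 || lun != 0;

        printf("old: %d, new: %d\n", old_flagged, new_flagged);
        /* prints "old: 0, new: 1" */
        return 0;
    }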
Since the qemu capabilities are not initialized for offline VMs, the
caller might get a suboptimal error message:
$ virsh blockjob VM PATH --bandwidth 1
error: unsupported configuration: block jobs not supported with this QEMU binary
Move the checks after we make sure that the VM is alive.
In qemuMigrationDriveMirror we can start all disk mirrors in parallel.
We wait until they are all ready, or one of them aborts.
In qemuMigrationCancelDriveMirror, we wait until all mirrors are
properly stopped. This is necessary to ensure that destination VM is
fully in sync with the (paused) source VM.
If a drive mirror cannot be cancelled, then the destination is not in
a consistent state. In this case it is not safe to continue with the
migration.
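The "wait until all are ready, or one aborts" decision reduces to a
predicate like the following sketch (names invented; not the actual
migration code):

    #include <stdbool.h>
    #include <stddef.h>

    enum exampleMirrorState {
        MIRROR_STARTING,   /* job launched, not yet in sync */
        MIRROR_READY,      /* destination in sync with source */
        MIRROR_FAILED,     /* job aborted */
    };

    /* Returns true when waiting should stop: either every mirror is
     * ready, or at least one has aborted (reported via *failed). */
    static bool
    exampleMirrorsSettled(const enum exampleMirrorState *disks,
                          size_t ndisks, bool *failed)
    {
        size_t i;

        *failed = false;
        for (i = 0; i < ndisks; i++) {
            if (disks[i] == MIRROR_FAILED) {
                *failed = true;  /* one abort cancels the whole phase */
                return true;
            }
            if (disks[i] != MIRROR_READY)
                return false;    /* keep waiting */
        }
        return true;             /* all in sync */
    }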
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
The !modern code path needs to call qemuBlockJobEventProcess
directly; the modern code path will call it via qemuBlockJobSyncWait.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
Other threads may be blocked in qemuBlockJobSyncWait. Ensure that
they're woken up when the domain is stopped.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
qemuBlockJobSyncBegin and qemuBlockJobSyncEnd delimit a region of code
where block job events are processed "synchronously".
qemuBlockJobSyncWait and qemuBlockJobSyncWaitWithTimeout wait for an
event generated by a block job.
The Wait* functions may be called multiple times while the synchronous
block job is active. Any pending block job event will be processed
only when Wait* or End is called. disk->blockJobStatus is reset by
these functions, so if it is needed, a pointer to a
virConnectDomainEventBlockJobStatus variable should be passed as the
last argument. It is safe to pass NULL if you do not care about the
block job status.
All functions assume the VM object is locked. The Wait* functions will
unlock the object for as long as they are waiting. They will return -1
and report an error if the domain exits before an event is received.
Typical use is as follows:
    virQEMUDriverPtr driver;
    virDomainObjPtr vm; /* locked */
    virDomainDiskDefPtr disk;
    virConnectDomainEventBlockJobStatus status;

    qemuBlockJobSyncBegin(disk);

    ... start block job ...

    if (qemuBlockJobSyncWait(driver, vm, disk, &status) < 0) {
        /* domain died while waiting for event */
        ret = -1;
        goto error;
    }

    ... possibly start other block jobs
        or wait for further events ...

    qemuBlockJobSyncEnd(driver, vm, disk, NULL);
To perform other tasks periodically while waiting for an event:
    virQEMUDriverPtr driver;
    virDomainObjPtr vm; /* locked */
    virDomainDiskDefPtr disk;
    virConnectDomainEventBlockJobStatus status;
    unsigned long long timeout = 500 * 1000ull; /* milliseconds */

    qemuBlockJobSyncBegin(disk);

    ... start block job ...

    do {
        ... do other task ...

        if (qemuBlockJobSyncWaitWithTimeout(driver, vm, disk,
                                            timeout, &status) < 0) {
            /* domain died while waiting for event */
            ret = -1;
            goto error;
        }
    } while (status == -1);

    qemuBlockJobSyncEnd(driver, vm, disk, NULL);
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
We will want to use synchronous block jobs from qemu_migration as well,
so split this function out into a new source file.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
The documentation states that for a shallow block copy the image has
to have the same guest-visible content as the backing file of the
current image if the file is being reused. This condition can also be
achieved with a raw file (or a qcow without a backing file), so
remove the condition that would disallow it.
(This patch additionally fixes crash described in
https://bugzilla.redhat.com/show_bug.cgi?id=1215569 )
It will be used in qemumonitorjsontest, so make it non-static.
Signed-off-by: Zhang Bo <oscar.zhangbo@huawei.com>
Signed-off-by: Zhou Yimin <zhouyimin@huawei.com>
If we received zero iothreads from the monitor, but were perhaps
expecting to receive something, then the code was skipping the check
to ensure that what's in the monitor matches our expectations. So
invert the checks: first check that what we get back matches
expectations, and then check whether zero iothreads were returned.
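In shape, the inverted checks look something like this simplified
sketch (names invented):

    #include <stdio.h>

    static int
    exampleCheckIOThreads(int expected, int got)
    {
        /* Validate expectations first, so a zero result can no longer
         * bypass the comparison. */
        if (got != expected) {
            fprintf(stderr, "got %d iothreads, expected %d\n",
                    got, expected);
            return -1;
        }
        if (got == 0)
            return 0;   /* legitimately nothing to process */
        /* ... process the returned iothread list ... */
        return 0;
    }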
Rather than have a separate routine to parse the alias of an iothread
returned from qemu in order to get the iothread_id value, parse the
alias when returning and just return the iothread_id in
qemuMonitorIOThreadInfoPtr.
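Conceptually the parsing amounts to stripping the "iothread" prefix
and converting the remainder; a self-contained sketch, assuming
aliases of the form "iothreadN":

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static int
    exampleIOThreadIDFromAlias(const char *alias, unsigned int *id)
    {
        char *end;
        unsigned long val;

        if (strncmp(alias, "iothread", 8) != 0)
            return -1;
        val = strtoul(alias + 8, &end, 10);
        if (end == alias + 8 || *end != '\0')
            return -1;   /* no digits, or trailing junk */
        *id = val;
        return 0;
    }

    int main(void)
    {
        unsigned int id;
        if (exampleIOThreadIDFromAlias("iothread4", &id) == 0)
            printf("iothread_id = %u\n", id);   /* prints 4 */
        return 0;
    }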
This set of patches removes the function, changes the "char *name" to
"unsigned int" and handles all the fallout.
Coverity notes that the switch() used to check 'connected' values has
two DEADCODE paths (_DEFAULT & _LAST). Since 'connected' is a
boolean, it can only be one or the other (CONNECTED or DISCONNECTED),
so it seems pointless to use a switch to cover "all" values. Convert
it to an if-else.
Add qemuDomainAddIOThread and qemuDomainDelIOThread in order to add
or remove an IOThread to/from the host for either the live or config
options.
The implementation for the 'live' option will use the iothreadpids
list in order to make decisions, while the 'config' option will use
the iothreadids list. Additionally, for deletion each may have to
adjust the iothreadpin list.
IOThreads are implemented by QMP objects; the code makes use of the
existing qemuMonitorAddObject and qemuMonitorDelObject APIs.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Remove the iothreadspin array from cputune and replace it with a
cpumask stored in the iothreadids list.
Adjust the test output, because our printing now follows the order of
the iothreadids list.
Add 'thread_id' to the virDomainIOThreadIDDef as a means to store the
'thread_id' as returned from the live qemu monitor data.
Remove the iothreadpids list from _qemuDomainObjPrivate and replace with
the new iothreadids 'thread_id' element.
Rather than use the default numbering scheme of 1..number of
iothreads defined for the domain, use the iothreadids list for the
iothread_id.
Since the iothreadids list keeps track of the iothread_ids, these are
now used in place of the many places where a for loop would "know"
that the ID was "+ 1" from the array element.
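A small self-contained illustration of the change (types simplified;
not the actual virDomainIOThreadIDDef):

    #include <stdio.h>

    /* Simplified stand-in for a virDomainIOThreadIDDef entry. */
    typedef struct {
        unsigned int iothread_id;
    } exampleIOThreadID;

    int main(void)
    {
        /* IDs need no longer be 1..n; a user may specify e.g. 2, 5, 6. */
        exampleIOThreadID ids[] = { {2}, {5}, {6} };
        size_t nids = sizeof(ids) / sizeof(ids[0]);
        size_t i;

        for (i = 0; i < nids; i++) {
            /* before: id = i + 1;  after: */
            unsigned int id = ids[i].iothread_id;
            printf("iothread%u\n", id);
        }
        return 0;
    }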
The new tests ensure usage of the <iothreadid> values for an exact number
of iothreads and the usage of a smaller number of <iothreadid> values than
iothreads that exist (and usage of the default numbering scheme).
If a user hot-attaches the guest agent channel, libvirt would ignore
it until the restart of libvirtd or a shutdown/destroy and start of
the VM itself.
This patch adds code that opens or closes the guest agent connection
according to the state of the guest agent channel, as reported by
connect/disconnect events.
To allow opening the channel from the event handler, qemuConnectAgent
needed to be exported.
When the guest agent channel gets hotplugged to a VM, libvirt would
still report that "QEMU guest agent is not configured" rather than
stating that the connection was not established yet.
Currently the code won't be able to connect to the agent after
hotplug, but that will change in a later patch.
As the qemuFindAgentConfig() helper is quite useful in this case,
move it to a more usable place and export it.