If qemu supports multifunction PCI devices, the format of the PCI address
passed to qemu is "bus=pci.0,multifunction=on,addr=slot.function".
If qemu does not support multifunction PCI devices, the format of the PCI
address passed to qemu is "bus=pci.0,addr=slot".
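As an illustration, such an address string could be formatted roughly as
follows (a minimal sketch with hypothetical names, not the driver's
actual code):

    #include <stdio.h>

    /* Sketch only: format a PCI address string for qemu, picking the
     * multifunction form when qemu supports it. */
    static void
    qemuFormatPCIAddress(char *buf, size_t buflen, int multifunction,
                         unsigned int slot, unsigned int function)
    {
        if (multifunction)
            /* e.g. "bus=pci.0,multifunction=on,addr=0x4.0x1" */
            snprintf(buf, buflen,
                     "bus=pci.0,multifunction=on,addr=0x%x.0x%x",
                     slot, function);
        else
            /* e.g. "bus=pci.0,addr=0x4" */
            snprintf(buf, buflen, "bus=pci.0,addr=0x%x", slot);
    }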
Hot plugging/unplugging multifunction PCI devices is not supported yet, so
the function of a hotplugged PCI device must be 0. When we hot unplug it,
we should release all functions in the slot.
We save all used PCI addresses in a hash table. The key is currently
generated from domain, bus and slot. Since we will support multifunction
PCI devices, the key should be generated from domain, bus, slot and
function.
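For instance, a key covering all four parts could be built like this
(a sketch, not the actual hash-table code; the function name is
hypothetical):

    #include <stdio.h>

    /* Sketch only: derive a hash key from the full PCI address, so two
     * functions sharing a slot get distinct keys. */
    static void
    qemuPCIAddressKey(char *buf, size_t buflen, unsigned int domain,
                      unsigned int bus, unsigned int slot,
                      unsigned int function)
    {
        /* e.g. "0000:00:04.1" */
        snprintf(buf, buflen, "%.4x:%.2x:%.2x.%.1x",
                 domain, bus, slot, function);
    }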
We do not support hot unplugging multifunction PCI devices yet. If the
device is one function of a multifunction PCI device, we should not allow
it to be hot unplugged.
Detected by Coverity. All existing callers happen to be in
range, so this isn't too serious.
* src/qemu/qemu_cgroup.c (qemuCgroupControllerActive): Check
bounds before dereference.
When peer-2-peer migration was invoked by a client supporting
v3, but where the target server only supported v2, we'd not
correctly shut down the guest.
* src/qemu/qemu_migration.c: Ensure guest is shutdown in
v2 peer-2-peer migration
The v2 migration protocol doesn't use cookies, so we should not
be raising an error if the cookie parameters are NULL.
* src/qemu/qemu_migration.c: Don't raise error if cookie is NULL
The error code for virKillProcess is returned in the errno variable,
not the return value. This mistake caused the logs to be filled with
errors when shutting down QEMU processes.
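A minimal sketch of the corrected check, using plain kill(2) (which
follows the same errno convention) in place of virKillProcess:

    #include <errno.h>
    #include <signal.h>
    #include <stdbool.h>
    #include <sys/types.h>

    /* Sketch only: probe whether a process is still alive, reading the
     * error from errno rather than the return value. */
    static bool
    processIsAlive(pid_t pid)
    {
        if (kill(pid, 0) < 0)
            return errno != ESRCH;   /* ESRCH: process is already gone */
        return true;
    }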
* src/qemu/qemu_process.c: Fix process kill check.
This commit is safe precisely because there has been no release
for any of the enum values being deleted (they were added post-0.9.1).
After the 0.9.2 release, we can then take advantage of
virDomainModificationImpact in more places.
* include/libvirt/libvirt.h.in (virDomainModificationImpact): New
enum.
(virDomainSchedParameterFlags, virMemoryParamFlags): Delete, since
these were never released, and the new enum works fine here.
* src/libvirt.c (virDomainGetMemoryParameters)
(virDomainSetMemoryParameters)
(virDomainGetSchedulerParametersFlags)
(virDomainSetSchedulerParametersFlags): Update documentation.
* src/qemu/qemu_driver.c (qemuDomainSetMemoryParameters)
(qemuDomainGetMemoryParameters, qemuSetSchedulerParametersFlags)
(qemuSetSchedulerParameters, qemuGetSchedulerParametersFlags)
(qemuGetSchedulerParameters): Adjust clients.
* tools/virsh.c (cmdSchedinfo, cmdMemtune): Likewise.
Based on ideas by Daniel Veillard and Hu Tao.
Detected by Coverity. This leaked a cpumap on every iteration
of the loop. Leak introduced in commit 1cc4d02 (v0.9.0).
* src/qemu/qemu_process.c (qemuProcessSetVcpuAffinites): Plug
leak, and hoist allocation outside loop.
In v3 migration, once migration is completed, the VM needs
to be left in a paused state until after Finish3 has been
executed on the target. Only then will the VM be killed
off. When using non-JSON QEMU monitor though, we don't
receive any 'STOP' event from QEMU, so we need to manually
set our state offline & thus release lock manager leases.
It doesn't hurt to run this in the JSON case too, just in
case the event gets lost somehow.
* src/qemu/qemu_migration.c: Explicitly set VM state to
paused when migration completes
The change 18c2a59206 caused
some regressions in the behaviour of virDomainBlockStats
and virDomainBlockInfo in the QEMU driver.
The virDomainBlockInfo API stopped working for inactive
guests if querying a block device.
The virDomainBlockStats API did not promptly report
an error if the guest was not running in some cases.
* src/qemu/qemu_driver.c: Fix inactive guest handling
in BlockStats/Info APIs
The qemuAuditDisk calls in disk hotunplug operations were being
passed 'ret >= 0', but the code which sets ret to 0 was not yet
executed, and the error path had already jumped to the 'cleanup'
label. This meant hotunplug failures were never audited, and
hotunplug success was audited as a failure.
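A sketch of the buggy shape (all names are hypothetical stand-ins for
the real monitor and audit calls):

    #include <stdbool.h>

    /* Hypothetical stand-ins for the real monitor and audit calls. */
    static int  monitorEjectDisk(void) { return -1; }  /* pretend failure */
    static void auditDisk(bool success) { (void)success; }

    static int
    detachDiskBuggy(void)
    {
        int ret = -1;

        if (monitorEjectDisk() < 0)
            goto cleanup;            /* failure: audit is never reached */

        auditDisk(ret >= 0);         /* BUG: ret is still -1 here... */
        ret = 0;                     /* ...because this runs afterwards */

    cleanup:
        return ret;
    }

The fix is to audit the operation with its known outcome on both the
success and failure paths.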
* src/qemu/qemu_hotplug.c: Fix auditing of hotunplug
Commit 4454a9efc7 introduced bad
behaviour on the VIR_EVENT_HANDLE_ERROR condition. This condition
is only hit when an invalid FD is used in poll() (typically due
to a double-close bug). The QEMU monitor code was treating this
condition as non-fatal, and thus libvirt would poll() in a fast
loop forever burning 100% CPU. VIR_EVENT_HANDLE_ERROR must be
handled in the same way as VIR_EVENT_HANDLE_HANGUP, killing the
QEMU instance.
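The same rule at the poll(2) level, as a self-contained sketch:

    #include <poll.h>

    /* Sketch only: POLLERR (e.g. from a double-closed fd) must take the
     * same fatal path as POLLHUP, otherwise the event loop keeps
     * re-polling an fd that can never recover, spinning at 100% CPU. */
    static int
    monitorConditionFatal(short revents)
    {
        return (revents & (POLLERR | POLLHUP)) != 0;
    }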
* src/qemu/qemu_monitor.c: Treat VIR_EVENT_HANDLE_ERROR as EOF
* src/conf/domain_conf.c, src/conf/domain_conf.h: APIs for
inserting/finding/removing virDomainLeaseDefPtr instances
* src/qemu/qemu_driver.c: Wire up hotplug/unplug for leases
* src/qemu/qemu_hotplug.h, src/qemu/qemu_hotplug.c: Support
for hotplug and unplug of leases
Some lock managers associate state with leases, allowing a process
to temporarily release its leases, and re-acquire them later, safe
in the knowledge that no other process has acquired + released the
leases in between.
This is already used between suspend/resume operations, and must
also be used across migration. This passes the lockstate in the
migration cookie. If the lock manager uses lockstate, then it
becomes compulsory to use the migration v3 protocol to get the
cookie support.
* src/qemu/qemu_driver.c: Validate that migration v2 protocol is
not used if lock manager needs state transfer
* src/qemu/qemu_migration.c: Transfer lock state in migration
cookie XML
The QEMU driver integrates with the lock manager infrastructure in a
number of key places:
* During startup, a lock is acquired in between the fork & exec
* During startup, the libvirtd process acquires a lock before
setting file labelling
* During shutdown, the libvirtd process acquires a lock
before restoring file labelling
* During hotplug, unplug & media change, the libvirtd process
  holds a lock while setting/restoring labels
The main content lock is only ever held by the QEMU child process,
or libvirtd during VM shutdown. The rest of the operations only
require libvirtd to hold the metadata locks, relying on the active
QEMU still holding the content lock.
* src/qemu/qemu_conf.c, src/qemu/qemu_conf.h,
src/qemu/libvirtd_qemu.aug, src/qemu/test_libvirtd_qemu.aug:
Add config parameter for configuring lock managers
* src/qemu/qemu_driver.c: Add calls to the lock manager
Update the qemuDomainMigrateBegin method so that it accepts
an optional incoming XML document. This will be validated
for ABI compatibility against the current domain config,
and if this check passes, will be passed back out for use
by the qemuDomainMigratePrepare method on the target.
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h,
src/qemu/qemu_migration.c: Allow custom XML to be passed
Currently the QEMU monitor I/O handler code uses errno values
to report errors. This results in sub-optimal error messages
under certain conditions; in particular, when parsing JSON strings,
malformed data simply results in 'EINVAL'.
This changes the code to use the standard libvirt error reporting
APIs. The virError is stored against the qemuMonitorPtr struct,
and when a monitor API is run, any existing stored error is copied
into that thread's local error.
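A sketch of that mechanism using the public virError helpers (the
struct layout here is illustrative, not the real qemuMonitor):

    #include <libvirt/virterror.h>

    /* Illustrative monitor struct holding the last I/O error. */
    struct monitor {
        virError lastError;
    };

    /* I/O thread: stash a copy of the thread's current error against
     * the monitor before moving on. */
    static void
    monitorSaveError(struct monitor *mon)
    {
        virCopyLastError(&mon->lastError);
    }

    /* API thread: surface any stored error as this thread's error. */
    static void
    monitorRestoreError(struct monitor *mon)
    {
        if (mon->lastError.code != VIR_ERR_OK)
            virSetError(&mon->lastError);
    }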
* src/qemu/qemu_monitor.c, src/qemu/qemu_monitor.h,
src/qemu/qemu_monitor_json.c, src/qemu/qemu_monitor_text.c: Use
virError APIs for all monitor I/O handling code
Currently whenever there is any failure with parsing the monitor,
this is treated in the same way as end-of-file (ie QEMU quit).
The domain is terminated, if not already dead.
With this change, failures in parsing the monitor stream do not
result in the death of QEMU. The guest continues running unchanged,
but all further use of the monitor will be disabled.
The VMM_FAILURE event will be emitted, and the mgmt application
can decide when to kill/restart the guest to regain control.
* src/qemu/qemu_monitor.c, src/qemu/qemu_monitor.h: Run a
different callback for monitor EOF vs error conditions.
* src/qemu/qemu_process.c: Emit VMM_FAILURE event when monitor
fails
* src/qemu/qemu_driver.c (qemuGetSchedulerParameters): Move
guts...
(qemuGetSchedulerParametersFlags): ...to new callback, and honor
flags more accurately.
This patch allows modifying the network interfaces of a domain (qemu).
* src/conf/domain_conf.c src/conf/domain_conf.h src/libvirt_private.syms:
(virDomainNetInsert): Insert a network device into the domain definition.
(virDomainNetIndexByMac): Return the index of a net device in the array.
(virDomainNetRemoveByMac): Remove a NIC with the passed MAC address.
* src/qemu/qemu_driver.c
(qemuDomainAttachDeviceConfig): add code for NICs.
(qemuDomainDetachDeviceConfig): add code for NICs.
Originally most of the libvirt domain-specific calls were blocking
during a migration.
A new mechanism to allow specific calls (blkstat/blkinfo) to be
executed under such conditions has been implemented.
In the long term it'd be desirable to get a more general
solution to mark further APIs as migration safe, without needing
special case code.
* src/qemu/qemu_migration.c: add some additional job signal
flags for doing blkstat/blkinfo during a migration
* src/qemu/qemu_domain.c: add a condition variable that can be
used to efficiently wait for the migration code to clear the
signal flag (see the sketch after this list)
* src/qemu/qemu_driver.c: execute blkstat/blkinfo using the
job signal flags during migration
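A generic sketch of that wait pattern with pthreads (the flag and the
names are illustrative, not the driver's actual job-signal structures):

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  signalCleared = PTHREAD_COND_INITIALIZER;
    static bool signalPending;

    /* Caller side: raise the flag, then sleep until the migration
     * loop has serviced and cleared it. */
    static void
    waitForSignalCleared(void)
    {
        pthread_mutex_lock(&lock);
        signalPending = true;
        while (signalPending)        /* guards against spurious wakeups */
            pthread_cond_wait(&signalCleared, &lock);
        pthread_mutex_unlock(&lock);
    }

    /* Migration loop side: service the request, then wake the waiter. */
    static void
    clearSignal(void)
    {
        pthread_mutex_lock(&lock);
        signalPending = false;
        pthread_cond_signal(&signalCleared);
        pthread_mutex_unlock(&lock);
    }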
When modifying the disk devices of a live domain and the domain
configuration, the function qemuDomainAttachDeviceConfig
first sets dev->data.disk to NULL. Later qemuDomainAttachDeviceLive
accesses dev->data.disk and causes a segfault.
* src/qemu/qemu_driver.c: fix qemuDomainModifyDeviceFlags() accordingly
http://lists.gnu.org/archive/html/qemu-devel/2011-05/threads.html#02162
Currently, qemu silently clips any JSON integer in the range
0x8000000000000000 - 0xffffffffffffffff (all numbers in this range
will be clipped to 0x7fffffffffffffff == LLONG_MAX).
To avoid this, pass these as signed 64-bit integers in the QMP
request.
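A sketch of the workaround: reinterpret the value as signed before
emitting it, so the full 64-bit pattern crosses the wire intact:

    #include <inttypes.h>
    #include <stdio.h>

    /* Sketch only: emit a 64-bit value as a signed JSON integer.
     * 0x8000000000000000 becomes a negative number on the wire, but
     * qemu recovers the identical bit pattern instead of clipping it
     * to LLONG_MAX. */
    static void
    emitJSONInt64(uint64_t val)
    {
        printf("%" PRId64, (int64_t)val);
    }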
The current virDomainMigrateFinish3 method signature attempts to
distinguish two types of errors, by allowing return with ret == 0,
but ddomain == NULL, to indicate a failure to start the guest.
This is flawed, because when ret == 0, there is no way for the
virErrorPtr details to be sent back to the client.
Change the signature of virDomainMigrateFinish3 so it simply
returns a virDomainPtr, in the same way as virDomainMigrateFinish2.
The disk locking code will protect against the only possible
failure mode this doesn't account for (losing connectivity to
libvirtd after Finish3 starts the CPUs, but before the client
sees the reply for Finish3).
* src/driver.h, src/libvirt.c, src/libvirt_internal.h: Change
virDomainMigrateFinish3 to return a virDomainPtr instead of int
* src/remote/remote_driver.c, src/remote/remote_protocol.x,
daemon/remote.c, src/qemu/qemu_driver.c, src/qemu/qemu_migration.c:
Update for API change
When doing migration, if an error occurs in Perform, it must not
be overwritten during Finish/Confirm steps. If an error occurs
in Finish, it must not be overwritten in Confirm.
Previous commit a9d12c2444 added
code to qemudDomainMigrateFinish2 to preserve the error. This
is not the right place, because it is not applicable in non-p2p
migration. The src/libvirt.c virDomainMigrateV2/3 methods need
code to preserve errors for non-p2p migration, while the
doPeer2PeerMigrate2 and doPeer2PeerMigrate3 methods contain
code to preserve errors for p2p migration.
Remove the bogus error preservation from qemudDomainMigrateFinish2
and qemudDomainMigrateFinish3.
Fix virDomainMigrateV3 and doPeer2PeerMigrate3 so that they
preserve any error hit during the Finish3 step, before invoking
Confirm3.
Finally if qemuMigrationFinish fails to resume the CPUs, it must
preserve the error before tearing down the VM, so that VM cleanup
doesn't overwrite it.
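The preservation idiom itself is small; a sketch using the public
error APIs:

    #include <libvirt/virterror.h>

    /* Sketch only: hold the current error across a step that may
     * overwrite it (e.g. Confirm3 or VM teardown), then restore it. */
    static void
    runStepPreservingError(void (*step)(void))
    {
        virErrorPtr orig = virSaveLastError();  /* snapshot the error */

        step();                          /* may raise its own error */

        if (orig) {
            if (orig->code != VIR_ERR_OK)
                virSetError(orig);       /* put the original back */
            virFreeError(orig);
        }
    }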
* src/libvirt.c: Preserve error before invoking Confirm3
* src/qemu/qemu_driver.c: Remove bogus error preservation
code in qemudDomainMigrateFinish2/qemudDomainMigrateFinish3
* src/qemu/qemu_migration.c: Preserve error before invoking Confirm3
and after resume fails in qemuMigrationFinish.
* src/libvirt.c: Add further debug lines in helper APIs for
migration
* src/qemu/qemu_migration.c: Add debug lines for all internal
migration API parameters
Even when failing to start CPUs, the finish method was returning
a success result. Fix this so that the QEMU process is killed
off when finish fails under v3 protocol. Also rename the
killOnFinish boolean to 'v3proto' to make it clearer that this
is a tunable based on the migration protocol version.
* src/qemu/qemu_driver.c: Update for API change
* src/qemu/qemu_migration.c, src/qemu/qemu_migration.h: Kill
VM in qemuMigrationFinish if failing to start CPUs
The SPICE seamless migration process requires data to be passed
back from the target host, to the source host via a cookie.
The cookie includes the target host's hostname, but this was not
stored, merely validated. This patch explicitly records the
remote hostname after parsing the cookie, and uses it when
initiating the SPICE migration.
* qemu/qemu_migration.c: Fix SPICE seamless migration hostname
Before running perform in peer-2-peer migration, the current
guest state must be recorded, so that non-live migration can
correctly unpause a running guest on completion.
* src/qemu/qemu_migration.c: Move check for offline guest
to fix non-live migration
The virDomainMigratePerform3 currently has a single URI parameter
whose meaning varies. It is either
- A QEMU migration URI (normal migration)
- A libvirtd connection URI (peer2peer migration)
Unfortunately when using peer2peer migration, without also
using tunnelled migration, it is possible that both URIs are
required.
This adds a second URI parameter to the virDomainMigratePerform3
method, to cope with this scenario. Each parameter now has a fixed
meaning.
NB, there is no way to actually take advantage of this yet,
since virDomainMigrate/virDomainMigrateToURI do not have any
way to provide the 2 separate URIs.
* daemon/remote.c, src/remote/remote_driver.c,
src/remote/remote_protocol.x, src/remote_protocol-structs: Add
the second URI parameter to perform3 message
* src/driver.h, src/libvirt.c, src/libvirt_internal.h: Add
the second URI parameter to Perform3 method
* src/libvirt_internal.h, src/qemu/qemu_migration.c,
src/qemu/qemu_migration.h: Update to handle URIs correctly
This extends the v3 migration protocol such that the
virDomainMigrateBegin3 and virDomainMigratePerform3
methods accept an application supplied XML config for
the target VM.
If the 'xmlin' parameter is NULL, then Begin3 uses the
current guest XML as normal. A driver implementing the
Begin3 method should either reject all non-NULL 'xmlin'
parameters, or strictly validate that the app supplied
XML does not change guest ABI.
The Perform3 method also needed the xmlin parameter to
cope with the Peer2Peer migration sequence.
NB it is not yet possible to use this capability since
neither of the public virDomainMigrate/virDomainMigrateToURI
methods has a way to pass in XML.
* daemon/remote.c, src/remote/remote_driver.c,
src/remote/remote_protocol.x, src/remote_protocol-structs:
Add 'remote_string xmlin' parameter to begin3/perform3
RPC messages
* src/libvirt.c, src/driver.h, src/libvirt_internal.h: Add
'const char *xmlin' parameter to Begin3/Perform3 methods
* src/qemu/qemu_driver.c, src/qemu/qemu_migration.c,
src/qemu/qemu_migration.h: Pass xmlin parameter around
migration methods
Saving a domain to a previously created file also changes its ownership.
This is certainly not what users want if certain conditions are met:
the file is a regular, local file and dynamic_ownership is off.
NB: the enum that uses the string vnet-host (now changed to vhost-net)
is used in XML, but fortunately that hasn't been in an official
release yet, so it can still be fixed.
Since -vnc uses ':' to separate the address from the port, raw
IPv6 addresses need to be escaped like [addr]:port
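A sketch of the escaping rule (names illustrative):

    #include <stdio.h>
    #include <string.h>

    /* Sketch only: format a VNC listen address. A raw IPv6 address
     * contains ':' itself, so wrap it in [] to keep the trailing
     * ":port" separator unambiguous. */
    static void
    formatVNCAddress(char *buf, size_t buflen, const char *addr, int port)
    {
        if (strchr(addr, ':'))                   /* raw IPv6 address */
            snprintf(buf, buflen, "[%s]:%d", addr, port);
        else                                     /* IPv4 or hostname */
            snprintf(buf, buflen, "%s:%d", addr, port);
    }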
* src/qemu/qemu_command.c: Escape raw IPv6 addresses with []
* tests/qemuxml2argvdata/qemuxml2argv-graphics-vnc.args,
tests/qemuxml2argvdata/qemuxml2argv-graphics-vnc.xml: Tweak
to test IPv6 escaping
* docs/schemas/domain.rng: Allow IPv6 addresses, or hostnames
in <graphics> listen attributes
The qemuMigrationConfirm method shouldn't deal with final VM
cleanup, since it can be called from the peer2peer migration,
which expects to still use the 'vm' object afterwards.
Push the cleanup code out of qemuMigrationConfirm, into its
caller, qemuDomainMigrateConfirm3
* src/qemu/qemu_driver.c: Add VM cleanup code to
qemuDomainMigrateConfirm3
* src/qemu/qemu_migration.c, src/qemu/qemu_migration.h: Remove
job handling cleanup from qemuMigrationConfirm