Currently the scan of the /proc/mounts file used to find cgroup mount
points doesn't take into account that mount points may be hidden by
other mount points. For example, in certain Kubernetes environments
/proc/mounts contains the following lines:
cgroup /sys/fs/cgroup/net_prio,net_cls cgroup ...
tmpfs /sys/fs/cgroup tmpfs ...
cgroup /sys/fs/cgroup/net_cls,net_prio cgroup ...
In this particular environment the first mount point is hidden by the
second one. The correct mount point is the third one, but libvirt will
never process it because it only checks the first mount point for each
controller (net_cls in this case). So libvirt will try to use the first
mount point, which doesn't actually exist, and the complete detection
process will fail.
To avoid that issue, this patch changes the virCgroupDetectMountsFromFile
function so that when there are duplicates it takes the information from
the last line in /proc/mounts. This requires removing the previous
explicit condition to skip duplicates, and adding code to free the
memory used by the processing of duplicated lines.
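A minimal standalone sketch of the last-match-wins scan (illustrative
only, not the actual libvirt code):

    #include <mntent.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Scan /proc/mounts and keep the *last* cgroup mount carrying the
     * given controller, so a mount hidden by a later one is not used. */
    static char *
    find_cgroup_mount(const char *controller)
    {
        FILE *mounts = setmntent("/proc/mounts", "r");
        struct mntent entry;
        char buf[4096];
        char *found = NULL;

        if (!mounts)
            return NULL;

        while (getmntent_r(mounts, &entry, buf, sizeof(buf))) {
            if (strcmp(entry.mnt_type, "cgroup") != 0 ||
                !strstr(entry.mnt_opts, controller))
                continue;
            /* Do not skip duplicates: drop the earlier match and
             * remember this later, overriding line instead. */
            free(found);
            found = strdup(entry.mnt_dir);
        }

        endmntent(mounts);
        return found;
    }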
Related-To: https://bugzilla.redhat.com/1468214
Related-To: https://github.com/kubevirt/libvirt/issues/4
Signed-off-by: Juan Hernandez <jhernand@redhat.com>
(cherry picked from commit dacd160d74)
If we are encoding a block of data that is 16 bytes in length,
we cannot leave it as 16 bytes; we must pad it out to the next
block boundary, 32 bytes. Without this padding, the decoder will
incorrectly treat the last byte of plain text as the padding
length, as it can't distinguish padded from non-padded data.
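A tiny standalone illustration of that padding rule (a generic sketch,
not the libvirt code itself):

    #include <stdio.h>

    /* Data is always padded out to the next block boundary; when the
     * input is already block-aligned, a full extra block of padding
     * is added so the decoder can always strip it unambiguously. */
    static size_t
    pad_len(size_t datalen, size_t blocksize)
    {
        return blocksize - (datalen % blocksize);
    }

    int
    main(void)
    {
        /* A 16 byte input on a 16 byte block cipher grows to 32. */
        printf("%zu\n", 16 + pad_len(16, 16));
        return 0;
    }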
The problem exhibited itself when using a 16-byte passphrase
for a LUKS volume:
$ virsh secret-set-value 55806c7d-8e93-456f-829b-607d8c198367 \
$(echo -n 1234567812345678 | base64)
Secret value set
$ virsh start demo
error: Failed to start domain demo
error: internal error: process exited while connecting to monitor: >>>>>>>>>>Len 16
2017-05-02T10:35:40.016390Z qemu-system-x86_64: -object \
secret,id=virtio-disk1-luks-secret0,data=SEtNi5vDUeyseMKHwc1c1Q==,\
keyid=masterKey0,iv=zm7apUB1A6dPcH53VW960Q==,format=base64: \
Incorrect number of padding bytes (56) found on decrypted data
Notice how the padding '56' corresponds to the ordinal value of
the character '8'.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
(cherry picked from commit 71890992da)
/etc/libvirt/nwfilter/*.xml files are installed with no UUID, which
means libvirtd will automatically alter all of them once it starts. Thus
RPM verification will always fail on them. Let's use a trick similar to
the default network XML and store nwfilter XMLs in /usr/share. They will
be copied into /etc in %post. Additionally the /etc files are marked as
%ghost so that they are uninstalled if the RPM package is removed.
Note that the %post script overwrites existing files with new ones on
upgrade, which is what has always been happening.
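In spec-file terms the trick looks roughly like this (an illustrative
fragment; the actual paths and file lists in libvirt.spec differ):

    %post
    cp %{_datadir}/libvirt/nwfilter/*.xml %{_sysconfdir}/libvirt/nwfilter/

    %files
    %{_datadir}/libvirt/nwfilter/*.xml
    %ghost %{_sysconfdir}/libvirt/nwfilter/*.xml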
https://bugzilla.redhat.com/show_bug.cgi?id=1431581
https://bugzilla.redhat.com/show_bug.cgi?id=1378774
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
(cherry picked from commit 1d3963dba5)
The port is stored in the graphics configuration and will also get
released in qemuProcessStop in case of error.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1397440
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
(cherry picked from commit c23b7b81db)
Rather than waiting until we've freed up all the resources, cause the
'workerPool' thread pool to flush as soon as possible during stateCleanup.
Otherwise, it's possible that something still waiting to run will SEGV,
as is the case in the race between libvirtd and a qemu process exiting
simultaneously.
Resolves the following crash:
[1] crash backtrace (bt is shortened a bit):
0 0x00007ffff7282f2b in virClassIsDerivedFrom
(klass=0xdeadbeef, parent=0x55555581d650) at util/virobject.c:169
1 0x00007ffff72835fd in virObjectIsClass
(anyobj=0x7fffd024f580, klass=0x55555581d650) at util/virobject.c:365
2 0x00007ffff7283498 in virObjectLock
(anyobj=0x7fffd024f580) at util/virobject.c:317
3 0x00007ffff722f0a3 in virCloseCallbacksUnset
(closeCallbacks=0x7fffd024f580, vm=0x7fffd0194db0,
cb=0x7fffdf1af765 <qemuProcessAutoDestroy>)
at util/virclosecallbacks.c:164
4 0x00007fffdf1afa7b in qemuProcessAutoDestroyRemove
(driver=0x7fffd00f3a60, vm=0x7fffd0194db0) at qemu/qemu_process.c:6365
5 0x00007fffdf1adff1 in qemuProcessStop
(driver=0x7fffd00f3a60, vm=0x7fffd0194db0, reason=VIR_DOMAIN_SHUTOFF_CRASHED,
asyncJob=QEMU_ASYNC_JOB_NONE, flags=0)
at qemu/qemu_process.c:5877
6 0x00007fffdf1f711c in processMonitorEOFEvent
(driver=0x7fffd00f3a60, vm=0x7fffd0194db0) at qemu/qemu_driver.c:4545
7 0x00007fffdf1f7313 in qemuProcessEventHandler
(data=0x555555832710, opaque=0x7fffd00f3a60) at qemu/qemu_driver.c:4589
8 0x00007ffff72a84c4 in virThreadPoolWorker
(opaque=0x555555805da0) at util/virthreadpool.c:167
Thread 1 (Thread 0x7ffff7fb1880 (LWP 494472)):
1 0x00007ffff72a7898 in virCondWait
(c=0x7fffd01c21f8, m=0x7fffd01c21a0) at util/virthread.c:154
2 0x00007ffff72a8a22 in virThreadPoolFree
(pool=0x7fffd01c2160) at util/virthreadpool.c:290
3 0x00007fffdf1edd44 in qemuStateCleanup ()
at qemu/qemu_driver.c:1102
4 0x00007ffff736570a in virStateCleanup ()
at libvirt.c:807
5 0x000055555556f991 in main (argc=1, argv=0x7fffffffe458) at libvirtd.c:1660
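A sketch of the resulting ordering in the cleanup path (a hypothetical
simplification; the real qemuStateCleanup tears down many more
resources):

    static int
    qemuStateCleanup(void)
    {
        /* Flush and free the worker pool first, so nothing still
         * queued can run against driver state freed below. */
        virThreadPoolFree(qemu_driver->workerPool);

        /* Only now release the remaining driver resources, e.g. the
         * close callbacks object used by qemuProcessStop. */
        virObjectUnref(qemu_driver->closeCallbacks);
        ...
    }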
(cherry picked from commit 97338eaa7b)
Do not dereference 'dmn' until after virStateCleanup has completed.
During initialization, virStateInitialize uses 'dmn' as the argument
for the daemonInhibitCallback functions. Thus, cleanup cannot
dereference 'dmn' until after calling virStateCleanup, which calls
the daemonInhibitCallback using 'dmn'; otherwise, the following crash occurs:
backtrace (shortened a bit)
1 0x00007fd3a791b2e6 in virCondWait (c=<optimized out>, m=<optimized out>)
at util/virthread.c:154
2 0x00007fd3a791bcb0 in virThreadPoolFree (pool=0x7fd38024ee00)
at util/virthreadpool.c:266
3 0x00007fd38edaa00e in qemuStateCleanup () at qemu/qemu_driver.c:1116
4 0x00007fd3a79abfeb in virStateCleanup () at libvirt.c:808
5 0x00007fd3a85f2c9e in main (argc=<optimized out>, argv=<optimized out>)
at libvirtd.c:1660
Thread 1 (Thread 0x7fd38722d700 (LWP 32256)):
0 0x00007fd3a7900910 in virClassIsDerivedFrom
(klass=0xdfd36058d4853, parent=0x7fd3a8f394d0) at util/virobject.c:169
1 0x00007fd3a7900c4e in virObjectIsClass
(anyobj=anyobj@entry=0x7fd3a8f2f850, klass=<optimized out>)
at util/virobject.c:365
2 0x00007fd3a7900c74 in virObjectLock (anyobj=0x7fd3a8f2f850)
at util/virobject.c:317
3 0x00007fd3a7a24d5d in virNetDaemonRemoveShutdownInhibition
(dmn=0x7fd3a8f2f850) at rpc/virnetdaemon.c:547
4 0x00007fd38ed722cf in qemuProcessStop
(driver=driver@entry=0x7fd380103810, vm=vm@entry=0x7fd38025b6d0,
reason=reason@entry=VIR_DOMAIN_SHUTOFF_SHUTDOWN,
asyncJob=asyncJob@entry=QEMU_ASYNC_JOB_NONE, flags=flags@entry=0)
at qemu/qemu_process.c:5786
5 0x00007fd38edd9428 in processMonitorEOFEvent
(vm=0x7fd38025b6d0, driver=0x7fd380103810) at qemu/qemu_driver.c:4588
6 qemuProcessEventHandler (data=<optimized out>, opaque=0x7fd380103810)
at qemu/qemu_driver.c:4632
7 0x00007fd3a791bb55 in virThreadPoolWorker
(opaque=opaque@entry=0x7fd3a8f1e4c0) at util/virthreadpool.c:145
(cherry picked from commit 85c3a1820a)
When trying to migrate a hugepage-enabled guest, I've noticed
the following crash. Apparently, if no specific hugepages are
requested:
<memoryBacking>
<hugepages/>
</memoryBacking>
and there are no hugepages configured on the destination, we try
to dereference a NULL pointer.
Program received signal SIGSEGV, Segmentation fault.
0x00007fcc907fb20e in qemuGetHugepagePath (hugepage=0x0) at qemu/qemu_conf.c:1447
1447 if (virAsprintf(&ret, "%s/libvirt/qemu", hugepage->mnt_dir) < 0)
(gdb) bt
#0 0x00007fcc907fb20e in qemuGetHugepagePath (hugepage=0x0) at qemu/qemu_conf.c:1447
#1 0x00007fcc907fb2f5 in qemuGetDefaultHugepath (hugetlbfs=0x0, nhugetlbfs=0) at qemu/qemu_conf.c:1466
#2 0x00007fcc907b4afa in qemuBuildMemoryBackendStr (size=4194304, pagesize=0, guestNode=0, userNodeset=0x0, autoNodeset=0x0, def=0x7fcc70019070, qemuCaps=0x7fcc70004000, cfg=0x7fcc5c011800, backendType=0x7fcc95087228, backendProps=0x7fcc95087218,
force=false) at qemu/qemu_command.c:3297
#3 0x00007fcc907b4f91 in qemuBuildMemoryCellBackendStr (def=0x7fcc70019070, qemuCaps=0x7fcc70004000, cfg=0x7fcc5c011800, cell=0, auto_nodeset=0x0, backendStr=0x7fcc70020360) at qemu/qemu_command.c:3413
#4 0x00007fcc907c0406 in qemuBuildNumaArgStr (cfg=0x7fcc5c011800, def=0x7fcc70019070, cmd=0x7fcc700040c0, qemuCaps=0x7fcc70004000, auto_nodeset=0x0) at qemu/qemu_command.c:7470
#5 0x00007fcc907c5fdf in qemuBuildCommandLine (driver=0x7fcc5c07b8a0, logManager=0x7fcc70003c00, def=0x7fcc70019070, monitor_chr=0x7fcc70004bb0, monitor_json=true, qemuCaps=0x7fcc70004000, migrateURI=0x7fcc700199c0 "defer", snapshot=0x0,
vmop=VIR_NETDEV_VPORT_PROFILE_OP_MIGRATE_IN_START, standalone=false, enableFips=false, nodeset=0x0, nnicindexes=0x7fcc95087498, nicindexes=0x7fcc950874a0, domainLibDir=0x7fcc700047c0 "/var/lib/libvirt/qemu/domain-1-fedora") at qemu/qemu_command.c:9547
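A sketch of the kind of guard needed (illustrative; the exact check
and error message in the real fix may differ):

    /* Report an error instead of dereferencing a NULL hugepage entry
     * when no hugetlbfs is configured on the (destination) host. */
    if (!nhugetlbfs) {
        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                       _("hugetlbfs filesystem is not mounted"));
        return NULL;
    }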
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
(cherry picked from commit 647db05e9a)
There is a possibility that the qemu driver frees its closeCallbacks
object by unreferencing it, as it holds the only reference to the
object, while in fact not all users of the CloseCallbacks have called
their virCloseCallbacksUnset yet.
Backtrace is the following:
Thread #1:
0 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
1 in virCondWait (c=<optimized out>, m=<optimized out>)
at util/virthread.c:154
2 in virThreadPoolFree (pool=0x7f0810110b50)
at util/virthreadpool.c:266
3 in qemuStateCleanup () at qemu/qemu_driver.c:1116
4 in virStateCleanup () at libvirt.c:808
5 in main (argc=<optimized out>, argv=<optimized out>)
at libvirtd.c:1660
Thread #2:
0 in virClassIsDerivedFrom (klass=0xdeadbeef, parent=0x7f0837c694d0) at util/virobject.c:169
1 in virObjectIsClass (anyobj=anyobj@entry=0x7f08101d4760, klass=<optimized out>) at util/virobject.c:365
2 in virObjectLock (anyobj=0x7f08101d4760) at util/virobject.c:317
3 in virCloseCallbacksUnset (closeCallbacks=0x7f08101d4760, vm=vm@entry=0x7f08101d47b0, cb=cb@entry=0x7f081d078fc0 <qemuProcessAutoDestroy>) at util/virclosecallbacks.c:163
4 in qemuProcessAutoDestroyRemove (driver=driver@entry=0x7f081018be50, vm=vm@entry=0x7f08101d47b0) at qemu/qemu_process.c:6368
5 in qemuProcessStop (driver=driver@entry=0x7f081018be50, vm=vm@entry=0x7f08101d47b0, reason=reason@entry=VIR_DOMAIN_SHUTOFF_SHUTDOWN, asyncJob=asyncJob@entry=QEMU_ASYNC_JOB_NONE, flags=flags@entry=0) at qemu/qemu_process.c:5854
6 in processMonitorEOFEvent (vm=0x7f08101d47b0, driver=0x7f081018be50) at qemu/qemu_driver.c:4585
7 qemuProcessEventHandler (data=<optimized out>, opaque=0x7f081018be50) at qemu/qemu_driver.c:4629
8 in virThreadPoolWorker (opaque=opaque@entry=0x7f0837c4f820) at util/virthreadpool.c:145
9 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
10 in start_thread () from /lib64/libpthread.so.0
Let's reference CloseCallbacks object in virCloseCallbacksSet and
unreference in virCloseCallbacksUnset.
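A minimal sketch of that referencing scheme (simplified fragments; the
real functions take more parameters and do the actual bookkeeping):

    int
    virCloseCallbacksSet(virCloseCallbacksPtr closeCallbacks, ...)
    {
        /* Take a reference per registered callback so the object
         * outlives the driver's own reference. */
        virObjectRef(closeCallbacks);
        ...
    }

    int
    virCloseCallbacksUnset(virCloseCallbacksPtr closeCallbacks, ...)
    {
        /* Drop the per-callback reference taken in Set. */
        virObjectUnref(closeCallbacks);
        ...
    }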
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
(cherry picked from commit f47b91148a)
If a transient storage pool is deemed inactive after libvirtd restart, it
would not be deleted from the list. Reuse virStoragePoolUpdateInactive
along with a refactor necessary to properly update the state.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1242801
(cherry picked from commit f3a8e80c13)
After a pool is made inactive the definition objects need to be updated
(if a new definition is prepared) and transient pools need to be
completely removed. Split out the code doing these steps into a separate
function for later reuse.
(cherry picked from commit aced6b2356)
https://bugzilla.redhat.com/show_bug.cgi?id=1405269
If a secret was not provided for what was determined to be a LUKS
encrypted disk (during virStorageFileGetMetadata processing when
called from qemuDomainDetermineDiskChain as a result of a hotplug
attach via qemuDomainAttachDeviceDiskLive), then do not attempt to
look it up (avoiding a libvirtd crash) and do not alter the format
to "luks" when adding the disk; otherwise, the device_add would
fail with a message such as:
"unable to execute QEMU command 'device_add': Property 'scsi-hd.drive'
can't find value 'drive-scsi0-0-0-0'"
because of assumptions that when the format=luks that libvirt would have
provided the secret to decrypt the volume.
Access to unlock the volume will thus be left to the application.
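A sketch of the implied guard (illustrative, not the exact hunk):

    /* No secret was provided for the LUKS volume: skip the secret
     * lookup (avoiding the crash) and leave the disk format untouched
     * instead of forcing it to "luks"; unlocking the volume is left
     * to the application. */
    if (src->encryption &&
        src->encryption->format == VIR_STORAGE_ENCRYPTION_FORMAT_LUKS &&
        src->encryption->nsecrets == 0)
        return 0;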
(cherry picked from commit 7f7d990483)
Pool types that have the VIR_STORAGE_POOL_SOURCE_NAME flag set
allow omitting the <name> element and instead fill out the pool name
from the <source><name> element.
Relax the schema to make <name> optional for these pools.
Expressing that at least one of these is required is out of scope
of the schema.
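In RELAX NG terms the relaxation is roughly (an illustrative fragment;
the surrounding grammar is omitted):

    <optional>
      <element name="name">
        <text/>
      </element>
    </optional>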
(cherry picked from commit 8ef12b96fa)
Commit 839a060 tied the lifecycle of virtlogd more
closely to that of libvirtd. Unfortunately, while starting
virtlogd when libvirtd is started is definitely a good idea,
restarting virtlogd or shutting it down at any time outside
of system poweroff is not.
Revert part of that commit by removing the PartOf= lines,
meaning that only startup requests will be propagated from
libvirtd to virtlogd.
Resolves: https://bugzilla.redhat.com/1372576
(cherry picked from commit f496ce1df3)
We already guarantee that virtlogd.socket is enabled/disabled
along with libvirtd.service, but if libvirtd.service has just
been installed and is started before rebooting, then
virtlogd.socket will not be running and guest startup will
fail.
Add Requires=virtlogd.socket to libvirtd.service to make sure
virtlogd.socket is always started along with libvirtd.service,
and add Before=libvirtd.service to both virtlogd.socket and
virtlogd.service so that virtlogd never disappears before
libvirtd has exited.
Also add PartOf=libvirtd.service to both virtlogd.socket and
virtlogd.service, so that virtlogd can be shut down when not
needed.
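Roughly, the resulting unit relationships (fragments only, not the
complete unit files):

    # virtlogd.socket and virtlogd.service
    [Unit]
    Before=libvirtd.service
    PartOf=libvirtd.service

    # libvirtd.service
    [Unit]
    Requires=virtlogd.socket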
Resolves: https://bugzilla.redhat.com/1372576
(cherry picked from commit 839a060890)
New maint releases use version numbers of just the A.B.C format, not the
old A.B.C.D format. Adjust the check that dynamically changes the Source
URL for maint releases.
Signed-off-by: Cole Robinson <crobinso@redhat.com>
Acked-by: Andrea Bolognani <abologna@redhat.com>
(cherry picked from commit 1d07a5bf3c)
Thanks to the complex capability caching code, virQEMUCapsProbeQMP was
never called when we were starting a new qemu VM. On the other hand,
when we are reconnecting to the qemu process we reload the capability
list from the status XML file. This means that the flag preventing the
function being called was not set and thus we partially reprobed some of
the capabilities.
The recent addition of CPU hotplug clears the
QEMU_CAPS_QUERY_HOTPLUGGABLE_CPUS flag if the machine does not support
it. The partial re-probe on reconnect results in attempting to call the
unsupported command and then killing the VM.
Remove the partial reprobe and depend on the stored capabilities. If it
becomes necessary to reprobe the capabilities in the future, we should
do a full reprobe rather than this partial one.
(cherry picked from commit b87a11340f)
commit 9065cfaa added the ability to disable DNS services for a
libvirt virtual network. If neither DNS nor DHCP is needed for a
network, then we don't need to start dnsmasq, so code was added to
check for this.
Unfortunately, it was written with a great lack of attention to detail
(I can say that, because I was the author), and the loop that checked
if DHCP is needed for the network would never end if the network had
multiple IP addresses and the first <ip> had no <dhcp> subelement
(which would have contained a <range> or <host> subelement, thus
requiring DHCP services).
This patch rewrites the check to be more compact and (more
importantly) finite.
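A compact, clearly finite form of the check (a sketch; names follow
libvirt's style but are simplified):

    /* Returns true if any <ip> element carries a <dhcp> subelement
     * with at least one <range> or <host>, i.e. dnsmasq is needed.
     * The loop visits each address exactly once, so it terminates
     * even when the first <ip> has no <dhcp>. */
    static bool
    networkNeedsDhcp(virNetworkDefPtr def)
    {
        size_t i;

        for (i = 0; i < def->nips; i++) {
            if (def->ips[i].nranges || def->ips[i].nhosts)
                return true;
        }
        return false;
    }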
This bug was present in releases 2.2.0 and 2.3.0, so it will need to be
backported to any relevant maintenance branches.
Reported here:
https://www.redhat.com/archives/libvirt-users/2016-October/msg00032.html
https://www.redhat.com/archives/libvirt-users/2016-October/msg00045.html
(cherry picked from commit bbb333e481)
When I added support for the pcie-expander-bus controller in commit
bc07251f, I incorrectly thought that it only had a single slot
available. Actually it has 32 slots, just like the root complex aka
pcie-root (the part that I *did* get correct is that unlike pcie-root
a pcie-expander-bus doesn't allow any integrated endpoint devices -
only pcie-root-ports and dmi-to-pci-controllers are allowed).
(cherry picked from commit 22afd44171)
If this reminds you of a commit message from around a year ago, it's
41c2aa729f and yes, we're dealing with
"the same thing" again. Or f309db1f4d and
it's similar.
There is logic in place such that if there is no real need for
memory-backend-file, qemuBuildMemoryBackendStr() returns 0. However,
that wasn't the case with hugepage backing. The reason was that we
abused the 'pagesize' variable for storing that information, but
we should rather have a separate one that specifies whether we really
need the new object for hugepage backing. And that variable should be
set only if this particular NUMA cell needs special treatment WRT
hugepages.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1372153
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
(cherry picked from commit 4372a7845acbc6974f6027ef68e7dd3eeb47f425)
When we wanted to break the huge and unmaintainable virsh into
smaller files, the first thing we did was to just move functions into
virsh-*.c files and then #include them from virsh.c. Done this way,
they also needed to be listed under EXTRA_DIST.
However, things got changed since then and now all the virsh-*.c
files are proper source files. Therefore they are listed under
virsh_SOURCES too. But for some reason we forgot to remove them
from EXTRA_DIST.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The libxl driver has long supported migration V3 but has never
indicated so in the connectSupportsFeature API. As a result, apps
such as virt-manager that use the more generic virDomainMigrate API
fail with
libvirtError: this function is not supported by the connection driver:
virDomainMigrate
Add VIR_DRV_FEATURE_MIGRATION_V3 to the list of features marked as
supported in the connectSupportsFeature API.
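The change is essentially one more case in the driver's feature switch
(a sketch, assuming the usual shape of these handlers):

    static int
    libxlConnectSupportsFeature(virConnectPtr conn, int feature)
    {
        switch (feature) {
        case VIR_DRV_FEATURE_MIGRATION_V3:
            /* migration V3 has long been implemented; advertise it */
            return 1;
        default:
            return 0;
        }
    }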
Test 12 from objecteventtest (createXML add event) segfaults on FreeBSD
with a bus error.
At some point it calls testNodeDeviceDestroy() from the test driver,
and it fails when it tries to unlock the device in the "out:" label of
this function.
Unlocking fails because the previous step was a call to
virNodeDeviceObjRemove from conf/node_device_conf.c. This function
removes the given device from the device list and cleans up the object,
including destroying of its mutex. However, it does not nullify the pointer
that was given to it.
As a result, we end up in testNodeDeviceDestroy() here:
out:
if (obj)
virNodeDeviceObjUnlock(obj);
And instead of skipping this, we try to unlock it and fail because of
the destroyed mutex.
Change virNodeDeviceObjRemove to take a double pointer and set the
caller's pointer to NULL.
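A minimal illustration of the double-pointer pattern (simplified; the
real function also unlinks the object from the list):

    void
    virNodeDeviceObjRemove(virNodeDeviceObjListPtr devs,
                           virNodeDeviceObjPtr *dev)
    {
        /* ... unlink *dev from devs, destroy its mutex, free it ... */

        /* Clear the caller's pointer too, so cleanup code such as
         * the "out:" label above sees NULL and skips the unlock. */
        *dev = NULL;
    }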
As bhyve currently doesn't use controller addressing and simply
uses 1 implicit controller for 1 disk device, the scheme looks like
the following:
PCI address -> (implicit controller) -> disk device
So in fact we identify disk devices by the PCI address of the implicit
controller and just pass it to bhyve in the form:
-s pci_addr,ahci-(cd|hd),/path/to/disk
Therefore, we cannot use virDeviceInfoPCIAddressWanted() because it
does not expect that disk devices might need PCI address assignment.
As a result, if a disk was specified without an address, one will not
be generated and the domain will fail to start.
Until proper controller addressing is implemented in the bhyve
driver, force each disk to have a PCI address generated if it was not
specified by the user.
Setting the vcpu count when a cpu topology is specified may result in
an invalid configuration. Since the topology can't be modified, reject the
setting if it doesn't match the requested topology. This will allow
fixing the topology in case it was broken.
Partially fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1370066
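The check boils down to comparing the requested count against the
topology product (a sketch; the error message is illustrative):

    /* With <topology sockets='S' cores='C' threads='T'/> the only
     * valid maximum vcpu count is S * C * T. */
    if (def->cpu && def->cpu->sockets &&
        nvcpus != def->cpu->sockets * def->cpu->cores * def->cpu->threads) {
        virReportError(VIR_ERR_INVALID_ARG, "%s",
                       _("requested vcpu count doesn't match the topology"));
        return -1;
    }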
Validating the vcpu count is more intricate and doing it in the XML
parser will make previously valid configs (with older qemus) vanish.
Now that we have a very similar check in the qemu domain validation
callback we can do it in a more appropriate place.
This basically reverts commit b54de0830a.
Partially resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1370066
ce43cca0e refactored the helper to prepare it for sparse topologies but
forgot to fix the iterator used to fill the structures. This would
result in a weirdly sparsely populated array and possible out-of-bounds
access and crash once sparse vcpu topologies were allowed.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1369988
virVcpuInfo contains the vcpu number that the data refers to. Report
what's returned by the daemon rather than the sequence number, as with
sparse vcpu topologies they won't match.
While detaching/attaching a device in OpenStack, nova calls
vzDomainDetachDevice twice, because the update of the internal
configuration of the ct comes a bit later than the update event.
As a result, we suffer from the second call to detach the same device.
Signed-off-by: Olga Krishtal <okrishtal@virtuozzo.com>
libvirt-python passes the parameter bandwidth = 0
by default. This means that bandwidth is unlimited.
VZ driver doesn't support bandwidth rate limiting,
but we still need to handle it and fail if bandwidth > 0.
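The handling amounts to a simple guard (a sketch; the error code and
message are illustrative):

    /* bandwidth == 0 means "unlimited", the only mode vz provides;
     * any positive rate limit must be rejected explicitly. */
    if (bandwidth > 0) {
        virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED, "%s",
                       _("bandwidth rate limiting is not supported"));
        return -1;
    }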
Signed-off-by: Pavel Glushchak <pglushchak@virtuozzo.com>
* Added VIR_MIGRATE_LIVE, VIR_MIGRATE_UNDEFINE_SOURCE and
VIR_MIGRATE_PERSIST_DEST to supported migration flags
Signed-off-by: Pavel Glushchak <pglushchak@virtuozzo.com>
When support for auto-creating tap devices was added to <interface
type='ethernet'> in commit 9c17d6, the code assumed that
virNetDevTapCreate() would honor the VIR_NETDEV_TAP_CREATE_IFUP flag
that is supported by virNetDevTapCreateInBridgePort(). That isn't the
case - the latter function performs several operations, and one of
them is setting the tap device online. But virNetDevTapCreate() *only*
creates the tap device, and relies on the caller to do everything
else, so qemuInterfaceEthernetConnect() needs to call
virNetDevSetOnline() after the device is successfully created.
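In outline, the fix is one extra call after the device is created
(a fragment; surrounding variables as in qemuInterfaceEthernetConnect):

    if (virNetDevTapCreate(&net->ifname, tunpath, tapfd, tapfdSize,
                           tap_create_flags) < 0)
        goto cleanup;

    /* virNetDevTapCreate() only creates the device and does not honor
     * VIR_NETDEV_TAP_CREATE_IFUP, so set the device online here. */
    if (virNetDevSetOnline(net->ifname, true) < 0)
        goto cleanup;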
The linkstate setting of an <interface> is only meant to change the
online status reported to the guest system by the emulated network
device driver in qemu, but when support for auto-creating tap devices
for <interface type='ethernet'> was added in commit 9717d6, a chunk of
code was also added to qemuDomainChangeNetLinkState() that sets the
online status of the tap device (i.e. the *host* side of the
interface) for type='ethernet'. This was never done for tap devices
used in type='bridge' or type='network' interfaces, nor was it done in
the past for tap devices created by external scripts for
type='ethernet', so we shouldn't be doing it now.
This patch removes the bit of code in qemuDomainChangeNetLinkState()
that modifies online status of the tap device.
The call to virNetDevIPInfoAddToDev() that sets up tap device IP
addresses and routes was somehow incorrectly placed in
qemuInterfaceStopDevice() instead of qemuInterfaceStartDevice() in
commit fe8567f6. This fixes that error by moving the call to
virNetDevIPInfoAddToDev() to qemuInterfaceStartDevice().
Signed-off-by: Vasiliy Tolstov <v.tolstov@selfip.ru>
This patch removes the old vcpu unplug code completely and replaces it
with the new code using device_del. The old hotplug code basically never
worked with any recent qemu and thus is useless.
As the new code is using device_del, all the implications of using it
are present. Unlike the device deletion code, the vcpu deletion
code fails if the unplug request is not executed in time.
To allow unplugging vcpus, hotplugging vcpus on platforms which
require plugging multiple logical vcpus at once, or plugging them in
an arbitrary order, it is necessary to use the new device_add
interface for vcpu hotplug.
This patch adds support for the device_add interface using the old
setvcpus API by implementing an algorithm to select the appropriate
entities to plug in.
Add support for using the new approach to hotplug vcpus using device_add
during startup of qemu to allow sparse vcpu topologies.
There are a few limitations imposed by qemu on the supported
configuration:
- vcpu0 needs to be always present and not hotpluggable
- non-hotpluggable cpus need to be ordered at the beginning
- order of the vcpus needs to be unique for every single hotpluggable
entity
Qemu also doesn't really allow querying the information necessary to
start a VM with the vcpus directly on the command line. Fortunately
they can be hotplugged during startup.
The new hotplug code uses the following approach:
- non-hotpluggable vcpus are counted and put to the -smp option
- qemu is started
- qemu is queried for the necessary information
- the configuration is checked
- the hotpluggable vcpus are hotplugged
- vcpus are started
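In pseudo-code the sequence looks roughly like this (all helper names
below are hypothetical, for illustration only):

    boot_cpus = count_non_hotpluggable_vcpus(def);  /* goes into -smp  */
    start_qemu(def, boot_cpus);
    info = query_hotpluggable_cpus(mon);            /* query qemu      */
    validate_vcpu_config(def, info);                /* check config    */
    for_each_hotpluggable_enabled_vcpu(def, vcpu)
        device_add_vcpu(mon, vcpu);                 /* hotplug at boot */
    resume_vcpus(mon);                              /* start vcpus     */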
This patch adds a lot of checking code and enables support for
specifying individual vcpu elements with qemu.
The vcpu order information is extracted only for hotpluggable entities,
while vcpu definitions belonging to the same hotpluggable entity need
to all share the order information.
We also can't overwrite it right away in the vcpu info detection code as
the order is necessary to add the hotpluggable vcpus enabled on boot in
the correct order.
The helper will store the order information in places where we are
certain that it's necessary.