The PPC driver needs to convert legacy POWERx_v* CPU model names into
POWERx to maintain backward compatibility with existing domains. This
patch adds a new step to the guest CPU configuration workflow which CPU
drivers can use to convert legacy CPU definitions.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The crash was introduced by commit 7a51d9ebb, which started using
monitor commands without acquiring a job, which is unsafe and leads
to simultaneous access to the vm->mon structure by different threads.
The crash backtrace is the following (shortened):
Program received signal SIGSEGV, Segmentation fault.
qemuMonitorSend (mon=mon@entry=0x7f4ef4000d20, msg=msg@entry=0x7f4f18e78640) at qemu/qemu_monitor.c:1011
1011 while (!mon->msg->finished) {
0 qemuMonitorSend () at qemu/qemu_monitor.c:1011
1 0x00007f691abdc720 in qemuMonitorJSONCommandWithFd () at qemu/qemu_monitor_json.c:298
2 0x00007f691abde64a in qemuMonitorJSONCommand at qemu/qemu_monitor_json.c:328
3 qemuMonitorJSONQueryCPUs at qemu/qemu_monitor_json.c:1408
4 0x00007f691abcaebd in qemuMonitorGetCPUInfo g@entry=false) at qemu/qemu_monitor.c:1931
5 0x00007f691ab96863 in qemuDomainRefreshVcpuHalted at qemu/qemu_domain.c:6309
6 0x00007f691ac0af99 in qemuDomainGetStatsVcpu at qemu/qemu_driver.c:18945
7 0x00007f691abef921 in qemuDomainGetStats at qemu/qemu_driver.c:19469
8 qemuConnectGetAllDomainStats at qemu/qemu_driver.c:19559
9 0x00007f693382e806 in virConnectGetAllDomainStats at libvirt-domain.c:11546
10 0x00007f6934470c40 in remoteDispatchConnectGetAllDomainStats at remote.c:6267
(gdb) p mon->msg
$1 = (qemuMonitorMessagePtr) 0x0
This change fixes it by calling qemuDomainRefreshVcpuHalted only when a job is acquired.
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
For machinetypes with a pci-root bus (all legacy PCI), libvirt will
make a "fake" reservation for one extra slot prior to assigning
addresses to unaddressed PCI endpoint devices in the domain. This will
trigger auto-adding of a pci-bridge for the final device to be
assigned an address *if that device would otherwise have been the
last device on the last available pci-bridge*; thus it assures that
there will always be at least one slot left open in the domain's bus
topology for expansion (which is important both for hotplug (since a
new pci-bridge can't be added while the guest is running) and for
offline additions to the config (since adding a new device might
otherwise in some cases require re-addressing existing devices, which
we want to avoid)).
It's important to note that for the above case (legacy PCI), we must
check for the special case of all slots on all buses being occupied
*prior to assigning any addresses*, and avoid attempting to reserve
the extra address in that case, because there is no free address in
the existing topology, so no place to auto-add a pci-bridge for
expansion (i.e. it would always fail anyway). Since that condition can
only be reached by manual intervention, this is acceptable.
For machinetypes with pcie-root (Q35, aarch64 virt), libvirt's
methodology for automatically expanding the bus topology is different
- pcie-root-ports are plugged into slots (soon to be functions) of
pcie-root as needed, and the new endpoint devices are assigned to the
single slot in each pcie-root-port. This is done so that the devices
are, by default, hotpluggable (the slots of pcie-root don't support
hotplug, but the single slot of the pcie-root-port does). Since
pcie-root-ports can only be plugged into pcie-root, and we don't
auto-assign endpoint devices to the pcie-root slots, this means
topology expansion doesn't compete with endpoint devices for slots, so
we don't need to worry about checking for all "useful" slots being
free *prior* to assigning addresses to new endpoint devices - as a
matter of fact, if we attempt to reserve the open slots before the
used slots, it can lead to errors.
Instead this patch just reserves one slot for a "future potential"
PCIe device after doing the assignment for actual devices, but only
if the only PCI controller defined prior to starting address
assignment was pcie-root, and only if we auto-added at least one PCI
controller during address assignment. This assures two things:
1) that reserving the open slots will only be done when the domain is
initially defined, never at any time after, and
2) that if the user understands enough about PCI controllers to be
adding them manually, we don't mess up their plan by adding
extras - if they know enough to add one pcie-root-port, or
to manually assign addresses such that no pcie-root-ports are
needed, they know enough to add extra pcie-root-ports if they want
them (this could be called the "libguestfs clause", since
libguestfs needs to be able to create domains with as few
devices/controllers as possible).
This is set to reserve a single free port for now, but could be
increased in the future if public sentiment goes in that direction
(it's easy to increase later, but essentially impossible to decrease).
Real Q35 hardware has an ICH9 chip that includes several integrated
devices at particular addresses (see the file docs/q35-chipset.cfg in
the qemu source). libvirt already attempts to put the first two sets
of ich9 USB2 controllers it finds at 00:1D.* and 00:1A.* to match the
real hardware. This patch does the same for the ich9 "HD audio"
device.
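On real ICH9 hardware the HD audio function sits at 00:1B.0, so a sound
device auto-placed by this patch would presumably end up with an address
along these lines in the domain XML (shown here only as an illustration):

<sound model='ich9'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x1b' function='0x0'/>
</sound>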
The main inspiration for this patch is that currently the *only*
device in a reasonable "workstation" type virtual machine config that
requires a legacy PCI slot is the audio device. Without this patch,
the standard Q35 machine created by virt-manager will have a
dmi-to-pci-bridge and a pci-bridge just for the sound device; with the
patch (and if you change the sound device model from the default
"ich6" to "ich9"), the machine definition constructed by virt-manager
has absolutely no legacy PCI controllers - any legacy PCI devices
(e.g. video and sound) are on pcie-root as integrated devices.
Previously we added a set of EHCI+UHCI controllers to Q35 machines to
mimic real hardware as closely as possible, but recent discussions
have pointed out that the nec-usb-xhci (USB3) controller is much more
virtualization-friendly (uses less CPU), so this patch switches the
default for Q35 machinetypes to add an XHCI instead (if it's
supported, which it of course *will* be).
Since none of the existing test cases left out USB controllers in the
input XML, a new Q35 test case was added which has *no* devices, so it
ends up with only the defaults always put in by qemu, plus those added
by libvirt.
Now that a dmi-to-pci-bridge is automatically added just when it's needed
(when a pci-bridge is being added), we no longer have any need to
force-add one to every single Q35 domain.
Previously libvirt would only add pci-bridge devices automatically
when an address was requested for a device that required a legacy PCI
slot and none was available. This patch expands that support to
dmi-to-pci-bridge (which is needed in order to add a pci-bridge on a
machine with a pcie-root), and pcie-root-port (which is needed to add
a hotpluggable PCIe device). It does *not* automatically add
pcie-switch-upstream-ports or pcie-switch-downstream-ports (and
currently there are no plans for that).
Given the existing code to auto-add pci-bridge devices, automatically
adding pcie-root-ports is fairly straightforward. The
dmi-to-pci-bridge support is a bit tricky though, for a few reasons:
1) Although the only reason to add a dmi-to-pci-bridge is so that
there is a reasonable place to plug in a pci-bridge controller,
most of the time it's not the presence of a pci-bridge *in the
config* that triggers the requirement to add a dmi-to-pci-bridge.
Rather, it is the presence of a legacy-PCI device in the config,
which triggers auto-add of a pci-bridge, which triggers auto-add of
a dmi-to-pci-bridge (this is handled in
virDomainPCIAddressSetGrow() - if there's a request to add a
pci-bridge we'll check if there is a suitable bus to plug it into;
if not, we first add a dmi-to-pci-bridge).
2) Once there is already a single dmi-to-pci-bridge on the system,
there won't be a need for any more, even if it's full, as long as
there is a pci-bridge with an open slot - you can also plug
pci-bridges into existing pci-bridges. So we have to make sure we
don't add a dmi-to-pci-bridge unless there aren't any
dmi-to-pci-bridges *or* any pci-bridges.
3) Although it is strongly discouraged, it is legal for a pci-bridge
to be directly plugged into pcie-root, and we don't want to
auto-add a dmi-to-pci-bridge if there is already a pci-bridge
that's been forced directly into pcie-root.
Although libvirt will now automatically create a dmi-to-pci-bridge
when it's needed, the code still remains for now that forces a
dmi-to-pci-bridge on all domains with pcie-root (in
qemuDomainDefAddDefaultDevices()). That will be removed in a future
patch.
For now, the pcie-root-ports are added one per slot, which is a bit
wasteful and means it will fail after 31 total PCIe devices (30 if
there are also some PCI devices), but helps keep the changeset down
for this patch. A future patch will have 8 pcie-root-ports sharing the
functions on a single slot.
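For a sense of where that future change is headed, two pcie-root-ports
sharing a single slot of pcie-root would look roughly like this in the
domain XML (the addresses here are only illustrative; function 0 carries
multifunction='on'):

<controller type='pci' model='pcie-root-port'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
</controller>
<controller type='pci' model='pcie-root-port'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
</controller>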
Andrea had the right idea when he disabled the "reserve an extra
unused slot" bit for aarch64/virt. For *any* PCI Express-based
machine, it is pointless since 1) an extra legacy-PCI slot can't be
used for hotplug (hotplug into legacy PCI slots doesn't work on PCI
Express machinetypes), and 2) even for "coldplug" expansion, everybody
will want to expand using Express controllers, not legacy PCI.
This patch eliminates the extra slot reservation unless the system has
a pci-root (i.e. legacy PCI).
The nec-usb-xhci device (which is a USB3 controller) has always
presented itself as a PCI device when plugged into a legacy PCI slot,
and a PCIe device when plugged into a PCIe slot, but libvirt has
always auto-assigned it to a legacy PCI slot.
This patch changes that behavior to auto-assign to a PCIe slot on
systems that have pcie-root (e.g. Q35 and aarch64/virt).
Since we don't yet auto-create pcie-*-port controllers on demand, this
means a config with an nec-xhci USB controller that has no PCI address
assigned will also need to have an otherwise-unused pcie-*-port
controller specified:
<controller type='pci' model='pcie-root-port'/>
<controller type='usb' model='nec-xhci'/>
(this assumes there is an otherwise-unused slot on pcie-root to accept
the pcie-root-port)
The e1000e is an emulated network device based on the Intel 82574,
present in qemu 2.7.0 and later. Among other differences from the
e1000, it presents itself as a PCIe device rather than legacy PCI. In
order to get it assigned to a PCIe controller, this patch updates the
flags setting for network devices when the model name is "e1000e".
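For example, an interface defined roughly like this (the network name is
just a placeholder) will now be assigned to a PCIe slot rather than a
legacy PCI slot:

<interface type='network'>
  <source network='default'/>
  <model type='e1000e'/>
</interface>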
(Note that for some reason libvirt has never validated the network
device model names other than to check that there are no dangerous
characters in them. That should probably change, but is the subject of
another patch.)
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1343094
libvirt previously assigned nearly all devices to a "hotpluggable"
legacy PCI slot even on machines with a PCIe root bus (and even though
most such machines don't even support hotplug on legacy PCI slots!)
Forcing all devices onto legacy PCI slots means that the domain will
need a dmi-to-pci-bridge (to convert from PCIe to legacy PCI) and a
pci-bridge (to provide hotpluggable legacy PCI slots which, again,
usually aren't hotpluggable anyway).
To help reduce the need for these legacy controllers, this patch tries
to assign virtio-1.0-capable devices to PCIe slots whenever possible,
by setting appropriate connectFlags in
virDomainCalculateDevicePCIConnectFlags(). Happily, when that function
was written (just a few commits ago) it was created with a
"virtioFlags" argument, set by both of its callers, which is the
proper connectFlags to set for any virtio-*-pci device - depending on
the arch/machinetype of the domain, and whether or not the qemu binary
supports virtio-1.0, that flag will have either been set to PCI or
PCIe. This patch merely enables the functionality by setting the flags
for the device to whatever is in virtioFlags if the device is a
virtio-*-pci device.
NB: the first virtio video device will be placed directly on bus 0
slot 1 rather than on a pcie-root-port due to the override for primary
video devices in qemuDomainValidateDevicePCISlotsQ35(). Whether or not
to change that is a topic of discussion, but this patch doesn't change
that particular behavior.
NB2: since the slot must be hotpluggable, and pcie-root (the PCIe root
complex) does *not* support hotplug, this means that suitable
controllers must also be in the config (i.e. either pcie-root-port or
pcie-switch-downstream-port). For now, libvirt doesn't add those
automatically, so if you put virtio devices in a config for a qemu
that has PCIe-capable virtio devices, you'll need to add extra
pcie-root-ports yourself. That requirement will be eliminated in a
future patch, but for now, it's simple to do this:
<controller type='pci' model='pcie-root-port'/>
<controller type='pci' model='pcie-root-port'/>
<controller type='pci' model='pcie-root-port'/>
...
Partially Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1330024
This patch cleans up the connect flags for certain types/models of
devices that aren't PCI to return 0. In the future that may be used as
an indicator to the caller about whether or not a device needs a PCI
address. For now it's just ignored, except in
virDomainPCIAddressEnsureAddr() - called during device hotplug - where
the flags in some cases actually need to be re-set to PCI|HOTPLUGGABLE
just in case someone (in some old config) has manually set a PCI
address for a device that isn't PCI.
Before now, all the qemu hotplug functions assumed that all devices to
be hotplugged were legacy PCI endpoint devices
(VIR_PCI_CONNECT_TYPE_PCI_DEVICE). This worked out "okay", because all
devices *are* legacy PCI endpoint devices on x86/440fx machinetypes,
and hotplug didn't work properly on machinetypes using PCIe anyway
(hotplugging onto a legacy PCI slot doesn't work, and until commit
b87703cf any attempt to manually specify a PCIe address for a
hotplugged device would be erroneously rejected).
This patch makes all qemu hotplug operations honor the pciConnectFlags
set by the single all-knowing function
qemuDomainDeviceCalculatePCIConnectFlags(). This is done in 3 steps,
but in a single commit since we would have to touch the other points
at each step anyway:
1) add a flags argument to the hypervisor-agnostic
virDomainPCIAddressEnsureAddr() (previously it hardcoded
..._PCI_DEVICE)
2) add a new qemu-specific function qemuDomainEnsurePCIAddress() which
gets the correct pciConnectFlags for the device from
qemuDomainDeviceCalculatePCIConnectFlags(), then calls
virDomainPCIAddressEnsureAddr().
3) in qemu_hotplug.c replace all calls to
virDomainPCIAddressEnsureAddr() with calls to
qemuDomainEnsurePCIAddress()
So in effect, we're putting a "shim" on top of all calls to
virDomainPCIAddressEnsureAddr() that sets the right pciConnectFlags.
Set pciConnectFlags in each device's DeviceInfo and then use those
flags later when validating existing addresses in
qemuDomainCollectPCIAddress() and when assigning new addresses with
qemuDomainPCIAddressReserveNextAddr() (rather than scattering the
logic about which devices need which type of slot all over the place).
Note that the exact flags set by
qemuDomainDeviceCalculatePCIConnectFlags() are different from the
flags previously set manually in qemuDomainCollectPCIAddress(), but
this doesn't matter because all validation of addresses in that case
ignores the setting of the HOTPLUGGABLE flag, and treats PCIE_DEVICE
and PCI_DEVICE the same (this lax checking was done on purpose,
because there are some things that we want to allow the user to
specify manually, e.g. assigning a PCIe device to a PCI slot, that we
*don't* ever want libvirt to do automatically). The flag settings that
we *really* want to match are 1) the old flag settings in
qemuDomainAssignDevicePCISlots() (which is HOTPLUGGABLE | PCI_DEVICE
for everything except PCI controllers) and 2) the new flag settings
done by qemuDomainDeviceCalculatePCIConnectFlags() (which are
currently exactly that - HOTPLUGGABLE | PCI_DEVICE for everything
except PCI controllers).
The lowest level function of this trio
(qemuDomainDeviceCalculatePCIConnectFlags()) aims to be the single
authority for the virDomainPCIConnectFlags to use for any given device
using a particular arch/machinetype/qemu-binary.
qemuDomainFillDevicePCIConnectFlags() sets info->pciConnectFlags in a
single device (unless it has no virDomainDeviceInfo, in which case
it's a NOP).
qemuDomainFillAllPCIConnectFlags() sets info->pciConnectFlags in all
devices that have a virDomainDeviceInfo.
The latter two functions aren't called anywhere yet. This commit is
just making them available. Later patches will replace all the current
hodge-podge of flag settings with calls to this single authority.
Coverity identified that this variable might be leaked. And it's
right. If an error occurred and we have to roll back, control jumps
to the try_remove label where we save the current error (see
0e82fa4c34 for more info). However, inside that code a jump to
another label is possible, thus leaking the error object.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
As was suggested in an earlier review comment[1], we can
catch some additional code points by cleaning up how we use the
hostdev subsystem type in some switch statements.
[1] End of https://www.redhat.com/archives/libvir-list/2016-September/msg00399.html
Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
Signed-off-by: John Ferlan <jferlan@redhat.com>
The memory device alias needs to be treated as machine ABI as qemu is
using it in the migration stream for section labels. To simplify this,
generate the alias from the slot number unless an existing broken
configuration is detected.
With this patch the aliases are predictable and even certain
configurations which would not be migratable previously are fixed.
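As an illustration (the size and slot number are arbitrary), a dimm
placed in slot 2 now ends up with a slot-derived alias along these lines
in the live XML:

<memory model='dimm'>
  <target>
    <size unit='KiB'>524288</size>
    <node>0</node>
  </target>
  <address type='dimm' slot='2'/>
  <alias name='dimm2'/>
</memory>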
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1359135
As with other devices, assign the slot number right away when adding the
device. This will make the slot numbers static as we do with other
addressing elements and it will ultimately simplify allocation of the
alias in a static way which does not break with qemu.
Detect on reconnect to a running qemu VM whether the alias of a
hotpluggable memory device (dimm) does not match the dimm slot number
it is connected to. This is necessary as qemu is actually
considering the alias as machine ABI used to connect the backend object
to the dimm device.
This will require us to keep them consistent so that we can reliably
restore them on migration. In some situations it was possible to
create a mismatched configuration, and qemu would refuse to restore
the migration stream.
To avoid breaking existing VMs we'll need to keep the old algorithm
though.
Simplify handling of the 'dimm' address element by allowing only the
slot number to be specified. This will allow libvirt to allocate slot
numbers before starting qemu.
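A minimal sketch of the resulting address element, assuming a dimm
placed in slot 2 (the base address is left for libvirt/qemu to compute):

<address type='dimm' slot='2'/>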
https://bugzilla.redhat.com/show_bug.cgi?id=1386976
We have everything ready. Actually the only limitation was our
check that denied hotplug of vhost-user.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If there is an error hotplugging a net device (for whatever
reason) a rollback operation is performed. However, whilst doing
so various helper functions that are called report errors on
their own. This results in the original error being overwritten,
thus misleading the user.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Even when using /dev/shm/asdf as the backend, we still need to make
the mapping shared. The original patch forgot to add that parameter.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1392031
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
../../src/qemu/qemu_capabilities.c:3757: error: declaration of
'basename' shadows a global declaration [-Wshadow]
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Function qemuDomainAttachShmemDevice() steals the device data if the
hotplug was successful, but the condition instead checked for
unsuccessful execution.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Propagate the selected or default level to qemu if it's supported.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1376009
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
This helps in selecting the log level of the gluster gfapi; the output
is sent to stderr. The option is 'gluster_debug_level' and can be tuned
by editing '/etc/libvirt/qemu.conf'.
Debug levels range from 0 to 9, with 9 being the most verbose, and 0
representing no debugging output. The default is the same as it was
before, which is a level of 4. The current logging levels defined in
the gluster gfapi are:
0 - None
1 - Emergency
2 - Alert
3 - Critical
4 - Error
5 - Warning
6 - Notice
7 - Info
8 - Debug
9 - Trace
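For example, to get the most verbose gfapi output, one could set the
following in /etc/libvirt/qemu.conf:

gluster_debug_level = 9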
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Allow detecting capabilities according to the qemu QMP schema. This is
necessary as sometimes the availability of certain options depends on
the presence of a field in the schema.
This patch adds support for loading the QMP schema when detecting qemu
capabilities and adds a very simple query language to allow traversing
the schema and selecting a certain element from it.
The infrastructure in this patch uses a query path to set a specific
capability flag according to the availability of the given element in
the schema.
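As a rough sketch of the idea (the exact query syntax shown here is an
assumption, not a statement of the implemented grammar), a capability
could be keyed to the presence of a nested schema member via a path
such as:

blockdev-add/arg-type/options/+gluster/debug-level

where each '/' step descends into the members of the preceding element
and a '+' prefix selects one variant of a discriminated union.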
https://bugzilla.redhat.com/show_bug.cgi?id=1379196
Add a check in qemuCheckDiskConfig for an invalid combination
of using the 'scsi' bus for a block 'lun' device and any disk
source format other than 'raw'.
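For instance, a hypothetical disk like the following (a qcow2-formatted
source exposed as a 'lun' device on the 'scsi' bus) is now rejected; the
driver type would have to be 'raw':

<disk type='block' device='lun'>
  <driver name='qemu' type='qcow2'/>
  <source dev='/dev/sdb'/>
  <target dev='sda' bus='scsi'/>
</disk>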
Commit c29e6d4805 caused a build failure on RHEL-6:
../../src/qemu/qemu_capabilities.c: In function 'virQEMUCapsIsValid':
../../src/qemu/qemu_capabilities.c:4085: error: declaration of 'ctime'
shadows a global declaration [-Wshadow]
/usr/include/time.h:258: error: shadowed declaration is here [-Wshadow]
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Let's keep all run time validation of cached QEMU capabilities in
virQEMUCapsIsValid and call it whenever we access the cache.
virQEMUCapsInitCached should keep only the checks which do not make
sense once the cache is loaded in memory.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
virQEMUCapsLoadCache loads QEMU capabilities from a file, but strangely
enough it returns the loaded QEMU binary's ctime in the qemuctime
parameter instead of storing it in qemuCaps.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This is needed in order to migrate a domain with shmem devices, as
such a domain is not allowed to migrate.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
QEMU added support for ivshmem-plain and ivshmem-doorbell. Those are
reworked variants of legacy ivshmem that are compatible from the guest
POV, but not from the host's POV, and have sane specification and handling.
Details about the newer device type can be found in qemu's commit
5400c02b90bb:
http://git.qemu.org/?p=qemu.git;a=commit;h=5400c02b90bb
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
We're keeping some things at their defaults and that's not something we
are doing intentionally. Let's save some sensible defaults upfront in order to
avoid having problems later. The details for the defaults (of the newer
implementation) can be found in qemu's commit 5400c02b90bb:
http://git.qemu.org/?p=qemu.git;a=commit;h=5400c02b90bb
Since we are merely saving the defaults it will not change the guest ABI
and thanks to the fact that we're doing it in the PostParse callback it
will not break the ABI stability checks.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
The old ivshmem is deprecated in QEMU, so let's use the better
ivshmem-{plain,doorbell} variants instead.
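A minimal sketch of the two variants in domain XML (names, size, and the
socket path are placeholders):

<shmem name='shmem0'>
  <model type='ivshmem-plain'/>
  <size unit='M'>4</size>
</shmem>
<shmem name='shmem1'>
  <model type='ivshmem-doorbell'/>
  <server path='/var/lib/libvirt/shmem-shmem1-sock'/>
  <msi vectors='32'/>
</shmem>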
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Unlike other migration capabilities, post-copy is also set on the
destination host, which means it doesn't disappear once the domain is
migrated. As a result, other functionality which internally uses
migration to a file (virDomainManagedSave, virDomainSave,
virDomainCoreDump) may fail after migration because the post-copy
capability is still set.
https://bugzilla.redhat.com/show_bug.cgi?id=1374718
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
If we fail to unlink the old domain config file, we goto rollback.
But inside rollback, we forgot to unlink the new domain config file.
This patch fixes this issue.
Signed-off-by: Chen Hanxiao <chenhanxiao@gmail.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Whilst working on another issue, I've noticed that in some
functions we have a local @driver variable along with access to the
global @qemu_driver variable. This makes no sense.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Rather than waiting until we've freed up all the resources, cause the
'workerPool' thread pool to flush as soon as possible during stateCleanup.
Otherwise, it's possible something waiting to run will SEGV, as is the
case during a race between libvirtd exiting and a qemu process exiting
simultaneously.
Resolves the following crash (backtrace shortened a bit):
0 0x00007ffff7282f2b in virClassIsDerivedFrom
(klass=0xdeadbeef, parent=0x55555581d650) at util/virobject.c:169
1 0x00007ffff72835fd in virObjectIsClass
(anyobj=0x7fffd024f580, klass=0x55555581d650) at util/virobject.c:365
2 0x00007ffff7283498 in virObjectLock
(anyobj=0x7fffd024f580) at util/virobject.c:317
3 0x00007ffff722f0a3 in virCloseCallbacksUnset
(closeCallbacks=0x7fffd024f580, vm=0x7fffd0194db0,
cb=0x7fffdf1af765 <qemuProcessAutoDestroy>)
at util/virclosecallbacks.c:164
4 0x00007fffdf1afa7b in qemuProcessAutoDestroyRemove
(driver=0x7fffd00f3a60, vm=0x7fffd0194db0) at qemu/qemu_process.c:6365
5 0x00007fffdf1adff1 in qemuProcessStop
(driver=0x7fffd00f3a60, vm=0x7fffd0194db0, reason=VIR_DOMAIN_SHUTOFF_CRASHED,
asyncJob=QEMU_ASYNC_JOB_NONE, flags=0)
at qemu/qemu_process.c:5877
6 0x00007fffdf1f711c in processMonitorEOFEvent
(driver=0x7fffd00f3a60, vm=0x7fffd0194db0) at qemu/qemu_driver.c:4545
7 0x00007fffdf1f7313 in qemuProcessEventHandler
(data=0x555555832710, opaque=0x7fffd00f3a60) at qemu/qemu_driver.c:4589
8 0x00007ffff72a84c4 in virThreadPoolWorker
(opaque=0x555555805da0) at util/virthreadpool.c:167
Thread 1 (Thread 0x7ffff7fb1880 (LWP 494472)):
1 0x00007ffff72a7898 in virCondWait
(c=0x7fffd01c21f8, m=0x7fffd01c21a0) at util/virthread.c:154
2 0x00007ffff72a8a22 in virThreadPoolFree
(pool=0x7fffd01c2160) at util/virthreadpool.c:290
3 0x00007fffdf1edd44 in qemuStateCleanup ()
at qemu/qemu_driver.c:1102
4 0x00007ffff736570a in virStateCleanup ()
at libvirt.c:807
5 0x000055555556f991 in main (argc=1, argv=0x7fffffffe458) at libvirtd.c:1660
We don't support cpu pinning for TCG domains because QEMU runs them in
one thread only. But the vcpupin command was still able to set pinning,
which resulted in a failed startup, so make sure that doesn't happen.
Signed-off-by: Chen Hanxiao <chenhanxiao@gmail.com>