I tried to attach a SCSI LUN to two different guests, and forgot
to specify "shareable" in the hostdev XML. Attaching the device
to the second guest failed, but the message was not helpful in
telling me what I was doing wrong:
$ cat scsi_scratch_disk.xml
<hostdev mode='subsystem' type='scsi'>
  <source>
    <adapter name='scsi_host3'/>
    <address bus='0' target='15' unit='1074151456'/>
  </source>
</hostdev>
$ virsh attach-device dasd_sles_d99c scsi_scratch_disk.xml
Device attached successfully
$ virsh attach-device dasd_fedora_0e1e scsi_scratch_disk.xml
error: Failed to attach device from scsi_scratch_disk.xml
error: internal error: Unable to prepare scsi hostdev: scsi_host3:0:15:1074151456
I eventually discovered my error, but thought it was weird that
Libvirt doesn't provide something more helpful in this case.
Looking over the code we had just gone through, I commented out
the "internal error" message, and got something more useful:
$ virsh attach-device dasd_fedora_0e1e scsi_scratch_disk.xml
error: Failed to attach device from scsi_scratch_disk.xml
error: Requested operation is not valid: SCSI device 3:0:15:1074151456 is already in use by other domain(s) as 'non-shareable'
Looking over the error paths here, we already issue better messages
deeper in the call chain, but these generic "internal error" messages
overwrite them. Remove them, so that the more detailed errors are
seen.
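As a sketch, the call sites now simply propagate the failure
(abbreviated; qemuHostdevPrepareSCSIDevices is one such callee):

    /* the callee already reported a precise error, e.g. "... is already
     * in use by other domain(s) as 'non-shareable'"; don't overwrite it */
    if (qemuHostdevPrepareSCSIDevices(driver, vm->def->name,
                                      hostdevs, nhostdevs) < 0)
        goto cleanup;   /* no generic VIR_ERR_INTERNAL_ERROR report here */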
Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
Commit 0feebab2 added a call to qemuBlockNodeNamesDetect for completed
jobs when updating block jobs. This affects the drive mirror cancelling
logic, as that function drops the vm lock. Now we have to re-check all
disks preceding the disk with the completed block job before going
on to wait for block job events.
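Roughly, as a sketch (diskJobCompleted is a hypothetical helper and
the signature of qemuBlockNodeNamesDetect is abbreviated):

    size_t i;
 restart:
    for (i = 0; i < vm->def->ndisks; i++) {
        virDomainDiskDefPtr disk = vm->def->disks[i];
        if (diskJobCompleted(disk)) {
            /* drops and re-acquires the vm lock */
            if (qemuBlockNodeNamesDetect(driver, vm, asyncJob) < 0)
                return -1;
            goto restart;   /* earlier disks may have changed meanwhile */
        }
    }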
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuDomainGetNumaParameters would return the automatic nodeset even for
the persistent config if the domain was running. This is incorrect since
the automatic nodeset will be re-queried upon starting the vm.
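The intended behaviour, sketched (virDomainNumatuneFormatNodeset is
the existing helper; surrounding code abbreviated):

    if (flags & VIR_DOMAIN_AFFECT_CONFIG)
        /* persistent config: never mix in the automatic nodeset */
        nodeset = virDomainNumatuneFormatNodeset(persistentDef->numa, NULL, -1);
    else
        /* live domain: the automatic nodeset is valid until the next start */
        nodeset = virDomainNumatuneFormatNodeset(def->numa, priv->autoNodeset, -1);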
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1445325
While peer-to-peer migration enters the Confirm phase even if the
Perform phase fails, the client which initiated a non-p2p migration will
never call virDomainMigrateConfirm* API if the Perform phase failed.
Thus we need to explicitly reset migration before reporting a failure
from the Perform phase API.
https://bugzilla.redhat.com/show_bug.cgi?id=1425003
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Migration with an old QEMU which does not support
query-migrate-parameters would fail because the QMP command has been
called unconditionally since the introduction of TLS migration.
Previously it was only called if the user explicitly requested a
feature which uses QEMU migration parameters. And even then the
situation was not ideal: instead of reporting an unsupported feature
we'd just complain about a missing QMP command.
Trivially, no migration parameters are supported when the
query-migrate-parameters QMP command is missing. There's no need to
report an error if it is missing; the callers will report a better
error if needed.
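Sketch of the resulting check in the QMP code (qemuMonitorJSONHasError
is the existing helper; surrounding code abbreviated):

    if (qemuMonitorJSONHasError(reply, "CommandNotFound")) {
        /* old QEMU: trivially, no migration parameters are supported */
        ret = 0;
        goto cleanup;
    }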
https://bugzilla.redhat.com/show_bug.cgi?id=1441934
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
It should be a comparison of modes between the new and old devices, so
the argument of the second virDomainNetGetActualDirectMode call should
be newdev.
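In other words (sketch; the comparison sits in the code deciding
whether the device needs to be reconnected):

    if (virDomainNetGetActualDirectMode(olddev) !=
        virDomainNetGetActualDirectMode(newdev))   /* was: olddev again */
        needReconnect = true;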
Signed-off-by: ZhiPeng Lu <lu.zhipeng@zte.com.cn>
This patch makes use of the virNetDevSetCoalesce() function to make
appropriate settings effective for devices that support them.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1414627
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
We are currently parsing only rx/frames/max because that's the only
value that makes sense for us. The tun device has just gained support
for this one value; the others are only supported by hardware devices,
which we don't need to worry about, as the only way we'd pass those to
the domain is using <hostdev/> or <interface type='hostdev'/>, and in
those cases the guest can modify the settings itself.
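A sketch of the parsing restriction (virXPathUInt is the existing XML
helper; the field name follows the ethtool naming and the surrounding
parser is abbreviated):

    /* accepts only
     *   <coalesce><rx><frames max='32'/></rx></coalesce>
     * inside <interface>; all other ethtool coalesce knobs are ignored */
    unsigned int frames;
    if (virXPathUInt("string(./coalesce/rx/frames/@max)", ctxt, &frames) == 0)
        coalesce->rx_max_coalesced_frames = frames;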
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
In the vCPU hotplug code, if exiting the monitor failed we would still
attempt to save the status XML. When the daemon is terminated, the
monitor socket is closed; in such a case the written status XML would
not contain the monitor path and would thus be invalid.
Avoid this issue by only saving status XML on success of the monitor
command.
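The ordering then becomes, roughly (error handling abbreviated):

    if (qemuDomainObjExitMonitor(driver, vm) < 0)
        goto cleanup;   /* monitor gone: don't write invalid status XML */

    if (virDomainSaveStatus(driver->xmlopt, cfg->stateDir, vm, driver->caps) < 0)
        goto cleanup;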
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1439452
The history of the USB controller for ppc64 guests is complex and goes
back to libvirt 1.3.1, where the fun started.
Prior to libvirt 1.3.1, if no model for the USB controller was
specified we simply passed "-usb" on the QEMU command line.
Since libvirt 1.3.1 there is a patch (8156493d8d) that fixes this
issue by using "-device pci-ohci,...". It breaks migration to older
libvirts, which was agreed to be acceptable. However, that patch
didn't reflect the change in the domain XML, and the model was still
missing.
Since libvirt 2.2.0 there is a patch (f55eaccb0c) that fixes the
issue of not setting the USB model in the domain XML, which we need
to know about so as not to break migration; since the default model
was *pci-ohci*, it was used as the default in that patch as well.
This patch tries to take all the previous changes into account and
also changes the default for newly defined domains that don't specify
any model for the USB controller.
The VIR_DOMAIN_DEF_PARSE_ABI_UPDATE flag is set only if a new domain
is defined or a new device is added into a domain, which means that
in all other cases we will use the old *pci-ohci* model instead of
the better, non-broken *nec-usb-xhci* model.
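The default selection then looks roughly like this (a sketch; the
real code in the ppc64 controller setup differs in detail):

    if (cont->model == -1 && ARCH_IS_PPC64(def->os.arch)) {
        if (parseFlags & VIR_DOMAIN_DEF_PARSE_ABI_UPDATE)
            cont->model = VIR_DOMAIN_CONTROLLER_MODEL_USB_NEC_XHCI;  /* new guests */
        else
            cont->model = VIR_DOMAIN_CONTROLLER_MODEL_USB_PCI_OHCI;  /* keep ABI */
    }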
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1373184
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
So far there is probably no change allowed under the
VIR_DOMAIN_DEF_PARSE_ABI_UPDATE flag that would break the guest ABI,
but this may change in the future.
This introduces the new VIR_DOMAIN_DEF_PARSE_ABI_UPDATE_MIGRATION
flag, which should be used only for ABI updates that are "safe" for
persistent migration.
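As a sketch (bit positions illustrative, not the real values):

    typedef enum {
        VIR_DOMAIN_DEF_PARSE_ABI_UPDATE           = 1 << 17,
        VIR_DOMAIN_DEF_PARSE_ABI_UPDATE_MIGRATION = 1 << 18,
    } virDomainDefParseFlagsSketch;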
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
With QEMU older than 2.9.0, libvirt uses the CPUID instruction to
determine which CPU features are supported on the host. This was later
used when checking compatibility of guest CPUs. Since QEMU 2.9.0 we ask
QEMU for the host CPU data. But the two methods usually provide disjoint
sets of CPU features because QEMU/KVM does not support all features
provided by the host CPU and, on the other hand, can enable some
features even if the host CPU does not support them.
So if there is a domain which requires a CPU feature disabled by
QEMU/KVM, libvirt will refuse to start it with QEMU 2.9.0 or newer as
its guest CPU is incompatible with the host CPU data we got from QEMU.
But such a domain would happily start on older QEMU (of course, with the
features missing from the guest CPU). To fix this regression, we need to
combine both CPU feature sets when checking guest CPU compatibility.
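Conceptually (a sketch with hypothetical helper names; the real merge
lives in the qemu capabilities code):

    /* build the union of CPUID-detected and QEMU-reported features */
    size_t i;
    for (i = 0; i < cpuidData->nfeatures; i++) {
        const char *name = cpuidData->features[i].name;
        if (!cpuModelHasFeature(qemuModel, name) &&
            cpuModelAddFeature(qemuModel, name) < 0)
            return -1;
    }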
https://bugzilla.redhat.com/show_bug.cgi?id=1439933
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
We already know from QEMU which CPU features will block migration. Let's
use this information to make a migratable copy of the host CPU model and
use it for updating the guest CPU specification. This will allow us to
drop feature filtering from virCPUUpdate, where it was just a hack.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Soon we will need to store multiple host CPU definitions in
virQEMUCapsHostCPUData and qemuCaps users will want to request the one
they need. This patch introduces virQEMUCapsHostCPUType enum which will
be used for specifying the requested CPU definition.
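Sketched from this description (the real enum lives in
qemu_capabilities and may differ in spelling):

    typedef enum {
        VIR_QEMU_CAPS_HOST_CPU_REPORTED,    /* model as reported by QEMU */
        VIR_QEMU_CAPS_HOST_CPU_MIGRATABLE,  /* only migratable features */
        VIR_QEMU_CAPS_HOST_CPU_FULL,        /* everything QEMU can enable */
    } virQEMUCapsHostCPUType;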
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
We need to store several CPU-related data structures for both KVM and
TCG. So instead of keeping two different copies of everything, let's
make a virQEMUCapsHostCPUData struct and use it twice.
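Roughly (a sketch; the field set is abbreviated):

    typedef struct _virQEMUCapsHostCPUData virQEMUCapsHostCPUData;
    struct _virQEMUCapsHostCPUData {
        qemuMonitorCPUModelInfoPtr info;  /* raw query-cpu-model-expansion data */
        virCPUDefPtr reported;            /* host CPU model built from it */
    };
    /* instantiated twice: once for KVM, once for TCG */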
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This introduces virQEMUCapsHostCPUDataCopy which will later be
refactored a bit and called twice from virQEMUCapsNewCopy.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Clang's optimizer is more aggressive at inlining functions than
gcc's, and so will often inline functions that our tests want to
mock-override. This causes the tests to fail in bizarre ways.
We don't want to disable inlining completely, but we must at
least prevent inlining of mocked functions. Fortunately there
is a 'noinline' attribute that lets us control this per function.
A syntax-check rule is added that parses tests/*mock.c to extract
the list of functions that are mocked (restricted to names starting
with the 'vir' prefix). It then checks the src/*.h header files to
ensure each such function has an 'ATTRIBUTE_NOINLINE' annotation.
This should prevent us from bit-rotting in future.
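The annotation itself is a one-liner; a sketch of the definition and
of an annotated declaration (virMockableHelper is illustrative):

    /* in internal.h */
    # define ATTRIBUTE_NOINLINE __attribute__((noinline))

    /* in a src/*.h header, on every function a tests/*mock.c overrides */
    int virMockableHelper(const char *name) ATTRIBUTE_NOINLINE;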
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Introduce new wrapper functions without *Machine* in the function
name that take the whole virDomainDef structure as argument and
call the existing functions with *Machine* in the function name.
Change the arguments of existing functions to *machine* and *arch*
because they don't need the whole virDomainDef structure and they
could be used in places where we don't have virDomainDef.
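The pattern, sketched on the pseries helpers (details may differ):

    bool
    qemuDomainMachineIsPSeries(const char *machine,
                               virArch arch)
    {
        return ARCH_IS_PPC64(arch) && STRPREFIX(machine, "pseries");
    }

    bool
    qemuDomainIsPSeries(const virDomainDef *def)
    {
        return qemuDomainMachineIsPSeries(def->os.machine, def->os.arch);
    }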
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Since the disks are copied by qemu, there's no need to enforce
cache=none. Thankfully the commit that added qemuMigrateDisk did not
break existing configs: if you don't explicitly select any disk to
migrate, the code behaves sanely.
The logic for determining whether a disk should be migrated is
open-coded since using qemuMigrateDisk twice would be semantically
incorrect.
The code that validates whether an internal snapshot is possible would
reject an empty but not-readonly drive. Since floppies can have this
property, add a check for emptiness.
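The added check, as a sketch (virStorageSourceIsEmpty is the existing
helper):

    if (virStorageSourceIsEmpty(disk->src))
        continue;   /* e.g. a medium-less floppy: nothing to snapshot,
                     * so skip it before the read-only/format checks */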
==20406== 8 bytes in 1 blocks are definitely lost in loss record 24 of 1,059
==20406== at 0x4C2CF55: calloc (vg_replace_malloc.c:711)
==20406== by 0x54BF530: virAllocN (viralloc.c:191)
==20406== by 0x54D37C4: virConfGetValueStringList (virconf.c:1001)
==20406== by 0x144E4E8E: virQEMUDriverConfigLoadFile (qemu_conf.c:835)
==20406== by 0x1452A744: qemuStateInitialize (qemu_driver.c:664)
==20406== by 0x55DB585: virStateInitialize (libvirt.c:770)
==20406== by 0x124570: daemonRunStateInit (libvirtd.c:881)
==20406== by 0x5532990: virThreadHelper (virthread.c:206)
==20406== by 0x8C82493: start_thread (in /lib64/libpthread-2.24.so)
==20406== by 0x8F7FA1E: clone (in /lib64/libc-2.24.so)
==20406== 4 bytes in 1 blocks are definitely lost in loss record 6 of 1,059
==20406== at 0x4C2AF3F: malloc (vg_replace_malloc.c:299)
==20406== by 0x8F17D39: strdup (in /lib64/libc-2.24.so)
==20406== by 0x552C0E0: virStrdup (virstring.c:784)
==20406== by 0x54D3622: virConfGetValueString (virconf.c:945)
==20406== by 0x144E4692: virQEMUDriverConfigLoadFile (qemu_conf.c:687)
==20406== by 0x1452A744: qemuStateInitialize (qemu_driver.c:664)
==20406== by 0x55DB585: virStateInitialize (libvirt.c:770)
==20406== by 0x124570: daemonRunStateInit (libvirtd.c:881)
==20406== by 0x5532990: virThreadHelper (virthread.c:206)
==20406== by 0x8C82493: start_thread (in /lib64/libpthread-2.24.so)
==20406== by 0x8F7FA1E: clone (in /lib64/libc-2.24.so)
Commit a4a39d90 added a check for VFIO support with mediated devices.
The problem is that the hostdev preparation functions behave like a
fallthrough if no device of that specific type exists. However, the
check for VFIO support was independent of the existence of an mdev
device, which caused the guest to fail to start with any directly
assigned device if VFIO was disabled or unavailable in the kernel.
The proposed change first ensures that it makes sense to check for
VFIO support at all, and only then performs the VFIO support check
itself.
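A sketch of the reordering (qemuHostdevHostSupportsPassthroughVFIO is
the existing helper; the mdev counting and error text are illustrative):

    if (nmdevs == 0)
        return 0;   /* nothing to prepare, no reason to require VFIO */

    if (!qemuHostdevHostSupportsPassthroughVFIO()) {
        virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                       _("VFIO support is required for mediated devices"));
        return -1;
    }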
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1441291
Signed-off-by: Erik Skultety <eskultet@redhat.com>
This removes the hacky extern global variable and modifies the
test code to properly create QEMU capabilities cache for QEMU
binaries used in our tests.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
This attribute is not needed here, since @mon is in use.
Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
Implement qemuMonitorRegister() as there is already a
qemuMonitorUnregister() function. This way it may be easier to
understand the code paths.
Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
This way qemuDomainLogContextRef() and qemuDomainLogContextFree() are
no longer needed. The name qemuDomainLogContextFree() was also
somewhat misleading. Additionally, it's easier to turn
qemuDomainLogContext into a self-locking object.
Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
There were multiple race conditions that could lead to segmentation
faults. The first precondition is that qemuProcessLaunch fails
sometime shortly after starting the new QEMU process. The second
precondition is that the new QEMU process dies - or, to be more
precise, that the QEMU monitor is closed irregularly. If both happen
during qemuProcessStart (starting a domain) there are race windows
between the thread running the event loop (T1) and the thread that is
starting the domain (T2).
First segmentation fault scenario:
If qemuProcessLaunch fails during qemuProcessStart the code branches
to the 'stop' path where 'qemuMonitorSetDomainLog(priv->mon, NULL,
NULL, NULL)' will set the log function of the monitor to NULL (done in
T2). In the meantime the event loop of T1 will wake up with an EOF
event for the QEMU monitor because the QEMU process has died. The
crash occurs if T1 has checked 'mon->logFunc != NULL' in qemuMonitorIO
just before the logFunc was set to NULL by T2. If this situation
occurs T1 will try to call mon->logFunc which leads to the
segmentation fault.
Solution:
Require the monitor lock for setting the log function.
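Sketched from this description (callback typedef and field names
follow the monitor code, but are abbreviated here):

    void
    qemuMonitorSetDomainLog(qemuMonitorPtr mon,
                            qemuMonitorReportLogError func,
                            void *opaque,
                            virFreeCallback destroy)
    {
        virObjectLock(mon);
        if (mon->logDestroy && mon->logOpaque)
            mon->logDestroy(mon->logOpaque);
        mon->logFunc = func;
        mon->logOpaque = opaque;
        mon->logDestroy = destroy;
        virObjectUnlock(mon);
    }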
Backtrace:
0 0x0000000000000000 in ?? ()
1 0x000003ffe9e45316 in qemuMonitorIO (watch=<optimized out>,
fd=<optimized out>, events=<optimized out>, opaque=0x3ffe08aa860) at
../../src/qemu/qemu_monitor.c:727
2 0x000003fffda2e1a4 in virEventPollDispatchHandles (nfds=<optimized
out>, fds=0x2aa000fd980) at ../../src/util/vireventpoll.c:508
3 0x000003fffda2e398 in virEventPollRunOnce () at
../../src/util/vireventpoll.c:657
4 0x000003fffda2ca10 in virEventRunDefaultImpl () at
../../src/util/virevent.c:314
5 0x000003fffdba9366 in virNetDaemonRun (dmn=0x2aa000cc550) at
../../src/rpc/virnetdaemon.c:818
6 0x000002aa00024668 in main (argc=<optimized out>, argv=<optimized
out>) at ../../daemon/libvirtd.c:1541
Second segmentation fault scenario:
If qemuProcessLaunch fails it will unref the log context and with
invoking qemuMonitorSetDomainLog(priv->mon, NULL, NULL, NULL)
qemuDomainLogContextFree() will be invoked. qemuDomainLogContextFree()
invokes virNetClientClose() to close the client and cleans everything
up (including unref of _virLogManager.client) when virNetClientClose()
returns. When T1 is now trying to report 'qemu unexpectedly closed the
monitor' libvirtd will crash because the client has already been
freed.
Solution:
As the critical section in qemuMonitorIO is protected with the monitor
lock we can use the same solution as proposed for the first
segmentation fault.
Backtrace:
0 virClassIsDerivedFrom (klass=0x3100979797979797,
parent=0x2aa000d92f0) at ../../src/util/virobject.c:169
1 0x000003fffda659e6 in virObjectIsClass (anyobj=<optimized out>,
klass=<optimized out>) at ../../src/util/virobject.c:365
2 0x000003fffda65a24 in virObjectLock (anyobj=0x3ffe08c1db0) at
../../src/util/virobject.c:317
3 0x000003fffdba4688 in
virNetClientIOEventLoop (client=client@entry=0x3ffe08c1db0,
thiscall=thiscall@entry=0x2aa000fbfa0) at
../../src/rpc/virnetclient.c:1668
4 0x000003fffdba4b4c in
virNetClientIO (client=client@entry=0x3ffe08c1db0,
thiscall=0x2aa000fbfa0) at ../../src/rpc/virnetclient.c:1944
5 0x000003fffdba4d42 in
virNetClientSendInternal (client=client@entry=0x3ffe08c1db0,
msg=msg@entry=0x2aa000cc710, expectReply=expectReply@entry=true,
nonBlock=nonBlock@entry=false) at ../../src/rpc/virnetclient.c:2116
6 0x000003fffdba6268 in
virNetClientSendWithReply (client=0x3ffe08c1db0, msg=0x2aa000cc710) at
../../src/rpc/virnetclient.c:2144
7 0x000003fffdba6e8e in virNetClientProgramCall (prog=0x3ffe08c1120,
client=<optimized out>, serial=<optimized out>, proc=<optimized out>,
noutfds=<optimized out>, outfds=0x0, ninfds=0x0, infds=0x0,
args_filter=0x3fffdb64440
<xdr_virLogManagerProtocolDomainReadLogFileArgs>, args=0x3ffffffe010,
ret_filter=0x3fffdb644c0
<xdr_virLogManagerProtocolDomainReadLogFileRet>, ret=0x3ffffffe008) at
../../src/rpc/virnetclientprogram.c:329
8 0x000003fffdb64042 in
virLogManagerDomainReadLogFile (mgr=<optimized out>, path=<optimized
out>, inode=<optimized out>, offset=<optimized out>, maxlen=<optimized
out>, flags=0) at ../../src/logging/log_manager.c:272
9 0x000003ffe9e0315c in qemuDomainLogContextRead (ctxt=0x3ffe08c2980,
msg=0x3ffffffe1c0) at ../../src/qemu/qemu_domain.c:4422
10 0x000003ffe9e280a8 in qemuProcessReadLog (logCtxt=<optimized out>,
msg=msg@entry=0x3ffffffe288) at ../../src/qemu/qemu_process.c:1800
11 0x000003ffe9e28206 in qemuProcessReportLogError (logCtxt=<optimized
out>, msgprefix=0x3ffe9ec276a "qemu unexpectedly closed the monitor")
at ../../src/qemu/qemu_process.c:1836
12 0x000003ffe9e28306 in
qemuProcessMonitorReportLogError (mon=mon@entry=0x3ffe085cf10,
msg=<optimized out>, opaque=<optimized out>) at
../../src/qemu/qemu_process.c:1856
13 0x000003ffe9e452b6 in qemuMonitorIO (watch=<optimized out>,
fd=<optimized out>, events=<optimized out>, opaque=0x3ffe085cf10) at
../../src/qemu/qemu_monitor.c:726
14 0x000003fffda2e1a4 in virEventPollDispatchHandles (nfds=<optimized
out>, fds=0x2aa000fd980) at ../../src/util/vireventpoll.c:508
15 0x000003fffda2e398 in virEventPollRunOnce () at
../../src/util/vireventpoll.c:657
16 0x000003fffda2ca10 in virEventRunDefaultImpl () at
../../src/util/virevent.c:314
17 0x000003fffdba9366 in virNetDaemonRun (dmn=0x2aa000cc550) at
../../src/rpc/virnetdaemon.c:818
18 0x000002aa00024668 in main (argc=<optimized out>, argv=<optimized
out>) at ../../daemon/libvirtd.c:1541
Other code paths where the same problem could occur are fixed as
well (qemuMigrationFinish, qemuProcessStart, and
qemuDomainSaveImageStartVM).
Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Reported-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
So far only QEMU_MONITOR_MIGRATION_CAPS_POSTCOPY was reset, and only
in a single code path, leaving post-copy enabled in quite a few cases.
https://bugzilla.redhat.com/show_bug.cgi?id=1425003
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
It's only called from qemuMigrationReset now, so it doesn't need to be
exported, and {tls,sec}Alias are always NULL.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This new API is supposed to reset all migration parameters to make sure
future migrations won't accidentally use them. This patch makes the
first step and moves qemuMigrationResetTLS call inside
qemuMigrationReset.
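The first step is thus a thin wrapper, roughly (parameter list
sketched after the async job convention used elsewhere):

    void
    qemuMigrationReset(virQEMUDriverPtr driver,
                       virDomainObjPtr vm,
                       qemuDomainAsyncJob job)
    {
        /* more migration parameter resets will move in here later */
        qemuMigrationResetTLS(driver, vm, job);
    }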
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Migration parameters are either reset by the main migration code path or
from qemuProcessRecoverMigration* in case libvirtd is restarted during
migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
A finished qemuMigrationRun does not mean the migration itself has
finished (it might have just switched to post-copy mode). While resetting TLS
parameters is probably OK at this point even if migration is still
running, we want to consolidate the code which resets various migration
parameters. Thus qemuMigrationResetTLS will be called from the Confirm
phase (or at the end of the Perform phase in case of v2 protocol), when
migration is either canceled or finished.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuProcessRecoverMigrationOut doesn't explicitly call
qemuMigrationResetTLS relying on two things:
- qemuMigrationCancel resets TLS parameters
- our migration code resets TLS before entering
QEMU_MIGRATION_PHASE_PERFORM3_DONE phase
But this is not obvious and the assumptions will be broken soon. Let's
explicitly reset TLS parameters on all paths which do not kill the
domain.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
There is no async job running when a freshly started libvirtd is trying
to recover from an interrupted incoming migration. While at it, let's
call qemuMigrationResetTLS every time we don't kill the domain. This is
not strictly necessary since TLS is not supported when the v2 migration
protocol is used, but doing so makes more sense.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>