Nothing that could happen during networkNotifyActualDevice() could
justify unceremoniously killing the qemu process, but that's what we
were doing.
In particular, new code added in commit 85bcc022 (which first appeared in
libvirt-3.2.0) attempts to reattach tap devices to their assigned
bridge devices when libvirtd restarts (to make it easier to recover
from a restart of a libvirt network). But if the network has been
stopped and *not* restarted, the bridge device won't exist and
networkNotifyActualDevice() will fail.
This patch changes networkNotifyActualDevice() and
qemuProcessNotifyNets() to return void, so that qemuProcessReconnect()
will soldier on regardless of what happens (any errors will still be
logged though).
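A rough sketch of the resulting shape (the signatures and the loop are
simplified here, not copied from the patch):

    static void
    qemuProcessNotifyNets(virDomainDefPtr def)
    {
        size_t i;

        for (i = 0; i < def->nnets; i++) {
            /* Any failure is logged by networkNotifyActualDevice()
             * itself; reconnection carries on regardless. */
            networkNotifyActualDevice(def, def->nets[i]);
        }
    }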
Partially resolves: https://bugzilla.redhat.com/1442700
After 1eb6647979 nobody calls the iohelper with 6 arguments.
Everybody uses the other mode. Well, the only user of iohelper
after the previous commit is virFileWrapperFd really.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Currently we use the iohelper for the virFDStream implementation. This
is because UNIX I/O can lie sometimes: even though an FD for a
file/block device is set as non-blocking, an actual read()/write() can
still block. To avoid this, a pipe is created and one end is kept for
read/write while the other is handed over to the iohelper to
write/read the data for us. Thus it's the iohelper which gets blocked
and not our event loop.
This approach has two problems:
1) we are spawning a new process.
2) any exchange of information between daemon and iohelper can be
done only through the pipe.
Therefore, the iohelper is replaced with an implementation in a
thread which is created just for the stream's lifetime. The data
is still transferred through a pipe (for now), but both problems
described above are solved.
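For illustration only (this is not the actual virFDStream code and the
names are made up), the thread-based pattern looks roughly like this:

    #include <pthread.h>
    #include <unistd.h>

    typedef struct {
        int fd;      /* file/block device FD; read() here may block */
        int pipeFD;  /* write end of the pipe watched by the event loop */
    } streamIOThreadData;

    /* Runs for the lifetime of the stream and performs the potentially
     * blocking I/O so that the event loop never has to. */
    static void *
    streamIOThreadRun(void *opaque)
    {
        streamIOThreadData *data = opaque;
        char buf[64 * 1024];
        ssize_t got;

        while ((got = read(data->fd, buf, sizeof(buf))) > 0) {
            /* simplified: no partial-write handling */
            if (write(data->pipeFD, buf, got) != got)
                break;
        }
        return NULL;
    }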
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
While this is no functional change, it makes the code look a bit
nicer. Moreover, it prepares the ground for future work.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
There is really no reason why we should have to have 'struct'
everywhere.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
It helps with debugging if we know what the return value of
saferead() is.
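For example (illustrative only; the variable names are made up),
logging the result before acting on it:

    ssize_t got = saferead(fd, buf, buflen);
    VIR_DEBUG("Read %zd bytes (buflen=%zu)", got, buflen);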
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Because of copy-paste, the temporary directory used for this test
is called "fakesysdir", which is misleading.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
This is a USB3 controller and it's a better choice than piix3-uhci.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Acked-by: Andrea Bolognani <abologna@redhat.com>
The new logic sets piix3-uhci if it is available, regardless of
architecture, and then updates it to a better model based on the
architecture and device availability.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Acked-by: Andrea Bolognani <abologna@redhat.com>
Make the schema stricter for HTTP disks, requiring a name and
mandating exactly one source host.
The ftp/tftp entries were not moved here, since the http transport
will also support cookies and other options, which will be added later.
Since commit c5f6151390, qemuDomainBlockInfo tries to update the
"physical" storage size for all network storage, not only for block
devices.
Because the storage driver APIs to do this are not implemented for
certain storage types (RBD, iSCSI, ...), the code would fail to
retrieve any data, since a failure of qemuDomainStorageUpdatePhysical
is fatal.
Since it's desirable to return data even if the total size can't be
updated, we need to ignore errors from that function and return
plausible data.
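The shape of the fix is roughly the following (a sketch only; the
argument list is abbreviated, and whether the queued error is cleared
with virResetLastError() or never raised at all is an implementation
detail):

    /* Failure to update the physical size is no longer fatal; keep
     * whatever value we already have. */
    if (qemuDomainStorageUpdatePhysical(driver, cfg, vm, disk->src) < 0)
        virResetLastError();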
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1442344
Since the private data structure is not freed upon stopping a VM, the
usbaddrs pointer would be leaked:
==15388== 136 (16 direct, 120 indirect) bytes in 1 blocks are definitely lost in loss record 893 of 1,019
==15388== at 0x4C2CF55: calloc (vg_replace_malloc.c:711)
==15388== by 0x54BF64A: virAlloc (viralloc.c:144)
==15388== by 0x5547588: virDomainUSBAddressSetCreate (domain_addr.c:1608)
==15388== by 0x144D38A2: qemuDomainAssignUSBAddresses (qemu_domain_address.c:2458)
==15388== by 0x144D38A2: qemuDomainAssignAddresses (qemu_domain_address.c:2515)
==15388== by 0x144ED1E3: qemuProcessPrepareDomain (qemu_process.c:5398)
==15388== by 0x144F51FF: qemuProcessStart (qemu_process.c:5979)
[...]
Clean up the stale data after shutting down the VM; otherwise it
would be leaked on the next VM start, since the private data object
is not freed when the VM is destroyed.
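A minimal sketch of the cleanup (the exact spot in the stop path may
differ):

    /* in the VM stop/cleanup path */
    virDomainUSBAddressSetFree(priv->usbaddrs);
    priv->usbaddrs = NULL;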
Test various configuration schemas covering positive and negative
nestedhvm cases under the libvirt <cpu mode="host-passthrough">
configuration.
Mode "host-passthrough" generates nestedhvm=1 in/from the xl format,
while disabling Intel virtualization (VT-x) with
<feature policy='disable' name='vmx'/>
or AMD virtualization (AMD-V) with
<feature policy='disable' name='svm'/>
disables nested virtualization for the guest domain.
Signed-off-by: Wim ten Have <wim.ten.have@oracle.com>
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
For xen-xl conversions from and to native under host-passthrough
mode, make sure the Xen nestedhvm setting, whether applied or
inherited, generates or processes the correct feature policy:
[On Intel (VT-x) architectures]
<feature policy='disable' name='vmx'/>
or
[On AMD (AMD-V) architectures]
<feature policy='disable' name='svm'/>
It will then generate (or parse) nestedhvm=1 in/from the xl format.
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Wim ten Have <wim.ten.have@oracle.com>
The Xen feature nestedhvm is the Xen 4.4+ option which enables
nested virtualization when mode host-passthrough is applied.
Nested HVM is enabled by adding the following to the target domain:
<cpu mode='host-passthrough'/>
Nested virtualization can then be disabled on the target domain by
specifying a feature policy rule by name:
[On Intel (VT-x) architecture]
<feature policy='disable' name='vmx'/>
or:
[On AMD (AMD-V) architecture]
<feature policy='disable' name='svm'/>
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Wim ten Have <wim.ten.have@oracle.com>
This patch maps /domain/cpu/cache element into -cpu parameters:
- <cache mode='passthrough'/> is translated to host-cache-info=on
- <cache level='3' mode='emulate'/> is transformed into l3-cache=on
- <cache mode='disable'/> is turned into host-cache-info=off,l3-cache=off
Any other <cache> element is forbidden.
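Sketched out (this is not the literal qemu_command.c hunk; the enum and
buffer names are approximations), the mapping is:

    switch (def->cpu->cache->mode) {
    case VIR_CPU_CACHE_MODE_PASSTHROUGH:
        virBufferAddLit(&buf, ",host-cache-info=on");
        break;
    case VIR_CPU_CACHE_MODE_EMULATE:      /* only level='3' is accepted */
        virBufferAddLit(&buf, ",l3-cache=on");
        break;
    case VIR_CPU_CACHE_MODE_DISABLE:
        virBufferAddLit(&buf, ",host-cache-info=off,l3-cache=off");
        break;
    }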
The tricky part is detecting whether QEMU supports the CPU properties.
The 'host-cache-info' property was introduced in v2.4.0-1389-ge265e3e480;
earlier QEMU releases enabled host-cache-info by default and had no way
to disable it. If the property is present, it defaults to 'off' for any
QEMU until at least 2.9.0.
The 'l3-cache' property was introduced later by v2.7.0-200-g14c985cffa.
Earlier versions worked as if l3-cache=off was passed. For any QEMU
until at least 2.9.0 l3-cache is 'off' by default.
QEMU 2.9.0 was the first release which supports probing both properties
by running device-list-properties with typename=host-x86_64-cpu. Older
QEMU releases did not support the device-list-properties command for
CPU devices. Thus we can't really rely on probing them and can just use
the query-cpu-model-expansion QMP command as a witness.
Because the cache property probing is only reliable for QEMU >= 2.9.0,
by which point both properties have already been supported for quite a
few releases, we let QEMU
report an error if a specific cache mode is explicitly requested. The
other mode (or both if a user requested CPU cache to be disabled) is
explicitly turned off for QEMU >= 2.9.0 to avoid any surprises in case
the QEMU defaults change. Any older QEMU already turns them off, so not
doing so explicitly does no harm.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This patch introduces
<cache level='N' mode='emulate'/>
<cache mode='passthrough'/>
<cache mode='disable'/>
as sub-elements of /domain/cpu. Currently only a single <cache> element is
allowed.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The type of this parameter is virCPUType, so calling it 'mode' is
pretty strange; 'type' is a much better name.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Not all async jobs are visible via virDomainGetJobStats (either they are
too fast or getting the stats is not allowed during the job), but
forcing all of them to advertise the operation is easier than hunting
the jobs for which fetching statistics is allowed. And we won't need to
think about this when we add support for getting stats for more jobs.
https://bugzilla.redhat.com/show_bug.cgi?id=1441563
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The parameter is reported by the virDomainGetJobStats API and the
VIR_DOMAIN_EVENT_ID_JOB_COMPLETED event, and it can be used to identify
the operation (migration, snapshot, ...) to which the reported
statistics belong.
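From the client side the value can be fetched roughly like this
(illustrative; error handling is trimmed, and it assumes the parameter
is exposed as the int typed parameter VIR_DOMAIN_JOB_OPERATION):

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    static void
    printJobOperation(virDomainPtr dom)
    {
        virTypedParameterPtr params = NULL;
        int nparams = 0;
        int jobType;
        int op;

        if (virDomainGetJobStats(dom, &jobType, &params, &nparams, 0) == 0 &&
            virTypedParamsGetInt(params, nparams,
                                 VIR_DOMAIN_JOB_OPERATION, &op) == 1)
            printf("job operation: %d\n", op);

        virTypedParamsFree(params, nparams);
    }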
https://bugzilla.redhat.com/show_bug.cgi?id=1441563
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
As with virtio-scsi, the "internal error" message issued after
preparing a vhost-scsi hostdev overwrites more meaningful error
messages deeper in the callchain. Remove it too.
Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
I tried to attach a SCSI LUN to two different guests, and forgot
to specify "shareable" in the hostdev XML. Attaching the device
to the second guest failed, but the message was not helpful in
telling me what I was doing wrong:
$ cat scsi_scratch_disk.xml
<hostdev mode='subsystem' type='scsi'>
<source>
<adapter name='scsi_host3'/>
<address bus='0' target='15' unit='1074151456'/>
</source>
</hostdev>
$ virsh attach-device dasd_sles_d99c scsi_scratch_disk.xml
Device attached successfully
$ virsh attach-device dasd_fedora_0e1e scsi_scratch_disk.xml
error: Failed to attach device from scsi_scratch_disk.xml
error: internal error: Unable to prepare scsi hostdev: scsi_host3:0:15:1074151456
I eventually discovered my error, but thought it was weird that
Libvirt doesn't provide something more helpful in this case.
Looking over the code we had just gone through, I commented out
the "internal error" message, and got something more useful:
$ virsh attach-device dasd_fedora_0e1e scsi_scratch_disk.xml
error: Failed to attach device from scsi_scratch_disk.xml
error: Requested operation is not valid: SCSI device 3:0:15:1074151456 is already in use by other domain(s) as 'non-shareable'
Looking over the error paths here, we seem to issue better
messages deeper in the callchain, so these "internal error"
messages overwrite any of them. Remove them, so that the
more detailed errors are seen.
Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
Commit 0feebab2 added a call to qemuBlockNodeNamesDetect for completed
jobs when updating block jobs. This affects the drive mirror cancelling
logic, as that function drops the vm lock. Now we have to recheck all
disks preceding the disk with the completed block job before going on
to wait for block job events.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuDomainGetNumaParameters would return the automatic nodeset even for
the persistent config if the domain was running. This is incorrect since
the automatic nodeset will be re-queried upon starting the vm.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1445325
While peer-to-peer migration enters the Confirm phase even if the
Perform phase fails, the client which initiated a non-p2p migration will
never call virDomainMigrateConfirm* API if the Perform phase failed.
Thus we need to explicitly reset migration before reporting a failure
from the Perform phase API.
https://bugzilla.redhat.com/show_bug.cgi?id=1425003
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The recently added sanlock_strerror function can be used to translate
sanlock's numeric errors into human-readable strings.
https://bugzilla.redhat.com/show_bug.cgi?id=1409511
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Migration with an old QEMU which does not support
query-migrate-parameters would fail because the QMP command has been
called unconditionally since the introduction of TLS migration.
Previously it was only called if the user explicitly requested a feature
which uses QEMU migration parameters. And even then the situation was
not ideal: instead of reporting an unsupported feature we'd just
complain about a missing QMP command.
Trivially, no migration parameters are supported when the
query-migrate-parameters QMP command is missing. There's no need to
report an error if it is missing; the callers will report a better
error if needed.
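In the monitor code this amounts to something like the following
sketch (assuming the usual qemuMonitorJSON* helpers; not the literal
patch):

    /* Old QEMU: no query-migrate-parameters means no migration
     * parameters at all, so return success with an empty set. */
    if (qemuMonitorJSONHasError(reply, "CommandNotFound")) {
        ret = 0;
        goto cleanup;
    }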
https://bugzilla.redhat.com/show_bug.cgi?id=1441934
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Rather than waiting for the first save to fail, let's generate the
directory with the correct privs during initialization.
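A minimal sketch, assuming virDirCreate() and made-up config field
names (the real code will use the driver's own config):

    /* During driver initialization, create the save directory with
     * the configured ownership instead of waiting for a save to fail. */
    if (virDirCreate(cfg->saveDir, 0751, cfg->user, cfg->group,
                     VIR_DIR_CREATE_ALLOW_EXIST) < 0)
        return -1;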
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rather than overloading one function, split the logic apart into
separate interfaces and local/private structures to manage the data
that the helper is collecting.
Signed-off-by: John Ferlan <jferlan@redhat.com>