The problem resides in virHostdevUpdateActiveMediatedDevices, which gets
called during qemuProcessReconnect. The issue is that
virMediatedDeviceListAdd takes a pointer to the item to be added to the
list and appends it with VIR_APPEND_ELEMENT, which also clears that
pointer. However, since the pointer is passed by value, only the
callee's local copy got cleared, leaving the caller's original pointer
still pointing at the memory now owned by the list. To sum it up,
during the cleanup phase the original pointer is freed and the daemon
crashes basically any time it later accesses the now-dangling list
entry.
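The following is a minimal, hypothetical sketch of the pass-by-value
pitfall (names are illustrative, not the actual libvirt API):

#include <stdlib.h>

typedef struct { char *path; } Dev;

/* Takes the pointer by value: clearing 'dev' here only clears the
 * callee's local copy, not the caller's variable. */
static void listAdd(Dev **list, size_t *count, Dev *dev)
{
    list[(*count)++] = dev;
    dev = NULL;                  /* no effect on the caller's pointer */
}

int main(void)
{
    Dev *list[8];
    size_t count = 0;
    Dev *mdev = calloc(1, sizeof(*mdev));

    listAdd(list, &count, mdev);
    free(mdev);                  /* caller believes it still owns it... */
    /* ...so the list now holds a dangling pointer and any later lookup
     * (e.g. a strcmp on list[0]->path) is a use-after-free. */
    return 0;
}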
Backtrace:
0x00007ffff3ccdeba in __strcmp_sse2_unaligned
0x00007ffff72a444a in virMediatedDeviceListFindIndex
0x00007ffff7241446 in virHostdevReAttachMediatedDevices
0x00007fffc60215d9 in qemuHostdevReAttachMediatedDevices
0x00007fffc60216dc in qemuHostdevReAttachDomainDevices
0x00007fffc6046e6f in qemuProcessStop
0x00007fffc6091596 in processMonitorEOFEvent
0x00007fffc6091793 in qemuProcessEventHandler
0x00007ffff7294bf5 in virThreadPoolWorker
0x00007ffff7294184 in virThreadHelper
0x00007ffff3fdc3c4 in start_thread () from /lib64/libpthread.so.0
0x00007ffff3d269cf in clone () from /lib64/libc.so.6
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1446455
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Laine Stump <laine@laine.org>
Use a local variable to hold the data, rather than accessing the
pointer after calling virMediatedDeviceListAdd (and therefore
VIR_APPEND_ELEMENT). Although this doesn't cause an issue at the
moment, the change is a necessary prerequisite for tweaking
virMediatedDeviceListAdd in a separate patch, which will take a
reference to the source pointer (instead of the pointer value) and
will clear it along the way.
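A short, hypothetical sketch of the pattern (illustrative names only):
copy whatever is needed into a local before the add call, so nothing
dereferences the source pointer afterwards.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct { char *path; } Dev;

/* Hypothetical add which, in a follow-up patch, may clear or consume
 * the caller's pointer; the caller must not touch 'dev' afterwards. */
static int listAdd(Dev **slot, Dev *dev)
{
    *slot = dev;
    return 0;
}

int main(void)
{
    Dev *slot = NULL;
    Dev *mdev = calloc(1, sizeof(*mdev));
    char *path;

    mdev->path = strdup("/sys/bus/mdev/devices/some-uuid");
    path = mdev->path;              /* local copy of the needed data */

    if (listAdd(&slot, mdev) < 0)
        return 1;

    printf("tracking %s\n", path);  /* no access through 'mdev' here */
    return 0;
}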
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Laine Stump <laine@laine.org>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Even though there are several checks before calling this function,
and in some scenarios we don't call it at all (e.g. on disk hot
unplug), it may still be possible to sneak in some weird files (e.g.
if a domain had an RNG with /dev/shm/some_file as its backend). No
matter how improbable, we shouldn't unlink such a file, as we would be
unlinking a file on the host which we haven't created in the first
place.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cedric Bosdonnat <cbosdonnat@suse.com>
Just like the previous commit, this fixes the same issue for hotplug.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cedric Bosdonnat <cbosdonnat@suse.com>
While the code allows devices to already be there (by some miracle),
we shouldn't try to create devices that don't belong to us. For
instance, we shouldn't try to create /dev/shm/file because /dev/shm is
a mount point that is preserved. Therefore, if a file is created there
from the outside (e.g. by a management application or some other
daemon running on the system, like vhostmd), it exists in the qemu
namespace too, as the mount point is the same. It is only /dev itself
that is different. The same reasoning applies to all other preserved
mount points.
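A hedged sketch of the kind of check this implies (illustrative
helpers, not the actual libvirt code): a path only belongs to the
namespace-private device tree if it lives under /dev and not under a
preserved mount point such as /dev/shm.

#include <stdbool.h>
#include <string.h>

static bool hasPathPrefix(const char *path, const char *prefix)
{
    size_t len = strlen(prefix);
    return strncmp(path, prefix, len) == 0 &&
           (path[len] == '/' || path[len] == '\0');
}

/* Illustrative only: is @path something we created in the per-domain
 * /dev, i.e. safe to create or unlink from the namespace? */
static bool ownsDevNode(const char *path,
                        const char **preservedMounts, size_t nmounts)
{
    size_t i;

    if (!hasPathPrefix(path, "/dev"))
        return false;                 /* not under /dev at all */

    for (i = 0; i < nmounts; i++) {
        /* e.g. /dev/shm or /dev/pts stay shared with the host, so
         * files there are not ours to create or unlink */
        if (hasPathPrefix(path, preservedMounts[i]))
            return false;
    }
    return true;
}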
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cedric Bosdonnat <cbosdonnat@suse.com>
Currently, all we need to do in qemuDomainCreateDeviceRecursive() is
to take the given @device, get all kinds of info on it (major & minor
numbers, owner, seclabels) and create its copy at a temporary location
@path (usually /var/run/libvirt/qemu/$domName.dev), if @device lives
under /dev. This is, however, a very loose condition, as it also means
/dev/shm/* gets created too. Therefore, we will need to pass more
arguments into the function for better decision making (e.g. the list
of mount points under /dev). Instead of adding more arguments to all
the functions (not easily doable, because some functions are callbacks
with a strictly defined type), let's just turn this one 'const char *'
into a 'struct *'. New "arguments" can then be added at no cost.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cedric Bosdonnat <cbosdonnat@suse.com>
When setting up mount namespace for a qemu domain the following
steps are executed:
1) get list of mountpoints under /dev/
2) move them to /var/run/libvirt/qemu/$domName.ext
3) start constructing new device tree under /var/run/libvirt/qemu/$domName.dev
4) move the mountpoint of the new device tree to /dev
5) restore original mountpoints from step 2)
Note the problem with this approach is that if some device in step
3) requires access to a mountpoint from step 2) it will fail as
the mountpoint is not there anymore. For instance consider the
following domain disk configuration:
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/dev/shm/vhostmd0'/>
<target dev='vdb' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
</disk>
In this case the operation fails, as we are unable to create vhostmd0
in the new device tree because after step 2) there is no /dev/shm
anymore. Leave aside the fact that we shouldn't try to create devices
living in other mount points at all; that's a separate bug that will
be addressed later.
Therefore, the order described above is rearranged to:
1) get list of mountpoints under /dev/
2) start constructing new device tree under /var/run/libvirt/qemu/$domName.dev
3) move them to /var/run/libvirt/qemu/$domName.ext
4) move the mountpoint of the new device tree to /dev
5) restore original mountpoints from step 3)
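For illustration only, a rough sketch of the mount moves behind steps
3) and 4) (simplified to a single submount, hypothetical paths;
assumes the mount namespace has already been unshared):

#include <sys/mount.h>

/* The new tree is built in step 2) while /dev/shm is still mounted;
 * only afterwards are the original mounts moved aside and /dev
 * switched over. */
static int switchDevTree(const char *newDev,    /* ...$domName.dev */
                         const char *savedShm)  /* ...$domName.ext/shm */
{
    /* 3) move the original submount out of the way */
    if (mount("/dev/shm", savedShm, NULL, MS_MOVE, NULL) < 0)
        return -1;

    /* 4) move the prepared device tree over /dev */
    if (mount(newDev, "/dev", NULL, MS_MOVE, NULL) < 0)
        return -1;

    /* 5) the caller then moves @savedShm back to /dev/shm */
    return 0;
}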
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cedric Bosdonnat <cbosdonnat@suse.com>
When we get a POLLHUP or VIR_EVENT_HANDLE_HANGUP event for a client, we
still want to read from the socket to process any accumulated data. But
doing so inevitably results in an error and a call to
virNetClientMarkClose before we get to processing the hangup event (and
another call to virNetClientMarkClose). However the close reason passed
to the second virNetClientMarkClose call is ignored because another one
was already set. We need to pass the correct close reason when marking
the socket to be closed for the first time.
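A minimal sketch of the intended semantics (hypothetical names, not
the actual virNetClient code): only the first recorded close reason
sticks, so the first caller has to pass the right one.

typedef enum {
    CLOSE_REASON_UNSET = 0,
    CLOSE_REASON_EOF,        /* e.g. the hangup case */
    CLOSE_REASON_ERROR,
} closeReason;

/* Illustrative: a later mark-close call must not overwrite the reason
 * stored by an earlier one. */
static void markClose(closeReason *stored, closeReason reason)
{
    if (*stored == CLOSE_REASON_UNSET)
        *stored = reason;
}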
https://bugzilla.redhat.com/show_bug.cgi?id=1373859
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
While fixing a bug with incorrectly freed memory in commit
v3.1.0-399-g5498aa29a, I accidentally broke persistent migration of
transient domains. Before adding qemuDomainDefCopy in the path, the code
just took NULL from vm->newDef and used it as the persistent def, which
resulted in no persistent XML being sent in the migration cookie. This
scenario is perfectly valid and the destination correctly handles it by
using the incoming live definition and storing it as the persistent one.
After the mentioned commit libvirtd would just segfault in the described
scenario.
https://bugzilla.redhat.com/show_bug.cgi?id=1446205
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
If we are encoding a block of data that is 16 bytes in length, we
cannot leave it as 16 bytes; we must pad it out to the next block
boundary, 32 bytes. Without this padding, the decoder will incorrectly
treat the last byte of plain text as the padding length, as it can't
distinguish padded from non-padded data.
The problem exhibited itself when using a 16 byte passphrase for a
LUKS volume:
$ virsh secret-set-value 55806c7d-8e93-456f-829b-607d8c198367 \
$(echo -n 1234567812345678 | base64)
Secret value set
$ virsh start demo
error: Failed to start domain demo
error: internal error: process exited while connecting to monitor: >>>>>>>>>>Len 16
2017-05-02T10:35:40.016390Z qemu-system-x86_64: -object \
secret,id=virtio-disk1-luks-secret0,data=SEtNi5vDUeyseMKHwc1c1Q==,\
keyid=masterKey0,iv=zm7apUB1A6dPcH53VW960Q==,format=base64: \
Incorrect number of padding bytes (56) found on decrypted data
Notice how the padding '56' corresponds to the ordinal value of
the character '8'.
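For reference, a hedged sketch of the padding rule (PKCS#7-style,
assuming AES's 16-byte blocks): input that already ends on a block
boundary still gets a full extra block, so the last byte of the
decrypted data always names the pad length.

#include <stdlib.h>
#include <string.h>

#define BLOCK_SIZE 16   /* AES block size */

/* Illustrative only: pad @len bytes of plain text up to the next
 * multiple of BLOCK_SIZE; a 16-byte input becomes 32 bytes since the
 * pad length is never zero. */
static unsigned char *padBlock(const unsigned char *in, size_t len,
                               size_t *outlen)
{
    size_t pad = BLOCK_SIZE - (len % BLOCK_SIZE);   /* 1..16 */
    unsigned char *out = malloc(len + pad);

    if (!out)
        return NULL;
    memcpy(out, in, len);
    memset(out + len, pad, pad);   /* every pad byte holds the count */
    *outlen = len + pad;
    return out;
}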
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
When writing commit v3.2.0-77-g8be3ccd04, I completely forgot that one
migration capability is very special. It's the "events" capability, which
tells QEMU to report "MIGRATION" events. Since libvirt always wants the
events, it is enabled in qemuConnectMonitor and the rest of the code
should not touch it.
https://bugzilla.redhat.com/show_bug.cgi?id=1439841
https://bugzilla.redhat.com/show_bug.cgi?id=1441165
Messed-up-by: Jiri Denemark <jdenemar@redhat.com>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1330940
The virsh command 'domblkinfo' returns the capacity, allocation and
physical size of the devices attached to a domain. Usually, these
sizes are very big and hard to read and compare. This commit
introduces human readable output to make checking each field easier.
For example, the command before:
virsh # domblkinfo my_domain hda
Capacity: 21474836480
Allocation: 14875545600
Physical: 21474836480
and after this patch:
virsh # domblkinfo my_domain hda --human
Capacity: 20.000G
Allocation: 13.900G
Physical: 20.000G
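A rough sketch of the conversion (illustrative only, not necessarily
the helper virsh ends up using), scaling by 1024 and printing three
decimals as in the example above:

#include <stdio.h>

/* Illustrative: 21474836480 bytes -> 20.000 G */
static double prettySize(unsigned long long bytes, const char **unit)
{
    static const char *units[] = { "B", "K", "M", "G", "T", "P", "E" };
    size_t i = 0;
    double val = bytes;

    while (val >= 1024 && i < sizeof(units) / sizeof(units[0]) - 1) {
        val /= 1024;
        i++;
    }
    *unit = units[i];
    return val;
}

int main(void)
{
    const char *unit;
    double val = prettySize(21474836480ULL, &unit);

    printf("Capacity:       %.3f%s\n", val, unit);   /* 20.000G */
    return 0;
}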
Signed-off-by: Julio Faracco <jcfaracco@gmail.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Nested HVM support in the libxl driver is a news-worthy
improvement for libvirt 3.3.0.
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
... with VIR_NET_GENERATED_MACV???_PREFIX, which is defined in
util/virnetdevmacvlan.h.
Since VIR_NET_GENERATED_PREFIX is used for plain tap devices, it is
renamed to VIR_NET_GENERATED_TAP_PREFIX and moved to virnetdev.h
The parser had been clearing out *all* suggested device names for
type='direct' (aka macvtap) interfaces. All of the code implementing
macvtap allows for a user-specified device name, so we should allow
it. In the case that an interface name starts with "macvtap" or
"macvlan" though, we do still clear it out, just as we do with "vnet"
(which is the prefix used for automatically generated tap device
names), since those are the prefixes for the names we autogenerate for
macvtap and macvlan devices.
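A small sketch of the resulting check (assuming the prefixes mentioned
above; the helper name is made up):

#include <stdbool.h>
#include <string.h>

/* Illustrative: a user-supplied interface name is discarded only if
 * it collides with a prefix reserved for autogenerated names. */
static bool reservedIfname(const char *name)
{
    return strncmp(name, "macvtap", strlen("macvtap")) == 0 ||
           strncmp(name, "macvlan", strlen("macvlan")) == 0 ||
           strncmp(name, "vnet", strlen("vnet")) == 0;
}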
Resolves: https://bugzilla.redhat.com/1335798
MACVTAP_NAME_PREFIX and MACVLAN_NAME_PREFIX could be useful to other
files if they were defined in virnetdevmacvlan.h instead of
virnetdevmacvlan.c, so do that (while slightly renaming them and also
adding yet another #define that chooses between macvlan/macvtap based
on flags).
This is a prerequisite to fix: https://bugzilla.redhat.com/1335798
If the network isn't active during networkNotifyActualDevice(), we
would log an error message stating that the bridge device didn't
exist. This patch adds a check to see if the network is active, making
the logs more useful in the case that it isn't.
Partially resolves: https://bugzilla.redhat.com/1442700
Nothing that could happen during networkNotifyActualDevice() could
justify unceremoniously killing the qemu process, but that's what we
were doing.
In particular, new code added in commit 85bcc022 (which first appeared in
libvirt-3.2.0) attempts to reattach tap devices to their assigned
bridge devices when libvirtd restarts (to make it easier to recover
from a restart of a libvirt network). But if the network has been
stopped and *not* restarted, the bridge device won't exist and
networkNotifyActualDevice() will fail.
This patch changes networkNotifyActualDevice() and
qemuProcessNotifyNets() to return void, so that qemuProcessReconnect()
will soldier on regardless of what happens (any errors will still be
logged though).
Partially resolves: https://bugzilla.redhat.com/1442700
After 1eb6647979, nobody calls the iohelper with 6 arguments anymore;
everybody uses the other mode. In fact, after the previous commit the
only real user of the iohelper is virFileWrapperFd.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Currently we use the iohelper for the virFDStream implementation.
This is because UNIX I/O can lie sometimes: even though an FD for a
file/block device is set as non-blocking, an actual read()/write() can
still block. To avoid this, a pipe is created and one end is kept for
read/write while the other is handed over to the iohelper, which
writes/reads the data for us. Thus it's the iohelper that gets blocked
and not our event loop.
This approach has two problems:
1) we are spawning a new process
2) any exchange of information between the daemon and the iohelper can
be done only through the pipe
Therefore, the iohelper is replaced with an implementation in a thread
which is created just for the stream's lifetime. The data is still
transferred through the pipe (for now), but both problems described
above are solved.
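A condensed sketch of the pattern (hypothetical names; the real
implementation carries far more state and error handling, and short
writes are glossed over here): a dedicated thread performs the
possibly-blocking file I/O while the event loop only ever polls the
pipe end.

#include <pthread.h>
#include <unistd.h>

struct streamData {
    int fd;        /* file/block device; read() here may block */
    int pipeWr;    /* write end of the pipe, owned by the thread */
};

/* Worker thread created for the stream's lifetime: it feeds data from
 * the (possibly blocking) fd into the pipe, whose read end is what the
 * daemon's event loop actually watches. */
static void *streamWorker(void *opaque)
{
    struct streamData *data = opaque;
    char buf[64 * 1024];
    ssize_t got;

    while ((got = read(data->fd, buf, sizeof(buf))) > 0) {
        if (write(data->pipeWr, buf, got) != got)
            break;                    /* error/short write: give up */
    }
    close(data->pipeWr);              /* signals EOF to the event loop */
    return NULL;
}

/* Usage sketch:
 *   int fds[2];
 *   if (pipe(fds) == 0) {
 *       data->pipeWr = fds[1];       // thread writes here
 *       // fds[0] is registered with the event loop
 *       pthread_create(&thr, NULL, streamWorker, data);
 *   }
 */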
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
While this is no functional change, it makes the code look a bit
nicer. Moreover, it prepares the ground for future work.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
There is really no reason why we should have to have 'struct'
everywhere.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
It helps with debugging if we know what the return value of
saferead() is.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Because of a copy-paste error, the temporary directory used for this
test is called "fakesysdir". That's misleading.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
This is a USB3 controller and it's a better choice than piix3-uhci.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Acked-by: Andrea Bolognani <abologna@redhat.com>
The new logic will set piix3-uhci, if available, regardless of
architecture, and it will then be updated to a better model based on
the architecture and device availability.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Acked-by: Andrea Bolognani <abologna@redhat.com>
Make the schema more strict for HTTP disks, requiring a name and
mandating exactly one source host.
ftp/tftp entries were not moved here, since the http transport will
also support cookies and other options, which will be added later.
Since commit c5f6151390, qemuDomainBlockInfo tries to update the
"physical" storage size for all network storage, not only block
devices.
Since the storage driver APIs to do this are not implemented for
certain storage types (RBD, iSCSI, ...), the code would fail to
retrieve any data, as the failure of qemuDomainStorageUpdatePhysical
is fatal.
Since it's desirable to return data even if the total size can't be
updated, we need to ignore errors from that function and return
plausible data.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1442344
Since the private data structure is not freed upon stopping a VM, the
usbaddrs pointer would be leaked:
==15388== 136 (16 direct, 120 indirect) bytes in 1 blocks are definitely lost in loss record 893 of 1,019
==15388== at 0x4C2CF55: calloc (vg_replace_malloc.c:711)
==15388== by 0x54BF64A: virAlloc (viralloc.c:144)
==15388== by 0x5547588: virDomainUSBAddressSetCreate (domain_addr.c:1608)
==15388== by 0x144D38A2: qemuDomainAssignUSBAddresses (qemu_domain_address.c:2458)
==15388== by 0x144D38A2: qemuDomainAssignAddresses (qemu_domain_address.c:2515)
==15388== by 0x144ED1E3: qemuProcessPrepareDomain (qemu_process.c:5398)
==15388== by 0x144F51FF: qemuProcessStart (qemu_process.c:5979)
[...]
Clean up the stale data after shutting down the VM. Otherwise the
data would be leaked on the next VM start, since the private data
object is not freed when the VM is destroyed.
Test various configuration schemas targeting positive and negative
nestedhvm under the libvirt <cpu mode="host-passthrough">
configuration.
Mode "host-passthrough" generates nestedhvm=1 in/from the xl format,
where for Intel virtualization (VT-x):
<feature policy='disable' name='vmx'/>
or AMD virtualization (AMD-V):
<feature policy='disable' name='svm'/>
disables the virtualization mode under guest domains.
Signed-off-by: Wim ten Have <wim.ten.have@oracle.com>
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
For xen-xl conversions from and to native format under
host-passthrough mode, take care of the applied and inherited Xen
nestedhvm setting, generating or processing the correct feature
policy:
[On Intel (VT-x) architectures]
<feature policy='disable' name='vmx'/>
or
[On AMD (AMD-V) architectures]
<feature policy='disable' name='svm'/>
It will then generate (or parse) nestedhvm=1 in/from the xl format.
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Wim ten Have <wim.ten.have@oracle.com>
The Xen feature nestedhvm is the option on Xen 4.4+ which enables
nested virtualization when mode host-passthrough is applied.
Nested HVM is enabled by adding the following to the target domain:
<cpu mode='host-passthrough'/>
Virtualization on the target domain can then be disabled by specifying
the following feature policy:
[On Intel (VT-x) architecture]
<feature policy='disable' name='vmx'/>
or:
[On AMD (AMD-V) architecture]
<feature policy='disable' name='svm'/>
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Wim ten Have <wim.ten.have@oracle.com>