https://bugzilla.redhat.com/show_bug.cgi?id=1462060
When building a qemu namespace we might be dealing with bare
regular files that live under /dev, for instance
/dev/my_awesome_disk:
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/dev/my_awesome_disk'/>
<target dev='vdc' bus='virtio'/>
</disk>
# qemu-img create -f qcow2 /dev/my_awesome_disk 10M
So far we have been mknod()-ing them, which is obviously wrong. We
need to touch the file and bind-mount the original onto it:
1) touch /var/run/libvirt/qemu/fedora.dev/my_awesome_disk
2) mount --bind /dev/my_awesome_disk /var/run/libvirt/qemu/fedora.dev/my_awesome_disk
Later, when the new /dev is built and replaces the original /dev,
the file will live at the expected location.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Currently, we silently assume that the file we are creating in the
namespace is either a link or a device (character or block one).
This is not always the case. Therefore, instead of doing something
wrong, report an error about the unsupported file type.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
This function is going to be used in other places, so instead of
copying the code we can just call the function.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1459592
In 290a00e41d I tried to fix the process of building a qemu
namespace when dealing with file mount points. What I did not
realize then is that we might be dealing not just with regular
files but also with special files (like sockets). Indeed, try
the following:
1) socat unix-listen:/tmp/socket stdio
2) touch /dev/socket
3) mount --bind /tmp/socket /dev/socket
4) virsh start anyDomain
The problem with my previous approach is that I wasn't creating
the temporary location (where mount points under /dev are moved)
for anything but directories and regular files.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
This is only used in qemu_command.c, so move it, and clarify that
it's really about identifying if the serial config is a platform
device or not.
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
Some qemu arch/machine types have built-in platform devices that
are always implicitly available. For platform serial devices, the
current code assumes that only old style -serial config can be
used for these devices.
Apparently, though, since -chardev was introduced we can use -chardev
in these cases, like this:
-chardev pty,id=foo
-serial chardev:foo
Since -chardev enables all sorts of modern features, use this method
for platform devices.
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
Every qemu version we support has QEMU_CAPS_CHARDEV, so stop
explicitly tracking it and blacklist it like we've done for many
other feature flags.
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
AFAIK there aren't any cases where we will/should hit the old code
path for our supported qemu versions, so drop the old code.
Massive test suite churn follows
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
AFAIK there aren't any cases where we should fail these checks with
supported qemu versions, so just drop them.
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
AFAIK there aren't any qemu arch/machine types with platform parallel
devices that would require old style -parallel config, so we shouldn't
ever need this nowadays.
Remove a now redundant test
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
Rather than try to whitelist all device configs that can use
-chardev, blacklist the only one that really can't, which is the
default serial/console target type=isa case.
ISA specifically isn't a valid config for arm/aarch64, but we've
always implicitly treated it to mean 'default platform device'.
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
vcpu properties gathered from query-hotpluggable-cpus need to be passed
back to qemu. As qemu did not use the node-id property until now, and
libvirt forgot to pass it back properly (it was parsed but not passed
around), we did not honor it.
This patch adds node-id to the structures where it was missing and
passes it around as necessary.
The test data was generated with a VM with the following config:
<numa>
<cell id='0' cpus='0,2,4,6' memory='512000' unit='KiB'/>
<cell id='1' cpus='1,3,5,7' memory='512000' unit='KiB'/>
</numa>
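For illustration only (the values and CPU model below are made up), the
per-vcpu props reported by query-hotpluggable-cpus, including node-id,
have to be handed back as-is when the vcpu is hotplugged:
{ "props": { "node-id": 1, "socket-id": 1, "core-id": 0, "thread-id": 0 },
  "vcpus-count": 1, "type": "qemu64-x86_64-cpu" }
device_add qemu64-x86_64-cpu,id=vcpu1,node-id=1,socket-id=1,core-id=0,thread-id=0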
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1452053
vcpu 0 must always be enabled and non-hotpluggable, thus you can't
modify it using the vcpu hotplug APIs. Disallow it so that users can't
create invalid configurations.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1459785
While qemuProcessIncomingDefNew takes an fd argument and stores it in
qemuProcessIncomingDef structure, the caller is still responsible for
closing the file descriptor.
Introduced by commit v1.2.21-140-ge7c6f4575.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Since qemu commit 3ef6c40ad0b, hotplug can fail if we try to hotplug a
disk that is not qcow2 even though we claim it is. We need to error
out in that case.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
If a remote call fails during event registration (more than likely from
a network failure or remote libvirtd restart timed just right), then when
calling virObjectEventStateDeregisterID we don't want to call the
registered @freecb function because that breaks our contract that we
would only call it after successfully returning. If the @freecb routine
were called, it could result in a double free from properly coded
applications that free their opaque data on failure to register, as seen
in the following details:
Program terminated with signal 6, Aborted.
#0 0x00007fc45cba15d7 in raise
#1 0x00007fc45cba2cc8 in abort
#2 0x00007fc45cbe12f7 in __libc_message
#3 0x00007fc45cbe86d3 in _int_free
#4 0x00007fc45d8d292c in PyDict_Fini
#5 0x00007fc45d94f46a in Py_Finalize
#6 0x00007fc45d960735 in Py_Main
#7 0x00007fc45cb8daf5 in __libc_start_main
#8 0x0000000000400721 in _start
The double dereference of 'pyobj_cbData' is triggered in the following way:
(1) libvirt_virConnectDomainEventRegisterAny is invoked.
(2) the event is successfully added to the event callback list
(virDomainEventStateRegisterClient in
remoteConnectDomainEventRegisterAny returns 1 which means ok).
(3) when function remoteConnectDomainEventRegisterAny is hit, the
network connection drops coincidentally (or libvirtd is
restarted) in the context of function 'call'; the connection
is lost, the function 'call' fails, and the
virObjectEventStateDeregisterID branch is therefore taken.
(4) 'pyobj_conn' is dereferenced the 1st time in
libvirt_virConnectDomainEventFreeFunc.
(5) 'pyobj_cbData' (referring to pyobj_conn) is dereferenced the
2nd time in libvirt_virConnectDomainEventRegisterAny.
(6) the double free error is triggered.
Resolve this by adding a @doFreeCb boolean in order to avoid calling the
freeCb in virObjectEventStateDeregisterID for any remote call failure in
a remoteConnect*EventRegister* API. For remoteConnect*EventDeregister* calls,
the passed value would be true, indicating they should run the freecb if it
exists; whereas it's false for the remote call failure path.
Patch based on the investigation and initial patch posted by
fangying <fangying1@huawei.com>.
The function to check if -chardev is supported by QEMU was written a
long time ago, when adding chardevs did not make sense on the fixed ARM
platforms. Since then, we now have a general purpose virt platform,
which should support plugging in any device over PCIe which is supported
in a similar fashion on x86.
Signed-off-by: Christoffer Dall <cdall@linaro.org>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Even though we got both the original CPU (used for starting a domain)
and the updated version (the CPU really provided by QEMU) during
incoming migration, restore, or snapshot revert, we still need to update
the CPU according to the data we got from the freshly started QEMU.
Otherwise we don't know whether the CPU we got from QEMU matches the one
before migration. We just need to keep the original CPU in
priv->origCPU.
Messed up by me in v3.4.0-58-g8e34f4781.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
This function is called unconditionally from qemuProcessStop to
make sure we leave no dangling dirs behind. However, whenever the
directory we want to rmdir() is not there (e.g. because it hasn't
been created in the first place since the domain doesn't use
hugepages at all), we produce a warning like this:
2017-06-20 15:58:23.615+0000: 32638: warning :
qemuProcessBuildDestroyHugepagesPath:3363 : Unable to remove
hugepage path: /dev/hugepages/libvirt/qemu/1-instance-00000001
(errno=2)
Fix this by not producing the warning on ENOENT.
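A minimal standalone sketch of the intended check (fprintf stands in for
libvirt's logging and the function name is made up; the real code lives
in qemuProcessBuildDestroyHugepagesPath):
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

static void removeHugepagePath(const char *path)
{
    /* A missing directory is expected when the domain uses no
     * hugepages, so only warn about other failures. */
    if (rmdir(path) < 0 && errno != ENOENT)
        fprintf(stderr, "Unable to remove hugepage path: %s (errno=%d)\n",
                path, errno);
}

int main(void)
{
    removeHugepagePath("/dev/hugepages/libvirt/qemu/1-instance-00000001");
    return 0;
}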
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
Similarly to how we specify the groups of 5 capabilities in the header
file, move the labels to separate lines for the VIR_ENUM_IMPL part as
well. This simplifies rebase conflict resolution in the capability file
since lines only have to be shuffled around, not edited.
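For illustration only (the capability names shown are placeholders), the
intended layout keeps one label per line with the same grouping comments
as the header:
VIR_ENUM_IMPL(virQEMUCaps, QEMU_CAPS_LAST,
              /* 0 */
              "kvm",
              "tcg",
              ...
);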
Commit 7456c4f5f introduced a regression by not reloading the backing
chain of a disk after a snapshot. The regression was caused because
src->relPath was not set and thus the block commit code could not
determine the relative path.
This patch adds code that will load the backing store string if
VIR_DOMAIN_SNAPSHOT_CREATE_REUSE_EXT is used and store it in the correct
place when a snapshot is successfully completed.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1461303
Changing the labelling of the images does not need to happen after
setting the labelling and lock manager access. This saves the cleanup
of the labelling if the relative path can't be determined.
Check for the LOADPARM capability and potentially add loadparm=x to
the "-machine" string for the QEMU command line.
Also add xml2argv test cases for loadparm.
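For illustration only (device, machine type and value are made up), a
boot device annotated in the XML like:
<boot order='1' loadparm='2'/>
would result in something along the lines of:
-machine s390-ccw-virtio,accel=kvm,loadparm=2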
Signed-off-by: Farhan Ali <alifm@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Add new capability for the "-machine loadparm" QEMU option.
Add the capabilities replies/xml for s390x for QEMU 2.9.50.
Signed-off-by: Farhan Ali <alifm@linux.vnet.ibm.com>
It was added in commit 6c2e4c3856c8ed48c378bf1bf357cab46271a47a
so that Coverity would not complain about passing -1 to
qemuDomainDetachThisHostDevice(), but the function in question
has changed since and so the annotation doesn't apply anymore.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
When added in multiple previous commits, it was used only with -device
qxl(-vga), but for some QEMUs (< 1.6) we need to add this
functionality when using -vga qxl as well.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1283207
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
In the case that virtlogd is used as the stdio handler, we pass QEMU
only an FD to a pipe connected to virtlogd instead of the file itself.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1430988
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
Improve the code to decide whether to use virtlogd or not by checking
the same variable that is updated in qemuProcessPrepareDomain().
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
In the QEMU driver we can use virtlogd as the stdio handler for the
source backend of char devices if the current QEMU is new enough and
it's enabled in qemu.conf. We should store this information while
starting a guest because the config option may change while the guest
is running.
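For reference, the option in question is presumably the stdio_handler
setting in qemu.conf, e.g.:
stdio_handler = "logd"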
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1431112
Imagine a FS mounted on /dev/blah/blah2. Our process of creating
the suffix for the temporary location where all the mounted
filesystems are moved is very simplistic. We want:
/var/run/libvirt/qemu/$domName.$suffix
where $suffix is just the mount point path stripped of the "/dev/"
prefix. For instance:
/var/run/libvirt/qemu/fedora.mqueue for /dev/mqueue
/var/run/libvirt/qemu/fedora.pts for /dev/pts
and so on. Now if we plug /dev/blah/blah2 into the example we see
some misbehaviour:
/var/run/libvirt/qemu/fedora.blah/blah2
Well, misbehaviour if /dev/blah/blah2 is a file, because in that
case we call virFileTouch() instead of virFileMakePath().
The solution is to replace all the slashes in the suffix with, say,
dots. That way we don't have to care about nested directories.
IOW, the result we want for the given example is:
/var/run/libvirt/qemu/fedora.blah.blah2
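A minimal standalone sketch of the flattening (function and variable
names are made up for the example):
#include <stdio.h>

/* Replace every '/' in the suffix with '.' so that nested mount point
 * paths map to a flat file name, e.g. "blah/blah2" -> "blah.blah2". */
static void flattenSuffix(char *suffix)
{
    for (char *p = suffix; *p; p++) {
        if (*p == '/')
            *p = '.';
    }
}

int main(void)
{
    char suffix[] = "blah/blah2";
    flattenSuffix(suffix);
    printf("/var/run/libvirt/qemu/fedora.%s\n", suffix);
    return 0;
}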
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1431112
There can be nested mount points. For instance /dev/shm/blah can
be a mount point and /dev/shm too. It doesn't make much sense to
return the former path because callers preserve the latter (and
with that the former too). Therefore prune nested mount points.
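A standalone sketch of the pruning idea (paths and names are made up;
the real code operates on the list of detected /dev mount points):
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Sorted list of mount points found under /dev. */
    const char *mounts[] = { "/dev/pts", "/dev/shm", "/dev/shm/blah" };
    const char *kept = NULL;
    for (size_t i = 0; i < sizeof(mounts) / sizeof(mounts[0]); i++) {
        size_t klen = kept ? strlen(kept) : 0;
        /* Drop anything nested under an already preserved mount point. */
        if (kept && strncmp(mounts[i], kept, klen) == 0 && mounts[i][klen] == '/')
            continue;
        kept = mounts[i];
        printf("preserve %s\n", mounts[i]);
    }
    return 0;
}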
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1431112
After 290a00e41d we know how to deal with file mount points.
However, when cleaning up the temporary location for preserved
mount points we are still calling rmdir(). This won't fly for
files. We need to call unlink(). Now, since we don't really care
whether the cleanup succeeded or not (it's best effort anyway), we
can call both rmdir() and unlink() without the need to
differentiate between files and directories.
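A minimal sketch of that cleanup step (the function name is made up; the
real cleanup runs over the preserved mount points):
#include <unistd.h>

static void cleanupPreserved(const char *path)
{
    /* Best effort: one of these will fail depending on whether the
     * path is a directory or a file, and we don't care which. */
    if (rmdir(path) < 0)
        unlink(path);
}

int main(void)
{
    cleanupPreserved("/var/run/libvirt/qemu/fedora.dev/my_awesome_disk");
    return 0;
}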
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Change the settings from qemuDomainUpdateDeviceLive() as otherwise the
call would succeed even though nothing has changed.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1414627
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Most places which want to check ABI stability for an active domain need
to call this API rather than the original
qemuDomainDefCheckABIStability. The only exception is in snapshots where
we need to decide what to do depending on the saved image data.
https://bugzilla.redhat.com/show_bug.cgi?id=1460952
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
When making ABI stability checks for an active domain, we need to make
sure we use the same migratable definition which virDomainGetXMLDesc
with the MIGRATABLE flag provides, otherwise the ABI check will fail.
This is implemented in the new qemuDomainCheckABIStability which takes a
domain object and generates the right migratable definition from it.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
This patch separates the actual ABI checks from getting migratable defs
in qemuDomainDefCheckABIStability so that we can create another wrapper
which will use different methods to get the migratable defs.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The main goal of this function is to enable reusing the parsing code
from qemuDomainDefCopy.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1214369
My fix 671d18594f4 was incomplete. If a domain doesn't have
hugepages enabled, because of a missing condition we would still be
putting the hugepages path onto the qemu cmd line. Clean up the
conditions so that it's more obvious next time.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
With the current logic, we only free @tlsalias as part of the error
label and would have to free it explicitly earlier in the code. Convert
the error label to cleanup, so that we have only one sink, where we
handle all frees. Since JSON object append operation consumes pointers,
make sure @backend is cleared before we hit the cleanup label.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1214369
Consider the following XML:
<memoryBacking>
<hugepages>
<page size='2048' unit='KiB' nodeset='1'/>
</hugepages>
<source type='file'/>
<access mode='shared'/>
</memoryBacking>
<numa>
<cell id='0' cpus='0-3' memory='512000' unit='KiB'/>
<cell id='1' cpus='4-7' memory='512000' unit='KiB'/>
</numa>
The following cmd line is generated:
-object
memory-backend-file,id=ram-node0,mem-path=/var/lib/libvirt/qemu/ram,
share=yes,size=524288000 -numa node,nodeid=0,cpus=0-3,memdev=ram-node0
-object
memory-backend-file,id=ram-node1,mem-path=/var/lib/libvirt/qemu/ram,
share=yes,size=524288000 -numa node,nodeid=1,cpus=4-7,memdev=ram-node1
This is obviously wrong, as hugepages should have been used for
node 1. The hugepages configuration is more specific than <source
type='file'/>.
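For node 1 something along these lines would be expected instead (the
hugepage mount path shown is illustrative only):
-object
memory-backend-file,id=ram-node1,mem-path=/dev/hugepages/libvirt/qemu,
share=yes,size=524288000 -numa node,nodeid=1,cpus=4-7,memdev=ram-node1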
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1455819
It may happen that a domain is started without any huge pages.
However, a user might try to attach a DIMM module later, a DIMM
backed by huge pages (why somebody would want to mix regular and
huge pages is beyond me). Therefore we have to create the dir if
we haven't done so already, as in the example below.
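For illustration (size and target node are made up), such a
hugepage-backed DIMM would look like:
<memory model='dimm'>
  <source>
    <pagesize unit='KiB'>2048</pagesize>
  </source>
  <target>
    <size unit='KiB'>524288</size>
    <node>1</node>
  </target>
</memory>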
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1455819
Currently, the per-domain path for huge pages mmap() for qemu is
created iff the domain has memoryBacking with hugepages configured
in it. However, this alone is not enough because there can be a
DIMM module with hugepages configured too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>