Now that the multi-phb support series is in, work on the TODO at
qemuDomainGetMemLockLimitBytes() to arrive at the correct memlock limit
value.
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.vnet.ibm.com>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Additional PHBs (pci-root controllers) will be created for
the guest using the spapr-pci-host-bridge QEMU device, if
available; the implicit default PHB, while present in the
guest configuration, will be skipped.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1431193
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Laine Stump <laine@laine.org>
Usually, a controller with alias 'x' will create a bus with the
same name; however, the bus created by a PHB with alias 'x' will
be named 'x.0' instead, so we need to account for that.
As an exception to the exception, the implicit PHB that's added
automatically to every pSeries guest creates the 'pci.0' bus.
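For example (the aliases are illustrative): a regular PCI controller with
alias 'pci.2' provides bus 'pci.2', a PHB with alias 'pci.1' provides bus
'pci.1.0', and the implicit PHB always provides 'pci.0'.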
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Laine Stump <laine@laine.org>
This new capability can be used to detect whether a QEMU
binary supports the spapr-pci-host-bridge controller.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Laine Stump <laine@laine.org>
pSeries guests will soon need the new information; luckily,
we can figure it out automatically most of the time, so
users won't have to worry about it.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Laine Stump <laine@laine.org>
pSeries guests will soon be allowed to have multiple
PHBs (pci-root controllers), meaning the current check
on the controller index no longer applies to them.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Laine Stump <laine@laine.org>
pSeries guests will soon be allowed to have multiple
PHBs (pci-root controllers), which of course means that
all but one of them will have a non-zero index; hence,
we'll need to relax the current check.
However, right now the check is performed in the conf
module, which is generic rather than tied to the QEMU
driver, and where we don't have information such as the
guest machine type available.
To make this change of behavior possible down the line,
we need to move the check from the XML parser to the
drivers. Luckily, only QEMU and bhyve are using PCI
controllers, so this doesn't result in much duplication.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Laine Stump <laine@laine.org>
Moving the check and rewriting it this way doesn't alter
the current behavior, but will allow us to special-case
pci-root down the line.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Laine Stump <laine@laine.org>
We will soon need to be able to return a NULL pointer
without the caller considering that an error: to make
it possible, change the return type to int and use
an out parameter for the string instead.
Add some documentation for the function as well.
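A minimal sketch of the resulting calling convention (the function and
parameter names below are made up for illustration, not libvirt's actual
ones):

#include <stdlib.h>
#include <string.h>

/* Returns 0 on success and -1 on a real failure (e.g. allocation error);
 * on success *alias may legitimately be NULL when no alias exists. */
int
exampleGetAlias(const char *stored, char **alias)
{
    *alias = NULL;
    if (!stored)
        return 0;               /* nothing to return, but not an error */
    if (!(*alias = strdup(stored)))
        return -1;              /* genuine error */
    return 0;
}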
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Laine Stump <laine@laine.org>
We use hostdev->info frequently enough that having
a shorter name for it makes the code more readable.
We will also be adding even more uses later on.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Laine Stump <laine@laine.org>
This function was private to the QEMU driver and was,
accordingly, called qemuDomainPCIBusFullyReserved().
However the function is really not QEMU-specific at
all, so it makes sense to move it closer to the
virDomainPCIAddressBus struct it operates on.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Laine Stump <laine@laine.org>
The new name is qemuBlockStorageSourceGetGlusterProps; also hardcode the
protocol name rather than calling the ToString function, since this
function can't be made universal.
The new name is qemuBlockStorageSourceBuildHostsJSONSocketAddress since it
formats the JSON object in accordance with qemu's SocketAddress type.
Since the new naming in qemu uses 'inet' instead of 'tcp' add a
compatibility layer for gluster which uses the old name.
Rename it to qemuBlockStorageSourceGetBackendProps and refactor it to
return the JSON object instead of filling a pointer since now it's
always expected to return data.
Add logic which will call qemuGetDriveSourceProps only in cases where we
need the JSON representation. This will allow qemuGetDriveSourceProps to
generate the JSON representation for all possible disk sources.
The command line generators for the protocols above hardcoded a default
port number. Since we now always assign it when parsing the source
definition, this ad-hoc code is not required any more.
Fill them in right away rather than having to figure out at runtime
whether they are necessary or not.
virStorageSourceNetworkDefaultPort does not need to be exported any
more.
This reverts commit e4b980c853.
When a binary links against a .a archive (as opposed to a shared library),
any symbols which are marked as 'weak' get silently dropped. As a result
when the binary later runs, those 'weak' functions have an address of
0x0 and thus crash when run.
This happened with virtlogd and virtlockd because they don't link to
libvirt.so, but instead just libvirt_util.a and libvirt_rpc.a. The
virRandomBits symbol was weak and so left out of the virtlogd &
virtlockd binaries, despite being required by virHashTable functions.
Various other binaries like libvirt_lxc, libvirt_iohelper, etc also
link directly to .a files instead of libvirt.so, so are potentially
at risk of dropping symbols leading to a later runtime crash.
This is normal linker behaviour because a weak symbol is not treated
as undefined, so nothing forces it to be pulled in from the .a. You
have to force the linker to pull in weak symbols using -u$SYMNAME,
which is not a practical approach.
This risk of silent bad linkage that affects runtime behaviour is
not acceptable for a change that was merely trying to fix the test
suite. So stop using __weak__ again.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
When libvirt starts a new QEMU domain, it replaces host-model CPUs with
the appropriate custom CPU definition. However, when reconnecting to a
domain started by older libvirt (< 2.3), the domain would still have a
host-model CPU in its active definition.
https://bugzilla.redhat.com/show_bug.cgi?id=1463957
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
qemuProcessReconnect will need to call additional functions which were
originally defined further in qemu_process.c.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Separated from qemuProcessUpdateAndVerifyCPU to handle updating of an
active guest CPU definition according to live data from QEMU.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
In addition to updating a guest CPU definition the function verifies
that all required features are provided to the guest. Let's make it
obvious by calling it qemuProcessUpdateAndVerifyCPU.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Separated from qemuProcessUpdateLiveGuestCPU. The function makes sure
a guest CPU provides all features required by a domain definition.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Separated from qemuProcessUpdateLiveGuestCPU. Its purpose is to fetch
guest CPU data from a running QEMU process. The data can later be used
to verify and update the active guest CPU definition.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
When checking ABI stability between two domain definitions, we first
make migratable copies of them. However, we also asked for the guest CPU
to be updated, even though the updated CPU is supposed to be already
included in the original definitions. Moreover, if we do this on the
destination host during migration, we're potentially updating the
definition according to an incompatible host CPU.
While updating the CPU when checking ABI stability doesn't make any
sense, it actually just worked because updating the CPU doesn't do
anything for custom CPUs (only host-model CPUs are affected) and we
updated both definitions in the same way.
Less than a year ago commit v2.3.0-rc1~42 stopped updating the CPU in
the definition we got internally and only the user supplied definition
was updated. However, the same commit started updating host-model CPUs
to custom CPUs which are not affected by the request to update the CPU.
So it still seemed to work right, unless a user upgraded libvirt 2.2.0
to a newer version while there were some domains with host-model CPUs
running on the host. Such domains couldn't be migrated with a user
supplied XML since libvirt would complain:
Target CPU mode custom does not match source host-model
The fix is pretty straightforward, we just need to stop updating the CPU
when checking ABI stability.
https://bugzilla.redhat.com/show_bug.cgi?id=1463957
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
After 426dc5eb2, qemuCaps and virDomainDefPtr are unused here;
remove them from the call stack.
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
Obviously, old gcc-s are sad when a variable shares its name with
a function. And we do have such a variable (added in 4d8a914be0):
@mount. Rename it to @mountpoint so that the compiler's happy again.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The way we create devices under /dev is highly linux specific.
For instance we do mknod(), mount(), umount(), etc. Some
platforms are even missing some of these functions. Then again,
as declared in qemuDomainNamespaceAvailable(): namespaces are
linux only. Therefore, to avoid obfuscating the code by trying to
make it compile on weird platforms, just provide a non-linux stub
for qemuDomainAttachDeviceMknodRecursive(). At the same time,
qemuDomainAttachDeviceMknodHelper() which actually calls the
non-existent functions is moved under ifdef __linux__ block since
its only caller is in that block too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1467826
Commit id 'b9b1aa639' was supposed to add logic to set the allocation
for sparse files when wr_highest_offset was zero; however, an unconditional
setting was done just prior. For block devices, this means allocation is
always returning 0 since 'actual-size' will be zero.
Remove the unconditional setting and add the note about it being possible
to still be zero for block devices. As soon as the guest starts writing to
the volume, the allocation value will then be obtainable from qemu via
the wr_highest_offset.
On domain startup, bind host or bind service can be omitted
and we will format a working command line.
Extend this to hotplug as well and specify the service to QEMU
even if the host is missing.
https://bugzilla.redhat.com/show_bug.cgi?id=1452441
Currently all mockable functions are annotated with the 'noinline'
attribute. This is insufficient to guarantee that a function can
be reliably mocked with an LD_PRELOAD. The C language spec allows
the compiler to assume there is only a single implementation of
each function. It can thus do things like propagating constant
return values into the caller at compile time, or creating
multiple specialized copies of the function body each optimized
for a different caller. To prevent these optimizations we must
also set the 'noclone' and 'weak' attributes.
This fixes the test suite when libvirt.so is built with CLang
with optimization enabled.
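As a rough illustration (the macro and function below are invented for this
example, not libvirt's actual names), the combined annotation looks like
this:

/* Combine the three attributes mentioned above so the compiler can
 * neither inline, clone nor constant-fold the function away, keeping it
 * overridable by an LD_PRELOAD-ed strong definition in the test suite. */
#define EXAMPLE_MOCKABLE \
    __attribute__((noinline)) __attribute__((noclone)) __attribute__((weak))

EXAMPLE_MOCKABLE int
exampleGetAnswer(void)
{
    return 42;
}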
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The HOST_NAME_MAX, INET_ADDRSTRLEN and VIR_LOOPBACK_IPV4_ADDR
constants are only used by a handful of files, so are better
kept in virsocketaddr.h or the source file that uses them.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Currently, the only type of chardev that we create the backend
for in the namespace is type='dev'. This is not enough, other
backends might have files under /dev too. For instance channels
might have a unix socket under /dev (well, bind mounted under
/dev from a different place).
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1462060
Just like in the previous commit, when attaching a file based
device whose source lives under /dev (that is, a regular file
rather than a device node), calling mknod() is no help.
We need to (a rough sketch in code follows this list):
1) bind mount device to some temporary location
2) enter the namespace
3) move the mount point to desired place
4) umount it in the parent namespace from the temporary location
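A very rough sketch of those four steps (namespace switching, error
reporting and the real libvirt helpers are omitted; the function name and
paths are illustrative):

#include <sys/mount.h>

int
exampleMoveFileIntoNamespace(const char *src, const char *tmp, const char *dst)
{
    /* 1) bind mount the file to a temporary location in the parent ns */
    if (mount(src, tmp, NULL, MS_BIND, NULL) < 0)
        return -1;
    /* 2) enter the QEMU mount namespace here (e.g. via setns()) */
    /* 3) move the mount point to the desired place inside the namespace */
    if (mount(tmp, dst, NULL, MS_MOVE, NULL) < 0)
        return -1;
    /* 4) back in the parent namespace, drop the temporary location */
    umount(tmp);
    return 0;
}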
At the same time, the check in qemuDomainNamespaceSetupDisk no
longer makes sense. Therefore remove it.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1462060
When building a qemu namespace we might be dealing with bare
regular files that live under /dev. For instance
/dev/my_awesome_disk:
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/dev/my_awesome_disk'/>
<target dev='vdc' bus='virtio'/>
</disk>
# qemu-img create -f qcow2 /dev/my_awesome_disk 10M
So far we were mknod()-ing them which is
obviously wrong. We need to touch the file and bind mount it to
the original:
1) touch /var/run/libvirt/qemu/fedora.dev/my_awesome_disk
2) mount --bind /dev/my_awesome_disk /var/run/libvirt/qemu/fedora.dev/my_awesome_disk
Later, when the new /dev is built and replaces the original /dev,
the file is going to live at the expected location.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Currently, we silently assume that the file we are creating in the
namespace is either a link or a device (character or block one).
This is not always the case. Therefore, instead of doing something
wrong, report an unsupported file type.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Currently, we silently assume that the file we are creating in the
namespace is either a link or a device (character or block one).
This is not always the case. Therefore, instead of doing something
wrong, report an unsupported file type.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
This function is going to be used in other places, so
instead of copying code we can just call the function.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1459592
In 290a00e41d I tried to fix the process of building a
qemu namespace when dealing with file mount points. What I
didn't realize then is that we might be dealing not just with
regular files but also with special files (like sockets). Indeed,
try the following:
1) socat unix-listen:/tmp/socket stdio
2) touch /dev/socket
3) mount --bind /tmp/socket /dev/socket
4) virsh start anyDomain
The problem with my previous approach is that I wasn't creating the
temporary location (where mount points under /dev are moved) for
anything but directories and regular files.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
This is only used in qemu_command.c, so move it, and clarify that
it's really about identifying if the serial config is a platform
device or not.
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
Some qemu arch/machine types have built-in platform devices that
are always implicitly available. For platform serial devices, the
current code assumes that only old style -serial config can be
used for these devices.
Apparently though since -chardev was introduced, we can use -chardev
in these cases, like this:
-chardev pty,id=foo
-serial chardev:foo
Since -chardev enables all sorts of modern features, use this method
for platform devices.
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
Every qemu version we support has QEMU_CAPS_CHARDEV, so stop
explicitly tracking it and blacklist it like we've done for many
other feature flags.
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
AFAIK there aren't any cases where we will/should hit the old code
path for our supported qemu versions, so drop the old code.
Massive test suite churn follows
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
AFAIK there aren't any cases where we should fail these checks with
supported qemu versions, so just drop them.
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
AFAIK there aren't any qemu arch/machine types with platform parallel
devices that would require old style -parallel config, so we shouldn't
ever need this nowadays.
Remove a now redundant test
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
Rather than try to whitelist all device configs that can't use
-chardev, blacklist the only one that really can't, which is the
default serial/console target type=isa case.
ISA specifically isn't a valid config for arm/aarch64, but we've
always implicitly treated it to mean 'default platform device'.
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
vcpu properties gathered from query-hotpluggable-cpus need to be passed
back to qemu. As qemu did not use the node-id property until now and
libvirt forgot to pass it back properly (it was parsed but not passed
around), we did not honor this.
This patch adds node-id to the structures where it was missing and
passes it around as necessary.
The test data was generated with a VM with following config:
<numa>
<cell id='0' cpus='0,2,4,6' memory='512000' unit='KiB'/>
<cell id='1' cpus='1,3,5,7' memory='512000' unit='KiB'/>
</numa>
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1452053
vcpu 0 must always be enabled and non-hotpluggable, thus you can't
modify it using the vcpu hotplug APIs. Disallow it so that users can't
create invalid configurations.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1459785
While qemuProcessIncomingDefNew takes an fd argument and stores it in
qemuProcessIncomingDef structure, the caller is still responsible for
closing the file descriptor.
Introduced by commit v1.2.21-140-ge7c6f4575.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Since qemu commit 3ef6c40ad0b it can fail if trying to hotplug a
disk that is not qcow2 despite us saying it is. We need to error
out in that case.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
If a remote call fails during event registration (more than likely from
a network failure or remote libvirtd restart timed just right), then when
calling the virObjectEventStateDeregisterID we don't want to call the
registered @freecb function because that breaks our contract that we
would only call it after successfully returning. If the @freecb routine
were called, it could result in a double free from properly coded
applications that free their opaque data on failure to register, as seen
in the following details:
Program terminated with signal 6, Aborted.
#0 0x00007fc45cba15d7 in raise
#1 0x00007fc45cba2cc8 in abort
#2 0x00007fc45cbe12f7 in __libc_message
#3 0x00007fc45cbe86d3 in _int_free
#4 0x00007fc45d8d292c in PyDict_Fini
#5 0x00007fc45d94f46a in Py_Finalize
#6 0x00007fc45d960735 in Py_Main
#7 0x00007fc45cb8daf5 in __libc_start_main
#8 0x0000000000400721 in _start
The double dereference of 'pyobj_cbData' is triggered in the following way:
(1) libvirt_virConnectDomainEventRegisterAny is invoked.
(2) the event is successfully added to the event callback list
(virDomainEventStateRegisterClient in
remoteConnectDomainEventRegisterAny returns 1 which means ok).
(3) when function remoteConnectDomainEventRegisterAny is hit,
network connection disconnected coincidently (or libvirtd is
restarted) in the context of function 'call' then the connection
is lost and the function 'call' failed, the branch
virObjectEventStateDeregisterID is therefore taken.
(4) 'pyobj_conn' is dereferenced the 1st time in
libvirt_virConnectDomainEventFreeFunc.
(5) 'pyobj_cbData' (which refers to pyobj_conn) is dereferenced the
2nd time in libvirt_virConnectDomainEventRegisterAny.
(6) the double free error is triggered.
Resolve this by adding a @doFreeCb boolean in order to avoid calling the
freeCb in virObjectEventStateDeregisterID for any remote call failure in
a remoteConnect*EventRegister* API. For remoteConnect*EventDeregister* calls,
the passed value would be true indicating they should run the freecb if it
exists; whereas, it's false for the remote call failure path.
Patch based on the investigation and initial patch posted by
fangying <fangying1@huawei.com>.
The function to check if -chardev is supported by QEMU was written a
long time ago, when adding chardevs did not make sense on the fixed ARM
platforms. Since then, we now have a general purpose virt platform,
which should support plugging in any device over PCIe which is supported
in a similar fashion on x86.
Signed-off-by: Christoffer Dall <cdall@linaro.org>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Even though we got both the original CPU (used for starting a domain)
and the updated version (the CPU really provided by QEMU) during
incoming migration, restore, or snapshot revert, we still need to update
the CPU according to the data we got from the freshly started QEMU.
Otherwise we don't know whether the CPU we got from QEMU matches the one
before migration. We just need to keep the original CPU in
priv->origCPU.
Messed up by me in v3.4.0-58-g8e34f4781.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
This function is called unconditionally from qemuProcessStop to
make sure we leave no dangling dirs behind. However, whenever the
directory we want to rmdir() is not there (e.g. because it hasn't
been created in the first place because domain doesn't use
hugepages at all), we produce a warning like this:
2017-06-20 15:58:23.615+0000: 32638: warning :
qemuProcessBuildDestroyHugepagesPath:3363 : Unable to remove
hugepage path: /dev/hugepages/libvirt/qemu/1-instance-00000001
(errno=2)
Fix this by not producing the warning on ENOENT.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
Similarly to how we specify the groups of 5 capabilities in the header
file, move the labels to a separate line for the VIR_ENUM_IMPL part as well.
This simplifies rebase conflict resolution in the capability file since
only lines have to be shuffled around, but they don't need to be edited.
Commit 7456c4f5f introduced a regression by not reloading the backing
chain of a disk after snapshot. The regression was caused because
src->relPath was not set and thus the block commit code could not
determine the relative path.
This patch adds code that will load the backing store string when
VIR_DOMAIN_SNAPSHOT_CREATE_REUSE_EXT is used and store it in the correct
place when a snapshot is successfully completed.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1461303
Determining the relative path of the images does not need to happen after setting
the labeling and lock manager access. This saves the cleanup of the
labeling if the relative path can't be determined.
Check for the LOADPARM capability and potentially add a loadparm=x to
the "-machine" string for the QEMU command line.
Also add xml2argv test cases for loadparm.
Signed-off-by: Farhan Ali <alifm@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Add new capability for the "-machine loadparm" QEMU option.
Add the capabilities replies/xml for s390x for QEMU 2.9.50.
Signed-off-by: Farhan Ali <alifm@linux.vnet.ibm.com>
It was added in commit 6c2e4c3856
so that Coverity would not complain about passing -1 to
qemuDomainDetachThisHostDevice(), but the function in question
has changed since and so the annotation doesn't apply anymore.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
When added in multiple previous commits, it was used only with -device
qxl(-vga), but for some QEMUs (< 1.6) we need to add this
functionality when using -vga qxl as well.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1283207
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
In the case that virtlogd is used as the stdio handler, we pass QEMU
only an FD to a pipe connected to virtlogd instead of the file itself.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1430988
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
Improve the code to decide whether to use virtlogd or not by checking
the same variable that is updated in qemuProcessPrepareDomain().
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
In the QEMU driver we can use virtlogd as the stdio handler for the source
backend of char devices if the current QEMU is new enough and it's enabled in
qemu.conf. We should store this information while starting a guest
because the config option may change while the guest is running.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1431112
Imagine a FS mounted on /dev/blah/blah2. Our process of creating the
suffix for the temporary location where all the mounted filesystems
are moved is very simplistic. We want:
/var/run/libvirt/qemu/$domName.$suffix
where $suffix is just the mount point path stripped of the "/dev/"
prefix. For instance:
/var/run/libvirt/qemu/fedora.mqueue for /dev/mqueue
/var/run/libvirt/qemu/fedora.pts for /dev/pts
and so on. Now if we plug /dev/blah/blah2 into the example we see
some misbehaviour:
/var/run/libvirt/qemu/fedora.blah/blah2
Well, misbehaviour if /dev/blah/blah2 is a file, because in that
case we call virFileTouch() instead of virFileMakePath().
The solution is to replace all the slashes in the suffix with, say,
dots. That way we don't have to care about nested directories.
IOW, the result we want for given example is:
/var/run/libvirt/qemu/fedora.blah.blah2
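The mangling itself boils down to something like this (a simplified
sketch, not the actual libvirt code):

#include <string.h>

/* Turn e.g. "blah/blah2" (the mount point with the "/dev/" prefix
 * already stripped) into "blah.blah2" so the suffix contains no slash. */
void
exampleSanitizeSuffix(char *suffix)
{
    char *c = suffix;
    while ((c = strchr(c, '/')))
        *c = '.';
}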
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1431112
There can be nested mount points. For instance /dev/shm/blah can
be a mount point and /dev/shm too. It doesn't make much sense to
return the former path because callers preserve the latter (and
with that the former too). Therefore prune nested mount points.
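Whether one mount point is nested inside another is a simple prefix
check, roughly (sketch only):

#include <stdbool.h>
#include <string.h>

/* True for e.g. mnt = "/dev/shm/blah", parent = "/dev/shm"; the extra
 * '/' check avoids false positives such as "/dev/shmfoo". */
bool
exampleIsNestedMountPoint(const char *mnt, const char *parent)
{
    size_t len = strlen(parent);
    return strncmp(mnt, parent, len) == 0 && mnt[len] == '/';
}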
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1431112
After 290a00e41d we know how to deal with file mount points.
However, when cleaning up the temporary location for preserved
mount points we are still calling rmdir(). This won't fly for
files. We need to call unlink(). Now, since we don't really care
if the cleanup succeeded or not (it's the best effort anyway), we
can call both rmdir() and unlink() without the need to differentiate
between files and directories.
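In code the best-effort cleanup is as simple as (sketch):

#include <unistd.h>

/* At most one of these can succeed, depending on whether @path is a
 * directory or a regular file; failures are deliberately ignored. */
void
exampleRemovePreservedPath(const char *path)
{
    rmdir(path);
    unlink(path);
}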
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Change the settings from qemuDomainUpdateDeviceLive() as otherwise the
call would succeed even though nothing has changed.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1414627
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Most places which want to check ABI stability for an active domain need
to call this API rather than the original
qemuDomainDefCheckABIStability. The only exception is in snapshots where
we need to decide what to do depending on the saved image data.
https://bugzilla.redhat.com/show_bug.cgi?id=1460952
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
When making ABI stability checks for an active domain, we need to make
sure we use the same migratable definition which virDomainGetXMLDesc
with the MIGRATABLE flag provides, otherwise the ABI check will fail.
This is implemented in the new qemuDomainCheckABIStability which takes a
domain object and generates the right migratable definition from it.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
This patch separates the actual ABI checks from getting migratable defs
in qemuDomainDefCheckABIStability so that we can create another wrapper
which will use different methods to get the migratable defs.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The main goal of this function is to enable reusing the parsing code
from qemuDomainDefCopy.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1214369
My fix 671d18594f was incomplete. If a domain doesn't have
hugepages enabled, a missing condition meant we would still be
putting the hugepages path onto the qemu cmd line. Clean up the
conditions so that it's more visible next time.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
With the current logic, we only free @tlsalias as part of the error
label and would have to free it explicitly earlier in the code. Convert
the error label to cleanup, so that we have only one sink, where we
handle all frees. Since JSON object append operation consumes pointers,
make sure @backend is cleared before we hit the cleanup label.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1214369
Consider the following XML:
<memoryBacking>
<hugepages>
<page size='2048' unit='KiB' nodeset='1'/>
</hugepages>
<source type='file'/>
<access mode='shared'/>
</memoryBacking>
<numa>
<cell id='0' cpus='0-3' memory='512000' unit='KiB'/>
<cell id='1' cpus='4-7' memory='512000' unit='KiB'/>
</numa>
The following cmd line is generated:
-object
memory-backend-file,id=ram-node0,mem-path=/var/lib/libvirt/qemu/ram,
share=yes,size=524288000 -numa node,nodeid=0,cpus=0-3,memdev=ram-node0
-object
memory-backend-file,id=ram-node1,mem-path=/var/lib/libvirt/qemu/ram,
share=yes,size=524288000 -numa node,nodeid=1,cpus=4-7,memdev=ram-node1
This is obviously wrong as for node 1 hugepages should have been
used. The hugepages configuration is more specific than <source
type='file'/>.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1455819
It may happen that a domain is started without any huge pages.
However, a user might try to attach a DIMM module later, one backed
by huge pages (why somebody would want to mix regular and huge pages
is beyond me). Therefore we have to create the dir if we haven't
done so far.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1455819
Currently, the per-domain path for huge pages mmap() for qemu is
created iff the domain has memoryBacking with hugepages configured
in it. However, this alone is not enough because there can
be a DIMM module with hugepages configured too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Commit v3.4.0-44-gac793bd71 fixed a memory leak, but failed to return
the special -3 value. Thus an attempt to start a domain with corrupted
managed save file would remove the corrupted file and report
"An error occurred, but the cause is unknown" instead of starting the
domain from scratch.
https://bugzilla.redhat.com/show_bug.cgi?id=1460962
Use ATTRIBUTE_FALLTHROUGH, introduced by commit
5d84f5961b, instead of comments to
indicate that the fall through is an intentional behavior.
Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
Add a comment for mon->watch to make clear what the purpose of this
value is.
Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
virDomainUSBAddressEnsure returns 0 or -1, so commit id 'de325472'
checking for 1, the way qemuDomainAttachChrDeviceAssignAddr does, was wrong.
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.vnet.ibm.com>
Commit 824272cb28 attempted to fix escaping of characters in a unix
socket path but it was wrong. We need to escape only ','; there is
no escape character for '='.
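(As a reminder of QEMU's option syntax: a literal ',' in an option value
is escaped by doubling it, so a socket path like /tmp/foo,bar is written
as /tmp/foo,,bar on the command line, while '=' is passed through
unchanged.)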
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1447618
Currently, any attempt to change the MTU on an interface that is
plugged into a running domain is silently ignored. We should either
do what's asked or error out. Well, we can update the host side
of the interface, but we cannot change the 'host_mtu' attribute of
the virtio-net device. Therefore we have to error out.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Laine Stump <laine@laine.org>
https://bugzilla.redhat.com/show_bug.cgi?id=1408701
While implementing MTU (572eda12ad and friends), I forgot
to actually set the MTU on the host NIC in the hotplug case. We
correctly tell qemu on the monitor what the MTU should be, but we
are not actually setting it on the host NIC.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Laine Stump <laine@laine.org>
https://bugzilla.redhat.com/show_bug.cgi?id=1459091
Currently, we query the vhostuser interface name in the post
parse callback. At that time the interface might not yet exist.
However, it has to exist when the domain is started. Therefore it
makes more sense to query its name at that point. This partially
reverts 57b5e27.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When adding the aliased serial stub console, the structure wasn't
properly allocated (VIR_ALLOC instead of virDomainChrDefNew) which then
resulted in SIGSEGV in virDomainChrSourceIsEqual during a serial device
coldplug.
https://bugzilla.redhat.com/show_bug.cgi?id=1434278
Signed-off-by: Erik Skultety <eskultet@redhat.com>
If QEMU is new enough and we have the live updated CPU definition in
either save or migration cookie, we can use it to enforce ABI. The
original guest CPU from domain XML will be stored in private data.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Since the domain XML saved in a snapshot or saved image uses the
original guest CPU definition but we still want to enforce ABI when
restoring the domain if libvirt and QEMU are new enough, we save the
live updated CPU definition in a save cookie.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Since the domain XML sent during migration uses the original guest CPU
definition but we still want the destination to enforce ABI if it is new
enough, we send the live updated CPU definition in a migration cookie.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
When persistent migration of a transient domain is requested but no
custom XML is passed to the migration API we would just let the
destination daemon make a persistent definition from the live definition
itself. This is not a problem now, but once the destination daemon
starts replacing the original CPU definition with the one from migration
cookie before starting a domain, it would need to add more ugly hacks to
reverse the operation. Let's just always send the persistent definition
in the cookie to make things a bit cleaner.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The destination host may not be able to start a domain using the live
updated CPU definition because either libvirt or QEMU may not be new
enough. Thus we need to send the original guest CPU definition.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
When starting a domain we update the guest CPU definition to match what
QEMU actually provided (since it is allowed to add or remove some
features unless check='full' is specified). Let's store the original CPU
in domain private data so that we can use it to provide a backward
compatible domain XML.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The following patches will add an actual content in the cookie and use
the data when restoring a domain.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
This patch implements a new save cookie object and callbacks for qemu
driver. The actual useful content will be added in the object later.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
virDomainXMLOption gains driver specific callbacks for parsing and
formatting save cookies.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The new structure encapsulates save image header and associated data
(domain XML).
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The function is now called virQEMUSaveDataWrite and it is now doing
everything it needs to save both the save image header and domain XML to
a file, be it a new file or an existing one in which a user wants to
change the domain XML.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The function is supposed to update the save image header after a
successful migration to the save image file.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
This is a preparation for creating a new virQEMUSaveData structure which
will encapsulate all save image header data.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Since virQEMUSaveHeader will be followed by more than just domain XML,
the old name would be confusing as it was designed to describe the
length of all data following the save image header.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
This will be used later when a save cookie will become part of the
snapshot XML using new driver specific parser/formatter functions.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Allow starting the block-copy job for a persistent domain if a user
declares, by using a flag, that the job will not be recovered if the VM
is switched off while the job is active.
This allows using the block-copy job with persistent VMs under the same
conditions as apply to transient domains.
Without this patch libvirt would just report the operation of a
completed job as "unknown" instead of "incoming migration".
https://bugzilla.redhat.com/show_bug.cgi?id=1457052
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
vCPU ordering information would not be updated if a vCPU emerged or
disappeared during the time libvirtd was not running. This allowed
creating an invalid configuration like:
[...]
<vcpu id='56' enabled='yes' hotpluggable='yes' order='57'/>
<vcpu id='57' enabled='yes' hotpluggable='yes' order='58'/>
<vcpu id='58' enabled='yes' hotpluggable='yes'/>
Call the function that records the information on reconnect.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1451251
https://bugzilla.redhat.com/show_bug.cgi?id=1450349
The problem is that qemu fails to load the guest memory image if these
attributes change on migration/restore from an image.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
While checking for ABI stability, drivers might impose additional
checks that are not valid in the general case. For instance, the qemu
driver might check some memory backing attributes because of how
qemu works. But those attributes may work well in other drivers.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
qemuDomainGetBlockInfo would error out if qemu did not report
'wr_highest_offset'. This usually does not happen, but can happen
briefly during active layer block commit. There's no need to report the
error, we can simply report that the disk is fully allocated at that
point.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1452045
In 48d9e6cdcc and friends we've allowed users to back guest
memory by a file inside the host. And in order to keep things
manageable the memory_backing_dir variable was introduced to
qemu.conf to specify the directory where the files are kept.
However, libvirt's policy is that directories are created on
domain startup if they don't exist. We've missed this one.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
commit a8eba5036 added further checking of the guest shutdown cause, but
this enhancement is only available since qemu 2.10, causing a crash because
of a NULL pointer dereference on older qemus.
Thread 1 "libvirtd" received signal SIGSEGV, Segmentation fault.
0x00007ffff72441af in virJSONValueObjectGet (object=0x0,
key=0x7fffd5ef11bf "guest")
at util/virjson.c:769
769 if (object->type != VIR_JSON_TYPE_OBJECT)
(gdb) bt
0 in virJSONValueObjectGet
1 in virJSONValueObjectGetBoolean
2 in qemuMonitorJSONHandleShutdown
3 in qemuMonitorJSONIOProcessEvent
4 in qemuMonitorJSONIOProcessLine
5 in qemuMonitorJSONIOProcess
6 in qemuMonitorIOProcess
Signed-off-by: Erik Skultety <eskultet@redhat.com>
QEMU will likely report the details of it shutting down, particularly
whether the shutdown was initiated by the guest or host. We should
forward that information along, at least for shutdown events. Reset
has that as well, however that is not a lifecycle event and would add
extra constants that might not be used. It can be added later on.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1384007
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
The @meminfo allocated in qemuMonitorGetMemoryDeviceInfo() may be
lost when qemuDomainObjExitMonitor() fails.
Signed-off-by: Yi Wang <wang.yi59@zte.com.cn>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Setting the 'group_name' for a disk would falsely trigger an error path,
as in commit 4b57f76502 we did not properly check the return value of
VIR_STRDUP.
This reverts commit 2841e675.
It turns out that adding the host_mtu field to the PCI capabilities in
the guest bumps the length of PCI capabilities beyond the 32 byte
boundary, so the virtio-net device gets 64 bytes of ioport space
instead of 32, which offsets the address of all the other following
devices. Migration doesn't work very well when the location and length
of PCI capabilities of devices is changed between source and
destination.
This means that we need to make sure that the absence/presence of
host_mtu on the qemu commandline always matches between source and
destination, which means that we need to make setting of host_mtu an
opt-in thing (it can't happen automatically when the bridge being used
has a non-default MTU, which is what commit 2841e675 implemented).
I do want to re-implement this feature with an <mtu auto='on'/>
setting, but probably won't backport that to any stable branches, so
I'm first reverting the original commit, and that revert can be pushed
to the few releases that have been made since the original (3.1.0 -
3.3.0).
Resolves: https://bugzilla.redhat.com/1449346
The error message would contain the first vcpu id after the list of vcpus
selected for modification. To print the proper vcpu id, remember the
first vcpu selected to be modified.
The code causes the 'offset' variable to be overwritten (possibly with
NULL if neither of the vCPUs is halted) which causes a crash since the
variable is still used after that part.
Additionally there's a bug: since strstr() would look up the '(halted)'
string in the whole string rather than just the currently processed line,
the returned data is completely bogus.
Rather than switching to single line parsing let's remove the code
altogether since it has a commonly used JSON monitor alternative and
the data itself is not very useful to report.
The code was introduced in commit cc5e695bde
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1452106
Namely, this patch is about virMediatedDeviceGetIOMMUGroup{Dev,Num}
functions. There's no compelling reason why these functions should take
an object, on the contrary, having to create an object every time one
needs to query the IOMMU group number, discarding the object afterwards,
seems odd.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Since we allow active layer block commit the users are allowed to commit
the top of the chain (e.g. vda) into the backing image. The API would
not accept that parameter, as it tried to look up the image in the
backing chain.
Add the ability to use the top level image target name explicitly as the
top image of the block commit operation.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1451394
The QEMU default is GICv2, and some of the code in libvirt
relies on the exact value. Stop pretending that's not the
case and use GICv2 explicitly where needed.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
There are currently some limitations in the emulated GICv3
that make it unsuitable as a default. Use GICv2 instead.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1450433
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Currently we consider all UNIX paths with a specific prefix as generated
by libvirt, but that's a wrong assumption. Let's make the detection
better by actually checking whether the whole path matches one of the
paths that we generate or generated in the past.
The UNIX path isn't stored in config XML since libvirt-1.3.1.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1446980
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Add kernel_irqchip=split/on to the QEMU command line
and a capability that looks for it in query-command-line-options
output. For the 'split' option, use a version check
since it cannot be reasonably probed.
https://bugzilla.redhat.com/show_bug.cgi?id=1427005
If a shutdown is expected because it was triggered via libvirt we can
also expect the monitor to close. In those cases do not report an
internal error like:
"internal error: End of file from qemu monitor"
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Adjust the current message to make it clear that it is the hotplug
operation that is unsupported with the given host device type.
https://bugzilla.redhat.com/show_bug.cgi?id=1450072
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Added only in drivers that were already calling
virCapabilitiesInitNUMA(). Instead of refactoring all the callers to
behave the same way in case of error, just follow what the callers are
doing for all the functions.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Even though there are several checks before calling this function
and for some scenarios we don't call it at all (e.g. on disk hot
unplug), it may be possible to sneak in some weird files (e.g. if a
domain had an RNG with /dev/shm/some_file as its backend). No
matter how improbable, we shouldn't unlink it as we would be
unlinking a file from the host which we haven't created in the
first place.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cedric Bosdonnat <cbosdonnat@suse.com>
Just like in previous commit, this fixes the same issue for
hotplug.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cedric Bosdonnat <cbosdonnat@suse.com>
While the code allows devices to already be there (by some
miracle), we shouldn't try to create devices that don't belong to
us. For instance, we shouldn't try to create /dev/shm/file
because /dev/shm is a mount point that is preserved. Therefore if
a file is created there from the outside (e.g. by a mgmt application
or some other daemon running on the system like vhostmd), it
exists in the qemu namespace too as the mount point is the same.
It's only /dev and /dev only that is different. The same
reasoning applies to all other preserved mount points.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cedric Bosdonnat <cbosdonnat@suse.com>
Currently, all we need to do in qemuDomainCreateDeviceRecursive() is to
take the given @device, get all kinds of info on it (major & minor numbers,
owner, seclabels) and create its copy at a temporary location @path
(usually /var/run/libvirt/qemu/$domName.dev), if @device lives under
/dev. This is, however, a very loose condition, as it also means
/dev/shm/* is created too. Therefore, we will need to pass more arguments
into the function for better decision making (e.g. the list of mount points
under /dev). Instead of adding more arguments to all the functions (not
easily reachable because some functions are callbacks with strictly
defined types), let's just turn this one 'const char *' into a 'struct *'.
New "arguments" can then be added at no cost.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cedric Bosdonnat <cbosdonnat@suse.com>
When setting up mount namespace for a qemu domain the following
steps are executed:
1) get list of mountpoints under /dev/
2) move them to /var/run/libvirt/qemu/$domName.ext
3) start constructing new device tree under /var/run/libvirt/qemu/$domName.dev
4) move the mountpoint of the new device tree to /dev
5) restore original mountpoints from step 2)
Note the problem with this approach is that if some device in step
3) requires access to a mountpoint from step 2) it will fail as
the mountpoint is not there anymore. For instance consider the
following domain disk configuration:
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/dev/shm/vhostmd0'/>
<target dev='vdb' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
</disk>
In this case the operation fails as we are unable to create vhostmd0
in the new device tree because after step 2) there is no /dev/shm
anymore. Leave aside the fact that we shouldn't try to create devices
living in other mountpoints. That's a separate bug that will be
addressed later.
Currently, the order described above is rearranged to:
1) get list of mountpoints under /dev/
2) start constructing new device tree under /var/run/libvirt/qemu/$domName.dev
3) move them to /var/run/libvirt/qemu/$domName.ext
4) move the mountpoint of the new device tree to /dev
5) restore original mountpoints from step 3)
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Cedric Bosdonnat <cbosdonnat@suse.com>
While fixing a bug with incorrectly freed memory in commit
v3.1.0-399-g5498aa29a, I accidentally broke persistent migration of
transient domains. Before adding qemuDomainDefCopy in the path, the code
just took NULL from vm->newDef and used it as the persistent def, which
resulted in no persistent XML being sent in the migration cookie. This
scenario is perfectly valid and the destination correctly handles it by
using the incoming live definition and storing it as the persistent one.
After the mentioned commit libvirtd would just segfault in the described
scenario.
https://bugzilla.redhat.com/show_bug.cgi?id=1446205
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When creating v3.2.0-77-g8be3ccd04 commit, I completely forgot that one
migration capability is very special. It's the "events" capability which
tells QEMU to report "MIGRATION" events. Since libvirt always wants the
events, it is enabled in qemuConnectMonitor and the rest of the code
should not touch it.
https://bugzilla.redhat.com/show_bug.cgi?id=1439841
https://bugzilla.redhat.com/show_bug.cgi?id=1441165
Messed-up-by: Jiri Denemark <jdenemar@redhat.com>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
... with VIR_NET_GENERATED_MACV???_PREFIX, which is defined in
util/virnetdevmacvlan.h.
Since VIR_NET_GENERATED_PREFIX is used for plain tap devices, it is
renamed to VIR_NET_GENERATED_TAP_PREFIX and moved to virnetdev.h
Nothing that could happen during networkNotifyActualDevice() could
justify unceremoniously killing the qemu process, but that's what we
were doing.
In particular, new code added in commit 85bcc022 (first appeared in
libvirt-3.2.0) attempts to reattach tap devices to their assigned
bridge devices when libvirtd restarts (to make it easier to recover
from a restart of a libvirt network). But if the network has been
stopped and *not* restarted, the bridge device won't exist and
networkNotifyActualDevice() will fail.
This patch changes networkNotifyActualDevice() and
qemuProcessNotifyNets() to return void, so that qemuProcessReconnect()
will soldier on regardless of what happens (any errors will still be
logged though).
Partially resolves: https://bugzilla.redhat.com/1442700
This is a USB3 controller and it's a better choice than piix3-uhci.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Acked-by: Andrea Bolognani <abologna@redhat.com>
The new logic will set piix3-uhci if available, regardless of
architecture, and it will be updated to a better model based on
architecture and device existence.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Acked-by: Andrea Bolognani <abologna@redhat.com>
Since commit c5f6151390 qemuDomainBlockInfo tries to update the
"physical" storage size for all network storage and not only block
devices.
Since the storage driver APIs to do this are not implemented for certain
storage types (RBD, iSCSI, ...) the code would fail to retrieve any data
since the failure of qemuDomainStorageUpdatePhysical is fatal.
Since it's desired to return data even if the total size can't be
updated we need to ignore errors from that function and return plausible
data.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1442344
Since the private data structure is not freed upon stopping a VM, the
usbaddrs pointer would be leaked:
==15388== 136 (16 direct, 120 indirect) bytes in 1 blocks are definitely lost in loss record 893 of 1,019
==15388== at 0x4C2CF55: calloc (vg_replace_malloc.c:711)
==15388== by 0x54BF64A: virAlloc (viralloc.c:144)
==15388== by 0x5547588: virDomainUSBAddressSetCreate (domain_addr.c:1608)
==15388== by 0x144D38A2: qemuDomainAssignUSBAddresses (qemu_domain_address.c:2458)
==15388== by 0x144D38A2: qemuDomainAssignAddresses (qemu_domain_address.c:2515)
==15388== by 0x144ED1E3: qemuProcessPrepareDomain (qemu_process.c:5398)
==15388== by 0x144F51FF: qemuProcessStart (qemu_process.c:5979)
[...]
Clean the stale data after shutting down the VM. Otherwise the data
would be leaked on next VM start. This happens due to the fact that the
private data object is not freed on destroy of the VM.
This patch maps /domain/cpu/cache element into -cpu parameters:
- <cache mode='passthrough'/> is translated to host-cache-info=on
- <cache level='3' mode='emulate'/> is transformed into l3-cache=on
- <cache mode='disable'/> is turned into host-cache-info=off,l3-cache=off
Any other <cache> element is forbidden.
The tricky part is detecting whether QEMU supports the CPU properties.
The 'host-cache-info' property is introduced in v2.4.0-1389-ge265e3e480,
earlier QEMU releases enabled host-cache-info by default and had no way
to disable it. If the property is present, it defaults to 'off' for any
QEMU until at least 2.9.0.
The 'l3-cache' property was introduced later by v2.7.0-200-g14c985cffa.
Earlier versions worked as if l3-cache=off was passed. For any QEMU
until at least 2.9.0 l3-cache is 'off' by default.
QEMU 2.9.0 was the first release which supports probing both properties
by running device-list-properties with typename=host-x86_64-cpu. Older
QEMU releases did not support device-list-properties command for CPU
devices. Thus we can't really rely on probing them and we can just use
the query-cpu-model-expansion QMP command as a witness.
Because the cache property probing is only reliable for QEMU >= 2.9.0
when both are already supported for quite a few releases, we let QEMU
report an error if a specific cache mode is explicitly requested. The
other mode (or both if a user requested CPU cache to be disabled) is
explicitly turned off for QEMU >= 2.9.0 to avoid any surprises in case
the QEMU defaults change. Any older QEMU already turns them off so not
doing so explicitly does not make any harm.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Not all async jobs are visible via virDomainGetJobStats (either they are
too fast or getting the stats is not allowed during the job), but
forcing all of them to advertise the operation is easier than hunting
the jobs for which fetching statistics is allowed. And we won't need to
think about this when we add support for getting stats for more jobs.
https://bugzilla.redhat.com/show_bug.cgi?id=1441563
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
As with virtio-scsi, the "internal error" messages after
preparing a vhost-scsi hostdev overwrites more meaningful
error messages deeper in the callchain. Remove it too.
Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
I tried to attach a SCSI LUN to two different guests, and forgot
to specify "shareable" in the hostdev XML. Attaching the device
to the second guest failed, but the message was not helpful in
telling me what I was doing wrong:
$ cat scsi_scratch_disk.xml
<hostdev mode='subsystem' type='scsi'>
<source>
<adapter name='scsi_host3'/>
<address bus='0' target='15' unit='1074151456'/>
</source>
</hostdev>
$ virsh attach-device dasd_sles_d99c scsi_scratch_disk.xml
Device attached successfully
$ virsh attach-device dasd_fedora_0e1e scsi_scratch_disk.xml
error: Failed to attach device from scsi_scratch_disk.xml
error: internal error: Unable to prepare scsi hostdev: scsi_host3:0:15:1074151456
I eventually discovered my error, but thought it was weird that
Libvirt doesn't provide something more helpful in this case.
Looking over the code we had just gone through, I commented out
the "internal error" message, and got something more useful:
$ virsh attach-device dasd_fedora_0e1e scsi_scratch_disk.xml
error: Failed to attach device from scsi_scratch_disk.xml
error: Requested operation is not valid: SCSI device 3:0:15:1074151456 is already in use by other domain(s) as 'non-shareable'
Looking over the error paths here, we seem to issue better
messages deeper in the call chain, so these "internal error"
messages overwrite any of them. Remove them, so that the
more detailed errors are seen.
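For reference, marking the device shareable in the hostdev XML is what
would have allowed attaching the same LUN to both guests; a minimal
sketch based on the XML above:
<hostdev mode='subsystem' type='scsi'>
  <source>
    <adapter name='scsi_host3'/>
    <address bus='0' target='15' unit='1074151456'/>
  </source>
  <shareable/>
</hostdev>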
Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
Commit 0feebab2 added a call to qemuBlockNodeNamesDetect for completed
jobs when updating block jobs. This affects the drive mirror cancellation
logic, as that function drops the vm lock. Now we have to recheck all
disks that come before the disk with the completed block job before going
on to wait for block job events.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuDomainGetNumaParameters would return the automatic nodeset even for
the persistent config if the domain was running. This is incorrect since
the automatic nodeset will be re-queried upon starting the vm.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1445325
While peer-to-peer migration enters the Confirm phase even if the
Perform phase fails, a client which initiated a non-p2p migration will
never call the virDomainMigrateConfirm* API if the Perform phase failed.
Thus we need to explicitly reset migration before reporting a failure
from the Perform phase API.
https://bugzilla.redhat.com/show_bug.cgi?id=1425003
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Migration with an old QEMU that does not support query-migrate-parameters
would fail, because the QMP command has been called unconditionally since
the introduction of TLS migration. Previously it was only called if the
user explicitly requested a feature which uses QEMU migration parameters.
And even then the situation was not ideal: instead of reporting an
unsupported feature, we'd just complain about a missing QMP command.
Trivially, no migration parameters are supported when the
query-migrate-parameters QMP command is missing. There's no need to
report an error if it is missing; the callers will report a better error
if needed.
https://bugzilla.redhat.com/show_bug.cgi?id=1441934
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
It should be a comparison of modes between the new and old devices, so
the argument of the second virDomainNetGetActualDirectMode should be
newdev.
Signed-off-by: ZhiPeng Lu <lu.zhipeng@zte.com.cn>
This patch makes use of the virNetDevSetCoalesce() function to make
appropriate settings effective for devices that support them.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1414627
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
We are currently parsing only rx/frames/max because that's the only
value that makes sense for us. The tun device just added support for
this one, and the others are only supported by hardware devices, which
we don't need to worry about: the only way we'd pass those to the
domain is using <hostdev/> or <interface type='hostdev'/>, and in
those cases the guest can modify the settings itself.
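As an illustration, the parsed setting corresponds to interface XML
along these lines (a sketch; only rx/frames/max is honoured):
<interface type='network'>
  <source network='default'/>
  <coalesce>
    <rx>
      <frames max='64'/>
    </rx>
  </coalesce>
</interface>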
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
In the vcpu hotplug code, if exiting the monitor failed we would still
attempt to save the status XML. When the daemon is terminated the
monitor socket is closed; in that case, the written status XML would not
contain the monitor path and would thus be invalid.
Avoid this issue by saving the status XML only when the monitor
command succeeds.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1439452
The history of the USB controller for ppc64 guests is complex and goes
back to libvirt 1.3.1, where the fun started.
Prior to libvirt 1.3.1, if no model for the USB controller was specified
we simply passed "-usb" on the QEMU command line.
Libvirt 1.3.1 contains a patch (8156493d8d) that fixes this issue by
using "-device pci-ohci,..."; it breaks migration to older libvirt
versions, which was agreed to be acceptable. However, that patch didn't
reflect the change in the domain XML and the model was still missing.
Libvirt 2.2.0 contains a patch (f55eaccb0c) that fixes the issue of not
setting the USB model in the domain XML, which we need to know about in
order not to break migration; and since the default model was *pci-ohci*
it was used as the default in that patch as well.
This patch tries to take all the previous changes into account and also
changes the default for newly defined domains that don't specify any
model for the USB controller.
The VIR_DOMAIN_DEF_PARSE_ABI_UPDATE flag is set only if a new domain is
defined or a new device is added into a domain, which means that in
all other cases we will keep using the old *pci-ohci* model instead of
the better and not broken *nec-usb-xhci* model.
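In other words, a newly defined ppc64 guest without an explicit USB
controller model would now, roughly speaking, end up with a controller
like this in its configuration (a sketch; index and addressing omitted):
<controller type='usb' model='nec-usb-xhci'/>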
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1373184
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
So far there is probably no change allowed by the
VIR_DOMAIN_DEF_PARSE_ABI_UPDATE flag that would break
the guest ABI, but this may change in the future.
This introduces a new VIR_DOMAIN_DEF_PARSE_ABI_UPDATE_MIGRATION flag
which should be used only for ABI updates that are "safe" for
persistent migration.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
With QEMU older than 2.9.0, libvirt uses the CPUID instruction to determine
what CPU features are supported on the host. This was later used when
checking compatibility of guest CPUs. Since QEMU 2.9.0 we ask QEMU for
the host CPU data. But the two methods usually provide disjoint
sets of CPU features, because QEMU/KVM does not support all features
provided by the host CPU and, on the other hand, it can enable some
features even if the host CPU does not support them.
So if there is a domain which requires a CPU feature disabled by
QEMU/KVM, libvirt will refuse to start it with QEMU >= 2.9.0 as its guest
CPU is incompatible with the host CPU data we got from QEMU. But such a
domain would happily start on older QEMU (of course, the features would
be missing from the guest CPU). To fix this regression, we need to combine
both CPU feature sets when checking guest CPU compatibility.
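A purely illustrative guest CPU that can trigger the problem: a host CPU
may advertise vmx via CPUID while QEMU/KVM does not expose it (e.g. when
nested virtualization is disabled), so a definition like the following
would be refused even though older QEMU would have started it:
<cpu mode='custom' match='exact'>
  <model>Haswell</model>
  <feature policy='require' name='vmx'/>
</cpu>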
https://bugzilla.redhat.com/show_bug.cgi?id=1439933
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
We already know from QEMU which CPU features will block migration. Let's
use this information to make a migratable copy of the host CPU model and
use it for updating the guest CPU specification. This will allow us to
drop feature filtering from virCPUUpdate, where it was just a hack.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Soon we will need to store multiple host CPU definitions in
virQEMUCapsHostCPUData, and qemuCaps users will want to request the one
they need. This patch introduces the virQEMUCapsHostCPUType enum, which
will be used for specifying the requested CPU definition.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
We need to store several CPU-related data structures for both KVM and
TCG. So instead of keeping two different copies of everything, let's
make a virQEMUCapsHostCPUData struct and use it twice.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>