CPU models (and especially some additional details which we will start
probing for later) differ depending on the accelerator. Thus we need to
call query-cpu-definitions in both KVM and TCG mode to get all data we
want.
Tests in tests/domaincapstest.c are temporarily switched to TCG to avoid
having to squash even more stuff into this single patch. They will all
be switched back later in separate commits.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This patch moves the CPU models formatting code from
virQEMUCapsFormatCache into a separate function.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The function just returned cached capabilities without checking whether
they are still valid. We should check that and refresh the capabilities
to make sure we don't return stale data. In other words, we should do
what all other lookup functions do.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The function is made a little bit more readable, and the code which
refreshes cached capabilities when they are no longer valid is moved
into a separate function (virQEMUCapsCacheValidate) so that it can be
reused in other places.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
If a user asks for KVM domain capabilities when KVM is not available,
we would happily return data we got when probing through TCG and
pretend it is relevant for KVM. Let's just report that KVM is not
supported to avoid confusion.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When domain capabilities were introduced we did not have enough data to
decide whether KVM works on the host or not and thus working legacy/VFIO
device assignment was used as a witness. Now that we know whether KVM
was enabled when probing QEMU capabilities (and thus we know it's
working), we can use this knowledge to provide a better default value for
virttype.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Since some capabilities may depend on the accelerator used when probing
QEMU, the cache becomes invalid when KVM becomes available or when it is
no longer available.
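A conceptual sketch of the invalidation check, assuming the cache records
whether KVM was usable at probe time (illustrative only, not libvirt's
actual code):

    #include <stdbool.h>
    #include <unistd.h>

    /* Illustrative check: if the current usability of /dev/kvm differs
     * from what it was when the capabilities were probed, the cached
     * data must be discarded and re-probed. */
    static bool
    cachedCapsStillValid(bool kvmWasUsableWhenProbed)
    {
        bool kvmUsableNow = access("/dev/kvm", R_OK | W_OK) == 0;

        return kvmUsableNow == kvmWasUsableWhenProbed;
    }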
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
CPU related capabilities may differ depending on the accelerator used when
probing. Let's use KVM if available when probing QEMU and fall back to
TCG. The created capabilities already contain all we need to distinguish
whether KVM or TCG was used (a minimal sketch of this mapping follows the
list):
- KVM was used when probing capabilities:
    QEMU_CAPS_KVM is set
    QEMU_CAPS_ENABLE_KVM is not set
- TCG was used and QEMU supports KVM, but it failed (e.g., missing
  kernel module or wrong /dev/kvm permissions):
    QEMU_CAPS_KVM is not set
    QEMU_CAPS_ENABLE_KVM is set
- KVM was not used and QEMU does not support it:
    QEMU_CAPS_KVM is not set
    QEMU_CAPS_ENABLE_KVM is not set
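A minimal standalone sketch of that mapping (the enum and function are
illustrative, not part of libvirt's capabilities code):

    #include <stdbool.h>

    /* Derive which accelerator was used for probing from the two flags. */
    typedef enum {
        PROBED_WITH_KVM,            /* KVM was used when probing */
        PROBED_WITH_TCG_KVM_BROKEN, /* binary supports KVM, but it failed */
        PROBED_WITH_TCG_ONLY,       /* binary has no KVM support */
    } probeAccel;

    static probeAccel
    probeAccelFromFlags(bool capsKVM, bool capsEnableKVM)
    {
        if (capsKVM)                 /* QEMU_CAPS_KVM is set */
            return PROBED_WITH_KVM;
        if (capsEnableKVM)           /* only QEMU_CAPS_ENABLE_KVM is set */
            return PROBED_WITH_TCG_KVM_BROKEN;
        return PROBED_WITH_TCG_ONLY; /* neither flag is set */
    }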
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When starting QEMU more than once during a single probing process, the
qemucapsprobe utility would save the QMP greeting several times, which
doesn't play well with our test monitor.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Let's set QEMU_CAPS_KVM and QEMU_CAPS_ENABLE_KVM early so that the rest
of the probing code can use these capabilities to handle KVM/TCG replies
differently.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Using -machine instead of -M for QMP probing is safe because any QEMU
binary which is capable of QMP probing supports -machine.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The code that runs a new QEMU process to be used for probing
capabilities is separated into four reusable functions so that any code
that wants to probe a QEMU process may just follow a few simple steps:
    cmd = virQEMUCapsInitQMPCommandNew(...);
    virQEMUCapsInitQMPCommandRun(cmd);
    /* talk to the running QEMU process using its QMP monitor */
    if (reprobeIsRequired) {
        virQEMUCapsInitQMPCommandAbort(cmd, ...);
        virQEMUCapsInitQMPCommandRun(cmd);
        /* talk to the running QEMU process again */
    }
    virQEMUCapsInitQMPCommandFree(cmd);
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This reverts commit 3a6cf6fc16.
This commit was pushed by mistake because I thought I had missed the
correct one, b880ff42dd, while in fact I hadn't.
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
We have a couple of functions that operate over NULL-terminated
lists of strings. However, our naming sucks:
virStringJoin
virStringFreeList
virStringFreeListCount
virStringArrayHasString
virStringGetFirstWithPrefix
We can do better:
virStringListJoin
virStringListFree
virStringListFreeCount
virStringListHasString
virStringListGetFirstWithPrefix
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If libvirt is compiled without NUMACTL support, starting libvirtd
reports a libvirt internal error "NUMA isn't available on this host"
without checking whether NUMA support is compiled into the libvirt binaries.
This patch adds the missing NUMA support check to prevent the internal error.
It also adds a check whether the cgroup controller cpuset is available before
using it.
The error was noticed when libvirtd was restarted with running domains:
on libvirtd start, qemuConnectCgroup gets called during qemuProcessReconnect.
Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
With the QEMU components in place, provide the XML parsing to
invoke that code when given the following XML snippet:
  <hostdev mode='subsystem' type='scsi_host'>
    <source protocol='vhost' wwpn='naa.501234567890abcd'/>
  </hostdev>
An optional address element can be specified within the hostdev
(pick CCW or PCI as necessary):
  <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0625'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Add basic vhost-scsi tests, cloned from hostdev-scsi-virtio-scsi,
in both xml2argv and xml2xml. Tests are added for both vhost-scsi-ccw
and vhost-scsi-pci since their syntaxes differ slightly.
Also adjust the docs to describe the changes.
Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Adjust the device string that is built for vhost-scsi devices so that it
can be used from the hotplug path.
On the QEMU command line, the file descriptors are expected to be numeric only.
However, for hotplug, the file descriptors are expected to begin with at least
one alphabetic character, otherwise this error occurs:
# virsh attach-device guest_0001 ~/vhost.xml
error: Failed to attach device from /root/vhost.xml
error: internal error: unable to execute QEMU command 'getfd':
Parameter 'fdname' expects a name not starting with a digit
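A small sketch of the naming rule (the "fd-" prefix is only illustrative,
not necessarily the exact format used by libvirt):

    #include <stdio.h>

    /* Names passed to QEMU's 'getfd' command must not start with a digit,
     * so prefix the numeric fd with letters. */
    static void
    formatGetfdName(char *buf, size_t buflen, int fd)
    {
        snprintf(buf, buflen, "fd-%d", fd);
    }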
We also close the file descriptor in this case, so that shutting down the
guest cleans up the host cgroup entries and allows future guests to use
vhost-scsi devices. (Otherwise the guest will silently end.)
Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
Open /dev/vhost-scsi, and record the resulting file descriptor, so that
the guest has access to the host device outside of the libvirt daemon.
Pass this information, along with data parsed from the XML file, to build
a device string for the qemu command line. That device string will be
for either a vhost-scsi-ccw device in the case of an s390 machine, or
vhost-scsi-pci for any others.
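A rough sketch of the idea (not libvirt's actual implementation; the wwpn
value is just the one from the XML example earlier):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>

    /* Open the vhost-scsi host device and build a -device string,
     * choosing the device model based on the guest architecture. */
    static int
    buildVhostSCSIDevStr(const char *arch, char *devstr, size_t devstrlen)
    {
        int fd = open("/dev/vhost-scsi", O_RDWR);

        if (fd < 0)
            return -1;

        /* s390 guests get the CCW variant, everything else gets PCI */
        snprintf(devstr, devstrlen,
                 "%s,wwpn=naa.501234567890abcd,vhostfd=%d",
                 strncmp(arch, "s390", 4) == 0 ? "vhost-scsi-ccw"
                                               : "vhost-scsi-pci",
                 fd);
        return fd;
    }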
Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
For a new hostdev type='scsi_host' we have a number of
required functions for managing, adding, and removing the
host device to/from guests. Provide the basic infrastructure
for these tasks.
The name "SCSIVHost" (and its variants) is chosen to avoid
conflicts with existing code named "SCSIHost" to refer to
a hostdev type='scsi' protocol='none'.
Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
We already have a "scsi" hostdev subsys type, which refers to a single
LUN that is passed through to a guest. But what of things where
multiple LUNs are passed through via a single SCSI HBA, such as with
the vhost-scsi target? Create a new hostdev subsys type that will
carry this.
Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
Add everything needed for the vhost-scsi capability in QEMU,
so it's in place for our checks later.
Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
macOS doesn't support clock_gettime(2), at least in versions prior to 10.12
(I didn't actually check 10.12 though). So, use its own routines in
eventtest.
* configure.ac: check for the required symbols and define
HAVE_MACH_CLOCK_ROUTINES if found
* tests/eventtest.c: add a clock_get_time() based implementation (sketched
below)
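A sketch of such a fallback based on Mach's clock routines (assumed to be
guarded by the HAVE_MACH_CLOCK_ROUTINES define mentioned above; the function
name is illustrative):

    #include <time.h>

    #ifdef HAVE_MACH_CLOCK_ROUTINES
    # include <mach/clock.h>
    # include <mach/mach.h>

    /* Fill a timespec using Mach's calendar clock instead of
     * clock_gettime(CLOCK_REALTIME, ...). */
    static int
    getCurrentTime(struct timespec *ts)
    {
        clock_serv_t clk;
        mach_timespec_t mts;

        host_get_clock_service(mach_host_self(), CALENDAR_CLOCK, &clk);
        clock_get_time(clk, &mts);
        mach_port_deallocate(mach_task_self(), clk);

        ts->tv_sec = mts.tv_sec;
        ts->tv_nsec = mts.tv_nsec;
        return 0;
    }
    #endif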
* add a vboxDriver object to serve as a singleton global object that
holds references to IVirtualBox and ISession to be shared among
multiple connections. The vbox_driver is instantiated only once, in
the first call to the vboxGetDriverConnection function, which is guarded
by a mutex.
* call the vbox API initialize only when the first connection is
established, and likewise uninitialize when the last connection
disconnects. This prevents each subsequent connection from overwriting
the IVirtualBox/ISession instances of any other active connection, which
led to libvirtd segfaults. The virConnectOpen and virConnectClose
implementations are guarded by a mutex on the global vbox_driver_lock,
where the global vbox_driver object counts connections and decides
when it's safe to call vbox's init/uninit routines (see the sketch after
this list).
* add IVirtualBoxClient to vboxDriver and use it in tandem with the newer
pfnClientInitialize/pfnClientUninitialize APIs for vbox versions that
support them, to avoid usage of the old pfnComInitialize/Uninitialize.
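A simplified sketch of the connection-counted singleton described above
(the struct and functions are stand-ins, not the real vbox driver code;
the actual IVirtualBox/ISession/IVirtualBoxClient handling is omitted):

    #include <pthread.h>
    #include <stdlib.h>

    typedef struct {
        int connections;    /* open connections sharing this driver */
        /* IVirtualBox, ISession, IVirtualBoxClient pointers live here */
    } vboxDriver;

    static pthread_mutex_t vbox_driver_lock = PTHREAD_MUTEX_INITIALIZER;
    static vboxDriver *vbox_driver;

    static vboxDriver *
    vboxGetDriverConnection(void)
    {
        pthread_mutex_lock(&vbox_driver_lock);
        if (!vbox_driver) {
            vbox_driver = calloc(1, sizeof(*vbox_driver));
            /* first connection: initialize the vbox C API here */
        }
        if (vbox_driver)
            vbox_driver->connections++;
        pthread_mutex_unlock(&vbox_driver_lock);
        return vbox_driver;
    }

    static void
    vboxCloseDriverConnection(void)
    {
        pthread_mutex_lock(&vbox_driver_lock);
        if (vbox_driver && --vbox_driver->connections == 0) {
            /* last connection gone: uninitialize the vbox C API here */
            free(vbox_driver);
            vbox_driver = NULL;
        }
        pthread_mutex_unlock(&vbox_driver_lock);
    }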
Removed the comment 'Set the migration source' as it isn't valid anymore,
and 'start it up' as it isn't useful since qemuProcessStart() is already a
self-explanatory name.
Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
The path prefixes for sysfs trees are always prepended by paths
beginning with a slash, making the trailing slash in the prefix
redundant.
Signed-off-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
Reviewed-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Only generate a warning if there is something to report.
Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Just like in the previous commit, we are not updating CGroups on
chardev hot(un-)plug, thus leaving qemu unable to access any
non-default device that users are trying to hotplug.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If users try to hotplug an RNG device with a backend other than
/dev/random or /dev/urandom, the whole operation fails as qemu is
unable to access the device. The problem is that we don't update the
device CGroups during the operation.
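For context, a conceptual sketch of what "updating device CGroups" boils
down to with the cgroup v1 devices controller (libvirt uses its own cgroup
helpers rather than this direct approach):

    #include <limits.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/sysmacros.h>

    /* Whitelist a host character device in the guest's devices cgroup
     * so that the QEMU process may open it. */
    static int
    allowDeviceInCgroup(const char *cgroupDir, const char *devPath)
    {
        struct stat sb;
        char allowPath[PATH_MAX];
        FILE *f;

        if (stat(devPath, &sb) < 0 || !S_ISCHR(sb.st_mode))
            return -1;

        snprintf(allowPath, sizeof(allowPath), "%s/devices.allow", cgroupDir);

        if (!(f = fopen(allowPath, "w")))
            return -1;
        fprintf(f, "c %u:%u rwm", major(sb.st_rdev), minor(sb.st_rdev));
        fclose(f);
        return 0;
    }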
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
I think when trying to introduce libvirt (we have a section for
that in our index page) it might be useful to promote success
stories - other applications that are based on libvirt.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
qemuDomainObjExitAgent is unsafe.
First, it accesses the domain object without the domain lock.
Second, it uses outdated logic that goes back to commit 79533da1 from
2009, when the code was quite different (the unref function, instead of
only unreferencing, unlocked and disposed the object when the last
reference was dropped, and left unlocking to the caller otherwise).
Nowadays this logic may lead to disposing a locked object, I guess.
Another problem is that the callers of qemuDomainObjEnterAgent
use the domain object again (namely priv->agent) without the domain lock.
This patch addresses these two problems.
qemuDomainGetAgent is dropped as unused.
Sometimes after a domain restart the agent is unavailable even
if it is up and running in the guest. The diagnostic message is
"QEMU guest agent is not available due to an error",
that is, 'priv->agentError' is set. Investigation shows that
'priv->agent' is not NULL, so the error flag was probably set
during the domain shutdown process and never cleaned up afterwards.
The patch is quite simple - just clean up the error flag unconditionally
upon domain stop.
The other hunks address other cases when the error flag is not cleaned up:
1. processSerialChangedEvent. We need to clean the error flag
unconditionally here too. For example, if upon the first 'connected' event
we fail to connect and set the error flag, and then connect on the second
'connected' event, the error flag will remain set erroneously
and make the agent unavailable.
2. qemuProcessHandleAgentEOF. If the error flag is set and we get
EOF, we need to change the state (and diagnostic) from 'error' to
'not connected'.