Originally added by commit id '89646e69' prior to commit ids '15fa84ac'
and '71d2c172', which ensured that qemuStorageLimitsRefresh was only called
for inactive domains.
Adjust the comment describing the need for the FIXME and move all the text
to the function description.
Signed-off-by: John Ferlan <jferlan@redhat.com>
In preparation for the code move to virnetdevtap.c, this change:
* renames virNetInterfaceStats to virNetDevTapInterfaceStats,
* changes 'path' to 'ifname', to use the same vocabulary as the other
methods in virnetdevtap.c,
* adds the attributes checker.
When vhostuser interfaces are used, the interface statistics
are not available in /proc/net/dev.
This change looks at the openvswitch interface statistics
tables to provide this information for vhostuser interfaces.
Note that in the openvswitch world drop/error doesn't always make sense
for some interface types. When this information is not available we
set it to 0 in the virDomainInterfaceStats.
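For reference, the counters pulled from openvswitch can also be inspected
by hand with something like the following (the interface name is only an
example):
$ ovs-vsctl get Interface vhost-user1 statistics
which returns a map of counters such as rx_bytes, rx_packets, tx_bytes and
tx_packets, plus the drop/error counters where the interface type provides
them.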
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
There's nothing to compress if the requested snapshot memory format is
set to 'raw' explicitly. After commit 9e14689ea libvirt would try to
run /sbin/raw to process the memory stream if the qemu.conf option
snapshot_image_format is set.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1402726
Since its introduction in 2012 this internal API has done nothing.
Moreover, we already have an API that does exactly the same thing:
virSecurityManagerDomainSetPathLabel.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If you've ever tried running a huge page backed guest under a
different user than the one in qemu.conf, you probably failed. The
problem is that even though we have corresponding APIs in the
security drivers, there's no implementation and thus we don't
relabel the huge page path. But even if we did, so far all of the
domains share the same path:
/hugepageMount/libvirt/qemu
Our only option there would be to set 0777 mode on the qemu dir,
which is totally unsafe. Therefore, we can create the dir on a
per-domain basis, i.e.:
/hugepageMount/libvirt/qemu/domainName
and chown the domainName dir to the user that the domain is configured to
run under.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
So far this function takes a virDomainObjPtr, which:
1) is overkill,
2) might not be available in all the places we will use it.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
QEMU 2.8.0+ changes the argument structure for blockdev-add in an effort
to make it finally stable. Since libvirt recently added detection of
gluster debug support relying on the old syntax, we need to add the new
one as well.
With the current perf framework, this patch adds support and documentation
for the branch_instructions perf event.
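For illustration, enabling the new event in the domain XML follows the
existing <perf> syntax (the snippet below is only an example):
<perf>
  <event name='branch_instructions' enabled='yes'/>
</perf>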
Signed-off-by: Nitesh Konkar <nitkon12@linux.vnet.ibm.com>
Add in the block I/O throttling group parameter to the command line
if supported. If not supported, fail command creation.
Add the xml2argvtest for testing.
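For illustration, the group is specified inside the disk's <iotune>
element in the domain XML, roughly like this (the group name and values
are examples only):
<iotune>
  <total_bytes_sec>10000000</total_bytes_sec>
  <group_name>limit0</group_name>
</iotune>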
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rather than have multiple bool values, create a single enum with bits
representing what fields are set. Fields are generally set in groups
of 3 (read, write, total).
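A minimal sketch of the idea (the identifiers below are hypothetical and
do not match the names used by the actual patch):
/* each bit marks a group of three fields (read, write, total) */
typedef enum {
    IOTUNE_SET_BYTES     = 1 << 0, /* read/write/total_bytes_sec */
    IOTUNE_SET_IOPS      = 1 << 1, /* read/write/total_iops_sec */
    IOTUNE_SET_BYTES_MAX = 1 << 2, /* read/write/total_bytes_sec_max */
    IOTUNE_SET_IOPS_MAX  = 1 << 3, /* read/write/total_iops_sec_max */
} iotuneSetFlags;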
Currently we build the JSON object for the "block_set_io_throttle"
command using the knowledge that a NULL for a support*Options boolean
would essentially ignore the rest of the arguments.
This may not work properly if some capability was backported, plus it just
looks rather ugly. So instead, build the "base" arguments and then, if
the support*Option bool capability is set, add in the arguments on the fly.
Then append those arguments to the basic command and send them to qemu.
Rather than using negative logic and setting the maxparams to a lesser
value based on which capabilities exist, alter the logic to modify the
maxparams based on a base value plus the found capabilities. This reduces
the chance that some backported feature produces an incorrect value.
Two reasons:
1. In the non-hotplug case we already pass it, as can be seen from the
libvirt function qemuBuildVhostuserCommandLine.
2. qemu will use this vector number to init the msix table. If we don't
pass it, qemu will use the default value, which means the VM can only use
the default number of interrupts at most.
Signed-off-by: gaohaifeng <gaohaifeng.gao@huawei.com>
Consider the following XML snippets:
$ cat scsicontroller.xml
<controller type='scsi' model='virtio-scsi' index='0'/>
$ cat scsihostdev.xml
<hostdev mode='subsystem' type='scsi'>
  <source>
    <adapter name='scsi_host0'/>
    <address bus='0' target='8' unit='1074151456'/>
  </source>
</hostdev>
If we create a guest that includes the contents of scsihostdev.xml,
but forget the virtio-scsi controller described in scsicontroller.xml,
one is silently created for us. The same holds true when attaching
a hostdev before the matching virtio-scsi controller.
(See qemuDomainFindOrCreateSCSIDiskController for context.)
Detaching the hostdev, followed by the controller, works well and the
guest behaves appropriately.
If we detach the virtio-scsi controller device first, any associated
hostdevs are detached for us by the underlying virtio-scsi code (this
is fine, since the connection is broken). But all is not well, as the
guest is unable to receive new virtio-scsi devices (the attach commands
succeed, but devices never appear within the guest), nor even be
shutdown, after this point.
While this is not libvirt's problem, we can prevent falling into this
scenario by checking if a controller is being used by any hostdev
devices. The same is already done for disk elements today.
Applying this patch and then using the XML snippets from earlier:
$ virsh detach-device guest_01 scsicontroller.xml
error: Failed to detach device from scsicontroller.xml
error: operation failed: device cannot be detached: device is busy
$ virsh detach-device guest_01 scsihostdev.xml
Device detached successfully
$ virsh detach-device guest_01 scsicontroller.xml
Device detached successfully
Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Although nearly all host devices that are assigned to guests using
VFIO ("<hostdev>" devices in libvirt) are physically PCI Express
devices, until now libvirt's PCI address assignment has always
assigned them addresses on legacy PCI controllers in the guest, even
if the guest's machinetype has a PCIe root bus (e.g. q35 and
aarch64/virt).
This patch tries to assign them to an address on a PCIe controller
instead, when appropriate. First we do some preliminary checks that
might allow setting the flags without doing any extra work, and if
those conditions aren't met (and if libvirt is running privileged so
that it has proper permissions), we perform the (relatively) time
consuming task of reading the device's PCI config to see if it is an
Express device. If this is successful, the connect flags are set based
on the result, but if we aren't able to read the PCI config (most
likely due to the device not being present on the system at the time
of the check) we assume it is (or will be) an Express device, since
that is almost always the case anyway.
If libvirtd is running unprivileged, it can open a device's PCI config
data in sysfs, but can only read the first 64 bytes. But as part of
determining whether a device is Express or legacy PCI,
qemuDomainDeviceCalculatePCIConnectFlags() will be updated in a future
patch to call virPCIDeviceIsPCIExpress(), which tries to read beyond
the first 64 bytes of the PCI config data and fails with an error log
if the read is unsuccessful.
In order to avoid creating a parallel "quiet" version of
virPCIDeviceIsPCIExpress(), this patch passes a virQEMUDriverPtr down
through all the call chains that initialize the
qemuDomainFillDevicePCIConnectFlagsIterData, and saves the driver
pointer with the rest of the iterdata so that it can be used by
qemuDomainDeviceCalculatePCIConnectFlags(). This pointer isn't used
yet, but will be used in an upcoming patch (that detects Express vs
legacy PCI for VFIO assigned devices) to examine driver->privileged.
Restarting libvirtd on the source host at the end of migration when a
domain is already running on the destination would cause image labels to
be reset effectively killing the domain. Commit e8d0166e1d fixed similar
issue on the destination host, but kept the source always resetting the
labels, which was mostly correct except for the specific case handled by
this patch.
https://bugzilla.redhat.com/show_bug.cgi?id=1343858
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Post-copy migration needs bi-directional communication between the
source and the destination QEMU processes, which is not supported by
tunnelled migration.
https://bugzilla.redhat.com/show_bug.cgi?id=1371358
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Thanks to the complex capability caching code virQEMUCapsProbeQMP was
never called when we were starting a new qemu VM. On the other hand,
when we are reconnecting to the qemu process we reload the capability
list from the status XML file. This means that the flag preventing the
function being called was not set and thus we partially reprobed some of
the capabilities.
The recent addition of CPU hotplug clears the
QEMU_CAPS_QUERY_HOTPLUGGABLE_CPUS capability if the machine does not support it.
The partial re-probe on reconnect then results in attempting to call the
unsupported command and killing the VM.
Remove the partial reprobe and depend on the stored capabilities. If it
will be necessary to reprobe the capabilities in the future, we should
do a full reprobe rather than this partial one.
QEMU 2.8.0 adds support for unavailable-features in
query-cpu-definitions reply. The unavailable-features array lists CPU
features which prevent a corresponding CPU model from being usable on
current host. It can only be used when all the unavailable features are
disabled. An empty array means the CPU model can be used without
modifications.
We can use unavailable-features for providing CPU model usability info
in domain capabilities XML:
<domainCapabilities>
...
  <cpu>
    <mode name='host-passthrough' supported='yes'/>
    <mode name='host-model' supported='yes'>
      <model fallback='allow'>Skylake-Client</model>
      ...
    </mode>
    <mode name='custom' supported='yes'>
      <model usable='yes'>qemu64</model>
      <model usable='yes'>qemu32</model>
      <model usable='no'>phenom</model>
      <model usable='yes'>pentium3</model>
      <model usable='yes'>pentium2</model>
      <model usable='yes'>pentium</model>
      <model usable='yes'>n270</model>
      <model usable='yes'>kvm64</model>
      <model usable='yes'>kvm32</model>
      <model usable='yes'>coreduo</model>
      <model usable='yes'>core2duo</model>
      <model usable='no'>athlon</model>
      <model usable='yes'>Westmere</model>
      <model usable='yes'>Skylake-Client</model>
      ...
    </mode>
  </cpu>
...
</domainCapabilities>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
"host" CPU model is supported by a special host-passthrough CPU mode and
users is not allowed to specify this model directly with custom mode.
Thus we should not advertise "host" CPU model in domain capabilities.
This worked well on architectures for which libvirt provides a list of
supported CPU models in cpu_map.xml (since "host" is not in the list).
But we need to explicitly filter "host" model out for all other
architectures.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
CPU models (and especially some additional details which we will start
probing for later) differ depending on the accelerator. Thus we need to
call query-cpu-definitions in both KVM and TCG mode to get all data we
want.
Tests in tests/domaincapstest.c are temporarily switched to TCG to avoid
having to squash even more stuff into this single patch. They will all
be switched back later in separate commits.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This patch moves the CPU models formatting code from
virQEMUCapsFormatCache into a separate function.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The function just returned cached capabilities without checking whether
they are still valid. We should check that and refresh the capabilities
to make sure we don't return stale data. In other words, we should do
what all other lookup functions do.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The function is made a little bit more readable and the code which
refreshes cached capabilities if they are not valid any more was moved
into a separate function (virQEMUCapsCacheValidate) so that it can be
reused in other places.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
If a user asked for KVM domain capabilities when KVM is not available,
we would happily return data we got when probing through TCG and
pretend it was relevant for KVM. Let's just report KVM is not
supported to avoid confusion.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When domain capabilities were introduced we did not have enough data to
decide whether KVM works on the host or not and thus working legacy/VFIO
device assignment was used as a witness. Now that we know whether KVM
was enabled when probing QEMU capabilities (and thus we know it's
working), we can use this knowledge to provide better default value for
virttype.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Since some capabilities may depend on the accelerator used when probing
QEMU, the cache becomes invalid when KVM becomes available or when it is
not available anymore.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
CPU related capabilities may differ depending on accelerator used when
probing. Let's use KVM if available when probing QEMU and fall back to
TCG. The created capabilities already contain all we need to distinguish
whether KVM or TCG was used:
- KVM was used when probing capabilities:
    QEMU_CAPS_KVM is set
    QEMU_CAPS_ENABLE_KVM is not set
- TCG was used and QEMU supports KVM, but it failed (e.g., missing
  kernel module or wrong /dev/kvm permissions):
    QEMU_CAPS_KVM is not set
    QEMU_CAPS_ENABLE_KVM is set
- KVM was not used and QEMU does not support it:
    QEMU_CAPS_KVM is not set
    QEMU_CAPS_ENABLE_KVM is not set
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Let's set QEMU_CAPS_KVM and QEMU_CAPS_ENABLE_KVM early so that the rest
of the probing code can use these capabilities to handle KVM/TCG replies
differently.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Using -machine instead of -M for QMP probing is safe because any QEMU
binary which is capable of QMP probing supports -machine.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The code that runs a new QEMU process to be used for probing
capabilities is separated into four reusable functions so that any code
that wants to probe a QEMU process may just follow a few simple steps:
    cmd = virQEMUCapsInitQMPCommandNew(...);
    virQEMUCapsInitQMPCommandRun(cmd);
    /* talk to the running QEMU process using its QMP monitor */
    if (reprobeIsRequired) {
        virQEMUCapsInitQMPCommandAbort(cmd, ...);
        virQEMUCapsInitQMPCommandRun(cmd);
        /* talk to the running QEMU process again */
    }
    virQEMUCapsInitQMPCommandFree(cmd);
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
We have a couple of functions that operate over NULL-terminated
lists of strings. However, our naming sucks:
virStringJoin
virStringFreeList
virStringFreeListCount
virStringArrayHasString
virStringGetFirstWithPrefix
We can do better:
virStringListJoin
virStringListFree
virStringListFreeCount
virStringListHasString
virStringListGetFirstWithPrefix
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If libvirt is compiled without NUMACTL support, starting libvirtd
reports a libvirt internal error "NUMA isn't available on this host"
without checking if NUMA support is compiled into the libvirt binaries.
This patch adds the missing NUMA support check to prevent the internal error.
It also includes a check whether the cgroup controller cpuset is available
before using it.
The error was noticed when libvirtd was restarted with running domains, since
on libvirtd start qemuConnectCgroup gets called during qemuProcessReconnect.
Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
Adjust the device string that is built for vhost-scsi devices so that it
can be invoked from hotplug.
From the QEMU command line, the file descriptors are expected to be numeric only.
However, for hotplug, the file descriptors are expected to begin with at least
one alphabetic character, or else this error occurs:
# virsh attach-device guest_0001 ~/vhost.xml
error: Failed to attach device from /root/vhost.xml
error: internal error: unable to execute QEMU command 'getfd':
Parameter 'fdname' expects a name not starting with a digit
We also close the file descriptor in this case, so that shutting down the
guest cleans up the host cgroup entries and allows future guests to use
vhost-scsi devices. (Otherwise the guest will silently end.)
Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
Open /dev/vhost-scsi, and record the resulting file descriptor, so that
the guest has access to the host device outside of the libvirt daemon.
Pass this information, along with data parsed from the XML file, to build
a device string for the qemu command line. That device string will be
for either a vhost-scsi-ccw device in the case of an s390 machine, or
vhost-scsi-pci for any others.
Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
We already have a "scsi" hostdev subsys type, which refers to a single
LUN that is passed through to a guest. But what of things where
multiple LUNs are passed through via a single SCSI HBA, such as with
the vhost-scsi target? Create a new hostdev subsys type that will
carry this.
Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
Do all the stuff for the vhost-scsi capability in QEMU,
so it's in place for our checks later.
Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Removed the comment 'Set the migration source' as it isn't valid anymore,
and 'start it up' isn't useful as qemuProcessStart() is already a
self-explanatory name.
Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Just like in the previous commit, we are not updating CGroups on
chardev hot(un-)plug and thus leaving qemu unable to access any
non-default device users are trying to hotplug.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If users try to hotplug an RNG device with a backend other than
/dev/random or /dev/urandom, the whole operation fails as qemu is
unable to access the device. The problem is we don't update the
device CGroups during the operation.
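For example, hotplugging a device like the following (illustrative XML;
any backend path other than /dev/random and /dev/urandom triggers the
problem) would fail without the CGroup update:
<rng model='virtio'>
  <backend model='random'>/dev/hwrng</backend>
</rng>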
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
qemuDomainObjExitAgent is unsafe.
First, it accesses the domain object without the domain lock.
Second, it uses outdated logic that goes back to commit 79533da1 from
2009, when the code was quite different (the unref function,
instead of only unreferencing, unlocked and disposed of the object
in case of the last reference, and left unlocking to the caller otherwise).
Nowadays this logic may lead to disposing of a locked object.
Another problem is that the callers of qemuDomainObjEnterAgent
use the domain object again (namely priv->agent) without the domain lock.
This patch addresses these two problems.
qemuDomainGetAgent is dropped as unused.
Sometimes after a domain restart the agent is unavailable even
if it is up and running in the guest. The diagnostic message is
"QEMU guest agent is not available due to an error",
that is, 'priv->agentError' is set. Investigation shows that
'priv->agent' is not NULL, so the error flag is probably set
during the domain shutdown process and never cleaned up afterwards.
The patch is quite simple - just clean up the error flag unconditionally
upon domain stop.
Other hunks address other cases where the error flag is not cleaned up:
1. processSerialChangedEvent. We need to clean the error flag
unconditionally here too. For example, if upon the first 'connected' event we
fail to connect and set the error flag and then connect on the second
'connected' event, the error flag will remain set erroneously
and make the agent unavailable.
2. qemuProcessHandleAgentEOF. If the error flag is set and we get
EOF, we need to change the state (and diagnostic) from 'error' to
'not connected'.
qemuConnectAgent returns -1 or -2 for different errors:
A. -1 in case of an unsuccessful connection to the guest agent.
B. -2 in case the domain was destroyed during the connection attempt.
All qemuConnectAgent callers handle the first error the same way,
so let's move this logic into qemuConnectAgent itself. The patched
function returns 0 in case A and -1 in case B.
Use the util function virHostdevIsSCSIDevice() to simplify if
statements.
Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Add missing checks if a hostdev is a subsystem/SCSI device before accessing
the union member 'subsys'/'scsi'. Also fix indentation and simplify
qemuDomainObjCheckHostdevTaint().
Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
A newline character in a domain name is now forbidden because it
messes up virsh output and can be confusing for users.
Validation of the name is done in the drivers, after parsing the XML, to avoid
problems with domains that were already created with a newline character
in their name disappearing.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Commit 3f71c79768 added 'qemu_id' field to track the id of the cpu
as reported by query-cpus. The patch did not include changes necessary
to propagate the id through the functions matching the data to the
libvirt cpu structures and thus all vcpus had id 0.
Don't use qemuMonitorGetCPUInfo which does a lot of matching to get the
full picture which is not necessary and would be mostly discarded.
Refresh only the vcpu halted state using data from query-cpus.
We don't need to call qemuMonitorGetCPUInfo which is very inefficient to
get data required to update the vcpu 'halted' state.
Add a monitor helper that will retrieve the halted state and return it
in a bitmap so that it can be indexed easily.
Whenever qemuMonitorJSONCheckError returns 0, the "return" object is
guaranteed to exist. Thus virJSONValueObjectGetObject will never fail to
get it. On the other hand, virJSONValueObjectGetArray may fail since the
"return" object may not be an array.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Libvirt's code relies on this fact, so don't allow parsing a command line
which would have none.
Libvirtd would crash in the post parse callback on such a config.
After a944bd92 we gained support for setting the gluster debug level.
However, due to a space issue we haven't been testing whether the augeas
file actually works.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
PPC driver needs to convert POWERx_v* legacy CPU model names into POWERx
to maintain backward compatibility with existing domains. This patch
adds a new step into the guest CPU configuration work flow which CPU
drivers can use to convert legacy CPU definitions.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The crash was introduced by commit 7a51d9ebb, which started to use
monitor commands without acquiring a job, which is unsafe and leads
to simultaneous access to the vm->mon structure by different threads.
The crash backtrace is the following (shortened):
Program received signal SIGSEGV, Segmentation fault.
qemuMonitorSend (mon=mon@entry=0x7f4ef4000d20, msg=msg@entry=0x7f4f18e78640) at qemu/qemu_monitor.c:1011
1011 while (!mon->msg->finished) {
0 qemuMonitorSend () at qemu/qemu_monitor.c:1011
1 0x00007f691abdc720 in qemuMonitorJSONCommandWithFd () at qemu/qemu_monitor_json.c:298
2 0x00007f691abde64a in qemuMonitorJSONCommand at qemu/qemu_monitor_json.c:328
3 qemuMonitorJSONQueryCPUs at qemu/qemu_monitor_json.c:1408
4 0x00007f691abcaebd in qemuMonitorGetCPUInfo g@entry=false) at qemu/qemu_monitor.c:1931
5 0x00007f691ab96863 in qemuDomainRefreshVcpuHalted at qemu/qemu_domain.c:6309
6 0x00007f691ac0af99 in qemuDomainGetStatsVcpu at qemu/qemu_driver.c:18945
7 0x00007f691abef921 in qemuDomainGetStats at qemu/qemu_driver.c:19469
8 qemuConnectGetAllDomainStats at qemu/qemu_driver.c:19559
9 0x00007f693382e806 in virConnectGetAllDomainStats at libvirt-domain.c:11546
10 0x00007f6934470c40 in remoteDispatchConnectGetAllDomainStats at remote.c:6267
(gdb) p mon->msg
$1 = (qemuMonitorMessagePtr) 0x0
This change fixes it by calling qemuDomainRefreshVcpuHalted only when a job is acquired.
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
For machinetypes with a pci-root bus (all legacy PCI), libvirt will
make a "fake" reservation for one extra slot prior to assigning
addresses to unaddressed PCI endpoint devices in the domain. This will
trigger auto-adding of a pci-bridge for the final device to be
assigned an address *if that device would have otherwise instead been
the last device on the last available pci-bridge*; thus it assures
that there will always be at least one slot left open in the domain's
bus topology for expansion (which is important both for hotplug (since
a new pci-bridge can't be added while the guest is running) as well as
for offline additions to the config (since adding a new device might
otherwise in some cases require re-addressing existing devices, which
we want to avoid)).
It's important to note that for the above case (legacy PCI), we must
check for the special case of all slots on all buses being occupied
*prior to assigning any addresses*, and avoid attempting to reserve
the extra address in that case, because there is no free address in
the existing topology, so no place to auto-add a pci-bridge for
expansion (i.e. it would always fail anyway). Since that condition can
only be reached by manual intervention, this is acceptable.
For machinetypes with pcie-root (Q35, aarch64 virt), libvirt's
methodology for automatically expanding the bus topology is different
- pcie-root-ports are plugged into slots (soon to be functions) of
pcie-root as needed, and the new endpoint devices are assigned to the
single slot in each pcie-root-port. This is done so that the devices
are, by default, hotpluggable (the slots of pcie-root don't support
hotplug, but the single slot of the pcie-root-port does). Since
pcie-root-ports can only be plugged into pcie-root, and we don't
auto-assign endpoint devices to the pcie-root slots, this means
topology expansion doesn't compete with endpoint devices for slots, so
we don't need to worry about checking for all "useful" slots being
free *prior* to assigning addresses to new endpoint devices - as a
matter of fact, if we attempt to reserve the open slots before the
used slots, it can lead to errors.
Instead this patch just reserves one slot for a "future potential"
PCIe device after doing the assignment for actual devices, but only
if the only PCI controller defined prior to starting address
assignment was pcie-root, and only if we auto-added at least one PCI
controller during address assignment. This assures two things:
1) that reserving the open slots will only be done when the domain is
initially defined, never at any time after, and
2) that if the user understands enough about PCI controllers that they
are adding them manually, that we don't mess up their plan by
adding extras - if they know enough to add one pcie-root-port, or
to manually assign addresses such that no pcie-root-ports are
needed, they know enough to add extra pcie-root-ports if they want
them (this could be called the "libguestfs clause", since
libguestfs needs to be able to create domains with as few
devices/controllers as possible).
This is set to reserve a single free port for now, but could be
increased in the future if public sentiment goes in that direction
(it's easy to increase later, but essentially impossible to decrease)
Real Q35 hardware has an ICH9 chip that includes several integrated
devices at particular addresses (see the file docs/q35-chipset.cfg in
the qemu source). libvirt already attempts to put the first two sets
of ich9 USB2 controllers it finds at 00:1D.* and 00:1A.* to match the
real hardware. This patch does the same for the ich9 "HD audio"
device.
The main inspiration for this patch is that currently the *only*
device in a reasonable "workstation" type virtual machine config that
requires a legacy PCI slot is the audio device. Without this patch,
the standard Q35 machine created by virt-manager will have a
dmi-to-pci-bridge and a pci-bridge just for the sound device; with the
patch (and if you change the sound device model from the default
"ich6" to "ich9"), the machine definition constructed by virt-manager
has absolutely no legacy PCI controllers - any legacy PCI devices
(e.g. video and sound) are on pcie-root as integrated devices.
Previously we added a set of EHCI+UHCI controllers to Q35 machines to
mimic real hardware as closely as possible, but recent discussions
have pointed out that the nec-usb-xhci (USB3) controller is much more
virtualization-friendly (uses less CPU), so this patch switches the
default for Q35 machinetypes to add an XHCI instead (if it's
supported, which it of course *will* be).
Since none of the existing test cases left out USB controllers in the
input XML, a new Q35 test case was added which has *no* devices, so
ends up with only the defaults always put in by qemu, plus those added
by libvirt.
Now that a dmi-to-pci-bridge is automatically added just as it's needed
(when a pci-bridge is being added), we no longer have any need to
force-add one to every single Q35 domain.
Previously libvirt would only add pci-bridge devices automatically
when an address was requested for a device that required a legacy PCI
slot and none was available. This patch expands that support to
dmi-to-pci-bridge (which is needed in order to add a pci-bridge on a
machine with a pcie-root), and pcie-root-port (which is needed to add
a hotpluggable PCIe device). It does *not* automatically add
pcie-switch-upstream-ports or pcie-switch-downstream-ports (and
currently there are no plans for that).
Given the existing code to auto-add pci-bridge devices, automatically
adding pcie-root-ports is fairly straightforward. The
dmi-to-pci-bridge support is a bit tricky though, for a few reasons:
1) Although the only reason to add a dmi-to-pci-bridge is so that
there is a reasonable place to plug in a pci-bridge controller,
most of the time it's not the presence of a pci-bridge *in the
config* that triggers the requirement to add a dmi-to-pci-bridge.
Rather, it is the presence of a legacy-PCI device in the config,
which triggers auto-add of a pci-bridge, which triggers auto-add of
a dmi-to-pci-bridge (this is handled in
virDomainPCIAddressSetGrow() - if there's a request to add a
pci-bridge we'll check if there is a suitable bus to plug it into;
if not, we first add a dmi-to-pci-bridge).
2) Once there is already a single dmi-to-pci-bridge on the system,
there won't be a need for any more, even if it's full, as long as
there is a pci-bridge with an open slot - you can also plug
pci-bridges into existing pci-bridges. So we have to make sure we
don't add a dmi-to-pci-bridge unless there aren't any
dmi-to-pci-bridges *or* any pci-bridges.
3) Although it is strongly discouraged, it is legal for a pci-bridge
to be directly plugged into pcie-root, and we don't want to
auto-add a dmi-to-pci-bridge if there is already a pci-bridge
that's been forced directly into pcie-root.
Although libvirt will now automatically create a dmi-to-pci-bridge
when it's needed, the code still remains for now that forces a
dmi-to-pci-bridge on all domains with pcie-root (in
qemuDomainDefAddDefaultDevices()). That will be removed in a future
patch.
For now, the pcie-root-ports are added one to a slot, which is a bit
wasteful and means it will fail after 31 total PCIe devices (30 if
there are also some PCI devices), but helps keep the changeset down
for this patch. A future patch will have 8 pcie-root-ports sharing the
functions on a single slot.
Andrea had the right idea when he disabled the "reserve an extra
unused slot" bit for aarch64/virt. For *any* PCI Express-based
machine, it is pointless since 1) an extra legacy-PCI slot can't be
used for hotplug, since hotplug into legacy PCI slots doesn't work on
PCI Express machinetypes, and 2) even for "coldplug" expansion,
everybody will want to expand using Express controllers, not legacy
PCI.
This patch eliminates the extra slot reservation unless the system has a
pci-root (i.e. legacy PCI).
The nec-usb-xhci device (which is a USB3 controller) has always
presented itself as a PCI device when plugged into a legacy PCI slot,
and a PCIe device when plugged into a PCIe slot, but libvirt has
always auto-assigned it to a legacy PCI slot.
This patch changes that behavior to auto-assign to a PCIe slot on
systems that have pcie-root (e.g. Q35 and aarch64/virt).
Since we don't yet auto-create pcie-*-port controllers on demand, this
means a config with an nec-xhci USB controller that has no PCI address
assigned will also need to have an otherwise-unused pcie-*-port
controller specified:
<controller type='pci' model='pcie-root-port'/>
<controller type='usb' model='nec-xhci'/>
(this assumes there is an otherwise-unused slot on pcie-root to accept
the pcie-root-port)
The e1000e is an emulated network device based on the Intel 82574,
present in qemu 2.7.0 and later. Among other differences from the
e1000, it presents itself as a PCIe device rather than legacy PCI. In
order to get it assigned to a PCIe controller, this patch updates the
flags setting for network devices when the model name is "e1000e".
(Note that for some reason libvirt has never validated the network
device model names other than to check that there are no dangerous
characters in them. That should probably change, but is the subject of
another patch.)
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1343094
libvirt previously assigned nearly all devices to a "hotpluggable"
legacy PCI slot even on machines with a PCIe root bus (and even though
most such machines don't even support hotplug on legacy PCI slots!)
Forcing all devices onto legacy PCI slots means that the domain will
need a dmi-to-pci-bridge (to convert from PCIe to legacy PCI) and a
pci-bridge (to provide hotpluggable legacy PCI slots which, again,
usually aren't hotpluggable anyway).
To help reduce the need for these legacy controllers, this patch tries
to assign virtio-1.0-capable devices to PCIe slots whenever possible,
by setting appropriate connectFlags in
virDomainCalculateDevicePCIConnectFlags(). Happily, when that function
was written (just a few commits ago) it was created with a
"virtioFlags" argument, set by both of its callers, which is the
proper connectFlags to set for any virtio-*-pci device - depending on
the arch/machinetype of the domain, and whether or not the qemu binary
supports virtio-1.0, that flag will have either been set to PCI or
PCIe. This patch merely enables the functionality by setting the flags
for the device to whatever is in virtioFlags if the device is a
virtio-*-pci device.
NB: the first virtio video device will be placed directly on bus 0
slot 1 rather than on a pcie-root-port due to the override for primary
video devices in qemuDomainValidateDevicePCISlotsQ35(). Whether or not
to change that is a topic of discussion, but this patch doesn't change
that particular behavior.
NB2: since the slot must be hotpluggable, and pcie-root (the PCIe root
complex) does *not* support hotplug, this means that suitable
controllers must also be in the config (i.e. either pcie-root-port, or
pcie-downstream-port). For now, libvirt doesn't add those
automatically, so if you put virtio devices in a config for a qemu
that has PCIe-capable virtio devices, you'll need to add extra
pcie-root-ports yourself. That requirement will be eliminated in a
future patch, but for now, it's simple to do this:
<controller type='pci' model='pcie-root-port'/>
<controller type='pci' model='pcie-root-port'/>
<controller type='pci' model='pcie-root-port'/>
...
Partially Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1330024
This patch cleans up the connect flags for certain types/models of
devices that aren't PCI to return 0. In the future that may be used as
an indicator to the caller about whether or not a device needs a PCI
address. For now it's just ignored, except for in
virDomainPCIAddressEnsureAddr() - called during device hotplug - (and
in some cases actually needs to be re-set to PCI|HOTPLUGGABLE just in
case someone (in some old config) has manually set a PCI address for a
device that isn't PCI.
Before now, all the qemu hotplug functions assumed that all devices to
be hotplugged were legacy PCI endpoint devices
(VIR_PCI_CONNECT_TYPE_PCI_DEVICE). This worked out "okay", because all
devices *are* legacy PCI endpoint devices on x86/440fx machinetypes,
and hotplug didn't work properly on machinetypes using PCIe anyway
(hotplugging onto a legacy PCI slot doesn't work, and until commit
b87703cf any attempt to manually specify a PCIe address for a
hotplugged device would be erroneously rejected).
This patch makes all qemu hotplug operations honor the pciConnectFlags
set by the single all-knowing function
qemuDomainDeviceCalculatePCIConnectFlags(). This is done in 3 steps,
but in a single commit since we would have to touch the other points
at each step anyway:
1) add a flags argument to the hypervisor-agnostic
virDomainPCIAddressEnsureAddr() (previously it hardcoded
..._PCI_DEVICE)
2) add a new qemu-specific function qemuDomainEnsurePCIAddress() which
gets the correct pciConnectFlags for the device from
qemuDomainDeviceConnectFlags(), then calls
virDomainPCIAddressEnsureAddr().
3) in qemu_hotplug.c replace all calls to
virDomainPCIAddressEnsureAddr() with calls to
qemuDomainEnsurePCIAddress()
So in effect, we're putting a "shim" on top of all calls to
virDomainPCIAddressEnsureAddr() that sets the right pciConnectFlags.
Set pciConnectFlags in each device's DeviceInfo and then use those
flags later when validating existing addresses in
qemuDomainCollectPCIAddress() and when assigning new addresses with
qemuDomainPCIAddressReserveNextAddr() (rather than scattering the
logic about which devices need which type of slot all over the place).
Note that the exact flags set by
qemuDomainDeviceCalculatePCIConnectFlags() are different from the
flags previously set manually in qemuDomainCollectPCIAddress(), but
this doesn't matter because all validation of addresses in that case
ignores the setting of the HOTPLUGGABLE flag, and treats PCIE_DEVICE
and PCI_DEVICE the same (this lax checking was done on purpose,
because there are some things that we want to allow the user to
specify manually, e.g. assigning a PCIe device to a PCI slot, that we
*don't* ever want libvirt to do automatically). The flag settings that
we *really* want to match are 1) the old flag settings in
qemuDomainAssignDevicePCISlots() (which is HOTPLUGGABLE | PCI_DEVICE
for everything except PCI controllers) and 2) the new flag settings
done by qemuDomainDeviceCalculatePCIConnectFlags() (which are
currently exactly that - HOTPLUGGABLE | PCI_DEVICE for everything
except PCI controllers).
The lowest level function of this trio
(qemuDomainDeviceCalculatePCIConnectFlags()) aims to be the single
authority for the virDomainPCIConnectFlags to use for any given device
using a particular arch/machinetype/qemu-binary.
qemuDomainFillDevicePCIConnectFlags() sets info->pciConnectFlags in a
single device (unless it has no virDomainDeviceInfo, in which case
it's a NOP).
qemuDomainFillAllPCIConnectFlags() sets info->pciConnectFlags in all
devices that have a virDomainDeviceInfo
The latter two functions aren't called anywhere yet. This commit is
just making them available. Later patches will replace all the current
hodge-podge of flag settings with calls to this single authority.
Coverity identified that this variable might be leaked. And it's
right. If an error occurred and we have to roll back, control
jumps to the try_remove label where we save the current error (see
0e82fa4c34 for more info). However, inside the code a jump onto
another label is possible, thus leaking the error object.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
As was suggested in an earlier review comment[1], we can
catch some additional code points by cleaning up how we use the
hostdev subsystem type in some switch statements.
[1] End of https://www.redhat.com/archives/libvir-list/2016-September/msg00399.html
Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
Signed-off-by: John Ferlan <jferlan@redhat.com>
The memory device alias needs to be treated as machine ABI as qemu is
using it in the migration stream for section labels. To simplify this
generate the alias from the slot number unless an existing broken
configuration is detected.
With this patch the aliases are predictable and even certain
configurations which would not be migratable previously are fixed.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1359135
As with other devices assign the slot number right away when adding the
device. This will make the slot numbers static as we do with other
addressing elements and it will ultimately simplify allocation of the
alias in a static way which does not break with qemu.
Detect on reconnect to a running qemu VM whether the alias of a
hotpluggable memory device (dimm) does not match the dimm slot number
where it's connected to. This is necessary as qemu is actually
considering the alias as machine ABI used to connect the backend object
to the dimm device.
This will require us to keep them consistent so that we can reliably
restore them on migration. In some situations it was currently possible
to create a mismatched configuration and qemu would refuse to restore
the migration stream.
To avoid breaking existing VMs we'll need to keep the old algorithm
though.
Simplify handling of the 'dimm' address element by allowing specification of
the slot number only. This will allow libvirt to allocate slot numbers
before starting qemu.
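An illustrative XML fragment with a slot-only dimm address (the size and
slot number are just examples):
<memory model='dimm'>
  <target>
    <size unit='KiB'>524288</size>
    <node>0</node>
  </target>
  <address type='dimm' slot='2'/>
</memory>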
https://bugzilla.redhat.com/show_bug.cgi?id=1386976
We have everything ready. Actually the only limitation was our
check that denied hotplug of vhost-user.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If there is an error hotplugging a net device (for whatever
reason) a rollback operation is performed. However, whilst doing
so, various helper functions that are called report errors on
their own. This results in the original error being overwritten,
thus misleading the user.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Even when using /dev/shm/asdf as the backend, we still need to make
the mapping shared. The original patch forgot to add that parameter.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1392031
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
../../src/qemu/qemu_capabilities.c:3757: error: declaration of
'basename' shadows a global declaration [-Wshadow]
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Function qemuDomainAttachShmemDevice() steals the device data if the
hotplug was successful, but the condition checked for unsuccessful
execution instead.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Propagate the selected or default level to qemu if it's supported.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1376009
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
This helps in selecting the log level of the gluster gfapi; the output goes
to stderr. The option is 'gluster_debug_level' and can be tuned by editing
'/etc/libvirt/qemu.conf' (an example setting follows the level list below).
Debug levels range from 0-9, with 9 being the most verbose and 0
representing no debugging output. The default is the same as it was
before, which is a level of 4. The current logging levels defined in
the gluster gfapi are:
0 - None
1 - Emergency
2 - Alert
3 - Critical
4 - Error
5 - Warning
6 - Notice
7 - Info
8 - Debug
9 - Trace
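Example setting in /etc/libvirt/qemu.conf (the value shown is only an
example):
gluster_debug_level = 9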
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Allow detecting capabilities according to the qemu QMP schema. This is
necessary as sometimes the availability of certain options depends on
the presence of a field in the schema.
This patch adds support for loading the QMP schema when detecting qemu
capabilities and adds a very simple query language to allow traversing
the schema and selecting a certain element from it.
The infrastructure in this patch uses a query path to set a specific
capability flag according to the availability of the given element in
the schema.
https://bugzilla.redhat.com/show_bug.cgi?id=1379196
Add check in qemuCheckDiskConfig for an invalid combination
of using the 'scsi' bus for a block 'lun' device and any disk
source format other than 'raw'.
Commit c29e6d4805 cause build failure on RHEL-6:
../../src/qemu/qemu_capabilities.c: In function 'virQEMUCapsIsValid':
../../src/qemu/qemu_capabilities.c:4085: error: declaration of 'ctime'
shadows a global declaration [-Wshadow]
/usr/include/time.h:258: error: shadowed declaration is here [-Wshadow]
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Let's keep all run time validation of cached QEMU capabilities in
virQEMUCapsIsValid and call it whenever we access the cache.
virQEMUCapsInitCached should keep only the checks which do not make
sense once the cache is loaded in memory.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
virQEMUCapsLoadCache loads QEMU capabilities from a file, but strangely
enough it returns the loaded QEMU binary ctime in qemuctime parameter
instead of storing it in qemuCaps.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This is needed in order to migrate a domain with shmem devices, since such
a domain is not allowed to migrate (the devices have to be unplugged first).
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
QEMU added support for ivshmem-plain and ivshmem-doorbell. Those are
reworked variants of the legacy ivshmem device that are compatible from the
guest POV, but not from the host's POV, and have sane specification and
handling.
Details about the newer device type can be found in qemu's commit
5400c02b90bb:
http://git.qemu.org/?p=qemu.git;a=commit;h=5400c02b90bb
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
We're keeping some things at default and that's not something we want to
do intentionally. Let's save some sensible defaults upfront in order to
avoid having problems later. The details for the defaults (of the newer
implementation) can be found in qemu's commit 5400c02b90bb:
http://git.qemu.org/?p=qemu.git;a=commit;h=5400c02b90bb
Since we are merely saving the defaults it will not change the guest ABI
and thanks to the fact that we're doing it in the PostParse callback it
will not break the ABI stability checks.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
The old ivshmem is deprecated in QEMU, so let's use the better
ivshmem-{plain,doorbell} variants instead.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Unlike other migration capabilities, post-copy is also set on the
destination host, which means it doesn't disappear once the domain is
migrated. As a result, other functionality which internally uses
migration to a file (virDomainManagedSave, virDomainSave,
virDomainCoreDump) may fail after migration because the post-copy
capability is still set.
https://bugzilla.redhat.com/show_bug.cgi?id=1374718
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
If we fail to unlink the old domain config file, we goto rollback.
But inside rollback, we forgot to unlink the new domain config file.
This patch fixes this issue.
Signed-off-by: Chen Hanxiao <chenhanxiao@gmail.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Whilst working on another issue, I've noticed that in some
functions we have a local @driver variable along with access to the
global @qemu_driver variable. This makes no sense.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Rather than waiting until we've freed up all the resources, cause the
'workerPool' thread pool to flush as soon as possible during stateCleanup.
Otherwise, it's possible that something waiting to run will SEGV, such as is
the case during the race between a simultaneously exiting libvirtd and qemu
process.
Resolves the following crash:
[1] crash backtrace: (bt is shortened a bit):
0 0x00007ffff7282f2b in virClassIsDerivedFrom
(klass=0xdeadbeef, parent=0x55555581d650) at util/virobject.c:169
1 0x00007ffff72835fd in virObjectIsClass
(anyobj=0x7fffd024f580, klass=0x55555581d650) at util/virobject.c:365
2 0x00007ffff7283498 in virObjectLock
(anyobj=0x7fffd024f580) at util/virobject.c:317
3 0x00007ffff722f0a3 in virCloseCallbacksUnset
(closeCallbacks=0x7fffd024f580, vm=0x7fffd0194db0,
cb=0x7fffdf1af765 <qemuProcessAutoDestroy>)
at util/virclosecallbacks.c:164
4 0x00007fffdf1afa7b in qemuProcessAutoDestroyRemove
(driver=0x7fffd00f3a60, vm=0x7fffd0194db0) at qemu/qemu_process.c:6365
5 0x00007fffdf1adff1 in qemuProcessStop
(driver=0x7fffd00f3a60, vm=0x7fffd0194db0, reason=VIR_DOMAIN_SHUTOFF_CRASHED,
asyncJob=QEMU_ASYNC_JOB_NONE, flags=0)
at qemu/qemu_process.c:5877
6 0x00007fffdf1f711c in processMonitorEOFEvent
(driver=0x7fffd00f3a60, vm=0x7fffd0194db0) at qemu/qemu_driver.c:4545
7 0x00007fffdf1f7313 in qemuProcessEventHandler
(data=0x555555832710, opaque=0x7fffd00f3a60) at qemu/qemu_driver.c:4589
8 0x00007ffff72a84c4 in virThreadPoolWorker
(opaque=0x555555805da0) at util/virthreadpool.c:167
Thread 1 (Thread 0x7ffff7fb1880 (LWP 494472)):
1 0x00007ffff72a7898 in virCondWait
(c=0x7fffd01c21f8, m=0x7fffd01c21a0) at util/virthread.c:154
2 0x00007ffff72a8a22 in virThreadPoolFree
(pool=0x7fffd01c2160) at util/virthreadpool.c:290
3 0x00007fffdf1edd44 in qemuStateCleanup ()
at qemu/qemu_driver.c:1102
4 0x00007ffff736570a in virStateCleanup ()
at libvirt.c:807
5 0x000055555556f991 in main (argc=1, argv=0x7fffffffe458) at libvirtd.c:1660
We don't support cpu pinning for TCG domains because QEMU runs them in
one thread only. But the vcpupin command was able to set it, which
resulted in a failed startup, so make sure that doesn't happen.
Signed-off-by: Chen Hanxiao <chenhanxiao@gmail.com>
When starting a new domain, we allocate the USB addresses and keep
an address cache in the domain object's private data.
However this data is lost on libvirtd restart.
Also generate the address cache if all the addresses have been
specified, so that devices hotplugged after libvirtd restart
also get theirs assigned.
https://bugzilla.redhat.com/show_bug.cgi?id=1387666
Return 0 instead of 1, so that qemuDomainAttachChrDevice does not
assume the address needs to be released on error.
No functional change, since qemuDomainReleaseDeviceAddress has been a noop
for virtio serial addresses since the address cache was removed
in commit 19a148b.
This time do not require an address cache as a parameter.
Simplify qemuDomainAttachChrDeviceAssignAddr to not generate
the virtio serial address cache for devices of other types.
Partially reverts commit 925fa4b.
Commit 19a148b dropped the cache from QEMU's private domain object.
Assume the callers do not have the cache by default and use
a longer name for the internal ones that do.
This makes the shorter 'virDomainVirtioSerialAddrAutoAssign'
name available for a function that will not require the cache.
When a user tries to resume an already running domain (QEMU or LXC), a
VIR_ERR_OPERATION_INVALID error should be raised with a message that the
domain is already running.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1009008
Support for virtio disks was added in commit id 'fceeeda', but not for
SCSI drives. Add the secret for the server when hotplugging a SCSI drive.
No need to make any adjustments for unplug since that's handled during
the qemuDomainDetachDiskDevice call to qemuDomainRemoveDiskDevice in
the qemuDomainDetachDeviceDiskLive switch.
Added a test for the command line processing to show the command line
options when adding a SCSI drive for the guest.
https://bugzilla.redhat.com/show_bug.cgi?id=1300776
Complete the implementation of support for TLS encryption on
chardev TCP transports by adding the hotplug ability of a secret
to generate the passwordid for the TLS object for chardev, RNG,
and redirdev devices.
Fix up the order of object removal on failure to be the inverse
of the attempted attach (for redirdev, chr, rng) - for each the
tls object was being removed before the chardev backend.
Likewise, add the ability to hot unplug that secret object as well
and be sure the order of unplug matches that inverse order of plug.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Add the secret object so the 'passwordid=' can be added to the command line
if there's a secret defined in/on the host for TCP chardev TLS objects.
Preparation for the secret involves adding the secinfo to the char source
device prior to command line processing. There are multiple possibilities
for TCP chardev source backend usage.
Add test for at least a serial chardev as an example.
Need to remove the drive first, then the secobj and/or encobj if they exist.
This is because the drive has a dependency on the secobj (the secret for
the networked storage server) and/or the encobj (the secret for the
LUKS encrypted volume). Deleting either object first leaves a drive
without its respective objects.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Add in the block I/O throttling length/duration parameter to the command
line if supported. If not supported, fail command creation.
Add the xml2argvtest for testing.
Add support for a duration/length for the bps/iops and friends.
Modify the API in order to add the "blkdeviotune." specific definitions
for the iotune throttling duration/length options (an illustrative virsh
invocation follows the list below):
total_bytes_sec_max_length
write_bytes_sec_max_length
read_bytes_sec_max_length
total_iops_sec_max_length
write_iops_sec_max_length
read_iops_sec_max_length
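An illustrative virsh invocation exercising the new parameters (the
option names are assumed to mirror the parameter names above):
$ virsh blkdeviotune guest01 vda --total-bytes-sec 10000000 \
    --total-bytes-sec-max 20000000 --total-bytes-sec-max-length 60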
Create a helper to set the bytes/iops iotune default values based on
the current qemu setting for both the live and persistent definitions.
NB: This also fixes an unreported bug where the persistent values for
*_max and size_iops_sec would be set back to 0 if unrelated persistent
values were set.
This patch will also adjust the qemuMonitorJSONSetBlockIoThrottle error
processing so that rather than returning/displaying:
"error: internal error: Unexpected error"
we fetch the actual error message from qemu and display that.
Create a macro to hide all the comparisons for each of the fields.
Add a 'continue;' as a compiler hint that we only need to find one;
this should be similar enough to the if - else if - else if logic.
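A minimal sketch of the idea (hypothetical names, not the actual macro
added by the patch; the macro is meant to be expanded inside a loop over
the existing groups):
/* skip to the next candidate group as soon as one field differs */
#define IOTUNE_FIELD_MATCH(field)        \
    if (info->field != newinfo->field)   \
        continue;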
Commit id '2c32237' added the TLS object removal to the DetachChrDevice
call when it should have been added to RemoveChrDevice since that's
the norm for similar processing (e.g. disk).
Signed-off-by: John Ferlan <jferlan@redhat.com>
After successfully reading an outdated caps cache from disk,
calling virQEMUCapsReset did not properly clear out the calculated
host CPU model. This led to a memory leak when the host CPU model
pointer was overwritten later in virQEMUCapsNewForBinaryInternal.
Introduced by commit 68c70118.
Extended qemuDomainGetStatsVcpu to include the per vcpu halted
indicator if reported by QEMU. The key for the new boolean value
has the format "vcpu.<n>.halted".
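Abridged, illustrative domstats output with the new key (domain name and
values are examples):
$ virsh domstats --vcpu guest01
Domain: 'guest01'
  vcpu.0.state=1
  vcpu.0.halted=no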
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
Adding a field to the domain's private vcpu object to hold the halted
state information.
Adding two functions in support of the halted state:
- qemuDomainGetVcpuHalted: retrieve the halted state from a
private vcpu object
- qemuDomainRefreshVcpuHalted: obtain the per-vcpu halted states
via qemu monitor and store the results in the private vcpu objects
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
Reviewed-by: Hao QingFeng <haoqf@linux.vnet.ibm.com>
Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Extended the qemuMonitorCPUInfo with a halted flag. Extract the halted
flag for both text and JSON monitor.
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
An upcoming commit will remove the "flag" argument from all the calls
to reserve the next available address|slot, but I don't want to change
the arguments in the hypervisor-agnostic
virDomainPCIAddressReserveNext*() functions, so this patch places a
simple qemu-specific wrapper around those functions - the new
functions don't take a flags arg, but grab it from the device's
info->pciConnectFlags.
Since TLS was introduced hostwide for libvirt 2.3.0 and a domain
configurable haveTLS was implemented for libvirt 2.4.0, we have to
modify the migratable XML for the specific case where the 'tls' attribute
is based on the setting from qemu.conf.
The "tlsFromConfig" attribute is libvirt-internal and is stored only in the
status XML to ensure that when libvirtd is restarted this internal flag
is not lost by the restart.
That flag is used to decide whether we should put the *tls* attribute into
the migratable XML or not.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Add an optional "tls='yes|no'" attribute for a TCP chardev.
For QEMU, this will allow for disabling the host config setting of the
'chardev_tls' for a domain chardev channel by setting the value to "no" or
to attempt to use a host TLS environment when setting the value to "yes"
when the host config 'chardev_tls' setting is disabled, but a TLS environment
is configured via either the host config 'chardev_tls_x509_cert_dir' or
'default_tls_x509_cert_dir'
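An illustrative TCP chardev using the new attribute (host/port values are
examples only):
<serial type='tcp'>
  <source mode='connect' host='127.0.0.1' service='5555' tls='yes'/>
  <protocol type='raw'/>
  <target port='0'/>
</serial>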
Signed-off-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Currently the union has only one member so remove that union. If there
is a need to add a new type of source for new bus in the future this
will force the author to add a union and properly check bus type before
any access to union member.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Commit id '2c322378' missed the nuance that the rng backend could be
using a TCP chardev and, if TLS is enabled on the host, will thus need
to have the TLS object added.
Commit id '2c322378' missed the nuance that the redirdev backend could
be using a TCP chardev and, if TLS is enabled on the host, will thus need
to have the TLS object added.