As discussed on the upstream list, it's better not to make this
kind of prediction in libvirt. It may happen that qemu learns
how to enable OVMF on other architectures too and we shouldn't
try to chase that.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Currently, we are whitelisting architectures that we know how to run
OVMF on. So far, only x86_64 was enabled. However, looking at qemu
code, the same command line can be used to enable OVMF for armv7l and
aarch64.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
I noticed this while working on qemuDomainGetBlockInfo. Assigning
a bool value to an int variable compiles fine, but raises red flags
on the maintenance front as it becomes too easy to assign -1 or 2
or any other non-bool value to the same variable.
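For illustration, the pattern the new rule flags looks roughly like this
(hypothetical variable names):

  bool found = false;
  int ret = found;   /* compiles fine, but nothing stops ret = -1 or ret = 2 later */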
* cfg.mk (sc_prohibit_int_assign_bool): New rule.
* src/conf/snapshot_conf.c (virDomainSnapshotRedefinePrep): Fix
offenders.
* src/qemu/qemu_driver.c (qemuDomainGetBlockInfo)
(qemuDomainSnapshotCreateXML): Likewise.
* src/test/test_driver.c (testDomainSnapshotAlignDisks):
Likewise.
* src/util/vircgroup.c (virCgroupSupportsCpuBW): Likewise.
* src/util/virpci.c (virPCIDeviceBindToStub): Likewise.
* src/util/virutil.c (virIsCapableVport): Likewise.
* tools/virsh-domain-monitor.c (cmdDomMemStat): Likewise.
* tools/virsh-domain.c (cmdBlockResize, cmdScreenshot)
(cmdInjectNMI, cmdSendKey, cmdSendProcessSignal)
(cmdDetachInterface): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Ethernet interfaces in libvirt currently do not support bandwidth setting.
For example, the following XML for an interface will not apply these
settings to the corresponding qdiscs.
<interface type="ethernet">
<mac address="02:36:1d:18:2a:e4"/>
<model type="virtio"/>
<script path=""/>
<target dev="tap361d182a-e4"/>
<bandwidth>
<inbound average="984" peak="1024" burst="64"/>
<outbound average="2000" peak="2048" burst="128"/>
</bandwidth>
</interface>
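Once the bandwidth settings are applied, the resulting qdiscs can be
inspected on the host with standard tooling, e.g. (illustrative, using the
tap device from the XML above):

  tc qdisc show dev tap361d182a-e4
  tc class show dev tap361d182a-e4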
Signed-off-by: Anirban Chakraborty <abchak@juniper.net>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
For some reason, commit id '72b4151f' triggered a Coverity uninitialized
'reply' variable check when referenced within the for loop.
It seems Coverity doesn't know that flags will have to be either AFFECT_LIVE
or AFFECT_CONFIG after the virDomainLiveConfigHelperMethod call.
By adding a "sa_assert()" to confirm that fact, Coverity is happy again.
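The added hint is essentially of this form (an illustrative sketch, not the
exact diff):

  /* virDomainLiveConfigHelperMethod() guarantees one of the two bits is set */
  sa_assert((flags & VIR_DOMAIN_AFFECT_LIVE) ||
            (flags & VIR_DOMAIN_AFFECT_CONFIG));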
https://bugzilla.redhat.com/show_bug.cgi?id=1164080
After a disk is hotunplugged, a subsequent call to qemuDomainGetBlockIoTune
to get the --config settings of that disk will fail because the disk is no
longer found by qemuDiskPathToAlias, causing an unexpected failure.
Since only the --live flag needs to have the disk device pointer, move the
fetch inside the (flags & VIR_DOMAIN_AFFECT_LIVE) condition. This will also
affect the results if no flags are provided or the --current flag is provided.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
It seems 'size_iops_sec' was a late addition and the checks for whether
the field was defined but unsupported, as well as for the maximum size of
the field, were not being made.
Also, adjust the blkdeviotune support error message for grammar and spelling
(paramater), and remove the "(need QEMU 1.7 or superior)" suffix. None of
our other similar error messages list which QEMU version is required.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Since QEMU 1.2.0, we switched to QMP probing instead of parsing -help
(and other commands, such as -cpu ?) output. However, if QMP probing
failed, we still tried starting QEMU with various options and parsing
the output, which was guaranteed to fail because the output changed.
Let's just refuse parsing -help for QEMU >= 1.2.0.
https://bugzilla.redhat.com/show_bug.cgi?id=1160318
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
We used to set migration capabilities only when a user asked for them in
flags. This is fine when migration succeeds since the QEMU process is
killed in the end, but when migration fails or is cancelled, some
capabilities may remain turned on with no way to turn them off. To fix
that, migration capabilities have to be turned on if requested but
explicitly turned off in case they were not requested but QEMU supports
them.
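In QMP terms this means that, besides enabling the requested capabilities,
capabilities QEMU supports but the user did not request are now explicitly
turned off, along the lines of this illustrative payload:

  {"execute": "migrate-set-capabilities",
   "arguments": {"capabilities": [{"capability": "xbzrle", "state": false}]}}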
https://bugzilla.redhat.com/show_bug.cgi?id=1163953
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Commit 6e5c79a1 tried to fix deadlock between nwfilter{Define,Undefine}
and starting of guest, but this same deadlock exists for
updating/attaching network device to domain.
The deadlock was introduced by the removal of the global QEMU driver lock,
because nwfilter was counting on that lock to ensure that all driver locks
are held inside of nwfilter{Define,Undefine}.
This patch extends the usage of virNWFilterReadLockFilterUpdates to prevent
the deadlock for all possible paths in the QEMU driver. The LXC and UML
drivers still have the global lock.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1143780
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
In one of my previous patches (3a3c3780b) I tried to fix the
problem of the nvram path disappearing on a domain that's been
started and shut down again. I fixed this by explicitly saving
the domain's config file. However, I was a bit clumsy and didn't
realize we have transient domains for which we don't save the
config file. Hence, any domain using UEFI became persistent.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Use the device type name if we know it instead of its number,
even if we can't hotplug it:
qemuMonitorJSONAttachCharDevCommand:6094 : operation failed: Unsupported
char device type '10'
If the memory mode is specified as 'strict' with one node, we
get the following error when starting the domain.
error: Unable to write to '$cgroup_path/cpuset.mems': Device or resource busy
XML is configured with numatune as follows:
<numatune>
<memory mode='strict' nodeset='0'/>
</numatune>
It was broken by commit 411cea638f,
which moved qemuSetupCgroupForEmulator() before setting cpuset.mems
in qemuSetupCgroupPostInit.
The directory '$cgroup_path/emulator/' is created in qemuSetupCgroupForEmulator,
but '$cgroup_path/emulator/cpuset.mems' is not set and keeps its default value
(all nodes, such as 0-1). Then we set '$cgroup_path/cpuset.mems' to the
nodemask (in this case '0') in qemuSetupCgroupPostInit, which must fail.
This patch makes '$cgroup_path/emulator/cpuset.mems' be set before
'$cgroup_path/cpuset.mems'. The action is similar to that in
qemuDomainSetNumaParamsLive.
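In cgroupfs terms the enforced ordering is roughly (illustrative paths only):

  echo 0 > $cgroup_path/emulator/cpuset.mems   # restrict the child first
  echo 0 > $cgroup_path/cpuset.mems            # then shrinking the parent succeeds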
Signed-off-by: Wang Rui <moon.wangrui@huawei.com>
If the memory mode in numatune is specified as 'preferred' with one node
(such as nodeset='0'), the domain's memory is not necessarily all on node 0.
Assuming node 0 doesn't have enough memory, memory can be allocated
on node 1 when the qemu process starts up. If we then set cpuset.mems to '0',
it may trigger the OOM killer.
Commit 1a7be8c600 changed the former logic of
checking the memory mode in virDomainNumatuneGetNodeset. This patch restores
the check.
Signed-off-by: Wang Rui <moon.wangrui@huawei.com>
Check the availability of the options with the current qemu binary,
add them to the variable 'opt' if available, and print a message if not.
Signed-off-by: Matthias Gatto <matthias.gatto@outscale.com>
Detect whether the qemu binary currently in use supports the bps_max option.
If it does, add it to the command; if not, just ignore the option.
We don't print an error here, because the check for invalid arguments
has already been made in qemu_driver.c.
Signed-off-by: Matthias Gatto <matthias.gatto@outscale.com>
Add support for bps_max and friends in the driver part.
In the part checking whether a qemu is running, check if the running binary
supports bps_max; if not, print an error message, otherwise add it to
the "info" variable.
Signed-off-by: Matthias Gatto <matthias.gatto@outscale.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Add the capability to detect whether the qemu binary has the ability
to use bps_max and friends.
Add a value to the enum virQEMUCapsFlags for the qemu capability.
Set it with virQEMUCapsSet if the binary supports bps_max and friends.
Signed-off-by: Matthias Gatto <matthias.gatto@outscale.com>
PowerISA allows processors to run VMs in binary compatibility ("compat")
mode supporting an older version of ISA. QEMU has recently added support to
explicitly denote a VM running in compatibility mode through commit 6d9412ea
& 8dfa3a5e85. Now, a "compat" mode VM can be run by invoking this qemu
commandline on a POWER8 host: -cpu host,compat=power7.
This patch allows libvirt to exploit cpu mode 'host-model' to describe this
new mode for PowerKVM guests. For example, when a user wants to request a
POWER7 VM to run in compatibility mode on a POWER8 host, this can be
described in XML as follows:
<cpu mode='host-model'>
<model>power7</model>
</cpu>
Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com>
Signed-off-by: Li Zhang <zhlcindy@linux.vnet.ibm.com>
Signed-off-by: Pradipta Kr. Banerjee <bpradip@in.ibm.com>
Acked-by: Michal Privoznik <mprivozn@redhat.com>
This adds support for the PowerPC Little Endian architecture
and allows libvirt to spawn VMs based on the 'ppc64le' architecture.
Signed-off-by: Pradipta Kr. Banerjee <bpradip@in.ibm.com>
Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1160084
As of b6d4dad1 (1.2.5) libvirt keeps track of whether domain disks have been
frozen. However, this falls into the set of information which doesn't
survive domain restart. Therefore, we need to clear the flag upon some
state transitions. Moreover, once we clear the flag we must update the
status file too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Extending the iothread disk support from pci to pci and ccw.
Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Finding the right type of disk should check for virtio as bus and
pci as device address type.
Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
This is a reaction to Michal's fix [1] for non-NUMA systems that also
splits conf/ out of util/, because libvirt_util shouldn't require
libvirt_conf; the dependency should only go the other way around. This
particular use case worked, but we're trying to avoid it, as mentioned [2]
many times.
The only functions from virnuma.c that needed numatune_conf were
virDomainNumatuneNodesetIsAvailable() and virNumaSetupMemoryPolicy().
The first one should be in numatune_conf as it works with
virDomainNumatune, the second one just needs nodeset and mode, both of
which can be passed without the need of numatune_conf.
Apart from fixing that, this patch also fixes recently added
code (between commits d2460f85^..5c8515620) that doesn't support
non-contiguous nodesets. It uses the new function
virNumaNodesetIsAvailable(), which doesn't need a stub as it doesn't use
any libnuma functions, to check whether every specified nodeset is available.
[1] https://www.redhat.com/archives/libvir-list/2014-November/msg00118.html
[2] http://www.redhat.com/archives/libvir-list/2011-June/msg01040.html
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Since there was a valid note on patch 43b67f2e about the best spot to
check for the bandwidth set call while the libvirt daemon is running in
session mode, this patch reverts the previous changes dealing with bandwidth
(it also reverts adding the variable @cfg in qemuDomainGetNumaParameters,
which has no use at the moment other than getting and unreferencing the
driver's config) in qemu_driver.c and qemu_command.c. There will be
another patch in the series which introduces the fix itself.
==404== 232 bytes in 1 blocks are definitely lost in loss record 669 of 758
==404== at 0x4C2B934: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==404== by 0x52A2BF3: virAlloc (viralloc.c:144)
==404== by 0x1D49AD70: qemuMigrationCookieAddStatistics (qemu_migration.c:554)
==404== by 0x1D49AD70: qemuMigrationBakeCookie (qemu_migration.c:1228)
==404== by 0x1D4A43B8: qemuMigrationFinish (qemu_migration.c:5002)
==404== by 0x1D4C9339: qemuDomainMigrateFinish3Params (qemu_driver.c:11526)
Introduced by commit 5d6fb96
https://bugzilla.redhat.com/show_bug.cgi?id=1159219
Users might want to update startupPolicy via the
virDomainUpdateDeviceFlags API too. This patch
implements the feature on the config layer.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Domain memory elements such as max_balloon and cur_balloon are
implemented as 'unsigned long long', whereas the 'memory' element
in NUMA cells is implemented as 'unsigned int'.
Use the same data type (unsigned long long) for the 'memory' element
in NUMA cells.
Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com>
In qemuMigrationFinish mig->nbd can not be initialized by
qemuMigrationEatCookie without the QEMU_MIGRATION_COOKIE_NBD flag.
That causes qemuMigrationStopNBDServer to return early without
stopping the NBD server properly.
Signed-off-by: Weiwei Li <nuonuoli@tencent.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
There was no check for the 'nodeset' attribute in numatune-related
elements. This patch adds validation that any nodeset specified does
not exceed the maximum host node.
Signed-off-by: Chen Fan <chen.fan.fnst@cn.fujitsu.com>
When one domain is being undefined and at the same time started, for
example, there is a possibility of a rare problem occurring.
- Thread 1 does virDomainUndefine(), has the lock, checks that the
domain is active and because it's not, calls
virDomainObjListRemove().
- Thread 2 does virDomainCreate() and tries to lock the domain.
- Thread 1 needs to lock domain list in order to remove the domain from
it, but must unlock domain first (proper order is to lock domain list
first and the domain itself second).
- Thread 2 grabs the lock, starts the domain and releases the lock.
- Thread 1 grabs the lock and removes the domain from list.
With this patch:
- qemuDomainRemoveInactive() grabs a QEMU_JOB_MODIFY job if that's
  possible, but since it must remove the domain from the list either way,
  it continues even when starting the job failed.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1150505
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
When the daemon is killed right in the middle of probing a qemu binary for
its capabilities, the qemu process is left running. Next time the
daemon starts, it cannot start the probing qemu process because the
one that's already running still holds the flock() on the pidfile.
Reported-by: Wang Yufei <james.wangyufei@huawei.com>
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Particularly in qemuBuildNumaArgStr(), there was a need for the numad advice
due to memory backing, which needs to know the nodeset it will be pinned
to. With newer qemu this caused the following error when starting a
domain:
error: internal error: Advice from numad is needed in case of
automatic numa placement
even when starting a perfectly valid domain, e.g.:
...
<vcpu placement='auto'>4</vcpu>
<numatune>
<memory mode='strict' placement='auto'/>
</numatune>
<cpu>
<numa>
<cell id='0' cpus='0' memory='524288'/>
<cell id='1' cpus='1' memory='524288'/>
</numa>
</cpu>
...
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1138545
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Hotplugging and hotunplugging char devices is only supported through
'-device' and the check for the device capability should be done
independently.
Coverity also complains that 'tmpChr->info.alias' could be NULL and we
are dereferencing it, but in this case it somehow doesn't recognize
that the value is set by 'qemuAssignDeviceChrAlias', so it's clearly a
false positive. Add sa_assert to make Coverity happy.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
commit 3e1e16aa8d (Use a port from the
migration range for NBD as well) changed NBD port allocation from
remotePorts to migrationPorts, but did not change the port releasing
process, which leads to an error after migrating several times (more than 64):
error: internal error: Unable to find an unused port in range
'migration' (49152-49215)
https://bugzilla.redhat.com/show_bug.cgi?id=1159245
Signed-off-by: Weiwei Li <nuonuoli@tencent.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1140981 reports that
the qemu-kvm shipped as part of RHEL 7.0 intentionally[1] cripples
block jobs by removing the 'block-stream' QMP command, while still
leaving 'block-job-cancel' as an unusable no-op. Meanwhile, we
already had existing code that checked whether block jobs were
completely missing (such as qemu 0.15), old style (cancel is
synchronous, and all commands spelled with '_'), or new style
(cancel is asynchronous, and all commands spelled with '-'), and
used that three-way probe to give decent error messages. At the
time that code was added, all existing qemu versions fell in one
of three buckets, and the code was using the presence of
'block-job-cancel' as the witness of which of the three buckets applied.
But now that RHEL qemu has shipped with intentionally crippled
'block-stream', we have a fourth bucket, which results in ugly
error messages when trying 'virsh blockpull':
error: Requested operation is not valid: Command 'block-stream' is not found
In reality, the fourth bucket should be treated the same as the
first bucket (no block job support); we can do that by realizing
that no existing build of qemu has working block-stream while
lacking block-job-cancel, so it is easiest to change our witness
to the command that starts a job rather than ends one. We still
act correctly regarding command spelling and whether cancel is
asynchronous. And on crippled RHEL builds, we now get the desired:
error: unsupported configuration: block jobs not supported with this qemu binary
[1] The intentional cripple is limited to qemu-kvm of RHEL; when using
qemu-kvm-rhev of RHEV, block job functionality is supported. Don't ask
me to explain the "why" behind it all - I'm just dealing with fallout
from someone else's decision.
* src/qemu/qemu_capabilities.h (QEMU_CAPS_BLOCKJOB_SYNC): Tweak comment.
* src/qemu/qemu_capabilities.c (virQEMUCapsCommands): Look for stream
rather than cancel when determining the flavor of block jobs supported.
Signed-off-by: Eric Blake <eblake@redhat.com>
Now that all offenders have been cleaned, turn on a syntax-check
rule to prevent future offenders.
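The pattern the new rule rejects is, roughly (hypothetical example):

  static int nerrors = 0;     /* flagged: explicit zero initialization */
  static int nerrors;         /* fine: static storage is zero-initialized anyway */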
* cfg.mk (sc_prohibit_static_zero_init): New rule.
* src/qemu/qemu_driver.c (qemuDomainBlockJobImpl): Avoid false
positive.
Signed-off-by: Eric Blake <eblake@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1141621
As part of attach processing, assign the device aliases by calling
qemuAssignDeviceAliases during qemuDomainQemuAttach once all the devices
are found after the qemuParseCommandLinePid processing.
This will alleviate a symptom that caused a libvirtd crash during an
attempted device detach.
In qemuDomainDetachControllerDevice, if the info.alias already exists,
a call to qemuAssignDeviceControllerAlias would overwrite the existing
alias, so avoid this possibility.
Not every error message from qemu-ga has to have the 'class' field
filled out. For instance, I've seen this error message lately:
qemuAgentCheckError:1047 : unable to execute QEMU agent command \
{"execute":"guest-set-time"}: \
{"error":{"desc":"Invalid parameter type, expected: integer"}}
However, this got translated into a rather generic error message:
internal error: unable to execute QEMU agent command
'guest-set-time': unknown QEMU command error
So we've dropped the better error message in favor of a generic one.
This is due to our code expecting a 'class' field, which is not
present here.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This patch adds functionality to processNicRxFilterChangedEvent().
The old and new multicast lists are compared and the filters in
the macvtap are programmed to match the guest's filters.
Signed-off-by: Tony Krowiak <akrowiak@linux.vnet.ibm.com>
https://bugzilla.redhat.com/show_bug.cgi?id=956506 documents that
given a domain where an internal snapshot parent has an external
snapshot child, we lacked a safety check when trying to use the
--children-only option to snapshot-delete:
$ virsh start dom
$ virsh snapshot-create-as dom internal
$ virsh snapshot-create-as dom external --disk-only
$ virsh snapshot-delete dom external
error: Failed to delete snapshot external
error: unsupported configuration: deletion of 1 external disk snapshots not supported yet
$ virsh snapshot-delete dom internal --children
error: Failed to delete snapshot internal
error: unsupported configuration: deletion of 1 external disk snapshots not supported yet
$ virsh snapshot-delete dom internal --children-only
Domain snapshot internal children deleted
While I'd still like to see patches that actually do proper external
snapshot deletion, we should at least fix the inconsistency in the
meantime. With this patch:
$ virsh snapshot-delete dom internal --children-only
error: Failed to delete snapshot internal
error: unsupported configuration: deletion of 1 external disk snapshots not supported yet
* src/qemu/qemu_driver.c (qemuDomainSnapshotDelete): Fix condition.
Signed-off-by: Eric Blake <eblake@redhat.com>
To prepare for introducing a single global driver, rename the
virDriver struct to virHypervisorDriver and the registration
API to virRegisterHypervisorDriver()
Tuning NUMA or network interface parameters requires root
privileges to manage cgroups. Thus an attempt to set some of these
parameters in session mode on a running domain should fail with an
error. An example might be memory tuning, which raises
an error in such a case.
The following behavior in session mode will be present after applying
this patch:
Tuning    | SET           | GET    |
----------|---------------|--------|
NUMA      | shut off only | always |
Memory    | never         | never  |
Interface | never         | always |
Resolves https://bugzilla.redhat.com/show_bug.cgi?id=1126762
The documentation for the restore hook states that returning an empty
XML is equivalent to copying the input. There was a bug in the code
checking the returned string: it checked the string pointer instead of its
contents. Use the new helper to check if the string is empty.
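A minimal sketch of the corrected check (assuming the new helper is
virStringIsEmpty(); variable names here are illustrative):

  /* old, wrong: only tests the pointer, so "" was not treated as "keep input" */
  if (xmlout)
      xml = xmlout;

  /* new: an empty string from the hook means "use the original XML" */
  if (!virStringIsEmpty(xmlout))
      xml = xmlout;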
virt-manager on Fedora sets up i686 hosts with the "/usr/bin/qemu-kvm" emulator,
which in turn unconditionally execs qemu-system-x86_64, so querying capabilities
then fails:
Error launching details: invalid argument: architecture from emulator 'x86_64' doesn't match given architecture 'i686'
Traceback (most recent call last):
File "/usr/share/virt-manager/virtManager/engine.py", line 748, in _show_vm_helper
details = self._get_details_dialog(uri, vm.get_connkey())
File "/usr/share/virt-manager/virtManager/engine.py", line 726, in _get_details_dialog
obj = vmmDetails(conn.get_vm(connkey))
File "/usr/share/virt-manager/virtManager/details.py", line 399, in __init__
self.init_details()
File "/usr/share/virt-manager/virtManager/details.py", line 784, in init_details
domcaps = self.vm.get_domain_capabilities()
File "/usr/share/virt-manager/virtManager/domain.py", line 518, in get_domain_capabilities
self.get_xmlobj().os.machine, self.get_xmlobj().type)
File "/usr/lib/python2.7/site-packages/libvirt.py", line 3492, in getDomainCapabilities
if ret is None: raise libvirtError ('virConnectGetDomainCapabilities() failed', conn=self)
libvirtError: invalid argument: architecture from emulator 'x86_64' doesn't match given architecture 'i686'
Journal:
Oct 16 21:08:26 goatlord.localdomain libvirtd[1530]: invalid argument: architecture from emulator 'x86_64' doesn't match given architecture 'i686'
If a VM is configured with many devices (including passthrough devices)
and large memory, libvirtd will take seconds (in the worst case) to
wait for the monitor. In this period the qemu process may run on any
pCPU even though I intend to pin the emulator to a specific pCPU in the
XML configuration.
The qemu process actually has high CPU usage during VM startup,
so there is no strict CPU isolation in this case.
Signed-off-by: Zhou yimin <zhouyimin@huawei.com>
To allow live modification of device backends in qemu, libvirt needs to
be able to hot-add/remove "objects". Add monitor backend functions to
allow this.
This functionality will be used for hot-add/remove of RNG backends,
IOThreads, memory backing objects, etc.
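For reference, the QMP exchanges these wrappers produce look roughly like
the following (illustrative backend type and id):

  {"execute": "object-add",
   "arguments": {"qom-type": "memory-backend-ram", "id": "mem1",
                 "props": {"size": 536870912}}}
  {"execute": "object-del", "arguments": {"id": "mem1"}}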
Our qemu monitor code has a converter from key-value pairs to a JSON
value object. I want to re-use the code later, and having it as part of the
monitor command generator is inflexible. Split it out into a separate
helper.
When the migration_address option is enabled, it defaults
to "127.0.0.1", but that is not a valid address for migration.
So we should add verification and set the default migration_address
to "0.0.0.0".
Signed-off-by: Chen Fan <chen.fan.fnst@cn.fujitsu.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
If migration_host is specified as an IPv6 address without brackets,
it was resolved to an incorrect address, such as:
tcp:2001:0DB8::1428:4444
but the correct address should be:
tcp:[2001:0DB8::1428]:4444
so we should add brackets when parsing it.
Signed-off-by: Chen Fan <chen.fan.fnst@cn.fujitsu.com>
After setting a running domain's numa parameters, saving the
change into the live XML is needed to survive a libvirtd restart
(same story as bug 1146511); meanwhile, add calls to
qemuDomainObjBeginJob/qemuDomainObjEndJob in qemuDomainSetNumaParameters.
Signed-off-by: Shanzhi Yu <shyu@redhat.com>
After setting the blkio parameters for a running domain, saving the change
into the live XML is needed to survive a libvirtd restart (same story as
bug 1146511); meanwhile, add calls to qemuDomainObjBeginJob/qemuDomainObjEndJob
in qemuDomainSetBlkioParameters.
Signed-off-by: Shanzhi Yu <shyu@redhat.com>
This patch fills in the functionality of
processNicRxFilterChangedEvent(). It now checks if it is appropriate
to respond to the NIC_RX_FILTER_CHANGED event (based on device type
and configuration) and takes appropriate action. Currently it checks
if the guest interface has been configured with
trustGuestRxFilters='yes', and if the host side device is macvtap. If
so, and the MAC address on the guest has changed, the MAC address of
the macvtap device is changed to match.
The result of this is that networking from the guest will continue to
work if the mac address of a macvtap-connected network device is
changed from within the guest, as long as trustGuestRxFilters='yes'
(previously changing the MAC address in the guest would break
networking).
NIC_RX_FILTER_CHANGED is sent by qemu any time a NIC driver in the
guest modifies the NIC's RX Filter (for example, if the MAC address of
the NIC is changed by the guest).
This patch doesn't do anything useful with that event; it just sets up
all the plumbing to get news of the event into a worker thread with
all proper locking/reference counting, and provide an easy place to
add in desired functionality.
See src/qemu/EVENTHANDLERS.txt for information/instructions on adding
a libvirt-internal handler for a qemu event (using
NIC_RX_FILTER_CHANGED as an example).
This text was in the commit log for the patch that added the event
handler for NIC_RX_FILTER_CHANGED, and John Ferlan expressed a desire
that the information not be "lost", so I've put it into a file in the
qemu directory, hoping that it might catch the attention of future
writers of handlers for qemu events.
This function can be called at any time to get the current status of a
guest's network device rx-filter. In particular it is useful to call
after libvirt receives a NIC_RX_FILTER_CHANGED event - this event only
tells you that something has changed in the rx-filter, the details are
retrieved with the query-rx-filter monitor command (only available in
the json monitor). The command sent to the qemu monitor looks like this:
{"execute":"query-rx-filter", "arguments": {"name":"net2"} }'
and the results will look something like this:
{
"return": [
{
"promiscuous": false,
"name": "net2",
"main-mac": "52:54:00:98:2d:e3",
"unicast": "normal",
"vlan": "normal",
"vlan-table": [
42,
0
],
"unicast-table": [
],
"multicast": "normal",
"multicast-overflow": false,
"unicast-overflow": false,
"multicast-table": [
"33:33:ff:98:2d:e3",
"01:80:c2:00:00:21",
"01:00:5e:00:00:fb",
"33:33:ff:98:2d:e2",
"01:00:5e:00:00:01",
"33:33:00:00:00:01"
],
"broadcast-allowed": false
}
],
"id": "libvirt-14"
}
This is all parsed from JSON into a virNetDevRxFilter object for
easier consumption. (unicast-table is usually empty, but is also an
array of mac addresses similar to multicast-table).
(NB: LIBNL_CFLAGS was added to tests/Makefile.am because virnetdev.h
now includes util/virnetlink.h, which includes netlink/msg.h when
appropriate. Without LIBNL_CFLAGS, gcc can't find that file (if
libnl/netlink isn't available, LIBNL_CFLAGS will be empty and
virnetlink.h won't try to include netlink/msg.h anyway).)
The prior patch removed the need for the virConnectPtr in the unplug
detach host path, which caused a ripple effect of removals in multiple
callers. The previous patch just left things as ATTRIBUTE_UNUSED -
this patch removes the variable.
https://bugzilla.redhat.com/show_bug.cgi?id=1141732
Introduced by commit id '8f76ad99', the logic to detach a scsi_host
device (SCSI or iSCSI) fails when attempting to remove the 'drive'
because, as I found in my investigation, the DelDevice takes care of
that for us.
The investigation turned up commits to adjust the logic for the
qemuMonitorDelDevice and qemuMonitorDriveDel processing for interfaces
(commit id '81f76598'), disk bus=VIRTIO,SCSI,USB (commit id '0635785b'),
and chr devices (commit id '55b21f9b'), but nothing with the host devices.
This commit uses the model for the previous set of changes and applies
it to the hostdev path. The call to qemuDomainDetachHostSCSIDevice will
return to qemuDomainDetachThisHostDevice handling either the audit of
the failure or the wait for the removal and then call into
qemuDomainRemoveHostDevice for the event, removal from the domain hostdev
list, and audit of the removal similar to other paths.
NOTE: For now the 'conn' param to qemuDomainDetachHostSCSIDevice is left
as ATTRIBUTE_UNUSED. Removing it requires a cascade of other changes to be
left for a future patch.
This patch implements support for the ivshmem device in QEMU.
Signed-off-by: Maxime Leroy <maxime.leroy@6wind.com>
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Ivshmem is supported by QEMU since 0.13 release.
Signed-off-by: Maxime Leroy <maxime.leroy@6wind.com>
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
This patch adds parsing/formatting code as well as documentation for
shared memory devices. This will currently only be accessible in QEMU
using its ivshmem device, but is designed to be as generic as possible to
allow future expansion for other hypervisors.
In the devices section in the domain XML users may specify:
- For shmem device using a server:
<shmem name='shmem0'>
<server path='/tmp/socket-ivshmem0'/>
<size unit='M'>32</size>
<msi vectors='32' ioeventfd='on'/>
</shmem>
- For ivshmem device not using an ivshmem server:
<shmem name='shmem1'>
<size unit='M'>32</size>
</shmem>
Most of the configuration is made optional so it also allows
specifications like:
<shmem name='shmem1'/>
<shmem name='shmem2'>
<server/>
</shmem>
Signed-off-by: Maxime Leroy <maxime.leroy@6wind.com>
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Aeons ago (commit 34dcbbb4, v0.8.2), we added a new libvirt event
(VIR_DOMAIN_EVENT_ID_IO_ERROR_REASON) in order to tell the user WHY
the guest halted. This is because at least VDSM wants to react
differently to ENOSPC events (resize the lvm partition to be larger,
and resume the guest as if nothing had happened) from all other events
(I/O is hosed, throw up our hands and flag things as broken). At the
time this was done, downstream RHEL qemu added a vendor extension
'__com.redhat_reason', which would be exactly one of these strings:
"enospc", "eperm", "eio", and "eother". In our stupidity, we exposed
those exact strings to clients, rather than an enum, and we also
return "" if we did not have access to a reason (which was the case
for upstream qemu).
Fast forward to now: upstream qemu commit c7c2ff0c (will be qemu 2.2)
FINALLY adds a 'nospace' boolean, after discussion with multiple
projects determined that VDSM really doesn't care about distinction
between any other error types. So this patch converts 'nospace' into
the string "enospc" for compatibility with RHEL clients that were
already used to the downstream extension, while leaving the reason
blank for all other cases (no change from the status quo).
See also https://bugzilla.redhat.com/show_bug.cgi?id=1119784
* src/qemu/qemu_monitor_json.c (qemuMonitorJSONHandleIOError):
Parse reason field from modern qemu.
* include/libvirt/libvirt.h.in
(virConnectDomainEventIOErrorReasonCallback): Document it.
Signed-off-by: Eric Blake <eblake@redhat.com>
Right now when building the qemu command line, we try to do various
unconditional validations of the guest CPU against the host CPU. However,
these checks are applied too broadly. The only times we should use the
checks are:
- The user requests host-model/host-passthrough, or
- When KVM is requested. CPU features requested in TCG mode are always
emulated by qemu and are independent of the host CPU, so no host CPU
checks should be performed.
Right now if trying to specify a CPU for arm on an x86 host, it attempts
to do nonsensical validation and falls over.
Switch all the test cases that were intending to test CPU validation to
use KVM, so they continue to test the intended code.
Amend some aarch64 XML tests with a CPU model, to ensure things work
correctly.
Check the domain's status before calling virQEMUCapsGet to report an
accurate error when the domain is shut off.
Resolve: https://bugzilla.redhat.com/show_bug.cgi?id=1147847
Signed-off-by: Shanzhi Yu <shyu@redhat.com>
Up until now, we set the memballoon period in the monitor successfully;
however, we did not update the domain definition structure, thus dumpxml
was omitting the period attribute in the memballoon element.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1140960
When trying to update bandwidth limits on a running domain, the limits get
updated in our internal structures, however the XML parser reads
bandwidth limits from the network 'actual' definition. With this patch
the bandwidth 'actual' definition is now updated as well,
thus updating the domain's runtime XML.
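With the fix, updating limits on a running domain is reflected in the live
XML, e.g. (illustrative domain, interface and values):

  virsh domiftune dom vnet0 --live --inbound 984,1024,64 --outbound 2000,2048,128
  virsh dumpxml dom | grep -A3 '<bandwidth>'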
If we don't properly clean up all processes in the
machine-<vmname>.scope, systemd won't remove the cgroup and subsequent VM
starts fail with
'CreateMachine: File exists'
Additional processes can e.g. be added via
echo $PID > /sys/fs/cgroup/systemd/machine.slice/machine-${VMNAME}.scope/tasks
but there are other cases like
http://bugs.debian.org/761521
Invoke TerminateMachine to be on the safe side since systemd tracks the
cgroup anyway. This is a noop if all processes have terminated already.
Management software wants to be able to allocate disk space on demand.
To support this it needs to keep track of the space occupied by the
block device. This information is reported by qemu as part of block
stats.
This patch extends the block information in the bulk stats with the
allocation information.
To keep the same behaviour a helper is extracted from
qemuMonitorJSONGetBlockExtent in order to get per-device allocation
information.
Signed-off-by: Francesco Romani <fromani@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
While our code gathers block stats via "query-blockstats", some
information needs to be gathered via "query-block". Add a helper function
that will update the blockstats structure if requested.
This removes the artificial and unnecessary restriction that
virDomainSetMaxDowntime() only be called while a migration is in
progress.
https://bugzilla.redhat.com/show_bug.cgi?id=1146618
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The current block stats code matched up the disk name with the actual
stats by the order in the data returned from qemu. This unfortunately
isn't right as qemu may return the disks in any order. Fix this by
returning a hash of stats indexed by the disk alias.
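A rough sketch of the lookup this enables (illustrative only; the helper
name and exact types are hypothetical):

  virHashTablePtr stats = qemuFetchAllBlockStats(mon);   /* hypothetical helper */
  qemuBlockStatsPtr s = virHashLookup(stats, disk->info.alias);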
Commit fba6bc4 introduced the non-migratable invtsc feature,
breaking save/migration with host-model and host-passthrough.
On hosts with this feature present it was automatically included
in the CPU definition, regardless of QEMU support.
Commit de0aeaf stopped including it by default for host-model,
but failed to fix host-passthrough.
This commit ignores checking of CPU features with host-passthrough,
since we don't pass them to QEMU (only -cpu host is passed),
allowing domains using host-passthrough that were saved with
the broken version of libvirtd to be restored.
https://bugzilla.redhat.com/show_bug.cgi?id=1147584
For the new VIR_DOMAIN_EVENT_ID_TUNABLE event we have a bunch of
constants added
VIR_DOMAIN_EVENT_CPUTUNE_<blah>
VIR_DOMAIN_EVENT_BLKDEVIOTUNE_<blah>
This naming convention is bad for two reasons
- There is no common prefix unique for the events to both
relate them, and distinguish them from other event
constants
- The values associated with the constants were chosen
to match the names used with virConnectGetAllDomainStats
so having EVENT in the constant name is not applicable in
that respect
This patch proposes renaming the constants to
VIR_DOMAIN_TUNABLE_CPU_<blah>
VIR_DOMAIN_TUNABLE_BLKDEV_<blah>
ie, given them a common VIR_DOMAIN_TUNABLE prefix.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
On domain startup, the variable store path is generated if needed.
The path is intended to be generated only once. However, the updated
domain definition is saved only into the state XML, not into the config
dir. So later, whenever the domain is destroyed and the daemon is
restarted, the generated path is forgotten and the file may be left
behind on a virDomainUndefine() call.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Since 363e9a68 we track backing chain metadata when creating snapshots
the right way even for the inactive configuration. As we did not yet
update other code paths that modify the backing chain (blockpull) the
newDef backing chain gets out of sync.
After a VM is stopped, the new definition gets copied to the next-start
one. The new VM then has incorrect backing chain info. This patch
switches the backing chain detector to always purge the existing backing
chain and forces re-detection to avoid this issue until we'll have full
backing chain tracking support.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1144922
Use the universal tunable event to report changes to the user. All
blkdeviotune values are prefixed with "blkdeviotune".
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
When you update some blkdeviotune values for a running domain, the values
were stored only internally but not saved into the live XML, so they
wouldn't survive a libvirtd restart.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Request erroring out from the backing chain traverser and drop qemu's
internal backing chain integrity tester.
The backing chain traverser reports errors by itself with possibly more
detail than qemuDiskChainCheckBroken ever could.
We also need to make sure that we reconnect to existing qemu instances
even at the cost of losing the backing chain info (this really should be
stored in the XML rather than reloaded from disk, but that needs some
work).
Add a new parameter to virStorageFileGetMetadata that will break the
backing chain detection process and report a useful error message rather
than having to use virStorageFileChainGetBroken.
This patch just introduces the option; usage will be provided
separately.
Now that we have the universal tunable event we can use it for reporting
changes to the user. The cputune values will be prefixed with "cputune" to
distinguish them from other tunable events.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
In one place in the function we check if def->cpu is NULL prior
to accessing def->cpu->ncells. Then, later in the code,
def->cpu->ncells is accessed directly, without the check. This
makes coverity unhappy, because the first check makes it think
def->cpu can be NULL. However, the function is not called if
def->cpu is NULL. Therefore, remove the first check and hopefully
make coverity cheer again.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
RDMA Live migration requires registering memory with the hardware, and
thus QEMU offers a new 'capability' to pre-register / mlock() the guest
memory in advance for higher RDMA performance before the migration
begins. This capability is disabled by default, which means QEMU will
register the memory with the hardware in an on-demand basis.
This patch exposes this capability with the following example usage:
virsh migrate --live --rdma-pin-all --migrateuri rdma://hostname domain qemu+ssh://hostname/system
Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This patch adds support for RDMA protocol in migration URIs.
USAGE: $ virsh migrate --live --migrateuri rdma://hostname domain qemu+ssh://hostname/system
Since libvirt runs QEMU in a pretty restricted environment, several
files need to be added to cgroup_device_acl (in qemu.conf) for QEMU to
be able to access the host's infiniband hardware. Full documentation of
the feature can be found on the QEMU wiki:
http://wiki.qemu.org/Features/RDMALiveMigration
Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Currently we only support TCP protocol for native QEMU migration but
this is going to be changed. Let's make the code more general and remove
hardcoded TCP protocol from several places.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
For compatibility with old libvirt we need to support both tcp:host and
tcp://host migration URIs. Let's make the code that parses them a bit
cleaner.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
RDMA migration uses the 'setup' state in QEMU to optionally lock
all memory before the migration starts. The total time spent in
this state is exposed as VIR_DOMAIN_JOB_SETUP_TIME.
Additionally, QEMU also exports migration throughput (mbps) for both
memory and disk, so let's add them too: VIR_DOMAIN_JOB_MEMORY_BPS,
VIR_DOMAIN_JOB_DISK_BPS.
Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
commit 72f919f558 introduced a user-friendly
error message when trying to use IDE disks as readonly.
Do the same thing for the SATA bus.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1112939
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Add a new parameter that allows returning the XML stored in the save
image for further manipulation, and adjust the callers. This option will
be used in later patches.
Commit id '9a2f36ec' added a build conditional of CAP_SYS_RAWIO
in order to determine whether or not a disk definition using rawio
should be allowed on platforms without CAP_SYS_RAWIO. If one was
found, virReportError was used but the code didn't goto cleanup.
This patch adds the goto.
We are not detecting the presence of FIPS from QEMU, but from procfs, and
that means it's not a QEMU capability. It was decided that we will pass
this flag to QEMU even if it's not supported by old QEMU binaries.
This patch also reverts the changes done by commit a21cfb0f to
qemucapabilitiestest and implements a new test case in qemuxml2argvtest.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1135431
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
If the qemu being used doesn't support JSON, then querying for IOThread
data would fail. In that case, ensure the *iothreads is NULL and return 0
as the count of iothreads available.
Currently, the build with clang fails with:
CC qemu/libvirt_driver_qemu_impl_la-qemu_command.lo
qemu/qemu_command.c:6580:58: error: implicit conversion from enumeration type
'virMemAccess' to different enumeration type 'virTristateSwitch'
[-Werror,-Wenum-conversion]
virTristateSwitch memAccess = def->cpu->cells[i].memAccess;
~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~^~~~~~~~~
1 error generated.
Fix that by using virMemAccess instead of virTristateSwitch.
Commit f05b6a91 added a virQEMUDriverConfigPtr argument to the
virQEMUCapsFillDomainCaps function and it uses a forward declaration
of virQEMUDriverConfig and virQEMUDriverConfigPtr that causes the clang
build to fail:
gmake[3]: Entering directory `/usr/home/novel/code/libvirt/src'
CC qemu/libvirt_driver_qemu_impl_la-qemu_capabilities.lo
In file included from qemu/qemu_capabilities.c:43:
In file included from qemu/qemu_hostdev.h:27:
qemu/qemu_conf.h:63:37: error: redefinition of typedef 'virQEMUDriverConfig'
is a C11 feature [-Werror,-Wtypedef-redefinition]
typedef struct _virQEMUDriverConfig virQEMUDriverConfig;
^
qemu/qemu_capabilities.h:328:37: note: previous definition is here
typedef struct _virQEMUDriverConfig virQEMUDriverConfig;
^
Fix that by passing loader and nloader config attributes directly
instead of passing complete config.
Commit f36a94f introduced a double free on all success paths
in qemuSharedDeviceEntryInsert.
Only call qemuSharedDeviceEntryFree on the error path and
set entry to NULL before jumping there if the entry already
is in the hash table.
https://bugzilla.redhat.com/show_bug.cgi?id=1142722
Clean up all _virDomainMemoryStat.
Signed-off-by: James <james.wangyufei@huawei.com>
Signed-off-by: Wang Rui <moon.wangrui@huawei.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Clean up all _virDomainBlockStats.
Signed-off-by: James <james.wangyufei@huawei.com>
Signed-off-by: Wang Rui <moon.wangrui@huawei.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Clean up all _virDomainInterfaceStats.
Signed-off-by: Wang Yufei <james.wangyufei@huawei.com>
Signed-off-by: Wang Rui <moon.wangrui@huawei.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
The live definition was used to look up the disk index while the persistent
one was indexed, leading to a crash in qemuDomainGetBlockIoTune. Use the
correct def and report a nice error.
Unfortunately it's accessible via a read-only connection, though it can
only crash libvirtd in the cases where the guest is hot-plugging disks
without reflecting those changes to the persistent definition. So
avoiding hotplug, or doing hotplug where the persistent definition is
always modified alongside the live one, will avoid the out-of-bounds access.
Introduced in: eca96694a7f992be633d48d5ca03cedc9bbc3c9aa (v0.9.8)
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1140724
Reported-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1135396
There are two ways to tell qemu to use huge pages. The first one
is suitable for domains with NUMA nodes: the path to the hugetlbfs mount
is appended to the NUMA node definition on the command line. The second
one is suitable for UMA domains: here there's the global '-mem-path'
argument that accepts a path to the hugetlbfs mount point. However, the
latter was not used in all the cases that it should be. For
instance:
<memoryBacking>
<hugepages>
<page size='2048' unit='KiB' nodeset='0'/>
</hugepages>
</memoryBacking>
didn't trigger the '-mem-path' so the huge pages - despite being
configured - were not used at all.
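For reference, the two qemu command line forms are roughly (illustrative
paths and sizes):

  # UMA guest, global hugepage backing
  -mem-path /dev/hugepages

  # NUMA guest, per-node backing
  -object memory-backend-file,id=ram-node0,mem-path=/dev/hugepages,size=1024M \
  -numa node,nodeid=0,cpus=0,memdev=ram-node0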
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
As of 136ad4974 it is possible to specify different huge pages per
guest NUMA node. However, there's no check whether the nodeset specified
in ./hugepages/page contains only those guest NUMA nodes that exist.
In other words, with the current code it is possible to define a
meaningless combination:
<memoryBacking>
<hugepages>
<page size='1048576' unit='KiB' nodeset='0,2-3'/>
<page size='2048' unit='KiB' nodeset='1,4'/>
</hugepages>
</memoryBacking>
<vcpu placement='static'>4</vcpu>
<cpu>
<numa>
<cell id='0' cpus='0' memory='1048576'/>
<cell id='1' cpus='1' memory='1048576'/>
<cell id='2' cpus='2' memory='1048576'/>
<cell id='3' cpus='3' memory='1048576'/>
</numa>
</cpu>
Notice the node 4 in <hugepages/>?
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This patch implements the VIR_DOMAIN_STATS_BLOCK group of statistics.
To do so, a helper function to get the block stats of all the disks of
a domain is added.
Signed-off-by: Francesco Romani <fromani@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
This patch implements the VIR_DOMAIN_STATS_INTERFACE group of
statistics.
Signed-off-by: Francesco Romani <fromani@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
This patch implements the VIR_DOMAIN_STATS_VCPU group of statistics. To
do so, this patch also extracts a helper to gather the vCPU information.
Signed-off-by: Francesco Romani <fromani@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
This patch implements the VIR_DOMAIN_STATS_CPU_TOTAL group of
statistics.
Signed-off-by: Francesco Romani <fromani@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Future patches which will implement more bulk stats groups for QEMU will
need to access the connection object.
To accommodate that, a few changes are needed:
* enrich internal prototype to pass qemu driver object
* add per-group flag to mark if one collector needs monitor access or not
* If at least one collector of the requested stats needs monitor access
we must start a query job for each domain. The specific collectors
will run nested monitor jobs inside that.
* If the job can't be acquired we pass flags to the collector so
specific collectors that need monitor access can be skipped in order
to gather as much data as is possible.
Signed-off-by: Francesco Romani <fromani@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Check to see if the UEFI binary mentioned in qemu.conf actually
exists, and if so expose it in domcapabilities like
<loader ...>
<value>/path/to/ovmf</value>
</loader>
We introduce some generic domcaps infrastructure for handling
a dynamic list of string values, it may be of use for future bits.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Up till now virQEMUCapsFillDomainCaps() returned void as
there was no way for it to fail. This is, however, going to
change in the next commit.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
qemu for the IBM Power processor architecture is adding functionality for
supporting multiple 'pseries' machine type versions, each with different
capabilities. This patch adds support for the same.
Signed-off-by: Pradipta Kr. Banerjee <bpradip@in.ibm.com>
As of 542899168c we taught libvirt to use UEFI for domains.
However, management applications may first query whether libvirt
supports it. And this is where the virConnectGetDomainCapabilities()
API comes in handy.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Commit b606bbb4 broke reporting of errors when setting of the guest time
via the guest agent fails, as the return value is not checked and is later
overwritten by the return value of qemuMonitorRTCResetReinjection().
Fix this by checking the return value before resetting the RTC
reinjection.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1142294
If there are no iothreads, then return from qemuProcessDetectIOThreadPIDs
without error; otherwise, the following occurs:
error: Failed to start domain $dom
error: An error occurred, but the cause is unknown
Modify qemuProcessStart() in order to allow setting affinity to
specific CPUs for IOThreads. The process followed is similar to
that for the vCPUs.
This involves adding a function to fetch the IOThread IDs via
qemuMonitorGetIOThreads() and adding them to the iothreadpids[] list.
Then making sure all the cgroup data has been properly set up and
finally assigning affinity.
In order to support cpuset setting, introduce qemuSetupCgroupIOThreadsPin
and qemuSetupCgroupForIOThreads to mimic the existing vCPU APIs.
These will support having an 'iothreadpin' element in the 'cputune' in
order to pin named IOThreads to specific CPUs. The IOThread pin names
will follow the IOThread naming scheme starting at 1 (eg "iothread1")
up through and including the def->iothreads value.
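The XML then looks along these lines (illustrative cpuset values):

<cputune>
  <iothreadpin iothread='1' cpuset='5-6'/>
  <iothreadpin iothread='2' cpuset='7'/>
</cputune>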
Coverity complains about the calculation of the buf & len within
the PROBE macro. So to quiet things down, do the calculation prior
to usage in either write() or qemuMonitorIOWriteWithFD() calls and
then have the PROBE use the calculated values - which works.
We stupidly modeled block job bandwidth after migration
bandwidth, which in turn was an 'unsigned long' and therefore
subject to 32-bit vs. 64-bit interpretations. To work around
the fact that 10-gigabit interfaces are possible but don't fit
within 32 bits, the original interface took the number scaled
as MiB/sec. But this scaling is rather coarse, and it might
be nice to tune bandwidth finer than in megabyte chunks.
Several of the block job calls that can set speed are fed
through a common interface, so it was easier to adjust them all
at once. Note that there is intentionally no flag for the new
virDomainBlockCopy; there, since the API already uses a 64-bit
type always, instead of a possible 32-bit type, and is brand
new, it was easier to just avoid scaling issues. As with the
previous patch that adjusted the query side (commit db33cc24),
omitting the new flag preserves old behavior, and the
documentation now mentions limits of what happens when a 32-bit
machine is on either client or server side.
* include/libvirt/libvirt.h.in (virDomainBlockJobSetSpeedFlags)
(virDomainBlockPullFlags)
(VIR_DOMAIN_BLOCK_REBASE_BANDWIDTH_BYTES)
(VIR_DOMAIN_BLOCK_COMMIT_BANDWIDTH_BYTES): New enums.
* src/libvirt.c (virDomainBlockJobSetSpeed, virDomainBlockPull)
(virDomainBlockRebase, virDomainBlockCommit): Document them.
* src/qemu/qemu_driver.c (qemuDomainBlockJobSetSpeed)
(qemuDomainBlockPull, qemuDomainBlockRebase)
(qemuDomainBlockCommit, qemuDomainBlockJobImpl): Support new flag.
Signed-off-by: Eric Blake <eblake@redhat.com>
Upstream qemu 1.4 added some drive-mirror tunables not present
when it was first introduced in 1.3. Management apps may want
to set these in some cases (for example, without tuning
granularity down to sector size, a copy may end up occupying
more bytes than the original because an entire cluster is
copied even when only a sector within the cluster is dirty,
although tuning it down results in more CPU time to do the
copy). I haven't personally needed to use the parameters, but
since they exist, and since the new API supports virTypedParams,
we might as well expose them.
Since the tuning parameters aren't often used, and omitted from
the QMP command when unspecified, I think it is safe to rely on
qemu 1.3 to issue an error about them being unsupported, rather
than trying to create a new capability bit in libvirt.
Meanwhile, all versions of qemu from 1.4 to 2.1 have a bug where
a bad granularity (such as non-power-of-2) gives a poor message:
error: internal error: unable to execute QEMU command 'drive-mirror': Invalid parameter 'drive-virtio-disk0'
because of abuse of QERR_INVALID_PARAMETER (which is supposed to
name the parameter that was given a bad value, rather than the
value passed to some other parameter). I don't see that a
capability check will help, so we'll just live with it (and it
has since been improved in upstream qemu).
* src/qemu/qemu_monitor.h (qemuMonitorDriveMirror): Add
parameters.
* src/qemu/qemu_monitor.c (qemuMonitorDriveMirror): Likewise.
* src/qemu/qemu_monitor_json.h (qemuMonitorJSONDriveMirror):
Likewise.
* src/qemu/qemu_monitor_json.c (qemuMonitorJSONDriveMirror):
Likewise.
* src/qemu/qemu_driver.c (qemuDomainBlockCopyCommon): Likewise.
(qemuDomainBlockRebase, qemuDomainBlockCopy): Adjust callers.
* src/qemu/qemu_migration.c (qemuMigrationDriveMirror): Likewise.
* tests/qemumonitorjsontest.c (qemuMonitorJSONDriveMirror): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
The hard part of managing the disk copy is already coded; all
this had to do was convert the XML and virTypedParameters into
the internal representation.
With this patch, all blockcopy operations that used the old
API should also work via the new API. Additional extensions,
such as supporting the granularity tunable or a network rather
than file destination, will be added as later patches.
* src/qemu/qemu_driver.c (qemuDomainBlockCopy): New function.
Signed-off-by: Eric Blake <eblake@redhat.com>
In order to implement the new virDomainBlockCopy, the existing
block copy internal implementation needs to be adjusted. The
new function will parse XML into a storage source, and parse
typed parameters into integers, then call into the same common
backend. For now, it's easier to keep the same implementation
limits that only local file destinations are supported, but now
the check needs to be explicit. Similar to qemuDomainBlockJobImpl
consuming 'vm', this code also consumes the caller's 'mirror'
description of the destination.
* src/qemu/qemu_driver.c (qemuDomainBlockCopy): Rename...
(qemuDomainBlockCopyCommon): ...and adjust parameters.
(qemuDomainBlockRebase): Adjust caller.
Signed-off-by: Eric Blake <eblake@redhat.com>
When a domain is undefined, there are options to remove its
managed save state or snapshots. However, there's another file
that libvirt creates per domain: the NVRAM variable store file.
Make sure that the file is not left behind if the domain is
undefined.
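As a sketch of the intended use (assuming the flag introduced here follows
the pattern of the existing managed-save handling):
#include <libvirt/libvirt.h>
/* Undefine a domain and remove its private _VARS file along with any
 * managed save image.  Without VIR_DOMAIN_UNDEFINE_NVRAM, undefining a
 * domain that owns an NVRAM file is expected to be refused. */
static int
undefine_with_nvram(virDomainPtr dom)
{
    return virDomainUndefineFlags(dom,
                                  VIR_DOMAIN_UNDEFINE_MANAGED_SAVE |
                                  VIR_DOMAIN_UNDEFINE_NVRAM);
}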
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If we end up at the cleanup label before we've VIR_EXPAND_N'd the list,
then calling virQEMUCapsFreeStringList() with a NULL proplist could
theoretically deref proplist if nproplist was set. Coverity doesn't
seem to acknowledge the relationship between proplist and nproplist:
it assumes in virQEMUCapsFreeStringList() that nproplist could be at
least 1 while proplist is NULL, and thus reports a NULL deref. It only
seems to follow the NULL proplist.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Test suites using the port allocator don't want to have different
behaviour depending on whether a port is in use on the host. Add
a VIR_PORT_ALLOCATOR_SKIP_BIND_CHECK flag which test suites can use
to skip the bind() test. The port allocator will thus only track
ports in use by the test suite process itself. This is fine when
using the port allocator to generate guest configs which won't
actually be launched.
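A rough sketch of how a test might use it (signatures quoted from memory,
so treat them as an approximation rather than the exact internal API):
#include "virportallocator.h"
/* Allocate a guest port for a generated config without probing the host
 * with bind(); results then depend only on what the test itself has
 * already acquired. */
static int
test_alloc_port(unsigned short *port)
{
    virPortAllocatorPtr ports;

    if (!(ports = virPortAllocatorNew("test-vnc", 5900, 5999,
                                      VIR_PORT_ALLOCATOR_SKIP_BIND_CHECK)))
        return -1;
    return virPortAllocatorAcquire(ports, port);
}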
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Coverity notes that if the virConnectListAllDomains returns a negative
value then the loop at the cleanup label that ends on numDomains will
have issues.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Coverity notes that if qemuMonitorGetMachines() returns a negative
nmachines value, then the code at the cleanup label will have issues.
Signed-off-by: John Ferlan <jferlan@redhat.com>
In qemuProcessInitPCIAddresses() if qemuMonitorGetAllPCIAddresses()
returns a negative (or zero) value, then there is no need to call
qemuProcessDetectPCIAddresses().
Signed-off-by: John Ferlan <jferlan@redhat.com>
If the qemuMigrationEatCookie() fails to set mig, we jump to cleanup:
which will call qemuMigrationCancelDriveMirror() without first checking
if mig == NULL.
Signed-off-by: John Ferlan <jferlan@redhat.com>
If we jump to cleanup before allocating 'result', then the call
to virBlkioDeviceArrayClear will deref result, causing a problem.
Signed-off-by: John Ferlan <jferlan@redhat.com>
If virJSONValueNewObject() fails, then just return NULL rather than
going to error and getting a Coverity false positive: Coverity doesn't
seem to understand the relationship between nkeywords, keywords, and
values, and believes calling qemuFreeKeywords will cause a NULL deref.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Coverity complains about checking for !domlist after setting doms = domlist
and dereferencing doms just above.
It seems the call in question was intended to be made in the case that
'doms' was passed in, and not when the virDomainObjListExport() call
allocated domlist and already called virConnectGetAllDomainStatsCheckACL().
Thus, rather than checking for !domlist, check that "doms != domlist" in
order to avoid the Coverity message.
Signed-off-by: John Ferlan <jferlan@redhat.com>
In qemuDomainSetBlkioParameters(), Coverity points out that the calls
to qemuDomainParseBlkioDeviceStr() are slightly different and that
there may be a cut-n-paste error.
In the first call (AFFECT_LIVE), the second parameter is "param->field";
however, for the second call (AFFECT_CONFIG), the second parameter is
"params->field". It seems "param->field" is correct, especially since
each path has a setting of "param" to "&params[i]". Furthermore, there
were a few more instances of using "params[i]" instead of "param->",
which I cleaned up.
Signed-off-by: John Ferlan <jferlan@redhat.com>
When using a split UEFI image, it comes in handy if libvirt manages the
per-domain _VARS file automatically. While the _CODE file is RO and can
be shared among multiple domains, you certainly don't want to do that
with the _VARS file; that one needs to be per domain. So during the
domain startup process, if it's determined that the domain needs a
_VARS file, it's copied from a master _VARS file. The location of the
master file is configurable in qemu.conf.
As a temporary measure, the location of the master NVRAM file can also
be overridden on a per-domain basis by the @template attribute I'm
introducing on the <nvram/> element. All it does is hold the path to
the master NVRAM file from which the local copy is created. If the
attribute is present, the map in qemu.conf is not consulted.
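For example (paths invented for illustration), the override could look
like:
<nvram template="/usr/share/OVMF/OVMF_VARS.fd">/var/lib/libvirt/qemu/nvram/guest_VARS.fd</nvram>
libvirt then creates the per-domain copy from the template at startup if
it does not exist yet, and the master file is never written to.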
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
QEMU now supports UEFI with the following command line:
-drive file=/usr/share/OVMF/OVMF_CODE.fd,if=pflash,format=raw,unit=0,readonly=on \
-drive file=/usr/share/OVMF/OVMF_VARS.fd,if=pflash,format=raw,unit=1 \
where the first line reflects <loader> and the second one <nvram>.
Moreover, these two lines obsolete the -bios argument.
Note that UEFI is unusable without ACPI. This is handled properly now.
Along with this extension, the variable file is expected to be
writable and hence we need security drivers to label it.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Up to now, users can configure BIOS via the <loader/> element. With
the upcoming implementation of UEFI this is not enough as BIOS and
UEFI are conceptually different. For instance, while BIOS is ROM, UEFI
is programmable flash (although all writes to the code section are
denied). Therefore we need a new attribute @type which will
differentiate the two. Then, a new attribute @readonly is introduced to
reflect the fact that some images are RO.
Moreover, OVMF (which is going to be used mostly) works in two
modes:
1) Code and UEFI variable store are mixed in one file.
2) Code and UEFI variable store are separated into two files.
The latter has the advantage of updating the UEFI code without losing the
configuration. However, in order to represent the latter case we need
yet another XML element: <nvram/>. Currently, it has no additional
attributes; it's just a bare element containing the path to the
variable store file.
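For example, the split-file case could then be described as (paths are
only illustrative):
<os>
<loader readonly="yes" type="pflash">/usr/share/OVMF/OVMF_CODE.fd</loader>
<nvram>/var/lib/libvirt/qemu/nvram/guest_VARS.fd</nvram>
</os>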
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
After the previous commit, migration statistics on the source and
destination hosts are not equal because the destination updated the time
statistics. Let's send the result back so that the same data can be
queried on both sides of the migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The total time of a migration and the total downtime transferred from a
source to a destination host do not account for the transfer time to the
destination host or for the time elapsed before guest CPUs are resumed.
Thus, the source libvirtd remembers when the migration started and when
guest CPUs were paused. Both timestamps are transferred to the destination
libvirtd, which uses them to compute the total migration time and total
downtime. Obviously, this requires the time to be synchronized between
the two hosts. The reported times are useless otherwise, but they would
be equally useless if we didn't do this recomputation, so we don't lose
anything by doing it.
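Illustrative arithmetic (all timestamps invented): if the source started
the migration at 10:00:00.000 and paused the guest CPUs at 10:00:42.300,
while the destination resumed the CPUs at 10:00:42.800 and entered Finish
at 10:00:43.000, the recomputed values are a total time of 43.0 s and a
total downtime of 500 ms, rather than the smaller numbers the source
alone could compute.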
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When migrating a transient domain or with VIR_MIGRATE_UNDEFINE_SOURCE
flag, the domain may disappear from the source host, and so will the
migration statistics associated with the domain. We need to transfer the
statistics at the end of a migration so that they can be queried at the
destination host.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
virDomainGetJobStats gains new VIR_DOMAIN_JOB_STATS_COMPLETED flag that
can be used to fetch statistics of a completed job rather than a
currently running job.
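A sketch of a client using the new flag (the downtime parameter is just
one example of what can be read back):
#include <stdio.h>
#include <libvirt/libvirt.h>
/* Fetch statistics of the most recently completed job (e.g. a finished
 * migration) rather than the currently running one. */
static int
print_completed_downtime(virDomainPtr dom)
{
    virTypedParameterPtr params = NULL;
    int nparams = 0;
    int type;
    unsigned long long downtime = 0;

    if (virDomainGetJobStats(dom, &type, &params, &nparams,
                             VIR_DOMAIN_JOB_STATS_COMPLETED) < 0)
        return -1;
    virTypedParamsGetULLong(params, nparams, VIR_DOMAIN_JOB_DOWNTIME, &downtime);
    printf("total downtime: %llu ms\n", downtime);
    virTypedParamsFree(params, nparams);
    return 0;
}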
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Job statistics data were tracked in several structures and variables.
Let's make a new qemuDomainJobInfo structure which can be used as a
single source of statistics data, as a preparation for storing data
about a completed job.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemu now checks for an invalid address type for a panic device, which is
currently implemented to use only the ISA address type, thus rejecting
any other options. The exception is leaving the XML attributes blank; in
that case, defaults are used (this behaviour remains the same as in
earlier versions).
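To illustrate the only layout qemu accepts (the iobase shown is also the
default used when the attributes are left blank):
<panic>
<address type="isa" iobase="0x505"/>
</panic>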
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1138125
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
When QEMU fails during incoming migration after we successfully started
it (i.e., during Perform or Finish phase), we report a rather unhelpful
message
Unable to read from monitor: Connection reset by peer
We already have code that takes error messages from QEMU's error
output, but we disable it once QEMU successfully starts. This patch
postpones this until the end of Finish phase during incoming migration
so that we can report a much better error message:
internal error: early end of file from monitor: possible problem:
Unknown savevm section or instance '0000:00:05.0/virtio-balloon' 0
load of migration failed
https://bugzilla.redhat.com/show_bug.cgi?id=1090093
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Return failure right away when the domain object can't be looked up
instead of jumping to cleanup. This allows removing the condition
before unlocking the domain object.