The change done by commit f309db1f4d wrongly assumes that qemu can
start with a combination of NUMA nodes specified with the "memdev"
option and the appropriate backends, and the legacy way of specifying
only "mem" as a size argument. QEMU rejects such a command line though:
$ /usr/bin/qemu-system-x86_64 -S -M pc -m 1024 -smp 2 \
-numa node,nodeid=0,cpus=0,mem=256 \
-object memory-backend-ram,id=ram-node1,size=12345 \
-numa node,nodeid=1,cpus=1,memdev=ram-node1
qemu-system-x86_64: -numa node,nodeid=1,cpus=1,memdev=ram-node1: qemu: memdev option must be specified for either all or no nodes
To fix this issue we need to check whether any of the nodes requires
the new definition with the backend, and if so, all other nodes have to
use it too.
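For illustration, with the fix the same guest would instead be started
with a backend for every node, roughly along these lines (ids and sizes
are only illustrative):
-object memory-backend-ram,id=ram-node0,size=268435456 \
-numa node,nodeid=0,cpus=0,memdev=ram-node0 \
-object memory-backend-ram,id=ram-node1,size=12345 \
-numa node,nodeid=1,cpus=1,memdev=ram-node1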
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1182467
With the new JSON-to-argv formatter we are now able to represent the
memory backend definitions in a JSON object format that is reusable
for the monitor (hotplug) and then convert it into the command line
string. This avoids having two separate instances of the same code
that would create the different formats.
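As a sketch of the idea (property names and values are only
illustrative), the same backend can be kept as a JSON property object
for the monitor and formatted into the corresponding -object argument
for the command line:
{"qom-type": "memory-backend-ram", "id": "ram-node1", "size": 12345}
-object memory-backend-ram,id=ram-node1,size=12345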
Previous refactors now allow making this step without changes to the
test suite.
QEMU's command line visitor as well as the JSON interface take bytes by
default for memory object sizes. Convert mebibytes to bytes so that we
can later refactor the existing code for hotplug purposes.
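For example, a 256 MiB backend that would previously be formatted as
size=256 is now formatted as size=268435456 (256 * 1024 * 1024 bytes).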
QEMU's qapi visitor code allows yes/on/y for true and no/off/n for the
false value of boolean properties. Unify the used style so that we can
generate it later and fix the test cases.
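For example, qemu treats both of these spellings of the same boolean
property identically (the property shown is only an illustration):
-object memory-backend-file,id=mem0,size=1073741824,mem-path=/dev/hugepages,share=yes
-object memory-backend-file,id=mem0,size=1073741824,mem-path=/dev/hugepages,share=on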
Extract the memory backend device code into a separate function so that
it can be later easily refactored and reused.
A few small changes for future reusability, namely:
- new (currently unused) parameter for user specified page size
- size of the memory is specified in kibibytes, divided up in the
function
- new (currently unused) parameter for user specified source nodeset
- option to enforce capability check
Unlike -device, qemu uses a JSON object to add backend "objects" via the
monitor rather than the string that would be passed on the commandline.
To be able to reuse code parts that configure backends for various
devices, this patch adds a helper that will allow generating the command
line representations from the JSON property object.
Distros that want to add versioned machine types will add (downstream)
machine types like "virt-foo-1.2.3". Detect these as MMIO too.
Signed-off-by: Richard W.M. Jones <rjones@redhat.com>
The previous patch of this series fixed the issue with adding a new PCI
bridge when all the slots were reserved by devices with user-specified
addresses.
In case there are still some PCI devices waiting to get a slot reserved
by qemuAssignDevicePCISlots, a new bus needs to be created along with a
corresponding bridge controller. By adding an additional check, this
scenario now results in a reasonable error instead of generating a
wrong qemu command line.
Commit 93c8ca tried to fix the issue with auto-adding of a PCI bridge
controller, but it didn't work properly in all scenarios.
This patch provides a better fix for the case when all slots on a PCI
bus are reserved by devices with user-specified addresses and no
additional bridges need to be created.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1132900
In order to be able to test for fully reserved PCI buses, assignment of
PCI slots for integrated devices needs to be moved to a separate function.
This also might be a good preparation if we decide to add support for
other chipsets as well.
Move qemuDomainAssignPCIAddresses after the definition
of the static function qemuDomainValidateDevicePCISlotsQ35.
This lets us define a new static function using
qemuDomainValidateDevicePCISlots* and use it in
qemuDomainAssignPCIAddresses without a forward declaration.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
As it turned out, the dead code fix 419a22 changed the affected
condition from "never true" to "always true", so a better fix is to
change the return code of virDomainMaybeAddController from 0 to 1 if
a new bridge has been added, thus distinguishing the case when we
didn't need to add any controller from the case when we successfully
added one.
The return code is changed in the next commit
https://bugzilla.redhat.com/show_bug.cgi?id=1164627
When using 'virsh attach-device' to hotplug an unsupported console type
into a qemu guest, the attachment would succeed because the command
line formatter didn't report an error in that case.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1130390
The listen address is not mandatory for <interface type='server'>
but when it's not specified, we've been formatting it as:
-netdev socket,listen=(null):5558,id=hostnet0
which failed with:
Device 'socket' could not be initialized
Omit the address completely and only format the port in the listen
attribute.
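With the fix the generated option should look roughly like:
-netdev socket,listen=:5558,id=hostnet0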
Also fix the schema to allow specifying a model.
Ploop is a pseudo device which makes it possible to access an image in
a file as a block device. It is like loop devices, but with additional
features, such as snapshots and a write tracker, and without double
caching.
It is used in PCS for containers and in OpenVZ. You can manage ploop
devices and images with the ploop utility
(http://git.openvz.org/?p=ploop).
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
We tested for a positive return value from virDomainMaybeAddController,
but it returns only 0 or -1, resulting in dead code.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
In case we find out there are more PCI devices to be connected than
there are available slots on the default PCI bus, we automatically add
a new bus and a related PCI bridge controller as well. As there are no
free slots left on the default PCI bus, the PCI bridge controller gets
a free slot on the newly created PCI bus, which causes qemu to refuse
to start the guest.
This fix introduces a new function, qemuDomainPCIBusFullyReserved,
which is checked right before we possibly try to reserve a slot for the
PCI bridge controller.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1132900
https://bugzilla.redhat.com/show_bug.cgi?id=1165993
So, there are still plenty of vNIC types that we don't know how to set
bandwidth on. Let's warn explicitly in case a user has requested it
instead of pretending everything was set.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
QEMU supports feature specification with -cpu host and we just skip
using that. Since QEMU developers themselves would like to use this
feature, this patch modifies the code to support it.
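For example (the feature names are purely illustrative), a host CPU
with tweaked features could then be expressed as something along the
lines of:
-cpu host,+vmx,-svm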
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1178850
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
The default value should be 16 MiB instead of 8 MiB. Only really old
versions of upstream QEMU used 8 MiB as the default for the vga
framebuffer.
Without this change, if you update to a libvirt where we introduced the
"vgamem" attribute for the QXL video device, the value will be set to
8 MiB, but previously your guest had 16 MiB because we didn't pass any
value on the QEMU command line, which means QEMU used its own 16 MiB
default.
This will affect all users with a guest display resolution higher than
1920x1080.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
In one of my previous commits (311b4a67) I tried to allow passing
regular system pages to <hugepages>. However, there was a little bug
that wasn't caught. If a domain has guest NUMA topology defined, the
qemuBuildNumaArgStr() function takes care of generating the
corresponding command line. The hugepages backing for guest NUMA nodes
is handled there too. And here comes the bug: the hugepages setting
from the XML is stored in KiB internally; however, the system page size
was queried and stored in bytes. So the check whether these two are
equal was failing even when it shouldn't.
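For example, with plain 4 KiB pages the value coming from the XML is
stored as 4 (KiB) while the queried system page size is 4096 (bytes),
so the equality check compares 4 with 4096 and fails even though both
describe the same page size.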
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Libvirt BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1175397
QEMU BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1170093
In qemu there are two interesting arguments:
1) -numa to create a guest NUMA node
2) -object memory-backend-{ram,file} to tell qemu which memory
region on which host's NUMA node it should allocate the guest
memory from.
Combining these two together we can instruct qemu to create a
guest NUMA node that is tied to a host NUMA node. And it works
just fine. However, depending on the machine type used, there might
be some issues during migration when OVMF is enabled (see the QEMU
BZ). While this truly is a QEMU bug, we can help avoid it. The
problem lies somewhere within the memory backend objects. Having
said that, the fix on our side consists of putting those objects on
the command line if and only if they are needed. For instance, while
previously we would construct this (in all ways correct) command
line:
-object memory-backend-ram,size=256M,id=ram-node0 \
-numa node,nodeid=0,cpus=0,memdev=ram-node0
now we create just:
-numa node,nodeid=0,cpus=0,mem=256
because the backend object is obviously not tied to any specific
host NUMA node.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When libvirt is managing a bridge's forwarding database (FDB)
(macTableManager='libvirt'), if we add FDB entries for a new guest
interface even before the qemu process is created, then in the case of
a migration any other guest attached to the "destination" bridge will
have its traffic immediately sent to the destination of the migration
even while the source domain is still running (and the destination, of
course, isn't). To make sure that traffic from other guests on the new
host continues flowing to the old guest until the new one is ready, we
have to wait until the new guest CPUs are started to add the FDB
entries.
Conversely, we need to remove the FDB entries from the bridge any time
the guest CPUs are stopped; among other things, this will assure
proper operation during a post-copy migration (which is just the
opposite of the problem described in the previous paragraph).
https://bugzilla.redhat.com/show_bug.cgi?id=1173507
It occurred to me that OpenStack uses the following XML when not using
huge pages at all:
<memoryBacking>
<hugepages>
<page size='4' unit='KiB'/>
</hugepages>
</memoryBacking>
However, since we are expecting to see huge pages only, we fail to
start up the domain with the following error:
libvirtError: internal error: Unable to find any usable hugetlbfs
mount for 4 KiB
While regular system pages are technically not huge pages, our code is
prepared for them, and if it helps OpenStack (or other management
applications) we should cope with that.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
qemuNetworkIfaceConnect() used to have a special case for
actualType='network' (a network with forward mode of route, nat, or
isolated) to call the libvirt public API to retrieve the bridge being
used by a network. That is no longer necessary - since all network
types that use a bridge and tap device now get the bridge name stored
in the ActualNetDef, we can just always use
virDomainNetGetActualBridgeName() instead.
(An audit of the two callers of qemuNetworkIfaceConnect() confirms
that it is never called for any other type of network, so the dead
code in the else clause (logging an internal error if it is called
for any other type of network) is eliminated in the process.)
When libvirt is managing the MAC table of a Linux host bridge, it must
turn off learning and unicast_flood for each tap device attached to
that bridge, then add a Forwarding Database (fdb) entry for the tap
device using the MAC address from the domain interface config.
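Roughly speaking (the device name and MAC address are only examples),
this corresponds to what one would do by hand with iproute2:
bridge link set dev vnet0 learning off flood off
bridge fdb add 52:54:00:de:ad:01 dev vnet0 master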
Once we have disabled learning and flooding, any packet that has a
destination MAC address not present in the fdb will be dropped by the
bridge. This, along with the opportunistic disabling of promiscuous
mode[*], can result in enhanced network performance and a potential
slight security improvement.
[*] If there is only one device on the bridge with learning/unicast_flood
enabled, then that device will automatically have promiscuous mode
disabled. If there are *no* devices with learning/unicast_flood
enabled (e.g. for a libvirt "route", "nat", or isolated network that
has no physical device attached), then all non-tap devices will have
promiscuous mode disabled (tap devices always have promiscuous mode
enabled, which may be a bug in the kernel, but in practice has 0
effect).
None of this has any effect for kernels prior to 3.15 (upstream kernel
commit 2796d0c648c940b4796f84384fbcfb0a2399db84 "bridge: Automatically
manage port promiscuous mode"). Even after that, until kernel 3.17
(upstream commit 5be5a2df40f005ea7fb7e280e87bbbcfcf1c2fc0 "bridge: Add
filtering support for default_pvid") traffic will not be properly
forwarded without manually adding vlan table entries. Unfortunately,
although the presence of the first patch is signalled by the existence
of the "learning" and "unicast_flood" options in sysfs, there is no
reliable way to query whether or not the system's kernel has the
second of those patches installed; the only thing that can be done is
to try the setting and see if traffic continues to pass.
Since virNetworkFree will call virObjectUnref anyway, let's just use that
directly so as to avoid the possibility that we inadvertently clear out
a pending error message when using the public API.
Add an attribute to set the vgamem_mb parameter of the QXL device for
QEMU. This value sets the size of the VGA framebuffer for the QXL
device. The default value in QEMU is 8 MB, so reuse it in libvirt as
well to not break things.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1076098
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
So far we didn't have any option to set the video memory size for qemu
video devices. There was only the vram (ram for QXL) attribute, but it
was valid only for the QXL video device.
To provide this feature to users, QEMU has a dedicated device attribute
called 'vgamem_mb' to set the video memory size. We will use the 'vram'
attribute for setting the video memory size of other QEMU video
devices.
For the cirrus device we will ignore the vram value because it has a
hardcoded video memory size in QEMU.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1076098
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
QEMU has two different types of QXL display devices. The first,
"qxl-vga", is for the primary video device and the second, "qxl", is
for secondary video devices.
There are also two different ways to specify those devices on the qemu
command line: the first, obsolete one uses the "-vga" option and the
current one uses the "-device" option. "-vga" can be used only to set
up the primary video device, so "-vga qxl" is equal to
"-device qxl-vga". Unfortunately "-vga qxl" doesn't support setting
additional parameters for the device and the "-global" option must be
used for this purpose. It's mandatory to use "-global qxl-vga...." to
set the parameters of a primary video device previously defined with
"-vga qxl".
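As a sketch (the value is only an example), setting the framebuffer
size of the primary device then looks like:
-vga qxl -global qxl-vga.vgamem_mb=16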
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1076098
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
The vram attribute was introduced to set the video memory, but it is
usable only for a few hypervisors, excluding QEMU/KVM and the old XEN
driver. Only in the case of QEMU was vram used, and only for QXL.
This patch updates the documentation to reflect the current code in
libvirt and also changes the cases when we will set the default vram
attribute. It also fixes the existing strange default value for VGA
devices of 9 MB to 16 MB, because the video ram should be rounded to a
power of two.
The change of the default value could affect migrations, but I found
out that QEMU always rounds the video ram to a power of two internally,
so it's safe to change the default value to the next closest power of
two and also silently correct every domain XML definition. And it's
also safe because we don't pass the value to QEMU.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1076098
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
To be able to express some use cases of the RBD backing with libvirt, we
need to be able to specify a config file for the RBD client to qemu as
that is one of the commonly used options.
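The disk source XML could then look roughly like this (element name and
paths are illustrative, not a final schema):
<disk type='network' device='disk'>
<source protocol='rbd' name='pool/image'>
<host name='mon.example.com' port='6789'/>
<config file='/etc/ceph/ceph.conf'/>
</source>
<target dev='vda' bus='virtio'/>
</disk>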
Some storage systems have internal support for snapshots. Libvirt should
be able to select a correct snapshot when starting a VM.
This patch adds an XML element to select a storage source snapshot for
the RBD protocol, which supports this feature.
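A sketch of the intended usage (element and snapshot names are
illustrative):
<source protocol='rbd' name='pool/image'>
<snapshot name='mysnap'/>
</source>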
To allow reusing this non-trivial parser code in the backing store
parser, this part of the command line parser needs to be split out into
a separate function.
Instead of splitting out various fields, pass the complete structure
and let the function pick various things out of it.
As one of the callers isn't using virStorageSourcePtr to store the
data, this patch adds glue code that fills the data into a dummy
virStorageSourcePtr before calling the function.
This change will help when adding new fields that need output processing
in the future.
As discussed on the upstream list, it's better not to make this
kind of predictions in libvirt. It may happen that qemu learns
how to enable OVMF on other architectures too and we shouldn't
try to chase that.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Currently, we are whitelisting architectures that we know how to run
OVMF on. So far, only x86_64 was enabled. However, looking at the qemu
code, the same command line can be used to enable OVMF for armv7l and
aarch64.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Ethernet interfaces in libvirt currently do not support bandwidth setting.
For example, the following XML for an interface will not apply these
settings to the corresponding qdiscs.
<interface type="ethernet">
<mac address="02:36:1d:18:2a:e4"/>
<model type="virtio"/>
<script path=""/>
<target dev="tap361d182a-e4"/>
<bandwidth>
<inbound average="984" peak="1024" burst="64"/>
<outbound average="2000" peak="2048" burst="128"/>
</bandwidth>
</interface>
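For reference, the shaping requested above roughly corresponds to tc
rules of the following kind (only an illustration, not the exact
commands libvirt issues):
tc qdisc add dev tap361d182a-e4 root handle 1: htb default 1
tc class add dev tap361d182a-e4 parent 1: classid 1:1 htb rate 984kbps ceil 1024kbps burst 64kb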
Signed-off-by: Anirban Chakraborty <abchak@juniper.net>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
It seems 'size_iops_sec' was a late addition, and the checks for
whether the field was defined but unsupported, and for the maximum size
of the field, were not being made.
Also, adjust the blkdeviotune support error message for grammar and
spelling (paramater), and remove the "(need QEMU 1.7 or superior)"
part. None of our other similar error messages list which QEMU version
is required.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Check the availability of the options with the current qemu binary,
add them to the variable opt if available, and print a message if not.
Signed-off-by: Matthias Gatto <matthias.gatto@outscale.com>
PowerISA allows processors to run VMs in a binary compatibility
("compat") mode supporting an older version of the ISA. QEMU has
recently added support to explicitly denote a VM running in
compatibility mode through commits 6d9412ea & 8dfa3a5e85. Now, a
"compat" mode VM can be run by invoking this qemu command line on a
POWER8 host: -cpu host,compat=power7.
This patch allows libvirt to exploit cpu mode 'host-model' to describe
this new mode for PowerKVM guests. For example, when a user wants to
request a power7 VM to run in compatibility mode on a Power8 host, this
can be described in XML as follows:
<cpu mode='host-model'>
<model>power7</model>
</cpu>
Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com>
Signed-off-by: Li Zhang <zhlcindy@linux.vnet.ibm.com>
Signed-off-by: Pradipta Kr. Banerjee <bpradip@in.ibm.com>
Acked-by: Michal Privoznik <mprivozn@redhat.com>