Newer QEMU introduced cache=directsync for -drive; this patchset
exposes it in the libvirt layer.
* Introduced a new QEMU capability flag ($prefix_CACHE_DIRECTSYNC),
as even when $prefix_CACHE_V2 is set, we can't know whether
directsync is supported.
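With the flag present, the mode can then be requested from the disk
<driver> element; a minimal sketch (the source path and target name
are illustrative):

  <disk type='file' device='disk'>
    <driver name='qemu' type='raw' cache='directsync'/>
    <source file='/var/lib/libvirt/images/guest.img'/>
    <target dev='vda' bus='virtio'/>
  </disk>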
In some versions of qemu, both virtio-blk-pci and virtio-net-pci
devices can have an event_idx setting that determines some details of
event processing. When it is enabled, it "reduces the number of
interrupts and exits for the guest". qemu will automatically enable
this feature when it is available, but there may be cases where this
new feature could actually make performance worse (NB: no such case
has been found so far).
As a safety switch in case such a situation is encountered in the
field, this patch adds a new attribute "event_idx" to the <driver>
element of both disk and interface devices. event_idx can be set to
"on" (to force event_idx on in case qemu has it disabled by default)
or "off" (to force event_idx off). In the case that event_idx support
isn't present in qemu, the attribute is ignored (this is on the advice
of the qemu developer).
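For example, to force the feature off on a disk (a sketch; the rest of
the device definition is illustrative):

  <disk type='file' device='disk'>
    <driver name='qemu' type='raw' event_idx='off'/>
    ...
  </disk>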
docs/formatdomain.html.in: document the new flag (marking it as
"don't mess with this!")
docs/schemas/domain.rng: add event_idx in appropriate places
src/conf/domain_conf.[ch]: add event_idx to parser and formatter
src/libvirt_private.syms: export
virDomainVirtioEventIdx(From|To)String
src/qemu/qemu_capabilities.[ch]: detect and report event_idx in
disk/net
src/qemu/qemu_command.c: add event_idx parameter to qemu commandline
when appropriate.
tests/qemuxml2argvdata/qemuxml2argv-event_idx.args,
tests/qemuxml2argvdata/qemuxml2argv-event_idx.xml,
tests/qemuxml2argvtest.c,
tests/qemuxml2xmltest.c: test cases for event_idx.
This patch creates a new <bios> element which, at this time, has only
the attribute useserial='yes|no'. This attribute allows users to use a
Serial Graphics Adapter and see BIOS messages from the very first moment
the domain boots up. Therefore, users can choose the boot medium, set up
PXE, etc.
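The element lives under <os>; a minimal sketch:

  <os>
    <type arch='x86_64'>hvm</type>
    <bios useserial='yes'/>
  </os>

Note that the domain also needs a serial port defined for the BIOS
output to reach it.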
For virtio disks and interfaces, qemu allows users to enable or disable
the ioeventfd feature. This means qemu can execute domain code while
another thread waits for I/O events. Basically, in some cases it is a
win, in others a loss. This feature is available via the 'ioeventfd'
attribute in the disk and interface <driver> element. It accepts 'on'
and 'off'. Leaving this attribute out defers the decision to the
hypervisor.
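For example, on an interface (a sketch):

  <interface type='network'>
    <model type='virtio'/>
    <driver ioeventfd='on'/>
    ...
  </interface>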
NB: the enum that uses the string vnet-host (now changed to vhost-net)
is used in XML, but fortunately that hasn't been in an official
release yet, so it can still be fixed.
To cope with the QEMU binary being changed while a VM is running,
it is necessary to persist the original qemu capabilities at the
time the VM is booted.
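Concretely, the status XML gains a record of the detected flags,
roughly along these lines (the flag names shown are only illustrative):

  <qemuCaps>
    <flag name='device'/>
    <flag name='drive-cache-v2'/>
    ...
  </qemuCaps>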
* src/qemu/qemu_capabilities.c, src/qemu/qemu_capabilities.h: Add
an enum for a string rep of every capability
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: Support for
storing capabilities in the domain status XML
* src/qemu/qemu_process.c: Populate & free QEMU capabilities at
domain startup
For qemu that names the primary VGA device "qxl-vga":
1) if vram is specified for 2nd qxl device:
-vga qxl -global qxl-vga.vram_size=$SIZE \
-device qxl,id=video1,vram_size=$SIZE,...
2) if vram is not specified for 2nd qxl device (use the default
set by "-global"):
-vga qxl -global qxl-vga.vram_size=$SIZE \
-device qxl,id=video1,...
For qemu that names all qxl devices "qxl":
1) if vram is specified for 2nd qxl device:
-vga qxl -global qxl.vram_size=$SIZE \
-device qxl,id=video1,vram_size=$SIZE ...
2) if vram is not specified for 2nd qxl device:
-vga qxl -global qxl.vram_size=$SIZE \
-device qxl,id=video1,...
"-global" is the only way to define vram_size for the primary qxl
device, regardless of how qemu names it, (It's not good a good
way, as original idea of "-global" is to set a global default for
a driver property, but to specify vram for first qxl device, we
have to use it).
For other qxl devices, as they are represented by "-device", could
specify it directly and seperately for each, and it overrides the
default set by "-global" if specified.
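On the libvirt side, the vram_size values come from the vram attribute
of each <video> element (given in KiB and multiplied by 1024 before
being handed to qemu); a sketch with illustrative sizes:

  <video>
    <model type='qxl' vram='65536' heads='1'/>
  </video>
  <video>
    <model type='qxl' vram='65536'/>
  </video>

The first <video> becomes the primary "-vga qxl" device configured via
"-global"; the second is emitted as a separate "-device qxl".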
v1 - v2:
* modify "virDomainVideoDefaultRAM" so that it returns 16M as the
default vram_size for the qxl device.
* vram_size * 1024 (qemu accepts bytes for vram_size).
* apply the default vram_size for any qxl device for which vram_size
is not specified.
* modify "graphics-spice" tests (more sensible vram_size)
* Add an argument of virDomainDefPtr type to qemuBuildVideoDevStr,
to use virDomainVideoDefaultRAM in qemuBuildVideoDevStr.
v2 - v3:
* Modify the default video memory size for the qxl device from 16M
to 24M
* Update code to be consistent with changes in qemu_capabilities.*
This is done for two reasons:
- we are getting very close to 64 flags, which is the maximum we can
use with unsigned long long
- by using LL constants in the enum we already violate the C99
constraint that enum values have to fit into an int
This is in response to:
https://bugzilla.redhat.com/show_bug.cgi?id=629662
Explanation
qemu's virtio-net-pci driver allows setting the algorithm used for tx
packets to either "bh" or "timer". This is done by adding ",tx=bh" or
",tx=timer" to the "-device virtio-net-pci" commandline option.
'bh' stands for 'bottom half'; when this is set, packet tx is all done
in an iothread in the bottom half of the driver. (In libvirt, this
option is called the more descriptive "iothread".)
'timer' means that tx work is done in qemu, and if there is more tx
data than can be sent at the present time, a timer is set before qemu
moves on to do other things; when the timer fires, another attempt is
made to send more data. (libvirt retains the name "timer" for this
option.)
The resulting difference, according to the qemu developer who added
the option is:
bh makes tx more asynchronous and reduces latency, but potentially
causes more processor bandwidth contention since the cpu doing the
tx isn't necessarily the cpu where the guest generated the
packets.
Solution
This patch provides a libvirt domain xml knob to change the option on
the qemu commandline, by adding a new attribute "txmode" to the
<driver> element that can be placed inside any <interface> element in
a domain definition. Its use would be something like this:
  <interface ...>
    ...
    <model type='virtio'/>
    <driver txmode='iothread'/>
    ...
  </interface>
I chose to put this setting as an attribute to <driver> rather than as
a sub-element to <tune> because it is specific to the virtio-net
driver, not something that is generally usable by all network drivers.
(note that this is the same placement as the "driver name=..."
attribute used to choose kernel vs. userland backend for the
virtio-net driver.)
Actually adding the tx=xxx option to the qemu commandline is only done
if the version of qemu being used advertises it in the output of
qemu -device virtio-net-pci,?
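When the option is advertised, the XML above produces a command line
fragment roughly like this (the other -device parameters are
illustrative):

  -device virtio-net-pci,netdev=hostnet0,id=net0,tx=bh,...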
If a particular txmode is requested in the XML, and the option isn't
listed in that help output, an UNSUPPORTED_CONFIG error is logged, and
the domain fails to start.
QEMUD_CMD_FLAG_PCI_MULTIBUS should be set in the function
qemuCapsExtractVersionInfo()
The flag QEMUD_CMD_FLAG_PCI_MULTIBUS is used in the function
qemuBuildDeviceAddressStr(). All callers get qemuCmdFlags
from the function qemuCapsExtractVersionInfo() except
testCompareXMLToArgvFiles() in qemuxml2argvtest.c.
So we should set QEMUD_CMD_FLAG_PCI_MULTIBUS in the function
qemuCapsExtractVersionInfo() instead of qemuBuildCommandLine(),
because the function qemuBuildCommandLine() is not called
when we attach a PCI device.
tests: set QEMUD_CMD_FLAG_PCI_MULTIBUS in testCompareXMLToArgvFiles()
Set QEMUD_CMD_FLAG_PCI_MULTIBUS before calling qemuBuildCommandLine(),
as the flag is not set by qemuCapsExtractVersionInfo().
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
qemu 0.13.0 (at least as built for Fedora 14, and also backported to
RHEL 6.0 qemu) supported an older syntax for a spicevmc channel; it's
not as flexible (it has an implicit name and hides the chardev
aspect), but now that we support spicevmc, we might as well target
both variants.
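The domain XML for the channel is the same for both variants; a sketch:

  <channel type='spicevmc'>
    <target type='virtio' name='com.redhat.spice.0'/>
  </channel>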
* src/qemu/qemu_capabilities.h (QEMUD_CMD_FLAG_DEVICE_SPICEVMC):
New flag.
* src/qemu/qemu_capabilities.c (qemuCapsParseDeviceStr): Set it
correctly.
* src/qemu/qemu_command.h (qemuBuildVirtioSerialPortDevStr): Drop
declaration.
* src/qemu/qemu_command.c (qemuBuildVirtioSerialPortDevStr): Alter
signature, check flag.
(qemuBuildCommandLine): Adjust caller and check flag.
* tests/qemuhelptest.c (mymain): Update test.
* tests/qemuxml2argvtest.c (mymain): New test.
* tests/qemuxml2argvdata/qemuxml2argv-channel-spicevmc-old.xml:
New file.
* tests/qemuxml2argvdata/qemuxml2argv-channel-spicevmc-old.args:
Likewise.
Qemu smartcard/spicevmc support exists on branches (such as
http://cgit.freedesktop.org/~alon/qemu/commit/?h=usb_ccid.v15&id=024a37b)
but is not yet upstream. The added -help output matches a scratch build
that will be close to the RHEL 6.1 qemu-kvm.
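These flags back the <smartcard> device modes in the domain XML, for
instance (sketches):

  <smartcard mode='host'/>
  <smartcard mode='passthrough' type='spicevmc'/>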
* src/qemu/qemu_capabilities.h (QEMUD_CMD_FLAG_CCID_EMULATED)
(QEMUD_CMD_FLAG_CCID_PASSTHRU, QEMUD_CMD_FLAG_CHARDEV_SPICEVMC):
New flags.
* src/qemu/qemu_capabilities.c (qemuCapsComputeCmdFlags)
(qemuCapsParseDeviceStr): Check for smartcard capabilities.
(qemuCapsExtractVersionInfo): Tweak comment.
* tests/qemuhelptest.c (mymain): New test.
* tests/qemuhelpdata/qemu-kvm-0.12.1.2-rhel61: New file.
* tests/qemuhelpdata/qemu-kvm-0.12.1.2-rhel61-device: Likewise.
Depending on whether the qemu binary supports multiple PCI buses, the
device options will contain "bus=pci" or "bus=pci.0".
Only x86_64 and i686 seem to have support for multiple PCI buses. When
a guest of one of these architectures is started, set the
QEMUD_CMD_FLAG_PCI_MULTIBUS flag.
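For example, a device address would be rendered roughly as follows
(the other parameters are illustrative):

  -device virtio-blk-pci,bus=pci.0,addr=0x4,...   (multiple PCI buses)
  -device virtio-blk-pci,bus=pci,addr=0x4,...     (single PCI bus)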
Signed-off-by: Niels de Vos <ndevos@redhat.com>
In QEMU, the card itself is a PCI device, but it requires a codec
(either -device hda-output or -device hda-duplex) to actually output
sound. Specifying <sound model='ich6'/> gives us -device intel-hda
-device hda-duplex. I think it's important that a simple
<sound model='ich6'/> sets up a useful codec, to have consistent
behavior with all other sound cards.
This is basically Dan's proposal of
  <sound model='ich6'>
    <codec type='output' slot='0'/>
    <codec type='duplex' slot='3'/>
  </sound>
without the codec bits implemented.
The important thing is to keep a consistent API here; we don't want
some <sound> devices to require tweaking codecs but not others. Steps
I see to accomplishing this:
- every <sound> device has a <codec type='default'/> (unless codecs are
manually specified)
- <codec type='none'/> is required to specify 'no codecs'
- new audio settings like mic=on|off could then be exposed in
<sound> or <codec> in a consistent manner for all sound models
v2:
Use model='ich6'
v3:
Use feature detection, from eblake
Set codec id, bus, and cad values
v4:
intel-hda isn't supported if -device isn't available
v5:
Comment spelling fixes
* src/qemu/qemu_capabilities.h (qemuCapsParseDeviceStr): New
prototype.
* src/qemu/qemu_capabilities.c (qemuCapsParsePCIDeviceStrs):
Rename and split...
(qemuCapsExtractDeviceStr, qemuCapsParseDeviceStr): ...to make it
easier to add and test device-specific checks.
(qemuCapsExtractVersionInfo): Update caller.
* tests/qemuhelptest.c (testHelpStrParsing): Also test parsing of
device-related flags.
(mymain): Update expected flags.
* tests/qemuhelpdata/qemu-0.12.1-device: New file.
* tests/qemuhelpdata/qemu-kvm-0.12.1.2-rhel60-device: New file.
* tests/qemuhelpdata/qemu-kvm-0.12.3-device: New file.
* tests/qemuhelpdata/qemu-kvm-0.13.0-device: New file.
The qemu_conf.c code is doing three jobs: driver config file
loading, QEMU capabilities management, and QEMU command line
management. Move the capabilities code into its own file.
* src/qemu/qemu_capabilities.c, src/qemu/qemu_capabilities.h: New
capabilities management code
* src/qemu/qemu_conf.c, src/qemu/qemu_conf.h: Delete capabilities
code
* src/qemu/qemu_conf.h: Adapt for API renames
* src/Makefile.am: Add src/qemu/qemu_capabilities.c