The RELAX NG schema for domains doesn't match what's implemented for
<hostdev> in src/conf/domain_conf.c#virDomainHostdevDefFormat(): the
implementation only requires @type, but the schema requires either none
or all three attributes (@mode, @type, and @managed) to be defined
together, because they are declared in the same <optional> section.
(@managed is currently not even documented on
<http://libvirt.org/formatdomain.html#elementsUSB>.)
Thus the following minimal <hostdev> example fails to validate:
  <domain type='test'>
    <name>N</name>
    <memory>4096</memory>
    <bootloader>/bin/false</bootloader>
    <os>
      <type arch='x86_64' machine='xenpv'>linux</type>
    </os>
    <devices>
      <hostdev type='pci'>
        <source>
          <address bus='0x06' slot='0x00' function='0x0'/>
        </source>
      </hostdev>
    </devices>
  </domain>
The schema is changed to match the current implementation:
1. @mode is optional and defaults to 'subsystem'
2. @type is required
3. @managed is optional and defaults to 'no'
The documentation is updated to mention @managed.
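The relevant part of the schema then looks roughly like this (a sketch
only; the actual patterns in docs/schemas/domain.rng are more detailed):

  <element name="hostdev">
    <optional>
      <attribute name="mode">
        <value>subsystem</value>
      </attribute>
    </optional>
    <attribute name="type">
      <choice>
        <value>usb</value>
        <value>pci</value>
      </choice>
    </attribute>
    <optional>
      <attribute name="managed">
        <choice>
          <value>yes</value>
          <value>no</value>
        </choice>
      </attribute>
    </optional>
    <!-- ... subelements such as <source> ... -->
  </element>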
Signed-off-by: Philipp Hahn <hahn@univention.de>
This patch adds documentation about the 802.1Qbg-related parameters of
the virtualport element in a 'direct' interface definition.
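For reference, such a definition looks roughly like this (the parameter
values shown are purely illustrative):

  <interface type='direct'>
    <source dev='eth0' mode='vepa'/>
    <virtualport type='802.1Qbg'>
      <parameters managerid='11' typeid='1193047' typeidversion='2'
                  instanceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/>
    </virtualport>
  </interface>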
Signed-off-by: Gerhard Stenzel <gerhard.stenzel@de.ibm.com>
A diff of 'make dist' from in-tree vs. a VPATH build showed
that we were missing docs/api_extension/*.patch files, but
shipping other files that we didn't need.
* bootstrap.conf (gnulib_extra_files): Don't distribute files we
don't care about.
* docs/Makefile.am (patches): Perform wildcard correctly.
Done mechanically with:
$ git grep -l '\bDEBUG0\? *(' | xargs -L1 sed -i 's/\bDEBUG0\? *(/VIR_&/'
followed by manual deletion of qemudDebug in daemon/libvirtd.c, along
with a single 'make syntax-check' fallout in the same file, and the
actual deletion in src/util/logging.h.
* src/util/logging.h (DEBUG, DEBUG0): Delete.
* daemon/libvirtd.h (qemudDebug): Likewise.
* global: Change remaining clients over to VIR_DEBUG counterpart.
XSLT allows two ways of generating the output of a transformation:
implicit, where xsltproc prints the result to stdout and -o file can
redirect it into a file, or explicit, where the stylesheet contains
<xsl:document> elements specifying where the output should be saved.
The explicit form can generate multiple files in a single run of
xsltproc, and -o directory/ changes the directory where those output
files are stored.
devhelp.xsl is special in that it combines both options in one
stylesheet, which doesn't work well with -o:
xsltproc --nonet -o ./devhelp/ ./devhelp/devhelp.xsl ./libvirt-api.xml
This outputs four *.html files into ./devhelp/ but then tries to write
the implicit output to ./devhelp/ itself as a file (hence the I/O
error) rather than to the fifth file, devhelp/libvirt.devhelp.
This patch modifies devhelp.xsl so that all files are generated via
<xsl:document> elements, and -o directory/ can be used to override the
output directory where those files are saved.
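A minimal sketch of the resulting pattern (the file names and generated
content below are illustrative, not the actual devhelp.xsl text):

  <xsl:stylesheet version="1.0"
                  xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:template match="/">
      <!-- each xsl:document writes one file; with -o dir/ the relative
           href is resolved inside that directory -->
      <xsl:document href="index.html">
        <html><body>...</body></html>
      </xsl:document>
      <xsl:document href="libvirt.devhelp">
        <book title="libvirt Reference Manual"/>
      </xsl:document>
    </xsl:template>
  </xsl:stylesheet>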
This patch adds the possibility to not just drop packets but to reject
them, in which case iptables at least sends an ICMP message back to the
originator. For ebtables this again maps to dropping packets, since
rejecting is not supported.
I am adding 'since 0.8.9' to the docs, assuming this will be the next
version of libvirt.
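A rule using the new action would look roughly like this (the filter
name, priority, and protocol match are illustrative):

  <filter name='no-incoming-tcp'>
    <rule action='reject' direction='in' priority='500'>
      <tcp/>
    </rule>
  </filter>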
This still doesn't fix {html,devhelp}/libvirt-{libvirt,virterror}.html,
but it's progress in the right direction.
* docs/Makefile.am (%.html): Build into srcdir.
This fixes https://bugzilla.redhat.com/show_bug.cgi?id=609463
The problem: a bridge always acquires the MAC address of the attached
interface with the numerically lowest MAC, so as guests are started and
stopped it was possible for the bridge's MAC address to change over
time. Windows 7 detects this change in the network (it sees the MAC of
the default route change), so on each reboot it would bring up a dialog
box asking about this "new network".
The solution is to create a dummy tap interface with a MAC guaranteed
to be lower than any guest interface's MAC, and attach that tap to the
bridge as soon as it's created. Since all guest MAC addresses start
with 0xFE, we can just generate a MAC with the standard "0x52, 0x54,
0" prefix, and it's guaranteed to always win (physical interfaces are
never connected to these bridges, so we don't need to worry about
competing numerically with them).
Note that the dummy tap is never set to IFF_UP state - that's not
necessary in order for the bridge to take its MAC, and not setting it
to UP eliminates the clutter of having, e.g., a "virbr0-nic" displayed
in the output of the ifconfig command.
I chose to not auto-generate the MAC address in the network XML
parser, as there are likely to be consumers of that API that don't
need or want to have a MAC address associated with the
bridge.
Instead, in bridge_driver.c when the network is being defined, if
there is no MAC, one is generated. To account for virtual network
configs that already exist when upgrading from an older version of
libvirt, I've added a %post script to the specfile that searches for
all network definitions in both the config directory
(/etc/libvirt/qemu/networks) and the state directory
(/var/lib/libvirt/network) that are missing a mac address, generates a
random address, and adds it to the config (and a matching address to
the state file, if there is one).
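A defined network then ends up with a <mac> element along these lines
(the address itself is randomly generated; the bridge name and address
shown here are illustrative):

  <network>
    <name>default</name>
    <bridge name='virbr0'/>
    <mac address='52:54:00:0a:1b:2c'/>
    ...
  </network>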
docs/formatnetwork.html.in: document <mac address.../>
docs/schemas/network.rng: add mac address to schema
libvirt.spec.in: %post script to update existing networks
src/conf/network_conf.[ch]: parse and format <mac address.../>
src/libvirt_private.syms: export a couple private symbols we need
src/network/bridge_driver.c:
auto-generate mac address when needed,
create dummy interface if mac address is present.
tests/networkxml2xmlin/isolated-network.xml
tests/networkxml2xmlin/routed-network.xml
tests/networkxml2xmlout/isolated-network.xml
tests/networkxml2xmlout/routed-network.xml: add mac address to some tests
This is in response to:
https://bugzilla.redhat.com/show_bug.cgi?id=629662
Explanation
qemu's virtio-net-pci driver allows setting the algorithm used for tx
packets to either "bh" or "timer". This is done by adding ",tx=bh" or
",tx=timer" to the "-device virtio-net-pci" commandline option.
'bh' stands for 'bottom half'; when this is set, packet tx is all done
in an iothread in the bottom half of the driver. (In libvirt, this
option is called the more descriptive "iothread".)
'timer' means that tx work is done in qemu, and if there is more tx
data than can be sent at the present time, a timer is set before qemu
moves on to do other things; when the timer fires, another attempt is
made to send more data. (libvirt retains the name "timer" for this
option.)
The resulting difference, according to the qemu developer who added the
option, is:
bh makes tx more asynchronous and reduces latency, but potentially
causes more processor bandwidth contention since the cpu doing the
tx isn't necessarily the cpu where the guest generated the
packets.
Solution
This patch provides a libvirt domain XML knob to change the option on
the qemu commandline, by adding a new attribute "txmode" to the
<driver> element that can be placed inside any <interface> element in
a domain definition. Its use would be something like this:
  <interface ...>
    ...
    <model type='virtio'/>
    <driver txmode='iothread'/>
    ...
  </interface>
I chose to put this setting as an attribute of <driver> rather than as
a sub-element of <tune> because it is specific to the virtio-net
driver, not something that is generally usable by all network drivers.
(Note that this is the same placement as the "driver name=..."
attribute used to choose the kernel vs. userland backend for the
virtio-net driver.)
Actually adding the tx=xxx option to the qemu commandline is only done
if the version of qemu being used advertises it in the output of
qemu -device virtio-net-pci,?
If a particular txmode is requested in the XML, and the option isn't
listed in that help output, an UNSUPPORTED_CONFIG error is logged, and
the domain fails to start.
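With <driver txmode='iothread'/>, for instance, the generated
commandline would contain a fragment along these lines (the netdev and
id options shown are illustrative):

  -device virtio-net-pci,tx=bh,netdev=hostnet0,id=net0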
* configure.ac docs/news.html.in libvirt.spec.in: bump version and add docs
* po/*.po*: updated Gujarati, Polish and Dutch localisations and regenerated
Libxml2-Logo-90x34.gif was removed from the repository in Sep 2009
(commit d6d528c) because our docs no longer reference it.
* docs/Makefile.am (install-data-local): Don't install missing file.
Adds <smartcard mode='passthrough' type='spicevmc'/>, which uses the
new <channel name='smartcard'/> of <graphics type='spice'>.
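Used together, the two look roughly like this (the port and channel
mode values are illustrative):

  <graphics type='spice' port='5901' autoport='no'>
    <channel name='smartcard' mode='any'/>
  </graphics>
  <smartcard mode='passthrough' type='spicevmc'/>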
* docs/schemas/domain.rng: Support new XML.
* docs/formatdomain.html.in: Document it.
* src/conf/domain_conf.h (virDomainGraphicsSpiceChannelName): New
enum value.
(virDomainChrSpicevmcName): New enum.
(virDomainChrSourceDef): Distinguish spicevmc types.
* src/conf/domain_conf.c (virDomainGraphicsSpiceChannelName): Add
smartcard.
(virDomainSmartcardDefParseXML): Parse it.
(virDomainChrDefParseXML, virDomainSmartcardDefParseXML): Set
spicevmc name.
(virDomainChrSpicevmc): New enum conversion functions.
* src/libvirt_private.syms: Export new functions.
* src/qemu/qemu_command.c (qemuBuildChrChardevStr): Conditionalize
name.
* tests/qemuxml2argvtest.c (domain): New test.
* tests/qemuxml2argvdata/qemuxml2argv-smartcard-passthrough-spicevmc.args:
New file.
* tests/qemuxml2argvdata/qemuxml2argv-smartcard-passthrough-spicevmc.xml:
Likewise.
Inspired by https://bugzilla.redhat.com/show_bug.cgi?id=615757
Add a new character device backend for virtio serial channels that
activates the QEMU spice agent on the main channel using the vdagent
spicevmc connection. The <target> must be type='virtio', and supports
an optional name that specifies how the guest will see the channel
(for now, name must be com.redhat.spice.0).
  <channel type='spicevmc'>
    <target type='virtio'/>
    <address type='virtio-serial' controller='1' bus='0' port='3'/>
  </channel>
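With the optional target name, that line would instead read:

    <target type='virtio' name='com.redhat.spice.0'/>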
* docs/schemas/domain.rng: Support new XML.
* docs/formatdomain.html.in: Document it.
* src/conf/domain_conf.h (virDomainChrType): New enum value.
* src/conf/domain_conf.c (virDomainChr): Add spicevmc.
(virDomainChrDefParseXML, virDomainChrSourceDefParseXML)
(virDomainChrDefParseTargetXML): Parse and enforce proper use.
(virDomainChrSourceDefFormat, virDomainChrDefFormat): Format.
* src/qemu/qemu_command.c (qemuBuildChrChardevStr)
(qemuBuildCommandLine): Add qemu support.
* tests/qemuxml2argvtest.c (domain): New test.
* tests/qemuxml2argvdata/qemuxml2argv-channel-spicevmc.xml: New
file.
* tests/qemuxml2argvdata/qemuxml2argv-channel-spicevmc.args:
Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Assuming a hypervisor that supports multiple smartcard devices in the
guest, this would be a valid XML description:
  <devices>
    <smartcard mode='host'/>
    <smartcard mode='host-certificates'>
      <certificate>/path/to/cert1</certificate>
      <certificate>/path/to/cert2</certificate>
      <certificate>/path/to/cert3</certificate>
    </smartcard>
    <smartcard mode='passthrough' type='tcp'>
      <source mode='bind' host='127.0.0.1' service='2001'/>
      <protocol type='raw'/>
    </smartcard>
  </devices>
(As of this commit, the qemu hypervisor will be the first
implementation, but it only supports one smartcard.)
* docs/formatdomain.html.in (Smartcard devices): New section.
* docs/schemas/domain.rng (smartcard): New define, used in
devices.
* tests/qemuxml2argvdata/qemuxml2argv-smartcard-host.xml: New file
to test schema.
* tests/qemuxml2argvdata/qemuxml2argv-smartcard-host-certificates.xml:
Likewise.
* tests/qemuxml2argvdata/qemuxml2argv-smartcard-passthrough-tcp.xml:
Likewise.
* tests/qemuxml2argvdata/qemuxml2argv-smartcard-controller.xml:
Likewise.
In QEMU, the card itself is a PCI device, but it requires a codec
(either -device hda-output or -device hda-duplex) to actually output
sound. Specifying <sound model='ich6'/> gives us -device intel-hda
-device hda-duplex. I think it's important that a simple
<sound model='ich6'/> sets up a useful codec, to have consistent
behavior with all other sound cards.
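Concretely, the generated arguments come out along these lines (the
device ids, bus, addr, and cad values here are illustrative):

  -device intel-hda,id=sound0,bus=pci.0,addr=0x4
  -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0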
This is basically Dan's proposal of
  <sound model='ich6'>
    <codec type='output' slot='0'/>
    <codec type='duplex' slot='3'/>
  </sound>
without the codec bits implemented.
The important thing is to keep a consistent API here; we don't want
some <sound> devices to require tweaking codecs while others don't.
Steps I see to accomplish this:
- every <sound> device has a <codec type='default'/> (unless codecs are
manually specified)
- <codec type='none'/> is required to specify 'no codecs'
- new audio settings like mic=on|off could then be exposed in
<sound> or <codec> in a consistent manner for all sound models
v2:
Use model='ich6'
v3:
Use feature detection, from eblake
Set codec id, bus, and cad values
v4:
intel-hda isn't supported if -device isn't available
v5:
Comment spelling fixes
QEMU supports serving VNC over a unix domain socket rather than traditional
TCP host/port. This is specified with:
<graphics type='vnc' socket='/foo/bar/baz'/>
This provides better access control than VNC listening on 127.0.0.1,
but will cause issues with tools that rely on the lax security
(virt-manager in Fedora runs as a regular user by default, and wouldn't
be able to access a socket owned by 'qemu' or 'root').
Also not currently supported by any clients, though I have patches for
virt-manager, and virt-viewer should be simple to update.
v2:
schema: Make listen vs. socket a <choice>
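In the schema that roughly becomes (a sketch, not the exact domain.rng
wording):

  <choice>
    <attribute name="listen"/>
    <attribute name="socket"/>
  </choice>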
Currently, boot order can be specified per device class, but there is
no way to specify the exact disk/NIC device to boot from.
This patch adds a <boot order='N'/> element which can be used inside
<disk/> and <interface/>. This is incompatible with the older os/boot
element. Since not all hypervisors support per-device boot
specification, a new deviceboot flag is included in the capabilities
XML for hypervisors which understand the new boot element. Presence of
the flag allows (but doesn't require) users to use the new-style boot
order specification.
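For example (the disk and interface details shown are illustrative):

  <disk type='file' device='disk'>
    ...
    <boot order='2'/>
  </disk>
  <interface type='network'>
    ...
    <boot order='1'/>
  </interface>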