A future patch is going to wire up qemu active block commit jobs;
but as they have similar events and are canceled/pivoted in the
same way as block copy jobs, it is easiest to track all bookkeeping
for the commit job by reusing the <mirror> element. This patch
adds domain XML to track which job was responsible for creating a
mirroring situation, and adds a job='copy' attribute to all
existing uses of <mirror>. Along the way, it also massages the
qemu monitor backend to read the new field in order to generate
the correct type of libvirt job (even though it requires a
future patch to actually cause a qemu event that can be reported
as an active commit). It also prepares to update persistent XML
to match changes made to live XML when a copy completes.
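For illustration, the new attribute on a copy job might be formatted
like this (a minimal sketch; the file name is hypothetical, and an
active commit job would instead carry job='active-commit'):
<mirror type='file' job='copy'>
  <format type='qcow2'/>
  <source file='/var/lib/libvirt/images/copy.img'/>
</mirror>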
* docs/schemas/domaincommon.rng: Enhance schema.
* docs/formatdomain.html.in: Document it.
* src/conf/domain_conf.h (_virDomainDiskDef): Add a field.
* src/conf/domain_conf.c (virDomainBlockJobType): String conversion.
(virDomainDiskDefParseXML): Parse job type.
(virDomainDiskDefFormat): Output job type.
* src/qemu/qemu_process.c (qemuProcessHandleBlockJob): Distinguish
active from regular commit.
* src/qemu/qemu_driver.c (qemuDomainBlockCopy): Set job type.
(qemuDomainBlockPivot, qemuDomainBlockJobImpl): Clean up job type
on completion.
* tests/qemuxml2xmloutdata/qemuxml2xmlout-disk-mirror-old.xml:
Update tests.
* tests/qemuxml2argvdata/qemuxml2argv-disk-mirror.xml: Likewise.
* tests/qemuxml2argvdata/qemuxml2argv-disk-active-commit.xml: New
file.
* tests/qemuxml2xmltest.c (mymain): Drive new test.
Signed-off-by: Eric Blake <eblake@redhat.com>
Doing a blockcopy operation across a libvirtd restart is not very
robust at the moment. In particular, we are clearing the <mirror>
element prior to telling qemu to finish the job. Also, thanks to the
ability to request async completion, the user can easily regain
control prior to qemu actually finishing the effort, and they should
be able to poll the domain XML to see if the job is still going.
A future patch will fix things to actually wait until qemu is done
before modifying the XML to reflect the job completion. But since
qemu issues identical BLOCK_JOB_COMPLETE events regardless of whether
the job was cancelled (kept the original disk) or completed (pivoted
to the new disk), we have to track which of the two operations was
used to end the job. Furthermore, we'd like to avoid attempts to
end a job where we are already waiting on an earlier request to qemu
to end the job. Likewise, if we miss the qemu event (perhaps because
it arrived during a libvirtd restart), we still need enough state
recorded to be able to determine how to modify the domain XML once
we reconnect to qemu and manually learn whether the job still exists.
Although this patch doesn't actually fix the problem, it is a
preliminary step that makes it possible to track whether a job
has already begun steps towards completion.
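As a sketch, the new state might show up in the XML via the ready
attribute (values taken from the new enum; the rest of the element is
elided):
<mirror type='file' ready='abort'>
  ...
</mirror>
Here ready='yes' would mean qemu has signaled the job as ready to end,
while ready='abort' or ready='pivot' record that a request to end the
job is already in flight.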
* src/conf/domain_conf.h (virDomainDiskMirrorState): New enum.
(_virDomainDiskDef): Convert bool mirroring to new enum.
* src/conf/domain_conf.c (virDomainDiskDefParseXML)
(virDomainDiskDefFormat): Handle new values.
* src/qemu/qemu_process.c (qemuProcessHandleBlockJob): Adjust
client.
* src/qemu/qemu_driver.c (qemuDomainBlockPivot)
(qemuDomainBlockJobImpl): Likewise.
* docs/schemas/domaincommon.rng (diskMirror): Expose new values.
* docs/formatdomain.html.in (elementsDisks): Document it.
* tests/qemuxml2argvdata/qemuxml2argv-disk-mirror.xml: Test it.
Signed-off-by: Eric Blake <eblake@redhat.com>
* docs/schemas/domaincommon.rng: Add bhyve domain type, nmdm
serial type, and the optional master and slave attributes for
serial that are used by nmdm
* tests/domainschematest: Add bhyvexml2argvdata directory
to validate bhyve XMLs
https://bugzilla.redhat.com/show_bug.cgi?id=1092886
Rather than point off to some nefarious "pool-specific docs" page when
describing the "format" field for the target pool, provide a link to the
storage driver page, which describes the various valid formats for each
pool type. Also make it a bit clearer that if a valid format isn't
specified, then the type field is ignored.
Added <capabilities> in the <features> section of LXC domain
configuration. This section can contain elements named after the
capabilities, like:
<mknod state="on"/>    keep the CAP_MKNOD capability
<sys_chroot state="off"/>  drop the CAP_SYS_CHROOT capability
Users can restrict or give more capabilities than the default using
this mechanism.
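Put together, the relevant part of a domain XML might look like this
(a minimal sketch; the policy attribute is assumed to be left at its
default):
<features>
  <capabilities policy='default'>
    <mknod state='on'/>
    <sys_chroot state='off'/>
  </capabilities>
</features>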
Introduce a new function to parse the provided scsi_host parent address
and unique_id value in order to find the /sys/class/scsi_host directory,
which allows use of a stable SCSI host address.
Add a test to scsihosttest to look up the host# name using the PCI address
and unique_id value.
Add an optional unique_id parameter to nodedev. This allows for easier
lookup and display of the unique_id value for use with the scsi_host
code.
Between reboots and kernel reloads, the SCSI host number used for SCSI
storage pools may change, requiring modification of the storage pool XML
in order to use a specific SCSI host adapter.
This patch introduces the "parentaddr" element and "unique_id" attribute
for the SCSI host adapter in order to uniquely identify the adapter
between reboots and kernel reloads. For now the goal is only to parse
and format the XML. Both values will need to be provided in order to
uniquely identify the desired SCSI host.
The new XML is expected to be as follows:
<adapter type='scsi_host'>
  <parentaddr unique_id='3'>
    <address domain='0x0000' bus='0x00' slot='0x1f' func='0x2'/>
  </parentaddr>
</adapter>
where "parentaddr" identifies the parent device of the SCSI host by the
PCI address on which the device resides and the value from the unique_id
file for the device. Both the PCI address and unique_id values will be used
to traverse the /sys/class/scsi_host/ directories looking at each link
to match the PCI address reformatted to the directory link format where
"domain:bus:slot:function" is found. Then for each matching directory
the unique_id file for the scsi_host will be used to match the unique_id
value in the xml.
For the PCI address listed above, this will be formatted as "0000:00:1f.2"
and the links in /sys/class/scsi_host will be used to find the host#
to be used for the 'scsi_host' device. Each entry is a link to the
/sys/bus/pci/devices directories, e.g.:
% ls -al /sys/class/scsi_host/host2
lrwxrwxrwx. 1 root root 0 Jun 1 00:22 /sys/class/scsi_host/host2 -> ../../devices/pci0000:00/0000:00:1f.2/ata3/host2/scsi_host/host2
% cat /sys/class/scsi_host/host2/unique_id
3
The "parentaddr" element and "name" attribute are mutually exclusive ways
to identify the SCSI host number. Use of the "parentaddr" element will be
the preferred mechanism.
This patch only parses and formats the XML. Later patches will add code
to find the SCSI host number.
Gluster volumes don't start with a leading slash. Our schema for netfs
gluster pools enforces it though. Luckily mount.glusterfs skips it.
Allow a slashless volume name for glusterfs netfs mounts in the schema.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1101999
libvirt already supports the PCI domain, so update the documentation.
Otherwise users who look up the documentation for how to use hostdev may
miss the domain attribute and encounter errors when passing through a PCI
device in a domain other than 0.
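For example, a hostdev entry for a device in a non-zero PCI domain
might look like this (address values are hypothetical):
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0001' bus='0x02' slot='0x03' function='0x0'/>
  </source>
</hostdev>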
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Disk device 'lun' enables SCSI command passthrough for a disk. We stated
that it works only with "block" disks, but QEMU also supports it when
using the iSCSI protocol.
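A sketch of a passthrough LUN backed by iSCSI (the host name and IQN
are hypothetical):
<disk type='network' device='lun'>
  <driver name='qemu' type='raw'/>
  <source protocol='iscsi' name='iqn.2013-07.com.example:iscsi-pool/1'>
    <host name='example.com' port='3260'/>
  </source>
  <target dev='sda' bus='scsi'/>
</disk>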
LXC network devices can now be assigned a custom NIC device name on the
container side. For example, this is configured with:
<interface type='network'>
  <source network='default'/>
  <guest dev="eth1"/>
</interface>
In this example the network card will appear as eth1 in the guest.
The previous commit 09d4d26 put the interleave at the wrong point;
it didn't allow interleaving with <memory>.
* docs/schemas/domaincommon.rng (numatune): Fix interleave location.
* tests/qemuxml2argvdata/qemuxml2argv-numatune-memnode.xml: Adjust test.
Signed-off-by: Eric Blake <eblake@redhat.com>
In the XML format, by definition, the order of fields should not matter,
so the order of parsing the elements doesn't affect the end result. When
specifying guest NUMA cells, we used to depend only on the order of the
'cell' elements. With this patch all older domain XMLs are parsed as
before, but when the 'id' attribute is present the cells are parsed and
formatted according to that field. This will be useful when we have
tuning settings for a particular guest NUMA node.
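For example, with the new attribute a guest NUMA topology might be
written as (a minimal sketch; cpu ranges and sizes are hypothetical):
<cpu>
  <numa>
    <cell id='0' cpus='0-3' memory='1048576'/>
    <cell id='1' cpus='4-7' memory='1048576'/>
  </numa>
</cpu>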
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
This patch adds support for the QEMU vhost-user feature to libvirt.
vhost-user enables communication between a QEMU virtual machine and
another userspace process using the Virtio transport protocol. It uses
a character device (e.g. a Unix socket) for the control plane, while
the data plane is based on shared memory.
The XML looks like:
<interface type='vhostuser'>
  <mac address='52:54:00:3b:83:1a'/>
  <source type='unix' path='/tmp/vhost.sock' mode='server'/>
  <model type='virtio'/>
</interface>
Signed-off-by: Michele Paolino <m.paolino@virtualopensystems.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Add 'nocow' to the storage volume XML so that the user has an option
to set the NOCOW flag on a newly created volume. It's useful on the
btrfs file system to enhance performance.
Btrfs has low performance when hosting VM images, even more so when the
guests in those VMs also use btrfs as their file system. One way to
mitigate this bad performance is to turn off COW attributes on VM files.
Generally, there are two ways to turn off COW on btrfs: a) by mounting
the fs with nodatacow, so that all newly created files are NOCOW; b) per
file, by adding the NOCOW file attribute, which can only be done on
empty or new files.
This patch implements the second way: depending on the 'nocow' option,
it sets the NOCOW flag per file. For raw file images, 'nocow' is handled
in libvirt code; for non-raw file images, the 'nocow=on' option is
passed to qemu-img, letting qemu-img handle it (requires qemu-img
version >= 2.1).
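A sketch of a volume XML using the new element (name and size are
hypothetical):
<volume>
  <name>vm1.img</name>
  <capacity unit='G'>10</capacity>
  <target>
    <format type='raw'/>
    <nocow/>
  </target>
</volume>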
Signed-off-by: Chunyan Liu <cyliu@suse.com>
Our documentation for features was rather sparse; this fleshes out
more of the details for other existing capabilities (and cost me
some time trawling git history).
* docs/formatcaps.html.in: Document the feature bits.
Signed-off-by: Eric Blake <eblake@redhat.com>
The link to the page "how to get your code into an open source
project" has been fixed.
Signed-off-by: Michele Paolino <m.paolino@virtualopensystems.com>
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Fix a couple of typos ('chap' should have been 'iscsi', and there was
a stray 'iqn.2013-07.com.example:iscsi-pool' entry). Clean up the
description of the <auth> element for the disk.
This new module holds and formats capabilities for an emulator. If you
are about to create a new domain, you may want to know what the host or
hypervisor is capable of. To make sure we don't regress on the XML, the
formatting is not left for each driver to implement; rather, there is a
general format function.
The domain capabilities is a lockable object (even though the locking
is not necessary yet) which uses a reference counter.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Check if the buffer is in error state and report an error if it is.
This replaces the pattern:
if (virBufferError(buf)) {
    virReportOOMError();
    goto cleanup;
}
with:
if (virBufferCheckError(buf) < 0)
    goto cleanup;
Document typical buffer usage to favor this.
Also remove the redundant FreeAndReset - if an error has
been set via virBufferSetError, the content is already freed.
This introduces two new attributes, "cmd_per_lun" and "max_sectors",
matching the names QEMU uses for virtio-scsi. An example of the XML:
<controller type='scsi' index='0' model='virtio-scsi' cmd_per_lun='50'
max_sectors='512'/>
The corresponding QEMU command line:
-device virtio-scsi-pci,id=scsi0,cmd_per_lun=50,max_sectors=512,
bus=pci.0,addr=0x3
Signed-off-by: Mike Perez <thingee@gmail.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
We publish libvirt-api.xml for others to use, and in fact, the
libvirt-python bindings use it to generate python constants that
correspond to our enum values. However, we had an off-by-one bug
where any enum that relied on C's rules for implicit initialization
of the first enum member to 0 got listed in the XML as having a
value of 1 (and all later members of the enum were equally
botched).
The fix is simple - since we add one to the previous value when
encountering an enum without an initializer, the previous value
must start at -1 so that the first enum member is assigned 0.
The python generator code has had the off-by-one ever since DV
first wrote it years ago, but most of our public enums were immune
because they had an explicit = 0 initializer. The only affected
enums are:
- virDomainEventGraphicsAddressType (such as
VIR_DOMAIN_EVENT_GRAPHICS_ADDRESS_IPV4), since commit 987e31e
(libvirt v0.8.0)
- virDomainCoreDumpFormat (such as VIR_DOMAIN_CORE_DUMP_FORMAT_RAW),
since commit 9fbaff0 (libvirt v1.2.3)
- virIPAddrType (such as VIR_IP_ADDR_TYPE_IPV4), since commit
03e0e79 (not yet released)
Thanks to Nehal J Wani for reporting the problem on IRC, and
for helping me zero in on the culprit function.
* docs/apibuild.py (CParser.parseEnumBlock): Fix implicit enum
values.
Signed-off-by: Eric Blake <eblake@redhat.com>
The interface state for bonds and vlans does seem to reflect the state
of the underlying physical devices, at least in some cases, so it
makes sense to allow reporting it (netcf now does).
The link state/speed for bridge devices is meaningless though, so we
don't even look for it.
The interface xml schema was written with strict rules about the
ordering of the elements. This was never intentional, but just due to
omission of <interleave> in the appropriate places. This patch just
adds in <interleave> wherever there is more than one element, and
re-indents everything else appropriately.
In the section "Block / character devices" of "Host device assignment",
the description of the hostdev element has some errors:
for a block device, the type should be "storage", not "block";
for a character device, the type should be "misc", not "char".
Signed-off-by: Jincheng Miao <jmiao@redhat.com>
There are two places where you'll find info on page sizes. The first
one is under the <cpu/> element, where all supported page sizes are
listed. The second one is under each <cell/> element, which refers
to a concrete NUMA node; there, the size of the node's pool for each
page size is reported. So the capabilities XML looks something like this:
<capabilities>
  <host>
    <uuid>01281cda-f352-cb11-a9db-e905fe22010c</uuid>
    <cpu>
      <arch>x86_64</arch>
      <model>Westmere</model>
      <vendor>Intel</vendor>
      <topology sockets='1' cores='1' threads='1'/>
      ...
      <pages unit='KiB' size='4'/>
      <pages unit='KiB' size='2048'/>
      <pages unit='KiB' size='1048576'/>
    </cpu>
    ...
    <topology>
      <cells num='4'>
        <cell id='0'>
          <memory unit='KiB'>4054408</memory>
          <pages unit='KiB' size='4'>1013602</pages>
          <pages unit='KiB' size='2048'>3</pages>
          <pages unit='KiB' size='1048576'>1</pages>
          <distances/>
          <cpus num='1'>
            <cpu id='0' socket_id='0' core_id='0' siblings='0'/>
          </cpus>
        </cell>
        <cell id='1'>
          <memory unit='KiB'>4071072</memory>
          <pages unit='KiB' size='4'>1017768</pages>
          <pages unit='KiB' size='2048'>3</pages>
          <pages unit='KiB' size='1048576'>1</pages>
          <distances/>
          <cpus num='1'>
            <cpu id='1' socket_id='0' core_id='0' siblings='1'/>
          </cpus>
        </cell>
        ...
      </cells>
    </topology>
    ...
  </host>
  <guest/>
</capabilities>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Commit 7c6fc39 introduced a regression in the XML produced for older
clients. The argument at the time was that clients shouldn't be
depending on output-only data for something that is only going to
be triggered for a transient guest; but John Ferlan reported that
the automated testsuite was such a client. It's better to be safe
than sorry by guaranteeing back-compat cruft. Note that later
patches will be using <mirror> for active block commit, but there
we don't have to worry about back-compat.
* src/conf/domain_conf.c (virDomainDiskDefFormat): Restore old
style output when necessary.
* docs/schemas/domaincommon.rng: Validate back-compat style.
* docs/formatdomain.html.in: Update the documentation.
* tests/qemuxml2xmloutdata/qemuxml2xmlout-disk-mirror-old.xml:
Update tests.
* tests/qemuxml2argvdata/qemuxml2argv-disk-mirror.xml: Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
This new element represents the PCI Express capabilities of a
PCI device, like link speed, number of lanes, etc.
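A sketch of how the element might appear in the nodedev XML (values
are hypothetical):
<capability type='pci'>
  ...
  <pci-express>
    <link validity='cap' port='1' speed='2.5' width='1'/>
    <link validity='sta' speed='2.5' width='1'/>
  </pci-express>
</capability>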
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
While exposing the info under <interface/> in the previous patch works,
it works only in cases where the interface is configured on the host.
However, an orchestrating application may want to know the link state
and speed even when it isn't. That's why we ought to expose this in the
nodedev XML too:
virsh # nodedev-dumpxml net_eth0_f0_de_f1_2b_1b_f3
<device>
  <name>net_eth0_f0_de_f1_2b_1b_f3</name>
  <path>/sys/devices/pci0000:00/0000:00:19.0/net/eth0</path>
  <parent>pci_0000_00_19_0</parent>
  <capability type='net'>
    <interface>eth0</interface>
    <address>f0:de:f1:2b:1b:f3</address>
    <link speed='1000' state='up'/>
    <capability type='80203'/>
  </capability>
</device>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Currently it is not possible to determine from the API the speed of an
interface or whether a link is actually detected. Orchestrating
platforms want to be able to determine when the link has failed and,
where multiple speeds may be available, which one the interface is
actually connected at. This commit introduces an extension to our
interface XML (without implementation in the interface driver backends):
<interface type='ethernet' name='eth0'>
  <start mode='none'/>
  <mac address='aa:bb:cc:dd:ee:ff'/>
  <link speed='1000' state='up'/>
  <mtu size='1492'/>
  ...
</interface>
Here @speed is the negotiated link speed in Mbits per second, and
@state is the current NIC state (one of "unknown", "notpresent",
"down", "lowerlayerdown", "testing", "dormant", "up").
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Now that we track a disk mirror as a virStorageSource, we might
as well update the XML to theoretically allow any type of
mirroring destination (not just a local file). A later patch
will also be reusing <mirror> to track the block commit of the
top layer of a chain, which is another case where libvirt needs
to update the backing chain after the job is finally pivoted,
and since backing chains can have network backing files as the
destination to commit into, it makes more sense to display that
in the XML.
This patch changes output-only XML; it was already documented
that <mirror> does not affect a domain definition at this point
(because qemu doesn't provide persistent bitmaps yet). Any
application that was starting a block copy job with older libvirt
and then relying on the domain XML to determine if it was
complete will no longer be able to access the file= and format=
attributes of mirror that were previously used. However, this is
not going to be a problem in practice: the only time a block copy
job works is on a transient domain, and any app that is managing
a transient domain probably already does enough of its own
bookkeeping to know which file it is mirroring into without
having to re-read it from the libvirt XML. The one thing that
was likely to be used in a mirroring job was the ready=
attribute, which is unchanged. Meanwhile, I made sure the schema
and parser still accept the old format, even if we no longer
output it, so that upgrading from an older version of libvirt is
seamless.
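To illustrate, where the old output was a single element (paths are
hypothetical):
<mirror file='/tmp/copy.img' format='qcow2' ready='yes'/>
the new-style output looks something like:
<mirror type='file' ready='yes'>
  <format type='qcow2'/>
  <source file='/tmp/copy.img'/>
</mirror>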
* docs/schemas/domaincommon.rng (diskMirror): Alter definition.
* src/conf/domain_conf.c (virDomainDiskDefParseXML): Parse two
styles of mirror elements.
(virDomainDiskDefFormat): Output new style.
* tests/qemuxml2argvdata/qemuxml2argv-disk-mirror-old.xml: New
file, copied from...
* tests/qemuxml2argvdata/qemuxml2argv-disk-mirror.xml: ...here
before modernizing.
* tests/qemuxml2xmloutdata/qemuxml2xmlout-disk-mirror-old*: New
files.
* tests/qemuxml2xmltest.c (mymain): Test both styles.
Signed-off-by: Eric Blake <eblake@redhat.com>
A PCI device can be associated with a specific NUMA node. If a guest
is pinned to one NUMA node while its assigned PCI device resides on a
different NUMA node, DMA transfers must travel across nodes, resulting
in suboptimal performance. We should expose the NUMA node locality for
PCI devices so management applications can make better decisions.
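In the nodedev XML this could be reported with a simple subelement of
the PCI capability (a sketch):
<capability type='pci'>
  ...
  <numa node='0'/>
</capability>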
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
At the moment we are missing even basic documentation on our
capabilities XML. Without any claim of completeness, I'm
reorganizing the document structure and adding very basic
documentation for two major components of the capabilities XML.
These stubs are intended to be enhanced in the future.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If a user or management application wants to create a guest, it may be
useful to know the cost of inter-node latencies before the guest's
resources are pinned. For example:
<capabilities>
  <host>
    ...
    <topology>
      <cells num='2'>
        <cell id='0'>
          <memory unit='KiB'>4004132</memory>
          <distances>
            <sibling id='0' value='10'/>
            <sibling id='1' value='20'/>
          </distances>
          <cpus num='2'>
            <cpu id='0' socket_id='0' core_id='0' siblings='0'/>
            <cpu id='2' socket_id='0' core_id='2' siblings='2'/>
          </cpus>
        </cell>
        <cell id='1'>
          <memory unit='KiB'>4030064</memory>
          <distances>
            <sibling id='0' value='20'/>
            <sibling id='1' value='10'/>
          </distances>
          <cpus num='2'>
            <cpu id='1' socket_id='0' core_id='0' siblings='1'/>
            <cpu id='3' socket_id='0' core_id='2' siblings='3'/>
          </cpus>
        </cell>
      </cells>
    </topology>
    ...
  </host>
  ...
</capabilities>
We can see that the distance from node 1 to node 0 is 20, while within
a node it is 10.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>