In virDomainDefMaybeAddHostdevSCSIcontroller, when we add a new
controller (either because none was defined or because the existing
one is full), we should copy over the model from the existing
controller: whatever we create should at least have the same
characteristics as the controller we cannot use because it is full.
NB: This affects the existing hostdev-scsi-autogen-address test
which would add a default ('lsi') SCSI controller for the various
scsi_hosts that would create a controller for the hostdev.
Commit id '70249927b' neglected to cover this case because the test
had taken the "shortcut" of already adding the <address>; however, when
the PCI address assignment code was adjusted by commit id '70249927',
the vhost-scsi (VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_SCSI_HOST) case wasn't
covered and thus returned 0 for pciFlags. So I altered the tests too
to make sure it doesn't happen again.
Previously the qemuxml2xmloutdata file was a softlink to the source
qemuxml2argvdata file, so I unlinked it and recreated the output file to
force generation of the address. Without the test changes, address
generation returns:
libvirt: Domain Config error : internal error: Cannot automatically
add a new PCI bus for a device with connect flags 00
If an address was supplied in the test, a restart of libvirtd or an
edit of a guest would display the following opaque message:
warning : qemuDomainCollectPCIAddress:1237 :
qemuDomainDeviceCalculatePCIConnectFlags() thinks that the device
with PCI address 0000:00:09.0 should not have a PCI address
where the address is related to the guest PCI address provided.
There's no reason for the files to have the qemuxml2xmlout- prefix
since they all live under the qemuxml2xmloutdata directory.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
These XMLs live in a separate directory, so there's no need for them
to have a special prefix on top of that. The prefix also doesn't play
nicely with ':e' completion in Vim, and finding the proper file based on
qemuxml2argvtest.c is needlessly complicated.
The files were renamed using the following commands. From
qemuxml2argvdata:
for i in qemuxml2argv-*.xml; do mv $i ${i#qemuxml2argv-}; done
and then (to fix broken symlinks) from qemuxml2argvdata and
qemuxml2xmloutdata:
for i in $(find . -xtype l); do \
ln -sf $(readlink $i | sed 's/qemuxml2argv-//') $i;
done
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Now that <serial> and <console> on s390/s390x behave a bit more like the
other architectures, remove this extra differentiation, and use an sclp
console by default for new guests. New virtio consoles can still be
added, and that is actually needed because of the limited number of
instances for sclp and sclplm.
This reverts commit b1c88c1476, whose
reasons are not totally clear.
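As an illustration (a sketch, not taken from the patch itself), a newly
defined s390x guest would then get a console along these lines:
  <console type='pty'>
    <target type='sclp' port='0'/>
  </console>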
Signed-off-by: Pino Toscano <ptoscano@redhat.com>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
Introduce a specific target type with two models for the console
devices (sclp and sclplm) used in s390 and s390x guests, so isa-serial
is no longer used for them.
This makes <serial> usable on s390 and s390x guests, with at most
one sclpconsole and one sclplmconsole device usable in a single
guest (due to limitations in QEMU, which already enforces this at
runtime).
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1449265
Signed-off-by: Pino Toscano <ptoscano@redhat.com>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
We can finally introduce a specific target model for the pl011 device
used by mach-virt guests, which means isa-serial will no longer show
up to confuse users.
We make sure migration works in both directions by interpreting the
isa-serial target type, or the lack of target type, appropriately
when parsing the guest XML, and skipping the newly-introduced type
when formatting it for migration. We also verify that pl011 is not
used for non-mach-virt guests and add a bunch of test cases.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=151292
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The existing implementation set the address type for all serial
devices to spapr-vio, which made it impossible to use other devices
such as usb-serial and pci-serial; moreover, some decisions were
made based on the address type rather than the device type.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1512934
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
We can finally introduce a specific target model for the spapr-vty
device used by pSeries guests, which means isa-serial will no longer
show up to confuse users.
We make sure migration works in both directions by interpreting the
isa-serial target type, or the lack of target type, appropriately
when parsing the guest XML, and skipping the newly-introduced type
when formatting it for migration. We also verify that spapr-vty is
not used for non-pSeries guests and add a bunch of test cases.
This commit is best viewed with 'git show -w'.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1511421
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
This attribute was used to decide whether to format the type
attribute of the <target> element, but the logic didn't take into
account all possible cases and as such could lead to unexpected
results. Moreover, it's one more thing to keep track of, and can
easily fall out of sync with other attributes.
Now that we have VIR_DOMAIN_CHR_SERIAL_TARGET_TYPE_NONE, we can
use that value to signal that no specific target type has been
configured for the serial device and as such the attribute should
not be formatted at all. All other values are now formatted.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Some test cases have the <backend> tag inside the wrong interfaces. The
backend XML tag is not supported for <interface type='user|direct|hostdev'>.
So this commit changes some network types inside the interfaces that have a
backend defined.
Signed-off-by: Julio Faracco <jcfaracco@gmail.com>
Starting with qemu 2.11, `-device vmcoreinfo` creates a fw_cfg
entry for a guest to store dump details, which is necessary to process a
kernel dump with KASLR enabled and provides additional kernel details.
In essence, it is similar to -fw_cfg name=etc/vmcoreinfo,file=X but in
this case it is not backed by a file, but collected by QEMU itself.
Since the device is a singleton and shouldn't use additional hardware
resources, it is presented as a <feature> element in the libvirt
domain XML.
The device is arm/x86 only for now (targets that support fw_cfg+dma).
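A minimal sketch of how the feature could appear in the domain XML
(assuming no extra attributes are needed):
  <features>
    <vmcoreinfo/>
  </features>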
Related to:
https://bugzilla.redhat.com/show_bug.cgi?id=1395248
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
The terminator would not be parsed properly since the XPath selector was
looking for a populated element, and the code also did not bother
assigning the terminating virStorageSourcePtr to the backingStore
property of the parent.
Some tests would have caught it if there hadn't been bigger fallout from the
change to backing store termination in a693fdba01. Fix them properly now.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1509110
Since the virStorageEncryptionPtr encryption; is a member of
_virStorageSource it really should be allowed to be a subelement
of the disk <source> for various disk formats:
Source{File|Dir|Block|Volume}
SourceProtocol{RBD|ISCSI|NBD|Gluster|Simple|HTTP}
NB: Simple includes sheepdog, ftp, ftps, tftp
That way we can set up to allow the <encryption> element to be
formatted within the disk <source>, but we still need to keep track
of where the element was read from so that, when it comes to
formatting the data, it is written back in the correct place.
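For illustration, a disk using the new placement might look roughly like
this (path and UUID are placeholders):
  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2'/>
    <source file='/var/lib/libvirt/images/demo.qcow2'>
      <encryption format='luks'>
        <secret type='passphrase' uuid='32ee7e76-2178-47a1-ab7b-269e6e348015'/>
      </encryption>
    </source>
    <target dev='vda' bus='virtio'/>
  </disk>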
Modify the qemuxml2argvtest to add a parse failure when there is an
<encryption> as a child of <disk> *and* an <encryption> as a child
of <source>.
The virschematest will read the new test files and validate that,
from an RNG viewpoint, things are fine.
Since the virStorageAuthDefPtr auth; is a member of _virStorageSource
it really should be allowed to be a subelement of the disk <source>
for the RBD and iSCSI protocols. That way we can set up to allow
the <auth> element to be formatted within the disk source.
Since we've allowed the <auth> to be a child of <disk>, we'll need
to keep track of how it was read so that when writing out we'll know
whether to format as child of <disk> or <source>. For the argv2xml
parsing, let's format under <source> as a preference. Do not allow
<auth> to be both a child of <disk> and <source>.
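A sketch of the preferred placement for an iSCSI disk (hostname and IQN
are placeholders):
  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    <source protocol='iscsi' name='iqn.2013-07.com.example:iscsi-pool/1'>
      <host name='iscsi.example.org' port='3260'/>
      <auth username='myuser'>
        <secret type='iscsi' usage='libvirtiscsi'/>
      </auth>
    </source>
    <target dev='vda' bus='virtio'/>
  </disk>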
Modify the qemuxml2argvtest to add a parse failure when there is an
<auth> as a child of <disk> *and* an <auth> as a child of <source>.
Add tests to validate that if the <auth> was found in <source>, then
the resulting xml2xml and xml2argv work just fine. The two new .args
files are exact copies of the non "-source" version of the file.
The virschematest will read the new test files and validate that,
from an RNG viewpoint, things are fine.
Update the virstoragefile, virstoragetest, and args2xml file to show
the "preference" to place <auth> as a child of <source>.
Express a properly terminated backing chain by putting a
virStorageSource of type VIR_STORAGE_TYPE_NONE in the chain. The newly
used helpers simplify this greatly.
The change fixes a bug: formatting an incomplete backing chain and
parsing it back would end up expressing a terminated chain, since
src->backingStoreRaw was not populated. By relying on the terminator
object this can now be processed appropriately.
qemu 2.7.0 introduces multiqueue virtio-blk (commit 2f27059).
This patch introduces a new attribute "queues". An example of
the XML:
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' queues='4'/>
The corresponding QEMU command line:
-device virtio-blk-pci,scsi=off,num-queues=4,id=virtio-disk0
Signed-off-by: Lin Ma <lma@suse.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Alter qemu command line generation in order to possibly add TLS for
a suitably configured domain.
Sample TLS args generated by libvirt -
-object tls-creds-x509,id=objvirtio-disk0_tls0,dir=/etc/pki/qemu,\
endpoint=client,verify-peer=yes \
-drive file.driver=vxhs,file.tls-creds=objvirtio-disk0_tls0,\
file.vdisk-id=eb90327c-8302-4725-9e1b-4e85ed4dc251,\
file.server.type=tcp,file.server.host=192.168.0.1,\
file.server.port=9999,format=raw,if=none,\
id=drive-virtio-disk0,cache=none \
-device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,\
id=virtio-disk0
Update the qemuxml2argvtest with a couple of examples: one for a
simple case, and the other a bit more complex, where multiple VxHS disks
are added and at least one of them doesn't require TLS
credentials and thus sets the domain disk source attribute "tls = 'no'".
Update the hotplug code to be able to handle processing the tlsAlias,
whether it's to add the TLS object when hotplugging a disk or to remove
the TLS object when hot unplugging a disk. The hot plug/unplug code is
largely generic, but the addition code makes VxHS-specific checks only
because it needs to grab the correct config directory and generate the
object the same way the command line code would.
Signed-off-by: Ashish Mittal <Ashish.Mittal@veritas.com>
Signed-off-by: John Ferlan <jferlan@redhat.com>
Add an optional virTristateBool haveTLS to virStorageSource to
manage whether a storage source will be using TLS.
Sample XML for a VxHS disk:
<disk type='network' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
<source protocol='vxhs' name='eb90327c-8302-4725-9e1b-4e85ed4dc251' tls='yes'>
<host name='192.168.0.1' port='9999'/>
</source>
<target dev='vda' bus='virtio'/>
</disk>
Additionally add a tlsFromConfig boolean to track whether the TLS
setting came from the domain configuration or the qemu.conf global setting,
in order to decide whether to format the haveTLS setting for either
a live or saved domain configuration file.
Update the qemuxml2xmltest in order to add a test to show the proper
parsing.
Also update the docs to describe the tls attribute.
Signed-off-by: Ashish Mittal <Ashish.Mittal@veritas.com>
Signed-off-by: John Ferlan <jferlan@redhat.com>
Alter the schema to allow a VxHS block device. Sample XML is:
<disk type='network' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
<source protocol='vxhs' name='eb90327c-8302-4725-9e1b-4e85ed4dc251'>
<host name='192.168.0.1' port='9999'/>
</source>
<target dev='vda' bus='virtio'/>
<serial>eb90327c-8302-4725-9e1b-4e85ed4dc251</serial>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
Update the html docs to describe the capability for VxHS.
Alter the qemuxml2xmltest to validate the formatting.
Signed-off-by: Ashish Mittal <Ashish.Mittal@veritas.com>
Signed-off-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1075520
Currently, all that users can specify for an interface type of
'user' is the common attributes: PCI address, NIC model (and
that's basically it). However, some need to configure an address
range other than the default one.
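A rough sketch of what such a configuration could look like (the <ip>
element and the chosen range are illustrative, not copied from the patch):
  <interface type='user'>
    <mac address='52:54:00:a1:b2:c3'/>
    <ip family='ipv4' address='172.17.2.0' prefix='24'/>
    <model type='virtio'/>
  </interface>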
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: laine@laine.org
Alter the example to remove the <auth> from:
<disk type='volume' device='disk'>
<driver name='qemu' type='raw'/>
<source pool='iscsi-pool' volume='unit:0:0:1' mode='host'/>
<auth username='myuser'>
<secret type='iscsi' usage='libvirtiscsi'/>
</auth>
<target dev='vdb' bus='virtio'/>
</disk>
and
<disk type='volume' device='disk'>
<driver name='qemu' type='raw'/>
<source pool='iscsi-pool' volume='unit:0:0:2' mode='direct'/>
<auth username='myuser'>
<secret type='iscsi' usage='libvirtiscsi'/>
</auth>
<target dev='vdc' bus='virtio'/>
</disk>
The reality is, it's not even used. For a <source pool>, the authdef
from the storage source pool will supersede whatever is in the <disk>
definition during virStorageTranslateDiskSourcePool processing. In fact,
if the pool doesn't have/need authentication, then the authdef would
be removed anyway as the storage pool would be handling things.
The "proof" for this is in the adjustment to the test to add an
<auth> for a disk. The resulting .args file won't add what normally
would be added, "myname:encodedpassword@", prior to the hostname in
the IQN (e.g. iscsi://myname:encodedpassword@iscsi.example.org:3260/...).
https://bugzilla.redhat.com/show_bug.cgi?id=1477880
If the "/#" is missing from the provided iSCSI path, then we need
to provide the default LUN of /0; otherwise, QEMU will fail to parse
the URL causing a failure to either create the guest or hotplug
attach the storage.
During post parse, for any iSCSI disk or hostdev, scan the source
path looking for the presence of '/'; if found, we can assume
the LUN is provided. If not found, alter the input XML to add
"/0". This causes the generated XML to carry the generated
value when the domain config is saved after post parse.
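For example (a sketch with a placeholder IQN), a source such as
  <source protocol='iscsi' name='iqn.2013-07.com.example:iscsi-pool'>
would be rewritten during post parse as
  <source protocol='iscsi' name='iqn.2013-07.com.example:iscsi-pool/0'>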
arm/aarch64 -M virt on KVM doesn't and will never work with standard
VGA card emulation. The recommended method is to use type=virtio, so
let's make it the default for video devices without an explicit type
set by the user.
https://bugzilla.redhat.com/show_bug.cgi?id=1404112
Signed-off-by: Cole Robinson <crobinso@redhat.com>
While formatting a disk or chardev element, both use the
virDomainDiskSourceDefFormatSeclabel() function, which also closes
the source element. This is not extensible.
Use the new virXMLFormatElement() to properly format the source
element with possible child elements.
As a side effect it fixes a bug in disk source formatting.
Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1458638
This code is so complicated because we allow enabling the same
bits at many places. Just like in this case: huge pages can be
enabled by the global <hugepages/> element under <memoryBacking> or
on a per-<memory/> basis. To complicate things a bit more, users
are allowed to omit the page size, in which case the default page
size is used. And this is what is causing this bug. If no page
size is specified, @pagesize keeps a value of zero throughout the
whole function. Therefore we need yet another boolean to hold the
[use, don't use] information, as we can't use @pagesize for that.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
My commit 0c1d863 broke formatting of passthrough smartcard devices:
<smartcard mode='passthrough' type='spicevmc'/>
resulted in invalid XML:
<smartcard mode='passthrough'>
type='spicevmc'>
<address type='ccid' controller='0' slot='0'/>
</smartcard>
Split out the chardev source formatting function into two -
one formatting the attributes and the other formatting the subelements.
Reported-by: Cole Robinson <crobinso@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
While using "definitely-not-virtio" as a model name is very
cute, it will also cause the relevant test to fail once we
introduce stricter validation.
Use "e1000", which is definitely not virtio but also a valid
model name, instead.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
This patch addresses on PPC the same aspects that bug 1103314 addressed
on x86.
PCI expander bus creates multiple primary PCI busses, where each of these
busses can be assigned a specific NUMA affinity, which, on x86 is
advertised through ACPI on a per-bus basis.
For SPAPR, a PHB's NUMA affinities are assigned on a per-PHB basis, and
there is no mechanism for advertising NUMA affinities to a guest on a
per-bus basis. So, even if qemu-ppc manages to get some sort of multi-bus
topology working using PXB, there is no way to expose the affinities
of these busses to the guest. It can only be exposed on a per-PHB/per-domain
basis.
So the patch enables the NUMA node tag for the pci-root controller on PPC.
The way to set the NUMA node is through the numa_node option of the
spapr-pci-host-bridge device. However, for the implicit PHB, the only way
to set the numa_node is the -global option. The -global option applies
to all the PHBs unless the option is explicitly specified on the
respective PHB on the command line. The default PHB has the emulated
devices only, so the patch prevents setting the NUMA node for the default PHB.
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.vnet.ibm.com>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
All the pieces are now in place, so we can finally start
using isolation groups to achieve our initial goal, which is
separating hostdevs from emulated PCI devices while keeping
hostdevs that belong to the same host IOMMU group together.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1280542
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Laine Stump <laine@laine.org>
When looking for slots suitable for a PCI device, libvirt
might need to add an extra PCI controller: for pSeries guests,
we want that extra controller to be a PHB (pci-root) rather
than a PCI bridge.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Laine Stump <laine@laine.org>
PCI buses have to be numbered sequentially, and no index can be
missing, so libvirt will fill in the blanks automatically for
the user.
Up until now, it has done so using either pci-bridge, for machine
types based on legacy PCI, or pcie-root-port, for machine types
based on PCI Express. Neither choice is good for pSeries guests,
where PHBs (pci-root) should be used instead.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Laine Stump <laine@laine.org>
These tests demonstrate that, while it's now possible for the
user to create PHB explicitly and manually assign devices to
them, libvirt still defaults to extending the guest PCI
topology using PCI bridges and making suboptimal device
placement choices.
The next few commits will improve on these behaviors, and the
test outputs will automatically be updated to reflect this.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Laine Stump <laine@laine.org>
pSeries guests will soon need the new information; luckily,
we can figure it out automatically most of the time, so
users won't have to worry about it.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Laine Stump <laine@laine.org>
Fill them in right away rather than having to figure out at runtime
whether they are necessary or not.
virStorageSourceNetworkDefaultPort does not need to be exported any
more.
Several cases have incidental <serial> or <console> XML which isn't
the feature being tested. Upcoming changes will cause some
churn here, so instead drop these bits now.
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
This demonstrates that the previous qemu caps changes will use
-chardev for pci-serial on aarch64 machvirt.
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
Update the per device boot schema to add an optional loadparm parameter.
eg: <boot order='1' loadparm='2'/>
Extend the virDomainDeviceInfo to support loadparm option.
Modify the appropriate functions to parse loadparm from boot device xml.
Add the xml2xml test to validate the field.
Signed-off-by: Farhan Ali <alifm@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1214369
Consider the following XML:
<memoryBacking>
<hugepages>
<page size='2048' unit='KiB' nodeset='1'/>
</hugepages>
<source type='file'/>
<access mode='shared'/>
</memoryBacking>
<numa>
<cell id='0' cpus='0-3' memory='512000' unit='KiB'/>
<cell id='1' cpus='4-7' memory='512000' unit='KiB'/>
</numa>
The following cmd line is generated:
-object
memory-backend-file,id=ram-node0,mem-path=/var/lib/libvirt/qemu/ram,
share=yes,size=524288000 -numa node,nodeid=0,cpus=0-3,memdev=ram-node0
-object
memory-backend-file,id=ram-node1,mem-path=/var/lib/libvirt/qemu/ram,
share=yes,size=524288000 -numa node,nodeid=1,cpus=4-7,memdev=ram-node1
This is obviously wrong as for node 1 hugepages should have been
used. The hugepages configuration is more specific than <source
type='file'/>.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
We have a couple of hugepage-enabled domains in qemuxml2argvtest.
Unfortunately, when adding a test case there I often forget to
add it to the xml2xml test too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1459091
Currently, we are querying for the vhostuser interface name in the post
parse callback. At that time the interface might not yet exist.
However, it has to exist when starting the domain. Therefore it makes
more sense to query its name at that point. This partially
reverts 57b5e27.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Make the decision based on the usage of the childBuf buffer.
This fixes the oddity in the test case introduced by commit c1c4d0d
where we would format an empty pair tag.
There are currently some limitations in the emulated GICv3
that make it unsuitable as a default. Use GICv2 instead.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1450433
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Currently we consider all UNIX paths with a specific prefix as generated
by libvirt, but that's a wrong assumption. Let's make the detection
better by actually checking whether the whole path matches one of the
paths that we generate or have generated in the past.
The UNIX path hasn't been stored in the config XML since libvirt-1.3.1.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1446980
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Add a new <ioapic> element with a driver attribute.
Possible values are qemu and kvm. With 'qemu', the I/O
APIC can be put in the userspace even for KVM domains.
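For example, to put the I/O APIC in userspace:
  <features>
    <ioapic driver='qemu'/>
  </features>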
https://bugzilla.redhat.com/show_bug.cgi?id=1427005
We are currently parsing only rx/frames/max because that's the only
value that makes sense for us. The tun device just added support for
this one, and the others are only supported by hardware devices, which
we don't need to worry about: the only way we'd pass those to the
domain is using <hostdev/> or <interface type='hostdev'/>, and in
those cases the guest can modify the settings itself.
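A sketch of the corresponding XML (the chosen value is arbitrary):
  <interface type='network'>
    <source network='default'/>
    <model type='virtio'/>
    <coalesce>
      <rx>
        <frames max='32'/>
      </rx>
    </coalesce>
  </interface>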
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Our test data used a lot of different qemu binary paths and some
of them were based on downstream systems.
Note that there is one file where I had to add "accel=kvm" because
the qemuargv2xml code parses "/usr/bin/kvm" as virt type="kvm".
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
The virt type for QEMU can be modified by -machine attribute "accel"
so there is no need to have different QEMU binary paths.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Depending on the architecture, requirements for ACPI and UEFI can
be different; more specifically, while on x86 UEFI requires ACPI,
on aarch64 it's the other way around.
Enforce these requirements when validating the domain, and make
the error message more accurate by mentioning that they're not
necessarily applicable to all architectures.
Several aarch64 test cases had to be tweaked because they would
have failed the validation step otherwise.
We want pcie-root-ports to be used when available in QEMU,
but at the same time we need to ensure that hosts running
older QEMU releases keep working and that the user can
override the default at any time.
Add a comment for the original pcie-root-port test cases
to make it clear how these new test cases are different.
For NVDIMM devices it is optionally possible to specify the size
of internal storage for namespaces. Namespaces are a feature that
allows users to partition the NVDIMM for different uses.
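A sketch of how the namespace label storage size could be expressed,
extending the existing NVDIMM XML (sizes are illustrative):
  <memory model='nvdimm'>
    <source>
      <path>/tmp/nvdimm</path>
    </source>
    <target>
      <size unit='KiB'>523264</size>
      <node>0</node>
      <label>
        <size unit='KiB'>128</size>
      </label>
    </target>
  </memory>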
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Now that NVDIMM has found its way into libvirt, users might want
to fine tune some settings for each module separately. One such
setting is 'share=on|off' for the memory-backend-file object.
This setting - just like its name already suggests - enables
sharing the nvdimm module with other applications. Under the hood
it controls whether qemu mmap()s the file as MAP_PRIVATE or
MAP_SHARED.
Yet again, we have such a config knob in the domain XML, but it's just
an attribute of the numa <cell/>. This does not give fine enough
tuning on a per-memdevice basis, so we need to have the attribute
for each device too.
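A sketch of what the per-device knob might look like (assuming it is
exposed as an access attribute on the <memory> device; path and sizes
are placeholders):
  <memory model='nvdimm' access='shared'>
    <source>
      <path>/tmp/nvdimm</path>
    </source>
    <target>
      <size unit='KiB'>523264</size>
      <node>0</node>
    </target>
  </memory>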
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
NVDIMM is a new type of memory introduced in QEMU 2.6. The idea
is that we have a non-volatile memory module that keeps the data
persistent across domain reboots.
At the domain XML level, we already have some representation of
'dimm' modules. Long story short, NVDIMM will utilize the
existing <memory/> element that lives under <devices/> by adding
a new attribute 'nvdimm' to the existing @model and introduce a
new <path/> element for <source/> while reusing other fields. The
resulting XML would appear as:
<memory model='nvdimm'>
<source>
<path>/tmp/nvdimm</path>
</source>
<target>
<size unit='KiB'>523264</size>
<node>0</node>
</target>
<address type='dimm' slot='0'/>
</memory>
So far, this is just an XML parser/formatter extension. The QEMU
driver implementation is in the next commit.
For more info on NVDIMM visit the following web page:
http://pmem.io/
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Up until a while ago, libvirt would automatically add a legacy
PCI controller combo (dmi-to-pci-bridge + pci-bridge) to any
PCIe machine type (x86_64/q35 and aarch64/virt).
As a result, a number of input and output files in the test
suite ended up containing the legacy PCI controllers, even
though they are not needed or in any way relevant to the
feature being tested.
Get rid of most of the occurrences. Most of the time, this
just means removing the controllers from the input file and
regenerating the output files; in a few instances, some
minor tweaking is performed on the input file, most notably
removing the memory balloon: as memory balloon support was
not the scope of the test being changed, there is no loss
of test coverage from doing so.
Several occurrences of the legacy PCI controllers remain in
the test suite, both because removing their usage would have
required even more tweaking, and because we still want to
have coverage of this perfectly valid combination.
Add a new attribute 'rendernode' to <gl> spice element.
Give it to QEMU if qemu supports it (queued for 2.9).
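For example (the device path is a placeholder):
  <graphics type='spice'>
    <gl enable='yes' rendernode='/dev/dri/renderD128'/>
  </graphics>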
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This part introduces new XML elements for file-based
memory backing support and their parsing.
(It allows vhost-user to be used without hugepages.)
New xml elements:
<memoryBacking>
<source type="file|anonymous"/>
<access mode="shared|private"/>
<allocation mode="immediate|ondemand"/>
</memoryBacking>
Not only should we set the MTU on the host end of the device, but we
should also let qemu know what MTU we set.
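As a sketch, with an interface configured like
  <interface type='network'>
    <source network='default'/>
    <model type='virtio'/>
    <mtu size='9000'/>
  </interface>
qemu would be told about the MTU as well (for virtio-net this is expected
to end up as a host_mtu=9000 property on the corresponding -device line).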
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Based on the work of Mehdi Abaakouk <sileht@sileht.net>.
When parsing vhost-user interface XML and no ifname is found, we
can try to fill it in in the post parse callback. The way this works
is that we try to make up the interface name from the given socket path
and then ask openvswitch whether it knows the interface.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Set the VIR_PCI_CONNECT_AGGREGATE_SLOT flag for pcie-root-ports so
that they will be assigned to all the functions on a slot.
Some qemu test case outputs had to be adjusted due to the
pcie-root-ports now being put on multiple functions.
If there are multiple devices assigned to the different functions of a
single PCI slot, they will not work properly if the device at function
0 doesn't have its "multi" attribute turned on, so it makes sense for
libvirt to turn it on during PCI address assignment. Setting multi
then assures that the new setting is stored in the config (so it will
be used next time the domain is started), preventing any potential
problems in the case that a future change in the configuration
eliminates the devices on all non-0 functions (multi will still be set
for function 0 even though it is the only function in use on the slot,
which has no useful purpose, but also doesn't cause any problems).
(NB: If we were to instead just decide on the setting for
multifunction at runtime, a later removal of the non-0 functions of a
slot would result in a silent change in the guest ABI for the
remaining device on function 0 (although it may seem like an
inconsequential guest ABI change, it *is* a guest ABI change to turn
off the multi bit).)
virtio-pci is the way forward for aarch64 guests: it's faster
and less alien to people coming from other architectures.
Now that guest support is finally getting there (Fedora 24,
CentOS 7.3, Ubuntu 16.04 and Debian testing all support
virtio-pci out of the box), we'd like to start using it by
default instead of virtio-mmio.
Users and applications can already opt-in by explicitly using
<address type='pci'/>
inside the relevant elements, but that's kind of cumbersome and
requires all users and management applications to adapt, which
we'd really like to avoid.
What we can do instead is use virtio-mmio only if the guest
already has at least one virtio-mmio device, and use virtio-pci
in all other situations.
That means existing virtio-mmio guests will keep using the old
addressing scheme, and new guests will automatically be created
using virtio-pci instead. Users can still override the default
in either direction.
Existing tests such as aarch64-aavmf-virtio-mmio and
aarch64-virtio-pci-default already cover all possible
scenarios, so no additions to the test suites are necessary.
Since the great rework of how we store vcpu- and iothread-related
data, we have an overly complex part of code that tries to format the
scheduler tuning data in as few lines as possible by grouping
settings for multiple threads. That was designed as input syntax
sugar for users, but we don't need to also use it when formatting
the XML. Switching to a simple enumeration makes the code nicer,
shorter and more welcoming to future changes.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Modify _virDomainBlockIoTuneInfo and the rng schema to support the group_name
option for iotune throttling. Document the new value.
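A sketch of the new value in context (numbers are arbitrary):
  <iotune>
    <total_bytes_sec>10000000</total_bytes_sec>
    <group_name>limit-group-1</group_name>
  </iotune>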
Signed-off-by: John Ferlan <jferlan@redhat.com>
Don't use duplicate disk addresses in test cases unless it's useful. At
least the test case will break once we have a check for uniqueness of
addresses at time of domain definition.
Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
With the QEMU components in place, provide the XML parsing to
invoke that code when given the following XML snippet:
<hostdev mode='subsystem' type='scsi_host'>
<source protocol='vhost' wwpn='naa.501234567890abcd'/>
</hostdev>
An optional address element can be specified within the hostdev
(pick CCW or PCI as necessary):
<address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0625'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Add basic vhost-scsi tests which were cloned from hostdev-scsi-virtio-scsi
in both xml2argv and xml2xml. Added ones for both vhost-scsi-ccw and
vhost-scsi-pci since the syntaxes are slightly different between them.
Also adjusted the docs to describe the changes.
Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
For machinetypes with a pci-root bus (all legacy PCI), libvirt will
make a "fake" reservation for one extra slot prior to assigning
addresses to unaddressed PCI endpoint devices in the domain. This will
trigger auto-adding of a pci-bridge for the final device to be
assigned an address *if that device would have otherwise instead been
the last device on the last available pci-bridge*; thus it assures
that there will always be at least one slot left open in the domain's
bus topology for expansion (which is important both for hotplug (since
a new pci-bridge can't be added while the guest is running) as well as
for offline additions to the config (since adding a new device might
otherwise in some cases require re-addressing existing devices, which
we want to avoid)).
It's important to note that for the above case (legacy PCI), we must
check for the special case of all slots on all buses being occupied
*prior to assigning any addresses*, and avoid attempting to reserve
the extra address in that case, because there is no free address in
the existing topology, so no place to auto-add a pci-bridge for
expansion (i.e. it would always fail anyway). Since that condition can
only be reached by manual intervention, this is acceptable.
For machinetypes with pcie-root (Q35, aarch64 virt), libvirt's
methodology for automatically expanding the bus topology is different
- pcie-root-ports are plugged into slots (soon to be functions) of
pcie-root as needed, and the new endpoint devices are assigned to the
single slot in each pcie-root-port. This is done so that the devices
are, by default, hotpluggable (the slots of pcie-root don't support
hotplug, but the single slot of the pcie-root-port does). Since
pcie-root-ports can only be plugged into pcie-root, and we don't
auto-assign endpoint devices to the pcie-root slots, this means
topology expansion doesn't compete with endpoint devices for slots, so
we don't need to worry about checking for all "useful" slots being
free *prior* to assigning addresses to new endpoint devices - as a
matter of fact, if we attempt to reserve the open slots before the
used slots, it can lead to errors.
Instead this patch just reserves one slot for a "future potential"
PCIe device after doing the assignment for actual devices, but only
if the only PCI controller defined prior to starting address
assignment was pcie-root, and only if we auto-added at least one PCI
controller during address assignment. This assures two things:
1) that reserving the open slots will only be done when the domain is
initially defined, never at any time after, and
2) that if the user understands enough about PCI controllers that they
are adding them manually, that we don't mess up their plan by
adding extras - if they know enough to add one pcie-root-port, or
to manually assign addresses such that no pcie-root-ports are
needed, they know enough to add extra pcie-root-ports if they want
them (this could be called the "libguestfs clause", since
libguestfs needs to be able to create domains with as few
devices/controllers as possible).
This is set to reserve a single free port for now, but could be
increased in the future if public sentiment goes in that direction
(it's easy to increase later, but essentially impossible to decrease)
Real Q35 hardware has an ICH9 chip that includes several integrated
devices at particular addresses (see the file docs/q35-chipset.cfg in
the qemu source). libvirt already attempts to put the first two sets
of ich9 USB2 controllers it finds at 00:1D.* and 00:1A.* to match the
real hardware. This patch does the same for the ich9 "HD audio"
device.
The main inspiration for this patch is that currently the *only*
device in a reasonable "workstation" type virtual machine config that
requires a legacy PCI slot is the audio device. Without this patch,
the standard Q35 machine created by virt-manager will have a
dmi-to-pci-bridge and a pci-bridge just for the sound device; with the
patch (and if you change the sound device model from the default
"ich6" to "ich9"), the machine definition constructed by virt-manager
has absolutely no legacy PCI controllers - any legacy PCI devices
(e.g. video and sound) are on pcie-root as integrated devices.
Previously we added a set of EHCI+UHCI controllers to Q35 machines to
mimic real hardware as closely as possible, but recent discussions
have pointed out that the nec-usb-xhci (USB3) controller is much more
virtualization-friendly (uses less CPU), so this patch switches the
default for Q35 machinetypes to add an XHCI instead (if it's
supported, which it of course *will* be).
Since none of the existing test cases left out USB controllers in the
input XML, a new Q35 test case was added which has *no* devices, so
ends up with only the defaults always put in by qemu, plus those added
by libvirt.
Now that a dmi-to-pci-bridge is automatically added just as it's needed
(when a pci-bridge is being added), we no longer have any need to
force-add one to every single Q35 domain.
A few of the qemu test cases assume that a dmi-to-pci-bridge will
always be added at index 1, and so they omit it from the input data
even though a pci-bridge is present at index 2, e.g.:
<controller type='pci' index='0' model='pcie-root'/>
<controller type='pci' index='2' model='pci-bridge'/>
Support for this odd practice was discussed on libvir-list and we
decided that the complex code required to make this continue was not
worth the headache of maintaining. So instead, this patch modifies the
test cases to manually add a dmi-to-pci-bridge at index 1 (since an
upcoming patch is going to eliminate the unconditional adding of
dmi-to-pci-bridge).
Because the auto-add was placing the dmi-to-pci-bridge later in the
list (even though it has a lower index), the test output is also
updated to account for the new order (which puts the PCI
controllers in index order).
Previously libvirt would only add pci-bridge devices automatically
when an address was requested for a device that required a legacy PCI
slot and none was available. This patch expands that support to
dmi-to-pci-bridge (which is needed in order to add a pci-bridge on a
machine with a pcie-root), and pcie-root-port (which is needed to add
a hotpluggable PCIe device). It does *not* automatically add
pcie-switch-upstream-ports or pcie-switch-downstream-ports (and
currently there are no plans for that).
Given the existing code to auto-add pci-bridge devices, automatically
adding pcie-root-ports is fairly straightforward. The
dmi-to-pci-bridge support is a bit tricky though, for a few reasons:
1) Although the only reason to add a dmi-to-pci-bridge is so that
there is a reasonable place to plug in a pci-bridge controller,
most of the time it's not the presence of a pci-bridge *in the
config* that triggers the requirement to add a dmi-to-pci-bridge.
Rather, it is the presence of a legacy-PCI device in the config,
which triggers auto-add of a pci-bridge, which triggers auto-add of
a dmi-to-pci-bridge (this is handled in
virDomainPCIAddressSetGrow() - if there's a request to add a
pci-bridge we'll check if there is a suitable bus to plug it into;
if not, we first add a dmi-to-pci-bridge).
2) Once there is already a single dmi-to-pci-bridge on the system,
there won't be a need for any more, even if it's full, as long as
there is a pci-bridge with an open slot - you can also plug
pci-bridges into existing pci-bridges. So we have to make sure we
don't add a dmi-to-pci-bridge unless there aren't any
dmi-to-pci-bridges *or* any pci-bridges.
3) Although it is strongly discouraged, it is legal for a pci-bridge
to be directly plugged into pcie-root, and we don't want to
auto-add a dmi-to-pci-bridge if there is already a pci-bridge
that's been forced directly into pcie-root.
Although libvirt will now automatically create a dmi-to-pci-bridge
when it's needed, the code still remains for now that forces a
dmi-to-pci-bridge on all domains with pcie-root (in
qemuDomainDefAddDefaultDevices()). That will be removed in a future
patch.
For now, the pcie-root-ports are added one to a slot, which is a bit
wasteful and means it will fail after 31 total PCIe devices (30 if
there are also some PCI devices), but helps keep the changeset down
for this patch. A future patch will have 8 pcie-root-ports sharing the
functions on a single slot.
The nec-usb-xhci device (which is a USB3 controller) has always
presented itself as a PCI device when plugged into a legacy PCI slot,
and a PCIe device when plugged into a PCIe slot, but libvirt has
always auto-assigned it to a legacy PCI slot.
This patch changes that behavior to auto-assign to a PCIe slot on
systems that have pcie-root (e.g. Q35 and aarch64/virt).
Since we don't yet auto-create pcie-*-port controllers on demand, this
means a config with an nec-xhci USB controller that has no PCI address
assigned will also need to have an otherwise-unused pcie-*-port
controller specified:
<controller type='pci' model='pcie-root-port'/>
<controller type='usb' model='nec-xhci'/>
(this assumes there is an otherwise-unused slot on pcie-root to accept
the pcie-root-port)
The e1000e is an emulated network device based on the Intel 82574,
present in qemu 2.7.0 and later. Among other differences from the
e1000, it presents itself as a PCIe device rather than legacy PCI. In
order to get it assigned to a PCIe controller, this patch updates the
flags setting for network devices when the model name is "e1000e".
(Note that for some reason libvirt has never validated the network
device model names other than to check that there are no dangerous
characters in them. That should probably change, but is the subject of
another patch.)
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1343094
libvirt previously assigned nearly all devices to a "hotpluggable"
legacy PCI slot even on machines with a PCIe root bus (and even though
most such machines don't even support hotplug on legacy PCI slots!)
Forcing all devices onto legacy PCI slots means that the domain will
need a dmi-to-pci-bridge (to convert from PCIe to legacy PCI) and a
pci-bridge (to provide hotpluggable legacy PCI slots which, again,
usually aren't hotpluggable anyway).
To help reduce the need for these legacy controllers, this patch tries
to assign virtio-1.0-capable devices to PCIe slots whenever possible,
by setting appropriate connectFlags in
virDomainCalculateDevicePCIConnectFlags(). Happily, when that function
was written (just a few commits ago) it was created with a
"virtioFlags" argument, set by both of its callers, which is the
proper connectFlags to set for any virtio-*-pci device - depending on
the arch/machinetype of the domain, and whether or not the qemu binary
supports virtio-1.0, that flag will have either been set to PCI or
PCIe. This patch merely enables the functionality by setting the flags
for the device to whatever is in virtioFlags if the device is a
virtio-*-pci device.
NB: the first virtio video device will be placed directly on bus 0
slot 1 rather than on a pcie-root-port due to the override for primary
video devices in qemuDomainValidateDevicePCISlotsQ35(). Whether or not
to change that is a topic of discussion, but this patch doesn't change
that particular behavior.
NB2: since the slot must be hotpluggable, and pcie-root (the PCIe root
complex) does *not* support hotplug, this means that suitable
controllers must also be in the config (i.e. either pcie-root-port, or
pcie-downstream-port). For now, libvirt doesn't add those
automatically, so if you put virtio devices in a config for a qemu
that has PCIe-capable virtio devices, you'll need to add extra
pcie-root-ports yourself. That requirement will be eliminated in a
future patch, but for now, it's simple to do this:
<controller type='pci' model='pcie-root-port'/>
<controller type='pci' model='pcie-root-port'/>
<controller type='pci' model='pcie-root-port'/>
...
Partially Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1330024
As with other devices assign the slot number right away when adding the
device. This will make the slot numbers static as we do with other
addressing elements and it will ultimately simplify allocation of the
alias in a static way which does not break with qemu.
We're keeping some things at default and that's not something we want to
do intentionally. Let's save some sensible defaults upfront in order to
avoid having problems later. The details for the defaults (of the newer
implementation) can be found in qemu's commit 5400c02b90bb:
http://git.qemu.org/?p=qemu.git;a=commit;h=5400c02b90bb
Since we are merely saving the defaults it will not change the guest ABI
and thanks to the fact that we're doing it in the PostParse callback it
will not break the ABI stability checks.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
The old ivshmem is deprecated in QEMU, so let's use the better
ivshmem-{plain,doorbell} variants instead.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Add an optional "tls='yes|no'" attribute for a TCP chardev.
For QEMU, this allows disabling the host config 'chardev_tls' setting
for a domain chardev channel by setting the value to "no", or
attempting to use a host TLS environment by setting the value to "yes"
when the host config 'chardev_tls' setting is disabled but a TLS environment
is configured via either the host config 'chardev_tls_x509_cert_dir' or
'default_tls_x509_cert_dir'.
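A sketch of the attribute on a TCP chardev (host, port and channel name
are placeholders):
  <channel type='tcp'>
    <source mode='connect' host='127.0.0.1' service='5555' tls='yes'/>
    <target type='virtio' name='org.example.channel.0'/>
  </channel>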
Signed-off-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
The code is entirely correct, but it still managed to trip me
up when I first ran into it because I did not realize right away
that VIR_PCI_CONNECT_TYPES_ENDPOINT was not a single flag, but
rather a mask including both VIR_PCI_CONNECT_TYPE_PCI_DEVICE and
VIR_PCI_CONNECT_TYPE_PCIE_DEVICE.
In order to save the next distracted traveler in PCI Address Land
some time, document this fact with a comment. Add a test case for
the behavior as well.
It was missing... Also since I'm using the soft link from qemuxml2xmloutdata
to the qemuxml2argvdata file, modify the output file to have the necessary
<address> elements plus the mouse and keyboard.
Signed-off-by: John Ferlan <jferlan@redhat.com>
The x86 CPU driver translated each CPU definition from domain XML into
CPUID data and then back to CPU definition. This effectively sorted the
list of CPU features according to their CPUID values. Since this is
going to change, we need to reorder CPU features in a few test files to
make sure the generated QEMU command lines will not change.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Most of QEMU's PCI display device models, such as:
libvirt video/model/@type QEMU -device
------------------------- ------------
cirrus cirrus-vga
vga VGA
qxl qxl-vga
virtio virtio-vga
come with a linear framebuffer (sometimes called "VGA compatibility
framebuffer"). This linear framebuffer lives in one of the PCI device's
MMIO BARs, and allows guest code (primarily: firmware drivers, and
non-accelerated OS drivers) to display graphics with direct memory access.
Due to architectural reasons on aarch64/KVM hosts, this kind of
framebuffer doesn't / can't work in
qemu-system-(arm|aarch64) -M virt
machines. Cache coherency issues guarantee a corrupted / unusable display.
The problem has been researched by several people, including kvm-arm
maintainers, and it's been decided that the best way (practically the only
way) to have boot time graphics for such guests is to consolidate on
QEMU's "virtio-gpu-pci" device.
From <https://bugzilla.redhat.com/show_bug.cgi?id=1195176>, libvirt
supports
<devices>
<video>
<model type='virtio'/>
</video>
</devices>
but libvirt unconditionally maps @type='virtio' to QEMU's "virtio-vga"
device model. (See the qemuBuildDeviceVideoStr() function and the
"qemuDeviceVideo" enum impl.)
According to the above, this is not right for the "virt" machine type; the
qemu-system-(arm|aarch64) binaries don't even recognize the "virtio-vga"
device model (justifiedly). Whereas "virtio-gpu-pci", which is a pure
virtio device without a compatibility framebuffer, is available, and works
fine.
(The ArmVirtQemu ("AAVMF") platform of edk2 -- that is, the UEFI firmware
for "virt" -- supports "virtio-gpu-pci", as of upstream commit
3ef3209d3028. See
<https://tianocore.acgmultimedia.com/show_bug.cgi?id=66>.)
Override the default mapping of "virtio", from "virtio-vga" to
"virtio-gpu-pci", if qemuDomainMachineIsVirt() evaluates to true.
Cc: Andrea Bolognani <abologna@redhat.com>
Cc: Drew Jones <drjones@redhat.com>
Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
Cc: Martin Kletzander <mkletzan@redhat.com>
Suggested-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1372901
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Acked-by: Martin Kletzander <mkletzan@redhat.com>
Add a new TLS X.509 certificate type - "chardev". This will handle the
creation of a TLS certificate capability (and possibly repository) for
properly configured character device TCP backends.
Unlike the vnc and spice there is no "listen" or "passwd" associated. The
credentials eventually will be handled via a libvirt secret provided to
a specific backend.
Make use of the default verify option as well.
Signed-off-by: John Ferlan <jferlan@redhat.com>
When the user doesn't specify any model for a USB controller,
we use an architecture-dependent default, but we don't reflect
it in the guest XML.
Pick the default USB controller model when parsing the guest
XML instead of when creating the QEMU command line, so that
our choice is saved back to disk.
https://bugzilla.redhat.com/show_bug.cgi?id=1356937
Add the definitions to allow for viewing/setting cgroup period and quota
limits for IOThreads.
This is similar to the work done for emulator quota and period by
commit ids 'b65dafa' and 'e051c482'.
Being able to view/set the IOThread-specific values is related to more
recent changes adding global period (commit id '4d92d58f') and global
quota (commit id '55ecdae') definitions and qemu support (commit ids
'4e17ff79' and 'fbcbd1b2'). With only a global setting, though, the
IOThread value in the cgroup hierarchy could somehow be set "outside of
libvirt" to a value that is incompatible with the global value.
Allowing control over IOThread-specific values provides the capability
to alter the IOThread values as necessary.
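A sketch of the corresponding <cputune> settings (values are arbitrary):
  <cputune>
    <iothread_period>100000</iothread_period>
    <iothread_quota>50000</iothread_quota>
  </cputune>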
To allow using failover with gluster it's necessary to specify multiple
volume hosts. Add support for starting qemu with such configurations.
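A sketch of such a configuration (hostnames and volume name are placeholders):
  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    <source protocol='gluster' name='volume/image.raw'>
      <host name='gluster1.example.com'/>
      <host name='gluster2.example.com' port='24007'/>
    </source>
    <target dev='vda' bus='virtio'/>
  </disk>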
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
We were requiring a USB port path in the schema, but not enforcing it.
Omitting the USB port would lead to libvirt formatting it as (null).
Such a domain cannot be started and will disappear after a libvirtd restart
(since the XML cannot be parsed back).
Only format the port if it has been specified and mark it as optional
in the XML schema.
Commit ids '9bbf0d7e6' and '2552fec24' added some XML parsing tests
for a LUKS volume to use a 'passphrase' secret format. After the commits,
this was deemed to be incorrect, so convert the various tests to use
the volume usage format, where the 'usage' is the path to the volume
rather than a user-defined name string.
Also, removed the qemuxml2argv-luks-disk-cipher.xml since it was
just a duplicate of qemuxml2argv-luks-disks.xml.
Signed-off-by: John Ferlan <jferlan@redhat.com>
For type='ethernet' interfaces only.
(This patch had been pushed earlier in
commit 0b4645a7e0, but was reverted in
commit 84d47a3cce because it had been
accidentally pushed during the freeze for release 2.0.0)
In order to use more common code and set up for a future type, modify the
encryption secret to allow the "usage" attribute or the "uuid" attribute
to define the secret. The "usage" in the case of a volume secret would be
the path to the volume as dictated by the backwards compatibility brought
on by virStorageGenerateQcowEncryption where it set up the usage field as
the vol->target.path and didn't allow someone to provide it. This carries
into virSecretObjListFindByUsageLocked which takes the secret usage attribute
value from the domain disk definition and compares it against the
usage type from the secret definition. Since none of the code dealing
with qcow/qcow2 encryption secrets uses usage for lookup, it's a mostly
cosmetic change. The real usage comes in a future path where the encryption
is expanded to be a luks volume and the secret will allow definition of
the usage field.
This code will make use of the virSecretLookup{Parse|Format}Secret common code.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Make them work again... The xml2xml had been working, but the xml2argv
were not working. Making the xml2argv work required a few adjustments to
the xml to update to more recent times.
Signed-off-by: John Ferlan <jferlan@redhat.com>
There has been some progress lately in enabling virtio-pci on
aarch64 guests; however, guest OS support is still spotty at best,
so most guests are going to be using virtio-mmio instead.
Currently, mach-virt guests are closely modeled after q35 guests,
and that includes always adding a dmi-to-pci-bridge that's just
impossible to get rid of. While that's acceptable (if suboptimal)
for q35, where you will always need some kind of PCI device anyway,
mach-virt guests should be allowed to avoid it.
Until now, a Q35 domain (or arm/virt, or any other domain that has a
pcie-root bus) would always have a pci-bridge added, so that there
would be a hotpluggable standard PCI slot available to plug in any PCI
devices that might be added. This patch removes the explicit add,
instead relying on the pci-bridge being auto-added during PCI address
assignment (it will add a pci-bridge if there are no free slots).
This doesn't eliminate the dmi-to-pci-bridge controller that is
explicitly added whether or not a standard PCI slot is required (and
that is almost never used as anything other than a converter between
pcie.0's PCIe slots and standard PCI). That will be done separately.
This option allows or disallows detection of zero-writes if it is set to
"on" or "off", respectively. It can also be set to "unmap", in which
case it will try discarding that part of the image based on the value of
the "discard" option.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
This new listen type is currently supported only by spice graphics.
It's introduced to make it easier and clearer to specify that we should
not listen anywhere, in order to start a guest with OpenGL support.
The old way to do this was to set spice graphics autoport='no' and not
specify any ports. The new way is to use <listen type='none'/>. In
order to be able to migrate to an old libvirt, the migratable XML will be
generated without the listen element and with autoport='no'. Also the
old configuration will be automatically converted to this listen
type.
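A sketch of both forms:
old form:
  <graphics type='spice' autoport='no'/>
new form:
  <graphics type='spice'>
    <listen type='none'/>
    <gl enable='yes'/>
  </graphics>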
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
VNC graphics already supports sockets but only via 'socket' attribute.
This patch converts that attribute into listen type 'socket'.
For backward compatibility we need to handle both listen type 'socket' and the
'socket' attribute properly to support old and new XMLs. If both are provided
they have to match; if only one of them is provided we need to be able to
parse that configuration too.
To not break migration back to old libvirt if the socket is provided by user we
need to generate migratable XML without the listen element and use only 'socket'
attribute.
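As a rough example (the socket path is made up), both of these should now be
accepted and mean the same thing:
  <graphics type='vnc' socket='/tmp/demo-vnc.sock'/>
and
  <graphics type='vnc'>
    <listen type='socket' socket='/tmp/demo-vnc.sock'/>
  </graphics>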
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Historically, we added heads=1 to video devices, but for qxl, for example,
we did not reflect that on the command line.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1283207
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Hand-entering indexes for 20 PCI controllers is not as tedious as
manually determining and entering their PCI addresses, but it's still
annoying, and the algorithm for determining the proper index is
incredibly simple (in all cases except one) - just pick the lowest
unused index.
The one exception is USB2 controllers because multiple controllers in
the same group have the same index. For these we look to see if 1) the
most recently added USB controller is also a USB2 controller, and 2)
the group *that* controller belongs to doesn't yet have a controller
of the exact model we're just now adding - if both are true, the new
controller gets the same index, but in all other cases we just assign
the lowest unused index.
With this patch in place and combined with the automatic PCI address
assignment, we can define a PCIe switch with several ports like this:
<controller type='pci' model='pcie-root-port'/>
<controller type='pci' model='pcie-switch-upstream-port'/>
<controller type='pci' model='pcie-switch-downstream-port'/>
<controller type='pci' model='pcie-switch-downstream-port'/>
<controller type='pci' model='pcie-switch-downstream-port'/>
<controller type='pci' model='pcie-switch-downstream-port'/>
<controller type='pci' model='pcie-switch-downstream-port'/>
...
These will each get a unique index, and PCI addresses that connect
them together appropriately with no pesky numbers required.
Add a new element to <domain> XML:
  <os>
    <acpi>
      <table type="slic">/path/to/acpi/table/file</table>
    </acpi>
  </os>
To supply a path to a SLIC (Software Licensing) ACPI
table blob.
https://bugzilla.redhat.com/show_bug.cgi?id=1327537
Rather than only assigning a PCI address when no address is given at
all, also do it when the config says that the address type is 'pci',
but it gives no address (virDeviceInfoPCIAddressWanted()).
There are also several places after parsing but prior to address
assignment where code previously expected that any info with address
type='pci' would have a *valid* PCI address, which isn't always the
case - now we check not only for type='pci', but also for a valid
address (virDeviceInfoPCIAddressPresent()).
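As a minimal sketch (the device is chosen arbitrarily), a config such as:
  <controller type='scsi' model='virtio-scsi'>
    <address type='pci'/>
  </controller>
will now get the domain, bus, slot and function filled in during address
assignment instead of being treated as an already-complete PCI address.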
The test case added in this patch was directly copied from Cole's patch titled:
qemu: Wire up address type=pci auto_allocate
Commit 55320c23 introduced a new test for VNC to test if
vnc_auto_unix_socket is set in qemu.conf, but forgot to enable it in
qemuxml2argvtest.c.
This patch also moves the code in qemuxml2xmltest.c next to the other VNC
tests and refactors the test so we also check the case of parsing active
XML.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
The only case where the hardware capabilities influence the result
is when no <gic/> element was provided.
The test programs now ensure both that the correct GIC version is
picked in that case, and that hardware capabilities are not taken
into account when the user has already picked a GIC version.
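For reference, an explicitly picked GIC version looks roughly like:
  <features>
    <gic version='3'/>
  </features>
and in that case the hardware capabilities should not influence the result.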
We support omitting the listen attribute of the graphics element, so we
should also support omitting the address attribute of the listen element.
This patch also updates libvirt to always add a listen element into the
domain XML, except for VNC graphics if the socket attribute is specified.
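A sketch of a listen element with the address omitted:
  <graphics type='spice' autoport='yes'>
    <listen type='address'/>
  </graphics>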
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Add the ability to add an 'iothread' to the controller, matching how
virtio-scsi-pci and virtio-scsi-ccw iothreads are implemented in qemu.
Describe the new functionality and add tests to parse/validate that the
new attribute can be added.
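A rough sketch of the new attribute (the iothread id is arbitrary and a
matching <iothreads> count is assumed in the domain):
  <controller type='scsi' model='virtio-scsi' index='0'>
    <driver iothread='2'/>
  </controller>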
Similarly to what commit 7140807917 did with some internal paths,
clear vnc socket paths that were generated by us. Having such path in
the definition can cause trouble when restoring the domain. The path is
generated to the per-domain directory that contains the domain ID.
However, that ID will be different upon restoration, so qemu won't be
able to create that socket because the directory will not be prepared.
To be able to migrate to older libvirt, skip formatting the socket path
in migratable XML if it was autogenerated. And mark it as autogenerated
if it already exists and we're parsing live XML.
Best viewed with '-C'.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1326270
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Currently we only allow /dev/random and /dev/hwrng as host input
for <rng><backend model='random'/> device. This was added after
various upstream discussions in commit 4932ef45
However this restriction has generated quite a few complaints over
the years, so a new discussion was initiated:
http://www.redhat.com/archives/libvir-list/2016-April/msg00987.html
Several people suggested removing the restriction, and nobody really
spoke up to defend it. So this patch drops the path restriction
entirely.
https://bugzilla.redhat.com/show_bug.cgi?id=1074464
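With the restriction gone, something like this (the path is only an example)
is accepted:
  <rng model='virtio'>
    <backend model='random'>/dev/urandom</backend>
  </rng>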
While working on the tests for the secret initialization vector, I found
that the existing iSCSI tests were lacking in how they defined the IQN.
Many had IQNs of just 'iqn.1992-01.com.example' for one disk while using
'iqn.1992-01.com.example/1' for the second disk (same for hostdevs - guess
how they were copied/generated).
Typically (and as documented), IQNs would be of the form
'iqn.1992-01.com.example:storage/1', indicating an IQN using "storage" as
the naming-authority-specific string and "/1" for the iSCSI LUN.
So modify the input XML's to use the more proper format - this of course
has a ripple effect on the output XML and the args.
Also note that the "%3A" is generated by the virURIFormat/xmlSaveUri
to represent the colon.
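A sketch of a disk using the adjusted IQN format (the host name is made up):
  <disk type='network' device='disk'>
    <source protocol='iscsi' name='iqn.1992-01.com.example:storage/1'>
      <host name='example.org' port='3260'/>
    </source>
    <target dev='vda' bus='virtio'/>
  </disk>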
Signed-off-by: John Ferlan <jferlan@redhat.com>
This controller provides a single PCIe port on a new root. It is
similar to pci-expander-bus, intended to provide a bus that can be
associated with a guest-identifiable NUMA node, but is for
machinetypes with PCIe rather than PCI (e.g. q35-based machinetypes).
Aside from PCIe vs. PCI, the other main difference is that a
pci-expander-bus has a companion pci-bridge that is automatically
attached along with it, but pcie-expander-bus has only a single port,
and that port will only connect to a pcie-root-port, or to a
pcie-switch-upstream-port. In order for the bus to be of any use in
the guest, it must have either a pcie-root-port or a
pcie-switch-upstream-port attached (and one or more
pcie-switch-downstream-ports attached to the
pcie-switch-upstream-port).
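A hedged sketch of the new controller (busNr and node values are arbitrary,
addresses elided):
  <controller type='pci' model='pcie-expander-bus'>
    <target busNr='180'>
      <node>1</node>
    </target>
  </controller>
with a pcie-root-port or pcie-switch-upstream-port then attached to it so
that devices can actually be plugged in.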
This is backed by the qemu device "pxb".
The pxb device always includes a pci-bridge that is at the bus number
of the pxb + 1.
busNr and <node> from the <target> subelement are used to set the
bus_nr and numa_node options for pxb.
During post-parse we validate that the domain's machinetype is
440fx-based (since the pxb device only works on 440fx-based machines),
and <node> also gets a sanity check to assure that the NUMA node
specified for the pxb (if any - it's optional) actually exists on the
guest.
This is a standard PCI root bus (not a bridge) that can be added to a
440fx-based domain. Although it uses a PCI slot, this is *not* how it
is connected into the PCI bus hierarchy, but is only used for
control. Each pci-expander-bus provides 32 slots (0-31) that can
accept hotplug of standard PCI devices.
The usefulness of pci-expander-bus relative to a pci-bridge is that
the NUMA node of the bus can be specified with the <node> subelement
of <target>. This gives guest-side visibility to the NUMA node of
attached devices (presuming that management apps only assign a device
to a bus that has a NUMA node number matching the node number of the
device on the host).
Each pci-expander-bus also has a "busNr" attribute. The expander-bus
itself will take the busNr specified, and all buses that are connected
to this bus (including the pci-bridge that is automatically added to
any expander bus of model "pxb" (see the next commit)) will use
busNr+1, busNr+2, etc, and the pci-root (or the expander-bus with next
lower busNr) will use bus numbers lower than busNr.
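A rough example (busNr and node values are made up):
  <controller type='pci' model='pci-expander-bus'>
    <target busNr='254'>
      <node>1</node>
    </target>
  </controller>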
When support for dmi-to-pci-bridge was added, it was assumed that,
just as with the pci-root bus, slot 0 was reserved. This is not the
case - it can be used to connect a device just like any other slot, so
remove the restriction and update the test cases that auto-assign an
address on a dmi-to-pci-bridge.