The ftp protocol is already recognized by qemu/KVM so add this support to
libvirt as well.
The XML should look like the following:
<disk type='network' device='cdrom'>
<source protocol='ftp' name='/url/path'>
<host name='host.name' port='21'/>
</source>
</disk>
Signed-off-by: Aline Manera <alinefm@br.ibm.com>
QEMU/KVM already allows a HTTP URL for the cdrom ISO image so add this support
to libvirt as well.
The XML should look like the following:
<disk type='network' device='cdrom'>
<source protocol='http' name='/url/path'>
<host name='host.name' port='80'/>
</source>
</disk>
Signed-off-by: Aline Manera <alinefm@br.ibm.com>
https://bugzilla.redhat.com/show_bug.cgi?id=924153
Commit 904e05a2 (v0.9.9) added a per-<disk> seclabel element with
an attribute relabel='no' in order to try and minimize the
impact of shutdown delays when an NFS server disappears. The idea
was that if a disk is on NFS and can't be labeled in the first
place, there is no need to attempt the (no-op) relabel on domain
shutdown. Unfortunately, the way this was implemented was by
modifying the domain XML so that the optimization would survive
libvirtd restart, but in a way that is indistinguishable from an
explicit user setting. Furthermore, once the setting is turned
on, libvirt avoids attempts at labeling, even for operations like
snapshot or blockcopy where the chain is being extended or pivoted
onto non-NFS, where SELinux labeling is once again possible. As
a result, it was impossible to do a blockcopy to pivot from an
NFS image file onto a local file.
The solution is to separate the semantics of a chain that must
not be labeled (which the user can set even on persistent domains)
vs. the optimization of not attempting a relabel on cleanup (a
live-only annotation), and using only the user's explicit notation
rather than the optimization as the decision on whether to skip
a label attempt in the first place. When upgrading an older
libvirtd to a newer, an NFS volume will still attempt the relabel;
but as the avoidance of a relabel was only an optimization, this
shouldn't cause any problems.
In the ideal future, libvirt will eventually have XML describing
EVERY file in the backing chain, with each file having a separate
<seclabel> element. At that point, libvirt will be able to track
more closely which files need a relabel attempt at shutdown. But
until we reach that point, the single <seclabel> for the entire
<disk> chain is treated as a hint - when a chain has only one
file, then we know it is accurate; but if the chain has more than
one file, we have to attempt relabel in spite of the attribute,
in case part of the chain is local and SELinux mattered for that
portion of the chain.
* src/conf/domain_conf.h (_virSecurityDeviceLabelDef): Add new
member.
* src/conf/domain_conf.c (virSecurityDeviceLabelDefParseXML):
Parse it, for live images only.
(virSecurityDeviceLabelDefFormat): Output it.
(virDomainDiskDefParseXML, virDomainChrSourceDefParseXML)
(virDomainDiskSourceDefFormat, virDomainChrDefFormat)
(virDomainDiskDefFormat): Pass flags on through.
* src/security/security_selinux.c
(virSecuritySELinuxRestoreSecurityImageLabelInt): Honor labelskip
when possible.
(virSecuritySELinuxSetSecurityFileLabel): Set labelskip, not
norelabel, if labeling fails.
(virSecuritySELinuxSetFileconHelper): Fix indentation.
* docs/formatdomain.html.in (seclabel): Document new xml.
* docs/schemas/domaincommon.rng (devSeclabel): Allow it in RNG.
* tests/qemuxml2argvdata/qemuxml2argv-seclabel-*-labelskip.xml:
* tests/qemuxml2argvdata/qemuxml2argv-seclabel-*-labelskip.args:
* tests/qemuxml2xmloutdata/qemuxml2xmlout-seclabel-*-labelskip.xml:
New test files.
* tests/qemuxml2argvtest.c (mymain): Run the new tests.
* tests/qemuxml2xmltest.c (mymain): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
This resolves the issue that prompted the filing of
https://bugzilla.redhat.com/show_bug.cgi?id=928638
(although the request there is for something much larger and more
general than this patch).
commit f3868259ca disabled the
forwarding to upstream DNS servers of unresolved DNS requests for
names that had no domain, but were just simple host names (no "."
character anywhere in the name). While this behavior is frowned upon
by DNS root servers (that's why it was changed in libvirt), it is
convenient in some cases, and since dnsmasq can be configured to allow
it, it must not be strictly forbidden.
This patch restores the old behavior, but since it is usually
undesirable, restoring it requires specification of a new option in
the network config. Adding the attribute "forwardPlainNames='yes'" to
the <dns> element does the trick - when that attribute is added to a
network config, any simple hostnames that can't be resolved by the
network's dnsmasq instance will be forwarded to the DNS servers listed
in the host's /etc/resolv.conf for an attempt at resolution (just as
any FQDN would be forwarded).
When that attribute *isn't* specified, unresolved simple names will
*not* be forwarded to the upstream DNS server - this is the default
behavior.
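For illustration, a minimal sketch of a network definition using the new
attribute (the network name is just an example):
<network>
<name>example</name>
<dns forwardPlainNames='yes'/>
</network>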
This PCI controller, named "dmi-to-pci-bridge" in the libvirt config,
and implemented with qemu's "i82801b11-bridge" device, connects to a
PCI Express slot (e.g. one of the slots provided by the pcie-root
controller, aka "pcie.0" on the qemu commandline), and provides 31
*non-hot-pluggable* PCI (*not* PCIe) slots, numbered 1-31.
Any time a machine is defined which has a pcie-root controller
(i.e. any q35-based machinetype), libvirt will automatically add a
dmi-to-pci-bridge controller if one doesn't exist, and also add a
pci-bridge controller. The reasoning here is that any useful domain
will have either an immediate (startup time) or eventual (subsequent
hot-plug) need for a standard PCI slot; since the pcie-root controller
only provides PCIe slots, we need to connect a dmi-to-pci-bridge
controller to it in order to get a non-hot-plug PCI slot that we can
then use to connect a pci-bridge - the slots provided by the
pci-bridge will be both standard PCI and hot-pluggable.
Since pci-bridge devices themselves can not be hot-plugged into a
running system (although you can hot-plug other devices into a
pci-bridge's slots), any new pci-bridge controller that is added can
(and will) be plugged into the dmi-to-pci-bridge as long as it has
empty slots available.
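For illustration, the automatically added controller chain would look
roughly like this in the domain XML (index values are only examples):
<controller type='pci' index='0' model='pcie-root'/>
<controller type='pci' index='1' model='dmi-to-pci-bridge'/>
<controller type='pci' index='2' model='pci-bridge'/>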
This patch is also changing the qemuxml2xml-pcie test from a "DO_TEST"
to a "DO_DIFFERENT_TEST". This is so that the "before" xml can omit
the automatically added dmi-to-pci-bridge and pci-bridge devices, and
the "after" xml can include it - this way we are testing if libvirt is
properly adding these devices.
This controller is implicit on q35 machinetypes. It provides 31 PCIe
(*not* PCI) slots as controller 0.
Currently there are no devices that can connect to pcie-root, and no
implicit pci controller on a q35 machine, so q35 is still
unusable. For a usable q35 system, we need to add a
"dmi-to-pci-bridge" pci controller, which can connect to pcie-root,
and provides standard pci slots that can be used to connect other
devices.
There are two ways to use an iSCSI LUN as a disk source for qemu.
* The LUN's path as it shows up on host, e.g.
/dev/disk/by-path/ip-$ip:3260-iscsi-$iqn-fc18:iscsi.iscsi0-lun-1
* The libiscsi URI from the storage pool source element host attribute, e.g.
iscsi://demo.org:6000/iqn.1992-01.com.example/1
For a "volume" type disk, if the specified "pool" is of iscsi
type, we should support to use the LUN in either of above 2 ways.
That's why to introduce a new XML tag "mode" for the disk source
(libvirt should support iscsi pool with libiscsi, but it's another
new feature, which should be done later).
The "mode" can be either of "host" or "direct". Use "host" to indicate
use of the LUN with the path as it shows up on host. Use "direct" to
indicate to use it with the source pool host URI (future patches may support
to use network type libvirt storage too, e.g. Ceph)
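For illustration, a sketch of a "volume" type disk using the new attribute
(pool and volume names are only examples):
<disk type='volume' device='disk'>
<driver name='qemu' type='raw'/>
<source pool='iscsi-pool' volume='unit:0:0:1' mode='host'/>
<target dev='vda' bus='virtio'/>
</disk>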
When using logical pools, we had to trust the target->path provided.
This parameter, however, can be completely omitted and we can use
'/dev/<source.name>' safely and populate it into target.path.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=952973
The existing 'chap' XML logic was never used - just defined. Rather than
try to insert a square peg into a round hole, blow it up and rewrite the
logic to follow the 'ceph' format.
Remove the former "chap.login" and "chap.passwd" fields and replace
with "chap.username" and "chap.secret" in _virStoragePoolAuthChap.
Adjust virStoragePoolDefParseAuthChap() to process them.
Change the rng file to describe the new layout.
Update formatstorage.html to describe the usage of the secret element
and mention that the secret types "iscsi" and "ceph" can be used
for storage pools too.
Update the formatsecret.html to include a reference to the storage pool
Update tests to handle the changes from 'login' and 'passwd' to 'username'
and '<secret>' format
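For illustration, a sketch of the new auth layout (username and secret
usage are only examples):
<auth type='chap' username='myinitiator'>
<secret usage='libvirtiscsi'/>
</auth>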
<hyperv>
<spinlocks state='off'/>
</hyperv>
results in:
error: XML error: missing HyperV spinlock retry count
Don't require retries when state is off and use virXPathUInt
instead of virXPathString to simplify parsing.
https://bugzilla.redhat.com/show_bug.cgi?id=784836#c19
This patch introduces a new element <idmap> for
user namespaces, for example:
<idmap>
<uid start='0' target='1000' count='10'/>
<gid start='0' target='1000' count='10'/>
</idmap>
This new element is used for setting the proc files
/proc/<pid>/{uid_map,gid_map}.
This patch also supports setting multiple uid/gid elements
in the XML configuration.
We don't support partial configuration; the user has to
configure both uid and gid.
Signed-off-by: Gao feng <gaofeng@cn.fujitsu.com>
Implement a check that the (maximum) vCPU count does not exceed the
machine type's cpu-max setting.
On older versions of QEMU the check is disabled.
Signed-off-by: Michal Novotny <minovotn@redhat.com>
This includes adding it to the nodedev parser and formatter, docs, and
test.
An example of the new iommuGroup element that is a part of the output
from "virsh nodedev-dumpxml" (virNodeDeviceGetXMLDesc()):
<device>
<name>pci_0000_02_00_1</name>
<capability type='pci'>
...
<iommuGroup number='12'>
<address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
<address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
</iommuGroup>
</capability>
</device>
This patch adds functionality to allow libvirt to configure the
'native-tagged' and 'native-untagged' modes on openvswitch networks.
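For illustration, a sketch of a trunked vlan using the new modes (tag ids
are only examples; the attribute name follows current libvirt usage):
<vlan trunk='yes'>
<tag id='42'/>
<tag id='123' nativeMode='untagged'/>
</vlan>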
Signed-off-by: Laine Stump <laine@redhat.com>
Add <features> and <compat> elements to volume target XML.
<compat> is a string which for qcow2 represents the QEMU version
it should be compatible with. Valid values are 0.10 and 1.1.
1.1 is implicit if the <features> element is present, otherwise
qemu-img default is used. 0.10 can be specified to explicitly
create older images after the qemu-img default changes.
<features> contains optional features, so far
<lazy_refcounts/> is available, which enables caching of reference
counters, improving performance for snapshots.
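For illustration, a sketch of the new volume target XML (path and format
are only examples):
<target>
<path>/var/lib/libvirt/images/vol.qcow2</path>
<format type='qcow2'/>
<compat>1.1</compat>
<features>
<lazy_refcounts/>
</features>
</target>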
Add new CPU features for HyperV:
vapic for virtual APIC support
spinlocks for setting spinlock support
<features>
<hyperv>
<vapic state='on'/>
<spinlocks state='on' retries='4096'/>
</hyperv>
</features>
https://bugzilla.redhat.com/show_bug.cgi?id=784836
This attribute is going to represent the number of queues for a
multiqueue vhost network interface. This commit implements the XML
extension part of the feature and adds one test as well. For now,
we can only do the xml2xml test, as the qemu command line generation
code is not adapted yet.
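For illustration, a sketch of an interface using the new attribute
(assuming it is the 'queues' attribute on the interface <driver> element,
as in current libvirt; values are only examples):
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
<driver name='vhost' queues='4'/>
</interface>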
-vnc :5900,share=allow-exclusive
allows clients to ask for exclusive access which is
implemented by dropping other connections. Connecting
multiple clients in parallel requires all clients asking
for a shared session (vncviewer: -shared switch).
-vnc :5900,share=force-shared
disables exclusive client access. Useful for shared
desktop sessions, where you don't want someone who forgets
to specify -shared to disconnect everybody else.
-vnc :5900,share=ignore
completely ignores the shared flag and allows everybody
to connect unconditionally.
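For illustration, the corresponding libvirt XML would look roughly like
this (assuming the attribute is named sharePolicy, as in current libvirt):
<graphics type='vnc' port='5900' sharePolicy='allow-exclusive'/>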
QEMU might support more values for "-drive discard", so using bi-state
values (on/off) for it doesn't make sense.
"on" maps to "unmap", "off" maps to "ignore":
<...>
@var{discard} is one of "ignore" (or "off") or "unmap" (or "on") and
controls whether @dfn{discard} (also known as @dfn{trim} or @dfn{unmap})
requests are ignored or passed to the filesystem. Some machine types
may not support discard requests.
</...>
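For illustration, a sketch of a disk driver using the string values
directly (the driver details are only examples):
<driver name='qemu' type='raw' discard='unmap'/>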
The following XML configuration can be used to request all domain's
memory pages to be kept locked in host's memory (i.e., domain's memory
pages will not be swapped out):
<memoryBacking>
<locked/>
</memoryBacking>
QEMU introduced "discard" option for drive since commit a9384aff53,
<...>
@var{discard} is one of "ignore" (or "off") or "unmap" (or "on") and
controls whether @dfn{discard} (also known as @dfn{trim} or @dfn{unmap})
requests are ignored or passed to the filesystem. Some machine types
may not support discard requests.
</...>
This patch exposes the support in libvirt.
QEMU supported "discard" for "-drive" since v1.5.0-rc0:
% git tag --contains a9384aff53
contains
v1.5.0-rc0
v1.5.0-rc1
So this only detects the capability bit using virQEMUCapsProbeQMPCommandLine.
Add support for a new attribute 'websocket' in the '<graphics>'
element. The attribute value is the port to listen on, with '-1'
meaning auto-allocation and '0' meaning no websockets.
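For illustration, a sketch of a graphics element using the new attribute
(port values are only examples):
<graphics type='vnc' port='-1' autoport='yes' websocket='5700'/>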
QEMU introduced the command line option "-mem-merge=on|off" (defaults to
on) to enable/disable memory merging (KSM) at guest startup. This
exposes it via new XML:
<memoryBacking>
<nosharepages/>
</memoryBacking>
The XML tag is the same as what we used internally for old RHEL.
network: static route support for <network>
This patch adds the <route> subelement of <network> to define a static
route. The address and prefix (or netmask) attributes identify the
destination network, and the gateway attribute specifies the next hop
address (which must be directly reachable from the containing
<network>) which is to receive the packets destined for
"address/(prefix|netmask)".
These attributes are translated into an "ip route add" command that is
executed when the network is started. The command used is of the
following form:
ip route add <address>/<prefix> via <gateway> \
dev <virbr-bridge> proto static metric <metric>
Tests are done to validate that the input data are correct. For
example, for a static route ip definition, the address must be a
network address and not a host address. Additional checks are added
to ensure that the specified gateway is directly reachable via this
network (i.e. that the gateway IP address is in the same subnet as one
of the IP's defined for the network).
prefix='0' is supported both for family='ipv4' with address='0.0.0.0'
and netmask='0.0.0.0' or prefix='0', and for family='ipv6' with
address='::' and prefix='0', although care should be taken not to
override a desired system default route.
Anytime an attempt is made to define a static route which *exactly*
duplicates an existing static route (for example, address=::,
prefix=0, metric=1), the following error message will be sent to
syslog:
RTNETLINK answers: File exists
This can be overridden by decreasing the metric value for the route
that should be preferred, or increasing the metric for the route that
shouldn't be preferred (and is thus in place only in anticipation that
the preferred route may be removed in the future). Caution should be
used when manipulating route metrics, especially for a default route.
Note: The use of the command-line interface should be replaced by
direct use of libnl so that error conditions can be handled better. But,
that is being left as an exercise for another day.
Signed-off-by: Gene Czarcinski <gene@czarc.net>
Signed-off-by: Laine Stump <laine@laine.org>
The <filesystem> element can now accept a <driver type='nbd'/>
as an alternative to 'loop'. The benefit of NBD is support
for non-raw disk image formats.
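For illustration, a sketch of a filesystem using the new driver type
(the image path is only an example):
<filesystem type='file'>
<driver type='nbd'/>
<source file='/var/lib/libvirt/images/data.qcow2'/>
<target dir='/data'/>
</filesystem>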
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Extend the <driver> element in filesystem devices to
allow a storage format to be set. The new attribute
uses 'format' to reflect the storage format. This is
different from the <driver> element in disk devices
which use 'type' to reflect the storage format. This
is because the 'type' attribute on filesystem devices
is already used for the driver backend, for which the
disk devices use the 'name' attribute. Arggggh.
Anyway for disks we have
<driver name="qemu" type="raw"/>
And for filesystems this change means we now have
<driver type="loop" format="raw"/>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
An example of the scsi hostdev XML:
<hostdev mode='subsystem' type='scsi'>
<source>
<adapter name='scsi_host0'/>
<address bus='0' target='0' unit='0'/>
</source>
<address type='drive' controller='0' bus='0' target='4' unit='8'/>
</hostdev>
A controller is implicitly added for the scsi hostdev, though the scsi
controller's model defaults to "lsilogic", which might not be what
the user wants (the same problem exists for virtio-scsi disks). It's
an existing problem and will be addressed later.
The device address must be specified manually. A later patch will let
libvirt generate it automatically.
This only introduces the generic XMLs for scsi hostdev, later patches
will add other elements, e.g. <readonly>, <shareable>.
Signed-off-by: Han Cheng <hanc.fnst@cn.fujitsu.com>
Signed-off-by: Osier Yang <jyang@redhat.com>
A domain's <interface> or <hostdev>, as well as a <network>'s
<forward>, can now have an optional <driver name='kvm|vfio'/>
element. As of this patch, there is no functionality behind this new
knob - this patch adds support to the domain and network
formatter/parser, and to the RNG and documentation.
When the backend is added, legacy KVM PCI device assignment will
continue to be used when no driver name is specified (or if <driver
name='kvm'/> is specified), but if driver name is 'vfio', the new UEFI
Secure Boot compatible VFIO device assignment will be used.
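For illustration, a sketch of a hostdev using the new element (the PCI
address is only an example):
<hostdev mode='subsystem' type='pci' managed='yes'>
<driver name='vfio'/>
<source>
<address domain='0x0000' bus='0x06' slot='0x02' function='0x0'/>
</source>
</hostdev>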
Note that the parser doesn't automatically insert the current default
value of this setting. This is done on purpose because the two
possibilities are functionally equivalent from the guest's point of
view, and we want to be able to automatically start using vfio as the
default (even for existing domains) at some time in the future. This
is similar to what was done with the "vhost" driver option in
<interface>.
For pSeries guests in QEMU, NVRAM is one kind of spapr-vio device.
Users are allowed to specify spapr-vio devices' addresses.
But NVRAM is not supported in libvirt, so this patch adds an NVRAM
device to allow users to specify its address.
In QEMU, NVRAM device's address is specified by
"-global spapr-nvram.reg=xxxxx".
In libvirt, XML file is defined as the following:
<nvram>
<address type='spapr-vio' reg='0x3000'/>
</nvram>
Signed-off-by: Li Zhang <zhlcindy@linux.vnet.ibm.com>
Instead of making a choice between the underscore and camelCase, this
simply changes "num_queues" into "queues", which is also consistent
with Michal's multiple queue support for interface.
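For illustration, the controller XML after the rename (the value is only
an example):
<controller type='scsi' index='0' model='virtio-scsi' queues='8'/>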
The rng schema for <controller> had been non-specific about which
types of controllers allowed which models, and also allowed the
num_queues attribute (since that hasn't been released yet, should we
rename it to "numQueues"?) and <master> subelement to be included for
any controller type. In reality, half of the models are allowed only
for type='scsi', and the other half only for type='usb', num_queues is
allowed only for type='scsi', and <master> only for type='usb'.
This patch makes a separate <group> for type='scsi' and type='usb',
with each group allowing only the appropriate model values, and
allowing num_queues and <master> only when appropriate.
<interleave> also hadn't been specified, forcing a specific order of
subelements, which should never be done. (Note that the <interleave>
had to surround the main element attributes that are in the <group>
subelements, due to one of the <group>s containing a subelement).
Recent qemu requires a "0x" prefix for the disk wwn. This patch
changes virValidateWWN to allow the prefix, and prepends "0x" if
it's not specified. E.g.
qemu-kvm: -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,\
drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,wwn=6000c60016ea71ad:
Property 'scsi-hd.wwn' doesn't take value '6000c60016ea71ad'
Though it's a qemu regression, it's nice to allow the prefix,
and it doesn't hurt for us to always output "0x".
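For illustration, the disk subelement with the prefix always present (the
wwn value is taken from the error message above):
<wwn>0x6000c60016ea71ad</wwn>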
Allow VMs to be placed into resource groups using the
following syntax
<resource>
<partition>/virtualmachines/production</partition>
</resource>
A resource cgroup will be backed by some hypervisor specific
functionality, such as cgroups with KVM/LXC.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The definition of the scsi adapter in storagepool.rng (sourceinfoadapter)
can be used by the scsi hostdev in a later patch. Move it to basictypes.rng.
PortNumber is defined in both domaincommon.rng and storagepool.rng;
simplify it by moving it to basictypes.rng.
Signed-off-by: Han Cheng <hanc.fnst@cn.fujitsu.com>
This updates the definitions and supporting structures in the XML
schema and domain configuration files.
Signed-off-by: Bogdan Purcareata <bogdan.purcareata@freescale.com>
With this patch, one can specify the disk source using libvirt
storage like:
<disk type='volume' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
<source pool='default' volume='fc18.img'/>
<target dev='vdb' bus='virtio'/>
</disk>
"seclabels" and "startupPolicy" are not supported for this new
disk type ("volume"). They will be supported in later patches.
docs/formatdomain.html.in:
* Add documents for new XMLs
docs/schemas/domaincommon.rng:
* Add rng for new XMLs;
src/conf/domain_conf.h:
* New struct for 'volume' type disk source (virDomainDiskSourcePoolDef)
* Add VIR_DOMAIN_DISK_TYPE_VOLUME for enum virDomainDiskType
src/conf/domain_conf.c:
* New helper virDomainDiskSourcePoolDefParse to parse the 'volume'
type disk source.
* New helper virDomainDiskSourcePoolDefFree to free the source def
if 'volume' type disk.
tests/qemuxml2argvdata/qemuxml2argv-disk-source-pool.xml:
tests/qemuxml2xmltest.c:
* New test
This introduces 4 new attributes for storage pool source adapter.
E.g.
<adapter type='fc_host' parent='scsi_host5' wwnn='20000000c9831b4b' wwpn='10000000c9831b4b'/>
Attribute 'type' can be either 'scsi_host' or 'fc_host', and defaults
to 'scsi_host' if attribute 'name' is specified. I.e. it's optional
for the 'scsi_host' adapter, for back-compat reasons, but mandatory
for the 'fc_host' adapter and any new future adapter types. Attribute
'parent' specifies the parent of the fc_host adapter.
* docs/formatstorage.html.in:
- Add documents for the 4 new attrs
* docs/schemas/storagepool.rng:
- Add RNG schema
* src/conf/storage_conf.c:
- Parse and format the new XMLs
* src/conf/storage_conf.h:
- New struct virStoragePoolSourceAdapter, replace "char *adapter" with it;
- New enum virStoragePoolSourceAdapterType
* src/libvirt_private.syms:
- Export TypeToString and TypeFromString
* src/phyp/phyp_driver.c:
- Replace "adapter" with "adapter.data.name", which is member of the union
of the new struct virStoragePoolSourceAdapter now. Later patch will
add the checking, as "adapter.data.name" is only valid for "scsi_host"
adapter.
* src/storage/storage_backend_scsi.c:
- Like above
* tests/storagepoolxml2xmlin/pool-scsi-type-scsi-host.xml:
* tests/storagepoolxml2xmlin/pool-scsi-type-fc-host.xml:
- New test for 'fc_host' and "scsi_host" adapter
* tests/storagepoolxml2xmlout/pool-scsi.xml:
- Change the expected output, as 'type' now defaults to 'scsi_host' if 'name'
is specified
* tests/storagepoolxml2xmlout/pool-scsi-type-scsi-host.xml:
* tests/storagepoolxml2xmlout/pool-scsi-type-fc-host.xml:
- New test
* tests/storagepoolxml2xmltest.c:
- Include the test
This introduces a new attribute "num_queues" (the same name that
QEMU uses) for the virtio-scsi controller. An example of the XML:
<controller type='scsi' index='0' model='virtio-scsi' num_queues='8'/>
The corresponding QEMU command line:
-device virtio-scsi-pci,id=scsi0,num_queues=8,bus=pci.0,addr=0x3 \
Split the "resource" define out into multiple smaller
defines, one for each type of resource tuning parameter.
This makes the schema a bit clearer to read
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
This enriches the HBA's XML by dumping the number of max vports and
the vports in use. The format is like:
<capability type='vport_ops'>
<max_vports>164</max_vports>
<vports>5</vports>
</capability>
* docs/formatnode.html.in: (Document the new XML)
* docs/schemas/nodedev.rng: (Add the schema)
* src/conf/node_device_conf.h: (New member for data.scsi_host)
* src/node_device/node_device_linux_sysfs.c: (Collect the value of
max_vports and vports)
This does nothing more than adding the new device and capability.
The device is present since QEMU 1.2.0.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Only sheepdog actually required it in the code, and we can use 7000 as the
default---the same value that QEMU uses for the simple "sheepdog:VOLUME"
syntax. With this change, the schema can be fixed to allow no port.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The 'trang' utility, which is able to transform '.rng' files into
'.rnc' files, reported some errors in our schemas that weren't caught
by the tools we use in the build. I haven't added a test for this,
but the validity can be checked by the following command:
trang -I rng -O rnc domain.rng domain.rnc
There were unescaped minuses in regular expressions and we were
constraining int (which is by default in the range [-2^31; 2^31-1])
to a maximum of 2^32. But what we wanted was exactly an unsignedInt.
This plumbs in the XML description of iSCSI shares. The next patches
will add support for the libiscsi userspace initiator.
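For illustration, a hedged sketch of an iSCSI network disk (IQN, host and
LUN are only examples):
<disk type='network' device='disk'>
<driver name='qemu' type='raw'/>
<source protocol='iscsi' name='iqn.1992-01.com.example/1'>
<host name='example.org' port='3260'/>
</source>
<target dev='vda' bus='virtio'/>
</disk>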
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The storage volume formats supported by the disk storage pool are
missing from the allowed values.
Add partition types.
Signed-off-by: Philipp Hahn <hahn@univention.de>
iSCSI qualified names (iqn) from RFC3721 may contain colons (':'), which
neither matches the absFilePath nor genericName:
$ virsh pool-dumpxml myiscsipool
<pool type='iscsi'>
...
<source>
...
<device path='iqn.2003-01.org.linux-iscsi.phahn-sid93.x8664:sn.8a3daa0d4efd'/>
</source>
...
</pool>
Add IscsiQualifiedName type and allow its use in sourceiscsi.
Signed-off-by: Philipp Hahn <hahn@univention.de>
Qemu's implementation of virtio RNG supports rate limiting of the
entropy used. This patch exposes the option to tune this functionality.
This patch is based on qemu commit 904d6f588063fb5ad2b61998acdf1e73fb4
The rate limiting is exported in the XML as:
<devices>
...
<rng model='virtio'>
<rate bytes='123' period='1234'/>
<backend model='random'/>
</rng>
...
The native bus for s390 I/O is called CCW (channel command word).
QEMU has added basic support for the CCW bus, i.e. the
ability to assign CCW devnos (bus addresses) to devices.
Domains with the new machine type s390-ccw-virtio can use the
CCW bus. Currently QEMU only allows defining virtio
devices on the CCW bus.
Here we add the new machine type and the new device address to the
schema definition and add a new paragraph to the domain XML
documentation.
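For illustration, a sketch of the new device address (the devno value is
only an example):
<address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0000'/>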
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
'virsh capabilities' will now include a new <memory> element
per <cell> of the topology, as in:
<topology>
<cells num='2'>
<cell id='0'>
<memory unit='KiB'>12572412</memory>
<cpus num='12'>
...
</cell>
Signed-off-by: Eric Blake <eblake@redhat.com>
There is some controversy[1] on the qemu list on whether qemu should
have ever allowed arbitrary file name passthrough, or whether it
should be restricted to JUST /dev/random and /dev/hwrng. It is
always easier to add support for additional filenames than it is
to remove support for something once released, so this patch
restricts libvirt 1.0.3 (where the virtio-random backend was first
supported) to just the two uncontroversial names, letting us defer
to a later date any decision on whether supporting arbitrary files
makes sense. Additionally, since qemu 1.4 does NOT support
/dev/fdset/nnn fd passthrough for the backend, limiting to just
two known names means that we don't get tempted to try fd
passthrough where it won't work.
[1]https://lists.gnu.org/archive/html/qemu-devel/2013-03/threads.html#00023
* src/conf/domain_conf.c (virDomainRNGDefParseXML): Only allow
/dev/random and /dev/hwrng.
* docs/schemas/domaincommon.rng: Flag invalid files.
* docs/formatdomain.html.in (elementsRng): Document this.
* tests/qemuxml2argvdata/qemuxml2argv-virtio-rng-random.args:
Update test to match.
* tests/qemuxml2argvdata/qemuxml2argv-virtio-rng-random.xml:
Likewise.
This reverts commit 383ebc4694.
We decided the xml for this feature needed more thought to make sure
we are doing it the best way, in particular wrt option values that
have multiple items.
This reverts commit 24aa7f8d11.
The implementation to match the documentation is not complete yet,
and the final design might change the name of the 'schid' attribute.
virStrToLong(..., 8, ...) already requires the mode to be octal.
Change the relax-ng schema to check for octal as well.
Signed-off-by: Philipp Hahn <hahn@univention.de>
This patch documents XML elements used for (basic) support of virtual
RNG devices.
In the devices section in the domain XML users may specify:
For the default 'random' backend:
<devices>
<rng model='virtio'>
<backend model='random'>/dev/urandom</backend>
</rng>
</devices>
For the slightly more advanced EGD backend:
<devices>
<rng model='virtio'>
<backend model='egd' type='udp'>
<!-- this is a definition of a character device -->
<source mode='bind' service='1234'/>
<source mode='connect' host='1.2.3.4' service='1234'/>
<!-- or other valid character device configuration -->
</backend>
</rng>
</devices>
For the planned random daemon/pool:
<devices>
<rng model='virtio'>
<backend model='pool' pool='poolname'>class</backend>
</rng>
</devices>
to enable the RNG device for guests.
Originally, only a host name was used to associate a
DHCPv6 request with a specific IPv6 address. Further testing
demonstrates that this is an unreliable method and, instead,
a client-id or DUID needs to be used. According to DHCPv6
standards, this id can be a duid-LLT, duid-LL, or duid-UUID
even though dnsmasq will accept almost any text string.
Although validity checking of a specified string makes sure it is
hexadecimal notation with bytes separated by colons, there is no
rigorous check to make sure it meets the standard.
Documentation and schemas have been updated.
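For illustration, a hedged sketch of a DHCPv6 host entry keyed by an id
(the DUID, name and address are only examples):
<host id='0:3:0:1:52:54:00:11:22:33' name='guest6' ip='2001:db8::10'/>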
Signed-off-by: Gene Czarcinski <gene@czarc.net>
Signed-off-by: Laine Stump <laine@laine.org>
Although in IPv4 one must pick either mac or name, either
can be omitted. Similarly, for IPv6, the name
can be optionally omitted.
Signed-off-by: Gene Czarcinski <gene@czarc.net>
Signed-off-by: Laine Stump <laine@laine.org>
This patch adds support for a new <option> tag in the <dhcp> block of
network configs, based on a subset of the fifth proposal by Laine
Stump in the mailing list discussion at
https://www.redhat.com/archives/libvir-list/2012-November/msg01054.html.
Any such defined option will result in a dhcp-option=<number>,"<value>"
statement in the generated dnsmasq configuration file.
Currently, DHCP options can be specified by number only and there is
no whitelisting or blacklisting of option numbers, which should
probably be added.
Signed-off-by: Pieter Hollants <pieter@hollants.com>
Signed-off-by: Laine Stump <laine@laine.org>
The native bus for s390 I/O is called CCW (channel command word).
QEMU has added basic support for the CCW bus, i.e. the
ability to assign CCW devnos (bus addresses) to devices.
Domains with the new machine type s390-ccw-virtio can use the
CCW bus. Currently QEMU only allows defining virtio
devices on the CCW bus.
Here we add the new machine type and the new device address to the
schema definition and add a new paragraph to the domain XML
documentation.
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
Hosts for rbd are ceph monitor daemons. These have fixed IP addresses,
so they are often referenced by IP rather than hostname for
convenience, or to avoid relying on DNS. Using IPv4 addresses as the
host name works already, but IPv6 addresses require rbd-specific
escaping because the colon is used as an option separator in the
string passed to qemu.
Escape these colons, and enclose the IPv6 address in square brackets
so it is distinguished from the port, which is currently mandatory.
Acked-by: Osier Yang <jyang@redhat.com>
Signed-off-by: Josh Durgin <josh.durgin@inktank.com>
This patch adds RNG schemas for adding more information in the topology
output of the NUMA section in the capabilities XML.
The added elements are designed to provide more information about the
placement and topology of the processors in the system to management
applications.
A demonstration of supported XML added by this patch:
<capabilities>
<host>
<topology>
<cells num='3'>
<cell id='0'>
<cpus num='4'> <!-- this is node with Hyperthreading -->
<cpu id='0' socket_id='0' core_id='0' siblings='0-1'/>
<cpu id='1' socket_id='0' core_id='0' siblings='0-1'/>
<cpu id='2' socket_id='0' core_id='1' siblings='2-3'/>
<cpu id='3' socket_id='0' core_id='1' siblings='2-3'/>
</cpus>
</cell>
<cell id='1'>
<cpus num='4'> <!-- this is node with modules (Bulldozer) -->
<cpu id='4' socket_id='0' core_id='2' siblings='4-5'/>
<cpu id='5' socket_id='0' core_id='3' siblings='4-5'/>
<cpu id='6' socket_id='0' core_id='4' siblings='6-7'/>
<cpu id='7' socket_id='0' core_id='5' siblings='6-7'/>
</cpus>
</cell>
<cell id='2'>
<cpus num='4'> <!-- this is a normal multi-core node -->
<cpu id='8' socket_id='1' core_id='0' siblings='8'/>
<cpu id='9' socket_id='1' core_id='1' siblings='9'/>
<cpu id='10' socket_id='1' core_id='2' siblings='10'/>
<cpu id='11' socket_id='1' core_id='3' siblings='11'/>
</cpus>
</cell>
</cells>
</topology>
</host>
</capabilities>
The socket_id field represents identification of the physical socket the
CPU is plugged in. This ID may not be identical to the physical socket
ID reported by the kernel.
The core_id identifies a core within a socket. This field also may not
accurately represent physical IDs.
The core_id is guaranteed to be unique within a cell and a socket. There
may be duplicates between sockets. Only cores sharing core_id within one
cell and one socket can be considered as threads. Cores sharing core_id
within separate cells are distinct cores.
The siblings field is a list of the CPU ids that the CPU is a sibling
with - thus its threads. The list is in the cpuset format.
Adds a "ram" attribute globally to the video.model element, that changes
the resulting qemu command line only if video.type == "qxl".
<video>
<model type='qxl' ram='65536' vram='65536' heads='1'/>
</video>
That attribute gets a default value of 64*1024. The schema is unchanged
for other video element types.
The resulting qemu command line change is the addition of
-global qxl-vga.ram_size=<ram>*1024
or
-global qxl.ram_size=<ram>*1024
for the main and secondary qxl devices, respectively.
The default for the qxl ram bar is 64*1024 kilobytes (the same as the
default qxl vram bar size).
Add an optional 'type' attribute to <target> element of serial port
device. There are two choices for its value, 'isa-serial' and
'usb-serial'. For backward compatibility, when attribute 'type' is
missing the 'isa-serial' will be chosen as before.
Libvirt XML sample
<serial type='pty'>
<target type='usb-serial' port='0'/>
<address type='usb' bus='0' port='1'/>
</serial>
qemu commandline:
qemu ${other_vm_args} \
-chardev pty,id=charserial0 \
-device usb-serial,chardev=charserial0,id=serial0,bus=usb.0,port=1
The SCLP console is the native console type for s390 and is preferred
over the virtio console as it doesn't require special drivers and
is more efficient. Recent versions of QEMU come with SCLP support
which is hereby enabled.
The new target types 'sclp' and 'sclplm' can be used to specify a
SCLP console. Adding documentation, domain schema and XML processing
support.
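For illustration, a sketch of a console using one of the new target types:
<console type='pty'>
<target type='sclp' port='0'/>
</console>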
Signed-off-by: J.B. Joret <jb@linux.vnet.ibm.com>
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
This introduces new XML tag "sgio" for disk, its valid values
are "filtered" and "unfiltered", setting it as "filtered" will
set the disk's unpriv_sgio to 0, and "unfiltered" to set it
as 1, which allows the unprivileged SG_IO commands.
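For illustration, a sketch of a disk using the new attribute (the device
path is only an example):
<disk type='block' device='lun' sgio='unfiltered'>
<driver name='qemu' type='raw'/>
<source dev='/dev/sda'/>
<target dev='sda' bus='scsi'/>
</disk>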
The <hostdev> device type has long had a redundant "mode"
attribute, which has always been "subsys". This finally
introduces a new mode "capabilities", which will be used
by the LXC driver for device assignment. Since container
based virtualization uses a single kernel, the idea of
assigning physical PCI devices doesn't make sense. It is
still reasonable to assign USB devices, but for assigning
arbitrary nodes in /dev, the new 'capabilities' mode is
to be used.
The first capability support is 'storage', which is for
assignment of block devices. Functionally this is really
pretty similar to the <disk> support. The only difference
is the device node name is identical in both host and
container namespaces.
<hostdev mode='capabilities' type='storage'>
<source>
<block>/dev/sdf1</block>
</source>
</hostdev>
The second capability support is 'misc', which is for
assignment of character devices. There is no existing
parallel to this. Again the device node is the same
inside & outside the container.
<hostdev mode='capabilities' type='misc'>
<source>
<char>/dev/input/event3</char>
</source>
</hostdev>
The reason for keeping the char & storage devices
separate in the domain XML is to mirror the split
in the node device XML. NB: the node device XML does
not yet report character devices, but that's another
new patch to come.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
If there are multiple video devices,
primary='yes' marks this video device as the primary one.
The rest are secondary video devices. No more than one can be
marked as primary. If none of them has the primary attribute, the first
one will be the primary by default, as before.
The reason for this change is that for qemu, only one primary video
device is permitted, and it can be of any type. For secondary video
devices, only qxl is allowed. The primary attribute removes the restriction
that the first device has to be the primary one.
We always put the primary video device into the first position of the
video device structure array after parsing.
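For illustration, a sketch of a video device marked as primary (sizes are
only examples):
<video>
<model type='qxl' vram='65536' heads='1' primary='yes'/>
</video>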
This is, however, supported only on domain interfaces with
type='network'. Moreover, the target network needs to have at least
inbound QoS set; this is required by hierarchical traffic shaping.
From now on, the required attribute for <inbound/> is either 'average'
(old) or 'floor' (new). This new attribute can currently be used only
for interfaces of type network (<interface type='network'/>).
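For illustration, a sketch of an interface requesting a guaranteed minimum
(values are only examples, in kilobytes per second):
<interface type='network'>
<source network='default'/>
<bandwidth>
<inbound average='1000' floor='200'/>
</bandwidth>
</interface>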
The DHCPv6 support includes IPV6 dhcp-range and dhcp-host for one
IPv6 subnetwork on one interface. This support will only work
if dnsmasq version >= 2.64; otherwise an error occurs if
dhcp-range or dhcp-host is specified for an IPv6 address.
Essentially, this change provides the same DHCP support for IPv6
that has been available for IPv4.
With dnsmasq >= 2.64, support for the RA service is also now provided
by dnsmasq (radvd is no longer used/started). (Although at least one
version of dnsmasq prior to 2.64 "supported" IPv6 Router
Advertisement, there were bugs (fixed in 2.64) that rendered it
unusable.)
Documentation and the network schema have been updated
to reflect the new support.
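For illustration, a sketch of an IPv6 dhcp definition in the network XML
(addresses are only examples):
<ip family='ipv6' address='2001:db8:ca2:2::1' prefix='64'>
<dhcp>
<range start='2001:db8:ca2:2::100' end='2001:db8:ca2:2::1ff'/>
</dhcp>
</ip>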
QEMU has supported setting vendor and product strings for a disk since
1.2.0 (only scsi-disk, scsi-hd and scsi-cd support it). This patch
exposes it with new XML elements <vendor> and <product> of the disk
device.
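For illustration, a sketch of the new elements inside a disk definition
(strings and device path are only examples):
<disk type='block' device='lun'>
<driver name='qemu' type='raw'/>
<source dev='/dev/sdb'/>
<target dev='sda' bus='scsi'/>
<vendor>ACME</vendor>
<product>StorageMax</product>
</disk>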
This patch adds the capability for virtual guests to do IPv6
communication via a virtual network interface with no IPv6 (gateway)
addresses specified. This capability has always been enabled by
default for IPv4, but disabled for IPv6 for security concerns, and
because it requires the ip6tables command to be operational (which
isn't the case on a system with the ipv6 module completely disabled).
This patch adds a new attribute "ipv6" at the toplevel of a <network>
object. If ipv6='yes', the extra ip6tables rules required to permit
inter-guest communications are added when the network is started. If
it is 'no', or not present, those rules will not be added; thus the
default behavior doesn't change, so there should be no compatibility
issues with any existing installations.
Note that virtual guests cannot communicate with the virtualization
host via this interface, because the following kernel tunable has
been set:
net.ipv6.conf.<bridge_interface_name>.disable_ipv6 = 1
This assures that the bridge interface will not have an IPv6
link-local (fe80::) address.
To control this behavior so that it is not enabled by default, the parameter
ipv6='yes' on the <network> statement has been added.
Documentation related to this patch has been updated.
The network schema has also been updated.
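For illustration, a sketch of a network enabling the new attribute (name
and bridge are only examples):
<network ipv6='yes'>
<name>isolated6</name>
<bridge name='virbr1'/>
</network>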
This patch introduces the RNG schema and updates the necessary data
structures to allow various hypervisors to make use of the Gluster
protocol as one of the supported network disk backends. The next patch
will add support to make use of this feature in Qemu, since it now
supports the Gluster protocol as one of the network-based storage
backends.
Two new optional attributes for <host> element are introduced - 'transport'
and 'socket'. Valid transport values are tcp, unix or rdma. If none specified,
tcp is assumed. If transport is unix, socket specifies path to unix socket.
This patch allows users to specify disks on gluster backends like this:
<disk type='network' device='disk'>
<driver name='qemu' type='raw'/>
<source protocol='gluster' name='Volume1/image'>
<host name='example.org' port='6000' transport='tcp'/>
</source>
<target dev='vda' bus='virtio'/>
</disk>
<disk type='network' device='disk'>
<driver name='qemu' type='raw'/>
<source protocol='gluster' name='Volume2/image'>
<host transport='unix' socket='/path/to/sock'/>
</source>
<target dev='vdb' bus='virtio'/>
</disk>
Signed-off-by: Harsh Prateek Bora <harsh@linux.vnet.ibm.com>
Each <domainsnapshot> can now contain an optional <memory>
element that describes how the VM state was handled, similar
to disk snapshots. The new element will always appear in
output; for back-compat, an input that lacks the element will
assume 'no' or 'internal' according to the domain state.
Along with this change, it is now possible to pass <disks> in
the XML for an offline snapshot; this also needs to be wired up
in a future patch, to make it possible to choose internal vs.
external on a per-disk basis for each disk in an offline domain.
At that point, using the --disk-only flag for an offline domain
will be able to work.
For some examples below, remember that qemu supports the
following snapshot actions:
qemu-img: offline external and internal disk
savevm: online internal VM and disk
migrate: online external VM
transaction: online external disk
=====
<domainsnapshot>
<memory snapshot='no'/>
...
</domainsnapshot>
implies that there is no VM state saved (mandatory for
offline and disk-only snapshots, not possible otherwise);
using qemu-img for offline domains and transaction for online.
=====
<domainsnapshot>
<memory snapshot='internal'/>
...
</domainsnapshot>
state is saved inside one of the disks (as in qemu's 'savevm'
system checkpoint implementation). If needed in the future,
we can also add an attribute pointing out _which_ disk saved
the internal state; maybe disk='vda'.
=====
<domainsnapshot>
<memory snapshot='external' file='/path/to/state'/>
...
</domainsnapshot>
This is not wired up yet, but future patches will allow this to
control a combination of 'virsh save /path/to/state' plus disk
snapshots from the same point in time.
=====
So for 1.0.1 (and later, as needed), I plan to implement this table
of combinations, with '*' designating new code and '+' designating
existing code reached through new combinations of xml and/or the
existing DISK_ONLY flag:
domain memory disk disk-only | result
-----------------------------------------
offline omit omit any | memory=no disk=int, via qemu-img
offline no omit any |+memory=no disk=int, via qemu-img
offline omit/no no any | invalid combination (nothing to snapshot)
offline omit/no int any |+memory=no disk=int, via qemu-img
offline omit/no ext any |*memory=no disk=ext, via qemu-img
offline int/ext any any | invalid combination (no memory to save)
online omit omit off | memory=int disk=int, via savevm
online omit omit on | memory=no disk=default, via transaction
online omit no/ext off | unsupported for now
online omit no on | invalid combination (nothing to snapshot)
online omit ext on | memory=no disk=ext, via transaction
online omit int off |+memory=int disk=int, via savevm
online omit int on | unsupported for now
online no omit any |+memory=no disk=default, via transaction
online no no any | invalid combination (nothing to snapshot)
online no int any | unsupported for now
online no ext any |+memory=no disk=ext, via transaction
online int/ext any on | invalid combination (disk-only vs. memory)
online int omit off |+memory=int disk=int, via savevm
online int no/ext off | unsupported for now
online int int off |+memory=int disk=int, via savevm
online ext omit off |*memory=ext disk=default, via migrate+trans
online ext no off |+memory=ext disk=no, via migrate
online ext int off | unsupported for now
online ext ext off |*memory=ext disk=ext, via migrate+transaction
* docs/schemas/domainsnapshot.rng (memory): New RNG element.
* docs/formatsnapshot.html.in: Document it.
* src/conf/snapshot_conf.h (virDomainSnapshotDef): New fields.
* src/conf/domain_conf.c (virDomainSnapshotDefFree)
(virDomainSnapshotDefParseString, virDomainSnapshotDefFormat):
Manage new fields.
* tests/domainsnapshotxml2xmltest.c: New test.
* tests/domainsnapshotxml2xmlin/*.xml: Update existing tests.
* tests/domainsnapshotxml2xmlout/*.xml: Likewise.
At one point, the code passed through arbitrary strings for file
formats, which supposedly lets qemu handle a new file type even
before libvirt has been taught to handle it. However, to properly
label files, libvirt has to learn the file type anyway, so we
might as well make our life easier by only accepting file types
that we are prepared to handle. This patch lets the RNG validation
ensure that only known strings are let through.
* docs/schemas/domaincommon.rng (driverFormat): Limit to list of
supported strings.
* docs/schemas/domainsnapshot.rng (driver): Likewise.
Hypervisors are starting to support HyperV Enlightenment features that
improve behavior of guests running Microsoft Windows operating systems.
This patch adds support for the "relaxed" feature that improves timer
behavior and also establishes a framework to add these features in
future.
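For illustration, a sketch of the new feature in the domain XML:
<features>
<hyperv>
<relaxed state='on'/>
</hyperv>
</features>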
When the startupPolicy set for a USB device allows such a device to be
missing, there was no way this could be detected from the domain XML. With
this patch, libvirt emits a new missing='yes' attribute for such devices
when active domain XML is generated.
USB devices can disappear without the OS being mad about it, which makes
them ideal for startupPolicy. With this attribute, USB devices can be
configured to be mandatory (the default), requisite (will disappear
during migration if they cannot be found), or completely optional.
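For illustration, a sketch of a USB hostdev using the new attribute
(vendor/product ids are only examples):
<hostdev mode='subsystem' type='usb'>
<source startupPolicy='optional'>
<vendor id='0x1234'/>
<product id='0xbeef'/>
</source>
</hostdev>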
While current on_{poweroff,reboot,crash} action configuration is about
configuring life cycle actions, they can all be considered events and
actions that need to be done on a particular event. Let's generalize the
code by renaming life cycle actions to event actions so that it can be
reused later for non-lifecycle events.
This allows the user to control labelling of each character device
separately (the default is to inherit from the VM).
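For illustration, a hedged sketch, assuming the <seclabel> nests under the
character device <source> as in current libvirt (device path is only an
example):
<serial type='dev'>
<source path='/dev/ttyS0'>
<seclabel model='dac' relabel='no'/>
</source>
<target port='0'/>
</serial>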
Signed-off-by: Richard W.M. Jones <rjones@redhat.com>
Sometimes when a guest machine crashes, the coredump can get huge due to
the guest memory. This can be limited using the madvise(2) system call,
which is used in the QEMU hypervisor. This patch adds an option for
configuring that in the domain XML, plus related documentation.
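For illustration, a sketch of the new option, assuming it is the dumpCore
attribute on the <memory> element as in current libvirt (the size is only
an example):
<memory unit='KiB' dumpCore='off'>1048576</memory>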
Whenever the guest machine fails to boot, a new parameter (reboot-timeout)
controls whether it should reboot, and after how many ms it should do so.
Docs included.
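For illustration, a sketch of the new parameter, assuming it is the
rebootTimeout attribute on the <bios> element as in current libvirt (the
value, in milliseconds, is only an example):
<os>
<type arch='x86_64' machine='pc'>hvm</type>
<bios rebootTimeout='5000'/>
</os>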
A new option is added to support EOI (End of Interrupt) exposure for
guests. As it makes sense only when APIC is enabled, I added this into
the <apic> element in <features>, because this should be a tri-state
option (it cannot be handled as a standalone feature).
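For illustration, a sketch of the tri-state option inside <features>:
<features>
<apic eoi='on'/>
</features>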
After discussion with DB we decided to rename the new iolimit
element as it creates the impression it would be there to
limit (i.e. throttle) I/O instead of specifying immutable
characteristics of a block device.
This is also backed by the fact that the term I/O Limits has
vanished from newer storage admin documentation.
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
There is a new <pm/> element implemented that can control what ACPI
sleeping states will be advertised by BIOS and allowed to be switched
to by libvirt. The default keeps defaults on hypervisor, otherwise
forces chosen setting.
The documentation of the pm element is added as well.
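For illustration, a sketch of the new element (the chosen states are only
examples):
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='yes'/>
</pm>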
Introducing a new iolimits element that allows overriding certain
properties of a guest block device, like the physical and logical
block size.
This can be useful for platforms with 'non-standard' disk formats
like S390 DASD with its 4K block size.
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
This patch introduces support for setting the emulator's period and
quota to limit CPU bandwidth when the VM starts. It also updates the
XML schema for the new entries, and the docs.
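For illustration, a sketch of the new entries inside <cputune> (values, in
microseconds, are only examples):
<cputune>
<emulator_period>100000</emulator_period>
<emulator_quota>50000</emulator_quota>
</cputune>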
This patch adds a new xml element <emulatorpin>, which is a sibling
to the existing <vcpupin> element under the <cputune>, to pin emulator
threads to specified physical CPUs.
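For illustration, a sketch of the new element (the cpuset is only an
example):
<cputune>
<emulatorpin cpuset='1-3'/>
</cputune>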
Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
A hypervisor may allow overriding the disk geometry of drives.
Qemu, as an example with cyls=,heads=,secs=[,trans=].
This patch extends the domain config to allow the specification of
disk geometry with libvirt.
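For illustration, a sketch of a disk with an explicit geometry (all values
are only examples):
<disk type='block' device='disk'>
<source dev='/dev/dasda'/>
<geometry cyls='16383' heads='16' secs='63' trans='lba'/>
<target dev='vda' bus='virtio'/>
</disk>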
Signed-off-by: J.B. Joret <jb@linux.vnet.ibm.com>
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
This patch updates the domain and capability XML parser and formatter to
support more than one "seclabel" element for each domain and device. The
RNG schema and the tests related to this are also updated by this patch.
Signed-off-by: Marcelo Cerri <mhcerri@linux.vnet.ibm.com>
This patch introduces the new forward mode='hostdev' along with
attribute managed. Includes updates to the network RNG and new xml
parser/formatter code.
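For illustration, a sketch of a network using the new forward mode (the PF
device name is only an example):
<network>
<name>hostdev-net</name>
<forward mode='hostdev' managed='yes'>
<pf dev='eth2'/>
</forward>
</network>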
Signed-off-by: Shradha Shah <sshah@solarflare.com>
The following config elements now support a <vlan> subelements:
within a domain: <interface>, and the <actual> subelement of <interface>
within a network: the toplevel, as well as any <portgroup>
Each vlan element must have one or more <tag id='n'/> subelements. If
there is more than one tag, it is assumed that vlan trunking is being
requested. If trunking is required with only a single tag, the
attribute "trunk='yes'" should be added to the toplevel <vlan>
element.
Some examples:
<interface type='hostdev'>
<vlan>
<tag id='42'/>
</vlan>
<mac address='52:54:00:12:34:56'/>
...
</interface>
<network>
<name>vlan-net</name>
<vlan trunk='yes'>
<tag id='30'/>
</vlan>
<virtualport type='openvswitch'/>
</network>
<interface type='network'>
<source network='vlan-net'/>
...
</interface>
<network>
<name>trunk-vlan</name>
<vlan>
<tag id='42'/>
<tag id='43'/>
</vlan>
...
</network>
<network>
<name>multi</name>
...
<portgroup name='production'>
<vlan>
<tag id='42'/>
</vlan>
</portgroup>
<portgroup name='test'>
<vlan>
<tag id='666'/>
</vlan>
</portgroup>
</network>
<interface type='network'>
<source network='multi' portgroup='test'/>
...
</interface>
IMPORTANT NOTE: As of this patch there is no backend support for the
vlan element for *any* network device type. When support is added in
later patches, it will only be for those select network types that
support setting up a vlan on the host side, without the guest's
involvement. (For example, it will be possible to configure a vlan for
a guest connected to an openvswitch bridge, but it won't be possible
to do that for one that is connected to a standard Linux host bridge.)
<portgroup> allows a <bandwidth> element, but the schema didn't have
this. Since this makes for multiple elements in portgroup, they must
be interleaved.
<interface type='bridge'> needs to allow <virtualport> elements
for openvswitch, but the schema didn't allow this.
Just as each physical device used by a network has a connections
counter, now each network has a connections counter which is
incremented once for each guest interface that connects using this
network.
The count is output in the live network XML, like this:
<network connections='20'>
...
</network>
It is read-only, and for informational purposes only - it isn't used
internally anywhere by libvirt.
Until now, all attributes in a <virtualport> parameter list that were
acceptable for a particular type, were also required. There were no
optional attributes.
One of the aims of supporting <virtualport> in libvirt's virtual
networks and portgroups is to allow specifying the group-wide
parameters in the network's virtualport, and merge that with the
interface's virtualport, which will have the instance-specific info
(i.e. the interfaceid or instanceid).
Additionally, the guest's interface XML shouldn't need to know what
type of network connection will be used prior to runtime - it could be
openvswitch, 802.1Qbh, 802.1Qbg, or none of the above - but should
still be able to specify instance-specific info just in case it turns
out to be applicable.
Finally, up to now, the parser for virtualport has always generated a
random instanceid/interfaceid when appropriate, making it impossible
to leave it blank (which is what's required for virtualports within a
network/portprofile definition).
This patch modifies the parser and formatter of the <virtualport>
element in the following ways:
* because most of the attributes in a virNetDevVPortProfile are fixed
size binary data with no reserved values, there is no way to embed a
"this value wasn't specified" sentinel into the existing data. To
solve this problem, the new *_specified fields in the
virNetDevVPortProfile object that were added in a previous patch of
this series are now set when the corresponding attribute is present
during the parse.
* allow parsing/formatting a <virtualport> that has no type set. In
this case, all fields are settable, but all are also optional.
* add a GENERATE_MISSING_DEFAULTS flag to the parser - if this flag is
set and an instanceid/interfaceid is expected but not provided, a
random one will be generated. This was previously the default
behavior, but is now done only for virtualports inside an
<interface> definition, not for those in <network> or <portgroup>.
* add a REQUIRE_ALL_ATTRIBUTES flag to the parser - if this flag is
set the parser will call the new
virNetDevVPortProfileCheckComplete() functions at the end of the
parser to check for any missing attributes (based on type), and
return failure if anything is missing. This used to be default
behavior. Now it is only used for the virtualport defined inside an
interface's <actual> element (by the time you've figured out the
contents of <actual>, you should have all the necessary data to fill
in the entire virtualport)
* add a REQUIRE_TYPE flag to the parser - if this flag is set, the
parser will return an error if the virtualport has no type
attribute. This also was previously the default behavior, but isn't
needed in the case of the virtualport for a type='network' interface
(i.e. the exact type isn't yet known), or the virtualport of a
portgroup (i.e. the portgroup just has modifiers for the network's
virtualport, which *does* require a type) - in those cases, the
check will be done at domain startup, once the final virtualport is
assembled (this is handled in the next patch).
The access, birth, modification and change times are added to
storage volumes and corresponding xml representations. This
shows up in the XML in this format:
<timestamps>
<atime>1341933637.027319099</atime>
<mtime>1341933637.027319099</mtime>
</timestamps>
Signed-off-by: Eric Blake <eblake@redhat.com>
capability.rng: Guest features can be in any order.
nodedev.rng: Added <driver> element, <capability> phys_function and
virt_functions for PCI devices.
storagepool.rng: Owner or group ID can be -1.
schema tests: New capabilities and nodedev files; changed owner and
group to -1 in pool-dir.xml.
storage_conf: Print uid_t and gid_t as signed to storage pool XML.
Libvirt adds a USB controller to the guest even if the user does not
specify one in the XML, for backwards-compatibility reasons.
To allow disabling USB for a guest, this patch adds a new USB controller
type "none" that disables USB support for the guest.
This patch brings support for managing sheepdog pools and volumes to libvirt.
It uses the "collie" command-line utility that ships with sheepdog for that.
A sheepdog pool in libvirt maps to a sheepdog cluster.
It needs a host and port to connect to, which in most cases
is just going to be the default of localhost on port 7000.
A sheepdog volume in libvirt maps to a sheepdog vdi.
To create one specify the pool, a name and the capacity.
Volumes can also be resized later.
In the volume XML the vdi name has to be put into the <target><path>.
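As a rough sketch (pool and volume names are placeholders), the pool and
volume definitions could look like:
<pool type='sheepdog'>
<name>mysheeppool</name>
<source>
<host name='localhost' port='7000'/>
</source>
</pool>
<volume>
<name>myvdi</name>
<capacity unit='bytes'>1073741824</capacity>
<target>
<path>myvdi</path>
</target>
</volume>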
To use the volume as a disk source for virtual machines, specify
the vdi name as the "name" attribute of the <source> element.
The host and port information from the pool is specified inside the <host> tag.
<disk type='network'>
...
<source protocol="sheepdog" name="vdi_name">
<host name="localhost" port="7000"/>
</source>
</disk>
To do its work this patch parses the output of collie,
so it relies on the raw output option. There was recently a bug which caused
size information to be reported incorrectly. This is already fixed upstream and
will be in the next release.
Signed-off-by: Sebastian Wiedenroth <wiedi@frubar.net>
Added the s390-virtio machine type to the XML schema for domains so that
the domain schema tests do not fail.
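For reference, a guest using this machine type would carry something like
the following in its <os> block (the arch value shown is an assumption):
<os>
<type arch='s390x' machine='s390-virtio'>hvm</type>
</os>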
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
Currently you can configure LXC to bind a host directory to
a guest directory, but not to bind a guest directory to a
guest directory. While the guest container init could do
this itself, allowing it in the libvirt XML means a stricter
SELinux policy can be written.
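Assuming the existing filesystem syntax is reused with a 'bind' type (the
paths are illustrative), such a guest-to-guest bind could be expressed as:
<filesystem type='bind'>
<source dir='/var/lib/data'/>
<target dir='/srv/data'/>
</filesystem>
Here the source dir refers to a path inside the guest, not on the host.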
Introduce a new syntax for filesystems to allow use of a RAM
filesystem
<filesystem type='ram'>
<source usage='10' units='MiB'/>
<target dir='/mnt'/>
</filesystem>
The usage value sets an upper limit on consumption of host memory; its units default to KiB.
* docs/formatdomain.html.in: Document new syntax
* docs/schemas/domaincommon.rng: Add new attributes
* src/conf/domain_conf.c: Parsing/formatting of RAM filesystems
* src/lxc/lxc_container.c: Mounting of RAM filesystems
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The 'boot' tag shouldn't be exclusive with 'kernel', 'initrd', and 'cmdline',
even though a boot sequence doesn't make much sense when the guest boots from
a kernel directly. It is useful when booting from a kernel in order to install
a new guest, and even when it's not used to install a guest, there is no harm.
On the other hand, we already allow 'boot' together with the kernel tags when parsing.
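For example (paths and boot device chosen arbitrarily), a combination like
the following is now accepted:
<os>
<type arch='x86_64'>hvm</type>
<kernel>/var/lib/libvirt/boot/vmlinuz</kernel>
<initrd>/var/lib/libvirt/boot/initrd.img</initrd>
<cmdline>console=ttyS0</cmdline>
<boot dev='hd'/>
</os>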
This patch adds support for a new storage backend with RBD support.
RBD is the RADOS Block Device and is part of the Ceph distributed storage
system.
It comes in two flavours: Qemu-RBD and Kernel RBD. This storage backend only
supports Qemu-RBD, thus limiting the use of this storage driver to Qemu.
To function, this backend relies on librbd and librados being present on the
local system.
The backend also supports Cephx authentication for safe authentication with
the Ceph cluster.
For storing credentials it uses the built-in secret mechanism of libvirt.
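A minimal sketch of such a pool definition (names and monitor host are
placeholders; a Cephx setup would additionally reference a libvirt secret
via an <auth> element):
<pool type='rbd'>
<name>cephcluster</name>
<source>
<name>libvirt-pool</name>
<host name='mon1.example.org' port='6789'/>
</source>
</pool>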
Signed-off-by: Wido den Hollander <wido@widodh.nl>
This patch adds support for the recent ipset iptables extension
to libvirt's nwfilter subsystem. Ipset allows maintaining 'sets'
of IP addresses, ports and other packet parameters, and allows for
faster lookups (on the order of O(1) vs. O(n)) and rule evaluation
to achieve higher throughput than what can be achieved with
individual iptables rules.
On the command line iptables supports ipset using
iptables ... -m set --match-set <ipset name> <flags> -j ...
where 'ipset name' is the name of a previously created ipset and
flags is a comma-separated list of up to 6 flags. Flags use 'src' and 'dst'
for selecting IP addresses, ports etc. from the source or
destination part of a packet. So a concrete example may look like this:
iptables -A INPUT -m set --match-set test src,src -j ACCEPT
Since ipset management is quite complex, the idea was to leave ipset
management outside of libvirt but still allow users to reference an ipset.
The user would have to make sure the ipset is available once the VM is
started so that the iptables rule(s) referencing the ipset can be created.
Using XML to describe an ipset in an nwfilter rule would then look as
follows:
<rule action='accept' direction='in'>
<all ipset='test' ipsetflags='src,src'/>
</rule>
The two parameters on the command line are also the two distinct XML attributes
'ipset' and 'ipsetflags'.
FYI: Here is the man page for ipset:
https://ipset.netfilter.org/ipset.man.html
Though numad manages the memory allocation of tasks dynamically, it
wants the management application (libvirt) to pre-set the memory
policy according to the advisory nodeset returned from querying numad
(just like the CPU nodeset is pre-bound for the domain process), so that
performance can benefit even more from it.
This patch introduces a new XML attribute 'placement'; the value 'auto'
indicates that the memory policy should be set using the advisory nodeset
from numad. Its value defaults to the value of <vcpu> placement, or 'static'
if 'nodeset' is specified. Example of the new attribute's usage:
<numatune>
<memory placement='auto' mode='interleave'/>
</numatune>
Just like what the current "numatune" does, the 'auto' NUMA memory policy
setting uses libnuma's API too.
If <vcpu> "placement" is "auto" and <numatune> is not specified
explicitly, a default <numatune> will be added with "placement"
set to "auto" and "mode" set to "strict".
The following XML can now fully drive numad:
1) <vcpu> placement is 'auto', no <numatune> is specified.
<vcpu placement='auto'>10</vcpu>
2) <vcpu> placement is 'auto', no 'placement' is specified for
<numatune>.
<vcpu placement='auto'>10</vcpu>
<numatune>
<memory mode='interleave'/>
</numatune>
And it's also able to control the CPU placement and memory policy
independently. e.g.
1) <vcpu> placement is 'auto', and <numatune> placement is 'static'
<vcpu placement='auto'>10</vcpu>
<numatune>
<memory mode='strict' nodeset='0-10,^7'/>
</numatune>
2) <vcpu> placement is 'static', and <numatune> placement is 'auto'
<vcpu placement='static' cpuset='0-24,^12'>10</vcpu>
<numatune>
<memory mode='interleave' placement='auto'/>
</numatune>
A follow-up patch will change the XML formatting code to always output
'placement' for <vcpu>, even if it's 'static'.
In this case qemu changes the spice server's behavior to require a
secure connection on any channel not otherwise specified as
being in plaintext mode. libvirt doesn't currently allow requesting this
(via plaintext-channel=<channel name>).
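If this were exposed as a default channel security mode on the spice
<graphics> element (the 'defaultMode' attribute below is an assumption
for illustration), the XML might look like:
<graphics type='spice' autoport='yes' defaultMode='secure'>
<channel name='main' mode='insecure'/>
</graphics>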
RHBZ: 819499
Signed-off-by: Alon Levy <alevy@redhat.com>
In order to track a block copy job across libvirtd restarts, we
need to save internal XML that tracks the name of the file
holding the mirror. Displaying this name in dumpxml might also
be useful to the user, even if we don't yet have a way to (re-)
start a domain with mirroring enabled up front. This is done
with a new <mirror> sub-element to <disk>, as in:
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/var/lib/libvirt/images/original.img'/>
<mirror file='/var/lib/libvirt/images/copy.img' format='qcow2' ready='yes'/>
...
</disk>
For now, the element is output-only, in live domains; it is ignored
when defining a domain or hot-plugging a disk (since those contexts
use VIR_DOMAIN_XML_INACTIVE in parsing). The 'ready' attribute appears
when libvirt knows that the job has changed from the initial pulling
phase over to the mirroring phase, although absence of the attribute
is not a sure indicator of the current phase. If we come up with a way
to make qemu start with mirroring enabled, we can relax the xml
restriction, and allow <mirror> (but not attribute 'ready') on input.
Testing active-only XML meant tweaking the testsuite slightly, but it
was worth it.
* docs/schemas/domaincommon.rng (diskspec): Add diskMirror.
* docs/formatdomain.html.in (elementsDisks): Document it.
* src/conf/domain_conf.h (_virDomainDiskDef): New members.
* src/conf/domain_conf.c (virDomainDiskDefFree): Clean them.
(virDomainDiskDefParseXML): Parse them, but only internally.
(virDomainDiskDefFormat): Output them.
* tests/qemuxml2argvdata/qemuxml2argv-disk-mirror.xml: New test file.
* tests/qemuxml2xmloutdata/qemuxml2xmlout-disk-mirror.xml: Likewise.
* tests/qemuxml2xmltest.c (testInfo): Alter members.
(testCompareXMLToXMLHelper): Allow more test control.
(mymain): Run new test.
<filesystemtgt> is redundant, as every group uses it; <address>
shouldn't be inside <filesystemtgt>, since that grouping is meant to
cover only the filesystem target; the elements <address>, <alias>, <target>,
... should be interleaved.
Since Xen 3.1 the clock=variable semantic is supported. In addition to
qemu/kvm, Xen also knows about a variant where the offset is relative to
'localtime' instead of 'utc'.
Extend the libvirt structure with a flag 'basis' to specify whether the
offset is relative to 'localtime' or 'utc'.
Extend the libvirt structure with a flag 'reset' to force the reset
behaviour of 'localtime' and 'utc'; this is needed for backward
compatibility with previous versions of libvirt, since they report
incorrect XML.
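With that flag, a variable clock relative to local time might be written as
follows (the adjustment value, in seconds, is only an example):
<clock offset='variable' basis='localtime' adjustment='3600'/>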
Adapt the only user 'qemu' to the new name.
Extend the RelaxNG schema accordingly.
Document the new 'basis' attribute in the HTML documentation.
Adapt test for the new attribute.
Signed-off-by: Philipp Hahn <hahn@univention.de>
Pass argv to the init binary of LXC, using a new <initarg> element.
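A sketch of how this looks in a container's <os> block (the init path and
arguments are examples):
<os>
<type>exe</type>
<init>/bin/sh</init>
<initarg>-c</initarg>
<initarg>echo hello</initarg>
</os>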
* docs/formatdomain.html.in: Document <os> usage for containers
* docs/schemas/domaincommon.rng: Add <initarg> element
* src/conf/domain_conf.c, src/conf/domain_conf.h: parsing and
formatting of <initarg>
* src/lxc/lxc_container.c: Setup LXC argv
* tests/Makefile.am, tests/lxcxml2xmldata/lxc-systemd.xml,
tests/lxcxml2xmltest.c, tests/testutilslxc.c,
tests/testutilslxc.h: Test parsing/formatting of LXC related
XML parts