The HTML5 doctype is simply:

  <!DOCTYPE html>

No DTD is present because HTML5 is no longer defined as an
extension of SGML.
XSL has no way to natively output a doctype without a public
or system identifier, so we have to use an <xsl:text> hack
instead.
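A minimal sketch of that kind of workaround (an assumed form, not
necessarily the exact template used here): the doctype is emitted
literally with output escaping disabled.
  <xsl:template match="/">
    <xsl:text disable-output-escaping="yes">&lt;!DOCTYPE html&gt;&#10;</xsl:text>
    <xsl:apply-templates/>
  </xsl:template>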
See also
https://dev.w3.org/html5/html-author/#doctype-declaration
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
A handful of places in the docs choose to use — instead
of '-' for no clear reason. Remove this inconsistency.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The 'name' attribute on <a...> elements is deprecated in favour
of the 'id' attribute, which is allowed on any element. HTML5
drops 'name' support entirely.
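For example (a hypothetical anchor name, for illustration only):
  <a name="FeatureSupport">...</a>   <!-- deprecated -->
  <a id="FeatureSupport">...</a>     <!-- preferred, valid HTML5 -->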
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
This patch addresses on PPC the same aspects that bug 1103314
addressed on x86.
The PCI expander bus creates multiple primary PCI buses, each of
which can be assigned a specific NUMA affinity, which on x86 is
advertised through ACPI on a per-bus basis.
For SPAPR, a PHB's NUMA affinities are assigned on a per-PHB basis,
and there is no mechanism for advertising NUMA affinities to a guest
on a per-bus basis. So, even if qemu-ppc manages to get some sort of
multi-bus topology working using PXB, there is no way to expose the
affinities of these buses to the guest. They can only be exposed on a
per-PHB/per-domain basis.
So this patch enables the NUMA node tag on the pci-root controller
on PPC.
The way to set the NUMA node is through the numa_node option of the
spapr-pci-host-bridge device. However, for the implicit PHB the only
way to set numa_node is via the -global option. The -global option
applies to all PHBs unless the option is explicitly specified for the
respective PHB on the command line. The default PHB has the emulated
devices only, so the patch prevents setting the NUMA node for the
default PHB.
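A rough sketch of the resulting domain XML (assuming the node is
expressed through the controller's <target> sub-element, as for other
PCI controllers; the index and node values are placeholders):
  <controller type='pci' index='1' model='pci-root'>
    <!-- <target>/<node> placement is assumed here -->
    <target>
      <node>1</node>
    </target>
  </controller>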
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.vnet.ibm.com>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Users may want to run the init command of a container as a special
user / group. This is achieved by adding <inituser> and <initgroup>
elements. Note that the user can either provide a name or an ID to
specify the user / group to be used.
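For example, in the container's <os> section (the init path and the
user/group names are placeholders):
  <os>
    <type>exe</type>
    <init>/bin/sh</init>
    <inituser>appuser</inituser>
    <initgroup>appgroup</initgroup>
  </os>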
This commit also fixes a side effect of being able to run the command
as a non-root user: the user needs rights on the tty to allow shell
job control.
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Some containers may want the application to run in a special directory.
Add an <initdir> element to the domain configuration to handle this
case and use it in the lxc driver.
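For example (the paths are placeholders):
  <os>
    <type>exe</type>
    <init>/usr/bin/myapp</init>
    <initdir>/srv/myapp</initdir>
  </os>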
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
When running an application container, setting environment variables
can be important.
The newly introduced <initenv> element in the domain configuration
allows passing environment variables to the init program.
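For example (the variable name and value are placeholders):
  <os>
    <type>exe</type>
    <init>/usr/bin/myapp</init>
    <initenv name='MYAPP_CONFIG'>/etc/myapp.conf</initenv>
  </os>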
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Add support for vgaconf driver configuration. In the domain XML it
looks like this:
  <video>
    <driver vgaconf='io|on|off'/>
    <model .../>
  </video>
It was added with bhyve gop video in mind to allow users to control
how the video device is exposed to the guest, specifically, how VGA
I/O is handled.
One can refer to the bhyve manual page for a more detailed
description of the possible VGA configuration options:
https://www.freebsd.org/cgi/man.cgi?query=bhyve&manpath=FreeBSD+12-current
The relevant part can be found by searching for the 'vgaconf'
keyword.
Also, add some tests for this new feature.
Signed-off-by: Roman Bogorodskiy <bogorodskiy@gmail.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Update the per-device boot schema to add an optional loadparm
parameter, e.g.:
  <boot order='1' loadparm='2'/>
Extend virDomainDeviceInfo to support the loadparm option.
Modify the appropriate functions to parse loadparm from the boot
device XML.
Add an xml2xml test to validate the field.
Signed-off-by: Farhan Ali <alifm@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Add a new <ioapic> element with a driver attribute.
Possible values are qemu and kvm. With 'qemu', the I/O
APIC can be put in userspace even for KVM domains.
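A sketch of the expected usage (assuming the element lives under
<features>, like other hypervisor feature tuning):
  <features>
    <ioapic driver='qemu'/>
  </features>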
https://bugzilla.redhat.com/show_bug.cgi?id=1427005
The parser had been clearing out *all* suggested device names for
type='direct' (aka macvtap) interfaces. All of the code implementing
macvtap allows for a user-specified device name, so we should allow
it. If an interface name starts with "macvtap" or "macvlan", though,
we still clear it out, just as we do with "vnet" (the prefix used for
automatically generated tap device names), since those are the
prefixes we autogenerate for macvtap and macvlan devices.
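For example, a user-supplied name like the following is now
preserved (the device names are placeholders):
  <interface type='direct'>
    <source dev='eth0' mode='bridge'/>
    <target dev='guestmv0'/>
  </interface>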
Resolves: https://bugzilla.redhat.com/1335798
This patch introduces a new <cache> sub-element of /domain/cpu:
  <cache level='N' mode='emulate'/>
  <cache mode='passthrough'/>
  <cache mode='disable'/>
Currently only a single <cache> element is allowed.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
We are currently parsing only rx/frames/max because that's the only
value that makes sense for us. The tun device just added support for
this one, and the others are only supported by hardware devices,
which we don't need to worry about: the only way we'd pass those to
the domain is using <hostdev/> or <interface type='hostdev'/>, and in
those cases the guest can modify the settings itself.
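A sketch of the interface XML this corresponds to (assuming the
setting lives in a <coalesce> sub-element of <interface>; the value
is a placeholder):
  <interface type='network'>
    <source network='default'/>
    <coalesce>
      <rx>
        <frames max='32'/>
      </rx>
    </coalesce>
  </interface>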
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
qemu requires that the topology equal the maximum vcpu count.
Document this along with the API for setting the maximum vcpu count
and the XML element.
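For example, a topology whose product matches the maximum vcpu count
(the values are illustrative):
  <vcpu placement='static'>8</vcpu>
  <cpu>
    <topology sockets='2' cores='2' threads='2'/>
  </cpu>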
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1426220
The attribute can be used to request a specific way of checking
whether the virtual CPU created by the hypervisor matches the
specification in the domain XML.
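A sketch of how this would look on the <cpu> element (the attribute
name 'check' and the model are assumed for illustration):
  <!-- 'check' attribute name assumed here -->
  <cpu mode='custom' match='exact' check='full'>
    <model fallback='forbid'>Haswell-noTSX</model>
  </cpu>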
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
For NVDIMM devices it is optionally possible to specify the size
of internal storage for namespaces. Namespaces are a feature that
allows users to partition the NVDIMM for different uses.
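A sketch of the corresponding XML (assuming the namespace storage
size is expressed through a <label> sub-element of the NVDIMM's
<target>; the values are placeholders):
  <memory model='nvdimm'>
    <source>
      <path>/tmp/nvdimm</path>
    </source>
    <target>
      <size unit='KiB'>523264</size>
      <node>0</node>
      <!-- <label> placement is assumed here -->
      <label>
        <size unit='KiB'>128</size>
      </label>
    </target>
  </memory>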
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Now that NVDIMM has found its way into libvirt, users might want
to fine-tune some settings for each module separately. One such
setting is 'share=on|off' for the memory-backend-file object.
This setting - just as its name suggests - enables sharing the
nvdimm module with other applications. Under the hood it controls
whether qemu mmap()s the file as MAP_PRIVATE or MAP_SHARED.
We already have such a config knob in the domain XML, but it's just
an attribute on the NUMA <cell/>. This does not give fine-grained
enough tuning on a per-memory-device basis, so we need to have the
attribute for each device too.
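A rough sketch of the per-device setting (the attribute name
'access' is an assumption made for illustration, mirroring the NUMA
cell setting, not a verified name):
  <!-- 'access' attribute name is assumed here -->
  <memory model='nvdimm' access='shared'>
    ...
  </memory>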
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
NVDIMM is a new type of memory introduced in QEMU 2.6. The idea
is that we have a non-volatile memory module that keeps the data
persistent across domain reboots.
At the domain XML level, we already have some representation of
'dimm' modules. Long story short, NVDIMM will utilize the
existing <memory/> element that lives under <devices/> by adding
a new value 'nvdimm' to the existing @model attribute and
introducing a new <path/> element under <source/> while reusing
the other fields. The resulting XML would appear as:
  <memory model='nvdimm'>
    <source>
      <path>/tmp/nvdimm</path>
    </source>
    <target>
      <size unit='KiB'>523264</size>
      <node>0</node>
    </target>
    <address type='dimm' slot='0'/>
  </memory>
So far, this is just an XML parser/formatter extension. The QEMU
driver implementation is in the next commit.
For more info on NVDIMM visit the following web page:
http://pmem.io/
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
bhyve supports a 'gop' video device that allows clients to connect
to VMs using VNC clients. This commit adds support for that to
the bhyve driver:
- Introduce the 'gop' video device type
- Add capabilities probing for the 'fbuf' device that's
responsible for graphics
- Update the command builder routines to let users configure the
domain's VNC via gop graphics (see the sketch below).
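A minimal sketch of the relevant domain XML (the listen address is
a placeholder):
  <graphics type='vnc'>
    <listen type='address' address='127.0.0.1'/>
  </graphics>
  <video>
    <model type='gop'/>
  </video>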
Signed-off-by: Roman Bogorodskiy <bogorodskiy@gmail.com>