docs: Add "PCI topology and hotplug" guidelines

For all machine types except i440fx, making a guest hotplug
capable requires some sort of planning. Add some information
to help users make educated choices when defining the PCI
topology of guests.

Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Andrea Bolognani 2017-07-25 09:34:53 +02:00
parent e9f3222705
commit b9b0aa06a0
2 changed files with 167 additions and 1 deletions


@@ -3505,7 +3505,9 @@
       appear more than once, with a group of virtual devices tied to a
       virtual controller. Normally, libvirt can automatically infer such
       controllers without requiring explicit XML markup, but sometimes
-      it is necessary to provide an explicit controller element.
+      it is necessary to provide an explicit controller element, notably
+      when planning the <a href="pci-hotplug.html">PCI topology</a>
+      for guests where device hotplug is expected.
     </p>
     <pre>

docs/pci-hotplug.html.in Normal file

@@ -0,0 +1,164 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<body>
<h1>PCI topology and hotplug</h1>
<ul id="toc"></ul>
<p>
Perhaps surprisingly, most libvirt guests support only limited PCI
device hotplug out of the box, or even none at all.
</p>
<p>
The reason for this apparent limitation is that each hotplugged
PCI device might require additional PCI controllers to be added
to the guest, and libvirt has no way of knowing in advance how
many devices will be hotplugged during the guest's lifetime. This
makes it impossible to automatically provide the right number of
PCI controllers: any arbitrary number would end up being too big
for some users, and too small for others.
</p>
<p>
Ultimately, the user is the only one who knows how much the guest
will need to grow dynamically, so the responsibility of planning
a suitable PCI topology in advance falls on them.
</p>
<p>
This document aims to provide all the information needed to
successfully plan the PCI topology of a guest. Note that the
details can vary a lot between architectures and even machine
types, hence the way this document is organized.
</p>
<h2><a name="x86_64">x86_64 architecture</a></h2>
<h3><a name="x86_64-q35">q35 machine type</a></h3>
<p>
This is a PCI Express native machine type. The default PCI topology
looks like
</p>
<pre>
&lt;controller type='pci' index='0' model='pcie-root'/&gt;
&lt;controller type='pci' index='1' model='pcie-root-port'&gt;
&lt;model name='pcie-root-port'/&gt;
&lt;target chassis='1' port='0x10'/&gt;
&lt;address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/&gt;
&lt;/controller&gt;</pre>
<p>
and supports hotplugging a single PCI Express device, either
emulated or assigned from the host.
</p>
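<p>
For example, an emulated virtio network device can be hotplugged
into this default topology with something like the following,
where the guest name and file name are of course just
placeholders:
</p>
<pre>
$ cat net.xml  # device XML for an emulated virtio NIC
&lt;interface type='network'&gt;
  &lt;source network='default'/&gt;
  &lt;model type='virtio'/&gt;
&lt;/interface&gt;

$ virsh attach-device guest net.xml --live</pre>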
<p>
Slots on the <code>pcie-root</code> controller do not support
hotplug, so the device will be hotplugged into the
<code>pcie-root-port</code> controller. If you plan to hotplug
more than a single PCI Express device, you should add a suitable
number of <code>pcie-root-port</code> controllers when defining
the guest: for example, add
</p>
<pre>
&lt;controller type='pci' model='pcie-root-port'/&gt;
&lt;controller type='pci' model='pcie-root-port'/&gt;
&lt;controller type='pci' model='pcie-root-port'/&gt;</pre>
<p>
if you expect to hotplug up to three PCI Express devices,
either emulated or assigned from the host. That's all the
information you need to provide: libvirt will fill in the
remaining details automatically.
</p>
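<p>
For reference, once the guest has been defined, libvirt will have
expanded each of those controllers to something like the snippet
below; the exact <code>index</code>, <code>chassis</code>,
<code>port</code> and <code>address</code> values depend on the
libvirt version and on the rest of the topology, so they are
shown here purely as an example:
</p>
<pre>
&lt;controller type='pci' index='2' model='pcie-root-port'&gt;
  &lt;model name='pcie-root-port'/&gt;
  &lt;target chassis='2' port='0x11'/&gt;
  &lt;address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/&gt;
&lt;/controller&gt;</pre>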
<p>
If you expect to hotplug legacy PCI devices, then you will need
specialized controllers, since all those mentioned above are
intended for PCI Express devices only: add
</p>
<pre>
&lt;controller type='pci' model='dmi-to-pci-bridge'/&gt;
&lt;controller type='pci' model='pci-bridge'/&gt;</pre>
<p>
and you'll be able to hotplug up to 31 legacy PCI devices,
either emulated or assigned from the host.
</p>
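<p>
Once those controllers are in place, hotplug works the same way
as for PCI Express devices; for example, an emulated rtl8139
network device, being a legacy PCI device, should end up behind
the <code>pci-bridge</code> controller. Again, the guest name and
file name below are just placeholders:
</p>
<pre>
$ cat legacy-net.xml  # rtl8139 is a legacy PCI device
&lt;interface type='network'&gt;
  &lt;source network='default'/&gt;
  &lt;model type='rtl8139'/&gt;
&lt;/interface&gt;

$ virsh attach-device guest legacy-net.xml --live</pre>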
<h3><a name="x86_64-i440fx">i440fx (pc) machine type</a></h3>
<p>
This is a legacy PCI native machine type. The default PCI
topology looks like
</p>
<pre>
&lt;controller type='pci' index='0' model='pci-root'/&gt;</pre>
<p>
where each of the 31 slots on the <code>pci-root</code>
controller is hotplug capable and can accept a legacy PCI
device, either emulated or assigned from the host.
</p>
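<p>
No advance planning is required here; for example, a PCI device
assigned from the host can be hotplugged into any free slot with
something like the following, where the host address is of course
just a placeholder:
</p>
<pre>
$ cat hostdev.xml
&lt;hostdev mode='subsystem' type='pci' managed='yes'&gt;
  &lt;source&gt;
    &lt;!-- illustrative host PCI address --&gt;
    &lt;address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/&gt;
  &lt;/source&gt;
&lt;/hostdev&gt;

$ virsh attach-device guest hostdev.xml --live</pre>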
<h2><a name="ppc64">ppc64 architecture</a></h2>
<h3><a name="ppc64-pseries">pseries machine type</a></h3>
<p>
The default PCI topology for the <code>pseries</code> machine
type looks like
</p>
<pre>
&lt;controller type='pci' index='0' model='pci-root'&gt;
&lt;model name='spapr-pci-host-bridge'/&gt;
&lt;target index='0'/&gt;
&lt;/controller&gt;</pre>
<p>
The 31 slots on a <code>pci-root</code> controller are all
hotplug capable and, despite the name suggesting otherwise,
starting with QEMU 2.9 all of them can accept PCI Express
devices in addition to legacy PCI devices; however,
libvirt will only place emulated devices on the default
<code>pci-root</code> controller.
</p>
<p>
In order to take advantage of improved error reporting and
recovery capabilities, PCI devices assigned from the
host need to be isolated by placing each on a separate
<code>pci-root</code> controller, which has to be prepared
in advance for hotplug to work: for example, add
</p>
<pre>
&lt;controller type='pci' model='pci-root'/&gt;
&lt;controller type='pci' model='pci-root'/&gt;
&lt;controller type='pci' model='pci-root'/&gt;</pre>
<p>
if you expect to hotplug up to three PCI devices assigned
from the host.
</p>
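<p>
As on q35, that's all the information you need to provide:
libvirt will fill in the remaining details, expanding each extra
<code>pci-root</code> controller to something like the snippet
below (the exact <code>index</code> values depend on the existing
topology):
</p>
<pre>
&lt;controller type='pci' index='1' model='pci-root'&gt;
  &lt;model name='spapr-pci-host-bridge'/&gt;
  &lt;target index='1'/&gt;
&lt;/controller&gt;</pre>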
<h2><a name="aarch64">aarch64 architecture</a></h2>
<h3><a name="aarch64-virt">mach-virt (virt) machine type</a></h3>
<p>
This machine type mostly behaves the same as the
<a href="#x86_64-q35">q35 machine type</a>, so you can just
refer to that section for information.
</p>
<p>
The only difference worth mentioning is that using legacy
PCI for <code>mach-virt</code> guests is extremely uncommon,
so you'll probably never need to add controllers other than
<code>pcie-root-port</code>.
</p>
</body>
</html>