<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<body>
<h1>PCI topology and hotplug</h1>
<ul id="toc"></ul>
<p>
Perhaps surprisingly, most libvirt guests support only limited PCI
device hotplug out of the box, or even none at all.
</p>
<p>
The reason for this apparent limitation is the fact that each
hotplugged PCI device might require additional PCI controllers to
be added to the guest. Since most PCI controllers can't be
hotplugged, they need to be added before the guest is started;
however, libvirt has no way of knowing in advance how many devices
will be hotplugged during the guest's lifetime, thus making it
impossible to automatically provide the right number of PCI
controllers: any arbitrary number would end up being too big
for some users and too small for others.
</p>
<p>
Ultimately, the user is the only one who knows how much the guest
will need to grow dynamically, so the responsibility of planning
a suitable PCI topology in advance falls on them.
</p>
<p>
This document aims to provide all the information needed to
successfully plan the PCI topology of a guest. Note that the
details can vary a lot between architectures and even machine
types, which is why the information below is organized by
architecture and machine type.
</p>
<h2><a id="x86_64">x86_64 architecture</a></h2>
<h3><a id="x86_64-q35">q35 machine type</a></h3>
<p>
This is a PCI Express native machine type. The default PCI topology
looks like
</p>
<pre>
&lt;controller type='pci' index='0' model='pcie-root'/&gt;
&lt;controller type='pci' index='1' model='pcie-root-port'&gt;
  &lt;model name='pcie-root-port'/&gt;
  &lt;target chassis='1' port='0x10'/&gt;
  &lt;address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/&gt;
&lt;/controller&gt;</pre>
<p>
and supports hotplugging a single PCI Express device, either
emulated or assigned from the host.
</p>
<p>
If you have a very specific use case, such as the appliances
used by <a href="http://libguestfs.org/">libguestfs</a> behind
the scenes to access disk images, and this automatically-added
<code>pcie-root-port</code> controller ends up being a nuisance,
you can prevent libvirt from adding it by manually managing PCI
controllers and addresses according to your needs.
</p>
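<p>
As a rough sketch of what such manual management could look like
(the disk image path and the PCI slot picked here are purely
illustrative), every cold-plugged device can be given an explicit
address on the <code>pcie-root</code> controller, leaving libvirt
with nothing to add on its own; a guest defined this way, of
course, will not support PCI hotplug at all:
</p>
<pre>
&lt;controller type='pci' index='0' model='pcie-root'/&gt;
&lt;disk type='file' device='disk'&gt;
  &lt;driver name='qemu' type='qcow2'/&gt;
  &lt;source file='/var/lib/libvirt/images/appliance.qcow2'/&gt;
  &lt;target dev='vda' bus='virtio'/&gt;
  &lt;address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/&gt;
&lt;/disk&gt;</pre>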
<p>
Slots on the <code>pcie-root</code> controller do not support
hotplug, so the device will be hotplugged into the
<code>pcie-root-port</code> controller. If you plan to hotplug
more than a single PCI Express device, you should add a suitable
number of <code>pcie-root-port</code> controllers when defining
the guest: for example, add
</p>
<pre>
&lt;controller type='pci' model='pcie-root'/&gt;
&lt;controller type='pci' model='pcie-root-port'/&gt;
&lt;controller type='pci' model='pcie-root-port'/&gt;
&lt;controller type='pci' model='pcie-root-port'/&gt;</pre>
<p>
if you expect to hotplug up to three PCI Express devices,
either emulated or assigned from the host. That's all the
information you need to provide: libvirt will fill in the
remaining details automatically. Note that you need to add
the <code>pcie-root</code> controller along with the
<code>pcie-root-port</code> controllers or you will get an
error.
</p>
<p>
Note that if you're adding PCI controllers to a guest and at
the same time you're also adding PCI devices, some of the
controllers will be used for the newly-added devices and won't
be available for hotplug once the guest has been started.
</p>
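<p>
Once the guest is running, hotplugging a PCI Express device is
then a matter of attaching it, for example with
<code>virsh attach-device</code>. As a minimal sketch (assuming
the standard <code>default</code> libvirt network exists), a
virtio network interface could be described as below, and libvirt
will pick a free <code>pcie-root-port</code> for it automatically:
</p>
<pre>
&lt;interface type='network'&gt;
  &lt;source network='default'/&gt;
  &lt;model type='virtio'/&gt;
&lt;/interface&gt;</pre>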
<p>
If you expect to hotplug legacy PCI devices, then you will need
specialized controllers, since all those mentioned above are
intended for PCI Express devices only: add
</p>
<pre>
&lt;controller type='pci' model='pcie-to-pci-bridge'/&gt;</pre>
<p>
and you'll be able to hotplug up to 31 legacy PCI devices,
either emulated or assigned from the host, in the slots
from 0x01 to 0x1f of the <code>pcie-to-pci-bridge</code> controller.
</p>
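<p>
As an illustration (the device model is just one possible choice,
and the <code>default</code> network is again assumed to exist),
an emulated legacy PCI NIC such as <code>e1000</code> can then be
hotplugged the same way, and libvirt will typically place it in
one of the free slots of the <code>pcie-to-pci-bridge</code>
controller:
</p>
<pre>
&lt;interface type='network'&gt;
  &lt;source network='default'/&gt;
  &lt;model type='e1000'/&gt;
&lt;/interface&gt;</pre>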
<h3><a id="x86_64-i440fx">i440fx (pc) machine type</a></h3>
<p>
This is a legacy PCI native machine type. The default PCI
topology looks like
</p>
<pre>
&lt;controller type='pci' index='0' model='pci-root'/&gt;</pre>
<p>
where each of the 31 slots (from 0x01 to 0x1f) on the
<code>pci-root</code> controller is hotplug capable and
can accept a legacy PCI device, either emulated or
assigned from the host.
</p>
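<p>
For example (the host PCI address below is merely a placeholder
for a real device on your system), a device assigned from the
host could be hotplugged with <code>virsh attach-device</code>
using a snippet along these lines, and it will end up in one of
the free <code>pci-root</code> slots:
</p>
<pre>
&lt;hostdev mode='subsystem' type='pci' managed='yes'&gt;
  &lt;source&gt;
    &lt;address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/&gt;
  &lt;/source&gt;
&lt;/hostdev&gt;</pre>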
<h2><a id="ppc64">ppc64 architecture</a></h2>
<h3><a id="ppc64-pseries">pseries machine type</a></h3>
<p>
The default PCI topology for the <code>pseries</code> machine
type looks like
</p>
<pre>
&lt;controller type='pci' index='0' model='pci-root'&gt;
  &lt;model name='spapr-pci-host-bridge'/&gt;
  &lt;target index='0'/&gt;
&lt;/controller&gt;</pre>
<p>
The 31 slots, from 0x01 to 0x1f, on a <code>pci-root</code>
controller are all hotplug capable and, despite the name
suggesting otherwise, starting with QEMU 2.9 all of them
can accept PCI Express devices in addition to legacy PCI
devices; however, libvirt will only place emulated devices
on the default <code>pci-root</code> controller.
</p>
<p>
In order to take advantage of improved error reporting and
recovery capabilities, PCI devices assigned from the
host need to be isolated by placing each on a separate
<code>pci-root</code> controller, which has to be prepared
in advance for hotplug to work: for example, add
</p>
<pre>
&lt;controller type='pci' model='pci-root'/&gt;
&lt;controller type='pci' model='pci-root'/&gt;
&lt;controller type='pci' model='pci-root'/&gt;</pre>
<p>
if you expect to hotplug up to three PCI devices assigned
from the host.
</p>
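<p>
As with the other examples, libvirt will fill in the remaining
details: each additional controller can be expected to end up
looking roughly like the following (the index values shown are
simply the next free ones):
</p>
<pre>
&lt;controller type='pci' index='1' model='pci-root'&gt;
  &lt;model name='spapr-pci-host-bridge'/&gt;
  &lt;target index='1'/&gt;
&lt;/controller&gt;</pre>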
<h2><a id="aarch64">aarch64 architecture</a></h2>
<h3><a id="aarch64-virt">mach-virt (virt) machine type</a></h3>
<p>
This machine type mostly behaves the same as the
<a href="#x86_64-q35">q35 machine type</a>, so you can just
refer to that section for information.
</p>
<p>
The only difference worth mentioning is that using legacy
PCI for <code>mach-virt</code> guests is extremely uncommon,
so you'll probably never need to add controllers other than
<code>pcie-root-port</code>.
</p>
</body>
</html>