<html> <!-- -*- html -*- -->
<body>
<h1>OpenVZ container driver</h1>

<ul id="toc"></ul>

<p>
The OpenVZ driver for libvirt allows use and management of container-based
virtualization on a Linux host OS. Prior to using the OpenVZ
driver, the OpenVZ-enabled kernel must be installed & booted, and the
OpenVZ userspace tools installed. The libvirt driver has been tested
with OpenVZ 3.0.22, but other 3.0.x versions should also work without
undue trouble.
</p>
<h2><a name="connections">Connections to OpenVZ driver</a></h2>
|
|
|
|
<p>
|
|
The libvirt OpenVZ driver is a single-instance privileged driver,
|
|
with a driver name of 'openvz'. Some example connection URIs for
|
|
the libvirt driver are:
|
|
</p>
|
|
|
|

<pre>
openvz:///system                      (local access)
openvz+unix:///system                 (local access)
openvz://example.com/system           (remote access, TLS/x509)
openvz+tcp://example.com/system       (remote access, SASL/Kerberos)
openvz+ssh://root@example.com/system  (remote access, SSH tunnelled)
</pre>
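
<p>
As an illustration, any of these URIs can be used with a standard libvirt
client such as <code>virsh</code>. A minimal sketch, assuming a local
OpenVZ host and a reachable remote host, would be:
</p>

<pre>
# open the local, privileged OpenVZ driver and list all containers
virsh -c openvz:///system list --all

# the same operation over an SSH tunnel to a remote host
virsh -c openvz+ssh://root@example.com/system list --all
</pre>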
<h2><a name="notes">Notes on bridged networking</a></h2>
|
|
|
|
<p>
|
|
Bridged networking enables a guest domain (ie container) to have its
|
|
network interface connected directly to the host's physical LAN. Before
|
|
this can be used there are a couple of configuration pre-requisites for
|
|
the host OS.
|
|
</p>
|
|
|
|
<h3><a name="host">Host network devices</a></h3>
|
|
|
|

<p>
One or more of the physical devices must be attached to a bridge. The
process for this varies according to the operating system in use, so
for up to date notes consult the <a href="http://wiki.libvirt.org">Wiki</a>
or your operating system's networking documentation. The basic idea is
that the host OS should end up with a bridge device "br0" containing a
physical device "eth0", or a bonding device "bond0".
</p>
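
<p>
As a rough sketch, on a host with the bridge-utils package installed the
bridge could be created by hand as shown below. The interface name
<code>eth0</code> is only an example, and most distributions provide a
persistent way to configure this in their network scripts instead:
</p>

<pre>
# create the bridge and attach the physical NIC to it
brctl addbr br0
brctl addif br0 eth0

# bring the devices up; the host's IP configuration should then be
# moved from eth0 onto br0
ifconfig eth0 up
ifconfig br0 up
</pre>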
<h3><a name="tools">OpenVZ tools configuration</a></h3>
|
|
|
|
<p>
|
|
OpenVZ releases later than 3.0.23 ship with a standard network device
|
|
setup script that is able to setup bridging, named
|
|
<code>/usr/sbin/vznetaddbr</code>. For releases prior to 3.0.23, this
|
|
script must be created manually by the host OS administrator. The
|
|
simplest way is to just download the latest version of this script
|
|
from a newer OpenVZ release, or upstream source repository. Then
|
|
a generic configuration file <code>/etc/vz/vznetctl.conf</code>
|
|
must be created containing
|
|
</p>
|
|
|
|

<pre>
#!/bin/bash
EXTERNAL_SCRIPT="/usr/sbin/vznetaddbr"
</pre>
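
<p>
For a pre-3.0.23 host, the manual setup might therefore look roughly like
the following sketch, assuming the <code>vznetaddbr</code> script has
already been copied from a newer OpenVZ release into the current
directory:
</p>

<pre>
# install the bridging helper script where OpenVZ expects it
install -m 0755 vznetaddbr /usr/sbin/vznetaddbr

# point the OpenVZ network hook at that script
printf '#!/bin/bash\nEXTERNAL_SCRIPT="/usr/sbin/vznetaddbr"\n' > /etc/vz/vznetctl.conf
</pre>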

<p>
The host OS is now ready to allow bridging of guest containers, which
will work whether the container is started with libvirt or with the
OpenVZ tools.
</p>
<h2><a name="example">Example guest domain XML configuration</a></h2>
|
|
|
|
<p>
|
|
The current libvirt OpenVZ driver has a restriction that the
|
|
domain names must match the OpenVZ container VEID, which by
|
|
convention start at 100, and are incremented from there. The
|
|
choice of OS template to use inside the container is determined
|
|
by the <code>filesystem</code> tag, and the template source name
|
|
matches the templates known to OpenVZ tools.
|
|
</p>
|
|
|
|

<pre>
<domain type='openvz' id='104'>
  <name>104</name>
  <uuid>86c12009-e591-a159-6e9f-91d18b85ef78</uuid>
  <vcpu>3</vcpu>
  <os>
    <type>exe</type>
    <init>/sbin/init</init>
  </os>
  <devices>
    <filesystem type='template'>
      <source name='fedora-9-i386-minimal'/>
      <target dir='/'/>
    </filesystem>
    <interface type='bridge'>
      <mac address='00:18:51:5b:ea:bf'/>
      <source bridge='br0'/>
      <target dev='veth101.0'/>
    </interface>
  </devices>
</domain>
</pre>
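
<p>
To illustrate how such a configuration might be used, the XML above could
be saved to a file and fed to <code>virsh</code>; the filename
<code>container104.xml</code> here is only an example:
</p>

<pre>
# define the container from the XML and start it
virsh -c openvz:///system define container104.xml
virsh -c openvz:///system start 104

# check that it is running
virsh -c openvz:///system list
</pre>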

</body>
</html>