This adds a 'lockd' lock driver, which is just a client that
talks to the virtlockd daemon to perform all locking. This will
be the default lock driver for any hypervisor which needs one.
* src/Makefile.am: Add lockd.so plugin
* src/locking/lock_driver_lockd.c: Lockd driver impl
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Introduce a lock_daemon_dispatch.c file which implements the
server-side dispatcher for the RPC APIs previously defined in
the lock protocol.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The virtlockd daemon will maintain locks on behalf of libvirtd.
There are two reasons for it to be separate:
- Avoid risk of other libvirtd threads accidentally
releasing fcntl() locks by opening + closing a file
that is locked
- Ensure locks can be preserved across libvirtd restarts.
virtlockd will need to be able to re-exec itself while
maintaining locks. This is simpler to achieve if its
sole job is maintaining locks
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Most of this deals with moving the libvirt-guests.sh script which
does all the work to /usr/libexec, so it can be shared by both
systemd and traditional init. Previously systemd depended on
the script being in /etc/init.d
Required to fix https://bugzilla.redhat.com/show_bug.cgi?id=789747
These functions set the bridge part of QoS when bringing a domain's
interface up. Long story short, if a 'floor' is set, a new QoS class
is created. The class ID MUST be unique within the bridge and must be
kept around for the unplug phase.
Parallels Cloud Server uses a virtual network model for network
configuration, with its own tools for virtual network management.
So add a network driver which will be responsible for listing
virtual networks and performing different operations on them
(in subsequent patches).
This patch only allows listing virtual network names, without
any parameters like DHCP server settings.
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
The patch adds the backend driver to support iSCSI format storage pools
and volumes for ESX host. The mapping of ESX iSCSI specifics to Libvirt
is as follows:
1. ESX static iSCSI target <------> Libvirt Storage Pools
2. ESX iSCSI LUNs <------> Libvirt Storage Volumes.
The above understanding is based on http://libvirt.org/storage.html.
The operations supported on iSCSI pools include:
1. Listing storage pools & volumes.
2. Getting the XML descriptor of pools & volumes.
3. Looking up pools & volumes by name, UUID and path (if applicable).
iSCSI pools do not support operations such as creating / removing
pools and volumes.
To be able to do controlled shutdown/reboot of containers, an
API to talk to init via /dev/initctl is required. Fortunately
this is quite straightforward to implement, and is supported
by both sysvinit and systemd. Upstart support for /dev/initctl
is unclear.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
This patch adds FUSE support for libvirt LXC.
We can use a FUSE filesystem to generate sysinfo dynamically,
so we can isolate /proc/meminfo, /proc/cpuinfo and so on through
the FUSE filesystem.
We mount a FUSE filesystem for every container. The mount name
is 'libvirt' and the mount point is
localstatedir/run/libvirt/lxc/containername.
Signed-off-by: Gao feng <gaofeng@cn.fujitsu.com>
The patch refactors the current ESX storage driver for the following reasons:
1. Given that most of the public APIs exposed by the storage driver in Libvirt
remain the same, the ESX storage driver should not implement logic specific
to only one supported format (the current implementation only supports VMFS).
2. Decoupling interface from specific storage implementation gives us an
extensible design to hook implementation for other supported storage
formats.
This patch refactors the current driver to implement it as a facade pattern, i.e.
the driver exposes all the public libvirt APIs, but uses backend drivers to get
the required task done. The backend drivers provide implementation specific to
the type of storage device.
File changes:
------------------
esx_storage_driver.c ----> esx_storage_driver.c (base storage driver)
|
|---> esx_storage_backend_vmfs.c (VMFS backend)
* configure.ac docs/news.html.in libvirt.spec.in: update for the new release
* po/*.po*: updated from transifex, with a lot of added support (e.g. Indian
  languages), and regenerated
Currently, the CPU model driver is not implemented for PowerPC.
The host's CPU information sometimes needs to be exposed to the
guest's XML file.
This patch is to implement the callback functions of CPU model driver.
Signed-off-by: Li Zhang <zhlcindy@linux.vnet.ibm.com>
Acked-by: Michal Privoznik <mprivozn@redhat.com>
Add two new APIs virNetServerServiceNewPostExecRestart and
virNetServerServicePreExecRestart which allow a virNetServerServicePtr
object to be created from a JSON object and saved to a
JSON object, for the purpose of re-exec'ing a process.
This includes serialization of the listening sockets associated
with the service
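To illustrate the intent (a sketch only; the exact signatures here are
assumptions, not taken from this patch):
    /* Before re-exec'ing: serialize the service state to JSON */
    virJSONValuePtr state = virNetServerServicePreExecRestart(svc);
    /* ... re-exec() the daemon, handing over the JSON state ... */
    /* In the new process: rebuild the service, inheriting the
     * already-listening sockets instead of re-binding them */
    virNetServerServicePtr svc2 = virNetServerServiceNewPostExecRestart(state);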
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The previously introduced virFile{Lock,Unlock} APIs provide a
way to acquire/release fcntl() locks on individual files. For
unknown reasons though, the POSIX spec says that fcntl() locks
are released when *any* file handle referring to the same path
is closed. In the following sequence:
threadA: fd1 = open("foo")
threadB: fd2 = open("foo")
threadA: virFileLock(fd1)
threadB: virFileLock(fd2)
threadB: close(fd2)
you'd expect threadA to come out holding a lock on 'foo', and
indeed it does hold a lock for a very short time. Unfortunately
when threadB does close(fd2) this releases the lock associated
with fd1. For the current libvirt use case for virFileLock -
pidfiles - this doesn't matter since the lock is acquired
at startup while single threaded and never released until
exit.
To provide a more generally useful API though, it is necessary
to introduce a slightly higher level abstraction, which is to
be referred to as a "lockspace". This is to be provided by
a virLockSpacePtr object in src/util/virlockspace.{c,h}. The
core idea is that the lockspace keeps track of what files are
already open+locked. This means that when a 2nd thread comes
along and tries to acquire a lock, it doesn't end up opening
and closing a new FD. The lockspace just checks the current
list of held locks and immediately returns VIR_ERR_RESOURCE_BUSY.
NB, the API as it stands is designed on the basis that the
files being locked are not being otherwise opened and used
by the application code. One approach to using this API is to
acquire locks based on a hash of the filepath.
eg to lock /var/lib/libvirt/images/foo.img the application
might do
virLockSpacePtr lockspace = virLockSpaceNew("/var/lib/libvirt/imagelocks");
lockname = md5sum("/var/lib/libvirt/images/foo.img");
virLockSpaceAcquireLock(lockspace, lockname);
NB, in this example, the caller should ensure that the path
is canonicalized before calculating the checksum.
It is also possible to do locks directly on resources by
using a NULL lockspace directory and then using the file
path as the lock name eg
virLockSpacePtr lockspace = virLockSpaceNew(NULL);
virLockSpaceAcquireLock(lockspace, "/var/lib/libvirt/images/foo.img");
This is only safe to do though if no other part of the process
will be opening the files. This will be the case when this
code is used inside the soon-to-be-reposted virtlockd daemon.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
While the changes to sanlock driver should be stable, the actual
implementation of sanlock_helper is supposed to be replaced in the
future. However, before we can implement a better sanlock_helper, we
need an administrative interface to libvirtd so that the helper can just
pass a "leases lost" event to the particular libvirt driver and
everything else will be taken care of internally. This approach will
also allow libvirt to pass such event to applications and use
appropriate reasons when changing domain states.
The temporary implementation handles all actions directly by calling
appropriate libvirt APIs (which among other things means that it needs
to know the credentials required to connect to libvirtd).
Add a read-only udev based backend for virInterface. Useful for distros
that do not have netcf support yet. Multiple libvirt based utilities use
a HAL based fallback when virInterface is not available, which is less
than ideal. This implements:
* virConnectNumOfInterfaces()
* virConnectListInterfaces()
* virConnectNumOfDefinedInterfaces()
* virConnectListDefinedInterfaces()
* virConnectListAllInterfaces()
* virInterfaceLookupByName()
* virInterfaceLookupByMACString()
Continue consolidation of process functions by moving some
helpers out of command.{c,h} into virprocess.{c,h}
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Based exclusively on work by Eric Blake in a patch posted with the same
subject, with some modifications related to comments and my plans to
add another backend.
Added WITH_INTERFACE as the only automake variable deciding whether to
build the driver, and WITH_NETCF to identify that we want to use the
netcf library as the backend.
* configure.ac: Added with_interface
* src/interface/netcf_driver.c: Renamed..
* src/interface/interface_backend_netcf.c: ..to this to match storage.
* src/interface/netcf_driver.h: Renamed..
* src/interface/interface_driver.h: ..to this.
* daemon/Makefile.am: Respect WITH_INTERFACE and WITH_NETCF.
* libvirt.spec.in: Add RPM support for --with-interface
This has several benefits:
1. Future snapshot-related code has a definite place to go (and I
_will_ be adding some)
2. Snapshot errors now use the VIR_FROM_DOMAIN_SNAPSHOT error
classification, which has been underutilized (previously only in
libvirt.c)
* src/conf/domain_conf.h, domain_conf.c: Split...
* src/conf/snapshot_conf.h, snapshot_conf.c: ...into new files.
* src/Makefile.am (DOMAIN_CONF_SOURCES): Build new files.
* po/POTFILES.in: Mark new file for translation.
* src/vbox/vbox_tmpl.c: Update caller.
* src/esx/esx_driver.c: Likewise.
* src/qemu/qemu_command.c: Likewise.
* src/qemu/qemu_domain.h: Likewise.
This patch adds helper functions that enable us to use libssh2 in
conjunction with libvirt's virNetSockets for the ssh transport instead
of spawning an "ssh" client process.
This implementation supports tunneled plaintext, keyboard-interactive,
private key, ssh agent based and null authentication. Libvirt's auth
callback is used for interaction with the user (keyboard-interactive
authentication, adding of host keys, private key passphrases). This
enables seamless integration into the application using libvirt. No
helpers such as "ssh-askpass" are needed.
Reading and writing of OpenSSH style "known_hosts" files is supported.
Communication is done using an SSH exec channel, where the user may
specify an arbitrary command to be executed on the remote side; reads
and writes to/from stdin/out are sent through the ssh channel. Usage
of stderr is not (yet) supported.
Move the functions that parse/format and validate PCI addresses into
their own file so they can be conveniently used in other places
besides device_conf.c.
Refactoring existing code without causing any functional changes to
prepare for new code.
This patch makes the code reusable.
Signed-off-by: Shradha Shah <sshah@solarflare.com>
The following config elements now support a <vlan> subelement:
within a domain: <interface>, and the <actual> subelement of <interface>
within a network: the toplevel, as well as any <portgroup>
Each vlan element must have one or more <tag id='n'/> subelements. If
there is more than one tag, it is assumed that vlan trunking is being
requested. If trunking is required with only a single tag, the
attribute "trunk='yes'" should be added to the toplevel <vlan>
element.
Some examples:
  <interface type='hostdev'>
    <vlan>
      <tag id='42'/>
    </vlan>
    <mac address='52:54:00:12:34:56'/>
    ...
  </interface>

  <network>
    <name>vlan-net</name>
    <vlan trunk='yes'>
      <tag id='30'/>
    </vlan>
    <virtualport type='openvswitch'/>
  </network>

  <interface type='network'>
    <source network='vlan-net'/>
    ...
  </interface>

  <network>
    <name>trunk-vlan</name>
    <vlan>
      <tag id='42'/>
      <tag id='43'/>
    </vlan>
    ...
  </network>

  <network>
    <name>multi</name>
    ...
    <portgroup name='production'>
      <vlan>
        <tag id='42'/>
      </vlan>
    </portgroup>
    <portgroup name='test'>
      <vlan>
        <tag id='666'/>
      </vlan>
    </portgroup>
  </network>

  <interface type='network'>
    <source network='multi' portgroup='test'/>
    ...
  </interface>
IMPORTANT NOTE: As of this patch there is no backend support for the
vlan element for *any* network device type. When support is added in
later patches, it will only be for those select network types that
support setting up a vlan on the host side, without the guest's
involvement. (For example, it will be possible to configure a vlan for
a guest connected to an openvswitch bridge, but it won't be possible
to do that for one that is connected to a standard Linux host bridge.)
An ESX server has one or more PhysicalNics that represent the actual
hardware NICs. Those can be listed via the interface driver.
A libvirt virtual network is mapped to a HostVirtualSwitch. On the
physical side a HostVirtualSwitch can be connected to PhysicalNics.
On the virtual side a HostVirtualSwitch has HostPortGroups that are
mapped to libvirt virtual network's portgroups. Typically there is
a HostPortGroup named 'VM Network' that is used to connect virtual
machines to a HostVirtualSwitch. A second HostPortGroup, typically
named 'Management Network', is used to connect the hypervisor itself
to the HostVirtualSwitch. This one is not mapped to a libvirt virtual
network's portgroup. There can be more HostPortGroups than those
typical two on a HostVirtualSwitch.
+---------------+-------------------+
...---| | | +-------------+
| HostPortGroup | |---| PhysicalNic |
| VM Network | | | vmnic0 |
...---| | | +-------------+
+---------------+ HostVirtualSwitch |
| vSwitch0 |
+---------------+ |
| HostPortGroup | |
...---| Management | |
| Network | |
+---------------+-------------------+
The virtual counterparts of the PhysicalNic are the HostVirtualNic for
the hypervisor and the VirtualEthernetCard for the virtual machines,
which are grouped into HostPortGroups.
+---------------------+ +---------------+---...
| VirtualEthernetCard |---| |
+---------------------+ | HostPortGroup |
+---------------------+ | VM Network |
| VirtualEthernetCard |---| |
+---------------------+ +---------------+
|
+---------------+
+---------------------+ | HostPortGroup |
| HostVirtualNic |---| Management |
+---------------------+ | Network |
+---------------+---...
The currently implemented network driver can list, define and undefine
HostVirtualSwitches including HostPortGroups for virtual machines.
Existing HostVirtualSwitches cannot be edited yet. This will be added
in a followup patch.
Parallels Cloud Server has one serious discrepancy with libvirt:
libvirt stores domain configuration files in one place, and storage
files in other places (with the API of storage pools and storage volumes).
Parallels Cloud Server stores all domain data in a single directory;
for example, you may have a domain named fedora-15, which will be
located in '/var/parallels/fedora-15.pvm', and its hard disk image will
be in '/var/parallels/fedora-15.pvm/harddisk1.hdd'.
I've decided to create a storage driver, which produces pseudo-volumes
(xml files with volume descriptions), which will be 'converted' to
real disk images after being attached to a VM.
So if someone creates a VM with one hard disk using virt-manager,
virt-manager first creates a new volume and then defines a
domain. We can look up a volume by path in the domain XML definition
and find out the location of the new domain and the size of its hard disk.
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
The Parallels driver is 'stateless', like the vmware or openvz drivers.
It collects information about domains during startup using the
command-line utility prlctl. VMs in Parallels are identified by UUIDs
or unique names, which can be used as the respective fields in the
virDomainDef structure. Currently only basic info, like the
description, number of virtual CPUs and amount of memory, is implemented.
Querying device information will be added in the next patches.
Parallels doesn't support non-persistent domains - you can't run
a domain having only a disk image; it must always be registered
in the system.
Functions for querying domain info have simply been copied from the
test driver with some changes - they extract the needed data from the
previously created list of virDomainObj objects.
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
Parallels Cloud Server is a cloud-ready virtualization
solution that allows users to simultaneously run multiple virtual
machines and containers on the same physical server.
More information can be found here: http://www.parallels.com/products/pcs/
A beta version of Parallels Cloud Server can also be downloaded there.
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
Move the code that handles the LXC monitor out of the
lxc_process.c file and into lxc_monitor.{c,h}
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Commands in the node device group are moved from virsh.c to virsh-nodedev.c.
* virsh.c: Remove commands in node device group.
* virsh-nodedev.c: New file, filled with commands in node device group
* po/POTFILES.in: Add virsh-nodedev.c
* cfg.mk: Skip the config.h inclusion check for virsh-nodedev.c
Commands in the host group are moved from virsh.c to virsh-host.c.
* virsh.c: Remove commands in host group.
* virsh-host.c: New file, filled with commands in host group
* po/POTFILES.in: Add virsh-host.c
* cfg.mk: Skip the config.h inclusion check for virsh-host.c
Commands to manage domain snapshot are moved from virsh.c to
virsh-snapshot.c.
* virsh.c: Remove domain snapshot commands.
* virsh-snapshot.c: New file, filled with domain snapshot commands.
* po/POTFILES.in: Add virsh-snapshot.c
* cfg.mk: Skip the strcase and config.h inclusion checks for
  virsh-snapshot.c
Commands to manage secret are moved from virsh.c to virsh-secret.c,
with a few helpers for secret command use.
* virsh.c: Remove secret commands and a few helpers.
(vshCommandOptSecret, and vshCommandOptSecretBy)
* virsh-secret.c: New file, filled with secret commands and its helpers.
* po/POTFILES.in: Add virsh-secret.c
* cfg.mk: Skip the config.h inclusion check for virsh-secret.c
Commands to manage network filter are moved from virsh.c to virsh-nwfilter.c,
with a few helpers for network filter command use.
* virsh.c: Remove network filter commands and a few helpers.
(vshCommandOptNWFilter, and vshCommandOptNWFilterBy)
* virsh-nwfilter.c: New file, filled with network filter commands and its helpers.
* po/POTFILES.in: Add virsh-nwfilter.c
* cfg.mk: Skip the config.h inclusion check for virsh-nwfilter.c
Commands to manage host interface are moved from virsh.c to
virsh-interface.c, with a few helpers for interface command use.
* virsh.c: Remove interface commands and a few helpers.
(vshCommandOptInterface, vshCommandOptInterfaceBy)
* virsh-interface.c: New file, filled with interface commands and
its helpers.
* cfg.mk: Skip the config.h inclusion check for virsh-interface.c
* po/POTFILES.in: Add virsh-interface.c
Commands to manage network are moved from virsh.c to virsh-network.c,
with a few helpers for network command use.
* virsh.c: Remove network commands and a few helpers.
* virsh-network.c: New file, filled with network commands and its
helpers.
* po/POTFILES.in: Add virsh-network.c
* cfg.mk: Skip the config.h inclusion check for virsh-network.c
This splits the commands of the storage pool group into virsh-pool.c.
The helpers not for common use are moved too. A standard copyright
notice is added to the new file.
* tools/virsh.c:
Remove commands for the storage pool group and a few helpers.
(vshCommandOptVol, vshCommandOptVolBy).
* tools/virsh-pool.c:
New file, filled with commands of storage pool group and its
helpers.
* po/POTFILES.in:
Add virsh-pool.c
* cfg.mk:
Skip the config.h inclusion check for virsh-pool.c
This splits the commands of the storage volume group into virsh-volume.c.
The helpers not for common use are moved too. A standard copyright
notice is added to the new file.
* tools/virsh.c:
Remove commands for the storage volume group and a few helpers.
(vshCommandOptVol, vshCommandOptVolBy).
* tools/virsh-volume.c:
New file, filled with commands of storage volume group and its
helpers.
* po/POTFILES.in:
Add virsh-volume.c
* cfg.mk:
Skip the config.h inclusion check for virsh-volume.c
This splits the commands to manage domains into virsh-domain.c. The
helpers not for common use are moved too. A standard copyright notice
is added to the new file.
* tools/virsh.c:
- Remove commands for domain group, and one helper
(vshDomainVcpuStateToString)
- vshStreamSink is moved before the commands' definitions because it's
  also used by commands not in the domain group, such as volUpload.
* tools/virsh-domain.c:
- New file, commands for domain group and the one helper are
moved into it.
* po/POTFILES.in:
- Add virsh-domain.c
* cfg.mk:
- Skip the config.h inclusion check for virsh-domain.c
This splits the commands that monitor domain status into
virsh-domain-monitor.c. The helpers not for common use are moved too.
A standard copyright notice is added.
* tools/virsh.c:
- Remove commands for domain monitoring group and a few helpers (
vshDomainIOErrorToString, vshGetDomainDescription,
vshDomainControlStateToString, vshDomainStateToString) not for
common use.
- Remove (include "intprops.h").
* tools/virsh-domain-monitor.c:
- New file, filled with commands of domain monitor group.
- Add "intprops.h".
* cfg.mk:
- Skip strcase checking for virsh-domain-monitor.c
- Skip the config.h inclusion check for virsh-domain-monitor.c
* po/POTFILES.in
- Add virsh-domain-monitor.c
Move all the code that manages stop/start of LXC processes
into a separate lxc_process.{c,h} file to make the lxc_driver.c
file smaller.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Move the cgroup setup code out of the lxc_controller.c file
and into lxc_cgroup.{c,h}. This reduces the size of the
lxc_controller.c file and paves the way to invoke cgroup
setup from lxc_driver.c instead of lxc_controller.c in the
future
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
This patch brings support for managing sheepdog pools and volumes to libvirt.
It uses the "collie" command-line utility that comes with sheepdog for that.
A sheepdog pool in libvirt maps to a sheepdog cluster.
It needs a host and port to connect to, which in most cases
is just going to be the default of localhost on port 7000.
A sheepdog volume in libvirt maps to a sheepdog vdi.
To create one specify the pool, a name and the capacity.
Volumes can also be resized later.
In the volume XML the vdi name has to be put into the <target><path>.
To use the volume as a disk source for virtual machines specify
the vdi name as "name" attribute of the <source>.
The host and port information from the pool are specified inside the host tag.
<disk type='network'>
...
<source protocol="sheepdog" name="vdi_name">
<host name="localhost" port="7000"/>
</source>
</disk>
To work right, this patch parses the output of collie,
so it relies on the raw output option. There was recently a bug which
caused size information to be reported incorrectly. This is fixed
upstream already and will be in the next release.
Signed-off-by: Sebastian Wiedenroth <wiedi@frubar.net>
The virnetdevtap.c and viruri.c files had two error
messages which were not annotated with _(...).
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Currently, we either generate some cmd*Edit commands (cmdPoolEdit
and cmdNetworkEdit) via a sed script or copy the body of cmdEdit
(e.g. cmdInterfaceEdit, cmdNWFilterEdit, etc.). This makes it
harder to implement any new feature in our editing system.
Therefore switch to a new implementation - define macros to:
- dump XML (EDIT_GET_XML)
- take an action if XML wasn't changed,
usually just vshPrint() (EDIT_NOT_CHANGED)
- define new object (EDIT_DEFINE) - the edited XML is in @doc_edited
- free object defined by EDIT_DEFINE (EDIT_FREE)
and #include "virsh-edit.c"
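As a hypothetical illustration (the network command below is just an
example), cmdNetworkEdit would reduce to defining the four macros and
including the shared body:
    #define EDIT_GET_XML virNetworkGetXMLDesc(network, flags)
    #define EDIT_NOT_CHANGED \
        vshPrint(ctl, _("Network %s XML configuration not changed.\n"), \
                 virNetworkGetName(network))
    #define EDIT_DEFINE \
        (network_edited = virNetworkDefineXML(ctl->conn, doc_edited))
    #define EDIT_FREE \
        if (network_edited) \
            virNetworkFree(network_edited)
    #include "virsh-edit.c"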
This patch adds DHCP snooping support to libvirt. The learning method for
IP addresses is specified by setting the "CTRL_IP_LEARNING" variable to one of
"any" [default] (existing IP learning code), "none" (static only addresses)
or "dhcp" (DHCP snooping).
Active leases are saved in a lease file and reloaded on restart or HUP.
The following interface XML activates and uses DHCP snooping:
<interface type='bridge'>
<source bridge='virbr0'/>
<filterref filter='clean-traffic'>
<parameter name='CTRL_IP_LEARNING' value='dhcp'/>
</filterref>
</interface>
All filters containing the variable 'IP' are automatically adjusted when
the VM receives an IP address via DHCP. However, multiple IP addresses per
interface are silently ignored in this patch, thus only supporting one IP
address per interface. Multiple IP address support is added in a later
patch in this series.
Signed-off-by: David L Stevens <dlstevens@us.ibm.com>
Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
To ensure consistent error reporting of invalid arguments,
provide a number of predefined helper methods & macros.
- An arg which must not be NULL:
virCheckNonNullArgReturn(argname, retvalue)
virCheckNonNullArgGoto(argname, label)
- An arg which must be NULL
virCheckNullArgGoto(argname, label)
- An arg which must be positive (ie 1 or greater)
virCheckPositiveArgGoto(argname, label)
- An arg which must not be 0
virCheckNonZeroArgGoto(argname, label)
- An arg which must be zero
virCheckZeroArgGoto(argname, label)
- An arg which must not be negative (ie 0 or greater)
virCheckNonNegativeArgGoto(argname, label)
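For example, a hypothetical entry point would validate its arguments
like this (sketch):
    int
    virDomainDoSomething(virDomainPtr dom, const char *name, int count)
    {
        virCheckNonNullArgGoto(name, error);    /* name must not be NULL */
        virCheckPositiveArgGoto(count, error);  /* count must be 1 or greater */
        ...
    error:
        return -1;
    }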
* src/libvirt.c, src/libvirt-qemu.c,
src/nodeinfo.c, src/datatypes.c: Update to use
virCheckXXXX macros
* po/POTFILES.in: Add libvirt-qemu.c and virterror_internal.h
* src/internal.h: Define macros for checking invalid args
* src/util/virterror_internal.h: Define macros for reporting
invalid args
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
This patch adds support for a new storage backend with RBD support.
RBD is the RADOS Block Device and is part of the Ceph distributed storage
system.
It comes in two flavours: Qemu-RBD and Kernel RBD. This storage backend
only supports Qemu-RBD, thus limiting the use of this storage driver to
Qemu only. To function, this backend relies on librbd and librados being
present on the local system.
The backend also supports Cephx authentication for safe authentication with
the Ceph cluster.
For storing credentials it uses the built-in secret mechanism of libvirt.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
The HAL device code requires a DBus connection, and further requires
that the DBus connection is integrated with the event loop; it
provides such glue logic itself.
The forthcoming FirewallD integration also requires a
dbus connection with event loop integration. Thus we need
to pull the current event loop glue out of the HAL driver.
To do this, create the src/util/virdbus.{c,h} files. These contain
just one method, virDBusGetSystemBus(), which obtains a handle
to the single shared system bus instance, with the event glue
automagically set up.
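A caller then needs nothing more than (a sketch; the DBusConnection
return type is an assumption):
    DBusConnection *sysbus = virDBusGetSystemBus();
    if (!sysbus)
        return -1; /* error already reported */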
* configure.ac docs/news.html.in libvirt.spec.in: update for the release
* po/*.po*: updated translations for a number of languages, including
  new Indian languages, and regenerated
To follow latest naming conventions, rename src/util/authhelper.[ch]
to src/util/virauth.[ch].
* src/util/authhelper.[ch]: Rename to src/util/virauth.[ch]
* src/esx/esx_driver.c, src/hyperv/hyperv_driver.c,
src/phyp/phyp_driver.c, src/xenapi/xenapi_driver.c: Update
for renamed include files
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The '.ini' file format is a useful alternative to the existing
config file style, when you need to have config files which
are hashes of hashes. The 'virKeyFilePtr' object provides a
way to parse these file types.
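For illustration, given a hypothetical file like:
    [server]
    port = 8000
parsing would look something along these lines (a sketch; the exact
function signatures are assumptions):
    virKeyFilePtr keyfile = virKeyFileNew();
    virKeyFileLoadFile(keyfile, "/etc/sample.ini");
    const char *port = virKeyFileGetValueString(keyfile, "server", "port");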
* src/Makefile.am, src/util/virkeyfile.c,
src/util/virkeyfile.h: Add .ini file parser
* tests/Makefile.am, tests/virkeyfiletest.c: Test
basic parsing capabilities
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
This patch adds a set of functions used in creating console streams for
domains using PTYs and ensures mutually exclusive access to the PTYs.
If mutually exclusive access is not used, two clients may open the same
console, which results in corruption on both clients as both of them
race to read data from the PTY.
Two approaches are used to ensure this:
1) Internal data structure holding open PTYs.
This is used internally and enables the user to forcibly
terminate another console connection eg. when somebody leaves
the console open on another host.
2) UUCP style lock files:
This uses UUCP lock files according to the FHS
( http://www.pathname.com/fhs/pub/fhs-2.3.html#VARLOCKLOCKFILES )
to check if other programs (like minicom) are not using the pty
device of the console.
This feature is disabled by default and may be enabled using the
configure parameter
--with-console-lock-files=/path/to/lock/file/directory
or --with-console-lock-files=auto (which tries to infer the
location from the OS used; currently only Linux).
On usual Linux systems, normal users may not write to the
/var/lock directory containing the locks. This poses problems
while in session mode. If the current user has no access to the
lockfile directory, the check for the presence of the file is still
done, but no lock file is created. This does NOT result in an
error.
This patch allows libvirt to add interfaces to already
existing Open vSwitch bridges. The following syntax in the
domain XML file can be used:
<interface type='bridge'>
<mac address='52:54:00:d0:3f:f2'/>
<source bridge='ovsbr'/>
<virtualport type='openvswitch'>
<parameters interfaceid='921a80cd-e6de-5a2e-db9c-ab27f15a6e1d'/>
</virtualport>
<address type='pci' domain='0x0000' bus='0x00'
slot='0x03' function='0x0'/>
</interface>
or, if libvirt should auto-generate the interfaceid, use the
following syntax:
<interface type='bridge'>
<mac address='52:54:00:d0:3f:f2'/>
<source bridge='ovsbr'/>
<virtualport type='openvswitch'>
</virtualport>
<address type='pci' domain='0x0000' bus='0x00'
slot='0x03' function='0x0'/>
</interface>
It is also possible to pass an optional profileid. To do that,
use the following syntax:
<interface type='bridge'>
<source bridge='ovsbr'/>
<mac address='00:55:1a:65:a2:8d'/>
<virtualport type='openvswitch'>
<parameters interfaceid='921a80cd-e6de-5a2e-db9c-ab27f15a6e1d'
profileid='test-profile'/>
</virtualport>
</interface>
To create an Open vSwitch bridge, install Open vSwitch and
run the following command:
ovs-vsctl add-br ovsbr
The auto-generated WWNs comply with the new WWN addressing schema:
<quote>
the first nibble is either hex 5 or 6 followed by a 3-byte vendor
identifier and 36 bits for a vendor-specified serial number.
</quote>
We choose hex 5 for the first nibble. For the 3-byte vendor ID,
we use the OUI according to the underlying hypervisor type (invoking
virConnectGetType to get the virt type). E.g. if virConnectGetType
returns "QEMU", we use Qumranet's OUI (00:1A:4A); if it returns
ESX|VMWARE, we use VMware's OUI (00:05:69). Currently it only
supports the qemu|xen|libxl|xenapi|hyperv|esx|vmware drivers. The last
36 bits are auto-generated.
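Conceptually the bit layout is (an illustrative sketch, not the code
from this patch):
    /* WWN = 4-bit nibble | 24-bit OUI | 36-bit serial */
    uint64_t wwn = 0x5ULL << 60;        /* first nibble: hex 5 */
    wwn |= 0x001A4AULL << 36;           /* e.g. Qumranet's OUI for QEMU */
    wwn |= serial & 0xFFFFFFFFFULL;     /* auto-generated 36-bit serial */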
Rename the src/util/netlink files to src/util/virnetlink to
better fit the naming scheme. Also rename nlComm to virNetlinkCommand.
Signed-off-by: D. Herrendoerfer <d.herrendoerfer@herrendoerfer.name>
Currently security labels can be of type 'dynamic' or 'static'.
If no security label is given, then 'dynamic' is assumed. The
current code takes advantage of this default, and avoids even
saving <seclabel> elements with type='dynamic' to disk. This
means if you temporarily change security driver, the guests
can all still start.
With the introduction of sVirt to LXC though, there needs to be
a new default of 'none' to allow unconfined LXC containers.
This patch introduces two new security label types
- default: the host configuration decides whether to run the
guest with type 'none' or 'dynamic' at guest start
- none: the guest will run unconfined by security policy
The 'none' label type will obviously be undesirable for some
deployments, so a new qemu.conf option allows a host admin to
mandate confined guests. It is also possible to turn off default
confinement
security_default_confined = 1|0 (default == 1)
security_require_confined = 1|0 (default == 0)
* src/conf/domain_conf.c, src/conf/domain_conf.h: Add new
seclabel types
* src/security/security_manager.c, src/security/security_manager.h:
Set default sec label types
* src/security/security_selinux.c: Handle 'none' seclabel type
* src/qemu/qemu.conf, src/qemu/qemu_conf.c, src/qemu/qemu_conf.h,
src/qemu/libvirtd_qemu.aug: New security config options
* src/qemu/qemu_driver.c: Tell security driver about default
config
To assist people in verifying that their host is operating in an
optimal manner, provide a 'virt-host-validate' command. For each
type of hypervisor, it will check any prerequisites and other
recommended settings, and report what's working & what is not,
eg
# virt-host-validate
QEMU: Checking for device /dev/kvm : FAIL (Check that the 'kvm-intel' or 'kvm-amd' modules are loaded & the BIOS has enabled virtualization)
QEMU: Checking for device /dev/vhost : WARN (Load the 'vhost_net' module to improve performance of virtio networking)
QEMU: Checking for device /dev/net/tun : PASS
LXC: Checking for Linux >= 2.6.26 : PASS
This warns people if they have vmx/svm, but don't have /dev/kvm. It
also warns about a missing /dev/vhost-net.
In preparation for the patch to include MurmurHash3, which
introduces the virhashcode.h and virhashcode.c files, rename
the existing hash.h and hash.c to virhash.h and virhash.c
respectively.
There is now a standard QEMU guest agent that can be installed
and given a virtio serial channel
<channel type='unix'>
<source mode='bind' path='/var/lib/libvirt/qemu/f16x86_64.agent'/>
<target type='virtio' name='org.qemu.guest_agent.0'/>
</channel>
The protocol that runs over the guest agent is JSON based and
very similar to the JSON monitor. We can't use exactly the same
code because there are some odd differences in the way messages
and errors are structured. The qemu_agent.c file is based on
a combination and simplification of qemu_monitor.c and
qemu_monitor_json.c
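For example, a shutdown request sent over this channel looks like
(illustrative; 'mode' may also be "halt" or "reboot"):
    {"execute": "guest-shutdown", "arguments": {"mode": "powerdown"}}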
* src/qemu/qemu_agent.c, src/qemu/qemu_agent.h: Support for
talking to the agent for shutdown
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: Add thread
helpers for talking to the agent
* src/qemu/qemu_process.c: Connect to agent whenever starting
a guest
* src/qemu/qemu_monitor_json.c: Make variable static
Preparation for another patch that refactors common patterns
into the new file for fewer lines of code overall.
* src/util/util.h (virTypedParameterArrayClear): Move...
* src/util/virtypedparam.h: ...to new file.
(virTypedParameterArrayValidate, virTypedParameterAssign): New
prototypes.
* src/util/util.c (virTypedParameterArrayClear): Likewise.
* src/util/virtypedparam.c: New file.
* po/POTFILES.in: Mark file for translation.
* src/Makefile.am (UTIL_SOURCES): Build it.
* src/libvirt_private.syms (util.h): Split...
(virtypedparam.h): to new section.
(virkeycode.h): Sort.
* daemon/remote.c: Adjust callers.
* tools/virsh.c: Likewise.
The logging APIs need to be able to generate formatted timestamps
using only async signal safe functions. This rules out using
gmtime/localtime/malloc/gettimeofday(!) and much more.
Introduce a new internal API which is async signal safe.
virTimeMillisNowRaw replacement for gettimeofday. Uses clock_gettime
where available, otherwise falls back to the unsafe
gettimeofday
virTimeFieldsNowRaw replacements for gmtime(), convert a timestamp
virTimeFieldsThenRaw into a broken out set of fields. No localtime()
replacement is provided, because converting to
local time is not practical with only async signal
safe APIs.
virTimeStringNowRaw replacements for strftime() which print a timestamp
virTimeStringThenRaw into a string, using a pre-determined format, with
a fixed size buffer (VIR_TIME_STRING_BUFLEN)
For each of these there is also a version without the Raw postfix
which raises a full libvirt error. These versions are not async
signal safe
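A typical use in the logging code would be (sketch):
    char timestamp[VIR_TIME_STRING_BUFLEN];
    if (virTimeStringNowRaw(timestamp) < 0)
        ...; /* cannot raise an error here; fall back to an empty stamp */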
* src/Makefile.am, src/util/virtime.c, src/util/virtime.h: New files
* src/libvirt_private.syms: New APis
* configure.ac: Check for clock_gettime in -lrt
* tests/virtimetest.c, tests/Makefile.am: Test new APIs
Add the core functions that implement the functionality of the API.
Suspend is done by using an asynchronous mechanism so that we can return
the status to the caller before the host gets suspended. This asynchronous
operation is achieved by suspending the host in a separate thread of
execution. However, returning the status to the caller is only
best-effort, not guaranteed.
To resume the host, an RTC alarm is set up (based on how long we want to
suspend) before suspending the host. When this alarm fires, the host
gets woken up.
A Suspend-to-RAM operation on a host running Linux can take up to 20
seconds or more, depending on the load of the system. (Freezing of tasks,
an operation preceding any suspend operation, is given up after a 20
second timeout.) And Suspend-to-Disk can take even more time, considering
the time required for compaction, creating the memory image, writing it
to disk etc.
So, we do not allow the user to specify a suspend duration of less than
60 seconds, to be on the safe side, since we don't want to prematurely
declare failure when we only had to wait a little longer.
The original patch for commit 4789fb2 considered renaming a file,
then backed out the name change, but forgot to back out the POTFILES.in
change, resulting in 'make syntax-check' failure.
This patch adds support for a systemd init service for libvirtd
and libvirt-guests. The libvirtd.service is *not* written to use
socket activation, since we want libvirtd to start on boot so it
can do guest auto-start.
The libvirt-guests.service is pretty lame, just exec'ing the
original init script for now. Ideally we would factor out the
functionality, into some shared tool.
Instead of
./configure --with-init-script=redhat
You can now do
./configure --with-init-script=systemd
Or better still:
./configure --with-init-script=systemd+redhat
We can also now support install of the upstart init script
* configure.ac: Add systemd, and systemd+redhat options to
--with-init-script option
* daemon/Makefile.am: Install systemd services
* daemon/libvirtd.sysconf: Add note about unused env variable
with systemd
* daemon/libvirtd.service.in: libvirtd systemd service unit
* libvirt.spec.in: Add scripts to installing systemd services
and migrating from legacy init scripts
* tools/Makefile.am: Install systemd services
* tools/libvirt-guests.init.sh: Rename to tools/libvirt-guests.init.in
* tools/libvirt-guests.service.in: systemd service unit
Move the ifaceMacvtapLinkDump and ifaceGetNthParent functions
into virnetdevvportprofile.c since they are specific to that
code. This avoids polluting the headers with the Linux specific
netlink data types
* src/util/interface.c, src/util/interface.h: Move
ifaceMacvtapLinkDump and ifaceGetNthParent functions and delete
remaining file
* src/util/virnetdevvportprofile.c: Add ifaceMacvtapLinkDump
and ifaceGetNthParent functions
* src/network/bridge_driver.c, src/nwfilter/nwfilter_gentech_driver.c,
src/nwfilter/nwfilter_learnipaddr.c, src/util/virnetdevmacvlan.c:
Remove include of interface.h
Rename the macvtap.c file to virnetdevmacvlan.c to reflect its
functionality. Move the port profile association code out into
virnetdevvportprofile.c. Make the APIs available unconditionally
to callers
* src/util/macvtap.h: rename to src/util/virnetdevmacvlan.h,
* src/util/macvtap.c: rename to src/util/virnetdevmacvlan.c
* src/util/virnetdevvportprofile.c, src/util/virnetdevvportprofile.h:
Pull in vport association code
* src/Makefile.am, src/conf/domain_conf.h, src/qemu/qemu_conf.c,
src/qemu/qemu_conf.h, src/qemu/qemu_driver.c: Update include
paths & remove conditional compilation
The src/lxc/veth.c file contains APIs for managing veth devices,
but some of the APIs duplicate stuff from src/util/virnetdev.h.
Delete the duplicate APIs and rename the remaining ones to
follow the virNetDevVethXXXX naming.
* src/lxc/veth.c, src/lxc/veth.h: Rename APIs & delete duplicates
* src/lxc/lxc_container.c, src/lxc/lxc_controller.c,
src/lxc/lxc_driver.c: Update for API renaming
The src/util/network.c file is a dumping ground for many different
APIs. Split it up into 5 pieces, along functional lines
- src/util/virnetdevbandwidth.c: virNetDevBandwidth type & helper APIs
- src/util/virnetdevvportprofile.c: virNetDevVPortProfile type & helper APIs
- src/util/virsocketaddr.c: virSocketAddr and APIs
- src/conf/netdev_bandwidth_conf.c: XML parsing / formatting
for virNetDevBandwidth
- src/conf/netdev_vport_profile_conf.c: XML parsing / formatting
for virNetDevVPortProfile
* src/util/network.c, src/util/network.h: Split into 5 pieces
* src/conf/netdev_bandwidth_conf.c, src/conf/netdev_bandwidth_conf.h,
src/conf/netdev_vport_profile_conf.c, src/conf/netdev_vport_profile_conf.h,
src/util/virnetdevbandwidth.c, src/util/virnetdevbandwidth.h,
src/util/virnetdevvportprofile.c, src/util/virnetdevvportprofile.h,
src/util/virsocketaddr.c, src/util/virsocketaddr.h: New pieces
* daemon/libvirtd.h, daemon/remote.c, src/conf/domain_conf.c,
src/conf/domain_conf.h, src/conf/network_conf.c,
src/conf/network_conf.h, src/conf/nwfilter_conf.h,
src/esx/esx_util.h, src/network/bridge_driver.c,
src/qemu/qemu_conf.c, src/rpc/virnetsocket.c,
src/rpc/virnetsocket.h, src/util/dnsmasq.h, src/util/interface.h,
src/util/iptables.h, src/util/macvtap.c, src/util/macvtap.h,
src/util/virnetdev.h, src/util/virnetdevtap.c,
tools/virsh.c: Update include files
Following the renaming of the bridge management APIs, we can now
split the source file into 3 corresponding pieces
* src/util/virnetdev.c: APIs for any type of network interface
* src/util/virnetdevbridge.c: APIs for bridge interfaces
* src/util/virnetdevtap.c: APIs for TAP interfaces
* src/util/virnetdev.c, src/util/virnetdev.h,
src/util/virnetdevbridge.c, src/util/virnetdevbridge.h,
src/util/virnetdevtap.c, src/util/virnetdevtap.h: Copied
from bridge.{c,h}
* src/util/bridge.c, src/util/bridge.h: Split into 3 pieces
* src/lxc/lxc_driver.c, src/network/bridge_driver.c,
src/openvz/openvz_driver.c, src/qemu/qemu_command.c,
src/qemu/qemu_conf.h, src/uml/uml_conf.c, src/uml/uml_conf.h,
src/uml/uml_driver.c: Update #include directives
Currently every caller of the brXXX APIs has to store the returned
errno value and then raise an error message. This results in
inconsistent error messages across drivers, places an additional burden
on the callers, and makes the error reporting inaccurate since it is
hard to distinguish different scenarios from a single errno value.
* src/util/bridge.c: Raise errors instead of returning errnos
* src/lxc/lxc_driver.c, src/network/bridge_driver.c,
src/qemu/qemu_command.c, src/uml/uml_conf.c,
src/uml/uml_driver.c: Remove error reporting code
Domain listing, basic information retrieval and domain life cycle
management are implemented. But currently the domain XML output
lacks the complete devices section.
The driver uses OpenWSMAN to directly communicate with a Hyper-V
server over its WS-Management interface exposed via Microsoft WinRM.
The driver is based on the work of Michael Sievers. This started in
the same master program project group at the University of Paderborn
as the ESX driver.
See Michael's blog for details: http://hyperv4libvirt.wordpress.com/
Add a generator script to generate the structs and serialization
information for OpenWSMAN.
openwsman.h collects workarounds for problems in OpenWSMAN <= 2.2.6.
There are also disabled sections that would use ws_serializer_free_mem
but can't because it's broken in OpenWSMAN <= 2.2.6. Patches to fix
this have been posted upstream.
In daemons using pidfiles to protect against concurrent
execution there is a possibility that a crash may leave a stale
pidfile on disk, which then prevents later restart of the daemon.
To avoid this problem, introduce a pair of APIs which make
use of virFileLock to ensure crash-safe & race condition-safe
pidfile acquisition & release.
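Illustrative usage (a sketch; the exact parameters are assumptions):
    int pidfd = virPidFileAcquire(LOCALSTATEDIR "/run", "mydaemon", getpid());
    if (pidfd < 0)
        return -1; /* pidfile held by a live process */
    /* ... daemon runs; the fcntl() lock ensures a crash leaves no
     * blocking stale file behind ... */
    virPidFileRelease(LOCALSTATEDIR "/run", "mydaemon", pidfd);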
* src/libvirt_private.syms, src/util/virpidfile.c,
src/util/virpidfile.h: Add virPidFileAcquire and virPidFileRelease
* configure.ac docs/news.html.in libvirt.spec.in: updates for new
release
* po/*.po*: pulled translations from the transifex teams and regenerated
localizations
O_DIRECT has stringent requirements. Rather than make lots of changes
at each site that wants to use O_DIRECT, it is easier to offload
the work through a helper process that mirrors the I/O between a
pipe and the actual direct fd, so that the other end of the pipe
no longer has to worry about constraints.
Plus, if the kernel ever gains better posix_fadvise support, then we
only have to touch a single file to let all callers benefit from a
more efficient way to avoid file system caching.
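Expected usage is along these lines (a sketch based on the prototypes
listed below; exact signatures are assumptions):
    int fd = open(path, O_CREAT | O_WRONLY | virFileDirectFdFlag(), 0600);
    virFileDirectFdPtr dfd = virFileDirectFdNew(&fd, path);
    /* fd has been swapped for a pipe; a helper process mirrors the
     * data onto the real O_DIRECT fd */
    ... write to fd as usual ...
    if (virFileDirectFdClose(dfd) < 0)
        ...; /* reports I/O errors from the helper */
    virFileDirectFdFree(dfd);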
* src/util/virfile.h (virFileDirectFdFlag, virFileDirectFdNew)
(virFileDirectFdClose, virFileDirectFdFree): New prototypes.
* src/util/virfile.c: Implement new wrapper object.
* src/libvirt_private.syms (virfile.h): Export new symbols.
* cfg.mk (useless_free_options): Add to list.
* po/POTFILES.in: Add new translations.
This tweaks the RPC generator to cope with some naming
conventions used for the QEMU specific APIs
* daemon/remote.c: Server side dispatcher
* src/remote/remote_driver.c: Client side dispatcher
* src/remote/qemu_protocol.x: Wire protocol definition
* src/rpc/gendispatch.pl: Use '$structprefix' in method
names, fix QEMU flags and fix dispatcher method names
The last patch was incomplete. The translated strings merely
moved between generated file names, rather than disappearing.
* cfg.mk (generated_files): Update generated file names.
* po/POTFILES.in: Add remote_dispatch.h
This guts the libvirtd daemon, removing all its networking and
RPC handling code. Instead it calls out to the new virNetServerPtr
APIs for all its RPC & networking work
As a fallout all libvirtd daemon error reporting now takes place
via the normal internal error reporting APIs. There is no need
to call separate error reporting APIs in RPC code, nor should
code use VIR_WARN/VIR_ERROR for reporting fatal problems anymore.
* daemon/qemu_dispatch_*.h, daemon/remote_dispatch_*.h: Remove
old generated dispatcher code
* daemon/qemu_dispatch.h, daemon/remote_dispatch.h: New dispatch
code
* daemon/dispatch.c, daemon/dispatch.h: Remove obsoleted code
* daemon/remote.c, daemon/remote.h: Rewrite for new dispatch
APIs
* daemon/libvirtd.c, daemon/libvirtd.h: Remove all networking
code
* daemon/stream.c, daemon/stream.h: Update for new APIs
* daemon/Makefile.am: Link to libvirt-net-rpc-server.la
To facilitate creation of new clients using XDR RPC services,
pull a lot of the remote driver code into a set of reusable
objects.
- virNetClient: Encapsulates a socket connection to a
remote RPC server. Handles all the network I/O for
reading/writing RPC messages. Delegates RPC encoding
and decoding to the registered programs
- virNetClientProgram: Handles processing and dispatch
of RPC messages for a single RPC (program,version).
A program can register to receive async events
from a client
- virNetClientStream: Handles generic I/O stream
integration to RPC layer
Each new client program now merely needs to define the list of
RPC procedures & events it wants and their handlers. It does
not need to deal with any of the network I/O functionality at
all.
Allow RPC servers to advertise themselves using MDNS,
via Avahi
* src/rpc/virnetserver.c, src/rpc/virnetserver.h: Allow
registration of MDNS services via avahi
* src/rpc/virnetserverservice.c, src/rpc/virnetserverservice.h: Add
API to fetch the listen port number
* src/rpc/virnetsocket.c, src/rpc/virnetsocket.h: Add API to
fetch the local port number
* src/rpc/virnetservermdns.c, src/rpc/virnetservermdns.h: Represent
an MDNS advertisement