This adds support for host device passthrough with the
LXC driver. Since there is only a single kernel image,
it doesn't make sense to pass through PCI devices, but
USB devices are fine. For the latter we merely need to
make the /dev/bus/usb/NNN/MMM character device exist
in the container's /dev.
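As a rough illustration of what that amounts to, here is a minimal sketch assuming
the usual Linux convention that /dev/bus/usb/BBB/DDD is a character device with
major 189 and minor (BBB - 1) * 128 + (DDD - 1); the function name, permissions
and error handling are illustrative only, not the driver's actual code:

    /* Hypothetical sketch: create the USB character device node inside
     * the container's private /dev tree. */
    #include <limits.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/sysmacros.h>
    #include <sys/types.h>

    static int
    exampleMakeUSBDeviceNode(const char *containerDev, int bus, int dev)
    {
        char path[PATH_MAX];
        /* assumption: /dev/bus/usb devices use char major 189 */
        dev_t rdev = makedev(189, (bus - 1) * 128 + (dev - 1));

        snprintf(path, sizeof(path), "%s/bus/usb/%03d/%03d",
                 containerDev, bus, dev);
        /* parent directories are assumed to exist already */
        return mknod(path, S_IFCHR | 0644, rdev);
    }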
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
This adds a 'lockd' lock driver which is just a client which
talks to the lockd daemon to perform all locking. This will
be the default lock driver for any hypervisor which needs one.
* src/Makefile.am: Add lockd.so plugin
* src/locking/lock_driver_lockd.c: Lockd driver impl
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Introduce a lock_daemon_dispatch.c file which implements the
server-side dispatcher for the RPC APIs previously defined in the
lock protocol.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The virtlockd daemon will maintain locks on behalf of libvirtd.
There are two reasons for it to be separate
- Avoid risk of other libvirtd threads accidentally
releasing fcntl() locks by opening + closing a file
that is locked
- Ensure locks can be preserved across libvirtd restarts.
virtlockd will need to be able to re-exec itself while
maintaining locks. This is simpler to achieve if its
sole job is maintaining locks
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Most of this deals with moving the libvirt-guests.sh script which
does all the work to /usr/libexec, so it can be shared by both
systemd and traditional init. Previously systemd depended on
the script being in /etc/init.d
Required to fix https://bugzilla.redhat.com/show_bug.cgi?id=789747
These set the bridge part of QoS when bringing a domain's interface up.
Long story short, if there's a 'floor' set, a new QoS class is created.
The class ID MUST be unique within the bridge and should be kept for
the unplug phase.
Parallels Cloud Server uses a virtual network model for network
configuration. It uses its own tools for virtual network management.
So add a network driver, which will be responsible for listing
virtual networks and performing different operations on them
(in subsequent patches).
This patch only allows listing virtual network names, without
any parameters like DHCP server settings.
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
The patch adds the backend driver to support iSCSI format storage pools
and volumes for ESX host. The mapping of ESX iSCSI specifics to Libvirt
is as follows:
1. ESX static iSCSI target <------> Libvirt Storage Pools
2. ESX iSCSI LUNs <------> Libvirt Storage Volumes.
The above understanding is based on http://libvirt.org/storage.html.
The operations supported on iSCSI pools include:
1. List storage pools & volumes.
2. Get XML descriptor operation on pools & volumes.
3. Lookup operation on pools & volumes by name, UUID and path (if applicable).
iSCSI pools do not support operations such as creating / removing pools
and volumes.
To be able to do a controlled shutdown/reboot of containers, an
API to talk to init via /dev/initctl is required. Fortunately
this is quite straightforward to implement, and is supported
by both sysvinit and systemd. Upstart support for /dev/initctl
is unclear.
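For illustration, a hedged sketch of the wire format involved, written from
memory of sysvinit's initreq.h; the constants, struct layout and 384-byte
record size are assumptions to be checked against the real header, not a
faithful copy of libvirt's implementation:

    #include <fcntl.h>
    #include <unistd.h>

    #define EX_INIT_MAGIC      0x03091969   /* assumed INIT_MAGIC value */
    #define EX_INIT_CMD_RUNLVL 1            /* assumed "change runlevel" cmd */

    struct example_init_request {
        int  magic;             /* EX_INIT_MAGIC */
        int  cmd;               /* EX_INIT_CMD_RUNLVL */
        int  runlevel;          /* ASCII runlevel character */
        int  sleeptime;
        char padding[368];      /* pad the record to its fixed size */
    };

    static int
    exampleRequestRunLevel(char runlevel)   /* '0' = shutdown, '6' = reboot */
    {
        struct example_init_request req = {
            .magic = EX_INIT_MAGIC,
            .cmd = EX_INIT_CMD_RUNLVL,
            .runlevel = runlevel,
        };
        int fd = open("/dev/initctl", O_WRONLY);

        if (fd < 0)
            return -1;
        if (write(fd, &req, sizeof(req)) != sizeof(req)) {
            close(fd);
            return -1;
        }
        close(fd);
        return 0;
    }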
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
This patch adds FUSE support for libvirt LXC.
We can use a FUSE filesystem to generate sysinfo dynamically,
so we can isolate /proc/meminfo, /proc/cpuinfo and so on through
the FUSE filesystem.
We mount a FUSE filesystem for every container.
The mount name is libvirt, and the mount point is
localstatedir/run/libvirt/lxc/containername.
Signed-off-by: Gao feng <gaofeng@cn.fujitsu.com>
The patch refactors the current ESX storage driver for the following reasons:
1. Given that most of the public APIs exposed by the storage driver in Libvirt
remain the same, the ESX storage driver should not implement logic specific
to only one supported format (the current implementation only supports VMFS).
2. Decoupling interface from specific storage implementation gives us an
extensible design to hook implementation for other supported storage
formats.
This patch refactors the current driver to implement it as a facade pattern i.e.
the driver exposes all the public libvirt APIs, but uses backend drivers to get
the required task done. The backend drivers provide implementation specific to
the type of storage device.
File changes:
------------------
esx_storage_driver.c ----> esx_storage_driver.c (base storage driver)
                               |
                               |---> esx_storage_backend_vmfs.c (VMFS backend)
* configure.ac docs/news.html.in libvirt.spec.in: update for the new release
* po/*.po*: updated from transifex, with a lot of added support (e.g. Indian
languages), and regenerated
Currently, the CPU model driver is not implemented for PowerPC.
The host's CPU information sometimes needs to be exposed to the
guest's XML.
This patch implements the callback functions of the CPU model driver.
Signed-off-by: Li Zhang <zhlcindy@linux.vnet.ibm.com>
Acked-by: Michal Privoznik <mprivozn@redhat.com>
Add two new APIs virNetServerServiceNewPostExecRestart and
virNetServerServicePreExecRestart which allow a virNetServerServicePtr
object to be created from a JSON object and saved to a
JSON object, for the purpose of re-exec'ing a process.
This includes serialization of the listening sockets associated
with the service
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The previously introduced virFile{Lock,Unlock} APIs provide a
way to acquire/release fcntl() locks on individual files. For
unknown reasons though, the POSIX spec says that fcntl() locks
are released when *any* file handle referring to the same path
is closed. In the following sequence
threadA: fd1 = open("foo")
threadB: fd2 = open("foo")
threadA: virFileLock(fd1)
threadB: virFileLock(fd2)
threadB: close(fd2)
you'd expect threadA to come out holding a lock on 'foo', and
indeed it does hold a lock for a very short time. Unfortunately
when threadB does close(fd2) this releases the lock associated
with fd1. For the current libvirt use case for virFileLock -
pidfiles - this doesn't matter since the lock is acquired
at startup while single threaded and never released until
exit.
To provide a more generally useful API though, it is necessary
to introduce a slightly higher level abstraction, which is to
be referred to as a "lockspace". This is to be provided by
a virLockSpacePtr object in src/util/virlockspace.{c,h}. The
core idea is that the lockspace keeps track of what files are
already open+locked. This means that when a 2nd thread comes
along and tries to acquire a lock, it doesn't end up opening
and closing a new FD. The lockspace just checks the current
list of held locks and immediately returns VIR_ERR_RESOURCE_BUSY.
NB, the API as it stands is designed on the basis that the
files being locked are not being otherwise opened and used
by the application code. One approach to using this API is to
acquire locks based on a hash of the filepath.
eg to lock /var/lib/libvirt/images/foo.img the application
might do
virLockSpacePtr lockspace = virLockSpaceNew("/var/lib/libvirt/imagelocks");
lockname = md5sum("/var/lib/libvirt/images/foo.img");
virLockSpaceAcquireLock(lockspace, lockname);
NB, in this example, the caller should ensure that the path
is canonicalized before calculating the checksum.
It is also possible to do locks directly on resources by
using a NULL lockspace directory and then using the file
path as the lock name eg
virLockSpacePtr lockspace = virLockSpaceNew(NULL);
virLockSpaceAcquireLock(lockspace, "/var/lib/libvirt/images/foo.img");
This is only safe to do though if no other part of the process
will be opening the files. This will be the case when this
code is used inside the soon-to-be-reposted virlockd daemon
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
While the changes to sanlock driver should be stable, the actual
implementation of sanlock_helper is supposed to be replaced in the
future. However, before we can implement a better sanlock_helper, we
need an administrative interface to libvirtd so that the helper can just
pass a "leases lost" event to the particular libvirt driver and
everything else will be taken care of internally. This approach will
also allow libvirt to pass such event to applications and use
appropriate reasons when changing domain states.
The temporary implementation handles all actions directly by calling
appropriate libvirt APIs (which among other things means that it needs
to know the credentials required to connect to libvirtd).
Add a read-only udev based backend for virInterface. Useful for distros
that do not have netcf support yet. Multiple libvirt based utilities use
a HAL based fallback when virInterface is not available, which is
less than ideal. This implements:
* virConnectNumOfInterfaces()
* virConnectListInterfaces()
* virConnectNumOfDefinedInterfaces()
* virConnectListDefinedInterfaces()
* virConnectListAllInterfaces()
* virInterfaceLookupByName()
* virInterfaceLookupByMACString()
Continue consolidation of process functions by moving some
helpers out of command.{c,h} into virprocess.{c,h}
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Based exclusively on work by Eric Blake in a patch posted with the same
subject, with some modifications related to comments and my plans to
add another backend.
WITH_INTERFACE is added as the only automake variable deciding whether
to build the driver, while WITH_NETCF identifies that we want to
use the netcf library as the backend.
* configure.ac: Added with_interface
* src/interface/netcf_driver.c: Renamed..
* src/interface/interface_backend_netcf.c: ..to this to match storage.
* src/interface/netcf_driver.h: Renamed..
* src/interface/interface_driver.h: ..to this.
* daemon/Makefile.am: Respect WITH_INTERFACE and WITH_NETCF.
* libvirt.spec.in: Add RPM support for --with-interface
This has several benefits:
1. Future snapshot-related code has a definite place to go (and I
_will_ be adding some)
2. Snapshot errors now use the VIR_FROM_DOMAIN_SNAPSHOT error
classification, which has been underutilized (previously only in
libvirt.c)
* src/conf/domain_conf.h, domain_conf.c: Split...
* src/conf/snapshot_conf.h, snapshot_conf.c: ...into new files.
* src/Makefile.am (DOMAIN_CONF_SOURCES): Build new files.
* po/POTFILES.in: Mark new file for translation.
* src/vbox/vbox_tmpl.c: Update caller.
* src/esx/esx_driver.c: Likewise.
* src/qemu/qemu_command.c: Likewise.
* src/qemu/qemu_domain.h: Likewise.
This patch adds helper functions that enable us to use libssh2 in
conjunction with libvirt's virNetSockets for the ssh transport instead of
spawning an "ssh" client process.
This implementation supports tunneled plaintext, keyboard-interactive,
private key, ssh-agent based and null authentication. Libvirt's auth
callback is used for interaction with the user (keyboard-interactive
authentication, adding of host keys, private key passphrases). This
enables seamless integration into the application using libvirt. No
helpers such as "ssh-askpass" are needed.
Reading and writing of OpenSSH style "known_hosts" files is supported.
Communication is done using an SSH exec channel, where the user may specify
an arbitrary command to be executed on the remote side; reads and writes
to/from stdin/stdout are sent through the ssh channel. Usage of stderr is
not (yet) supported.
Move the functions that parse/format and validate PCI addresses to
their own file so they can be conveniently used in other places
besides device_conf.c
Refactoring existing code without causing any functional changes to
prepare for new code.
This patch makes the code reusable.
Signed-off-by: Shradha Shah <sshah@solarflare.com>
The following config elements now support a <vlan> subelement:
within a domain: <interface>, and the <actual> subelement of <interface>
within a network: the toplevel, as well as any <portgroup>
Each vlan element must have one or more <tag id='n'/> subelements. If
there is more than one tag, it is assumed that vlan trunking is being
requested. If trunking is required with only a single tag, the
attribute "trunk='yes'" should be added to the toplevel <vlan>
element.
Some examples:
  <interface type='hostdev'>
    <vlan>
      <tag id='42'/>
    </vlan>
    <mac address='52:54:00:12:34:56'/>
    ...
  </interface>

  <network>
    <name>vlan-net</name>
    <vlan trunk='yes'>
      <tag id='30'/>
    </vlan>
    <virtualport type='openvswitch'/>
  </network>

  <interface type='network'>
    <source network='vlan-net'/>
    ...
  </interface>

  <network>
    <name>trunk-vlan</name>
    <vlan>
      <tag id='42'/>
      <tag id='43'/>
    </vlan>
    ...
  </network>

  <network>
    <name>multi</name>
    ...
    <portgroup name='production'>
      <vlan>
        <tag id='42'/>
      </vlan>
    </portgroup>
    <portgroup name='test'>
      <vlan>
        <tag id='666'/>
      </vlan>
    </portgroup>
  </network>

  <interface type='network'>
    <source network='multi' portgroup='test'/>
    ...
  </interface>
IMPORTANT NOTE: As of this patch there is no backend support for the
vlan element for *any* network device type. When support is added in
later patches, it will only be for those select network types that
support setting up a vlan on the host side, without the guest's
involvement. (For example, it will be possible to configure a vlan for
a guest connected to an openvswitch bridge, but it won't be possible
to do that for one that is connected to a standard Linux host bridge.)
An ESX server has one or more PhysicalNics that represent the actual
hardware NICs. Those can be listed via the interface driver.
A libvirt virtual network is mapped to a HostVirtualSwitch. On the
physical side a HostVirtualSwitch can be connected to PhysicalNics.
On the virtual side a HostVirtualSwitch has HostPortGroups that are
mapped to libvirt virtual network's portgroups. Typically there is
a HostPortGroup named 'VM Network' that is used to connect virtual
machines to a HostVirtualSwitch. A second HostPortGroup, typically
named 'Management Network', is used to connect the hypervisor itself
to the HostVirtualSwitch. This one is not mapped to a libvirt virtual
network's portgroup. There can be more HostPortGroups than those
typical two on a HostVirtualSwitch.
       +---------------+-------------------+
 ...---|               |                   |   +-------------+
       | HostPortGroup |                   |---| PhysicalNic |
       |  VM Network   |                   |   |   vmnic0    |
 ...---|               |                   |   +-------------+
       +---------------+ HostVirtualSwitch |
                       |     vSwitch0      |
       +---------------+                   |
       | HostPortGroup |                   |
 ...---|  Management   |                   |
       |    Network    |                   |
       +---------------+-------------------+
The virtual counterparts of the PhysicalNic are the HostVirtualNic for
the hypervisor and the VirtualEthernetCard for the virtual machines,
which are grouped into HostPortGroups.
 +---------------------+   +---------------+---...
 | VirtualEthernetCard |---|               |
 +---------------------+   | HostPortGroup |
 +---------------------+   |  VM Network   |
 | VirtualEthernetCard |---|               |
 +---------------------+   +---------------+
                                   |
                           +---------------+
 +---------------------+   | HostPortGroup |
 | HostVirtualNic      |---|  Management   |
 +---------------------+   |    Network    |
                           +---------------+---...
The currently implemented network driver can list, define and undefine
HostVirtualSwitches including HostPortGroups for virtual machines.
Existing HostVirtualSwitches cannot be edited yet. This will be added
in a followup patch.
Parallels Cloud Server has one serious discrepancy with libvirt:
libvirt stores domain configuration files in one place, and storage
files in other places (with the API of storage pools and storage volumes).
Parallels Cloud Server stores all domain data in a single directory;
for example, you may have a domain with the name fedora-15, which will be
located in '/var/parallels/fedora-15.pvm', and its hard disk image will be
in '/var/parallels/fedora-15.pvm/harddisk1.hdd'.
I've decided to create a storage driver which produces pseudo-volumes
(XML files with volume descriptions), and they will be 'converted' to
real disk images after attaching to a VM.
So if someone creates a VM with one hard disk using virt-manager,
virt-manager first creates a new volume and then defines a
domain. We can look up a volume by path in the XML domain definition
and find out the location of the new domain and the size of its hard disk.
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
The Parallels driver is 'stateless', like the vmware or openvz drivers.
It collects information about domains during startup using the
command-line utility prlctl. VMs in Parallels are identified by UUIDs
or unique names, which can be used as the respective fields in the
virDomainDef structure. Currently only basic info, like
description, number of virtual cpus and memory amount, is implemented.
Querying device information will be added in the next patches.
Parallels doesn't support non-persistent domains - you can't run
a domain having only a disk image; it must always be registered
in the system.
Functions for querying domain info have just been copied from the
test driver with some changes - they extract the needed data from a
previously created list of virDomainObj objects.
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
Parallels Cloud Server is a cloud-ready virtualization
solution that allows users to simultaneously run multiple virtual
machines and containers on the same physical server.
More information can be found here: http://www.parallels.com/products/pcs/
A beta version of Parallels Cloud Server can also be downloaded there.
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
Move the code that handles the LXC monitor out of the
lxc_process.c file and into lxc_monitor.{c,h}
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Commands in the node device group are moved from virsh.c to virsh-nodedev.c.
* virsh.c: Remove commands in node device group.
* virsh-nodedev.c: New file, filled with commands in node device group
* po/POTFILES.in: Add virsh-nodedev.c
* cfg.mk: Skip the config.h inclusion check for virsh-nodedev.c
Commands in the host group are moved from virsh.c to virsh-host.c.
* virsh.c: Remove commands in host group.
* virsh-host.c: New file, filled with commands in host group
* po/POTFILES.in: Add virsh-host.c
* cfg.mk: Skip the config.h inclusion check for virsh-host.c
Commands to manage domain snapshot are moved from virsh.c to
virsh-snapshot.c.
* virsh.c: Remove domain snapshot commands.
* virsh-snapshot.c: New file, filled with domain snapshot commands.
* po/POTFILES.in: Add virsh-snapshot.c
* cfg.mk: Skip the strcase check and the config.h inclusion check for
virsh-snapshot.c
Commands to manage secret are moved from virsh.c to virsh-secret.c,
with a few helpers for secret command use.
* virsh.c: Remove secret commands and a few helpers.
(vshCommandOptSecret, and vshCommandOptSecretBy)
* virsh-secret.c: New file, filled with secret commands and their helpers.
* po/POTFILES.in: Add virsh-secret.c
* cfg.mk: Skip the config.h inclusion check for virsh-secret.c
Commands to manage network filter are moved from virsh.c to virsh-nwfilter.c,
with a few helpers for network filter command use.
* virsh.c: Remove network filter commands and a few helpers.
(vshCommandOptNWFilter, and vshCommandOptNWFilterBy)
* virsh-nwfilter.c: New file, filled with network filter commands and their
helpers.
* po/POTFILES.in: Add virsh-nwfilter.c
* cfg.mk: Skip the config.h inclusion check for virsh-nwfilter.c
Commands to manage host interface are moved from virsh.c to
virsh-interface.c, with a few helpers for interface command use.
* virsh.c: Remove interface commands and a few helpers.
(vshCommandOptInterface, vshCommandOptInterfaceBy)
* virsh-interface.c: New file, filled with interface commands and
their helpers.
* cfg.mk: Skip the config.h inclusion check for virsh-interface.c
* po/POTFILES.in: Add virsh-interface.c
Commands to manage network are moved from virsh.c to virsh-network.c,
with a few helpers for network command use.
* virsh.c: Remove network commands and a few helpers.
* virsh-network.c: New file, filled with network commands and their
helpers.
* po/POTFILES.in: Add virsh-network.c
* cfg.mk: Skip the config.h inclusion check for virsh-network.c
This splits the commands of the storage pool group into virsh-pool.c.
The helpers not for common use are moved too. A standard copyright
header is added to the new file.
* tools/virsh.c:
  Remove commands for the storage pool group and a few helpers.
  (vshCommandOptVol, vshCommandOptVolBy).
* tools/virsh-pool.c:
  New file, filled with commands of the storage pool group and their
  helpers.
* po/POTFILES.in:
  Add virsh-pool.c
* cfg.mk:
  Skip the config.h inclusion check for virsh-pool.c
This splits the commands of the storage volume group into virsh-volume.c.
The helpers not for common use are moved too. A standard copyright
header is added to the new file.
* tools/virsh.c:
  Remove commands for the storage volume group and a few helpers.
  (vshCommandOptVol, vshCommandOptVolBy).
* tools/virsh-volume.c:
  New file, filled with commands of the storage volume group and their
  helpers.
* po/POTFILES.in:
  Add virsh-volume.c
* cfg.mk:
  Skip the config.h inclusion check for virsh-volume.c
This splits the commands to manage domains into virsh-domain.c. The helpers
not for common use are moved into it too. A standard copyright header is
added to the new file.
* tools/virsh.c:
  - Remove commands for the domain group, and one helper
    (vshDomainVcpuStateToString)
  - vshStreamSink is moved before the command definitions because it is
    also used by commands not of the domain group, such as volUpload.
* tools/virsh-domain.c:
  - New file, commands for the domain group and the one helper are
    moved into it.
* po/POTFILES.in:
  - Add virsh-domain.c
* cfg.mk:
  - Skip the config.h inclusion check for virsh-domain.c
This splits the commands to monitor domain status into
virsh-domain-monitor.c. The helpers not for common use are moved too.
A standard copyright header is added.
* tools/virsh.c:
  - Remove commands for the domain monitoring group and a few helpers (
    vshDomainIOErrorToString, vshGetDomainDescription,
    vshDomainControlStateToString, vshDomainStateToString) not for
    common use.
  - Remove the inclusion of "intprops.h".
* tools/virsh-domain-monitor.c:
  - New file, filled with commands of the domain monitor group.
  - Add "intprops.h".
* cfg.mk:
  - Skip the strcase check for virsh-domain-monitor.c
  - Skip the config.h inclusion check for virsh-domain-monitor.c
* po/POTFILES.in
  - Add virsh-domain-monitor.c
Move all the code that manages stop/start of LXC processes
into separate lxc_process.{c,h} file to make the lxc_driver.c
file smaller
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Move the cgroup setup code out of the lxc_controller.c file
and into lxc_cgroup.{c,h}. This reduces the size of the
lxc_controller.c file and paves the way to invoke cgroup
setup from lxc_driver.c instead of lxc_controller.c in the
future
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
This patch adds support for managing sheepdog pools and volumes to libvirt.
It uses the "collie" command-line utility that comes with sheepdog.
A sheepdog pool in libvirt maps to a sheepdog cluster.
It needs a host and port to connect to, which in most cases
is just going to be the default of localhost on port 7000.
A sheepdog volume in libvirt maps to a sheepdog vdi.
To create one specify the pool, a name and the capacity.
Volumes can also be resized later.
In the volume XML the vdi name has to be put into the <target><path>.
To use the volume as a disk source for virtual machines specify
the vdi name as "name" attribute of the <source>.
The host and port information from the pool are specified inside the host tag.
  <disk type='network'>
    ...
    <source protocol="sheepdog" name="vdi_name">
      <host name="localhost" port="7000"/>
    </source>
  </disk>
To work right this patch parses the output of collie,
so it relies on the raw output option. There recently was a bug which caused
size information to be reported wrong. This is fixed upstream already and
will be in the next release.
Signed-off-by: Sebastian Wiedenroth <wiedi@frubar.net>
The virnetdevtap.c and viruri.c files had two error reporting
messages which were not marked for translation with _(...)
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Currently, we either generate some cmd*Edit commands (cmdPoolEdit
and cmdNetworkEdit) via a sed script or copy the body of cmdEdit
(e.g. cmdInterfaceEdit, cmdNWFilterEdit, etc.). This makes
it harder to implement any new feature in our editing system.
Therefore switch to a new implementation - define macros to:
- dump XML (EDIT_GET_XML)
- take an action if XML wasn't changed,
usually just vshPrint() (EDIT_NOT_CHANGED)
- define new object (EDIT_DEFINE) - the edited XML is in @doc_edited
- free object defined by EDIT_DEFINE (EDIT_FREE)
and #include "virsh-edit.c"
This patch adds DHCP snooping support to libvirt. The learning method for
IP addresses is specified by setting the "CTRL_IP_LEARNING" variable to one of
"any" [default] (existing IP learning code), "none" (static only addresses)
or "dhcp" (DHCP snooping).
Active leases are saved in a lease file and reloaded on restart or HUP.
The following interface XML activates and uses the DHCP snooping:
  <interface type='bridge'>
    <source bridge='virbr0'/>
    <filterref filter='clean-traffic'>
      <parameter name='CTRL_IP_LEARNING' value='dhcp'/>
    </filterref>
  </interface>
All filters containing the variable 'IP' are automatically adjusted when
the VM receives an IP address via DHCP. However, multiple IP addresses per
interface are silently ignored in this patch, thus only supporting one IP
address per interface. Multiple IP address support is added in a later
patch in this series.
Signed-off-by: David L Stevens <dlstevens@us.ibm.com>
Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
To ensure consistent error reporting of invalid arguments,
provide a number of predefined helper methods & macros.
- An arg which must not be NULL:
    virCheckNonNullArgReturn(argname, retvalue)
    virCheckNonNullArgGoto(argname, label)
- An arg which must be NULL
    virCheckNullArgGoto(argname, label)
- An arg which must be positive (ie 1 or greater)
    virCheckPositiveArgGoto(argname, label)
- An arg which must not be 0
    virCheckNonZeroArgGoto(argname, label)
- An arg which must be zero
    virCheckZeroArgGoto(argname, label)
- An arg which must not be negative (ie 0 or greater)
    virCheckNonNegativeArgGoto(argname, label)
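A minimal usage sketch (the function name is hypothetical) showing the intended
pattern: validate arguments at the top of a public API entry point and jump to
the error label when a check fails:

    #include "internal.h"     /* the checking macros live here per this commit */

    int
    virExampleAPISetName(virDomainPtr domain, const char *name, int count)
    {
        virCheckNonNullArgGoto(domain, error);
        virCheckNonNullArgGoto(name, error);
        virCheckPositiveArgGoto(count, error);

        /* ... actual work would go here ... */
        return 0;

     error:
        return -1;
    }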
* src/libvirt.c, src/libvirt-qemu.c,
src/nodeinfo.c, src/datatypes.c: Update to use
virCheckXXXX macros
* po/POTFILES.in: Add libvirt-qemu.c and virterror_internal.h
* src/internal.h: Define macros for checking invalid args
* src/util/virterror_internal.h: Define macros for reporting
invalid args
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
This patch adds support for a new storage backend with RBD support.
RBD is the RADOS Block Device and is part of the Ceph distributed storage
system.
It comes in two flavours: Qemu-RBD and Kernel RBD. This storage backend only
supports Qemu-RBD, thus limiting the use of this storage driver to Qemu only.
To function, this backend relies on librbd and librados being present on the
local system.
The backend also supports Cephx authentication for safe authentication with
the Ceph cluster.
For storing credentials it uses the built-in secret mechanism of libvirt.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
The HAL driver requires a DBus connection. The HAL device code
further requires that the DBus connection is integrated with the
event loop and provides such glue logic itself.
The forthcoming FirewallD integration also requires a
dbus connection with event loop integration. Thus we need
to pull the current event loop glue out of the HAL driver.
To do this we create src/util/virdbus.{c,h} files. This contains
just one method virDBusGetSystemBus() which obtains a handle
to the single shared system bus instance, with event glue
automagically setup.
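A hypothetical usage sketch; virDBusGetSystemBus() is assumed here to return a
shared DBusConnection handle (or NULL on failure) that is already integrated
with the libvirt event loop:

    #include <dbus/dbus.h>
    #include "virdbus.h"

    static int
    exampleCheckSystemBus(void)
    {
        DBusConnection *sysbus = virDBusGetSystemBus();

        if (!sysbus)
            return -1;        /* no system bus available */

        /* The connection is shared process-wide; callers must not close it. */
        return 0;
    }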
* configure.ac docs/news.html.in libvirt.spec.in: update for the release
* po/*.po*: updated a number of language translations, including new
Indian languages, and regenerated
To follow latest naming conventions, rename src/util/authhelper.[ch]
to src/util/virauth.[ch].
* src/util/authhelper.[ch]: Rename to src/util/virauth.[ch]
* src/esx/esx_driver.c, src/hyperv/hyperv_driver.c,
src/phyp/phyp_driver.c, src/xenapi/xenapi_driver.c: Update
for renamed include files
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The '.ini' file format is a useful alternative to the existing
config file style, when you need to have config files which
are hashes of hashes. The 'virKeyFilePtr' object provides a
way to parse these file types.
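For illustration, a hypothetical file in this style, where each [group] becomes
a key in the outer hash and each setting becomes a key in the inner hash (the
group and setting names below are invented for the example):

    [server]
    host = lockd.example.org
    port = 7000

    [auth]
    mechanism = sasl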
* src/Makefile.am, src/util/virkeyfile.c,
src/util/virkeyfile.h: Add .ini file parser
* tests/Makefile.am, tests/virkeyfiletest.c: Test
basic parsing capabilities
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
This patch adds a set of functions used in creating console streams for
domains using PTYs and ensures mutually exclusive access to the PTYs.
If mutually exclusive access is not used, two clients may open the same
console, which results in corruption on both clients as both of them
race to read data from the PTY.
Two approaches are used to ensure this:
1) Internal data structure holding open PTYs.
This is used internally and enables the user to forcibly
terminate another console connection, e.g. when somebody leaves
the console open on another host.
2) UUCP style lock files:
This uses UUCP lock files according to the FHS
( http://www.pathname.com/fhs/pub/fhs-2.3.html#VARLOCKLOCKFILES )
to check that other programs (like minicom) are not using the pty
device of the console (a minimal sketch of the convention follows below).
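A minimal sketch of the UUCP lock file convention, assuming the FHS rules (a
/var/lock/LCK..<device> file holding the owner's PID as a ten-character ASCII
decimal field); this illustrates the convention, not libvirt's implementation:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static int
    exampleLockPty(const char *devname)    /* e.g. "pts/5" -> "LCK..pts-5" */
    {
        char name[64], path[256], content[64];
        size_t i;

        snprintf(name, sizeof(name), "%s", devname);
        for (i = 0; name[i]; i++)           /* '/' is not valid in a filename */
            if (name[i] == '/')
                name[i] = '-';

        snprintf(path, sizeof(path), "/var/lock/LCK..%s", name);
        int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0644);
        if (fd < 0)
            return -1;                      /* already locked, or no permission */

        snprintf(content, sizeof(content), "%10lld\n", (long long) getpid());
        if (write(fd, content, strlen(content)) < 0) {
            close(fd);
            unlink(path);
            return -1;
        }
        close(fd);
        return 0;
    }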
This feature is disabled by default and may be enabled using the
configure parameter
--with-console-lock-files=/path/to/lock/file/directory
or --with-console-lock-files=auto (which tries to infer the
location from the OS used; currently only Linux).
On usual Linux systems, normal users may not write to the
/var/lock directory containing the locks. This poses problems
while in session mode. If the current user has no access to the
lockfile directory, the check for the presence of the lock file is
still done, but no lock file is created. This does NOT result in an
error.
This patch allows libvirt to add interfaces to already
existing Open vSwitch bridges. The following syntax in the
domain XML file can be used:

  <interface type='bridge'>
    <mac address='52:54:00:d0:3f:f2'/>
    <source bridge='ovsbr'/>
    <virtualport type='openvswitch'>
      <parameters interfaceid='921a80cd-e6de-5a2e-db9c-ab27f15a6e1d'/>
    </virtualport>
    <address type='pci' domain='0x0000' bus='0x00'
             slot='0x03' function='0x0'/>
  </interface>

or if libvirt should auto-generate the interfaceid use the
following syntax:

  <interface type='bridge'>
    <mac address='52:54:00:d0:3f:f2'/>
    <source bridge='ovsbr'/>
    <virtualport type='openvswitch'>
    </virtualport>
    <address type='pci' domain='0x0000' bus='0x00'
             slot='0x03' function='0x0'/>
  </interface>

It is also possible to pass an optional profileid. To do that
use the following syntax:

  <interface type='bridge'>
    <source bridge='ovsbr'/>
    <mac address='00:55:1a:65:a2:8d'/>
    <virtualport type='openvswitch'>
      <parameters interfaceid='921a80cd-e6de-5a2e-db9c-ab27f15a6e1d'
                  profileid='test-profile'/>
    </virtualport>
  </interface>

To create an Open vSwitch bridge, install Open vSwitch and
run the following command:

  ovs-vsctl add-br ovsbr
The auto-generated WWNs comply with the new addressing schema for WWNs:
<quote>
the first nibble is either hex 5 or 6 followed by a 3-byte vendor
identifier and 36 bits for a vendor-specified serial number.
</quote>
We choose hex 5 for the first nibble. For the 3-byte vendor ID,
we use the OUI according to the underlying hypervisor type (invoking
virConnectGetType to get the virt type), e.g. if virConnectGetType
returns "QEMU", we use Qumranet's OUI (00:1A:4A); if it returns
ESX|VMWARE, we use VMware's OUI (00:05:69). Currently it only
supports the qemu|xen|libxl|xenapi|hyperv|esx|vmware drivers. The last
36 bits are auto-generated.
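For illustration, a minimal sketch of assembling such a WWN; the function name,
the hard-coded OUI and the use of random() are assumptions for the example, not
the actual implementation. The result prints as 16 hex digits: one nibble of
'5', six hex digits of OUI, and nine hex digits (36 bits) of serial.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void
    exampleGenerateWWN(char *buf, size_t buflen)
    {
        const uint64_t oui = 0x001A4A;              /* e.g. Qumranet's OUI */
        uint64_t serial = (((uint64_t) random() << 32) | (uint64_t) random()) &
                          0xFFFFFFFFFULL;           /* keep 36 random bits */

        /* layout: 5 | OUI (24 bits) | serial (36 bits) => 64-bit WWN */
        uint64_t wwn = (0x5ULL << 60) | (oui << 36) | serial;

        snprintf(buf, buflen, "%016llx", (unsigned long long) wwn);
    }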
Rename the src/util/netlink files to src/util/virnetlink to
better fit the naming scheme. Also rename nlComm to virNetlinkCommand.
Signed-off-by: D. Herrendoerfer <d.herrendoerfer@herrendoerfer.name>
Currently security labels can be of type 'dynamic' or 'static'.
If no security label is given, then 'dynamic' is assumed. The
current code takes advantage of this default, and avoids even
saving <seclabel> elements with type='dynamic' to disk. This
means if you temporarily change security driver, the guests
can all still start.
With the introduction of sVirt to LXC though, there needs to be
a new default of 'none' to allow unconfined LXC containers.
This patch introduces two new security label types
- default: the host configuration decides whether to run the
guest with type 'none' or 'dynamic' at guest start
- none: the guest will run unconfined by security policy
The 'none' label type will obviously be undesirable for some
deployments, so a new qemu.conf option allows a host admin to
mandate confined guests. It is also possible to turn off default
confinement
security_default_confined = 1|0 (default == 1)
security_require_confined = 1|0 (default == 0)
* src/conf/domain_conf.c, src/conf/domain_conf.h: Add new
seclabel types
* src/security/security_manager.c, src/security/security_manager.h:
Set default sec label types
* src/security/security_selinux.c: Handle 'none' seclabel type
* src/qemu/qemu.conf, src/qemu/qemu_conf.c, src/qemu/qemu_conf.h,
src/qemu/libvirtd_qemu.aug: New security config options
* src/qemu/qemu_driver.c: Tell security driver about default
config
To assist people in verifying that their host is operating in an
optimal manner, provide a 'virt-host-validate' command. For each
type of hypervisor, it will check any pre-requisites, or other
good recommendations and report what's working & what is not.
eg
# virt-host-validate
QEMU: Checking for device /dev/kvm : FAIL (Check that the 'kvm-intel' or 'kvm-amd' modules are loaded & the BIOS has enabled virtualization)
QEMU: Checking for device /dev/vhost : WARN (Load the 'vhost_net' module to improve performance of virtio networking)
QEMU: Checking for device /dev/net/tun : PASS
LXC: Checking for Linux >= 2.6.26 : PASS
This warns people if they have vmx/svm, but don't have /dev/kvm. It
also warns about a missing vhost net device.
In preparation for the patch to include MurmurHash3, which
introduces virhashcode.h and virhashcode.c files, rename
the existing hash.h and hash.c to virhash.h and virhash.c
respectively.
There is now a standard QEMU guest agent that can be installed
and given a virtio serial channel
  <channel type='unix'>
    <source mode='bind' path='/var/lib/libvirt/qemu/f16x86_64.agent'/>
    <target type='virtio' name='org.qemu.guest_agent.0'/>
  </channel>
The protocol that runs over the guest agent is JSON based and
very similar to the JSON monitor. We can't use exactly the same
code because there are some odd differences in the way messages
and errors are structured. The qemu_agent.c file is based on
a combination and simplification of qemu_monitor.c and
qemu_monitor_json.c
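For illustration, an exchange over that channel might look like the following,
assuming the standard guest agent commands (guest-sync to resynchronize the
stream, then guest-shutdown, which typically produces no reply because the
guest goes away); the id value is arbitrary:

  {"execute": "guest-sync", "arguments": {"id": 1234567890}}
  {"return": 1234567890}
  {"execute": "guest-shutdown", "arguments": {"mode": "powerdown"}}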
* src/qemu/qemu_agent.c, src/qemu/qemu_agent.h: Support for
talking to the agent for shutdown
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: Add thread
helpers for talking to the agent
* src/qemu/qemu_process.c: Connect to agent whenever starting
a guest
* src/qemu/qemu_monitor_json.c: Make variable static
Preparation for another patch that refactors common patterns
into the new file for fewer lines of code overall.
* src/util/util.h (virTypedParameterArrayClear): Move...
* src/util/virtypedparam.h: ...to new file.
(virTypedParameterArrayValidate, virTypedParameterAssign): New
prototypes.
* src/util/util.c (virTypedParameterArrayClear): Likewise.
* src/util/virtypedparam.c: New file.
* po/POTFILES.in: Mark file for translation.
* src/Makefile.am (UTIL_SOURCES): Build it.
* src/libvirt_private.syms (util.h): Split...
(virtypedparam.h): to new section.
(virkeycode.h): Sort.
* daemon/remote.c: Adjust callers.
* tools/virsh.c: Likewise.
The logging APIs need to be able to generate formatted timestamps
using only async signal safe functions. This rules out using
gmtime/localtime/malloc/gettimeofday(!) and much more.
Introduce a new internal API which is async signal safe.
  virTimeMillisNowRaw     replacement for gettimeofday. Uses clock_gettime
                          where available, otherwise falls back to the
                          unsafe gettimeofday

  virTimeFieldsNowRaw     replacements for gmtime(), convert a timestamp
  virTimeFieldsThenRaw    into a broken out set of fields. No localtime()
                          replacement is provided, because converting to
                          local time is not practical with only async
                          signal safe APIs.

  virTimeStringNowRaw     replacements for strftime() which print a timestamp
  virTimeStringThenRaw    into a string, using a pre-determined format, with
                          a fixed size buffer (VIR_TIME_STRING_BUFLEN)
For each of these there is also a version without the Raw postfix
which raises a full libvirt error. These versions are not async
signal safe
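A hypothetical usage sketch from a logging path that must stay async signal
safe; the exact signature of virTimeStringNowRaw() (filling a caller-supplied
buffer and returning < 0 on failure) is an assumption here:

    #include <string.h>
    #include <unistd.h>
    #include "virtime.h"

    static void
    exampleWriteTimestamp(int logfd)
    {
        char stamp[VIR_TIME_STRING_BUFLEN];

        if (virTimeStringNowRaw(stamp) < 0)
            return;                   /* no error raised: must stay signal safe */

        /* write() is async-signal-safe, unlike fprintf()/malloc-based code */
        if (write(logfd, stamp, strlen(stamp)) < 0)
            return;
    }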
* src/Makefile.am, src/util/virtime.c, src/util/virtime.h: New files
* src/libvirt_private.syms: New APis
* configure.ac: Check for clock_gettime in -lrt
* tests/virtimetest.c, tests/Makefile.am: Test new APIs
Add the core functions that implement the functionality of the API.
Suspend is done by using an asynchronous mechanism so that we can return
the status to the caller before the host gets suspended. This asynchronous
operation is achieved by suspending the host in a separate thread of
execution. However, returning the status to the caller is only best-effort,
but not guaranteed.
To resume the host, an RTC alarm is set up (based on how long we want to
suspend) before suspending the host. When this alarm fires, the host
gets woken up.
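A minimal sketch of the underlying mechanism, assuming the Linux sysfs RTC
wake-alarm interface and /sys/power/state; the paths, the choice of rtc0 and
the error handling are illustrative assumptions, not the driver's actual code:

    #include <stdio.h>
    #include <time.h>

    static int
    exampleWriteSysfs(const char *path, const char *value)
    {
        FILE *fp = fopen(path, "w");
        if (!fp)
            return -1;
        int ret = fprintf(fp, "%s\n", value) < 0 ? -1 : 0;
        if (fclose(fp) != 0)
            ret = -1;
        return ret;
    }

    static int
    exampleSuspendFor(unsigned int seconds, const char *target /* "mem"/"disk" */)
    {
        char when[64];

        snprintf(when, sizeof(when), "%lld", (long long)(time(NULL) + seconds));

        /* Clear any stale alarm, then arm a new one at now + seconds. */
        if (exampleWriteSysfs("/sys/class/rtc/rtc0/wakealarm", "0") < 0 ||
            exampleWriteSysfs("/sys/class/rtc/rtc0/wakealarm", when) < 0)
            return -1;

        /* The write blocks until the host resumes. */
        return exampleWriteSysfs("/sys/power/state", target);
    }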
A Suspend-to-RAM operation on a host running Linux can take more than 20
seconds, depending on the load of the system. (Freezing of tasks, an operation
preceding any suspend operation, is given up after a 20 second timeout.)
And Suspend-to-Disk can take even more time, considering the time required
for compaction, creating the memory image and writing it to disk, etc.
So, we do not allow the user to specify a suspend duration of less than 60
seconds, to be on the safer side, since we don't want to prematurely declare
failure when we only had to wait for some more time.
The original patch for commit 4789fb2 considered renaming a file,
then backed out the name change, but forgot to back out the POTFILES.in
change, resulting in 'make syntax-check' failure.
This patch adds support for a systemd init service for libvirtd
and libvirt-guests. The libvirtd.service is *not* written to use
socket activation, since we want libvirtd to start on boot so it
can do guest auto-start.
The libvirt-guests.service is pretty lame, just exec'ing the
original init script for now. Ideally we would factor out the
functionality, into some shared tool.
Instead of
./configure --with-init-script=redhat
You can now do
./configure --with-init-script=systemd
Or better still:
./configure --with-init-script=systemd+redhat
We can also now support install of the upstart init script
* configure.ac: Add systemd, and systemd+redhat options to
--with-init-script option
* daemon/Makefile.am: Install systemd services
* daemon/libvirtd.sysconf: Add note about unused env variable
with systemd
* daemon/libvirtd.service.in: libvirtd systemd service unit
* libvirt.spec.in: Add scripts to installing systemd services
and migrating from legacy init scripts
* tools/Makefile.am: Install systemd services
* tools/libvirt-guests.init.sh: Rename to tools/libvirt-guests.init.in
* tools/libvirt-guests.service.in: systemd service unit
Move the ifaceMacvtapLinkDump and ifaceGetNthParent functions
into virnetdevvportprofile.c since they are specific to that
code. This avoids polluting the headers with the Linux specific
netlink data types
* src/util/interface.c, src/util/interface.h: Move
ifaceMacvtapLinkDump and ifaceGetNthParent functions and delete
remaining file
* src/util/virnetdevvportprofile.c: Add ifaceMacvtapLinkDump
and ifaceGetNthParent functions
* src/network/bridge_driver.c, src/nwfilter/nwfilter_gentech_driver.c,
src/nwfilter/nwfilter_learnipaddr.c, src/util/virnetdevmacvlan.c:
Remove include of interface.h
Rename the macvtap.c file to virnetdevmacvlan.c to reflect its
functionality. Move the port profile association code out into
virnetdevvportprofile.c. Make the APIs available unconditionally
to callers
* src/util/macvtap.h: rename to src/util/virnetdevmacvlan.h,
* src/util/macvtap.c: rename to src/util/virnetdevmacvlan.c
* src/util/virnetdevvportprofile.c, src/util/virnetdevvportprofile.h:
Pull in vport association code
* src/Makefile.am, src/conf/domain_conf.h, src/qemu/qemu_conf.c,
src/qemu/qemu_conf.h, src/qemu/qemu_driver.c: Update include
paths & remove conditional compilation
The src/lxc/veth.c file contains APIs for managing veth devices,
but some of the APIs duplicate stuff from src/util/virnetdev.h.
Delete the duplicate APIs and rename the remaining ones to
follow virNetDevVethXXXX
* src/lxc/veth.c, src/lxc/veth.h: Rename APIs & delete duplicates
* src/lxc/lxc_container.c, src/lxc/lxc_controller.c,
src/lxc/lxc_driver.c: Update for API renaming
The src/util/network.c file is a dumping ground for many different
APIs. Split it up into 5 pieces, along functional lines
- src/util/virnetdevbandwidth.c: virNetDevBandwidth type & helper APIs
- src/util/virnetdevvportprofile.c: virNetDevVPortProfile type & helper APIs
- src/util/virsocketaddr.c: virSocketAddr and APIs
- src/conf/netdev_bandwidth_conf.c: XML parsing / formatting
for virNetDevBandwidth
- src/conf/netdev_vport_profile_conf.c: XML parsing / formatting
for virNetDevVPortProfile
* src/util/network.c, src/util/network.h: Split into 5 pieces
* src/conf/netdev_bandwidth_conf.c, src/conf/netdev_bandwidth_conf.h,
src/conf/netdev_vport_profile_conf.c, src/conf/netdev_vport_profile_conf.h,
src/util/virnetdevbandwidth.c, src/util/virnetdevbandwidth.h,
src/util/virnetdevvportprofile.c, src/util/virnetdevvportprofile.h,
src/util/virsocketaddr.c, src/util/virsocketaddr.h: New pieces
* daemon/libvirtd.h, daemon/remote.c, src/conf/domain_conf.c,
src/conf/domain_conf.h, src/conf/network_conf.c,
src/conf/network_conf.h, src/conf/nwfilter_conf.h,
src/esx/esx_util.h, src/network/bridge_driver.c,
src/qemu/qemu_conf.c, src/rpc/virnetsocket.c,
src/rpc/virnetsocket.h, src/util/dnsmasq.h, src/util/interface.h,
src/util/iptables.h, src/util/macvtap.c, src/util/macvtap.h,
src/util/virnetdev.h, src/util/virnetdevtap.c,
tools/virsh.c: Update include files
Following the renaming of the bridge management APIs, we can now
split the source file into 3 corresponding pieces
* src/util/virnetdev.c: APIs for any type of network interface
* src/util/virnetdevbridge.c: APIs for bridge interfaces
* src/util/virnetdevtap.c: APIs for TAP interfaces
* src/util/virnetdev.c, src/util/virnetdev.h,
src/util/virnetdevbridge.c, src/util/virnetdevbridge.h,
src/util/virnetdevtap.c, src/util/virnetdevtap.h: Copied
from bridge.{c,h}
* src/util/bridge.c, src/util/bridge.h: Split into 3 pieces
* src/lxc/lxc_driver.c, src/network/bridge_driver.c,
src/openvz/openvz_driver.c, src/qemu/qemu_command.c,
src/qemu/qemu_conf.h, src/uml/uml_conf.c, src/uml/uml_conf.h,
src/uml/uml_driver.c: Update #include directives
Currently every caller of the brXXX APIs has to store the returned
errno value and then raise an error message. This results in
inconsistent error messages across drivers, additional burden on
the callers and makes the error reporting inaccurate since it is
hard to distinguish different scenarios from 1 errno value.
* src/util/bridge.c: Raise errors instead of returning errnos
* src/lxc/lxc_driver.c, src/network/bridge_driver.c,
src/qemu/qemu_command.c, src/uml/uml_conf.c,
src/uml/uml_driver.c: Remove error reporting code
Domain listing, basic information retrieval and domain life cycle
management are implemented. But currently the domain XML output
lacks the complete devices section.
The driver uses OpenWSMAN to directly communicate with a Hyper-V
server over its WS-Management interface exposed via Microsoft WinRM.
The driver is based on the work of Michael Sievers. This started in
the same master program project group at the University of Paderborn
as the ESX driver.
See Michael's blog for details: http://hyperv4libvirt.wordpress.com/
Add a generator script to generate the structs and serialization
information for OpenWSMAN.
openwsman.h collects workarounds for problems in OpenWSMAN <= 2.2.6.
There are also disabled sections that would use ws_serializer_free_mem
but can't because it's broken in OpenWSMAN <= 2.2.6. Patches to fix
this have been posted upstream.
In daemons using pidfiles to protect against concurrent
execution there is a possibility that a crash may leave a stale
pidfile on disk, which then prevents later restart of the daemon.
To avoid this problem, introduce a pair of APIs which make
use of virFileLock to ensure crash-safe & race condition-safe
pidfile acquisition & release
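For context, a generic sketch of the underlying pattern (not the new virPidFile
API itself): take an fcntl() write lock on the pidfile before writing the PID,
so a stale file left behind by a crash does not block acquisition, since the
crashed process's lock has already vanished:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static int
    examplePidFileAcquire(const char *path)
    {
        int fd = open(path, O_WRONLY | O_CREAT, 0644);
        if (fd < 0)
            return -1;

        struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };
        if (fcntl(fd, F_SETLK, &fl) < 0) {   /* another instance holds the lock */
            close(fd);
            return -1;
        }

        char buf[32];
        snprintf(buf, sizeof(buf), "%lld\n", (long long) getpid());
        if (ftruncate(fd, 0) < 0 || write(fd, buf, strlen(buf)) < 0) {
            close(fd);
            return -1;
        }
        return fd;   /* keep open: closing the fd would drop the lock */
    }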
* src/libvirt_private.syms, src/util/virpidfile.c,
src/util/virpidfile.h: Add virPidFileAcquire and virPidFileRelease
* configure.ac docs/news.html.in libvirt.spec.in: updates for new
release
* po/*.po*: pulled translations from the transifex teams and regenerated
localizations
O_DIRECT has stringent requirements. Rather than make lots of changes
at each site that wants to use O_DIRECT, it is easier to offload
the work through a helper process that mirrors the I/O between a
pipe and the actual direct fd, so that the other end of the pipe
no longer has to worry about constraints.
Plus, if the kernel ever gains better posix_fadvise support, then we
only have to touch a single file to let all callers benefit from a
more efficient way to avoid file system caching.
* src/util/virfile.h (virFileDirectFdFlag, virFileDirectFdNew)
(virFileDirectFdClose, virFileDirectFdFree): New prototypes.
* src/util/virdirect.c: Implement new wrapper object.
* src/libvirt_private.syms (virfile.h): Export new symbols.
* cfg.mk (useless_free_options): Add to list.
* po/POTFILES.in: Add new translations.
This tweaks the RPC generator to cope with some naming
conventions used for the QEMU specific APIs
* daemon/remote.c: Server side dispatcher
* src/remote/remote_driver.c: Client side dispatcher
* src/remote/qemu_protocol.x: Wire protocol definition
* src/rpc/gendispatch.pl: Use '$structprefix' in method
names, fix QEMU flags and fix dispatcher method names
The last patch was incomplete. The translated strings merely
moved between generated file names, rather than disappearing.
* cfg.mk (generated_files): Update generated file names.
* po/POTFILES.in: Add remote_dispatch.h
This guts the libvirtd daemon, removing all its networking and
RPC handling code. Instead it calls out to the new virServerPtr
APIs for all its RPC & networking work
As a result, all libvirtd daemon error reporting now takes place
via the normal internal error reporting APIs. There is no need
to call separate error reporting APIs in RPC code, nor should
code use VIR_WARN/VIR_ERROR for reporting fatal problems anymore.
* daemon/qemu_dispatch_*.h, daemon/remote_dispatch_*.h: Remove
old generated dispatcher code
* daemon/qemu_dispatch.h, daemon/remote_dispatch.h: New dispatch
code
* daemon/dispatch.c, daemon/dispatch.h: Remove obsoleted code
* daemon/remote.c, daemon/remote.h: Rewrite for new dispatch
APIs
* daemon/libvirtd.c, daemon/libvirtd.h: Remove all networking
code
* daemon/stream.c, daemon/stream.h: Update for new APIs
* daemon/Makefile.am: Link to libvirt-net-rpc-server.la
To facilitate creation of new clients using XDR RPC services,
pull a lot of the remote driver code into a set of reusable
objects.
- virNetClient: Encapsulates a socket connection to a
remote RPC server. Handles all the network I/O for
reading/writing RPC messages. Delegates RPC encoding
and decoding to the registered programs
- virNetClientProgram: Handles processing and dispatch
of RPC messages for a single RPC (program,version).
A program can register to receive async events
from a client
- virNetClientStream: Handles generic I/O stream
integration to RPC layer
Each new client program now merely needs to define the list of
RPC procedures & events it wants and their handlers. It does
not need to deal with any of the network I/O functionality at
all.
Allow RPC servers to advertise themselves using MDNS,
via Avahi
* src/rpc/virnetserver.c, src/rpc/virnetserver.h: Allow
registration of MDNS services via avahi
* src/rpc/virnetserverservice.c, src/rpc/virnetserverservice.h: Add
API to fetch the listen port number
* src/rpc/virnetsocket.c, src/rpc/virnetsocket.h: Add API to
fetch the local port number
* src/rpc/virnetservermdns.c, src/rpc/virnetservermdns.h: Represent
an MDNS advertisement
To facilitate creation of new daemons providing XDR RPC services,
pull a lot of the libvirtd daemon code into a set of reusable
objects.
* virNetServer: A server contains one or more services which
accept incoming clients. It maintains the list of active
clients. It has a list of RPC programs which can be used
by clients. When clients produce a complete RPC message,
the server passes this onto the corresponding program for
handling, and queues any response back with the client.
* virNetServerClient: Encapsulates a single client connection.
All I/O for the client is handled, reading & writing RPC
messages.
* virNetServerProgram: Handles processing and dispatch of
RPC method calls for a single RPC (program,version).
Multiple programs can be registered with the server.
* virNetServerService: Encapsulates socket(s) listening for
new connections. Each service listens on a single host/port,
but may have multiple sockets if on a dual IPv4/6 host.
Each new daemon now merely has to define the list of RPC procedures
& their handlers. It does not need to deal with any network related
functionality at all.
This provides two modules for handling SASL
* virNetSASLContext provides the process-wide state, currently
just a whitelist of usernames on the server and a one time
library init call
* virNetSASLSession provides the per-connection state, ie the
SASL session itself. This also includes APIs for providing
data encryption/decryption once the session is established
* src/Makefile.am: Add to libvirt-net-rpc.la
* src/rpc/virnetsaslcontext.c, src/rpc/virnetsaslcontext.h: Generic
SASL handling code
This provides two modules for handling TLS
* virNetTLSContext provides the process-wide state, in particular
all the x509 credentials, DH params and x509 whitelists
* virNetTLSSession provides the per-connection state, ie the
TLS session itself.
The virNetTLSContext provides APIs for validating a TLS session's
x509 credentials. The virNetTLSSession includes APIs for performing
the initial TLS handshake and sending/recving encrypted data
* src/Makefile.am: Add to libvirt-net-rpc.la
* src/rpc/virnettlscontext.c, src/rpc/virnettlscontext.h: Generic
TLS handling code
Introduces a simple wrapper around the raw POSIX sockets APIs
and name resolution APIs. Allows for easy creation of client
and server sockets with correct usage of name resolution APIs
for protocol agnostic socket setup.
It can listen for UNIX and TCP stream sockets.
It can connect to UNIX, TCP streams directly, or indirectly
to UNIX sockets via an SSH tunnel or external command
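For context, a minimal sketch of the protocol-agnostic pattern such a wrapper
builds on (not virNetSocket's actual API): resolve with getaddrinfo() and try
each returned address until one binds and listens:

    #include <netdb.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int
    exampleListenTCP(const char *node, const char *service)
    {
        struct addrinfo hints, *res, *ai;
        int fd = -1;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;       /* both IPv4 and IPv6 */
        hints.ai_socktype = SOCK_STREAM;
        hints.ai_flags = AI_PASSIVE;

        if (getaddrinfo(node, service, &hints, &res) != 0)
            return -1;

        for (ai = res; ai; ai = ai->ai_next) {
            fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
            if (fd < 0)
                continue;
            if (bind(fd, ai->ai_addr, ai->ai_addrlen) == 0 && listen(fd, 30) == 0)
                break;                     /* success */
            close(fd);
            fd = -1;
        }
        freeaddrinfo(res);
        return fd;
    }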
* src/Makefile.am: Add to libvirt-net-rpc.la
* src/rpc/virnetsocket.c, src/rpc/virnetsocket.h: Generic
sockets APIs
* tests/Makefile.am: Add socket test
* tests/virnetsockettest.c: New test case
* tests/testutils.c: Avoid overriding LIBVIRT_DEBUG settings
* tests/ssh.c: Dumb helper program for SSH tunnelling tests
This provides a new struct that contains a buffer for the RPC
message header+payload, as well as a decoded copy of the message
header. There is an API for applying a XDR encoding & decoding
of the message headers and payloads. There are also APIs for
maintaining a simple FIFO queue of message instances.
Expected usage scenarios are:
To send a message
msg = virNetMessageNew()
...fill in msg->header fields..
virNetMessageEncodeHeader(msg)
...look at msg->header fields to determine payload filter
virNetMessageEncodePayload(msg, xdrfilter, data)
...send msg->bufferLength worth of data from buffer
To receive a message
msg = virNetMessageNew()
...read VIR_NET_MESSAGE_LEN_MAX of data into buffer
virNetMessageDecodeLength(msg)
...read msg->bufferLength-msg->bufferOffset of data into buffer
virNetMessageDecodeHeader(msg)
...look at msg->header fields to determine payload filter
virNetMessageDecodePayload(msg, xdrfilter, data)
...run payload processor
* src/Makefile.am: Add to libvirt-net-rpc.la
* src/rpc/virnetmessage.c, src/rpc/virnetmessage.h: Internal
message handling API.
* testutils.c, testutils.h: Helper for printing binary differences
* virnetmessagetest.c: Validate all XDR encoding/decoding
In a first cleanup step, make nlComm from macvtap.c commonly available
for other code to use. Since nlComm uses Linux-specific structures as
parameters, its prototype is only visible on Linux.
Sanlock is a project that implements a disk-paxos locking
algorithm. This is suitable for cluster deployments with
shared storage.
* src/Makefile.am: Add dlopen plugin for sanlock
* src/locking/lock_driver_sanlock.c: Sanlock driver
* configure.ac: Check for sanlock
* libvirt.spec.in: Add a libvirt-lock-sanlock RPM
Define the basic framework for lock manager plugins. The
basic plugin API for 3rd parties to implement is
defined in
src/locking/lock_driver.h
This allows dlopen()able modules for alternative locking
schemes; however, we do not install the header. This
requires lock plugins to be in-tree, allowing the lock
manager plugin API to be changed in the future.
The libvirt code for loading & calling into plugins
is in
src/locking/lock_manager.{c,h}
* include/libvirt/virterror.h, src/util/virterror.c: Add
VIR_FROM_LOCKING
* src/locking/lock_driver.h: API for lock driver plugins
to implement
* src/locking/lock_manager.c, src/locking/lock_manager.h:
Internal API for managing locking
* src/Makefile.am: Add locking code
We were 31/73 on whether to translate; since less than 50% translated,
and since VIR_INFO is less serious than VIR_WARN, which also doesn't
translate, this makes sense.
* cfg.mk (sc_prohibit_gettext_markup): Add VIR_INFO, since it
falls between WARN and DEBUG.
* daemon/libvirtd.c (qemudDispatchSignalEvent, remoteCheckAccess)
(qemudDispatchServer): Adjust offenders.
* daemon/remote.c (remoteDispatchAuthPolkit): Likewise.
* src/network/bridge_driver.c (networkReloadIptablesRules)
(networkStartNetworkDaemon, networkShutdownNetworkDaemon)
(networkCreate, networkDefine, networkUndefine): Likewise.
* src/qemu/qemu_driver.c (qemudDomainDefine)
(qemudDomainUndefine): Likewise.
* src/storage/storage_driver.c (storagePoolCreate)
(storagePoolDefine, storagePoolUndefine, storagePoolStart)
(storagePoolDestroy, storagePoolDelete, storageVolumeCreateXML)
(storageVolumeCreateXMLFrom, storageVolumeDelete): Likewise.
* src/util/bridge.c (brProbeVnetHdr): Likewise.
* po/POTFILES.in: Drop src/util/bridge.c.
Make sure that xgettext scans generated files for translatable
strings, rather than just files stored in libvirt.git.
* .gnulib: Update, for bootstrap and syntax-check fixes.
* bootstrap: Resynchronize with gnulib.
* cfg.mk (generated_files): Define.
* po/POTFILES.in: Add more files with _().
Stop storing the generated files for the remote protocol client
and server in source control. The generated files will still be
included in the result of 'make dist' to avoid end-users needing
to generate the files
Signed-off-by: Eric Blake <eblake@redhat.com>
Unfortunately, this means that the strings marked for translation
in generated files are not picked up by gnulib's syntax-check,
I'm working on fixing that in gnulib.
* .gitignore, cfg.mk, po/POTFILES.in: Reflect deletion.
In preparation for removing generated files, it is necessary
to tell automake that the generated files must be distributed
but not directly compiled (since they are included into the
body of a larger .c file that is compiled). Hence, even though
these files are code and not headers in the strict sense of
the word, it is easier to rename them to .h for automake's sake.
* daemon/remote_client_bodies.c: Rename to .h.
* daemon/qemu_client_bodies.c: Likewise.
* src/remote/remote_client_bodies.c: Likewise.
* src/remote/qemu_client_bodies.c: Likewise.
* daemon/Makefile.am (remote_dispatch_bodies.c)
(qemu_dispatch_bodies.c): Rename to .h.
(remote.c, EXTRA_DIST): Reflect rename.
* daemon/remote.c: Likewise.
* daemon/remote_generator.pl: Likewise.
* src/Makefile.am (remote/remote_driver.c): Likewise.
* src/remote/remote_driver.c: Likewise.
* po/POTFILES.in: Likewise.
* cfg.mk (exclude_file_name_regexp--sc_require_config_h)
(exclude_file_name_regexp--sc_require_config_h_first)
(exclude_file_name_regexp--sc_prohibit_empty_lines_at_EOF):
Likewise.
This patch just covers the simple functions without explicit return
values. There is more to be handled.
The generator collects the members of the XDR argument structs and uses
this information to generate the function bodies.
Exclude the generated files from offending syntax-checks.
Suggested by Richard W.M. Jones
* configure.ac libvirt.spec.in docs/news.html.in: update and document
the release
* po/*.po*: update localizations for German, Polish, Spanish, Ukrainian
and Vietnamese coming from transifex, regenerate
Also mark error messages in block_stats.c for translation, add the
new macro to the msg_gen functions in cfg.mk and add block_stats.c
to po/POTFILES.in
The O_NONBLOCK flag doesn't work as desired on plain files
or block devices. Introduce an I/O helper program that does
the blocking I/O operations, communicating over a pipe that
can support O_NONBLOCK
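The idea, as a minimal self-contained sketch (not the actual iohelper
program): a child process performs blocking reads on the plain file
and relays the data through a pipe, and the parent then works with a
pipe fd, where O_NONBLOCK behaves as expected:

    #include <errno.h>
    #include <fcntl.h>
    #include <poll.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        int pipefd[2];
        pid_t pid;

        if (argc != 2) {
            fprintf(stderr, "usage: %s FILE\n", argv[0]);
            return 1;
        }
        if (pipe(pipefd) < 0)
            return 1;

        pid = fork();
        if (pid < 0)
            return 1;
        if (pid == 0) {                     /* child: the "I/O helper" */
            char buf[4096];
            ssize_t n;
            int fd = open(argv[1], O_RDONLY);
            close(pipefd[0]);
            if (fd < 0)
                _exit(1);
            while ((n = read(fd, buf, sizeof(buf))) > 0)
                if (write(pipefd[1], buf, n) != n)
                    _exit(1);
            _exit(n < 0 ? 1 : 0);
        }

        close(pipefd[1]);
        /* the parent reads from a pipe, so O_NONBLOCK works as desired */
        fcntl(pipefd[0], F_SETFL, O_NONBLOCK);

        for (;;) {
            struct pollfd pfd = { .fd = pipefd[0], .events = POLLIN };
            char buf[4096];
            ssize_t n;

            if (poll(&pfd, 1, -1) < 0)
                break;
            n = read(pipefd[0], buf, sizeof(buf));
            if (n > 0)
                fwrite(buf, 1, n, stdout);
            else if (n == 0 || errno != EAGAIN)
                break;                      /* EOF or real error */
        }
        close(pipefd[0]);
        waitpid(pid, NULL, 0);
        return 0;
    }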
* src/fdstream.c, src/fdstream.h: Add non-blocking I/O
on plain files/block devices
* src/Makefile.am, src/util/iohelper.c: I/O helper program
* src/qemu/qemu_driver.c, src/lxc/lxc_driver.c,
src/uml/uml_driver.c, src/xen/xen_driver.c: Update for
streams API change
The Open Nebula driver has been unmaintained since it was first
introduced. The only commits have been for tree-wide cleanups.
It also has a major design flaw, in that it only knows about guests
that it has created itself, which makes it of very limited use.
Discussions about the evolution of the VMware ESX driver concluded that
it should limit itself to single-node ESX operation and not try to
manage the multi-node architecture of VirtualCenter. Open Nebula is a
cluster system like VirtualCenter, not a single-node system, so the
same reasoning applies.
The DeltaCloud project includes an Open Nebula driver and is a much
better fit architecturally, since it explicitly targets the
distributed multi-host cluster scenario.
Thus this patch deletes the libvirt Open Nebula driver with the
recommendation that people use DeltaCloud for managing it instead.
* configure.ac: Remove probe for xmlrpc & --with-one arg
* daemon/Makefile.am, daemon/libvirtd.c, src/Makefile.am: Remove
ONE driver build
* src/opennebula/one_client.c, src/opennebula/one_client.h,
src/opennebula/one_conf.c, src/opennebula/one_conf.h,
src/opennebula/one_driver.c, src/opennebula/one_driver.c: Delete
files
* autobuild.sh, libvirt.spec.in, mingw32-libvirt.spec.in: Remove
build rules for Open Nebula
* docs/drivers.html.in, docs/sitemap.html.in: Remove reference
to OpenNebula
* docs/drvone.html.in: Delete file
Add a new xen driver based on libxenlight [1], which is the primary
toolstack starting with Xen 4.1.0. The driver is stateful and runs
privileged only.
Like the existing xen-unified driver, the libxenlight driver is
accessed via the xen:// URI. Driver selection is based on the status
of xend. If xend is running, the libxenlight driver will not load
and xen:// connections are handled by xen-unified. If xend is not
running *and* the libxenlight driver is available, xen://
connections are deferred to the libxenlight driver.
V6:
- Address several code style issues noted by Daniel Veillard
- Make driver work with xen:/// URI
- Hold domain object reference while domain is injected in
libvirt event loop. Race found and fixed by Markus Groß.
V5:
- Ensure events are unregistered when domain private data
is destroyed. Discovered and fixed by Markus Groß.
V4:
- Handle restart of libvirtd, reconnecting to previously
started domains
- Rebased to current master
- Tested against Xen 4.1 RC7-pre (c/s 22961:c5d121fd35c0)
V3:
- Reserve vnc port within driver when autoport=yes
V2:
- Update to Xen 4.1 RC6-pre (c/s 22940:5a4710640f81)
- Rebased to current master
- Plug memory leaks found by Stefano Stabellini and valgrind
- Handle SHUTDOWN_crash domain death event
[1] http://lists.xensource.com/archives/html/xen-devel/2009-11/msg00436.html
Not all applications have an existing event loop they need
to integrate with. Forcing them to implement the libvirt
event loop integration APIs is an undue burden. This just
exposes our simple poll() based implementation for apps
to use. So instead of calling
virEventRegister(....callbacks...)
the app would call
virEventRegisterDefaultImpl()
and then have a thread somewhere calling
static bool quit = false;
....
while (!quit)
    virEventRunDefaultImpl()
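A fuller, self-contained usage sketch (compile against libvirt and
pthreads; the periodic timeout is only an illustrative way to let the
loop wake up and notice 'quit'):

    #include <libvirt/libvirt.h>
    #include <pthread.h>
    #include <stdbool.h>

    static volatile bool quit = false;

    static void timerCB(int timer, void *opaque)
    {
        (void)timer;                    /* nothing to do, just wake poll() */
        (void)opaque;
    }

    static void *eventLoop(void *opaque)
    {
        (void)opaque;
        while (!quit) {
            if (virEventRunDefaultImpl() < 0)   /* one poll() iteration */
                break;
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t thr;

        /* register before opening any connection that emits events */
        if (virEventRegisterDefaultImpl() < 0)
            return 1;
        if (virEventAddTimeout(500, timerCB, NULL, NULL) < 0)
            return 1;
        if (pthread_create(&thr, NULL, eventLoop, NULL) != 0)
            return 1;

        /* ... open connections, register event callbacks, do work ... */

        quit = true;
        pthread_join(thr, NULL);
        return 0;
    }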
* daemon/libvirtd.c, tools/console.c,
tools/virsh.c: Convert to public event loop APIs
* include/libvirt/libvirt.h.in, src/libvirt_private.syms: Add
virEventRegisterDefaultImpl and virEventRunDefaultImpl
* src/util/event.c: Implement virEventRegisterDefaultImpl
and virEventRunDefaultImpl using poll() event loop
* src/util/event_poll.c: Add full error reporting
* src/util/virterror.c, include/libvirt/virterror.h: Add
VIR_FROM_EVENTS
The introduction of the v3 migration protocol, along with
support for migration cookies, will significantly expand
the size of the migration code. Move it all to a separate
file to make it more manageable
The functions are not moved 100%. The API entry points
remain in the main QEMU driver, but once the public
virDomainPtr is resolved to the internal virDomainObjPtr,
all following code is moved.
This will allow the new v3 API entry points to call into the
same shared internal migration functions
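Sketch of the resulting split (simplified stand-in types and names,
not the real functions): the entry point stays thin, resolves the
object, and delegates to the shared migration code.

    #include <string.h>

    typedef struct {
        char name[64];                      /* stand-in for virDomainObjPtr */
    } virDomainObjSketch;

    /* "qemu_migration.c": shared internal logic, reusable by the
     * upcoming v3 API entry points */
    static int
    qemuMigrationPerformSketch(virDomainObjSketch *vm)
    {
        (void)vm;                           /* ... cookies, monitor commands ... */
        return 0;
    }

    /* "qemu_driver.c": the public entry point only resolves the handle
     * to the internal object and hands off */
    static int
    qemuDomainMigratePerformSketch(const char *name)
    {
        virDomainObjSketch vm;

        memset(&vm, 0, sizeof(vm));
        strncpy(vm.name, name, sizeof(vm.name) - 1);   /* stand-in lookup */
        return qemuMigrationPerformSketch(&vm);
    }

    int main(void)
    {
        return qemuDomainMigratePerformSketch("demo");
    }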
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: Add
qemuDomainFormatXML helper method
* src/qemu/qemu_driver.c: Remove all migration code
* src/qemu/qemu_migration.c, src/qemu/qemu_migration.h: Add
all migration code.
Move the qemudStartVMDaemon and qemudShutdownVMDaemon
methods into a separate file, renaming them to
qemuProcessStart, qemuProcessStop. All helper methods
called by these are also moved & renamed to match
* src/Makefile.am: Add qemu_process.c/.h
* src/qemu/qemu_command.c: Add qemuDomainAssignPCIAddresses
* src/qemu/qemu_command.h: Add VNC port min/max
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: Add
domain event queue helpers
* src/qemu/qemu_driver.c, src/qemu/qemu_driver.h: Remove
all QEMU process startup/shutdown functions
* src/qemu/qemu_process.c, src/qemu/qemu_process.h: Add
all QEMU process startup/shutdown functions
* configure.ac docs/news.html.in libvirt.spec.in: bump version and add docs
* po/*.po*: updated Gujarati, Polish and Dutch localisations and regenerated
* tools/libvirt-guests.init.in: Rename...
* tools/libvirt-guests.init.sh: ...so that xgettext's language
detection via suffix will work.
* po/POTFILES.in: Update all references.
* tools/Makefile.am (EXTRA_DIST, libvirt-guests.init): Likewise.
* tools/libvirt-guests.init.sh: Use only POSIX shell features, which
includes using gettext.sh for translation rather than $"".
* tools/Makefile.am (libvirt-guests.init): Supply a few more substitutions.
* po/POTFILES.in: Mark that libvirt-guests.init needs translation.
Signed-off-by: Eric Blake <eblake@redhat.com>
The current security driver usage requires horrible code like
    if (driver->securityDriver &&
        driver->securityDriver->domainSetSecurityHostdevLabel &&
        driver->securityDriver->domainSetSecurityHostdevLabel(driver->securityDriver,
                                                              vm, hostdev) < 0)
This pair of checks for NULL clutters up the code, making the driver
calls 2 lines longer than they really need to be. The goal of the
patchset is to change the calling convention to simply
    if (virSecurityManagerSetHostdevLabel(driver->securityDriver,
                                          vm, hostdev) < 0)
The first check for 'driver->securityDriver' being NULL is removed
by introducing a 'no op' security driver that will always be present
if no real driver is enabled. This guarantees driver->securityDriver
!= NULL.
The second check for 'driver->securityDriver->domainSetSecurityHostdevLabel'
being non-NULL is hidden in a new abstraction called virSecurityManager.
This separates the driver callbacks from the main internal API. The addition
of a virSecurityManager object that is separate from the virSecurityDriver
struct also allows for security drivers to carry state / configuration
information directly. Thus the DAC/Stack drivers from src/qemu which
used to pull config from 'struct qemud_driver' can now be moved into
the 'src/security' directory and store their config directly.
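The resulting pattern, as a small self-contained sketch with
simplified names (not the real virSecurityManager types): the manager
always holds a non-NULL driver, and each wrapper hides the
per-callback NULL check.

    #include <stdio.h>

    typedef struct {
        /* entries may be NULL when a driver doesn't implement them */
        int (*domainSetHostdevLabel)(void *mgr, void *vm, void *hostdev);
    } virSecurityDriverSketch;

    typedef struct {
        virSecurityDriverSketch *drv;   /* never NULL: no-op driver fallback */
    } virSecurityManagerSketch;

    static virSecurityDriverSketch nopDriverSketch;   /* all callbacks NULL */

    static int
    virSecurityManagerSetHostdevLabelSketch(virSecurityManagerSketch *mgr,
                                            void *vm, void *hostdev)
    {
        /* the NULL check lives here once, not at every call site */
        if (!mgr->drv->domainSetHostdevLabel)
            return 0;
        return mgr->drv->domainSetHostdevLabel(mgr, vm, hostdev);
    }

    int main(void)
    {
        virSecurityManagerSketch mgr = { .drv = &nopDriverSketch };

        /* call sites shrink to a single wrapper call */
        if (virSecurityManagerSetHostdevLabelSketch(&mgr, NULL, NULL) < 0)
            return 1;
        printf("hostdev labelled (no-op driver)\n");
        return 0;
    }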
* src/qemu/qemu_conf.h, src/qemu/qemu_driver.c: Update to
use new virSecurityManager APIs
* src/qemu/qemu_security_dac.c, src/qemu/qemu_security_dac.h
src/qemu/qemu_security_stacked.c, src/qemu/qemu_security_stacked.h:
Move into src/security directory
* src/security/security_stack.c, src/security/security_stack.h,
src/security/security_dac.c, src/security/security_dac.h: Generic
versions of previous QEMU specific drivers
* src/security/security_apparmor.c, src/security/security_apparmor.h,
src/security/security_driver.c, src/security/security_driver.h,
src/security/security_selinux.c, src/security/security_selinux.h:
Update to take virSecurityManagerPtr object as the first param
in all callbacks
* src/security/security_nop.c, src/security/security_nop.h: Stub
implementation of all security driver APIs.
* src/security/security_manager.h, src/security/security_manager.c:
New internal API for invoking security drivers
* src/libvirt.c: Add missing debug for security APIs
Now the VMware driver doesn't depend on the ESX driver anymore.
Add a WITH_VMX option that depends on WITH_ESX and WITH_VMWARE.
Also add a libvirt_vmx.syms file.
Move some escaping functions from esx_util.c to vmx.c.
Adapt the test suite, ESX and VMware driver to the new code layout.
Don't require dlopen, but link to ole32 and oleaut32 on Windows.
Don't expose g_pVBoxFuncs anymore. It was only used to get the
version of the API. Make VBoxCGlueInit return the version instead.
This simplifies the implementation of the MSCOM glue layer.
Get the VirtualBox version from the registry.
Add a dummy implementation of the nsIEventQueue to the MSCOM glue
as there seems to be no direct equivalent with MSCOM. It might be
implemented using the normal window message loop. This requires
additional investigation.
The QEMU driver file is far too large. Move all the hotplug
helper code out into a separate file. No functional change.
* src/qemu/qemu_hotplug.c, src/qemu/qemu_hotplug.h,
src/Makefile.am: Add hotplug helper file
* src/qemu/qemu_driver.c: Delete hotplug code
The QEMU driver file is far too large. Move all the hostdev
helper code out into a separate file. No functional change.
* src/qemu/qemu_hostdev.c, src/qemu/qemu_hostdev.h,
src/Makefile.am: Add hostdev helper file
* src/qemu/qemu_driver.c: Delete hostdev code
The QEMU driver file is far too large. Move all the cgroup
helper code out into a separate file. No functional change.
* src/qemu/qemu_cgroup.c, src/qemu/qemu_cgroup.h,
src/Makefile.am: Add cgroup helper file
* src/qemu/qemu_driver.c: Delete cgroup code
Move the code for handling the QEMU virDomainObjPtr private
data, and custom XML namespace into a separate file
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: New file
for private data & namespace code
* src/qemu/qemu_driver.c, src/qemu/qemu_driver.h: Remove
private data & namespace code
* src/qemu/qemu_driver.h, src/qemu/qemu_command.h: Update
includes
* src/Makefile.am: Add src/qemu/qemu_domain.c
The qemu_conf.c code is doing three jobs: driver config file
loading, QEMU capabilities management and QEMU command line
management. Move the command line code into its own file
* src/qemu/qemu_command.c, src/qemu/qemu_command.h: New
command line management code
* src/qemu/qemu_conf.c, src/qemu/qemu_conf.h: Delete command
line code
* src/qemu/qemu_conf.h, src/qemu/qemu_conf.c: Adapt for API renames
* src/Makefile.am: add src/qemu/qemu_command.c
* src/qemu/qemu_monitor_json.c, src/qemu/qemu_monitor_text.c: Add
import of qemu_command.h