The build currently fails when trying to generate virnetprotocol.c
into $(builddir)/rpc, which doesn't exist. But since the file
is part of the tarball, it should be generated into $(srcdir).
Caught by autobuild.sh.
* src/Makefile.am (VIR_NET_RPC_GENERATED): Generate into srcdir.
The dnsmasq command line was being built as part of running
dnsmasq. This patch puts the command-line construction into a separate
function (and exports it as a private API), making it possible to build
a dnsmasq command line without executing it, so that we can write a
test program to verify that the proper command lines are being created.
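A minimal standalone sketch of the pattern (illustrative only, not
the libvirt code): the builder fills in argv without exec()ing
anything, so a test can check the result directly.

    #include <stdio.h>
    #include <string.h>

    /* hypothetical builder: constructs the argv that would be passed
     * to exec(), without running anything */
    static int buildDnsmasqArgv(const char *addr, const char **argv,
                                size_t max)
    {
        size_t n = 0;

        if (max < 5)
            return -1;
        argv[n++] = "dnsmasq";
        argv[n++] = "--conf-file=";
        argv[n++] = "--listen-address";
        argv[n++] = addr;
        argv[n] = NULL;
        return 0;
    }

    int main(void)
    {
        const char *argv[8];

        /* the test builds the command line and verifies it; nothing
         * is ever executed */
        if (buildDnsmasqArgv("192.168.122.1", argv, 8) < 0 ||
            strcmp(argv[3], "192.168.122.1") != 0) {
            fprintf(stderr, "unexpected dnsmasq command line\n");
            return 1;
        }
        puts("command line OK");
        return 0;
    }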
Signed-off-by: Michal Novotny <minovotn@redhat.com>
To ensure virnetprotocol.[ch] are generated before any other
files, add them to BUILT_SOURCES and MAINTAINERCLEANFILES.
At the same time, move ESX_DRIVER_GENERATED out of DISTCLEANFILES
and into MAINTAINERCLEANFILES, since those files are included in
EXTRA_DIST.
* src/Makefile.am: Add virnetprotocol.[ch] to BUILT_SOURCES
The Makefile.am rules for generating RPC protocol had a couple
of bugs
- An instance of remote/rpcgen_fix.pl was not changed
to rpc/genprotocol.pl
- A dependency of rpc/virnetmessage.h on the generated
rpc/virnetprotocol.h was missing
- The generated rpc/virnetprotocol.[ch] were not listed
in MAINTAINERCLEANFILES
* Makefile.am: Fix RPC protocol generation
Move the daemon/remote_generator.pl to src/rpc/gendispatch.pl
and move the src/remote/rpcgen_fix.pl to src/rpc/genprotocol.pl
* daemon/Makefile.am: Update for new name/location of generator
* src/Makefile.am: Update for new name/location of generator
To facilitate creation of new clients using XDR RPC services,
pull a lot of the remote driver code into a set of reusable
objects.
- virNetClient: Encapsulates a socket connection to a
remote RPC server. Handles all the network I/O for
reading/writing RPC messages. Delegates RPC encoding
and decoding to the registered programs
- virNetClientProgram: Handles processing and dispatch
of RPC messages for a single RPC (program,version).
A program can register to receive async events
from a client
- virNetClientStream: Handles generic I/O stream
integration to RPC layer
Each new client program now merely needs to define the list of
RPC procedures & events it wants and their handlers. It does
not need to deal with any of the network I/O functionality at
all.
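A hypothetical sketch of a client built from these objects
(constructor names and argument lists are assumptions, not the
exact API):

    virNetClientPtr client;
    virNetClientProgramPtr prog;

    client = virNetClientNewTCP("somehost", "16509");      /* assumed ctor */
    prog = virNetClientProgramNew(MY_PROGRAM, MY_VERSION,  /* assumed ctor */
                                  myEvents, myNEvents, NULL);
    virNetClientAddProgram(client, prog);

    /* a remote procedure call is now just encode, send, decode */
    virNetClientProgramCall(prog, client, serial++, MY_PROC_HELLO,
                            (xdrproc_t)xdr_my_hello_args, &args,
                            (xdrproc_t)xdr_my_hello_ret, &ret);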
Allow RPC servers to advertise themselves using MDNS,
via Avahi
* src/rpc/virnetserver.c, src/rpc/virnetserver.h: Allow
registration of MDNS services via avahi
* src/rpc/virnetserverservice.c, src/rpc/virnetserverservice.h: Add
API to fetch the listen port number
* src/rpc/virnetsocket.c, src/rpc/virnetsocket.h: Add API to
fetch the local port number
* src/rpc/virnetservermdns.c, src/rpc/virnetservermdns.h: Represent
an MDNS advertisement
To facilitate creation of new daemons providing XDR RPC services,
pull a lot of the libvirtd daemon code into a set of reusable
objects.
* virNetServer: A server contains one or more services which
accept incoming clients. It maintains the list of active
clients. It has a list of RPC programs which can be used
by clients. When clients produce a complete RPC message,
the server passes this onto the corresponding program for
handling, and queues any response back with the client.
* virNetServerClient: Encapsulates a single client connection.
Handles all I/O for the client, reading & writing RPC
messages.
* virNetServerProgram: Handles processing and dispatch of
RPC method calls for a single RPC (program,version).
Multiple programs can be registered with the server.
* virNetServerService: Encapsulates socket(s) listening for
new connections. Each service listens on a single host/port,
but may have multiple sockets if on a dual IPv4/6 host.
Each new daemon now merely has to define the list of RPC procedures
& their handlers. It does not need to deal with any network related
functionality at all.
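A hypothetical daemon skeleton built from these objects (constructor
names and argument lists are assumptions, not the exact API):

    virNetServerPtr srv;
    virNetServerServicePtr svc;
    virNetServerProgramPtr prog;

    srv = virNetServerNew();                               /* assumed ctor */
    svc = virNetServerServiceNewTCP("::", "16509");        /* assumed ctor */
    prog = virNetServerProgramNew(MY_PROGRAM, MY_VERSION,  /* assumed ctor */
                                  myProcs, myNProcs);

    virNetServerAddService(srv, svc);   /* accept incoming clients */
    virNetServerAddProgram(srv, prog);  /* dispatch their RPC calls */
    virNetServerRun(srv);               /* run the event loop */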
This provides two modules for handling SASL
* virNetSASLContext provides the process-wide state, currently
just a whitelist of usernames on the server and a one time
library init call
* virNetSASLSession provides the per-connection state, ie the
SASL session itself. This also includes APIs for providing
data encryption/decryption once the session is established
* src/Makefile.am: Add to libvirt-net-rpc.la
* src/rpc/virnetsaslcontext.c, src/rpc/virnetsaslcontext.h: Generic
SASL handling code
This provides two modules for handling TLS
* virNetTLSContext provides the process-wide state, in particular
all the x509 credentials, DH params and x509 whitelists
* virNetTLSSession provides the per-connection state, ie the
TLS session itself.
The virNetTLSContext provides APIs for validating a TLS session's
x509 credentials. The virNetTLSSession includes APIs for performing
the initial TLS handshake and sending/receiving encrypted data
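A hypothetical usage flow (constructor and handshake names are
assumptions, not necessarily the exact API):

    virNetTLSContextPtr ctx;
    virNetTLSSessionPtr sess;

    /* process-wide: load x509 credentials and whitelists once */
    ctx = virNetTLSContextNewServer(cacert, cert, key, x509whitelist);

    /* per-connection: handshake, then encrypted I/O */
    sess = virNetTLSSessionNew(ctx, hostname);
    while (virNetTLSSessionHandshake(sess) > 0)
        ;   /* incomplete: wait for more socket I/O, then retry */
    virNetTLSSessionWrite(sess, buf, len);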
* src/Makefile.am: Add to libvirt-net-rpc.la
* src/rpc/virnettlscontext.c, src/rpc/virnettlscontext.h: Generic
TLS handling code
Introduces a simple wrapper around the raw POSIX sockets APIs
and name resolution APIs. Allows for easy creation of client
and server sockets with correct usage of name resolution APIs
for protocol agnostic socket setup.
It can listen for UNIX and TCP stream sockets.
It can connect to UNIX and TCP streams directly, or indirectly
to UNIX sockets via an SSH tunnel or external command
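Hypothetical usage (exact signatures are assumptions):

    virNetSocketPtr *lsocks;
    size_t nlsocks;
    virNetSocketPtr csock;

    /* server: one call may yield several sockets on a dual IPv4/6 host */
    virNetSocketNewListenTCP("::", "16509", &lsocks, &nlsocks);

    /* client: connect directly over TCP ... */
    virNetSocketNewConnectTCP("somehost", "16509", &csock);

    /* ... or indirectly to a remote UNIX socket via an SSH tunnel,
     * e.g. a virNetSocketNewConnectSSH() variant (argument list assumed) */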
* src/Makefile.am: Add to libvirt-net-rpc.la
* src/rpc/virnetsocket.c, src/rpc/virnetsocket.h: Generic
sockets APIs
* tests/Makefile.am: Add socket test
* tests/virnetsockettest.c: New test case
* tests/testutils.c: Avoid overriding LIBVIRT_DEBUG settings
* tests/ssh.c: Dumb helper program for SSH tunnelling tests
This provides a new struct that contains a buffer for the RPC
message header+payload, as well as a decoded copy of the message
header. There are APIs for applying XDR encoding & decoding
of the message headers and payloads. There are also APIs for
maintaining a simple FIFO queue of message instances.
Expected usage scenarios are:
To send a message
msg = virNetMessageNew()
...fill in msg->header fields..
virNetMessageEncodeHeader(msg)
...look at msg->header fields to determine payload filter
virNetMessageEncodePayload(msg, xdrfilter, data)
...send msg->bufferLength worth of data from buffer
To receive a message
msg = virNetMessageNew()
...read VIR_NET_MESSAGE_LEN_MAX of data into buffer
virNetMessageDecodeLength(msg)
...read msg->bufferLength-msg->bufferOffset of data into buffer
virNetMessageDecodeHeader(msg)
...look at msg->header fields to determine payload filter
virNetMessageDecodePayload(msg, xdrfilter, data)
...run payload processor
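A slightly more concrete sketch of the send path (field and constant
names are assumed from the generic protocol described below; error
handling abbreviated):

    virNetMessagePtr msg;

    if (!(msg = virNetMessageNew()))
        return -1;

    msg->header.prog = MY_PROGRAM;    /* which RPC program */
    msg->header.vers = MY_VERSION;
    msg->header.proc = MY_PROC_HELLO; /* procedure within the program */
    msg->header.type = VIR_NET_CALL;
    msg->header.serial = serial++;    /* lets the reply be matched up */
    msg->header.status = VIR_NET_OK;

    if (virNetMessageEncodeHeader(msg) < 0 ||
        virNetMessageEncodePayload(msg, (xdrproc_t)xdr_my_hello_args,
                                   &args) < 0)
        return -1;

    /* now send msg->bufferLength bytes of msg->buffer on the wire */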
* src/Makefile.am: Add to libvirt-net-rpc.la
* src/rpc/virnetmessage.c, src/rpc/virnetmessage.h: Internal
message handling API.
* testutils.c, testutils.h: Helper for printing binary differences
* virnetmessagetest.c: Validate all XDR encoding/decoding
This patch defines the basics of a generic RPC protocol in XDR.
This is wire ABI compatible with the original remote_protocol.x.
It takes everything except for the RPC calls / events from that
protocol
- The basic header virNetMessageHeader (aka remote_message_header)
- The error object virNetMessageError (aka remote_error)
- Two dummy objects virNetMessageDomain & virNetMessageNetwork
sadly needed to keep virNetMessageError ABI compatible with
the old remote_error
The RPC protocol supports method calls, async events and
bidirectional data streams as before
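Roughly what the generated C for the header looks like (field names
follow the old remote_message_header; details may differ from the
actual generated code):

    typedef struct virNetMessageHeader {
        unsigned prog;              /* RPC program number */
        unsigned vers;              /* program version */
        int proc;                   /* procedure within the program */
        virNetMessageType type;     /* call, reply, stream event, ... */
        unsigned serial;            /* matches replies to their calls */
        virNetMessageStatus status; /* ok, error, continue */
    } virNetMessageHeader;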
* src/Makefile.am: Add rules for generating RPC code from
protocol & define a new libvirt-net-rpc.la helper library
* src/rpc/virnetprotocol.x: New generic RPC protocol
In a first cleanup step, make nlComm from macvtap.c commonly available
for other code to use. Since nlComm uses Linux-specific structures as
parameters, its prototype is only visible on Linux.
Since the addition of the lock manager framework in 6a943419c5,
dlopen is always required, but the check in configure wasn't changed
to reflect that. This didn't show up directly because the VirtualBox
driver, which links in dlopen, covered it. But disabling the VirtualBox
driver makes the build fail due to missing dlopen.
Change the dlopen check in configure to pick up dlopen when available.
Reported by Ruben Kerkhof.
Sanlock is a project that implements a disk-paxos locking
algorithm. This is suitable for cluster deployments with
shared storage.
* src/Makefile.am: Add dlopen plugin for sanlock
* src/locking/lock_driver_sanlock.c: Sanlock driver
* configure.ac: Check for sanlock
* libvirt.spec.in: Add a libvirt-lock-sanlock RPM
To facilitate use of the locking plugins from hypervisor drivers,
introduce a higher level API for locking virDomainObjPtr instances.
It includes APIs targeted at VM startup and hotplug/unplug.
* src/Makefile.am: Add domain lock API
* src/locking/domain_lock.c, src/locking/domain_lock.h: High
level API for domain locking
To allow hypervisor drivers to assume that a lock driver impl
will be guaranteed to exist, provide a 'nop' impl that is
compiled into the library
* src/Makefile.am: Add nop driver
* src/locking/lock_driver_nop.c, src/locking/lock_driver_nop.h:
Nop lock driver implementation
* src/locking/lock_manager.c: Enable direct access of 'nop'
driver, instead of dlopen()ing it.
Define the basic framework for lock manager plugins. The
basic plugin API for 3rd parties to implement is
defined in
src/locking/lock_driver.h
This allows dlopen()able modules for alternative locking
schemes; however, we do not install the header. This
requires lock plugins to be in-tree, allowing the lock
manager plugin API to change in future.
The libvirt code for loading & calling into plugins
is in
src/locking/lock_manager.{c,h}
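A hypothetical skeleton of a plugin (symbol and field names here are
illustrative assumptions; the real contract is whatever
src/locking/lock_driver.h defines):

    #include "lock_driver.h"

    static int myLockDriverInit(unsigned int version,
                                const char *configFile,
                                unsigned int flags)
    {
        /* parse config, connect to the external lock service, ... */
        return 0;
    }

    /* callback table the lock manager looks up when loading the module */
    virLockDriver virLockDriverImpl = {
        .version = VIR_LOCK_MANAGER_VERSION,
        .drvInit = myLockDriverInit,
        /* .drvAcquire, .drvRelease, ... as required by lock_driver.h */
    };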
* include/libvirt/virterror.h, src/util/virterror.c: Add
VIR_FROM_LOCKING
* src/locking/lock_driver.h: API for lock driver plugins
to implement
* src/locking/lock_manager.c, src/locking/lock_manager.h:
Internal API for managing locking
* src/Makefile.am: Add locking code
Anything generated that must end up in the tarball must either
have unconditional rules for generation (remote_protocol.c) or
must live in libvirt.git for the case where the person running
'make dist' has disabled the configure options that control the
rebuild of the generated file (remote_protocol-structs).
* src/Makefile.am (remote_protocol-structs): Add a dependency and
document why it must live in git.
($(srcdir)/remote/%_protocol.h, $(srcdir)/remote/%_protocol.c):
Unconditionally generate.
Several vSphere API methods are called on global objects like the
FileManager, the PerformanceManager or the SearchIndex. The generator
input file allows such methods to be marked, and the generator emits
them in a way that automatically handles the marked parameter. This
is done by some special macros. Those were manually written; this
patch moves them into the generator.
Noticed this while trying to run rpcgen on cygwin.
* src/Makefile.am ($(srcdir)/remote/%_protocol.h)
($(srcdir)/remote/%_protocol.c): Add a dependency.
Always generate the rpc files, and require rpcgen during bootstrap.
* daemon/Makefile.am: Remove generated files with the
maintainer-clean target
* src/Makefile.am: Remove generated files with the
maintainer-clean target. Always run 'rpcgen' if
generated files are missing
In preparation for removing generated files, it is necessary
to tell automake that the generated files must be distributed
but not directly compiled (since they are included into the
body of a larger .c file that is compiled). Hence, even though
these files are code and not headers in the strict sense of
the word, it is easier to rename them to .h for automake's sake.
* daemon/remote_client_bodies.c: Rename to .h.
* daemon/qemu_client_bodies.c: Likewise.
* src/remote/remote_client_bodies.c: Likewise.
* src/remote/qemu_client_bodies.c: Likewise.
* daemon/Makefile.am (remote_dispatch_bodies.c)
(qemu_dispatch_bodies.c): Rename to .h.
(remote.c, EXTRA_DIST): Reflect rename.
* daemon/remote.c: Likewise.
* daemon/remote_generator.pl: Likewise.
* src/Makefile.am (remote/remote_driver.c): Likewise.
* src/remote/remote_driver.c: Likewise.
* po/POTFILES.in: Likewise.
* cfg.mk (exclude_file_name_regexp--sc_require_config_h)
(exclude_file_name_regexp--sc_require_config_h_first)
(exclude_file_name_regexp--sc_prohibit_empty_lines_at_EOF):
Likewise.
Commit d4601696 introduced two more generated files: esx_vi.generated.c
and esx_vi.generated.h, but we do not include them in the dist file.
This breaks the build when building from a dist file.
The libexec program libvirt_iohelper is only for libvirtd. If we build an RPM
without libvirtd, we will receive the following messages:
Checking for unpackaged file(s): /usr/lib/rpm/check-files /home/wency/rpmbuild/BUILDROOT/libvirt-0.9.0-1.el6.x86_64
error: Installed (but unpackaged) file(s) found:
/usr/libexec/libvirt_iohelper
* src/Makefile.am src/libvirt_private.syms configure.ac: share and
reuse the sexpr routines from sexpr.h of the old xen driver
* src/libxl/libxl_driver.c: implements libxlDomainXMLFromNative and
libxlDomainXMLToNative
The O_NONBLOCK flag doesn't work as desired on plain files
or block devices. Introduce an I/O helper program that does
the blocking I/O operations, communicating over a pipe that
can support O_NONBLOCK
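A standalone sketch of the technique (illustrative only, not the
libvirt code): a child process performs the blocking read()s from
the plain file and feeds a pipe, and the parent sets O_NONBLOCK on
the pipe's read end, where it actually works.

    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        int pipefd[2];
        char buf[4096];
        ssize_t n;

        if (argc < 2 || pipe(pipefd) < 0)
            return 1;

        if (fork() == 0) {                     /* the "I/O helper" */
            int fd = open(argv[1], O_RDONLY);  /* blocking I/O is fine here */
            close(pipefd[0]);
            while (fd >= 0 && (n = read(fd, buf, sizeof(buf))) > 0)
                write(pipefd[1], buf, n);
            _exit(0);
        }

        close(pipefd[1]);
        fcntl(pipefd[0], F_SETFL, O_NONBLOCK); /* works on pipes, not plain files */
        for (;;) {
            n = read(pipefd[0], buf, sizeof(buf));
            if (n == 0)                        /* helper finished */
                break;
            if (n < 0 && errno != EAGAIN)
                return 1;
            if (n > 0)
                write(STDOUT_FILENO, buf, n);
            /* on EAGAIN a real program would poll() instead of spinning */
        }
        return 0;
    }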
* src/fdstream.c, src/fdstream.h: Add non-blocking I/O
on plain files/block devices
* src/Makefile.am, src/util/iohelper.c: I/O helper program
* src/qemu/qemu_driver.c, src/lxc/lxc_driver.c,
src/uml/uml_driver.c, src/xen/xen_driver.c: Update for
streams API change
* src/Makefile.am (remote_protocol-structs): Flatten tabs.
* src/remote_protocol-structs: Likewise. Also add a hint to emacs
to make it easier to keep spaces in the file.
The Open Nebula driver has been unmaintained since it was first
introduced. The only commits have been for tree-wide cleanups.
It also has a major design flaw, in that it only knows about guests
that it has created itself, which makes it of very limited use.
Discussions about the evolution of the VMware ESX driver concluded
that it should limit itself to single-node ESX operation and not try
to manage the multi-node architecture of VirtualCenter. Open Nebula
is a cluster like VirtualCenter, not a single-node system, so
the same reasoning applies.
The DeltaCloud project includes an Open Nebula driver and is a much
better fit architecturally, since it explicitly targets the
distributed multi-host cluster scenario.
Thus this patch deletes the libvirt Open Nebula driver with the
recommendation that people use DeltaCloud for managing it instead.
* configure.ac: Remove probe for xmlrpc & --with-one arg
* daemon/Makefile.am, daemon/libvirtd.c, src/Makefile.am: Remove
ONE driver build
* src/opennebula/one_client.c, src/opennebula/one_client.h,
src/opennebula/one_conf.c, src/opennebula/one_conf.h,
src/opennebula/one_driver.c, src/opennebula/one_driver.c: Delete
files
* autobuild.sh, libvirt.spec.in, mingw32-libvirt.spec.in: Remove
build rules for Open Nebula
* docs/drivers.html.in, docs/sitemap.html.in: Remove reference
to OpenNebula
* docs/drvone.html.in: Delete file
Add a new xen driver based on libxenlight [1], which is the primary
toolstack starting with Xen 4.1.0. The driver is stateful and runs
privileged only.
Like the existing xen-unified driver, the libxenlight driver is
accessed with xen:// URI. Driver selection is based on the status
of xend. If xend is running, the libxenlight driver will not load
and xen:// connections are handled by xen-unified. If xend is not
running *and* the libxenlight driver is available, xen://
connections are deferred to the libxenlight driver.
V6:
- Address several code style issues noted by Daniel Veillard
- Make driver work with xen:/// URI
- Hold domain object reference while domain is injected in
libvirt event loop. Race found and fixed by Markus Groß.
V5:
- Ensure events are unregistered when domain private data
is destroyed. Discovered and fixed by Markus Groß.
V4:
- Handle restart of libvirtd, reconnecting to previously
started domains
- Rebased to current master
- Tested against Xen 4.1 RC7-pre (c/s 22961:c5d121fd35c0)
V3:
- Reserve vnc port within driver when autoport=yes
V2:
- Update to Xen 4.1 RC6-pre (c/s 22940:5a4710640f81)
- Rebased to current master
- Plug memory leaks found by Stefano Stabellini and valgrind
- Handle SHUTDOWN_crash domain death event
[1] http://lists.xensource.com/archives/html/xen-devel/2009-11/msg00436.html
as described at
http://wiki.debian.org/ToolChain/DSOLinking and
https://fedoraproject.org/wiki/UnderstandingDSOLinkChange
otherwise the build fails on current Debian unstable with:
CCLD libvirtd
/usr/bin/ld: ../src/.libs/libvirt_driver_lxc.a(libvirt_driver_lxc_la-lxc_container.o): undefined reference to symbol 'capng_apply'
/usr/bin/ld: note: 'capng_apply' is defined in DSO //usr/lib/libcap-ng.so.0 so try adding it to the linker command line
CCLD libvirtd
/usr/bin/ld: ../src/.libs/libvirt_driver_storage.a(libvirt_driver_storage_la-storage_backend.o): undefined reference to symbol 'fgetfilecon'
/usr/bin/ld: note: 'fgetfilecon' is defined in DSO //lib/libselinux.so.1 so try adding it to the linker command line
//lib/libselinux.so.1: could not read symbols: Invalid operation
and similar errors.
The event loop implementation is used by more than just the
daemon, so move it into the shared area.
* daemon/event.c, src/util/event_poll.c: Renamed
* daemon/event.h, src/util/event_poll.h: Renamed
* tools/Makefile.am, tools/console.c, tools/virsh.c: Update
to use new virEventPoll APIs
* daemon/mdns.c, daemon/mdns.h, daemon/Makefile.am: Update
to use new virEventPoll APIs
$ ./configure
...
$ make
...
GEN libvirt.syms
...
$ ./configure --with-driver-modules
...
$ make
...
libvirt.syms doesn't get regenerated, but it should, since it now
needs to contain virDriverLoadModule.
* src/Makefile.am (libvirt.syms): Depend on configure changes.
Reported by Matthias Bolte.
The introduction of the v3 migration protocol, along with
support for migration cookies, will significantly expand
the size of the migration code. Move it all to a separate
file to make it more manageable
The functions are not moved 100%. The API entry points
remain in the main QEMU driver, but once the public
virDomainPtr is resolved to the internal virDomainObjPtr,
all following code is moved.
This will allow the new v3 API entry points to call into the
same shared internal migration functions
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: Add
qemuDomainFormatXML helper method
* src/qemu/qemu_driver.c: Remove all migration code
* src/qemu/qemu_migration.c, src/qemu/qemu_migration.h: Add
all migration code.
Move the qemudStartVMDaemon and qemudShutdownVMDaemon
methods into a separate file, renaming them to
qemuProcessStart, qemuProcessStop. All helper methods
called by these are also moved & renamed to match
* src/Makefile.am: Add qemu_process.c/.h
* src/qemu/qemu_command.c: Add qemuDomainAssignPCIAddresses
* src/qemu/qemu_command.h: Add VNC port min/max
* src/qemu/qemu_domain.c, src/qemu/qemu_domain.h: Add
domain event queue helpers
* src/qemu/qemu_driver.c, src/qemu/qemu_driver.h: Remove
all QEMU process startup/shutdown functions
* src/qemu/qemu_process.c, src/qemu/qemu_process.h: Add
all QEMU process startup/shutdown functions
The naming convention of device mapper disks is different, and 'parted'
can't be used to delete a partition on a device mapper disk. e.g.
Name Path
-----------------------------------------
3600a0b80005ad1d7000093604cae912fp1 /dev/mapper/3600a0b80005ad1d7000093604cae912fp1
Error: Expecting a partition number.
This patch introduces 'dmsetup' to fix it.
Changes:
- New function "virIsDevMapperDevice" in "src/util/util.c"
(a sketch follows the examples below)
- Remove "is_dm_device" in "src/storage/parthelper.c", use
"virIsDevMapperDevice" instead.
- Require "device-mapper" for "with-storage-disk" in "libvirt.spec.in"
- Check for "dmsetup" in "configure.ac" for "with-storage-disk"
- Changes to "src/Makefile.am" to link against libdevmapper
- New entry for "virIsDevMapperDevice" in "src/libvirt_private.syms"
Changes from v1 to v3:
- s/virIsDeviceMapperDevice/virIsDevMapperDevice/g
- replace "virRun" with "virCommand"
- sort the list of util functions in "libvirt_private.syms"
- ATTRIBUTE_NONNULL(1) for virIsDevMapperDevice declaration.
e.g.
Name Path
-----------------------------------------
3600a0b80005ad1d7000093604cae912fp1 /dev/mapper/3600a0b80005ad1d7000093604cae912fp1
Vol /dev/mapper/3600a0b80005ad1d7000093604cae912fp1 deleted
Name Path
-----------------------------------------
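For reference, a minimal sketch of what virIsDevMapperDevice may look
like, assuming libdevmapper's dm_is_dm_major() (not necessarily the
literal patch code):

    #include <sys/types.h>
    #include <sys/stat.h>
    #include <sys/sysmacros.h>   /* major() on modern glibc */
    #include <libdevmapper.h>

    int virIsDevMapperDevice(const char *devname)
    {
        struct stat buf;

        /* a device mapper device is a block device whose major
         * number is registered to device-mapper */
        return stat(devname, &buf) == 0 &&
               S_ISBLK(buf.st_mode) &&
               dm_is_dm_major(major(buf.st_rdev));
    }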
When built as modules, the connection drivers live
in $LIBDIR/libvirt/drivers. Now that we are adding lock manager
drivers, we need to distinguish between them. So move the existing
modules to 'connection-driver'
* src/Makefile.am: Move module install dir
* src/driver.c: Move module search dir
The current security driver usage requires horrible code like
if (driver->securityDriver &&
driver->securityDriver->domainSetSecurityHostdevLabel &&
driver->securityDriver->domainSetSecurityHostdevLabel(driver->securityDriver,
vm, hostdev) < 0)
This pair of checks for NULL clutters up the code, making the driver
calls 2 lines longer than they really need to be. The goal of the
patchset is to change the calling convention to simply
if (virSecurityManagerSetHostdevLabel(driver->securityDriver,
vm, hostdev) < 0)
The first check, for 'driver->securityDriver' being NULL, is removed
by introducing a 'no op' security driver that will always be present
if no real driver is enabled. This guarantees driver->securityDriver
!= NULL.
The second check for 'driver->securityDriver->domainSetSecurityHostdevLabel'
being non-NULL is hidden in a new abstraction called virSecurityManager.
This separates the driver callbacks from the main internal API. The addition
of a virSecurityManager object that is separate from the virSecurityDriver
struct also allows security drivers to carry state / configuration
information directly. Thus the DAC/Stack drivers from src/qemu which
used to pull config from 'struct qemud_driver' can now be moved into
the 'src/security' directory and store their config directly.
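An illustrative shape of the 'no op' idea (the callback name and
signature here are assumptions; the real stubs live in
src/security/security_nop.c):

    /* every callback exists and trivially succeeds, so
     * virSecurityManagerSetHostdevLabel() and friends can dispatch
     * through the driver table unconditionally */
    static int virSecurityNopSetHostdevLabel(virSecurityManagerPtr mgr,
                                             virDomainObjPtr vm,
                                             virDomainHostdevDefPtr dev)
    {
        return 0;
    }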
* src/qemu/qemu_conf.h, src/qemu/qemu_driver.c: Update to
use new virSecurityManager APIs
* src/qemu/qemu_security_dac.c, src/qemu/qemu_security_dac.h
src/qemu/qemu_security_stacked.c, src/qemu/qemu_security_stacked.h:
Move into src/security directory
* src/security/security_stack.c, src/security/security_stack.h,
src/security/security_dac.c, src/security/security_dac.h: Generic
versions of previous QEMU specific drivers
* src/security/security_apparmor.c, src/security/security_apparmor.h,
src/security/security_driver.c, src/security/security_driver.h,
src/security/security_selinux.c, src/security/security_selinux.h:
Update to take virSecurityManagerPtr object as the first param
in all callbacks
* src/security/security_nop.c, src/security/security_nop.h: Stub
implementation of all security driver APIs.
* src/security/security_manager.h, src/security/security_manager.c:
New internal API for invoking security drivers
* src/libvirt.c: Add missing debug for security APIs
Add vboxArrayGetWithUintArg to handle new signature variations. Also
refactor vboxArrayGet* implementation to use a common helper function.
Deal with the incompatible changes in the VirtualBox 4.0 API. This
includes major changes in virtual machine and storage medium lookup,
in RDP server property handling, in session/lock handling and other
minor areas.
VirtualBox 4.0 also dropped the old event API and replaced it with a
completely new one. This is not fixed yet and will be addressed in
another patch. Therefore, currently the domain events are supported
for VirtualBox 3.x only.
Based on initial work from Jean-Baptiste Rouault.
This fixes the build from a tarball and makes autobuild.sh
work again.
This should actually have been part of this earlier commit:
esx: Move VMX handling code out of the driver directory
42b2f35d36
Reported by Eric Blake.
All other drivers are explicitly linked to gnulib. The VMware
driver lacked this, resulting in mdir_name being an undefined
symbol.
Explicitly link the VMware driver to gnulib to fix this.