Similar to the recent qemu_protocol-structs addition.
* src/virnetprotocol-structs: New file.
* src/Makefile.am (%_protocol-structs): Factor body...
(PDWTAGS): ...into new helper macro.
(virnetprotocol-structs): New rule.
(PROTOCOL_STRUCTS): Add virnetprotocol-structs.
The LXC and UML drivers can both make use of auditing. Move
the qemu_audit.{c,h} files to src/conf/domain_audit.{c,h}
* src/conf/domain_audit.c: Rename from src/qemu/qemu_audit.c
* src/conf/domain_audit.h: Rename from src/qemu/qemu_audit.h
* src/Makefile.am: Remove qemu_audit.{c,h}, add domain_audit.{c,h}
* src/qemu/qemu_audit.h, src/qemu/qemu_cgroup.c,
src/qemu/qemu_command.c, src/qemu/qemu_driver.c,
src/qemu/qemu_hotplug.c, src/qemu/qemu_migration.c,
src/qemu/qemu_process.c: Update for changed audit API names
Since we are going to add some libvirt-qemu.so entry points in
0.9.4, we might as well start checking for RPC stability, just
as for libvirt.so.
* src/Makefile.am (PROTOCOL_STRUCTS): New variable.
(remote_protocol-structs): Rename...
(%_protocol-structs): ...and make more generic.
* src/qemu_protocol-structs: New file.
According to the automake manual, CPPFLAGS (aka INCLUDES, as spelled
in automake 1.9.6) should only include -I, -D, and -U options; more
generic options like -Wall belong in CFLAGS, since they affect more
phases of the build process. Therefore, we should be putting CFLAGS
additions into a CFLAGS container, not a CPPFLAGS container.
* src/Makefile.am (libvirt_driver_vmware_la_CFLAGS): Use AM_CFLAGS.
(INCLUDES): Move CFLAGS items...
(AM_CFLAGS): ...to their proper location.
* python/Makefile.am (INCLUDES, AM_CFLAGS): Likewise.
* tests/Makefile.am (INCLUDES, AM_CFLAGS): Likewise.
(commandtest_CFLAGS, commandhelper_CFLAGS)
(virnetmessagetest_CFLAGS, virnetsockettest_CFLAGS): Use AM_CFLAGS.
EXTRA_DIST files should unconditionally be part of the tarball,
rather than depending on the presence of sanlock-devel.
Meanwhile, parallel builds could fail if we don't use mkdir -p.
* src/Makefile.am (EXTRA_DIST): Always ship sanlock .aug and
template .conf files.
(%-sanlock.conf): Use MKDIR_P.
Introduce a configuration file with a single parameter,
'require_lease_for_disks', which determines whether it is
permissible to start a guest that has read/write disks but
no leases. An illustrative config snippet follows the file
list below.
* libvirt.spec.in: Add sanlock config file and augeas
lens
* src/Makefile.am: Install sanlock config file and
augeas lens
* src/locking/libvirt_sanlock.aug: Augeas master lens
* src/locking/test_libvirt_sanlock.aug: Augeas test file
* src/locking/sanlock.conf: Example sanlock config
* src/locking/lock_driver_sanlock.c: Wire up loading
of configuration file
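For illustration, the file follows the usual libvirt 'key = value'
config syntax; a minimal sketch of the shipped example (comments and
exact wording may differ):

    # Refuse to start a guest with read/write disks but no leases
    require_lease_for_disks = 1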
This guts the current remote driver, removing all its networking
handling code. Instead it calls out to the new virNetClientPtr and
virNetClientProgramPtr APIs for all RPC & networking work.
* src/Makefile.am: Link remote driver with generic RPC code
* src/remote/remote_driver.c: Gut code, replacing with RPC
API calls
* src/rpc/gendispatch.pl: Update for changes in the way
streams are handled
The build currently fails when trying to create virnetprotocol.c
into $(builddir)/rpc, which doesn't exist. But since the file
is part of the tarball, it should be generated into $(srcdir).
Caught by autobuild.sh.
* src/Makefile.am (VIR_NET_RPC_GENERATED): Generate into srcdir.
The dnsmasq commandline was being built as a part of running
dnsmasq. This patch puts the commandline build into a separate
function (and exports it as a private API), making it possible to build
a dnsmasq commandline without executing it, so that we can write a
test program to verify that the proper commandlines are being created.
Signed-off-by: Michal Novotny <minovotn@redhat.com>
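A rough sketch of the resulting split, using the virCommand APIs;
the function name and parameters below are illustrative, not the
exact private API that was exported:

    /* Build the dnsmasq command line only; the caller decides
     * whether to actually run it or just inspect it in a test */
    int
    networkBuildDnsmasqArgv(virNetworkObjPtr network,
                            virCommandPtr *cmdout)
    {
        virCommandPtr cmd = virCommandNew(DNSMASQ);
        /* ...append --conf-file, --listen-address and dhcp-range
           arguments derived from 'network' here... */
        *cmdout = cmd;
        return 0;
    }

A test program can then call the builder and compare the generated
command line against expected output without spawning dnsmasq.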
To ensure virnetprotocol.[ch] are generated before any other
files, add them to BUILT_SOURCES and MAINTAINERCLEANFILES.
At the same time, move ESX_DRIVER_GENERATED out of DISTCLEANFILES
and into MAINTAINERCLEANFILES, since those files are included in
EXTRA_DIST.
* src/Makefile.am: Add virnetprotocol.[ch] to BUILT_SOURCES
The Makefile.am rules for generating RPC protocol had a couple
of bugs
- An instance of remote/rpcgen_fix.pl was not changed
to rpc/genprotocol.pl
- A dep from rpc/virnetmessage.h on the generated
rpc/virnetprotocol.h was missing
- The generated rpc/virnetprotocol.[ch] were not listed
in MAINTAINERCLEANFILES
* Makefile.am: Fix RPC protocol generation
Move the daemon/remote_generator.pl to src/rpc/gendispatch.pl
and move the src/remote/rpcgen_fix.pl to src/rpc/genprotocol.pl
* daemon/Makefile.am: Update for new name/location of generator
* src/Makefile.am: Update for new name/location of generator
To facilitate creation of new clients using XDR RPC services,
pull a lot of the remote driver code into a set of reusable
objects.
- virNetClient: Encapsulates a socket connection to a
remote RPC server. Handles all the network I/O for
reading/writing RPC messages. Delegates RPC encoding
and decoding to the registered programs
- virNetClientProgram: Handles processing and dispatch
of RPC messages for a single RPC (program,version).
A program can register to receive async events
from a client
- virNetClientStream: Handles generic I/O stream
integration to RPC layer
Each new client program now merely needs to define the list of
RPC procedures & events it wants and their handlers. It does
not need to deal with any of the network I/O functionality at
all.
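A rough sketch of what such a client looks like (object names follow
the description above; the exact signatures are illustrative only):

    /* Connect, register one (program,version), then issue a call */
    virNetClientPtr client = virNetClientNewUNIX(sockpath);
    virNetClientProgramPtr prog =
        virNetClientProgramNew(MY_PROGRAM, MY_VERSION,
                               events, nevents, opaque);
    virNetClientAddProgram(client, prog);
    virNetClientProgramCall(prog, client, serial++, MY_PROC_HELLO,
                            (xdrproc_t)xdr_my_hello_args, &args,
                            (xdrproc_t)xdr_my_hello_ret, &ret);

All socket handling, message framing and XDR encoding/decoding
happens behind these calls.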
Allow RPC servers to advertise themselves using MDNS,
via Avahi
* src/rpc/virnetserver.c, src/rpc/virnetserver.h: Allow
registration of MDNS services via avahi
* src/rpc/virnetserverservice.c, src/rpc/virnetserverservice.h: Add
API to fetch the listen port number
* src/rpc/virnetsocket.c, src/rpc/virnetsocket.h: Add API to
fetch the local port number
* src/rpc/virnetservermdns.c, src/rpc/virnetservermdns.h: Represent
an MDNS advertisement
To facilitate creation of new daemons providing XDR RPC services,
pull a lot of the libvirtd daemon code into a set of reusable
objects.
* virNetServer: A server contains one or more services which
accept incoming clients. It maintains the list of active
clients. It has a list of RPC programs which can be used
by clients. When clients produce a complete RPC message,
the server passes this onto the corresponding program for
handling, and queues any response back with the client.
* virNetServerClient: Encapsulates a single client connection.
All I/O for the client is handled, reading & writing RPC
messages.
* virNetServerProgram: Handles processing and dispatch of
RPC method calls for a single RPC (program,version).
Multiple programs can be registered with the server.
* virNetServerService: Encapsulates socket(s) listening for
new connections. Each service listens on a single host/port,
but may have multiple sockets if on a dual IPv4/6 host.
Each new daemon now merely has to define the list of RPC procedures
& their handlers. It does not need to deal with any network related
functionality at all.
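A rough sketch of the resulting daemon skeleton (argument lists are
abbreviated and illustrative):

    virNetServerPtr srv = virNetServerNew(/* worker/client limits */);
    virNetServerServicePtr svc =
        virNetServerServiceNewTCP("0.0.0.0", "16509" /* , auth, ... */);
    virNetServerAddService(srv, svc);
    virNetServerAddProgram(srv, prog);  /* (program,version) + handlers */
    virNetServerRun(srv);               /* accept clients, read messages,
                                           dispatch to program, queue replies */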
This provides two modules for handling SASL
* virNetSASLContext provides the process-wide state, currently
just a whitelist of usernames on the server and a one-time
library init call
* virNetSASLSession provides the per-connection state, ie the
SASL session itself. This also includes APIs for providing
data encryption/decryption once the session is established
* src/Makefile.am: Add to libvirt-net-rpc.la
* src/rpc/virnetsaslcontext.c, src/rpc/virnetsaslcontext.h: Generic
SASL handling code
This provides two modules for handling TLS
* virNetTLSContext provides the process-wide state, in particular
all the x509 credentials, DH params and x509 whitelists
* virNetTLSSession provides the per-connection state, ie the
TLS session itself.
The virNetTLSContext provides APIs for validating a TLS session's
x509 credentials. The virNetTLSSession includes APIs for performing
the initial TLS handshake and sending/receiving encrypted data
* src/Makefile.am: Add to libvirt-net-rpc.la
* src/rpc/virnettlscontext.c, src/rpc/virnettlscontext.h: Generic
TLS handling code
Introduces a simple wrapper around the raw POSIX sockets APIs
and name resolution APIs. Allows for easy creation of client
and server sockets with correct usage of name resolution APIs
for protocol agnostic socket setup.
It can listen for UNIX and TCP stream sockets.
It can connect to UNIX and TCP streams directly, or indirectly
to UNIX sockets via an SSH tunnel or an external command
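For example (sketch only; the real functions take additional
parameters for auth, TLS and the like):

    virNetSocketPtr lsock, csock;

    /* server: protocol-agnostic listen, IPv4 & IPv6 via getaddrinfo() */
    virNetSocketNewListenTCP("0.0.0.0", "16509", &lsock);

    /* client: direct UNIX connection, or the same socket over SSH */
    virNetSocketNewConnectUNIX("/var/run/libvirt/libvirt-sock", &csock);
    virNetSocketNewConnectSSH("host.example.org", "22", "root",
                              "/var/run/libvirt/libvirt-sock", &csock);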
* src/Makefile.am: Add to libvirt-net-rpc.la
* src/rpc/virnetsocket.c, src/rpc/virnetsocket.h: Generic
sockets APIs
* tests/Makefile.am: Add socket test
* tests/virnetsockettest.c: New test case
* tests/testutils.c: Avoid overriding LIBVIRT_DEBUG settings
* tests/ssh.c: Dumb helper program for SSH tunnelling tests
This provides a new struct that contains a buffer for the RPC
message header+payload, as well as a decoded copy of the message
header. There are APIs for applying XDR encoding & decoding
to the message headers and payloads. There are also APIs for
maintaining a simple FIFO queue of message instances.
Expected usage scenarios are:
To send a message
msg = virNetMessageNew()
...fill in msg->header fields..
virNetMessageEncodeHeader(msg)
...look at msg->header fields to determine payload filter
virNetMessageEncodePayload(msg, xdrfilter, data)
...send msg->bufferLength worth of data from buffer
To receive a message
msg = virNetMessageNew()
...read VIR_NET_MESSAGE_LEN_MAX of data into buffer
virNetMessageDecodeLength(msg)
...read msg->bufferLength-msg->bufferOffset of data into buffer
virNetMessageDecodeHeader(msg)
...look at msg->header fields to determine payload filter
virNetMessageDecodePayload(msg, xdrfilter, data)
...run payload processor
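In C, the send scenario looks roughly like this (error handling
omitted; treat the exact signatures as illustrative):

    virNetMessagePtr msg = virNetMessageNew();

    msg->header.prog = MY_PROGRAM;       /* fill in header fields */
    msg->header.type = VIR_NET_CALL;
    msg->header.serial = serial;

    virNetMessageEncodeHeader(msg);
    virNetMessageEncodePayload(msg, (xdrproc_t)xdr_my_args, &args);

    /* now send msg->bufferLength bytes from msg->buffer */
    virNetMessageFree(msg);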
* src/Makefile.am: Add to libvirt-net-rpc.la
* src/rpc/virnetmessage.c, src/rpc/virnetmessage.h: Internal
message handling API.
* testutils.c, testutils.h: Helper for printing binary differences
* virnetmessagetest.c: Validate all XDR encoding/decoding
This patch defines the basics of a generic RPC protocol in XDR.
This is wire ABI compatible with the original remote_protocol.x.
It takes everything except for the RPC calls / events from that
protocol
- The basic header virNetMessageHeader (aka remote_message_header)
- The error object virNetMessageError (aka remote_error)
- Two dummy objects virNetMessageDomain & virNetMessageNetwork
sadly needed to keep virNetMessageError ABI compatible with
the old remote_error
The RPC protocol supports method calls, async events and
bidirectional data streams as before
* src/Makefile.am: Add rules for generating RPC code from
protocol & define a new libvirt-net-rpc.la helper library
* src/rpc/virnetprotocol.x: New generic RPC protocol
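The generated header is field-for-field compatible with the old
remote_message_header; roughly:

    typedef enum {
        VIR_NET_CALL,     /* client -> server method call */
        VIR_NET_REPLY,    /* server -> client method reply */
        VIR_NET_MESSAGE,  /* server -> client async event */
        VIR_NET_STREAM    /* bidirectional data stream packet */
    } virNetMessageType;

    typedef struct {
        unsigned prog;              /* unique id for the RPC program */
        unsigned vers;              /* program version number */
        int proc;                   /* procedure within the program */
        virNetMessageType type;
        unsigned serial;            /* matches replies to their calls */
        virNetMessageStatus status; /* VIR_NET_OK/ERROR/CONTINUE */
    } virNetMessageHeader;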
In a first cleanup step, make nlComm from macvtap.c commonly available
for other code to use. Since nlComm uses Linux-specific structures as
parameters, its prototype is only visible on Linux.
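The shared prototype is guarded so it is only visible on Linux;
roughly (exact parameters follow the macvtap code):

    # ifdef __linux__
    /* send a netlink request, return the kernel's response buffer;
       struct nl_msg comes from libnl, hence the Linux-only guard */
    int nlComm(struct nl_msg *nl_msg,
               unsigned char **respbuf, unsigned int *respbuflen);
    # endif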
Since the addition of the lock manager framework in 6a943419c5,
dlopen is always required, but the checks in configure weren't changed
to reflect that. This didn't show up directly because the VirtualBox
driver linking in dlopen covered it up. But disabling the VirtualBox
driver makes the build fail due to missing dlopen.
Change the dlopen check in configure to pick up dlopen when available.
Reported by Ruben Kerkhof.
Sanlock is a project that implements a disk-paxos locking
algorithm. This is suitable for cluster deployments with
shared storage.
* src/Makefile.am: Add dlopen plugin for sanlock
* src/locking/lock_driver_sanlock.c: Sanlock driver
* configure.ac: Check for sanlock
* libvirt.spec.in: Add a libvirt-lock-sanlock RPM
To facilitate use of the locking plugins from hypervisor drivers,
introduce a higher level API for locking virDomainObjPtr instances.
It includes APIs targeted at VM startup and hotplug/unplug;
a schematic sketch follows the file list below.
* src/Makefile.am: Add domain lock API
* src/locking/domain_lock.c, src/locking/domain_lock.h: High
level API for domain locking
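The shape of the API, schematically (signatures abbreviated):

    /* acquire all leases before the VM's CPUs start running */
    int virDomainLockProcessStart(virLockManagerPluginPtr plugin,
                                  virDomainObjPtr vm, bool paused, int *fd);

    /* acquire/release one disk's lease around hotplug/unplug */
    int virDomainLockDiskAttach(virLockManagerPluginPtr plugin,
                                virDomainObjPtr vm, virDomainDiskDefPtr disk);
    int virDomainLockDiskDetach(virLockManagerPluginPtr plugin,
                                virDomainObjPtr vm, virDomainDiskDefPtr disk);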
To allow hypervisor drivers to assume that a lock driver impl
will be guaranteed to exist, provide a 'nop' impl that is
compiled into the library
* src/Makefile.am: Add nop driver
* src/locking/lock_driver_nop.c, src/locking/lock_driver_nop.h:
Nop lock driver implementation
* src/locking/lock_manager.c: Enable direct access of 'nop'
driver, instead of dlopen()ing it.
Define the basic framework for lock manager plugins. The
basic plugin API for 3rd parties to implement is defined in
src/locking/lock_driver.h
This allows dlopen()able modules for alternative locking
schemes; however, we do not install the header. This
requires lock plugins to be in-tree, allowing the lock
manager plugin API to change in the future.
The libvirt code for loading & calling into plugins
is in
src/locking/lock_manager.{c,h}
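Schematically, a plugin is a shared module exporting one callback
table which lock_manager.c resolves after dlopen() (the field names
below are illustrative of lock_driver.h, with the myDrv* handlers
elided):

    virLockDriver virLockDriverImpl = {
        .version = VIR_LOCK_MANAGER_VERSION,
        .drvInit = myDrvInit,               /* once per process */
        .drvNew = myDrvNew,                 /* one manager per domain */
        .drvAddResource = myDrvAddResource, /* register disks/leases */
        .drvAcquire = myDrvAcquire,         /* take locks at startup */
        .drvRelease = myDrvRelease,         /* drop locks at shutdown */
    };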
* include/libvirt/virterror.h, src/util/virterror.c: Add
VIR_FROM_LOCKING
* src/locking/lock_driver.h: API for lock driver plugins
to implement
* src/locking/lock_manager.c, src/locking/lock_manager.h:
Internal API for managing locking
* src/Makefile.am: Add locking code
Anything generated that must end up in the tarball must either
have unconditional rules for generation (remote_protocol.c) or
must live in libvirt.git for the case where the person running
'make dist' has disabled the configure options that control the
rebuild of the generated file (remote_protocol-structs).
* src/Makefile.am (remote_protocol-structs): Add a dependency and
document why it must live in git.
($(srcdir)/remote/%_protocol.h, $(srcdir)/remote/%_protocol.c):
Unconditionally generate.
Several vSphere API methods are called on global objects like the
FileManager, the PerformanceManager or the SearchIndex. The generator
input file allows marking such methods, and the generator emits such
methods in a way that automatically handles the marked parameter. This
is done by some special macros that were previously written manually;
this patch moves them to the generator.
Noticed this while trying to run rpcgen on cygwin.
* src/Makefile.am ($(srcdir)/remote/%_protocol.h)
($(srcdir)/remote/%_protocol.c): Add a dependency.
Always generate the rpc files, and require rpcgen during bootstrap.
* daemon/Makefile.am: Remove generated files with the
maintainer-clean target
* src/Makefile.am: Remove generated files with the
maintainer-clean target. Always run 'rpcgen' if
generated files are missing
In preparation for removing generated files, it is necessary
to tell automake that the generated files must be distributed
but not directly compiled (since they are included into the
body of a larger .c file that is compiled). Hence, even though
these files are code and not headers in the strict sense of
the word, it is easier to rename them to .h for automake's sake.
* daemon/remote_client_bodies.c: Rename to .h.
* daemon/qemu_client_bodies.c: Likewise.
* src/remote/remote_client_bodies.c: Likewise.
* src/remote/qemu_client_bodies.c: Likewise.
* daemon/Makefile.am (remote_dispatch_bodies.c)
(qemu_dispatch_bodies.c): Rename to .h.
(remote.c, EXTRA_DIST): Reflect rename.
* daemon/remote.c: Likewise.
* daemon/remote_generator.pl: Likewise.
* src/Makefile.am (remote/remote_driver.c): Likewise.
* src/remote/remote_driver.c: Likewise.
* po/POTFILES.in: Likewise.
* cfg.mk (exclude_file_name_regexp--sc_require_config_h)
(exclude_file_name_regexp--sc_require_config_h_first)
(exclude_file_name_regexp--sc_prohibit_empty_lines_at_EOF):
Likewise.
commit d4601696 introduces two more generated files: esx_vi.generated.c
and esx_vi.generated.h. But we do not include them in the dist file,
which breaks the build when building from a dist tarball.
The libexec program libvirt_iohelper is only for libvirtd. If we build the
RPM without libvirtd, we will receive the following messages:
Checking for unpackaged file(s): /usr/lib/rpm/check-files /home/wency/rpmbuild/BUILDROOT/libvirt-0.9.0-1.el6.x86_64
error: Installed (but unpackaged) file(s) found:
/usr/libexec/libvirt_iohelper
* src/Makefile.am src/libvirt_private.syms configure.ac: share and
reuse the sexpr routines from sexpr.h of the old xen driver
* src/libxl/libxl_driver.c: implements libxlDomainXMLFromNative and
libxlDomainXMLToNative
The O_NONBLOCK flag doesn't work as desired on plain files
or block devices. Introduce an I/O helper program that does
the blocking I/O operations, communicating over a pipe that
can support O_NONBLOCK
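The trick, in miniature (generic POSIX, not the actual helper
source; includes omitted):

    int pipefd[2];
    pipe(pipefd);

    if (fork() == 0) {                /* helper: blocking I/O is fine */
        char buf[64 * 1024];
        ssize_t n;
        while ((n = read(filefd, buf, sizeof(buf))) > 0)
            write(pipefd[1], buf, n); /* plain file -> pipe */
        _exit(0);
    }

    close(pipefd[1]);
    fcntl(pipefd[0], F_SETFL, O_NONBLOCK); /* works on pipes */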
* src/fdstream.c, src/fdstream.h: Add non-blocking I/O
on plain files/block devices
* src/Makefile.am, src/util/iohelper.c: I/O helper program
* src/qemu/qemu_driver.c, src/lxc/lxc_driver.c,
src/uml/uml_driver.c, src/xen/xen_driver.c: Update for
streams API change
* src/Makefile.am (remote_protocol-structs): Flatten tabs.
* src/remote_protocol-structs: Likewise. Also add a hint to emacs
to make it easier to keep spaces in the file.
The Open Nebula driver has been unmaintained since it was first
introduced. The only commits have been for tree-wide cleanups.
It also has a major design flaw, in that it only knows about guests
that it has created itself, which makes it of very limited use.
Discussions wrt the evolution of the VMware ESX driver concluded that
it should limit itself to single-node ESX operation and not try to
manage the multi-node architecture of VirtualCenter. Open Nebula
is a cluster like Virtual Center, not a single node system, so
the same reasoning applies.
The DeltaCloud project includes an Open Nebula driver and is a much
better fit architecturally, since it is explicitly targeting the
distributed multihost cluster scenario.
Thus this patch deletes the libvirt Open Nebula driver with the
recommendation that people use DeltaCloud for managing it instead.
* configure.ac: Remove probe for xmlrpc & --with-one arg
* daemon/Makefile.am, daemon/libvirtd.c, src/Makefile.am: Remove
ONE driver build
* src/opennebula/one_client.c, src/opennebula/one_client.h,
src/opennebula/one_conf.c, src/opennebula/one_conf.h,
src/opennebula/one_driver.c, src/opennebula/one_driver.c: Delete
files
* autobuild.sh, libvirt.spec.in, mingw32-libvirt.spec.in: Remove
build rules for Open Nebula
* docs/drivers.html.in, docs/sitemap.html.in: Remove reference
to OpenNebula
* docs/drvone.html.in: Delete file
Add a new xen driver based on libxenlight [1], which is the primary
toolstack starting with Xen 4.1.0. The driver is stateful and runs
privileged only.
Like the existing xen-unified driver, the libxenlight driver is
accessed with xen:// URI. Driver selection is based on the status
of xend. If xend is running, the libxenlight driver will not load
and xen:// connections are handled by xen-unified. If xend is not
running *and* the libxenlight driver is available, xen://
connections are deferred to the libxenlight driver.
V6:
- Address several code style issues noted by Daniel Veillard
- Make driver work with xen:/// URI
- Hold domain object reference while domain is injected in
libvirt event loop. Race found and fixed by Markus Groß.
V5:
- Ensure events are unregistered when domain private data
is destroyed. Discovered and fixed by Markus Groß.
V4:
- Handle restart of libvirtd, reconnecting to previously
started domains
- Rebased to current master
- Tested against Xen 4.1 RC7-pre (c/s 22961:c5d121fd35c0)
V3:
- Reserve vnc port within driver when autoport=yes
V2:
- Update to Xen 4.1 RC6-pre (c/s 22940:5a4710640f81)
- Rebased to current master
- Plug memory leaks found by Stefano Stabellini and valgrind
- Handle SHUTDOWN_crash domain death event
[1] http://lists.xensource.com/archives/html/xen-devel/2009-11/msg00436.html
as described at
http://wiki.debian.org/ToolChain/DSOLinking and
https://fedoraproject.org/wiki/UnderstandingDSOLinkChange
otherwise the build fails on current Debian unstable with:
CCLD libvirtd
/usr/bin/ld: ../src/.libs/libvirt_driver_lxc.a(libvirt_driver_lxc_la-lxc_container.o): undefined reference to symbol 'capng_apply'
/usr/bin/ld: note: 'capng_apply' is defined in DSO //usr/lib/libcap-ng.so.0 so try adding it to the linker command line
CCLD libvirtd
/usr/bin/ld: ../src/.libs/libvirt_driver_storage.a(libvirt_driver_storage_la-storage_backend.o): undefined reference to symbol 'fgetfilecon'
/usr/bin/ld: note: 'fgetfilecon' is defined in DSO //lib/libselinux.so.1 so try adding it to the linker command line
//lib/libselinux.so.1: could not read symbols: Invalid operation
and similar errors.
The event loop implementation is used by more than just the
daemon, so move it into the shared area.
* daemon/event.c, src/util/event_poll.c: Renamed
* daemon/event.h, src/util/event_poll.h: Renamed
* tools/Makefile.am, tools/console.c, tools/virsh.c: Update
to use new virEventPoll APIs
* daemon/mdns.c, daemon/mdns.h, daemon/Makefile.am: Update
to use new virEventPoll APIs