To ensure virnetprotocol.[ch] are generated before any other
files, add them to BUILT_SOURCES and MAINTAINERCLEANFILES.
At the same time, move ESX_DRIVER_GENERATED out of DISTCLEANFILES
and into MAINTAINERCLEANFILES, since those files are included in
EXTRA_DIST
* src/Makefile.am: Add virnetprotocol.[ch] to BUILT_SOURCES
The Makefile.am rules for generating RPC protocol had a couple
of bugs
- An instance of remote/rpcgen_fix.pl was not changed
to rpc/genprotocol.pl
- A dep from rpc/virnetmessage.h on the generated
rpc/virnetprotocol.h was missing
- The generated rpc/virnetprotocol.[ch] were not listed
in MAINTAINERCLEANFILES
* Makefile.am: Fix RPC protocol generation
The qemuMigrationPrepareDirect/PrepareTunnel methods accidentally
set the domain job to QEMU_JOB_MIGRATION_OUT when it should have
been QEMU_JOB_MIGRATION_IN. This didn't have any ill effect, but
it is nonetheless wrong.
* src/qemu/qemu_migration.c: Fix job type
The code emitting taint warnings mistakenly treated guests
run from the QEMU session driver as tainted for having high
privileges. This is of course nonsense, since the session driver
is always unprivileged
* src/qemu/qemu_domain.c: Don't warn for high privileges in
non-privileged QEMU
If an application is using libvirt + KVM as a piece of its
internal infrastructure to perform a specific task, it can
be desirable to guarantee the VM dies when the virConnectPtr
disconnects from libvirtd. This ensures the app can't leak
any VMs it was using. Adding VIR_DOMAIN_START_AUTOKILL as
a flag when starting guests enables this to be done.
* include/libvirt/libvirt.h.in: Add VIR_DOMAIN_START_AUTOKILL
* src/qemu/qemu_driver.c: Support automatic killing of guests
upon connection close
* tools/virsh.c: Add --autokill flag to 'start' and 'create'
commands
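As an illustration of the intended client-side usage, here is a
minimal sketch using the public API (the domain XML is just a
placeholder, not a working config):

  #include <libvirt/libvirt.h>

  int main(void)
  {
      virConnectPtr conn = virConnectOpen("qemu:///session");
      virDomainPtr dom;
      const char *xml = "<domain type='kvm'>...</domain>"; /* placeholder */

      if (!conn)
          return 1;

      /* The guest lives only as long as 'conn' stays open */
      dom = virDomainCreateXML(conn, xml, VIR_DOMAIN_START_AUTOKILL);
      if (dom)
          virDomainFree(dom);

      virConnectClose(conn); /* libvirtd now destroys the guest */
      return 0;
  }

If the application exits or crashes without calling virConnectClose,
the connection drop has the same effect, so the VM cannot be leaked.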
Migration is a multi-step process
1. Begin(src)
2. Prepare(dst)
3. Perform(src)
4. Finish(dst)
5. Confirm(src)
At step 2, a QEMU process is launched on the destination to
accept the incoming migration. Occasionally the process
that is controlling the migration workflow aborts, and fails
to call step 4, Finish. This leaves a QEMU process running
on the target (albeit with paused CPUs). Unfortunately, because
step 2 activates a job on the QEMU process, it is unkillable by
normal means.
By registering the VM for autokill against the src virConnectPtr
in step 2, we can ensure that the guest is forcefully killed off
if the connection is closed without step 4 being invoked
* src/qemu/qemu_migration.c: Register autokill in PrepareDirect
and PrepareTunnel. Unregister autokill on successful run
of Finish
* src/qemu/qemu_process.c: Unregister autokill when stopping a
process
Sometimes it is useful to be able to automatically destroy a guest when
a connection is closed. For example, kill an incoming migration if
the client managing the migration dies. This introduces a map between
guest 'uuid' strings and virConnectPtr objects. When a connection is
closed, any associated guests are killed off.
* src/qemu/qemu_conf.h: Add autokill hash table to qemu driver
* src/qemu/qemu_process.c, src/qemu/qemu_process.h: Add APIs
for performing autokill of guests associated with a connection
* src/qemu/qemu_driver.c: Initialize autodestroy map
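A stand-alone sketch of the mapping idea follows; it uses a plain C
list and hypothetical helper names, not the driver's actual
hash-table code:

  #include <stdlib.h>
  #include <string.h>
  #include <libvirt/libvirt.h>

  /* Illustrative only: map guest UUID strings to the connection
   * that requested autodestroy for them. */
  struct autodestroyEntry {
      char *uuid;                     /* guest UUID string */
      virConnectPtr conn;             /* connection owning the guest */
      struct autodestroyEntry *next;
  };

  static struct autodestroyEntry *autodestroyList;

  /* Register a guest started with autodestroy requested */
  static int autodestroyAdd(const char *uuid, virConnectPtr conn)
  {
      struct autodestroyEntry *entry = calloc(1, sizeof(*entry));
      if (!entry || !(entry->uuid = strdup(uuid))) {
          free(entry);
          return -1;
      }
      entry->conn = conn;
      entry->next = autodestroyList;
      autodestroyList = entry;
      return 0;
  }

  /* Invoked from the connection-close callback: kill every guest
   * that was registered against the closing connection */
  static void autodestroyRun(virConnectPtr conn)
  {
      struct autodestroyEntry **prev = &autodestroyList;
      while (*prev) {
          struct autodestroyEntry *entry = *prev;
          if (entry->conn != conn) {
              prev = &entry->next;
              continue;
          }
          /* ...forcefully stop the guest named by entry->uuid... */
          *prev = entry->next;
          free(entry->uuid);
          free(entry);
      }
  }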
For controlled shutdown we issue a 'system_powerdown' command
to the QEMU monitor. This triggers an ACPI event which (most)
guest OSes wire up to a controlled shutdown. There is no
equivalent ACPI event to trigger a controlled reboot. This patch
attempts to fake a reboot.
- In qemuDomainObjPrivatePtr we have a bool fakeReboot
flag.
- The virDomainReboot method sets this flag and then
triggers a normal 'system_powerdown'.
- The QEMU process is started with '-no-shutdown'
so that the guest CPUs pause when it powers off the
guest
- When we receive the 'POWEROFF' event from the QEMU JSON
monitor, if fakeReboot is not set we invoke qemuProcessKill
and shutdown continues normally
- If fakeReboot was set, we spawn a background thread
which issues 'system_reset' to perform a warm reboot
of the guest hardware. Then it issues 'cont' to
start the CPUs again
* src/qemu/qemu_command.c: Add -no-shutdown flag if
we have JSON support
* src/qemu/qemu_domain.h: Add 'fakeReboot' flag to
qemuDomainObjPrivate struct
* src/qemu/qemu_driver.c: Fake reboot using the
system_powerdown command if JSON support is available
* src/qemu/qemu_monitor.c, src/qemu/qemu_monitor.h,
src/qemu/qemu_monitor_json.c, src/qemu/qemu_monitor_json.h,
src/qemu/qemu_monitor_text.c, src/qemu/qemu_monitor_text.h: Add
binding for system_reset command
* src/qemu/qemu_process.c: Reset the guest & start CPUs if
fakeReboot is set
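A simplified, stand-alone sketch of that event-handling flow; the
types and helpers below are stand-ins for the real qemu driver
structures, not its actual code:

  #include <stdbool.h>
  #include <stddef.h>
  #include <pthread.h>

  typedef struct {
      bool fakeReboot;   /* set by virDomainReboot */
      void *monitor;     /* stand-in for the QEMU monitor handle */
  } GuestState;

  /* stand-ins for the real monitor/process helpers */
  void monitorSystemReset(void *mon);   /* issues 'system_reset' */
  void monitorContinue(void *mon);      /* issues 'cont' */
  void processKill(GuestState *guest);  /* tears down the QEMU process */

  static void *fakeRebootWorker(void *opaque)
  {
      GuestState *guest = opaque;
      monitorSystemReset(guest->monitor); /* warm reset of guest hardware */
      monitorContinue(guest->monitor);    /* restart the paused CPUs */
      return NULL;
  }

  /* Called when the 'POWEROFF' event arrives from the JSON monitor */
  static void onGuestPoweroff(GuestState *guest)
  {
      pthread_t thread;

      if (!guest->fakeReboot) {
          /* Genuine shutdown: CPUs are already paused thanks to
           * -no-shutdown, so kill the process as usual */
          processKill(guest);
          return;
      }

      /* Reboot requested: reset & resume from a background thread
       * so the event loop is not blocked on the monitor */
      guest->fakeReboot = false;
      pthread_create(&thread, NULL, fakeRebootWorker, guest);
      pthread_detach(thread);
  }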
Move the daemon/remote_generator.pl to src/rpc/gendispatch.pl
and move the src/remote/rpcgen_fix.pl to src/rpc/genprotocol.pl
* daemon/Makefile.am: Update for new name/location of generator
* src/Makefile.am: Update for new name/location of generator
To facilitate creation of new clients using XDR RPC services,
pull a lot of the remote driver code into a set of reusable
objects.
- virNetClient: Encapsulates a socket connection to a
remote RPC server. Handles all the network I/O for
reading/writing RPC messages. Delegates RPC encoding
and decoding to the registered programs
- virNetClientProgram: Handles processing and dispatch
of RPC messages for a single RPC (program,version).
A program can register to receive async events
from a client
- virNetClientStream: Handles generic I/O stream
integration with the RPC layer
Each new client program now merely needs to define the list of
RPC procedures & events it wants and their handlers. It does
not need to deal with any of the network I/O functionality at
all.
Allow RPC servers to advertise themselves using MDNS,
via Avahi
* src/rpc/virnetserver.c, src/rpc/virnetserver.h: Allow
registration of MDNS services via avahi
* src/rpc/virnetserverservice.c, src/rpc/virnetserverservice.h: Add
API to fetch the listen port number
* src/rpc/virnetsocket.c, src/rpc/virnetsocket.h: Add API to
fetch the local port number
* src/rpc/virnetservermdns.c, src/rpc/virnetservermdns.h: Represent
an MDNS advertisement
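The Avahi side follows the library's usual entry-group publish flow;
a generic sketch (service name, type and port are examples, and the
real code plugs into libvirt's own event loop rather than Avahi's
simple poll):

  #include <stdio.h>
  #include <avahi-client/client.h>
  #include <avahi-client/publish.h>
  #include <avahi-common/simple-watch.h>
  #include <avahi-common/error.h>

  static void clientCb(AvahiClient *c, AvahiClientState s, void *opaque)
  {
      (void)c; (void)s; (void)opaque;  /* real code reacts to collisions */
  }

  static void groupCb(AvahiEntryGroup *g, AvahiEntryGroupState s, void *opaque)
  {
      (void)g; (void)s; (void)opaque;
  }

  int main(void)
  {
      AvahiSimplePoll *loop = avahi_simple_poll_new();
      int error;
      AvahiClient *client = avahi_client_new(avahi_simple_poll_get(loop),
                                             0, clientCb, NULL, &error);
      if (!client) {
          fprintf(stderr, "avahi: %s\n", avahi_strerror(error));
          return 1;
      }

      AvahiEntryGroup *group = avahi_entry_group_new(client, groupCb, NULL);

      /* Advertise a TCP service on an example port */
      avahi_entry_group_add_service(group, AVAHI_IF_UNSPEC,
                                    AVAHI_PROTO_UNSPEC, 0,
                                    "Virtualization Host", "_libvirt._tcp",
                                    NULL, NULL, 16509, NULL);
      avahi_entry_group_commit(group);

      avahi_simple_poll_loop(loop);   /* keep the advertisement alive */
      return 0;
  }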
To facilitate creation of new daemons providing XDR RPC services,
pull a lot of the libvirtd daemon code into a set of reusable
objects.
* virNetServer: A server contains one or more services which
accept incoming clients. It maintains the list of active
clients. It has a list of RPC programs which can be used
by clients. When a client produces a complete RPC message,
the server passes it on to the corresponding program for
handling, and queues any response back to the client.
* virNetServerClient: Encapsulates a single client connection.
Handles all the network I/O for the client, reading & writing
RPC messages.
* virNetServerProgram: Handles processing and dispatch of
RPC method calls for a single RPC (program,version).
Multiple programs can be registered with the server.
* virNetServerService: Encapsulates socket(s) listening for
new connections. Each service listens on a single host/port,
but may have multiple sockets if on a dual IPv4/6 host.
Each new daemon now merely has to define the list of RPC procedures
& their handlers. It does not need to deal with any network related
functionality at all.
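Conceptually, a daemon built on these objects reduces to a dispatch
table plus some wiring. The sketch below uses simplified stand-in
types to show the shape; it is not the real virNetServer API:

  #include <stddef.h>

  typedef struct { int dummy; } Client;   /* stand-in: virNetServerClient */
  typedef struct { int dummy; } Message;  /* stand-in: virNetMessage */

  typedef int (*ProcHandler)(Client *client, Message *msg);

  struct Procedure {
      int proc;                /* procedure number from the .x file */
      ProcHandler handler;
  };

  /* A "program" is its (program, version) identity plus the table
   * of procedures it understands */
  struct Program {
      unsigned program;        /* e.g. 0x20008086 for the remote protocol */
      unsigned version;
      const struct Procedure *procs;
      size_t nprocs;
  };

  static int handleHello(Client *client, Message *msg)
  {
      (void)client; (void)msg;
      return 0;                /* encode the reply payload here */
  }

  static const struct Procedure procedures[] = {
      { 1, handleHello },
      /* ...one entry per RPC procedure... */
  };

  static const struct Program myProgram = {
      0x20008086, 1, procedures, sizeof(procedures) / sizeof(procedures[0]),
  };

  /* The server's role: once a client has a complete message, find the
   * handler for (program, version, proc) and queue the reply back */
  static int dispatch(const struct Program *prog, Client *client,
                      Message *msg, int proc)
  {
      size_t i;
      for (i = 0; i < prog->nprocs; i++) {
          if (prog->procs[i].proc == proc)
              return prog->procs[i].handler(client, msg);
      }
      return -1;               /* unknown procedure: send an error reply */
  }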
This extends the basic virNetSocket APIs to allow them to have
a handle to the TLS/SASL session objects, once established.
This ensures that any data reads/writes are automagically
passed through the TLS/SASL encryption layers if required.
* src/rpc/virnetsocket.c, src/rpc/virnetsocket.h: Wire up
SASL/TLS encryption
This provides two modules for handling SASL
* virNetSASLContext provides the process-wide state, currently
just a whitelist of usernames on the server and a one time
library init call
* virNetSASLSession provides the per-connection state, ie the
SASL session itself. This also includes APIs for providing
data encryption/decryption once the session is established
* src/Makefile.am: Add to libvirt-net-rpc.la
* src/rpc/virnetsaslcontext.c, src/rpc/virnetsaslcontext.h: Generic
SASL handling code
This provides two modules for handling TLS
* virNetTLSContext provides the process-wide state, in particular
all the x509 credentials, DH params and x509 whitelists
* virNetTLSSession provides the per-connection state, ie the
TLS session itself.
The virNetTLSContext provides APIs for validating a TLS session's
x509 credentials. The virNetTLSSession includes APIs for performing
the initial TLS handshake and sending/receiving encrypted data
* src/Makefile.am: Add to libvirt-net-rpc.la
* src/rpc/virnettlscontext.c, src/rpc/virnettlscontext.h: Generic
TLS handling code
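Under the hood this is the standard GnuTLS pattern; a generic
client-side sketch (the real code also enforces the x509 whitelist,
supplies DH parameters on the server side and integrates with
non-blocking I/O):

  #include <gnutls/gnutls.h>

  /* Generic sketch: wrap an already-connected socket fd in a TLS
   * session. Returns the session on success, NULL on failure. */
  static gnutls_session_t tlsWrapClientSocket(int sockfd, const char *cacert)
  {
      gnutls_session_t session;
      gnutls_certificate_credentials_t creds;
      int ret;

      gnutls_global_init();   /* once per process */
      gnutls_certificate_allocate_credentials(&creds);
      gnutls_certificate_set_x509_trust_file(creds, cacert,
                                             GNUTLS_X509_FMT_PEM);

      gnutls_init(&session, GNUTLS_CLIENT);
      gnutls_set_default_priority(session);
      gnutls_credentials_set(session, GNUTLS_CRD_CERTIFICATE, creds);
      gnutls_transport_set_ptr(session,
                               (gnutls_transport_ptr_t)(long)sockfd);

      /* Perform the handshake, retrying on non-fatal returns */
      do {
          ret = gnutls_handshake(session);
      } while (ret < 0 && !gnutls_error_is_fatal(ret));

      if (ret < 0) {
          gnutls_deinit(session);
          gnutls_certificate_free_credentials(creds);
          return NULL;
      }

      /* From here on all reads/writes go through
       * gnutls_record_recv()/gnutls_record_send(), so the data is
       * transparently encrypted on the wire */
      return session;
  }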
Introduces a simple wrapper around the raw POSIX sockets APIs
and name resolution APIs. It allows easy creation of client
and server sockets with correct usage of name resolution APIs
for protocol-agnostic socket setup.
It can listen for UNIX and TCP stream sockets.
It can connect to UNIX and TCP streams directly, or indirectly
to UNIX sockets via an SSH tunnel or an external command
* src/Makefile.am: Add to libvirt-net-rpc.la
* src/rpc/virnetsocket.c, src/rpc/virnetsocket.h: Generic
sockets APIs
* tests/Makefile.am: Add socket test
* tests/virnetsockettest.c: New test case
* tests/testutils.c: Avoid overriding LIBVIRT_DEBUG settings
* tests/ssh.c: Dumb helper program for SSH tunnelling tests
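The protocol-agnostic listening setup is the standard getaddrinfo()
pattern; sketched generically below (the real virNetSocket code adds
IPv4/IPv6 dual binding details, socket options and error reporting):

  #include <string.h>
  #include <unistd.h>
  #include <sys/types.h>
  #include <sys/socket.h>
  #include <netdb.h>

  /* Bind listening sockets for every address family the host
   * supports, without hardcoding IPv4 or IPv6 */
  static int listenOnAll(const char *node, const char *service,
                         int *fds, size_t maxfds)
  {
      struct addrinfo hints;
      struct addrinfo *res, *cur;
      size_t nfds = 0;

      memset(&hints, 0, sizeof(hints));
      hints.ai_family = AF_UNSPEC;     /* both IPv4 and IPv6 */
      hints.ai_socktype = SOCK_STREAM;
      hints.ai_flags = AI_PASSIVE;     /* addresses suitable for bind() */

      if (getaddrinfo(node, service, &hints, &res) != 0)
          return -1;

      for (cur = res; cur && nfds < maxfds; cur = cur->ai_next) {
          int fd = socket(cur->ai_family, cur->ai_socktype,
                          cur->ai_protocol);
          if (fd < 0)
              continue;
          if (bind(fd, cur->ai_addr, cur->ai_addrlen) < 0 ||
              listen(fd, SOMAXCONN) < 0) {
              close(fd);
              continue;
          }
          fds[nfds++] = fd;            /* one socket per address family */
      }

      freeaddrinfo(res);
      return nfds ? (int)nfds : -1;
  }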
This provides a new struct that contains a buffer for the RPC
message header+payload, as well as a decoded copy of the message
header. There is an API for applying XDR encoding & decoding
of the message headers and payloads. There are also APIs for
maintaining a simple FIFO queue of message instances.
Expected usage scenarios are:
To send a message
msg = virNetMessageNew()
...fill in msg->header fields..
virNetMessageEncodeHeader(msg)
...look at msg->header fields to determine payload filter
virNetMessageEncodePayload(msg, xdrfilter, data)
...send msg->bufferLength worth of data from buffer
To receive a message
msg = virNetMessageNew()
...read VIR_NET_MESSAGE_LEN_MAX of data into buffer
virNetMessageDecodeLength(msg)
...read msg->bufferLength-msg->bufferOffset of data into buffer
virNetMessageDecodeHeader(msg)
...look at msg->header fields to determine payload filter
virNetMessageDecodePayload(msg, xdrfilter, data)
...run payload processor
* src/Makefile.am: Add to libvirt-net-rpc.la
* src/rpc/virnetmessage.c, src/rpc/virnetmessage.h: Internal
message handling API.
* testutils.c, testutils.h: Helper for printing binary differences
* virnetmessagetest.c: Validate all XDR encoding/decoding
This patch defines the basics of a generic RPC protocol in XDR.
This is wire ABI compatible with the original remote_protocol.x.
It takes everything except for the RPC calls / events from that
protocol
- The basic header virNetMessageHeader (aka remote_message_header)
- The error object virNetMessageError (aka remote_error)
- Two dummy objects virNetMessageDomain & virNetMessageNetwork
sadly needed to keep virNetMessageError ABI compatible with
the old remote_error
The RPC protocol supports method calls, async events and
bidirectional data streams as before
* src/Makefile.am: Add rules for generating RPC code from
protocol & define a new libvirt-net-rpc.la helper library
* src/rpc/virnetprotocol.x: New generic RPC protocol
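For reference, the generated C header struct carries the same fields
as the old remote_message_header; roughly (names and comments
paraphrased, see the generated virnetprotocol.h for the real thing):

  typedef enum {
      VIR_NET_CALL = 0,     /* client -> server method invocation */
      VIR_NET_REPLY = 1,    /* server -> client response */
      VIR_NET_MESSAGE = 2,  /* server -> client async event */
      VIR_NET_STREAM = 3,   /* bidirectional data stream packet */
  } virNetMessageType;

  typedef enum {
      VIR_NET_OK = 0,       /* success */
      VIR_NET_ERROR = 1,    /* failure, payload is a virNetMessageError */
      VIR_NET_CONTINUE = 2, /* stream still has data following */
  } virNetMessageStatus;

  struct virNetMessageHeader {
      unsigned prog;        /* unique program identifier */
      unsigned vers;        /* program version number */
      int proc;             /* procedure number within the program */
      virNetMessageType type;
      unsigned serial;      /* client-chosen serial, echoed in replies */
      virNetMessageStatus status;
  };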
On RHEL 5, I got:
/usr/bin/python ./generator.py /usr/bin/python
File "./generator.py", line 427
"virStreamFree", # Needed in custom virStream __del__, but free shouldn't
^
SyntaxError: invalid syntax
* python/generator.py (function_skip_python_impl): Use same syntax
as other skip lists.
GCC complained about a C99 for-loop declaration outside of C99 mode
when compiling on RHEL 5.
* src/qemu/qemu_driver.c (qemudDomainPinVcpuFlags): Avoid C99 for
loop, since gcc 4.1.2 hates it.
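For anyone hitting the same diagnostic, the pattern of the fix is
simply (index name and body here are illustrative, not the exact
hunk):

  static void example(int maxvcpus)
  {
      /* gcc 4.1.2 without -std=gnu99 rejects: for (int i = 0; ...) */
      int i;

      for (i = 0; i < maxvcpus; i++) {
          /* ...per-vcpu work... */
      }
  }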
This patch adds documentation about the 802.1Qbh related parameters
of the virtualport element for 'direct' interfaces.
Signed-off-by: David S. Wang <dwang2@cisco.com>
Signed-off-by: Roopa Prabhu <roprabhu@cisco.com>
Signed-off-by: Christian Benvenuti <benve@cisco.com>
Signed-off-by: Vasanthy Kolluri <vkolluri@cisco.com>
Gnulib has been busy, with 397 commits; it's easier to update now
even without any known libvirt issue to be fixed, rather than
having to analyze an even larger changeset later on.
* .gnulib: Update to latest, for lots of changes.
* bootstrap: Synchronize to upstream.
This patch fixes the compilation of netlink.c and interface.c on
systems that either lack libnl or have an older linux/if_link.h
include file that does not support macvtap or VF_PORTS.
WITH_MACVTAP is '1' if newer include files were detected, '0' otherwise.
IFLA_PORT_MAX is defined in linux/if_link.h if yet more functionality is
supported.
Turns out I was right in removing this the first time :) This is
needed in our custom __del__ function, but the C code wasn't
being generated. Add new infrastructure to do what we want
volDelete used to return VIR_ERR_INTERNAL_ERROR when attempting to
delete a volume which was still being allocated. It should return
VIR_ERR_OPERATION_INVALID.
* src/storage/storage_driver.c: Fix return of volDelete.
See previous patch for why this is good...
* src/xen/xen_driver.h (xenXMConfCache): Manage filename
dynamically.
* src/xen/xm_internal.c (xenXMConfigCacheAddFile)
(xenXMConfigFree, xenXMDomainDefineXML): Likewise.
POSIX allows implementations where PATH_MAX is undefined, leading
to a compilation error. Not to mention that even if it is defined,
it is often wasteful in relation to the amount of data being stored.
All clients of vol->key were audited, and found not to care about
whether key is static or dynamic, except for these offenders:
* src/datatypes.h (struct _virStorageVol): Manage key dynamically.
* src/datatypes.c (virReleaseStorageVol): Free key.
(virGetStorageVol): Copy key.
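The pattern is the usual move from a fixed-size buffer to heap
allocation; a generic sketch with illustrative names:

  #include <stdlib.h>
  #include <string.h>

  /* Before: char key[PATH_MAX]; fails to compile where PATH_MAX is
   * undefined and over-allocates elsewhere. After: manage the
   * string dynamically. */
  struct volume {
      char *key;
  };

  static int volumeSetKey(struct volume *vol, const char *key)
  {
      char *copy = strdup(key);
      if (!copy)
          return -1;
      free(vol->key);          /* release any previous value */
      vol->key = copy;
      return 0;
  }

  static void volumeClearKey(struct volume *vol)
  {
      free(vol->key);
      vol->key = NULL;
  }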
In a second cleanup step this patch makes several interface
functions from macvtap.c commonly available by moving them into
interface.c and prefixing their names with 'iface'. Those functions
taking Linux-specific structures as parameters are only visible on
Linux.
ifaceRestoreMacAddress returns the return code from the
ifaceSetMacAddr call and displays an error message if setting the
MAC address did not work. The caller is unchanged and still ignores
the return code (which is ok).
In a first cleanup step, make nlComm from macvtap.c commonly available
for other code to use. Since nlComm uses Linux-specific structures
as parameters, its prototype is only visible on Linux.
We weren't using the @FOO@ notation for a Makefile substitution,
but rather in a sed rule, so using [@]FOO@ avoids the
need to exempt this from the syntax check.
* cfg.mk (_makefile_at_at_check_exceptions): Delete.
* tools/Makefile.am (virt-xml-validate, virt-pki-validate): Avoid
tripping syntax-check.
Reported by Daniel P. Berrange.
Files under src/util must not depend on src/conf
Solve the macvtap problem by moving the definition
of macvtap modes from domain_conf.h into macvtap.h
* src/util/macvtap.c, src/util/macvtap.h: Add enum
for macvtap modes
* src/conf/domain_conf.c, src/conf/domain_conf.h: Remove
enum for macvtap modes
This fixes a number of issues, most of them raised by Eric Blake,
with the generated documentation output:
- parsing of "long long int" and similar
- add parsing of unions within a struct
- remove spurious " * " from comments on structure fields and enums
- fix concatenation of base type and name in arrays
- extend the XSLT to cope with unions in structs
* docs/apibuild.py: fix and extend API extraction tool
* docs/newapi.xsl: extend the stylesheets to cope with unions in
public structures
For virtio disks and interfaces, qemu allows users to enable or disable
the ioeventfd feature. This means qemu can execute domain code while
another thread waits for I/O events. Basically, in some cases it is a
win, in others a loss. The feature is exposed via the 'ioeventfd'
attribute on the disk and interface <driver> element. It accepts 'on'
and 'off'; leaving the attribute out defaults to the hypervisor's
decision.
When building rpms for newer Fedora or RHEL, take advantage of the
newer netcf packaging to guarantee interface snapshot support.
* libvirt.spec.in (BuildRequires): Bump minimum version on
platforms that support netcf 0.1.8.