Commit 12317957ec introduced an incompatible
architectural change for the AppArmor security driver. Specifically,
virSecurityManagerSetAllLabel() is now called much later in
src/qemu/qemu_process.c:qemuProcessStart(). Previously, SetAllLabel() was
called immediately after GenLabel() such that after the dynamic label (profile
name) was generated, SetAllLabel() would be called to create and load the
AppArmor profile into the kernel before qemuProcessHook() was executed. With
12317957ec, qemuProcessHook() is now called
before SetAllLabel(), such that aa_change_profile() ends up being called
before the AppArmor profile is loaded into the kernel (via ProcessLabel() in
qemuProcessHook()).
This patch addresses the change by making GenLabel() load the AppArmor
profile into the kernel after the label (profile name) is generated.
SetAllLabel() is then adjusted to only reload_profile() and append stdin_fn to
the profile when it is specified. This also makes the AppArmor driver work
like its SELinux counterpart with regard to SetAllLabel() and stdin_fn.
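For reference, the kernel refuses to switch to a profile it has not yet seen; a minimal standalone illustration of that ordering constraint using libapparmor's public aa_change_profile() (not the driver code itself, and the profile name below is a placeholder):
  #include <sys/apparmor.h>
  #include <errno.h>
  #include <stdio.h>
  #include <string.h>
  int main(void)
  {
      /* Fails (typically ENOENT) unless the named profile has already been
       * loaded into the kernel, e.g. via apparmor_parser -r. */
      if (aa_change_profile("libvirt-00000000-0000-0000-0000-000000000000") < 0)
          fprintf(stderr, "aa_change_profile: %s\n", strerror(errno));
      return 0;
  }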
Bug-Ubuntu: https://launchpad.net/bugs/801569
When adding virDomainGetVcpusFlags in commit ea3f5c6, I did
enough rebasing that the doc comments in libvirt.c no longer
matched the final chosen enum names in libvirt.h.
And now we've gone ahead and deprecated the names
VIR_DOMAIN_VCPU_{LIVE,CONFIG}.
* src/libvirt.c (virDomainGetVcpusFlags): Fix comment.
I'm not sure when Py_ssize_t was introduced, but Fedora 14's Python 2.7
has it, while RHEL 5's Python 2.4 lacks it. It should be easy enough
to adjust if someone runs into problems.
* python/typewrappers.h (Py_ssize_t): Define for older python.
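The usual compatibility shim looks roughly like this (the exact guard used in typewrappers.h may differ; Py_ssize_t itself arrived with PEP 353 in Python 2.5):
  #include <Python.h>
  #include <limits.h>
  /* Fallback for interpreters older than 2.5, such as RHEL 5's Python 2.4 */
  #if PY_VERSION_HEX < 0x02050000
  typedef int Py_ssize_t;
  # define PY_SSIZE_T_MAX INT_MAX
  # define PY_SSIZE_T_MIN INT_MIN
  #endif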
Use NUMA's older nodemask_t (fixed-size map) rather than the newer
'struct bitmask' (variable-size) in order to still compile on RHEL 5,
with its numactl-devel-0.9.8.
* src/qemu/qemu_process.c [HAVE_NUMA]: Prefer back-compat mode.
(qemuProcessInitNumaMemoryPolicy): Use older nodemask_t.
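For illustration, the older interface being targeted looks like this (a sketch against numactl 0.9.x's API, not the exact libvirt code):
  #include <numa.h>
  /* Bind memory allocations to one node using the fixed-size nodemask_t
   * API shipped with numactl-devel-0.9.8; sketch only. */
  static void bind_to_node(int node)
  {
      nodemask_t mask;
      if (numa_available() < 0)
          return;
      nodemask_zero(&mask);
      nodemask_set(&mask, node);
      numa_set_membind(&mask);   /* old API takes nodemask_t *, not struct bitmask * */
  }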
To ensure virnetprotocol.[ch] are generated before any other
files, add them to BUILT_SOURCES and MAINTAINERCLEANFILES.
At the same time, move ESX_DRIVER_GENERATED out of DISTCLEAN
and into MAINTAINERCLEANFILES, since they are included in
EXTRA_DIST
* src/Makefile.am: Add virnetprotocol.[ch] to BUILT_SOURCES
The Makefile.am rules for generating RPC protocol had a couple
of bugs
- An instance of remote/rpcgen_fix.pl was not changed
to rpc/genprotocol.pl
- A dep from rpc/virnetmessage.h on the generated
rpc/virnetprotocol.h was missing
- The generated rpc/virnetprotocol.[ch] were not listed
in MAINTAINERCLEANFILES
* Makefile.am: Fix RPC protocol generation
The qemuMigrationPrepareDirect/PrepareTunnel methods accidentally
set the domain job to QEMU_JOB_MIGRATION_OUT when it should have
been QEMU_JOB_MIGRATION_IN. This didn't have any ill effect, but
it is nonetheless wrong.
* src/qemu/qemu_migration.c: Fix job type
The code emitting taint warnings mistakenly treated guests run
from the QEMU session driver as tainted for having high privileges.
This is of course nonsense, since the session driver is always
unprivileged
* src/qemu/qemu_domain.c: Don't warn for high privileges in
non-privileged QEMU
If an application is using libvirt + KVM as a piece of its
internal infrastructure to perform a specific task, it can
be desirable to guarantee the VM dies when the virConnectPtr
disconnects from libvirtd. This ensures the app can't leak
any VMs it was using. Adding VIR_DOMAIN_START_AUTOKILL as
a flag when starting guests enables this to be done.
* include/libvirt/libvirt.h.in: Add VIR_DOMAIN_START_AUTOKILL
* src/qemu/qemu_driver.c: Support automatic killing of guests
upon connection close
* tools/virsh.c: Add --autokill flag to 'start' and 'create'
commands
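For example, an application could start its transient helper guest roughly like this (a sketch using the public virDomainCreateXML() call), or from the shell with 'virsh create --autokill guest.xml':
  #include <libvirt/libvirt.h>
  /* Sketch: start a transient guest that libvirtd will forcefully kill
   * as soon as this connection is closed. */
  static virDomainPtr start_autokill_guest(virConnectPtr conn, const char *xml)
  {
      return virDomainCreateXML(conn, xml, VIR_DOMAIN_START_AUTOKILL);
  }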
Migration is a multi-step process:
1. Begin(src)
2. Prepare(dst)
3. Perform(src)
4. Finish(dst)
5. Confirm(src)
At step 2, a QEMU process is launched on the destination to
accept the incoming migration. Occasionally the process
that is controlling the migration workflow aborts, and fails
to call step 4, Finish. This leaves a QEMU process running
on the target (albeit with paused CPUs). Unfortunately because
step 2 activates a job on the QEMU process, it is unkillable by
normal means.
By registering the VM for autokill against the src virConnectPtr
in step 2, we can ensure that the guest is forcefully killed off
if the connection is closed without step 4 being invoked
* src/qemu/qemu_migration.c: Register autokill in PrepareDirect
and PrepareTunnel. Unregister autokill on successful run
of Finish
* src/qemu/qemu_process.c: Unregister autokill when stopping a
process
Sometimes it is useful to be able to automatically destroy a guest when
a connection is closed. For example, kill an incoming migration if
the client managing the migration dies. This introduces a map between
guest 'uuid' strings and virConnectPtr objects. When a connection is
closed, any associated guests are killed off.
* src/qemu/qemu_conf.h: Add autokill hash table to qemu driver
* src/qemu/qemu_process.c, src/qemu/qemu_process.h: Add APIs
for performing autokill of guests associated with a connection
* src/qemu/qemu_driver.c: Initialize autodestroy map
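Conceptually the bookkeeping is just a UUID-to-connection map plus a hook run at connection close; a self-contained sketch of the idea (all names here are invented for illustration, the real driver uses a hash table and its internal process-teardown code):
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <libvirt/libvirt.h>
  /* Map guest UUID strings to the connection that registered them, and kill
   * the matching guests when that connection closes. */
  struct autokill_entry {
      char uuid[VIR_UUID_STRING_BUFLEN];
      virConnectPtr conn;
      struct autokill_entry *next;
  };
  static struct autokill_entry *autokill_list;
  static int autokill_register(const char *uuid, virConnectPtr conn)
  {
      struct autokill_entry *e = calloc(1, sizeof(*e));
      if (!e)
          return -1;
      strncpy(e->uuid, uuid, sizeof(e->uuid) - 1);
      e->conn = conn;
      e->next = autokill_list;
      autokill_list = e;
      return 0;
  }
  static void kill_guest_by_uuid(const char *uuid)
  {
      /* stand-in for the driver's internal "forcefully stop this guest" code */
      printf("autokill: destroying guest %s\n", uuid);
  }
  /* Invoked when 'conn' is being closed. */
  static void autokill_run(virConnectPtr conn)
  {
      struct autokill_entry *e;
      for (e = autokill_list; e; e = e->next) {
          if (e->conn == conn)
              kill_guest_by_uuid(e->uuid);
      }
  }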
For controlled shutdown we issue a 'system_powerdown' command
to the QEMU monitor. This triggers an ACPI event which most
guest OSes wire up to a controlled shutdown. There is no equivalent
ACPI event to trigger a controlled reboot. This patch attempts
to fake a reboot.
- In qemuDomainObjPrivatePtr we have a bool fakeReboot
flag.
- The virDomainReboot method sets this flag and then
triggers a normal 'system_powerdown'.
- The QEMU process is started with '-no-shutdown'
so that the guest CPUs pause when the guest
powers itself off
- When we receive the 'POWEROFF' event from the QEMU JSON
monitor, if fakeReboot is not set we invoke the
qemuProcessKill command and shutdown continues
normally
- If fakeReboot was set, we spawn a background thread
which issues 'system_reset' to perform a warm reboot
of the guest hardware. Then it issues 'cont' to
start the CPUs again (see the sketch below)
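The event handling described above, in rough pseudo-C (control-flow illustration only; the stub functions stand in for the real monitor and process helpers in the QEMU driver):
  #include <pthread.h>
  #include <stdbool.h>
  static void monitor_system_reset(void *vm) { (void)vm; /* 'system_reset' */ }
  static void monitor_continue(void *vm)     { (void)vm; /* 'cont' */ }
  static void process_kill(void *vm)         { (void)vm; /* kill the QEMU process */ }
  static void *fake_reboot_thread(void *vm)
  {
      monitor_system_reset(vm);   /* warm reset of the guest hardware */
      monitor_continue(vm);       /* restart the vCPUs paused by -no-shutdown */
      return NULL;
  }
  /* Handler for the 'POWEROFF' event from the JSON monitor. */
  static void on_guest_poweroff(void *vm, bool fakeReboot)
  {
      if (!fakeReboot) {
          process_kill(vm);       /* ordinary controlled shutdown */
      } else {
          pthread_t thr;
          if (pthread_create(&thr, NULL, fake_reboot_thread, vm) == 0)
              pthread_detach(thr);
      }
  }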
* src/qemu/qemu_command.c: Add -no-shutdown flag if
we have JSON support
* src/qemu/qemu_domain.h: Add 'fakeReboot' flag to
qemuDomainObjPrivate struct
* src/qemu/qemu_driver.c: Fake reboot using the
system_powerdown command if JSON support is available
* src/qemu/qemu_monitor.c, src/qemu/qemu_monitor.h,
src/qemu/qemu_monitor_json.c, src/qemu/qemu_monitor_json.h,
src/qemu/qemu_monitor_text.c, src/qemu/qemu_monitor_text.h: Add
binding for system_reset command
* src/qemu/qemu_process.c: Reset the guest & start CPUs if
fakeReboot is set
Move daemon/remote_generator.pl to src/rpc/gendispatch.pl
and src/remote/rpcgen_fix.pl to src/rpc/genprotocol.pl
* daemon/Makefile.am: Update for new name/location of generator
* src/Makefile.am: Update for new name/location of generator
To facilitate creation of new clients using XDR RPC services,
pull a lot of the remote driver code into a set of reusable
objects.
- virNetClient: Encapsulates a socket connection to a
remote RPC server. Handles all the network I/O for
reading/writing RPC messages. Delegates RPC encoding
and decoding to the registered programs
- virNetClientProgram: Handles processing and dispatch
of RPC messages for a single RPC (program,version).
A program can register to receive async events
from the server
- virNetClientStream: Handles generic I/O stream
integration to RPC layer
Each new client program now merely needs to define the list of
RPC procedures & events it wants and their handlers. It does
not need to deal with any of the network I/O functionality at
all.
Allow RPC servers to advertise themselves using MDNS,
via Avahi
* src/rpc/virnetserver.c, src/rpc/virnetserver.h: Allow
registration of MDNS services via avahi
* src/rpc/virnetserverservice.c, src/rpc/virnetserverservice.h: Add
API to fetch the listen port number
* src/rpc/virnetsocket.c, src/rpc/virnetsocket.h: Add API to
fetch the local port number
* src/rpc/virnetservermdns.c, src/rpc/virnetservermdns.h: Represent
an MDNS advertisement
To facilitate creation of new daemons providing XDR RPC services,
pull a lot of the libvirtd daemon code into a set of reusable
objects.
* virNetServer: A server contains one or more services which
accept incoming clients. It maintains the list of active
clients. It has a list of RPC programs which can be used
by clients. When a client produces a complete RPC message,
the server passes it on to the corresponding program for
handling, and queues any response back to the client.
* virNetServerClient: Encapsulates a single client connection.
All I/O for the client is handled, reading & writing RPC
messages.
* virNetServerProgram: Handles processing and dispatch of
RPC method calls for a single RPC (program,version).
Multiple programs can be registered with the server.
* virNetServerService: Encapsulates socket(s) listening for
new connections. Each service listens on a single host/port,
but may have multiple sockets if on a dual IPv4/6 host.
Each new daemon now merely has to define the list of RPC procedures
& their handlers. It does not need to deal with any network related
functionality at all.
This extends the basic virNetSocket APIs to allow them to have
a handle to the TLS/SASL session objects, once established.
This ensures that any data reads/writes are automagically
passed through the TLS/SASL encryption layers if required.
* src/rpc/virnetsocket.c, src/rpc/virnetsocket.h: Wire up
SASL/TLS encryption
This provides two modules for handling SASL
* virNetSASLContext provides the process-wide state, currently
just a whitelist of usernames on the server and a one-time
library init call
* virNetSASLSession provides the per-connection state, i.e. the
SASL session itself. This also includes APIs for providing
data encryption/decryption once the session is established
* src/Makefile.am: Add to libvirt-net-rpc.la
* src/rpc/virnetsaslcontext.c, src/rpc/virnetsaslcontext.h: Generic
SASL handling code
This provides two modules for handling TLS
* virNetTLSContext provides the process-wide state, in particular
all the x509 credentials, DH params and x509 whitelists
* virNetTLSSession provides the per-connection state, i.e. the
TLS session itself.
The virNetTLSContext provides APIs for validating a TLS session's
x509 credentials. The virNetTLSSession includes APIs for performing
the initial TLS handshake and sending/receiving encrypted data
* src/Makefile.am: Add to libvirt-net-rpc.la
* src/rpc/virnettlscontext.c, src/rpc/virnettlscontext.h: Generic
TLS handling code
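Underneath this is GnuTLS; roughly, the calls a server-side session wrapper has to drive look like the following (sketch only, with placeholder certificate paths, not libvirt's actual wrapper code):
  #include <gnutls/gnutls.h>
  static int tls_server_session(int sockfd)
  {
      gnutls_certificate_credentials_t x509cred;
      gnutls_session_t session;
      gnutls_global_init();
      gnutls_certificate_allocate_credentials(&x509cred);
      gnutls_certificate_set_x509_trust_file(x509cred, "cacert.pem",
                                             GNUTLS_X509_FMT_PEM);
      gnutls_certificate_set_x509_key_file(x509cred, "servercert.pem",
                                           "serverkey.pem", GNUTLS_X509_FMT_PEM);
      gnutls_init(&session, GNUTLS_SERVER);
      gnutls_set_default_priority(session);
      gnutls_credentials_set(session, GNUTLS_CRD_CERTIFICATE, x509cred);
      gnutls_transport_set_ptr(session, (gnutls_transport_ptr_t)(long)sockfd);
      if (gnutls_handshake(session) < 0)
          return -1;
      /* from here on, gnutls_record_send()/gnutls_record_recv() carry the
       * encrypted RPC traffic over the socket */
      return 0;
  }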
Introduces a simple wrapper around the raw POSIX sockets APIs
and name resolution APIs. It allows easy creation of client
and server sockets with correct usage of name resolution
for protocol-agnostic socket setup.
It can listen for UNIX and TCP stream sockets.
It can connect to UNIX or TCP streams directly, or indirectly
to UNIX sockets via an SSH tunnel or an external command
* src/Makefile.am: Add to libvirt-net-rpc.la
* src/rpc/virnetsocket.c, src/rpc/virnetsocket.h: Generic
sockets APIs
* tests/Makefile.am: Add socket test
* tests/virnetsockettest.c: New test case
* tests/testutils.c: Avoid overriding LIBVIRT_DEBUG settings
* tests/ssh.c: Dumb helper program for SSH tunnelling tests
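The protocol-agnostic part essentially means driving getaddrinfo() instead of hard-coding sockaddr_in; a generic POSIX sketch of the pattern (not the virNetSocket code itself):
  #include <sys/socket.h>
  #include <netdb.h>
  #include <string.h>
  #include <unistd.h>
  /* Let getaddrinfo() pick IPv4 or IPv6 and bind the first workable address;
   * a dual-stack server would keep one socket per returned address instead. */
  static int listen_on(const char *node, const char *service)
  {
      struct addrinfo hints, *res, *ai;
      int fd = -1;
      memset(&hints, 0, sizeof(hints));
      hints.ai_family = AF_UNSPEC;       /* IPv4 or IPv6 */
      hints.ai_socktype = SOCK_STREAM;
      hints.ai_flags = AI_PASSIVE;
      if (getaddrinfo(node, service, &hints, &res) != 0)
          return -1;
      for (ai = res; ai; ai = ai->ai_next) {
          fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
          if (fd < 0)
              continue;
          if (bind(fd, ai->ai_addr, ai->ai_addrlen) == 0 && listen(fd, 5) == 0)
              break;                      /* success */
          close(fd);
          fd = -1;
      }
      freeaddrinfo(res);
      return fd;
  }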
This provides a new struct that contains a buffer for the RPC
message header+payload, as well as a decoded copy of the message
header. There is an API for applying XDR encoding & decoding
to the message headers and payloads. There are also APIs for
maintaining a simple FIFO queue of message instances.
Expected usage scenarios are:
To send a message
msg = virNetMessageNew()
...fill in msg->header fields..
virNetMessageEncodeHeader(msg)
...look at msg->header fields to determine payload filter
virNetMessageEncodePayload(msg, xdrfilter, data)
...send msg->bufferLength worth of data from buffer
To receive a message
msg = virNetMessageNew()
...read VIR_NET_MESSAGE_LEN_MAX of data into buffer
virNetMessageDecodeLength(msg)
...read msg->bufferLength-msg->bufferOffset of data into buffer
virNetMessageDecodeHeader(msg)
...look at msg->header fields to determine payload filter
virNetMessageDecodePayload(msg, xdrfilter, data)
...run payload processor
* src/Makefile.am: Add to libvirt-net-rpc.la
* src/rpc/virnetmessage.c, src/rpc/virnetmessage.h: Internal
message handling API.
* testutils.c, testutils.h: Helper for printing binary differences
* virnetmessagetest.c: Validate all XDR encoding/decoding
This patch defines the basics of a generic RPC protocol in XDR.
This is wire ABI compatible with the original remote_protocol.x.
It takes everything except for the RPC calls / events from that
protocol
- The basic header virNetMessageHeader (aka remote_message_header)
- The error object virNetMessageError (aka remote_error)
- Two dummy objects virNetMessageDomain & virNetMessageNetwork
sadly needed to keep virNetMessageError ABI compatible with
the old remote_error
The RPC protocol supports method calls, async events and
bidirectional data streams as before
* src/Makefile.am: Add rules for generating RPC code from
protocol & define a new libvirt-net-rpc.la helper library
* src/rpc/virnetprotocol.x: New generic RPC protocol
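For reference, the fixed header shared with the old protocol has roughly the following shape once generated into C (recalled from remote_message_header, so treat the names as indicative only):
  typedef enum {
      VIR_NET_CALL,     /* client -> server method invocation */
      VIR_NET_REPLY,    /* server -> client response */
      VIR_NET_MESSAGE,  /* server -> client async event */
      VIR_NET_STREAM    /* bidirectional data stream packet */
  } virNetMessageType;
  typedef enum {
      VIR_NET_OK,       /* normal call / successful reply */
      VIR_NET_ERROR,    /* error reply; payload is the error object */
      VIR_NET_CONTINUE  /* stream packet, more data to follow */
  } virNetMessageStatus;
  struct virNetMessageHeader {
      unsigned prog;               /* program identifier */
      unsigned vers;               /* program version */
      int proc;                    /* procedure number within the program */
      virNetMessageType type;
      unsigned serial;             /* links replies/events back to calls */
      virNetMessageStatus status;
  };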
On RHEL 5, I got:
/usr/bin/python ./generator.py /usr/bin/python
File "./generator.py", line 427
"virStreamFree", # Needed in custom virStream __del__, but free shouldn't
^
SyntaxError: invalid syntax
* python/generator.py (function_skip_python_impl): Use same syntax
as other skip lists.
GCC complained about a C99 for-loop declaration outside of C99 mode
when compiling on RHEL 5.
* src/qemu/qemu_driver.c (qemudDomainPinVcpuFlags): Avoid C99 for
loop, since gcc 4.1.2 hates it.
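The offending pattern and the portable rewrite, for illustration:
  /* gcc 4.1.2 without -std=gnu99 rejects "for (int i = 0; ...)", so the
   * loop variable is declared up front instead (trivial example): */
  static int sum_first(int n)
  {
      int i, total = 0;
      for (i = 0; i < n; i++)      /* not: for (int i = 0; ...) */
          total += i;
      return total;
  }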
This patch adds documentation about the 802.1Qbh-related parameters
of the virtualport element for 'direct' interfaces.
Signed-off-by: David S. Wang <dwang2@cisco.com>
Signed-off-by: Roopa Prabhu <roprabhu@cisco.com>
Signed-off-by: Christian Benvenuti <benve@cisco.com>
Signed-off-by: Vasanthy Kolluri <vkolluri@cisco.com>
Gnulib has been busy, with 397 commits; it's easier to update now
even without any known libvirt issue to be fixed, rather than
having to analyze an even larger changeset later on.
* .gnulib: Update to latest, for lots of changes.
* bootstrap: Synchronize to upstream.
This patch fixes the compilation of netlink.c and interface.c on
systems that are missing libnl or that have an older linux/if_link.h
header without macvtap or VF_PORTS support.
WITH_MACVTAP is '1' if newer include files were detected, '0' otherwise.
IFLA_PORT_MAX is defined in linux/if_link.h if yet more functionality is
supported.
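So code needing the newer headers ends up guarded along these lines (illustrative only; the real conditionals live in netlink.c/interface.c and the configure checks):
  #if WITH_MACVTAP
  # include <linux/if_link.h>
  # ifdef IFLA_PORT_MAX
  /* 802.1Qb{g,h} port profile (VF_PORTS) handling can be compiled in */
  # endif
  #endif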
Turns out I was right in removing this the first time :) This is
needed in our custom __del__ function, but the C code wasn't
being generated. Add new infrastructure to do what we want
volDelete used to return VIR_ERR_INTERNAL_ERROR when attempting to
delete a volume which was still being allocated. It should return
VIR_ERR_OPERATION_INVALID.
* src/storage/storage_driver.c: Fix return of volDelete.
See previous patch for why this is good...
* src/xen/xen_driver.h (xenXMConfCache): Manage filename
dynamically.
* src/xen/xm_internal.c (xenXMConfigCacheAddFile)
(xenXMConfigFree, xenXMDomainDefineXML): Likewise.