https://bugzilla.redhat.com/show_bug.cgi?id=830201
In older Fedora, the spec file for libvirt depended on avahi, which
included avahi-daemon, which in turn depended on dbus. But now that
avahi libs and avahi-daemon are (correctly) in separate packages,
and since we REALLY don't want a mandatory dependency on avahi-daemon,
and considering that our init scripts require the messagebus service
from dbus, we need to explicitly require dbus ourselves.
* libvirt.spec.in (Requires): Add dbus for libvirt-daemon.
https://bugzilla.redhat.com/show_bug.cgi?id=830201
The initscript and upstart services depend on dbus starting
before libvirtd. When we first wrote the systemd script, we
tried to do the same, but we depended on dbus.target (which
does not exist) in comparison to network.target (which does
exist), so we removed that in commit 4c7973e. But we still
need dbus up and running first, especially now that we want
to support shutdown inhibition via dbus (whereas we originally
needed dbus only for firewall control).
http://www.freedesktop.org/software/systemd/man/systemd.target.html
explains how a target (such as network.target) is just a collection
of common services bundled together, and why we want to depend on
network.target but on dbus.service rather than dbus.target.
* daemon/libvirtd.service.in (Unit): Depend on dbus starting
first.
Only one error in qemu_monitor.c was already using the relatively
new OPERATION_UNSUPPORTED error code, even though it is a better fit
for all of the messages about options that are unsupported because of
the version of qemu in use rather than because of the user's XML or
.conf file choices. Suggested by Osier Yang.
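A sketch of the new shape of these checks (the exact condition varies
per function, and the wording here is illustrative):

    /* the option is missing because the running qemu is too old, not
     * because the user asked for something invalid in XML or .conf */
    if (!mon->json) {
        virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
                       _("JSON monitor is required"));
        return -1;
    }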
* src/qemu/qemu_monitor.c (qemuMonitorSendFileHandle)
(qemuMonitorAddHostNetwork, qemuMonitorRemoveHostNetwork)
(qemuMonitorAttachDrive, qemuMonitorDiskSnapshot)
(qemuMonitorDriveMirror, qemuMonitorTransaction)
(qemuMonitorBlockCommit, qemuMonitorDrivePivot)
(qemuMonitorBlockJob, qemuMonitorSystemWakeup)
(qemuMonitorGetVersion, qemuMonitorGetMachines)
(qemuMonitorGetCPUDefinitions, qemuMonitorGetCommands)
(qemuMonitorGetEvents, qemuMonitorGetKVMState)
(qemuMonitorGetObjectTypes, qemuMonitorGetObjectProps)
(qemuMonitorGetTargetArch): Use better error category.
Without this patch, attempts to create a disk snapshot when qemu
is too old result in a cryptic message:
virsh # snapshot-create 23 --disk-only
error: operation failed: Failed to take snapshot: unknown command: 'snapshot_blkdev'
Now it reports:
virsh # snapshot-create 23 --disk-only
error: unsupported configuration: live disk snapshot not supported with this QEMU binary
All versions of qemu that support live disk snapshot also support
QMP (basically upstream qemu 1.1 and later, and backports to RHEL 6.2).
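A sketch of how the new capability gates the snapshot path (struct and
field names here are approximations of the real code):

    /* qemu_driver.c: fail up front with a clear error instead of
     * letting the monitor reject an unknown 'snapshot_blkdev' later */
    if (!qemuCapsGet(priv->caps, QEMU_CAPS_DISK_SNAPSHOT)) {
        virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                       _("live disk snapshot not supported with this "
                         "QEMU binary"));
        return -1;
    }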
* src/qemu/qemu_capabilities.h (QEMU_CAPS_DISK_SNAPSHOT): New
capability.
* src/qemu/qemu_capabilities.c (qemuCaps): Track it.
(qemuCapsProbeQMPCommands): Set it.
* src/qemu/qemu_driver.c (qemuDomainSnapshotCreateDiskActive): Use
it.
* src/qemu/qemu_monitor.c (qemuMonitorDiskSnapshot): Simplify.
* src/qemu/qemu_monitor_json.c (qemuMonitorJSONDiskSnapshot):
Likewise.
* src/qemu/qemu_monitor_text.h (qemuMonitorTextDiskSnapshot):
Delete.
* src/qemu/qemu_monitor_text.c (qemuMonitorTextDiskSnapshot):
Likewise.
RHEL 6.3 uses dbus-devel-1.2.24, which lacked support for the
DBUS_TYPE_UNIX_FD define (contrast with Fedora 18 using 1.6.8).
But since it is an older dbus, it also lacks support for shutdown
inhibitions as provided by newer systemd.
Compilation failure introduced in commit 31330926.
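A sketch of one way to compile it out (the exact preprocessor guard in
the real patch may differ):

    #ifdef DBUS_TYPE_UNIX_FD
    /* normal implementation, talking to systemd-logind over dbus */
    #else
    /* dbus too old for logind inhibition; turn the entry point into a
     * no-op so the rest of the daemon builds and runs unchanged */
    void virNetServerAddShutdownInhibition(virNetServerPtr srv ATTRIBUTE_UNUSED)
    {
        VIR_DEBUG("shutdown inhibition unsupported with this dbus");
    }
    #endif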
* src/rpc/virnetserver.c (virNetServerAddShutdownInhibition):
Compile out if dbus is too old.
Implement the domainManagedSave, domainHasManagedSaveImage, and
domainManagedSaveRemove functions in the libvirt legacy xen driver.
domainHasManagedSaveImage checks for the managedsave image on the
filesystem every time. This differs from the qemu and libxl drivers,
which keep a hasManagedSave flag in virDomainObjPtr; that flag is not
used in the legacy xen driver. The flag cannot be added to the xen
driver pointer either, because the driver pointer is released at the
end of every libvirt API call. Meanwhile, AFAIK, xen stores all such
flags in xen itself, not in the libvirt xen driver, so there is no
need to add this flag for xen.
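A rough sketch of the filesystem check (the path helper named here is
illustrative, not necessarily what the patch uses):

    static int
    xenDomainHasManagedSaveImage(virDomainPtr dom, unsigned int flags)
    {
        char *name;
        int ret;

        virCheckFlags(0, -1);

        /* build the ".../save/<domname>" style path and test for it;
         * the file on disk is the single source of truth, no cached flag */
        if (!(name = xenDomainManagedSavePath(dom->conn->privateData, dom)))
            return -1;

        ret = virFileExists(name) ? 1 : 0;
        VIR_FREE(name);
        return ret;
    }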
Signed-off-by: Bamvor Jian Zhang <bjzhang@suse.com>
Add code in the python binding to cope with the new APIs
virConnectRegisterCloseCallback and
virConnectUnregisterCloseCallback. Also demonstrate their
use in the python domain events demo.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
When the session dies or when the system is going to be shut down,
we issue a virStateStop() call to instruct drivers to prepare to
be stopped. This will remove any previously acquired inhibitions.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Use the freedesktop inhibition DBus service to prevent host
shutdown or session logout while any VMs are running.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Currently, to deal with auto-shutdown, libvirtd must periodically
poll all stateful drivers. This sucks because it requires
acquiring both the driver lock and locks on every single virtual
machine. Instead pass an "inhibit" callback to virStateInitialize
which drivers can invoke whenever they want to inhibit shutdown
due to the existence of active VMs.
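The shape of the callback is roughly this (signatures hedged; the real
declarations may differ in detail):

    /* drivers call this whenever their count of running VMs crosses
     * zero, so libvirtd no longer needs to poll them */
    typedef void (*virStateInhibitCallback)(bool inhibit, void *opaque);

    int virStateInitialize(bool privileged,
                           virStateInhibitCallback callback,
                           void *opaque);

    /* e.g. in a driver, when the first domain starts: */
    if (driver->inhibitCallback)
        driver->inhibitCallback(true, driver->inhibitOpaque);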
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The only important state that should prevent libvirtd shutdown
comes from running VMs. Networks, host devices, network filters
and storage pools are all long lived resources that have no
significant in-memory state. They should not block shutdown.
Commit 71d1256 tried to fix a problem where rebasing an old
branch on top of newer libvirt.git resulted in automake failing
because of a missing AUTHORS file. However, while the fix
worked for an incremental 'make', it did not work for someone
that directly reran './autogen.sh'. Reported by Laine Stump.
* autogen.sh (autoreconf): Check for same conditions as cfg.mk.
* cfg.mk (_update_required): Add comments.
The patch adds the backend driver to support iSCSI-format storage pools
and volumes for an ESX host. The mapping of ESX iSCSI specifics to Libvirt
is as follows:
1. ESX static iSCSI target <------> Libvirt Storage Pools
2. ESX iSCSI LUNs <------> Libvirt Storage Volumes.
The above understanding is based on http://libvirt.org/storage.html.
The operations supported on iSCSI pools include:
1. List storage pools & volumes.
2. Get XML descriptor operation on pools & volumes.
3. Lookup operation on pools & volumes by name, UUID and path (if applicable).
iSCSI pools do not support operations such as creating or removing
pools and volumes.
Since we can't (currently) rely on the ability to provide blanket
support for all possible network changes by calling the toplevel
netdev hostside disconnect/connect functions (due to qemu only
supporting a lockstep between initialization of host side and guest
side of devices), in order to support live change of an interface's
nwfilter we need to make a special purpose function to only call the
nwfilter teardown and setup functions if the filter for an interface
(or its parameters) changes. The pattern is nearly identical to that
used to change the bridge that an interface is connected to.
This patch was inspired by a request from Guido Winkelmann
<guido@sagersystems.de>, who tested an earlier version.
To detect if an interface's nwfilter has changed, we need to also
compare the filterparams, which is a hashtable of virNWFilterVarValue.
virHashEqual can do this nicely, but requires a pointer to a function
that will compare two of the items being stored in the hashes.
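A sketch of how that fits together, assuming a strcmp-like convention
where the comparator returns 0 for equal values (the helper names below
are illustrative):

    static int
    virNWFilterVarValueCompare(const void *a, const void *b)
    {
        /* 0 == equal, non-zero == different */
        return virNWFilterVarValueEqual(a, b) ? 0 : 1;
    }

    bool same = virHashEqual(oldFilterParams, newFilterParams,
                             virNWFilterVarValueCompare);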
This resolves:
https://bugzilla.redhat.com/show_bug.cgi?id=881480
These three functions:
virDomainNetGetActualBridgeName
virDomainNetGetActualDirectDev
virDomainNetGetActualDirectMode
return attributes that are in a union whose contents are interpreted
differently depending on the actual->type and so they should only
return non-0 when actual->type is 'bridge' (in the first case) or
'direct' (in the other two cases). But I had neglected to do that, so
...DirectDev() was returning bridge.brname (which happens to share the
same spot in the union with direct.linkdev) when actual->type was
'bridge', and ...BridgeName() was returning direct.linkdev when
actual->type was 'direct'.
How does this involve Bug 881480 (which was about the inability to
switch between two networks that both have "<forward mode='bridge'/>
<bridge name='xxx'/>")? Whenever the return value of
virDomainNetGetActualDirectDev() for the new and old network
definitions doesn't match, qemuDomainChangeNet() requires a "complete
reconnect" of the device, which qemu currently doesn't
support. ...DirectDev() *should* have been returning NULL for old and
new, but was instead returning the old and new bridge names, which
differ.
(The other two functions weren't causing any behavioral problems in
qemuDomainChangeNet(), but their problem and fix were identical, so I
included them in this same patch.)
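The fix is essentially this (sketched for one of the three, field names
abbreviated):

    const char *
    virDomainNetGetActualDirectDev(virDomainNetDefPtr iface)
    {
        if (iface->type != VIR_DOMAIN_NET_TYPE_NETWORK)
            return NULL;
        /* only trust the union member that matches the actual type */
        if (!iface->data.network.actual ||
            iface->data.network.actual->type != VIR_DOMAIN_NET_TYPE_DIRECT)
            return NULL;
        return iface->data.network.actual->data.direct.linkdev;
    }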
Libvirt's helper APIs, when called directly, don't raise the error in
a way that virsh remembers. Subsequent calls to libvirt APIs might
reset the error.
In the case of schedinfo, virDomainFree() in the cleanup section resets
the error when virTypedParameterAssignFromStr() fails.
This patch adds a function vshSaveLibvirtError() that can be called
after calling libvirt helper APIs to ensure the error is remembered.
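A minimal sketch of the helper, assuming virsh's existing last_error
bookkeeping:

    /* remember the current libvirt error so that a later cleanup call
     * such as virDomainFree() cannot silently overwrite it */
    void
    vshSaveLibvirtError(void)
    {
        last_error = virSaveLastError();
        virResetLastError();
    }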
Fix the null pointer access when UUID is not specified.
Introduce a bool 'uuidUsable' to virStoragePoolAuthCephx that indicates
if uuid was specified or not and use it instead of the pointless
comparison of the static UUID array to NULL.
Add an error message if both uuid and usage are specified.
Fixes:
Error: FORWARD_NULL (CWE-476):
libvirt-0.10.2/src/conf/storage_conf.c:461: var_deref_model: Passing
null pointer "uuid" to function "virUUIDParse(char const *, unsigned
char *)", which dereferences it. (The dereference is assumed on the
basis of the 'nonnull' parameter attribute.)
Error: NO_EFFECT (CWE-398):
libvirt-0.10.2/src/conf/storage_conf.c:979: array_null: Comparing an
array to null is not useful: "src->auth.cephx.secret.uuid != NULL".
virStringSplit requires a non-NULL input, but commit cef78ed forgot
to follow the rule.
* tools/virsh-domain.c (cmdReboot, cmdShutdown): Avoid NULL deref.
Commit a21f5112 fixed one API, but missed two others that also
failed to log their 'flags' argument.
* src/libvirt.c (virNodeSuspendForDuration, virDomainGetHostname):
Log flags parameter.
The virNodeSuspend API allows for a duration of 0, to mean no
timed wakeup. virsh needlessly forbids this though.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
This introduces a few new APIs for dealing with strings.
One to split a char * into a char **, another to join a
char ** into a char *, and finally one to free a char **.
There is a simple test suite to validate the edge cases
too. No more need to use the horrible strtok_r() API,
or hand-written code for splitting strings.
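Typical usage then looks something like this (signatures hedged against
the new virstring.h):

    /* split and rejoin a string without strtok_r gymnastics */
    char **parts = virStringSplit("192.168.122.0/24", "/", 0);
    if (!parts)
        return -1;

    char *joined = virStringJoin((const char **)parts, "/");

    VIR_FREE(joined);
    virStringFreeList(parts);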
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Add support for doing controlled shutdown / reboot in the LXC
driver. The default behaviour is to try talking to /dev/initctl
inside the container's virtual root (/proc/$INITPID/root). This
works with sysvinit or systemd. If that file does not exist
then send SIGTERM (for shutdown) or SIGHUP (for reboot). These
signals are not any kind of particular standard for shutdown
or reboot, just something apps can choose to handle. The new
virDomainSendProcessSignal allows for sending custom signals.
We might allow the choice of SIGTERM/HUP to be configured for
LXC containers via the XML in the future.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The virDomainShutdownFlags and virDomainReboot APIs allow the caller
to request that the operation be implemented via either an ACPI button
press or a guest agent. For containers, a couple of other methods make
sense: a message to /dev/initctl, and a direct kill(SIGTERM|HUP) of
the container init process.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The fact that only the guest agent or ACPI flag can be used
when requesting reboot/shutdown is merely a limitation of the
QEMU driver impl at this time. Thus it should not be in the
libvirt.c code.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
To be able to do controlled shutdown/reboot of containers, an
API to talk to init via /dev/initctl is required. Fortunately
this is quite straightforward to implement, and is supported
by both sysvinit and systemd. Upstart support for /dev/initctl
is unclear.
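For reference, the protocol is just a fixed-size request record written
to the fifo; a sketch of asking init to change runlevel (layout follows
sysvinit's initreq.h from memory, so treat the details as approximate):

    struct init_request {          /* 384 bytes on the wire */
        int magic;                 /* 0x03091969 */
        int cmd;                   /* 1 == change runlevel */
        int runlevel;              /* '0' to halt, '6' to reboot */
        int sleeptime;
        char pad[384 - 4 * sizeof(int)];
    };

    struct init_request req = {
        .magic = 0x03091969,
        .cmd = 1,
        .runlevel = '0',
    };
    /* fd is /dev/initctl opened inside the container root, i.e.
     * /proc/$INITPID/root/dev/initctl */
    if (safewrite(fd, &req, sizeof(req)) != sizeof(req))
        return -1;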
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
When seeing a message
virNetSASLContextCheckIdentity:146 : SASL client admin not allowed in whitelist
it isn't immediately obvious that 'admin' is the identity
being checked. Quote the string to make it more obvious.
The default machine type must be stored in the first element of
the caps->machineTypes array. This was done for help output
parsing but not for QMP probing.
Added a helper function qemuSetDefaultMachine to apply the same
fix up for both probing methods.
Further, it was necessary to set caps->nmachineTypes after QMP
probing.
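The helper boils down to something like this (a sketch; the real code
lives in qemu_capabilities.c and may track more per-machine data):

    /* move the default machine type to index 0, since callers treat
     * the first entry of machineTypes as the default */
    static void
    qemuSetDefaultMachine(qemuCapsPtr caps, size_t defIdx)
    {
        char *name = caps->machineTypes[defIdx];
        char *alias = caps->machineAliases[defIdx];

        memmove(caps->machineTypes + 1, caps->machineTypes,
                sizeof(caps->machineTypes[0]) * defIdx);
        memmove(caps->machineAliases + 1, caps->machineAliases,
                sizeof(caps->machineAliases[0]) * defIdx);
        caps->machineTypes[0] = name;
        caps->machineAliases[0] = alias;
    }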
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
https://bugzilla.redhat.com/show_bug.cgi?id=872292
Libvirt should not attempt to call a QMP command that has not been
documented in qemu.git - if future qemu introduces a command by the
same name but with subtly different semantics, then libvirt will be
broken when trying to use that command.
We also had some code that could never be reached - some of our
commands have an alternate for new vs. old qemu HMP commands; but
if we are new enough to support QMP, we only need a fallback to
the new HMP counterpart, and don't need to try for a QMP counterpart
for the old HMP version.
See also this attempt to convert the three snapshot commands to QMP:
https://lists.gnu.org/archive/html/qemu-devel/2012-07/msg01597.html
although it looks like that will still not happen before qemu 1.3.
That thread eventually decided that qemu would use the name
'save-vm' rather than 'savevm', which mitigates the fact that
libvirt's attempt to use a QMP 'savevm' would be broken, but we
might not be as lucky on the other commands.
* src/qemu/qemu_monitor_json.c (qemuMonitorJSONSetCPU)
(qemuMonitorJSONAddDrive, qemuMonitorJSONDriveDel)
(qemuMonitorJSONCreateSnapshot, qemuMonitorJSONLoadSnapshot)
(qemuMonitorJSONDeleteSnapshot): Use only HMP fallback for now.
(qemuMonitorJSONAddHostNetwork, qemuMonitorJSONRemoveHostNetwork)
(qemuMonitorJSONAttachDrive, qemuMonitorJSONGetGuestDriveAddress):
Delete; QMP implies QEMU_CAPS_DEVICE, which prefers AddNetdev,
RemoveNetdev, and AddDrive anyways (qemu_hotplug.c has all callers).
* src/qemu/qemu_monitor.c (qemuMonitorAddHostNetwork)
(qemuMonitorRemoveHostNetwork, qemuMonitorAttachDrive): Reflect
deleted commands.
* src/qemu/qemu_monitor_json.h (qemuMonitorJSONAddHostNetwork)
(qemuMonitorJSONRemoveHostNetwork, qemuMonitorJSONAttachDrive):
Likewise.
https://bugzilla.redhat.com/show_bug.cgi?id=876828
Commit 38c4a9cc introduced a regression in hot unplugging of disks
from qemu, where cgroup device ACLs were no longer being revoked
(thankfully not a security hole: cgroup ACLs only prevent open()
of the disk; so reverting the ACL prevents future abuse but doesn't
stop abuse from an fd that was already opened before the ACL change).
Commit 1b2ebf95 overlooked that there were two spots affected.
* src/qemu/qemu_hotplug.c (qemuDomainDetachDiskDevice):
Transfer backing chain before deletion.
* src/qemu/qemu_driver.c (qemuDomainDetachDeviceDiskLive): Fix
spacing (partly to ensure a different-looking patch).
Also removed some unreachable code found by coverity:
libvirt-0.10.2/src/nwfilter/nwfilter_driver.c:259: unreachable: This
code cannot be reached: "nwfilterDriverUnlock(driver...".
On error, virStoragePoolGetAutostart would return -1, leaving autostart
untouched.
Removed the misleading debug message as well.
Error: CHECKED_RETURN (CWE-252):
libvirt-0.10.2/tools/virsh-pool.c:1386: unchecked_value: No check of the
return value of "virStoragePoolGetAutostart(pool, &autostart)".
This patch adds two labels and gets rid of a ton of duplicated code.
This patch also fixes some error messages and switches most of them to
proper error reporting functions.
This patch adds macros to help retrieve configuration values from qemu
driver's configuration. Some configuration options are grouped
together in the process.
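The macros are of the usual virConf flavor, roughly (a sketch, not the
exact definitions):

    /* pull one integer setting out of qemu.conf into the driver
     * struct, bailing out on a type mismatch */
    #define GET_VALUE_LONG(NAME, VAR)                        \
        p = virConfGetValue(conf, NAME);                     \
        if (p) {                                             \
            if (p->type != VIR_CONF_LONG)                    \
                goto cleanup;                                \
            VAR = p->l;                                      \
        }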
This patch resolves CVE-2012-3411, which is described in the following
bugzilla report:
https://bugzilla.redhat.com/show_bug.cgi?id=833033
The following report is specifically for libvirt on Fedora:
https://bugzilla.redhat.com/show_bug.cgi?id=874702
In short, a dnsmasq instance run with the intention of listening for
DHCP/DNS requests only on a libvirt virtual network (which is
constructed using a Linux host bridge) would also answer queries sent
from outside the virtualization host.
This patch takes advantage of a new dnsmasq option "--bind-dynamic",
which will cause the listening socket to be set up such that it will
only receive those requests that actually come in via the bridge
interface. In order for this behavior to actually occur, not only must
"--bind-interfaces" be replaced with "--bind-dynamic", but also all
"--listen-address" options must be replaced with a single
"--interface" option. Fully:
--bind-interfaces --except-interface lo --listen-address x.x.x.x ...
(with --listen-address possibly repeated) is replaced with:
--bind-dynamic --interface virbrX
Of course libvirt can't use this new option if the host's dnsmasq
doesn't have it, but we still want libvirt to function (because the
great majority of libvirt installations, which only have mode='nat'
networks using RFC1918 private address ranges (e.g. 192.168.122.0/24),
are immune to this vulnerability from anywhere beyond the local subnet
of the host), so we use the new dnsmasqCaps API to check if dnsmasq
supports the new option and, if not, we use the "old" option style
instead. In order to assure that this permissiveness doesn't lead to a
vulnerable system, we do check for non-private addresses in this case,
and refuse to start the network if both a) we are using the old-style
options, and b) the network has a publicly routable IP
address. Hopefully this will provide the proper balance of not being
disruptive to those not practically affected, and making sure that
those who *are* affected get their dnsmasq upgraded.
(--bind-dynamic was added to dnsmasq in upstream commit
54dd393f3938fc0c19088fbd319b95e37d81a2b0, which was included in
dnsmasq-2.63)
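In the bridge driver the decision then looks roughly like this (the
capability flag and helper names are hedged approximations):

    if (dnsmasqCapsGet(caps, DNSMASQ_CAPS_BIND_DYNAMIC)) {
        /* new dnsmasq: bind only to the virtual network's bridge */
        virCommandAddArg(cmd, "--bind-dynamic");
        virCommandAddArgFormat(cmd, "--interface=%s", network->def->bridge);
    } else {
        /* old dnsmasq: keep the previous behaviour, but refuse to start
         * if the network uses a publicly routable address */
        virCommandAddArgList(cmd, "--bind-interfaces",
                             "--except-interface", "lo", NULL);
        /* ... one --listen-address per configured IP address ... */
    }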
This new function returns true if the given address is in the range of
any "private" or "local" networks as defined in RFC1918 (IPv4) or
RFC3484/RFC4193 (IPv6), otherwise it returns false.
These ranges are:
192.168.0.0/16
172.16.0.0/12
10.0.0.0/8
FC00::/7
FEC0::/10
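A sketch of the IPv4 half of such a check, assuming the helper is named
along the lines of virSocketAddrIsPrivate (the real one also handles the
IPv6 prefixes):

    bool
    virSocketAddrIsPrivate(const virSocketAddr *addr)
    {
        if (addr->data.stor.ss_family == AF_INET) {
            unsigned long val = ntohl(addr->data.inet4.sin_addr.s_addr);

            return ((val & 0xFF000000) == 0x0A000000 ||  /* 10.0.0.0/8     */
                    (val & 0xFFF00000) == 0xAC100000 ||  /* 172.16.0.0/12  */
                    (val & 0xFFFF0000) == 0xC0A80000);   /* 192.168.0.0/16 */
        }
        /* AF_INET6: compare against FC00::/7 and FEC0::/10 similarly */
        return false;
    }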
In order to optionally take advantage of new features in dnsmasq when
the host's version of dnsmasq supports them, but still be able to run
on hosts that don't support the new features, we need to be able to
detect the version of dnsmasq running on the host, and possibly
determine from the help output what options are in this dnsmasq.
This patch implements a greatly simplified version of the capabilities
code we already have for qemu. A dnsmasqCaps object can be created and
populated either by running the dnsmasq binary on disk, by reading a file
containing the concatenated output of "dnsmasq --version; dnsmasq --help",
or by examining an in-memory buffer containing the concatenated output of
those two commands. Simple functions to retrieve capabilities flags,
the version number, and the path of the binary are also included.
bridge_driver.c creates a single dnsmasqCaps object at driver startup,
and disposes of it at driver shutdown. Any time it must be used, the
dnsmasqCapsRefresh method is called - it checks the mtime of the
binary, and re-runs the checks if the binary has changed.
networkxml2argvtest.c creates 2 "artificial" dnsmasqCaps objects at
startup - one "restricted" (doesn't support --bind-dynamic) and one
"full" (does support --bind-dynamic). Some of the test cases use one
and some the other, to make sure both code paths are tested.