Just as leaving managed save metadata behind can cause problems
when creating a new domain that happens to collide with the name
of the just-deleted domain, the same is true of leaving any
snapshot metadata behind. For safety's sake, extend the semantic
change of commit b26a9fa9 to also cover snapshot metadata as a
reason to reject undefining an inactive domain. A future patch
will make sure that shutdown of a transient domain automatically
deletes snapshot metadata (whether by destroy, shutdown, or
guest-initiated action). Management apps of transient domains
should take care to capture xml of snapshots, if it is necessary
to recreate the snapshot metadata on a later transient domain
with the same name and uuid.
This also documents a new flag that hypervisors can choose to
support as a shortcut for taking care of the metadata as part of
the undefine process; however, nontrivial driver support for this
flag will be deferred to future patches.
Note that ESX and VBox domains can never be transient; therefore, they
do not have to worry about automatic cleanup after shutdown
(the persistent domain still remains); likewise they never
store snapshot metadata, so the undefine flag is trivial.
The nontrivial work remaining is thus in the qemu driver.
* include/libvirt/libvirt.h.in
(VIR_DOMAIN_UNDEFINE_SNAPSHOTS_METADATA): New flag.
* src/libvirt.c (virDomainUndefine, virDomainUndefineFlags):
Document new limitations and flag.
* src/esx/esx_driver.c (esxDomainUndefineFlags): Trivial
implementation.
* src/vbox/vbox_tmpl.c (vboxDomainUndefineFlags): Likewise.
* src/qemu/qemu_driver.c (qemuDomainUndefineFlags): Enforce
the limitations.
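A rough usage sketch (not part of the original commit; assumes
<libvirt/libvirt.h> is included and 'dom' is an inactive domain's
virDomainPtr):
  /* discard any libvirt-tracked snapshot metadata while undefining;
   * without this flag the undefine is now rejected if metadata remains */
  if (virDomainUndefineFlags(dom, VIR_DOMAIN_UNDEFINE_SNAPSHOTS_METADATA) < 0)
      /* handle error, e.g. report virGetLastError() */;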
Redefining a qemu snapshot requires a bit of a tweak to the common
snapshot parsing code, but the end result is quite nice.
Be careful that redefinitions do not introduce circular parent
chains. Also, we don't want to allow conversion between online
and offline existing snapshots. We could probably do some more
validation for snapshots that don't already exist to make sure
they are even feasible, by parsing qemu-img output, but that
can come later.
* src/conf/domain_conf.h (virDomainSnapshotParseFlags): New
internal flags.
* src/conf/domain_conf.c (virDomainSnapshotDefParseString): Alter
signature to take internal flags.
* src/esx/esx_driver.c (esxDomainSnapshotCreateXML): Update caller.
* src/vbox/vbox_tmpl.c (vboxDomainSnapshotCreateXML): Likewise.
* src/qemu/qemu_driver.c (qemuDomainSnapshotCreateXML): Support
new public flags.
Supporting NO_METADATA on snapshot creation is interesting - we must
still return a valid opaque snapshot object, but the user can't get
anything out of it (unless we add a virDomainSnapshotGetName()),
since it is no longer registered with the domain.
Also, virsh now tries to query for secure xml, in anticipation of
when we store <domain> xml inside <domainsnapshot>; for now, we
can trivially support it, since we have nothing secure.
* src/qemu/qemu_driver.c (qemuDomainSnapshotCreateXML): Support
new flag.
(qemuDomainSnapshotGetXMLDesc): Trivially support VIR_DOMAIN_XML_SECURE.
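For illustration only (assuming the usual headers and an existing
virDomainSnapshotPtr 'snap'; the string must be freed by the caller),
querying the secure XML looks like:
  char *xml = virDomainSnapshotGetXMLDesc(snap, VIR_DOMAIN_XML_SECURE);
  if (xml) {
      puts(xml);
      free(xml);
  }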
The first two flags are essential for being able to replicate
snapshot hierarchies across multiple hosts, which will come in
handy for supervised migrations. It also allows a management app
to take a snapshot of a transient domain, save the metadata, stop
the domain, recreate a new transient domain by the same name,
redefine the snapshot, then revert to it.
This is not quite as convenient as leaving the metadata behind
after a domain is no longer around, but doing that has a few
problems: 1. the libvirt API can only delete snapshot metadata
if there is a valid domain handle to use to get to that snapshot
object - if stale data is left behind without a domain, there is
no way to request that the data be cleaned up. 2. creating a new
domain with the same name but different uuid than the older
domain where a snapshot existed cannot use the older snapshot
data; this risks confusing libvirt, and forbidding the stale
data is similar to the recent patch to forbid stale managed save.
The first two flags might be useful on hypervisors with no metadata,
but only for modifying the notion of the current snapshot;
however, I don't know how to do that for ESX or VBox.
The third flag is a convenience option, to combine a creation with
a delete metadata into one step. It is trivial for hypervisors
with no metadata.
The qemu changes will be involved enough to warrant a separate patch.
* include/libvirt/libvirt.h.in
(VIR_DOMAIN_SNAPSHOT_CREATE_REDEFINE)
(VIR_DOMAIN_SNAPSHOT_CREATE_CURRENT)
(VIR_DOMAIN_SNAPSHOT_CREATE_NO_METADATA): New flags.
* src/libvirt.c (virDomainSnapshotCreateXML): Document them, and
enforce mutual exclusion.
* src/esx/esx_driver.c (esxDomainSnapshotCreateXML): Trivial
implementation.
* src/vbox/vbox_tmpl.c (vboxDomainSnapshotCreateXML): Likewise.
* docs/formatsnapshot.html.in: Document re-creation.
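A hedged sketch of the replication flow described above (not from the
original commit; 'dom' is the re-created transient domain and 'saved_xml'
is snapshot XML captured earlier with virDomainSnapshotGetXMLDesc):
  /* redefine previously captured snapshot metadata and mark it current */
  virDomainSnapshotPtr snap =
      virDomainSnapshotCreateXML(dom, saved_xml,
                                 VIR_DOMAIN_SNAPSHOT_CREATE_REDEFINE |
                                 VIR_DOMAIN_SNAPSHOT_CREATE_CURRENT);
  if (snap) {
      virDomainRevertToSnapshot(snap, 0);   /* now the revert can proceed */
      virDomainSnapshotFree(snap);
  }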
To make it easier to know when undefine will fail because of existing
snapshot metadata, we need to know how many snapshots have metadata.
Also, it is handy to filter the list of snapshots to just those that
have no parents; document that flag now, but implement it in later patches.
* include/libvirt/libvirt.h.in (VIR_DOMAIN_SNAPSHOT_LIST_ROOTS)
(VIR_DOMAIN_SNAPSHOT_LIST_METADATA): New flags.
* src/libvirt.c (virDomainSnapshotNum)
(virDomainSnapshotListNames): Document them.
* src/esx/esx_driver.c (esxDomainSnapshotNum)
(esxDomainSnapshotListNames): Implement trivial flag.
* src/vbox/vbox_tmpl.c (vboxDomainSnapshotNum)
(vboxDomainSnapshotListNames): Likewise.
* src/qemu/qemu_driver.c (qemuDomainSnapshotNum)
(qemuDomainSnapshotListNames): Likewise.
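A minimal sketch of the intended check (illustrative only; 'dom' is an
assumed virDomainPtr):
  /* how many snapshots would block an undefine because they still
   * carry libvirt metadata? */
  int n = virDomainSnapshotNum(dom, VIR_DOMAIN_SNAPSHOT_LIST_METADATA);
  if (n > 0)
      printf("%d snapshots still have libvirt metadata\n", n);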
Adding this was trivial compared to the previous patch for fixing
qemu snapshot deletion in the first place.
* src/qemu/qemu_driver.c (qemuDomainSnapshotDiscard): Add
parameter.
(qemuDomainSnapshotDiscardDescendant, qemuDomainSnapshotDelete):
Update callers.
A future patch will make it impossible to remove a domain if it
would leave behind any libvirt-tracked metadata about snapshots,
since stale metadata interferes with a new domain by the same name.
But requiring snapshot contents to be deleted before removing a
domain is harsh; with qemu, qemu-img can still make use of the
contents after the libvirt domain is gone. Therefore, we need
an option to get rid of libvirt tracking information, but not
the actual contents. For hypervisors that do not track any
metadata in libvirt, the implementation is trivial; all remaining
hypervisors (really, just qemu) will be dealt with separately.
* include/libvirt/libvirt.h.in
(VIR_DOMAIN_SNAPSHOT_DELETE_METADATA_ONLY): New flag.
* src/libvirt.c (virDomainSnapshotDelete): Document it.
* src/esx/esx_driver.c (esxDomainSnapshotDelete): Trivially
supported when there is no libvirt metadata.
* src/vbox/vbox_tmpl.c (vboxDomainSnapshotDelete): Likewise.
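An illustrative sketch (the snapshot name "snap1" is hypothetical; error
handling omitted):
  /* drop libvirt's tracking data but keep the on-disk snapshot contents */
  virDomainSnapshotPtr snap = virDomainSnapshotLookupByName(dom, "snap1", 0);
  if (snap) {
      virDomainSnapshotDelete(snap, VIR_DOMAIN_SNAPSHOT_DELETE_METADATA_ONLY);
      virDomainSnapshotFree(snap);
  }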
Similar to the last patch in isolating the filtering from the
client actions, so that clients don't have to reinvent the
filtering.
* src/conf/domain_conf.h (virDomainSnapshotForEachChild): New
prototype.
* src/libvirt_private.syms (domain_conf.h): Export it.
* src/conf/domain_conf.c (virDomainSnapshotActOnChild)
(virDomainSnapshotForEachChild): New functions.
(virDomainSnapshotCountChildren): Delete.
(virDomainSnapshotHasChildren): Simplify.
* src/qemu/qemu_driver.c (qemuDomainSnapshotReparentChildren)
(qemuDomainSnapshotDelete): Likewise.
Deleting a snapshot and all its descendants had problems with
tracking the current snapshot. The deletion does not necessarily
proceed in depth-first order, so a parent could be deleted
before a child, wreaking havoc on passing the notion of the
current snapshot to the parent. Furthermore, even if traversal
were depth-first, doing multiple file writes to pass current up
the chain one snapshot at a time is wasteful, comparing to a
single update to the current snapshot at the end of the algorithm.
* src/qemu/qemu_driver.c (snap_remove): Add field.
(qemuDomainSnapshotDiscard): Add parameter.
(qemuDomainSnapshotDiscardDescendant): Adjust accordingly.
(qemuDomainSnapshotDelete): Properly reset current.
This one's nasty. Ever since we fixed virHashForEach to prevent
nested hash iterations for safety reasons (commit fba550f6),
virDomainSnapshotDelete with VIR_DOMAIN_SNAPSHOT_DELETE_CHILDREN
has been broken for qemu: it deletes children, while leaving
grandchildren intact but pointing to a no-longer-present parent.
But even before then, the code would often appear to succeed to
clean up grandchildren, but risked memory corruption if you have
a large and deep hierarchy of snapshots.
For acting on just children, a single virHashForEach is sufficient.
But for acting on an entire subtree, it requires iteration; and
since we declared recursion as invalid, we have to switch to a
while loop. Doing this correctly requires quite a bit of overhaul,
so I added a new helper function to isolate the algorithm from the
actions, so that callers do not have to reinvent the iteration.
Note that this _still_ does not handle CHILDREN correctly if one
of the children is the current snapshot; that will be next.
* src/conf/domain_conf.h (_virDomainSnapshotDef): Add mark.
(virDomainSnapshotForEachDescendant): New prototype.
* src/libvirt_private.syms (domain_conf.h): Export it.
* src/conf/domain_conf.c (virDomainSnapshotMarkDescendant)
(virDomainSnapshotActOnDescendant)
(virDomainSnapshotForEachDescendant): New functions.
* src/qemu/qemu_driver.c (qemuDomainSnapshotDiscardChildren):
Replace...
(qemuDomainSnapshotDiscardDescendant): ...with callback that
doesn't nest hash traversal.
(qemuDomainSnapshotDelete): Use new function.
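The idea behind the iteration, as a generic sketch (this is not the
libvirt code; a flat array stands in for the hash table):
  /* Mark every descendant of 'root' with repeated flat passes
   * instead of nested/recursive traversal. */
  #include <stddef.h>
  struct snap { int parent; int mark; };   /* parent = index, -1 for none */
  static void mark_descendants(struct snap *s, size_t n, size_t root)
  {
      int changed = 1;
      s[root].mark = 1;
      while (changed) {
          changed = 0;
          for (size_t i = 0; i < n; i++) {     /* one non-nested pass per round */
              if (!s[i].mark && s[i].parent >= 0 && s[s[i].parent].mark) {
                  s[i].mark = 1;
                  changed = 1;
              }
          }
      }
  }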
Each snapshot lookup was iterating over the entire hash table, O(n),
instead of honing in directly on the hash key, amortized O(1).
Besides, fixing this means that virDomainSnapshotFindByName can now
be used inside another virHashForeach iteration (without this patch,
attempts to lookup a snapshot by name during a hash iteration will
fail due to nested iteration).
* src/conf/domain_conf.c (virDomainSnapshotFindByName): Simplify.
(virDomainSnapshotObjListSearchName): Delete unused function.
For a system checkpoint of a running or paused domain, it's fairly
easy to honor new flags for altering which state to use after the
revert. For an inactive snapshot, the revert has to be done while
there is no qemu process, so do back-to-back transitions; this also
lets us revert to inactive snapshots even for transient domains.
* src/qemu/qemu_driver.c (qemuDomainRevertToSnapshot): Support new
flags.
Commit 5e47785 broke reverts to offline system checkpoint snapshots
with older qemu, since there is no longer any code path to use
qemu -loadvm on next boot. Meanwhile, reverts to offline system
checkpoints have been broken for newer qemu, both before and
after that commit, since -loadvm no longer works to revert to
disk state without accompanying vm state. Fix both of these by
using qemu-img to revert disk state.
Meanwhile, consolidate the (now 3) clients of a qemu-img iteration
over all disks of a VM into one function, so that any future
algorithmic fixes to the FIXMEs in that function after partial
loop iterations are dealt with at once. That does mean that this
patch doesn't handle partial reverts very well, but we're not
making the situation any worse in this patch.
* src/qemu/qemu_driver.c (qemuDomainRevertToSnapshot): Use
qemu-img rather than 'qemu -loadvm' to revert to offline snapshot.
(qemuDomainSnapshotRevertInactive): New helper.
(qemuDomainSnapshotCreateInactive): Factor guts...
(qemuDomainSnapshotForEachQcow2): ...into new helper.
(qemuDomainSnapshotDiscard): Use it.
If you take a checkpoint snapshot of a running domain, then pause
qemu, then restore the snapshot, the result should be a running
domain, but the code was leaving things paused. Furthermore, if
you take a checkpoint of a paused domain, then run, then restore,
there was a brief but non-deterministic window of time where the
domain was running rather than paused. Fix both of these
discrepancies by always pausing before restoring.
Also, check that the VM is active every time lock is dropped
between two monitor calls.
Finally, straighten out the events that get emitted on each
transition.
* src/qemu/qemu_driver.c (qemuDomainRevertToSnapshot): Always
pause before reversion, and improve events.
Implement the new running/paused overrides for saved state management.
Unfortunately, for virDomainSaveImageDefineXML, the saved state
updates are write-only - I don't know of any way to expose a way
to query the current run/pause setting of an existing save image
file to the user without adding a new API or modifying the domain
xml of virDomainSaveImageGetXMLDesc to include a new element to
reflect the state bit encoded into the save image. However, I
don't think this is a show-stopper, since the API is designed to
leave the state bit alone unless an explicit flag is used to
change it.
* src/qemu/qemu_driver.c (qemuDomainSaveInternal)
(qemuDomainSaveImageOpen): Adjust signature.
(qemuDomainSaveFlags, qemuDomainManagedSave)
(qemuDomainRestoreFlags, qemuDomainSaveImageGetXMLDesc)
(qemuDomainSaveImageDefineXML, qemuDomainObjRestore): Adjust
callers.
While it is nice that snapshots and saved images remember whether
the domain was running or paused, sometimes the restoration phase
wants to guarantee a particular state (paused to allow hot-plugging,
or running without needing to call resume). This introduces new
flags to allow the control, and a later patch will implement the
flags for qemu.
* include/libvirt/libvirt.h.in (VIR_DOMAIN_SAVE_RUNNING)
(VIR_DOMAIN_SAVE_PAUSED, VIR_DOMAIN_SNAPSHOT_REVERT_RUNNING)
(VIR_DOMAIN_SNAPSHOT_REVERT_PAUSED): New flags.
* src/libvirt.c (virDomainSaveFlags, virDomainRestoreFlags)
(virDomainManagedSave, virDomainSaveImageDefineXML)
(virDomainRevertToSnapshot): Document their use, and enforce
mutual exclusion.
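A hedged usage sketch (standard public API; 'dom' and 'snap' are assumed
handles and the save path is only an example):
  /* save, but force the domain to be running after a later restore */
  virDomainSaveFlags(dom, "/tmp/dom.save", NULL, VIR_DOMAIN_SAVE_RUNNING);

  /* revert to a snapshot and leave the guest paused for hot-plugging */
  virDomainRevertToSnapshot(snap, VIR_DOMAIN_SNAPSHOT_REVERT_PAUSED);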
There are two classes of management apps that track events - one
that only cares about on/off (and only needs to track EVENT_STARTED
and EVENT_STOPPED), and one that cares about paused/running (also
tracks EVENT_SUSPENDED/EVENT_RESUMED). To keep both classes happy,
any transition that can go from inactive to paused must emit two
back-to-back events - one for started and one for suspended (since
later resuming of the domain will only send RESUMED, but the first
class isn't tracking that).
This also fixes a bug where virDomainCreateWithFlags with the
VIR_DOMAIN_START_PAUSED flag failed to start paused when restoring
from a managed save image.
* include/libvirt/libvirt.h.in (VIR_DOMAIN_EVENT_SUSPENDED_RESTORED)
(VIR_DOMAIN_EVENT_SUSPENDED_FROM_SNAPSHOT)
(VIR_DOMAIN_EVENT_RESUMED_FROM_SNAPSHOT): New sub-events.
* src/qemu/qemu_driver.c (qemuDomainRevertToSnapshot): Use them.
(qemuDomainSaveImageStartVM): Likewise, and add parameter.
(qemudDomainCreate, qemuDomainObjStart): Send suspended event when
starting paused.
(qemuDomainObjRestore): Add parameter.
(qemuDomainObjStart, qemuDomainRestoreFlags): Update callers.
* examples/domain-events/events-c/event-test.c
(eventDetailToString): Map new detail strings.
QEMU uses USB bus name "usb.0" when using the legacy -usb argument.
If we want to allow USB devices to specify their addresses with legacy
-usb, we should either drop the 0 from the address bus when the legacy
bus name is used, or just drop the 0 from the device id. This patch does
the latter.
Another solution would be to permit addressing on non-legacy USB
controllers only.
So that devices can be attached to hubs. For example, to attach to the
first port of a usb-hub that is itself on port 1:
  <hub type='usb'>
    <address type='usb' bus='0' port='1'/>
  </hub>
  <input type='mouse' bus='usb'>
    <address type='usb' bus='0' port='1.1'/>
  </input>
Also add a test entry.
Commit 6766ff10 introduced a corner case bug with snapshot creation:
if a snapshot is created, but then we hit OOM while trying to
create the return value of the function, then we have polluted the
internal directory with the snapshot metadata with no way to clean
it up from the running libvirtd.
* src/qemu/qemu_driver.c (qemuDomainSnapshotCreateXML): Don't
write metadata file on OOM condition.
Newer QEMU introduced cache=directsync for -drive; this patchset
exposes it in the libvirt layer.
* Introduced a new QEMU capability flag ($prefix_CACHE_DIRECTSYNC),
since even when $prefix_CACHE_V2 is set, we can't know whether
directsync is supported.
This patch adds the ability to make the filesystem for a filesystem
pool during a pool build.
The patch adds two new flags, no overwrite and overwrite, to control
when mkfs gets executed. By default, the patch preserves the
current behavior, i.e., if no flags are specified, pool build on a
filesystem pool only makes the directory on which the filesystem
will be mounted.
If the no overwrite flag is specified, the target device is checked
to determine if a filesystem of the type specified in the pool is
present. If a filesystem of that type is already present, mkfs is
not executed and the build call returns an error. Otherwise, mkfs
is executed and any data present on the device is overwritten.
If the overwrite flag is specified, mkfs is always executed, and any
existing data on the target device is overwritten unconditionally.
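A sketch of how a client might request each behavior (flag names as in
the public API; 'pool' is an assumed virStoragePoolPtr):
  /* refuse to run mkfs if a filesystem of the configured type already exists */
  if (virStoragePoolBuild(pool, VIR_STORAGE_POOL_BUILD_NO_OVERWRITE) < 0)
      /* an existing filesystem was detected, or the build failed */;

  /* always run mkfs, clobbering any existing data on the device */
  virStoragePoolBuild(pool, VIR_STORAGE_POOL_BUILD_OVERWRITE);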
Several users have reported problems with 'virsh start' failing because
it was encountering a managed save situation where the managed save file
was incomplete. Be more robust to this by using two different magic
numbers, so that newer libvirt can gracefully handle an incomplete file
differently than a complete one, while older libvirt will at least fail
up front rather than trying to load only to have qemu fail at the end.
Managed save is a convenience - it exists to preserve as much state
as possible; if the state was not preserved, it is reasonable to just
log that fact, then proceed with a fresh boot. On the other hand,
user saves are under user control, so we must fail, but by making
the failure message distinct, the user can better decide how to handle
the situation of an incomplete save file.
* src/qemu/qemu_driver.c (QEMUD_SAVE_PARTIAL): New define.
(qemuDomainSaveInternal): Use it to mark incomplete images.
(qemuDomainSaveImageOpen, qemuDomainObjRestore): Add parameter
that controls what to do with partial images.
(qemuDomainRestoreFlags, qemuDomainSaveImageGetXMLDesc)
(qemuDomainSaveImageDefineXML, qemuDomainObjStart): Update callers.
Based on an initial idea by Osier Yang.
In a SELinux or root-squashing NFS environment, libvirt has to go
through some hoops to create a new file that qemu can then open()
by name. Snapshots are a case where we want to guarantee an empty
file that qemu can open; also, reopening a save file to convert it
from being marked partial to complete requires a reopen to avoid
O_DIRECT headaches. Refactor some existing code to make it easier
to reuse in later patches.
* src/qemu/qemu_migration.h (qemuMigrationToFile): Drop parameter.
* src/qemu/qemu_migration.c (qemuMigrationToFile): Let cgroup do
the stat, rather than asking caller to do it and pass info down.
* src/qemu/qemu_driver.c (qemuOpenFile): New function, pulled from...
(qemuDomainSaveInternal): ...here.
(doCoreDump, qemuDomainSaveImageOpen): Use it here as well.
After supporting multi-function PCI devices, we only reserve function 1 on slot 1.
The user can use the other functions on slot 1 in the xml config file. We should
detect this wrong usage.
Currently, the lxc implementation invokes 'ip' and 'ifconfig' commands
inside a container using 'virRun'. That has the side effect of requiring
those commands to be present and to function in a manner consistent with
the usage. Some small roots (such as ttylinux) may not have 'ip' or
'ifconfig'.
This patch replaces the use of these commands with usage of
netdevice. The result is that lxc containers do not have to implement
those commands, and lxc in libvirt is only dependent on the netdevice
interface.
I've tested this patch locally against the Ubuntu libvirt version enough
to verify it's generally sane. I attempted to build upstream today, but
failed with:
  /usr/bin/ld:
  ../src/.libs/libvirt_driver_qemu.a(libvirt_driver_qemu_la-qemu_domain.o):
  undefined reference to symbol 'xmlXPathRegisterNs@@LIBXML2_2.4.30'
That's probably a local issue only, but I wanted to get this patch up and
see what others thought of it. This is Ubuntu bug
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/828211 .
Hi,
I'm seeing an issue with udev and libvirt-lxc. Libvirt-lxc creates
/dev/ptmx as a symlink to /dev/pts/ptmx. When udev starts up, it
checks the device type, sees ptmx is 'not right', and replaces it
with a 'proper' ptmx.
In lxc, /dev/ptmx is bind-mounted from /dev/pts/ptmx instead of being
symlinked, so udev sees the right device type and leaves it alone.
A patch like the following seems to work for me. Would there be
any objections to this?
udev on some systems checks the device type of /dev/ptmx, and replaces it if
not as expected. The symlink created by libvirt-lxc therefore gets replaced.
By creating it as a bind mount, the device type is correct and udev leaves it
alone.
Signed-off-by: Serge Hallyn <serge.hallyn@canonical.com>
The libvirt BlockPull API supports the use of an initial bandwidth limit but the
qemu block_stream API does not. To get the desired behavior we use the two APIs
strung together: first BlockPull, then BlockJobSetSpeed. We can do this at the
driver level to avoid duplicated code in each monitor path.
Signed-off-by: Adam Litke <agl@us.ibm.com>
Due to an unfortunate precedent in qemu, the units for the bandwidth parameter
to block_job_set_speed are different between the text monitor and the qmp
monitor. While the qmp monitor uses bytes/s, the text monitor expects MB/s.
Correct the units for the text interface.
Signed-off-by: Adam Litke <agl@us.ibm.com>
On systems with many pcpus, the sexpr returned by xend can be quite
large for dom0 when it is configured to have #vcpus = #pcpus (default).
E.g. on an 80 pcpu system, where dom0 had 80 vcpus, the sexpr details
for dom0 were 73817 bytes! Increase the maximum buffer size to 256k.
xenDaemonDomainFetch() was overwriting errors reported by
xend_get() and xend_req(). E.g. without the patch:
  error: failed Xen syscall xenDaemonDomainFetch failed to find this domain
with the patch:
  error: internal error Xend returned HTTP Content-Length of 73817, which exceeds
  maximum of 65536
Commit 2c85644b0b attempted to
fix a problem with tracking RPC messages from streams by doing
  -    if (msg->header.type == VIR_NET_REPLY) {
  +    if (msg->header.type == VIR_NET_REPLY ||
  +        (msg->header.type == VIR_NET_STREAM &&
  +         msg->header.status != VIR_NET_CONTINUE)) {
           client->nrequests--;
In other words, any stream packet with status NET_OK or NET_ERROR
would cause nrequests to be decremented. This is great if the
packet came from a synchronous virStreamFinish or virStreamAbort
API call, but wildly wrong if it came from a server-initiated abort.
The latter resulted in 'nrequests' being decremented below zero.
This then causes all I/O for that client to be stopped.
Instead of trying to infer whether we need to decrement the
nrequests field, from the message type/status, introduce an
explicit 'bool tracked' field to mark whether the virNetMessagePtr
object is subject to tracking.
Also add a virNetMessageClear function to allow a message's
contents to be cleared out without adversely impacting the
'tracked' field, as a naive memset() would do.
* src/rpc/virnetmessage.c, src/rpc/virnetmessage.h: Add
a 'bool tracked' field and virNetMessageClear() API
* daemon/remote.c, daemon/stream.c, src/rpc/virnetclientprogram.c,
src/rpc/virnetclientstream.c, src/rpc/virnetserverclient.c,
src/rpc/virnetserverprogram.c: Switch over to use
virNetMessageClear() and pass in the 'bool tracked' value
when creating messages.
Parted does not report disk size in 512 byte units, but
rather in the disk's logical sector size, which with modern
drives might be 4k.
* src/storage/parthelper.c: Remove hardcoded 512 byte sector
size
Introduced by 5e495c8b. Except for the checks of whether NUMA
is supported by the host, all the NO_SUPPORT errors are changed back. For
the NUMA checks, change them into INTERNAL_ERROR.
If the libxl driver is compiled in, then every time libvirtd
starts up on a non-Xen Dom0 host, it logs an error message.
Since this is an expected condition, we should not log at
'error' level, only 'info'.
* src/libxl/libxl_driver.c: Lower log level for certain
expected errors during driver init
It is possible (expected/likely in Fedora 15) for a cgroup controller
to be mounted in multiple locations at the same time, due to bind
mounts. Currently we leak memory if this happens, because we overwrite
the previous 'mountPoint' string. Instead just accept the first match
we find.
* src/util/cgroup.c: Only accept first match for a cgroup
controller mount
The virSecurityManagerSetProcessFDLabel method was introduced
after a mis-understanding from a conversation about SELinux
socket labelling. The virSecurityManagerSetSocketLabel method
should have been used for all such scenarios.
* src/security/security_apparmor.c,
src/security/security_driver.h, src/security/security_manager.c,
src/security/security_manager.h, src/security/security_selinux.c,
src/security/security_stack.c: Remove SetProcessFDLabel driver
It is not possible to change the label of a TCP socket once it
has been opened. When creating a TCP socket care must be taken
to ensure the socket creation label is set & then cleared.
Remove the bogus call to virSecurityManagerSetProcessFDLabel
from the lock driver guest setup code and instead make use of
virSecurityManagerSetSocketLabel
The code for creating a sanlock lockspace accidentally used
SANLK_NAME_LEN instead of SANLK_PATH_LEN for a size check.
This meant disk paths were limited to 48 bytes!
* src/locking/lock_driver_sanlock.c: Fix disk path length
check
There is no reason to forbid pausing an autodestroy domain
(not to mention that 'virsh start --paused --autodestroy'
succeeds in creating a paused autodestroy domain).
Meanwhile, qemu was failing to enforce the API documentation that
autodestroy domains cannot be saved. And while the original
documentation only mentioned save/restore, snapshots are another
form of saving that are close enough in semantics as to make no
sense on one-shot domains.
* src/qemu/qemu_driver.c (qemudDomainSuspend): Drop bogus check.
(qemuDomainSaveInternal, qemuDomainSnapshotCreateXML): Forbid
saves of autodestroy domains.
* src/libvirt.c (virDomainCreateWithFlags, virDomainCreateXML):
Document snapshot interaction.
According to qemu-kvm/qerror.c all messages start with a capital
"Device ", but the current code only scans for the lower-case "device ".
This results in "virDomainUpdateDeviceFlags()" failing to detect locked
CD-ROMs and reporting success even in the case of a failure:
# virsh qemu-monitor-command "$VM" change\ drive-ide0-0-0\ \"/var/lib/libvirt/images/ucs_2.4-0-sec4-20110714145916-dvd-amd64.iso\"
Device 'drive-ide0-0-0' is locked
# virsh update-device "$VM" /dev/stdin <<<"<disk type='file' device='cdrom'><driver name='qemu' type='raw'/><source file='/var/lib/libvirt/images/ucs_2.4-0-sec4-20110714145916-dvd-amd64.iso'/><target dev='hda' bus='ide'/><readonly/><alias name='ide0-0-0'/><address type='drive' controller='0' bus='0' unit='0'/></disk>"
Device updated successfully
Signed-off-by: Philipp Hahn <hahn@univention.de>
There have been several instances of people having problems with
a broken managed save file, and not aware that they could use
'virsh managedsave-remove dom' to fix things. Making it possible
to do this as part of starting a domain makes the same functionality
easier to find, and one less API call.
* include/libvirt/libvirt.h.in (VIR_DOMAIN_START_FORCE_BOOT): New
flag.
* src/libvirt.c (virDomainCreateWithFlags): Document it.
* src/qemu/qemu_driver.c (qemuDomainObjStart): Alter signature.
(qemuAutostartDomain, qemuDomainStartWithFlags): Update callers.
* tools/virsh.c (cmdStart): Expose it in virsh.
* tools/virsh.pod (start): Document it.
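Illustrative C-level equivalent of the new virsh option (error handling
omitted; 'dom' is an assumed virDomainPtr):
  /* boot fresh, discarding any (possibly broken) managed save image
   * instead of trying to restore from it */
  virDomainCreateWithFlags(dom, VIR_DOMAIN_START_FORCE_BOOT);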
Back in 2008 when this line of util.h was written, gnulib's verify
module didn't allow the use of multiple verify() in one file
in combination with our choice of gcc -W options. But that has
since been fixed in gnulib, and newer gnulib even maps verify()
to the C1x feature of _Static_assert, which gives even nicer
diagnostics with a new enough compiler, so we might as well go
with the simpler verify().
* src/util/util.h (VIR_ENUM_IMPL): Use simpler verify, now that
gnulib module is smarter.
Commit 3261761 made it possible to use pipes instead of sockets
for outgoing tunneled migration; however, it caused a regression
because the pipe was never given a SELinux label.
* src/qemu/qemu_migration.c (doTunnelMigrate): Label outgoing pipe.
The bufferOffset has been initialized to zero in virNetMessageEncodePayloadRaw(),
so we use bufferLength to represent the length of the message that is going
to be sent to the client side.
From: Matthias Bolte <matthias.bolte@googlemail.com>
Tested-by: Jim Fehlig <jfehlig@novell.com>
Matthias provided this patch to fix an issue I encountered in the
generator with APIs containing call-by-ref long type, e.g.
int virDomainMigrateGetMaxSpeed(virDomainPtr domain,
unsigned long *bandwidth,
unsigned int flags);
Domain listing, basic information retrieval and domain life cycle
management are implemented. But currently the domain XML output
lacks the complete devices section.
The driver uses OpenWSMAN to directly communicate with a Hyper-V
server over its WS-Management interface exposed via Microsoft WinRM.
The driver is based on the work of Michael Sievers. This started in
the same master program project group at the University of Paderborn
as the ESX driver.
See Michael's blog for details: http://hyperv4libvirt.wordpress.com/
Add a generator script to generate the structs and serialization
information for OpenWSMAN.
openwsman.h collects workarounds for problems in OpenWSMAN <= 2.2.6.
There are also disabled sections that would use ws_serializer_free_mem
but can't because it's broken in OpenWSMAN <= 2.2.6. Patches to fix
this have been posted upstream.
When a user migrates a domain, libvirt saves the vm's domain XML config
on the destination host after migration. But it saves vm->def, so the
saved XML contains some garbage:
  <domain type='kvm' id='50'>
                     ^^^^^^^
  ...
  <console type='pty' tty='/dev/pts/5'>
                      ^^^^^^^^^^^^^^^^
Avoid saving unnecessary things by saving the persistent vm definition.
In case we add a new program in the future (we did that in the past and
we are going to do it again soon), the current daemon will behave badly with
a new client that wants to use the new program. Before the RPC rewrite we
used to just send an error reply to any request with an unknown program.
With the RPC rewrite in 0.9.3 the daemon just closes the connection
through which such a request was sent. This patch fixes this regression.
If a user wants to connect to a remote unix socket, e.g.
'qemu+unix://<remote>/system', currently the <remote> part is ignored,
ending up connecting to localhost. Connecting to a remote socket is not
supported and the user should use TLS/TCP/SSH instead.
On success, the 'sendkey' command does not return any data, so
any data in the reply should be considered to be an error
message
* src/qemu/qemu_monitor_text.c: Treat non-"" reply data as an
error message for 'sendkey' command
The QEMU 'sendkey' command expects keys to be encoded in the same
way as the RFB extended keycode set. Specifically it wants extended
keys to have the high bit of the first byte set, while the Linux
XT KBD driver codeset uses the low bit of the second byte. To deal
with this we introduce a new keymap 'RFB' and use that in the QEMU
driver
* include/libvirt/libvirt.h.in: Add VIR_KEYCODE_SET_RFB
* src/qemu/qemu_driver.c: Use RFB keycode set instead of XT KBD
* src/util/virkeycode-mapgen.py: Auto-generate the RFB keycode
set from the XT KBD set
* src/util/virkeycode.c: Add RFB keycode entry to table. Add a
verify check on cardinality of the codeOffset table
This API labels all sockets created until ClearSocketLabel is called in
a way that a vm can access them (i.e., they are labeled with svirt_t
based label in SELinux).
The APIs are designed to label a socket in a way that the libvirt daemon
itself is able to access it (i.e., in SELinux the label is virtd_t based
as opposed to svirt_* we use for labeling resources that need to be
accessed by a vm). The new name reflects this.
When virStreamAbort is called on a stream that has not been used yet,
quite confusing error is returned: "this function is not supported by
the connection driver". Let's just ignore such streams as there's
nothing to abort anyway.
If migration fails on the source daemon, the migration is automatically
canceled by the daemon itself. Thus we don't need to call
virDomainMigrateConfirm3(cancelled=1). Calling it doesn't cause any harm
but the resulting error message printed in logs may confuse people.
Audit all changes to the qemu vm->current_snapshot, and make them
update the saved xml file for both the previous and the new
snapshot, so that there is always at most one snapshot with
<active>1</active> in the xml, and that snapshot is used as the
current snapshot even across libvirtd restarts.
This patch does not fix the case of virDomainSnapshotDelete(,CHILDREN)
where one of the children is the current snapshot; that will be later.
* src/conf/domain_conf.h (_virDomainSnapshotDef): Alter member
type and name.
* src/conf/domain_conf.c (virDomainSnapshotDefParseString)
(virDomainSnapshotDefFormat): Update clients.
* docs/schemas/domainsnapshot.rng: Tighten rng.
* src/qemu/qemu_driver.c (qemuDomainSnapshotLoad): Reload current
snapshot.
(qemuDomainSnapshotCreateXML, qemuDomainRevertToSnapshot)
(qemuDomainSnapshotDiscard): Track current snapshot.
Changing the current vm, and writing that change to the file
system, all before a new qemu starts, is risky; it's hard to
roll back if starting the new qemu fails for some reason.
Instead of abusing vm->current_snapshot and making the command
line generator decide whether the current snapshot warrants
using -loadvm, it is better to just directly pass a snapshot all
the way through the call chain if it is to be loaded.
This frees up the last use of snapshot->def->active for qemu's
use, so the next patch can repurpose that field for tracking
which snapshot is current.
* src/qemu/qemu_command.c (qemuBuildCommandLine): Don't use active
field of snapshot.
* src/qemu/qemu_process.c (qemuProcessStart): Add a parameter.
* src/qemu/qemu_process.h (qemuProcessStart): Update prototype.
* src/qemu/qemu_migration.c (qemuMigrationPrepareAny): Update
callers.
* src/qemu/qemu_driver.c (qemudDomainCreate)
(qemuDomainSaveImageStartVM, qemuDomainObjStart)
(qemuDomainRevertToSnapshot): Likewise.
(qemuDomainSnapshotSetCurrentActive)
(qemuDomainSnapshotSetCurrentInactive): Delete unused functions.
https://bugzilla.redhat.com/show_bug.cgi?id=727709
mentions that if qemu fails to create the snapshot (such as what
happens on Fedora 15 qemu, which has qmp but where savevm is only
in hmp, and where libvirt is old enough to not try the hmp fallback),
then 'virsh snapshot-list dom' will show a garbage snapshot entry,
and the libvirt internal directory for storing snapshot metadata
will have a bogus file.
This fixes the fallout bug of polluting the snapshot-list with
garbage on failure (the root cause of the F15 bug of not having
fallback to hmp has already been fixed in newer libvirt releases).
* src/qemu/qemu_driver.c (qemuDomainSnapshotCreateXML): Allocate
memory before making snapshot, and cleanup on failure. Don't
dereference NULL if transient domain exited during snapshot creation.
* src/libvirt.c: avoid dead 'ret' assignment and silence
clang warning.
Detected by ccc-analyzer:
libvirt.c:4277:5: warning: Value stored to 'ret' is never read
ret = domain->conn->driver->domainMigrateConfirm3
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* src/qemu/qemu_migration.c: avoid dead 'ret' assignment and silence
clang warning.
Detected by ccc-analyzer:
CC libvirt_driver_qemu_la-qemu_migration.lo
qemu/qemu_migration.c:2046:5: warning: Value stored to 'ret' is never read
ret = qemuMigrationConfirm(driver, sconn, vm,
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
virFileOpenAs takes desired uid:gid as arguments, and not only uses
them for a fork/setuid/setgid when retrying failed open operations,
but additionally always forces the opened file to be owned by the
given uid:gid.
One example of the problems this causes is that, when restoring a
domain from a file that is owned by the qemu user, opening the file
chowns it to root. if dynamic_ownership=1 this is coincidentally
expected, but if dynamic_ownership=0, no existing file should ever
have its ownership changed.
This patch adds an extra check before calling fchown() - it only does
it if O_CREAT was passed to virFileOpenAs() in the openflags.
pciDeviceListSteal(pcidevs, dev) removes dev from pcidevs reducing
the length of pcidevs, so moving onto what was the next dev is wrong.
Instead callers should pop entry 0 repeatedly until pcidevs is empty.
Signed-off-by: Steve Hodgson <shodgson@solarflare.com>
Signed-off-by: Shradha Shah <sshah@solarflare.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
I was testing a virsh patch, and wanted to see if I had passed the
flags I thought. But with LIBVIRT_DEBUG in the environment, I just
saw:
14:24:52.359: 15022: debug : virDomainSnapshotNum:15586 : dom=0xc9c180, (VM: name=rhel_6-64, uuid=48f8e8e7-e14f-0e14-02f0-ce71997bdcab),
including a trailing space. This fixes the issues.
* src/libvirt.c: Log flag parameters, even if currently unused.
(VIR_DOMAIN_DEBUG_0): Drop trailing comma in log.
(VIR_DOMAIN_DEBUG_1): Split guts into...
(VIR_DOMAIN_DEBUG_2): ...new macro.
* src/qemu/qemu_monitor_json.c: Handle the "CommandNotFound" error and
report it.
* src/qemu/qemu_monitor_text.c: If a sub info command is not found,
qemu prints the output of "help info"; for other commands,
"unknown command" is printed.
Without this patch, libvirt always reports:
  An error occurred, but the cause is unknown
This patch was adapted from a patch by Osier Yang <jyang@redhat.com> to
break out detection of unrecognized text monitor commands into a separate
function.
Signed-off-by: Adam Litke <agl@us.ibm.com>
s/VIR_ERR_NO_SUPPORT/VIR_ERR_OPERATION_INVALID/
A special case is the change to lxcDomainInterfaceStats: if it's not
implemented on the platform, it reports an error like:
  lxcError(VIR_ERR_OPERATION_INVALID, "%s",
           _("interface stats not implemented on this platform"));
As the function is actually supported by the driver, an error like
VIR_ERR_NO_SUPPORT would be confusing.
Currently, a bad key code passed to send-key can cause a segmentation
fault in libvirt. For example:
  % virsh send-key --codeset win32 12
  error: End of file while reading data: Input/output error
This is caused by an overrun while scanning the keycode array.
Fix it.
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Often, we want to use XPath functions on the just-parsed document;
fold this into the parser function for convenience.
* src/util/xml.h (virXMLParseHelper): Add argument.
(virXMLParseStrHelper, virXMLParseFileHelper): Delete.
(virXMLParseCtxt, virXMLParseStringCtxt, virXMLParseFileCtxt): New
macros.
* src/libvirt_private.syms (xml.h): Remove deleted functions.
* src/util/xml.c (virXMLParseHelper): Add argument.
(virXMLParseStrHelper, virXMLParseFileHelper): Delete.
ACK was given too soon. According to the code, the xm driver is
only used for inactive domains, and has no notion of an active
domain, thus, it cannot support undefine of a running domain.
The real fix for xen needs to be in the unified driver and/or
the xend level.
This reverts commit 49186deda6.
* src/qemu/qemu_monitor_text.c: BALLOON_PREFIX was defined as
"balloon: actual=", which caused "actual=" to be stripped too early,
before the real parsing. This patch changes BALLOON_PREFIX into "balloon: ",
modifies the related functions, and also renames
"qemuMonitorParseExtraBalloonInfo" to "qemuMonitorParseBalloonInfo",
since after the change it parses all the info returned by "info balloon".
Although we are flushing the cache after some critical writes (e.g.
volume creation), after some others we do not (e.g. volume cloning).
This patch fixes that issue for volume cloning, writing the
header of a logical volume, and storage wiping.
When spice_tls is set but listen_tls is not, we don't initialize the
GnuTLS library. So any later gnutls call (e.g. during migration,
where we initialize a certificate) will access uninitialized GnuTLS
internal structs and throw an error.
Although we might now initialize GnuTLS twice, it is safe according
to the documentation:
  This function can be called many times,
  but will only do something the first time.
This patch creates two functions, virNetTLSInit and virNetTLSDeinit,
in line with the above.
Regression introduced in commit b7e5ca4.
Mingw lacks kill(), but we were only using it for a sanity check;
so we can go with one less check.
Also, on OOM error, this function should outright fail rather than
claim that the pid file was successfully read.
* src/util/virpidfile.c (virPidFileReadPathIfAlive): Skip kill
call where unsupported, and report error on OOM.
If a client had initiated a stream abort, it will have a call
waiting for a reply in the queue. If more data continues to
arrive on the stream, the abort command could mistakenly get
signalled as complete. Remove the code from async data processing
that looked for waiting calls. Add a sanity check to ensure no
async call can ever be marked as needing a reply
* src/rpc/virnetclient.c: Ensure async data packets can't
trigger a reply
A virsh command like:
  migrate --live --copy-storage-all Guest qemu+ssh://user@host/system
    --persistent --verbose
shows
  Migration: [ 0 %]
during the storage copy and does not start counting
until the ram transfer starts.
Fix this by scraping the optional disk transfer status and adding it
into the progress meter.
Otherwise the device will still be bound to the pci-stub driver even
when it's set as "managed=yes" when detaching. Of course, it won't
trigger any driver reprobing either.
If a stream gets a server initiated abort, the client may still
send an abort request before it receives the server side abort.
This causes the server to send back another abort for the
stream. Since the protocol defines that abort is the last thing
to be sent, the client gets confused by this second abort from
the server. If the stream is already shutdown, just drop any
client requested abort, rather than sending back another message.
This fixes the regression from previous versions.
Tested as follows
In one virsh session
virsh # start foo
virsh # console foo
In other virsh session
virsh # destroy foo
The first virsh session should be able to continue issuing
commands without error. Prior to this patch it saw
virsh # list
error: Failed to list active domains
error: An error occurred, but the cause is unknown
virsh # list
error: Failed to list active domains
error: no call waiting for reply with prog 536903814 vers 1 serial 9
* src/rpc/virnetserverprogram.c: Drop abort requests
for streams which no longer exist
Every active stream results in a reference being held on the
virNetServerClientPtr object. This meant that if a client quit
with any streams active, although all I/O was stopped the
virNetServerClientPtr object would leak. This causes libvirtd
to leak any file handles associated with open streams when a
client quit
To fix this, when we call virNetServerClientClose there is a
callback invoked which lets the daemon release the streams
and thus the extra references
* daemon/remote.c: Add a hook to close all streams
* daemon/stream.c, daemon/stream.h: Add API for releasing
all streams
* src/rpc/virnetserverclient.c, src/rpc/virnetserverclient.h:
Allow registration of a hook to trigger when closing client
Get rid of the #if __linux__ check in virPidFileReadPathIfAlive that
was preventing a check of a symbolic link in /proc/<pid>/exe on
non-linux platforms against an expected executable. Replace
this with a run-time check testing whether the /proc/<pid>/exe is a
symbolic link and if so call the function doing the comparison
against the expected file the link is supposed to point to.
This patch renames getPhysfn to getPhysfnDev and adds code to get the
Physical function and Virtual Function index of the direct attach linkdev (if
the direct attach interface is a SRIOV VF). The idea is to send the port
profile message to a PF if the direct attach interface is a SRIOV VF.
Signed-off-by: Roopa Prabhu <roprabhu@cisco.com>
Signed-off-by: Christian Benvenuti <benve@cisco.com>
Signed-off-by: David Wang <dwang2@cisco.com>
This patch adds the following functions to get PF/VF relationship of an SRIOV
network interface:
ifaceIsVirtualFunction: Function to check if a network interface is a SRIOV VF
ifaceGetVirtualFunctionIndex: Function to get VF index if a network interface is a SRIOV VF
ifaceGetPhysicalFunction: Function to get the PF net interface name of a SRIOV VF net interface
Signed-off-by: Roopa Prabhu <roprabhu@cisco.com>
Signed-off-by: Christian Benvenuti <benve@cisco.com>
Signed-off-by: David Wang <dwang2@cisco.com>
This patch adds the following helper functions:
pciDeviceIsVirtualFunction: Function to check if a pci device is a sriov VF
pciGetVirtualFunctionIndex: Function to get the VF index of a sriov VF
pciDeviceNetName: Function to get the network device name of a pci device
pciConfigAddressCompare: Function to compare pci config addresses
Signed-off-by: Roopa Prabhu <roprabhu@cisco.com>
Signed-off-by: Christian Benvenuti <benve@cisco.com>
Signed-off-by: David Wang <dwang2@cisco.com>
Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
This patch moves some of the sriov related pci code from node_device driver
to src/util/pci.[ch]. Some functions had to go thru name and argument list
change to accommodate the move.
Signed-off-by: Roopa Prabhu <roprabhu@cisco.com>
Signed-off-by: Christian Benvenuti <benve@cisco.com>
Signed-off-by: David Wang <dwang2@cisco.com>
In some versions of qemu, both virtio-blk-pci and virtio-net-pci
devices can have an event_idx setting that determines some details of
event processing. When it is enabled, it "reduces the number of
interrupts and exits for the guest". qemu will automatically enable
this feature when it is available, but there may be cases where this
new feature could actually make performance worse (NB: no such case
has been found so far).
As a safety switch in case such a situation is encountered in the
field, this patch adds a new attribute "event_idx" to the <driver>
element of both disk and interface devices. event_idx can be set to
"on" (to force event_idx on in case qemu has it disabled by default)
or "off" (to force event_idx off). In the case that event_idx support
isn't present in qemu, the attribute is ignored (this on the advice of
the qemu developer).
docs/formatdomain.html.in: document the new flag (marking it as
"don't mess with this!")
docs/schemas/domain.rng: add event_idx in appropriate places
src/conf/domain_conf.[ch]: add event_idx to parser and formatter
src/libvirt_private.syms: export
virDomainVirtioEventIdx(From|To)String
src/qemu/qemu_capabilities.[ch]: detect and report event_idx in
disk/net
src/qemu/qemu_command.c: add event_idx parameter to qemu commandline
when appropriate.
tests/qemuxml2argvdata/qemuxml2argv-event_idx.args,
tests/qemuxml2argvdata/qemuxml2argv-event_idx.xml,
tests/qemuxml2argvtest.c,
tests/qemuxml2xmltest.c: test cases for event_idx.
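For illustration, the attribute placement might look like this (values
are only examples, and the attribute is normally best left unset):
  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2' event_idx='off'/>
    ...
  </disk>
  <interface type='network'>
    <model type='virtio'/>
    <driver event_idx='on'/>
    ...
  </interface>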
By opening a connection to the remote qemu process ourselves and passing the
socket to qemu we get much better errors than just "migration failed"
when the connection is opened by qemu.
The core of these two functions is very similar and most of it is even
exactly the same. Factor out the core functionality into a separate
function to remove code duplication and make further changes easier.
With gcc 4.5.1:
  util/virpidfile.c: In function 'virPidFileAcquirePath':
  util/virpidfile.c:308:66: error: nested extern declaration of '_gl_verify_function2' [-Wnested-externs]
Then in tests/commandtest.c, the new virPidFile APIs need to be used.
* src/util/virpidfile.c (virPidFileAcquirePath): Move verify to
top level.
* tests/commandtest.c: Use new pid APIs.
In daemons using pidfiles to protect against concurrent
execution there is a possibility that a crash may leave a stale
pidfile on disk, which then prevents later restart of the daemon.
To avoid this problem, introduce a pair of APIs which make
use of virFileLock to ensure crash-safe & race condition-safe
pidfile acquisition & release
* src/libvirt_private.syms, src/util/virpidfile.c,
src/util/virpidfile.h: Add virPidFileAcquire and virPidFileRelease
In some cases the caller of virPidFileRead might like extra checks
to determine whether the pid just read is really the one they are
expecting. This adds virPidFileReadIfAlive which will check whether
the pid is still alive with kill(pid, 0), and (on linux only) will
look at /proc/$PID/exe
* libvirt_private.syms, util/virpidfile.c, util/virpidfile.h: Add
virPidFileReadIfAlive and virPidFileReadPathIfAlive
* network/bridge_driver.c: Use new APIs to check PID validity
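A generic sketch of the underlying checks (this is not the libvirt
wrapper itself; names and buffer sizes are illustrative):
  #include <signal.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  /* is 'pid' alive, and does it still run the binary we expect?
   * (the /proc check is Linux-only) */
  static int pid_matches(pid_t pid, const char *expected)
  {
      char link[64], exe[1024];
      ssize_t len;
      if (kill(pid, 0) < 0)
          return 0;                          /* process is gone */
      snprintf(link, sizeof(link), "/proc/%d/exe", (int)pid);
      len = readlink(link, exe, sizeof(exe) - 1);
      if (len < 0)
          return 1;                          /* no /proc: fall back to alive-only */
      exe[len] = '\0';
      return strcmp(exe, expected) == 0;
  }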
The functions for manipulating pidfiles are in util/util.{c,h}.
We will shortly be adding some further pidfile related functions.
To avoid further growing util.c, this moves the pidfile related
functions into a dedicated virpidfile.{c,h}. The functions are
also all renamed to have 'virPidFile' as their name prefix
* util/util.h, util/util.c: Remove all pidfile code
* util/virpidfile.c, util/virpidfile.h: Add new APIs for pidfile
handling.
* lxc/lxc_controller.c, lxc/lxc_driver.c, network/bridge_driver.c,
qemu/qemu_process.c: Add virpidfile.h include and adapt for API
renames
Add some simple wrappers around the fcntl() discretionary file
locking capability.
* src/util/util.c, src/util/util.h, src/libvirt_private.syms: Add
virFileLock and virFileUnlock APIs
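The underlying mechanism, as a generic sketch (plain fcntl() usage, not
the new libvirt wrappers themselves):
  #include <fcntl.h>

  /* take an advisory lock on the whole file; fails with EACCES/EAGAIN
   * if another process already holds a conflicting lock */
  static int lock_whole_file(int fd, int shared)
  {
      struct flock fl = {
          .l_type   = shared ? F_RDLCK : F_WRLCK,
          .l_whence = SEEK_SET,
          .l_start  = 0,
          .l_len    = 0,        /* 0 means "to end of file" */
      };
      return fcntl(fd, F_SETLK, &fl);
  }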
We forgot to add virDomainUndefineFlags for a couple of hypervisors.
This wires up trivial versions (since neither hypervisor supports
managed save yet, they do not need to support any flags).
* src/vbox/vbox_tmpl.c (vboxDomainCreateXML): Update caller.
(vboxDomainUndefine): Move guts...
(vboxDomainUndefineFlags): ...to new function.
* src/xenapi/xenapi_driver.c (xenapiDomainUndefine)
(xenapiDomainUndefineFlags): Likewise.
Our logic throws off analyzer tools:
  ptr var = NULL;
  if (flags == 0) flags = live ? _LIVE : _CONFIG;
  if (flags & _LIVE) do stuff
  if (flags & _CONFIG) var = non-null;
  if (flags & _LIVE) do more stuff
  else if (flags & _CONFIG) use var
the tools keep thinking that var can still be NULL in the last
if clause, adding the hint shuts them up.
* src/qemu/qemu_driver.c (qemuDomainSetBlkioParameters): Add a
static analysis hint.
While the first encountered dns host record is being parsed, it's
possible for virNetworkDef::hosts to point to memory that has been
allocated, but virNetworkDef::nhosts to still be 0. If there is a
failure during that time, virNetworkDef::hosts will be leaked.
Although this isn't currently the case for virNetworkDef::txtrecords,
it could become that way through future re-factoring, and it hurts
nothing to restructure the freeing of txtrecord data to match that of
hosts data.
The following XML:
  <serial type='udp'>
    <source mode='connect' service='9999'/>
  </serial>
is accepted by domain_conf.c but maps to the qemu command line:
  -chardev udp,host=127.0.0.1,port=2222,localaddr=(null),localport=(null)
qemu can cope with everything being omitted except the connection port, which
seems to also be the intent of the domain_conf validation, so let's not
generate bogus command lines for that case.
The defaults are empty strings for addresses and 0 for the localport.
Additionally, tweak the qemu cli parsing to handle omitted host parameters
for -serial udp.
Transient domains reject attempts to set autostart, and using
virDomainCreate to restart a domain only works on persistent
domains. Therefore, managed save makes no sense on transient
domains, and should be rejected up front rather than creating
an otherwise unrecoverable managed save file.
Besides, transient domains imply that a lot more management is
being done by the upper layer; this includes the assumption
that the upper layer is okay managing the saved state file
created by virDomainSave, and does not need to use managed save.
* src/libvirt.c: Document that transient domains are incompatible
with managed save.
* src/qemu/qemu_driver.c (qemuDomainManagedSave): Enforce it.
* src/libxl/libxl_driver.c (libxlDomainManagedSave): Likewise.
I noticed some inconsistent use of 'else'.
* src/qemu/qemu_driver.c (qemuCPUCompare)
(qemuDomainSnapshotCreateXML, qemuDomainRevertToSnapshot)
(qemuDomainSnapshotDiscard): Match coding conventions.
If a snapshot with the name already exists, virDomainSnapshotAssignDef()
just returns NULL, in which case the snapshot definition is leaked.
Currently this leak is not a big problem, since qemuDomainSnapshotLoad()
is only called once during initial startup of libvirtd.
Signed-off-by: Philipp Hahn <hahn@univention.de>
A previous commit gave the LXC driver the ability to mount
block devices for the container filesystem. Through use of
the loopback device functionality, we can build on this to
support use of plain file images for LXC filesystems.
By setting the LO_FLAGS_AUTOCLEAR flag we can ensure that
the loop device automatically disappears when the container
dies / shuts down
* src/lxc/lxc_container.c: Raise error if we see a file
based filesystem, since it should have been turned into
a loopback device already
* src/lxc/lxc_controller.c: Rewrite any filesystems of
type=file, into type=block, by binding the file image
to a free loop device
Currently the LXC driver can only populate filesystems from
host filesystems, using bind mounts. This patch allows host
block devices to be mounted. It autodetects the filesystem
format at mount time, and adds the block device to the cgroups
ACL. Example usage is
  <filesystem type='block' accessmode='passthrough'>
    <source dev='/dev/sda1'/>
    <target dir='/home'/>
  </filesystem>
* src/lxc/lxc_container.c: Mount block device filesystems
* src/lxc/lxc_controller.c: Add block device filesystems
to cgroups ACL
An application container shouldn't get a private /dev. Fix
the regression from 6d37888e6a
* src/lxc/lxc_container.c: Don't mount /dev for app containers
Detected by ccc-analyzer, reported by Alex Jia.
qemuProcessStart always calls qemuProcessWaitForMonitor with a
non-negative position, but qemuProcessAttach always calls with -1.
In the latter case, there is no log file we can scrape, so we
also should not be trying to scrape the logs if the qemu process
died at the very end.
* src/qemu/qemu_process.c (qemuProcessWaitForMonitor): Don't try
to read from log in qemuProcessAttach case.
This addresses https://bugzilla.redhat.com/show_bug.cgi?id=713728
When "defining" a new network (or one that exists but isn't currently
active) the new definition is stored in network->def, but for a
network that already exists and is active, the new definition is
stored in network->newDef, and then moved over to network->def as soon
as the network is destroyed.
However, the code that writes the dhcp and dns hosts files used by
dnsmasq was always using network->def for its information, even when
the new data was actually in network->newDef, so the hosts files
always lagged one edit behind the definition.
This patch changes the code to keep the pointer to the new definition
after it's been assigned into the network, and use it directly
(regardless of whether it's stored in network->newDef or network->def)
to construct the hosts files.
Value stored to 'ret' is never read, so remove this dead assignment.
* src/qemu/qemu_monitor_text.c: kill dead assignment.
Signed-off-by: Alex Jia <ajia@redhat.com>
Value stored to 'ret' is never read; in fact, the 'cleanup' section will
directly return -1 when the function fails, so remove this dead assignment.
* src/qemu/qemu_process.c: kill dead assignment.
Signed-off-by: Alex Jia <ajia@redhat.com>
When trying to use any SASL authentication for TCP sockets by
setting auth_tls = "sasl" in libvirtd.conf on server side, the
client will hang because the SASL session is re-locked rather than
the lock being dropped when exiting virNetSASLSessionExtKeySize()
* src/rpc/virnetsaslcontext.c: virNetSASLSessionExtKeySize drop the
lock on exit
This patch introduces an internal RPC API "virNetServerClose", which
is separate from "virNetServerFree". It closes all the socket fds
and unlinks the unix socket paths, regardless of whether the socket
is still referenced or not.
This is to address regression bug:
https://bugzilla.redhat.com/show_bug.cgi?id=725702
Detection based on gnutls_session doesn't work because GnuTLS 2.x.y
comes with a compat.h that defines gnutls_session to gnutls_session_t.
Instead detect this based on LIBGNUTLS_VERSION_MAJOR. Move this from
configure/config.h to gnutls_1_0_compat.h and make sure that all users
include gnutls_1_0_compat.h properly.
Also fix header guard in gnutls_1_0_compat.h.
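A rough sketch of what the version-based check in gnutls_1_0_compat.h
could look like; the guard and macro names here are illustrative, not the
literal contents of the header:

    #ifndef LIBVIRT_GNUTLS_1_0_COMPAT_H
    # define LIBVIRT_GNUTLS_1_0_COMPAT_H

    # include <gnutls/gnutls.h>

    /* Detect old GnuTLS by version rather than by the gnutls_session
     * symbol, which GnuTLS 2.x re-adds via its own compat.h. */
    # if LIBGNUTLS_VERSION_MAJOR < 2
    #  define GNUTLS_1_0_COMPAT 1
    # endif

    #endif /* LIBVIRT_GNUTLS_1_0_COMPAT_H */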
Leak detected by Coverity; only possible on unlikely ptsname_r
failure. Additionally, the man page for ptsname_r states that
failure is merely non-zero, not necessarily -1.
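A minimal sketch of the corrected pattern (not the virFileOpenTtyAt code
itself; buffer size and naming are arbitrary): check for any non-zero
return and free the buffer on that path.

    #define _GNU_SOURCE
    #include <stdlib.h>

    static char *
    slaveName(int masterfd)
    {
        char *name = malloc(1024);

        if (!name)
            return NULL;

        if (ptsname_r(masterfd, name, 1024) != 0) {
            free(name);    /* this free was the missing piece */
            return NULL;
        }
        return name;
    }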
* src/util/util.c (virFileOpenTtyAt): Avoid leak on ptsname_r
failure.
Coverity detected that ifaceGetNthParent had already dereferenced
'nth' prior to the conditional; all callers already complied with
passing a non-NULL pointer so make this part of the contract.
* src/util/interface.h (ifaceGetNthParent): Add annotations.
* src/util/interface.c (ifaceGetNthParent): Drop useless null check.
In virNetServerNew, Coverity didn't realize that srv->mdnsGroupName
can only be non-NULL if mdnsGroupName was non-NULL.
In virNetServerRun, Coverity didn't realize that the array is non-NULL
if the array count is non-zero.
* src/rpc/virnetserver.c (virNetServerNew): Use alternate pointer.
(virNetServerRun): Give coverity a hint.
Coverity complained that 395 out of 409 virAsprintf calls are
checked, and therefore assumed that the remaining cases are bugs
waiting to happen. But in each of these cases, a failed virAsprintf
will properly set the target string to NULL, and pass on that
failure to the caller, without wasting efforts to check the call.
Adding the ignore_value silences Coverity.
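For illustration, the pattern being silenced looks roughly like this (the
format string and variable names are made up):

    /* virAsprintf() sets the target string to NULL on failure, so the
     * caller can check the string instead of the return value;
     * ignore_value() just makes that intent explicit for Coverity. */
    ignore_value(virAsprintf(&path, "%s/%s", dir, name));
    if (!path)
        return NULL;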
* src/conf/domain_audit.c (virDomainAuditGetRdev): Ignore
virAsprintf return value, when it behaves like we need.
* src/network/bridge_driver.c (networkDnsmasqLeaseFileNameDefault)
(networkRadvdConfigFileName, networkBridgeDummyNicName)
(networkRadvdPidfileBasename): Likewise.
* src/util/storage_file.c (absolutePathFromBaseFile): Likewise.
* src/openvz/openvz_driver.c (openvzGenerateContainerVethName):
Likewise.
* src/util/command.c (virCommandTranslateStatus): Likewise.
Quite a few leaks detected by coverity. For chr, the leaks were
close enough to the allocations to plug in place; for disk, the
leaks were separated from the allocation by enough other lines with
intermediate failure cases that I refactored the cleanup instead.
* src/qemu/qemu_command.c (qemuParseCommandLine): Plug leaks.
Warning detected by Coverity. No need for the NULL check, and
removing it silences the warning without any semantic change.
* src/qemu/qemu_migration.c (qemuMigrationFinish): All entries to
endjob had non-NULL vm.
Detected by Coverity. Freeing the wrong variable results in both
a memory leak and the likelihood of the caller dereferencing through
a freed pointer.
* src/rpc/virnettlscontext.c (virNetTLSSessionNew): Free correct
variable.
Coverity detected that 5 of 6 callers of virJSONValueArrayGet checked
for a NULL return; and that by not checking we risk a null deref
during an error. The error is unlikely since the prior call to
virJSONValueArraySize would probably have already caught any botched
JSON array parse, but better safe than sorry.
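The added guard amounts to something like the following fragment
(simplified; the surrounding loop and the error reporting done in
qemu_monitor_json.c are elided):

    for (i = 0; i < virJSONValueArraySize(data); i++) {
        virJSONValuePtr entry = virJSONValueArrayGet(data, i);

        if (!entry) {
            /* report an internal error about the malformed reply here */
            goto cleanup;
        }
        /* ... extract fields from entry as before ... */
    }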
* src/qemu/qemu_monitor_json.c (qemuMonitorJSONGetBlockJobInfo):
Check for NULL.
(qemuMonitorJSONExtractPtyPaths): Fix typo.
Detected by Coverity. We want to compare the result of fnmatch 'rv',
not our pre-set return value 'ret'.
* src/rpc/virnetsaslcontext.c (virNetSASLContextCheckIdentity):
Check correct variable.
Revert 6a1f5f568f. Now that libvirt_iohelper takes fds by
inheritance rather than by open() (commit 1eb66479), there is
no longer a race where the parent can unlink() a file prior to
the iohelper open()ing the same file. From there, it makes
more sense to have the callers both create and unlink, rather
than the caller create and the stream unlink, since the latter
was only needed when iohelper had to do the unlink.
* src/fdstream.h (virFDStreamOpenFile, virFDStreamCreateFile):
Callers are responsible for deletion.
* src/fdstream.c (virFDStreamOpenFileInternal): Don't leak created
file on failure.
(virFDStreamOpenFile, virFDStreamCreateFile): Drop parameter.
* src/lxc/lxc_driver.c (lxcDomainOpenConsole): Update callers.
* src/qemu/qemu_driver.c (qemuDomainScreenshot)
(qemuDomainOpenConsole): Likewise.
* src/storage/storage_driver.c (storageVolumeDownload)
(storageVolumeUpload): Likewise.
* src/uml/uml_driver.c (umlDomainOpenConsole): Likewise.
* src/vbox/vbox_tmpl.c (vboxDomainScreenshot): Likewise.
* src/xen/xen_driver.c (xenUnifiedDomainOpenConsole): Likewise.
The previous qemu patch could end up calling unlink(tmp) before
tmp was the name of a valid file (unlinking a fileXXXXXX template
instead), or calling unlink(tmp) twice on success (once here,
and once at the end of the stream). Meanwhile, vbox also suffered
from the same leaked tmp file bug.
* src/qemu/qemu_driver.c (qemuDomainScreenshot): Don't unlink on
success, or on invalid name.
* src/vbox/vbox_tmpl.c (vboxDomainScreenshot): Don't leak temp file.
Spotted by Coverity. Gnutls documents that buffer must be NULL
if gnutls_x509_crt_get_key_purpose_oid is to be used to determine
the correct size needed for allocating a buffer.
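A toy sketch of the size-query pattern the commit relies on (error
handling simplified): the first call is made with a NULL buffer so gnutls
reports the required size, then the real call uses an allocated buffer.

    #include <stdlib.h>
    #include <gnutls/x509.h>

    static char *
    getKeyPurposeOid(gnutls_x509_crt_t cert, int index)
    {
        size_t size = 0;
        char *buffer = NULL;   /* must start out NULL for the size query */

        gnutls_x509_crt_get_key_purpose_oid(cert, index, buffer, &size, NULL);

        if (size == 0 || !(buffer = malloc(size)))
            return NULL;

        if (gnutls_x509_crt_get_key_purpose_oid(cert, index, buffer,
                                                &size, NULL) < 0) {
            free(buffer);
            return NULL;
        }
        return buffer;
    }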
* src/rpc/virnettlscontext.c
(virNetTLSContextCheckCertKeyPurpose): Initialize buffer.
Spotted by coverity. If pipe2 fails, then we attempt to close
uninitialized fds, which may result in a double-close.
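A toy sketch of the fix (not the virNetServerSignalSetup code itself):
pre-set the fds to -1 so that the error path never calls close() on
uninitialized, possibly in-use descriptors.

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <unistd.h>

    static int
    setupSignalPipe(void)
    {
        int fds[2] = { -1, -1 };

        if (pipe2(fds, O_CLOEXEC | O_NONBLOCK) < 0)
            goto error;

        /* ... register fds[0] with the event loop ... */
        return 0;

     error:
        if (fds[0] != -1)
            close(fds[0]);
        if (fds[1] != -1)
            close(fds[1]);
        return -1;
    }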
* src/rpc/virnetserver.c (virNetServerSignalSetup): Initialize fds.
Steps to reproduce this problem (vm1 is not running):
for i in `seq 50`; do virsh managedsave vm1& done; killall virsh
Pre-patch, virNetServerClientClose could end up setting client->sock
to NULL prior to other cleanup functions trying to use client->sock.
This fixes things by checking for NULL in more places, and by deferring
the cleanup until after all queued messages have been served.
* src/rpc/virnetserverclient.c (virNetServerClientRegisterEvent)
(virNetServerClientGetFD, virNetServerClientIsSecure)
(virNetServerClientLocalAddrString)
(virNetServerClientRemoteAddrString): Check for closed socket.
(virNetServerClientClose): Rearrange close sequence.
Analysis from Wen Congyang.
This patch adds an internal function openvzGetVEStatus to
get the real state of the domain. This function is used in
various places in the driver, in particular to detect when
the domain has been shut down by the user with the "halt"
command.
Currently, we attempt to run a sync job and an async job at the same time,
which means that the monitor commands for the two jobs can be run in any order.
In the function qemuDomainObjEnterMonitorInternal():
    if (priv->job.active == QEMU_JOB_NONE && priv->job.asyncJob) {
        if (qemuDomainObjBeginNestedJob(driver, obj) < 0)
We check whether the caller is an async job by priv->job.active and
priv->job.asyncJob. But when an async job is running, and a sync job is
also running at the time of the check, then priv->job.active is not
QEMU_JOB_NONE. So we cannot check whether the caller is an async job
in the function qemuDomainObjEnterMonitorInternal(), and must instead
put the burden on the caller to tell us when an async command wants
to do a nested job.
Once the burden is on the caller, then only async monitor enters need
to worry about whether the VM is still running; for sync monitor enter,
the internal return is always 0, so lots of ignore_value can be dropped.
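Very roughly, callers now look something like the fragment below; this is
a sketch, and the exact parameter list of the new helper is an assumption
based on the prototype named in the file list that follows.

    /* Sync job: entering the monitor can no longer fail, so the return
     * value need not be checked. */
    qemuDomainObjEnterMonitor(driver, vm);

    /* Async job: the caller states which async job it belongs to, so a
     * nested job can be created, and must handle failure in case the
     * domain went away in the meantime. */
    if (qemuDomainObjEnterMonitorAsync(driver, vm, priv->job.asyncJob) < 0)
        goto cleanup;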
* src/qemu/THREADS.txt: Reflect new rules.
* src/qemu/qemu_domain.h (qemuDomainObjEnterMonitorAsync): New
prototype.
* src/qemu/qemu_process.h (qemuProcessStartCPUs)
(qemuProcessStopCPUs): Add parameter.
* src/qemu/qemu_migration.h (qemuMigrationToFile): Likewise.
(qemuMigrationWaitForCompletion): Make static.
* src/qemu/qemu_domain.c (qemuDomainObjEnterMonitorInternal): Add
parameter.
(qemuDomainObjEnterMonitorAsync): New function.
(qemuDomainObjEnterMonitor, qemuDomainObjEnterMonitorWithDriver):
Update callers.
* src/qemu/qemu_driver.c (qemuDomainSaveInternal)
(qemudDomainCoreDump, doCoreDump, processWatchdogEvent)
(qemudDomainSuspend, qemudDomainResume, qemuDomainSaveImageStartVM)
(qemuDomainSnapshotCreateActive, qemuDomainRevertToSnapshot):
Likewise.
* src/qemu/qemu_process.c (qemuProcessStopCPUs)
(qemuProcessFakeReboot, qemuProcessRecoverMigration)
(qemuProcessRecoverJob, qemuProcessStart): Likewise.
* src/qemu/qemu_migration.c (qemuMigrationToFile)
(qemuMigrationWaitForCompletion, qemuMigrationUpdateJobStatus)
(qemuMigrationJobStart, qemuDomainMigrateGraphicsRelocate)
(doNativeMigrate, doTunnelMigrate, qemuMigrationPerformJob)
(qemuMigrationPerformPhase, qemuMigrationFinish)
(qemuMigrationConfirm): Likewise.
* src/qemu/qemu_hotplug.c: Drop unneeded ignore_value.
Whether or not the previous return value is -1, the following code will be
executed for an inactive guest in src/qemu/qemu_driver.c:
    ret = virDomainSaveConfig(driver->configDir, persistentDef);
and if everything is okay, 'ret' is assigned 0, so the previous 'ret'
is overwritten; this patch fixes the issue.
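The shape of the fix is roughly the following (the flag name here is
illustrative; the variable names follow the prose above):

    if (flags & VIR_DOMAIN_AFFECT_CONFIG) {
        /* 'ret' may already be -1 from validating the arguments; do not
         * let a successful config save overwrite that failure. */
        if (virDomainSaveConfig(driver->configDir, persistentDef) < 0)
            ret = -1;
    }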
* src/qemu/qemu_driver.c: avoid overwriting the return value when an
argument out of the blkio weight range is given for an inactive guest.
* how to reproduce?
% virsh blkiotune ${guestname} --weight 10
% echo $?
Note: the guest must be inactive and argument 10 is out of the blkio weight
range; the error can be found in libvirtd.log, but virsh does not report
any error and the return value is 0.
https://bugzilla.redhat.com/show_bug.cgi?id=726304
Signed-off-by: Alex Jia <ajia@redhat.com>
Whether or not the previous return value is -1, the following code will be
executed for an inactive guest in qemuDomainSetMemoryParameters:
    ret = virDomainSaveConfig(driver->configDir, persistentDef);
and if everything is okay, 'ret' is assigned 0, so the previous 'ret'
is overwritten; this patch fixes the issue.
* src/qemu/qemu_driver.c: avoid overwriting the return value when setting
the min_guarante value for an inactive guest.
* how to reproduce?
% virsh memtune ${guestname} --min_guarante 1024
% echo $?
Note: the guest must be inactive; in fact, 'min_guarante' has not been
implemented as a memory tunable, and the error can be found in libvirtd.log,
but virsh does not report any error and the return value is 0.
Signed-off-by: Alex Jia <ajia@redhat.com>