Regression introduced by commit 258e06c85b: "ret" could be set to 1
or 0 by virStorageBackendFileSystemIsMounted before the goto cleanup.
This could mislead the callers (up to the public API
virStoragePoolDestroy) into returning success even if the underlying
umount command fails.
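A minimal sketch of the pattern (function names are illustrative
stand-ins, not the actual libvirt code):

    static int backendIsMounted(void);   /* returns 1 if mounted, 0 if not, -1 on error */
    static int runUmountCommand(void);   /* returns 0 on success, -1 on failure */

    /* Buggy shape: the helper's 0/1 result leaks into "ret", so a
     * later failure still returns a value the caller reads as success. */
    static int
    poolStop(void)
    {
        int ret = -1;

        if ((ret = backendIsMounted()) != 1)   /* ret becomes 0 or 1 */
            goto cleanup;

        if (runUmountCommand() < 0)
            goto cleanup;                      /* BUG: ret is still 1 here */

        ret = 0;

    cleanup:
        return ret;                            /* 1 is not < 0, so "success" */
    }

The fix is to keep the helper's result in its own variable so that
"ret" stays -1 until every step has actually succeeded.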
I have been testing libvirt v1.0.0 for deployment within my
organization, and in the process discovered what appears to be a bug
that breaks virsh attach-device, when attaching an RBD volume to an
instance. First, here is the error presented, with v1.0.0 (this worked
in v0.10.2):
[root@host ~]# virsh attach-device W5APQ8 G84VV1.xml
error: Failed to attach device from G84VV1.xml
error: cannot open file 'dc3-1-test/G84VV1': No such file or directory
Using git bisect, I narrowed the problem down to this as the first
commit to break this setup:
4d34c92947 is the first bad commit
Both generated with
qemu-system-x86_64 --help > qemu-1.2.0
qemu-system-x86_64 \
-device ? \
-device pci-assign,? \
-device virtio-blk-pci,? \
-device virtio-net-pci,? \
-device scsi-disk,? \
-device PIIX4_PM,? \
-device usb-redir,? \
-device ide-drive,? \
-device usb-host,? 2> qemu-1.2.0-device
It seems I missed a few -device flags when doing this last time and I
mixed up qemu and qemu-kvm.
Commit a4c19459aa only added the
QEMU capability flag, the command line option, and the boot element
for redirdevs in the XML schema.
This patch adds support for parsing and formatting the XML for
redirdevs with the boot flag. It also ignores unknown XML elements
in redirdev instead of failing with:
"error: An error occurred, but the cause is unknown"
Bug: https://bugzilla.redhat.com/show_bug.cgi?id=805414
If .gnulib is deleted unexpectedly, autogen.sh will fail with the message:
fatal: ambiguous argument '.gnulib': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
which actually comes from running git diff against .gnulib, which doesn't exist.
In that case, run bootstrap to create .gnulib.
Upcoming patches for revert-and-clone branching of snapshots need
to be able to copy a domain definition; make this step reusable.
* src/conf/domain_conf.h (virDomainDefCopy): New prototype.
* src/conf/domain_conf.c (virDomainObjCopyPersistentDef): Split...
(virDomainDefCopy): ...into new function.
(virDomainObjSetDefTransient): Use it.
* src/libvirt_private.syms (domain_conf.h): Export it.
* src/qemu/qemu_driver.c (qemuDomainRevertToSnapshot): Use it.
Now that we can filter on this information, we should also make
it easy to get at.
* tools/virsh-snapshot.c (cmdSnapshotInfo): Add another output
row, and switch to XPath queries rather than strstr.
Relatively straightforward. And since qemu was already using
VIR_DOMAIN_SNAPSHOT_FILTERS_ALL, with 6 different APIs all calling
into this common code, I've instantly added all 5 flags to 6 APIs.
* src/conf/snapshot_conf.h (VIR_DOMAIN_SNAPSHOT_FILTERS_ALL):
Enable new filters.
* src/conf/snapshot_conf.c (virDomainSnapshotObjListGetNames):
Prep the new flags.
(virDomainSnapshotObjListCopyNames): Actually do the filtering.
Snapshot filtering based on types is useful enough to add
back-compat support into virsh. It is also rather easy - all
versions of libvirt that don't understand the new filter flags
already gave us sufficient information in a single XML field
to reconstruct all the information we need (that is, it isn't
until libvirt 1.0.1 that we have more interesting types of
snapshots, such as offline external).
* tools/virsh-snapshot.c (vshSnapshotFilter): New function.
(vshSnapshotListCollect): Add fallback support.
As we enable more modes of snapshot creation, it becomes more important
to be able to quickly filter based on snapshot properties. This patch
introduces new filter flags; subsequent patches will introduce virsh
back-compat filtering, as well as actual libvirt filtering.
* include/libvirt/libvirt.h.in (virDomainSnapshotListFlags): Add
five new flags in two new groups.
* src/libvirt.c (virDomainSnapshotNum, virDomainSnapshotListNames)
(virDomainListAllSnapshots, virDomainSnapshotNumChildren)
(virDomainSnapshotListChildrenNames)
(virDomainSnapshotListAllChildren): Document them.
* src/conf/snapshot_conf.h (VIR_DOMAIN_SNAPSHOT_FILTERS_STATUS)
(VIR_DOMAIN_SNAPSHOT_FILTERS_LOCATION): Add new convenience filter
collection macros.
* tools/virsh-snapshot.c (cmdSnapshotList): Add 5 new flags.
* tools/virsh.pod (snapshot-list): Document them.
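As a sketch, the two new groups look like this (the bit values reflect
my reading of the libvirt 1.0.x headers and should be treated as
indicative, not authoritative):

    typedef enum {
        /* ... existing hierarchy/metadata flags ... */

        /* Filter by status of the snapshotted domain */
        VIR_DOMAIN_SNAPSHOT_LIST_INACTIVE  = 1 << 5,
        VIR_DOMAIN_SNAPSHOT_LIST_ACTIVE    = 1 << 6,
        VIR_DOMAIN_SNAPSHOT_LIST_DISK_ONLY = 1 << 7,

        /* Filter by location of the snapshot data */
        VIR_DOMAIN_SNAPSHOT_LIST_INTERNAL  = 1 << 8,
        VIR_DOMAIN_SNAPSHOT_LIST_EXTERNAL  = 1 << 9,
    } virDomainSnapshotListFlags;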
This resolves:
https://bugzilla.redhat.com/show_bug.cgi?id=873134
The reported problem is that an attempt to restore a saved domain that
was configured with <currentMemory> and <memory> set to some (same for
both) number that's not a multiple of 4096KiB results in an error like
this:
error: Failed to start domain libvirt_test_api
error: XML error: current memory '4001792k' exceeds maximum '4000768k'
(in this case, currentMemory was set to 4000000KiB).
The reason for this failure is:
1) a saved image contains the "live xml" of the domain at the time of
the save.
2) the live xml of a running domain gets its currentMemory
(a.k.a. cur_balloon) directly from the qemu monitor rather than from
the configuration of the domain.
3) the value reported by qemu is (sometimes) not exactly what was
originally given to qemu when the domain was started, but is rounded
up to [some indeterminate granularity] - in some versions of qemu that
granularity is apparently 1MiB, and in others it is 4MiB.
4) When the XML is parsed to setup the state of the restored domain,
the XML parser for <currentMemory> compares it to <memory> (which is
the maximum allowed memory size for the domain) and if <currentMemory>
is greater than the next 1024KiB boundary above <memory>, it spits out
an error and fails.
For example (from the BZ) if you start qemu on RHEL6 with both
<currentMemory> and <memory> of 4000000 (this number is in KiB),
libvirt's dominfo or dumpxml will report "4001792" back (rounded up to
next 4MiB) for 10-20 seconds after the start, then revert to reporting
"4000000". On Fedora 16 (which uses qemu-1.0), it will instead report
"4000768" (rounded up to next 1MiB). On Fedora 17 (qemu-1.2), it seems
to always report "4000000". ("4000000" is of course okay, and
"4000768" is also okay since that's the next 1024KiB boundary above
"4000000" and the parser was already allowing for that. But "4001792"
is *not* okay and produces the error message.)
This patch solves the problem by changing the allowed "fudge factor"
when parsing from 1024KiB to 4096KiB to match the maximum up-rounding
that could be done in qemu.
(I had earlier thought to fix this by up-rounding <memory> in the
dumpxml that's put into the saved image, but that wouldn't have fixed
the case where the save image was produced by an "unfixed"
libvirtd.)
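A standalone sketch of the relaxed check (illustrative, not the exact
parser code; all sizes in KiB):

    /* Allow <currentMemory> to exceed <memory> by at most qemu's
     * worst-case rounding granularity: 4096 KiB instead of the old
     * 1024 KiB. */
    static int
    checkCurrentMemory(unsigned long long memory,
                       unsigned long long cur_balloon)
    {
        unsigned long long limit = ((memory + 4095) / 4096) * 4096;

        return cur_balloon <= limit ? 0 : -1;   /* -1: "exceeds maximum" */
    }

With the numbers from the report, the limit for 4000000 becomes
4001792, so the rounded-up value qemu hands back is now accepted.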
Prior to this patch, 'virsh nodecpumap' on older kernels reported:
error: Unable to get cpu map
error: out of memory
* src/nodeinfo.c (linuxParseCPUmax): Don't overwrite error.
(nodeGetCPUBitmap): Provide backup implementation.
On RHEL 5, I was getting a segfault trying to start libvirtd,
because virNodeParseSocket was failing but we weren't checking
for errors, and were then calling CPU_SET(-1, &sock_map) as a result.
But if you don't have a topology/physical_package_id file,
then you can just assume that the CPU belongs to socket 0.
* src/nodeinfo.c (virNodeGetCpuValue): Change bool into
default_value.
(virNodeParseSocket): Allow for default value when file is missing,
different from fatal error on reading file.
(virNodeParseNode): Update call sites to fail on error.
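A sketch of the default-value idea (simplified, hypothetical
signature; the real helpers live in src/nodeinfo.c):

    #include <stdio.h>

    /* Return a caller-supplied default when the sysfs file is absent,
     * but a real error when the file exists and cannot be parsed, so
     * CPU_SET() is never reached with -1. */
    static int
    getCpuSysfsValue(const char *path, int default_value)
    {
        FILE *f = fopen(path, "r");
        int value = -1;

        if (!f)
            return default_value;  /* missing physical_package_id -> socket 0 */
        if (fscanf(f, "%d", &value) != 1)
            value = -1;            /* genuine read error: caller must fail */
        fclose(f);
        return value;
    }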
This reverts commits 5f63a5cb42
and ff86b0c97b. After much list
discussion, the consensus was that libvirt aliases should be reserved
for correcting typos; otherwise we risk confusion. Instead, we
should implement a way for users to provide their own aliases
as part of their virsh configuration preferences.
For disk snapshots, the user could request an external snapshot
but not supply a filename; later on, we would check this condition
and generate a suitable name if possible, or gracefully error out
when not possible (such as when the original file was a block
device). But until we come up with a suitable way to generate
external memory file names, there is no later code point checking
for NULL, so we should forbid this up front.
* src/conf/snapshot_conf.c (virDomainSnapshotDefParseString):
Avoid NULL deref, since we don't generate names yet.
It may take some time for sanlock to add a lockspace. And if a user
restarts the libvirtd service in the meantime, the fresh daemon can
fail to add the same lockspace with EINPROGRESS. Recent sanlock has a
sanlock_inq_lockspace() function which should block until the lockspace
changes state. If we are building against an older sanlock, we should
retry a few times before claiming an error. This issue can be easily
reproduced:
for i in {1..1000} ; do echo $i; service libvirtd restart; sleep 2; done
20
Stopping libvirtd daemon: [FAILED]
Starting libvirtd daemon: [ OK ]
21
Stopping libvirtd daemon: [ OK ]
Starting libvirtd daemon: [ OK ]
22
Stopping libvirtd daemon: [ OK ]
Starting libvirtd daemon: [ OK ]
error : virLockManagerSanlockSetupLockspace:334 : Unable to add
lockspace /var/lib/libvirt/sanlock/__LIBVIRT__DISKS__: Operation now in
progress
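A sketch of the fallback retry for builds against an older sanlock
(the retry count and delay are illustrative, not the tuned values):

    #include <errno.h>
    #include <unistd.h>
    #include <sanlock_admin.h>   /* sanlock_add_lockspace() */

    static int
    addLockspaceWithRetries(struct sanlk_lockspace *ls)
    {
        int ret;
        int retries = 10;

        /* The previous daemon's add may still be in flight; give it
         * time to finish instead of failing on the first EINPROGRESS. */
        while ((ret = sanlock_add_lockspace(ls, 0)) == -EINPROGRESS &&
               retries-- > 0)
            usleep(100 * 1000);

        return ret;
    }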
Modified the places where virNodeGetInfo was used for the purpose
of obtaining the maximum node CPU number; these now transparently
fall back to virNodeGetInfo in case of failure.
Wrote a utility function vshNodeGetCPUCount to compute the node
CPU count.
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
Modified the places where virNodeGetInfo was used for the purpose
of obtaining the maximum node CPU number; these now transparently
fall back to virNodeGetInfo in case of failure.
Wrote a utility function getPyNodeCPUCount for that purpose.
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
Since /sys/devices/system/cpu/present is not available on
older kernels such as RHEL 5.x, nodeGetCPUCount will fail
there. The fallback implemented is to scan for
/sys/devices/system/cpu/cpuNN entries.
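A standalone sketch of the fallback (illustrative; the real code uses
libvirt's own helpers):

    #include <dirent.h>
    #include <stdio.h>

    /* Count cpuNN entries under /sys/devices/system/cpu when the
     * "present" file is unavailable (e.g. RHEL 5 kernels). */
    static int
    countCpuEntries(void)
    {
        DIR *dir = opendir("/sys/devices/system/cpu");
        struct dirent *ent;
        unsigned int id;
        int ncpus = 0;

        if (!dir)
            return -1;
        while ((ent = readdir(dir)))
            if (sscanf(ent->d_name, "cpu%u", &id) == 1)
                ncpus++;   /* "cpufreq", "cpuidle" etc. fail the %u parse */
        closedir(dir);
        return ncpus;
    }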
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
This simplifies the top-level code, at the cost of using a little more
stack space. The primary benefit is being able to send more fields
without knowing in advance how many fields there will be and of which
types, and without having to add buffer variables individually.
The code imposes an upper limit on the total number of iovs/buffers
used, and fields that wouldn't fit are silently dropped. This is not
significant in this patch, but will affect the following one.
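A sketch of the bounded-buffer scheme (the limit and names are
illustrative):

    #include <string.h>
    #include <sys/uio.h>

    enum { JOURNAL_FIELDS_MAX = 64 };   /* illustrative upper limit */

    struct journalState {
        struct iovec iov[JOURNAL_FIELDS_MAX];
        size_t niov;
    };

    /* Append one field; fields beyond the limit are silently dropped
     * instead of overflowing the fixed-size array. */
    static void
    journalAddString(struct journalState *st, const char *field)
    {
        if (st->niov >= JOURNAL_FIELDS_MAX)
            return;
        st->iov[st->niov].iov_base = (void *)field;
        st->iov[st->niov].iov_len = strlen(field);
        st->niov++;
    }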
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
... and update all users. No change in functionality; the parameter
will be used later.
The metadata representation is as minimal as possible, but requires
the caller to allocate an array on the stack explicitly (see the
sketch after the list below).
The alternative of using varargs in the virLogMessage() callers:
* Would not allow the caller to optionally omit some metadata elements,
except by having two calls to virLogMessage.
* Would not be as type-safe (e.g. using int vs. size_t), and the
  compiler wouldn't be able to do type checking.
* Depending on parameter order:
a) virLogMessage(..., message format, message params...,
metadata..., NULL)
can not be portably implemented (parse_printf_format() is a glibc
function)
b) virLogMessage(..., metadata..., NULL,
message format, message params...)
would prevent usage of ATTRIBUTE_FMT_PRINTF and the associated
compiler checking.
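A sketch of what the stack-allocated representation buys (the struct,
field names and call below are hypothetical illustrations, not the
exact libvirt types):

    /* A NULL-key-terminated array lets a caller pass zero or more
     * metadata pairs through one type-checked pointer argument. */
    struct logMetadata {
        const char *key;    /* NULL key terminates the array */
        const char *value;
    };

    void logWithMetadata(const struct logMetadata *meta,
                         const char *fmt, ...);   /* hypothetical */

    /* Caller side: the array lives on the stack, and elements can be
     * omitted simply by building a shorter array. */
    struct logMetadata meta[] = {
        { "MESSAGE_ID", "c0ffee00-example" },     /* illustrative value */
        { NULL, NULL },
    };
    /* logWithMetadata(meta, "%s", msg); */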
Signed-off-by: Miloslav Trmač <mitr@redhat.com>
Ever since commit 7b21981c started generating AUTHORS, we now have
the situation that if you flip between two branches in the same
git repository that cross that commit boundary, then 'make' will
fail due to automake complaining about AUTHORS not existing. The
simplest solution is to realize that if AUTHORS does not exist,
then we flipped branches so we will need to rerun bootstrap
anyways; and rerunning bootstrap ensures AUTHORS will exist in time.
* cfg.mk (_update_required): Also depend on AUTHORS.
The "restart" function for locks allocates a new array according to
and pre-sets its length, then reads the owner pids from a JSON
document in a loop. Rather than adding each owner at a different
index, though, it repeatedly overwrites the last element of the array
with all the owners.
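A standalone reconstruction of the indexing bug (structure
simplified; names illustrative):

    #include <sys/types.h>

    struct resource {
        size_t nOwners;
        pid_t *owners;
    };

    static void
    fillOwners(struct resource *res, const pid_t *pids)
    {
        size_t i;

        for (i = 0; i < res->nOwners; i++)
            res->owners[res->nOwners - 1] = pids[i];  /* BUG: always the last slot */
            /* fix: res->owners[i] = pids[i];            one slot per owner */
    }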
This patch adds a helper to determine if snapshots are external and uses
the helper to fix detection of those in snapshot deletion code.
Snapshots are external if they have an external memory image or if the
disk locations are external. As mixed snapshots are forbidden for now
we need to check just one disk to know.
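A sketch of the helper's logic (types simplified; the location enum
follows src/conf/snapshot_conf.h):

    #include <stdbool.h>

    /* Since mixed internal/external snapshots are rejected at define
     * time, the memory image plus the first disk determine the kind. */
    static bool
    snapshotIsExternal(const virDomainSnapshotDef *def)
    {
        return def->memory == VIR_DOMAIN_SNAPSHOT_LOCATION_EXTERNAL ||
               (def->ndisks > 0 &&
                def->disks[0].snapshot == VIR_DOMAIN_SNAPSHOT_LOCATION_EXTERNAL);
    }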
qemumonitorjsontest creates a temporary directory to hold the socket
that simulates the monitor socket. The directory containing the
socket wasn't disposed of properly at the end of the test, leaving
garbage behind in the temporary folder.
When running qemumonitorjsontest on a machine under heavy load, the
test tends to deadlock from time to time. This patch adds the hack
used in virsh to break the event loop.
The AMD Bulldozer architecture uses so-called "clustered integer core
modules" that count both as threads and as cores. This patch expects
such CPUs to be detected using the new fallback condition; otherwise,
twice the number of processors would be detected.
This test data was gathered on an AMD MagnyCours machine that reports
having only one NUMA node although the hardware consists of 4. As
duplicate core IDs are ignored, the reported topology was bogus. This
should be fixed by the previous patch.
Reported and data provided by George-Cristian Bîrzan.
Lately there have been a few reports of the output of the virsh
nodeinfo command being inaccurate. This patch tries to avoid that by
checking whether the topology actually makes sense. If it doesn't, we
report a synthetic topology that indicates to the user that the host
capabilities should be checked for the actual topology.
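A sketch of the sanity check (field names follow the public
virNodeInfo struct; the synthetic fallback shown is one plausible
shape, not necessarily the committed one):

    #include <libvirt/libvirt.h>   /* virNodeInfo */

    static void
    sanitizeTopology(virNodeInfoPtr nodeinfo)
    {
        /* If the multiplied-out topology disagrees with the detected
         * CPU count, report a flat synthetic topology instead; the
         * capabilities XML still carries the real layout. */
        if (nodeinfo->nodes * nodeinfo->sockets *
            nodeinfo->cores * nodeinfo->threads != nodeinfo->cpus) {
            nodeinfo->nodes = 1;
            nodeinfo->sockets = 1;
            nodeinfo->cores = nodeinfo->cpus;
            nodeinfo->threads = 1;
        }
    }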
Currently, if a user calls virDomainAbortJob we just issue
'migrate_cancel' and hope for the best. However, if the user calls
the API in the wrong phase, when migration hasn't been started yet
(before the perform phase), the cancel request is just ignored. With
this patch, the request is remembered, and migration is cancelled as
soon as the perform phase starts.
For S390, the default console target type cannot be of type 'serial'.
It is necessary to at least interpret the 'arch' attribute
value of the os/type element to produce the correct default type.
Therefore we need to extend the signature of defaultConsoleTargetType
to account for architecture. As a consequence all the drivers
supporting this capability function must be updated.
Despite the amount of changed files, the only change in behavior is
that for S390 the default console target type will be 'virtio'.
N.B.: A more future-proof approach could be to use hypervisor-specific
capabilities to determine the best possible console type. For
instance, one could add an opaque private data pointer to the virCaps
structure (in the case of QEMU, to hold capsCache) which could then be
passed to the defaultConsoleTargetType callback to determine the
console target type. That seems, however, a bit overengineered for
this use case...
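A sketch of the extended callback (the exact parameter type in the
tree may differ; the target-type and arch identifiers are the usual
libvirt ones):

    /* Before the patch the callback saw only the OS type; with the
     * arch available, S390 guests can default to a virtio console. */
    static int
    defaultConsoleTargetType(virArch arch)
    {
        if (arch == VIR_ARCH_S390 || arch == VIR_ARCH_S390X)
            return VIR_DOMAIN_CHR_CONSOLE_TARGET_TYPE_VIRTIO;
        return VIR_DOMAIN_CHR_CONSOLE_TARGET_TYPE_SERIAL;
    }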
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
When the libvirt daemon is restarted it tries to reconnect to running
qemu domains. Since commit d38897a5d4 the
re-connection code runs in separate threads. In the original
implementation, the maximum of the domain IDs (which is used as the
initializer for numbering newly created guests) was determined while
libvirt was reconnecting to the guests.
With the threaded implementation this opens up the possibility of a
race condition with the thread that is autostarting guests. Suppose
there's a guest running with ID 1 and the daemon is restarted: if the
autostart code is reached first, it spawns the first guest to be
autostarted with ID 1. This results in the following unwanted
situation:
# virsh list
Id Name State
----------------------------------------------------
1 guest1 running
1 guest2 running
This patch extracts the detection code and runs it before the
re-connection threads are started, so that the maximum ID of the
guests being reconnected to is known in advance.
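A sketch of the extraction (helper and field names illustrative):

    /* Run single-threaded, before any reconnect or autostart thread
     * can hand out new IDs, so the counter starts above every ID
     * already in use. */
    static void
    seedNextDomainID(virDomainObjPtr *doms, size_t ndoms, int *nextid)
    {
        size_t i;

        for (i = 0; i < ndoms; i++)
            if (doms[i]->def->id >= *nextid)
                *nextid = doms[i]->def->id + 1;
    }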
The only semantic change this creates is that if the guest with the
greatest ID quits before we are able to reconnect, its ID is still
used as the greatest one, whereas without this patch the greatest ID
among the processes we could successfully reconnect to would have been
used.