Although we require various C99 features, we don't yet require a
complete C99 compiler. On RHEL 5, compilation complained:
qemu/qemu_command.c: In function 'qemuBuildGraphicsCommandLine':
qemu/qemu_command.c:4688: error: 'for' loop initial declaration used outside C99 mode
* src/qemu/qemu_command.c (qemuBuildGraphicsCommandLine): Declare
variable sooner.
* src/qemu/qemu_process.c (qemuProcessInitPasswords): Likewise.
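For illustration, a minimal sketch of the pattern being fixed (not the
actual libvirt hunk): pre-C99 gcc rejects a declaration in the 'for'
initializer, so the loop variable has to be declared earlier in the block.

    /* Rejected on RHEL 5 (gcc without -std=gnu99):
     *     for (int i = 0; i < n; i++) ...
     * Accepted: declare the loop variable sooner. */
    static int
    sumFirst(int n)
    {
        int i;
        int sum = 0;

        for (i = 0; i < n; i++)
            sum += i;
        return sum;
    }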
The patch refactors the current ESX storage driver for the following reasons:
1. Since most of the public APIs exposed by the storage driver in libvirt
remain the same, the ESX storage driver should not implement logic specific
to only one supported format (the current implementation supports only VMFS).
2. Decoupling the interface from a specific storage implementation gives us an
extensible design to hook in implementations for other supported storage
formats.
This patch refactors the current driver as a facade, i.e. the driver exposes
all the public libvirt APIs but uses backend drivers to get the required work
done. The backend drivers provide an implementation specific to the type of
storage device.
File changes:
------------------
esx_storage_driver.c ----> esx_storage_driver.c (base storage driver)
|
|---> esx_storage_backend_vmfs.c (VMFS backend)
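A minimal sketch of the facade idea, with hypothetical names (the real
driver tables are more elaborate): the base driver only dispatches, and
the VMFS backend supplies the implementation.

    /* Hypothetical, simplified backend vtable. */
    typedef struct {
        const char *type;                          /* e.g. "vmfs" */
        int (*poolRefresh)(const char *poolName);  /* backend-specific work */
    } esxStorageBackend;

    static int
    vmfsPoolRefresh(const char *poolName)
    {
        /* VMFS-specific logic would live in esx_storage_backend_vmfs.c */
        (void)poolName;
        return 0;
    }

    static const esxStorageBackend vmfsBackend = { "vmfs", vmfsPoolRefresh };

    /* The facade exposed to libvirt: it only picks a backend and forwards. */
    static int
    esxStoragePoolRefreshFacade(const esxStorageBackend *backend,
                                const char *poolName)
    {
        return backend->poolRefresh(poolName);
    }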
When no security driver is specified, libvirt_lxc segfaults because a debug
message tries to access security labels for the container that are not
present.
This problem was introduced in commit 6c3cf57d6c.
Early jumps to the cleanup label caused a crash of the libvirt_lxc
container helper as the cleanup section called
virLXCControllerDeleteInterfaces(ctrl) without checking the ctrl argument
for NULL. The argument was dereferenced soon after.
$ /usr/libexec/libvirt_lxc
/usr/libexec/libvirt_lxc: missing --name argument for configuration
Segmentation fault
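A sketch of the fix pattern (the surrounding function is hypothetical; only
the names from the message above are real): the cleanup section must
tolerate ctrl == NULL, since an early error jumps there before ctrl is
allocated.

    static int
    run(const char *name)
    {
        virLXCControllerPtr ctrl = NULL;
        int rc = -1;

        if (!name)         /* e.g. the missing --name argument */
            goto cleanup;  /* ctrl is still NULL here */

        /* ... allocate and use ctrl ... */
        rc = 0;

    cleanup:
        if (ctrl)          /* the added NULL check prevents the segfault */
            virLXCControllerDeleteInterfaces(ctrl);
        return rc;
    }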
This will simplify the refactoring of the ESX storage driver to support
a VMFS and an iSCSI backend.
One of the tasks the storage driver needs to do is to decide which backend
driver needs to be invoked for a given request. This approach extends
virStoragePool and virStorageVol to store extra parameters:
1. privateData: stores a pointer to the respective backend storage driver.
2. privateDataFreeFunc: stores a cleanup function pointer.
virGetStoragePool and virGetStorageVol are modified to accept these extra
parameters as user params. virStoragePoolDispose and virStorageVolDispose
invoke the cleanup function if one is available.
The private data pointer allows the ESX storage driver to store a pointer
to the used backend with each storage pool and volume. This avoids the need
to detect the correct backend in each storage driver function call.
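A simplified sketch of the private-data mechanism (the field names follow
the description above, but the struct is not the real virStoragePool
layout):

    #include <stdlib.h>

    typedef void (*virFreeCallback)(void *opaque);

    typedef struct {
        char *name;
        void *privateData;                   /* backend driver pointer */
        virFreeCallback privateDataFreeFunc; /* cleanup for privateData */
    } examplePool;

    static void
    examplePoolDispose(examplePool *pool)
    {
        /* dispose only invokes the cleanup hook if one was registered */
        if (pool->privateDataFreeFunc)
            pool->privateDataFreeFunc(pool->privateData);
        free(pool->name);
        free(pool);
    }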
The new model supports the following features in addition to those supported
by SandyBridge:
fma, pcid, movbe, fsgsbase, bmi1, hle, avx2, smep, bmi2, erms, invpcid,
rtm
Commit 258e06c removed setting of the volume type to
VIR_STORAGE_VOL_BLOCK, which leads to failures in
storageVolumeCreateXMLFrom.
The type (and target.format) of the volume was set to zero. In
virStorageBackendGetBuildVolFromFunction, this gets interpreted as
VIR_STORAGE_FILE_NONE and the qemu-img tool is called with the unknown
"none" format.
Bug: https://bugzilla.redhat.com/show_bug.cgi?id=879780
bridge_driver.h: silence gcc warnings:
statement with no effect [-Wunused-value]
unused variable 'net' [-Wunused-variable]
virdrivermoduletest.c: don't require network driver module
if it hasn't been built.
The virLXCControllerClientCloseHook method was mistakenly
assuming that the private data associated with the network
client was the virLXCControllerPtr. In fact it was just a
dummy int, so we were dereferencing a bogus struct. The
frequent result of this was that we would never quit, because
we tried to arm a non-existent timer.
Fix the code by removing the dummy private data and just
using the virLXCControllerPtr instance as private data.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
It is possible for there to be deleted timers when we
calculate the next timeout, and they must be skipped.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The event code is a no-op if requested to update a non-existent
timer/handle watch. This makes it hard to detect bugs in callers
who have passed bogus data. Add a VIR_WARN output in
such cases, since the API does not allow for returning errors.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The docs for virDiskNameToIndex claim it ignores partition
numbers. In actual fact though, a code ordering bug means
that a partition number will cause the code to accidentally
multiply the result by 26.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
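A self-contained sketch of the documented behaviour (not libvirt's actual
virDiskNameToIndex, and assuming an "sd"-style prefix for brevity): "sda"
maps to 0, "sdz" to 25, "sdaa" to 26, and a trailing partition number such
as in "sda1" must be ignored rather than multiplying the result by 26.

    #include <ctype.h>
    #include <string.h>

    static int
    diskNameToIndexSketch(const char *name)
    {
        const char *p = name + strlen("sd");  /* skip the device prefix */
        int idx = 0;

        /* stop before any partition digit, so "sda1" behaves like "sda" */
        for (; *p && islower((unsigned char)*p); p++)
            idx = idx * 26 + (*p - 'a' + 1);

        return idx - 1;   /* "a" -> 0, "z" -> 25, "aa" -> 26 */
    }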
I was convinced that a space at EOL should not be there,
even in qemu help data. Hence, I removed one in
commit bb2f621611. However, it turns out we want
it exactly the way qemu produces it, so I should undo
my premature fix. A patch against qemu has been posted
as well.
Commit e0c469e58b, which fixed the detection
of the image chain, wasn't complete. Iteration through the backing image
chain has to stop at the last existing image if some of the images are
missing; otherwise the cached backing chain contains entries with
paths set to NULL, resulting in:
error: Unable to allow access for disk path (null): Bad address
Fortunately stat() is kind enough not to crash when it's presented with
a NULL argument. At least on Linux.
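A sketch of the intended stopping rule (the chain is simplified to a
NULL-terminated array of paths, not libvirt's real structures): stop
caching entries at the first missing image, so no entry ends up with a
NULL path.

    #include <stddef.h>
    #include <sys/stat.h>

    static size_t
    usableChainLength(const char *const *chain)
    {
        struct stat sb;
        size_t n = 0;

        while (chain[n]) {
            if (stat(chain[n], &sb) < 0)  /* missing image: stop here */
                break;
            n++;
        }
        return n;
    }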
The error "... but the cause is unknown" appeared for XMLs similar to
this:
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/dev/zero'/>
<target dev='sr0'/>
</disk>
Notice the unsupported disk type (for the driver), but also that no address
is specified. The first part is not a problem and we should not abort
immediately because of it, but the combination with the unknown address
was causing an unspecified error.
While fixing this, I added an error to one place where this return
value was not managed properly.
Fixes this error when building with -Werror on Alpine Linux:
util/processinfo.c: In function 'virProcessInfoSetAffinity':
util/processinfo.c:52:5: error: implicit declaration of function 'malloc' [-Werror=implicit-function-declaration]
Signed-off-by: Natanael Copa <ncopa@alpinelinux.org>
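The usual cure for an implicit malloc declaration is to include the header
that declares it; the actual patch may differ, but the idea is simply:

    #include <stdlib.h>  /* declares malloc(), fixing the implicit-declaration error */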
Currently the LXC driver logs audit messages when a container
is started or stopped. These audit messages, however, contain
the PID of the libvirt_lxc supervisor process. To enable
sysadmins to correlate with audit messages generated by
processes /inside/ the container, we need to include the
container init process PID.
We can't do this in the main 'start' audit message, since
the init PID is not available at that point. Instead we output
a completely new audit record, that lists both PIDs.
type=VIRT_CONTROL msg=audit(1353433750.071:363): pid=20180 uid=0 auid=501 ses=3 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='virt=lxc op=init vm="busy" uuid=dda7b947-0846-1759-2873-0f375df7d7eb vm-pid=20371 init-pid=20372 exe="/home/berrange/src/virt/libvirt/daemon/.libs/lt-libvirtd" hostname=? addr=? terminal=pts/6 res=success'
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The LXC controller code currently directly invokes the
libvirt main loop code. The problem is that this misses
the cleanup of virNetServerClient connections that
virNetServerRun takes care of.
The result is that when libvirtd is stopped, the
libvirt_lxc controller process gets stuck in an I/O loop.
When libvirtd is then started again, it fails to connect
to the controller and thus kills off the entire domain.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Regression introduced by commit 258e06c85b: "ret" could be set to 1
or 0 by virStorageBackendFileSystemIsMounted before the goto cleanup.
This could mislead the callers (up to the public API
virStoragePoolDestroy) into returning success even if the underlying
umount command fails.
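A sketch of the pattern being restored (the helpers are passed in as
hypothetical function pointers; this is not the actual hunk): the mount
probe's return value must not leak into "ret", and only a fully successful
operation may set it to 0.

    static int
    poolDeleteSketch(int (*isMounted)(void), int (*unmountFs)(void))
    {
        int ret = -1;

        if (isMounted() == 1 && unmountFs() < 0)
            goto cleanup;   /* umount failed: ret stays -1 */

        ret = 0;
    cleanup:
        return ret;
    }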
I have been testing libvirt v1.0.0 for deployment within my
organization, and in the process discovered what appears to be a bug
that breaks virsh attach-device, when attaching an RBD volume to an
instance. First, here is the error presented, with v1.0.0 (this worked
in v0.10.2):
[root@host ~]# virsh attach-device W5APQ8 G84VV1.xml
error: Failed to attach device from G84VV1.xml
error: cannot open file 'dc3-1-test/G84VV1': No such file or directory
Using git bisect, I narrowed the problem down to this as the first
commit to break this setup:
4d34c92947 is the first bad commit
Both generated with
qemu-system-x86_64 --help > qemu-1.2.0
qemu-system-x86_64 \
-device ? \
-device pci-assign,? \
-device virtio-blk-pci,? \
-device virtio-net-pci,? \
-device scsi-disk,? \
-device PIIX4_PM,? \
-device usb-redir,? \
-device ide-drive,? \
-device usb-host,? 2> qemu-1.2.0-device
It seems I missed a few -device flags when doing this last time and I
mixed up qemu and qemu-kvm.
Commit a4c19459aa only added the
QEMU capability flag, the command line option, and the boot element
for redirdevs in the XML schema.
This patch adds support for parsing and formatting the XML for redirdevs
with the boot flag. It also ignores unknown XML elements in redirdev
instead of failing with:
"error: An error occurred, but the cause is unknown"
Bug: https://bugzilla.redhat.com/show_bug.cgi?id=805414
If .gnulib is deleted unexpectedly, autogen.sh will fail with the message:
fatal: ambiguous argument '.gnulib': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
The message actually comes from 'git diff .gnulib', which fails because
.gnulib doesn't exist. In that case, run bootstrap to create .gnulib.
Upcoming patches for revert-and-clone branching of snapshots need
to be able to copy a domain definition; make this step reusable.
* src/conf/domain_conf.h (virDomainDefCopy): New prototype.
* src/conf/domain_conf.c (virDomainObjCopyPersistentDef): Split...
(virDomainDefCopy): ...into new function.
(virDomainObjSetDefTransient): Use it.
* src/libvirt_private.syms (domain_conf.h): Export it.
* src/qemu/qemu_driver.c (qemuDomainRevertToSnapshot): Use it.
Now that we can filter on this information, we should also make
it easy to get at.
* tools/virsh-snapshot.c (cmdSnapshotInfo): Add another output
row, and switch to XPath queries rather than strstr.
Relatively straightforward. And since qemu was already using
VIR_DOMAIN_SNAPSHOT_FILTERS_ALL, with 6 different APIs all calling
into this common code, I've instantly added all 5 flags to 6 APIs.
* src/conf/snapshot_conf.h (VIR_DOMAIN_SNAPSHOT_FILTERS_ALL):
Enable new filters.
* src/conf/snapshot_conf.c (virDomainSnapshotObjListGetNames):
Prep the new flags.
(virDomainSnapshotObjListCopyNames): Actually do the filtering.
Snapshot filtering based on types is useful enough to add
back-compat support into virsh. It is also rather easy - all
versions of libvirt that don't understand the new filter flags
already gave us sufficient information in a single XML field
to reconstruct all the information we need (that is, it isn't
until libvirt 1.0.1 that we have more interesting types of
snapshots, such as offline external).
* tools/virsh-snapshot.c (vshSnapshotFilter): New function.
(vshSnapshotListCollect): Add fallback support.
As we enable more modes of snapshot creation, it becomes more important
to be able to quickly filter based on snapshot properties. This patch
introduces new filter flags; subsequent patches will introduce virsh
back-compat filtering, as well as actual libvirt filtering.
* include/libvirt/libvirt.h.in (virDomainSnapshotListFlags): Add
five new flags in two new groups.
* src/libvirt.c (virDomainSnapshotNum, virDomainSnapshotListNames)
(virDomainListAllSnapshots, virDomainSnapshotNumChildren)
(virDomainSnapshotListChildrenNames)
(virDomainSnapshotListAllChildren): Document them.
* src/conf/snapshot_conf.h (VIR_DOMAIN_SNAPSHOT_FILTERS_STATUS)
(VIR_DOMAIN_SNAPSHOT_FILTERS_LOCATION): Add new convenience filter
collection macros.
* tools/virsh-snapshot.c (cmdSnapshotList): Add 5 new flags.
* tools/virsh.pod (snapshot-list): Document them.
This resolves:
https://bugzilla.redhat.com/show_bug.cgi?id=873134
The reported problem is that an attempt to restore a saved domain that
was configured with <currentMemory> and <memory> set to some (same for
both) number that's not a multiple of 4096KiB results in an error like
this:
error: Failed to start domain libvirt_test_api
error: XML error: current memory '4001792k' exceeds maximum '4000768k'
(in this case, currentMemory was set to 4000000KiB).
The reason for this failure is:
1) a saved image contains the "live xml" of the domain at the time of
the save.
2) the live xml of a running domain gets its currentMemory
(a.k.a. cur_balloon) directly from the qemu monitor rather than from
the configuration of the domain.
3) the value reported by qemu is (sometimes) not exactly what was
originally given to qemu when the domain was started, but is rounded
up to [some indeterminate granularity] - in some versions of qemu that
granularity is apparently 1MiB, and in others it is 4MiB.
4) When the XML is parsed to set up the state of the restored domain,
the XML parser for <currentMemory> compares it to <memory> (which is
the maximum allowed memory size for the domain) and if <currentMemory>
is greater than the next 1024KiB boundary above <memory>, it spits out
an error and fails.
For example (from the BZ) if you start qemu on RHEL6 with both
<currentMemory> and <memory> of 4000000 (this number is in KiB),
libvirt's dominfo or dumpxml will report "4001792" back (rounded up to
next 4MiB) for 10-20 seconds after the start, then revert to reporting
"4000000". On Fedora 16 (which uses qemu-1.0), it will instead report
"4000768" (rounded up to next 1MiB). On Fedora 17 (qemu-1.2), it seems
to always report "4000000". ("4000000" is of course okay, and
"4000768" is also okay since that's the next 1024KiB boundary above
"4000000" and the parser was already allowing for that. But "4001792
is *not* okay and produces the error message.)
This patch solves the problem by changing the allowed "fudge factor"
when parsing from 1024KiB to 4096KiB to match the maximum up-rounding
that could be done in qemu.
(I had earlier thought to fix this by up-rounding <memory> in the
dumpxml that's put into the saved image, but that wouldn't have fixed
the case where the save image was produced by an "unfixed"
libvirtd.)
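A simplified sketch of the relaxed check (not the parser code itself):
round <memory> up to the next 4096 KiB boundary before comparing, which
accepts the 4001792 KiB value qemu reports for a 4000000 KiB domain.

    #define MEM_ROUND_KIB 4096ULL   /* was effectively 1024 before the patch */

    static int
    checkCurrentMemory(unsigned long long memKiB, unsigned long long curKiB)
    {
        unsigned long long limit =
            ((memKiB + MEM_ROUND_KIB - 1) / MEM_ROUND_KIB) * MEM_ROUND_KIB;

        return curKiB <= limit ? 0 : -1;   /* -1 would trigger the XML error */
    }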
Prior to this patch, 'virsh nodecpumap' on older kernels reported:
error: Unable to get cpu map
error: out of memory
* src/nodeinfo.c (linuxParseCPUmax): Don't overwrite error.
(nodeGetCPUBitmap): Provide backup implementation.
On RHEL 5, I was getting a segfault trying to start libvirtd,
because we were failing virNodeParseSocket but not checking
for errors, and then calling CPU_SET(-1, &sock_map) as a result.
But if you don't have a topology/physical_package_id file,
then you can just assume that the cpu belongs to socket 0.
* src/nodeinfo.c (virNodeGetCpuValue): Change bool into
default_value.
(virNodeParseSocket): Allow for default value when file is missing,
different from fatal error on reading file.
(virNodeParseNode): Update call sites to fail on error.
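A sketch of the new behaviour (hypothetical helper, not virNodeGetCpuValue
itself): a missing sysfs file yields the caller-supplied default instead of
an error, while a present but unreadable file still fails.

    #include <stdio.h>

    static int
    readCpuValueSketch(const char *path, int default_value)
    {
        FILE *f = fopen(path, "r");
        int value;

        if (!f)
            return default_value;   /* e.g. no physical_package_id: socket 0 */

        if (fscanf(f, "%d", &value) != 1)
            value = -1;             /* an existing but unreadable file is fatal */
        fclose(f);
        return value;
    }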
This reverts commits 5f63a5cb42
and ff86b0c97b. After much list
discussion, consensus was that libvirt aliases should be reserved
to correct typos, otherwise it risks confusion. Rather, we
should implement a way for users to provide their own aliases
as part of their virsh configuration preferences.
For disk snapshots, the user could request an external snapshot
but not supply a filename; later on, we would check this condition
and generate a suitable name if possible, or gracefully error out
when not possible (such as when the original file was a block
device). But unless we come up with a suitable way to generate
external memory file names, we have no later code point that was
checking for NULL, so we should forbid this up front.
* src/conf/snapshot_conf.c (virDomainSnapshotDefParseString):
Avoid NULL deref, since we don't generate names yet.
It may take some time for sanlock to add a lockspace. And if a user
restarts the libvirtd service in the meantime, the fresh daemon can fail
to add the same lockspace with EINPROGRESS. Recent sanlock has a
sanlock_inq_lockspace() function which should block until the lockspace
changes state. If we are building against an older sanlock, we should
retry a few times before claiming an error. This issue can be easily
reproduced:
for i in {1..1000} ; do echo $i; service libvirtd restart; sleep 2; done
20
Stopping libvirtd daemon: [FAILED]
Starting libvirtd daemon: [ OK ]
21
Stopping libvirtd daemon: [ OK ]
Starting libvirtd daemon: [ OK ]
22
Stopping libvirtd daemon: [ OK ]
Starting libvirtd daemon: [ OK ]
error : virLockManagerSanlockSetupLockspace:334 : Unable to add
lockspace /var/lib/libvirt/sanlock/__LIBVIRT__DISKS__: Operation now in
progress
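A sketch of the fallback retry used when sanlock_inq_lockspace() is not
available (addLockspace here is a hypothetical wrapper that keeps returning
-EINPROGRESS while the previous daemon's lockspace is still being added):

    #include <errno.h>
    #include <unistd.h>

    static int
    addLockspaceWithRetry(int (*addLockspace)(void))
    {
        int retries = 10;   /* illustrative retry budget */
        int rc;

        while ((rc = addLockspace()) == -EINPROGRESS && retries-- > 0)
            sleep(1);       /* give sanlock time to finish the previous add */

        return rc;
    }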
Modified the places where virNodeGetInfo was used for the purpose
of obtaining the maximum node CPU number, transparently falling
back to virNodeGetInfo in case of failure.
Wrote a utility function vshNodeGetCPUCount to compute the node CPU
count.
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
Modified the places where virNodeGetInfo was used for the purpose
of obtaining the maximum node CPU number, transparently falling
back to virNodeGetInfo in case of failure.
Wrote a utility function getPyNodeCPUCount for that purpose.
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>