Currently we try to chown any directory passed to virDirCreate,
even if the user didn't request any explicit owner/group via the
pool/vol XML.
This causes issues with qemu:///session: try to build a pool from a
root-owned directory like /tmp, and it fails trying to chown the
directory to the session user. Instead it should just leave things
as they are, unless the user requests changing permissions via
the pool XML.
Similarly, it is annoying when creating a storage pool via system
libvirtd from an existing directory in a user's $HOME: the directory
ends up owned by root.
The virDirCreate function is pretty convoluted, since it needs to
fork off in certain specific cases. Try to document that, to make
it clear where exactly we are changing behavior.
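A minimal sketch of the intended logic (names illustrative; treating
-1 as "owner/group not specified in the XML"):

    /* Only chown when the XML requested an explicit owner/group;
     * otherwise leave the existing ownership alone. */
    if (uid != (uid_t)-1 || gid != (gid_t)-1) {
        if (chown(path, uid, gid) < 0) {
            virReportSystemError(errno,
                                 _("cannot chown '%s' to (%u, %u)"),
                                 path, (unsigned int)uid,
                                 (unsigned int)gid);
            goto cleanup;
        }
    }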
The current code attempts to handle this, but it only catches mkdir
failing with EEXIST. However, when e.g. building /tmp for an
unprivileged qemu:///session, mkdir fails with EPERM instead.
Rather than catch any errors, just don't attempt mkdir if the directory
already exists.
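Roughly (a sketch, not the exact diff):

    /* Skip creation entirely when the directory already exists;
     * this also avoids mkdir("/tmp") failing with EPERM under an
     * unprivileged session. */
    if (!virFileExists(path)) {
        if (mkdir(path, mode) < 0) {
            virReportSystemError(errno,
                                 _("failed to create directory '%s'"),
                                 path);
            return -1;
        }
    }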
Set the capability based on the QMP query, or on the QEMU version.
The QMP query includes vmport with 2.2, but no longer with 2.3: it
lists only non-machine-specific capabilities. So check the QEMU
version too, until a machine-specific query is supported.
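A hedged sketch of the version fallback (the capability enum name is
an assumption; the real probing code may differ):

    /* QMP cannot report machine-specific properties yet, so also
     * assume vmport support for any QEMU >= 2.2.0. */
    if (qemuCaps->version >= 2002000)
        virQEMUCapsSet(qemuCaps, QEMU_CAPS_MACHINE_VMPORT_OPT);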
Add a new feature that can be turned on or off.
The QEMU machine vmport option allows setting the VMware I/O port
emulation. This emulation is useful for absolute pointer input when the
guest has VMware input drivers, and it is enabled by default for KVM.
However, it is unnecessary for a Spice-enabled VM, since the agent
already handles absolute pointer input and multiple monitors.
Furthermore, it prevents Spice from switching to relative input, since
the regular PS/2 pointer driver is replaced by the VMware driver. It is
thus advisable to disable vmport when using a Spice VM. This permits
the Spice client to switch from absolute to relative pointer, as may be
required by certain games or applications.
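With this in place the feature can be toggled from the domain XML,
for example:

    <features>
      <vmport state='off'/>
    </features>

which the qemu driver is expected to translate into a vmport=off
suboption of -machine.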
The --maximum option wasn't properly parsed and the equivalent flag
wasn't set. Fix this bug and also rewrite the way we check this option
by using a new macro. The new approach is that --maximum requires
--config; no other combination is allowed, because the rest don't make
sense.
The new error will be:
# virsh setvcpus test --maximum 10
error: Option --config is required by option --maximum
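With the new macro, the check in cmdSetvcpus reduces to something like
(a sketch, assuming the macro simply takes the two option names):

    /* --maximum without --config is rejected up front */
    VSH_REQUIRE_OPTION("maximum", "config");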
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1204033
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Now that we have macros for exclusive flags and flag requirements, we
can use them to clean up the code for setvcpus and error out for all
wrong flag combinations.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Inspired by commit 7e437ee7, which introduced similar macros for virsh
commands so we don't have to repeat the same code all over.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Found by Laine and discussed a bit on internal IRC.
Commit id c56fe7f1d6 added support for generating the command line for
scsi-disk.channel.
Series was here:
http://www.redhat.com/archives/libvir-list/2012-February/msg01052.html
Which pointed to a design proposal here:
http://permalink.gmane.org/gmane.comp.emulators.libvirt/50428
Which states (in part):
Libvirt should check for the QEMU "scsi-disk.channel" property. If it
is unavailable, QEMU will only support channel=lun=0 and 0<=target<=7.
However, the check added was ensuring that bus != lun *and* bus != 0. So
if bus == lun and both were non-zero, we'd never make the second check.
Changing this to an *or* check fixes things, but is still less readable
than just checking each field for 0.
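To illustrate with simplified names (bus and lun standing in for the
actual drive address fields):

    /* buggy: when bus == lun, && short-circuits and the bus != 0
     * test is never made, so e.g. bus = lun = 1 slips through */
    if (bus != lun && bus != 0)
        goto error;

    /* fixed and more readable: without scsi-disk.channel only
     * channel=lun=0 is supported, so reject each non-zero value */
    if (bus != 0 || lun != 0)
        goto error;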
Recent commit 198cc1d3 introduced integration of lockd and sanlock into
libxl, but forgot to update libvirt.spec.in to also list the new files
distributed via the package.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Since the qemu capabilities are not initialized for offline VMs, the
caller might get a suboptimal error message:
$ virsh blockjob VM PATH --bandwidth 1
error: unsupported configuration: block jobs not supported with this QEMU binary
Move the checks after we make sure that the VM is alive.
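A sketch of the reordering (error strings illustrative):

    /* Check liveness first, so an offline VM yields "domain is not
     * running" rather than a misleading capability error. */
    if (!virDomainObjIsActive(vm)) {
        virReportError(VIR_ERR_OPERATION_INVALID, "%s",
                       _("domain is not running"));
        goto cleanup;
    }

    /* ... only now consult qemuCaps for block job support ... */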
Just as we allow stopping filesystem pools when they were unmounted
externally, do not fail to stop an iscsi pool when someone else
closed the session externally.
Reported at:
https://bugzilla.redhat.com/show_bug.cgi?id=1171984
The phyp driver stuffed it into a DomainDefPtr during its attachdevice
routine, but the value is never advertised via capabilities so it should
be safe to drop.
Have the phyp driver use OSTYPE_LINUX, which is what it advertises via
capabilities.
In qemuMigrationDriveMirror we can start all disk mirrors in parallel.
We wait until they are all ready, or one of them aborts.
In qemuMigrationCancelDriveMirror, we wait until all mirrors are
properly stopped. This is necessary to ensure that destination VM is
fully in sync with the (paused) source VM.
If a drive mirror cannot be cancelled, then the destination is not in a
consistent state. In this case it is not safe to continue with the
migration.
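In outline (qemuMigrationStartOneDriveMirror is a hypothetical helper
standing in for the real per-disk code):

    /* Start a mirror for every disk first ... */
    for (i = 0; i < vm->def->ndisks; i++)
        if (qemuMigrationStartOneDriveMirror(driver, vm,
                                             vm->def->disks[i]) < 0)
            goto error;

    /* ... then wait until all mirrors are ready or any one aborts;
     * on cancel, likewise wait for every mirror to fully stop
     * before treating the destination as consistent. */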
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
The !modern code path needs to call qemuBlockJobEventProcess directly;
the modern code path will call it via qemuBlockJobSyncWait.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
Other threads may be blocked in qemuBlockJobSyncWait. Ensure that
they're woken up when the domain is stopped.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
qemuBlockJobSyncBegin and qemuBlockJobSyncEnd delimit a region of code
where block job events are processed "synchronously".
qemuBlockJobSyncWait and qemuBlockJobSyncWaitWithTimeout wait for an
event generated by a block job.
The Wait* functions may be called multiple times while the synchronous
block job is active. Any pending block job event will be processed
only when Wait* or End is called. disk->blockJobStatus is reset by
these functions, so if the status is needed, a pointer to a
virConnectDomainEventBlockJobStatus variable should be passed as the
last argument. It is safe to pass NULL if you do not care about the
block job status.
All functions assume the VM object is locked. The Wait* functions will
unlock the object for as long as they are waiting. They will return -1
and report an error if the domain exits before an event is received.
Typical use is as follows:
    virQEMUDriverPtr driver;
    virDomainObjPtr vm; /* locked */
    virDomainDiskDefPtr disk;
    virConnectDomainEventBlockJobStatus status;

    qemuBlockJobSyncBegin(disk);

    ... start block job ...

    if (qemuBlockJobSyncWait(driver, vm, disk, &status) < 0) {
        /* domain died while waiting for event */
        ret = -1;
        goto error;
    }

    ... possibly start other block jobs
        or wait for further events ...

    qemuBlockJobSyncEnd(driver, vm, disk, NULL);
To perform other tasks periodically while waiting for an event:
    virQEMUDriverPtr driver;
    virDomainObjPtr vm; /* locked */
    virDomainDiskDefPtr disk;
    virConnectDomainEventBlockJobStatus status;
    unsigned long long timeout = 500 * 1000ull; /* milliseconds */

    qemuBlockJobSyncBegin(disk);

    ... start block job ...

    do {
        ... do other task ...

        if (qemuBlockJobSyncWaitWithTimeout(driver, vm, disk,
                                            timeout, &status) < 0) {
            /* domain died while waiting for event */
            ret = -1;
            goto error;
        }
    } while (status == -1);

    qemuBlockJobSyncEnd(driver, vm, disk, NULL);
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
We will want to use synchronous block jobs from qemu_migration as well,
so split this function out into a new source file.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
The documentation states that for a shallow block copy the image has to
have the same guest-visible content as the backing file of the current
image if the file is being reused. This condition can also be achieved
with a raw file (or a qcow without a backing file), so remove the
condition that would disallow it.
(This patch additionally fixes the crash described in
https://bugzilla.redhat.com/show_bug.cgi?id=1215569 )
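For example, reusing a raw destination whose content matches the
backing file of the disk's current image is now accepted (paths
hypothetical):

    $ virsh blockcopy dom vda /images/base-copy.raw \
          --shallow --reuse-external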
The free callback should be qemuMonitorChardevInfoFree rather
than just 'free' when virHashCreate'ing the chardevInfo hash.
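In effect (the hash table variable name is illustrative):

    -    info = virHashCreate(10, free);
    +    info = virHashCreate(10, qemuMonitorChardevInfoFree);

The leak showed up in valgrind as: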
==29959== 24 bytes in 2 blocks are definitely lost in loss record 19 of 53
==29959== at 0x4C29F80: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==29959== by 0xB95C679: strdup (in /lib64/libc-2.20.so)
==29959== by 0x63C6546: virStrdup (virstring.c:709)
==29959== by 0x4805ED: qemuMonitorJSONExtractChardevInfo (qemu_monitor_json.c:3429)
==29959== by 0x4807A5: qemuMonitorJSONGetChardevInfo (qemu_monitor_json.c:3479)
==29959== by 0x434AEC: testQemuMonitorJSONqemuMonitorJSONGetChardevInfo (qemumonitorjsontest.c:1824)
==29959== by 0x436F2F: virtTestRun (testutils.c:211)
==29959== by 0x436932: mymain (qemumonitorjsontest.c:2404)
==29959== by 0x4382EA: virtTestMain (testutils.c:863)
==29959== by 0x436B27: main (qemumonitorjsontest.c:2423)
Signed-off-by: Zhang Bo <oscar.zhangbo@huawei.com>
Signed-off-by: Zhou Yimin <zhouyimin@huawei.com>
It will be used in qemumonitorjsontest, thus we make it non-static.
Signed-off-by: Zhang Bo <oscar.zhangbo@huawei.com>
Signed-off-by: Zhou Yimin <zhouyimin@huawei.com>
Trying to use qemu:///session to create a storage pool pointing at
/tmp will usually fail with something like:
$ virsh pool-start tmp
error: Failed to start pool tmp
error: cannot open volume '/tmp/systemd-private-c38cf0418d7a4734a66a8175996c384f-colord.service-kEyiTA': Permission denied
If any volume in an FS pool can't be opened by the daemon, the refresh
fails, and the pool can't be used.
This causes pain for virt-install/virt-manager though. Imagine a user
downloads a disk image to /tmp. virt-manager wants to import /tmp as
a storage pool, so we can detect what disk format the image is, and set
the XML correctly. However, this case will likely fail as explained
above.
Change the logic here to skip volumes that fail to open. This could
conceivably cause user complaints along the lines of 'why doesn't
libvirt show $ROOT-OWNED-VOLUME-FOO', but given that currently
the pool won't even start up, I don't think there are any current
users that care about that case.
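The skip is roughly (a sketch; the real refresh code's error handling
differs in detail):

    if ((fd = open(vol->target.path, O_RDONLY | O_NONBLOCK)) < 0) {
        if (errno == EACCES || errno == EPERM) {
            /* Skip volumes the daemon can't read instead of
             * failing the whole pool refresh. */
            VIR_WARN("Skipping inaccessible volume '%s'",
                     vol->target.path);
            continue;
        }
        goto cleanup;
    }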
https://bugzilla.redhat.com/show_bug.cgi?id=1103308
If you end up with a state file for a pool that no longer starts up
or refreshes correctly, the state file is never removed and adds
noise to the logs every time libvirtd is started.
If the initial state syncing fails, delete the state file.
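Sketch of the cleanup (stateFile path handling elided):

    /* The pool failed to start or refresh from its saved state;
     * drop the stale state file so it stops generating errors on
     * every daemon start. */
    if (!virStoragePoolObjIsActive(pool))
        unlink(stateFile);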