We should distinguish between success and timeout, to let the user
handle those two events differently.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Before:
# virsh blockjob r7 vdc
error: An error occurred, but the cause is unknown
After:
# virsh blockjob r7 vdc
error: Disk 'vdc' not found in the domain
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1241355
Signed-off-by: Luyao Huang <lhuang@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1142631
Commit id 'e0e290552' added a check to determine whether two disks on the
same bus use the same target value. That's not quite good enough: the
target name should be checked for uniqueness regardless of bus type.
Also add a DO_TEST_DIFFERENT case to exhibit the issue.
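A minimal sketch of the corrected check follows (struct and function names
are illustrative stand-ins, not the actual libvirt code): the new disk's
target is compared against every existing disk and the bus type is
deliberately ignored.

    #include <stdio.h>
    #include <string.h>

    typedef struct {
        const char *dst;   /* target name, e.g. "vda", "sda" */
        int bus;           /* bus type; deliberately ignored by the check */
    } diskDef;

    static int disk_target_in_use(const diskDef *disks, size_t ndisks,
                                  const char *dst)
    {
        for (size_t i = 0; i < ndisks; i++) {
            if (strcmp(disks[i].dst, dst) == 0)   /* no bus comparison */
                return 1;
        }
        return 0;
    }

    int main(void)
    {
        diskDef disks[] = { { "vda", 0 }, { "sda", 1 } };

        /* "sda" clashes even though the buses differ */
        printf("%d\n", disk_target_in_use(disks, 2, "sda"));
        return 0;
    }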
This patch reverts commit 4749d82a, which tried to tweak the logic in
volume creation. Previously we reallocated and updated our object list before
executing the volume build within a specific storage backend. If the build
failed, we had to restore the object list to its original pre-build state and
delete the volume from the pool (even though it didn't exist - this truly
depends on the backend).
I misunderstood the base idea, which is to be able to poll the status of the
volume creation using vol-info. After commit 4749d82a this wasn't possible
anymore, although no BZ has been reported yet.
Commit 4749d82a also claimed to fix
https://bugzilla.redhat.com/show_bug.cgi?id=1223177, but commit c8be606b from
the same series as 4749d82ad (which was more of a refactor than a fix)
addresses the same issue, so the revert should be pretty straightforward.
Furthermore, BZ https://bugzilla.redhat.com/show_bug.cgi?id=1241454 can be
fixed with this revert.
The setting of 'val' is a boolean expression, so handle it that way and
adjust the check/return logic to be clearer.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Since a future patch will need the device path generated when adding a
shared host device, remove the qemuAddSharedHostdev and inline the two
calls into qemuAddSharedHostdev and qemuRemoveSharedHostdev
Signed-off-by: John Ferlan <jferlan@redhat.com>
Split out the current function in order to share the code with hostdev
in a future patch. Failure to match the expected sgio value against what
is stored will cause an error, which the caller needs to handle, since
only the caller has the disk-specific (or eventually hostdev-specific) data
needed to uniquely identify the disk in an error message.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Set the state of virDomainObj in the functions that
actually change the domain state, instead of the generic
libxlDomainCleanup function. This approach gives functions
calling libxlDomainCleanup more flexibility wrt when and
how they change virDomainObj state via virDomainObjSetState.
The prior approach of calling virDomainObjSetState in
libxlDomainCleanup resulted in the following incorrect
coding pattern in the various functions that change
domain state:

libxlDomain<DoStateTransition>
    call libxl function to do state transition
    emit lifecycle event
    libxlDomainCleanup
        virDomainObjSetState
One simple manifestation of this bug is seeing a domain still reported as
running in virt-manager after selecting the shutdown button, even after the
domain has long since shut down.
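The corrected ordering can be sketched as follows (all names here are
illustrative stand-ins, not the real libxl driver functions): the state
transition is recorded by the function performing it, and cleanup no longer
touches the state.

    #include <stdio.h>

    typedef enum { DOM_RUNNING, DOM_SHUTOFF } domState;
    typedef struct { domState state; } domObj;

    static void domSetState(domObj *vm, domState s) { vm->state = s; }

    /* Generic cleanup: releases resources, no longer resets vm->state. */
    static void domCleanup(domObj *vm)
    {
        (void)vm;
        printf("cleanup: release resources only\n");
    }

    /* A state-changing operation sets the state itself, at the point
     * where it knows the correct new state. */
    static void domShutdownHandler(domObj *vm)
    {
        printf("emit lifecycle event\n");
        domSetState(vm, DOM_SHUTOFF);   /* explicit, caller-controlled */
        domCleanup(vm);
    }

    int main(void)
    {
        domObj vm = { DOM_RUNNING };

        domShutdownHandler(&vm);
        printf("final state: %s\n",
               vm.state == DOM_SHUTOFF ? "shutoff" : "running");
        return 0;
    }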
In Xen, dom0 is really just another domain that supports ballooning,
adding/removing devices, changing vcpu configuration, etc. This patch
adds support to the libxl driver for managing dom0. Note that the
legacy xend driver has long supported managing dom0.
Operations that are not supported on dom0 are filtered in libvirt, where a
sensible error is reported. Errors from libxl are not always helpful. E.g.,
attempting a save on dom0 results in:
2015-06-23 15:25:05 MDT libxl: debug: libxl_dom.c:1570:libxl__toolstack_save: domain=0 toolstack data size=8
2015-06-23 15:25:05 MDT libxl: debug: libxl.c:979:do_libxl_domain_suspend: ao 0x7f7e68000b70: inprogress: poller=0x7f7e68000930, flags=i
2015-06-23 15:25:05 MDT libxl-save-helper: debug: starting save: Success
2015-06-23 15:25:05 MDT xc: detail: xc_domain_save_suse: starting save of domid 0
2015-06-23 15:25:05 MDT xc: error: Couldn't map live_shinfo (3 = No such process): Internal error
2015-06-23 15:25:05 MDT xc: detail: Save exit of domid 0 with errno=3
2015-06-23 15:25:05 MDT libxl-save-helper: debug: complete r=1: No such process
2015-06-23 15:25:05 MDT libxl: error: libxl_dom.c:1876:libxl__xc_domain_save_done: saving domain: domain did not respond to suspend request: No such process
2015-06-23 15:25:05 MDT libxl: error: libxl_dom.c:2033:remus_teardown_done: Remus: failed to teardown device for guest with domid 0, rc -8
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Add a single boolean function to determine whether the hostdev is shared or not.
Use the new function for the qemu{Add|Remove}SharedHostdev calls as well
as for qemuSetUnprivSGIO. NB: this third usage fixes a possible bug where,
if this feature were enabled at some time in the future without the shareable
flag being set, the sgio would have been erroneously set.
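The idea can be sketched roughly as below (the predicate name and struct
fields are hypothetical, not the real libvirt definitions): every call site
gates on the same predicate, so the shareable check cannot be forgotten.

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        bool shareable;
        bool is_scsi_host_subsys;   /* stand-in for the SCSI subsystem check */
    } hostdevDef;

    static bool hostdevIsShared(const hostdevDef *dev)
    {
        return dev->shareable && dev->is_scsi_host_subsys;
    }

    int main(void)
    {
        hostdevDef dev = { .shareable = false, .is_scsi_host_subsys = true };

        /* add/remove shared device and the SGIO path all use the same test */
        if (!hostdevIsShared(&dev))
            printf("not shared: skip shared bookkeeping and sgio setup\n");
        return 0;
    }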
Signed-off-by: John Ferlan <jferlan@redhat.com>
Update the descriptions for disk and hostdev sgio in order to indicate that
not all hypervisors and OSes support this feature.
Signed-off-by: John Ferlan <jferlan@redhat.com>
If a user passes an invalid address for a shared memory device to qemu,
neither libvirt nor qemu will report an error; qemu will simply auto-assign
a PCI address to the shared memory device.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Since the backend of the shmem server is a unix-type chr device, save it in
virDomainChrSourceDef so that we can reuse the existing code for chr devices.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Rename qemuBuildShmemDevCmd to qemuBuildShmemDevStr and change the
return type so that it can be reused in the device hotplug code later.
Also split the chardev creation part into a new function,
qemuBuildShmemBackendStr, for reuse in the device hotplug code later.
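As a hedged sketch of the pattern (the builder name and the device property
string below are purely illustrative, not the actual qemu driver code),
returning the device string instead of appending it directly lets both the
command-line path and a later hotplug path consume it.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Returns a heap-allocated device string, or NULL on error. */
    static char *build_shmem_dev_str(const char *name, unsigned long size_mb)
    {
        char buf[128];

        if (snprintf(buf, sizeof(buf), "ivshmem,shm=%s,size=%lum",
                     name, size_mb) >= (int)sizeof(buf))
            return NULL;
        return strdup(buf);
    }

    int main(void)
    {
        char *dev = build_shmem_dev_str("shmem0", 4);

        if (!dev)
            return 1;
        printf("-device %s\n", dev);  /* cold plug: goes on the command line */
        /* a hotplug path could hand the same string to a monitor command */
        free(dev);
        return 0;
    }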
Signed-off-by: Luyao Huang <lhuang@redhat.com>
It is better not to assume that a newly created network should be
connected to a bridge with the same name, but to specify it explicitly
via the PRL_USE_VNET_NAME_FOR_BRIDGE_NAME flag.
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
Since QEMU commit ea96bc6 [1]:

    i386: drop FDC in pc-q35-2.4+ if neither it nor floppy drives are wanted

the floppy controller is no longer implicit.
Specify it explicitly on the command line if the machine type version
is 2.4 or later.
Note that libvirt's floppy drives do not result in QEMU implying the
controller, because libvirt uses if=none instead of if=floppy.
https://bugzilla.redhat.com/show_bug.cgi?id=1227880
[1] http://git.qemu.org/?p=qemu.git;a=commitdiff;h=ea96bc6
For the implicit controller, we set them via -global. Separating them out
will allow reuse for an explicit fdc controller as well.
No functional impact apart from one extra allocation.
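A rough sketch of what the split buys us (the helper name and drive alias
are made up for illustration; the isa-fdc properties mirror how QEMU is
commonly invoked): the same property string can feed either the -global form
for the implicit controller or an explicit -device isa-fdc.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Builds the drive property for one floppy unit, e.g. "driveA=drive-fdc0". */
    static char *build_fdc_props(const char *drive_alias, int unit)
    {
        char buf[64];

        if (snprintf(buf, sizeof(buf), "drive%c=%s",
                     unit == 0 ? 'A' : 'B', drive_alias) >= (int)sizeof(buf))
            return NULL;
        return strdup(buf);
    }

    int main(void)
    {
        char *props = build_fdc_props("drive-fdc0", 0);

        if (!props)
            return 1;
        printf("-global isa-fdc.%s\n", props);  /* implicit controller */
        printf("-device isa-fdc,%s\n", props);  /* explicit controller */
        free(props);
        return 0;
    }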
The explicit 'enum' keyword does not work with portablexdr-rpcgen, causing
its parser to fail. The fix is borrowed from virnetprotocol.x.
Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
Commit 2a31c5f0 introduced support for storage pool state XMLs; however,
it also introduced a regression:
    if (!virStoragePoolObjIsActive(pool)) {
        virStoragePoolObjUnlock(pool);
        continue;
    }
The idea behind this was that since we now have state XMLs and the pool
wasn't marked as active by the autostart routine (if the autostart flag had
been set earlier), the pool is inactive and we can leave it be and continue
with other pools. However, filesystem-type pools like fs, dir, and possibly
netfs are supposed to be active if the filesystem is mounted on the host. And
this is exactly where the regression occurs: e.g. a pool of type 'dir' which
has been previously destroyed and marked as !autostart gets filtered out by
the condition above.
The resolution is simply to remove the condition completely; all pools will
get their 'active' flag updated by the check callback, and if they do not
support such a callback, the logic doesn't change and such pools will be
inactive by default (e.g. RBD, even if a state XML exists).
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1238610
Optimize the virBitmap to array-of-char bitmap conversion by skipping
trailing zero bytes.
This also fixes a regression when requesting iothread information from a
live VM: after commit 825df8c315, the bitmap returned from
virProcessGetAffinity is too big to be formatted properly via RPC. A user
would get the following error:
error: Unable to get domain IOThreads information
error: Unable to encode message payload
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1238589
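A minimal sketch of the optimization, using a plain byte array as a stand-in
for the virBitmap internals (names below are illustrative): the conversion
stops at the last non-zero byte, so a large but mostly empty affinity map
stays small on the wire.

    #include <stdio.h>
    #include <string.h>

    static size_t bitmap_bytes_trimmed(const unsigned char *map, size_t len)
    {
        while (len > 0 && map[len - 1] == 0)
            len--;                      /* drop trailing zero bytes */
        return len;
    }

    int main(void)
    {
        unsigned char map[512];

        memset(map, 0, sizeof(map));
        map[0] = 0x0f;                  /* CPUs 0-3 set, everything else clear */

        printf("sending %zu byte(s) instead of %zu\n",
               bitmap_bytes_trimmed(map, sizeof(map)), sizeof(map));
        return 0;
    }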
When using the setvcpus command with the --guest option on an offline vm,
we get this error:
# virsh setvcpus test3 1 --guest
error: Guest agent is not responding: QEMU guest agent is not connected
However, since the guest is not running, the agent cannot be connected.
In this case, reporting that the domain is not running is better than
reporting that the agent is not connected. Move the guest status check
earlier so that the error points out that the guest state is wrong.
Also, logically, a running vm is a basic requirement for using the agent;
we cannot use the agent if the vm is not running.
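The reordered checks can be sketched as follows (the struct and function
names are hypothetical stand-ins for the driver's domain object and helpers):
the liveness check comes first, so the user gets the more accurate error.

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        bool active;
        bool agent_connected;
    } dom;

    static int set_vcpus_guest(const dom *vm, unsigned int nvcpus)
    {
        if (!vm->active) {                       /* check moved earlier */
            fprintf(stderr, "error: domain is not running\n");
            return -1;
        }
        if (!vm->agent_connected) {
            fprintf(stderr, "error: QEMU guest agent is not connected\n");
            return -1;
        }
        printf("setting guest vcpus to %u\n", nvcpus);
        return 0;
    }

    int main(void)
    {
        dom vm = { .active = false, .agent_connected = false };

        return set_vcpus_guest(&vm, 1) == 0 ? 0 : 1;
    }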
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
We support only one IPv4 and one IPv6 default gateway.
If static IPs are not present in the instance config,
then we switch on DHCP for this adapter.
We call PrlVmDevNet_SetAutoApply to make the necessary settings within the
guest OS. In the Linux case it creates network startup scripts
/etc/sysconfig/network-scripts/ifcfg-ethN and fills them with the necessary
parameters.
There should be at least one domain for each guest
in capabilities, yet in the current code we don't add a
domain for this guest, for example:
    if ((guest = virCapabilitiesAddGuest(caps, VIR_DOMAIN_OSTYPE_HVM,
                                         VIR_ARCH_X86_64,
                                         "vz",
                                         NULL, 0, NULL)) == NULL)
Anyway, with two virt types it looks a little messy, so let's
move adding the guest and domain to a separate function.
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
The current version of SDK event dispatching is incorrect. For most VM events
(add, delete, etc.) the issuer type is PIE_DISPATCHER. Actually, analyzing the
issuer type doesn't have any benefits, so this patch gets rid of it. All
dispatching is done only on the event type.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Avoid a false positive: Coverity finds a path in virResizeN which
could return 0 prior to the allocation of memory and thus flags a
possible NULL dereference. Instead, allocate the output buffer based
on 'nparams' and only fill it partially if need be - it shouldn't be too
much of a waste of space. This is quicker than multiple VIR_RESIZE_N calls
or two loops of STREQs sandwiched around a single VIR_ALLOC_N using the
'n' matches from the first loop to generate the 'n' addresses to return.
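A hedged sketch of the allocation strategy (function and parameter names are
invented for illustration and are not the libvirt API): size the output array
for all 'nparams' entries up front and fill only the matches.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static char **collect_matches(char **names, size_t nparams,
                                  const char *prefix, size_t *nret)
    {
        char **out = calloc(nparams, sizeof(*out));  /* full-size allocation */
        size_t n = 0;

        if (!out)
            return NULL;
        for (size_t i = 0; i < nparams; i++) {
            if (strncmp(names[i], prefix, strlen(prefix)) == 0)
                out[n++] = names[i];                 /* partial fill */
        }
        *nret = n;
        return out;
    }

    int main(void)
    {
        char *names[] = { "net.count", "cpu.time", "net.0.rx" };
        size_t n = 0;
        char **matches = collect_matches(names, 3, "net.", &n);

        if (!matches)
            return 1;
        for (size_t i = 0; i < n; i++)
            printf("%s\n", matches[i]);
        free(matches);
        return 0;
    }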
Signed-off-by: John Ferlan <jferlan@redhat.com>
Commit id '81dd81e' caused a regression when attempting to print a
specific vcpuid that is out of the range of the maximum vcpus for
the guest, such as:
$ virsh vcpupin $dom 1000
VCPU: CPU Affinity
----------------------------------
$
Rather than just restoring the old message, let's adjust the message based
on what would be displayed for a similar failure in the set path, such as:
$ virsh vcpupin $dom 1000
error: vcpu 1000 is out of range of persistent cpu count 2
$ virsh vcpupin $dom 1000 --live
error: vcpu 1000 is out of range of live cpu count 2
$
Signed-off-by: Luyao Huang <lhuang@redhat.com>
QEMU working in vhost-user mode communicates with the other end (i.e.
some virtual router application) via unix domain sockets. This requires
that permissions for the socket files are correctly written into
/etc/apparmor.d/libvirt/libvirt-UUID.files.
Signed-off-by: Michal Dubiel <md@semihalf.com>
Inheritance among CPU models is cool, but it makes reviewing CPU model
definitions and comparing them to CPU models from QEMU rather hard and
unpleasant. Let's define all CPU models from scratch.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>