This needs to be specified in way too many places for a simple validation
check. The ostype/arch/virttype validation checks later in
DomainDefParseXML should catch most of the cases that this was covering.
A destroy operation can take considerable time on large memory
domains due to scrubbing the domain's memory. Unlock the
virDomainObj while libxl_domain_destroy is executing.
Implement libxlDomainDestroyInternal wrapper to handle unlocking,
calling destroy, and locking. Change all callers of
libxl_domain_destroy to use the wrapper.
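A minimal sketch of such a wrapper, assuming the driver-wide libxl_ctx is
reachable through the driver config (the exact field and helper names here
are illustrative, not the literal upstream hunk):

static int
libxlDomainDestroyInternal(libxlDriverPrivatePtr driver,
                           virDomainObjPtr vm)
{
    libxlDriverConfigPtr cfg = libxlDriverConfigGet(driver);
    int ret;

    /* Scrubbing guest memory can take a long time on large domains;
     * drop the virDomainObj lock while libxl does the work. */
    virObjectUnlock(vm);
    ret = libxl_domain_destroy(cfg->ctx, vm->def->id, NULL);
    virObjectLock(vm);

    virObjectUnref(cfg);
    return ret;
}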
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
A job should be acquired at the beginning of a domain destroy operation,
not at the end when cleaning up the domain. Fix two occurrences of this
late job acquisition in the libxl driver. Doing so renders
libxlDomainCleanupJob unused, so it is removed.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Let callers of libxlDomainStart decide when it is appropriate to
acquire a job on the associated virDomainObj.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Add support for HVM direct kernel boot in libxl. Also add a
test to verify domXML <-> native conversions.
Signed-off-by: Chunyan Liu <cyliu@suse.com>
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Recent testing on large memory systems revealed a bug in the Xen xl
tool's freemem() function. When autoballooning is enabled, freemem()
is used to ensure enough memory is available to start a domain,
ballooning dom0 if necessary. When ballooning large amounts of memory
from dom0, freemem() would exceed its self-imposed wait time and
return an error. Meanwhile, dom0 continued to balloon. Starting the
domain later, after sufficient memory was ballooned from dom0, would
succeed. The libvirt implementation in libxlDomainFreeMem() suffers
the same bug since it is modeled after freemem().
In the end, the best place to fix the bug on the Xen side was to
slightly change the behavior of libxl_wait_for_memory_target().
Instead of failing after caller-provided wait_sec, the function now
blocks as long as dom0 memory ballooning is progressing. It will return
failure only when more memory is needed to reach the target and wait_sec
has expired with no progress being made. See xen.git commit fd3aa246.
There was a discussion on how this would affect other libxl apps like
libvirt:
http://lists.xen.org/archives/html/xen-devel/2015-03/msg00739.html
If libvirt containing this patch was built against a Xen containing
the old libxl_wait_for_memory_target() behavior, libxlDomainFreeMem()
will fail after 30 sec and domain creation will be terminated.
Without this patch and with old libxl_wait_for_memory_target() behavior,
libxlDomainFreeMem() does not succeed after 30 sec, but returns success
anyway. Domain creation continues resulting in all sorts of fun stuff
like cpu soft lockups in the guest OS. It was decided to properly fix
libxl_wait_for_memory_target(), and if anything improve the default
behavior of apps using the freemem reference impl in xl.
xl was patched to accommodate the change in libxl_wait_for_memory_target()
with xen.git commit 883b30a0. This patch does the same in the libxl
driver. While at it, I changed the logic to essentially match
freemem() in $xensrc/tools/libxl/xl_cmdimpl.c. It was a bit cleaner
IMO and will make it easier to spot future, potentially interesting
divergences.
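For reference, the reworked logic follows the shape of xl's freemem():
repeatedly balloon dom0 and wait for progress until enough memory is free.
A rough sketch follows; libxl signatures vary slightly between Xen releases,
so treat the exact types and the retry count as assumptions:

static int
libxlDomainFreeMem(libxl_ctx *ctx, libxl_domain_config *d_config)
{
    uint32_t needed_mem, free_mem;
    int retries = 3;
    int ret;

    if ((ret = libxl_domain_need_memory(ctx, &d_config->b_info,
                                        &needed_mem)) < 0)
        return ret;

    do {
        if ((ret = libxl_get_free_memory(ctx, &free_mem)) < 0)
            return ret;

        if (free_mem >= needed_mem)
            return 0;

        /* Balloon dom0 down by the shortfall (relative target)... */
        if ((ret = libxl_set_memory_target(ctx, 0,
                                           (int32_t)free_mem -
                                           (int32_t)needed_mem,
                                           /* relative */ 1,
                                           /* enforce */ 0)) < 0)
            return ret;

        /* ...and wait; with the fixed libxl this only fails once no
         * ballooning progress is being made. */
        if ((ret = libxl_wait_for_memory_target(ctx, 0, 10)) < 0)
            return ret;

        retries--;
    } while (retries > 0);

    return -1;
}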
Although needed in the Xen 4.1 libxl days, there is no longer any
benefit to having per-domain libxl_ctx. On the contrary, their use
makes the code unnecessarily complicated and prone to deadlocks under
load. As suggested by the libxl maintainers, use a single libxl_ctx
as a handle to libxl instead of per-domain ctx's.
One downside to using a single libxl_ctx is there are no longer
per-domain log files for log messages emitted by libxl. Messages
for all domains will be sent to /var/log/libvirt/libxl/libxl-driver.log.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
libxlDomainFreeMem() is only used in libxl_domain.c and thus should
be declared static. While at it, change the signature to take a
libxl_ctx instead of libxlDomainObjPrivatePtr, since only the
libxl_ctx is needed.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
This function now only enables domain death events. Simply call
libxl_evenable_domain_death() instead of an unnecessary wrapper.
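The direct call is small; a hedged sketch, where priv->deathW and cfg->ctx
are assumed names for the death-event generator and the driver-wide ctx:

    /* Enable domain death events directly instead of via a wrapper */
    if (libxl_evenable_domain_death(cfg->ctx, vm->def->id, 0,
                                    &priv->deathW) != 0)
        goto cleanup;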
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Informing libxl how to handle its child processes should be done once
during driver initialization, not once for each domain-specific
libxl_ctx object. The related libxl documentation in
$xen-src/tools/libxl/libxl_event.h even mentions that "it is best to
call this at initialisation".
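A sketch of what this looks like at driver initialization; the hook struct
contents and the chosen ownership mode are assumptions based on libxl's
child-process API:

static const libxl_childproc_hooks libxl_child_hooks = {
    /* let libxl reap only the children it created itself */
    .chldowner = libxl_sigchld_owner_libxl,
};

    /* Done once when the driver-wide libxl_ctx is created, not once
     * per domain-specific ctx */
    libxl_childproc_setmode(cfg->ctx, &libxl_child_hooks, cfg->ctx);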
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Long ago I incorrectly associated libxl fd and timer registrations
with per-domain libxl_ctx objects. When creating a libxlDomainObjPrivate,
a libxl_ctx is allocated, and libxl_osevent_register_hooks is called
passing a pointer to the libxlDomainObjPrivate. When an fd or timer
registration occurred, the registration callback received the
libxlDomainObjPrivate, containing the per-domain libxl_ctx. This
libxl_ctx was then used when informing libxl about fd events or
timer expirations.
The problem with this approach is that fd and timer registrations do not
share the same lifespan as libxlDomainObjPrivate, and hence the per-domain
libxl_ctx objects. The result is races between per-domain libxl_ctx's being
destroyed and events firing on associated fds/timers, typically manifesting
as an assert in libxl:
libxl_internal.h:2788: libxl__ctx_unlock: Assertion `!r' failed
There is no need to associate libxlDomainObjPrivate objects with libxl's
desire to use libvirt's event loop. Instead, the driver-wide libxl_ctx can
be used for the fd and timer registrations.
This patch moves the fd and timer handling code away from the
domain-specific code in libxl_domain.c into libxl_driver.c. While at it,
function names were changed a bit to better describe their purpose.
The unnecessary locking was also removed since the code simply provides a
wrapper over the event loop interface. Indeed the locks may have been
causing some deadlocks when repeatedly creating/destroying multiple domains.
There have also been rumors about such deadlocks during parallel OpenStack
Tempest runs.
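With the driver-wide ctx, the registration reduces to something like the
following; the callback names are illustrative stand-ins for thin wrappers
around virEventAddHandle/virEventAddTimeout:

static const libxl_osevent_hooks libxl_osevent_callbacks = {
    .fd_register = libxlFDRegisterEventHook,
    .fd_modify = libxlFDModifyEventHook,
    .fd_deregister = libxlFDDeregisterEventHook,
    .timeout_register = libxlTimeoutRegisterEventHook,
    .timeout_modify = libxlTimeoutModifyEventHook,
    .timeout_deregister = libxlTimeoutDeregisterEventHook,
};

    /* Register once against the driver-wide ctx; the 'user' pointer is
     * the driver, not a per-domain object */
    libxl_osevent_register_hooks(cfg->ctx, &libxl_osevent_callbacks, driver);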
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
This patch adds code that parses and formats configuration for memory
devices.
A simple configuration would be:
<memory model='dimm'>
  <target>
    <size unit='KiB'>524287</size>
    <node>0</node>
  </target>
</memory>
A complete configuration of a memory device:
<memory model='dimm'>
  <source>
    <pagesize unit='KiB'>4096</pagesize>
    <nodemask>1-3</nodemask>
  </source>
  <target>
    <size unit='KiB'>524287</size>
    <node>1</node>
  </target>
</memory>
This patch preemptively forbids use of the <memory> device in individual
drivers so the users are warned right away that the device is not
supported.
Add an XML element that allows specifying the maximum supportable memory
and the count of memory slots to use with memory hotplug.
To avoid possible confusion and misuse of the new element, this patch
also explicitly forbids the use of the maxMemory setting in individual
drivers' post parse callbacks. This limitation will be lifted when the
support is implemented.
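The per-driver prohibition is a simple post-parse check; a sketch, under
the assumption that the parsed value lands in a def->mem.max_memory field:

    /* Memory hotplug is not supported by this driver yet, so reject
     * configs that request it via the new element */
    if (def->mem.max_memory) {
        virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                       _("maximum memory size and memory slots are not "
                         "supported by this driver"));
        return -1;
    }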
With the current control flow the post parse callback returned success
right away for fully virtualized VMs. To allow adding additional checks
into the post parse callback tweak the conditions so that the function
doesn't return early except for error cases.
To clarify the original piece of code borrow the wording from the commit
message for the patch that introduced the code.
xen.git commit babeca32 added a pkgconfig file for libxenlight,
allowing libxl apps to determine the location of Xen binaries
such as firmware blobs, device emulator, etc.
This patch adds support for xenlight.pc in the libxl driver, falling
back to the previous configure logic if not found. It introduces
LIBXL_FIRMWARE_DIR and LIBXL_EXECBIN_DIR to define the firmware and
libexec_bin locations. If xenlight.pc does not exist, the defines
are set to the current hardcoded paths. The capabilities'
<emulator> and <loader> elements are updated to use the paths.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
When converting domXML from native, the libxl driver was overwriting
useful errors from the xenconfig parsing code with a useless, generic
error. E.g. "internal error: parsing xm config failed" vs
"internal error: config value usbdevice was malformed". Remove the
redundant (and useless) error reporting in the libxl driver.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Commit 4ab8cd77 added a check requiring input devices to have
a bus type of VIR_DOMAIN_INPUT_BUS_USB, failing to start the
domain otherwise. But virDomainDefParseXML adds implicit mouse
and keyboard if a graphics device is configured. See calls to
virDomainDefMaybeAddInput.
The regression is fixed by removing the check requiring USB input
devices, and skipping non-USB input devices when populating USB
'usbdevice' in libxl_domain_build_info struct.
As pointed out by jtomko in his review of the IOThreads pinning code:
http://www.redhat.com/archives/libvir-list/2015-March/msg00495.html
there are some comments sprinkled in indicating IOThreads were using
the same structure as the VcpuPin code...
This is the first patch of a few that will change the virDomainVcpuPin*
structures and code to just virDomainPin* - starting with the data
structure naming...
As there are two possible approaches to define a domain's memory size -
one used with legacy, non-NUMA VMs configured in the <memory> element
and a per-node based approach on NUMA machines - the user needs to make
sure that both are specified correctly in the NUMA case.
To avoid this burden on the user I'd like to replace the NUMA case with
automatic totaling of the memory size. To achieve this I need to replace
direct access to the virDomainMemtune's 'max_balloon' field with
two separate getters depending on the desired size.
The two sizes are needed as:
1) Startup memory size doesn't include memory modules in some
hypervisors.
2) After startup these count as the usable memory size.
Note that the comments for the functions are future aware and document
state that will be present after a few later patches.
It will not be possible to detach such a device later. Also improve
logging in such cases.
Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
A helper that never returns an error and treats bits out of bitmap range
as false.
Use it everywhere we use ignore_value on virBitmapGetBit, or loop over
the bitmap size.
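A minimal sketch of such a helper, assuming it is built on the existing
virBitmapGetBit/virBitmapSize API:

bool
virBitmapIsBitSet(virBitmapPtr bitmap, size_t b)
{
    bool result = false;

    /* Bits outside the bitmap are simply reported as unset */
    if (b >= virBitmapSize(bitmap))
        return false;

    ignore_value(virBitmapGetBit(bitmap, b, &result));
    return result;
}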
In the old days of a global driver lock, it was necessary to unlock
the driver after a domain restore operation. When the global lock
was removed from the driver, some remnants were left behind in
libxlDomainRestoreFlags. Remove this unneeded (and incorrect) code.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Domain death watch is already disabled in libxlDomainCleanup. No
need to disable it a second and third time.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
This implements handling of the <backenddomain name=''/> parameter
introduced in the previous patch.
It works on Xen >= 4.3, because only there does libxl support setting the
backend domain by name. Specifying the backend domain by ID or UUID is
currently not supported.
Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Periodically my Coverity scan will return a checked_return failure
for the libxlDomainShutdownThread call to libxlDomainStart. Follow the
libxlAutostartDomain example in order to check the status, emit a
message, and continue on.
When initializing a libxl_domain_build_info struct with
libxl_domain_build_info_init(), VNC is enabled by default. As a
result, VMs configured with no graphics still have VNC enabled.
This behavior is a regression with respect to the legacy Xen driver.
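A hedged sketch of the fix when building the libxl config, assuming the
HVM branch of libxl_domain_build_info is being populated:

    if (def->ngraphics == 0) {
        /* libxl_domain_build_info_init() enables VNC by default;
         * explicitly disable it when no <graphics> is configured */
        libxl_defbool_set(&b_info->u.hvm.vnc.enable, 0);
        libxl_defbool_set(&b_info->u.hvm.sdl.enable, 0);
    }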
Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Do not silently ignore its value. libxl supports only one address, so
refuse multiple IPs.
Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
For stateless, client side drivers, it is never correct to
probe for secondary drivers. It is only ever appropriate to
use the secondary driver that is associated with the
hypervisor in question. As a result the ESX & HyperV drivers
have both been forced to do hacks where they register no-op
drivers for the ones they don't implement.
For stateful, server side drivers, we always just want to
use the same built-in shared driver. The exception is
virtualbox which is really a stateless driver and so wants
to use its own server side secondary drivers. To deal with
this virtualbox has to be built as 3 separate loadable
modules to allow registration to work in the right order.
This can all be simplified by introducing a new struct
recording the precise set of secondary drivers each
hypervisor driver wants:
struct _virConnectDriver {
    virHypervisorDriverPtr hypervisorDriver;
    virInterfaceDriverPtr interfaceDriver;
    virNetworkDriverPtr networkDriver;
    virNodeDeviceDriverPtr nodeDeviceDriver;
    virNWFilterDriverPtr nwfilterDriver;
    virSecretDriverPtr secretDriver;
    virStorageDriverPtr storageDriver;
};
Instead of registering the hypervisor driver, we now
just register a virConnectDriver instead. This allows
us to remove all probing of secondary drivers. Once we
have chosen the primary driver, we immediately know the
correct secondary drivers to use.
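As an illustration, a stateful driver like libxl would then register
something along these lines; this is a sketch, not the exact upstream hunk,
and the registration helper's second argument is assumed to request the
shared built-in secondary drivers:

static virConnectDriver libxlConnectDriver = {
    .hypervisorDriver = &libxlHypervisorDriver,
    /* secondary drivers are filled in with the shared built-in
     * implementations for stateful, server-side drivers */
};

int
libxlRegister(void)
{
    return virRegisterConnectDriver(&libxlConnectDriver,
                                    /* setSharedDrivers */ true);
}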
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The path to the pty of a Xen PV console is set only in
virDomainOpenConsole. But this is done too late. A call to
virDomainGetXMLDesc done before OpenConsole will not have the path to
the pty, but a call after OpenConsole will.
An example of the current issue:
Starting a domain with '<console type="pty"/>'
Then:
virDomainGetXMLDesc():
  <devices>
    <console type='pty'>
      <target type='xen' port='0'/>
    </console>
  </devices>
virDomainOpenConsole()
virDomainGetXMLDesc():
  <devices>
    <console type='pty' tty='/dev/pts/30'>
      <source path='/dev/pts/30'/>
      <target type='xen' port='0'/>
    </console>
  </devices>
This patch intends to have the TTY path available on the first call of GetXMLDesc.
This is done by setting up the path at domain start up instead of in
OpenConsole.
https://bugzilla.redhat.com/show_bug.cgi?id=1170743
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
The virDomainDefineXMLFlags and virDomainCreateXML APIs both
gain new flags allowing them to be told to validate XML.
This updates all the drivers to turn on validation in the
XML parser when the flags are set.
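From the client side, the new behaviour is opt-in via the flags, e.g.:

    virDomainPtr dom;

    /* Ask the daemon to validate the XML against the domain schema
     * before defining the domain */
    dom = virDomainDefineXMLFlags(conn, xml, VIR_DOMAIN_DEFINE_VALIDATE);

    /* or, for a transient domain */
    dom = virDomainCreateXML(conn, xml, VIR_DOMAIN_START_VALIDATE);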
Now that xenconfig supports parsing and formatting Xen's
XL config format, integrate it into the libxl driver's
connectDomainXML{From,To}Native functions.
Signed-off-by: Kiarie Kahurani <davidkiarie4@gmail.com>
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
The virDomainDefParse* and virDomainDefFormat* methods both
accept the VIR_DOMAIN_XML_* flags defined in the public API,
along with a set of other VIR_DOMAIN_XML_INTERNAL_* flags
defined in domain_conf.c.
This is seriously confusing & error prone for a number of
reasons:
- VIR_DOMAIN_XML_SECURE, VIR_DOMAIN_XML_MIGRATABLE and
VIR_DOMAIN_XML_UPDATE_CPU are only relevant for the
formatting operation
- Some of the VIR_DOMAIN_XML_INTERNAL_* flags only apply
to parse or to format, but not both.
This patch cleanly separates out the flags. There are two
distinct VIR_DOMAIN_DEF_PARSE_* and VIR_DOMAIN_DEF_FORMAT_*
flags that are used by the corresponding methods. The
VIR_DOMAIN_XML_* flags received via public API calls must
be converted to the VIR_DOMAIN_DEF_FORMAT_* flags where
needed.
The various calls to virDomainDefParse which hardcoded the
use of the VIR_DOMAIN_XML_INACTIVE flag change to use the
VIR_DOMAIN_DEF_PARSE_INACTIVE flag.
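The conversion at the public API boundary then becomes a simple mapping; a
sketch assuming the new flag names follow the pattern described above:

static unsigned int
virDomainDefFormatConvertXMLFlags(unsigned int flags)
{
    unsigned int formatFlags = 0;

    if (flags & VIR_DOMAIN_XML_SECURE)
        formatFlags |= VIR_DOMAIN_DEF_FORMAT_SECURE;
    if (flags & VIR_DOMAIN_XML_INACTIVE)
        formatFlags |= VIR_DOMAIN_DEF_FORMAT_INACTIVE;
    if (flags & VIR_DOMAIN_XML_UPDATE_CPU)
        formatFlags |= VIR_DOMAIN_DEF_FORMAT_UPDATE_CPU;
    if (flags & VIR_DOMAIN_XML_MIGRATABLE)
        formatFlags |= VIR_DOMAIN_DEF_FORMAT_MIGRATABLE;

    return formatFlags;
}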
Now that xenconfig supports parsing and formatting Xen's
XL config format, integrate it into the libxl driver's
connectDomainXML{From,To}Native functions.
Signed-off-by: Kiarie Kahurani <davidkiarie4@gmail.com>
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Since virNetworkFree will call virObjectUnref anyway, let's just use that
directly so as to avoid the possibility that we inadvertently clear out
a pending error message when using the public API.
Commit id 'cb88d433' refactored the calling sequence to use a thread;
however, in doing so "lost" the check for if virNetSocketAccept returns
failure. Since other code makes that check, Coverity complains. Although
a false positive, adding back the failure check pacifies Coverity
This patch contains three domain cleanup improvements in the migration
finish phase, ensuring a domain is properly disposed when a failure is
detected or the migration is cancelled.
The check for virDomainObjIsActive is moved to libxlDomainMigrationFinish,
where cleanup can occur if migration failed and the domain is inactive.
The 'cleanup' label was misplaced in libxlDomainMigrationFinish, causing
a migrated domain to remain in the event of an error or cancelled migration.
In cleanup, the domain was not removed from the driver's list of domains.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
During the perform phase of migration, the domain is started on
the dst host in a running state if VIR_MIGRATE_PAUSED flag is not
specified. In the finish phase, the domain is also unpaused if
VIR_MIGRATE_PAUSED flag is unset. I've noticed this second unpause
fails if the domain was already unpaused following the perform phase.
This patch changes the perform phase to always start the domain
paused, and defers unpausing, if requested, to the finish phase.
Unpausing should occur in the finish phase anyhow, where the domain
can be properly destroyed if the perform phase fails and migration
is cancelled.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Moving data reception of the perform phase of migration to a
thread introduces a race with the finish phase, where checking
if the domain is active races with the thread finishing the
perform phase. The race is easily solved by acquiring a job in
the finish phase, which must wait for the perform phase job to
complete.
While wrapping the finish phase in a job, noticed the virDomainObj
was being unlocked in a callee - libxlDomainMigrationFinish. Move
the unlocking to libxlDomainMigrateFinish3Params, where the lock
is acquired.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
The libxl driver receives migration data within an IO callback invoked
by the event loop, effectively disabling the event loop while migration
occurs.
This patch moves receiving of the migration data to a thread. The
incoming connection is still accepted in the IO callback, but control
is immediately returned to the event loop after spawning the thread.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Specifying an explicit path to pygrub (e.g. BINDIR "/pygrub") only works if
Xen and libvirt happen to be installed to the same prefix. A more flexible
approach is to simply specify "pygrub" which will cause libxl to use the
correct path which it knows (since it is built with the same prefix as pygrub).
This is particularly problematic in the Debian packaging, since the Debian Xen
package relocates pygrub into a libexec dir; however, I think this change makes
sense upstream.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
On error, libxlMakeDomBuildInfo() frees the caller-provided
libxl_domain_build_info struct embedded in libxl_domain_config,
causing a segfault
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7f9c13020700 (LWP 40988)]
(gdb) bt
#0  0x00007f9c162f95b4 in free () from /lib64/libc.so.6
#1  0x00007f9c0d0965ad in libxl_bitmap_dispose () from
    /usr/lib64/libxenlight.so.4.4
#2  0x00007f9c0d0a73bf in libxl_domain_build_info_dispose ()
    from /usr/lib64/libxenlight.so.4.4
#3  0x00007f9c0d0a7974 in libxl_domain_config_dispose () from
    /usr/lib64/libxenlight.so.4.4
#4  0x00007f9c0d2e00c5 in libxlDomainStart (driver=0x7f9c0400e4e0,
    vm=0x7f9c0412b0d0, start_paused=false, restore_fd=-1) at
    libxl/libxl_domain.c:1323
#5  0x00007f9c0d2e1d4b in libxlDomainCreateXML (conn=0x7f9c000009a0,...)
    at libxl/libxl_driver.c:660
Remove the call to libxl_domain_build_info_dispose() from
libxlMakeDomBuildInfo(). On error, callers will dispose the
libxl_domain_config object, which in turn disposes the build info.
With the introduction of the libxlDomainGetEmulatorType function,
it is trivial to support a user-specified <emulator> in the libxl
driver. This patch is based loosely on David Scott's old patch
to do the same
https://www.redhat.com/archives/libvir-list/2013-April/msg02119.html
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
To prepare for introducing a single global driver, rename the
virDriver struct to virHypervisorDriver and the registration
API to virRegisterHypervisorDriver()
This started as an investigation into an issue where libvirt (using the
libxl driver) and the Xen host, like an old couple, could not agree on
who is responsible for selecting the VNC port to use.
Things usually (and a bit surprisingly) did work because, just like that
old couple, they had the same idea on what to do by default. However it
was possible that this ended up in a big argument.
The problem is that display information exists in two different places:
in the vfbs list and in the build info. And for launching the device model,
only the latter is used. But that never gets initialized from libvirt. So
Xen allows the device model to select a default port while libvirt thinks
it has told Xen that this is done by libvirt (through the vfbs config).
While fixing that, I made a stab at actually evaluating the configuration
of the video device. So that it is now possible to at least decide between
a Cirrus or standard VGA emulation and to modify the VRAM within certain
limits using libvirt.
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
This patch introduces a function to detect whether the specified
emulator is QEMU_XEN or QEMU_XEN_TRADITIONAL. Detection is based on the
string "Options specific to the Xen version:" in '$qemu -help' output.
AFAIK, the only qemu containing that string in help output is the
old Xen fork (aka qemu-dm).
Note:
QEMU_XEN means a qemu that contains support for Xen.
QEMU_XEN_TRADITIONAL means Xen's old forked qemu 0.10.2
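A sketch of the detection, using libvirt's virCommand helpers to capture
'$qemu -help' output; the function name and the libxl enum values used as
return codes are assumptions consistent with the description above:

int
libxlDomainGetEmulatorType(const virDomainDef *def)
{
    int ret = LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN;
    virCommandPtr cmd = NULL;
    char *output = NULL;

    if (def->emulator) {
        cmd = virCommandNewArgList(def->emulator, "-help", NULL);
        virCommandSetOutputBuffer(cmd, &output);
        if (virCommandRun(cmd, NULL) < 0) {
            ret = -1;
            goto cleanup;
        }

        /* Only the old Xen fork (qemu-dm) prints this string */
        if (strstr(output, "Options specific to the Xen version:"))
            ret = LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL;
    }

 cleanup:
    VIR_FREE(output);
    virCommandFree(cmd);
    return ret;
}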
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Commit 4dfc34c3 missed copying the user-specified keymap to
libxl_domain_build_info struct when creating a VFB device.
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
There is no need to acquire the driver-wide lock in
libxlDomainDefineXML. When switching to jobs in the libxl
driver, most driver-wide locks were removed. The locking here
was preserved since I mistakenly thought virDomainObjListAdd
needed protection. This is not the case, so remove the
unnecessary locking.
The libxl driver was blindly assigning libvirt's
virDomainLifecycleAction to libxl's libxl_action_on_shutdown, when
in fact the various actions take on different values in these enums.
Introduce helpers to properly map the enum values.
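A sketch of one direction of the mapping; the enum spellings follow libvirt
and libxl, while the helper name itself is illustrative:

static libxl_action_on_shutdown
libxlActionFromVirLifecycle(int action)
{
    switch (action) {
    case VIR_DOMAIN_LIFECYCLE_DESTROY:
        return LIBXL_ACTION_ON_SHUTDOWN_DESTROY;
    case VIR_DOMAIN_LIFECYCLE_RESTART:
        return LIBXL_ACTION_ON_SHUTDOWN_RESTART;
    case VIR_DOMAIN_LIFECYCLE_RESTART_RENAME:
        return LIBXL_ACTION_ON_SHUTDOWN_RESTART_RENAME;
    case VIR_DOMAIN_LIFECYCLE_PRESERVE:
        return LIBXL_ACTION_ON_SHUTDOWN_PRESERVE;
    default:
        /* blind assignment is exactly the bug being fixed */
        return LIBXL_ACTION_ON_SHUTDOWN_DESTROY;
    }
}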
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Test suites using the port allocator don't want to have different
behaviour depending on whether a port is in use on the host. Add
a VIR_PORT_ALLOCATOR_SKIP_BIND_CHECK which test suites can use
to skip the bind() test. The port allocator will thus only track
ports in use by the test suite process itself. This is fine when
using the port allocator to generate guest configs which won't
actually be launched.
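Usage in a test suite would then look roughly like this; the constructor
arguments shown are assumptions, only the new flag is taken from this patch:

    /* Track ports only within this process; never bind() to probe the host */
    virPortAllocatorPtr ports =
        virPortAllocatorNew("test-vnc", 5900, 5999,
                            VIR_PORT_ALLOCATOR_SKIP_BIND_CHECK);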
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
With all the changes in my previous foray into this code, I forgot to
remove the libxlDomainEventQueue(driver, event); call inside the
dom == NULL condition.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Coverity noted that all callers to libxlDomainEventQueue() could ensure
the second parameter (event) was true before calling except this case.
As I look at the code and how events are used - it seems that prior to
generating an event for the dom == NULL condition, the resume/suspend
event should be queued after the virDomainSaveStatus() call which will
goto cleanup and queue the saved event anyway.
Signed-off-by: John Ferlan <jferlan@redhat.com>
In libxlDomainMigrationPrepare(), if uri_in is not provided, then
'hostname' is allocated and used "generically" in the routine,
but not freed. Conversely, if uri_in is true, then a uri is
allocated and hostname is set to the uri->hostname value and
likewise generically used.
At function exit, hostname wasn't freed in the !uri_in path,
so that was added. To make the usage clearer, the else path
became the call to virURIFree(), although technically it didn't
have to, since it would be a call with (NULL).
Commit b55cc5f4e did a shallow copy of libxl_{sdl,vnc}_info from the
domain config to the build info, which resulted in double-freeing
strings contained in the structures during cleanup, which later
resulted in a libvirtd crash. Fix by performing a deep copy of the
structure, VIR_STRDUP'ing embedded strings instead of simply copying
their pointers.
Fixes the following issue reported on the libvirt dev list
https://www.redhat.com/archives/libvir-list/2014-August/msg01112.html
This patch adds support for the QEMU vhost-user feature to libvirt.
vhost-user enables the communication between a QEMU virtual machine
and other userspace processes using the Virtio transport protocol.
It uses a char dev (e.g. Unix socket) for the control plane,
while the data plane is based on shared memory.
The XML looks like:
  <interface type='vhostuser'>
    <mac address='52:54:00:3b:83:1a'/>
    <source type='unix' path='/tmp/vhost.sock' mode='server'/>
    <model type='virtio'/>
  </interface>
Signed-off-by: Michele Paolino <m.paolino@virtualopensystems.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The return value logic in libxlDomainAttachDeviceFlags and
libxlDomainDetachDeviceFlags is wrong in error cases.
'ret' was being set to 0 if 'flags & VIR_DOMAIN_DEVICE_MODIFY_CONFIG' was
false. Then if something like virDomainDeviceDefParse() failed in the
VIR_DOMAIN_DEVICE_MODIFY_LIVE logic, the error would be reported but the
function would return success.
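The corrected pattern is the usual one: keep 'ret' at -1 until every
requested operation has actually succeeded. A sketch, with the helper names
treated as illustrative:

    int ret = -1;

    if (flags & VIR_DOMAIN_DEVICE_MODIFY_CONFIG) {
        if (libxlDomainAttachDeviceConfig(vmdef, dev) < 0)
            goto cleanup;
    }

    if (flags & VIR_DOMAIN_DEVICE_MODIFY_LIVE) {
        if (libxlDomainAttachDeviceLive(priv, vm, dev) < 0)
            goto cleanup;
    }

    /* Only now is the whole operation known to have succeeded */
    ret = 0;

 cleanup:
    return ret;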
Signed-off-by: Chunyan Liu <cyliu@suse.com>
This was converted to a typedef in 5a2bd4c917 "conf: more enum
cleanups in "src/conf/domain_conf.h"" causing:
libxl/libxl_conf.c: In function 'libxlDiskSetDiscard':
libxl/libxl_conf.c:724:19: error: conversion to incomplete type
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
In libxlDomainMigrationConfirm(), a transient domain is removed
from the domain list after successful migration. Later in cleanup,
the domain object is unlocked, resulting in a crash
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fb4208ed700 (LWP 12044)]
0x00007fb4267251e6 in virClassIsDerivedFrom (klass=0xdeadbeef,
parent=0x7fb42830d0c0) at util/virobject.c:169
169 if (klass->magic == parent->magic)
(gdb) bt
#0  0x00007fb4267251e6 in virClassIsDerivedFrom (klass=0xdeadbeef,
    parent=0x7fb42830d0c0) at util/virobject.c:169
#1  0x00007fb42672591b in virObjectIsClass (anyobj=0x7fb4100082b0,
    klass=0x7fb42830d0c0) at util/virobject.c:365
#2  0x00007fb42672583c in virObjectUnlock (anyobj=0x7fb4100082b0)
    at util/virobject.c:338
#3  0x00007fb41a8c7d7a in libxlDomainMigrationConfirm (driver=0x7fb4100404c0,
    vm=0x7fb4100082b0, flags=1, cancelled=0) at libxl/libxl_migration.c:583
Fix by setting the virDomainObjPtr to NULL after removing it from
the domain list.
During migration, the libxl driver starts a modify job in the
begin phase, ending the job in the confirm phase. This is
essentially VIR_MIGRATE_CHANGE_PROTECTION semantics, but the
driver does not support that flag. Without CHANGE_PROTECTION
support, the job would never be terminated in error conditions
where migrate confirm phase is not executed. Further attempts
to modify the domain would result in failure to acquire a job
after LIBXL_JOB_WAIT_TIME.
Similar to the qemu driver, end the job in the begin phase.
Protecting the domain object across all phases of migration can
be done in a future patch adding CHANGE_PROTECTION support.
In libxlDomainMigrationPrepare(), a new virDomainObj is created
from the incoming domain def and added to the driver's domain
list, but never removed if there are subsequent failures during
the prepare phase.
targethost# virsh list --all
sourcehost# virsh migrate --live dom xen+ssh://targethost/system
error: operation failed: Fail to create socket for incoming migration.
targethost# virsh list --all
error: Failed to list domains
error: name in virGetDomain must not be NULL
After adding code to remove the domain on prepare failure, noticed
that libvirtd crashed due to double free of the virDomainDef. Similar
to the qemu driver, pass a pointer to virDomainDefPtr so it can be set
to NULL once a virDomainObj is created from it.
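The ownership transfer in the prepare phase then looks roughly like this;
the flag names are those used elsewhere in libvirt's domain list code:

    virDomainObjPtr vm = NULL;

    if (!(vm = virDomainObjListAdd(driver->domains, *def,
                                   driver->xmlopt,
                                   VIR_DOMAIN_OBJ_LIST_ADD_LIVE |
                                   VIR_DOMAIN_OBJ_LIST_ADD_CHECK_LIVE,
                                   NULL)))
        goto error;

    /* The domain object now owns the definition; clear the caller's
     * pointer so the def is not freed twice on later error paths */
    *def = NULL;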
In the future we might need to track state of individual images. Move
the readonly and shared flags to the virStorageSource struct so that we
can keep them in a per-image basis.
Replace:
  if (virBufferError(&buf)) {
      virBufferFreeAndReset(&buf);
      virReportOOMError();
      ...
  }
with:
  if (virBufferCheckError(&buf) < 0)
      ...
This should not be a functional change (unless some callers
misused the virBuffer APIs - a different error would be reported
then)
So far, we only report an error if formatting the siblings bitmap
in NUMA topology fails.
Be consistent and always report error in virCapabilitiesFormatXML.
Xen PV domains always have a PV console, so add one to the domain
config via post-parse callback if not explicitly specified in
the XML. The legacy Xen driver behaves similarly, causing a
regression when switching to the new Xen toolstack. I.e.
virsh console pv-domain
will no longer work after upgrading a xm/xend stack to xl/libxl.
The libxl interface for vcpu pinning is changing in Xen 4.5. Basically,
libxl_set_vcpuaffinity() now wants one more parameter. That is
representative of 'VCPU soft affinity', which libvirt does not use.
To mark this change, the macro LIBXL_HAVE_VCPUINFO_SOFT_AFFINITY is
defined. Use it as a gate and, if present, re-#define the calls from
the old to the new interface, to avoid breaking the build.
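The compatibility shim is a small preprocessor wrapper, roughly as below;
the extra NULL stands for the unused soft-affinity map (treat the exact
macro list as an assumption):

#ifdef LIBXL_HAVE_VCPUINFO_SOFT_AFFINITY
/* Xen 4.5+: the calls take an extra soft-affinity parameter, which
 * libvirt does not use, so pass NULL and keep the old call sites */
# define libxl_set_vcpuaffinity(ctx, id, vcpu, map) \
    libxl_set_vcpuaffinity((ctx), (id), (vcpu), (map), NULL)
# define libxl_set_vcpuaffinity_all(ctx, id, max_vcpus, map) \
    libxl_set_vcpuaffinity_all((ctx), (id), (max_vcpus), (map), NULL)
#endif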
Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>
libxl does not support save, restore, or migrate on all architectures,
notably ARM. Detect whether libxl supports these operations using
LIBXL_HAVE_NO_SUSPEND_RESUME. If not supported, drop advertisement of
<migration_features>.
Found by Ian Campbell while improving Xen's OSSTEST infrastructure
http://lists.xen.org/archives/html/xen-devel/2014-06/msg02171.html
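The gating is a straightforward compile-time check; a sketch of how the
capabilities code can use it, with the "xenmigr" transport name taken as an
assumption from the legacy Xen driver:

#ifndef LIBXL_HAVE_NO_SUSPEND_RESUME
    /* Only advertise <migration_features> where libxl can actually
     * save/restore/migrate (e.g. not on ARM) */
    if (virCapabilitiesAddHostMigrateTransport(caps, "xenmigr") < 0)
        goto error;
#endif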
Currently, only LXC has hostdev mode 'capabilities' support,
so the other drivers should forbid defining it in XML.
The hostdev mode check is added to devicesPostParseCallback()
for each hypervisor driver.
But some drivers lack a devicesPostParseCallback() function,
so the check is only added for qemu, libxl, openvz, uml, xen, and xenapi.
Signed-off-by: Jincheng Miao <jmiao@redhat.com>
The libxl driver currently sets the disk backend to
LIBXL_DISK_BACKEND_TAP when <driver name='file'> is specified
in the <disk> config. qdisk should be preferred with this
configuration, otherwise an existing configuration such as the
following, which worked with the old Xen driver, will not work
with the libxl driver:
  <disk type='file' device='cdrom'>
    <driver name='file'/>
    <source file='/path/to/some/iso'/>
    <target dev='hdc' bus='ide'/>
    <readonly/>
  </disk>
In addition, tap performs poorly compared to qdisk.
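A hedged sketch of the relevant branch in the disk conversion code; the
driver_name and x_disk variables are illustrative names for the parsed
<driver> value and the libxl_device_disk being filled in:

    /* <driver name='file'/> (or no driver at all): prefer qdisk, which
     * handles raw files and performs better than tap */
    if (driver_name == NULL || STREQ(driver_name, "file")) {
        x_disk->format = LIBXL_DISK_FORMAT_RAW;
        x_disk->backend = LIBXL_DISK_BACKEND_QDISK;
    }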
Migration code specifies the problematic non-cooperative resume mode
which is a known issue with Xen's libxl [1]. Instead, use the better
supported cooperative mode.
Without this, guests BUG() in xen_irq_resume after failing to bind
still-bound event channels.
[1] http://bugs.xenproject.org/xen/bug/30
Generally, <interface> ... <script> is only supported for
type='ethernet'. Due to the long and pervasive use of
<interface type='bridge'>
...
<script path='foo'/>
</interface>
in Xen domain configuration, it was agreed to allow the use
of <script> with type='bridge' for backwards compatibility. See
the following discussion thread
http://www.redhat.com/archives/libvir-list/2013-April/msg00755.html
This patch limits the use of <script> to interface types ethernet
and bridge, raising an unsupported config error if <script> is
specified for any other interface type.
While at it, use VIR_ERR_CONFIG_UNSUPPORTED instead of
VIR_ERR_INTERNAL_ERROR when reporting unsupported interface types.
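The check itself is small; a sketch using the existing net type enum:

    if (net->script &&
        net->type != VIR_DOMAIN_NET_TYPE_ETHERNET &&
        net->type != VIR_DOMAIN_NET_TYPE_BRIDGE) {
        virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
                       _("scripts are not supported on interfaces of type %s"),
                       virDomainNetTypeToString(net->type));
        return -1;
    }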
There are two places where you'll find info on page sizes. The first
one is under <cpu/> element, where all supported pages sizes are
listed. Then the second one is under each <cell/> element, which refers
to a concrete NUMA node. At this place, the size of each page pool is
reported. So the capabilities XML looks something like this:
  <capabilities>
    <host>
      <uuid>01281cda-f352-cb11-a9db-e905fe22010c</uuid>
      <cpu>
        <arch>x86_64</arch>
        <model>Westmere</model>
        <vendor>Intel</vendor>
        <topology sockets='1' cores='1' threads='1'/>
        ...
        <pages unit='KiB' size='4'/>
        <pages unit='KiB' size='2048'/>
        <pages unit='KiB' size='1048576'/>
      </cpu>
      ...
      <topology>
        <cells num='4'>
          <cell id='0'>
            <memory unit='KiB'>4054408</memory>
            <pages unit='KiB' size='4'>1013602</pages>
            <pages unit='KiB' size='2048'>3</pages>
            <pages unit='KiB' size='1048576'>1</pages>
            <distances/>
            <cpus num='1'>
              <cpu id='0' socket_id='0' core_id='0' siblings='0'/>
            </cpus>
          </cell>
          <cell id='1'>
            <memory unit='KiB'>4071072</memory>
            <pages unit='KiB' size='4'>1017768</pages>
            <pages unit='KiB' size='2048'>3</pages>
            <pages unit='KiB' size='1048576'>1</pages>
            <distances/>
            <cpus num='1'>
              <cpu id='1' socket_id='0' core_id='0' siblings='1'/>
            </cpus>
          </cell>
          ...
        </cells>
      </topology>
      ...
    </host>
    <guest/>
  </capabilities>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>