Since the flag was not enabled when 'eating' the migration cookie,
libvirt reported a bogus error when memory hotplug was enabled:
unsupported migration cookie feature memory-hotplug
The error was ignored, though, due to a bug in the code, so it slipped
through testing.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1278404
Commit id '0f7436ca' added virNetDevWaitDadFinish using ATTRIBUTE_NONNULL
for both arguments, even though one of the arguments may be NULL. A Coverity
build balks at that.
Rather than "if (virNetDevFeatureAvailable(ifname, &cmd))" change the
success criteria to "if (virNetDevFeatureAvailable(ifname, &cmd) == 1)".
The called helper returns -1 on failure, 0 on not found, and 1 on found.
Thus a failure was setting bits.
Introduced by commit ac3ed20, which changed the helper's return
values without adjusting its callers.
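An illustrative sketch of the tightened check; the caller body and the
feature bit shown here are examples, not the exact code:

    /* only a return value of 1 means the feature was actually found */
    if (virNetDevFeatureAvailable(ifname, &cmd) == 1)
        ignore_value(virBitmapSetBit(*out, feature));  /* 'feature' is illustrative */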
Signed-off-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1233003
Commit id 'fdda3760' only managed a symptom where it was possible to
create a file in a pool without libvirt's knowledge, so it was reverted.
The real fix is to have all the createVol APIs which actually create
a volume (disk, logical, zfs) and the buildVol APIs which handle the
real creation of some volume file (fs, rbd, sheepdog) manage deleting
any volume which they create when there is some sort of error in
processing the volume.
This way the onus isn't left up to the storage_driver to determine whether
the buildVol failure was due to some failure as a result of adjustments
made to the volume after creation (such as getting sizes, changing ownership,
changing volume protections, etc.) or simply a failure in creation.
Without needing to consider that the volume has to be removed, the
buildVol failure path only needs to remove the volume from the pool.
This way if a creation failed due to duplicate name, libvirt wouldn't
remove a volume that it didn't create in the pool target.
This reverts commit fdda37608a.
This commit only manages a symptom of finding a buildRet failure
where a volume was not listed in the pool, but someone created the
volume outside of libvirt in the pool being managed by libvirt.
After successfully returning from virFileOpenAs, if subsequent calls fail,
then we need to remove the file since our caller expects that failures after
creation will remove the created file.
After a successful qemu-img/qcow-create of the backing file, if we
fail to stat the file, change its owner/group, or mode, then the
cleanup path should remove the file.
Currently the code does not handle the NFS root squash environment
properly since if the file gets created, then the subsequent chmod
will fail in a root squash environment where we're creating a file
in the pool with qemu tools, such as seen via:
$ virsh vol-create-from $pool $file.xml file.img --inputpool $pool
assuming $file.xml is creating a file of "<format type='qcow2'"> from
an existing file.img in the pool of "<format type='raw'>".
This patch will utilize the virCommandSetUmask when creating the file
in the NETFS pool. The virCommandSetUmask API was added in commit id
'0e1a1a8c4', which was after the original code was developed in commit
id 'e1f27784' to attempt to handle the root squash environment.
Also, rather than blindly attempting to chmod, check to see if the
st_mode bits from the stat match what we're trying to set and only
make the chmod if they don't.
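A minimal sketch of that check, assuming 'st' was filled by the preceding
stat(), 'mode' holds the desired permissions and 'path' is the volume path:

    /* the permission bits we care about */
    mode_t perms = S_IRWXU | S_IRWXG | S_IRWXO;

    /* only call chmod() when the current bits differ from the target */
    if ((st.st_mode & perms) != (mode & perms) &&
        chmod(path, mode) < 0) {
        /* report and handle the error */
    }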
Also, a slight adjustment to the fallback algorithm to move the
virCommandSetUID/virCommandSetGID inside the if (!filecreated) since
they're only useful if we need to attempt to create the file again.
VIR_DEBUG and VIR_WARN automatically add a new line to the message;
having "\n" at the end or at the beginning of the message results in
empty lines.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
nodeset should be freed in both success and failure paths.
While tmppath is freed immediately after it's consumed, moving it from
error to cleanup label is a bit more consistent and robust.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Generally, we use the "ret" variable for storing the value we are going to
return at the end of a function, but this is not the case in
qemuProcessStart. Let's rename "ret" as "rv".
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuProcessStart was passing char * migrateFrom as the third argument to
qemuPrepareNVRAM. We should explicitly convert the pointer to bool which
is what the function expects.
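A hedged sketch of the conversion; the argument list is assumed, not the
exact prototype:

    /* pass a real bool instead of letting the pointer decay into one */
    qemuPrepareNVRAM(cfg, vm, migrateFrom != NULL);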
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This was originally set to 5 seconds, but times of 5.5 to 7 seconds
were experienced. Since it's an arbitrary number intended to prevent
an infinite hang, having it a bit too high won't hurt anything, and 20
seconds looks to be adequate (i.e. I think/hope we don't need to make
it tunable in libvirtd.conf)
If DAD has not finished within 5 seconds, the user will get an
unknown error like this:
# virsh net-start ipv6
error: Failed to start network ipv6
error: An error occurred, but the cause is unknown
Call virReportError to set an error.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Build on non-Linux fails because the virNetDevWaitDadFinish() stub
has unused parameters. Fix by adding appropriate ATTRIBUTE_UNUSED
for these parameters.
Pushing under build-breaker rule.
commit db488c79 assumed that dnsmasq would complete IPv6 DAD before
daemonizing, but in reality it doesn't wait, which creates problems
when libvirt's bridge driver sets the matching "dummy tap device" to
IFF_DOWN prior to DAD completing.
This patch waits for DAD completion by periodically polling the kernel
using netlink to check whether there are any IPv6 addresses assigned
to the bridge which have a 'tentative' state (if there are any in this
state, then DAD hasn't yet finished). After DAD is finished, execution
continues. To avoid an endless hang in case something was wrong with
the kernel's DAD, we wait a maximum of 5 seconds.
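A rough sketch of the waiting loop; the helper that inspects the kernel's
address flags is hypothetical here (the real code walks a netlink dump and
looks at the tentative flag of each address):

    #include <errno.h>
    #include <stdbool.h>
    #include <time.h>
    #include <unistd.h>

    /* hypothetical: sets *tentative to true if any IPv6 address on
     * ifname is still undergoing DAD */
    int hasTentativeIPv6Addr(const char *ifname, bool *tentative);

    static int
    waitForDadFinish(const char *ifname, unsigned int timeoutSec)
    {
        time_t deadline = time(NULL) + timeoutSec;

        while (time(NULL) < deadline) {
            bool tentative = false;

            if (hasTentativeIPv6Addr(ifname, &tentative) < 0)
                return -1;
            if (!tentative)
                return 0;           /* DAD finished */
            usleep(500 * 1000);     /* poll again in 0.5 seconds */
        }
        errno = ETIMEDOUT;
        return -1;
    }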
https://bugzilla.redhat.com/show_bug.cgi?id=1270715
Commit id '9deb96f' removed the code to fetch the nodeset from the
CpusetMems cgroup for a running vm in favor of using the return from
virDomainNumatuneFormatNodeset introduced by commit id '43b67f2e7'.
However, that API will return the value of the passed 'auto_nodeset'
when placement is VIR_DOMAIN_NUMATUNE_PLACEMENT_AUTO, which happens
to be NULL.
Since commit id 'c74d58ad' started using priv->autoNodeset in order
to manage the auto placement value during qemuProcessStart, it should
be passed along in order to return the correct value if the domain
requests the auto placement.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
When an RBD volume has snapshots it cannot be removed.
This patch introduces a new flag to force volume removal,
VIR_STORAGE_VOL_DELETE_WITH_SNAPSHOTS.
With this flag any existing snapshots will be removed prior to
removing the volume.
No existing mechanism in libvirt allowed us to pass such information,
so that's why a new flag was introduced.
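A hedged usage sketch, assuming 'vol' is a valid virStorageVolPtr for the
RBD volume:

    /* remove the volume, purging its snapshots first */
    if (virStorageVolDelete(vol, VIR_STORAGE_VOL_DELETE_WITH_SNAPSHOTS) < 0) {
        /* removal failed; inspect the reported error */
    }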
Signed-off-by: Wido den Hollander <wido@widodh.nl>
After commit a26669d7, we only jump to error when
virDomainMigrateUnmanagedParams returns a value less than -1.
This makes the migration always report success even when we
hit some problem.
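A sketch of the corrected condition; the argument list shown is assumed:

    /* any negative return is a failure, not just values below -1 */
    if (virDomainMigrateUnmanagedParams(domain, dconnuri,
                                        params, nparams, flags) < 0)
        goto error;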
Signed-off-by: Luyao Huang <lhuang@redhat.com>
In commit f41be296, we moved the vm->persistent check into
qemuDomainRemoveInactive, but in some places we do not change
vm->persistent before calling qemuDomainRemoveInactive and just
call it to remove the inactive vm.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Let's use wrapper functions virLockDaemonLock and
virLockDaemonUnlock instead of virMutexLock and virMutexUnlock.
This has no functional impact, but it's easier to read (at least
for me).
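A plausible shape of the wrappers (sketch only; the 'lock' field name is
an assumption):

    static void
    virLockDaemonLock(virLockDaemonPtr lockd)
    {
        virMutexLock(&lockd->lock);
    }

    static void
    virLockDaemonUnlock(virLockDaemonPtr lockd)
    {
        virMutexUnlock(&lockd->lock);
    }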
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This calls the PCI-, USB- and SCSI-specific functions just
like qemuHostdev{Prepare,ReAttach}DomainDevices() already do,
and was the missing piece for the qemuHostdev API to nicely
mirror the virHostdev API.
Update qemuProcessReconnect() to use the new function.
Adopt the same names used for virHostdevUpdateActive*Devices() for
consistency's sake and to make it easier to jump between the two.
No functional changes.
Adopt the same names used for virHostdevReAttach*Devices() for
consistency's sake and to make it easier to jump between the two.
No functional changes.
Fix a cut-n-paste error from commit id '35eecdde' where the previous
check for max_sectors seems to have been copied, but the error message
parameter was not updated to be ioeventfd.
Commit id '1c24cfe9' added error messages for virNumaSetPagePoolSize;
however, virNumaGetHugePageInfo also uses virNumaGetHugePageInfoPath
in order to build the path, but it never checked upon return if
the built path exists which could lead to an error message as follows:
$ virsh freepages 0 1
error: Failed to open file
'/sys/devices/system/node/node0/hugepages/hugepages-1kB/free_hugepages':
No such file or directory
Rather than add the same message for the other two callers, adjust
virNumaGetHugePageInfoPath so that it not only builds the path, but
also checks if the built path exists. If the path does not exist,
then generate the error message and return failure.
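A sketch of the extra check inside the helper; the exact error message and
variable names are assumptions:

    if (!virFileExists(*path)) {
        virReportError(VIR_ERR_OPERATION_FAILED,
                       _("page size %u is not available on node %d"),
                       page_size, node);
        return -1;
    }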
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Commit id '1c24cfe9' added new checks and error messages for failure
scenarios. Let's move those error messages to after the call to
virNumaGetHugePageInfoPath in order to provide a more specific error
message depending on node and page_size.
After this patch:
# virsh allocpages --pagesize 2047 --pagecount 1 --cellno 0
error: operation failed: page size 2047 is not available on node 0
# virsh allocpages --pagesize 2047 --pagecount 1
error: operation failed: page size 2047 is not available
Signed-off-by: Luyao Huang <lhuang@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1265114
Refactor the handling in virNumaGetHugePageInfoPath for returning a
directory path when passed a page_size of 0 and suffix == NULL into a new
helper virNumaGetHugePageInfoDir, which will only be called when a
directory path is expected to be returned. This solves the issue where the helper
was called with page_size == 0 expecting a file path in return, but
instead got a directory path and failed in virFileReadAll with:
error : virFileReadAll:1358 : Failed to read file
'/sys/devices/system/node/node0/hugepages/': Is a directory
Since virNumaGetPages API expects to return a directory by passing
page_size == 0 and suffix == NULL, it will now call the new helper.
Callers to virNumaGetHugePageInfoPath expect to return a file path
which could then be used in the call to virFileReadAll.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
We have macros for both positive and negative string matching.
Therefore there is no need to use !STREQ or !STRNEQ. At the same
time as we are dropping this, a new syntax-check rule is
introduced to make sure we won't introduce it again.
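An example of the substitution the rule enforces; the condition and the
called function are placeholders:

    /* before: negated positive match */
    if (!STREQ(a, b))
        doSomething();

    /* after: use the dedicated negative-match macro */
    if (STRNEQ(a, b))
        doSomething();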
Signed-off-by: Ishmanpreet Kaur Khera <khera.ishman@gmail.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The following functions are implemented:
vzDomainIsUpdated, vzDomainGetVcpusFlags and vzDomainGetMaxVcpus.
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
The following functions were implemented:
vzNodeGetCPUStats, vzNodeGetMemoryStats,
vzNodeGetCellsFreeMemory and vzNodeGetFreeMemory.
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
Because we have no limitation on the maximal number of vcpus in containers,
we report 1028 as the maximum just for the sake of common sense.
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
Even though the APIs are not implemented yet, they create a
skeleton that can be filled in later.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This function should really be called only when we want to change
ownership of a file (or disk source). Let's switch to calling a
wrapper function which will eventually record the current owner
of the file and call virSecurityDACSetOwnershipInternal
subsequently.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This is pure code adjustment. The structure is going to be needed
later as it will hold a reference that will be used to talk to
virtlockd. However, so far this is no functional change just code
preparation.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This is pure code adjustment. The structure is going to be needed
later as it will hold a reference that will be used to talk to
virtlockd. However, so far this is no functional change just code
preparation.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
It's better if we stat() file that we are about to chown() at
first and check if there's something we need to change. Not that
it would make much difference, but for the upcoming patches we
need to be doing stat() anyway. Moreover, if we do things this
way, we can drop @chown_errno variable which will become
redundant.
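A condensed sketch of the new ordering (names assumed; <sys/stat.h> and
<unistd.h> needed):

    struct stat sb;

    if (stat(path, &sb) < 0)
        return -1;

    /* nothing to do if the file already has the desired owner */
    if (sb.st_uid == uid && sb.st_gid == gid)
        return 0;

    if (chown(path, uid, gid) < 0)
        return -1;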
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Correctly mark the places where we need to remember and recall
file ownership. We don't want to mislead any potential developer.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
So we have this mechanism where on SIGUSR1 virtlockd dumps its
internal state into a JSON file, re-execs itself and then reloads
the internal state back. However, there's a bug in our
implementation:
(gdb) signal SIGUSR1
Continuing with signal SIGUSR1.
[Thread 0x7fd094f7b700 (LWP 10602) exited]
process 10600 is executing new program: /home/zippy/work/libvirt/libvirt.git/src/virtlockd
warning: Could not load shared library symbols for linux-vdso.so.1.
Do you need "set solib-search-path" or "set sysroot"?
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
[New Thread 0x7fb28bc3c700 (LWP 14501)]
Program received signal SIGSEGV, Segmentation fault.
0x00007fb29133d530 in virExpandN (ptrptr=0x70, size=8, countptr=0x68, add=1, report=true, domcode=7, filename=0x7fb29138aeab "rpc/virnetserver.c", funcname=0x7fb29138b680 <__FUNCTION__.15821> "virNetServerAddProgram", linenr=661) at util/viralloc.c:288
288 if (*countptr + add < *countptr) {
(gdb) bt
#0 0x00007fb29133d530 in virExpandN (ptrptr=0x70, size=8, countptr=0x68, add=1, report=true, domcode=7, filename=0x7fb29138aeab "rpc/virnetserver.c", funcname=0x7fb29138b680 <__FUNCTION__.15821> "virNetServerAddProgram", linenr=661) at util/viralloc.c:288
#1 0x00007fb29132a267 in virNetServerAddProgram (srv=0x0, prog=0x7fb2915d08b0) at rpc/virnetserver.c:661
#2 0x00007fb29131f27f in main (argc=1, argv=0x7fff8f771298) at locking/lock_daemon.c:1445
Notice the NULL @srv passed to frame 2? Usually, the @srv
variable is initialized on fresh start. However, in case of
daemon reload, the code path that is responsible for initializing
the value was not triggered and therefore we crashed immediately.
Fix this by always setting the variable.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Tunnelled migration can hang if the destination qemu exits despite all the
ABI checks. This happens whenever the destination qemu exits before the
complete transfer is noticed by source qemu. The savevm state checks at
runtime can fail at destination and cause qemu to error out.
The source qemu can't notice it as the EPIPE is not propagated to it.
The qemuMigrationIOFunc() notices the stream being broken from virStreamSend()
and it cleans up the stream alone. The qemuMigrationWaitForCompletion() would
never get to 100% transfer completion.
The qemuMigrationWaitForCompletion() never breaks out as well since
the ssh connection to the destination is healthy, and the source qemu also
thinks the migration is ongoing as the fd to which it transfers is never
closed or broken. So, the migration will hang forever. Even Ctrl-C on the
virsh migrate wouldn't be honoured. Close the source side FD when there is
an error in the stream. That way, the source qemu updates itself and
qemuMigrationWaitForCompletion() notices the failure.
Close the FD for all kinds of errors to be sure. The error message is not
copied for EPIPE so that the destination error is copied instead later.
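A hedged sketch of the remedy in qemuMigrationIOFunc; the variable and
field names are assumptions:

    if (virStreamSend(st, buffer, nbytes) < 0) {
        /* closing the fd the source qemu writes to lets it notice the
         * failure instead of waiting forever */
        VIR_FORCE_CLOSE(data->sock);    /* 'data->sock' is illustrative */
        goto error;
    }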
Note:
Reproducible with repeated migrations between Power hosts running in different
subcores-per-core modes.
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.vnet.ibm.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1264008
The existing algorithm assumed that someone was making small, incremental
changes; however, it is possible to change iothreads from 0 (or relatively
small number) to some really large number and the algorithm would possibly
spin its wheels doing unnecessary searches.
So, optimize the algorithm using a bitmap to find available iothread_id's
starting at 1 that aren't already defined by a "<thread id='#'>" and
filling in the iothreadids array with those iothread_id values.
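An illustrative sketch of the bitmap approach (not the exact code; the
variable names are assumptions):

    virBitmapPtr map = NULL;
    ssize_t bit;
    size_t i;

    if (!(map = virBitmapNew(iothreads + 1)))
        return -1;

    /* mark every iothread_id explicitly defined in the XML */
    for (i = 0; i < def->niothreadids; i++)
        ignore_value(virBitmapSetBit(map, def->iothreadids[i]->iothread_id));

    /* the lowest id greater than 0 that is still unused */
    bit = virBitmapNextClearBit(map, 0);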
https://bugzilla.redhat.com/show_bug.cgi?id=1249981
When qemuDomainPinIOThread was added in commit id 'fb562614', a check
for the IOThread capability was not needed since a check for iothreadpids
covered the condition where the support for IOThreads was not present.
The iothreadpids array was only created if qemuProcessDetectIOThreadPIDs
was able to query the monitor for IOThreads. It would only do that if
the QEMU_CAPS_OBJECT_IOTHREAD capability was set.
However, when iothreadids were added in commit id '8d4614a5' and the
check for iothreadpids was replaced by a search through the iothreadids[]
array for the matching iothread_id, which left open the possibility that
an iothreadids[] array was defined, but the entries essentially pointed
to elements with only the 'iothread_id' defined leaving the 'thread_id'
value of 0 and eventually the cpumap entry of NULL.
This was because the original IOThreads commit id '72edaae7' only
checked if IOThreads were defined and if the emulator had the IOThreads
capability, then IOThread objects were added at startup. The "capability
failure" check was only done when a disk was assigned to an IOThread in
qemuCheckIOThreads. This was because the initial implementation had no way
to dynamically add IOThreads, but it was possible to dynamically add a
disk to the domain. So the decision was if the domain supported it, then
add the IOThread objects. Then if a disk with an IOThread defined was
added, it could check the capability and fail to add if not there. This
just meant the 'iothreads' value was essentially ignored.
Eventually commit id 'a27ed6e7' allowed for the dynamic addition and
deletion of IOThread objects. So it was no longer necessary to generate
IOThread objects to dynamically attach a disk to. However, the startup
and disk check code was not modified to reflect this.
This patch will move the capability failure check to when IOThread
objects are being added to the command line. Thus a domain that has
IOThreads defined will not be started if the emulator doesn't support
the capability. This means when qemuCheckIOThreads is called to add
a disk, it's no longer necessary to check the capability. Instead the
code can use the IOThreadFind call to indicate that the IOThread
doesn't exist.
Finally, because it is possible to have a domain running with the
iothreadids[] defined prior to this change (each having mostly empty
elements) if libvirtd is restarted, qemuProcessDetectIOThreadPIDs will check
if there are niothreadids when the QEMU_CAPS_OBJECT_IOTHREAD capability
check fails and remove the elements and array if they exist.
With these changes in place, it turns out the cputune-numatune test
was failing because the right bit wasn't set in the test. So the
opportunity was used to fix that and create a test that would be expected to fail
with some sort of iothreads defined and used, but not having the
correct capability.
Although theoretically both should be the same value, the niothreadids
should be used in favor of iothreads when performing comparisons. This
leaves the iothreads as a purely numeric value to be saved in the config
file. The one exception to the rule is virDomainIOThreadIDDefArrayInit
where the iothreadids are being generated from the iothreads count since
iothreadids were added after initial iothreads support.
Event implementations need to be registered before a connection to the
Hypervisor is opened, otherwise event handling can be impaired (e.g.
delayed messages). This fact is referenced in an e-mail [1], but should
also be noted in the documentation of the registration functions.
[1] https://www.redhat.com/archives/libvirt-users/2014-April/msg00011.html
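In short, the pattern being documented (hedged example):

    /* register the event implementation first ... */
    if (virEventRegisterDefaultImpl() < 0)
        return -1;

    /* ... and only then open the connection */
    if (!(conn = virConnectOpen("qemu:///system")))
        return -1;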
Create a separate local API that will fill in the iothreadid array
entries that were not defined by <iothread id='#'> entries in the XML.
Signed-off-by: John Ferlan <jferlan@redhat.com>
After a successful creation of a directory, if some other call results
in returning a failure, let's remove the directory we created to
prevent another round trip or confusion in the caller. In particular, this
function can be called during a storage backend buildVol, so in order
to ensure that caller doesn't need to distinguish between failed create
or some other failure after create, just remove the directory we created.
Signed-off-by: John Ferlan <jferlan@redhat.com>
After a successful creation of a file, if some other call results
in returning a failure, let's unlink the file we created to prevent
another round trip or confusion in the caller. In particular, this
function can be called during a storage backend buildVol, so in order
to ensure that caller doesn't need to distinguish between failed create
or some other failure after create, just remove the volume we created.
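A condensed sketch of the pattern (variable and label names assumed;
<fcntl.h> and <unistd.h> needed):

    bool created = false;
    int fd = -1;

    if ((fd = open(path, O_WRONLY | O_CREAT | O_EXCL, mode)) < 0)
        goto error;
    created = true;

    if (fchown(fd, uid, gid) < 0 || fchmod(fd, mode) < 0)
        goto error;

    return fd;

 error:
    VIR_FORCE_CLOSE(fd);
    if (created)
        unlink(path);
    return -1;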
Signed-off-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1233003
Track when the logical volume was successfully created in order to
properly handle the call to virStorageBackendLogicalDeleteVol. It's
possible that the failure to create was because someone created an
LV in the pool outside of libvirt's knowledge. In this case, we don't
want to delete that LV. A subsequent or future refresh of the pool
will find the volume and cause an earlier failure.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Commit id '1b5685da' refactored the code to move buildvoldef inside
the buildVol conditional; however, the VIR_FREE of the memory was
left only when 'buildret' failed, thus we're leaking memory.
Signed-off-by: John Ferlan <jferlan@redhat.com>
As of commit id '155ca616' a 'refreshVol' is called after a buildVol
succeeds in storageVolCreateXML, thus a volStorageBackendSheepdogRefreshVolInfo
call in virStorageBackendSheepdogBuildVol is no longer necessary.
Additionally, the 'conn' parameter becomes unused.
Signed-off-by: John Ferlan <jferlan@redhat.com>
As of commit id '155ca616' a 'refreshVol' is called after the buildVol
succeeds in storageVolCreateXML, thus the volStorageBackendRBDRefreshVolInfo
call in virStorageBackendRBDBuildVol is no longer necessary.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Without this, building on cygwin fails with:
CC libvirt_admin_la-libvirt-admin.lo
libvirt-admin.c:25:21: fatal error: rpc/rpc.h: No such file or directory
#include <rpc/rpc.h>
^
Reported-by: Yaakov Selkowitz <yselkowi@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Our apibuild.py script does not cope with ATTRIBUTE_NONNULL:
Parse Error: parsing function type, ')' expected
Got token ('name', 'char')
Last token: ('name', 'char')
Token queue: [('op', '*'), ('name', 'dconnuri'), ('sep', ')')]
Line 3297 end:
Makefile:2441: recipe for target '../../docs/apibuild.py.stamp' failed
Let's drop it. Moreover, up until e17ae3ccc2 where it was
introduced, we did not really care about the NULL-ity of dconnuri.
Besides, ATTRIBUTE_NONNULL merely checks static calls passing a
literal NULL; it won't catch the dynamic ones, where a NULL is
passed via a variable at runtime.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1256999
After creating a copy of the 'authdef' in a pool -> disk translation,
unconditionally clear the 'authType' in the resulting disk auth def
structure since that's used for a storage pool and not a disk. This
ensures virStorageAuthDefFormat will properly format the <auth> XML
for a <disk> (e.g. it won't have a <auth type='%s'.../>).
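A sketch of the clearing step after the copy; the exact field access path
is an assumption:

    /* the type is only meaningful for a pool <auth>, not a disk one */
    def->src->auth->authType = VIR_STORAGE_AUTH_TYPE_NONE;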
Check that dconnuri is not NULL or we will hit a null pointer dereference later.
I hope this makes Coverity happy.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
virDomainMigrateUnmanagedParams is not a good candidate for this functionality
as it is used by the migrate family of functions too, and it has its own checks
that are a superset of the extracted ones, so we don't need to check twice.
Actually, the name of the function is slightly misleading as there is also a
check for consistency of the flags parameter alone. So it could be refactored
further and reused by all migrate functions, but for now let it be a matter of
a different patchset.
It is *not* a pure refactoring patch as it introduces an offline check for
older versions. It looks like it must be done that way and nobody will be
broken either.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Finally, in this step we get what we were aiming for - toURI{1, 2} (and
the migration{*} APIs too) can now work through the V3_PARAMS protocol.
The execution path goes through the unchanged virDomainMigrateUnmanaged
adapter function, which is called by all target places.
Note that we keep the fact that direct migration never works
through the V3_PARAMS protocol. We can't change this aspect without
further investigation.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Move the virDomainMigrateUnmanagedProto* expected params list check into
the function itself and use the common virTypedParamsCheck for this purpose.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Extract the protocol-dependent parameter adaptation and checking into
designated functions. Leave only the branching and common checks in
virDomainMigrateUnmanagedParams.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Let's put the main functionality into the params version of
virDomainMigrateUnmanaged as a preparation step for merging it with
virDomainMigratePeer2PeerParams. virDomainMigrateUnmanaged then does
nothing more than just adapting arguments.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
The p2p plain and direct functions are good candidates for code reuse. Their
main purpose is the same - to branch among different versions of the migration
protocol - and the implementation of this purpose is also the same. They also
share other common functionality in lesser aspects. So let's merge them.
But as they have different signatures, we have to agree on a convention for how
to pass the direct migration 'uri' in 'dconnuri' and 'miguri'. Fortunately we
already have such a convention in the parameters passed to the toURI2 function,
so let's just follow it: 'uri' is passed in miguri and dconnuri is ignored.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
We use the name miguri for this parameter in other places, so
make the naming more consistent.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Direct migration should work if *perform3 is present but *perform
is not. This is the situation when driver migration is implemented
after a new version of the driver function has been introduced. We should
not be forced to support the old version too, as its parameter space is a
subspace of the newer one.
This is more structured code, so it will be easier to add a branch for the
_PARAMS protocol here. Strictly speaking it is not a pure refactoring, as we
remove scenarios for broken cases where a driver defines the V3 feature yet
implements the perform function. So it is additionally more solid code.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
The 'useParams' parameter usage is an example of control coupling. Most of the
work inside the function is done differently except for the uri check. Let's
split this function into two: one with an extensible parameter set and one
with a hardcoded parameter set.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
The internal representation of a JSON array counts the items in
size_t. However, for some reason, when asking for the count it's
reported as an int. Firstly, we need the function to return a signed
type as it returns -1 on an error. But not every system has an int
the same size as size_t. Therefore, let's return ssize_t.
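The resulting shape of the accessor (prototype sketch):

    /* signed and as wide as size_t, so -1 can still signal an error */
    ssize_t virJSONValueArraySize(const virJSONValue *array);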
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Commit 4e8032272f used $(builddir) in the header search
path to fix a build issue; however, $(builddir) is not defined
by old autoconf versions such as the one available in CentOS 5,
resulting in the following error:
cc1: error: /util: No such file or directory
make[3]: *** [libvirt_driver_la-fdstream.lo] Error 1
Since $(builddir) is defined to always be '.', just use that
value directly instead.
Since a9fe620372, we are generating virkeymaps.h at build
time; however, we are not including $(builddir)/util in the
header search path, so when doing a VPATH build the compiler
is unable to locate the file.
make[2]: Entering directory
`/home/jenkins/libvirt/systems/libvirt-fedora-20/build/src'
GEN util/virkeymaps.h
...
CC util/libvirt_util_la-virkeycode.lo
CC util/libvirt_util_la-virkeyfile.lo
CC util/libvirt_util_la-virlockspace.lo
CC util/libvirt_util_la-virlog.lo
../../src/util/virkeycode.c:27:24: fatal error: virkeymaps.h: No such file or directory
#include "virkeymaps.h"
^
compilation terminated.
Coverity notices that net->ifname is potentially referenced after a
VIR_FREE(). Since net->ifname will eventually be freed during
virDomainDefFree when calling virDomainNetDefFree, let's just let that
processing take care of the free.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Since the strtok_r call in libxlCapsInitGuests expects a non NULL first
parameter when the third parameter is NULL, we need to check that
the returned 'capabilities' from a libxl_get_version_info call is
not NULL, and error out if it is, since the code expects it to be set.
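A hedged sketch of the added check; the error message and surrounding code
are assumptions:

    const libxl_version_info *ver_info;

    if ((ver_info = libxl_get_version_info(ctx)) == NULL ||
        ver_info->capabilities == NULL) {
        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                       _("failed to get version info from libxenlight"));
        return -1;
    }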
Signed-off-by: John Ferlan <jferlan@redhat.com>
So imagine you want to create a new security manager:
if (!(mgr = virSecurityManagerNew("selinux", "QEMU", false, true, false, true)));
Hard to parse, right? What about this:
if (!(mgr = virSecurityManagerNew("selinux", "QEMU",
VIR_SECURITY_MANAGER_DEFAULT_CONFINED |
VIR_SECURITY_MANAGER_PRIVILEGED)));
Now that's better! This is what the commit does.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This gets rid of the partially enforced alignment and makes it less
likely for a bogus value to be introduced in the enumeration.
Capabilities are divided in five-element groups for better readability.
Use #define for QEMU_CAPS_NET_NAME and QEMU_CAPS_HOST_NET_ADD, both
of which are aliases for QEMU_CAPS_0_10.
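The aliases mentioned above take this shape (as described):

    /* both are treated as synonyms of the 0.10 capability baseline */
    #define QEMU_CAPS_NET_NAME QEMU_CAPS_0_10
    #define QEMU_CAPS_HOST_NET_ADD QEMU_CAPS_0_10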
qemuMigrationIsAllowed would disallow offline migration if the VM
contained host devices or memory modules. Since during offline migration
we don't transfer any state we can safely migrate VMs with such
configuration.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1265049
Use the migration @flags for checking various migration aspects rather
than picking them out as booleans. Document the new semantics in the
function header.
Now that qemuMigrationIsAllowed is always called with @vm, we can drop
the @def argument and simplify the control flow.
Additionally the comment is invalid so drop it.
Extract the hostdev check from qemuMigrationIsAllowed into a separate
function since that is the only part that needs to be done in the v2
migration protocol prepare phase on the destination. All other checks
were added when the v3 protocol existed so they don't need to be
extracted.
This change will allow dropping the @def argument for
qemuMigrationIsAllowed and further simplify the function.
In fact, it was never used, since vz has no features supporting it.
That is why there will be no harm to anyone if we just remove this code to
prevent further misunderstanding and efforts to support dead code.
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
At the time this code was added we intended to support a libvirt interface
to manage vz networks. In fact, it was never implemented completely enough to
work correctly, which makes me think that there will be no harm to anyone if
we just rip it out. Moreover, in vz7 we started to use the libvirt bridge
network driver to manage networks.
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
Even though QEMU on the source host reports completed migration and thus
we move to the Finish phase, QEMU on the destination host may still be
processing migration data. Thus before we can start guest CPUs on the
destination, we have to wait for a completed migration event.
https://bugzilla.redhat.com/show_bug.cgi?id=1265902
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
With new QEMU which supports migration events,
qemuMigrationCheckJobStatus needs to explicitly query QEMU for migration
statistics once migration is completed to make sure the caller sees
up-to-date statistics with both old and new QEMU. However, some callers
are not interested in the statistics at all and once we start waiting
for a completed migration on the destination host too, checking the
statistics would even fail. Let's push the decision whether to update
the statistics or not to the caller.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The function already has two bool parameters and we will need to add a
new one. Let's switch to flags to make the callers readable.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The destination host gets detailed statistics about the current
migration from the source host via the migration cookie and copies them to
the domain object so that they can be queried using
virDomainGetJobStats. However, we should only copy statistics to the
domain object when migration finished successfully.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>