A bunch of files that we don't currently parse, and are very
unlikely to ever start parsing, made their way into the nodeinfo
test data. Get rid of them.
The number of vCPUs for a guest must be between 1 and the
maximum value configured in the domain XML. This commit
introduces checks to make sure that passing count <= 0
results in an error.
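A minimal sketch of the added check (wording and surrounding code illustrative; virReportError is libvirt's standard error reporter):

if (count <= 0) {
    virReportError(VIR_ERR_INVALID_ARG,
                   _("argument 'count' must be a positive number, got %d"),
                   count);
    return -1;
}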
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1248277
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Fix a cut-n-paste error from commit id '35eecdde' where the previous
check for max_sectors seems to have been copied, but the error message
parameter was not updated to be ioeventfd.
Commit id '1c24cfe9' added error messages for virNumaSetPagePoolSize;
however, virNumaGetHugePageInfo also uses virNumaGetHugePageInfoPath
in order to build the path, but it never checked upon return whether
the built path exists, which could lead to an error message such as:
$ virsh freepages 0 1
error: Failed to open file
'/sys/devices/system/node/node0/hugepages/hugepages-1kB/free_hugepages':
No such file or directory
Rather than add the same message for the other two callers, adjust
virNumaGetHugePageInfoPath so that it not only builds the path, but
also checks whether the built path exists. If the path does not exist,
generate the error message and return failure.
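A rough sketch of the adjusted helper's tail, using virFileExists (message wording illustrative):

if (!virFileExists(*path)) {
    virReportError(VIR_ERR_OPERATION_FAILED, "%s",
                   _("requested huge page info path does not exist"));
    return -1;
}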
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Commit id '1c24cfe9' added new checks and error messages for failure
scenarios. Let's move those error messages to after the call to
virNumaGetHugePageInfoPath in order to provide a more specific error
message depending on node and page_size.
After this patch:
# virsh allocpages --pagesize 2047 --pagecount 1 --cellno 0
error: operation failed: page size 2047 is not available on node 0
# virsh allocpages --pagesize 2047 --pagecount 1
error: operation failed: page size 2047 is not available
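The caller-side reporting that produces the two messages above then looks roughly like this (structure illustrative):

if (node == -1)
    virReportError(VIR_ERR_OPERATION_FAILED,
                   _("page size %u is not available"), page_size);
else
    virReportError(VIR_ERR_OPERATION_FAILED,
                   _("page size %u is not available on node %d"),
                   page_size, node);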
Signed-off-by: Luyao Huang <lhuang@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1265114
Refactor the helper virNumaGetHugePageInfoPath by splitting the case
where it returns a directory path (page_size == 0 and suffix == NULL)
into a new helper, virNumaGetHugePageInfoDir, which is only called when
a directory path is expected to be returned. This solves the issue where
the helper was called with page_size == 0 while a file path was expected
in return, but a directory path was produced instead, failing in
virFileReadAll with:
error : virFileReadAll:1358 : Failed to read file
'/sys/devices/system/node/node0/hugepages/': Is a directory
Since the virNumaGetPages API expects a directory path (it passes
page_size == 0 and suffix == NULL), it now calls the new helper.
Callers of virNumaGetHugePageInfoPath expect a file path in return,
which can then be used in the call to virFileReadAll.
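A simplified sketch of the new directory helper (the real signature carries more context):

static int
virNumaGetHugePageInfoDir(char **path, int node)
{
    /* Directory form, for callers that will scan its entries. */
    if (virAsprintf(path,
                    "/sys/devices/system/node/node%d/hugepages/",
                    node) < 0)
        return -1;
    return 0;
}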
Signed-off-by: Luyao Huang <lhuang@redhat.com>
We have macros for both positive and negative string matching.
Therefore there is no need to use !STREQ or !STRNEQ. While dropping
such usage, a new syntax-check rule is introduced to make sure we
don't introduce it again.
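For example (handle_difference() is just a placeholder):

/* preferred: */
if (STRNEQ(foo, bar))
    handle_difference();

/* now flagged by the syntax-check rule: */
if (!STREQ(foo, bar))
    handle_difference();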
Signed-off-by: Ishmanpreet Kaur Khera <khera.ishman@gmail.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This has been broken for a looong time - in fact, we've been
shipping a mostly-empty NEWS file for at least the past two years.
Including the html namespace and using it for matching elements,
like hacking1.xsl and hacking2.xsl were already doing, makes the
NEWS file useful again.
Add a note explaining that the release list has been split up
by year as well.
The following functions are implemented:
vzDomainIsUpdated, vzDomainGetVcpusFlags and vzDomainGetMaxVcpus.
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
The following functions were implemented:
vzNodeGetCPUStats, vzNodeGetMemoryStats,
vzNodeGetCellsFreeMemory and vzNodeGetFreeMemory.
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
Because we have no limit on the maximal number of vcpus in containers,
we report 1028 as the maximum just for the sake of common sense.
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
Even though the APIs are not implemented yet, this creates a
skeleton that can be filled in later.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This function should really be called only when we want to change
ownership of a file (or disk source). Let's switch to calling a
wrapper function which will eventually record the current owner
of the file and then call virSecurityDACSetOwnershipInternal.
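A minimal sketch of the wrapper shape (signatures simplified; the recording step arrives in later patches):

static int
virSecurityDACSetOwnership(const char *path, uid_t uid, gid_t gid)
{
    /* Eventually: remember the current owner of @path here so that
     * it can be restored later. */
    return virSecurityDACSetOwnershipInternal(path, uid, gid);
}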
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This is a pure code adjustment. The structure is going to be needed
later, as it will hold a reference that will be used to talk to
virtlockd. So far, however, there is no functional change, just code
preparation.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
It's better if we stat() the file that we are about to chown() first
and check whether there's anything we need to change. Not that it
would make much difference, but for the upcoming patches we need to
be doing the stat() anyway. Moreover, if we do things this way, we
can drop the @chown_errno variable, which becomes redundant.
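The pattern, roughly (plain stat()/chown() from <sys/stat.h> and <unistd.h>, error reporting trimmed):

struct stat sb;

if (stat(path, &sb) < 0)
    return -1;

if (sb.st_uid == uid && sb.st_gid == gid)
    return 0;  /* nothing to change */

if (chown(path, uid, gid) < 0)
    return -1;

return 0;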
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Correctly mark the places where we need to remember and recall
file ownership. We don't want to mislead any potential developer.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
So we have this mechanism where, on SIGUSR1, virtlockd dumps its
internal state into a JSON file, re-execs itself and then loads the
internal state back. However, there's a bug in our implementation:
(gdb) signal SIGUSR1
Continuing with signal SIGUSR1.
[Thread 0x7fd094f7b700 (LWP 10602) exited]
process 10600 is executing new program: /home/zippy/work/libvirt/libvirt.git/src/virtlockd
warning: Could not load shared library symbols for linux-vdso.so.1.
Do you need "set solib-search-path" or "set sysroot"?
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
[New Thread 0x7fb28bc3c700 (LWP 14501)]
Program received signal SIGSEGV, Segmentation fault.
0x00007fb29133d530 in virExpandN (ptrptr=0x70, size=8, countptr=0x68, add=1, report=true, domcode=7, filename=0x7fb29138aeab "rpc/virnetserver.c", funcname=0x7fb29138b680 <__FUNCTION__.15821> "virNetServerAddProgram", linenr=661) at util/viralloc.c:288
288 if (*countptr + add < *countptr) {
(gdb) bt
#0 0x00007fb29133d530 in virExpandN (ptrptr=0x70, size=8, countptr=0x68, add=1, report=true, domcode=7, filename=0x7fb29138aeab "rpc/virnetserver.c", funcname=0x7fb29138b680 <__FUNCTION__.15821> "virNetServerAddProgram", linenr=661) at util/viralloc.c:288
#1 0x00007fb29132a267 in virNetServerAddProgram (srv=0x0, prog=0x7fb2915d08b0) at rpc/virnetserver.c:661
#2 0x00007fb29131f27f in main (argc=1, argv=0x7fff8f771298) at locking/lock_daemon.c:1445
Notice the NULL @srv passed to frame 2? Usually, the @srv variable
is initialized on a fresh start. However, in the case of a daemon
reload, the code path responsible for initializing the value was not
triggered, and therefore we crashed immediately. Fix this by always
setting the variable.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Tunnelled migration can hang if the destination qemu exits despite all the
ABI checks. This happens whenever the destination qemu exits before the
complete transfer is noticed by the source qemu. The savevm state checks at
runtime can fail at the destination and cause qemu to error out.
The source qemu can't notice it, as the EPIPE is not propagated to it.
qemuMigrationIOFunc() notices the stream being broken from virStreamSend()
and cleans up the stream alone. qemuMigrationWaitForCompletion() would
never get to 100% transfer completion.
qemuMigrationWaitForCompletion() never breaks out either, since
the ssh connection to the destination is healthy, and the source qemu also
thinks the migration is ongoing, as the FD to which it transfers is never
closed or broken. So the migration will hang forever. Even Ctrl-C on the
virsh migrate wouldn't be honoured. Close the source-side FD when there is
an error in the stream. That way, the source qemu updates itself and
qemuMigrationWaitForCompletion() notices the failure.
Close the FD for all kinds of errors to be sure. The error message is not
copied for EPIPE so that the destination error is copied instead later.
Note:
Reproducible with repeated migrations between Power hosts running in different
subcores-per-core modes.
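The core of the fix, in qemuMigrationIOFunc()'s error path, is roughly (member name illustrative):

/* Closing the source-side FD makes the source qemu see the broken
 * transfer, so qemuMigrationWaitForCompletion() notices the failure. */
VIR_FORCE_CLOSE(data->sock);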
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.vnet.ibm.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1264008
The existing algorithm assumed that someone was making small,
incremental changes; however, it is possible to change iothreads from 0
(or a relatively small number) to some really large number, and the
algorithm would possibly spin its wheels doing unnecessary searches.
So, optimize the algorithm using a bitmap to find available iothread_id
values starting at 1 that aren't already defined by a "<thread id='#'>"
entry, filling in the iothreadids array with those iothread_id values.
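A sketch of the bitmap-based fill using libvirt's virBitmap helpers (simplified; maxthreads, nids and ids are illustrative names, and the real code also copes with oversized ids and allocation failures):

virBitmapPtr map = virBitmapNew(maxthreads + 1);
size_t i;
ssize_t id;

if (!map)
    return -1;

/* Mark every iothread_id already claimed by a <thread id='#'/> entry. */
for (i = 0; i < nids; i++)
    ignore_value(virBitmapSetBit(map, ids[i]));

/* The lowest unused id; the search starts after bit 0, i.e. at 1. */
id = virBitmapNextClearBit(map, 0);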
https://bugzilla.redhat.com/show_bug.cgi?id=1249981
When qemuDomainPinIOThread was added in commit id 'fb562614', a check
for the IOThread capability was not needed since a check for iothreadpids
covered the condition where the support for IOThreads was not present.
The iothreadpids array was only created if qemuProcessDetectIOThreadPIDs
was able to query the monitor for IOThreads. It would only do that if
the QEMU_CAPS_OBJECT_IOTHREAD capability was set.
However, when iothreadids were added in commit id '8d4614a5', the
check for iothreadpids was replaced by a search through the
iothreadids[] array for the matching iothread_id. That left open the
possibility that an iothreadids[] array was defined, but its entries
essentially only had 'iothread_id' set, leaving the 'thread_id' value
at 0 and, eventually, the cpumap entry NULL.
This was because the original IOThreads commit id '72edaae7' only
checked whether IOThreads were defined; if the emulator had the
IOThreads capability, then IOThread objects were added at startup. The
"capability failure" check was only done when a disk was assigned to an
IOThread in qemuCheckIOThreads. This was because the initial
implementation had no way to dynamically add IOThreads, but it was
possible to dynamically add a disk to the domain. So the decision was:
if the domain supported it, then add the IOThread objects. Then, if a
disk with an IOThread defined was added, the code could check the
capability and fail the add if the capability was missing. This just
meant the 'iothreads' value was essentially ignored.
Eventually commit id 'a27ed6e7' allowed for the dynamic addition and
deletion of IOThread objects. So it was no longer necessary to generate
IOThread objects to dynamically attach a disk to. However, the startup
and disk check code was not modified to reflect this.
This patch will move the capability failure check to when IOThread
objects are being added to the command line. Thus a domain that has
IOThreads defined will not be started if the emulator doesn't support
the capability. This means when qemuCheckIOThreads is called to add
a disk, it's no longer necessary to check the capability. Instead the
code can use the IOThreadFind call to indicate that the IOThread
doesn't exist.
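A sketch of the startup-time check being moved to command-line building (error wording illustrative):

if (def->niothreadids > 0 &&
    !virQEMUCapsGet(qemuCaps, QEMU_CAPS_OBJECT_IOTHREAD)) {
    virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                   _("IOThreads not supported for this QEMU"));
    return -1;
}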
Finally, because a domain with a mostly empty iothreadids[] array
defined prior to this change could still be running when libvirtd is
restarted, qemuProcessDetectIOThreadPIDs will check whether there are
niothreadids when the QEMU_CAPS_OBJECT_IOTHREAD capability check fails,
and remove the elements and the array if they exist.
With these changes in place, it turns out the cputune-numatune test
was failing because the right bit wasn't set in the test. So I used the
opportunity to fix that and to create a test that is expected to fail
when some sort of iothreads is defined and used without the correct
capability.
Although theoretically both should be the same value, niothreadids
should be used in preference to iothreads when performing comparisons.
This leaves iothreads as a purely numeric value to be saved in the
config file. The one exception to the rule is
virDomainIOThreadIDDefArrayInit, where the iothreadids are generated
from the iothreads count, since iothreadids were added after the
initial iothreads support.
The condition checking whether --format was specified was incorrect:
virsh crashed if an empty format string was passed, as in:
virsh dump VM dump --format '' --memory-only
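A defensive shape for the fix, using virsh's existing option helper (the exact fix may differ):

const char *format = NULL;

if (vshCommandOptStringReq(ctl, cmd, "format", &format) < 0)
    return false;

/* Treat an empty string as "no format given" instead of using it later. */
if (format && *format == '\0')
    format = NULL;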
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1272301
Event implementations need to be registered before a connection to the
Hypervisor is opened, otherwise event handling can be impaired (e.g.
delayed messages). This fact is referenced in an e-mail [1], but should
also be noted in the documentation of the registration functions.
[1] https://www.redhat.com/archives/libvirt-users/2014-April/msg00011.html
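For example, with libvirt's default event loop implementation, the safe ordering looks like this:

#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn;

    /* Register the event implementation first... */
    if (virEventRegisterDefaultImpl() < 0)
        return 1;

    /* ...and only then open the connection. */
    if (!(conn = virConnectOpen("qemu:///system")))
        return 1;

    /* Event callbacks can now be registered and served via
     * virEventRunDefaultImpl() in the application's main loop. */

    virConnectClose(conn);
    return 0;
}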
Create a separate local API that will fill in the iothreadid array
entries that were not defined by <iothread id='#'> entries in the XML.
Signed-off-by: John Ferlan <jferlan@redhat.com>
There were some inconsistencies, e.g. in the number of digits used for
the day. The month name was also spelled out instead of abbreviated
in some instances.
There were some inconsistencies; now the section title is always
one of Bug Fixes, Cleanups, Documentation, Features, Improvements,
Portability, Security.
Some of the paragraphs were not properly indented: while this was
not a problem in the HTML version, you could tell the difference
in the plain text version.
The changes for releases earlier than 0.7.1 were mostly lumped
together, as opposed to being tidily organized with one change per
line like we have done from that point onwards.
As a result, they look awful in the HTML version and don't work
too well in the plain text version either.
Luckily, except for the very first releases, the information is
still very detailed, so it's enough to organize it properly.
The description for this release, unlike all other descriptions,
was inside a <p> element; however, the XSLT stylesheet contains a
template that drops all <p> elements from the output file, so it
never made it to the generated NEWS file.
Use a <li> element, same as all other releases, instead.
It should redirect stdout to /dev/null first,
then redirect stderr to whatever stdout currently points at.
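That is, the order of the redirections matters:

cmd > /dev/null 2>&1    # both streams discarded (intended)
cmd 2>&1 > /dev/null    # stderr still goes to the original stdout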
Signed-off-by: Wei Jiangang <weijg.fnst@cn.fujitsu.com>
After a successful creation of a directory, if some later call fails,
let's remove the directory we created to prevent another round trip or
confusion in the caller. In particular, this function can be called
during a storage backend buildVol, so in order to ensure the caller
doesn't need to distinguish between a failed create and some other
failure after the create, just remove the directory we created.
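The pattern, roughly (some_later_step() is a stand-in for whatever follows the create; the file variant is analogous with unlink()):

if (virDirCreate(path, mode, uid, gid, flags) < 0)
    return -1;

if (some_later_step() < 0) {
    rmdir(path);  /* don't leave a half-built directory behind */
    return -1;
}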
Signed-off-by: John Ferlan <jferlan@redhat.com>
After a successful creation of a file, if some later call fails,
let's unlink the file we created to prevent another round trip or
confusion in the caller. In particular, this function can be called
during a storage backend buildVol, so in order to ensure the caller
doesn't need to distinguish between a failed create and some other
failure after the create, just remove the volume we created.
Signed-off-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1233003
Track whether the logical volume was successfully created in order to
properly handle the call to virStorageBackendLogicalDeleteVol. It's
possible that the failure to create was because someone created an LV
in the pool outside of libvirt's knowledge. In this case, we don't want
to delete that LV. A subsequent or future refresh of the pool will find
the volume and cause an earlier failure.
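The shape of the change, roughly (lv_create() and later_steps() are stand-ins for the real calls):

bool created = false;
int ret = -1;

if (lv_create(vol) < 0)
    goto cleanup;
created = true;

if (later_steps(vol) < 0)
    goto cleanup;

ret = 0;
 cleanup:
    /* Only delete what we actually created; never an LV made outside libvirt. */
    if (ret < 0 && created)
        virStorageBackendLogicalDeleteVol(conn, pool, vol, 0);
    return ret;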
Signed-off-by: John Ferlan <jferlan@redhat.com>
Commit id '1b5685da' refactored the code to move buildvoldef inside
the buildVol conditional; however, the VIR_FREE of the memory was done
only when 'buildret' failed, thus leaking the memory on success.
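The fix boils down to freeing on every path, not only on failure; roughly:

buildret = backend->buildVol(conn, pool, buildvoldef, flags);
VIR_FREE(buildvoldef);   /* previously freed only in the buildret < 0 path */
if (buildret < 0)
    goto cleanup;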
Signed-off-by: John Ferlan <jferlan@redhat.com>