Commit a0670ae caused a regression in 'virsh event' and
'virsh qemu-monitor-event' - if a user tries to filter the
command to a specific domain, an error message is printed:
$ virsh event dom --loop
error: internal error: virsh qemu-monitor-event: no domain VSH_OT_DATA option
and then the command continues as though no domain had been
supplied (giving events for ALL domains, instead of the
requested one). This is because the code was incorrectly
assuming that all "domain" options would be supplied via a
mandatory VSH_OT_DATA, even though "domain" is optional for
these two commands, so we had changed them to VSH_OT_STRING
to quit failing for other reasons (ever since it was decided
that VSH_OT_DATA and VSH_OT_STRING should no longer be
synonyms).
In looking at the situation, though, the code for looking up
a domain was making a pointless check for whether the option
exists prior to finding the option's string value, as
vshCommandOptStringReq does just fine at reporting any errors
when looking up a string whether or not the option was present.
So this is a case of regression fixing by pure code deletion :)
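A minimal sketch of the redundant shape (simplified; only
vshCommandOptStringReq and vshCmdHasOption are the real names):
/* before: pointless existence probe ahead of the real lookup */
if (!vshCmdHasOption(ctl, cmd, optname))
    return NULL;
if (vshCommandOptStringReq(ctl, cmd, optname, &n) < 0)
    return NULL;
/* after: the helper alone reports any error, option present or not */
if (vshCommandOptStringReq(ctl, cmd, optname, &n) < 0)
    return NULL;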
* tools/virsh-domain.c (vshCommandOptDomainBy): Drop useless filter.
* tools/virsh-interface.c (vshCommandOptInterfaceBy): Likewise.
* tools/virsh-network.c (vshCommandOptNetworkBy): Likewise.
* tools/virsh-nwfilter.c (vshCommandOptNWFilterBy): Likewise.
* tools/virsh-secret.c (vshCommandOptSecret): Likewise.
* tools/virsh.h (vshCmdHasOption): Drop unused function.
* tools/virsh.c (vshCmdHasOption): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
If a virAsprintf() within the function fails, we call VIR_FREE()
on the @rundir variable and jump to the cleanup label, where it is
freed again. It doesn't hurt, but it doesn't make much sense either.
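A self-contained sketch of the pattern; XFREE stands in for VIR_FREE,
which frees and NULLs the pointer (and makePath for the failing
virAsprintf()), which is exactly why the duplication was harmless:
#include <stdlib.h>
#define XFREE(p) do { free(p); (p) = NULL; } while (0)
static char *makePath(void) { return NULL; } /* simulated failure */
static int
example(void)
{
    char *rundir = NULL;
    int ret = -1;
    if (!(rundir = makePath())) {
        XFREE(rundir);  /* the redundant call this patch drops */
        goto cleanup;
    }
    ret = 0;
 cleanup:
    XFREE(rundir);      /* free(NULL) is a no-op, so no crash */
    return ret;
}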
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Commit f6563bc3 introduced an HMP implementation of the function (so
that a different, uglier function could be removed). Before the HMP
code is called, though, there is a leftover check that the monitor is
JSON, which prevents the HMP code from ever running.
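The leftover guard looked roughly like this (shape illustrative, not
the exact code):
if (!mon->json) {
    virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
                   _("JSON monitor is required"));
    return -1;
}
/* ... the HMP implementation below could never be reached ... */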
https://bugzilla.redhat.com/show_bug.cgi?id=1200149
Even though we have a mutex mechanism so that two clients don't spawn
two daemons, it's not strong enough. It can happen that while one
client is spawning the daemon, the other one fails to connect.
Basically two possible errors can happen:
error: Failed to connect socket to '/home/mprivozn/.cache/libvirt/libvirt-sock': Connection refused
or:
error: Failed to connect socket to '/home/mprivozn/.cache/libvirt/libvirt-sock': No such file or directory
The problem in both cases is that the daemon is still starting up
while we try (and fail) to connect. We should postpone the connecting
phase until the daemon has been started (by the other thread that is
spawning it). In order to do that, create a file lock 'libvirt-lock'
in the directory where the session daemon would create its socket. Even
when called from multiple processes, spawning the daemon will then
serialize on the file lock, so only the first process to arrive
actually spawns the daemon.
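A minimal sketch of the serialization idea using plain POSIX fcntl()
locking (the real patch uses libvirt's file utilities, and the lock
file lives next to the session socket):
#include <fcntl.h>
#include <unistd.h>
static int
spawnDaemonLocked(const char *lockpath)
{
    struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };
    int fd = open(lockpath, O_WRONLY | O_CREAT, 0600);
    if (fd < 0)
        return -1;
    if (fcntl(fd, F_SETLKW, &fl) < 0) { /* blocks until holder is done */
        close(fd);
        return -1;
    }
    /* first process here spawns the daemon; latecomers arrive only
     * after the socket exists, so their connect() succeeds */
    close(fd);                          /* releases the lock */
    return 0;
}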
Tested-by: Richard W. M. Jones <rjones@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Two non-static functions in virjson.c were missing their export info in
libvirt_private.syms, so they couldn't be used anywhere in the code (and
that's about to change).
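The fix is a pair of one-line entries in the syms file; the format
looks like this (the symbol names below are placeholders, not the two
from this commit):
# util/virjson.h
virJSONValueObjectForeachKeyValue;
virJSONValueObjectGetKey;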
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Luckily we are allocating the structs as zeroed memory and
PTHREAD_MUTEX_INITIALIZER is "{ 0 }", so nothing bad happened, but the
struct should still be created as a lockable object.
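A self-contained sketch of the safer pattern (libvirt has its own
mutex and lockable-object helpers; plain pthreads shown here):
#include <pthread.h>
struct obj {
    pthread_mutex_t lock;
};
/* relying on calloc()'ed memory happening to match
 * PTHREAD_MUTEX_INITIALIZER is luck, not a guarantee */
static int
objInit(struct obj *o)
{
    return pthread_mutex_init(&o->lock, NULL) == 0 ? 0 : -1;
}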
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Check the proposed pool source host XML definition against existing gluster
pools to ensure the incoming definition doesn't use the same source dir and
source host XML definition as an existing pool.
Check the proposed pool source host XML definition against existing sheepdog
pools to ensure the incoming definition doesn't use the same source host XML
definition as an existing pool.
Rather than having duplicate code doing the same check, have the netfs
matching code use the new virStoragePoolSourceMatchSingleHost.
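A hypothetical shape of the shared helper (field names follow
virStoragePoolSource; the real signature may differ):
static bool
matchSingleHost(virStoragePoolSourcePtr a, virStoragePoolSourcePtr b)
{
    if (a->nhost != 1 || b->nhost != 1)
        return false;
    return STREQ(a->hosts[0].name, b->hosts[0].name);
}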
Signed-off-by: John Ferlan <jferlan@redhat.com>
Create a separate iSCSI source matching subroutine. This makes the
calling code a bit cleaner and sets up for future patches which need to
do better source hosts[0].name processing/checking.
As part of the effort the logic is inverted from a multi-level if
statement to a series of single-level checks for better readability and
further separation.
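The inversion pattern, sketched generically:
/* before */
if (a) {
    if (b) {
        if (c)
            matched = true;
    }
}
/* after */
if (!a)
    return false;
if (!b)
    return false;
return c;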
Signed-off-by: John Ferlan <jferlan@redhat.com>
When acquiring resource via sanlock fails, we would report it as
VIR_ERR_INTERNAL_ERROR, which is not very friendly to applications using
libvirt. Moreover, the lockd driver would report the same failure as
VIR_ERR_RESOURCE_BUSY, which looks better.
Unfortunately, in the sanlock driver we don't really know whether
acquiring the resource failed because it was already locked or there was
another reason behind it. But the end result is the same, and I think
using the VIR_ERR_RESOURCE_BUSY reason for all acquire failures is still
better than what we have now.
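Sketched, the change is just the error code used at the failure site
(message text illustrative):
virReportError(VIR_ERR_RESOURCE_BUSY,
               _("failed to acquire lock on %s"), res_name);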
https://bugzilla.redhat.com/show_bug.cgi?id=1165119
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Commit 49ed6cff is broken on mingw and other non-linux platforms:
CCLD libvirt.la
Cannot export virNetDevSysfsFile: symbol not defined
collect2: error: ld returned 1 exit status
* src/util/virnetdev.c: Provide virNetDevSysfsFile fallback.
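The fallback follows the usual libvirt pattern for unsupported
platforms (parameter list illustrative; see virnetdev.h for the real
one):
#ifndef __linux__
int
virNetDevSysfsFile(char **sysfs_path, const char *ifname,
                   const char *file)
{
    virReportSystemError(ENOSYS, "%s",
                         _("sysfs network device paths are not available on this platform"));
    return -1;
}
#endif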
Signed-off-by: Eric Blake <eblake@redhat.com>
Found by ./autobuild.sh during a mingw cross-compile:
Commit 8a96e87 was not innocuous - glibc happens to leak the
definition of time() through other headers, so that even without
<sys/select.h>, virrandom.c compiled just fine. But on mingw,
we were not so lucky; <sys/select.h> was important for its side
effect of dragging in <time.h>, and we now have nothing providing
the declaration of time():
../../src/util/virrandom.c: In function 'virRandomOnceInit':
../../src/util/virrandom.c:65:5: error: implicit declaration of function 'time' [-Werror=implicit-function-declaration]
unsigned int seed = time(NULL) ^ getpid();
^
../../src/util/virrandom.c:65:5: error: nested extern declaration of 'time' [-Werror=nested-externs]
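The fix is to make the dependency explicit rather than relying on
another header dragging it in:
#include <time.h>   /* for time() */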
Signed-off-by: Eric Blake <eblake@redhat.com>
Changing the prototype to not have "int *index" since we'll soon be
disallowing index as a name. Curiously the original commit (a4504ac)
for the function used 'int idx' in the function - so they didn't match.
Now they do.
It is there even with -nodefaults and -no-user-config, so account for
that so we can start sparc domains.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
The variable 'last_processed_hostdev_vf' holds the index of the last
successfully configured VF. When resetting the VF network config because
of a failure, hostdevs[last_processed_hostdev_vf] should also be reset.
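Sketched with a hypothetical restoreVfConfig() helper, the fix is the
loop bound:
/* before: the VF at last_processed_hostdev_vf was skipped */
for (i = 0; i < last_processed_hostdev_vf; i++)
    restoreVfConfig(hostdevs[i]);
/* after: include the last processed index */
for (i = 0; i <= last_processed_hostdev_vf; i++)
    restoreVfConfig(hostdevs[i]);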
Signed-off-by: Huanle Han <hanxueluo@gmail.com>
1. 'last_good_net' indicates the index of the last successfully
configured net, so def->nets[last_good_net] should also be cleaned up if
an error occurs.
2. If an error occurs in 'virNetDevMacVLanVPortProfileRegisterCallback'
(the second 'goto err_exit' in the loop), we should also do the
'virNetDevVPortProfileDisassociate' cleanup for the
'virNetDevVPortProfileAssociate' (the first code block in the loop). So
we should consider the net successfully configured once the first code
block in the loop finishes, as sketched below.
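Sketched ordering (helper names from the commit, arguments elided):
if (virNetDevVPortProfileAssociate(...) < 0)
    goto err_exit;
last_good_net = i;  /* moved up: this net now needs Disassociate */
if (virNetDevMacVLanVPortProfileRegisterCallback(...) < 0)
    goto err_exit;  /* the cleanup loop now covers this net too */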
Signed-off-by: Huanle Han <hanxueluo@gmail.com>
After setting memory parameters for a running domain, the change needs
to be saved to the live XML, otherwise it will disappear after libvirtd
is restarted.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1211548
Signed-off-by: Shanzhi Yu <shyu@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
When setting guest memory with an invalid --soft-limit parameter,
a weird error is reported:
$ virsh memtune r7 --hard-limit 20417224 --soft-limit 9007199254740992 \
--swap-hard-limit 35417224
error: Unable to parse integer parameter 'NAME
Change it to
error: Unable to parse integer parameter soft-limit
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1211550
Signed-off-by: Shanzhi Yu <shyu@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Apparently for Xen-devel 'index' is a global and causes a build failure,
so just use the shortened 'idx' instead to avoid the conflict.
Signed-off-by: John Ferlan <jferlan@redhat.com>
QEMU does not abandon the mirror. The job carries on in the synchronised
phase and it might be either pivoted again or cancelled. The commit
hints that the described behavior was happening in a downstream version.
If the command returns false there are two possible options:
1) qemu did not reach the point where it would ask the block job to
pivot
2) pivoting failed in the actual qemu coroutine
If either of those happens we return failure and reset the condition
that waits for the block job to complete. This makes the API fail, but
in the case where qemu actually abandons the mirror, that fact is
notified via the event and handled asynchronously.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1202704
qemuDomainBlockJobImpl became an unmaintainable mess over the years of
adding new stuff to it. This patch starts splitting individual functions
out of it until it can be killed entirely.
Overall this will add lines of code rather than delete them, but that is
traded for maintainability.
My intention is to split qemuMonitorJSONBlockJob() into simpler separate
functions for every block job type. Since the error handling code is the
same for all block jobs, this patch extracts the code into a separate
function that will later be reused in more places.
With the new helper qemuMonitorJSONErrorIsClass we can save a few
function calls as we can extract the error object once.
Split the function that checks the actual error class string out into a
separate helper, as it will be useful later, and refactor
qemuMonitorJSONHasError to return a bool type, removing a few useless
checks.
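A hypothetical shape of the two helpers (real names from the commit,
bodies sketched):
static bool
qemuMonitorJSONErrorIsClass(virJSONValuePtr error, const char *klass)
{
    return STREQ_NULLABLE(virJSONValueObjectGetString(error, "class"),
                          klass);
}
static bool
qemuMonitorJSONHasError(virJSONValuePtr reply, const char *klass)
{
    virJSONValuePtr error = virJSONValueObjectGet(reply, "error");
    return error && qemuMonitorJSONErrorIsClass(error, klass);
}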
The virJSONValueObjectHasKey call is basically useless here, since the
return value of the following virJSONValueObjectGet call is checked
anyway (and it can't fail at that point). By removing the first check we
save a function call.
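The redundancy, sketched:
/* before: two lookups of the same key */
if (virJSONValueObjectHasKey(reply, "error")) {
    error = virJSONValueObjectGet(reply, "error");
    ...
}
/* after: one lookup; Get returns NULL when the key is absent */
if ((error = virJSONValueObjectGet(reply, "error"))) {
    ...
}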
Previously we checked that the vcpu we are trying to set is within the
range of the number of threads presented by qemu. The problem is that if
the VM is offline the count is 0. Since the condition subtracted 1 from
the count, the number would overflow and the check would never trigger.
Change the condition to more sensible checks with specific error
messages.
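A self-contained demonstration of the underflow (plain C, not libvirt
code):
#include <stdio.h>
int main(void)
{
    unsigned int ncpus = 0; /* offline domain: no qemu threads */
    unsigned int vcpu = 5;
    /* old check: ncpus - 1 wraps to UINT_MAX, never triggers */
    if (vcpu > ncpus - 1)
        printf("rejected\n");
    else
        printf("accepted: %u > %u is false\n", vcpu, ncpus - 1);
    /* safer: compare without the subtraction */
    if (vcpu >= ncpus)
        printf("correctly rejected\n");
    return 0;
}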
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1208434
Refactor the code to parse vcpupin in a similar way to how the
iothreadpin code is now structured. This allows getting rid of some very
strange conditions and error messages.
Additionally, since an existing bug
( https://bugzilla.redhat.com/show_bug.cgi?id=1208434 ) allows adding
vcpupin definitions for vcpus that don't exist, this patch makes the
parser ignore all vcpupins that don't have a matching vCPU in the
definition rather than just offline ones.
Defining a domain with the following config:
<domain ...>
...
<iothreads>1</iothreads>
<cputune>
<iothreadpin cpuset='1'/>
will result in the following config formatted back:
<domain type='kvm'>
...
<iothreads>1</iothreads>
<cputune>
<iothreadpin iothread='0' cpuset='1'/>
After a restart the VM would vanish. Since our schema requires the
@iothread attribute to be present in <iothreadpin>, make it required by
the code too.