I noticed some inconsistent use of 'else'.
* src/qemu/qemu_driver.c (qemuCPUCompare)
(qemuDomainSnapshotCreateXML, qemuDomainRevertToSnapshot)
(qemuDomainSnapshotDiscard): Match coding conventions.
If a snapshot with the same name already exists, virDomainSnapshotAssignDef()
just returns NULL, in which case the snapshot definition is leaked.
Currently this leak is not a big problem, since qemuDomainSnapshotLoad()
is only called once during initial startup of libvirtd.
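A minimal sketch of the fix at the call site; the snapshot-list field and
the def-free helper ('vm->snapshots', 'virDomainSnapshotDefFree') are
assumptions beyond what this message states:

if (!(snap = virDomainSnapshotAssignDef(&vm->snapshots, def))) {
    virDomainSnapshotDefFree(def);   /* duplicate name: the list never took ownership */
    def = NULL;
}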
Signed-off-by: Philipp Hahn <hahn@univention.de>
A previous commit gave the LXC driver the ability to mount
block devices for the container filesystem. Through use of
the loopback device functionality, we can build on this to
support use of plain file images for LXC filesystems.
By setting the LO_FLAGS_AUTOCLEAR flag we can ensure that
the loop device automatically disappears when the container
dies or shuts down.
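A hedged sketch of the mechanism using the standard Linux loop ioctls
(error handling simplified; 'bind_loop' is a hypothetical helper, not the
controller's real code):

#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/loop.h>

int bind_loop(const char *lodev, int filefd)
{
    struct loop_info64 info;
    int lofd = open(lodev, O_RDWR);   /* a free /dev/loopN, found by scanning */
    if (lofd < 0)
        return -1;
    if (ioctl(lofd, LOOP_SET_FD, filefd) < 0) {
        close(lofd);
        return -1;
    }
    memset(&info, 0, sizeof(info));
    info.lo_flags = LO_FLAGS_AUTOCLEAR;   /* auto-detach when the last user closes it */
    if (ioctl(lofd, LOOP_SET_STATUS64, &info) < 0) {
        close(lofd);
        return -1;
    }
    return lofd;   /* mount it; the mount then keeps the device alive */
}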
* src/lxc/lxc_container.c: Raise error if we see a file
based filesystem, since it should have been turned into
a loopback device already
* src/lxc/lxc_controller.c: Rewrite any filesystems of
type=file into type=block, by binding the file image
to a free loop device
Currently the LXC driver can only populate filesystems from
host filesystems, using bind mounts. This patch allows host
block devices to be mounted. It autodetects the filesystem
format at mount time, and adds the block device to the cgroups
ACL. Example usage is
<filesystem type='block' accessmode='passthrough'>
<source dev='/dev/sda1'/>
<target dir='/home'/>
</filesystem>
* src/lxc/lxc_container.c: Mount block device filesystems
* src/lxc/lxc_controller.c: Add block device filesystems
to cgroups ACL
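The autodetection mentioned above can be pictured as trying each
filesystem type the kernel knows until one mounts; a rough sketch, not
the actual lxc_container.c code ('mount_auto' is illustrative):

#include <stdio.h>
#include <string.h>
#include <sys/mount.h>

int mount_auto(const char *src, const char *dst)
{
    char line[256];
    int ret = -1;
    FILE *fp = fopen("/proc/filesystems", "r");
    if (!fp)
        return -1;
    while (fgets(line, sizeof(line), fp)) {
        char *type = strchr(line, '\t');
        if (!type || strncmp(line, "nodev", 5) == 0)
            continue;   /* pseudo-filesystems cannot back a block device */
        type++;
        type[strcspn(type, "\n")] = '\0';
        if (mount(src, dst, type, 0, NULL) == 0) {
            ret = 0;    /* found the format the device carries */
            break;
        }
    }
    fclose(fp);
    return ret;
}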
An application container shouldn't get a private /dev. Fix
the regression from 6d37888e6a
* src/lxc/lxc_container.c: Don't mount /dev for app containers
Detected by ccc-analyzer, reported by Alex Jia.
qemuProcessStart always calls qemuProcessWaitForMonitor with a
non-negative position, but qemuProcessAttach always calls with -1.
In the latter case, there is no log file we can scrape, so we
should not try to scrape the logs even if the qemu process died
at the very end.
* src/qemu/qemu_process.c (qemuProcessWaitForMonitor): Don't try
to read from log in qemuProcessAttach case.
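A hedged sketch of the guard, with stand-in names (the real function is
qemuProcessWaitForMonitor; 'scrape_log_tail' is hypothetical):

#include <sys/types.h>
#include <unistd.h>

static ssize_t scrape_log_tail(int logfd, off_t pos, char *buf, size_t len)
{
    if (pos == -1)
        return 0;                    /* attach case: nothing to scrape */
    if (lseek(logfd, pos, SEEK_SET) == (off_t)-1)
        return -1;
    return read(logfd, buf, len);    /* look for qemu's dying words */
}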
This addresses https://bugzilla.redhat.com/show_bug.cgi?id=713728
When "defining" a new network (or one that exists but isn't currently
active) the new definition is stored in network->def, but for a
network that already exists and is active, the new definition is
stored in network->newDef, and then moved over to network->def as soon
as the network is destroyed.
However, the code that writes the dhcp and dns hosts files used by
dnsmasq was always using network->def for its information, even when
the new data was actually in network->newDef, so the hosts files
always lagged one edit behind the definition.
This patch changes the code to keep the pointer to the new definition
after it's been assigned into the network, and use it directly
(regardless of whether it's stored in network->newDef or network->def)
to construct the hosts files.
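A sketch of the pattern with stand-in types ('write_hosts_files' stands in
for the dnsmasq hosts-file writers; none of these names are the real ones):

struct def { char *name; };
struct network { struct def *def, *newDef; };

static int write_hosts_files(struct def *d) { (void)d; return 0; }   /* stand-in */

static int define_network(struct network *net, struct def *newdef, int active)
{
    if (active)
        net->newDef = newdef;   /* live config stays in net->def until destroy */
    else
        net->def = newdef;
    /* use the local pointer, not net->def, so the files never lag an edit */
    return write_hosts_files(newdef);
}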
Value stored to 'ret' is never read, so remove this dead assignment.
* src/qemu/qemu_monitor_text.c: kill dead assignment.
Signed-off-by: Alex Jia <ajia@redhat.com>
Value stored to 'ret' is never read; in fact, the 'cleanup' section
directly returns -1 when the function fails, so remove this dead assignment.
* src/qemu/qemu_process.c: kill dead assignment.
Signed-off-by: Alex Jia <ajia@redhat.com>
When trying to use any SASL authentication for TCP sockets by
setting auth_tcp = "sasl" in libvirtd.conf on the server side, the
client will hang because virNetSASLSessionExtKeySize() re-locks the
SASL session instead of dropping the lock on exit.
* src/rpc/virnetsaslcontext.c: virNetSASLSessionExtKeySize drop the
lock on exit
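A minimal sketch of the fix, using pthread directly as a stand-in for
libvirt's mutex wrappers:

#include <pthread.h>

struct sasl_session { pthread_mutex_t lock; int ssf; };

static int ext_key_size(struct sasl_session *s)
{
    int ssf;
    pthread_mutex_lock(&s->lock);
    ssf = s->ssf;
    pthread_mutex_unlock(&s->lock);   /* the bug was a second lock() here */
    return ssf;
}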
This patch introduces an internal RPC API "virNetServerClose", which
is independent of "virNetServerFree". It closes all the socket fds,
and unlinks the unix socket paths, regardless of whether the socket
is still referenced or not.
This is to address regression bug:
https://bugzilla.redhat.com/show_bug.cgi?id=725702
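A rough sketch of the Close/Free split with stand-in types (the real API
operates on virNetServerPtr): Close tears down OS resources immediately,
independent of the reference count that Free honours.

#include <stddef.h>
#include <unistd.h>

struct server { int *fds; char **paths; size_t n; };

static void server_close(struct server *srv)
{
    size_t i;
    for (i = 0; i < srv->n; i++) {
        close(srv->fds[i]);         /* stop accepting, even if still referenced */
        if (srv->paths[i])
            unlink(srv->paths[i]);  /* remove the unix socket path */
    }
}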
I noticed that with 0.9.4, gnulib ended up replacing pthread_sigmask
on glibc, even though glibc's works perfectly fine. It turns out
to have been an upstream gnulib bug.
* .gnulib: Update to latest, for pthread_sigmask fix.
Detection based on gnutls_session doesn't work because GnuTLS 2.x.y
comes with a compat.h that defines gnutls_session to gnutls_session_t.
Instead detect this based on LIBGNUTLS_VERSION_MAJOR. Move this from
configure/config.h to gnutls_1_0_compat.h and make sure that all users
include gnutls_1_0_compat.h properly.
Also fix header guard in gnutls_1_0_compat.h.
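A hedged sketch of the resulting header (guard and macro names here are
illustrative, not copied from the real gnutls_1_0_compat.h):

#ifndef LIBVIRT_GNUTLS_1_0_COMPAT_H
# define LIBVIRT_GNUTLS_1_0_COMPAT_H

# include <gnutls/gnutls.h>

/* testing for gnutls_session cannot distinguish 1.x from 2.x, because
   2.x ships a compat.h mapping the old name to gnutls_session_t */
# if LIBGNUTLS_VERSION_MAJOR < 2
#  define HAVE_GNUTLS_1_0 1
# endif

#endif /* LIBVIRT_GNUTLS_1_0_COMPAT_H */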
* configure.ac docs/news.html.in libvirt.spec.in: updates for new
release
* po/*.po*: pulled translations from the transifex teams and regenerated
localizations
Leak detected by Coverity; only possible on unlikely ptsname_r
failure. Additionally, the man page for ptsname_r states that
failure is merely non-zero, not necessarily -1.
* src/util/util.c (virFileOpenTtyAt): Avoid leak on ptsname_r
failure.
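A minimal sketch of the corrected pattern ('tty_name' and the buffer size
are illustrative; ptsname_r's contract is as the man page describes):

#define _GNU_SOURCE
#include <stdlib.h>

static char *tty_name(int fd)
{
    char *name = malloc(64);
    if (!name)
        return NULL;
    if (ptsname_r(fd, name, 64) != 0) {   /* any non-zero value is failure */
        free(name);                       /* this free was missing */
        return NULL;
    }
    return name;
}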
Coverity detected that ifaceGetNthParent had already dereferenced
'nth' prior to the conditional; all callers already complied with
passing a non-NULL pointer so make this part of the contract.
* src/util/interface.h (ifaceGetNthParent): Add annotations.
* src/util/interface.c (ifaceGetNthParent): Drop useless null check.
In virNetServerNew, Coverity didn't realize that srv->mdnsGroupName
can only be non-NULL if mdnsGroupName was non-NULL.
In virNetServerRun, Coverity didn't realize that the array is non-NULL
if the array count is non-zero.
* src/rpc/virnetserver.c (virNetServerNew): Use alternate pointer.
(virNetServerRun): Give coverity a hint.
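A sketch of the "hint" technique; the stand-in macro below approximates
libvirt's sa_assert from internal.h:

#include <assert.h>

#ifdef STATIC_ANALYSIS
# define sa_assert(expr) assert(expr)
#else
# define sa_assert(expr) /* runtime no-op */
#endif

static int sum_pending(int nsignals, const int *signals)
{
    int sum = 0;
    if (nsignals > 0) {
        sa_assert(signals);   /* a nonzero count implies the array exists */
        for (int i = 0; i < nsignals; i++)
            sum += signals[i];
    }
    return sum;
}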
Coverity complained that 395 out of 409 virAsprintf calls are
checked, and therefore assumed that the remaining cases are bugs
waiting to happen. But in each of these cases, a failed virAsprintf
will properly set the target string to NULL, and pass on that
failure to the caller, without wasting efforts to check the call.
Adding the ignore_value silences Coverity.
* src/conf/domain_audit.c (virDomainAuditGetRdev): Ignore
virAsprintf return value, when it behaves like we need.
* src/network/bridge_driver.c (networkDnsmasqLeaseFileNameDefault)
(networkRadvdConfigFileName, networkBridgeDummyNicName)
(networkRadvdPidfileBasename): Likewise.
* src/util/storage_file.c (absolutePathFromBaseFile): Likewise.
* src/openvz/openvz_driver.c (openvzGenerateContainerVethName):
Likewise.
* src/util/command.c (virCommandTranslateStatus): Likewise.
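A sketch of why the check can be skipped; 'pidfile_name' and the glibc
asprintf are stand-ins for the libvirt helpers, whose contract is
"NULL on failure":

#define _GNU_SOURCE
#include <stdio.h>

static char *pidfile_name(const char *dir, const char *name)
{
    char *ret;
    if (asprintf(&ret, "%s/%s.pid", dir, name) < 0)
        ret = NULL;   /* virAsprintf guarantees this internally */
    return ret;       /* the caller propagates NULL as the failure */
}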
Quite a few leaks detected by coverity. For chr, the leaks were
close enough to the allocations to plug in place; for disk, the
leaks were separated from the allocation by enough other lines with
intermediate failure cases that I refactored the cleanup instead.
* src/qemu/qemu_command.c (qemuParseCommandLine): Plug leaks.
Warning detected by Coverity. No need for the NULL check, and
removing it silences the warning without any semantic change.
* src/qemu/qemu_migration.c (qemuMigrationFinish): All entries to
endjob had non-NULL vm.
Detected by Coverity. Freeing the wrong variable results in both
a memory leak and the likelihood of the caller dereferencing through
a freed pointer.
* src/rpc/virnettlscontext.c (virNetTLSSessionNew): Free correct
variable.
Coverity detected that 5 of 6 callers of virJSONValueArrayGet checked
for a NULL return; and that by not checking we risk a null deref
during an error. The error is unlikely since the prior call to
virJSONValueArraySize would probably have already caught any botched
JSON array parse, but better safe than sorry.
* src/qemu/qemu_monitor_json.c (qemuMonitorJSONGetBlockJobInfo):
Check for NULL.
(qemuMonitorJSONExtractPtyPaths): Fix typo.
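A sketch of the added guard with stand-in types (the real accessors are
virJSONValueArrayGet/virJSONValueArraySize):

#include <stddef.h>

struct json_value { struct json_value **items; size_t nitems; };

static struct json_value *array_get(struct json_value *arr, size_t i)
{
    return i < arr->nitems ? arr->items[i] : NULL;
}

static int walk_array(struct json_value *data)
{
    for (size_t i = 0; i < data->nitems; i++) {
        struct json_value *entry = array_get(data, i);
        if (!entry)
            return -1;   /* the guard this commit adds before dereferencing */
        /* ... extract fields from entry ... */
    }
    return 0;
}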
Detected by Coverity. We want to compare the result of fnmatch 'rv',
not our pre-set return value 'ret'.
* src/rpc/virnetsaslcontext.c (virNetSASLContextCheckIdentity):
Check correct variable.
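A minimal sketch of the corrected comparison ('identity_matches' is
illustrative; fnmatch is the standard POSIX call):

#include <fnmatch.h>

static int identity_matches(const char *pattern, const char *identity)
{
    int rv = fnmatch(pattern, identity, 0);
    return rv == 0;   /* the bug tested the pre-set 'ret' instead of 'rv' */
}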
Right now, every re-run of configure re-evaluates whether a
static analysis tool is in use. But if you run configure under
coverity, make a tweak, do an incremental rebuild with gcc (but
not coverity) to test the tweak, and then rerun a build under
coverity, configure does not get rerun, and static analysis ends
up with lots of false positives.
This patch caches the static analysis result, and also makes it
easier to force static analysis even if the existing checks are
insufficient to detect newer versions of the static analyzer tools.
* configure.ac (lv_cv_static_analysis): New cache variable.
Revert 6a1f5f568f. Now that libvirt_iohelper takes fds by
inheritance rather than by open() (commit 1eb66479), there is
no longer a race where the parent can unlink() a file prior to
the iohelper open()ing the same file. From there, it makes
more sense to have the callers both create and unlink, rather
than the caller create and the stream unlink, since the latter
was only needed when iohelper had to do the unlink.
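A hedged sketch of the new division of labour, with stand-in names: the
caller creates the file and the caller deletes it, which is safe now
that the iohelper inherits an open fd rather than re-open()ing the path:

#include <stdlib.h>
#include <unistd.h>

static int stream_from_temp(char *tmpl, int (*start_stream)(int fd))
{
    int fd = mkstemp(tmpl);   /* caller creates ... */
    if (fd < 0)
        return -1;
    if (start_stream(fd) < 0) {
        unlink(tmpl);         /* ... and caller deletes on failure; the
                                 stream no longer unlinks on our behalf */
        close(fd);
        return -1;
    }
    return 0;
}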
* src/fdstream.h (virFDStreamOpenFile, virFDStreamCreateFile):
Callers are responsible for deletion.
* src/fdstream.c (virFDStreamOpenFileInternal): Don't leak created
file on failure.
(virFDStreamOpenFile, virFDStreamCreateFile): Drop parameter.
* src/lxc/lxc_driver.c (lxcDomainOpenConsole): Update callers.
* src/qemu/qemu_driver.c (qemuDomainScreenshot)
(qemuDomainOpenConsole): Likewise.
* src/storage/storage_driver.c (storageVolumeDownload)
(storageVolumeUpload): Likewise.
* src/uml/uml_driver.c (umlDomainOpenConsole): Likewise.
* src/vbox/vbox_tmpl.c (vboxDomainScreenshot): Likewise.
* src/xen/xen_driver.c (xenUnifiedDomainOpenConsole): Likewise.
The previous qemu patch could end up calling unlink(tmp) before
tmp was the name of a valid file (unlinking a fileXXXXXX template
instead), or calling unlink(tmp) twice on success (once here,
and once at the end of the stream). Meanwhile, vbox also suffered
from the same leaked tmp file bug.
* src/qemu/qemu_driver.c (qemuDomainScreenshot): Don't unlink on
success, or on invalid name.
* src/vbox/vbox_tmpl.c (vboxDomainScreenshot): Don't leak temp file.
Spotted by Coverity. Gnutls documents that buffer must be NULL
if gnutls_x509_crt_get_key_purpose_oid is to be used to determine
the correct size needed for allocating a buffer.
* src/rpc/virnettlscontext.c
(virNetTLSContextCheckCertKeyPurpose): Initialize buffer.
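A minimal sketch of the documented two-call pattern ('key_purpose_size'
is illustrative; the gnutls call is the real one):

#include <gnutls/x509.h>

static int key_purpose_size(gnutls_x509_crt_t cert, size_t *size)
{
    *size = 0;
    /* the buffer argument must be NULL, not merely zero-sized, for the
       size query to work as documented */
    return gnutls_x509_crt_get_key_purpose_oid(cert, 0, NULL, size, NULL);
}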
Spotted by coverity. If pipe2 fails, then we attempt to close
uninitialized fds, which may result in a double-close.
* src/rpc/virnetserver.c (virNetServerSignalSetup): Initialize fds.
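A minimal sketch of the fix ('setup_signal_pipe' is illustrative; the
flags passed to pipe2 in the real code may differ):

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

static int setup_signal_pipe(int fds[2])
{
    fds[0] = fds[1] = -1;   /* the missing initialization */
    if (pipe2(fds, O_CLOEXEC) < 0)
        return -1;          /* error paths that close fds[] now see -1,
                               not stack garbage aliasing a live fd */
    return 0;
}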
Steps to reproduce this problem (vm1 is not running):
for i in `seq 50`; do virsh managedsave vm1& done; killall virsh
Pre-patch, virNetServerClientClose could end up setting client->sock
to NULL prior to other cleanup functions trying to use client->sock.
This fixes things by checking for NULL in more places, and by deferring
the cleanup until after all queued messages have been served.
* src/rpc/virnetserverclient.c (virNetServerClientRegisterEvent)
(virNetServerClientGetFD, virNetServerClientIsSecure)
(virNetServerClientLocalAddrString)
(virNetServerClientRemoteAddrString): Check for closed socket.
(virNetServerClientClose): Rearrange close sequence.
Analysis from Wen Congyang.
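A sketch of the added guard with stand-in types: every accessor re-checks
the socket, since the close sequence may already have run:

#include <pthread.h>

struct sock { int fd; };
struct client { pthread_mutex_t lock; struct sock *sock; };

static int client_get_fd(struct client *c)
{
    int fd = -1;
    pthread_mutex_lock(&c->lock);
    if (c->sock)              /* may be NULL once the socket was closed */
        fd = c->sock->fd;
    pthread_mutex_unlock(&c->lock);
    return fd;
}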
This patch adds an internal function openvzGetVEStatus to
get the real state of the domain. This function is used in
various places in the driver, in particular to detect when
the domain has been shut down by the user with the "halt"
command.
Currently, we attempt to run a sync job and an async job at the same time.
It means that the monitor commands for the two jobs can be run in any order.
In the function qemuDomainObjEnterMonitorInternal():
if (priv->job.active == QEMU_JOB_NONE && priv->job.asyncJob) {
    if (qemuDomainObjBeginNestedJob(driver, obj) < 0)
        return -1;
We check whether the caller is an async job by priv->job.active and
priv->job.asyncJob. But when an async job is running, and a sync job is
also running at the time of the check, then priv->job.active is not
QEMU_JOB_NONE. So we cannot check whether the caller is an async job
in the function qemuDomainObjEnterMonitorInternal(), and must instead
put the burden on the caller to tell us when an async command wants
to do a nested job.
Once the burden is on the caller, then only async monitor enters need
to worry about whether the VM is still running; for sync monitor enter,
the internal return is always 0, so lots of ignore_value can be dropped.
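A hedged sketch of the resulting entry points; the signatures below are
approximations based on the changelog that follows, not copies of the
real headers:

int qemuDomainObjEnterMonitorAsync(struct qemud_driver *driver,
                                   virDomainObjPtr obj,
                                   enum qemuDomainAsyncJob asyncJob);
/* async enter: may fail (e.g. the VM died), so callers must check */

int qemuDomainObjEnterMonitor(struct qemud_driver *driver,
                              virDomainObjPtr obj);
/* sync enter: internally always returns 0 now, so ignore_value() goes away */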
* src/qemu/THREADS.txt: Reflect new rules.
* src/qemu/qemu_domain.h (qemuDomainObjEnterMonitorAsync): New
prototype.
* src/qemu/qemu_process.h (qemuProcessStartCPUs)
(qemuProcessStopCPUs): Add parameter.
* src/qemu/qemu_migration.h (qemuMigrationToFile): Likewise.
(qemuMigrationWaitForCompletion): Make static.
* src/qemu/qemu_domain.c (qemuDomainObjEnterMonitorInternal): Add
parameter.
(qemuDomainObjEnterMonitorAsync): New function.
(qemuDomainObjEnterMonitor, qemuDomainObjEnterMonitorWithDriver):
Update callers.
* src/qemu/qemu_driver.c (qemuDomainSaveInternal)
(qemudDomainCoreDump, doCoreDump, processWatchdogEvent)
(qemudDomainSuspend, qemudDomainResume, qemuDomainSaveImageStartVM)
(qemuDomainSnapshotCreateActive, qemuDomainRevertToSnapshot):
Likewise.
* src/qemu/qemu_process.c (qemuProcessStopCPUs)
(qemuProcessFakeReboot, qemuProcessRecoverMigration)
(qemuProcessRecoverJob, qemuProcessStart): Likewise.
* src/qemu/qemu_migration.c (qemuMigrationToFile)
(qemuMigrationWaitForCompletion, qemuMigrationUpdateJobStatus)
(qemuMigrationJobStart, qemuDomainMigrateGraphicsRelocate)
(doNativeMigrate, doTunnelMigrate, qemuMigrationPerformJob)
(qemuMigrationPerformPhase, qemuMigrationFinish)
(qemuMigrationConfirm): Likewise.
* src/qemu/qemu_hotplug.c: Drop unneeded ignore_value.
Whether or not the previous return value is -1, the following code will be
executed for an inactive guest in src/qemu/qemu_driver.c:
ret = virDomainSaveConfig(driver->configDir, persistentDef);
If everything is okay, 'ret' is assigned 0 and the previous 'ret' is
overwritten; this patch fixes the issue.
* src/qemu/qemu_driver.c: avoid overwriting the return value when an
argument outside the blkio weight range is given for an inactive guest.
* how to reproduce?
% virsh blkiotune ${guestname} --weight 10
% echo $?
Note: the guest must be inactive and the argument 10 is outside the blkio
weight range; the error shows up in libvirtd.log, but virsh raises no
error and the return value is 0.
https://bugzilla.redhat.com/show_bug.cgi?id=726304
Signed-off-by: Alex Jia <ajia@redhat.com>
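The shape of the fix, reusing the line quoted above (the same pattern
applies to the memtune commit that follows):

/* before: a successful save clobbered an earlier failure */
ret = virDomainSaveConfig(driver->configDir, persistentDef);

/* after: only overwrite 'ret' on a new failure */
if (virDomainSaveConfig(driver->configDir, persistentDef) < 0)
    ret = -1;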
Whether or not the previous return value is -1, the following code will be
executed for an inactive guest in qemuDomainSetMemoryParameters:
ret = virDomainSaveConfig(driver->configDir, persistentDef);
If everything is okay, 'ret' is assigned 0 and the previous 'ret' is
overwritten; this patch fixes the issue.
* src/qemu/qemu_driver.c: avoid overwriting the return value when setting
the min_guarante value for an inactive guest.
* how to reproduce?
% virsh memtune ${guestname} --min_guarante 1024
% echo $?
Note: the guest must be inactive; in fact, 'min_guarante' isn't implemented
among the memory tunables, and the error shows up in libvirtd.log, but
virsh raises no error and the return value is 0.
Signed-off-by: Alex Jia <ajia@redhat.com>
These commands don't have a --pool option, so don't tell
vshCommandOptVolBy that there could be one. This made
vshCommandOptString for pooloptname fail, and a "missing option"
error was reported.
Make pooloptname optional for vshCommandOptVolBy.
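A hedged sketch with stand-in names ('opt_vol_by' and 'lookup_string'
stand in for vshCommandOptVolBy and vshCommandOptString): a NULL
pooloptname now means "this command has no --pool option".

static int opt_vol_by(const void *cmd, const char *pooloptname,
                      int (*lookup_string)(const void *, const char *,
                                           const char **))
{
    const char *pool = NULL;
    if (pooloptname != NULL &&
        lookup_string(cmd, pooloptname, &pool) < 0)
        return -1;   /* only consult --pool when the command defines it */
    /* ... resolve the volume by name, scoped to 'pool' when set ... */
    return 0;
}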
This bug was introduced by f9a837da73: the condition was not changed
after the else clause was removed. So now it quits with "domain is not
running" when the domain is running, while for a domain that is not
running it reports "no job is active".
How to reproduce:
1)
% virsh start $domain
% virsh domjobabort $domain
error: Requested operation is not valid: domain is not running
2)
% virsh destroy $domain
% virsh domjobabort $domain
error: Requested operation is not valid: no job is active on the domain
3)
% virsh save $domain /tmp/$domain.save
Before above commands finished, try to abort job in another terminal
% virsh domjobabort $domain
error: Requested operation is not valid: domain is not running
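A hedged sketch of the intended checks (virDomainObjIsActive is the real
helper; 'priv->job.asyncJob' and the surrounding structure are
assumptions, not the exact code of the fix):

if (!virDomainObjIsActive(vm)) {
    /* now only an inactive domain reports: "domain is not running" */
    goto endjob;
}
if (priv->job.asyncJob == QEMU_ASYNC_JOB_NONE) {
    /* a running domain without a job: "no job is active on the domain" */
    goto endjob;
}
/* otherwise, actually abort the running job */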