Commit Graph

19257 Commits

Dmitry Guryanov
756f8dcd40 conf: return proper default video type for parallels
Fix function virDomainVideoDefaultType for
parallels VMs and containers. It should return
VGA for VMs and VIR_DOMAIN_VIDEO_TYPE_PARALLELS
for containers.

Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
2015-04-10 09:50:30 +02:00
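
A minimal sketch of the selection described above (the enum values and
the container check are illustrative assumptions, not the actual
libvirt definitions):

  enum video_type {
      VIDEO_TYPE_VGA,
      VIDEO_TYPE_PARALLELS,  /* pseudo-device backing the container VNC console */
  };

  /* VMs get an emulated VGA card; containers get the Parallels
   * pseudo-video device used only by the VNC text console. */
  static enum video_type
  defaultVideoType(int isContainer)
  {
      return isContainer ? VIDEO_TYPE_PARALLELS : VIDEO_TYPE_VGA;
  }
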
Dmitry Guryanov
0d572b6982 conf: add VIR_DOMAIN_VIDEO_TYPE_PARALLELS video type
We support VNC for containers to provide the same
interface as for VMs. At the moment it just renders
the Linux text console.

Of course we don't pass any physical devices and
don't emulate virtual devices. Our VNC server
renders text from the terminal master and sends
input events from the VNC client to the terminal.

So add special video type VIR_DOMAIN_VIDEO_TYPE_PARALLELS
for these pseudo-devices.

Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
2015-04-10 09:50:29 +02:00
Dmitry Guryanov
b16868a135 parallels: don't fill net adapter model for containers
The network adapter model makes no sense for containers,
so we shouldn't set it to e1000 in
parallelsDomainDeviceDefPostParse.

Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
2015-04-10 09:50:29 +02:00
Dmitry Guryanov
6a06b467f5 parallels: fill adapter model in virDomainNetDef
We handle this parameter for VMs while defining
domains, so let's get this property from PCS and
set the corresponding field of virDomainNetDef in
the prlsdkLoadDomains function.

Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
2015-04-10 09:50:29 +02:00
Dmitry Guryanov
b204afa13e parallels: add controllers in prlsdkLoadDomain
Call virDomainDefAddImplicitControllers to add disk
controllers, so that the virDomainDef filled by this function
looks exactly like the one returned by virDomainDefParseString.

Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
2015-04-10 09:50:29 +02:00
Dmitry Guryanov
66aee37530 parallels: report that cdroms are readonly
Set the readonly flag for cdrom devices when we
retrieve the list of domains from PCS.

Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
2015-04-10 09:50:29 +02:00
Dmitry Guryanov
8951ad86ce parallels: implement virDomainManagedSave
Implement the virDomainManagedSave API function. In PCS
this feature is called "suspend". You can suspend a VM or
CT while it is in the running or paused state, and after
resuming (or starting) it will have the same state as
before the suspend.

Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
2015-04-10 09:50:29 +02:00
Dmitry Guryanov
233b799ddb parallels: split prlsdkDomainChangeState function
Split the prlsdkDomainChangeState function into
prlsdkDomainChangeStateLocked and prlsdkDomainChangeState,
so it can be used from places where the virDomainObj is
already found and locked.

Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
2015-04-10 09:50:29 +02:00
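
This is the common "Locked" split pattern; a minimal self-contained
sketch (struct and function names hypothetical):

  #include <pthread.h>

  struct dom { pthread_mutex_t lock; int state; };

  /* Variant for callers that already hold dom->lock. */
  static int
  domChangeStateLocked(struct dom *dom, int newstate)
  {
      dom->state = newstate;
      return 0;
  }

  /* Public variant: acquires the lock, then delegates. */
  static int
  domChangeState(struct dom *dom, int newstate)
  {
      pthread_mutex_lock(&dom->lock);
      int ret = domChangeStateLocked(dom, newstate);
      pthread_mutex_unlock(&dom->lock);
      return ret;
  }
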
Dmitry Guryanov
18558ae80f parallels: fix headers in parallels_sdk.h
The return value of the prlsdkStart/Kill/Stop etc. functions
is PRL_RESULT in parallels_sdk.c but int in parallels_sdk.h.
PRL_RESULT is an int, so the compiler didn't report errors.
Let's fix the discrepancy.

Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
2015-04-10 09:50:29 +02:00
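
Why the compiler stayed silent, as a minimal reproduction: PRL_RESULT
is a typedef of int, so the header and the implementation declare
compatible types and no diagnostic is possible.

  typedef int PRL_RESULT;

  int prlsdkStart(void);   /* what the header declared */

  PRL_RESULT               /* what the .c file defined -- the same
                              type to the compiler */
  prlsdkStart(void)
  {
      return 0;
  }
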
John Ferlan
97a1d94fa0 qemu: qemuDomainHotplugVcpus - separate out the del cgroup and pin
Future IOThread setting patches would copy the code anyway, so
split the deletion of the vcpu's cgroup and pindef out into its
own, more general API.

Signed-off-by: John Ferlan <jferlan@redhat.com>
2015-04-09 19:27:08 -04:00
John Ferlan
0ed8e47a7e qemu: qemuDomainHotplugVcpus - separate out the add cgroup
Future IOThread setting patches would copy the code anyway, so
split the code that adds the vcpu to a cgroup out into its own,
more general API.

Signed-off-by: John Ferlan <jferlan@redhat.com>
2015-04-09 19:27:08 -04:00
John Ferlan
0456eda317 cgroup: Use virCgroupNewThread
Replace the virCgroupNew{Vcpu|Emulator|IOThread} calls with the
common virCgroupNewThread API.

Signed-off-by: John Ferlan <jferlan@redhat.com>
2015-04-09 19:27:08 -04:00
John Ferlan
2cd3a980dc cgroup: Introduce virCgroupNewThread
Create a new common API to replace the virCgroupNew{Vcpu|Emulator|IOThread}
APIs, using an enum to generate the cgroup name.

Signed-off-by: John Ferlan <jferlan@redhat.com>
2015-04-09 19:27:08 -04:00
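
A sketch of the idea, assuming a hypothetical name-building helper
(the actual virCgroupNewThread signature may differ):

  #include <stdio.h>

  typedef enum {
      CGROUP_THREAD_VCPU,
      CGROUP_THREAD_EMULATOR,
      CGROUP_THREAD_IOTHREAD,
  } cgroupThreadName;

  /* One helper builds the per-thread cgroup name from the enum,
   * replacing three near-identical constructors. */
  static int
  cgroupThreadPartition(cgroupThreadName name, int id,
                        char *buf, size_t buflen)
  {
      switch (name) {
      case CGROUP_THREAD_VCPU:
          return snprintf(buf, buflen, "vcpu%d", id);
      case CGROUP_THREAD_EMULATOR:
          return snprintf(buf, buflen, "emulator");
      case CGROUP_THREAD_IOTHREAD:
          return snprintf(buf, buflen, "iothread%d", id);
      }
      return -1;
  }
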
John Ferlan
2ac0e647bd storage: Don't duplicate efforts of backend driver
https://bugzilla.redhat.com/show_bug.cgi?id=1206521

If the backend driver updates the pool available and/or allocation values,
then the storage_driver VolCreateXML, VolCreateXMLFrom, and VolDelete APIs
should not change the value; otherwise, it will appear as if the values
were "doubled" for each change.  Additionally since unsigned arithmetic will
be used depending on the size and operation, either or both values could be
appear to be much larger than they should be (in the EiB range).

Currently only the disk pool updates the values, but other pools could.
Assume a "fresh" disk pool of 500 MiB using /dev/sde:

$ virsh pool-info disk-pool
...
Capacity:       509.88 MiB
Allocation:     0.00 B
Available:      509.84 MiB

$ virsh vol-create-as disk-pool sde1 --capacity 300M

$ virsh pool-info disk-pool
...
Capacity:       509.88 MiB
Allocation:     600.47 MiB
Available:      16.00 EiB

The following assumes the disk backend has been updated to refresh the disk
pool at deletion of a primary partition as well as an extended partition:

$ virsh vol-delete --pool disk-pool sde1
Vol sde1 deleted

$ virsh pool-info disk-pool
...
Capacity:       509.88 MiB
Allocation:     9.73 EiB
Available:      6.27 EiB

This patch will check if the backend updated the pool values and honor that
update.
2015-04-09 19:04:18 -04:00
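
The EiB-range numbers above are the signature of unsigned wraparound;
a standalone demonstration of what subtracting a doubled allocation
from the smaller available value does:

  #include <stdio.h>

  int main(void)
  {
      unsigned long long avail = 300ULL << 20;  /* ~300 MiB left */
      avail -= 600ULL << 20;                    /* doubled update wraps near 2^64 */
      printf("%llu bytes = %.2f EiB\n",
             avail, avail / (double)(1ULL << 60));
      return 0;
  }
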
John Ferlan
1ffd82bb89 storage: Need to update freeExtent at delete primary partition
Commit id '471e1c4e' only considered updating the pool if the extended
partition was removed. As it turns out, removing a primary partition
also needs to update the freeExtent list, otherwise the following
sequence fails (assuming a "fresh" disk pool for /dev/sde of 500M):

$  virsh pool-info disk-pool
...
Capacity:       509.88 MiB
Allocation:     0.00 B
Available:      509.84 MiB

$ virsh vol-create-as disk-pool sde1 --capacity 300M
$ virsh vol-delete --pool disk-pool sde1
$ virsh vol-create-as disk-pool sde1 --capacity 300M
error: Failed to create vol sde1
error: internal error: no large enough free extent

$

This patch refreshes the pool, rereading the partitions.
2015-04-09 19:04:18 -04:00
John Ferlan
1095230dee storage: Fix issues in storageVolResize
https://bugzilla.redhat.com/show_bug.cgi?id=1073305

When creating a volume in a pool, the creation allows the 'capacity'
value to be larger than the available space in the pool. As long as
the 'allocation' value will fit in the space, the volume will be created.

However, the volume resize checks compared the new absolute
capacity value against the existing capacity plus the available space,
without regard for whether the new absolute capacity was actually
allocating space or not.  For example, in a pool with 75G of available
space, creating a volume with a capacity of 100G and an allocation of
10G will succeed; however, if the volume were instead created with a
capacity of 10G and then resized to a capacity of 100G, the code would
refuse to let the backend even try the resize.

Furthermore, when updating the pool "available" and "allocation" values,
the resize code would just "blindly" adjust them regardless of whether
space was "allocated" or just "capacity" was being adjusted.  This left
a scenario whereby a resize to 100G would fail; however, a resize to 50G
followed by one to 100G would both succeed.  Again, neither was adjusting
the allocation value, just the "capacity" value.

This patch adds more logic to the resize code to understand whether the
new capacity value is actually "allocating" space as well, and whether it
is shrinking or expanding. Since unsigned arithmetic is involved, it is
quite possible to adjust the pool size values incorrectly.

This patch also ensures that updates to the pool values only occur if we
actually performed the allocation.

NB: storageVolDelete, storageVolCreateXML, and storageVolCreateXMLFrom
each update the pool allocation/availability values only by the target
volume's allocation value.
2015-04-09 19:04:18 -04:00
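
A sketch of the intended accounting, under the assumption that pool
values move only by the genuinely allocated delta (struct and field
names hypothetical):

  #include <stdbool.h>

  struct vol  { unsigned long long capacity, allocation; };
  struct pool { unsigned long long available, allocation; };

  static bool
  poolAccountResize(struct pool *p, struct vol *v,
                    unsigned long long newcap, bool allocate)
  {
      if (!allocate || newcap <= v->allocation)
          return true;                 /* capacity-only change: no pool update */

      unsigned long long delta = newcap - v->allocation;
      if (delta > p->available)
          return false;                /* refuse before unsigned math wraps */

      p->available  -= delta;
      p->allocation += delta;
      v->allocation  = newcap;
      return true;
  }
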
Peter Krempa
a45ef3a9cd qemu: Avoid shadow of 'sync' symbol
Old compilers whine that 'sync' is being shadowed in the function
introduced in 1eccac1d2d.
2015-04-09 15:36:26 +02:00
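
The warning in question: a local named after the global sync(3)
declaration trips -Wshadow on those older compilers. A minimal
reproduction (function name hypothetical):

  #include <unistd.h>     /* declares sync(void) */

  static int
  flushNeeded(int sync)   /* old gcc -Wshadow: 'sync' shadows a global */
  {
      return sync != 0;
  }
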
Peter Krempa
7c62f239f4 qemu: blockPivot: Don't pause the VM any more since we don't use drive-reopen
Support for drive-reopen was never present in the upstream code so we
don't need to pause the VM when doing the block pivot. Kill all the
code related to this semi-upstream artifact.
2015-04-09 15:04:30 +02:00
Peter Krempa
db37f3cc3a qemu: Clean up old leftovers in qemuMonitorDrivePivot
There are two leftover unused variables. Remove them and clean up the
fallout of the change.
2015-04-09 14:18:48 +02:00
Peter Krempa
3eab2f647a qemu: blockjob: Use the new helpers in qemuDomainGetBlockJobInfo
Refactor the function to use the new helpers.
2015-04-09 14:11:49 +02:00
Peter Krempa
1eccac1d2d qemu: domain: Add helper to check block job support
We need to check that qemu supports block jobs in multiple places. Add a
helper to do the check.
2015-04-09 14:11:42 +02:00
Peter Krempa
88dc7e0c2f qemu: domain: Introduce helper to retrieve domain monitor object
In some cases where the function does not need to access the private
data, this helper may be used to retrieve the monitor object.
2015-04-09 14:11:36 +02:00
Erik Skultety
3888dcaa67 doc: Add info (where necessary) that paths should be specified as absolute
We documented this almost everywhere, but missed it in several places.

https://bugzilla.redhat.com/show_bug.cgi?id=1208763
2015-04-09 13:58:47 +02:00
Cédric Bosdonnat
cc21badc5c Open /proc/PID/ns/* read-only to avoid getting permission denied
lxc-enter-namespace stopped working on recent kernels (at least 3.19+)
due to /proc/PID/ns/* file descriptors being opened RW. From outside
the namespace these can only be opened RO.
2015-04-09 11:20:32 +02:00
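
The essence of the fix, as a self-contained sketch (the path and
function name are illustrative):

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <sched.h>
  #include <unistd.h>

  static int
  enterNamespace(const char *path)    /* e.g. "/proc/1234/ns/net" */
  {
      int fd = open(path, O_RDONLY);  /* O_RDWR fails from outside the ns */
      if (fd < 0)
          return -1;
      int ret = setns(fd, 0);         /* 0: don't restrict the ns type */
      close(fd);
      return ret;
  }
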
Cédric Bosdonnat
9e7b1e646d Apparmor qemu abstraction fixes for SLES
SLES 11 has the legacy qemu-kvm package; /usr/bin/qemu-kvm and
/usr/share/qemu-kvm need to be accessible to domains.
2015-04-09 11:18:16 +02:00
Lubomir Rintel
da33a1ac1f lxc: create the required directories upon driver start
/var/run may reside on a tmpfs and we fail to create the PID file if
/var/run/lxc does not exist.

Since commit 0a8addc1, the lxc driver's state directory isn't
automatically created before starting a domain. Now, the lxc driver
makes sure the state directory exists when it initializes.

Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
2015-04-09 11:06:26 +02:00
Peter Krempa
fac04598bb util: file: Don't carelessly sanitize URIs
RFC 3986 states that the separator in a URI path is a single slash.
Multiple slashes may potentially lead to different resources and thus we
should not remove them.
2015-04-09 09:43:36 +02:00
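
The distinction can be sketched like this: collapse duplicate slashes
only for plain paths, never for URIs (the URI heuristic below is a
simplification for illustration, not the actual libvirt logic):

  #include <stdbool.h>
  #include <string.h>

  static bool
  looksLikeURI(const char *s)
  {
      const char *p = strstr(s, "://");
      return p != NULL && p != s;     /* "scheme://..." */
  }

  static void
  sanitizePath(char *path)
  {
      if (looksLikeURI(path))
          return;                     /* "a//b" != "a/b" per RFC 3986 */

      char *dst = path;
      for (char *src = path; *src; src++) {
          if (*src == '/' && dst > path && dst[-1] == '/')
              continue;               /* drop the duplicate slash */
          *dst++ = *src;
      }
      *dst = '\0';
  }
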
Peter Krempa
b8e7facfa7 test: Add tests for virFileSanitizePath
Add test infrastructure for virFileSanitizePath so that it can be
sensibly refactored later.
2015-04-09 09:43:36 +02:00
Michal Privoznik
362566880f virLXCControllerSetupResourceLimits: Call virNuma*() iff needed
Like we do in the qemu driver (ea576ee543), let's call
virNumaSetupMemoryPolicy() only if really needed. The problem is, if
we numa_set_membind() the child, there's no way to change it from the
daemon afterwards, so any later attempts to change the pinning will
fail. But in a very weird way: the CGroups will be set, but due to the
membind the child will not allocate memory from any other node.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2015-04-08 12:01:10 +02:00
Luyao Huang
7cd0cf05f7 fix memleak in qemuRestoreCgroupState
131,088 bytes in 16 blocks are definitely lost in loss record 2,174 of 2,176
    at 0x4C29BFD: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
    by 0x4C2BACB: realloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
    by 0x52A026F: virReallocN (viralloc.c:245)
    by 0x52BFCB5: saferead_lim (virfile.c:1268)
    by 0x52C00EF: virFileReadLimFD (virfile.c:1328)
    by 0x52C019A: virFileReadAll (virfile.c:1351)
    by 0x52A5D4F: virCgroupGetValueStr (vircgroup.c:763)
    by 0x1DDA0DA3: qemuRestoreCgroupState (qemu_cgroup.c:805)
    by 0x1DDA0DA3: qemuConnectCgroup (qemu_cgroup.c:857)
    by 0x1DDB7BA1: qemuProcessReconnect (qemu_process.c:3694)
    by 0x52FD171: virThreadHelper (virthread.c:206)
    by 0x82B8DF4: start_thread (pthread_create.c:308)
    by 0x85C31AC: clone (clone.S:113)

Signed-off-by: Luyao Huang <lhuang@redhat.com>
2015-04-08 11:56:30 +02:00
Dawid Zamirski
306a242dd7 vbox: Implement virDomainSendKey
Since holdtime is not supported by the VBOX SDK, it is simulated
by sleeping before sending the key-up codes. The key-up codes are
auto-generated based on XT codeset rules (adding 0x80 to the key-down
code), which results in the same behavior as the QEMU implementation.
2015-04-08 11:56:29 +02:00
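
A sketch of the scheme (putScancodes() is a stand-in for the IKeyboard
call, not the actual vbox driver code):

  #include <unistd.h>

  static void
  putScancodes(const int *codes, int n)  /* stand-in for IKeyboard */
  {
      (void)codes;
      (void)n;
  }

  static void
  sendKeyWithHoldtime(const int *keydown, int n, unsigned holdtimeMs)
  {
      int keyup[16];
      for (int i = 0; i < n && i < 16; i++)
          keyup[i] = keydown[i] + 0x80;  /* XT rule: key-up = key-down + 0x80 */

      putScancodes(keydown, n);
      usleep(holdtimeMs * 1000);         /* simulate the unsupported holdtime */
      putScancodes(keyup, n);
  }
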
Dawid Zamirski
445733f3a1 vbox: Register IKeyboard with the unified API.
The IKeyboard COM object is needed to implement virDomainSendKey and is
available in all supported VBOX versions.
2015-04-08 11:56:29 +02:00
Michal Privoznik
ea576ee543 qemuProcessHook: Call virNuma*() only when needed
https://bugzilla.redhat.com/show_bug.cgi?id=1198645

Once upon a time, there was a little domain. And the domain was pinned
onto a NUMA node and hadn't fully allocated its memory:

  <memory unit='KiB'>2355200</memory>
  <currentMemory unit='KiB'>1048576</currentMemory>

  <numatune>
    <memory mode='strict' nodeset='0'/>
  </numatune>

Oh little me, said the domain, what will I do with so little memory.
If I only had a few megabytes more. But the old admin noticed the
whimpering, barely audible to untrained human ear. And good admin he
was, he gave the domain yet more memory. But the old NUMA topology
witch forbade allocating more memory on node zero. So he
decided to allocate it on a different node:

virsh # numatune little_domain --nodeset 0-1

virsh # setmem little_domain 2355200

The little domain was happy. For a while. Until bad, sharp teeth
shaped creature came. Every process in the system was afraid of him.
The OOM Killer they called him. Oh no, he's after the little domain.
There's no escape.

Do you kids know why? Because when the little domain was born, her
father, Libvirt, called numa_set_membind(). So even if the admin
allowed her to allocate memory from other nodes in the cgroups, the
membind() forbade it.

So what's the lesson? Libvirt should rely on cgroups whenever
possible and use numa_set_membind() only as a last-ditch effort.

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2015-04-08 11:54:31 +02:00
Michal Privoznik
d65acbde35 vircgroup: Introduce virCgroupControllerAvailable
This new internal API checks whether a given CGroup controller is
available.  It is going to be needed later when we need to make a
decision whether to pin domain memory onto NUMA nodes using the cpuset
CGroup controller or using numa_set_membind().

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2015-04-08 11:54:24 +02:00
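
How such a check is meant to be consumed; a sketch with a stubbed
availability test (the stub is hypothetical, the decision logic mirrors
the two commit messages above):

  #include <stdbool.h>

  static bool
  cgroupControllerAvailable(int controller)  /* stub; real code inspects
                                                mounted cgroup controllers */
  {
      (void)controller;
      return true;
  }

  enum { CONTROLLER_CPUSET };

  static void
  setupMemoryPinning(void)
  {
      if (cgroupControllerAvailable(CONTROLLER_CPUSET)) {
          /* pin via the cpuset controller: revocable from the daemon */
      } else {
          /* last resort: numa_set_membind(), which the child can
             never take back */
      }
  }
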
Michael Chapman
cfcdf5ff01 qemu_driver: check caps after starting block job
Currently we check qemuCaps before starting the block job. But qemuCaps
isn't available on a stopped domain, which means we get a misleading
error message in this case:

  # virsh domstate example
  shut off

  # virsh blockjob example vda
  error: unsupported configuration: block jobs not supported with this QEMU binary

Move the qemuCaps check into the block job so that we are guaranteed the
domain is running.

Signed-off-by: Michael Chapman <mike@very.puzzling.org>
2015-04-08 11:16:19 +02:00
Michael Chapman
72df8314f0 qemu_migrate: use nested job when adding NBD to cookie
qemuMigrationCookieAddNBD is usually called from within an async
MIGRATION_OUT or MIGRATION_IN job, so it needs to start a nested job.

(The one exception is during the Begin phase when change protection
isn't enabled, but qemuDomainObjEnterMonitorAsync will behave the same
as qemuDomainObjEnterMonitor in this case.)

This bug was encountered with a libvirt client that repeatedly queries
the disk mirroring block job info during a migration. If one of these
queries occurs just as the Perform migration cookie is baked, libvirt
crashes.

Relevant logs are as follows:

    6701: warning : qemuDomainObjEnterMonitorInternal:1544 : This thread seems to be the async job owner; entering monitor without asking for a nested job is dangerous
[1] 6701: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block","id":"libvirt-629"}
[2] 6699: info : qemuMonitorIOWrite:503 : QEMU_MONITOR_IO_WRITE: mon=0x7fefdc004700 buf={"execute":"query-block","id":"libvirt-629"}
[3] 6704: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block-jobs","id":"libvirt-630"}
[4] 6699: info : qemuMonitorJSONIOProcessLine:203 : QEMU_MONITOR_RECV_REPLY: mon=0x7fefdc004700 reply={"return": [...], "id": "libvirt-629"}
    6699: error : qemuMonitorJSONIOProcessLine:211 : internal error: Unexpected JSON reply '{"return": [...], "id": "libvirt-629"}'

At [1] qemuMonitorBlockStatsUpdateCapacity sends its request, then waits
on mon->notify. At [2] the request is written out to the monitor socket.
At [3] qemuMonitorBlockJobInfo sends its request, and also waits on
mon->notify. The reply from the first request is received at [4].
However, qemuMonitorJSONIOProcessLine is not expecting this reply since
the second request hadn't completed sending. The reply is dropped and an
error is returned.

qemuMonitorIO signals mon->notify twice during its error handling,
waking up both of the threads waiting on it. One of them clears mon->msg
as it exits qemuMonitorSend; the other crashes:

  qemuMonitorSend (mon=0x7fefdc004700, msg=<value optimized out>) at qemu/qemu_monitor.c:975
  975         while (!mon->msg->finished) {
  (gdb) print mon->msg
  $1 = (qemuMonitorMessagePtr) 0x0

Signed-off-by: Michael Chapman <mike@very.puzzling.org>
2015-04-08 10:30:17 +02:00
Maxim Nestratov
9baf87bbc6 parallels: delete old networks in prlsdkDoApplyConfig before adding new ones
In order to change an existing domain we delete all existing devices and add
new ones from scratch. In the case of network devices we should also delete
the corresponding virtual networks (if any) before removing the actual
devices from the XML. This patch does so by extending prlsdkDoApplyConfig
with a new parameter that stands for the old XML, and calling prlsdkDelNet
every time the old XML is specified.

Signed-off-by: Maxim Nestratov <mnestratov@parallels.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2015-04-08 10:22:39 +02:00
Michael Chapman
fa2607d577 util: fix removal of callbacks in virCloseCallbacksRun
The close callbacks hash is keyed by UUID string, but
virCloseCallbacksRun was attempting to remove entries by raw UUID. This
patch ensures the callback entries are removed by UUID string as well.

This bug caused problems when guest migrations were abnormally aborted:

  # timeout --signal KILL 1 \
      virsh migrate example qemu+tls://remote/system \
        --verbose --compressed --live --auto-converge \
        --abort-on-error --unsafe --persistent \
        --undefinesource --copy-storage-all --xml example.xml
  Killed

  # virsh migrate example qemu+tls://remote/system \
      --verbose --compressed --live --auto-converge \
      --abort-on-error --unsafe --persistent \
      --undefinesource --copy-storage-all --xml example.xml
  error: Requested operation is not valid: domain 'example' is not being migrated

Signed-off-by: Michael Chapman <mike@very.puzzling.org>
2015-04-08 09:45:48 +02:00
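
The mismatch in a nutshell: entries stored under the 37-byte printable
UUID string can never be found with the 16 raw bytes. A formatting
sketch (libvirt uses virUUIDFormat for this; hashRemove is a generic
stand-in):

  #include <stdio.h>

  #define UUID_BUFLEN 16
  #define UUID_STRLEN (2 * UUID_BUFLEN + 4 + 1)  /* 32 hex + 4 dashes + NUL */

  static void
  uuidToString(const unsigned char uuid[UUID_BUFLEN],
               char out[UUID_STRLEN])
  {
      int n = 0;
      for (int i = 0; i < UUID_BUFLEN; i++) {
          n += sprintf(out + n, "%02x", uuid[i]);
          if (i == 3 || i == 5 || i == 7 || i == 9)
              out[n++] = '-';
      }
      out[n] = '\0';
  }

  /* The bug: hashRemove(hash, (const char *)rawUuid) never matches a
   * table keyed with uuidToString() output; format first, then remove. */
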
Michael Chapman
e5d729ba42 qemu: fix race between disk mirror fail and cancel
If a VM migration is aborted, a disk mirror may be failed by QEMU before
libvirt has a chance to cancel it. The disk->mirrorState remains at
_ABORT in this case, and this breaks subsequent mirrorings of that disk.

We should instead check the mirrorState directly and transition to _NONE
if it is already aborted. Do the check *after* aborting the block job in
QEMU to avoid a race.

Signed-off-by: Michael Chapman <mike@very.puzzling.org>
2015-04-08 09:45:47 +02:00
Michael Chapman
77ddd0bba2 qemu: fix error propagation in qemuMigrationBegin
If virCloseCallbacksSet fails, qemuMigrationBegin must return NULL to
indicate an error occurred.

Signed-off-by: Michael Chapman <mike@very.puzzling.org>
2015-04-08 09:45:47 +02:00
Michael Chapman
7578cc17f5 qemu: fix crash in qemuProcessAutoDestroy
The destination libvirt daemon in a migration may segfault if the client
disconnects immediately after the migration has begun:

  # virsh -c qemu+tls://remote/system list --all
   Id    Name                           State
  ----------------------------------------------------
  ...

  # timeout --signal KILL 1 \
      virsh migrate example qemu+tls://remote/system \
        --verbose --compressed --live --auto-converge \
        --abort-on-error --unsafe --persistent \
        --undefinesource --copy-storage-all --xml example.xml
  Killed

  # virsh -c qemu+tls://remote/system list --all
  error: failed to connect to the hypervisor
  error: unable to connect to server at 'remote:16514': Connection refused

The crash is in:

   1531 void
   1532 qemuDomainObjEndJob(virQEMUDriverPtr driver, virDomainObjPtr obj)
   1533 {
   1534     qemuDomainObjPrivatePtr priv = obj->privateData;
   1535     qemuDomainJob job = priv->job.active;
   1536
   1537     priv->jobs_queued--;

Backtrace:

  #0  at qemuDomainObjEndJob at qemu/qemu_domain.c:1537
  #1  in qemuDomainRemoveInactive at qemu/qemu_domain.c:2497
  #2  in qemuProcessAutoDestroy at qemu/qemu_process.c:5646
  #3  in virCloseCallbacksRun at util/virclosecallbacks.c:350
  #4  in qemuConnectClose at qemu/qemu_driver.c:1154
  ...

qemuDomainRemoveInactive calls virDomainObjListRemove, which in this
case is holding the last remaining reference to the domain.
qemuDomainRemoveInactive then calls qemuDomainObjEndJob, but the domain
object has been freed and poisoned by then.

This patch bumps the domain's refcount until qemuDomainRemoveInactive
has completed. We also ensure qemuProcessAutoDestroy does not return the
domain to virCloseCallbacksRun to be unlocked in this case. There is
similar logic in bhyveProcessAutoDestroy and lxcProcessAutoDestroy
(which call virDomainObjListRemove directly).

Signed-off-by: Michael Chapman <mike@very.puzzling.org>
2015-04-08 09:45:47 +02:00
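
The shape of the fix as a reference-counting sketch (libvirt's actual
primitives are virObjectRef/virObjectUnref; everything else here is
illustrative):

  struct obj { int refs; };

  static void objRef(struct obj *o)   { o->refs++; }
  static void objUnref(struct obj *o) { if (--o->refs == 0) { /* freed */ } }

  static void
  removeInactive(struct obj *dom, void (*listRemove)(struct obj *))
  {
      objRef(dom);       /* keep dom alive even if the list holds the
                            last reference */
      listRemove(dom);   /* may drop what was the last reference */
      /* ... safe to end the job on dom here ... */
      objUnref(dom);     /* now dom may really go away */
  }
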
Michal Privoznik
225aa80246 virQEMUDriverGetConfig: Fix memleak
==19015== 968 (416 direct, 552 indirect) bytes in 1 blocks are definitely lost in loss record 999 of 1,049
==19015==    at 0x4C2C070: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==19015==    by 0x52ADF14: virAllocVar (viralloc.c:560)
==19015==    by 0x5302FD1: virObjectNew (virobject.c:193)
==19015==    by 0x1DD9401E: virQEMUDriverConfigNew (qemu_conf.c:164)
==19015==    by 0x1DDDF65D: qemuStateInitialize (qemu_driver.c:666)
==19015==    by 0x53E0823: virStateInitialize (libvirt.c:777)
==19015==    by 0x11E067: daemonRunStateInit (libvirtd.c:905)
==19015==    by 0x53201AD: virThreadHelper (virthread.c:206)
==19015==    by 0xA1EE1F2: start_thread (in /lib64/libpthread-2.19.so)
==19015==    by 0xA4EFC8C: clone (in /lib64/libc-2.19.so)

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2015-04-07 18:52:27 +02:00
Michal Privoznik
8d971cecc6 virDomainVirtioSerialAddrSetFree: Fix memleak
==19015== 8 bytes in 1 blocks are definitely lost in loss record 34 of 1,049
==19015==    at 0x4C29F80: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==19015==    by 0x4C2C32F: realloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==19015==    by 0x52AD888: virReallocN (viralloc.c:245)
==19015==    by 0x52AD97E: virExpandN (viralloc.c:294)
==19015==    by 0x52ADC51: virInsertElementsN (viralloc.c:436)
==19015==    by 0x5335864: virDomainVirtioSerialAddrSetAddController (domain_addr.c:816)
==19015==    by 0x53358E0: virDomainVirtioSerialAddrSetAddControllers (domain_addr.c:839)
==19015==    by 0x1DD5513B: qemuDomainAssignVirtioSerialAddresses (qemu_command.c:1422)
==19015==    by 0x1DD55A6E: qemuDomainAssignAddresses (qemu_command.c:1711)
==19015==    by 0x1DDA5818: qemuProcessStart (qemu_process.c:4616)
==19015==    by 0x1DDF1807: qemuDomainObjStart (qemu_driver.c:7265)
==19015==    by 0x1DDF1A66: qemuDomainCreateWithFlags (qemu_driver.c:7320)

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2015-04-07 18:52:26 +02:00
Michal Privoznik
9dbe6f3151 qemuSetupCgroupForVcpu: Fix memleak
==19015== 1,064 (656 direct, 408 indirect) bytes in 2 blocks are definitely lost in loss record 1,002 of 1,049
==19015==    at 0x4C2C070: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==19015==    by 0x52AD74B: virAlloc (viralloc.c:144)
==19015==    by 0x52B47CA: virCgroupNew (vircgroup.c:1057)
==19015==    by 0x52B53E5: virCgroupNewVcpu (vircgroup.c:1451)
==19015==    by 0x1DD85A40: qemuSetupCgroupForVcpu (qemu_cgroup.c:1013)
==19015==    by 0x1DDA66EA: qemuProcessStart (qemu_process.c:4844)
==19015==    by 0x1DDF1807: qemuDomainObjStart (qemu_driver.c:7265)
==19015==    by 0x1DDF1A66: qemuDomainCreateWithFlags (qemu_driver.c:7320)
==19015==    by 0x1DDF1ACD: qemuDomainCreate (qemu_driver.c:7337)
==19015==    by 0x53F87EA: virDomainCreate (libvirt-domain.c:6820)
==19015==    by 0x12690A: remoteDispatchDomainCreate (remote_dispatch.h:3481)
==19015==    by 0x126827: remoteDispatchDomainCreateHelper (remote_dispatch.h:3457)

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
2015-04-07 18:52:26 +02:00
Erik Skultety
2a31c5f030 storage: Introduce storagePoolUpdateAllState function
The 'checkPool' callback was originally part of the storageDriverAutostart
function, but the pools need to be checked earlier, during the initialization
phase; otherwise we can't start a domain which mounts a volume after the
libvirtd daemon has restarted. This is because qemuProcessReconnect is called
earlier than storageDriverAutostart. Therefore the 'checkPool' logic has been
moved to storagePoolUpdateAllState, which is called inside storageDriverInitialize.

We also need a valid 'conn' reference to be able to execute 'refreshPool'
during the initialization phase. Though it isn't available until
storageDriverAutostart, all of our storage backends ignore the 'conn'
pointer except for RBD, and RBD doesn't support the 'checkPool' callback,
so it's safe to pass conn = NULL in this case.

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1177733
2015-04-07 16:22:40 +02:00
Erik Skultety
a9700771f5 conf: Introduce virStoragePoolLoadAllState && virStoragePoolLoadState
These functions operate exactly the same as their network equivalents,
virNetworkLoadAllState and virNetworkLoadState.
2015-04-07 16:22:40 +02:00
Erik Skultety
723143a19c storage: Add support for storage pool state XML
This patch introduces a new virStorageDriverState element, stateDir.
It also adds the necessary changes to storageStateInitialize so that
directory initialization becomes more generic.
2015-04-07 16:22:40 +02:00
Shivaprasad G Bhat
fb0ef7a60e hostdev: Report the domain name for used hostdevs during nodedev-detach
nodedev-detach can report the name of the domain using the device,
just the way nodedev-reattach does.

Signed-off-by: Shivaprasad G Bhat <sbhat@linux.vnet.ibm.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
2015-04-07 14:01:40 +02:00
Cole Robinson
e3aa4c91c8 virsh: Improve change-media success message
$ sudo virsh change-media f19 hdc /mnt/data/devel/media/Fedora-16-x86_64-Live-KDE.iso
succeeded to complete action update on media

Change the message to:

  Successfully {inserted,ejected,changed} media.

https://bugzilla.redhat.com/show_bug.cgi?id=967946
2015-04-06 16:32:31 -04:00
Laine Stump
f2ab1b9e24 interface: allow multiple IPv4 addresses in interface XML
An upcoming netcf release will support multiple IPv4 addresses, so
let's loosen up libvirt's interface.rng to allow it.
2015-04-06 13:27:15 -04:00