Introduce the function
xenParseXMDisk(virConfPtr conf, ........);
which takes over parsing of the xm disk config.
Signed-off-by: Kiarie Kahurani <davidkiarie4@gmail.com>
Introduce the function
xenParseXMCPUFeatures(virConfPtr conf,.........);
which takes over parsing of the CPU features config.
Signed-off-by: Kiarie Kahurani <davidkiarie4@gmail.com>
Introduce the function
xenParseXMEventActions(virConfPtr conf,........)
which parses the config for events that trigger certain actions.
Signed-off-by: Kiarie Kahurani <davidkiarie4@gmail.com>
Introduce the function
xenParseXMTimeOffset(virConfPtr conf,.......);
which takes over parsing of the time offset config.
Signed-off-by: Kiarie Kahurani <davidkiarie4@gmail.com>
During review of the iSCSI hostdev series, eblake noted that the
prototypes shouldn't have the extraneous space between the "*" and
the function name:
http://www.redhat.com/archives/libvir-list/2014-July/msg01227.html
Since the change was more invasive than one or two lines, I said I'd
send a patch covering this once the series was committed.
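For illustration, the preferred style (the function names here are made up,
not the ones touched by the patch):

  int fooProbe(void);          /* OK */
  char *fooGetPath(void);      /* OK: no space between the '*' and the name */
  char * fooGetPath(void);     /* the extraneous space being removed */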
Signed-off-by: John Ferlan <jferlan@redhat.com>
Introduce a new structure to handle an iSCSI host device based on the
existing virDomainHostdevSubsysSCSI by adding a "protocol='iscsi'" to
the <source/> element. The existing scsi_host subsystem RNG was modified
to read an optional "protocol='adapter'", although it won't be written
out nor is it documented as an option (by choice).
The new hostdev structure mimics the existing <disk/> element for an
iSCSI network device. The new XML is:
  <hostdev mode='subsystem' type='scsi' managed='yes'>
    <source protocol='iscsi' name='iqn.1992-01.com.example'>
      <host name='example.org' port='3260'/>
      <auth username='myname'>
        <secret type='iscsi' usage='mycluster_myname'/>
      </auth>
    </source>
    <address type='drive' controller='0' bus='0' target='2' unit='5'/>
  </hostdev>
The controller element will mimic the existing scsi_host code with
respect to how the 'lsi' and 'virtio-scsi' controller models are handled.
In preparation for hostdev support for iSCSI and a virStorageNetHostDefPtr,
split out the network disk storage parsing of the 'host' element into a
separate routine.
Commit febf84c2 tried to delay in-memory modification of the actual
domain disk structure until after the qemu event was received.
However, I missed that the block pivot code temporarily sets
disk->src = disk->mirror prior to the qemu command (in order to label
the backing chain of a reused external blockcopy disk), calls into
qemu while still in that state, and only undoes things at the cleanup
label. Since the qemu event handler then does:
virStorageSourceFree(disk->src);
disk->src = disk->mirror;
we have the sad race that a fast enough qemu event can cause a leak of
the original disk->src, as well as a use-after-free of the disk->mirror
contents, bad enough to crash libvirtd in some of my test runs, even
though the common case of the qemu event being much later won't trip
the race.
I'll go wear the brown paper bag of shame, for introducing a crasher
in between rc1 and rc2 of the freeze for 1.2.7 :( My only
consolation is that virDomainBlockJobAbort requires the domain:write
ACL, so it is not a CVE.
The valgrind report when the race occurs looks like:
==25612== Invalid read of size 4
==25612== at 0x50E7C90: virStorageSourceGetActualType (virstoragefile.c:1948)
==25612== by 0x209C0B18: qemuDomainDetermineDiskChain (qemu_domain.c:2473)
==25612== by 0x209D7F6A: qemuProcessHandleBlockJob (qemu_process.c:1087)
==25612== by 0x209F40C9: qemuMonitorEmitBlockJob (qemu_monitor.c:1357)
...
==25612== Address 0xe4b5610 is 0 bytes inside a block of size 200 free'd
==25612== at 0x4A07577: free (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==25612== by 0x50839E9: virFree (viralloc.c:582)
==25612== by 0x50E7E51: virStorageSourceFree (virstoragefile.c:2015)
==25612== by 0x209D7EFF: qemuProcessHandleBlockJob (qemu_process.c:1073)
==25612== by 0x209F40C9: qemuMonitorEmitBlockJob (qemu_monitor.c:1357)
* src/qemu/qemu_driver.c (qemuDomainBlockPivot): Don't corrupt
disk->src, and only label chain for blockcopy.
Signed-off-by: Eric Blake <eblake@redhat.com>
Valgrind caught a memory leak:
==2018== 9 bytes in 1 blocks are definitely lost in loss record 143 of 927
==2018== at 0x4A0645D: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==2018== by 0x8C42369: strdup (strdup.c:42)
==2018== by 0x50EACC9: virStrdup (virstring.c:676)
==2018== by 0x50E79E5: virStorageSourceCopy (virstoragefile.c:1845)
==2018== by 0x20A3FAA7: qemuDomainBlockCommit (qemu_driver.c:15620)
==2018== by 0x51DC6B2: virDomainBlockCommit (libvirt.c:20092)
I traced it to the fact that blockcopy and blockcommit end up
reparsing a backing chain on pivot, but the chain parsing code
doesn't gracefully handle the case where the backing file is
already known.
I'm not exactly sure when this was introduced, but suspect that the
refactoring in commit 9944b71 and friends that moved towards probing
in-place rather than into a temporary structure is part of the cause.
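The fix follows the usual free-before-overwrite pattern. A minimal,
self-contained sketch (not the actual libvirt code; the struct and field
names are illustrative):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  struct meta {
      char *backing;                       /* stand-in for the backing file field */
  };

  static int meta_set_backing(struct meta *m, const char *val)
  {
      char *copy = strdup(val);
      if (!copy)
          return -1;
      free(m->backing);                    /* without this, a reparse leaks the
                                              value stored by the first parse */
      m->backing = copy;
      return 0;
  }

  int main(void)
  {
      struct meta m = { NULL };
      meta_set_backing(&m, "base.qcow2");  /* first parse of the chain */
      meta_set_backing(&m, "base.qcow2");  /* reparse on pivot: old value freed */
      printf("%s\n", m.backing);
      free(m.backing);
      return 0;
  }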
* src/util/virstoragefile.c (virStorageFileGetMetadataInternal):
Don't leak any prior value.
Signed-off-by: Eric Blake <eblake@redhat.com>
Fix a comment in virDomainAuditNetDevice.
Fix a typo in the comment of qemuPhysIfaceConnect, which is
the caller of virDomainAuditNetDevice.
Signed-off-by: Wang Rui <moon.wangrui@huawei.com>
The RNG schema as well as the qemu driver requires absolute paths for memory
and disk snapshot image files, but the XML parser was not enforcing this.
Add checks to avoid problems in qemu, where the configuration libvirt
creates would otherwise be invalid.
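A rough sketch of the kind of check that is added (hypothetical helper, not
the exact parser code):

  #include <stdbool.h>

  /* snapshot memory and disk image paths must be absolute */
  static bool snapshotPathIsValid(const char *path)
  {
      return path != NULL && path[0] == '/';
  }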
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1126329
Since commit be0782e1 we are parsing /proc/meminfo to find out the
default huge page size. However, if the host we are running on does
not support any huge pages (e.g. CONFIG_HUGETLB_PAGE is turned off),
we will not successfully parse the meminfo file and hence the whole
qemu driver init process fails. Moreover, the default huge page size
is needed if and only if there's at least one hugetlbfs mount point.
So the fix consists of moving the virFileGetDefaultHugepageSize
function call after the first hugetlbfs mount point is found.
With this fix, we still fail to start when there are one or more
hugetlbfs mounts but a malformed meminfo file; that's expected (how can
one mount hugetlbfs without the kernel supporting huge pages?). The
workaround in that case is to umount all the hugetlbfs mounts.
Reported-by: Jim Fehlig <jfehlig@suse.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
In a system with Fibre Channel host adapters, a query to list all Fibre
Channel HBAs or vports currently returns an empty list:
$ virsh nodedev-list --cap fc_host
$
Libvirt correctly discovers properties for all HBAs. However, the reporting
fails because of an incorrect flag comparison while filtering these types.
This is fixed by removing references to 'VIR_CONNECT_LIST_NODE_DEVICES_CAP_*'
in the comparison and replacing them with 'VIR_NODE_DEV_CAP_*'.
The bug was introduced by the original commit '652a2ec6'.
Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com>
Commit 232a31b munged job info to report 'active commit' instead of
'commit' when generating events, but forgot to also munge the polling
variant of the command.
* src/qemu/qemu_driver.c (qemuDomainBlockJobImpl): Adjust type as
needed.
Signed-off-by: Eric Blake <eblake@redhat.com>
Otherwise this beautiful error would be overwritten when
the function is called with a really high rate number:
2014-07-28 12:51:47.920+0000: 2304: error : virCommandWait:2399 :
internal error: Child process (/sbin/tc class add dev vnet0 parent 1:
classid 1:1 htb rate 4294968kbps) unexpected exit status 1: Illegal "rate"
Usage: ... qdisc add ... htb [default N] [r2q N]
default minor id of class to which unclassified packets are sent {0}
r2q DRR quantums are computed as rate in Bps/r2q {10}
debug string of 16 numbers each 0-3 {0}
... class add ... htb rate R1 [burst B1] [mpu B] [overhead O]
[prio P] [slot S] [pslot PS]
[ceil R2] [cburst B2] [mtu MTU] [quantum Q]
rate rate allocated to this class (class can still borrow)
burst max bytes burst which can be accumulated during idle period {computed}
mpu minimum packet size used in rate computations
overhead per-packet size overhead used in rate computations
linklay adapting to a linklayer e.g. atm
ceil definite upper class rate (no borrows) {rate}
cburst burst but for ceil {computed}
mtu max packet size we create rate map for {1600}
prio priority of leaf; lowe
https://bugzilla.redhat.com/show_bug.cgi?id=1043735
https://bugzilla.redhat.com/show_bug.cgi?id=1072653
Upon successful upload of a volume, the target volume and storage pool
were not updated to reflect any changes as a result of the upload. Make
use of the existing stream close callback mechanism to force a backend
pool refresh to occur in a separate thread once the stream closes. The
separate thread should avoid potential deadlocks if the refresh needed
to wait on some event from the event loop which is used to perform
the stream callback.
There are multiple mount points after commit 725a211f, but one comment
wasn't changed to use plurals.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Cygwin has getifaddrs(), but not AF_LINK, leading to:
util/virstats.c: In function 'virNetInterfaceStats':
util/virstats.c:138:41: error: 'AF_LINK' undeclared (first use in this function)
if (ifa->ifa_addr->sa_family != AF_LINK)
...
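A minimal sketch of the guard (not the libvirt implementation; the helper
name is made up):

  #include <string.h>
  #include <sys/types.h>
  #include <sys/socket.h>
  #include <ifaddrs.h>

  static int iface_has_link_addr(const char *ifname)
  {
  #ifdef AF_LINK
      struct ifaddrs *ifap, *ifa;
      int found = 0;

      if (getifaddrs(&ifap) < 0)
          return -1;
      for (ifa = ifap; ifa; ifa = ifa->ifa_next) {
          if (ifa->ifa_addr &&
              ifa->ifa_addr->sa_family == AF_LINK &&
              strcmp(ifa->ifa_name, ifname) == 0)
              found = 1;
      }
      freeifaddrs(ifap);
      return found;
  #else
      (void)ifname;
      return -1;   /* no AF_LINK: the caller must use another mechanism */
  #endif
  }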
* src/util/virstats.c (virNetInterfaceStats): Only use getifaddrs
if AF_LINK is present.
Signed-off-by: Eric Blake <eblake@redhat.com>
libvirt previously only touched an interface's disable_ipv6 setting in
sysfs if it needed to be set to 1, assuming that 0 is the
default. Apparently that isn't always the case though (kernel 3.15.7-1
in Arch Linux reportedly defaults a new interface's disable_ipv6
setting to 1) so this patch explicitly sets it to 0 or 1 as
appropriate.
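A simplified sketch of the idea (not the libvirt code; error handling
trimmed):

  #include <stdio.h>

  static int set_disable_ipv6(const char *ifname, int disable)
  {
      char path[256];
      FILE *fp;

      snprintf(path, sizeof(path),
               "/proc/sys/net/ipv6/conf/%s/disable_ipv6", ifname);
      if (!(fp = fopen(path, "w")))
          return -1;
      fprintf(fp, "%d\n", disable ? 1 : 0);   /* write both 0 and 1 explicitly */
      return fclose(fp) == 0 ? 0 : -1;
  }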
With this in place, I can (finally!) now do:
virsh blockcommit $dom vda --shallow --verbose --pivot
and watch qemu shorten the backing chain by one, followed by
libvirt automatically updating the dumpxml output, effectively
undoing the work of virsh snapshot-create-as --no-metadata --disk-only.
Commit is SOOOO much faster than blockpull, when I'm still fairly
close in time to when the temporary qcow2 wrapper file was created
via a snapshot operation!
* src/qemu/qemu_driver.c (qemuDomainBlockCommit): Implement live
commit.
Signed-off-by: Eric Blake <eblake@redhat.com>
A future patch is going to wire up qemu active block commit jobs;
but as they have similar events and are canceled/pivoted in the
same way as block copy jobs, it is easiest to track all bookkeeping
for the commit job by reusing the <mirror> element. This patch
adds domain XML to track which job was responsible for creating a
mirroring situation, and adds a job='copy' attribute to all
existing uses of <mirror>. Along the way, it also massages the
qemu monitor backend to read the new field in order to generate
the correct type of libvirt job (even though it requires a
future patch to actually cause a qemu event that can be reported
as an active commit). It also prepares to update persistent XML
to match changes made to live XML when a copy completes.
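As a rough illustration (the path and the attributes other than job= are
made up for this example, not taken from the patch), a disk that is in the
middle of a block copy would now carry something along the lines of:

  <mirror type='file' job='copy' ready='yes'>
    <format type='qcow2'/>
    <source file='/var/lib/libvirt/images/copy.qcow2'/>
  </mirror>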
* docs/schemas/domaincommon.rng: Enhance schema.
* docs/formatdomain.html.in: Document it.
* src/conf/domain_conf.h (_virDomainDiskDef): Add a field.
* src/conf/domain_conf.c (virDomainBlockJobType): String conversion.
(virDomainDiskDefParseXML): Parse job type.
(virDomainDiskDefFormat): Output job type.
* src/qemu/qemu_process.c (qemuProcessHandleBlockJob): Distinguish
active from regular commit.
* src/qemu/qemu_driver.c (qemuDomainBlockCopy): Set job type.
(qemuDomainBlockPivot, qemuDomainBlockJobImpl): Clean up job type
on completion.
* tests/qemuxml2xmloutdata/qemuxml2xmlout-disk-mirror-old.xml:
Update tests.
* tests/qemuxml2argvdata/qemuxml2argv-disk-mirror.xml: Likewise.
* tests/qemuxml2argvdata/qemuxml2argv-disk-active-commit.xml: New
file.
* tests/qemuxml2xmltest.c (mymain): Drive new test.
Signed-off-by: Eric Blake <eblake@redhat.com>
If all features are set to default (including the capabilities policy),
but some capabilities are toggled, we need to output the <features>
element when formatting the config.
We were not directly saving the domain XML to file after starting
or finishing a blockcopy. Without the startup write, a libvirtd
restart in the middle of a copy job would forget that the job was
underway. Then at pivot, we were indirectly writing new XML in
reaction to events that occur as we stop and restart the guest CPUs.
But there was a race: since pivot is an async action, it is possible
that libvirtd is restarted before the pivot completes, so if XML
changes during the event, that change was not written. The original
blockcopy code cleared out the <mirror> element prior to restarting
the CPUs, but this is also a race, observed if a user does an async
pivot and a dumpxml before the event occurs. Furthermore, this race
will interfere with active commit in a future patch, because that
code will rely on the <mirror> element at the time of the qemu event
to determine whether to inform the user of a normal commit or an
active commit.
Fix things by saving state any time we modify live XML, while
delaying XML disk modifications until after the event completes. We
still need to teach libvirtd restarts to examine all existing
<mirror> elements to see if the job completed in the meantime (that
is, if libvirtd misses the event, the completed state still needs to
be reflected in the live XML), but that will be a later patch, in part
because we also need to start taking advantage of newer qemu's ability to
keep the job around after completion rather than the current usage
where the job disappears both on error and on success.
* src/qemu/qemu_driver.c (qemuDomainBlockCopy): Track XML change
on disk.
(qemuDomainBlockJobImpl, qemuDomainBlockPivot): Move job-end XML
rewrites...
* src/qemu/qemu_process.c (qemuProcessHandleBlockJob): ...here.
Signed-off-by: Eric Blake <eblake@redhat.com>
Doing a blockcopy operation across a libvirtd restart is not very
robust at the moment. In particular, we are clearing the <mirror>
element prior to telling qemu to finish the job. Also, thanks to the
ability to request async completion, the user can easily regain
control prior to qemu actually finishing the effort, and they should
be able to poll the domain XML to see if the job is still going.
A future patch will fix things to actually wait until qemu is done
before modifying the XML to reflect the job completion. But since
qemu issues identical BLOCK_JOB_COMPLETE events regardless of whether
the job was cancelled (kept the original disk) or completed (pivoted
to the new disk), we have to track which of the two operations was
used to end the job. Furthermore, we'd like to avoid attempts to
end a job where we are already waiting on an earlier request to qemu
to end the job. Likewise, if we miss the qemu event (perhaps because
it arrived during a libvirtd restart), we still need enough state
recorded to be able to determine how to modify the domain XML once
we reconnect to qemu and manually learn whether the job still exists.
Although this patch doesn't actually fix the problem, it is a
preliminary step that makes it possible to track whether a job
has already begun steps towards completion.
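A sketch of what the tri-state might look like (the enumerator names here
are illustrative; only the type name comes from the changelog below):

  typedef enum {
      VIR_DOMAIN_DISK_MIRROR_STATE_NONE = 0, /* no job, or job not yet ending */
      VIR_DOMAIN_DISK_MIRROR_STATE_READY,    /* job has reached the mirroring phase */
      VIR_DOMAIN_DISK_MIRROR_STATE_ABORT,    /* end requested by cancel (keep original) */
      VIR_DOMAIN_DISK_MIRROR_STATE_PIVOT,    /* end requested by pivot (switch to copy) */
  } virDomainDiskMirrorState;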
* src/conf/domain_conf.h (virDomainDiskMirrorState): New enum.
(_virDomainDiskDef): Convert bool mirroring to new enum.
* src/conf/domain_conf.c (virDomainDiskDefParseXML)
(virDomainDiskDefFormat): Handle new values.
* src/qemu/qemu_process.c (qemuProcessHandleBlockJob): Adjust
client.
* src/qemu/qemu_driver.c (qemuDomainBlockPivot)
(qemuDomainBlockJobImpl): Likewise.
* docs/schemas/domaincommon.rng (diskMirror): Expose new values.
* docs/formatdomain.html.in (elementsDisks): Document it.
* tests/qemuxml2argvdata/qemuxml2argv-disk-mirror.xml: Test it.
Signed-off-by: Eric Blake <eblake@redhat.com>
If PCI passthrough type is not supported, we should error out rather than
continue building the command line.
When starting a domain, the type has already been checked by
qemuPrepareHostdevPCICheckSupport() before building the qemu command line,
so the problem doesn't show up there.
But when converting a domain XML without explicitly specifying the
passthrough type to qemu args, we will get a malformed command line.
the xml:
  <hostdev mode='subsystem' type='pci' managed='yes'>
    <source>
      <address domain='0x0001' bus='0x03' slot='0x00' function='0x0'/>
    </source>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
  </hostdev>
the converted command line:
-device ,host=0001:03:00.0,id=hostdev0,bus=pci.0,addr=0x5
After this patch, virsh gives an error message:
virsh domxml-to-native qemu-argv /tmp/tmp.xml
error: internal error: invalid PCI passthrough type 'default'
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
Use better detection of hugetlbfs mount points. Yes, there can be
multiple mount points, each serving a different huge page size.
Since we already have the ability to override the mount point in the
qemu.conf file, this crazy backward compatibility code is brought in.
Now we allow multiple mount points, so the "hugetlbfs_mount" option
must take a list of strings (mount points). But previously it was
just a string, so we must accept both types now.
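For example, both of these forms are now accepted in qemu.conf (the mount
point paths are illustrative):

  hugetlbfs_mount = "/dev/hugepages"
  hugetlbfs_mount = ["/dev/hugepages2M", "/dev/hugepages1G"]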
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This should iterate over the mount tab and search for hugetlbfs mount
points, along with looking up the default huge page size.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Use the correct mode when pre-creating files (for snapshots). The refactor
switching to storage driver usage caused a regression: some systems
created the file with 000 permissions, preventing qemu from writing to it.
Pass the mode to the creating functions to avoid the problem.
Regression since 185e07a5f8.
Leak introduced in commit 16ebf10f (v1.2.6), detected by valgrind:
==9816== 216 (96 direct, 120 indirect) bytes in 6 blocks are definitely lost in loss record 665 of 821
==9816== at 0x4A081D4: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==9816== by 0x50836FB: virAlloc (viralloc.c:144)
==9816== by 0x1DBDBE27: udevProcessPCI (node_device_udev.c:546)
==9816== by 0x1DBDD79D: udevGetDeviceDetails (node_device_udev.c:1293)
* src/util/virpci.h (virPCIEDeviceInfoFree): New prototype.
* src/util/virpci.c (virPCIEDeviceInfoFree): New function.
* src/conf/node_device_conf.c (virNodeDevCapsDefFree): Clear
pci_express under pci case.
(virNodeDevCapPCIDevParseXML): Avoid leak.
* src/node_device/node_device_udev.c (udevProcessPCI): Likewise.
* src/libvirt_private.syms (virpci.h): Export it.
Signed-off-by: Eric Blake <eblake@redhat.com>
Finding virPCIE* code is more intuitive if located in virpci.h
instead of node_device_conf.h.
* src/conf/node_device_conf.h (virPCIELinkSpeed, virPCIELink)
(virPCIEDeviceInfo): Move...
* src/util/virpci.h: ...here.
* src/conf/node_device_conf.c (virPCIELinkSpeed): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
The compiler can alert us to places where we need to expand switch
statements when we add a new enum value, but only if we don't
have a default case.
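A minimal illustration of the pattern (made-up enum, not libvirt code):

  enum color { COLOR_RED, COLOR_GREEN, COLOR_BLUE };

  static const char *color_name(enum color c)
  {
      switch (c) {
      case COLOR_RED:   return "red";
      case COLOR_GREEN: return "green";
      case COLOR_BLUE:  return "blue";
      }
      return "unknown";   /* unreachable for valid values; with no default
                             case, -Wswitch flags any newly added enumerator */
  }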
* src/conf/node_device_conf.c (virNodeDeviceDefFormat)
(virNodeDevCapsDefParseXML, virNodeDevCapsDefFree): Drop default
case.
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit e5f36698e3 introduces a
false-positive build failure in the sound card model handling switch.
Initialize the model to NULL although the value should never be used.
Libvirt documents that the default entropy source for the 'random'
backend of an RNG device is /dev/random. Instead of storing and
propagating NULL across our code and checking it in multiple places,
fill in the default in the post-parse callback and use that elsewhere.
Since 24e5cafba6 (thankfully unreleased), when a VM with an empty disk
drive was started, the code would call stat() on a NULL path because a
check was missing from the callback, rendering such machines
unstartable.
Report success when the path is empty (denoting an empty drive).
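A simplified sketch of the adjusted check (hypothetical function, not the
actual callback):

  #include <sys/stat.h>

  /* an empty or missing path denotes an empty drive and is fine */
  static int check_disk_source(const char *path)
  {
      struct stat sb;

      if (path == NULL || *path == '\0')
          return 0;                /* nothing to stat() for an empty drive */
      return stat(path, &sb);
  }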
virTimeFieldsThenRaw will never return a negative result, so I clean up
the related meaningless checks of its return value.
Signed-off-by: James <james.wangyufei@huawei.com>
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
The "random" backend for virtio-rng can be started with no path
specified which equals to /dev/random. The cgroup code didn't consider
this and called few of the functions with NULL resulting into:
$ virsh start rng-vm
error: Failed to start domain rng-vm
error: Path '(null)' is not accessible: Bad address
Problem introduced by commit c6320d3463