By switching block jobs to use domain conditions, we can drop some
pretty complicated code in NBD storage migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The privileged flag will not change during the daemon's lifetime, while
the configuration might. Make the 'privileged' flag a member of the
driver again and mark it immutable. Should that ever change, add an
accessor that groups reads of the state.
In qemuDomainUpdateCurrentMemorySize I misplaced the actual update of
the balloon size, putting it at a point where the value may not be
initialized yet. Move it a few lines up.
When qemu does not support the balloon event, the current memory size
needs to be queried. Since there are two places that implement the same
logic, split it out into a function and reuse it.
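A minimal sketch of the split-out helper, assuming libvirt's internal
monitor API of the era (the function name follows the commit; exact
signatures and error paths are illustrative):

    static int
    qemuDomainUpdateCurrentMemorySize(virQEMUDriverPtr driver,
                                      virDomainObjPtr vm)
    {
        qemuDomainObjPrivatePtr priv = vm->privateData;
        unsigned long long balloon;
        int rc;

        /* nothing to do if qemu reports balloon changes via events */
        if (virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_BALLOON_EVENT))
            return 0;

        qemuDomainObjEnterMonitor(driver, vm);
        rc = qemuMonitorGetBalloonInfo(priv->mon, &balloon);
        if (qemuDomainObjExitMonitor(driver, vm) < 0 || rc < 0)
            return -1;

        vm->def->mem.cur_balloon = balloon;
        return 0;
    }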
The guest firmware provides the same functionality as the pvpanic
device, and the relevant element should always be present in the
domain XML to reflect this fact, so add it after parsing the
definition if it wasn't there already.
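A hedged sketch of what the post-parse addition amounts to; the
architecture check and field names are illustrative, since the commit
text does not spell them out:

    /* in the post-parse callback: if the firmware already provides
     * pvpanic-like functionality, make the domain XML reflect it
     * (illustrated here with s390; the real condition may differ) */
    if (!def->panic && ARCH_IS_S390(def->os.arch)) {
        if (VIR_ALLOC(def->panic) < 0)
            return -1;
    }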
https://bugzilla.redhat.com/show_bug.cgi?id=976387
For a domain configured to use the host cdrom, we should taint the
domain because of the problems encountered when the host and guest both
try to control the tray.
Add 'thread_id' to the virDomainIOThreadIDDef as a means to store the
'thread_id' as returned from the live qemu monitor data.
Remove the iothreadpids list from _qemuDomainObjPrivate and replace with
the new iothreadids 'thread_id' element.
Rather than use the default numbering scheme of 1..number of iothreads
defined for the domain, use the iothreadids list to provide the
iothread_id.
Since the iothreadids list keeps track of the iothread_id values, use
these in the many places where a for loop would previously "know" that
the ID was the array index + 1.
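For example, a loop that used to derive the ID from the array index now
reads it from the list (member names per the commit):

    size_t i;

    for (i = 0; i < def->niothreadids; i++) {
        /* previously the ID was implicitly i + 1 */
        unsigned int iothread_id = def->iothreadids[i]->iothread_id;

        /* ... use iothread_id, e.g. to build the "iothreadN" alias ... */
    }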
The new tests ensure usage of the <iothreadid> values when a value is
provided for every iothread, as well as usage of fewer <iothreadid>
values than there are iothreads (with the default numbering scheme
filling in the rest).
When the guest agent channel gets hotplugged to a VM, libvirt would
still report that "QEMU guest agent is not configured" rather than
stating that the connection was not established yet.
Currently the code won't be able to connect to the agent after hotplug
but that will change in a later patch.
As the qemuFindAgentConfig() helper is quite helpful in this case, move
it to a more usable place and export it.
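For reference, the helper boils down to scanning the channel definitions
for the guest-agent virtio channel; a sketch of its shape:

    virDomainChrDefPtr
    qemuFindAgentConfig(virDomainDefPtr def)
    {
        size_t i;

        for (i = 0; i < def->nchannels; i++) {
            virDomainChrDefPtr channel = def->channels[i];

            if (channel->targetType != VIR_DOMAIN_CHR_CHANNEL_TARGET_TYPE_VIRTIO)
                continue;

            if (STREQ_NULLABLE(channel->target.name, "org.qemu.guest_agent.0"))
                return channel;
        }

        return NULL;
    }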
virDomainGetJobStats is able to report statistics of a completed
migration, however to get usable downtime and total time statistics both
hosts have to keep synchronized time. To provide at least some
estimation of the times even when NTP daemons are not running on both
hosts we can just ignore the time needed to transfer a migration cookie
to the destination host. The result will still be inaccurate but a bit
more predictable. The total/down time will just be at least what we
report.
https://bugzilla.redhat.com/show_bug.cgi?id=1213434
This is basically turning qemuDomObjEndAPI into a more general
function. Other drivers which get a reference to domain objects may
benefit from this function too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
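The generalized helper is tiny; roughly (a sketch, not the verbatim
patch) it consumes both the lock and the reference and clears the
caller's pointer:

    void
    virDomainObjEndAPI(virDomainObjPtr *vm)
    {
        if (!*vm)
            return;

        virObjectUnlock(*vm);
        virObjectUnref(*vm);
        *vm = NULL;
    }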
This needs to be specified in way too many places for a simple
validation check. The ostype/arch/virttype validation checks later in
DomainDefParseXML should catch most of the cases that this was covering.
It is there even with -nodefaults and -no-user-config, so account for
it so that we can start sparc domains.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
The destination libvirt daemon in a migration may segfault if the client
disconnects immediately after the migration has begun:
# virsh -c qemu+tls://remote/system list --all
Id Name State
----------------------------------------------------
...
# timeout --signal KILL 1 \
virsh migrate example qemu+tls://remote/system \
--verbose --compressed --live --auto-converge \
--abort-on-error --unsafe --persistent \
--undefinesource --copy-storage-all --xml example.xml
Killed
# virsh -c qemu+tls://remote/system list --all
error: failed to connect to the hypervisor
error: unable to connect to server at 'remote:16514': Connection refused
The crash is in:
1531 void
1532 qemuDomainObjEndJob(virQEMUDriverPtr driver, virDomainObjPtr obj)
1533 {
1534 qemuDomainObjPrivatePtr priv = obj->privateData;
1535 qemuDomainJob job = priv->job.active;
1536
1537 priv->jobs_queued--;
Backtrace:
#0 at qemuDomainObjEndJob at qemu/qemu_domain.c:1537
#1 in qemuDomainRemoveInactive at qemu/qemu_domain.c:2497
#2 in qemuProcessAutoDestroy at qemu/qemu_process.c:5646
#3 in virCloseCallbacksRun at util/virclosecallbacks.c:350
#4 in qemuConnectClose at qemu/qemu_driver.c:1154
...
qemuDomainRemoveInactive calls virDomainObjListRemove, which in this
case is holding the last remaining reference to the domain.
qemuDomainRemoveInactive then calls qemuDomainObjEndJob, but the domain
object has been freed and poisoned by then.
This patch bumps the domain's refcount until qemuDomainRemoveInactive
has completed. We also ensure qemuProcessAutoDestroy does not return the
domain to virCloseCallbacksRun to be unlocked in this case. There is
similar logic in bhyveProcessAutoDestroy and lxcProcessAutoDestroy
(which call virDomainObjListRemove directly).
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
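The fix amounts to holding an extra reference across the list removal;
a sketch of the relevant part of qemuDomainRemoveInactive (job handling
and cleanup elided):

    virObjectRef(vm);
    virDomainObjListRemove(driver->domains, vm);
    /* vm is still valid here thanks to the extra reference, so
     * ending the job cannot touch freed and poisoned memory */
    qemuDomainObjEndJob(driver, vm);
    virObjectUnref(vm);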
==19015== 968 (416 direct, 552 indirect) bytes in 1 blocks are definitely lost in loss record 999 of 1,049
==19015== at 0x4C2C070: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==19015== by 0x52ADF14: virAllocVar (viralloc.c:560)
==19015== by 0x5302FD1: virObjectNew (virobject.c:193)
==19015== by 0x1DD9401E: virQEMUDriverConfigNew (qemu_conf.c:164)
==19015== by 0x1DDDF65D: qemuStateInitialize (qemu_driver.c:666)
==19015== by 0x53E0823: virStateInitialize (libvirt.c:777)
==19015== by 0x11E067: daemonRunStateInit (libvirtd.c:905)
==19015== by 0x53201AD: virThreadHelper (virthread.c:206)
==19015== by 0xA1EE1F2: start_thread (in /lib64/libpthread-2.19.so)
==19015== by 0xA4EFC8C: clone (in /lib64/libc-2.19.so)
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Instead of always using controller 0 and incrementing port number,
respect the maximum port numbers of controllers and use all of them.
Ports for virtio consoles are quietly reserved, but not formatted
(neither in XML nor on QEMU command line).
Duplicate virtio-serial addresses are now rejected as well.
https://bugzilla.redhat.com/show_bug.cgi?id=890606
https://bugzilla.redhat.com/show_bug.cgi?id=1076708
Test changes:
* virtio-auto.args
  Filling out the port when just the controller is specified:
  switched from using 'maxport + 1' to the first free port on the
  controller.
* virtio-autoassign.args
  Filling out the address when no <address> is specified:
  started using all the controllers instead of just controller 0;
  the bus value is now discarded.
* xml -> xml output of virtio-auto
  The port assignment is no longer done as a part of XML parsing,
  so the unspecified values stay 0.
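A sketch of the assignment logic, assuming a per-controller port
bitmap; the helper and field names here are illustrative, not the
exact ones from the patch:

    /* try each virtio-serial controller in turn and take the
     * first port that is not yet reserved on it */
    for (i = 0; i < ncontrollers; i++) {
        ssize_t port = virBitmapNextClearBit(controllers[i]->ports, 0);

        if (port < 0)
            continue;  /* controller is full, try the next one */

        info->addr.vioserial.controller = controllers[i]->idx;
        info->addr.vioserial.port = port;
        ignore_value(virBitmapSetBit(controllers[i]->ports, port));
        return 0;
    }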
While adding tests for status XML parsing and formatting I've noticed
that the device alias list is leaked.
==763001== 81 (48 direct, 33 indirect) bytes in 1 blocks are definitely lost in loss record 414 of 514
==763001== at 0x4C2B8F0: calloc (vg_replace_malloc.c:623)
==763001== by 0x6ACF70F: virAllocN (viralloc.c:191)
==763001== by 0x447B64: qemuDomainObjPrivateXMLParse (qemu_domain.c:727)
==763001== by 0x6B848F9: virDomainObjParseXML (domain_conf.c:15491)
==763001== by 0x6B84CAC: virDomainObjParseNode (domain_conf.c:15608)
This is very helpful when we want to log and report why we could not
acquire a state change lock. Reporting what job keeps it locked helps
with understanding the issue. Moreover, after calling
virDomainGetControlInfo, it's possible to tell whether libvirt is just
stuck somewhere within the API (or it just forgot to clean up the job) or
whether libvirt is waiting for QEMU to reply.
The error message will look like the following:
# virsh resume cd
error: Failed to resume domain cd
error: Timed out during operation: cannot acquire state change lock
(held by remoteDispatchDomainSuspend)
https://bugzilla.redhat.com/show_bug.cgi?id=853839
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
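A sketch of how the holder can be reported, assuming the job state
keeps the name of the API that acquired it (the field name is
illustrative):

    if (priv->job.ownerAPI)
        virReportError(VIR_ERR_OPERATION_TIMEOUT,
                       _("cannot acquire state change lock (held by %s)"),
                       priv->job.ownerAPI);
    else
        virReportError(VIR_ERR_OPERATION_TIMEOUT, "%s",
                       _("cannot acquire state change lock"));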
Add support to start qemu instance with 'pc-dimm' device. Thanks to the
refactors we are able to reuse the existing function to determine the
parameters.
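The generated command line pairs a memory backend object with the dimm
device; an illustrative fragment (ids, sizes and backend type depend on
the configuration):

    -object memory-backend-ram,id=memdimm0,size=536870912 \
    -device pc-dimm,node=0,memdev=memdimm0,id=dimm0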
When using 'dimm' memory devices with qemu, some of the information,
like the slot number and base address, needs to be reloaded from qemu
after process start so that it reflects the actual state. The state then
allows using memory devices across migrations.
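A hedged sketch of the reload step, assuming the monitor helper added
for this purpose (treat the exact names as illustrative):

    /* query qemu for the actual state of the dimm devices ... */
    qemuDomainObjEnterMonitor(driver, vm);
    rc = qemuMonitorGetMemoryDeviceInfo(priv->mon, &meminfo);
    if (qemuDomainObjExitMonitor(driver, vm) < 0 || rc < 0)
        goto cleanup;

    /* ... then copy the slot number and base address back into the
     * domain definition so migration generates matching devices */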
This patch adds code that parses and formats configuration for memory
devices.
A simple configuration would be:
  <memory model='dimm'>
    <target>
      <size unit='KiB'>524287</size>
      <node>0</node>
    </target>
  </memory>
A complete configuration of a memory device:
  <memory model='dimm'>
    <source>
      <pagesize unit='KiB'>4096</pagesize>
      <nodemask>1-3</nodemask>
    </source>
    <target>
      <size unit='KiB'>524287</size>
      <node>1</node>
    </target>
  </memory>
This patch preemptively forbids use of the <memory> device in individual
drivers so the users are warned right away that the device is not
supported.
To enable memory hotplug, the maximum memory size and slot count need to
be specified. As qemu now supports units other than mebibytes when
specifying memory, use the new interface in this case.
Add an XML element that allows specifying the maximum supportable memory
and the count of memory slots to use with memory hotplug.
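The element as it would appear in the domain XML (values illustrative):

    <maxMemory slots='16' unit='KiB'>16777216</maxMemory>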
To avoid possible confusion and misuse of the new element, this patch
also explicitly forbids the use of the maxMemory setting in individual
drivers' post parse callbacks. This limitation will be lifted when the
support is implemented.
When no model is specified in the domain definition for a scsi
controller and the architecture is s390, virtio-scsi is set as the
default model.
Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Daniel Hansel <daniel.hansel@linux.vnet.ibm.com>
Reviewed-by: Stefan Zimmermann <stzi@linux.vnet.ibm.com>
Reviewed-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
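The change boils down to one more default in the device post-parse
callback; a sketch (enum names from libvirt, surrounding control flow
elided):

    if (cont->type == VIR_DOMAIN_CONTROLLER_TYPE_SCSI &&
        cont->model == -1 &&
        ARCH_IS_S390(def->os.arch))
        cont->model = VIR_DOMAIN_CONTROLLER_MODEL_SCSI_VIRTIO_SCSI;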
https://bugzilla.redhat.com/show_bug.cgi?id=1196934
When qemu exits during startup, libvirt includes the error from
/var/log/libvirt/qemu/vm.log in the error message:
$ virsh start test3
error: Failed to start domain test3
error: internal error: early end of file from monitor: possible problem:
2015-02-27T03:03:16.985494Z qemu-kvm: -numa memdev is not supported by
machine rhel6.5.0
The check for domain liveness added to qemuDomainObjExitMonitor
in commit dc2fd51f sometimes overwrites this error:
$ virsh start test3
error: Failed to start domain test3
error: operation failed: domain is no longer running
Fix the check to only report an error if there is none set.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
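A sketch of the corrected check in qemuDomainObjExitMonitor:

    if (!virDomainObjIsActive(obj)) {
        /* report the generic error only if nothing more specific
         * (e.g. the early-end-of-file message) was set already */
        if (!virGetLastError())
            virReportError(VIR_ERR_OPERATION_FAILED, "%s",
                           _("domain is no longer running"));
        return -1;
    }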
The memory sizes in qemu are aligned up to 1 MiB boundaries. There are
two places where this was done, first for the total size and then for
the individual NUMA cell sizes.
Add a function that will align the sizes in one place so that it's clear
where the sizes are aligned.
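A minimal sketch of the consolidated helper; sizes are stored in KiB,
and the NUMA cell layout shown is illustrative:

    void
    qemuDomainAlignMemorySizes(virDomainDefPtr def)
    {
        size_t i;

        /* round the total size up to the next 1 MiB boundary */
        def->mem.max_balloon = VIR_ROUND_UP(def->mem.max_balloon, 1024);

        /* ... and do the same for every NUMA cell */
        for (i = 0; i < def->cpu->ncells; i++)
            def->cpu->cells[i].mem = VIR_ROUND_UP(def->cpu->cells[i].mem, 1024);
    }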
Making use of the ARCH_IS_S390 macro introduced with
e808357528
Signed-off-by: Stefan Zimmermann <stzi@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Since s390 does not support usb, the default creation of a usb
controller for a domain should not occur.
Also adjust the s390 test cases by removing usb device instances,
since usb devices are no longer created by default for s390.
Signed-off-by: Stefan Zimmermann <stzi@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Depending on the context, either error out if the domain
has disappeared in the meantime, or just ignore the value
to allow marking the function as ATTRIBUTE_RETURN_CHECK.
The domain might disappear during the time in monitor when
the virDomainObjPtr is unlocked, so the caller needs to check
if it's still alive.
Since most of the callers are going to need it, put the
check inside qemuDomainObjExitMonitor and return -1 if
the domain died in the meantime.
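Callers then fold the liveness check into the return value; typical
usage looks like:

    qemuDomainObjEnterMonitor(driver, vm);
    ret = qemuMonitorSystemPowerdown(priv->mon);
    if (qemuDomainObjExitMonitor(driver, vm) < 0)
        ret = -1;  /* the domain died while we were in the monitor */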
The virDomainDefParse* and virDomainDefFormat* methods both
accept the VIR_DOMAIN_XML_* flags defined in the public API,
along with a set of other VIR_DOMAIN_XML_INTERNAL_* flags
defined in domain_conf.c.
This is seriously confusing & error prone for a number of
reasons:
- VIR_DOMAIN_XML_SECURE, VIR_DOMAIN_XML_MIGRATABLE and
VIR_DOMAIN_XML_UPDATE_CPU are only relevant for the
formatting operation
- Some of the VIR_DOMAIN_XML_INTERNAL_* flags only apply
to parse or to format, but not both.
This patch cleanly separates out the flags. There are two
distinct VIR_DOMAIN_DEF_PARSE_* and VIR_DOMAIN_DEF_FORMAT_*
flags that are used by the corresponding methods. The
VIR_DOMAIN_XML_* flags received via public API calls must
be converted to the VIR_DOMAIN_DEF_FORMAT_* flags where
needed.
The various calls to virDomainDefParse which hardcoded the
use of the VIR_DOMAIN_XML_INACTIVE flag change to use the
VIR_DOMAIN_DEF_PARSE_INACTIVE flag.
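The public-to-internal conversion is mechanical; a sketch of the
format-side helper:

    static unsigned int
    virDomainDefFormatConvertXMLFlags(unsigned int flags)
    {
        unsigned int formatFlags = 0;

        if (flags & VIR_DOMAIN_XML_SECURE)
            formatFlags |= VIR_DOMAIN_DEF_FORMAT_SECURE;
        if (flags & VIR_DOMAIN_XML_UPDATE_CPU)
            formatFlags |= VIR_DOMAIN_DEF_FORMAT_UPDATE_CPU;
        if (flags & VIR_DOMAIN_XML_MIGRATABLE)
            formatFlags |= VIR_DOMAIN_DEF_FORMAT_MIGRATABLE;

        return formatFlags;
    }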
The default value should be 16 MiB instead of 8 MiB. Only really old
versions of upstream QEMU used 8 MiB as the default vga framebuffer
size.
Without this change, if you update to a libvirt that introduces the
"vgamem" attribute for the QXL video device, the value will be set to
8 MiB, even though your guest previously had 16 MiB because we didn't
pass any value on the QEMU command line and QEMU used its own 16 MiB
default.
This will affect all users with a guest display resolution higher than
1920x1080.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
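For reference, the attribute ends up as the qxl device's vgamem_mb
property on the qemu command line, e.g.:

    -device qxl-vga,id=video0,vgamem_mb=16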
In commit 540c339a25 the whole domain
reference counting was refactored in the qemu driver. Domain jobs now
don't need to reference the domain object as they now expect the
reference from the calling function.
However, the patch forgot to remove the unref call in case we exit the
monitor when we were acquiring a nested job. This caused the daemon to
crash on a subsequent access to the domain object once we've done an
operation requiring a nested job for a monitor access.
An easy reproducer case:
1) Start a vm with qcow disks
2) virsh snapshot-create-as DOMNAME
3) virsh dumpxml DOMNAME
4) daemon crashes in a semi-random spot while accessing a now-removed VM
object.
Fortunately, the commit wasn't released yet, so there are no security
implications.
Reported-by: Shanzi Yu <shyu@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
There is one problem that causes various errors in the daemon. When a
domain is waiting for a job, it is unlocked while waiting on the
condition. However, if that domain is for example transient and being
removed in another API (e.g. cancelling incoming migration), it gets
unref'd. If the first call, that was waiting, fails to get the job, it
unref's the domain object, and because it was the last reference, it
causes clearing of the whole domain object. However, when finishing the
call, the domain must be unlocked, but there is no way for the API to
know whether it was cleaned or not (unless there is some ugly temporary
variable, but let's scratch that).
The root cause is that our APIs don't ref the objects they are using and
all use the implicit reference that the object has when it is in the
domain list. That reference can be removed when the API is waiting for
a job. And because each API doesn't do its own ref'ing, it results in
the ugly checking of the return value of virObjectUnref() that we have
everywhere.
This patch changes qemuDomObjFromDomain() to ref the domain (using
virDomainObjListFindByUUIDRef()) and adds qemuDomObjEndAPI() which
should be the only function in which the return value of
virObjectUnref() is checked. This makes all reference counting
deterministic and makes the code a bit clearer.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
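The resulting pattern in each API then looks like this sketch:

    virDomainObjPtr vm = NULL;
    int ret = -1;

    /* returns a locked *and referenced* object */
    if (!(vm = qemuDomObjFromDomain(domain)))
        return -1;

    /* ... do the actual work; waiting for a job can no longer
     * drop the last reference under our feet ... */
    ret = 0;

    qemuDomObjEndAPI(&vm);  /* unlock + unref, NULLs the pointer */
    return ret;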
When requested in a later patch, the QMP command results are now
examined recursively. As qemu_driver will eventually have to
read items out of the hash table as stored by this patch, the
computation of backing alias string is done in a shared location.
* src/qemu/qemu_domain.h (qemuDomainStorageAlias): New prototype.
* src/qemu/qemu_domain.c (qemuDomainStorageAlias): Implement it.
* src/qemu/qemu_monitor_json.c
(qemuMonitorJSONGetOneBlockStatsInfo)
(qemuMonitorJSONBlockStatsUpdateCapacityOne): Perform recursion.
(qemuMonitorJSONGetAllBlockStatsInfo)
(qemuMonitorJSONBlockStatsUpdateCapacity): Update callers.
Signed-off-by: Eric Blake <eblake@redhat.com>
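A sketch of the shared helper, assuming the usual 'drive-' alias prefix
and with error handling trimmed:

    char *
    qemuDomainStorageAlias(const char *device, int depth)
    {
        char *alias = NULL;
        const char *base = STRSKIP(device, "drive-");

        if (!base)
            base = device;

        /* depth 0 is the top image; deeper entries get an index
         * suffix matching qemu's backing-chain naming */
        if (depth == 0)
            ignore_value(VIR_STRDUP(alias, base));
        else
            ignore_value(virAsprintf(&alias, "%s[%d]", base, depth));

        return alias;
    }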
Move entering the job into the thread to simplify the program flow.
Also, as the code holds a separate reference to the domain object, some
conditions can be simplified.
After this patch qemuDomainObjTransferJob is no longer needed so this
patch removes it.