If virDomainObjGetDefs, as used in qemuDomainPinIOThread, failed, the code
would jump to the 'cleanup' label after acquiring the job, thus the VM
would be locked forever.
Introduced in commit cac6d639.
The privileged flag will not change during runtime, while the configuration
might. Make the 'privileged' flag a member of the driver again and mark
it immutable. Should that ever change, add an accessor that will group
reads of the state.
When hotplugging a memory device, there was no check for a conflict
between the address space used by the memory device being added and
any existing device, which is disallowed by qemu.
This patch adds a check to ensure the new device address doesn't
conflict with any existing device.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1220527
This type of information defines attributes of a system baseboard,
with one exception: the board type is not yet implemented in qemu,
so it's not introduced here either.
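A minimal sketch of what the XML may look like (entry names follow the
SMBIOS baseboard fields; the values are made up for illustration):
  <sysinfo type='smbios'>
    <baseBoard>
      <entry name='manufacturer'>Fujitsu</entry>
      <entry name='product'>D3401-H1</entry>
      <entry name='version'>S26361</entry>
      <entry name='serial'>3956-29A1</entry>
    </baseBoard>
  </sysinfo>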
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Some machine types are only reported as canonical names for other
machine types, which makes it a bit harder to find what machine types are
supported by a specific QEMU binary. Ideally, one would just use the
/capabilities/guest/arch[@name='...']/machine/text() XPath to get a list
of all supported machine types, but it doesn't work right now.
For example, we report
<machine canonical='pc-i440fx-2.3' maxCpus='255'>pc</machine>
in guest capabilities, but the corresponding
<machine maxCpus='255'>pc-i440fx-2.3</machine>
is missing.
This is a result of QMP probing. With "-machine ?" parsing QEMU sends
us two lines:
pc Standard PC (i440FX + PIIX, 1996) (alias of pc-i440fx-2.3)
pc-i440fx-2.3 Standard PC (i440FX + PIIX, 1996) (default)
while query-machines QMP command reports both in the same entry:
{"name": "pc-i440fx-2.3", "is-default": true, "cpu-max": 255, "alias": "pc"}
Let's make sure we always report a separate <machine/> element for both
the canonical name and its alias, using the canonical name as the default
machine type (i.e., inserting it before its alias) in case is-default is
true.
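After this change the guest capabilities should contain both entries, e.g.
(sketch, with the canonical name listed first when it is the default):
  <machine maxCpus='255'>pc-i440fx-2.3</machine>
  <machine canonical='pc-i440fx-2.3' maxCpus='255'>pc</machine>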
https://bugzilla.redhat.com/show_bug.cgi?id=1229666
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The search for the memory balloon driver object is extended with a
second known name, "virtio-balloon-ccw", in support of virtio-ccw.
Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Daniel Hansel <daniel.hansel@linux.vnet.ibm.com>
Reviewed-by: Eric Farman <farman@linux.vnet.ibm.com>
Reviewed-by: Stefan Zimmermann <stzi@linux.vnet.ibm.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1021480
It seems the property has been deprecated in qemu, although it is
seemingly ignored. This patch enforces from a libvirt perspective that
a scsi-block 'lun' device should not provide the 'serial' property.
If a guest has multiple network devices with the same MAC address,
when we update the second device online, libvirtd always updates
the first one.
commit def31e4c forgot to fix the online updating scenario. We need to
use virDomainNetFindIdx() to find the correct network device.
Signed-off-by: Zhou Yimin <zhouyimin@huawei.com>
Signed-off-by: Zhang Bo <oscar.zhangbo@huawei.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1228007
When attaching a scsi volume lun via the attach-device --config or
--persistent options, there was no translation of the source pool
like there was for the live path, thus the attempt to modify the config
would fail since not enough was known about the disk.
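For illustration, such a config-only attach might look like this (pool
and volume names are made up):
  virsh attach-device dom lun.xml --config
with lun.xml containing something like:
  <disk type='volume' device='lun'>
    <source pool='iscsi-pool' volume='unit:0:0:1'/>
    <target dev='sdb' bus='scsi'/>
  </disk>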
With a few exceptions, we assume that the qemu binary for a given
architecture has the form qemu-system-$arch. Well, openrisc is yet
another exception: its binary is called qemu-system-or32.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Move all the system_* fields into a separate struct. Not only does this
simplify the code a bit, it also helps us identify whether system
info is present. We don't have to check all the individual variables for
being non-NULL, but we can just check the pointer to the struct.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Move all the bios_* fields into a separate struct. Not only does this
simplify the code a bit, it also helps us identify whether BIOS
info is present. We don't have to check all the four variables for
being non-NULL, but we can just check the pointer to the struct.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This patch adds support for the queues attribute of the driver element
for the vhost-user interface type. Example:
<interface type='vhostuser'>
<mac address='52:54:00:ee:96:6d'/>
<source type='unix' path='/tmp/vhost2.sock' mode='client'/>
<model type='virtio'/>
<driver queues='4'/>
</interface>
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1207692
Signed-off-by: Maxime Leroy <maxime.leroy@6wind.com>
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
The support for this was added in QEMU with commit
830d70db692e374b55555f4407f96a1ceefdcc97. Unfortunately we have to do
another ugly version-based capability check. The other option would be
not to check for the capability at all and leave that to qemu as it's
done with multiqueue tap devices.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
By default, getaddrinfo() will return addresses for both
IPv4 and IPv6 if both protocols are enabled, and so the
RPC code will listen/connect to both protocols too. There
may be cases where it is desirable to restrict this to
just one of the two protocols, so add an 'int family'
parameter to all the TCP related APIs.
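A minimal sketch of how such a family parameter typically feeds
getaddrinfo() (illustration only, not the actual libvirt helper):
  struct addrinfo hints, *res = NULL;
  memset(&hints, 0, sizeof(hints));
  hints.ai_family = family;          /* AF_UNSPEC, AF_INET or AF_INET6 */
  hints.ai_socktype = SOCK_STREAM;
  hints.ai_flags = AI_PASSIVE;       /* for the listening case */
  if (getaddrinfo(node, service, &hints, &res) == 0) {
      /* bind()/listen() or connect() on the returned addresses */
      freeaddrinfo(res);
  }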
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
qemu 2.3.0 added the -cpu host,aarch64=off option, which allows using
qemu-system-aarch64 KVM to run armv7l VMs.
Add a capabilities check for it, wire it up in qemu_command, and test
the command line generation.
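The generated command line would then contain something like (sketch;
the machine type is just an example):
  qemu-system-aarch64 -machine virt -enable-kvm -cpu host,aarch64=off ...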
When traversing through the QOM tree, we're looking for
a link to a device, e.g.:
link<virtio-balloon-pci>
Introduce a helper that will format the link name at the start,
instead of doing it every time while recursing through the tree.
In qemuDomainUpdateCurrentMemorySize I misplaced the actual update of
the balloon size to a place where it may not be initialized. Move it a
few lines above.
When getting block device I/O tuning data there is no check for whether
QEMU supports such options and the call fails on
qemuMonitorGetBlockIoThrottle() when getting the particular throttle
data. So try reporting a better error when blkdeviotune is not
supported.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1224053
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
In the pre-NUMA ages, pinning a vCPU to all pCPUs was equal to deleting
the pinning info. Now it does not entirely work that way. Pinning a vCPU
to all pCPUs might be a desired operation. Additionally, removal of the
pinning will result in using the default pinning information at the
next boot, which might be different from all pCPUs.
This patch removes the false assumption that we should remove the
pinning after pinning to all pCPUs and tweaks the documentation for
virsh.
A later patch will implement a new flag for the virDomainPinVcpuFlags
API that will allow removing the pinning in a sane way.
While we probably won't see machines with more than 65536 cpus for a
while, let's store the cpu count as an integer so that we can avoid quite
a lot of overflow checks in our code.
Since the returned structure uses "unsigned long" for memory sizes, add a
few overflow checks to notify the user in case we are not able to
represent the given values.
When qemu does not support the balloon event, the current memory size
needs to be queried. Since there are two places that implement the same
logic, split it out into a function and reuse it.
After libvirt issues the balloon resize command, the current balloon
size needs to be changed to the maximum memory size since the vCPUs were
not started and thus the balloon driver could not return the memory.
Since GetXMLDesc and other APIs return the balloon size without updating
it in case they are not able to obtain the job and the memory balloon
does not support the asynchronous event the sizing might be incorrect.
Refactor the function to return the bitmap instead of an integer, and
rework the inner workings so that they make more sense.
This patch also fixes possible segfault on old systems that was
introduced by commit:
commit f1a43a8e41
Author: Hu Tao <hutao@cn.fujitsu.com>
Date: Fri Sep 14 15:46:59 2012 +0800
use virBitmap to store cpu affinity info
Since the monitor code now supports ullongs when setting balloon size,
drop the legacy code with overflow checking.
Additionally the comment mentioning that the job is treated as a sync
job does not make sense any more since the monitor is entered
asynchronously.
Store the emulator pinning cpu mask as a pure virBitmap rather than the
virDomainPinDef since it stores only the bitmap and refactor
qemuDomainPinEmulator to do the same operations in a much saner way.
As a side effect virDomainEmulatorPinAdd and virDomainEmulatorPinDel can
be removed since they don't add any value.
In case when <vcpu ... cpuset=""> is not specified, the vcpupin array is
not guaranteed to be allocated to def->vcpus. This would cause a crash
for TCG since it does not report thread IDs for vCPUs.
Commit id '980b265d' neglected to check for a successful status when
deciding whether to release the device address for the RNG attach, thus
the address would be released even though the device was added.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Commit id '862473fa' neglected to return the status from the
qemuDomainRemoveRNGDevice call in qemuDomainRemoveDevice causing
the function to always fail when receiving an RNG device unplug
event. Additionally the domain status/state would not be updated
in the processDeviceDeletedEvent path.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
The guest firmware provides the same functionality as the pvpanic
device, and the relevant element should always be present in the
domain XML to reflect this fact, so add it after parsing the
definition if it wasn't there already.
The guest firmware provides the same functionality as the pvpanic
device, which is not available in QEMU on pSeries, so the domain
XML should be allowed to contain the <panic> element.
On the other hand, unlike the pvpanic device, the guest firmware
can't be configured, so report an error if an address has been
provided in the XML.
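For example, a pSeries domain may simply carry (sketch):
  <devices>
    <panic/>
  </devices>
while a <panic> element carrying an <address> sub-element is rejected,
since the firmware-provided functionality cannot be placed on a bus.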
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1182388
When attempting to hotplug a virtio-serial console to a domain
that had no virtio-serial controllers (not even those that
are added by libvirt when some devices need them) at daemon startup,
report a user-friendly error:
error: Failed to attach device from console.xml
error: internal error: no virtio-serial controllers are available
instead of crashing the daemon:
Process terminating with default action of signal 11 (SIGSEGV): dumping core
Access not within mapped region at address 0x8
at 0x531028F: virDomainVirtioSerialAddrNext (domain_addr.c:916)
by 0x531028F: virDomainVirtioSerialAddrAssign (domain_addr.c:1029)
by 0x1CBF68: qemuDomainAttachChrDevice (qemu_hotplug.c:1565)
by 0x1BCD5E: qemuDomainAttachDeviceLive (qemu_driver.c:7997)
by 0x1BCD5E: qemuDomainAttachDeviceFlags (qemu_driver.c:8743)
Introduced in v1.2.14-30-g5903378.
The QMP command, like the interrupt reinjection logic it's connected
to, is only implemented in QEMU when TARGET_I386 is defined, so
checking for its availability on any other architecture is pointless.
On the other hand, when we're on x86, we should still make sure that
rtc-reset-reinjection is available and refuse to set the time
otherwise.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1211938
Since commit bcd9a564b6 virDomainNumatuneGetMode returns the value
via a pointer rather than in the return value. The change triggered
problems on platforms where the compiler decides to use a data type of
a size different from int at the point where we typecast it.
Work around the issue by using an intermediate variable of the correct
type that gets converted back by the default typecasting rules.
Rather than basing the capabilities refresh decision solely on the libvirtd
ctime, add the libvirt build version into the equation.
Since that version wouldn't be there prior to this code being run, don't
fail on reading the capabilities if it's not found. In this case, the cache
will always be rebuilt when a new libvirt version is installed.
https://bugzilla.redhat.com/show_bug.cgi?id=1195882
Original commit id 'cbde3589' indicates that the cache file would be
discarded if either the QEMU binary or libvirtd 'ctime' changes; however,
the code only discarded it if the QEMU binary time didn't match or if the
new libvirtd ctime was later than the one that created the cache file.
Since many factors come into play with 'ctime' adjustments (including
perhaps turning back the hands of time), change the logic to also force
a refresh if the ctime of libvirtd differs from what's in the cache.
Recent changes to the -M/--machine processing code in qemuParseCommandLine
caused Coverity to determine there was a possible resource leak in how
the 'list' is managed. Rather than trying to add virStringFreeList calls
everywhere, just promote 'list' to the top of the variables and free it
within the error processing code. This also required a couple of other
tweaks in order to avoid double frees.
Not every chardev is plugged onto the virtio-serial bus. However, the
code introduced in 89e991a2aa assumes that. Incorrectly.
With previous patches we have three options where a chardev can
be plugged: virtio-serial, USB and PCI. This commit fixes the
detach part. However, since we are not auto allocating USB
addresses yet, I'm just marking the place where appropriate code
should go.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Not every chardev is plugged onto the virtio-serial bus. However, the
code introduced in 89e991a2aa assumes that. Incorrectly.
With previous patches we have three options where a chardev can
be plugged: virtio-serial, USB and PCI. This commit fixes the
attach part. However, since we are not auto allocating USB
addresses yet, I'm just marking the place where appropriate code
should go.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=998813
The implementation is pretty straightforward. Of course, not all qemus
out there support the device, so a new capability is introduced and
checked prior to each use of the device.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Most virDomainDiskIndexByName callers do not care about the index; what
they really want is a disk def pointer.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When starting a domain, if the domain specifies security drivers we do not
have loaded, we fail. However, we don't check for this during reconnect,
so any operation relying on security driver functionality would fail.
If someone e.g. starts a domain with the selinux driver loaded, then changes
the security driver to 'none' in the config, restarts the daemon and calls
dump/save/..., QEMU will return an error.
As we shouldn't kill the domain, we should at least log an error to let the
user know that the domain reconnect wasn't completely clean.
https://bugzilla.redhat.com/show_bug.cgi?id=1183893
So far, we are not reporting if numatune was even defined. The
value of zero is blindly returned (which maps onto
VIR_DOMAIN_NUMATUNE_MEM_STRICT). Unfortunately, we are making
decisions based on this value. Instead, we should not only return
the correct value, but report to the caller if the value is valid
at all.
For better viewing of this patch use '-w'.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=976387
For a domain configured using the host cdrom, we should taint the domain
due to problems encountered when the host and guest try to control the tray.
Since af2a1f0587,
qemuDomainGetNumaParameters() returns an invalid value for a running
guest. The problem is that it is getting the information from cgroups,
but the parent cgroup is being left alone since the mentioned commit.
Since the running guest's XML is in sync with cgroups, there is no need
to look into cgroups (unless someone changes the configuration behind
libvirt's back). Returning the info from the definition fixes a bug and
is also a cleanup.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1221047
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
For some reason a union (_virNodeDevCapData) that had only been
declared inside the toplevel struct virNodeDevCapsDef was being used
as an argument to functions all over the place. Since it was only a
union, the "type" attribute wasn't necessarily sent with it. While
this works, it just seems wrong.
This patch creates a toplevel typedef for virNodeDevCapData and
virNodeDevCapDataPtr, making it a struct that has the type attribute
as a member, along with an anonymous union of everything that used to
be in union _virNodeDevCapData. This way we only have to change the
following:
s/union _virNodeDevCapData */virNodeDevCapDataPtr /
and
s/caps->type/caps->data.type/
This will make me feel less guilty when adding functions that need a
pointer to one of these.
Introduces two new -machine option parameters to the QEMU command to
enable/disable the CPACF protected key management operations for a guest:
aes-key-wrap='on|off'
dea-key-wrap='on|off'
The QEMU code maps the corresponding domain configuration elements to the
QEMU -machine option parameters to create the QEMU command:
<cipher name='aes' state='on'> --> aes-key-wrap=on
<cipher name='aes' state='off'> --> aes-key-wrap=off
<cipher name='dea' state='on'> --> dea-key-wrap=on
<cipher name='dea' state='off'> --> dea-key-wrap=off
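In the domain XML these <cipher> elements are expected under a <keywrap>
parent, e.g. (sketch):
  <keywrap>
    <cipher name='aes' state='on'/>
    <cipher name='dea' state='off'/>
  </keywrap>
which would yield "-machine ...,aes-key-wrap=on,dea-key-wrap=off".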
Signed-off-by: Tony Krowiak <akrowiak@linux.vnet.ibm.com>
Signed-off-by: Daniel Hansel <daniel.hansel@linux.vnet.ibm.com>
Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
We have previously effectively ignored all <controller type='ide'>
elements in a domain definition.
On the i440fx-based machinetypes there is an IDE controller that is
included in the chipset and can't be removed (which is the ide
controller with index='0'), so it makes sense to ignore that one
controller. However, if an i440fx domain definition has a 2nd
controller, nothing catches this error (unless you also have a disk
attached to it, in which case qemu will complain that you're trying to
use the ide controller named "ide1", which doesn't exist), and if any
other type of domain has even a single controller defined, it will be
incorrectly ignored.
Ignoring a bogus controller definition isn't such a big problem, as
long as an error is logged when any disk is attached to that
non-existent controller. But in the case of q35-based machinetypes,
the hardcoded id ("alias" in libvirt terms) of its builtin SATA
controller is "ide", which happens to be the same id as the builtin
IDE controller on i440fx machinetypes. So libvirt creates a
commandline believing that it is connecting the disk to the builtin
(but actually nonexistent) IDE controller, qemu thinks that libvirt
wanted that disk connected to the builtin SATA controller, and
everybody is happy.
Until you try to connect a 2nd disk to the IDE controller. Then qemu
will complain that you're trying to set unit=1 on a controller that
requires unit=0 (SATA controllers are organized differently than IDE
controllers).
After this patch, if a domain has an IDE controller defined for a
machinetype that has no IDE controllers, libvirt will log an error
about the controller itself as it is building the qemu commandline
(rather than a (possible) error from qemu about disks attached to that
controller). This is done by adding IDE to the list of controller
types that are handled in the loop that creates controller command
strings in qemuBuildCommandline() (previously it would *always* skip
IDE controllers). Then qemuBuildControllerDevStr() is modified to log
an appropriate error in the case of IDE controllers.
In the future, if we add support for extra IDE controllers (piix3-ide
and/or piix4-ide) we can just add it into the IDE case in
qemuBuildControllerDevStr(). For now, nobody seems anxious to add
extra support for an aging and very slow controller, when there are so
many better options available.
Resolves:
https://bugzilla.redhat.com/show_bug.cgi?id=1176071 (Fedora)
This makes sure that the commandlines generated for devices and
controller devices are all using the alias that has been set in the
controller's object as the id of the controller, rather than
hardcoding a printf (or worse, encoding exceptions to the standard
${controller}${index} pattern into the logic).
Since this "fixes" the controller name used for the sata controller,
the commandline arg for the sata controller in the sata test case had
to be adjusted to be "sata0" instead of "ahci0". All other tests
remain unchanged, verifying that the patch causes no other functional
change.
Because the function that finds a controller alias based on a device
def requires a pointer to the full domainDef in order to get the list
of controllers, the arglist of a few functions had to have this added.
There are a few extra exceptions that weren't being accounted for when
creating the alias for a controller. This resulted in 1) incorrect
status XML, and 2) exceptions/printfs of what *should* have been
directly available in the controller alias when constructing device
commandline arguments:
1) The primary (and only) IDE controller on a 440FX machinetype is
hardcoded to be "ide" in qemu.
2) The primary SATA controller on a Q35 machinetype is also
hardcoded to be "ide" in qemu.
3) On machinetypes that don't support multiple PCI buses, the PCI bus
is hardcoded in qemu to have the name "pci".
4) The first usb master controller is "usb", all others are the normal
"usb%d". (note that usb controllers that are not a "master" will have
the same index, and thus alias, as the master).
We needed to pass in the full domainDef and qemuCaps in order to
properly make the decisions about these exceptions.
When cancelling drive mirror, always try to do that for all disks even
if it fails for some of them. Report the first error we saw.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Instead of redoing the same filtering over and over every time we need to
walk through all disks which are being migrated.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
As of eeb008dbfc the variable is not used anymore. Drop it.
Signed-off-by: Wang Yufei <james.wangyufei@huawei.com>
Signed-off-by: Zhang Bo <oscar.zhangbo@huawei.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
In the XML we have the vnc port number, but on the command line QEMU
takes a vnc screen number, which is port-5900. We should fail with an
error message that only ports in range [5900,65535] are valid.
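For example (sketch):
  <graphics type='vnc' port='5901' autoport='no'/>
maps to screen number 1 on the QEMU command line ("-vnc :1"), while a
port below 5900 cannot be expressed and must be rejected.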
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1164966
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1220809
When cold-plugging an RNG device but something fails in
qemuDomainAssignAddresses, we will double free the RNG device.
Once a device is plugged into the domain, we should set the
device pointer to NULL to fix this issue.
...
5 0x00007fb7d180ac8a in virFree at util/viralloc.c:582
6 0x00007fb7d1895cdd in virDomainRNGDefFree at conf/domain_conf.c:19786
7 0x00007fb7d1895d99 in virDomainDeviceDefFree at conf/domain_conf.c:2022
8 0x00007fb7b92b8baf in qemuDomainAttachDeviceFlags at qemu/qemu_driver.c:8785
9 0x00007fb7d190c5d7 in virDomainAttachDeviceFlags at libvirt-domain.c:8488
10 0x00007fb7d23af9d2 in remoteDispatchDomainAttachDeviceFlags at remote_dispatch.h:2842
...
Signed-off-by: Luyao Huang <lhuang@redhat.com>
The code to add device type to the commandline was identical for lsi
and other models of SCSI controllers, but was duplicated (with the
exception of a minor ordering difference of the if-else clauses) for
the two cases. This patch replaces those two with a single instance of
the code just before the if().
This patch makes qemuValidateDevicePCISlotsChipsets() more consistent in
appearance by replacing several clauses of an if with the equivalent
call to qemuDomainMachineIsI440FX. The if was checking exactly the
same items, just in a slightly different order.
Since libvirt doesn't call to update the new balloon size in qemu, add
code that will handle tweaking the current balloon size statistic
until qemu reports the new size using the event.
Use the new domain list collection helpers to avoid going through
virDomainPtrs.
This additionally implements filter capability when called through the
api that accepts domain list filters.
Extend it to a universal helper used for clearing lists of any objects.
Note that the argument type is specifically void * to allow implicit
typecasting.
Additionally add a helper that works on non-NULL terminated arrays once
we know the length.
https://bugzilla.redhat.com/show_bug.cgi?id=890648
So, imagine you've issued an API that involves guest agent. For
instance, you want to query guest's IP addresses. So the API acquires
QUERY_JOB, locks the guest agent and issues the agent command.
However, for some reason, guest agent replies to initial ping
correctly, but then crashes tragically while executing real command
(in this case guest-network-get-interfaces). Since initial ping went
well, libvirt thinks guest agent is accessible and awaits reply to the
real command. But it will never come. What will is a monitor event.
Our handler (processSerialChangedEvent) will try to acquire
MODIFY_JOB, which will fail obviously because the other thread that's
executing the API already holds a job. So the event handler exits
early, and the QUERY_JOB is never released nor ended.
The way to solve this is to put a flag somewhere in the monitor
internals. The flag is called @running and agent commands are issued
iff the flag is set. The flag itself is set when we connect to the
agent socket and unset whenever we see a DISCONNECT event from the
agent. Moreover, we must wake up all the threads waiting for the
agent. This is done by signalling the condition they're waiting on.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Running shutdown with mode agent on a shutoff domain gives cryptic
error message:
virsh # shutdown --mode agent gentoo
error: Failed to shutdown domain gentoo
error: Guest agent is not responding: QEMU guest agent is not connected
After this patch, the error is more clear:
virsh # shutdown --mode agent gentoo
error: Failed to shutdown domain gentoo
error: Requested operation is not valid: domain is not running
Reported-by: Martin Kletzander <mkletzan@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Allow ccw devices to be used with multiqueues. ccw provides a one to
one relation of fds to queues and does not support the vectors option.
Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Reviewed-by: Daniel Hansel <daniel.hansel@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Coverity points out that qemuMonitorGetAllBlockStatsInfo could return a
-1 and thus not fill in 'stats' (leaving it NULL). Then the call to
qemuMonitorBlockStatsUpdateCapacity will dereference it.
Coverity complains about the [n]values pairing in virQEMUCapsFreeStringList.
Rather than making a bunch of "if values" checks prior to calling, just
add the values check inside the free function to avoid the chance
that somehow nvalues is > 0 while values == NULL.
Coverity points out it was possible to have a zero return from
qemuBuildRNGBackendProps thus not filling in 'props' and then
causing a NULL dereference on the next call.
Coverity notes that ->ifname is used after the VIR_FREE done in the
code path after the call to virNetDevMacVLanDeleteWithVPortProfile
by a call to virNetDevOpenvswitchRemovePort.
Since the ->ifname will be VIR_FREE()'d eventually in virDomainNetDefFree
just remove the extraneous VIR_FREE here.
When originally added, the Openvswitch code wasn't present and checks
were made for non NULL prior to use.
Coverity complains that in the error paths both the < 0 condition and
the success path after the qemuDomainObjExitMonitor failure will end
up going to cleanup. So just use ignore_value in this error path to
resolve the complaint.
The only version that's supported in QEMU is version 2, currently.
Fortunately, it is enabled by aarch64 automatically, so there's
nothing for us that needs to be put onto command line.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When migrating a domain while changing its name and using
VIR_MIGRATE_PERSIST_DEST flag, libvirt would fail to properly change the
name in the persistent definition. The inconsistency results in weird
behavior when dumping domain XML, destroying the domain, restarting
libvirtd and likely in several other situations.
Since the new name is already stored in vm->def->name, we just need to
make sure the persistent definition uses this new name too.
https://bugzilla.redhat.com/show_bug.cgi?id=1076354
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Set the capability based on qmp query, or qemu version. The qmp query
includes vmport with 2.2, but no longer with 2.3. It lists only
non-machine specific capabilities, so check the qemu version too until a
machine-specific query is supported.
Now that we have macros for exclusive flags and flag requirements, we can
use them to clean up the code for setvcpus and error out for all wrong
flag combinations.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Found by Laine and discussed a bit on internal IRC.
Commit id c56fe7f1d6 added support for creating a command line to support
scsi-disk.channel.
Series was here:
http://www.redhat.com/archives/libvir-list/2012-February/msg01052.html
Which pointed to a design proposal here:
http://permalink.gmane.org/gmane.comp.emulators.libvirt/50428
Which states (in part):
Libvirt should check for the QEMU "scsi-disk.channel" property. If it
is unavailable, QEMU will only support channel=lun=0 and 0<=target<=7.
However, the check added was ensuring that bus != lun *and* bus != 0. So
if bus == lun and both were non-zero, we'd never make the second check.
Changing this to an *or* check fixes the check, but it is still less
readable than just checking each for 0.
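A sketch of the more readable form (plain variable names, not the exact
libvirt address fields):
  /* without scsi-disk.channel QEMU only supports channel=lun=0 */
  if (bus != 0 || lun != 0) {
      /* report that this QEMU binary requires bus and lun to be 0 */
      return -1;
  }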
Since the qemu capabilities are not initialized for offline VMs the
caller might get suboptimal error message:
$ virsh blockjob VM PATH --bandwidth 1
error: unsupported configuration: block jobs not supported with this QEMU binary
Move the checks after we make sure that the VM is alive.
In qemuMigrationDriveMirror we can start all disk mirrors in parallel.
We wait until they are all ready, or one of them aborts.
In qemuMigrationCancelDriveMirror, we wait until all mirrors are
properly stopped. This is necessary to ensure that destination VM is
fully in sync with the (paused) source VM.
If a drive mirror can not be cancelled, then the destination is not in a
consistent state. In this case it is not safe to continue with the
migration.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
The !modern code path needs to call qemuBlockJobEventProcess directly.
The modern code path will call it via qemuBlockJobSyncWait.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
Other threads may be blocked in qemuBlockJobSyncWait. Ensure that
they're woken up when the domain is stopped.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
qemuBlockJobSyncBegin and qemuBlockJobSyncEnd delimit a region of code
where block job events are processed "synchronously".
qemuBlockJobSyncWait and qemuBlockJobSyncWaitWithTimeout wait for an
event generated by a block job.
The Wait* functions may be called multiple times while the synchronous
block job is active. Any pending block job event will be processed only
when Wait* or End is called. disk->blockJobStatus is reset by these
functions, so if it is needed, a pointer to a
virConnectDomainEventBlockJobStatus variable should be passed as the
last argument. It is safe to pass NULL if you do not care about the
block job status.
All functions assume the VM object is locked. The Wait* functions will
unlock the object for as long as they are waiting. They will return -1
and report an error if the domain exits before an event is received.
Typical use is as follows:
virQEMUDriverPtr driver;
virDomainObjPtr vm; /* locked */
virDomainDiskDefPtr disk;
virConnectDomainEventBlockJobStatus status;
qemuBlockJobSyncBegin(disk);
... start block job ...
if (qemuBlockJobSyncWait(driver, vm, disk, &status) < 0) {
/* domain died while waiting for event */
ret = -1;
goto error;
}
... possibly start other block jobs
or wait for further events ...
qemuBlockJobSyncEnd(driver, vm, disk, NULL);
To perform other tasks periodically while waiting for an event:
virQEMUDriverPtr driver;
virDomainObjPtr vm; /* locked */
virDomainDiskDefPtr disk;
virConnectDomainEventBlockJobStatus status;
unsigned long long timeout = 500 * 1000ull; /* milliseconds */
qemuBlockJobSyncBegin(disk);
... start block job ...
do {
... do other task ...
if (qemuBlockJobSyncWaitWithTimeout(driver, vm, disk,
timeout, &status) < 0) {
/* domain died while waiting for event */
ret = -1;
goto error;
}
} while (status == -1);
qemuBlockJobSyncEnd(driver, vm, disk, NULL);
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
We will want to use synchronous block jobs from qemu_migration as well,
so split this function out into a new source file.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
The documentation states that for shallow block copy the image has to
have the same guest visible content as backing file of the current
image if the file is being reused. This condition can be achieved also
with a raw file (or a qcow without a backing file) so remove the
condition that would disallow it.
(This patch additionally fixes crash described in
https://bugzilla.redhat.com/show_bug.cgi?id=1215569 )
It would be used in qemumonitorjsontest, thus we make it non-static.
Signed-off-by: Zhang Bo <oscar.zhangbo@huawei.com>
Signed-off-by: Zhou Yimin <zhouyimin@huawei.com>
If we received zero iothreads from the monitor, but were perhaps
expecting to receive something, then the code was skipping the check
to ensure what's in the monitor matches our expectations. So invert
the checks to check that what we get back matches expectations and
then check there are zero iothreads returned.
Rather than have a separate routine to parse the alias of an iothread
returned from qemu in order to get the iothread_id value, parse the alias
when returning and just return the iothread_id in qemuMonitorIOThreadInfoPtr
This set of patches removes the function, changes the "char *name" to
"unsigned int" and handles all the fallout.
Coverity notes that the switch() used to check 'connected' values has
two DEADCODE paths (_DEFAULT & _LAST). Since 'connected' is a boolean
it can only be one or the other (CONNECTED or DISCONNECTED), so it just
seems pointless to use a switch to get "all" values. Convert to if-else
Add qemuDomainAddIOThread and qemuDomainDelIOThread in order to add or
remove an IOThread to/from the host either for live or config options.
The implementation for the 'live' option will use the iothreadpids list
in order to make decisions, while the 'config' option will use the
iothreadids list. Additionally, for deletion each may have to adjust
the iothreadpin list.
IOThreads are implemented by qmp objects, the code makes use of the existing
qemuMonitorAddObject or qemuMonitorDelObject APIs.
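For illustration, the corresponding virsh usage would be along the lines
of (command names as assumed here):
  virsh iothreadadd guest 4 --live
  virsh iothreaddel guest 4 --config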
Signed-off-by: John Ferlan <jferlan@redhat.com>
Remove the iothreadspin array from cputune and replace with a cpumask
to be stored in the iothreadids list.
Adjust the test output because our printing goes in order of the iothreadids
list now.
Add 'thread_id' to the virDomainIOThreadIDDef as a means to store the
'thread_id' as returned from the live qemu monitor data.
Remove the iothreadpids list from _qemuDomainObjPrivate and replace with
the new iothreadids 'thread_id' element.
Rather than use the default numbering scheme of 1..number of iothreads
defined for the domain, use the iothreadid's list for the iothread_id
Since iothreadids list keeps track of the iothread_id's, these are
now used in place of the many places where a for loop would "know"
that the ID was "+ 1" from the array element.
The new tests ensure usage of the <iothreadid> values for an exact number
of iothreads and the usage of a smaller number of <iothreadid> values than
iothreads that exist (and usage of the default numbering scheme).
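A sketch of such a configuration (exact element spelling per the final
schema):
  <iothreads>4</iothreads>
  <iothreadids>
    <iothread id='2'/>
    <iothread id='8'/>
  </iothreadids>
where the remaining two iothreads fall back to the default numbering.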
If a user hot-attaches the guest agent channel libvirt would ignore it
until the restart of libvirtd or shutdown/destroy and start of the VM
itself.
This patch adds code that opens or closes the guest agent connection
according to the state of the guest agent channel as signalled by the
connect/disconnect events.
To allow opening the channel from the event handler qemuConnectAgent
needed to be exported.
When the guest agent channel gets hotplugged to a VM, libvirt would
still report that "QEMU guest agent is not configured" rather than
stating that the connection was not established yet.
Currently the code won't be able to connect to the agent after hotplug
but that will change in a later patch.
As the qemuFindAgentConfig() helper is quite helpful in this case move
it to a more usable place and export it.
virDomainGetJobStats is able to report statistics of a completed
migration, however to get usable downtime and total time statistics both
hosts have to keep synchronized time. To provide at least some
estimation of the times even when NTP daemons are not running on both
hosts we can just ignore the time needed to transfer a migration cookie
to the destination host. The result will be also inaccurate but a bit
more predictable. The total/down time will just be at least what we
report.
https://bugzilla.redhat.com/show_bug.cgi?id=1213434
Every API that grabs a domain object to work over should
reference it to make sure it won't disappear in the meantime.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This is basically turning qemuDomObjEndAPI into a more general
function. Other drivers which get a reference to domain objects may
benefit from this function too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Just as b8e25c35d7 did, we
fall back to the ACPI method when the guest agent is unresponsive
in qemuDomainReboot().
Signed-off-by: YueWenyuan <yuewenyuan@huawei.com>
Signed-off-by: Zhang Bo <oscar.zhangbo@huawei.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Because packets going through the egress from a bridge (where our
bandwidth limiting takes place) have no information about which
interface they came from, the QoS rules that we create instead
use the source MAC address of the packets to make their decisions
about which QDisc the packet should be in.
One flaw in this is that when a guest changed the MAC address it
used, packets from the guest would no longer be put into the
correct QDisc, but would instead be put in an "unprivileged"
class, resulting in the bandwidth "floor" (minimum guaranteed)
being no longer honored.
Now that libvirt has infrastructure to capture and respond to
RX_FILTER_CHANGE events from qemu (sent whenever a guest
interface modifies its MAC address, among other things), we can
notice when a guest MAC address changes, and update the QoS rules
accordingly, so that bandwidth floor is honored even after a
guest MAC address change.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
qemuDomainSetMemoryFlags() would allow setting the initial memory greater
than the <maxMemory> field. While such a configuration would not work, as
memory hotplug requires NUMA to be enabled and the
qemuDomainSetMemoryFlags() API does not work on NUMA guests, this just
fixes a corner case.
The fix is still worthwhile though, as the bug allows inducing an invalid
configuration and making the VM vanish on libvirt restart.
Additionally this tweaks the error message to be more accurate.
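For example, with a configuration like (sketch):
  <maxMemory slots='16' unit='KiB'>16777216</maxMemory>
  <memory unit='KiB'>4194304</memory>
a request to set the memory above 16777216 KiB is now rejected.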
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
A further fix for:
https://bugzilla.redhat.com/show_bug.cgi?id=1113474
Since there is no possibility that any type of macvtap will work if
the parent physdev it's attached to is offline, we should bring the
physdev online at the same time as the macvtap. When taking the
macvtap offline, it's also necessary to take the physdev offline for
macvtap passthrough mode (because the physdev has the same MAC address
as the macvtap device, so could potentially cause problems with
misdirected packets during migration, as outlined in commits 829770
and 879c13). We can't set the physdev offline for other macvtap modes
1) because there may be other macvtap devices attached to the same
physdev (and/or the host itself may be using the device) in the other
modes whereas passthrough mode is exclusive to one macvtap at a time,
and 2) there's no practical reason to do so anyway.
- Remove all qemu emulators
- Restart libvirtd
- Install qemu emulators
- Call 'virsh version' -> errors
The only thing that will force the qemu driver to refresh its cached
capabilities info is an explicit API call to GetCapabilities.
However in the case when the initial caps lookup at driver connect didn't
find a single qemu emulator to poll, the driver is effectively useless
and really can't do anything until it's populated some qemu capabilities
info.
With the above steps, the user would have to either know about the
magic refresh capabilities call, or restart libvirtd to pick up the
changes.
Instead, this patch changes things so that every time a part of the
driver requests access to capabilities info, we check to see if
we've previously seen any emulators. If not, force a refresh.
In the case of 'still no emulators found', this is still very quick, so
I can't think of a downside.
https://bugzilla.redhat.com/show_bug.cgi?id=1000116
This needs to be specified in way too many places for a simple validation
check. The ostype/arch/virttype validation checks later in
DomainDefParseXML should catch most of the cases that this was covering.
This revealed that GuestDefaultEmulator was a bit buggy, capable
of returning an emulator that didn't match the passed domain type. Fix
up the test suite input to continue to pass.
https://bugzilla.redhat.com/show_bug.cgi?id=1209948
So we have this bug. The virConnectGetDomainCapabilities() API
performs a couple of checks before it produces any result. One of
the checks is if the architecture requested by user can be run by
the binary (again user provided). However, the check is pretty
dumb. It merely compares if the default binary architecture
matches the one provided by user. However, a qemu binary can run
multiple architectures. For instance: qemu-system-ppc64 can run:
ppc, ppcle, ppc64, ppc64le and ppcemb. The default is ppc64, so
if the user requested something else, like ppc64le, the check would
have failed without an obvious reason.
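For example, the following should now succeed even though the binary's
default architecture is ppc64 (sketch):
  virsh domcapabilities --emulatorbin /usr/bin/qemu-system-ppc64 \
                        --arch ppc64le --virttype kvm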
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When a qemu domain is to be rebooted, from outside, at libvirt
level it looks like regular shutdown. To really restart the
domain, libvirt needs to issue reset command on the monitor once
SHUTDOWN event appeared. So, in order to differentiate bare
shutdown and reboot libvirt uses a variable within domain private
data. It's called fakeReboot. When the reboot API is called, the
variable is set, but when the shutdown API is called it must be
cleared out. But it was not for every possible case. So if the user
called virDomainReboot(), and there was no ACPI daemon running
inside the guest (so the guest didn't initiate the shutdown sequence),
and then virDomainShutdown(mode=agent) was called, a bad thing
happened: we remembered the fakeReboot and instead of shutting
the domain down, we just rebooted it.
Signed-off-by: Zhang Bo <oscar.zhangbo@huawei.com>
Signed-off-by: Wang Yufei <james.wangyufei@huawei.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Among all the monitor APIs some were checking if mon is NULL and some
were not. Since it's possible for mon to be NULL when a second call is
attempted once inside the monitor, every single API has to check for
the monitor.
This patch adds a macro that helps checking the state of the monitor and
either refactors existing checking code to use the macro or adds it in
case it was missing.
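A sketch of the kind of guard described (the macro name and message are
assumed here, not necessarily what the patch adds):
  #define QEMU_CHECK_MONITOR(mon) \
      do { \
          if (!(mon)) { \
              virReportError(VIR_ERR_INVALID_ARG, "%s", \
                             _("monitor must not be NULL")); \
              return -1; \
          } \
      } while (0)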
Rather than erroring out make the best attempt to retrieve other data if
disks are inaccessible or missing. The failure will still be logged
though.
Since the bulk stats API is called on multiple domains an error like
this makes the API unusable. This regression was introduced by commit
596a137134
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1209394
Commit f6563bc3 introduced HMP impl of the function (so that a different
uglier function could be removed). Before the HMP code is called there's
a leftover check that the monitor is JSON which inhibits the code from
working.
Changing the prototype to not have "int *index" since we'll soon be
disallowing index as a name. Curiously the original commit (a4504ac)
for the function used 'int idx' in the function - so they didn't match.
Now they do.
It is there even with -nodefaults and -no-user-config, so account for
that so we can start sparc domains.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
1. 'last_good_net' indicates the index of the last successfully configured
net, so def->nets[last_good_net] should also be cleaned up if an error occurs.
2. If an error occurs in 'virNetDevMacVLanVPortProfileRegisterCallback'
(second 'goto err_exit' in the loop), we should also do the
'virNetDevVPortProfileDisassociate' cleanup for the
'virNetDevVPortProfileAssociate' (first code block in the loop). So we should
consider the net successfully configured after the first code block in the
loop finishes.
Signed-off-by: Huanle Han <hanxueluo@gmail.com>
After setting memory parameters for a running domain, the change needs to
be saved to the live XML, otherwise it will disappear after libvirtd is
restarted.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1211548
Signed-off-by: Shanzhi Yu <shyu@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Apparently for Xen-devel 'index' is a global and causes a build failure,
so just use the shortened 'idx' instead to avoid the conflict.
Signed-off-by: John Ferlan <jferlan@redhat.com>
QEMU does not abandon the mirror. The job carries on in the synchronised
phase and it might be either pivoted again or cancelled. The commit
hints that the described behavior was happening in a downstream version.
If the command returns false there are two possible options:
1) qemu did not reach the point where it would ask the block job to
pivot
2) pivoting failed in the actual qemu coroutine
If either of those happens, we return failure and reset the condition
that waits for the block job to complete. This makes the API fail, but
in the case where qemu would actually abandon the mirror, the fact is
notified via the event and handled asynchronously.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1202704
qemuDomainBlockJobImpl has become an unmaintainable mess over the years of
adding new stuff to it. This patch starts splitting individual functions
out of it until it can be killed entirely.
In bulk this will add lines of code rather than delete them, but that will
be traded for maintainability.
My intention is to split qemuMonitorJSONBlockJob() into simpler separate
functions for every block job type. Since the error handling code is the
same for all block jobs, this patch extracts the code into a separate
function that will later be reused in more places.
With the new helper qemuMonitorJSONErrorIsClass we can save a few
function calls as we can extract the error object once.
Split out the function that checks the actual error class string into a
separate helper as it will be useful later and refactor
qemuMonitorJSONHasError to return bool type and remove few useless
checks.
Basically virJSONValueObjectHasKey is useless here since the next call
to virJSONValueObjectGet checks the return value again (which can't
fail at that point). By removing the first check we save a function
call.
Previously we checked that the vcpu we are trying to set is in range of
the number of threads presented by qemu. The problem is that if the VM
is offline the count is 0. Since the condition subtracted 1 from the
count, the number would overflow and the check would never trigger.
Change the condition to more sensible ones with specific error
messages.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1208434
This patch adds checks for empty bitmaps right after the calls of
virBitmapParse. These only include spots where set API's are called and
where domain's XML is parsed.
Also, it partially reverts commit 983f5a, which added a check for the
invalid nodeset "0,^0" into the virBitmapParse function. This change broke
the logic, as an empty bitmap should not cause an error.
https://bugzilla.redhat.com/show_bug.cgi?id=1210545
On arm, we probe for virtio-*-pci devices, but use their
virtio-*-device variants.
Set the capabilities based on the -device variants as well,
to make them work with qemus with the PCI devices compiled out.
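For example, on such qemus the devices would be used as (sketch):
  -device virtio-blk-device,drive=drive-virtio-disk0
  -device virtio-net-device,netdev=hostnet0
rather than the virtio-blk-pci / virtio-net-pci counterparts.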
When pre-creating storage for domains, we need to find corresponding
disk in the XML on the destination (domain XML may differ there, e.g.
disk is accessible under different path). For better debugging, I'm
printing all info I received on a disk. But there was a typo when
printing the disk capacity: "%lluu" instead of "%llu".
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The problem with the previous implementation is that even when
qemuMigrationUpdateJobStatus() detects that a migration job has
completed, it will still sleep for 50 ms (which is unnecessary
and only adds to the VM pause time).
Signed-off-by: Xing Lin <xinglin@cs.utah.edu>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Future IOThread setting patches would copy the code anyway, so create
and generalize the adding of pindef for the vcpu and the pinning of the
thread into their own APIs.
We support VNC for containers to have the same
interface as for VMs. At the moment it just renders
the Linux text console.
Of course we don't pass any physical devices and
don't emulate virtual devices. Our VNC server
renders text from terminal master and sends
input events from VNC client to terminal.
So add special video type VIR_DOMAIN_VIDEO_TYPE_PARALLELS
for these pseudo-devices.
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
Future IOThread setting patches would copy the code anyway, so create
and generalize a delete cgroup and pindef for the vcpu into its own API.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Future IOThread setting patches would copy the code anyway, so create
and generalize the add the vcpu to a cgroup into its own API.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Support for drive-reopen was never present in the upstream code so we
don't need to pause the VM when doing the block pivot. Kill all the
code related to this semi-upstream artifact.
131,088 bytes in 16 blocks are definitely lost in loss record 2,174 of 2,176
at 0x4C29BFD: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
by 0x4C2BACB: realloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
by 0x52A026F: virReallocN (viralloc.c:245)
by 0x52BFCB5: saferead_lim (virfile.c:1268)
by 0x52C00EF: virFileReadLimFD (virfile.c:1328)
by 0x52C019A: virFileReadAll (virfile.c:1351)
by 0x52A5D4F: virCgroupGetValueStr (vircgroup.c:763)
by 0x1DDA0DA3: qemuRestoreCgroupState (qemu_cgroup.c:805)
by 0x1DDA0DA3: qemuConnectCgroup (qemu_cgroup.c:857)
by 0x1DDB7BA1: qemuProcessReconnect (qemu_process.c:3694)
by 0x52FD171: virThreadHelper (virthread.c:206)
by 0x82B8DF4: start_thread (pthread_create.c:308)
by 0x85C31AC: clone (clone.S:113)
Signed-off-by: Luyao Huang <lhuang@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1198645
Once upon a time, there was a little domain. And the domain was pinned
onto a NUMA node and hasn't fully allocated its memory:
<memory unit='KiB'>2355200</memory>
<currentMemory unit='KiB'>1048576</currentMemory>
<numatune>
<memory mode='strict' nodeset='0'/>
</numatune>
Oh little me, said the domain, what will I do with so little memory.
If I only had a few megabytes more. But the old admin noticed the
whimpering, barely audible to untrained human ear. And good admin he
was, he gave the domain yet more memory. But the old NUMA topology
witch forbade to allocate more memory on the node zero. So he
decided to allocate it on a different node:
virsh # numatune little_domain --nodeset 0-1
virsh # setmem little_domain 2355200
The little domain was happy. For a while. Until bad, sharp teeth
shaped creature came. Every process in the system was afraid of him.
The OOM Killer they called him. Oh no, he's after the little domain.
There's no escape.
Do you kids know why? Because when the little domain was born, her
father, Libvirt, called numa_set_membind(). So even if the admin
allowed her to allocate memory from other nodes in the cgroups, the
membind() forbid it.
So what's the lesson? Libvirt should rely on cgroups whenever
possible, and use numa_set_membind() only as a last-ditch effort.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Currently we check qemuCaps before starting the block job. But qemuCaps
isn't available on a stopped domain, which means we get a misleading
error message in this case:
# virsh domstate example
shut off
# virsh blockjob example vda
error: unsupported configuration: block jobs not supported with this QEMU binary
Move the qemuCaps check into the block job so that we are guaranteed the
domain is running.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
qemuMigrationCookieAddNBD is usually called from within an async
MIGRATION_OUT or MIGRATION_IN job, so it needs to start a nested job.
(The one exception is during the Begin phase when change protection
isn't enabled, but qemuDomainObjEnterMonitorAsync will behave the same
as qemuDomainObjEnterMonitor in this case.)
This bug was encountered with a libvirt client that repeatedly queries
the disk mirroring block job info during a migration. If one of these
queries occurs just as the Perform migration cookie is baked, libvirt
crashes.
Relevant logs are as follows:
6701: warning : qemuDomainObjEnterMonitorInternal:1544 : This thread seems to be the async job owner; entering monitor without asking for a nested job is dangerous
[1] 6701: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block","id":"libvirt-629"}
[2] 6699: info : qemuMonitorIOWrite:503 : QEMU_MONITOR_IO_WRITE: mon=0x7fefdc004700 buf={"execute":"query-block","id":"libvirt-629"}
[3] 6704: info : qemuMonitorSend:972 : QEMU_MONITOR_SEND_MSG: mon=0x7fefdc004700 msg={"execute":"query-block-jobs","id":"libvirt-630"}
[4] 6699: info : qemuMonitorJSONIOProcessLine:203 : QEMU_MONITOR_RECV_REPLY: mon=0x7fefdc004700 reply={"return": [...], "id": "libvirt-629"}
6699: error : qemuMonitorJSONIOProcessLine:211 : internal error: Unexpected JSON reply '{"return": [...], "id": "libvirt-629"}'
At [1] qemuMonitorBlockStatsUpdateCapacity sends its request, then waits
on mon->notify. At [2] the request is written out to the monitor socket.
At [3] qemuMonitorBlockJobInfo sends its request, and also waits on
mon->notify. The reply from the first request is received at [4].
However, qemuMonitorJSONIOProcessLine is not expecting this reply since
the second request hadn't completed sending. The reply is dropped and an
error is returned.
qemuMonitorIO signals mon->notify twice during its error handling,
waking up both of the threads waiting on it. One of them clears mon->msg
as it exits qemuMonitorSend; the other crashes:
qemuMonitorSend (mon=0x7fefdc004700, msg=<value optimized out>) at qemu/qemu_monitor.c:975
975 while (!mon->msg->finished) {
(gdb) print mon->msg
$1 = (qemuMonitorMessagePtr) 0x0
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
If a VM migration is aborted, a disk mirror may be failed by QEMU before
libvirt has a chance to cancel it. The disk->mirrorState remains at
_ABORT in this case, and this breaks subsequent mirrorings of that disk.
We should instead check the mirrorState directly and transition to _NONE
if it is already aborted. Do the check *after* aborting the block job in
QEMU to avoid a race.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
If virCloseCallbacksSet fails, qemuMigrationBegin must return NULL to
indicate an error occurred.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
The destination libvirt daemon in a migration may segfault if the client
disconnects immediately after the migration has begun:
# virsh -c qemu+tls://remote/system list --all
Id Name State
----------------------------------------------------
...
# timeout --signal KILL 1 \
virsh migrate example qemu+tls://remote/system \
--verbose --compressed --live --auto-converge \
--abort-on-error --unsafe --persistent \
--undefinesource --copy-storage-all --xml example.xml
Killed
# virsh -c qemu+tls://remote/system list --all
error: failed to connect to the hypervisor
error: unable to connect to server at 'remote:16514': Connection refused
The crash is in:
1531 void
1532 qemuDomainObjEndJob(virQEMUDriverPtr driver, virDomainObjPtr obj)
1533 {
1534 qemuDomainObjPrivatePtr priv = obj->privateData;
1535 qemuDomainJob job = priv->job.active;
1536
1537 priv->jobs_queued--;
Backtrace:
#0 at qemuDomainObjEndJob at qemu/qemu_domain.c:1537
#1 in qemuDomainRemoveInactive at qemu/qemu_domain.c:2497
#2 in qemuProcessAutoDestroy at qemu/qemu_process.c:5646
#3 in virCloseCallbacksRun at util/virclosecallbacks.c:350
#4 in qemuConnectClose at qemu/qemu_driver.c:1154
...
qemuDomainRemoveInactive calls virDomainObjListRemove, which in this
case is holding the last remaining reference to the domain.
qemuDomainRemoveInactive then calls qemuDomainObjEndJob, but the domain
object has been freed and poisoned by then.
This patch bumps the domain's refcount until qemuDomainRemoveInactive
has completed. We also ensure qemuProcessAutoDestroy does not return the
domain to virCloseCallbacksRun to be unlocked in this case. There is
similar logic in bhyveProcessAutoDestroy and lxcProcessAutoDestroy
(which call virDomainObjListRemove directly).
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
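The essence of the fix, sketched with error handling and the undefine
bookkeeping trimmed (the real function does more than this):

    /* Hold an extra reference so that virDomainObjListRemove dropping the
     * list's (possibly last) reference cannot free @vm while we still
     * need it to end the job. */
    virObjectRef(vm);
    virDomainObjListRemove(driver->domains, vm);
    /* ... remove persistent config, emit events, etc. ... */
    qemuDomainObjEndJob(driver, vm);   /* @vm is still valid here */
    virObjectUnref(vm);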
==19015== 968 (416 direct, 552 indirect) bytes in 1 blocks are definitely lost in loss record 999 of 1,049
==19015== at 0x4C2C070: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==19015== by 0x52ADF14: virAllocVar (viralloc.c:560)
==19015== by 0x5302FD1: virObjectNew (virobject.c:193)
==19015== by 0x1DD9401E: virQEMUDriverConfigNew (qemu_conf.c:164)
==19015== by 0x1DDDF65D: qemuStateInitialize (qemu_driver.c:666)
==19015== by 0x53E0823: virStateInitialize (libvirt.c:777)
==19015== by 0x11E067: daemonRunStateInit (libvirtd.c:905)
==19015== by 0x53201AD: virThreadHelper (virthread.c:206)
==19015== by 0xA1EE1F2: start_thread (in /lib64/libpthread-2.19.so)
==19015== by 0xA4EFC8C: clone (in /lib64/libc-2.19.so)
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
==19015== 1,064 (656 direct, 408 indirect) bytes in 2 blocks are definitely lost in loss record 1,002 of 1,049
==19015== at 0x4C2C070: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==19015== by 0x52AD74B: virAlloc (viralloc.c:144)
==19015== by 0x52B47CA: virCgroupNew (vircgroup.c:1057)
==19015== by 0x52B53E5: virCgroupNewVcpu (vircgroup.c:1451)
==19015== by 0x1DD85A40: qemuSetupCgroupForVcpu (qemu_cgroup.c:1013)
==19015== by 0x1DDA66EA: qemuProcessStart (qemu_process.c:4844)
==19015== by 0x1DDF1807: qemuDomainObjStart (qemu_driver.c:7265)
==19015== by 0x1DDF1A66: qemuDomainCreateWithFlags (qemu_driver.c:7320)
==19015== by 0x1DDF1ACD: qemuDomainCreate (qemu_driver.c:7337)
==19015== by 0x53F87EA: virDomainCreate (libvirt-domain.c:6820)
==19015== by 0x12690A: remoteDispatchDomainCreate (remote_dispatch.h:3481)
==19015== by 0x126827: remoteDispatchDomainCreateHelper (remote_dispatch.h:3457)
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Instead of always using controller 0 and incrementing port number,
respect the maximum port numbers of controllers and use all of them.
Ports for virtio consoles are quietly reserved, but not formatted
(neither in XML nor on QEMU command line).
Also rejects duplicate virtio-serial addresses.
https://bugzilla.redhat.com/show_bug.cgi?id=890606
https://bugzilla.redhat.com/show_bug.cgi?id=1076708
Test changes:
* virtio-auto.args
Filling out the port when just the controller is specified
switched from using 'maxport + 1' to the first free port on the
controller.
* virtio-autoassign.args
Filling out the address when no <address> is specified.
Started using all the controllers instead of 0, also discards
the bus value.
* xml -> xml output of virtio-auto
The port assignment is no longer done as a part of XML parsing,
so the unspecified values stay 0.
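Purely as an illustration of the allocation strategy (the names below
are hypothetical placeholders, not the helpers used by the patch):

    /* Walk every virtio-serial controller and take the first port that
     * is still free, instead of always using controller 0 with an
     * ever-increasing port number. */
    for (i = 0; i < ncontrollers; i++) {
        for (port = 1; port <= maxPortOf(controllers[i]); port++) {   /* hypothetical */
            if (!portInUse(controllers[i], port)) {                   /* hypothetical */
                addr->controller = indexOf(controllers[i]);           /* hypothetical */
                addr->port = port;
                return 0;
            }
        }
    }
    return -1;   /* no controller has a free port left */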
https://bugzilla.redhat.com/show_bug.cgi?id=1206479
As described in virDomainBlockCopy() parameters description, the
VIR_DOMAIN_BLOCK_COPY_GRANULARITY parameter may require the value to
have some specific attributes (e.g. be a power of two or fall within a
certain range). In qemu, a power of two is required. However, our
code does not check that and lets the qemu operation fail. Moreover, the
virsh man page is not as exact as it could be in this respect.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
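The added validation can be expressed with the standard bit trick; the
variable name is illustrative:

    /* A non-zero value is a power of two iff it has exactly one bit set,
     * i.e. granularity & (granularity - 1) == 0. */
    if (granularity & (granularity - 1)) {
        virReportError(VIR_ERR_INVALID_ARG, "%s",
                       _("granularity must be a power of 2"));
        goto cleanup;
    }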
When we shut down or reboot a guest in agent mode, if the guest itself blocks indefinitely,
libvirt would block in qemuAgentShutdown() forever.
Thus we set a timeout for shutdown/reboot; from our experience, 60 seconds is enough.
Signed-off-by: Zhang Bo <oscar.zhangbo@huawei.com>
Signed-off-by: Wang Yufei <james.wangyufei@huawei.com>
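A rough sketch of bounding the wait using libvirt's timed condition
wait; the constant, flag and lock names here are placeholders rather
than the real qemuAgent internals:

#define QEMU_AGENT_SHUTDOWN_TIMEOUT 60   /* seconds, per the message above */

    unsigned long long deadline;

    if (virTimeMillisNow(&deadline) < 0)
        return -1;
    deadline += QEMU_AGENT_SHUTDOWN_TIMEOUT * 1000ULL;

    while (!reply_arrived) {                              /* placeholder flag */
        if (virCondWaitUntil(&agent_notify, &agent_lock, deadline) < 0) {
            virReportError(VIR_ERR_AGENT_UNRESPONSIVE, "%s",
                           _("guest agent did not respond in time"));
            return -1;
        }
    }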
virDomainHasDiskMirror() currently detects only jobs that add the mirror
elements. Since some operations like migration are interlocked by
existing block jobs on the given domain, the check needs to be
extended to cover regular block jobs too.
This patch renames virDomainHasDiskMirror to virDomainHasDiskBlockjob
and adds an argument that selects whether it returns true only for
block copy jobs, as those interlock making the domain persistent.
The other two uses trigger on any block job type.
Signed-off-by: Shanzhi Yu <shyu@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
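A condensed sketch of the renamed helper; the disk fields follow the
usual definitions, but treat the exact member names as assumptions:

bool
virDomainHasDiskBlockjob(virDomainObjPtr vm, bool copy_only)
{
    size_t i;

    for (i = 0; i < vm->def->ndisks; i++) {
        virDomainDiskDefPtr disk = vm->def->disks[i];

        /* regular (non-copy) block jobs count unless only copy jobs
         * were asked for */
        if (!copy_only && disk->blockjob)
            return true;

        /* an active mirror of type "copy" always counts */
        if (disk->mirror && disk->mirrorJob == VIR_DOMAIN_BLOCK_JOB_TYPE_COPY)
            return true;
    }

    return false;
}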
If any disk of a VM was involved in a (copy) block job we refused to do
a snapshot. Since not only copy jobs interlock with snapshots, and the
interlocking applies to individual disks only, we can make the check
per-disk and interlock all block job types supported by libvirt.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1203628
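Sketch of the per-disk interlock for snapshots (only the disks actually
participating in the snapshot would be checked; that selection is
elided here):

    for (i = 0; i < vm->def->ndisks; i++) {
        virDomainDiskDefPtr disk = vm->def->disks[i];

        /* any active block job on this disk blocks the snapshot */
        if (disk->blockjob) {
            virReportError(VIR_ERR_OPERATION_INVALID,
                           _("disk '%s' has an active block job"),
                           disk->dst);
            goto cleanup;
        }
    }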
In the order of appearance:
* MAX_LISTEN - never used
added by 23ad665c (qemud) and addec57 (lock daemon)
* NEXT_FREE_CLASS_ID - never used, added by 07d1b6b
* virLockError - never used, added by eb8268a4
* OPENVZ_MAX_ARG, CMDBUF_LEN, CMDOP_LEN
unused since the removal of ADD_ARG_LIT in d8b31306
* QEMU_NB_PER_CPU_STAT_PARAM - unused since 897808e
* QEMU_CMD_PROMPT, QEMU_PASSWD_PROMPT - unused since 1dc10a7
* TEST_MODEL_WORDSIZE - unused since c25c18f7
* TEMPDIR - never used, added by 714bef5
* NSIG - workaround for old headers
added by commit 60ed1d2
unused since virExec was moved by commit 02e8691
* DO_TEST_PARSE - never used, added by 9afa006
* DIFF_MSEC, GETTIMEOFDAY - unused since eee6eb6
Two places would call qemuPrepareCpumap() with priv->autoNodeset to
convert it to a cpuset. Remove the function and use the prepared cpuset
directly.
When the default cpuset or automatic numa placement is used, libvirt
would place the whole parent cgroup in the specified cpuset. This then
made it impossible to re-pin the vcpus to a different cpu.
This patch pins only the vcpu threads to the default cpuset and thus
allows re-pinning them later.
The following config would fail to start:
<domain type='kvm'>
...
<vcpu placement='static' cpuset='0-1' current='2'>4</vcpu>
<cputune>
<vcpupin vcpu='0' cpuset='2-3'/>
...
This is a regression since a39f69d2b.
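In rough terms the change narrows where the cpuset is applied; a sketch
with assumed helper and variable names:

    /* Apply the default/auto cpuset to each vcpu thread's cgroup only,
     * instead of the whole parent cgroup, so vcpupin can still move
     * individual vcpus somewhere else later. */
    for (i = 0; i < priv->nvcpupids; i++) {
        if (qemuSetupCgroupCpusetCpus(cgroup_vcpu[i], cpumask) < 0)   /* assumed */
            goto cleanup;
    }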
When the synchronous pivot option is selected, libvirt would not update
the backing chain until the job was exited. Some applications then
received invalid data as their job serialized first.
This patch removes polling to wait for the ABORT/PIVOT job completion
and replaces it with a condition. If a synchronous operation is
requested the update of the XML is executed in the job of the caller of
the synchronous request. Otherwise the monitor event callback uses a
separate worker to update the backing chain with a new job.
This is a regression since 1a92c71910.
When the ABORT job is finished synchronously you get the following call
stack:
#0 qemuBlockJobEventProcess
#1 qemuDomainBlockJobImpl
#2 qemuDomainBlockJobAbort
#3 virDomainBlockJobAbort
While previously or while using the _ASYNC flag you'd get:
#0 qemuBlockJobEventProcess
#1 processBlockJobEvent
#2 qemuProcessEventHandler
#3 virThreadPoolWorker
Later on I'll be adding a condition that will allow synchronising a
SYNC block job abort. The approach will require this code to be called
from two different places, so it has to be extracted into a helper.
Commit 1a92c719 moved code to handle block job events to a different
function that is executed in a separate thread. The caller of
processBlockJob handles locking and unlocking of @vm, so we should
not do it in the function itself.
The block copy API takes the speed in bytes/s rather than the MiB/s
used by the prior virDomainBlockRebase API. We correctly converted the
speed to bytes/s in the old API but we still called the common helper
virDomainBlockCopyCommon with the unadjusted variable.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1207122
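The missing adjustment is a plain MiB/s to bytes/s conversion before
calling the common helper; a standalone sketch with an overflow guard:

    /* virDomainBlockRebase callers pass MiB/s; virDomainBlockCopyCommon
     * expects bytes/s, so shift by 20 bits (1 MiB = 1 << 20 bytes). */
    unsigned long long speed = bandwidth;   /* MiB/s as received */

    if (speed > ULLONG_MAX >> 20) {
        virReportError(VIR_ERR_OVERFLOW,
                       _("bandwidth must be less than %llu"),
                       ULLONG_MAX >> 20);
        goto cleanup;
    }
    speed <<= 20;   /* bytes/s, the value actually passed on */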