The documentation states that for a shallow block copy the image has to
have the same guest-visible content as the backing file of the current
image if the file is being reused. This condition can also be satisfied
by a raw file (or a qcow2 without a backing file), so remove the
condition that would disallow it.
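A minimal sketch of the relaxed condition (hedged; 'shallow', 'reuse' and
'mirror_has_backing' are illustrative stand-ins for the real flag checks,
not the exact libvirt code):

    /* Only insist on a backing store when libvirt creates the
     * destination itself; a reused raw file (or qcow2 without a
     * backing file) is now accepted. */
    if (shallow && !reuse && !mirror_has_backing) {
        virReportError(VIR_ERR_INVALID_ARG, "%s",
                       _("shallow copy needs a destination with a "
                         "backing store unless the file is reused"));
        goto cleanup;
    }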
(This patch additionally fixes a crash described in
https://bugzilla.redhat.com/show_bug.cgi?id=1215569 )
It will be used in qemumonitorjsontest, so we make it non-static.
Signed-off-by: Zhang Bo <oscar.zhangbo@huawei.com>
Signed-off-by: Zhou Yimin <zhouyimin@huawei.com>
If we received zero iothreads from the monitor, but were perhaps
expecting to receive something, the code was skipping the check that
ensures what's in the monitor matches our expectations. So invert the
order of the checks: first verify that what we got back matches our
expectations, and only then handle the case of zero iothreads being
returned.
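Roughly, the reordered logic looks like this (a hedged sketch; variable
names are illustrative):

    /* First make sure the monitor data matches the definition... */
    if (niothreads != targetDef->iothreads) {
        virReportError(VIR_ERR_INTERNAL_ERROR,
                       _("got %d IOThreads from the monitor, expected %d"),
                       niothreads, targetDef->iothreads);
        goto cleanup;
    }
    /* ...and only then short-circuit the "nothing returned" case. */
    if (niothreads == 0) {
        ret = 0;
        goto cleanup;
    }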
Rather than have a separate routine to parse the alias of an iothread
returned from qemu in order to get the iothread_id value, parse the alias
when returning and just return the iothread_id in qemuMonitorIOThreadInfoPtr.
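For illustration, the parsing now happens once, at the point the data is
returned, along these lines (hedged sketch; 'thread->name' and 'info'
are illustrative names):

    /* The alias has the form "iothread<ID>"; extract the numeric part
     * directly into the returned info structure. */
    if (sscanf(thread->name, "iothread%u", &info->iothread_id) != 1) {
        virReportError(VIR_ERR_INTERNAL_ERROR,
                       _("unexpected IOThread alias '%s'"), thread->name);
        goto cleanup;
    }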
This set of patches removes the function, changes the "char *name" to
an "unsigned int", and handles all the fallout.
Coverity notes that the switch() used to check 'connected' values has
two DEADCODE paths (_DEFAULT & _LAST). Since 'connected' is a boolean
it can only take one of two values (CONNECTED or DISCONNECTED), so using
a switch to cover "all" values is pointless. Convert it to an if-else.
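The conversion is mechanical; schematically (enum names assumed from the
_DEFAULT/_LAST pattern described above):

    /* 'connected' can only hold one of two values, so an if/else
     * covers every reachable case without DEADCODE branches. */
    if (connected == VIR_DOMAIN_CHR_DEVICE_STATE_CONNECTED) {
        /* ... handle the connected case ... */
    } else {
        /* only DISCONNECTED remains */
    }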
Add qemuDomainAddIOThread and qemuDomainDelIOThread in order to add or
remove an IOThread to/from the host, for either the live or config options.
The implementation for the 'live' option will use the iothreadpids list
in order to make decisions, while the 'config' option will use the
iothreadids list. Additionally, for deletion each may have to adjust
the iothreadpin list.
IOThreads are implemented as QMP objects, so the code makes use of the
existing qemuMonitorAddObject and qemuMonitorDelObject APIs.
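The live-attach path reduces to something like this (a hedged sketch;
the alias format and error handling are simplified):

    /* Each IOThread is a QOM object of type "iothread" whose alias
     * encodes the iothread_id, e.g. "iothread3". */
    if (virAsprintf(&alias, "iothread%u", iothread_id) < 0)
        goto cleanup;
    qemuDomainObjEnterMonitor(driver, vm);
    rc = qemuMonitorAddObject(priv->mon, "iothread", alias, NULL);
    if (qemuDomainObjExitMonitor(driver, vm) < 0 || rc < 0)
        goto cleanup;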
Signed-off-by: John Ferlan <jferlan@redhat.com>
Remove the iothreadspin array from cputune and replace with a cpumask
to be stored in the iothreadids list.
Adjust the test output because our printing now follows the order of the
iothreadids list.
Add 'thread_id' to the virDomainIOThreadIDDef as a means to store the
'thread_id' as returned from the live qemu monitor data.
Remove the iothreadpids list from _qemuDomainObjPrivate and replace it
with the new iothreadids 'thread_id' element.
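After these changes the per-IOThread bookkeeping lives in one structure,
roughly (field names per the commits above; surrounding details omitted):

    struct _virDomainIOThreadIDDef {
        unsigned int iothread_id;  /* user-visible IOThread ID */
        int thread_id;             /* live thread ID reported by qemu */
        virBitmapPtr cpumask;      /* pinning; replaces cputune iothreadspin */
    };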
Rather than use the default numbering scheme of 1..number of iothreads
defined for the domain, use the iothreadids list for the iothread_id.
Since the iothreadids list keeps track of the iothread_id values, these
are now used in the many places where a for loop would previously "know"
that the ID was "+ 1" from the array element index.
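Schematically, loops change from position-derived IDs to list-provided
ones ('use_iothread_id' is a hypothetical stand-in for the per-ID work):

    /* Before: the ID was implied by the array position. */
    for (i = 0; i < def->iothreads; i++)
        use_iothread_id(i + 1);

    /* After: the ID comes straight from the iothreadids list. */
    for (i = 0; i < def->niothreadids; i++)
        use_iothread_id(def->iothreadids[i]->iothread_id);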
The new tests cover using <iothreadid> values for the exact number of
iothreads, as well as supplying fewer <iothreadid> values than there are
iothreads (where the remainder falls back to the default numbering
scheme).
If a user hot-attaches the guest agent channel, libvirt would ignore it
until libvirtd was restarted or the VM itself was shut down/destroyed
and started again.
This patch adds code that opens or closes the guest agent connection in
response to connect/disconnect events on the guest agent channel.
To allow opening the channel from the event handler, qemuConnectAgent
needed to be exported.
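The event handler then boils down to something like this (hedged sketch;
locking and error handling omitted):

    /* React to the state change of the guest agent channel. */
    if (state == VIR_DOMAIN_CHR_DEVICE_STATE_CONNECTED) {
        if (qemuConnectAgent(driver, vm) < 0)
            VIR_WARN("Failed to connect to the guest agent");
    } else {
        qemuAgentClose(priv->agent);
        priv->agent = NULL;
    }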
When the guest agent channel is hotplugged to a VM, libvirt would
still report that "QEMU guest agent is not configured" rather than
stating that the connection has not been established yet.
Currently the code won't be able to connect to the agent after hotplug
but that will change in a later patch.
As the qemuFindAgentConfig() helper is quite helpful in this case, move
it to a more usable place and export it.
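The error reporting then distinguishes the two states along these lines
(illustrative sketch):

    if (!qemuFindAgentConfig(vm->def)) {
        virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED, "%s",
                       _("QEMU guest agent is not configured"));
    } else if (!priv->agent) {
        virReportError(VIR_ERR_AGENT_UNRESPONSIVE, "%s",
                       _("QEMU guest agent is not connected"));
    }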
virDomainGetJobStats is able to report statistics of a completed
migration; however, to get usable downtime and total time statistics,
both hosts have to have synchronized clocks. To provide at least some
estimation of the times even when NTP daemons are not running on both
hosts, we can just ignore the time needed to transfer the migration
cookie to the destination host. The result will still be inaccurate,
but a bit more predictable: the total/down time will just be at least
what we report.
https://bugzilla.redhat.com/show_bug.cgi?id=1213434
Every API that grabs a domain object to work on should reference it to
make sure it won't disappear in the meantime.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This is basically turning qemuDomObjEndAPI into a more general
function. Other drivers that get a reference to domain objects may
benefit from this function too.
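The generalized helper is essentially this (a sketch of the resulting
virDomainObjEndAPI):

    void
    virDomainObjEndAPI(virDomainObjPtr *vm)
    {
        if (!*vm)
            return;
        virObjectUnlock(*vm);   /* drop the lock taken at lookup */
        virObjectUnref(*vm);    /* release the API's reference */
        *vm = NULL;
    }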
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Just as commit b8e25c35d7 did, we fall back to the ACPI method when the
guest agent is unresponsive in qemuDomainReboot().
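Schematically, the fallback looks like this (hedged sketch; the real
code has more monitor/job bookkeeping, and 'acpiAvailable' is an
illustrative name):

    ret = qemuAgentShutdown(priv->agent, QEMU_AGENT_SHUTDOWN_REBOOT);
    if (ret < 0 && acpiAvailable) {
        /* agent did not respond: fall back to an ACPI-driven reboot */
        qemuDomainSetFakeReboot(driver, vm, true);
        ret = qemuMonitorSystemPowerdown(priv->mon);
    }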
Signed-off-by: YueWenyuan <yuewenyuan@huawei.com>
Signed-off-by: Zhang Bo <oscar.zhangbo@huawei.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Because packets going through the egress from a bridge (where our
bandwidth limiting takes place) have no information about which
interface they came from, the QoS rules that we create instead
use the source MAC address of the packets to make their decisions
about which QDisc the packet should be in.
One flaw in this is that when a guest changed the MAC address it
used, packets from the guest would no longer be put into the
correct QDisc, but would instead be put in an "unprivileged"
class, resulting in the bandwidth "floor" (minimum guaranteed)
being no longer honored.
Now that libvirt has infrastructure to capture and respond to
RX_FILTER_CHANGE events from qemu (sent whenever a guest
interface modifies its MAC address, among other things), we can
notice when a guest MAC address changes, and update the QoS rules
accordingly, so that bandwidth floor is honored even after a
guest MAC address change.
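In outline, the RX_FILTER_CHANGED handler now re-creates the MAC-based
filter (a hedged sketch; the helper name and arguments are illustrative,
and 'class_id' identifies the guest's QoS class):

    /* When the event reports a new MAC, update the tc filter that
     * maps the guest's traffic onto its QoS class. */
    if (virNetDevBandwidthUpdateFilter(brname, &newmac, class_id) < 0)
        VIR_WARN("Cannot update bandwidth filter for the new MAC");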
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
qemuDomainSetMemoryFlags() would allow setting the initial memory
greater than the <maxMemory> field. While such a configuration would
not work (memory hotplug requires NUMA to be enabled, and the
qemuDomainSetMemoryFlags() API does not work on NUMA guests), this just
fixes a corner case.
The fix is still worthwhile though, as the bug allows inducing an
invalid configuration that makes the VM vanish on libvirt restart.
Additionally, this tweaks the error message to be more accurate.
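The added validation is roughly (sketch; surrounding job handling
omitted, error message approximate):

    if (def->mem.max_memory && def->mem.max_memory < newmem) {
        virReportError(VIR_ERR_INVALID_ARG, "%s",
                       _("cannot set initial memory size greater than "
                         "the maximum memory size"));
        goto endjob;
    }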
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
A further fix for:
https://bugzilla.redhat.com/show_bug.cgi?id=1113474
Since there is no possibility that any type of macvtap will work if
the parent physdev it's attached to is offline, we should bring the
physdev online at the same time as the macvtap. When taking the
macvtap offline, it's also necessary to take the physdev offline for
macvtap passthrough mode (because the physdev has the same MAC address
as the macvtap device, so could potentially cause problems with
misdirected packets during migration, as outlined in commits 829770
and 879c13). We can't set the physdev offline for other macvtap modes
1) because there may be other macvtap devices attached to the same
physdev (and/or the host itself may be using the device) in the other
modes whereas passthrough mode is exclusive to one macvtap at a time,
and 2) there's no practical reason to do so anyway.
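Roughly (illustrative sketch; virNetDevSetOnline is the existing helper,
the variable names are stand-ins):

    /* Bring the parent physdev up before the macvtap device itself. */
    if (virNetDevSetOnline(physdev, true) < 0 ||
        virNetDevSetOnline(ifname, true) < 0)
        goto error;

    /* On teardown, only passthrough mode takes the physdev down too. */
    if (mode == VIR_NETDEV_MACVLAN_MODE_PASSTHRU)
        ignore_value(virNetDevSetOnline(physdev, false));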
- Remove all qemu emulators
- Restart libvirtd
- Install qemu emulators
- Call 'virsh version' -> errors
The only thing that will force the qemu driver to refresh its cached
capabilities info is an explicit API call to GetCapabilities.
However, in the case when the initial caps lookup at driver connect
didn't find a single qemu emulator to poll, the driver is effectively
useless and really can't do anything until it has populated some qemu
capabilities info.
With the above steps, the user would have to either know about the
magic refresh capabilities call, or restart libvirtd to pick up the
changes.
Instead, this patch changes things so that every time a part of the
driver requests access to capabilities info, we check whether we've
previously seen any emulators; if not, we force a refresh.
In the case of 'still no emulators found', this is still very quick, so
I can't think of a downside.
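The change amounts to a guard like this (sketch; the
qemuDriverCapsCacheHasEmulators helper name is hypothetical):

    /* If the cache has never seen a single emulator, any cached result
     * is useless, so force a re-probe before handing out capabilities. */
    if (!qemuDriverCapsCacheHasEmulators(driver))
        refresh = true;
    caps = virQEMUDriverGetCapabilities(driver, refresh);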
https://bugzilla.redhat.com/show_bug.cgi?id=1000116
This needs to be specified in way too many places for a simple validation
check. The ostype/arch/virttype validation checks later in
DomainDefParseXML should catch most of the cases that this was covering.
This revealed that GuestDefaultEmulator was a bit buggy, capable
of returning an emulator that didn't match the passed domain type. Fix
up the test suite input to continue to pass.
https://bugzilla.redhat.com/show_bug.cgi?id=1209948
So we have this bug. The virConnectGetDomainCapabilities() API
performs a couple of checks before it produces any result. One of
the checks is whether the architecture requested by the user can be
run by the binary (again user-provided). However, the check is pretty
dumb. It merely compares whether the default binary architecture
matches the one provided by the user. But a qemu binary can run
multiple architectures. For instance, qemu-system-ppc64 can run:
ppc, ppcle, ppc64, ppc64le and ppcemb. The default is ppc64, so
if the user requested something else, like ppc64le, the check would
have failed without an obvious reason.
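The fixed check needs to consult every architecture the binary can run,
schematically (hedged; variable names are illustrative):

    /* Accept the request if any of the binary's supported
     * architectures matches, not just its default one. */
    supported = false;
    for (i = 0; i < narchs; i++) {
        if (archs[i] == req_arch) {
            supported = true;
            break;
        }
    }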
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When a qemu domain is to be rebooted from outside, at the libvirt
level it looks like a regular shutdown. To really restart the
domain, libvirt needs to issue a reset command on the monitor once
the SHUTDOWN event appears. So, in order to differentiate a bare
shutdown from a reboot, libvirt uses a variable within the domain
private data. It's called fakeReboot. When the reboot API is called,
the variable is set, but when the shutdown API is called it must be
cleared. And it was not, for every possible case. So if a user
called virDomainReboot(), and there was no ACPI daemon running
inside the guest (so the guest didn't initiate the shutdown sequence),
and virDomainShutdown(mode=agent) was then called, a bad thing
happened: we remembered the fakeReboot and instead of shutting
the domain down, we just rebooted it.
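The fix is to clear the flag on every shutdown path, including the
agent one (sketch; surrounding job handling omitted):

    /* Shutting down must never inherit a stale fakeReboot request. */
    qemuDomainSetFakeReboot(driver, vm, false);
    ret = qemuAgentShutdown(priv->agent, QEMU_AGENT_SHUTDOWN_POWERDOWN);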
Signed-off-by: Zhang Bo <oscar.zhangbo@huawei.com>
Signed-off-by: Wang Yufei <james.wangyufei@huawei.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Among all the monitor APIs, some were checking if mon is NULL and some
were not. Since it's possible for mon to be NULL when a second call is
attempted after having entered the monitor, every single API needs to
check for it.
This patch adds a macro that helps with checking the state of the
monitor, and either refactors existing checking code to use the macro
or adds the check where it was missing.
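The macro looks roughly like this (a simplified sketch of such a
QEMU_CHECK_MONITOR helper):

    #define QEMU_CHECK_MONITOR(mon) \
        do { \
            if (!(mon)) { \
                virReportError(VIR_ERR_INVALID_ARG, "%s", \
                               _("monitor must not be NULL")); \
                return -1; \
            } \
        } while (0)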
Rather than erroring out, make a best-effort attempt to retrieve other
data if disks are inaccessible or missing. The failure will still be
logged, though.
Since the bulk stats API is called on multiple domains, an error like
this makes the API unusable. This regression was introduced by commit
596a137134
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1209394
Commit f6563bc3 introduced an HMP impl of the function (so that a
different, uglier function could be removed). Before the HMP code is
called, there's a leftover check that the monitor is JSON, which
prevents the code from working.