If a remote call fails during event registration (most likely due to a
network failure or a remote libvirtd restart timed just right), then
when calling virObjectEventStateDeregisterID we don't want to call the
registered @freecb function, because that breaks our contract that we
only call it after successfully returning. If the @freecb routine were
called, it could result in a double free from properly coded
applications that free their opaque data on failure to register, as
seen in the following details:
Program terminated with signal 6, Aborted.
#0 0x00007fc45cba15d7 in raise
#1 0x00007fc45cba2cc8 in abort
#2 0x00007fc45cbe12f7 in __libc_message
#3 0x00007fc45cbe86d3 in _int_free
#4 0x00007fc45d8d292c in PyDict_Fini
#5 0x00007fc45d94f46a in Py_Finalize
#6 0x00007fc45d960735 in Py_Main
#7 0x00007fc45cb8daf5 in __libc_start_main
#8 0x0000000000400721 in _start
The double dereference of 'pyobj_cbData' is triggered in the following way:
(1) libvirt_virConnectDomainEventRegisterAny is invoked.
(2) The event is successfully added to the event callback list
    (virDomainEventStateRegisterClient in
    remoteConnectDomainEventRegisterAny returns 1, which means OK).
(3) While remoteConnectDomainEventRegisterAny is executing, the network
    connection happens to drop (or libvirtd is restarted) in the
    context of the function 'call'; the connection is lost, 'call'
    fails, and the virObjectEventStateDeregisterID branch is therefore
    taken.
(4) 'pyobj_conn' is dereferenced the 1st time in
    libvirt_virConnectDomainEventFreeFunc.
(5) 'pyobj_cbData' (which refers to pyobj_conn) is dereferenced the
    2nd time in libvirt_virConnectDomainEventRegisterAny.
(6) The double free error is triggered.
Resolve this by adding a @doFreeCb boolean in order to avoid calling the
freeCb in virObjectEventStateDeregisterID for any remote call failure in
a remoteConnect*EventRegister* API. For remoteConnect*EventDeregister*
calls, the passed value is true, indicating they should run the freecb
if it exists, whereas it is false for the remote call failure path.
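For illustration, here is a hedged application-side sketch of that
contract (the helper names registerEvents and appData are invented for
the example, not taken from the patch): a correctly written caller
frees its opaque data itself when registration fails, which is exactly
why libvirt must not also invoke @freecb on that path.

    #include <stdlib.h>
    #include <libvirt/libvirt.h>

    struct appData { int counter; };

    static void
    freeAppData(void *opaque)
    {
        free(opaque);
    }

    static int
    registerEvents(virConnectPtr conn, virConnectDomainEventGenericCallback cb)
    {
        struct appData *data;

        if (!(data = calloc(1, sizeof(*data))))
            return -1;

        if (virConnectDomainEventRegisterAny(conn, NULL,
                                             VIR_DOMAIN_EVENT_ID_REBOOT,
                                             cb, data, freeAppData) < 0) {
            free(data);  /* registration failed: the caller cleans up itself */
            return -1;   /* running @freecb here would free 'data' twice */
        }

        return 0;        /* success: libvirt owns 'data' until deregistration */
    }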
Patch based on the investigation and initial patch posted by
fangying <fangying1@huawei.com>.
Commit 7456c4f5f introduced a regression by not reloading the backing
chain of a disk after a snapshot. The regression was caused by
src->relPath not being set, and thus the block commit code could not
determine the relative path.
This patch adds code that loads the backing store string if
VIR_DOMAIN_SNAPSHOT_CREATE_REUSE_EXT is used and stores it in the
correct place once the snapshot completes successfully.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1461303
Determining the relative path of the images does not need to happen
after setting the labeling and lock manager access. This saves the
cleanup of the labeling if the relative path can't be determined.
Most places which want to check ABI stability for an active domain need
to call this API rather than the original
qemuDomainDefCheckABIStability. The only exception is in snapshots where
we need to decide what to do depending on the saved image data.
https://bugzilla.redhat.com/show_bug.cgi?id=1460952
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Commit v3.4.0-44-gac793bd71 fixed a memory leak, but failed to return
the special -3 value. Thus an attempt to start a domain with a
corrupted managed save file would remove the corrupted file and report
"An error occurred, but the cause is unknown" instead of starting the
domain from scratch.
https://bugzilla.redhat.com/show_bug.cgi?id=1460962
Use ATTRIBUTE_FALLTHROUGH, introduced by commit 5d84f5961b, instead of
comments to indicate that the fall-through is intentional behavior.
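As a quick illustration of the pattern (a made-up example, not a hunk
from this patch; ATTRIBUTE_FALLTHROUGH is provided by libvirt's
internal.h):

    #include "internal.h"

    static int
    classify(int code)
    {
        switch (code) {
        case 0:
            return 0;
        case 1:
            /* previously a bare "fall through" comment sat here */
            ATTRIBUTE_FALLTHROUGH;
        case 2:
            return 1;
        default:
            return -1;
        }
    }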
Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
If QEMU is new enough and we have the live updated CPU definition in
either save or migration cookie, we can use it to enforce ABI. The
original guest CPU from domain XML will be stored in private data.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The destination host may not be able to start a domain using the live
updated CPU definition because either libvirt or QEMU may not be new
enough. Thus we need to send the original guest CPU definition.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The following patches will add actual content to the cookie and use the
data when restoring a domain.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The new structure encapsulates the save image header and the associated
data (domain XML).
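A rough sketch of the shape being described (the field names are an
assumption for illustration, not quoted from the patch):

    typedef struct _virQEMUSaveData virQEMUSaveData;
    typedef virQEMUSaveData *virQEMUSaveDataPtr;

    struct _virQEMUSaveData {
        virQEMUSaveHeader header;   /* the save image header */
        char *xml;                  /* the associated domain XML */
    };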
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The function is now called virQEMUSaveDataWrite and it now does
everything needed to save both the save image header and the domain XML
to a file, be it a new file or an existing file in which a user wants
to change the domain XML.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The function is supposed to update the save image header once the
migration into the save image file has successfully completed.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
This is a preparation for creating a new virQEMUSaveData structure which
will encapsulate all save image header data.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Since virQEMUSaveHeader will be followed by more than just domain XML,
the old name would be confusing as it was designed to describe the
length of all data following the save image header.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
This will be used later when a save cookie becomes part of the snapshot
XML, using new driver-specific parser/formatter functions.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Allow starting the block-copy job for a persistent domain if the user
declares, via a flag, that the job will not be recovered if the VM is
switched off while the job is active.
This makes it possible to use the block-copy job with persistent VMs
under the same conditions as apply to transient domains.
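A hedged usage sketch of what this enables (the flag name
VIR_DOMAIN_BLOCK_COPY_TRANSIENT_JOB, the disk target and the
destination path are assumptions chosen for the example):

    #include <libvirt/libvirt.h>

    static int
    startTransientCopy(virDomainPtr dom)
    {
        const char *destxml =
            "<disk type='file'>"
            "  <driver type='qcow2'/>"
            "  <source file='/var/lib/libvirt/images/copy.qcow2'/>"
            "</disk>";

        /* The flag declares that the job will not be recovered if the VM
         * is switched off while the job is active, which is what makes
         * block copy acceptable for a persistent domain. */
        return virDomainBlockCopy(dom, "vda", destxml, NULL, 0,
                                  VIR_DOMAIN_BLOCK_COPY_TRANSIENT_JOB);
    }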
While checking for ABI stability, drivers might impose additional
checks that are not valid in the general case. For instance, the qemu
driver might check some memory backing attributes because of how qemu
works, but those attributes may work well in other drivers.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
qemuDomainGetBlockInfo would error out if qemu did not report
'wr_highest_offset'. This usually does not happen, but can happen
briefly during active layer block commit. There's no need to report the
error; we can simply report that the disk is fully allocated at that
point.
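The intended behaviour, as a small illustrative helper (the struct and
field names are made up for the sketch, not the actual
qemuDomainGetBlockInfo code):

    #include <stdbool.h>

    struct blockStats {
        unsigned long long wr_highest_offset;
        bool wr_highest_offset_valid;
    };

    struct blockInfo {
        unsigned long long capacity;
        unsigned long long allocation;
        unsigned long long physical;
    };

    static void
    fillAllocation(const struct blockStats *stats, struct blockInfo *info)
    {
        if (stats->wr_highest_offset_valid)
            info->allocation = stats->wr_highest_offset;
        else
            info->allocation = info->physical; /* treat as fully allocated */
    }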
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1452045
In 48d9e6cdcc and friends we've allowed users to back guest memory with
a file inside the host. In order to keep things manageable, the
memory_backing_dir variable was introduced to qemu.conf to specify the
directory where the files are kept. However, libvirt's policy is that
directories are created on domain startup if they don't exist. We've
missed this one.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Setting the 'group_name' for a disk would falsely trigger an error
path, as in commit 4b57f76502 we did not properly check the return
value of VIR_STRDUP.
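For context, VIR_STRDUP returns -1 on allocation failure, 0 when the
source string is NULL and 1 on success, so only a negative value is an
error. A hedged sketch of the correct check (setGroupName is a made-up
helper name, not code from the patch):

    #include "virstring.h"   /* VIR_STRDUP */

    static int
    setGroupName(char **group_name, const char *value)
    {
        /* Only a negative return signals an error; a plain
         * "if (VIR_STRDUP(...))" would wrongly treat the successful
         * return value of 1 as a failure. */
        if (VIR_STRDUP(*group_name, value) < 0)
            return -1;
        return 0;
    }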
Since we allow active layer block commit, users are allowed to commit
the top of the chain (e.g. vda) into the backing image. However, the
API would not accept that parameter, as it tried to look up the image
in the backing chain.
Add the ability to explicitly use the top level image target name as
the top image of the block commit operation.
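A hedged sketch of the resulting usage (the disk target, image path and
flags are illustrative assumptions): commit the active top image of vda
into its immediate backing file by naming the top image explicitly.

    #include <libvirt/libvirt.h>

    static int
    commitActiveLayer(virDomainPtr dom)
    {
        /* Hypothetical path of the current active image of vda. */
        const char *topPath = "/var/lib/libvirt/images/vm-active.qcow2";

        /* ACTIVE marks this as an active layer commit (it must later be
         * pivoted via virDomainBlockJobAbort with
         * VIR_DOMAIN_BLOCK_JOB_ABORT_PIVOT); SHALLOW keeps the base at
         * the immediate backing file. */
        return virDomainBlockCommit(dom, "vda", NULL, topPath, 0,
                                    VIR_DOMAIN_BLOCK_COMMIT_ACTIVE |
                                    VIR_DOMAIN_BLOCK_COMMIT_SHALLOW);
    }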
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1451394
Since commit c5f6151390, qemuDomainBlockInfo tries to update the
"physical" storage size for all network storage, not only for block
devices.
Because the storage driver APIs to do this are not implemented for
certain storage types (RBD, iSCSI, ...), the code would fail to
retrieve any data, as the failure of qemuDomainStorageUpdatePhysical is
fatal.
Since it's desirable to return data even if the total size can't be
updated, we need to ignore errors from that function and return
plausible data.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1442344
Not all async jobs are visible via virDomainGetJobStats (either they are
too fast or getting the stats is not allowed during the job), but
forcing all of them to advertise the operation is easier than hunting
the jobs for which fetching statistics is allowed. And we won't need to
think about this when we add support for getting stats for more jobs.
https://bugzilla.redhat.com/show_bug.cgi?id=1441563
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuDomainGetNumaParameters would return the automatic nodeset even for
the persistent config if the domain was running. This is incorrect,
since the automatic nodeset will be re-queried upon starting the VM.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1445325
The code that validates whether an internal snapshot is possible would
reject an empty but non-readonly drive. Since floppies can have this
property, add a check for emptiness.
So far our code is full of the following pattern:
    dom = virGetDomain(conn, name, uuid);
    if (dom)
        dom->id = 42;
There is no reason why it couldn't be just:
    dom = virGetDomain(conn, name, uuid, id);
After all, the client domain representation consists of the tuple
(name, uuid, id).
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
In 9e2465834 a check was added that denies internal snapshots when a
pflash based loader is configured for the domain. However, if there's
none and a user tries to do an internal snapshot, they will witness a
daemon crash, because in that case vm->def->os.loader is NULL and we
dereference it unconditionally.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
One of the problems with our virGetDomain function is that it copies
just the domain name and domain UUID. Therefore it's very easy to
forget about the domain ID. This can cause some bugs, like
virConnectGetAllDomainStats not reporting proper domain IDs.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The hyperv panic notifier reports additional data in the form of 5
registers that are reported in the crash event from qemu. Log them into
the VM log file and report them as a warning so that admins can see the
cause of the crash of their Windows VMs.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1426176
That file has only two exported functions, and each of them uses a
different naming scheme. virNode is what all the other files use, so
let's use it. It wasn't used before because of the clash with public
API naming, so let's fix that by shortening the name (there is no other
private variant of it anyway).
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
There is no "node driver" as there was before, drivers have to do
their own ACL checking anyway, so they all specify their functions and
nodeinfo is basically just extending conf/capablities. Hence moving
the code to src/conf/ is the right way to go.
Also that way we can de-duplicate some code that is in virsysfs and/or
virhostcpu that got duplicated during the virhostcpu.c split. And
Some cleanup is done throughout the changes, like adding the vir*
prefix etc.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
There is no reason for it not to be in the utils; all global symbols in
that file already have the vir* prefix, and there is no reason for it
to be part of DRIVER_SOURCES, as that is just a leftover from older
days (the pre-driver-modules era, I believe).
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Management tools may want to check whether the threshold is still set if
they missed an event. Add the data to the bulk stats API where they can
also query the current backing size at the same time.
Detect the node names when setting the block threshold, when
reconnecting, or when they are cleared after a block job finishes. This
operation will become a no-op once we fully support node names.