We can add the iSCSI hostdevs to the same test file.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
"hostdev-scsi-readonly" case tests the readonly disk with a virtio-scsi
controller. Add it for the 'lsi' controller test as well.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
qemu-2.8 didn't yet support QEMU_CAPS_ISCSI_PASSWORD_SECRET. This
version will allow integrating multiple test cases into one.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Modernize the current state to the pre-blockdev version of qemu to
minimize changes. A later patch will add a 'latest' case too.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The build job for this container has been failing every single
time, and as it turns out the explanation for that is very simple:
Debian is just not going to support the mips architecture going
forward.
Reported-by: Pino Toscano <ptoscano@redhat.com>
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
The bitmap name used for the incremental backup would be leaked
otherwise.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Originally the function cleaned up only after a failed job, but now there
is other state that needs to be cleared too.
Make only the steps which clean up after a failed job depend on the
'started' field and always execute the rest of the code.
This fixes a leak of the backup job tracking object and the blockdev-add
helper data.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The code would repeatedly mark the first disk's blockjob as started
rather than accessing all the blockjobs. Fix the dereferencing operator.
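For illustration, a minimal standalone example of this class of mistake
(all names are made up; this is not the libvirt code):

/* Applying the index before dereferencing touches each element; the broken
 * variant dereferences the array pointer first and always hits element 0. */
#include <stdbool.h>
#include <stdio.h>

struct blockjob { bool started; };

static void
mark_all_started(struct blockjob **jobs, size_t njobs)
{
    for (size_t i = 0; i < njobs; i++)
        jobs[i]->started = true;    /* broken variant: (*jobs)->started = true; */
}

int
main(void)
{
    struct blockjob a = { false }, b = { false };
    struct blockjob *jobs[] = { &a, &b };

    mark_all_started(jobs, 2);
    printf("%d %d\n", a.started, b.started);    /* prints "1 1" */
    return 0;
}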
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Two functions called in sequence both initialized the virStorageSource
backing 'store', leading to a memleak.
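For illustration only, the generic shape of the leak (hypothetical names,
not the actual libvirt functions):

#include <stdlib.h>

struct source { struct source *store; };

static void
prepare_store(struct source *src)
{
    src->store = calloc(1, sizeof(*src->store));
}

static void
init_job_data(struct source *src)
{
    /* Overwrites the pointer allocated above without freeing it: memleak. */
    src->store = calloc(1, sizeof(*src->store));
}

int
main(void)
{
    struct source src = { NULL };

    prepare_store(&src);
    init_job_data(&src);    /* the allocation from prepare_store() leaks */
    free(src.store);
    return 0;
}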
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The cleanup path expects that 'def' is assigned to 'priv->backup', but
that's not the case for early failures. Add a check so that 'def' is not
overwritten and can still be freed.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Using virKModConfig would not simplify any existing code.
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
All callers except for the test suite pass the same value
for the second arg, so it can be removed, simplifying the
code.
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
When a block copy job fails prior to reaching the synchronized phase
while we are waiting for the job to finish, virsh would print the
following:
$ virsh blockcopy backup-test vda /tmp/dst.qcow2 --wait --reuse-external --transient-job
error:
Copy failed
The above message looks like we've forgotten to print the error message
itself, as the line ends after 'error:'. Unfortunately, with the current
API design clients have no way of actually getting the error message, as
the VIR_DOMAIN_EVENT_ID_BLOCK_JOB(_2) event only reports the status but
not an error, and the job then vanishes.
Fix the expectations by using vshPrintExtra instead of vshError:
$ virsh blockcopy backup-test vda /tmp/dst.qcow2 --wait --reuse-external --transient-job
Copy failed
Note that the newline is required to avoid printing the 'Copy failed'
message on the same line as the job progress percentage.
Inspired by https://bugzilla.redhat.com/show_bug.cgi?id=1847867
Fix the same issue also for the block pull and block commit jobs.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Outline the basics and how to integrate with externally created
overlays. Other topics will follow later.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Test both 'basic' and 'snapshots' cases on shallow and deep copy modes.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reuse qemuBlockGetBitmapMergeActions, which allows removing the
ad-hoc implementation of bitmap merging for block copy.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
The new semantics of bitmap handling don't need this. Remove the field
and all uses of it, including the status XML.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Simulate commit between all the combinations of layers in the
'snapshots' case to see whether the code merges the correct bitmaps with
the correct depth of temporary bitmaps.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
In the 'basic' case we have a few bitmaps only in the top layer. Simulate
a commit into the backing of the top layer and also 2 levels deep.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reuse qemuBlockGetBitmapMergeActions, which allows removing the ad-hoc
implementation of bitmap merging for block commit. The new approach is
much simpler and more robust, and also allows us to get rid of the
disabling of bitmaps done prior to the start, as we actually do want to
update the bitmaps in the base.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
The 'snapshots' case has multiple layers, so we need to make sure that
the bitmaps are merged with the appropriate temporary bitmaps formatted
from the allocation bitmap of any backing chain layer above.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
The 'basic' case is just a single backing store layer containing the
bitmaps, so we simply copy the bitmaps over to the backup bitmap.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reuse qemuBlockGetBitmapMergeActions, which allows removing the ad-hoc
implementation of bitmap merging for backup. The new approach is simpler
and also more robust in case some of the bitmaps break, as it removes the
dependency on the whole chain of bitmaps working.
The new approach also allows backups if a snapshot is created outside of
libvirt.
Additionally, the code is greatly simplified.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Add a function which merges bitmaps according to the new semantics. It
will allow replacing all the specific ad-hoc functions currently in use
for 'backup', 'block commit' and 'block copy', and will also be usable
in the future for 'block pull' and non-shared-storage migration.
The semantics are a bit quirky for the 'backup' case, but these quirks
are documented and prevent us from having two slightly different
algorithms.
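As a rough illustration of the intended semantics, here is a conceptual
sketch of the 'block commit' style calculation (types, helper and node
names are hypothetical; this is not the real qemuBlockGetBitmapMergeActions
signature):

/* Every bitmap present in the layers being removed is merged into an
 * equally named bitmap in the base, created there on demand. */
#include <stdio.h>

struct layer {
    const char *node;          /* block node name of this image */
    const char **bitmaps;      /* NULL-terminated bitmap names */
    struct layer *backing;
};

static void
merge_bitmaps_into_base(struct layer *top, struct layer *base)
{
    for (struct layer *l = top; l && l != base; l = l->backing) {
        for (const char **b = l->bitmaps; b && *b; b++) {
            printf("block-dirty-bitmap-add    node=%s name=%s (if missing)\n",
                   base->node, *b);
            printf("block-dirty-bitmap-merge  node=%s target=%s src={%s,%s}\n",
                   base->node, *b, l->node, *b);
        }
    }
}

int
main(void)
{
    const char *base_bitmaps[] = { NULL };
    const char *top_bitmaps[] = { "ck1", "ck2", NULL };
    struct layer base = { "node-base", base_bitmaps, NULL };
    struct layer top = { "node-top", top_bitmaps, &base };

    merge_bitmaps_into_base(&top, &base);
    return 0;
}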
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Exercise the now arguably simpler checkpoint deletion code on the
'basic', 'snapshots', and 'synthetic' test data sets.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Now that we've switched to the simple handling, the first thing that can
be massively simplified is checkpoint deletion. We now only need to go
through the backing chain, find the appropriately named bitmaps and
delete them; no complex lookups or merging are needed.
Note that unlike other functions this one deletes the bitmap in all
layers, whereas elsewhere we expect exactly one bitmap of a given name in
the backing chain, to prevent potential problems.
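A conceptual sketch of the simplified walk (types, helpers and node names
below are made up for illustration; they are not the libvirt
implementation):

#include <stdio.h>
#include <string.h>

struct layer {
    const char *node;          /* block node name of this image */
    const char **bitmaps;      /* NULL-terminated bitmap names */
    struct layer *backing;
};

/* Walk the whole backing chain and request removal of every bitmap that is
 * named after the checkpoint being deleted. */
static void
delete_checkpoint_bitmaps(struct layer *top, const char *chkname)
{
    for (struct layer *l = top; l; l = l->backing)
        for (const char **b = l->bitmaps; b && *b; b++)
            if (strcmp(*b, chkname) == 0)
                printf("block-dirty-bitmap-remove node=%s name=%s\n",
                       l->node, chkname);
}

int
main(void)
{
    const char *base_bitmaps[] = { "ck1", NULL };
    const char *top_bitmaps[] = { "ck1", "ck2", NULL };
    struct layer base = { "node-base", base_bitmaps, NULL };
    struct layer top = { "node-top", top_bitmaps, &base };

    delete_checkpoint_bitmaps(&top, "ck1");   /* removed from both layers */
    return 0;
}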
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Based on the 'snapshots' example with manual tweaks to introduce
inactive, transient, inconsistent and duplicate bitmaps in various parts
of the chain to exercise detection and new validation code.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Now that we've updated both the test data and the validator to the new
semantics, we can start testing again.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reject duplicates and other problematic bitmaps according to the new
semantics of bitmap use in libvirt.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Use test data which conforms to the new semantics introduced in the
previous patch.
The test data was created by the same set of commands as originally used
in commit 0b27b655b1.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Use test data which conforms to the new semantics introduced in the
previous patch.
The test data was created by the same set of commands as originally used
in commit 9aac9d5bda.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Chaining bitmaps for checkpoints (disabling the active one and creating
a new one) severely overcomplicated all operations with regard to bitmaps.
Specifically, it requires us to re-match the on-disk state to the
internal metadata, and in the case of merging during block jobs it makes
it almost impossible to cover all corner cases.
Since checkpoints and incremental backups were not yet enabled, let's
change the design to keep one bitmap per checkpoint. In the case of
layered snapshots this will be filled in by using dirty-bitmap-populate.
Finally, the main reason for this unnecessary complexity was the fear
that qemu's performance could degrade. In the end I think that
addressing the performance issue will be better done in qemu (e.g. by
keeping an internal bitmap updated with changes and merging it
periodically back into the real bitmaps; QEMU writes out changes to disk
at shutdown, so consistency is not a problem).
Removing the relationships between bitmaps frees us from the complex
handling and also makes all the surrounding code more robust, as one
broken bitmap doesn't necessarily invalidate whole chains of backups.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
There will be multiple places where we'll need to print nodenames from a
GSList of virStorageSource for testing purposes. Extract the code into a
function.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
They will be replaced by a different set which will test scenarios
relevant for the new semantics.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Upcoming patches are going to rewrite and semantically modify how
bitmaps are handled during blockjobs. This is possible as incremental
backup is not yet fully enabled.
As the changes are going to be incompatible with any current test data,
remove all test cases for bitmap handling during checkpoint deletion,
incremental backups, block commit, block copy, and bitmap validation
operations.
The tests will be gradually added back later, after the code and
test data are refactored.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Use the new test data for checkpoint deletion testing. This test also
requires modification of the internals to allow checking for test
failure.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Use the new test data when calculating incremental backup operations. As
an incremental backup fails when there is no bitmap, the test code is
modified to allow testing this case too.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Fetch the checkpoint list for every disk specifically based on the new
per-disk 'incremental' field.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
In preparation for allowing heterogeneous backups, store the
'incremental' field per disk and fill it by default from the per-backup
field.
Having this will be important once we want to allow incremental backup
to work while hotplugging a new disk.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
If a disk is not captured by one of the intermediate checkpoints, the
code would fail, but we can easily calculate the bitmaps to merge
correctly by skipping over the checkpoints which don't describe the disk.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
The algorithm is getting quite complex. Split out the lookup of the
range of backing chain storage sources, and of the bitmaps contained in
them, which correspond to one checkpoint.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>