The latest lcitool merged the 'prebuilt-env' and 'local-env' jobs into
a single job which uses variables to pick the right environment and
steps rather than duplicating everything.
Regenerate the generated job definitions, fix the helper definitions
and also fix the manually defined jobs (website-job).
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Entering $SCRATCH_DIR, going back to the original directory and
setting SELinux labels for the newly-installed QEMU binaries
are all steps that logically belong to this template rather
than its callers.
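A condensed sketch of the steps the template now owns (the exact
commands and install prefix are assumptions, not verbatim from the
patch):

    - pushd "$SCRATCH_DIR"
    # ... clone, build and install QEMU ...
    - popd
    - sudo restorecon -R /usr/local    # relabel the newly-installed binaries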
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
We enter $SCRATCH_DIR before going through the process of
cloning QEMU's upstream repo and building it, but once we're
done we don't get back to libvirt's sources, so the very next
step fails with
/tmp/script.: line 188: ci/jobs.sh: No such file or directory
Use pushd/popd to ensure that we're back to the correct place
once QEMU has been built and installed.
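A minimal sketch of the fix (the surrounding commands are assumed):

    - pushd "$SCRATCH_DIR"
    - git clone --depth 1 https://gitlab.com/qemu-project/qemu.git
    # ... build and install QEMU ...
    - popd
    - source ci/jobs.sh    # resolves again now that we're back in libvirt's sources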
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Since the section now consists of a single command only, we can happily
move that command into the main integration job template's body.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Follow what's been done with other jobs in .gitlab-ci.yml and extract
the shell logic from the YAML into a function in ci/jobs.sh.
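The YAML side then shrinks to something along these lines (the
function name is hypothetical):

    script:
      - source ci/jobs.sh
      - run_integration    # shell logic that previously lived inline in the YAML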
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
We're already past Fedora 35, so all newer Fedora releases default to
the modular daemon setup.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
The default expiry time is 30 days. Since the RPM artifacts coming from
the previous pipeline stages are set to expire in 1 day we can set the
failed integration job log artifacts to the same value. The idea here
is that if an integration job legitimately failed (i.e. not due to an
infrastructure failure), then unless the problem is fixed in the
meantime it will fail again with the next day's scheduled pipeline,
meaning that even if the older log artifacts are removed they'll be
immediately replaced with fresh ones.
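A sketch of the resulting artifacts stanza (the paths are assumed):

    artifacts:
      when: on_failure
      expire_in: 1 day
      paths:
        - logs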
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Avocado 99.0 causes the TCK test suite to fail in the nwfilter tests
(which are driven by another Bash framework underneath). Until the
culprit is identified and fixed in Avocado, let's pin the version to
98.0 which worked with the test suite just fine.
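A sketch of the pin (the exact install invocation is assumed):

    - pip3 install --user "avocado-framework==98.0"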
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Latest versions of Avocado create 'by-status' symlink shortcuts to test
results, IOW:
# this is the main test results directory containing all data
$ ls <path>/avocado/job-results/latest/test-results/
01-scripts_networks_050-transient-lifecycle.t
02-scripts_networks_051-transient-autostart.t
...
22-scripts_networks_400-guest-bandwidth.t
by-status/
# list only the failed tests
$ ls -l <path>/avocado/job-results/latest/test-results/by-status/FAIL
19-scripts_networks_360-guest-network-vepa.t ->
<path>/avocado/job-results/latest/test-results/19-scripts_networks_360-guest-network-vepa.t
Therefore, let's bundle only the failed ones; that will make the log
artifacts much easier to navigate when looking for libvirt errors.
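A sketch of the bundling step (paths assumed; -L dereferences the
symlinks so the artifacts contain the actual files):

    - cp -rL "$SCRATCH_DIR"/avocado/job-results/latest/test-results/by-status/FAIL logs/avocado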
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Don't create an avocado directory in the resulting log artifacts
if Avocado didn't even run (e.g. libvirt errored out on service
restart).
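A minimal guard along these lines (paths assumed):

    - if test -d "$SCRATCH_DIR/avocado"; then mkdir logs/avocado; fi    # only when Avocado actually ran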
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
All 'script' blocks run with 'set -e', so a single failed command
means we won't collect some of the logs. Given the nature of the
original job's failure some of the log sources might not be available,
which is fine; the GitLab after_script section, however, must not
finish prematurely.
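One way to express that tolerance (the commands are assumed):

    after_script:
      - cp "$SCRATCH_DIR"/*.log logs/ || true
      - coredumpctl info > logs/coredump.txt || true    # may legitimately find nothing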
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
It could be quite confusing to look at the job log artifacts and find
an empty coredump log in there; IOW, it doesn't really give much
confidence that the reporting mechanism actually works.
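A minimal guard along these lines (the file name is assumed):

    - test -s logs/coredump.txt || rm -f logs/coredump.txt    # drop the log if it's empty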
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
It's a directory, so -d should be used with 'test'.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Both log filters and log outputs expect string values; however, augeas
apparently requires an extra level of quotes on top of the ones we pass
via the shell (see comment [1]) to work properly, otherwise it silently
ignores the value and returns 0.
Without this fix we don't set libvirt's log level to debug, we don't
set up logging to a file, and hence we don't include the logs in the
CI artifacts in case the test suite fails.
[1] https://github.com/hercules-team/augeas/issues/301#issuecomment-143699880
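A sketch of the working invocations, note the extra level of inner
double quotes (the actual filter/output values are assumptions):

    - sudo augtool set '/files/etc/libvirt/virtqemud.conf/log_filters' '"4:*object* 1:*"'
    - sudo augtool set '/files/etc/libvirt/virtqemud.conf/log_outputs' '"1:file:/var/log/libvirt/virtqemud.log"'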
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
It was missing from the set. While at it, order the daemon set
alphabetically.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
After the addition of the new libvirt-client-qemu sub-package, which
uses the Python bindings (thus creating a circular dependency between
the libvirt and libvirt-python projects), the integration jobs fail
with:
Error:
Problem: conflicting requests
- nothing provides python3-libvirt >= 8.9.0-1.el9 needed by libvirt-client-qemu-8.9.0-1.el9.x86_64
The libvirt-python project now provides the RPMs in artifacts:
https://gitlab.com/libvirt/libvirt-python/-/merge_requests/96
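As an illustration of how a job can pull those in via the artifacts
download API (the rpm build job name is hypothetical):

    - curl -L -o rpms.zip "$CI_API_V4_URL/projects/libvirt%2Flibvirt-python/jobs/artifacts/master/download?job=<rpm-build-job>"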
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
This refresh switches the CI for contributors to be triggered by merge
requests. Pushing to a branch in a fork will no longer run CI pipelines,
in order to avoid consuming CI minutes. To regain the original
behaviour, contributors can opt in to a pipeline on push:
git push <remote> -o ci.variable=RUN_PIPELINE=1
This variable can also be set globally on the repository, through the
web UI options Settings -> CI/CD -> Variables, though this is not
recommended. Upstream repo pushes to branches will run CI.
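The kind of rules this maps to, as an illustrative sketch rather than
the verbatim ruleset:

    rules:
      - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      - if: '$CI_PIPELINE_SOURCE == "push" && $RUN_PIPELINE == "1"'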
The use of containers has changed in this update, with only the upstream
repo creating containers, in order to avoid consuming contributors'
limited storage quotas. A fork with existing container images may delete
them. Containers will be rebuilt upstream when pushing commits with CI
changes to the default branch. Any other scenario with CI changes will
simply install the build prerequisite packages in a throwaway
environment, using the ci/buildenv/ scripts. These scripts may also be
used on a contributor's local machine.
With pipelines triggered by merge requests, it is now also possible to
work around the inability of contributors to run pipelines if they
have run out of CI quota. A project member can trigger a pipeline from
the merge request, which will run in the context of upstream; note,
however, that this should only be done after reviewing the code for
any malicious CI changes.
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Since a fix for CVE-2022-24765 was released, every git command is now
checked against the repo it's supposed to run in, resulting in a fatal
error if the repo is owned by a different user than the one running
the git command.
This means that in order to be able to do 'sudo make install', we have
to set the 'safe.directory' for the root user. This is because QEMU
runs 'git submodule update' automatically on 'make install'.
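A sketch of the workaround (the repo path is assumed):

    - sudo git config --global --add safe.directory "$SCRATCH_DIR/qemu"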
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
YAML anchors don't work with shell conditional constructs, so we
cannot simply reference the QEMU build template's YAML anchor
conditionally and hence keep everything in a single job template.
Instead, we have to "subclass" the .integration_tests template and
inject the QEMU building bits explicitly.
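Schematically (the anchor and fragment names are made up):

    .integration_tests_upstream_qemu:
      extends: .integration_tests
      script:
        - *qemu-build-steps
        - *integration-test-steps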
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
This was heavily inspired by QEMU's upstream CI buildtest-template.yml.
Rather than referencing QEMU's template directly (which GitLab can do),
this patch resorts to hard-coding the build steps ourselves, solely
because there's no guarantee QEMU will keep either the template file
name or the name of the template from which the build steps were
mostly copied.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
There's quite a lot happening in the .integration_tests template
already, even without adding an upstream QEMU build into the mix.
Let's break the template into more pieces which can then be referenced
in the .integration_tests template when putting everything back
together using YAML anchors.
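Schematically (fragment names are made up; GitLab flattens the nested
script lists):

    .setup-env: &setup-env
      - source ci/jobs.sh

    .run-tests: &run-tests
      - run_integration

    .integration_tests:
      script:
        - *setup-env
        - *run-tests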
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Future patches will do more code extraction from the existing template
using YAML anchors, so it's better for the templates to live
separately from the job definitions.
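e.g. via something like (the file name is an assumption):

    include:
      - local: 'ci/integration-template.yml'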
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>