The 'auto-read-only' blockdev option is available in all supported qemu
versions, so we can remove the migration hack which disabled it.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
We want at least one file to always be present, so that it can
serve as a pointer for users. Ensure that this is the case by
unconditionally using the value of the respective keys.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
It's somewhat confusing that some of the services have a
corresponding foo.service.extra.in and foo.socket.extra.in, some
have just one of the two, and some have neither.
In order to make things more approachable, make sure that both
files exist for each service.
In most cases the extra units are currently unused, so they will
just contain a comment briefly explaining their purpose and
pointing users to meson.build, where they can find more
information. For consistency, the same comment is also added to
the top of the extra units that already have some content.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Now that the underlying script is able to merge an arbitrary
number of units into the base template, expose this possibility
in the build system.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
It uses custom templates which already hardcode the correct
value.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Starting with v18, cloud-hypervisor supports serial and console devices
in parallel. Drop the related check based on the ch version.
Signed-off-by: Praveen K Paladugu <prapal@linux.microsoft.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Starting with v28.0, cloud-hypervisor requires the use of the "payload" API
to pass kernel, initramfs and cmdline options. Extend the ch driver to use
the new API based on the ch version.
Signed-off-by: Praveen K Paladugu <prapal@linux.microsoft.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The function virHostCPUGetPhysAddrSize, introduced with commit be1b7d5b18,
fails on architectures other than x86 and SuperH. Commit 8417c1394c
fixed the issue only for s390, but the problem is still seen on other
architectures like ppc, which do not report the physical address size in
their cpuinfo output.
Command:
systemctl restart libvirtd.service
Output:
<snip>
dnsmasq[2377]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0
addresses
dnsmasq-dhcp[2377]: read /var/lib/libvirt/dnsmasq/default.hostsfile
libvirtd[3163]: libvirt version: 9.8.0
libvirtd[3163]: hostname: xxxxxxxxxx
libvirtd[3163]: internal error: Missing or invalid CPU address size in
/proc/cpuinfo
libvirtd.service: Deactivated successfully.
</snip>
This patch fixes the issue by returning size=0 for architectures
other than x86 and SuperH.
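A minimal sketch of the idea; the signature, arch check and helper name
below are assumptions for illustration, not the literal patch:

    int
    virHostCPUGetPhysAddrSize(const virArch hostArch,
                              unsigned int *size)
    {
        /* Only x86 and SuperH expose an "address size(s)" line in
         * /proc/cpuinfo; everywhere else report 0 instead of failing.
         * virHostCPUParsePhysAddrSize() is a hypothetical helper here. */
        if (!ARCH_IS_X86(hostArch) && hostArch != VIR_ARCH_SH4) {
            *size = 0;
            return 0;
        }

        return virHostCPUParsePhysAddrSize(size);
    }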
Signed-off-by: Narayana Murty N <nnmlinux@linux.ibm.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
vmDef->fss[i]->src->path may be NULL,
so a check is needed before passing it to VIR_DEBUG.
Also removed the check of vmDef->fss[i]->src for NULL, since it cannot be NULL.
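One possible shape of the fix, using libvirt's NULLSTR() convenience macro
(a sketch, not the literal lxc_container.c hunk):

    virDomainFSDef *fs = vmDef->fss[i];

    /* fs->src itself is never NULL here, but fs->src->path can be,
     * so don't hand a NULL straight to a printf-style format */
    VIR_DEBUG("Mounting '%s' -> '%s'", NULLSTR(fs->src->path), fs->dst);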
Fixes: 57487085dc ("lxc: don't try to reference NULL when mounting filesystems")
Signed-off-by: Dmitry Frolov <frolov@swemel.ru>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Currently, libvirt doesn't send events when devices are attached,
detached or updated. Thus, any services that listen to events are
unaware of the change to the persistent config.
Signed-off-by: Fima Shevrin <efim.shevrin@virtuozzo.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Considering that virSecuritySELinuxSetFilecon() can only return 0 or -1,
and so can virSecuritySELinuxFSetFilecon(), the check for '1' at the end
of virSecuritySELinuxSetImageLabelInternal() is effectively dead code.
Drop it.
Co-developed-by: sdl.qemu <sdl.qemu@linuxtesting.org>
Signed-off-by: Sergey Mironov <mironov@fintech.ru>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
While the name itself doesn't matter, this rename is done to prove that
all places using 'nodeformat' were converted to the appropriate
accessors.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The persistent bitmaps are stored in the format layer, so using the
'effective' bitmap name is the most reasonable approach in this case.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The blockjob, NBD export and setup of the cookie data all care about the
effective nodename.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The frontend device needs to access the blocks directly so it cares
about the effective nodename.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
In most cases bitmap operations are relevant only on qcow2 images,
thus the 'format' layer will be present. However, in certain specific
cases temporary bitmaps can be created on top of other images as well,
so we use the 'effective' bitmap name for all bitmap operations.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
In the case of statistics we're interested in the statistics of the
effective bitmap, whatever it happens to be.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The disk backend setup code is concerned only about the effective
nodename. Doing this conversion will also simplify further changes
needed to drop the 'raw' layer in cases when it's not really needed.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The code setting the nodenames needs to use the 'true' nodename of the
format layer.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Convert the main -blockdev JSON object setup code to use the new
accessors. In these cases we mainly use the real 'format' layer node name.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Use the effective nodename for naming the job as we use that one now.
It doesn't matter too much which one we pick, because it's used just for
the name of the job, which we preserve in the status XML.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Both modified cases in this patch require the effective nodename as they
deal with the data being backed up.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Introduce a set of accessors, which return node names based on
semantics. This will allow us to modify how we set up the backing
chain in cases when e.g. the format driver can be omitted, without
breaking all the code.
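As a rough sketch, the accessors could look like the declarations below;
only qemuBlockStorageSourceGetEffectiveStorageNodename is referenced by
name later in this series, the remaining names and comments are
assumptions:

    /* node name to be used by the guest-visible device frontend,
     * i.e. the top of the chain set up for this image */
    const char *
    qemuBlockStorageSourceGetEffectiveNodename(virStorageSource *src);

    /* node name of the 'format' layer driver, or NULL if it was omitted */
    const char *
    qemuBlockStorageSourceGetFormatNodename(virStorageSource *src);

    /* node name of the storage access layer, including the optional
     * storage slice 'raw' intermediate layer when present */
    const char *
    qemuBlockStorageSourceGetEffectiveStorageNodename(virStorageSource *src);

    /* node name of the real storage access ('protocol') layer */
    const char *
    qemuBlockStorageSourceGetStorageNodename(virStorageSource *src);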
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
While the name itself doesn't matter, this rename is done to prove that
all places using 'nodestorage' were converted to the appropriate
accessors.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
We need to keep setting the block threshold on the real storage layer
per semantics of the API. Use the appropriate accessor.
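For example, roughly (a sketch with simplified surroundings; the accessor
and monitor call shown are assumed to be the ones used here):

    /* the threshold is registered against the real storage layer node */
    if (qemuMonitorSetBlockThreshold(priv->mon,
                                     qemuBlockStorageSourceGetStorageNodename(src),
                                     threshold) < 0)
        return -1;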
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
In all cases we want to probe stats from the 'storage' layer as we're
interested in the 'threshold' value, which we set there.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Use the new nodename accessors for any storage layer helper object.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Refactor the code setting up data structures used to attach/detach disks
and SCSI hostdevs.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Refactor the code which assigns the 'storage' layer nodenames for disks,
SCSI hostdevs and the pflash backend.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
We need to use the 'effective' storage nodename (one which includes the
optional storage slice 'raw' intermediate layer) in the code which
formats the 'format' layer props.
All other cases need the real storage driver nodename as they either
generate the 'storage' layer props, or the storage slice, which refers
to the proper storage backend.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Use the new accessors in the private XML formatters and parsers and the
recovery code.
Specifically in all instances we use the proper (not effective) storage
nodename. In the virStorageSource private data it is what we need to
store. In the blockjob status XML it simply serves to find the
appropriate 'virStorageSource' struct, so using the storage layer node
name is simpler.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Use 'qemuBlockStorageSourceGetEffectiveStorageNodename' in all the JSON
props formatters for setting up a 'blockdev-create' job of a format
layer.
In the case of the blockjob name designator we're okay to use just the
storage layer nodename, as that serves only to find the appropriate
entry.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The lookup by nodename requires the proper storage nodename which we use
also in status XML.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Introduce a set of accessors, which return node names based on
semantics. This will allow us to modify how we set up the backing
chain in cases when e.g. the format driver can be omitted, without
breaking all the code.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Use qemuBlockStorageSourceGetFormatProps as it formats the properties of
the 'format' driver in qemu. Adjust the comment which was hinting
otherwise.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Restructure the conditions so that we can use virJSONValueObjectAdd with
a clearer logic for backing store control.
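A sketch of the intended shape; the key names, the 'skipBacking'
placeholder and the surrounding logic are assumptions, not the actual
qemu_block.c code:

    g_autoptr(virJSONValue) props = NULL;
    const char *backingNodename = NULL;

    /* decide up front whether to reference a backing node; the "S:"
     * modifier makes virJSONValueObjectAdd() skip the key for NULL */
    if (src->backingStore && !skipBacking)
        backingNodename = qemuBlockStorageSourceGetEffectiveNodename(src->backingStore);

    if (virJSONValueObjectAdd(&props,
                              "s:driver", "qcow2",
                              "s:node-name", qemuBlockStorageSourceGetFormatNodename(src),
                              "S:backing", backingNodename,
                              NULL) < 0)
        return NULL;

    return g_steal_pointer(&props);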
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Use the node name of the storage access driver to identify the block job
volumes. This will prepare the blockjob code for the possibility that the
format layer may be missing. Our lookup code can find either of them,
thus we can safely switch.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The 'virt-aa-helper' process gets an XML description of the VM it needs to create a
profile for. For a disk type='volume' this XML contained only the
pool and volume name.
'virt-aa-helper' needs a local path, though, for anything it needs to
label. This means we'd either need to open a connection to the storage
driver and re-resolve the volume, or, as the alternative that makes more
sense, pass the proper data in the XML handed to it via the new XML
formatter and parser flags.
This was indirectly reported upstream in
https://gitlab.com/libvirt/libvirt/-/issues/546
The configuration in the issue above was created by Cockpit on Debian.
Since Cockpit is getting more popular it's more likely that users will
be impacted by this problem.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Re-translating the disk source pools when reconnecting to a VM makes no
sense, as the volume might have changed or the pool might have become
inactive. The VM still uses the original volume, though. Failing to
re-translate the pool also causes the VM to be killed.
Fix this by storing the original translation in the status XML.
Resolves: https://issues.redhat.com/browse/RHEL-7345
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Re-translating a disk type='volume' definition from a storage pool is
not a good idea in cases when the volume might have changed or we might
not have access to the storage driver.
Specific cases are when a storage pool is not activated on daemon
restart, which makes re-connecting to a VM fail, or when the
virt-aa-helper program tries to set up labelling for AppArmor.
Add a new flag which will preserve the translated data in the
definition.
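Usage could then look roughly like this; the flag name is hypothetical
and only illustrates the idea:

    /* hypothetical flag: keep the translated volume data (the resolved
     * local path) in the formatted <disk type='volume'> source */
    g_autofree char *xml = virDomainDefFormat(vm->def, driver->xmlopt,
                                              VIR_DOMAIN_DEF_FORMAT_VOLUME_TRANSLATED);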
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
If a disk definition was already translated, re-doing it makes no sense.
Skip the translation if the 'actualtype' is already populated.
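The guard boils down to something like this (a sketch; the exact
placement inside the translation code is assumed):

    /* 'actualtype' starts out as VIR_STORAGE_TYPE_NONE and is filled in
     * by translation, so any other value means the work was already done */
    if (disk->src->srcpool &&
        disk->src->srcpool->actualtype != VIR_STORAGE_TYPE_NONE)
        return 0;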
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Register autoptr cleanup function for virStorageSourcePoolDef and
refactor the parser to simplify the logic.
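The registration itself is the usual GLib idiom used throughout libvirt
(a sketch):

    void virStorageSourcePoolDefFree(virStorageSourcePoolDef *def);
    G_DEFINE_AUTOPTR_CLEANUP_FUNC(virStorageSourcePoolDef, virStorageSourcePoolDefFree);

    /* parser side: the temporary is released automatically on error paths
     * and ownership is transferred only on success */
    g_autoptr(virStorageSourcePoolDef) pool = g_new0(virStorageSourcePoolDef, 1);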
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>