The code setting the nodenames needs to use the 'true' nodename of the
format layer.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Convert the main -blockdev JSON object setup code to use the new
accessors. In these we mostly use the real 'format' layer nodename.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Use the effective nodename for naming the job, as that is the one we
now use. It doesn't matter much which one we pick, because it's used
just for the job name, which we preserve in the status XML.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Both modified cases in this patch require the effective nodename as they
deal with the data being backed up.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Introduce a set of accessors which return node names based on
semantics. This will allow us to modify how we set up the backing
chain in cases when e.g. the format driver can be omitted, without
breaking all the code.
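A rough sketch of what such accessors could look like (only
qemuBlockStorageSourceGetEffectiveStorageNodename is explicitly named
later in this series; the field names below are illustrative):

  const char *
  qemuBlockStorageSourceGetStorageNodename(virStorageSource *src)
  {
      /* the real ('true') nodename of the protocol/storage driver */
      return src->nodestorage;
  }

  const char *
  qemuBlockStorageSourceGetEffectiveStorageNodename(virStorageSource *src)
  {
      /* the storage nodename as referenced by the 'format' layer;
       * with a storage slice this is the intermediate 'raw' layer */
      if (src->sliceStorage && src->sliceStorage->nodename)
          return src->sliceStorage->nodename;

      return src->nodestorage;
  }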
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
While the name itself doesn't matter, this rename is done to prove that
all places using 'nodestorage' were converted to the appropriate
accessors.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
We need to keep setting the block threshold on the real storage layer
per the semantics of the API. Use the appropriate accessor.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
In all cases we want to probe stats from the 'storage' layer as we're
interested in the 'threshold' value, which we set there.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Use the new nodename accessors for any storage layer helper object.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Refactor the code setting up the data structures used to attach/detach
disks and SCSI hostdevs.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Refactor the code which assigns the 'storage' layer nodenames for
disks, SCSI hostdevs, and the pflash backend.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
We need to use the 'effective' storage nodename (the one which includes
the optional intermediate 'raw' storage slice layer) in the code which
formats the 'format' layer props.
All other cases need the real storage driver nodename as they either
generate the 'storage' layer props, or the storage slice, which refers
to the proper storage backend.
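Illustrated with libvirt's virJSONValueObjectAdd() convention ('s:'
adds a required string; the surrounding function is omitted):

  /* the 'format' layer must point at the effective storage nodename,
   * i.e. the slice layer when a storage slice is present */
  if (virJSONValueObjectAdd(&props,
                            "s:file",
                            qemuBlockStorageSourceGetEffectiveStorageNodename(src),
                            NULL) < 0)
      return NULL;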
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Use the new accessors in the private XML formatters and parsers and the
recovery code.
Specifically, in all instances we use the proper (not effective)
storage nodename. In the virStorageSource private data that is what we
need to store. In the blockjob status XML it merely serves to find the
appropriate 'virStorageSource' struct, so using the storage layer
nodename is simpler.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Use 'qemuBlockStorageSourceGetEffectiveStorageNodename' in all the JSON
props formatters for setting up a 'blockdev-create' job of a format
layer.
In the case of the blockjob name designator it's okay to use just the
storage layer nodename, as it serves only to find the appropriate
entry.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The lookup by nodename requires the proper storage nodename, which is
also what we use in the status XML.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Use qemuBlockStorageSourceGetFormatProps as it formats the properties of
the 'format' driver in qemu. Adjust the comment which was hinting
otherwise.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Restructure the conditions so that we can use virJSONValueObjectAdd with
a clearer logic for backing store control.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Use the node name of the storage access driver to identify the block job
volumes. This will prepare the blockjob code for the possibility that the
format layer may be missing. Our lookup code can find either of them,
thus we can safely switch.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
While qemu still reports the 'device' field in the tray-change event,
the code was not ready for the possibility of it missing. Fix the
condition for clearing 'devAlias' when qemu doesn't report the
'device' field.
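The fix boils down to something like this (a sketch; the JSON field
extraction matches libvirt's monitor code style):

  devAlias = virJSONValueObjectGetString(data, "device");

  /* clear the alias both when the field is absent and when it is empty */
  if (devAlias && *devAlias == '\0')
      devAlias = NULL;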
Signed-off-by: Sergey Mironov <mironov@fintech.ru>
Now that deleting and reverting external snapshots is implemented, we
can report that in the capabilities so that management applications can
use the information and start using external snapshots.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
The original code assumed that the memory state file contains only a
migration stream, but it has additional metadata stored by libvirt. To
correctly load the memory state file we need to reuse the code used
when restoring a domain from a saved image.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
We need to skip all disks that have a snapshot type other than
'external'.
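Conceptually (a sketch; the loop context and variable names are
illustrative):

  size_t i;

  for (i = 0; i < snapdef->ndisks; i++) {
      virDomainSnapshotDiskDef *snapdisk = &snapdef->disks[i];

      if (snapdisk->snapshot != VIR_DOMAIN_SNAPSHOT_LOCATION_EXTERNAL)
          continue;

      /* only external snapshots are processed further */
  }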
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
When used with internal snapshots there is no memory state file, so we
have no data to load and decompression is not needed.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
When called from the snapshot code we will need to pass the snapshot
object in order to make internal snapshots work correctly.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
When called by the snapshot code we will need to use a different
reason.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
The function will no longer be used only when restoring a VM, as it
will be used when reverting snapshots as well, so move it to
qemu_process and rename it accordingly.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
These new helpers separate the code from the logic used to start a new
QEMU process with memory state, and will make it easier to move
qemuSaveImageStartProcess() into the qemu_process.c file.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Part of the qemuSaveImageStartVM() function will be used when reverting
external snapshots. To avoid duplicating code and logic, extract the
shared bits into a separate function.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Currently, nbdkit support will automatically be enabled as long as
the pidfd_open(2) syscall is available. Optionally, libnbd is used
to generate more user-friendly error messages.
In theory this is all good, since use of nbdkit is supposed to be
transparent to the user. In practice, however, there is a problem:
if support for it is enabled at build time and the necessary
runtime components are installed, nbdkit will always be preferred,
with no way for the user to opt out.
This will arguably be fine in the long run, but right now none of
the platforms that we target ships with a SELinux policy that
allows libvirt to launch nbdkit, and the AppArmor policy that we
maintain ourselves hasn't been updated either.
So, in practice, as of today having nbdkit installed on the host
makes network disks completely unusable unless you're willing to
compromise the overall security of the system by disabling
SELinux/AppArmor.
In order to make the transition smoother, provide a convenient
way for users and distro packagers to disable nbdkit support at
compile time until SELinux and AppArmor are ready.
In the process, detection is completely overhauled. libnbd is
made mandatory when nbdkit support is enabled, since availability
across operating systems is comparable and offering users the
option to make error messages worse doesn't make a lot of sense;
we also make sure that an explicit request from the user to
enable/disable nbdkit support is either complied with, or results
in a build failure when that's not possible. Last but not least,
we avoid linking against libnbd when nbdkit support is disabled.
At the RPM level, we disable the feature when building against
anything older than Fedora 40, which still doesn't have the
necessary SELinux bits but will hopefully gain them by the time
it's released. We also allow nbdkit support to be disabled at
build time the same way as other optional features, that is, by
passing "--define '_without_nbdkit 1'" to rpmbuild. Finally, if
nbdkit support has been disabled, installing libvirt will no
longer drag it in as a (weak) dependency.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Jonathon Jongsma <jjongsma@redhat.com>
Wrap the macro body in a new block and move the declaration of 'tmp'
into it, to avoid the need to mix g_autofree with manual freeing.
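A sketch of the resulting shape (the typed-parameter plumbing is
simplified):

  #define ADD_BITMAP(name) \
      { \
          g_autofree char *tmp = NULL; \
          if (!(tmp = virBitmapFormat(name))) \
              goto cleanup; \
          if (virTypedParamsAddString(&params, &nparams, &maxparams, \
                                      #name, tmp) < 0) \
              goto cleanup; \
      }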
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Jonathon Jongsma <jjongsma@redhat.com>
Hypervisors are referred to by their user-facing name rather
than the name of their libvirt driver, the monolithic daemon is
explicitly referred to as legacy, and a consistent format is
used throughout.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Requires=/Wants= only tell systemd that the corresponding unit should
be started when the current one is, but that could very well happen in
parallel. For virtlogd/virtlockd, we want the socket to be already
active when the hypervisor driver is started, which additionally
requires an After= ordering dependency.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
We're about to change the defaults and start migrating to common
templates: in order to be able to switch units over one at a
time, make the input files that are currently used explicit
rather than implicit.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
virBitmapFormat() returns a string that should be freed by the caller.
All strings in the three ADD_BITMAP calls in
qemuDomainGetGuestVcpusParams() are stored in 'tmp', so a memory leak
is possible without VIR_FREE.
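The leaking pattern, roughly (simplified; parameter names are
illustrative):

  tmp = virBitmapFormat(bitmap);   /* returns a newly allocated string */

  if (virTypedParamsAddString(&params, &nparams, &maxparams,
                              "vcpus", tmp) < 0)
      goto cleanup;

  VIR_FREE(tmp);   /* without this, the next virBitmapFormat() overwrites
                      'tmp' and the previous string is leaked */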
Fixes: 0108deb944af5ca6f1da350c9d0352c8ed18738b
Signed-off-by: Anastasia Belova <abelova@astralinux.ru>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Now that providing the value is optional, we can remove almost
all uses.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
For most services, the value provided explicitly matches the
documented default.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
The current message can be misleading, because it seems to suggest
that no firmware of the requested type is available on the system.
What actually happens most of the time, however, is that despite
having multiple firmwares of the right type to choose from, none
of them is suitable because of lacking some specific feature or
being incompatible with some setting that the user has explicitly
enabled.
Providing an error message that describes exactly the problem is
not feasible, since we would have to list each candidate along
with the reason why we rejected it, which would get out of hand
quickly.
As a small but hopefully helpful improvement over the current
situation, reword the error message to make it clearer that the
culprit is not necessarily the firmware type, but rather the
overall domain configuration.
Suggested-by: Michael Kjörling <7d1340278307@ewoof.net>
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The function virGetConnectSecret() can return NULL, so we need to check
its return value, since virSecretGetSecretString() dereferences it.
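The fix amounts to a check along these lines (sketch):

  virConnectPtr conn = virGetConnectSecret();

  if (!conn)
      return -1;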
Reported-by: coverity
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
It's not possible to use password-protected ssh keys directly with
libvirt because libvirt doesn't have any way to prompt a user for the
password. To accommodate password-protected key files, an administrator
can add these keys to an ssh agent and then configure the domain with
the path to the ssh-agent socket.
Note that this requires an administrator or management app to
configure the ssh-agent with an appropriate socket path and add the
necessary keys to it. In addition, it does not currently work with
SELinux enabled: the ssh-agent socket would need a label that libvirt
would be allowed to access, rather than unconfined_t.
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
For ssh disks that are served by nbdkit, we can support logging in with
an ssh key file. Pass the path to the configured key file and the
username to the nbdkit process.
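Roughly, when assembling the nbdkit command line (a sketch; 'user' and
'identity' are the nbdkit-ssh-plugin parameter names, 'keyfile' is
illustrative):

  virCommandAddArgPair(cmd, "user", src->auth->username);
  virCommandAddArgPair(cmd, "identity", keyfile);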
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
For ssh disks that are served by nbdkit, use the configured value for
knownHosts and pass it to the nbdkit process.
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
For ssh disks that are served by nbdkit, lookup the password from the
configured secret and securely pass it to the nbdkit process using fd
passing.
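A sketch of the approach, assuming nbdkit's 'password=-FD' syntax for
reading a secret from an inherited file descriptor:

  /* the read end of the pipe is inherited by nbdkit; libvirt writes
   * the secret to the write end */
  virCommandPassFD(cmd, pipefd[0], VIR_COMMAND_PASS_FD_CLOSE_PARENT);
  virCommandAddArgFormat(cmd, "password=-%d", pipefd[0]);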
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
When using nbdkit to serve a network disk source, the nbdkit process
will start and wait for an nbd connection before actually attempting to
connect to the (remote) disk location. Because of this, nbdkit will not
report an error until after qemu is launched and tries to read from the
disk. This results in a fairly user-unfriendly error saying that qemu
was unable to start because "Requested export not available".
Ideally we'd like to be able to tell the user *why* the export is not
available, but this sort of information is only available to nbdkit, not
qemu. It could be because the URL was incorrect, or because of an
authentication failure, or one of many other possibilities.
To make this friendlier for users and to make misconfigurations easier
to detect, try to connect to nbdkit immediately after starting it and
before we try to start qemu. This requires adding a
dependency on libnbd. If an error occurs when connecting to nbdkit, read
back from the nbdkit error log and provide that information in the error
report from qemuNbdkitProcessStart().
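The connection check itself is conceptually simple (a sketch; the
socket path and error propagation are illustrative):

  struct nbd_handle *nbd = nbd_create();   /* <libnbd.h> */

  if (!nbd || nbd_connect_unix(nbd, socketPath) < 0) {
      /* read back the nbdkit error log here and report it */
      virReportError(VIR_ERR_OPERATION_FAILED,
                     _("Failed to connect to nbdkit: %1$s"),
                     nbd_get_error());
  }

  if (nbd)
      nbd_close(nbd);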
User-visible change demonstrated below:
Previous error:
$ virsh start nbdkit-test
2023-01-18 19:47:45.778+0000: 30895: error : virNetClientProgramDispatchError:172 : internal
error: process exited while connecting to monitor: 2023-01-18T19:47:45.704658Z
qemu-system-x86_64: -blockdev {"driver":"nbd","server":{"type":"unix",
"path":"/var/lib/libvirt/qemu/domain-1-nbdkit-test/nbdkit-libvirt-1-storage.socket"},
"node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}: Requested export not
available
error: Failed to start domain 'nbdkit-test'
error: internal error: process exited while connecting to monitor: 2023-01-18T19:47:45.704658Z
qemu-system-x86_64: -blockdev {"driver":"nbd","server":{"type":"unix",
"path":"/var/lib/libvirt/qemu/domain-1-nbdkit-test/nbdkit-libvirt-1-storage.socket"},
"node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}: Requested export not
available
After this change:
$ virsh start nbdkit-test
2023-01-18 19:44:36.242+0000: 30895: error : virNetClientProgramDispatchError:172 : internal
error: Failed to connect to nbdkit for 'http://localhost:8888/nonexistent.iso': nbdkit: curl[1]:
error: problem doing HEAD request to fetch size of URL [http://localhost:8888/nonexistent.iso]:
HTTP response code said error: The requested URL returned error: 404
error: Failed to start domain 'nbdkit-test'
error: internal error: Failed to connect to nbdkit for 'http://localhost:8888/nonexistent.iso]:
error: problem doing HEAD request to fetch size of URL [http://localhost:8888/nonexistent.iso]:
HTTP response code said error: The requested URL returned error: 404
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Adds the ability to monitor the nbdkit process so that we can take
action in case the child exits unexpectedly.
When the nbdkit process exits, we pause the vm, restart nbdkit, and then
resume the vm. This allows the vm to continue working in the event of
an nbdkit failure.
Eventually we may want to generalize this functionality, since we may
need something similar for e.g. qemu-storage-daemon.
The process is monitored with the pidfd_open() syscall if it is
available (since Linux 5.3). Otherwise we fall back to checking whether
the process is alive once per second. The one-second interval was
chosen somewhat arbitrarily.
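Conceptually (a standalone sketch; the real code hooks the descriptor
into the event loop instead of blocking):

  #include <poll.h>
  #include <signal.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  int pidfd = syscall(SYS_pidfd_open, pid, 0);   /* Linux >= 5.3 */

  if (pidfd >= 0) {
      struct pollfd pfd = { .fd = pidfd, .events = POLLIN };

      poll(&pfd, 1, -1);   /* readable once the process has exited */
  } else {
      while (kill(pid, 0) == 0)   /* still alive? */
          sleep(1);               /* the arbitrary one-second interval */
  }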
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>