No sane firmware build will fail this check, but just to be on
the safe side, let's check anyway.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Currently, a firmware configuration such as
  <os firmware='efi'>
    <firmware>
      <feature enabled='yes' name='enrolled-keys'/>
    </firmware>
  </os>
will correctly pick a firmware that implements the Secure Boot
feature and initialize the NVRAM file so that it contains the
keys necessary to enforce the signing requirements. However, the
lack of a
<loader secure='yes'/>
element makes it possible for pflash writes to happen outside
of SMM mode. This means that the authenticated UEFI variables
where the keys are stored could potentially be overwritten by
malicious code running in the guest, thus making it possible to
circumvent Secure Boot.
To prevent that from happening, automatically turn on the
loader.secure feature whenever a firmware that implements Secure
Boot is chosen by the firmware autoselection logic. This is
identical to the way we already automatically enable SMM in such
a scenario.
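In other words, with this change a configuration like the one above
ends up behaving as if the user had also requested the secure loader
explicitly, roughly equivalent to (illustrative snippet):

  <os firmware='efi'>
    <loader secure='yes'/>
    <firmware>
      <feature enabled='yes' name='enrolled-keys'/>
    </firmware>
  </os>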
Note that, while this is technically a guest-visible change, it
will not affect migration of existing VMs and will not prevent
legitimate guest code from running.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
When resuming post-copy migration, users may want to limit the bandwidth
used by the migration and use a value that is different from the one
specified when the migration was originally started.
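For example, one might resume a paused post-copy migration while
applying a new limit (illustrative virsh invocation, values arbitrary;
see virsh(1) for the flags):

  virsh migrate --live --postcopy --postcopy-resume \
        --bandwidth 100 guest qemu+ssh://dst.example.com/system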
Resolves: https://gitlab.com/libvirt/libvirt/-/issues/333
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
So that we can apply selected migration parameters even when resuming
post-copy migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
We will need to annotate individual parameters a bit more than just
noting their type. Let's introduce qemuMigrationParamInfo, replacing
the simple qemuMigrationParamTypes with an array of structs.
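A rough sketch of the new layout (everything beyond the 'type' field
is illustrative, not the final set of annotations):

  typedef struct _qemuMigrationParamInfo qemuMigrationParamInfo;
  struct _qemuMigrationParamInfo {
      qemuMigrationParamType type;
      /* room for further per-parameter annotations, e.g. flags
       * controlling when a parameter may be applied */
  };

  static const qemuMigrationParamInfo qemuMigrationParamInfos[] = {
      [QEMU_MIGRATION_PARAM_COMPRESS_LEVEL] = {
          .type = QEMU_MIGRATION_PARAM_TYPE_INT,
      },
      /* ... one entry per migration parameter ... */
  };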
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The flags will later be used to determine which parameters should
actually be applied.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
My original commit v8.4.0-288-gf01fc4d119 accidentally fixed only one
of the two instances of the same problem. While it fixed the
destination side of migration, the source one remained broken.
However, that commit was also wrong in saying the issue could have
caused unlimited memory locking to be allowed for QEMU when RDMA
migration was used. It could not, because the code would refuse to even
think about starting RDMA migration if hard_limit was not set. But
avoiding the "mem.hard_limit > 0" check is useful anyway.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
There's a qemuBuildInterfaceConnect() call inside of
qemuDomainAttachNetDevice(), introduced in v8.4.0-rc1~183 (though
the first real problem was introduced in v8.4.0-rc1~170). If the
former fails, then the function returns immediately instead of
jumping onto the cleanup label. This is crucial, because at this
point the domain definition contains a 'borrowed' net definition,
which is then freed since an error was met. The domain definition
is thus left with a dangling pointer, which leads to all sorts of
crashes.
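The fix routes the error path through the cleanup label again
(sketch of the pattern only, arguments elided):

  if (qemuBuildInterfaceConnect(...) < 0)
      goto cleanup;   /* previously: return -1; which skipped the
                       * code detaching the borrowed net definition */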
Fixes: 29d022b1eb
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2102009
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Our documentation says RDMA migration requires hard_limit to be set so
that we know how big a memory locking limit should be set for the
domain during migration. But since commit v1.2.13-71-gcf521fc8ba (which
changed the default hard_limit value from 0 to
VIR_DOMAIN_MEMORY_PARAM_UNLIMITED) we were actually setting the memlock
limit to unlimited if hard_limit was not set.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
For RDMA migration we update the memory locking limit, but we never set
it back once migration finishes (on the destination host) or aborts (on
the source host).
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
This helper will not try to set the limit if it is already big enough,
which may be useful when the libvirt daemon is running in a containerized
environment and is not allowed to change the memory locking limit.
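A minimal sketch of the idea, assuming the existing
virProcessGetMaxMemLock()/virProcessSetMaxMemLock() utilities (the
helper name here is made up):

  static int
  setMaxMemLockIfNeeded(pid_t pid, unsigned long long bytes)
  {
      unsigned long long current;

      if (virProcessGetMaxMemLock(pid, &current) < 0)
          return -1;

      /* nothing to do if the current limit is already big enough;
       * this also avoids failures in containers where raising the
       * limit is not permitted */
      if (current >= bytes)
          return 0;

      return virProcessSetMaxMemLock(pid, bytes);
  }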
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
qemuDomainAdjustMaxMemLock combined computing the desired limit with
applying it. This patch separates the code that applies a memory locking
limit into a new qemuDomainSetMaxMemLock helper for better reusability.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Convert all the cases where we can unconditionally free
the virURI at the end of scope.
In libxlDomainMigrationDstPrepare, uri is only filled
if uri_in was present, so moving the virURIFree out of
the condition is safe.
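The conversion boils down to replacing manual frees with scope-based
cleanup (sketch):

  g_autoptr(virURI) uri = NULL;

  if (uri_in &&
      !(uri = virURIParse(uri_in)))
      return -1;

  /* no explicit virURIFree(uri) anywhere; cleanup happens
   * automatically when uri goes out of scope */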
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Replace the tpm->type and tpm->model qemuCaps validation with
similar logic in domcaps.
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
The qemu `tpm-tis` device is an ISA device, so it is only really
applicable to x86 archs. For all non-x86 archs we should use
`tpm-tis-device` instead.
This fixes tpm-tis usage on armv7l and riscv.
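The selection can be keyed off the guest architecture, roughly as
follows (sketch; 'model' and the exact place this happens are
illustrative):

  if (ARCH_IS_X86(def->os.arch))
      model = "tpm-tis";
  else
      model = "tpm-tis-device";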
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
Checking against qemu capabilities should be enough here.
Resolves: https://gitlab.com/libvirt/libvirt/-/issues/329
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
We explicitly check whether the value is YES or NO, which makes
it unnecessary to make sure it's not ABSENT beforehand.
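I.e. with a virTristateBool value (the 'val' name is illustrative),
this is enough on its own:

  if (val == VIR_TRISTATE_BOOL_YES) {
      /* handle enabled */
  } else if (val == VIR_TRISTATE_BOOL_NO) {
      /* handle disabled */
  }
  /* VIR_TRISTATE_BOOL_ABSENT simply takes neither branch, so no
   * separate pre-check is needed */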
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
This patch introduces the logic to format and parse remote NVRAM, and
updates the NVRAM element schema and docs to support network-backed
NVRAM. Network-backed NVRAM gives the flexibility to start the VM on
any host without having to worry about where to get the latest
NVRAM image.
  <nvram type='network'>
    <source protocol='iscsi' name='iqn.2013-07.com.example:iscsi-nopool/0'>
      <host name='example.com' port='6000'/>
    </source>
  </nvram>

or

  <nvram type='file'>
    <source file='/var/lib/libvirt/nvram/guest_VARS.fd'/>
  </nvram>
In the qemu driver we will support the new definition only with QEMU
binaries that support -blockdev.
Signed-off-by: Prerna Saxena <prerna.saxena@nutanix.com>
Signed-off-by: Florian Schmidt <flosch@nutanix.com>
Signed-off-by: Rohit Kumar <rohit.kumar3@nutanix.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Rohit Kumar <rohit.kumar3@nutanix.com>
Prepare for network-backed NVRAM by refusing to reset the NVRAM on boot
and by not checking whether it exists. We will not support filling it
from a template.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Rohit Kumar <rohit.kumar3@nutanix.com>
Setup all fields for use with -blockdev.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Rohit Kumar <rohit.kumar3@nutanix.com>
We don't really use it other than when starting up the VM, so
performing this step when reconnecting is totally pointless.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Rohit Kumar <rohit.kumar3@nutanix.com>
Use the designated helpers for virStorageSource instead of using the
file-based ones with a check.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Rohit Kumar <rohit.kumar3@nutanix.com>
Since we now have a full virStorageSource for storing the nvram path,
we don't need the extra dance of transferring the data into the
'pflash1' variable, which was an intermediary solution for using
-blockdev.
For now we keep it functionally identical to the previous
implementation.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Rohit Kumar <rohit.kumar3@nutanix.com>
Currently, libvirt allows only local file paths to specify the location
of the 'nvram' image. Changing it to a virStorageSource will allow
supporting remote storage for NVRAM.
Signed-off-by: Prerna Saxena <prerna.saxena@nutanix.com>
Signed-off-by: Florian Schmidt <flosch@nutanix.com>
Signed-off-by: Rohit Kumar <rohit.kumar3@nutanix.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Rohit Kumar <rohit.kumar3@nutanix.com>
Extract the internals of qemuDomainPrepareStorageSourceBlockdev into
qemuDomainPrepareStorageSourceBlockdevNodename so that we can reuse it
when instantiating the virStorageSource for pflash backing.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Rohit Kumar <rohit.kumar3@nutanix.com>
This is a partial revert of 87a43a907f.
The change to use g_clear_pointer() in more places was accidentally
applied to cases involving vir_g_source_unref().
In some cases, the ordering of g_source_destroy() and
vir_g_source_unref() was reversed, which resulted in the source being
marked as destroyed after it had already been unreferenced. This
use-after-free might appear to work in many cases, but glib versions
older than 2.64.0 may defer the unref to run within the main thread to
avoid a race condition, which creates a large distance between the
g_source_unref() and the g_source_destroy().
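The safe ordering keeps the destroy before the final unref (sketch;
'src' and 'ctx' stand for the GSource and its GMainContext):

  g_source_destroy(src);          /* mark the source as destroyed first */
  vir_g_source_unref(src, ctx);   /* then drop the reference; glib older
                                   * than 2.64.0 may defer this to the
                                   * main thread */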
In some cases, the call to vir_g_source_unref() was replaced with a
second call to g_source_destroy(), leading to a memory leak or worse.
In our experience, the symptoms were that use of libvirt-python became
slower over time, with OpenStack nova-compute initially taking around
one second to periodically query the host PCI devices, and within an
hour taking over a minute to complete the same operation, until it was
eventually running this query back-to-back, resulting in the
nova-compute process consuming 100% of one CPU thread, losing its
RabbitMQ connection frequently, and showing up as down to the control
plane.
Signed-off-by: Mark Mielke <mark.mielke@gmail.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
When creating a TAP interface we can end up with multiple FDs,
each representing one queue. However, these FDs must be
relabelled as they are then passed to QEMU. In the case of
qemuBuildInterfaceConnect() we allocate the array for the FDs and
then let the function corresponding to the <interface/> type fill
the array with FDs. When any of these functions meets an error,
it's also responsible for closing previously opened FDs. However,
the functions take a shortcut: they iterate through each member of
the array and close it (if it's non-negative). This assumes that the
array is initialized to negative values, which used to be the case
before the rewrite in v8.4.0-rc1~170 but is no longer true
afterwards. Subsequently, "random" FDs are closed (okay, not that
random since the array is allocated via g_new0(), but hey - FD 0
is still a valid FD and might be valuable, actually).
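The straightforward fix is to initialize the slots explicitly
(sketch; 'tapfd' and 'tapfdSize' are illustrative names):

  int *tapfd = g_new0(int, tapfdSize);
  size_t i;

  for (i = 0; i < tapfdSize; i++)
      tapfd[i] = -1;   /* lets the error path tell an opened FD
                        * from an unused slot, and keeps FD 0 safe */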
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2075383#c18
Fixes: 7a38d3946b
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Since the main-loop and iothread classes are derived from the
same class (EventLoopBaseClass) we don't need a new capability and
can use QEMU_CAPS_IOTHREAD_THREAD_POOL_MAX directly to check
whether QEMU is capable of setting the defaultiothread pool size.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
The QEMU driver needs to be taught how to set the
VIR_DOMAIN_IOTHREAD_THREAD_POOL_MIN and
VIR_DOMAIN_IOTHREAD_THREAD_POOL_MAX parameters, introduced in the
previous commit, on a given IOThread.
Fortunately, this is fairly trivial to do, and since these two
parameters are exposed in the domain XML too, the update of the
inactive XML can be wired up as well.
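In the domain XML the two parameters map onto per-iothread
attributes, roughly as follows (illustrative snippet):

  <iothreadids>
    <iothread id='1' thread_pool_min='8' thread_pool_max='16'/>
  </iothreadids>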
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Now that we have a capability that reflects whether QEMU is
capable of setting iothread pool size, let's introduce a
validator check to make sure users are not trying to use this
feature with QEMU that doesn't support it.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
This capability reflects whether QEMU allows setting the
thread-pool-min and thread-pool-max attributes on the iothread
object. Since both attributes were introduced in the same commit
(v7.0.0-878-g71ad4713cc) and can't exist independently of each
other, we can stick with one capability covering both of them.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
They were constructed from two separate strings using "%s: %s", which
is ugly and does not work well with translations.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
When a domain has a guest agent channel enabled and the agent is running
in the guest, we will get a VSERPORT_CHANGE event on the destination
host as soon as we start vCPUs there. This is not an issue for normal
migration, but post-copy migration will remain running after we started
vCPUs on the destination. If it runs for more than 30s, the
VSERPORT_CHANGE event handler will fail to get a job and log the
following error message:

  Timed out during operation: cannot acquire state change lock (held
  by monitor=remoteDispatchDomainMigrateFinish3Params)

and of course we will think the guest agent is not connected, and thus
all APIs talking to it will fail until the agent or the libvirt daemon
is restarted.
Luckily we only need to update the channel state (to mark it as
connected) and connect to the agent, neither of which conflicts with
migration. Thus we can safely enable processing this event during
migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
This is a special job for operations that need to modify domain state
during an active migration. The modification must not affect any state
that could conflict with the migration code. This is useful mainly for
event handlers that need to be processed during migration and which
could otherwise time out on acquiring a normal MODIFY job.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Since all parts of post-copy recovery have been implemented now, it's
time to enable the corresponding flag.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Every single call to qemuMigrationJobContinue needs to register a
cleanup callback in case the migrating domain dies between phases or
when migration is paused due to a failure in postcopy mode.
Let's integrate registering the callback in qemuMigrationJobContinue to
make sure the current thread does not release a migration job without
setting a cleanup callback.
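Conceptually the helper now takes the callback itself, so releasing a
migration job without one becomes impossible (sketch; not necessarily
the exact prototype):

  void
  qemuMigrationJobContinue(virDomainObj *vm,
                           qemuDomainCleanupCallback cleanup);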
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The callback will properly cleanup non-p2p migration job in case the
migrating domain dies between Begin and Perform while the client which
controls the migration is not cooperating (normally the API for the next
migration phase would handle this).
The same situation can happen even after Prepare and Perform phases, but
they both already register a suitable callback, so no fix is needed
there.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Normally the structure is created once the source reports completed
migration, but with post-copy migration we can get here even after
the libvirt daemon was restarted. It doesn't make sense to preserve the
structure in our status XML as we're going to rewrite almost all of it
while refreshing the stats anyway. So we just create the structure here
if it doesn't exist to make sure we can properly report statistics of a
completed migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>