Use virJSONValueObjectGetArray + virJSONValueArrayToStringList instead
so that the virJSONValueObjectGetStringArray wrapper can be removed.
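Illustratively, the replacement looks like this ("key" stands for the
actual field name at each call site):

  /* before */
  char **list = virJSONValueObjectGetStringArray(obj, "key");

  /* after */
  virJSONValue *arr = virJSONValueObjectGetArray(obj, "key");
  char **list = NULL;

  if (arr)
      list = virJSONValueArrayToStringList(arr);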
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
In two instances (qemuMonitorJSONGetStringListProperty,
qemuMonitorJSONGetStringArray) the return value is checked by
qemuMonitorJSONCheckReply and extracted by
virJSONValueObjectGetStringArray.
We can use qemuMonitorJSONGetReply, which returns it directly, and then
virJSONValueArrayToStringList to convert it without the additional
lookup.
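A sketch of the resulting shape in those two functions (simplified;
qemuMonitorJSONGetReply is the helper introduced later in this series):

  virJSONValue *data;

  if (!(data = qemuMonitorJSONGetReply(cmd, reply, VIR_JSON_TYPE_ARRAY)))
      return -1;

  if (!(*array = virJSONValueArrayToStringList(data)))
      return -1;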
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Using 'virJSONValueObjectHasKey' when we want to access the value
afterwards is wasteful. Fetch the JSON value right away.
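For example ("key" is a placeholder):

  /* before: two lookups */
  if (virJSONValueObjectHasKey(obj, "key")) {
      virJSONValue *val = virJSONValueObjectGet(obj, "key");
      ...
  }

  /* after: a single lookup */
  virJSONValue *val;

  if ((val = virJSONValueObjectGet(obj, "key"))) {
      ...
  }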
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Rather than checking that the object has the correct key and then
fetching it again, fetch the array first and then use
virJSONValueArrayToStringList to directly convert it.
Additionally we can avoid the conversion if there are no members,
simplifying the surrounding logic.
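A sketch of the simplified logic ("key" and the output variable are
placeholders):

  virJSONValue *arr;

  if ((arr = virJSONValueObjectGetArray(obj, "key")) &&
      virJSONValueArraySize(arr) > 0 &&
      !(list = virJSONValueArrayToStringList(arr)))
      return -1;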
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The 'dependencies' field in the return data may be missing in some
cases. Historically 'virJSONValueObjectGetStringArray' didn't report
an error in such a case, but a later refactor (commit 043b50b948) added
an error in order to use it in other places too.
Unfortunately this results in the error log being spammed with an
irrelevant error when qemuAgentGetDisks is invoked on a VM
running Windows.
Replace the use of virJSONValueObjectGetStringArray by fetching the
array first and calling virJSONValueArrayToStringList only when we have
an array.
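A hedged sketch of the fixed code in qemuAgentGetDisks (member names
simplified):

  virJSONValue *deps;

  if ((deps = virJSONValueObjectGetArray(entry, "dependencies")) &&
      !(disk->dependencies = virJSONValueArrayToStringList(deps)))
      return -1;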
Fixes: 043b50b948
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2149752
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Use qemuMonitorJSONGetReply and unify the two blocks with the same
condition.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Use qemuMonitorJSONGetReply in cases where qemuMonitorJSONCheckReply
is followed by virJSONValueObjectGet*(reply, "return").
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Replace usage of the following pattern with the new helper:

  if (qemuMonitorJSONCheckReply(cmd, reply, VIR_JSON_TYPE_ARRAY) < 0)
      return -1;

  data = virJSONValueObjectGetArray(reply, "return");
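With the helper the pattern collapses to:

  if (!(data = qemuMonitorJSONGetReply(cmd, reply, VIR_JSON_TYPE_ARRAY)))
      return -1;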
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Replace usage of the following pattern with the new helper:

  if (qemuMonitorJSONCheckReply(cmd, reply, VIR_JSON_TYPE_OBJECT) < 0)
      return -1;

  data = virJSONValueObjectGetObject(reply, "return");
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Rather than simply checking that the 'return' field is of the expected
type we can directly return it, as the caller is very likely going to use
it. Extract the code into a new function and add a wrapper to preserve
the old functionality.
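A hedged sketch of the extraction (error reporting elided, exact
signatures may differ):

  virJSONValue *
  qemuMonitorJSONGetReply(virJSONValue *cmd,
                          virJSONValue *reply,
                          virJSONType type)
  {
      virJSONValue *data;

      if (qemuMonitorJSONCheckError(cmd, reply) < 0)
          return NULL;

      data = virJSONValueObjectGet(reply, "return");

      if (virJSONValueGetType(data) != type) {
          /* report the unexpected type */
          return NULL;
      }

      return data;
  }

  /* wrapper preserving the old functionality */
  int
  qemuMonitorJSONCheckReply(virJSONValue *cmd,
                            virJSONValue *reply,
                            virJSONType type)
  {
      return qemuMonitorJSONGetReply(cmd, reply, type) ? 0 : -1;
  }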
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Don't continue with the historical mistake and fix all internal
functions to use a sane type for flags.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The virCommandSetSendBuffer() function consumes the passed @buffer,
but takes it only as a plain pointer. Switch to a double pointer to
make this obvious. This allows us then to drop all
g_steal_pointer() in callers.
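At a call site the difference looks like (illustrative):

  /* before: ownership transfer is invisible without g_steal_pointer() */
  virCommandSetSendBuffer(cmd, g_steal_pointer(&buf), len);

  /* after: the double pointer makes the consumption obvious */
  virCommandSetSendBuffer(cmd, &buf, len);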
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Jonathon Jongsma <jjongsma@redhat.com>
The QEMU capabilities are the only thing we use from priv, so we can just
pass them directly.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
When an internal API takes a vm pointer, it's usually just after the
driver argument.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
The vm object is used inside qemuMigrationCookieParse based on the flags
passed to qemuMigrationCookieParse and the content of the cookie. The
callers should not just blindly guess and pass NULL if they
(incorrectly) think the vm object is not needed. We should always pass
the vm object unless it does not exist yet.
This fixes a bug when statistics of a completed migration reported
"Unknown" operation instead of "Incoming migration" on the destination
host.
https://bugzilla.redhat.com/show_bug.cgi?id=2137298
Fixes: v8.7.0-79-g0150f7a8c1
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Inside of qemuTPMEmulatorBuildCommand() there are two calls to
qemuTPMSetupEncryption() which simply ignore the returned error. This
is suboptimal because we then rely on the swtpm binary reporting a
generic error (something along the lines of invalid command line
arguments) while an error reported by qemuTPMSetupEncryption() is more
specific.
However, since virCommandSetSendBuffer() only sets an error
inside of the virCommand structure (the error is then reported in
virCommandRun()), we need to exempt its retval from error
checking. Thus, the signature of qemuTPMSetupEncryption() is
changed a bit so that -1/0 can be returned to indicate an error.
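A hedged sketch of the new calling convention (argument list
simplified):

  if (qemuTPMSetupEncryption(tpm->data.emulator.secretuuid, cmd) < 0)
      return NULL;

Note that it is virCommandSetSendBuffer()'s own return value that is
exempted from checking; its error surfaces later in virCommandRun().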
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
When there is no vIOMMU, vfio devices don't need to lock the entire guest
memory per-device, but they still need the entire guest memory locked
once, shared among all vfio devices. This memory accounting is not shared
with vDPA devices, so it should be added to the memlock limit separately.
Commit 8d5704e2 added support for multiple vfio/vdpa devices but
calculated the limits incorrectly when there were both vdpa and vfio
devices and no vIOMMU. In this case, the memory lock limit was not
increased separately for the vfio devices.
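Illustrative accounting under these rules (no vIOMMU, guest memory M,
any number of vfio devices, N vdpa devices; any extra per-device
overhead libvirt adds is ignored here):

  memlock limit = M        (locked once, shared by all vfio devices)
                + N * M    (locked separately per vdpa device)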
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2143838
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
Reviewed-by: Laine Stump <laine@redhat.com>
When post-copy migration is running in the Finish phase we already did
everything needed and we're just waiting for all the memory to transfer
to the destination. The domain is already running there at this
point. Once all data is transferred (QEMU sends a MIGRATION completed
event) we're done. So in this specific post-copy case the source does
not need to care about the result of the Finish call as long as QEMU
says migration completed. The Finish call to the destination daemon may
fail for reasons that do not affect QEMU, e.g., libvirt daemon was
restarted there or the libvirt connection broke.
Currently we just mark the post-copy migration as failed on the source
and keep the domain paused there. But when libvirt daemon is restarted
at this point, it will detect migration finished successfully and kill
the domain as migrated. It makes sense to do this even without having to
restart the daemon.
Closes: https://gitlab.com/libvirt/libvirt/-/issues/338
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
We need the restored job even if the migration already finished, even
though we will stop it just a few lines below, as the functions we
call in between require an existing migration job.
This fixes a crash on reconnect when post-copy migration finished while
the daemon was not running.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
When generating the main guest memory, memory-backend-* might be used.
This means we may need to generate thread-context objects too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
When generating memory for memory devices, memory-backend-* might be
used. This means we may need to generate thread-context objects too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
When generating memory for guest NUMA nodes, memory-backend-* might be
used. This means we may need to generate thread-context objects too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
While technically thread-context objects can be reused, we only
use them (well, will use them) to pin memory allocation threads.
Therefore, once we connect to the QEMU monitor, all memory (with
prealloc=yes) has been allocated and thus these objects are no longer
needed and can be removed. For on-demand allocation the TC object
is left behind.
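The removal can be done via QMP, e.g. (object id matching the
thread-context example below):

  {"execute":"object-del","arguments":{"id":"tc-ram-node0"}}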
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
The aim of thread-context object is to set affinity on threads
that allocate memory for a memory-backend-* object. For instance:
-object '{"qom-type":"thread-context","id":"tc-ram-node0","node-affinity":[3]}' \
-object '{"qom-type":"memory-backend-memfd","id":"ram-node0","hugetlb":true,\
"hugetlbsize":2097152,"share":true,"prealloc":true,"prealloc-threads":8,\
"size":15032385536,"host-nodes":[3],"policy":"preferred",\
"prealloc-context":"tc-ram-node0"}' \
allocates 14GiB worth of memory, backed by 2MiB hugepages from
host NUMA node 3, using 8 threads. If it weren't for
thread-context these threads wouldn't have any affinity and thus
theoretically could be scheduled to run on CPUs of a different NUMA
node (which is what I saw occasionally).
Therefore, whenever we are pinning memory (IOW setting host-nodes
attribute), we can generate thread-context object with the same
affinity.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
In its commit v7.1.0-1429-g7208429223 QEMU gained a new object,
thread-context, which allows running specialized tasks with
affinity set to a given subset of host CPUs/NUMA nodes. Even
though only the memory allocation task accepts this new object, it's
exactly what we aim to implement in libvirt. Therefore, introduce
a new capability to track whether QEMU is capable of this object.
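In the capabilities XML the flag would appear along these lines (name
assumed to mirror the QEMU object):

  <flag name='thread-context'/>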
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
In one of the recent commits an error message was introduced. In this
message a variable of type ssize_t is being printed out, but the
corresponding format directive is %ld instead of %zd, which breaks
on 32-bit systems. Switch to the proper format.
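The kind of change involved, illustratively (the actual message
differs):

  -    virReportError(VIR_ERR_INTERNAL_ERROR, _("read %ld bytes"), got);
  +    virReportError(VIR_ERR_INTERNAL_ERROR, _("read %zd bytes"), got);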
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Based on the result of parsing the XML, add the SGX EPC memory backend
arguments to the QEMU command line.
$ qemu-system-x86_64 \
...... \
-object '{"qom-type":"memory-backend-epc","id":"memepc0","prealloc":true,"size":67108864,"host-nodes":[0,1],"policy":"bind"}' \
-object '{"qom-type":"memory-backend-epc","id":"memepc1","prealloc":true,"size":16777216,"host-nodes":[2,3],"policy":"bind"}' \
-machine sgx-epc.0.memdev=memepc0,sgx-epc.0.node=0,sgx-epc.1.memdev=memepc1,sgx-epc.1.node=1
Signed-off-by: Lin Yang <lin.a.yang@intel.com>
Signed-off-by: Haibin Huang <haibin.huang@intel.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
This is similar to the previous commit. SGX memory backend needs
to access /dev/sgx_vepc and /dev/sgx_provision. Create these
nodes in domain's private /dev when required by domain's config.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Haibin Huang <haibin.huang@intel.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The SGX memory backend needs to access /dev/sgx_vepc (which allows
userspace to allocate "raw" EPC without an associated enclave)
and /dev/sgx_provision (which allows creating provisioning
enclaves). Allow these two devices in CGroups if a domain is
configured to use SGX.
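A hedged sketch of the CGroup part, using the existing
virCgroupAllowDevicePath() helper (error handling elided):

  if (virCgroupAllowDevicePath(priv->cgroup, "/dev/sgx_vepc",
                               VIR_CGROUP_DEVICE_RW, false) < 0)
      return -1;

  if (virCgroupAllowDevicePath(priv->cgroup, "/dev/sgx_provision",
                               VIR_CGROUP_DEVICE_RW, false) < 0)
      return -1;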
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Haibin Huang <haibin.huang@intel.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Extend the hypervisor capabilities to include the sgx feature. When
available, the hypervisor supports launching a VM with SGX on Intel
platforms. The SGX feature tag provides additional details like the
section size and sgx1 or sgx2 support.
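The resulting XML is expected to look roughly like this (element names
illustrative, values host-dependent):

  <sgx supported='yes'>
    <flc>yes</flc>
    <sgx1>yes</sgx1>
    <sgx2>no</sgx2>
    <section_size unit='KiB'>524288</section_size>
  </sgx>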
Signed-off-by: Haibin Huang <haibin.huang@intel.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Generate the QMP command for query-sgx-capabilities; the command
returns the SGX capabilities from QMP.

  {"execute":"query-sgx-capabilities"}

A successful reply:
{"return":
{
"sgx": true,
"section-size": 197132288,
"flc": true
}
}
An error reply:

  {"error":
    {"class": "GenericError", "desc": "SGX is not enabled in KVM"}
  }
Signed-off-by: Haibin Huang <haibin.huang@intel.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
JSON args for -netdev were added as a precursor to adding the 'dgram'
network backend type. Enable the detection and update test cases using
DO_TEST_CAPS_LATEST.
Enabling the capability also ensures that the -netdev argument is
validated against the QAPI schema of 'netdev_add' which was already
implemented but not enabled.
The parser supporting JSON was added by qemu commit f3eedcddba3 and
enabled when adding stream/dgram netdevs in commit 5166fe0ae46.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
All callers pass the equivalent of looking up whether qemu supports
QEMU_CAPS_QMP_QUERY_NAMED_BLOCK_NODES_FLAT. Use
'mon->queryNamedBlockNodesFlat' directly and refactor all callers.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
'query-named-block-nodes' in non-flat mode returns redundantly nested
data under the 'backing-image' field. Fortunately we don't need it when
updating the capacity stats.
This function was unfortunately not fixed originally when the support
for flat mode was added. Use the flag cached in the monitor object to
force flat mode if available.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Rather than having callers always pass this flag, store it in the
qemuMonitor object. Following patches will convert the code to use this
internal flag.
In the future this will also simplify removal once all supported qemu
versions support the new mode.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
We don't need automatic freeing for 'blockNamedNodeData' and we can
directly return it rather than checking it for NULL-ness first.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
qemu-6.2 introduced support for the hv-avic enlightenment which allows
using Hyper-V SynIC with hardware APICv/AVIC enabled.
Implement the libvirt support for it.
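In the domain XML the new enlightenment is expected to be toggled like
the other Hyper-V features (illustrative):

  <features>
    <hyperv>
      <avic state='on'/>
    </hyperv>
  </features>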
Closes: https://gitlab.com/libvirt/libvirt/-/issues/402
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
In recent commits migration of TPM on shared storage was
introduced. However, I've only compiled it with gcc and thus did
not notice that the clang build fails due to a missing break; at the
end of some (empty) cases in switch() statements.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Never remove the TPM state on outgoing migration if the storage setup
has shared storage for the TPM state files. Also, do not do the security
cleanup on outgoing migration if shared storage is detected.
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
When using shared storage there is no need to apply security labels on the
storage since the files have to have been labeled already on the source
side and we must assume that the source and destination side have been
set up to use the same uid and gid for running swtpm as well as share the
same security labels. Whether the security labels can be used at all
depends on the shared storage and whether and how it supports them.
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Pass the --migration option to swtpm if swtpm supports it (starting
with v0.8) and if the TPM's state is written on shared storage. If this
is the case, apply the 'release-lock-outgoing' parameter with this
option and apply the 'incoming' parameter for incoming migration so that
swtpm releases the file lock on the source side when the state is migrated
and locks the file on the destination side when the state is received.
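Illustratively, the resulting swtpm invocations would carry arguments
along these lines (treat as a sketch of swtpm's --migration syntax):

  source:      swtpm socket ... --migration release-lock-outgoing
  destination: swtpm socket ... --migration incoming,release-lock-outgoing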
If a started swtpm instance is running with the necessary options for
migrating with shared storage, then remember this with a flag in the
virDomainTPMPrivateDef.
Report an error if swtpm does not support the --migration option and an
incoming migration across shared storage is requested.
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Add support for storing private TPM-related data. The first private data
will be related to the capability of the started swtpm, indicating whether
it is capable of migration with a shared storage setup, since that requires
support for certain command line flags that only became available
in v0.8.
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Do not create storage if the TPM state files are on shared storage and
there's an incoming migration, since in this case the storage directory
must already exist. Also do not run swtpm_setup in this case.
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
A new qemuTPMHasSharedStorage() function is introduced which
returns whether the swtpm state directory is on a shared
filesystem (e.g. NFS).
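A minimal sketch, assuming it builds on the existing virFileIsSharedFS()
helper and the emulator's storagepath (details may differ):

  bool
  qemuTPMHasSharedStorage(virDomainDef *def)
  {
      size_t i;

      for (i = 0; i < def->ntpms; i++) {
          virDomainTPMDef *tpm = def->tpms[i];

          if (tpm->type != VIR_DOMAIN_TPM_TYPE_EMULATOR)
              continue;

          return virFileIsSharedFS(tpm->data.emulator.storagepath) == 1;
      }

      return false;
  }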
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Now that nothing uses this capability, it can be retired.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>