In most cases this eliminates one or more calls to
virBufferClearAndReset(), but even when it doesn't, it's better because:
1) it makes the code more consistent, making it more likely that new
contributors who are "learning by example" will do the right thing.
2) it protects against future modifications that might otherwise need
to add a virBufferClearAndReset()
3) Currently some functions don't call virBufferClearAndReset() only
because they're relying on some subordinate function to call it for
them (e.g. bhyveConnectGetSysinfo() in this patch relies on
virSysinfoFormat() to clear out the buffer when there is an
error). I think this is sloppy behavior, and that the toplevel
function that defines and initializes the buffer should be the
function clearing it at the end.
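A minimal sketch of the pattern being advocated (the foo* names are
hypothetical; only the virBuffer APIs are real):

  char *
  fooConnectGetSysinfo(void)          /* toplevel owner of the buffer */
  {
      virBuffer buf = VIR_BUFFER_INITIALIZER;
      char *ret = NULL;

      if (fooFormatSysinfo(&buf) < 0) /* subordinate formatter */
          goto cleanup;

      ret = virBufferContentAndReset(&buf);

   cleanup:
      /* the function that defines and initializes the buffer clears it,
       * even if the subordinate already did so on its error path */
      virBufferClearAndReset(&buf);
      return ret;
  }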
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The output of vcpupin and emulatorpin for a domain with vcpu
placement='static' is based on a default bitmap that contains
all possible CPUs in the host, regardless of whether those CPUs
are online or offline. E.g. for a Linux host with this CPU setup
(from lscpu):
On-line CPU(s) list: 0,8,16,24,32,40,(...),184
Off-line CPU(s) list: 1-7,9-15,17-23,25-31,(...),185-191
And a domain with this configuration:
<vcpu placement='static'>1</vcpu>
'virsh vcpupin' will return the following:
$ sudo ./run tools/virsh vcpupin vcpupin_test
VCPU CPU Affinity
----------------------
0 0-191
This is benign on its own, but can make the user believe that all
CPUs from the 0-191 range are eligible for pinning. Which can lead
to situations like this:
$ sudo ./run tools/virsh vcpupin vcpupin_test 0 1
error: Invalid value '1' for 'cpuset.cpus': Invalid argument
This is exacerbated by the fact that 'virsh vcpuinfo' considers only
available host CPUs in the 'CPU Affinity' field:
$ sudo ./run tools/virsh vcpuinfo vcpupin_test
(...)
CPU Affinity: y-------y-------y-------(...)
This patch changes the default bitmap of vcpupin and emulatorpin, in
the case of domains with static vcpu placement, to all available CPUs
instead of all possible CPUs. Aside from making it consistent with
the behavior of 'vcpuinfo', users will now have one less incentive to
try to pin a vcpu to an offline CPU.
https://bugzilla.redhat.com/show_bug.cgi?id=1434276
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The idea is to have a function that calls virHostCPUGetOnlineBitmap()
but, instead of returning NULL if the host does not have CPU
offlining capabilities, falls back to a bitmap containing all
present CPUs.
Next patch will use this helper in two other places.
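A rough sketch of the helper (its name and exact shape are assumptions
based on the description above, not the literal patch):

  virBitmapPtr
  virHostCPUGetAvailableCPUsBitmap(void)
  {
      virBitmapPtr bitmap;

      /* NULL when the host has no CPU offlining capability */
      if ((bitmap = virHostCPUGetOnlineBitmap()))
          return bitmap;

      /* fall back to all present CPUs */
      return virHostCPUGetPresentBitmap();
  }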
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
This function reads the string in sysfspath/cpu/present and
parses it manually to retrieve the number of present CPUs.
virHostCPUGetPresentBitmap() reads and parses the same file,
using a more robust parser via virBitmapParseUnlimited(),
but returns a bitmap. Let's drop all the manual parsing done
here and simply return the size of the resulting bitmap
from virHostCPUGetPresentBitmap().
Given that the function no longer does any manual parsing,
rename it to virHostCPUCountLinux().
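A minimal sketch of the result (exact error handling assumed):

  static int
  virHostCPUCountLinux(void)
  {
      g_autoptr(virBitmap) present = NULL;

      if (!(present = virHostCPUGetPresentBitmap()))
          return -1;

      return virBitmapSize(present);
  }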
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Use g_auto* pointers to avoid the need for the cleanup label. The
type of the pointer 'virDomainPtr dom' was changed to its alias
'virshDomainPtr' to allow the use of g_autoptr().
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Use g_auto* for the string and the bitmap. Remove the
cleanup label since it's now unneeded.
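Illustrative before/after of the pattern (variable names hypothetical):

  /* before */
  char *cpulist = NULL;
  virBitmapPtr map = NULL;
  ...
   cleanup:
      VIR_FREE(cpulist);
      virBitmapFree(map);
      return ret;

  /* after */
  g_autofree char *cpulist = NULL;
  g_autoptr(virBitmap) map = NULL;
  ...
      return ret;          /* no cleanup label needed */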
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Since ESX virtual hardware version 4.0, virtual machines support up
to 10 virtual NICs instead of the previous 4. This changes the limit
accordingly based on the provided `virtualHW.version`.
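Roughly (variable name hypothetical; the assumption here is that ESX 4.0
corresponds to virtualHW.version 7):

  int maxNICs = 4;
  if (virtualHWVersion >= 7)
      maxNICs = 10;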
Signed-off-by: Bastien Orivel <bastien.orivel@diateam.net>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
If the name is 'vcpus', we will get 'vcpussched' instead of 'vcpusched'
in the error message, as follows:
... 19155 : vcpussched attributes 'vcpus' must not overlap
So use 'vcpu' instead of 'vcpus'.
Signed-off-by: Liao Pingfang <liao.pingfang@zte.com.cn>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
A few commits ago, in aeecbc87b7, I implemented command line
generation for ACPI HMAT. For this, we need to know if at least
one guest NUMA node has vCPUs. This is tracked in the
@masterInitiator variable, which is initialized to -1; then we
iterate through guest NUMA nodes and break the loop if we find a
node with a vCPU. After the loop, if @masterInitiator is still
negative then no NUMA node has a vCPU and we error out. But this
check was missing the comparison for a negative value.
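A simplified sketch of the loop and the fixed check (structure
approximated from the description above, not copied from the patch;
@nnodes and @def are illustrative):

  ssize_t masterInitiator = -1;
  size_t i;

  for (i = 0; i < nnodes; i++) {
      if (virDomainNumaGetNodeCpumask(def->numa, i)) {
          masterInitiator = i;
          break;
      }
  }

  if (masterInitiator < 0) {  /* the '< 0' comparison was missing */
      virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                     _("at least one NUMA node has to have vCPUs"));
      return -1;
  }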
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This capability tracks whether QEMU is capable of defining HMAT
ACPI table for the guest.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
These APIs will be used by QEMU driver when building the command
line.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
There are several restrictions, for instance @initiator and
@target have to refer to existing NUMA nodes, @cache has to
refer to a defined cache level, and so on.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
To cite ACPI specification:
Heterogeneous Memory Attribute Table describes the memory
attributes, such as memory side cache attributes and bandwidth
and latency details, related to the System Physical Address
(SPA) Memory Ranges. The software is expected to use this
information as a hint for optimization.
According to our upstream discussion [1] this is exposed under
<numa/>: as <cache/> under a NUMA <cell/>, and as <latency/> or
<bandwidth/> under numa/latencies.
1: https://www.redhat.com/archives/libvir-list/2020-January/msg00422.html
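A rough sketch of the resulting XML shape (values and exact attribute
names are illustrative only):

  <numa>
    <cell id='0' cpus='0-3' memory='2048' unit='MiB'>
      <cache level='1' associativity='direct' policy='writeback'>
        <size value='8' unit='KiB'/>
        <line value='5' unit='B'/>
      </cache>
    </cell>
    <latencies>
      <latency initiator='0' target='0' cache='1' value='5'/>
      <bandwidth initiator='0' target='0' value='204800' unit='KiB'/>
    </latencies>
  </numa>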
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
QEMU allows creating NUMA nodes that have memory only.
These are important for HMAT.
With the check done in qemuValidateDomainDef() for QEMU 2.7 or newer
(checked via QEMU_CAPS_NUMA), we can be sure that the vCPUs are
fully assigned to NUMA nodes in the domain XML.
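For instance, a memory-only node would look roughly like this in the
domain XML (illustrative values):

  <cell id='1' memory='2048' unit='MiB'/>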
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
There is only one caller of virDomainNumaSetNodeCpumask() that
checks the return value, but the function returns NULL iff the
passed @cpumask was NULL in the first place. And at that call
site @cpumask can't be NULL because it was just allocated by
virBitmapParse().
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The machine cannot be NULL at this point -
qemuDomainDefPostParse() makes sure it isn't.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
The function doesn't just build the argument for -numa. Since
-numa can be repeated multiple times, it also puts -numa onto the
cmd line. Also, the rest of the functions have the 'Command' infix.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
There are two functions virDomainNumaDefCPUFormatXML() and
virDomainNumaDefCPUParseXML() which format and parse domain's
<numa/>. There is nothing CPU specific about them. Drop the
infix.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
There is nothing domain specific about the function, thus it
should not have virDomain prefix. Also, the fact that it is a
static function makes it impossible to use from other files.
Move the function to virxml.c and drop the 'Domain' infix.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
This test case checks that expanding NUMA distances works. On
input we accept a configuration where only the distance from A to B
is specified. On output we format the B to A distance too.
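Roughly (illustrative values), an input containing only:

  <cell id='0' cpus='0' memory='512' unit='KiB'>
    <distances>
      <sibling id='1' value='21'/>
    </distances>
  </cell>
  <cell id='1' cpus='1' memory='512' unit='KiB'/>

is expected to be formatted back with the symmetric entry added:

  <cell id='1' cpus='1' memory='512' unit='KiB'>
    <distances>
      <sibling id='0' value='21'/>
    </distances>
  </cell>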
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Commit 82576d8f35 used a string "on" to enable the 'pmem' property.
This is okay for the command line visitor, but the property is declared
as boolean in qemu and thus it will not work when using QMP.
Modify the type to boolean. This changes the command line, but
fortunately the command line visitor in qemu parses both 'yes' and 'on'
as true for the property.
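Illustratively, the generated command line changes roughly like this
(option string abbreviated):

  before: -object memory-backend-file,id=memnvdimm0,...,pmem=on
  after:  -object memory-backend-file,id=memnvdimm0,...,pmem=yes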
https://bugzilla.redhat.com/show_bug.cgi?id=1854684
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Previous commit removed the last usage of the function. Drop
virQEMUCapsCompareArch as well since virQEMUCapsCacheLookupByArch was
its only caller.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Firstly, SEV is present only on AMD, so we can safely assume x86.
Secondly, the problem with looking up capabilities in the cache by arch
is that it uses virHashSearch with a callback to find the right
capabilities and get the binary name from them as well. But since the
cache is empty, it will return NULL and we won't get the corresponding
binary name out of the lookup either. Then, during the cache validation
we try to create a new cache entry for the emulator, but since we don't
have the binary name, nothing gets created.
Therefore, virQEMUCapsCacheLookupDefault is used to fix this issue,
because it doesn't rely on the capabilities cache to construct the
emulator binary name.
https://bugzilla.redhat.com/show_bug.cgi?id=1852311
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Add a link to and description of the libvirt knowledge base to make it
easier for users and testers to understand libvirt.
Signed-off-by: Jianan Gao <jgao@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
The semantics of the backup operation don't strictly require that all
disks being backed up share the same incremental starting point (a disk
may have been checkpointed/backed up separately or in a different VM),
or a disk may not have a previous checkpoint at all (e.g. when it was
freshly hotplugged to the VM).
In such cases we can still create a common checkpoint for all of them
and back up differences according to the configuration.
This patch adds a per-disk configuration of the checkpoint to do the
incremental backup from via the 'incremental' attribute and allows
performing full backups via the 'backupmode' attribute.
Note that no changes to the qemu driver are necessary to take advantage
of this as we already obey the per-disk 'incremental' field.
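A rough sketch of the per-disk configuration in the backup XML (disk
and checkpoint names hypothetical):

  <domainbackup>
    <incremental>checkpoint-1</incremental>
    <disks>
      <disk name='vda' backup='yes' incremental='other-checkpoint'/>
      <disk name='vdb' backup='yes' backupmode='full'/>
    </disks>
  </domainbackup>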
https://bugzilla.redhat.com/show_bug.cgi?id=1829829
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Call the post-processing function so that we can validate that it does
the correct thing.
virDomainBackupAlignDisks requires disk definitions to be present, so
let's fake them by copying the disks from the backup definition and
adding one extra disk 'vdextradisk'.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Replace the output with a copy of the input file for further changes once
we start testing virDomainBackupAlignDisks.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
The former is the new recommended frontend for browsing Go API
documentation online.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Use the configured TLS env to set up encryption of the TLS transport.
https://bugzilla.redhat.com/show_bug.cgi?id=1822631
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Allow enabling TLS for the NBD server used to do pull-mode backups. Note
that documentation already mentions 'tls', so this just implements the
schema and XML bits.
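A minimal sketch of the XML addition (attribute placement assumed based
on the description above):

  <domainbackup mode='pull'>
    <server tls='yes' transport='tcp' name='example.com' port='10809'/>
  </domainbackup>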
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
TLS is required to transport backed-up data securely when using
pull-mode backups.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>