Right now, if numad fails, we raise an error but return an
empty string to the caller instead of a NULL pointer, which
means processing will continue and the user will see
# virsh start guest
error: Failed to start domain guest
error: invalid argument: Failed to parse bitmap ''
instead of a more reasonable
# virsh start guest
error: Failed to start domain guest
error: operation failed: Failed to query numad for the advisory nodeset
Make sure the user gets a better error message.
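Roughly, the fix looks like this (a sketch only; runNumadCommand() is a
hypothetical placeholder for the actual numad invocation):

    static char *
    queryNumadNodeset(const char *args)
    {
        char *output = NULL;

        if (runNumadCommand(args, &output) < 0) {
            virReportError(VIR_ERR_OPERATION_FAILED, "%s",
                           _("Failed to query numad for the advisory nodeset"));
            return NULL;   /* NULL, not "": the caller must stop processing */
        }

        return output;
    }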
https://bugzilla.redhat.com/show_bug.cgi?id=1716387
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
This helper converts a set of NUMA nodes to the set of CPUs
they contain.
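A sketch of the conversion, assuming the virBitmap helpers shown (the
real helper differs in detail):

    static int
    nodesetToCPUset(virBitmapPtr nodeset, virBitmapPtr *cpuset)
    {
        ssize_t node = -1;

        if (!(*cpuset = virBitmapNew(virNumaGetMaxCPUs())))
            return -1;

        while ((node = virBitmapNextSetBit(nodeset, node)) >= 0) {
            virBitmapPtr nodeCPUs = NULL;

            if (virNumaGetNodeCPUs(node, &nodeCPUs) < 0 ||
                virBitmapUnion(*cpuset, nodeCPUs) < 0) {
                virBitmapFree(nodeCPUs);
                return -1;
            }
            virBitmapFree(nodeCPUs);
        }

        return 0;
    }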
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
By making use of GNU C's cleanup attribute handled by the
VIR_AUTOPTR macro for declaring aggregate pointer variables,
the majority of the calls to *Free functions can be dropped, which
in turn leads to getting rid of most of our cleanup sections.
Signed-off-by: Sukrit Bhatnagar <skrtbhtngr@gmail.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
By making use of GNU C's cleanup attribute handled by the
VIR_AUTOFREE macro for declaring scalar variables, the majority
of the VIR_FREE calls can be dropped, which in turn leads to
getting rid of most of our cleanup sections.
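An illustrative sketch of how both macros read in practice, not a
specific hunk from either series (the compiler emits the cleanup call
when the variable goes out of scope):

    static int
    example(int node)
    {
        VIR_AUTOFREE(char *) path = NULL;     /* scalar: VIR_FREE'd on return */
        VIR_AUTOPTR(virBitmap) cpus = NULL;   /* aggregate: virBitmapFree'd */

        if (virAsprintf(&path, "/sys/devices/system/node/node%d", node) < 0)
            return -1;

        if (virNumaGetNodeCPUs(node, &cpus) < 0)
            return -1;

        return 0;   /* no 'cleanup:' label required */
    }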
Signed-off-by: Sukrit Bhatnagar <skrtbhtngr@gmail.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1569678
On some large systems (with ~400GB of RAM) it is possible for
an unsigned int to overflow, in which case we report an invalid
size for the 4K pages pool. Switch to unsigned long long.
We hit overflow in virNumaGetPages when doing:
huge_page_sum += 1024 * page_size * page_avail;
because although 'huge_page_sum' is an unsigned long long,
page_size and page_avail are both unsigned int, so the promotion
to unsigned long long doesn't happen until the right-hand side
has been calculated, by which time we've already overflowed.
Turning page_avail into an unsigned long long is not strictly
needed until we need the ability to represent more than 2^32
4k pages, which equates to 16 TB of RAM. That's not
outside the realm of possibility, so it makes sense to change
it to unsigned long long to avoid future problems.
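The promotion issue in isolation (values are illustrative):

    unsigned int page_size = 1048576;   /* 1GiB pages, expressed in KiB */
    unsigned int page_avail = 4096;
    unsigned long long huge_page_sum = 0;

    /* BAD: the arithmetic is done in 'unsigned int' and wraps around
     * before the result is widened for the assignment */
    huge_page_sum += 1024 * page_size * page_avail;

    /* OK: a ULL operand forces the whole expression to unsigned long long */
    huge_page_sum += 1024ULL * page_size * page_avail;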
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Name it virNumaGetHostMemoryNodeset and return only NUMA nodes which
have memory installed. This is necessary as the kernel is not very happy
to set the memory cgroup setting for nodes which do not have any memory.
This would break vcpu hotplug with the following message on such
a configuration:
Invalid value '0,8' for 'cpuset.mems': Invalid argument
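A sketch of the filtering, assuming virNumaGetNodeMemory(); only nodes
that actually have memory end up in the returned nodeset:

    static virBitmapPtr
    hostMemoryNodeset(void)
    {
        virBitmapPtr nodeset = NULL;
        unsigned long long mem;
        int maxnode = virNumaGetMaxNode();
        int node;

        if (maxnode < 0 || !(nodeset = virBitmapNew(maxnode + 1)))
            return NULL;

        for (node = 0; node <= maxnode; node++) {
            if (!virNumaNodeIsAvailable(node))
                continue;

            if (virNumaGetNodeMemory(node, &mem, NULL) < 0 || mem == 0)
                continue;   /* skip memory-less nodes: cgroups reject them */

            ignore_value(virBitmapSetBit(nodeset, node));
        }

        return nodeset;
    }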
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1375268
Move all APIs with a virHostMem name prefix out into the new
util/virhostmem.h & util/virhostmem.c files
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
In preparation for moving all the memory related APIs out of
the nodeinfo file, give them a virHostMem name prefix.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Commit id '1c24cfe9' added error messages for virNumaSetPagePoolSize;
however, virNumaGetHugePageInfo also uses virNumaGetHugePageInfoPath
in order to build the path, but never checked upon return whether
the built path exists, which could lead to an error message as follows:
$ virsh freepages 0 1
error: Failed to open file
'/sys/devices/system/node/node0/hugepages/hugepages-1kB/free_hugepages':
No such file or directory
Rather than add the same message for the other two callers, adjust
virNumaGetHugePageInfoPath to not only build the path, but also
check whether the built path exists. If the path does not exist,
generate the error message and return failure.
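A sketch of the adjusted helper's tail end (arguments abbreviated; the
error text mirrors what callers now report):

    if (virAsprintf(&path,
                    "/sys/devices/system/node/node%d/hugepages/"
                    "hugepages-%ukB/%s", node, page_size, suffix) < 0)
        return -1;

    if (!virFileExists(path)) {
        virReportError(VIR_ERR_OPERATION_FAILED,
                       _("page size %u is not available on node %d"),
                       page_size, node);
        return -1;
    }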
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Commit id '1c24cfe9' added new checks and error messages for failure
scenarios. Let's move those error messages to after the call to
virNumaGetHugePageInfoPath in order to provide a more specific error
message depending on the node and page_size.
After this patch:
# virsh allocpages --pagesize 2047 --pagecount 1 --cellno 0
error: operation failed: page size 2047 is not available on node 0
# virsh allocpages --pagesize 2047 --pagecount 1
error: operation failed: page size 2047 is not available
Signed-off-by: Luyao Huang <lhuang@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1265114
Refactor the helper virNumaGetHugePageInfoPath by splitting the case
of returning a directory path (page_size == 0 and suffix == NULL) into
a new helper, virNumaGetHugePageInfoDir, which is only called when a
directory path is expected to be returned. This solves the issue where
the helper was called with page_size == 0 expecting a file path in
return, but instead got a directory path and failed in virFileReadAll with:
error : virFileReadAll:1358 : Failed to read file
'/sys/devices/system/node/node0/hugepages/': Is a directory
Since the virNumaGetPages API expects a directory path when passing
page_size == 0 and suffix == NULL, it will now call the new helper.
Callers of virNumaGetHugePageInfoPath expect a file path in return,
which can then be used in the call to virFileReadAll.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1224587
The function takes two important arguments (among many others): @node
and @page_size. From these two a path under /sys is constructed. The
path is then used to read and write the desired size of huge pages
pool. However, if the path does not exist due to either @node or
@page_size having a nonexistent value (e.g. there's no such NUMA node
or no page size like -2), a cryptic error message is produced:
virsh # allocpages --pagesize 2049 --pagecount 8 --cellno -2
error: Failed to open file '/sys/devices/system/node/node-2/hugepages/hugepages-2049kB/nr_hugepages': No such file or directory
Add two more checks to catch this and therefore produce much
friendlier error messages.
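A sketch of the two added guards (names and messages are close to, but
not necessarily identical with, the real patch):

    if (node != -1 && !virNumaNodeIsAvailable(node)) {
        virReportError(VIR_ERR_OPERATION_FAILED,
                       _("NUMA node %d is not available"), node);
        return -1;
    }

    if (!virFileExists(path)) {
        virReportError(VIR_ERR_OPERATION_FAILED,
                       _("page size %u is not available"), page_size);
        return -1;
    }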
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Some code paths have special logic depending on the page size
reported by sysconf, which in turn affects the test results.
We must mock this so tests always have a consistent page size.
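A minimal LD_PRELOAD-style sketch of such a mock (the real test shim
differs in detail):

    #define _GNU_SOURCE
    #include <unistd.h>
    #include <dlfcn.h>

    long
    sysconf(int name)
    {
        static long (*real_sysconf)(int);

        if (name == _SC_PAGESIZE)
            return 4096;   /* pin the page size for deterministic results */

        if (!real_sysconf)
            real_sysconf = dlsym(RTLD_NEXT, "sysconf");

        return real_sysconf(name);
    }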
Introduced by commit c63ef0452b: when nodeset is NULL, validation in
virNumaSetupMemoryPolicy will pass, but virBitmapNextSetBit requires
that the bitmap is not NULL, otherwise it may cause a segmentation
fault. This patch fixes it.
Signed-off-by: Chen Fan <chen.fan.fnst@cn.fujitsu.com>
When compiled without full numa support, the stub function for
virNumaNodeIsAvailable() just checks whether specified node is in range
<0, max); where max is maximum NUMA node available on the host. But
because the maximum node number is the highest usable number (and not the
count of nodes), the check is incorrect as it should check whether the
specified node is in range <0, max> instead.
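The off-by-one in isolation ('max' is the highest usable node number,
so 'node == max' must be accepted):

    bool
    nodeIsAvailableStub(int node, int max)
    {
        return node >= 0 && node <= max;   /* was: node < max */
    }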
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
This is a reaction to Michal's fix [1] for non-NUMA systems that also
splits conf/ out of util/, because libvirt_util shouldn't require
libvirt_conf; it should be the other way around. This particular use
case worked, but we're trying to avoid it, as has been mentioned many
times [2].
The only functions from virnuma.c that needed numatune_conf were
virDomainNumatuneNodesetIsAvailable() and virNumaSetupMemoryPolicy().
The first one should be in numatune_conf as it works with
virDomainNumatune, the second one just needs a nodeset and a mode, both
of which can be passed without the need for numatune_conf.
Apart from fixing that, this patch also fixes recently added
code (between commits d2460f85^..5c8515620) that doesn't support
non-contiguous nodesets. It uses the new function
virNumaNodesetIsAvailable(), which doesn't need a stub as it doesn't use
any libnuma functions, to check whether every node in the specified
nodeset is available.
[1] https://www.redhat.com/archives/libvir-list/2014-November/msg00118.html
[2] http://www.redhat.com/archives/libvir-list/2011-June/msg01040.html
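For illustration, a sketch of the new check, close to the shape
described above (it needs no libnuma symbols, hence no stub):

    bool
    nodesetIsAvailable(virBitmapPtr nodeset)
    {
        ssize_t bit = -1;

        if (!nodeset)
            return true;

        while ((bit = virBitmapNextSetBit(nodeset, bit)) >= 0) {
            if (virNumaNodeIsAvailable(bit))
                continue;

            virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
                           _("NUMA node %zd is unavailable"), bit);
            return false;
        }

        return true;
    }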
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
There was no check for the 'nodeset' attribute in numatune-related
elements. This patch adds validation that any specified nodeset does
not exceed the maximum host node.
Signed-off-by: Chen Fan <chen.fan.fnst@cn.fujitsu.com>
In case the host has 2 or more NUMA nodes, we fetch the CPU map for
each node. However, we need to free the CPU map between loop iterations:
==29513== 96 (72 direct, 24 indirect) bytes in 3 blocks are definitely lost in loss record 951 of 1,264
==29513== at 0x4C2A700: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==29513== by 0x52AD24B: virAlloc (viralloc.c:144)
==29513== by 0x52AF0E6: virBitmapNew (virbitmap.c:78)
==29513== by 0x52FB720: virNumaGetNodeCPUs (virnuma.c:294)
==29513== by 0x53C700B: nodeCapsInitNUMA (nodeinfo.c:1886)
==29513== by 0x11759708: vboxCapsInit (vbox_common.c:398)
==29513== by 0x11759CC4: vboxConnectOpen (vbox_common.c:514)
==29513== by 0x53C965F: do_open (libvirt.c:1147)
==29513== by 0x53C9EBC: virConnectOpen (libvirt.c:1317)
==29513== by 0x142905: remoteDispatchConnectOpen (remote.c:1215)
==29513== by 0x126ADF: remoteDispatchConnectOpenHelper (remote_dispatch.h:2346)
==29513== by 0x5453D21: virNetServerProgramDispatchCall (virnetserverprogram.c:437)
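A sketch of the fixed loop shape; the map must be freed on every
iteration, not just once after the loop:

    for (n = 0; n <= max_node; n++) {
        virBitmapPtr cpumap = NULL;

        if (virNumaGetNodeCPUs(n, &cpumap) < 0)
            goto cleanup;

        /* ... consume cpumap ... */

        virBitmapFree(cpumap);   /* previously leaked per iteration */
    }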
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Under ./configure --without-numactl but with numactl-devel installed,
the build fails with:
../../src/util/virnuma.c: In function 'virNumaNodeIsAvailable':
../../src/util/virnuma.c:407:5: error: implicit declaration of function 'numa_bitmask_isbitset' [-Werror=implicit-function-declaration]
return numa_bitmask_isbitset(numa_nodes_ptr, node);
^
and other failures, all because the configure results for particular
functions were used without regard to whether libnuma was even being
linked in.
* src/util/virnuma.c (virNumaGetPages): Fix message typo.
(virNumaNodeIsAvailable): Correct build when not using numactl.
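The shape of the fix is roughly a guard along these lines (macro names
as suggested by the error above; the exact conditionals in the patch
may differ):

    #if WITH_NUMACTL && HAVE_NUMA_BITMASK_ISBITSET
    /* implementation that may call numa_bitmask_isbitset() */
    #else
    /* fallback that must not reference any libnuma symbol */
    #endif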
Signed-off-by: Eric Blake <eblake@redhat.com>
There were numerous places where numatune configuration (and thus
domain config as well) was changed in different ways. In some
places this even resulted in the persistent domain definition not
being stable (it would change with the daemon's restart).
In order to uniformly change how numatune config is dealt with, all
the internals are now accessible directly only in numatune_conf.c and
outside this file accessors must be used.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Since there was already a public virDomainNumatune*, I changed the
private virNumaTune to match it, so all the uses are unified and
the public API is kept:
s/vir\(Domain\)\?Numa[tT]une/virDomainNumatune/g
then shrank long lines, mainly function names, that were created
after that:
sed -i 's/virDomainNumatuneMemPlacementMode/virDomainNumatunePlacement/g'
And to cope with the enum name, I had to change the constants as
well:
s/VIR_NUMA_TUNE_MEM_PLACEMENT_MODE/VIR_DOMAIN_NUMATUNE_PLACEMENT/g
The last thing I did was at least a little shortening of an already
long name:
s/virDomainNumatuneDef/virDomainNumatune/g
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
There are many places with numatune-related code that should be put
into a separate numatune_conf, and this patch creates a basis for that.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
If we are running on a system that is not capable of huge pages (e.g.
because the kernel is not configured that way) we still try to open
"/sys/kernel/mm/hugepages/", which, however, does not exist. We should
be tolerant of this specific use case.
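A sketch of the tolerance, with a hypothetical HUGEPAGES_SYSFS_PATH
constant standing in for the real path macro:

    if (!(dir = opendir(HUGEPAGES_SYSFS_PATH))) {
        if (errno == ENOENT)
            return 0;   /* kernel without huge page support: not an error */

        virReportSystemError(errno, _("unable to open path: %s"),
                             HUGEPAGES_SYSFS_PATH);
        return -1;
    }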
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
In the Linux kernel, if huge pages are allocated, the memory they take
is accounted under 'MemUsed' in the meminfo file. However, we want the
sum to be subtracted from 'MemTotal'. This patch implements that.
After this change, we can enable reporting of the ordinary system
pages in the capability XML:
<capabilities>
  <host>
    <uuid>01281cda-f352-cb11-a9db-e905fe22010c</uuid>
    <cpu>
      <arch>x86_64</arch>
      <model>Haswell</model>
      <vendor>Intel</vendor>
      <topology sockets='1' cores='1' threads='1'/>
      <feature/>
      <pages unit='KiB' size='4'/>
      <pages unit='KiB' size='2048'/>
      <pages unit='KiB' size='1048576'/>
    </cpu>
    <power_management/>
    <migration_features/>
    <topology>
      <cells num='4'>
        <cell id='0'>
          <memory unit='KiB'>4048248</memory>
          <pages unit='KiB' size='4'>748382</pages>
          <pages unit='KiB' size='2048'>3</pages>
          <pages unit='KiB' size='1048576'>1</pages>
          <distances/>
          <cpus num='1'>
            <cpu id='0' socket_id='0' core_id='0' siblings='0'/>
          </cpus>
        </cell>
        ...
      </cells>
    </topology>
  </host>
</capabilities>
You can see the beautiful thing about this: if you sum up all the
<pages/> you'll get <memory/>.
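Checking cell 0 above: 748382 * 4 + 3 * 2048 + 1 * 1048576 =
2993528 + 6144 + 1048576 = 4048248 KiB, which is exactly the value
reported in <memory unit='KiB'>4048248</memory>.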
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
virNumaGetPages calls closedir(dir) in cleanup and dir could
be NULL if we jump there from the failed opendir() call.
While it's not harmful on Linux, FreeBSD libc crashes [1], so
make sure that dir is not NULL before calling closedir.
1: http://lists.freebsd.org/pipermail/freebsd-standards/2014-January/002704.html
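The fix boils down to a guard of this shape:

     cleanup:
        if (dir)
            closedir(dir);   /* dir may be NULL if opendir() failed */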
One of the previous commits (e6258a33) tried to build the huge page code
only on Linux since it's Linux-centric indeed. But it failed miserably
as it used 'WITH_LINUX', which is an automake conditional, not a gcc
one. In the sources we need to use __linux__.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
==== Invalid write of size 4
==== at 0x52E678C: virNumaGetDistances (virnuma.c:479)
==== by 0x5396890: nodeCapsInitNUMA (nodeinfo.c:1796)
==== by 0x203C2B: virQEMUCapsInit (qemu_capabilities.c:960)
==== Address 0xe10a1e0 is 0 bytes after a block of size 0 alloc'd
==== at 0x4C2A6D0: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==== by 0x52A10D6: virAllocN (viralloc.c:191)
==== by 0x52E674D: virNumaGetDistances (virnuma.c:470)
==== by 0x5396890: nodeCapsInitNUMA (nodeinfo.c:1796)
==== by 0x203C2B: virQEMUCapsInit (qemu_capabilities.c:960)
The hugepage sizing and counting code gathers the information from sysfs
and thus isn't portable. Stub it out for non-Linux so that we can report
a better error. This patch also avoids calling sysinfo() on Mingw where
it isn't supported.
For future work we need two functions: one that fetches the total
number of pages and the number of free pages for a given NUMA node
and page size (virNumaGetPageInfo()), and one to learn what page
sizes are supported on a given node (virNumaGetPages()).
Note that the system page size is disabled at the moment as there's one
issue connected to it. If you have a NUMA node with huge pages
allocated, the kernel will return the normal size of memory for that
node; it basically ignores the fact that huge pages take their size out
of the system memory. Until we resolve this, it's safer not to confuse
users and hence not to report any system pages yet.
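A sketch of the sysfs read behind virNumaGetPageInfo() (the real helper
also reads 'free_hugepages' the same way):

    static int
    getPagesAvail(int node, unsigned int page_size,
                  unsigned long long *page_avail)
    {
        char *path = NULL;
        char *buf = NULL;
        int ret = -1;

        if (virAsprintf(&path,
                        "/sys/devices/system/node/node%d/hugepages/"
                        "hugepages-%ukB/nr_hugepages",
                        node, page_size) < 0)
            goto cleanup;

        if (virFileReadAll(path, 1024, &buf) < 0)
            goto cleanup;

        if (virStrToLong_ull(buf, NULL, 10, page_avail) < 0)
            goto cleanup;

        ret = 0;
     cleanup:
        VIR_FREE(buf);
        VIR_FREE(path);
        return ret;
    }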
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Not on all hosts is the set of NUMA node IDs contiguous. This is
critical, because our code currently assumes the set doesn't contain
holes. For instance, in nodeGetFreeMemory() we can see the following
pattern:
    if ((max_node = virNumaGetMaxNode()) < 0)
        return 0;

    for (n = 0; n <= max_node; n++) {
        ...
    }

while it should be something like this:

    if ((max_node = virNumaGetMaxNode()) < 0)
        return 0;

    for (n = 0; n <= max_node; n++) {
        if (!virNumaNodeIsAvailable(n))
            continue;
        ...
    }
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
On some systems, libnuma can be present but it's so ancient that
it misses some symbols that virNumaGetDistances() needs. To be
more precise: numa_bitmask_isbitset() and numa_nodes_ptr are the
symbols in question. Fortunately, they were both introduced in
the same release so it's sufficient for us to check for only one
of them. And the winner is numa_bitmask_isbitset().
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
In case libvirt is built without numactl support, we're missing
the virNumaGetDistances() stub, so linking fails:
CCLD libvirt_lxc
libvirt_lxc-nodeinfo.o: In function `virNodeCapsGetSiblingInfo':
/home/zippy/tmp/libvirt.git/src/nodeinfo.c:1763: undefined reference to `virNumaGetDistances'
collect2: error: ld returned 1 exit status
make[3]: *** [libvirt_lxc] Error 1
The issue was introduced in 77c830d8c4.
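A sketch of the missing stub; the "nothing available" behavior follows
libvirt's usual stub pattern, though the real one may differ:

    #if !WITH_NUMACTL
    int
    virNumaGetDistances(int node ATTRIBUTE_UNUSED,
                        int **distances,
                        int *ndistances)
    {
        *distances = NULL;
        *ndistances = 0;
        VIR_DEBUG("NUMA distance information isn't available on this host");
        return 0;
    }
    #endif /* !WITH_NUMACTL */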
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The API takes a NUMA node and finds the distances to other nodes. The
distances are returned in an array. If item X within the array
equals zero, then there's no such node as X.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
In 9dd02965 virNumaGetNodeMemory was introduced; however, the
comment describing the function mentions virNumaGetNodeMemorySize.
And there's one typo in the virNumaIsAvailable() description.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Any source file which calls the logging APIs now needs
to have a VIR_LOG_INIT("source.name") declaration at
the start of the file. This provides a static variable
of the virLogSource type.
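For example, at the top of a util source file (the actual name follows
the file's location in the tree):

    #include "virlog.h"

    VIR_LOG_INIT("util.numa");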
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
There are 2 issues here: First, we shouldn't add "1" to the return
value of numa_max_node(), since the semantics of the error message
were changed and it no longer talks about the total number of NUMA
nodes. Second, the value of "bit" is the position of the first
bit which exceeds either numa_max_node() or NUMA_NUM_NODES; it can
be any number in that range, so saying "bigger than $bit" is quite
confusing now. For example, assuming there is a NUMA machine which
has 10 NUMA nodes, and one specifies the "nodeset" as "0,5,88",
the error message will be like:
Nodeset is out of range, host cannot support NUMA node bigger than 88
It sounds like any NUMA node number less than 88 would be fine, but
actually the maximum NUMA node number the machine supports is 9.
This patch fixes the issues by removing the addition of "1" and
simplifying the error message to "NUMA node $bit is out of range".
It also simplifies the comparison in the while loop by getting the
smaller of numa_max_node() and NUMA_NUM_NODES up front.
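A sketch of the simplified loop (variable names are illustrative):

    int maxnode = numa_max_node();
    ssize_t bit = -1;

    if (maxnode > NUMA_NUM_NODES - 1)
        maxnode = NUMA_NUM_NODES - 1;   /* take the smaller bound up front */

    while ((bit = virBitmapNextSetBit(nodemask, bit)) >= 0) {
        if (bit > maxnode) {
            virReportError(VIR_ERR_INTERNAL_ERROR,
                           _("NUMA node %zd is out of range"), bit);
            return -1;
        }
    }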