Under ./configure --without-numactl but with numactl-devel installed,
the build fails with:
../../src/util/virnuma.c: In function 'virNumaNodeIsAvailable':
../../src/util/virnuma.c:407:5: error: implicit declaration of function 'numa_bitmask_isbitset' [-Werror=implicit-function-declaration]
return numa_bitmask_isbitset(numa_nodes_ptr, node);
^
and other failures, all because the configure results for particular
functions were used without regard to whether libnuma was even being
linked in.
* src/util/virnuma.c (virNumaGetPages): Fix message typo.
(virNumaNodeIsAvailable): Correct build when not using numactl.
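A minimal sketch of the idea behind the fix, assuming the usual
configure-exported macros WITH_NUMACTL and HAVE_NUMA_BITMASK_ISBITSET
(the exact guard used in the patch may differ):

    #include <stdbool.h>
    #if WITH_NUMACTL
    # include <numa.h>   /* numa_bitmask_isbitset(), numa_nodes_ptr */
    #endif

    bool
    virNumaNodeIsAvailable(int node)
    {
    #if WITH_NUMACTL && HAVE_NUMA_BITMASK_ISBITSET
        /* Only rely on the configure probe when libnuma is linked in. */
        return numa_bitmask_isbitset(numa_nodes_ptr, node);
    #else
        /* Without libnuma, only node 0 can be assumed to exist. */
        return node == 0;
    #endif
    }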
Signed-off-by: Eric Blake <eblake@redhat.com>
There were numerous places where the numatune configuration (and thus
the domain config as well) was changed in different ways. In some
places this even resulted in the persistent domain definition not being
stable (it would change across daemon restarts).
In order to change uniformly how the numatune config is dealt with, all
the internals are now directly accessible only in numatune_conf.c;
outside this file, accessors must be used.
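A minimal sketch of the accessor pattern this aims for; the names below
are illustrative, not necessarily the exact accessors introduced:

    /* numatune_conf.h: callers only see an opaque type plus accessors */
    typedef struct _virDomainNumatune virDomainNumatune;
    typedef virDomainNumatune *virDomainNumatunePtr;

    int virDomainNumatuneGetMode(virDomainNumatunePtr numatune);

    /* numatune_conf.c: the only file that knows the layout */
    struct _virDomainNumatune {
        int mode;   /* one of the virDomainNumatuneMemMode values */
    };

    int
    virDomainNumatuneGetMode(virDomainNumatunePtr numatune)
    {
        return numatune ? numatune->mode : 0;
    }

With the struct private, any later change to the layout (or a fix for
the stability problem above) can be made in one place without touching
the callers.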
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Since there was already public virDomainNumatune*, I changed the
private virNumaTune to match the same, so all the uses are unified and
public API is kept:
s/vir\(Domain\)\?Numa[tT]une/virDomainNumatune/g
then shortened long lines, and mainly the long function names, that
were created by that:
sed -i 's/virDomainNumatuneMemPlacementMode/virDomainNumatunePlacement/g'
And to cope with the enum name, I had to change the constants as
well:
s/VIR_NUMA_TUNE_MEM_PLACEMENT_MODE/VIR_DOMAIN_NUMATUNE_PLACEMENT/g
The last thing I did was at least a little shortening of an already
long name:
s/virDomainNumatuneDef/virDomainNumatune/g
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
There are many places with numatune-related code that should be moved
into a separate numatune_conf, and this patch creates the basis for that.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
If we are running on a system that is not capable of huge pages (e.g.
because the kernel is not configured that way), we still try to open
"/sys/kernel/mm/hugepages/", which, however, does not exist. We should
be tolerant of this specific case.
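A sketch of what "tolerant" means here; the constant name is just a
stand-in for this example, and virReportSystemError is libvirt's usual
error-reporting helper:

    #include <dirent.h>
    #include <errno.h>

    #define HUGEPAGES_PATH "/sys/kernel/mm/hugepages/"

    DIR *dir = NULL;

    if (!(dir = opendir(HUGEPAGES_PATH))) {
        /* Kernel without hugepage support: nothing to report,
         * but not a fatal error either. */
        if (errno == ENOENT)
            return 0;
        virReportSystemError(errno, _("unable to open path: %s"),
                             HUGEPAGES_PATH);
        return -1;
    }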
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
On Linux, if huge pages are allocated, the memory they cut off is
accounted under 'MemUsed' in the meminfo file. However, we want that
amount subtracted from 'MemTotal' instead. This patch implements this.
After this change, we can enable reporting of the ordinary system pages
in the capability XML:
<capabilities>
  <host>
    <uuid>01281cda-f352-cb11-a9db-e905fe22010c</uuid>
    <cpu>
      <arch>x86_64</arch>
      <model>Haswell</model>
      <vendor>Intel</vendor>
      <topology sockets='1' cores='1' threads='1'/>
      <feature/>
      <pages unit='KiB' size='4'/>
      <pages unit='KiB' size='2048'/>
      <pages unit='KiB' size='1048576'/>
    </cpu>
    <power_management/>
    <migration_features/>
    <topology>
      <cells num='4'>
        <cell id='0'>
          <memory unit='KiB'>4048248</memory>
          <pages unit='KiB' size='4'>748382</pages>
          <pages unit='KiB' size='2048'>3</pages>
          <pages unit='KiB' size='1048576'>1</pages>
          <distances/>
          <cpus num='1'>
            <cpu id='0' socket_id='0' core_id='0' siblings='0'/>
          </cpus>
        </cell>
        ...
      </cells>
    </topology>
  </host>
</capabilities>
You can see the beautiful thing about this: if you sum up all the
<pages/> entries, you get <memory/>.
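For cell 0 above, that works out as:

    748382 * 4 KiB + 3 * 2048 KiB + 1 * 1048576 KiB
      = 2993528 KiB + 6144 KiB + 1048576 KiB
      = 4048248 KiB, i.e. exactly the cell's <memory/>.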
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
virNumaGetPages calls closedir(dir) in its cleanup path, and dir could
be NULL if we jump there from a failed opendir() call.
While that's not harmful on Linux, FreeBSD libc crashes [1], so
make sure that dir is not NULL before calling closedir().
1: http://lists.freebsd.org/pipermail/freebsd-standards/2014-January/002704.html
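Something along these lines (a sketch of the guard, not the verbatim
patch):

     cleanup:
        if (dir)
            closedir(dir);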
One of the previous commits (e6258a33) tried to build the huge page code
only on Linux, since it is indeed Linux-centric. But it failed miserably,
as it used 'WITH_LINUX', which is an automake conditional, not a gcc
one. In the sources we need to use __linux__.
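In other words, the guard needs to be a compiler-visible one, roughly:

    #ifdef __linux__
        /* hugepage code: reads /sys/kernel/mm/hugepages/ */
    #else
        /* non-Linux: report the operation as unsupported */
    #endif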
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
==== Invalid write of size 4
==== at 0x52E678C: virNumaGetDistances (virnuma.c:479)
==== by 0x5396890: nodeCapsInitNUMA (nodeinfo.c:1796)
==== by 0x203C2B: virQEMUCapsInit (qemu_capabilities.c:960)
==== Address 0xe10a1e0 is 0 bytes after a block of size 0 alloc'd
==== at 0x4C2A6D0: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==== by 0x52A10D6: virAllocN (viralloc.c:191)
==== by 0x52E674D: virNumaGetDistances (virnuma.c:470)
==== by 0x5396890: nodeCapsInitNUMA (nodeinfo.c:1796)
==== by 0x203C2B: virQEMUCapsInit (qemu_capabilities.c:960)
The hugepage sizing and counting code gathers the information from sysfs
and thus isn't portable. Stub it out for non-Linux so that we can report
a better error. This patch also avoids calling sysinfo() on MinGW, where
it isn't supported.
For future work we need two functions: one that fetches the total
number of pages and the number of free pages for a given NUMA node and
page size (virNumaGetPageInfo()), and one to learn what page sizes are
supported on a given node (virNumaGetPages()).
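One plausible shape for those two helpers; the exact signatures are not
spelled out here, so treat these prototypes as assumptions:

    /* total and free page counts for one page size on one node */
    int virNumaGetPageInfo(int node,
                           unsigned int page_size,          /* KiB */
                           unsigned long long *page_avail,
                           unsigned long long *page_free);

    /* enumerate all page sizes a node supports, with their counts */
    int virNumaGetPages(int node,
                        unsigned int **pages_size,          /* KiB */
                        unsigned int **pages_avail,
                        unsigned int **pages_free,
                        size_t *npages);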
Note that reporting of the system page size is disabled at the moment
because of one connected issue: if you have a NUMA node with huge pages
allocated, the kernel reports the normal amount of memory for that node,
basically ignoring the fact that huge pages steal their size from the
system memory. Until we resolve this, it's safer not to confuse users
and hence not to report any system pages yet.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Not on all hosts is the set of NUMA node IDs contiguous. This is
critical, because our code currently assumes the set doesn't contain
holes. For instance, in nodeGetFreeMemory() we can see the following
pattern:
    if ((max_node = virNumaGetMaxNode()) < 0)
        return 0;
    for (n = 0; n <= max_node; n++) {
        ...
    }
while it should be something like this:
    if ((max_node = virNumaGetMaxNode()) < 0)
        return 0;
    for (n = 0; n <= max_node; n++) {
        if (!virNumaNodeIsAvailable(n))
            continue;
        ...
    }
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
On some systems, libnuma can be present but so ancient that
it lacks some symbols that virNumaGetDistances() needs. To be
more precise: numa_bitmask_isbitset() and numa_nodes_ptr are the
symbols in question. Fortunately, they were both introduced in
the same release so it's sufficient for us to check for only one
of them. And the winner is numa_bitmask_isbitset().
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
In case libvirt is built without numactl support, we're
missing a virNumaGetDistances() stub, so linking fails:
CCLD libvirt_lxc
libvirt_lxc-nodeinfo.o: In function `virNodeCapsGetSiblingInfo':
/home/zippy/tmp/libvirt.git/src/nodeinfo.c:1763: undefined reference to `virNumaGetDistances'
collect2: error: ld returned 1 exit status
make[3]: *** [libvirt_lxc] Error 1
The issue was introduced in 77c830d8c4.
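The fix is a no-op fallback in the non-numactl branch of virnuma.c,
roughly along these lines (a sketch, not the verbatim patch):

    #if !WITH_NUMACTL
    int
    virNumaGetDistances(int node ATTRIBUTE_UNUSED,
                        int **distances,
                        int *ndistances)
    {
        /* no libnuma: report "no distance information available" */
        *distances = NULL;
        *ndistances = 0;
        return 0;
    }
    #endif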
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The API takes a NUMA node and finds the distances to other nodes. The
distances are returned in an array. If item X within the array
equals zero, then there is no such node X.
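A hedged usage sketch, assuming a signature of
virNumaGetDistances(int node, int **distances, int *ndistances):

    int *distances = NULL;
    int ndistances = 0;
    int i;

    if (virNumaGetDistances(0, &distances, &ndistances) < 0)
        goto error;

    for (i = 0; i < ndistances; i++) {
        if (distances[i] == 0)
            continue;   /* no such node as i */
        /* distances[i] is the distance from node 0 to node i */
    }
    VIR_FREE(distances);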
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
In 9dd02965 virNumaGetNodeMemory was introduced; however, the
comment describing the function mentions virNumaGetNodeMemorySize.
There's also one typo in the virNumaIsAvailable() description.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Any source file which calls the logging APIs now needs
to have a VIR_LOG_INIT("source.name") declaration at
the start of the file. This provides a static variable
of the virLogSource type.
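For the NUMA helpers that would look something like this (the exact
source name string here is an assumption):

    #include "virlog.h"

    VIR_LOG_INIT("util.numa");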
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
There are 2 issues here: First, we shouldn't add "1" to the return
value of numa_max_node(), since the meaning of the error message has
changed and it no longer talks about the total number of NUMA nodes.
Second, the value of "bit" is the position of the first bit that
exceeds either numa_max_node() or NUMA_NUM_NODES; it can be any number
in that range, so saying "bigger than $bit" is quite confusing now.
For example, assuming a NUMA machine which has 10 NUMA nodes, and one
specifies the "nodeset" as "0,5,88", the error message will be:
Nodeset is out of range, host cannot support NUMA node bigger than 88
It sounds as if all NUMA node numbers less than 88 were fine, but
actually the maximum NUMA node number the machine supports is 9.
This patch fixes the issues by removing the addition of "1" and
simplifies the error message to "NUMA node $bit is out of range".
It also simplifies the comparison in the while loop by taking the
smaller of numa_max_node() and NUMA_NUM_NODES up front.
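A sketch of the simplified check; virReportError is libvirt's usual
error-reporting helper, and the surrounding nodemask-walking code is
omitted:

    int maxnode = numa_max_node();

    /* take the smaller of the two limits up front */
    if (maxnode > NUMA_NUM_NODES)
        maxnode = NUMA_NUM_NODES;

    if (bit > maxnode) {
        virReportError(VIR_ERR_INTERNAL_ERROR,
                       _("NUMA node %d is out of range"), bit);
        return -1;
    }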
Convert the type of loop iterators named 'i', 'j', 'k',
'ii', 'jj', 'kk' to be 'size_t' instead of 'int' or
'unsigned int', also sanitizing 'ii', 'jj', 'kk' to use
the normal 'i', 'j', 'k' naming.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
To reduce redundant code, use virNumaSetupMemoryPolicy
to replace virLXCControllerSetupNUMAPolicy and
qemuProcessInitNumaMemoryPolicy.
This patch also moves the NUMA-related code to
virnuma.c and virnuma.h.
Signed-off-by: Gao feng <gaofeng@cn.fujitsu.com>
qemuGetNumadAdvice will be used by the LXC driver; rename
it to virNumaGetAutoPlacementAdvice and move it to virnuma.c.
Signed-off-by: Gao feng <gaofeng@cn.fujitsu.com>