There are two macros used in the test: CAPSCOMP and CAPS_EXPECT_ERR.
Both run a test case and, if a failure occurs, they set the @ret
variable to -1 to indicate an error. Well, that's what they should
do. Due to a typo, they set the variable to positive one instead,
effectively masking any failed test.
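For illustration only (the real macros in vircapstest.c take more
arguments and call into the libvirt test harness; everything below is
a made-up stand-in), the problem boils down to the sign of the value
assigned to @ret:

#include <stdio.h>

static int ret;                    /* aggregated result: 0 == success */

static int run_case(void)          /* stand-in for a single test case */
{
    return -1;                     /* pretend the case failed */
}

/* What the macro did because of the typo ... */
#define CAPSCOMP_BUGGY()  do { if (run_case() < 0) ret = 1;  } while (0)
/* ... and what it is supposed to do. */
#define CAPSCOMP_FIXED()  do { if (run_case() < 0) ret = -1; } while (0)

int main(void)
{
    CAPSCOMP_BUGGY();
    printf("buggy: ret = %d\n", ret);   /* 1: the failure is masked */
    ret = 0;
    CAPSCOMP_FIXED();
    printf("fixed: ret = %d\n", ret);   /* -1: the failure is reported */
    return 0;
}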
Then, we have a couple of tests failing. Fix them too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The phyp driver stuffed it into a DomainDefPtr during its attachdevice
routine, but the value is never advertised via capabilities, so it
should be safe to drop.
Have the phyp driver use OSTYPE_LINUX, which is what it advertises via
capabilities.
When building without lxc support enabled, the build fails with:
CCLD vircapstest
vircapstest.o: In function `test_virCapsDomainDataLookupLXC':
vircapstest.c:(.text+0x9ef): undefined reference to `testLXCCapsInit'
Fix that by hiding the LXC tests behind an appropriate #ifdef. The
same applies to QEMU and XEN.
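A sketch of the shape of the fix (apart from testLXCCapsInit, which
the linker error above refers to, and the usual WITH_LXC/WITH_QEMU/
WITH_XEN build macros, all names here are made-up stand-ins; the real
guards live in vircapstest.c):

#include <stdlib.h>

#ifdef WITH_LXC
static int
test_virCapsDomainDataLookupLXC(void)
{
    /* the real test calls testLXCCapsInit() and exercises the lookup */
    return 0;
}
#endif /* WITH_LXC */

int
main(void)
{
    int ret = 0;

#ifdef WITH_LXC
    /* Referenced only when LXC support is compiled in, so a build
     * without LXC no longer needs testLXCCapsInit at link time. */
    if (test_virCapsDomainDataLookupLXC() < 0)
        ret = -1;
#endif /* WITH_LXC */

    /* analogous blocks guarded by WITH_QEMU and WITH_XEN */

    return ret == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
}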
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This is a helper function to look up the capabilities data for all
the OS bits that are relevant to <domain>, namely:
- os type
- arch
- domain type
- emulator
- machine type
It will be used to replace several functions in later commits; a rough
sketch of the intended shape follows.
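The name and exact signature below are assumptions (the real
declaration belongs in src/conf/capabilities.h and may differ); this
is only meant to show the kind of lookup described above, where a
NULL / unset field means "match anything":

/* Hypothetical result of the lookup, mirroring the bits listed above. */
typedef struct _virCapsDomainData virCapsDomainData;
typedef virCapsDomainData *virCapsDomainDataPtr;
struct _virCapsDomainData {
    const char *ostype;        /* e.g. "hvm" */
    virArch arch;              /* e.g. VIR_ARCH_X86_64 */
    const char *domaintype;    /* e.g. "kvm" */
    const char *emulator;      /* e.g. "/usr/bin/qemu-system-x86_64" */
    const char *machinetype;   /* e.g. "pc" */
};

/* Return the single capabilities entry matching all given constraints,
 * or NULL if no registered guest matches. */
virCapsDomainDataPtr
virCapabilitiesDomainDataLookup(virCapsPtr caps,
                                const char *ostype,
                                virArch arch,
                                const char *domaintype,
                                const char *emulator,
                                const char *machinetype);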
There are two places where you'll find info on page sizes. The first
one is under the <cpu/> element, where all supported page sizes are
listed. The second one is under each <cell/> element, which refers to
a concrete NUMA node; there, the size of the pool of each page size is
reported. So the capabilities XML looks something like this:
<capabilities>
  <host>
    <uuid>01281cda-f352-cb11-a9db-e905fe22010c</uuid>
    <cpu>
      <arch>x86_64</arch>
      <model>Westmere</model>
      <vendor>Intel</vendor>
      <topology sockets='1' cores='1' threads='1'/>
      ...
      <pages unit='KiB' size='4'/>
      <pages unit='KiB' size='2048'/>
      <pages unit='KiB' size='1048576'/>
    </cpu>
    ...
    <topology>
      <cells num='4'>
        <cell id='0'>
          <memory unit='KiB'>4054408</memory>
          <pages unit='KiB' size='4'>1013602</pages>
          <pages unit='KiB' size='2048'>3</pages>
          <pages unit='KiB' size='1048576'>1</pages>
          <distances/>
          <cpus num='1'>
            <cpu id='0' socket_id='0' core_id='0' siblings='0'/>
          </cpus>
        </cell>
        <cell id='1'>
          <memory unit='KiB'>4071072</memory>
          <pages unit='KiB' size='4'>1017768</pages>
          <pages unit='KiB' size='2048'>3</pages>
          <pages unit='KiB' size='1048576'>1</pages>
          <distances/>
          <cpus num='1'>
            <cpu id='1' socket_id='0' core_id='0' siblings='1'/>
          </cpus>
        </cell>
        ...
      </cells>
    </topology>
    ...
  </host>
  <guest/>
</capabilities>
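For completeness, here is one way a client could read those per-node
pools back out of the XML. This is only a sketch (plain libvirt client
API plus libxml2, minimal error handling), not part of the patch;
build against libvirt and libxml-2.0:

#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>
#include <libxml/parser.h>
#include <libxml/xpath.h>

int main(void)
{
    virConnectPtr conn = virConnectOpenReadOnly(NULL);
    char *caps = conn ? virConnectGetCapabilities(conn) : NULL;
    if (!caps)
        return EXIT_FAILURE;

    xmlDocPtr doc = xmlReadDoc((const xmlChar *)caps, NULL, NULL, 0);
    xmlXPathContextPtr ctxt = xmlXPathNewContext(doc);
    /* every <pages> under a NUMA <cell> holds the pool size (element
     * content) for one page size (the 'size' attribute) */
    xmlXPathObjectPtr obj = xmlXPathEval(
        (const xmlChar *)"/capabilities/host/topology/cells/cell/pages",
        ctxt);

    for (int i = 0; obj && obj->nodesetval && i < obj->nodesetval->nodeNr; i++) {
        xmlNodePtr pages = obj->nodesetval->nodeTab[i];
        xmlChar *cell = xmlGetProp(pages->parent, (const xmlChar *)"id");
        xmlChar *size = xmlGetProp(pages, (const xmlChar *)"size");
        xmlChar *count = xmlNodeGetContent(pages);
        printf("node %s: %s pages of size %s KiB\n",
               (char *)cell, (char *)count, (char *)size);
        xmlFree(cell);
        xmlFree(size);
        xmlFree(count);
    }

    xmlXPathFreeObject(obj);
    xmlXPathFreeContext(ctxt);
    xmlFreeDoc(doc);
    free(caps);
    virConnectClose(conn);
    return EXIT_SUCCESS;
}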
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If a user or a management application wants to create a guest, it may
be useful to know the cost of internode latencies before the guest's
resources are pinned. For example:
<capabilities>
  <host>
    ...
    <topology>
      <cells num='2'>
        <cell id='0'>
          <memory unit='KiB'>4004132</memory>
          <distances>
            <sibling id='0' value='10'/>
            <sibling id='1' value='20'/>
          </distances>
          <cpus num='2'>
            <cpu id='0' socket_id='0' core_id='0' siblings='0'/>
            <cpu id='2' socket_id='0' core_id='2' siblings='2'/>
          </cpus>
        </cell>
        <cell id='1'>
          <memory unit='KiB'>4030064</memory>
          <distances>
            <sibling id='0' value='20'/>
            <sibling id='1' value='10'/>
          </distances>
          <cpus num='2'>
            <cpu id='1' socket_id='0' core_id='0' siblings='1'/>
            <cpu id='3' socket_id='0' core_id='2' siblings='3'/>
          </cpus>
        </cell>
      </cells>
    </topology>
    ...
  </host>
  ...
</capabilities>
We can see that the distance from node 1 to node 0 is 20, while the
distance within a node is 10.
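The distances themselves come from the kernel's NUMA distance table;
as a quick cross-check they can also be read with libnuma. The snippet
below is only a sketch and not part of the patch; build with -lnuma:

#include <stdio.h>
#include <numa.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this host\n");
        return 1;
    }

    int max = numa_max_node();
    for (int from = 0; from <= max; from++) {
        for (int to = 0; to <= max; to++) {
            /* 10 means local access, larger values mean higher cost;
             * these should match the <sibling value='...'/> numbers */
            printf("node%d -> node%d: %d\n",
                   from, to, numa_distance(from, to));
        }
    }
    return 0;
}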
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This test creates a fake NUMA topology with non-sequential cell ids to
check that libvirt handles it properly.
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.vnet.ibm.com>
Signed-off-by: Pradipta Kr. Banerjee <bpradip@in.ibm.com>