In many files there are header comments that contain an Author:
statement, supposedly reflecting who originally wrote the code.
In a large collaborative project like libvirt, any non-trivial
file will have been modified by a large number of different
contributors. IOW, the Author: comments are quickly out of date,
omitting people who have made significant contributions.
In some places Author: lines have been added despite the person
merely being responsible for creating the file by moving existing
code out of another file. IOW, the Author: lines give an incorrect
record of authorship.
With all this in mind, the comments are useless as a means to identify
who to talk to about code in a particular file. Contributors will always
be better off using 'git log' and 'git blame' if they need to find the
author of a particular bit of code.
This commit thus deletes all Author: comments from the source and adds
a rule to prevent them reappearing.
The Copyright headers are similarly misleading and inaccurate; however,
we cannot delete them, as they still carry legal meaning. In addition,
only the copyright holder is permitted to change their respective
copyright statement.
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
All of the ones being removed are pulled in by internal.h. The only
exception is sanlock, which expects the application to include
<stdint.h> before sanlock's headers: the sanlock prototypes use
fixed-width integer types, but its headers don't include <stdint.h>
themselves, so we have to leave that one include in place.
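For illustration, the required ordering in the sanlock driver looks
roughly like this (the exact set of sanlock headers shown is an
assumption, but <stdint.h> must come before them):

  #include <stdint.h>            /* sanlock prototypes use uint32_t & friends */
  #include <sanlock.h>
  #include <sanlock_resource.h>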
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Acked-by: Michal Privoznik <mprivozn@redhat.com>
It doesn't really make sense for us to have stdlib.h and string.h but
not stdio.h in the internal.h header.
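For context, the relevant block of standard includes in internal.h
ends up looking roughly like this (neighbouring includes shown only
for illustration and may not match the file exactly):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>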
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Acked-by: Michal Privoznik <mprivozn@redhat.com>
By making use of GNU C's cleanup attribute, wrapped by the
VIR_AUTOFREE macro, for declaring scalar variables, the majority of
the VIR_FREE calls can be dropped, which in turn lets us get rid of
most of our cleanup sections.
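To illustrate the pattern with a made-up helper (fooRemovePath is
hypothetical; VIR_AUTOFREE, VIR_FREE and virAsprintf are the existing
libvirt helpers from viralloc.h and virstring.h, and <unistd.h> is
assumed for unlink):

  /* before: a cleanup label exists only to free the temporary buffer */
  static int
  fooRemovePath(const char *dir, const char *name)
  {
      char *path = NULL;
      int ret = -1;

      if (virAsprintf(&path, "%s/%s", dir, name) < 0)
          goto cleanup;
      if (unlink(path) < 0)
          goto cleanup;
      ret = 0;
   cleanup:
      VIR_FREE(path);
      return ret;
  }

  /* after: path is freed automatically when it goes out of scope */
  static int
  fooRemovePath(const char *dir, const char *name)
  {
      VIR_AUTOFREE(char *) path = NULL;

      if (virAsprintf(&path, "%s/%s", dir, name) < 0)
          return -1;
      if (unlink(path) < 0)
          return -1;
      return 0;
  }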
Signed-off-by: Sukrit Bhatnagar <skrtbhtngr@gmail.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Currently, the functions return a pointer to the
destination buffer on success or NULL on failure.
Not only does this kind of error handling look quite
alien in the context of libvirt, where most functions
return zero on success and a negative int on failure,
but it's also somewhat pointless because unless there's
been a failure the returned pointer will be the same
one passed in by the user, thus offering no additional
value.
Change the functions so that they return an int
instead.
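Sketching the shape of the change with a hypothetical virFooCopy()
(the actual functions being converted are not named here):

  /* before: returns dest on success, NULL on failure */
  char *virFooCopy(char *dest, const char *src, size_t destbytes);

  if (!virFooCopy(buf, src, sizeof(buf)))
      goto error;

  /* after: returns 0 on success, a negative int on failure */
  int virFooCopy(char *dest, const char *src, size_t destbytes);

  if (virFooCopy(buf, src, sizeof(buf)) < 0)
      goto error;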
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1569678
On some large systems (with ~400GB of RAM) it is possible for an
unsigned int to overflow, in which case we report an invalid size
for the 4K pages pool. Switch to unsigned long long.
We hit overflow in virNumaGetPages when doing:
huge_page_sum += 1024 * page_size * page_avail;
because although 'huge_page_sum' is an unsigned long long, both
page_size and page_avail are unsigned int, so the multiplication is
carried out in unsigned int and the promotion to unsigned long long
doesn't happen until the product is added to the sum, by which time
we've already overflowed.
Turning page_avail into an unsigned long long is not strictly needed
until we need the ability to represent more than 2^32 4K pages, which
equates to 16 TB of RAM. That's not outside the realm of possibility,
so it makes sense to change it to unsigned long long now to avoid
future problems.
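A standalone program illustrating the promotion problem (the numbers
are made up to mimic a large host; this is not the libvirt code
itself):

  #include <stdio.h>

  int main(void)
  {
      unsigned long long huge_page_sum = 0;
      unsigned int page_size = 4;            /* KiB */
      unsigned int page_avail = 110000000;   /* ~420 GB worth of 4K pages */

      /* the multiplication is performed in unsigned int and wraps
       * before the result is widened for the += */
      huge_page_sum += 1024 * page_size * page_avail;
      printf("wrapped: %llu\n", huge_page_sum);

      /* widening one operand (as the fix does by making page_avail an
       * unsigned long long) keeps the whole expression in 64 bits */
      huge_page_sum = 0;
      huge_page_sum += 1024ULL * page_size * page_avail;
      printf("correct: %llu\n", huge_page_sum);
      return 0;
  }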
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
libvirt reports a fake NUMA topology in virConnectGetCapabilities
even if built without numactl support. The fake NUMA topology consists
of a single cell representing the host's CPU and memory resources.
Currently this is the case for ARM and s390[x] RPM builds.
A client iterating over NUMA cells obtained via virConnectGetCapabilities
and invoking virNodeGetMemoryStats on them will see an internal failure
"NUMA isn't available on this host" from virNumaGetMaxNode. An example
for such a client is VDSM.
Since the intention seems to be that libvirt always reports at least
a single cell, it is necessary to return "fake" node memory statistics
matching the previously reported fake cell when NUMA isn't supported
on the system.
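The shape of the fallback, sketched with a hypothetical helper
(virNumaIsAvailable() and VIR_NODE_MEMORY_STATS_ALL_CELLS are the
existing libvirt names; hostMemGetFakeCellStats() is made up for
illustration):

  /* in the memory stats lookup, before touching virNumaGetMaxNode() */
  if (!virNumaIsAvailable() &&
      (cellNum == VIR_NODE_MEMORY_STATS_ALL_CELLS || cellNum == 0)) {
      /* the capabilities XML advertised a single fake cell 0, so answer
       * with host-wide totals instead of raising an internal error */
      return hostMemGetFakeCellStats(params, nparams);
  }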
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
Move all APIs with a virHostMem name prefix out into new
util/virhostmem.h & util/virhostmem.c files
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>