Historically, we declared a pointer type alongside each of our types:
typedef struct _virXXX virXXX;
typedef virXXX *virXXXPtr;
But the usefulness of such declarations is questionable, at best.
Unfortunately, we can't drop every such declaration - we have to
carry some over, because they are part of the public API (e.g.
virDomainPtr). But for internal types we can drop them and use
what every other C project uses: 'virXXX *'.
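For illustration, a declaration that previously went through the
typedef now spells the pointer out explicitly (virXXX standing in
for any internal type, as above):

  virXXXPtr ptr = NULL;   /* before */
  virXXX *ptr = NULL;     /* after */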
This change was generated by a very ugly shell script that
generated a sed script, which was then run over each file in the
repository. For the shell script, refer to the cover letter:
https://listman.redhat.com/archives/libvir-list/2021-March/msg00537.html
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Attempting to create an LXC domain with <seclabel type='none'/> fails:

  virsh --connect lxc:/// create distro_nosec.xml
  error: Failed to create domain from distro_nosec.xml
  error: unsupported configuration: Security driver model '(null)' is not available
Commit 638ffa2228 adjusted the logic for setting a driver's default
security model.
The lxc driver does not set a default security driver model in the XML
parser config, causing seclabels of type='none' to have a null model.
The lxc driver's security manager is initialized in lxcStateInitialize()
by calling lxcSecurityInit(). Use the model of this manager as the
default in the XML parser config.
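A sketch of that fix (lxcSecurityInit and virSecurityManagerGetModel
exist in the tree; the parser-config field name follows the qemu
driver's usage, and the exact plumbing here is illustrative):

  /* after lxcSecurityInit() has created the security manager,
     seed the XML parser config with its model so that
     <seclabel type='none'/> no longer ends up with a NULL model */
  virLXCDriverDomainDefParserConfig.defSecModel =
      virSecurityManagerGetModel(driver->securityManager);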
For the record, this is a regression introduced by commit 638ffa2228:
the qemu driver was adjusted to the new logic, but a similar change
was missed in the lxc driver.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
To enable the virCapsPtr parameter of the post-parse method to be
eliminated, the drivers must fetch the virCapsPtr from their own
driver via the opaque parameter, or use an alternative approach
to validate the parsed data.
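In the qemu driver, for instance, the pattern looks roughly like this
(a sketch; virQEMUDriverGetCapabilities is the real accessor, the
rest of the body is abbreviated):

  static int
  qemuDomainDefPostParse(virDomainDef *def,
                         unsigned int parseFlags,
                         void *opaque,
                         void *parseOpaque)
  {
      virQEMUDriver *driver = opaque;
      virCaps *caps = virQEMUDriverGetCapabilities(driver, false);

      if (!caps)
          return -1;

      /* ... validate def against caps ... */

      virObjectUnref(caps);
      return 0;
  }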
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
All of the headers being removed are pulled in by internal.h. The only
exception is sanlock, which expects the application to include <stdint.h>
before sanlock's headers: sanlock's prototypes use fixed-width integer
types, but its headers don't include stdint.h themselves, so we have to
leave that one in place.
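Concretely, the sanlock lock driver has to keep an ordering like this
(illustrative):

  #include <stdint.h>        /* sanlock's prototypes rely on the
                                fixed-width types declared here */
  #include <sanlock.h>
  #include <sanlock_resource.h>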
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Acked-by: Michal Privoznik <mprivozn@redhat.com>
This patch refactors various places to allow removal of the
defaultConsoleTargetType callback from the virCaps structure.
A new console character device target type is introduced -
VIR_DOMAIN_CHR_CONSOLE_TARGET_TYPE_NONE - to mark that no type was
specified in the XML. This type is ultimately converted to the standard
VIR_DOMAIN_CHR_CONSOLE_TARGET_TYPE_SERIAL. Other types that differ
from this default have to be processed separately in the device
post-parse callback.
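A sketch of that conversion in the device post-parse callback (field
names per virDomainChrDef; treat the exact placement as illustrative):

  if (dev->type == VIR_DOMAIN_DEVICE_CHR &&
      dev->data.chr->deviceType == VIR_DOMAIN_CHR_DEVICE_TYPE_CONSOLE &&
      dev->data.chr->targetType == VIR_DOMAIN_CHR_CONSOLE_TARGET_TYPE_NONE)
      dev->data.chr->targetType = VIR_DOMAIN_CHR_CONSOLE_TARGET_TYPE_SERIAL;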
To enable virCapabilities instances to be reference counted,
turn the struct into a virObject. All calls to virCapabilitiesFree
become virObjectUnref.
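Mechanically, that is:

  virCapabilitiesFree(caps);   /* before */
  virObjectUnref(caps);        /* after: drops one reference */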
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
For S390, the default console target type cannot be 'serial'.
It is necessary to at least interpret the 'arch' attribute
value of the os/type element to produce the correct default type.
Therefore we need to extend the signature of defaultConsoleTargetType
to account for architecture. As a consequence all the drivers
supporting this capability function must be updated.
Despite the amount of changed files, the only change in behavior is
that for S390 the default console target type will be 'virtio'.
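A sketch of an arch-aware driver callback (the parameter types and the
s390 check are illustrative; the target type constants exist in the
tree):

  static int
  qemuDefaultConsoleTargetType(const char *ostype, const char *arch)
  {
      if (STRPREFIX(arch, "s390"))
          return VIR_DOMAIN_CHR_CONSOLE_TARGET_TYPE_VIRTIO;
      return VIR_DOMAIN_CHR_CONSOLE_TARGET_TYPE_SERIAL;
  }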
N.B.: A more future-proof approach could be to use hypervisor
specific capabilities to determine the best possible console type.
For instance one could add an opaque private data pointer to the
virCaps structure (in case of QEMU to hold capsCache) which could
then be passed to the defaultConsoleTargetType callback to determine
the console target type.
That seems, however, a bit overengineered for the use case...
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
Pass argv to the init binary of LXC, using a new <initarg> element
(see the example after the file list).
* docs/formatdomain.html.in: Document <os> usage for containers
* docs/schemas/domaincommon.rng: Add <initarg> element
* src/conf/domain_conf.c, src/conf/domain_conf.h: parsing and
formatting of <initarg>
* src/lxc/lxc_container.c: Setup LXC argv
* tests/Makefile.am, tests/lxcxml2xmldata/lxc-systemd.xml,
tests/lxcxml2xmltest.c, tests/testutilslxc.c,
tests/testutilslxc.h: Test parsing/formatting of LXC related
XML parts
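For instance, a container whose init takes extra arguments might be
configured like this (paths and argument values hypothetical):

  <os>
    <type arch='x86_64'>exe</type>
    <init>/bin/systemd</init>
    <initarg>--unit</initarg>
    <initarg>emergency.service</initarg>
  </os>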