Commit f7afeddc added code to report to systemd an array of interface
indexes for all tap devices used by a guest. Unfortunately, not only
did it fail to report the ifindexes for macvtap interfaces
(interface type='direct') or the tap devices used by type='ethernet',
it also ended up sending "-1" as the ifindex for each macvtap or hostdev
interface. This resulted in a failure to start any domain that had a
macvtap or hostdev interface (or, in fact, any type other than
"network" or "bridge").
This patch does the following with the nicindexes array:
1) Modify qemuBuildInterfaceCommandLine() to only fill in the
nicindexes array if given a non-NULL pointer to an array (and modify
the test jig calls to the function to pass NULL). This is because
there are tests in the test suite that have type='ethernet' and still
have an ifname specified, but that device of course doesn't actually
exist on the test system, so attempts to call virNetDevGetIndex()
would fail.
2) Even then, add an entry to the nicindexes array only for
appropriate types, and do so for all of them ("network", "bridge",
and "direct"), but only if the ifname is known, since that is
required to call virNetDevGetIndex(). A rough sketch follows below.
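The sketch (local variable names are illustrative; the real code lives
in qemuBuildInterfaceCommandLine()):

/* Only record an ifindex when the caller asked for them, the type is
 * appropriate and the host-side interface name is actually known. */
if (nicindexes && nnicindexes && net->ifname) {
    switch (actualType) {
    case VIR_DOMAIN_NET_TYPE_NETWORK:
    case VIR_DOMAIN_NET_TYPE_BRIDGE:
    case VIR_DOMAIN_NET_TYPE_DIRECT:
        if (virNetDevGetIndex(net->ifname, &nicindex) < 0 ||
            VIR_APPEND_ELEMENT(*nicindexes, *nnicindexes, nicindex) < 0)
            goto cleanup;
        break;
    default:
        break;
    }
}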
Well, it's not that we are formatting invalid XML; rather, we are not
formatting it as beautifully as we could:
<cpu mode='host-passthrough'>
</cpu>
If there are no children, let's use the singleton element.
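That way, a <cpu> element with no children would presumably be formatted as:

<cpu mode='host-passthrough'/>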
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This API joins the following two lines:
char *s = virBufferContentAndReset(buf1);
virBufferAdd(buf2, s, -1);
into one:
virBufferAddBuffer(buf2, buf1);
With one exception: there's no re-indentation applied to @buf1.
The idea is that, in general, the two buffers can have different
indentation (as the test I'm adding demonstrates).
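An illustrative use (not the actual test), showing that the two buffers
may carry different indentation:

virBuffer buf1 = VIR_BUFFER_INITIALIZER;
virBuffer buf2 = VIR_BUFFER_INITIALIZER;

virBufferAdjustIndent(&buf1, 4);
virBufferAddLit(&buf1, "indented content\n");
virBufferAdjustIndent(&buf2, 2);
virBufferAddLit(&buf2, "outer content\n");

/* buf1's content is appended verbatim and buf1 is reset;
 * it is NOT re-indented to buf2's indentation level */
virBufferAddBuffer(&buf2, &buf1);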
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Since s390 does not support usb, the default creation of a usb controller
for a domain should not occur.
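In effect this amounts to a guard along these lines (a sketch; assuming
an addDefaultUSB-style flag in the qemu domain post-parse code):

if (ARCH_IS_S390(def->os.arch))
    addDefaultUSB = false;    /* s390 guests get no implicit USB controller */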
Also adjust the s390 test cases by removing usb device instances, since
usb devices are no longer created by default for s390.
Signed-off-by: Stefan Zimmermann <stzi@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
For historical reasons data regarding NUMA configuration were split
between the CPU definition and numatune. We cannot do anything about the
XML still being split, but we certainly can at least store the relevant
data in one place.
This patch moves the NUMA stuff to the right place.
It's easier to recalculate the number in the one place it's used than to
have a separate variable tracking it. It will also help with moving
the NUMA code to a separate module.
The mask was stored both as a bitmap and as a string. The string is used
for XML output only. Remove the string, as it can be reconstructed from
the bitmap.
The test change is necessary as the bitmap formatter doesn't "optimize"
using the '^' operator.
Rewrite the function to save a few local variables and reorder the code
to make more sense.
Additionally, the ncells_max member of the virCPUDef structure is used
only for tracking the allocation while parsing the NUMA definition; this
can be avoided by switching to VIR_ALLOC_N, as the array is not resized
after the initial allocation.
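A sketch of the simplification (assuming the cells/ncells/ncells_max
members described above; error handling elided):

/* before: the array was grown cell by cell, tracking ncells_max */
if (VIR_RESIZE_N(def->cells, def->ncells_max, def->ncells, 1) < 0)
    goto error;

/* after: the number of cells is known up front, so allocate once */
if (VIR_ALLOC_N(def->cells, n) < 0)
    goto error;
def->ncells = n;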
The "virDomainGetInfo" API returns only live info for a running domain
and only config info for an offline domain; there was no way to get the
config info for a running domain. We will use "vshCPUCountCollect"
instead to get the correct cpu count that we need to pass to
"virDomainGetVcpuPinInfo".
Also clean up some unnecessary variables and checks that are already
done by the drivers.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1160559
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Not all machine types support all devices, device properties, backends,
etc. So until we create a matrix of [machineType, qemuCaps], let's just
filter out some capabilities before we return them to the consumer
(which is going to make decisions based on them straight away).
Currently, as qemu is unable to tell which capabilities are (not)
enabled for given machine types, we have to hardcode the matrix ourselves.
One day maybe the hardcoding will go away and we can create the matrix
dynamically on the fly based on a few monitor calls.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
So, when building the '-numa' command line, the
qemuBuildMemoryBackendStr() function does quite a lot of checks to
choose the best backend, or to determine whether one is needed at all.
However, it reported that a backend was needed even for this simple case:
<numatune>
<memory mode="strict" nodeset="0,2"/>
</numatune>
This can be guaranteed entirely via CGroups; there's no need to use
memory-backend-ram to let qemu know where to get memory from. Well, as
long as there's no <memnode/> element, which explicitly requires the
backend. Long story short, we wouldn't have to care, as qemu works
either way. However, the problem is migration (as always). Previously,
libvirt would have started qemu with:
-numa node,memory=X
in this case and restricted memory placement in CGroups. Today, libvirt
creates more complicated command line:
-object memory-backend-ram,id=ram-node0,size=X
-numa node,memdev=ram-node0
Again, one wouldn't find anything wrong with these two approaches.
Both work just fine, unless you try to migrate from an older libvirt
to a newer one: these two approaches are, unfortunately, not
compatible. My suggestion is, in order to allow users to migrate, let's
use the older approach for as long as the newer one is not needed.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Well, we can pretend that we've asked numad for its suggestion and let
the qemu command line be built accordingly. Again, this alone has
little value, but see the later commits which build on top of this.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
In order for QEMU vCPU (and other) threads to run with RT scheduler,
libvirt needs to take care of that so QEMU doesn't have to run privileged.
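Conceptually, libvirt has to do something along these lines on behalf of
the QEMU threads (a plain POSIX sketch, not the actual libvirt helper):

#include <sys/types.h>
#include <sched.h>

/* give a vCPU thread the FIFO real-time scheduler with a given priority */
static int
setRealtimeScheduler(pid_t tid, int priority)
{
    struct sched_param param = { .sched_priority = priority };

    return sched_setscheduler(tid, SCHED_FIFO, &param);
}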
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1178986
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
In our RNG schema we do allow multiple (different) seclabels per domain,
but we don't allow this for devices, even though we have no check for it
either in the XML parser or in a post-parse callback. In that case we
should allow multiple (different) seclabels for devices as well.
Move the alias name right after the object type for rng-egd backend so
that we can later use the JSON to commandline generator to create the
command line.
Libvirt didn't prefix the random number generator backend object alias
with any string thus the device alias and object alias were identical.
To avoid possible problems, rename the alias for the backend object and
tweak tests to comply with the change.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
It is only supported for virtio adapters.
Silently drop it if it was specified for other models,
as is done for other virtio attributes.
Also mention this in the documentation.
https://bugzilla.redhat.com/show_bug.cgi?id=1147195
If a storage file is backed by an NBD device without a path
(nbd://localhost), libvirt would crash when parsing the backing path for
the disk, as the URI structure's path element is NULL in such a case but
the NBD parser would access it shamelessly.
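The fix boils down to a NULL check along these lines (illustrative; uri
is the parsed virURI of the backing store):

/* nbd://localhost has no path component, so uri->path is NULL */
if (uri->path) {
    /* only then parse the export name out of uri->path */
}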
Some code paths have special logic depending on the page size
reported by sysconf, which in turn affects the test results.
We must mock this so tests always have a consistent page size.
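A minimal LD_PRELOAD-style sketch of such a mock (illustrative; not the
actual libvirt test helper):

#define _GNU_SOURCE
#include <dlfcn.h>
#include <unistd.h>

long sysconf(int name)
{
    /* report a fixed page size so test results don't depend on the host */
    if (name == _SC_PAGESIZE)
        return 4096;

    /* everything else goes to the real sysconf() */
    long (*realSysconf)(int) = (long (*)(int))dlsym(RTLD_NEXT, "sysconf");
    return realSysconf(name);
}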
The change done by commit f309db1f4d wrongly
assumes that qemu can start with a combination of NUMA nodes specified
with the "memdev" option (plus the appropriate backends) and the legacy
way of specifying only "mem" as a size argument. QEMU rejects such a
command line though:
$ /usr/bin/qemu-system-x86_64 -S -M pc -m 1024 -smp 2 \
-numa node,nodeid=0,cpus=0,mem=256 \
-object memory-backend-ram,id=ram-node1,size=12345 \
-numa node,nodeid=1,cpus=1,memdev=ram-node1
qemu-system-x86_64: -numa node,nodeid=1,cpus=1,memdev=ram-node1: qemu: memdev option must be specified for either all or no nodes
To fix this issue we need to check whether any of the nodes requires the
new definition with the backend; if so, all other nodes have to use it
too (see the sketch below).
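Illustrative structure of the check (helper name hypothetical; the real
code sits in the -numa command line builder):

bool needBackend = false;
size_t i;

/* first pass: does any guest NUMA node require a memory backend? */
for (i = 0; i < ncells; i++) {
    if (nodeNeedsMemoryBackend(def, i))    /* hypothetical predicate */
        needBackend = true;
}

/* second pass: emit either memdev= for every node, or mem= for every
 * node, never a mix of the two */
for (i = 0; i < ncells; i++) {
    if (needBackend) {
        /* -object memory-backend-ram,id=ram-nodeN,... -numa node,memdev=ram-nodeN */
    } else {
        /* -numa node,mem=SIZE */
    }
}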
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1182467
QEMU's command line visitor as well as the JSON interface take bytes by
default for memory object sizes. Convert mebibytes to bytes so that we
can later refactor the existing code for hotplug purposes.
QEMU's qapi visitor code allows yes/on/y for a true and no/off/n for a
false value of boolean properties. Unify the style we use so that we can
generate it later, and fix the test cases.
Unlike -device, qemu uses a JSON object to add backend "objects" via the
monitor rather than the string that would be passed on the commandline.
To be able to reuse code parts that configure backends for various
devices, this patch adds a helper that will allow generating the command
line representations from the JSON property object.
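For illustration, a property object along these lines (values made up):

{ "id": "ram-node0", "size": 1073741824 }

would be turned into the -object argument string:

memory-backend-ram,id=ram-node0,size=1073741824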
Adding or reordering test cases is usually a pain due to static test
case names that are then passed to virtTestRun(). To ease the numbering
of test cases, this patch adds two simple helpers that generate the test
names according to the order they are run. The test name can be
configured via the reset function.
This will allow us to freely add test cases in the middle of test groups
without the need to re-number the rest of the test cases.
https://bugzilla.redhat.com/show_bug.cgi?id=1170492
In one of our previous commits (dc8b7ce7) we made a functional change
even though it was intended as a pure refactor. The problem is that the
following XML:
<vcpu placement='static' current='2'>6</vcpu>
<cputune>
<emulatorpin cpuset='1-3'/>
</cputune>
<numatune>
<memory mode='strict' placement='auto'/>
</numatune>
gets translated into this one:
<vcpu placement='auto' current='2'>6</vcpu>
<cputune>
<emulatorpin cpuset='1-3'/>
</cputune>
<numatune>
<memory mode='strict' placement='auto'/>
</numatune>
We should not change the vcpu placement mode. Moreover, we're doing
something similar in the case of emulatorpin and iothreadpin: if they
were set but vcpu placement was auto, we mistakenly removed them from
the domain XML even though they can be set independently of vcpus.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
There are some interface types (notably 'server' and 'client')
which instead of allowing the default set of elements and
attributes (like the rest do), try to enumerate only the elements
they know of. This way, however, it's easy to miss something. For
instance, the <address/> element was not mentioned at all. This
resulted in strange behavior: when such an interface was added
to the XML, an address was automatically generated by the parsing
code, but the formatted XML then failed to validate against the RNG
schema. This became more visible once we turned on XML validation for
domain XML changes: appending an empty line at the end of the
formatted XML (to trick virsh into thinking the XML had changed) made
libvirt refuse the very same XML it had formatted.
Instead of trying to find each element and attribute we are
missing in the schema, let's just allow all elements and
attributes, as we do for the rest of the types. There's no
harm in the schema being wider than what our parser allows.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Well, even though users can pass a list of UEFI:NVRAM pairs at
configure time, we may as well maintain the list of widely available
UEFI builds ourselves. And as arm64 is on the rise, OVMF has been
ported there too, with a slight name change: it's called AAVMF, with
AAVMF_CODE.fd being the UEFI firmware and AAVMF_VARS.fd being the NVRAM
store file.
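On aarch64 hosts the pair would then presumably look like this (paths as
commonly shipped by distributions, shown in qemu.conf's nvram syntax):

nvram = [ "/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd" ]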
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Up until now there were just two ways to specify UEFI paths to
libvirt. The first one is editing qemu.conf, the other is editing
qemu_conf.c and recompiling, which is not that fancy. So a new
configure option is introduced: --with-loader-nvram, which takes a
list of pairs of UEFI firmware and NVRAM store paths. This way, the
compiled-in defaults can be set at compile time without any need to
change the code itself.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
For stateless, client side drivers, it is never correct to
probe for secondary drivers. It is only ever appropriate to
use the secondary driver that is associated with the
hypervisor in question. As a result the ESX & HyperV drivers
have both been forced to do hacks where they register no-op
drivers for the ones they don't implement.
For stateful, server side drivers, we always just want to
use the same built-in shared driver. The exception is
virtualbox which is really a stateless driver and so wants
to use its own server side secondary drivers. To deal with
this virtualbox has to be built as 3 separate loadable
modules to allow registration to work in the right order.
This can all be simplified by introducing a new struct
recording the precise set of secondary drivers each
hypervisor driver wants:
struct _virConnectDriver {
virHypervisorDriverPtr hypervisorDriver;
virInterfaceDriverPtr interfaceDriver;
virNetworkDriverPtr networkDriver;
virNodeDeviceDriverPtr nodeDeviceDriver;
virNWFilterDriverPtr nwfilterDriver;
virSecretDriverPtr secretDriver;
virStorageDriverPtr storageDriver;
};
Instead of registering the hypervisor driver, we now
just register a virConnectDriver instead. This allows
us to remove all probing of secondary drivers. Once we
have chosen the primary driver, we immediately know the
correct secondary drivers to use.
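For example, a stateless driver such as ESX could then register roughly
this (field values illustrative; the registration entry point is assumed
to be virRegisterConnectDriver()):

static virConnectDriver esxConnectDriver = {
    .hypervisorDriver = &esxHypervisorDriver,
    .interfaceDriver = &esxInterfaceDriver,
    .networkDriver = &esxNetworkDriver,
    .storageDriver = &esxStorageDriver,
};

int
esxRegister(void)
{
    /* one registration call instead of registering each driver separately */
    return virRegisterConnectDriver(&esxConnectDriver);
}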
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The function may return NULL if something went wrong. In some places
in the tests we are not checking the return value but rather
accessing the pointer directly, resulting in SIGSEGV.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Coverity reports that my commit af1c98e introduced
two memory leaks:
the cpumap if ncpus == 0 in virCgroupGetPercpuStats
and the params array in the test of the function.
My commit af1c98e4 broke the build on RHEL-6:
vircgrouptest.c: In function 'testCgroupGetPercpuStats':
vircgrouptest.c:566: error: nested extern declaration of
'_gl_verify_function2' [-Wnested-externs]
The only thing that needs checking is that the array size
is at least EXPECTED_NCPUS, to prevent access beyond the array.
We can also ensure the minimum size by specifying the array
size upfront.
Per-cpu stats are only shown for present CPUs in the cgroups,
but we were only parsing the largest CPU number from
/sys/devices/system/cpu/present and looking for stats even for
non-present CPUs.
This resulted in:
internal error: cpuacct parse error
https://bugzilla.redhat.com/show_bug.cgi?id=1130390
The listen address is not mandatory for <interface type='server'>
but when it's not specified, we've been formatting it as:
-netdev socket,listen=(null):5558,id=hostnet0
which failed with:
Device 'socket' could not be initialized
Omit the address completely and only format the port in the listen
attribute.
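so that the generated option would presumably look like:

-netdev socket,listen=:5558,id=hostnet0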
Also fix the schema to allow specifying a model.
When libvirt is configured with --without-xen, building xlconfigtest
fails with:
CCLD xlconfigtest
/usr/lib/gcc/x86_64-linux-gnu/4.7/../../../x86_64-linux-gnu/crt1.o: In function `_start':
(.text+0x20): undefined reference to `main'
collect2: error: ld returned 1 exit status
Introduced in commit 4ed5fb91 by too much copy and paste from
xmconfigtest.
This adds a new "localOnly" attribute on the domain element of the
network xml. With this set to "yes", DNS requests under that domain
will only be resolved by libvirt's dnsmasq, never forwarded upstream.
This was how it worked before commit f69a6b987d, and I found that
functionality useful. For example, I have my host's NetworkManager
dnsmasq configured to forward that domain to libvirt's dnsmasq, so I can
easily resolve guest names from outside. But if libvirt's dnsmasq
doesn't know a name and forwards it to the host, I'd get an endless
forwarding loop. Now I can set localOnly="yes" to prevent the loop.
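For example (illustrative network XML snippet):

<domain name='example.lan' localOnly='yes'/>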
Signed-off-by: Josh Stone <jistone@redhat.com>
Do not run ZFS tests when ZFS is unsupported in the environment.
The recent patch b4af40226d adds tests
to storagepoolxml2xmltest for the optional ZFS feature. When ZFS is
not included in the configuration these tests should not / cannot be
run. Modify the test source file to check for use of the feature and
compile in the tests accordingly.
Signed-off-by: Gary R Hook <gary.hook@nimboxx.com>
Ploop is a pseudo device which makes it possible to access an image in
a file as a block device, like loop devices, but with additional
features such as snapshots and a write tracker, and without
double-caching.
It is used in PCS for containers and in OpenVZ. You can manage ploop
devices and images with the ploop utility
(http://git.openvz.org/?p=ploop).
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>