Introduce VIR_DOMAIN_VIRT_NONE so that the domain virt type has a default
value of zero. This is especially helpful for constructing better error
messages when we don't want to look up the default emulator by virtType.
The test data in vircapstest.c is also modified to reflect this change.
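As a rough illustration (hypothetical names, not the actual libvirt
enum), placing the NONE value first means zero-initialized definitions
get a well-defined "no type set" default instead of silently becoming
the first real hypervisor type:

#include <stdio.h>

typedef enum {
    EXAMPLE_VIRT_NONE = 0,   /* default for zero-filled structs */
    EXAMPLE_VIRT_QEMU,
    EXAMPLE_VIRT_KVM,
} exampleVirtType;

struct exampleDomainDef {
    exampleVirtType virtType;
};

int main(void)
{
    struct exampleDomainDef def = { 0 };
    /* prints 0, i.e. NONE, so error paths can tell "no type set" apart */
    printf("default virtType = %d\n", def.virtType);
    return 0;
}
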
My original implementation was based on a qemu version that did not yet
have all the checks in place. Using sizes that fall on odd megabyte
increments (and are therefore not a multiple of 2MiB) produces the
following error:
qemu-kvm: -device pc-dimm,node=0,memdev=memdimm0,id=dimm0: backend memory size must be multiple of 0x200000
qemu-kvm: -device pc-dimm,node=0,memdev=memdimm0,id=dimm0: Device 'pc-dimm' could not be initialized
Introduce an alignment retrieval function for memory devices, use it
to align the devices separately, and modify a test case to verify it.
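A minimal sketch of the rounding involved (a hypothetical helper, not
the retrieval function introduced here), assuming sizes are stored in
KiB and must be a multiple of 2MiB (0x200000 bytes):

#include <stdio.h>

/* round sizeKiB up to the nearest multiple of alignKiB */
static unsigned long long
alignMemorySizeKiB(unsigned long long sizeKiB, unsigned long long alignKiB)
{
    return ((sizeKiB + alignKiB - 1) / alignKiB) * alignKiB;
}

int main(void)
{
    /* 523264 KiB = 511 MiB, an odd-megabyte size qemu rejects above */
    unsigned long long aligned = alignMemorySizeKiB(523264ULL, 2048ULL);
    printf("511 MiB rounds up to %llu KiB (512 MiB)\n", aligned);
    return 0;
}
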
We use the function to create a virDomainXMLOption object that is
required by some functions. However, we never pass the driver pointer
to the object; we pass NULL instead. This causes trouble later when
parsing a domain XML and calling the post-parse callbacks:
Program received signal SIGSEGV, Segmentation fault.
0x000000000043fa3e in qemuDomainDefPostParse (def=0x7d36c0, caps=0x7caf10, opaque=0x0) at qemu/qemu_domain.c:1043
1043 qemuCaps = virQEMUCapsCacheLookup(driver->qemuCapsCache, def->emulator);
(gdb) bt
#0 0x000000000043fa3e in qemuDomainDefPostParse (def=0x7d36c0, caps=0x7caf10, opaque=0x0) at qemu/qemu_domain.c:1043
#1 0x00007ffff2928bf9 in virDomainDefPostParse (def=0x7d36c0, caps=0x7caf10, xmlopt=0x7c82c0) at conf/domain_conf.c:4269
#2 0x00007ffff294de04 in virDomainDefParseXML (xml=0x7da8c0, root=0x7dab80, ctxt=0x7da980, caps=0x7caf10, xmlopt=0x7c82c0, flags=0) at conf/domain_conf.c:16400
#3 0x00007ffff294e5b5 in virDomainDefParseNode (xml=0x7da8c0, root=0x7dab80, caps=0x7caf10, xmlopt=0x7c82c0, flags=0) at conf/domain_conf.c:16582
#4 0x00007ffff294e424 in virDomainDefParse (xmlStr=0x0, filename=0x7c7ef0 "/home/zippy/work/libvirt/libvirt.git/tests/securityselinuxlabeldata/disks.xml", caps=0x7caf10, xmlopt=0x7c82c0, flags=0) at conf/domain_conf.c:16529
#5 0x00007ffff294e4b2 in virDomainDefParseFile (filename=0x7c7ef0 "/home/zippy/work/libvirt/libvirt.git/tests/securityselinuxlabeldata/disks.xml", caps=0x7caf10, xmlopt=0x7c82c0, flags=0) at conf/domain_conf.c:16553
#6 0x00000000004303ca in testSELinuxLoadDef (testname=0x53c929 "disks") at securityselinuxlabeltest.c:192
#7 0x00000000004309e8 in testSELinuxLabeling (opaque=0x53c929) at securityselinuxlabeltest.c:313
#8 0x0000000000431207 in virtTestRun (title=0x53c92f "Labelling \"disks\"", body=0x430964 <testSELinuxLabeling>, data=0x53c929) at testutils.c:211
#9 0x0000000000430c5d in mymain () at securityselinuxlabeltest.c:373
#10 0x00000000004325c2 in virtTestMain (argc=1, argv=0x7fffffffd7e8, func=0x430b4a <mymain>) at testutils.c:863
#11 0x0000000000430deb in main (argc=1, argv=0x7fffffffd7e8) at securityselinuxlabeltest.c:381
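A reduced sketch of the failure mode (hypothetical names; only the
backtrace above comes from the real code): the post-parse callback
treats its opaque pointer as the driver, so constructing the
virDomainXMLOption with a NULL driver pointer guarantees the
dereference in frame #0 crashes:

#include <stdio.h>

struct exampleDriver {
    void *qemuCapsCache;
};

/* mirrors the shape of the callback in frame #0 above */
static int
examplePostParse(void *def, void *caps, void *opaque)
{
    struct exampleDriver *driver = opaque;   /* NULL here => SIGSEGV */
    (void)def;
    (void)caps;
    return driver->qemuCapsCache ? 0 : -1;
}

int main(void)
{
    struct exampleDriver drv = { .qemuCapsCache = &drv };

    /* passing the real driver as opaque avoids the crash */
    printf("post-parse returned %d\n", examplePostParse(NULL, NULL, &drv));
    return 0;
}
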
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Even though usage of the lock is limited to a few cases, it is still
needed and therefore should be initialized too. Otherwise we may get
random test failures:
==1204== Conditional jump or move depends on uninitialised value(s)
==1204== at 0xEF7F7CF: pthread_mutex_lock (in /lib64/libpthread-2.20.so)
==1204== by 0x9CA89A5: virMutexLock (virthread.c:89)
==1204== by 0x450B2A: qemuDriverLock (qemu_conf.c:83)
==1204== by 0x45549C: virQEMUDriverGetConfig (qemu_conf.c:869)
==1204== by 0x448E29: qemuDomainDeviceDefPostParse (qemu_domain.c:1240)
==1204== by 0x9CC9B13: virDomainDeviceDefPostParse (domain_conf.c:4224)
==1204== by 0x9CC9B91: virDomainDefPostParseDeviceIterator (domain_conf.c:4251)
==1204== by 0x9CC7843: virDomainDeviceInfoIterateInternal (domain_conf.c:3440)
==1204== by 0x9CC9C25: virDomainDefPostParse (domain_conf.c:4276)
==1204== by 0x9CEEE03: virDomainDefParseXML (domain_conf.c:16400)
==1204== by 0x9CEF5B4: virDomainDefParseNode (domain_conf.c:16582)
==1204== by 0x9CEF423: virDomainDefParse (domain_conf.c:16529)
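A minimal sketch of what valgrind is complaining about, written with
plain pthreads (libvirt wraps these in virMutexInit/virMutexLock): the
driver lock is taken without ever having been initialized, so the test
setup has to perform the initialization first:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct exampleDriver {
    pthread_mutex_t lock;
};

int main(void)
{
    /* malloc'd, so the lock contains uninitialized bytes until init */
    struct exampleDriver *driver = malloc(sizeof(*driver));
    if (!driver)
        return 1;

    pthread_mutex_init(&driver->lock, NULL);   /* the step that was missing */

    pthread_mutex_lock(&driver->lock);         /* safe only after init */
    printf("lock taken after proper initialization\n");
    pthread_mutex_unlock(&driver->lock);

    pthread_mutex_destroy(&driver->lock);
    free(driver);
    return 0;
}
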
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
For some machine types, ppc64 guests now require that memory sizes be
aligned to 256MiB increments (due to the dynamically reconfigurable
memory). Since we now treat existing configs reasonably with regard to
migration, we can round all the sizes unconditionally. The only
drawback is that the memory size of a VM can potentially increase by
(256MiB - 1 byte) * number_of_NUMA_nodes.
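A sketch of that worst case (hypothetical helper): a node size one byte
past a 256MiB boundary grows by almost a full increment, and the growth
can happen once per NUMA node:

#include <stdio.h>

#define ALIGN_256MIB (256ULL * 1024 * 1024)

static unsigned long long
roundUp256MiB(unsigned long long bytes)
{
    return ((bytes + ALIGN_256MIB - 1) / ALIGN_256MIB) * ALIGN_256MIB;
}

int main(void)
{
    unsigned long long node = ALIGN_256MIB + 1;   /* 256MiB + 1 byte */
    unsigned long long growth = roundUp256MiB(node) - node;

    /* growth == 256MiB - 1 byte; multiply by the number of NUMA nodes */
    printf("worst-case growth per NUMA node: %llu bytes\n", growth);
    return 0;
}
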
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1249006
When we are starting a qemu process for an incoming migration or for
reloading a snapshot, we should not modify the memory sizes in the
domain, since we could potentially change the guest ABI that was
carefully checked before. Additionally, the function now updates the
initial memory size according to the NUMA node sizes, which should not
happen if we are restoring state.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1252685
The flag was used only for formatting the XML, and once the parser and
formatter flags were split in commit 0ecd685109, it no longer makes
sense to keep it.
Use the new API to correctly add capability sets to the cache before
parsing XML files.
Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
The main purpose of this patch is to introduce a test mode for
virQEMUCapsCacheLookup(). This is done by adding a global variable
which effectively overrides the binary name. This variable is supposed
to be set by the test suite.
The second addition is the qemuTestCapsCacheInsert() function, which
allows the test suite to actually populate the cache.
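A reduced sketch of the override pattern (hypothetical names, not the
actual libvirt symbols): a file-scope variable that only the test suite
sets, consulted before the real binary name is used:

#include <stdio.h>

/* set only from the test suite; NULL in normal operation */
static const char *testEmulatorOverride;

static void exampleSetTestEmulator(const char *binary)
{
    testEmulatorOverride = binary;
}

static const char *exampleResolveEmulator(const char *binary)
{
    return testEmulatorOverride ? testEmulatorOverride : binary;
}

int main(void)
{
    printf("normal: %s\n", exampleResolveEmulator("/usr/bin/qemu-kvm"));
    exampleSetTestEmulator("/fake/qemu-for-tests");
    printf("test mode: %s\n", exampleResolveEmulator("/usr/bin/qemu-kvm"));
    return 0;
}
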
Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Two utility functions are introduced for proper initialization and
cleanup of the driver.
Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Tools such as libguestfs need the datacenter path to get access to
disk images. The ESX driver knows the correct datacenter path, but
this information cannot yet be accessed through the libvirt API, nor
can it be deduced from the connection URI in a robust way.
Expose the datacenter path in the domain XML as a <vmware:datacenterpath>
node, similar to the way the <qemu:commandline> node works. The new
node is ignored while parsing the domain XML; in contrast to
<qemu:commandline>, it is output only.
Commit id 'f1f68ca33' added code to remove the directory paths for
auto-generated sockets, but that code could be called before the paths
were created, resulting in error messages from virFileDeleteTree
indicating that the file doesn't exist.
Rather than forcing all callers to perform the non-NULL and existence
checks, modify the virFileDeleteTree API to silently ignore a NULL
input and non-existent directory trees.
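A sketch of the tolerant behaviour (hypothetical function that mirrors
what the description above says virFileDeleteTree should do): treat a
NULL path or a missing directory as a no-op rather than an error:

#include <stdio.h>
#include <sys/stat.h>

static int exampleDeleteTree(const char *path)
{
    struct stat sb;

    if (!path)
        return 0;                /* nothing to delete */
    if (stat(path, &sb) < 0)
        return 0;                /* already gone: silently succeed */

    /* ... otherwise recurse into the directory and remove it ... */
    return 0;
}

int main(void)
{
    printf("NULL path: %d\n", exampleDeleteTree(NULL));
    printf("missing dir: %d\n", exampleDeleteTree("/no/such/dir"));
    return 0;
}
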
We may want to make some decisions in drivers based on whether we are
running as a privileged user or not. Propagate this information there.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Mock libraries are not built with testutils.c, but there is one that
uses VIR_TEST_DEBUG. Because that debug message should really be an
error, changing it is not only more semantically correct, it also
keeps the mingw compiler happy, and it follows suit with all the other
mock libraries.
A few other things used in this file also need libvirt.la to be added
into LIBADD for mingw.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1260846
Introduced by 8fedbbdb: if we parse an out-of-order NUMA cell, we get
a segfault. This is because of the check for overlapping @cpus sets we
have there. Since the array holding guest NUMA cells is allocated up
front and therefore contains all zeros, an out-of-order cell breaks
our assumption that cell IDs are increasing. At that point we try to
access a still-NULL bitmap and therefore segfault.
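A reduced sketch of the crash and one possible guard (hypothetical
structures; the real parser works on virBitmap objects): because the
cell array is preallocated and zero-filled, the overlap check must not
touch cells that have not been parsed yet:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct exampleCell {
    unsigned char *cpumask;      /* NULL until that cell is parsed */
};

static bool
overlapsEarlierCell(struct exampleCell *cells, size_t cur)
{
    size_t i;

    for (i = 0; i < cur; i++) {
        if (!cells[i].cpumask)   /* out-of-order id: nothing to compare */
            continue;
        /* ... otherwise compare the bitmaps for overlapping CPUs ... */
    }
    return false;
}

int main(void)
{
    struct exampleCell cells[4] = { { NULL }, { NULL }, { NULL }, { NULL } };

    /* cell id 3 parsed first: earlier entries are still NULL */
    printf("overlap: %d\n", overlapsEarlierCell(cells, 3));
    return 0;
}
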
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Add a new interface type using UDP sockets. This seems applicable only
to QEMU, but the tree has been edited throughout to support the new
interface type.
The interface type required the addition of a "localaddr" (local
address), which then maps into the following XML and qemu invocation:
<interface type='udp'>
<mac address='52:54:00:5c:67:56'/>
<source address='127.0.0.1' port='11112'>
<local address='127.0.0.1' port='22222'/>
</source>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</interface>
QEMU call:
-net socket,udp=127.0.0.1:11112,localaddr=127.0.0.1:22222
Notice that the XML "local" element becomes the "localaddr" for the qemu call.
reference:
http://lists.gnu.org/archive/html/qemu-devel/2011-11/msg00629.html
Signed-off-by: Jonathan Toppins <jtoppins@cumulusnetworks.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
This patch adds a feature that lets LXC containers inherit namespaces.
This is very similar to what lxc-tools or Docker provides. See
"man lxc-start", where you will find that you can pass command args as
[ --share-[net|ipc|uts] name|pid ]. Or check out Docker's networking
options, where you can pass --net=container:NAME_or_ID to share a
namespace.
With this patch you can add an extra libvirt option to share
namespaces in the following way:
<lxc:namespace>
<lxc:sharenet type='netns' value='red'/>
<lxc:shareipc type='pid' value='12345'/>
<lxc:shareuts type='name' value='container1'/>
</lxc:namespace>
The netns option is specific to sharenet. It can be used to inherit
from an existing network namespace.
Co-authored: Daniel P. Berrange <berrange@redhat.com>
We forbid access to /usr/share/, but (at least on Debian-based systems)
the Open Virtual Machine Firmware files needed for booting UEFI virtual
machines in QEMU live in /usr/share/ovmf/. Therefore, we need to add
that directory to the list of read-only paths.
A similar patch was suggested by Jamie Strandboge <jamie@canonical.com>
on https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1483071.
The output of that function was not tested until now. In order to keep
the paths in /tmp, the test driver config is "fixed" as well.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
We are automatically generating some socket paths for domains, but all
those paths end up in a directory that's the same for multiple domains.
The problem is that multiple domains can each run with different
seclabels (users, selinux contexts, etc.). The idea here is to create a
per-domain directory labelled in a way that each domain can access its
own unix sockets.
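A minimal sketch of the idea (hypothetical path layout and helper; the
real code additionally applies the domain's seclabel to the directory):
give every domain its own private directory for the auto-generated
sockets so it can be labelled per domain:

#include <limits.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

static int
exampleMakeDomainSocketDir(const char *baseDir, const char *domName)
{
    char path[PATH_MAX];

    if (snprintf(path, sizeof(path), "%s/domain-%s",
                 baseDir, domName) >= (int)sizeof(path))
        return -1;

    /* 0700: only the domain's own user (and label) may reach its sockets */
    if (mkdir(path, 0700) < 0)
        return -1;

    return 0;
}

int main(void)
{
    return exampleMakeDomainSocketDir("/tmp", "example-guest") < 0;
}
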
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1146886
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
JSON data used to initialize tests in virnetdaemontest should be in a
consistent format, i.e. not using tabs for indentation; those should
be replaced by spaces.
The numad hint stored in priv->autoNodeset is information that gets
lost during a daemon restart. Because we would like to use that
information in the future, we also need to save it in the status XML.
For the sake of the tests, we need to initialize nnumaCell_max to some
value so that the restoration doesn't fail in our test suite. There is
no need to fill in the actual NUMA cell data, since the recalculating
function virCapabilitiesGetCpusForNodemask() will not fail; it will
just skip filling in the bitmap data, which we don't use in tests
anyway.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Commit a6f9af8292 added checking for address collisions between the
starting and ending addresses of forwarding addresses, but forgot that
there might be no addresses set at all.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
The upcoming commits will make heavy modifications to the ppc64
driver, split so that it's easier to review the changes.
Instead of updating the test cases so that they pass, possibly
only to update them again with the following commit, disable them
for the time being.
Another commit will update them all in one go once all required
changes are in place.
Limitations of the POWER architecture mean that you can't run,
e.g., a POWER7 guest on a POWER8 host when using KVM. This applies
to all guests, not just those using VIR_CPU_MATCH_STRICT in the
CPU definition; in fact, exact and strict CPU matching are
basically the same on ppc64.
This means, of course, that hosts using different CPUs have to be
considered incompatible as well.
Change ppc64Compute(), called by cpuGuestData(), to reflect this
fact and update test cases accordingly.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1250977
A test is considered successful if the obtained result matches
the expected result: if that's not the case, whether because a
test that was expected to succeed failed or because a test that
was supposed to fail succeeded, then something's not right and
we want the user to know about this.
On the other hand, if a failure that's unrelated to the bits
we're testing occurs, then the user should be notified even if
the test was expected to fail.
Use different values to tell these two situations apart.
Fix a test case that was wrongly expected to fail as well.
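A sketch of the distinction (hypothetical names): keep "the result did
not match the expectation" separate from "something unrelated went
wrong", so an expected failure can stay quiet while infrastructure
errors are always reported:

#include <stdio.h>

enum exampleTestResult {
    EXAMPLE_TEST_OK = 0,         /* result matched the expectation */
    EXAMPLE_TEST_MISMATCH = -1,  /* unexpected pass or unexpected failure */
    EXAMPLE_TEST_ERROR = -2,     /* unrelated failure: always tell the user */
};

static enum exampleTestResult
exampleEvaluate(int gotFailure, int expectFailure, int infraError)
{
    if (infraError)
        return EXAMPLE_TEST_ERROR;
    return gotFailure == expectFailure ? EXAMPLE_TEST_OK
                                       : EXAMPLE_TEST_MISMATCH;
}

int main(void)
{
    /* a test expected to fail that did fail is still a success */
    printf("%d\n", exampleEvaluate(1, 1, 0));
    return 0;
}
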
This patch modifies virSocketAddrGetRange() to function properly when
the containing network/prefix of the address range isn't known, for
example in the case of the NAT range of a virtual network (since it is
a range of addresses on the *host*, not within the network itself). We
then take advantage of this new functionality to validate the NAT
range of a virtual network.
Extra test cases are also added to verify that virSocketAddrGetRange()
works properly in both positive and negative cases when the network
pointer is NULL.
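A reduced sketch of the behaviour change, using plain IPv4 addresses
as integers (the real function works on virSocketAddr): when no
containing network/prefix is known, only sanity-check the range itself
instead of rejecting it:

#include <stdio.h>

/* returns the number of addresses in [start, end], or -1 on error */
static long long
exampleGetRange(unsigned int start, unsigned int end,
                const unsigned int *network, int prefix)
{
    if (end < start)
        return -1;

    if (network) {
        /* containment check is only possible when a network is given */
        unsigned int mask = prefix > 0 ? ~0U << (32 - prefix) : 0;

        if ((start & mask) != (*network & mask) ||
            (end & mask) != (*network & mask))
            return -1;
    }

    return (long long)end - start + 1;
}

int main(void)
{
    /* NAT range lives on the host, so passing NULL for the network is fine */
    printf("%lld addresses\n",
           exampleGetRange(0x0a000001U, 0x0a0000feU, NULL, 0));
    return 0;
}
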
This is the *real* fix for:
https://bugzilla.redhat.com/show_bug.cgi?id=985653
Commits 1e334a and 48e8b9 had earlier been pushed as fixes for that
bug, but I had neglected to read the report carefully, so instead of
fixing validation for the NAT range, I had fixed validation for the
DHCP range. sigh.
Since its introduction in 2011 (particularly in commit f4324e3292),
the option hasn't worked. It just effectively disables all incoming
connections. That's because the client private data containing the
'keepalive_supported' boolean are initialized to zeroes, so the bool
is false, and the only other place where the bool is used is when
checking whether the client supports keepalive. Thus, according to the
server, no client supports keepalive.
Removing this instead of fixing it is better because a) apparently
nobody has tried it since 2011 (one month short of 4 years) and b) we
cannot know whether the client supports keepalive until we get a ping
or pong keepalive packet, and that won't happen until after we have
dispatched the ConnectOpen call.
Another two reasons are c) keepalive_required was tracked at the
server level, but keepalive_supported was in the client's private
data, as was the check made in the remote layer, thus making all other
instances of virNetServer miss this feature unless they each
implemented it themselves, and d) we can always add it back if there
is a request and a use case for it.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
This controller can be connected only to a port on a
pcie-switch-upstream-port. It provides a single hotpluggable port that
will accept any PCI or PCIe device, as well as any device requiring a
pcie-*-port (the only current example of such a device is the
pcie-switch-upstream-port).
The downstream ports of an x3130-upstream switch can each have one of
these plugged into them (and that is the only place they can be
connected). Each xio3130-downstream provides a single PCIe port that
can have PCI or PCIe devices hotplugged into it. Apparently an entire
set of x3130-upstream + several xio3130-downstreams can be hotplugged
as a unit, but it's not clear to me yet how that would be done, since
qemu only allows attaching a single device at a time.
This device will be used to implement the
"pcie-switch-downstream-port" model of pci controller.
This controller can be connected only to a pcie-root-port or a
pcie-switch-downstream-port (which will be added in a later patch),
which is the reason for the new connect type
VIR_PCI_CONNECT_TYPE_PCIE_PORT. A pcie-switch-upstream-port provides
32 ports (slot=0 to slot=31) on the downstream side, which can only
have pci controllers of model "pcie-switch-downstream-port" plugged
into them, which is the reason for the other new connect type
VIR_PCI_CONNECT_TYPE_PCIE_SWITCH.
This is the upstream part of a PCIe switch. It connects to a PCIe port
(but not PCI) on the upstream side, and can have up to 31
xio3130-downstream controllers (but no other types of devices)
connected to its downstream side.
This device will be used to implement the "pcie-switch-upstream-port"
model of pci controller.
This is backed by the qemu device ioh3420.
The chassis and port values from the <target> subelement are used to
store/set the respective qemu device options for the ioh3420.
Currently, chassis is set to the index of the controller, and port is
set to "(slot << 3) + function" (per a suggestion from Alex Williamson).