Commit f7afeddc added code to report to systemd an array of interface
indexes for all tap devices used by a guest. Unfortunately, not only did
it fail to add code to report the ifindexes for macvtap interfaces
(interface type='direct') or the tap devices used by type='ethernet',
it also ended up sending "-1" as the ifindex for each macvtap or hostdev
interface. This resulted in a failure to start any domain that had a
macvtap or hostdev interface (or actually any type other than
"network" or "bridge").
This patch does the following with the nicindexes array:
1) Modify qemuBuildInterfaceCommandLine() to only fill in the
nicindexes array if given a non-NULL pointer to an array (and modify
the test jig calls to the function to send NULL). This is because
there are tests in the test suite that have type='ethernet' and still
have an ifname specified, but that device of course doesn't actually
exist on the test system, so attempts to call virNetDevGetIndex() will
fail.
2) Even then, only add an entry to the nicindexes array for the
appropriate types ("network", "bridge", and "direct"), and only if the
ifname is known (since that is required to call virNetDevGetIndex());
see the sketch below.
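A minimal sketch of that logic (variable names and surrounding details
are assumptions, not the verbatim libvirt code):

    /* Only record an ifindex for types that have a host-side tap/macvtap
     * device, and only when its name is actually known. */
    if (nicindexes && nnicindexes && net->ifname &&
        (actualType == VIR_DOMAIN_NET_TYPE_NETWORK ||
         actualType == VIR_DOMAIN_NET_TYPE_BRIDGE ||
         actualType == VIR_DOMAIN_NET_TYPE_DIRECT)) {
        int nicindex = 0;

        if (virNetDevGetIndex(net->ifname, &nicindex) < 0 ||
            VIR_APPEND_ELEMENT(*nicindexes, *nnicindexes, nicindex) < 0)
            goto cleanup;
    }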
Previously this function relied on having ATTRIBUTE_NONNULL(1) in its
prototype rather than explicitly checking for a null
ifname. Unfortunately, ATTRIBUTE_NONNULL is just a hint to the
optimizer and to code analyzers like Coverity; it doesn't actually check
anything at execution time, so the result was possible warnings from
Coverity, along with the possibility of null dereferences when ifname
wasn't available.
This patch removes the ATTRIBUTE_NONNULL from the prototype, and
checks ifname inside the function, logging an error if it's NULL (once
we've determined that the user really is trying to set a bandwidth).
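Roughly like this (the exact error message wording is an assumption):

    if (!ifname) {
        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                       _("cannot set bandwidth on an interface with no name"));
        return -1;
    }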
libvirt was unconditionally calling virNetDevBandwidthClear() for
every interface (and network bridge) of a type that supported
bandwidth, whether it actually had anything set or not. This doesn't
hurt anything (unless ifname == NULL!), but is wasteful.
This patch makes sure that all calls to virNetDevBandwidthClear() are
qualified by checking that the interface really had some bandwidth
setup done, and checks for a null ifname inside
virNetDevBandwidthClear(), silently returning success if it is null
(as well as removing the ATTRIBUTE_NONNULL from that function's
prototype, since we can't guarantee that it is never null,
e.g. sometimes a type='ethernet' interface has no ifname as it is
provided on the fly by qemu).
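The corresponding guard inside virNetDevBandwidthClear() then amounts
to something like:

    /* at the top of virNetDevBandwidthClear() */
    if (!ifname)
        return 0;   /* nothing on the host side to clear; silently succeed */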
The element wasn't declared under the interleave, so it was always
required to come first. This made it inconvenient when manually pasting
new stuff into the XML in the "wrong" place.
The "virtio-mmio" is perfectly valid address type which we parse and
format correctly, but it's missing in our RNG schemas, hence editing a
domain with device having such address fails the validation.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
If the qemu binary on x86 does not support the lsi SCSI controller,
but it supports virtio-scsi, we reject the virtio-specific attributes
for no reason.
Move the default controller assignment before the check.
https://bugzilla.redhat.com/show_bug.cgi?id=1168849
https://bugzilla.redhat.com/show_bug.cgi?id=1183869
Soo, you've successfully started yourself a domain. And since you want
to use it on your host exclusively, you are confident enough to pass
through the host CPU model, like this:
<cpu mode='host-passthrough'/>
Then, after a while, you want to save the domain into a file (e.g.
virsh save dom dom.save). And here comes the trouble. The file consists
of two parts: Libvirt header (containing domain XML among other
things), and qemu migration data. Now, the domain XML in the header is
formatted using special flags (VIR_DOMAIN_XML_SECURE |
VIR_DOMAIN_XML_UPDATE_CPU | VIR_DOMAIN_XML_INACTIVE |
VIR_DOMAIN_XML_MIGRATABLE).
Then, on your way back from the bar, you think of changing something
in the XML in the saved file (we have a command for it after all), say
listen address for graphics console. So you successfully type in the
command:
virsh save-image-edit dom.save
Change all the bits, and exit the editor. But instead of success
you're left with a sad error message:
error: unsupported configuration: Target CPU model <null> does not
match source Pentium Pro
Sigh. Digging into the code you see the lines where we check for ABI
stability. The new XML you've produced is compared with the old one
from the saved file to see if qemu ABI will break or not. Wait, what?
We are using different flags to parse the XML you've provided so we
were just lucky it worked in some cases? Yep, that's right.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Well, it's not that we are formatting invalid XML, it's just not as
beautiful as it could be:
<cpu mode='host-passthrough'>
</cpu>
If there are no children, let's use the singleton element instead:
<cpu mode='host-passthrough'/>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Well, so far there are no variables to free, no cleanup work needed on
an error, so a bare 'return -1;' after each error is just okay. But this
will change in a while.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This API joins the following two lines:
char *s = virBufferContentAndReset(buf1);
virBufferAdd(buf2, s, -1);
into one:
virBufferAddBuffer(buf2, buf1);
With one exception: there's no re-indentation applied to @buf1.
The idea is that, in general, the two buffers can have different
indentation (as the test I'm adding demonstrates).
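A hypothetical usage (buffer names and content are made up) showing
that the appended content keeps its own indentation and that the source
buffer is reset by the call:

    virBuffer parent = VIR_BUFFER_INITIALIZER;
    virBuffer child = VIR_BUFFER_INITIALIZER;

    virBufferAdjustIndent(&child, 4);
    virBufferAddLit(&child, "<child/>\n");

    virBufferAddLit(&parent, "<parent>\n");
    virBufferAddBuffer(&parent, &child);   /* child's content appended as-is,
                                              no re-indentation; child is reset */
    virBufferAddLit(&parent, "</parent>\n");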
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
In commit cc41c648 I've re-factored qemuMonitorFindBalloonObjectPath, but
missed that there is a memory leak. The "nextpath" variable is
overwritten on each iteration of the for loop, so we have to free it
before the next iteration.
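In other words, the fix boils down to something like this (the loop
details are illustrative, not the exact code):

    for (i = 0; i < n; i++) {
        VIR_FREE(nextpath);    /* release the string from the previous iteration */
        if (virAsprintf(&nextpath, "%s/%s", curpath, paths[i]->name) < 0)
            goto cleanup;
        /* ... descend into / inspect nextpath ... */
    }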
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1151942
While the restriction doesn't originate in any RFC, it matters to us
when constructing the dnsmasq config file (or, previously, the command
line). To give a better picture, this is what the corresponding part of
the network XML looks like:
<dns>
  <forwarder addr='8.8.4.4'/>
  <txt name='example' value='example value'/>
</dns>
And this is what the config file then looks like:
server=8.8.4.4
txt-record=example,example value
Now we can see why there can't be any commas in the TXT name.
They are used by dnsmasq to separate @name and @value.
Funny, we have it in the documentation, but the code (which was
pushed back in 2011) didn't reflect that.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Making use of the ARCH_IS_S390 macro introduced with commit
e808357528.
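That is, open-coded arch checks get replaced along these lines (the
surrounding statement is illustrative only):

    /* before */
    if (def->os.arch == VIR_ARCH_S390 || def->os.arch == VIR_ARCH_S390X)
        return false;

    /* after */
    if (ARCH_IS_S390(def->os.arch))
        return false;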
Signed-off-by: Stefan Zimmermann <stzi@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Since s390 does not support USB, the default creation of a USB
controller for a domain should not occur.
Also adjust the s390 test cases by removing USB device instances,
since USB devices are no longer created by default for s390.
Signed-off-by: Stefan Zimmermann <stzi@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
The description of the virsh command 'cpu-models' was written in the
wrong context (i.e. next to the domain states).
This patch moves the command description next to the CPU-related
commands such as 'cpu-baseline' and 'cpu-compare'.
Signed-off-by: Daniel Hansel <daniel.hansel@linux.vnet.ibm.com>
Well, imagine domains were running, and as the host went down, they
were managesaved. Later, after some time, the host went up again and
domains got restored. But without the correct time. And depending on
how long the host was shut off, it may take some time for ntp to sync
the time too. But hey, wait a minute. We have an API just for that! So:
1) Introduce SYNC_TIME variable in libvirt-guests.sysconf to allow
users control over the new functionality
2) Call 'virsh domtime --sync $dom' in the libvirt-guests script.
Unfortunately, this is an all-or-nothing approach (just like anything
else with the script). Domains are required to have qemu-ga configured
and running inside.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This implements handling of the <backenddomain name=''/> parameter
introduced in the previous patch.
It works on Xen >= 4.3, because only there does libxl support setting
the backend domain by name. Specifying the backend domain by ID or UUID
is currently not supported.
Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
At least Xen supports backend drivers in another domain (aka "driver
domain"). This patch introduces an XML config option for specifying the
backend domain name for <disk> and <interface> devices. E.g.:
<disk>
  <backenddomain name='diskvm'/>
  ...
</disk>
<interface type='bridge'>
  <backenddomain name='netvm'/>
  ...
</interface>
In the future, the same option will be needed for USB devices (hostdev
objects), but for now libxl doesn't have support for PVUSB.
Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
The function that parses the <forward> subelement of a network used to
fail/log an error if the network definition contained both a <pf>
element and at least one <interface> or <address> element. That
check was present because the configuration of a network should have
either one <pf>, one or more <interface>, or one or more <address>,
but never combinations of multiple kinds.
This caused a problem when libvirtd was restarted with a network
already active - when a network with a <pf> element is started, the
referenced PF (Physical Function of an SRIOV-capable network card) is
checked for VFs (Virtual Functions), and the <forward> is filled in
with a list of all VFs for that PF either in the form of their PCI
addresses (a list of <address>) or their netdev names (a list of
<interface>); the <pf> element is not removed though. When libvirtd is
restarted, it parses the network status and finds both the original
<pf> from the config, as well as the list of either <address> or
<interface>, fails the parse, and the network is not added to the
active list. This failure is often obscured because the network is
marked as autostart so libvirt immediately restarts it.
It seems odd to me that <interface> and <address> are stored in the
same array rather than in two separate arrays; having separate arrays
would have made the check much simpler. However,
changing to use two separate arrays would have required changes in
more places, potentially creating more conflicts and (more
importantly) more possible regressions in the event of a backport, so
I chose to keep the existing data structure in order to localize the
change.
It appears that this problem has been in the code ever since support
for <pf> was added (0.9.10), but until commit
34cc3b2f10 (first in libvirt 1.2.4)
networks with interface pools were not properly marked as active on
restart anyway, so there is no point in backporting this patch any
further than that.
Later patches will need to access the full definition to check the
memory size, and thus the checking needs to be done after the whole
definition including devices is known.
For historical reasons data regarding NUMA configuration were split
between the CPU definition and numatune. We cannot do anything about the
XML still being split, but we certainly can at least store the relevant
data in one place.
This patch moves the NUMA stuff to the right place.
As virDomainNumatuneSet no longer allocates the virDomainNuma object,
it's not necessary to pass a pointer to a pointer to store the object,
since the pointer will no longer change.
While touching the parameter definitions I've also changed the name of
the parameter to "numa".
Since our formatter now copes well with a config that is allocated but
not filled, we can safely always allocate the NUMA config and remove
the ad-hoc allocation code.
This will help in later patches as the parser will be refactored to just
fill the data.
Move the existing virDomainDefNew to virDomainDefNewFull, as it sets a
few things in the conf, and re-introduce virDomainDefNew as a function
without parameters for common use.
Do a content-aware check of whether formatting the <numatune> element
is necessary. Later on the def->numa structure will always be present,
so we cannot decide only on the basis of whether it's allocated.
Shuffling the logic around allows simplifying the code quite a bit.
As an additional bonus, the change in the logic now reports an error if
automatic placement is selected and individual placement is configured.
Currently the code would exit without reporting an error, since
virBitmapParse reports one only if it fails to parse the bitmap,
whereas the code was jumping to the error label even when 0 cpus were
correctly parsed from the map.
It's easier to recalculate the number in the one place it's used than
to have a separate variable tracking it. It will also help with moving
the NUMA code to a separate module.
Name it virNumaMemAccess and add it to conf/numa_conf.[ch].
Note that to avoid a circular dependency the type of the NUMA cell
memAccess variable was changed to int. It will be turned back later,
once the circular dependency no longer exists.
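A sketch of what the moved enum looks like in conf/numa_conf.h (the
exact value names are assumptions):

    typedef enum {
        VIR_NUMA_MEM_ACCESS_DEFAULT,
        VIR_NUMA_MEM_ACCESS_SHARED,
        VIR_NUMA_MEM_ACCESS_PRIVATE,

        VIR_NUMA_MEM_ACCESS_LAST
    } virNumaMemAccess;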
The mask was stored both as a bitmap and as a string. The string is used
for XML output only. Remove the string, as it can be reconstructed from
the bitmap.
The test change is necessary because the bitmap formatter doesn't
"optimize" using the '^' operator (e.g. a mask written as "0-3,^2" in
the input XML is re-formatted as "0-1,3").
Rewrite the function to save a few local variables and reorder the code
to make more sense.
Additionally, the ncells_max member of the virCPUDef structure is used
only for tracking allocation when parsing the NUMA definition; this can
be avoided by switching to VIR_ALLOC_N, as the array is not resized
after the initial allocation.
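I.e., instead of growing the cell array element by element while
tracking ncells_max, the parser can allocate it once up front, roughly
(field and variable names are assumptions):

    /* n is the number of <cell> elements found in the XML */
    if (VIR_ALLOC_N(def->cells, n) < 0)
        goto cleanup;
    def->ncells = n;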