Currently 'virsh perf domain' errors out because the perf nparams value is
incorrectly compared against REMOTE_DOMAIN_MEMORY_PARAMETERS_MAX
instead of REMOTE_DOMAIN_PERF_EVENTS_MAX.
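As a rough illustration (not the exact hunk; the surrounding names are
hypothetical), the deserialization check should bound the parameter count
with the perf-specific limit:

    if (ret.params.params_len > REMOTE_DOMAIN_PERF_EVENTS_MAX) {
        virReportError(VIR_ERR_RPC, "%s",
                       _("returned number of parameters exceeds limit"));
        goto cleanup;
    }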
Signed-off-by: Nitesh Konkar <nitkon12@linux.vnet.ibm.com>
We want pcie-root-ports to be used when available in QEMU,
but at the same time we need to ensure that hosts running
older QEMU releases keep working and that the user can
override the default at any time.
Add a comment for the original pcie-root-port test cases
to make it clear how these new test cases are different.
ioh3420 is emulated Intel hardware, so it always looked
quite out of place in aarch64/virt guests. Even for x86/q35
guests, the recently-introduced pcie-root-port is a better
choice because, unlike ioh3420, it doesn't require IO space
(a fairly constrained resource) to work.
If pcie-root-port is available in QEMU, use it; ioh3420 is
still used as a fallback when pcie-root-port is not
available.
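A minimal sketch of the selection logic (capability and model constants are
shown as they are conventionally named in libvirt; treat the exact
identifiers as assumptions):

    /* prefer the generic device, fall back to the Intel-specific one */
    if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_DEVICE_PCIE_ROOT_PORT))
        modelName = VIR_DOMAIN_CONTROLLER_PCI_MODEL_NAME_PCIE_ROOT_PORT;
    else
        modelName = VIR_DOMAIN_CONTROLLER_PCI_MODEL_NAME_IOH3420;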
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1408808
QEMU 2.9 introduces the pcie-root-port device, which is
a generic version of the existing ioh3420 device.
Make the new device available to libvirt users.
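For reference, the device shows up in the domain XML as a PCI controller
model, e.g.:

    <controller type='pci' model='pcie-root-port'/>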
If the SASL config does not have any mechanisms, we currently
just report an empty list to the client, which will then
fail to identify a usable mechanism. This is a server config
error, so we should fail immediately on the server side.
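Roughly speaking (sketch only, names hypothetical), the server-side check
boils down to:

    /* fail early if the SASL config yields no mechanisms at all */
    if (!mechlist || *mechlist == '\0') {
        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                       _("no SASL mechanisms are available"));
        goto error;
    }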
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
When providing explicit x509 cert/key paths in libvirtd.conf,
the user must provide all three of them (CA certificate, server
certificate and server key). If one or more is missing, this leads
to obscure errors at runtime when negotiating the TLS session.
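For example, libvirtd.conf needs all three of (paths illustrative, matching
the usual defaults):

    ca_file   = "/etc/pki/CA/cacert.pem"
    cert_file = "/etc/pki/libvirt/servercert.pem"
    key_file  = "/etc/pki/libvirt/private/serverkey.pem"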
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Linux still defaults to a 1024 open file handle limit. This causes
scalability problems for libvirtd / virtlockd / virtlogd on large
hosts which might want > 1024 guests to be running. In fact, if each
guest needs > 1 FD, we can't even get to 500 guests. This is not
good enough when we see machines with hundreds of physical cores and
TBs of RAM.
In comparison to other memory requirements of libvirtd & related
daemons, the resource usage associated with open file handles
is essentially line noise. It is thus reasonable to increase the
limits unconditionally for all installs.
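One way to express such a bump is via the systemd unit files, along these
lines (the exact value and mechanism in the actual change may differ; this
is a sketch):

    [Service]
    LimitNOFILE=8192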
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
There were a couple of reports on the list (e.g. [1]) that guests
with huge amounts of RAM are unable to start because libvirt
kills qemu in the initialization phase. The problem is that if the
guest is configured to use hugepages, the kernel has to zero them
all out before handing them over to the qemu process. For instance,
402GiB worth of 1GiB pages took around 105 seconds (~3.8GiB/s).
Since we do not want to make the timeout for connecting to the
monitor configurable, we have to teach libvirt to take this fact
into account. This commit implements the "1s per each 1GiB of RAM"
approach suggested here [2].
1: https://www.redhat.com/archives/libvir-list/2017-March/msg00373.html
2: https://www.redhat.com/archives/libvir-list/2017-March/msg00405.html
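A back-of-the-envelope sketch of the idea (the constant floor is an
assumption, not necessarily what the code uses):

    /* one second per GiB of configured RAM, never below the old 30s floor */
    unsigned long long gib = virDomainDefGetMemoryTotal(vm->def) / (1024 * 1024);
    unsigned long long timeout = gib > 30 ? gib : 30;  /* 402GiB guest => 402s */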
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
While connecting to the qemu monitor, the first thing we do is wait
for it to show up. However, we do so with a timeout to avoid
indefinite waits (e.g. when qemu doesn't create the monitor socket
at all). After beaa447a29 we are using an exponential back off for
that timeout, meaning that after the first connection attempt we
wait 1ms, then 2ms, then 4ms and so on. This allows us to bring down
the wait time for small domains where qemu initializes quickly.
However, on the other end of this scale are domains with huge
amounts of guest memory. Now imagine that we've gotten up to a wait
time of 15 seconds. The next one is going to be 30 seconds, and the
one after that a whole minute. Well, okay - with the current code we
are not going to wait longer than 30 seconds in total, but this is
going to change in the next commit.
The exponential back off is usable only for the first few iterations.
Then it needs to be capped (one second was chosen as the limit)
and we switch to a constant wait time.
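In pseudo-C the capped back off looks roughly like this (helper name and
constants are illustrative):

    unsigned long long timeout_us = 1000;          /* start at 1ms */
    while (!monitorSocketExists()) {               /* hypothetical helper */
        usleep(timeout_us);
        if (timeout_us < 1000 * 1000)              /* cap the step at 1 second */
            timeout_us *= 2;
    }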
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Fix a "bug" in the storage pool test driver code which "assumed"
testStoragePoolObjSetDefaults should fill in the configFile for
both the Define/Create (persistent) and CreateXML (transient) pools
by just VIR_FREE()'ing it during CreateXML. Because the configFile
was filled in, during Destroy the pool wouldn't be free'd which
could cause issues for future patches which add tests to validate
vHBA creation for the storage pool using the same name.
Commit id 'bb74a7ffe' added a fairly nonspecific error message for the
case where only <parent wwnn='xxx'/> or <parent wwpn='xxx'/> is provided
instead of both wwnn and wwpn. This patch just modifies the message to be
more specific about which attribute is missing.
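That is, when identifying the parent by WWNs, both attributes have to be
present, e.g. (values illustrative):

    <parent wwnn='20000000c9831b4b' wwpn='10000000c9831b4b'/>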
Rather than returning true/false and having the caller check whether the
vHBA was actually created, let's do that check within the CreateVport
function. That way the caller can faithfully assume success based
on a non-NULL name and start the thread looking for the LUNs. Prior to
this change it was possible that the vHBA wasn't really created (e.g. if
the call to virVHBAGetHostByWWN returned NULL); we'd claim success, but
in reality there'd be no vHBA for the pool. This also fixes a second,
as yet unseen, issue: if the nodedev was present, but the parent wasn't
provided by name (perhaps it was provided by wwnn/wwpn or by fabric_name),
then a failure would be returned. For this path it shouldn't be an error -
we should just be happy that something else is managing the device and we
don't have to create/delete it.
The end result is that the createVport code can now just start the
refresh thread once it gets a non-NULL name back.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Move the bulk of createVport and rename it to virNodeDeviceCreateVport.
Remove deleteVport entirely and replace it with virNodeDeviceDeleteVport.
Signed-off-by: John Ferlan <jferlan@redhat.com>
The function is actually in virutil.c, but prototyped in virfile.h.
This patch fixes that by renaming the function to virWaitForDevices,
adding the prototype in virutil.h and libvirt_private.syms, and then
changing the callers to use the new name.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Move the virStoragePoolSourceAdapter from storage_conf.h and rename
to virStorageAdapter.
Continue with code realignment for brevity and flow.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rework the code to use the new FCHost specific adapter structures.
Also rework the parameters to only pass what's needed, leaving the logic
for the adapter type and the decision to call the helpers in the caller.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rework the helpers/APIs to use the FCHost and SCSIHost adapter types.
Continue to realign the code for shorter lines.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rework the helpers/APIs to use the FCHost and SCSIHost adapter types.
Continue to realign the code for shorter lines.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rather than have lots of ugly inline code, create helpers to try and
make things more readable. While creating the helpers realign the code
as necessary.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rather than have lots of ugly inline code, create helpers to try and
make things more readable. While creating the helpers realign the code
as necessary.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rather than use virXPathString, pass along a virXPathNode and alter
the parsing to use virXMLPropString.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Split out the code that munges through the storage pool adapter into
helpers - it's about to be moved into its own source file.
This is purely code motion at this point.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Commit id 'bb74a7ffe' added some new fields to search for a fchost by
parent wwnn/wwpn or parent_fabric_name, but neglected to validate that
the data within the fields was valid at parse time. This could lead to
eventual failure at run time, so rather than have the failure then, let's
validate now.
Signed-off-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1428209
Commit id 'bb74a7ffe' neglected to check that both parent_wwnn and
parent_wwpn are in the XML when one or the other is provided, similar
to how the node device code checks (commit id '2b13361bc').
If only one is provided, the "default" is to use a vHBA capable
adapter (see commit id '78be2e8b'), so the vHBA could start, but
perhaps not on the expected adapter.
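In other words the pool XML should carry both attributes on the adapter,
e.g. (values illustrative):

    <adapter type='fc_host' parent_wwnn='20000000c9831b4b'
             parent_wwpn='10000000c9831b4b'
             wwnn='20000000c9831b4c' wwpn='10000000c9831b4c'/>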
Signed-off-by: John Ferlan <jferlan@redhat.com>
RFC 6331 documents a number of serious security weaknesses in
the SASL DIGEST-MD5 mechanism. As such, libvirtd should not
be using it as a default mechanism. GSSAPI is the only other
viable SASL mechanism that can provide secure session encryption,
so enable that by default as the replacement.
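With this change the shipped SASL config would contain something along the
lines of (snippet illustrative):

    # /etc/sasl2/libvirt.conf
    mech_list: gssapi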
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Some users might want to pass a blockdev or a chardev as the
backend for an NVDIMM. In fact, this is expected to be the most
common configuration. Therefore libvirt should allow the backing
device in the devices CGroup.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Now that we have APIs for relabelling memdevs on hotplug, fill in the
missing implementation in the qemu hotplug code.
The qemuSecurity wrappers might look like overkill for now,
because the qemu namespace code does not deal with nvdimms yet.
Nor does our cgroup code. But hey, there's the cgroup_device_acl
variable in qemu.conf. If users add their /dev/pmem* device in
there, the device is allowed in the cgroup and created in the
namespace, so they can successfully pass it through to the domain.
It doesn't look like overkill after all, does it?
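For instance (device path illustrative), qemu.conf could carry:

    cgroup_device_acl = [
        "/dev/null", "/dev/full", "/dev/zero",
        "/dev/random", "/dev/urandom",
        "/dev/ptmx", "/dev/kvm",
        "/dev/pmem0"
    ]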
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When a domain is being started up, we ought to relabel the host
side of the NVDIMM so that qemu has access to it.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When a domain is being started up, we ought to relabel the host
side of the NVDIMM so that qemu has access to it.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
For NVDIMM devices it is optionally possible to specify the size
of internal storage for namespaces. Namespaces are a feature that
allows users to partition the NVDIMM for different uses.
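In the domain XML this is expressed as an extra sub-element of <target/>;
a sketch (element layout and sizes illustrative):

    <target>
      <size unit='KiB'>523264</size>
      <label>
        <size unit='KiB'>128</size>
      </label>
    </target>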
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Now that NVDIMM has found its way into libvirt, users might want
to fine tune some settings for each module separately. One such
setting is 'share=on|off' for the memory-backend-file object.
This setting - just as its name already suggests - enables
sharing the nvdimm module with other applications. Under the hood
it controls whether qemu mmap()s the file as MAP_PRIVATE or
MAP_SHARED.
Yet again, we have such a config knob in the domain XML, but it's
just an attribute of the numa <cell/>. This does not give fine
enough tuning on a per-memory-device basis, so we need to have the
attribute for each device too.
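Under the hood this maps onto the share property of the backend object on
the qemu command line, e.g. (sketch, ids and sizes illustrative):

    -object memory-backend-file,id=memnvdimm0,mem-path=/tmp/nvdimm,size=512M,share=on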
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
So, the majority of the code is ready as-is. Well, with one
slight change: differentiate between dimm and nvdimm in places
like device alias generation, command line generation and so on.
Speaking of the command line, we also need to append 'nvdimm=on'
to the '-machine' argument so that the nvdimm feature is
advertised in the ACPI tables properly.
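Putting it together, the generated command line ends up containing
something along these lines (machine type, ids and sizes illustrative):

    -machine pc,nvdimm=on \
    -object memory-backend-file,id=memnvdimm0,mem-path=/tmp/nvdimm,size=512M \
    -device nvdimm,memdev=memnvdimm0,id=nvdimm0,slot=0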
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
NVDIMM is a new type of memory introduced in QEMU 2.6. The idea
is that we have a Non-Volatile memory module that keeps the data
persistent across domain reboots.
At the domain XML level, we already have some representation of
'dimm' modules. Long story short, NVDIMM will utilize the
existing <memory/> element that lives under <devices/> by adding
a new value 'nvdimm' to the existing @model attribute and
introducing a new <path/> element under <source/>, while reusing
the other fields. The resulting XML would appear as:
  <memory model='nvdimm'>
    <source>
      <path>/tmp/nvdimm</path>
    </source>
    <target>
      <size unit='KiB'>523264</size>
      <node>0</node>
    </target>
    <address type='dimm' slot='0'/>
  </memory>
So far, this is just an XML parser/formatter extension. The QEMU
driver implementation is in the next commit.
For more info on NVDIMM visit the following web page:
http://pmem.io/
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Frankly, this function is one big mess: a lot of arguments and
complicated behaviour. It's really surprising that the arguments
were in random order (input and output arguments were mixed
together), the documentation was outdated, and the description of
return values was bogus.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>