There were a couple of reports on the list (e.g. [1]) that guests
with huge amounts of RAM are unable to start because libvirt
kills qemu in the initialization phase. The problem is that if
the guest is configured to use hugepages, the kernel has to zero
them all out before handing them over to the qemu process. For
instance, 402GiB worth of 1GiB pages took around 105 seconds
(~3.8GiB/s). Since we do not want to make the timeout for
connecting to the monitor configurable, we have to teach libvirt
to account for this fact. This commit implements the "1s per each
1GiB of RAM" approach suggested here [2].
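To make the rule concrete, a minimal sketch of the scaling (the
function and variable names are hypothetical, not the actual
libvirt code):

  /* Base connect timeout plus one extra second per GiB of guest RAM. */
  unsigned long long
  monitorTimeoutForMemory(unsigned long long memKiB)
  {
      unsigned long long timeout = 30;       /* base timeout, in seconds */

      timeout += memKiB / (1024 * 1024);     /* +1s per each 1GiB of RAM */
      return timeout;
  }

For the 402GiB guest above this yields roughly 30 + 402 seconds,
comfortably above the ~105s the kernel needed to zero the pages.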
1: https://www.redhat.com/archives/libvir-list/2017-March/msg00373.html
2: https://www.redhat.com/archives/libvir-list/2017-March/msg00405.html
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
While connecting to the qemu monitor, the first thing we do is
wait for it to show up. However, we do so with a timeout to avoid
waiting indefinitely (e.g. when qemu doesn't create the monitor
socket at all). After beaa447a29 we use an exponential back off
for this timeout, meaning that after the first connection attempt
we wait 1ms, then 2ms, then 4ms and so on. This brings down the
wait time for small domains where qemu initializes quickly.
However, on the other end of the scale are domains with huge
amounts of guest memory. Now imagine that we've gotten up to a
wait time of 15 seconds. The next one is going to be 30 seconds,
and the one after that a whole minute. Well, okay - with the
current code we are not going to wait longer than 30 seconds in
total, but this is going to change in the next commit.
The exponential back off is usable only for the first few
iterations. After that it needs to be capped (one second was
chosen as the limit), switching to a constant wait time.
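A sketch of the capped back off (illustrative only; the
monitorSocketExists() probe stands in for the real connection
attempt):

  #include <poll.h>
  #include <stdbool.h>

  extern bool monitorSocketExists(void);    /* hypothetical probe */

  static void
  waitForMonitor(unsigned long long timeout_ms)
  {
      unsigned long long waited = 0;
      unsigned long long delay = 1;          /* first wait: 1ms */

      while (waited < timeout_ms && !monitorSocketExists()) {
          poll(NULL, 0, (int) delay);        /* sleep @delay milliseconds */
          waited += delay;
          delay *= 2;                        /* exponential back off... */
          if (delay > 1000)
              delay = 1000;                  /* ...capped at one second */
      }
  }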
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Fix a "bug" in the storage pool test driver code which "assumed"
testStoragePoolObjSetDefaults should fill in the configFile for
both the Define/Create (persistent) and CreateXML (transient) pools
by just VIR_FREE()'ing it during CreateXML. Because the configFile
was filled in, during Destroy the pool wouldn't be free'd which
could cause issues for future patches which add tests to validate
vHBA creation for the storage pool using the same name.
Commit id 'bb74a7ffe' added a fairly nonspecific message for when
only <parent wwnn='xxx'/> or <parent wwpn='xxx'/> is provided
instead of both wwnn and wwpn. This patch just modifies the
message to be more specific about which attribute is missing.
Rather than returning true/false and having the caller check if
the vHBA was actually created, let's do that check within the
CreateVport function. That way the caller can faithfully assume
success based on a non-NULL name and start the thread looking for
the LUNs. Prior to this change it was possible that the vHBA
wasn't really created (e.g. if the call to virVHBAGetHostByWWN
returned NULL); we'd claim success, but in reality there'd be no
vHBA for the pool. This also fixes a second, as yet unseen, issue:
if the nodedev was present, but the parent by name wasn't provided
(perhaps parent by wwnn/wwpn or by fabric_name), then a failure
would be returned. For this path it shouldn't be an error - we
should just be happy that something else is managing the device
and we don't have to create/delete it.
The end result is that the createVport code can now just start the
refresh thread once it gets a non-NULL name back.
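The reworked contract, as a hedged sketch (the struct and helper
names are made up for illustration; only the shape matters):

  typedef struct { const char *wwnn, *wwpn; } fchostInfo;  /* stand-in */

  extern int  fchostCreateVHBA(const fchostInfo *fc);
  extern char *fchostGetHostByWWN(const char *wwnn, const char *wwpn);

  static char *
  createVportSketch(const fchostInfo *fc)
  {
      if (fchostCreateVHBA(fc) < 0)
          return NULL;                       /* error already reported */

      /* Only claim success once the vHBA really shows up; previously
       * a "true" return here could still mean no vHBA existed. */
      return fchostGetHostByWWN(fc->wwnn, fc->wwpn);
  }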
Signed-off-by: John Ferlan <jferlan@redhat.com>
Move the bulk of createVport and rename it to
virNodeDeviceCreateVport. Remove deleteVport entirely and replace
it with virNodeDeviceDeleteVport.
Signed-off-by: John Ferlan <jferlan@redhat.com>
The function is actually in virutil.c, but prototyped in virfile.h.
This patch fixes that by renaming the function to virWaitForDevices,
adding the prototype in virutil.h and libvirt_private.syms, and then
changing the callers to use the new name.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Move the virStoragePoolSourceAdapter from storage_conf.h and rename
to virStorageAdapter.
Continue with code realignment for brevity and flow.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rework the code to use the new FCHost specific adapter structures.
Also rework the parameters to only pass what's needed, leaving the
logic for the adapter type and the need to call the helpers in the
caller.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rework the helpers/APIs to use the FCHost and SCSIHost adapter types.
Continue to realign the code for shorter lines.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rework the helpers/APIs to use the FCHost and SCSIHost adapter types.
Continue to realign the code for shorter lines.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rather than have lots of ugly inline code, create helpers to try and
make things more readable. While creating the helpers realign the code
as necessary.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rather than have lots of ugly inline code, create helpers to try and
make things more readable. While creating the helpers realign the code
as necessary.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rather than use virXPathString, pass along a virXPathNode and alter
the parsing to use virXMLPropString.
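Roughly the change in parsing style (virXPathString and
virXMLPropString are the real libvirt helpers from virxml.h; the
attribute and function names here are illustrative):

  #include <libxml/tree.h>

  char *virXMLPropString(xmlNodePtr node, const char *name); /* virxml.h */

  static char *
  getAdapterType(xmlNodePtr node)
  {
      /* Previously: virXPathString("string(./adapter/@type)", ctxt);
       * Now: read the attribute straight off the node we were handed. */
      return virXMLPropString(node, "type");
  }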
Signed-off-by: John Ferlan <jferlan@redhat.com>
Split out the code that munges through the storage pool adapter
into helpers - it's about to be moved into its own source file.
This is purely code motion at this point.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Commit id 'bb74a7ffe' added some new fields to search for an fchost
by parent wwnn/wwpn or parent_fabric_name, but neglected to validate
that the data within those fields is valid at parse time. This could
lead to eventual failure at run time, so rather than have the
failure then, let's validate now.
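A hedged sketch of the parse-time check (virValidateWWN is the
existing libvirt helper from virutil.h; the wrapper around it is
hypothetical):

  #include <stdbool.h>

  bool virValidateWWN(const char *wwn);      /* from virutil.h */

  static int
  validateParentWWNs(const char *parent_wwnn, const char *parent_wwpn)
  {
      /* Fail at parse time instead of later at run time. */
      if (parent_wwnn && !virValidateWWN(parent_wwnn))
          return -1;                         /* error already reported */
      if (parent_wwpn && !virValidateWWN(parent_wwpn))
          return -1;
      return 0;
  }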
Signed-off-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1428209
Commit id 'bb74a7ffe' neglected to check that both parent_wwnn and
parent_wwpn are in the XML when one or the other is provided,
similar to the check done by the node device code (commit id
'2b13361bc').
If only one is provided, the "default" is to use a vHBA capable
adapter (see commit id '78be2e8b'), so the vHBA could start, but
perhaps not on the expected adapter.
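The shape of the added check, sketched (names hypothetical): wwnn
and wwpn only make sense as a pair, so reject XML that supplies
just one of them.

  static int
  checkParentWWNPair(const char *parent_wwnn, const char *parent_wwpn)
  {
      if (!parent_wwnn != !parent_wwpn) {
          /* report: both parent_wwnn and parent_wwpn must be given */
          return -1;
      }
      return 0;
  }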
Signed-off-by: John Ferlan <jferlan@redhat.com>
RFC 6331 documents a number of serious security weaknesses in
the SASL DIGEST-MD5 mechanism. As such, libvirtd should not be
using it as the default mechanism. GSSAPI is the only other
viable SASL mechanism that can provide secure session encryption,
so enable that by default as the replacement.
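For illustration, the SASL configuration libvirtd reads
(/etc/sasl2/libvirt.conf) would change along these lines (excerpt
is illustrative, not the full shipped file):

  # Old default, broken per RFC 6331:
  #mech_list: digest-md5
  # New default:
  mech_list: gssapi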
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Some users might want to pass a blockdev or a chardev as a
backend for NVDIMM. In fact, this is expected to be the most
common configuration. Therefore libvirt should allow the device
in the devices CGroup.
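A hedged sketch of the cgroup side, built on libvirt's
virCgroupAllowDevicePath helper (the wrapper and its exact wiring
are illustrative):

  #include "vircgroup.h"

  static int
  allowNvdimmBackend(virCgroupPtr cgroup, const char *path)
  {
      /* Permit the qemu process to read/write the NVDIMM backend. */
      return virCgroupAllowDevicePath(cgroup, path,
                                      VIR_CGROUP_DEVICE_RW, false);
  }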
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Now that we have APIs for relabelling memdevs on hotplug, fill in
the missing implementation in the qemu hotplug code.
The qemuSecurity wrappers might look like overkill for now,
because the qemu namespace code does not deal with nvdimms yet.
Nor does our cgroup code. But hey, there's the cgroup_device_acl
variable in qemu.conf. If users add their /dev/pmem* device in
there, the device is allowed in cgroups and created in the
namespace, so they can successfully pass it through to the domain.
It doesn't look like overkill after all, does it?
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When a domain is being started up, we ought to relabel the host
side of the NVDIMM so qemu has access to it.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When a domain is being started up, we ought to relabel the host
side of the NVDIMM so qemu has access to it.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
For NVDIMM devices it is optionally possible to specify the size
of internal storage for namespaces. Namespaces are a feature that
allows users to partition the NVDIMM for different uses.
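The XML then looks something like this (sizes illustrative; the
<label/> holds the namespace storage size):

  <memory model='nvdimm'>
    <source>
      <path>/tmp/nvdimm</path>
    </source>
    <target>
      <size unit='KiB'>523264</size>
      <label>
        <size unit='KiB'>128</size>
      </label>
    </target>
  </memory>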
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Now that NVDIMM has found its way into libvirt, users might want
to fine tune some settings for each module separately. One such
setting is 'share=on|off' for the memory-backend-file object.
This setting - just as its name suggests - enables sharing the
nvdimm module with other applications. Under the hood it controls
whether qemu mmap()s the file as MAP_PRIVATE or MAP_SHARED.
Yet again, we have such a config knob in the domain XML, but it's
just an attribute on the NUMA <cell/>. This does not give fine
enough tuning on a per-memdevice basis, so we need to have the
attribute for each device too.
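In the domain XML this surfaces as an access attribute on the
<memory/> device itself, something like:

  <memory model='nvdimm' access='shared'>
    ...
  </memory>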
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
So, the majority of the code is just ready as-is. Well, with one
slight change: differentiate between dimm and nvdimm in places
like device alias generation, generating the command line and so
on.
Speaking of the command line, we also need to append 'nvdimm=on'
to the '-machine' argument so that the nvdimm feature is
advertised in the ACPI tables properly.
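The generated qemu command line then contains roughly the
following (machine type, ids and sizes illustrative):

  -machine pc,nvdimm=on \
  -object memory-backend-file,id=memnvdimm0,mem-path=/tmp/nvdimm,size=536870912 \
  -device nvdimm,memdev=memnvdimm0,id=nvdimm0,slot=0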
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
NVDIMM is a new type of memory introduced in QEMU 2.6. The idea
is that we have a Non-Volatile memory module that keeps the data
persistent across domain reboots.
At the domain XML level, we already have some representation of
'dimm' modules. Long story short, NVDIMM will utilize the
existing <memory/> element that lives under <devices/> by adding
a new value 'nvdimm' for the existing @model attribute and
introducing a new <path/> element under <source/> while reusing
other fields. The resulting XML would appear as:
  <memory model='nvdimm'>
    <source>
      <path>/tmp/nvdimm</path>
    </source>
    <target>
      <size unit='KiB'>523264</size>
      <node>0</node>
    </target>
    <address type='dimm' slot='0'/>
  </memory>
So far, this is just an XML parser/formatter extension. The QEMU
driver implementation is in the next commit.
For more info on NVDIMM visit the following web page:
http://pmem.io/
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Frankly, this function is one big mess: a lot of arguments,
complicated behaviour. It's really surprising that the arguments
were in random order (input and output arguments were mixed
together), the documentation was outdated, and the description of
the return values was bogus.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Even though this variable contains just values from an enum where
zero has the usual meaning, it's an enum after all and we should
check it as such.
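The point in miniature (the enum and wrapper are illustrative):

  #include <stdbool.h>

  typedef enum {
      VIR_FOO_NONE = 0,      /* zero keeps its usual "unset" meaning */
      VIR_FOO_BAR,
      VIR_FOO_LAST
  } virFooType;

  static bool
  fooIsSet(virFooType foo)
  {
      /* Instead of "if (foo)", compare against the named value. */
      return foo != VIR_FOO_NONE;
  }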
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Introduce config file support for the bhyve driver. The only available
setting at present is 'firmware_dir' for specifying a directory with
UEFI firmware files.
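For example (the value shown is illustrative; 'firmware_dir' is
the only knob):

  # bhyve.conf
  firmware_dir = "/usr/local/share/uefi-firmware"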
One of the main reasons for introducing a host-model CPU definition
in the domain capabilities XML was the inability to express disabled
features in the host capabilities XML. That is, when a host CPU is,
e.g., Haswell without x2apic support, the host capabilities XML has
to report it as Westmere + a bunch of additional features, but we
really want to use Haswell - x2apic when creating a host-model CPU.
Unfortunately, I somehow forgot to do the last step and the code
would just copy the CPU definition found in the host capabilities
XML. This changed recently for new QEMU versions which allow us to
query the host CPU, but any slightly older QEMU will not benefit
from any of those changes. This patch makes sure the right CPU
model is filled in the domain capabilities even with old QEMU.
The issue was reported in
https://bugzilla.redhat.com/show_bug.cgi?id=1426456
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The function is now called virQEMUCapsProbeHostCPU. Both the
refactoring and the change of name are done for consistency with a
new function which will be introduced in the following commit.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When creating a host CPU definition usable with a given emulator,
the CPU should not be defined using an unsupported CPU model. The
new @models and @nmodels parameters can be used to limit which CPU
models may appear in the result.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The parameter can be used to request either a VIR_CPU_TYPE_HOST
(which has been assumed so far) or a VIR_CPU_TYPE_GUEST definition.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
cpuNodeData has always been followed by cpuDecode as no hypervisor
driver is really interested in raw CPUID data for a host CPU. Let's
create a new CPU driver API which returns virCPUDefPtr directly.
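The new API plausibly has a shape along these lines (a hedged
sketch; the exact name and parameter list are assumptions,
consistent with the @models/@nmodels convention above):

  /* Return a parsed host CPU definition directly, optionally
   * restricted to the given list of permitted model names. */
  virCPUDefPtr virCPUGetHost(virArch arch,
                             virCPUType type,
                             virNodeInfoPtr nodeInfo,
                             const char **models,
                             unsigned int nmodels);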
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>