We used to set migration capabilities only when a user asked for them in
flags. This is fine when migration succeeds, since the QEMU process is
killed in the end, but if migration fails or is cancelled, some
capabilities may remain turned on with no way to turn them off. To fix
that, migration capabilities have to be turned on if requested, but
explicitly turned off when they were not requested even though QEMU
supports them.
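A minimal sketch of the resulting logic (illustrative arrays and names
only, not the real QEMU driver API):

#include <stdbool.h>
#include <stddef.h>

static int
sync_migration_caps(const bool *supported,   /* what QEMU knows about */
                    const bool *requested,   /* what the user asked for */
                    bool *state,             /* state to apply to QEMU */
                    size_t ncaps)
{
    size_t i;

    for (i = 0; i < ncaps; i++) {
        if (!supported[i]) {
            if (requested[i])
                return -1;          /* asked for a capability QEMU lacks */
            continue;
        }
        /* previously only 'true' was ever applied here, so a capability
         * left enabled by a failed or cancelled migration stayed on */
        state[i] = requested[i];
    }
    return 0;
}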
https://bugzilla.redhat.com/show_bug.cgi?id=1163953
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Recent commit 12bd207e21 fixed a few
VSH_OT_STRING options that should've been VSH_OT_DATA. That led me to
this commit, which forces people to check that newly added options have
the proper type. Since virsh errors out with an error message, this will
immediately show up in 'make check' thanks to our virsh-synopsis test.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Even though vshCmddefOptParse() tried to return -1 if an optional
option specification preceded a required one, it failed to check that
for boolean-type options and options with the VSH_OFLAG_REQ_OPT flag
set. On the other hand, it makes sense for VSH_OT_ARGV to be specified
at the end of the option list.
Returning -1 enforces the proper ordering thanks to the virsh-synopsis
test in 'make check'.
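A simplified sketch of the rule being enforced (made-up types, not the
actual vsh structures):

#include <stdbool.h>
#include <stddef.h>

typedef struct {
    bool is_argv;       /* VSH_OT_ARGV */
    bool required;      /* VSH_OFLAG_REQ; booleans and VSH_OFLAG_REQ_OPT
                         * options are never positionally required */
} opt_sketch;

static int
check_opt_order(const opt_sketch *opts, size_t nopts)
{
    bool seen_optional = false;

    for (size_t i = 0; i < nopts; i++) {
        if (opts[i].is_argv)
            return i == nopts - 1 ? 0 : -1;   /* ARGV only at the very end */
        if (opts[i].required && seen_optional)
            return -1;                        /* required after optional */
        if (!opts[i].required)
            seen_optional = true;
    }
    return 0;
}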
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
According to the comments in the parsing functions, optional options
should be specified *after* required ones. It makes sense and the help
output looks cleaner. The only exceptions are options with type ==
VSH_OT_ARGV.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Rather than just picking the first CD (or, failing that, HDD) we come
across, if the user has picked a boot device ordering with <boot
order=''>, respect that (and just try to boot the lowest-index device).
This adds two sets of tests to bhyvexml2argv: 'grub-bootorder' shows that
we pick a user-specified device over the first device in the domain;
'grub-bootorder2' shows that we pick the first (lowest-index) device.
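The selection logic is roughly the following (illustrative types rather
than the real virDomainDef ones):

#include <stddef.h>

enum { DEV_DISK, DEV_CDROM };

typedef struct {
    int device;              /* DEV_DISK or DEV_CDROM */
    unsigned int boot_order; /* 0 == no <boot order=''/> given */
} disk_sketch;

/* Prefer the disk with the lowest explicit boot order; fall back to
 * the first CD, then the first HDD, as before. */
static const disk_sketch *
pick_boot_disk(const disk_sketch *disks, size_t ndisks)
{
    const disk_sketch *best = NULL, *first_cd = NULL, *first_hdd = NULL;

    for (size_t i = 0; i < ndisks; i++) {
        if (disks[i].boot_order &&
            (!best || disks[i].boot_order < best->boot_order))
            best = &disks[i];
        if (disks[i].device == DEV_CDROM && !first_cd)
            first_cd = &disks[i];
        if (disks[i].device == DEV_DISK && !first_hdd)
            first_hdd = &disks[i];
    }
    if (best)
        return best;
    return first_cd ? first_cd : first_hdd;
}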
When a user calls setmem on a running LXC machine, we do update its
cgroup entry; however, we update neither the domain's runtime XML nor
our internal structures. This patch fixes that.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1131919
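From an API user's point of view, the expected behaviour after the fix
is that a live memory change shows up in the live XML and internal
state, not only in the cgroup; an illustrative snippet using the public
API (the container name is made up):

#include <libvirt/libvirt.h>
#include <stdio.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("lxc:///");
    virDomainPtr dom = conn ? virDomainLookupByName(conn, "mycontainer") : NULL;

    if (dom) {
        /* Set the current memory of the running container to 512 MiB
         * (the value is in KiB); with this patch the live definition is
         * updated as well, not just the cgroup. */
        if (virDomainSetMemoryFlags(dom, 512 * 1024,
                                    VIR_DOMAIN_AFFECT_LIVE) == 0)
            printf("current memory updated\n");
        virDomainFree(dom);
    }
    if (conn)
        virConnectClose(conn);
    return 0;
}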
The API docs generators were broken by the header file
re-organization. Specifically
* html/libvirt-libvirt.html was empty (and should be deleted)
* Makefile.am didn't install html/libvirt-libvirt-*.html
* hvsupport.html was mostly empty
* sitemap.html.in didn't list the new html/*.html files
Commit 6e5c79a1 tried to fix the deadlock between nwfilter{Define,Undefine}
and starting a guest, but the same deadlock exists for
updating/attaching a network device to a domain.
The deadlock was introduced by the removal of the global QEMU driver lock,
which nwfilter was counting on to ensure that all driver locks are
held inside nwfilter{Define,Undefine}.
This patch extends the usage of virNWFilterReadLockFilterUpdates to prevent
the deadlock on all possible paths in the QEMU driver. The LXC and UML
drivers still have a global lock.
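The fix boils down to both code paths taking the two locks in the same
order; a self-contained illustration using plain pthreads rather than
the actual libvirt lock helpers:

#include <pthread.h>

/* Stand-ins for the global filter-updates lock and a per-domain lock. */
static pthread_rwlock_t filter_updates = PTHREAD_RWLOCK_INITIALIZER;
static pthread_mutex_t domain_lock = PTHREAD_MUTEX_INITIALIZER;

/* nwfilterDefine/Undefine: updates lock for writing, then the domain. */
static void filter_define(void)
{
    pthread_rwlock_wrlock(&filter_updates);
    pthread_mutex_lock(&domain_lock);
    /* ... re-instantiate the filter for the domain ... */
    pthread_mutex_unlock(&domain_lock);
    pthread_rwlock_unlock(&filter_updates);
}

/* Device attach/update: with this patch it also takes the updates lock
 * (for reading) *before* the domain lock, so both sides acquire the
 * locks in the same order and cannot deadlock. */
static void attach_net_device(void)
{
    pthread_rwlock_rdlock(&filter_updates);
    pthread_mutex_lock(&domain_lock);
    /* ... instantiate the filter for the new interface ... */
    pthread_mutex_unlock(&domain_lock);
    pthread_rwlock_unlock(&filter_updates);
}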
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1143780
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
In one of my previous patches (3a3c3780b) I tried to fix the
problem of the nvram path disappearing on a domain that's been
started and shut down again. I fixed it by explicitly saving the
domain's config file. However, I was a bit clumsy and didn't
realize that we have transient domains, for which we don't save any
config file. Hence, any domain using UEFI became persistent.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
As I was reviewing bhyve commits, I noticed qemuxml2argvtest
failing for some test cases. This is not a bug in the qemu driver code;
rather, qemuxml2argvmock cannot be loaded on non-Linux
platforms. For instance:
318) QEMU XML-2-ARGV numatune-memnode
... libvirt: error : internal error: NUMA node 0 is unavailable
FAILED
Rather than disabling qemuxml2argvtest on BSD (we do compile the qemu
driver there), disable only those test cases which require mocking.
To achieve that goal a new DO_TEST_LINUX() macro is introduced which
invokes the test case on Linux only and merely consumes its arguments
on other systems.
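The idea is roughly this (a sketch only; the real macro forwards to the
full test helper with its extra parameters):

#ifdef __linux__
    /* Run the test case normally on Linux, where the mock library works. */
# define DO_TEST_LINUX(...) DO_TEST(__VA_ARGS__)
#else
    /* Elsewhere just swallow the arguments so the case is skipped. */
# define DO_TEST_LINUX(...) do { } while (0)
#endif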
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1160926
Introduce a 'managed' attribute to allow libvirt to decide whether to
delete a vHBA vport created via external means such as nodedev-create.
The code currently decides whether to delete the vHBA based solely on
whether the parent was provided at creation time. However, that may not
be the desired action, so rather than deleting it and forcing someone to
create another vHBA via an additional nodedev-create, allow the
configuration of the storage pool to decide the desired action.
During createVport, when libvirt does the VPORT_CREATE, set the managed
value to YES if it is not already set, to indicate to the deleteVport code
that it should delete the vHBA when the pool is destroyed.
If libvirtd is restarted, all the memory-only state is lost, so for a
persistent storage pool use virStoragePoolSaveConfig in order to
write out the managed value.
Because we're now saving the current configuration, we need to be sure
not to save the parent in the output XML if it was undefined at start.
Saving the name would cause future starts to always use the same parent,
which is not the expected result when no parent was provided. By not
providing a parent, libvirt is expected to find the best available
vHBA port for each subsequent (re)start.
At deleteVport, use the new managed value to decide whether to execute
the VPORT_DELETE. Since we no longer save the parent in memory or in
XML unless it was explicitly provided, we have to look it up when it
was not provided.
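In outline the logic looks like this (illustrative structure and names,
not the exact libvirt code):

#include <stddef.h>

enum managed_state { MANAGED_ABSENT, MANAGED_NO, MANAGED_YES };

struct fchost_sketch {
    enum managed_state managed;
    const char *parent;     /* may be NULL when not given in the XML */
};

/* createVport side: if libvirt itself creates the vport and 'managed'
 * was not set in the XML, record managed='yes' so the pool remembers
 * to clean up after itself. */
static void create_vport(struct fchost_sketch *fchost)
{
    if (fchost->managed == MANAGED_ABSENT)
        fchost->managed = MANAGED_YES;
    /* ... issue VPORT_CREATE ... */
}

/* deleteVport side: only delete the vHBA when it is managed; a parent
 * that was never provided has to be looked up from the wwnn/wwpn. */
static int delete_vport(const struct fchost_sketch *fchost)
{
    if (fchost->managed != MANAGED_YES)
        return 0;                       /* leave external vHBAs alone */
    /* if (!fchost->parent) look up the parent by wwnn/wwpn ... */
    /* ... issue VPORT_DELETE ... */
    return 0;
}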
https://bugzilla.redhat.com/show_bug.cgi?id=1160926
Passing a copy of the storage pool adapter to a function just changes the
copy of the fields in that particular function; when returning to
the caller those changes are discarded. While not yet biting us in the
storage clean-up case, it did cause an issue for the fchost storage pool
startup case, createVport. The issue was that at startup, if no parent is
found in the XML, the code searches for the 'best available' parent and
stores that in the in-memory copy of the adapter. Of course, in this case
it was a copy, so when returning to virStorageBackendSCSIStartPool that
change was discarded (or lost) from pool->def->source.adapter, which
meant that at shutdown (deleteVport) the code assumed no adapter was passed
and skipped the deletion, leaving the vHBA created by libvirt still defined
and requiring an additional nodedev-destroy step to remove it.
Adjust createVport to take a virStoragePoolDefPtr instead of the
adapter copy, then use a virStoragePoolSourceAdapterPtr when processing.
A future patch will need the 'def' anyway, so this just sets up for that.
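The underlying C gotcha is plain pass-by-value: a struct parameter is a
copy that is thrown away on return, while going through a pointer to the
definition makes the change stick. For example:

#include <stdio.h>
#include <string.h>

struct adapter { char parent[32]; };
struct pooldef { struct adapter adapter; };

/* Broken: 'a' is a copy, the caller never sees the assignment. */
static void start_by_value(struct adapter a)
{
    strcpy(a.parent, "scsi_host3");
}

/* Fixed: going through the pool definition updates the real adapter. */
static void start_by_def(struct pooldef *def)
{
    strcpy(def->adapter.parent, "scsi_host3");
}

int main(void)
{
    struct pooldef def = { { "" } };

    start_by_value(def.adapter);
    printf("by value: '%s'\n", def.adapter.parent);   /* still empty */
    start_by_def(&def);
    printf("by def:   '%s'\n", def.adapter.parent);   /* scsi_host3 */
    return 0;
}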
https://bugzilla.redhat.com/show_bug.cgi?id=1160565
The existing code assumed that the configuration of a 'parent' attribute
was correct for the createVport path. As it turns out, that may not be
the case, which leads to errors during the deleteVport path because the
wwnn/wwpn isn't associated with the parent.
With this change the following is reported:
error: Failed to start pool fc_pool_host3
error: XML error: Parent attribute 'scsi_host4' does not match parent 'scsi_host3' determined for the 'scsi_host16' wwnn/wwpn lookup.
for XML as follows:
<pool type='scsi'>
<name>fc_pool</name>
<source>
<adapter type='fc_host' parent='scsi_host4' wwnn='5001a4aaf3ca174b' wwpn='5001a4a77192b864'/>
</source>
Where 'nodedev-dumpxml scsi_host16' provides:
<device>
<name>scsi_host16</name>
<path>/sys/devices/pci0000:00/0000:00:04.0/0000:10:00.0/host3/vport-3:0-11/host16</path>
<parent>scsi_host3</parent>
<capability type='scsi_host'>
<host>16</host>
<unique_id>13</unique_id>
<capability type='fc_host'>
<wwnn>5001a4aaf3ca174b</wwnn>
<wwpn>5001a4a77192b864</wwpn>
...
The patch also adjusts the storage pool documentation to describe the
restrictions.
Signed-off-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1160565
If a 'parent' attribute is provided for the fchost, then at startup
time check to ensure it is a vport capable scsi_host. If the parent
is not vport capable, then disallow the startup. The following is the
expected result:
error: Failed to start pool fc_pool
error: XML error: parent 'scsi_host2' specified for vHBA is not vport capable
where the XML for the fc_pool is:
<pool type='scsi'>
<name>fc_pool</name>
<source>
<adapter type='fc_host' parent='scsi_host2' wwnn='5001a4aaf3ca174b' wwpn='5001a4a77192b864'/>
</source>
...
and 'scsi_host2' is not vport capable.
Providing an incorrect parent and a correct wwnn/wwpn could lead to
failures at shutdown (deleteVport) where the assumption is the parent
is for the fchost.
NOTE: If the provided wwnn/wwpn doesn't resolve to an existing scsi_host,
then we will be creating one with code (virManageVport) which
assumes the parent is vport capable.
Signed-off-by: John Ferlan <jferlan@redhat.com>
This enables booting interactive GRUB menus (e.g. install CDs) with
libvirt-bhyve.
Caveat: A terminal other than the '--console' option to 'virsh start'
(e.g. 'cu -l /dev/nmdm0B -s 115200') must be used to connect to
grub-bhyve because the bhyve loader path is synchronous and must occur
before the VM actually starts.
Changing the bhyveProcessStart logic around to accommodate '--console'
for interactive loader use seems like a significant project and probably
not worth it, if UEFI/BIOS support for bhyve is "coming soon."
We still default to bhyveloader(1) if no explicit bootloader
configuration is supplied in the domain.
If the /domain/bootloader looks like grub-bhyve and the user doesn't
supply /domain/bootloader_args, we make an intelligent guess and try
chainloading the first partition on the disk (or a CD if one exists,
under the assumption that for a VM a CD is likely an install source).
Caveat: Assumes the HDD boots from the msdos1 partition. I think this is
a pretty reasonable assumption for a VM. (DrvBhyve with Bhyveload
already assumes that the first disk should be booted.)
I've tested both HDD and CD boot and they seem to work.
Use the device type name if we know it instead of its number,
even if we can't hotplug it:
qemuMonitorJSONAttachCharDevCommand:6094 : operation failed: Unsupported
char device type '10'
virDomainChrSourceDefIsEqual should return 'true' for
identical SPICEVMC chardevs, and those that have no source
specification.
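Conceptually the comparison should short-circuit for source-less chardev
types before looking at any source fields; a simplified sketch (not the
real virDomainChrSourceDef layout):

#include <stdbool.h>
#include <string.h>

enum chr_type { CHR_PTY, CHR_UNIX, CHR_SPICEVMC };

struct chr_source_sketch {
    enum chr_type type;
    const char *path;       /* only meaningful for some types */
};

static bool chr_source_equal(const struct chr_source_sketch *a,
                             const struct chr_source_sketch *b)
{
    if (a->type != b->type)
        return false;

    switch (a->type) {
    case CHR_SPICEVMC:
        /* no source data to compare: identical types means equal */
        return true;
    case CHR_PTY:
    case CHR_UNIX:
        if (!a->path || !b->path)
            return a->path == b->path;
        return strcmp(a->path, b->path) == 0;
    }
    return true;            /* types without any source specification */
}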
After this change, a failed hotplug no longer leaves a stale
pointer in the domain definition.
https://bugzilla.redhat.com/show_bug.cgi?id=1162097
If the memory mode is specified as 'strict' with one node, we
get the following error when starting the domain:
error: Unable to write to '$cgroup_path/cpuset.mems': Device or resource busy
XML is configured with numatune as follows:
<numatune>
<memory mode='strict' nodeset='0'/>
</numatune>
It was broken by commit 411cea638f,
which moved qemuSetupCgroupForEmulator() before the setting of cpuset.mems
in qemuSetupCgroupPostInit.
The directory '$cgroup_path/emulator/' is created in qemuSetupCgroupForEmulator,
but '$cgroup_path/emulator/cpuset.mems' is not set and has the default value
(all nodes, such as 0-1). Then we set '$cgroup_path/cpuset.mems' to the
nodemask (in this case '0') in qemuSetupCgroupPostInit, which must fail.
This patch makes sure '$cgroup_path/emulator/cpuset.mems' is set before
'$cgroup_path/cpuset.mems'. The approach is similar to that in
qemuDomainSetNumaParamsLive.
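The constraint being worked around is that a cpuset parent cannot be
narrowed while a child cgroup still allows the nodes being removed, so
the child has to be restricted first. Roughly (the cgroup paths are
illustrative):

#include <stdio.h>

/* Write a nodemask to a cpuset.mems file. */
static int set_mems(const char *path, const char *mask)
{
    FILE *fp = fopen(path, "w");
    int ret = -1;

    if (!fp)
        return -1;
    if (fprintf(fp, "%s", mask) >= 0)
        ret = 0;
    if (fclose(fp) != 0)
        ret = -1;
    return ret;
}

int main(void)
{
    /* Child first: once emulator/ only allows node 0, the parent can be
     * narrowed to node 0 as well.  Doing it the other way round fails
     * with EBUSY because the parent would no longer cover the child. */
    if (set_mems("/sys/fs/cgroup/cpuset/machine/vm/emulator/cpuset.mems", "0") < 0 ||
        set_mems("/sys/fs/cgroup/cpuset/machine/vm/cpuset.mems", "0") < 0)
        return 1;
    return 0;
}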
Signed-off-by: Wang Rui <moon.wangrui@huawei.com>
If the memory mode in numatune is not 'strict', we should not set up
cpuset.mems. Before commit 1a7be8c600
we checked the memory mode in virDomainNumatuneGetNodeset. This
patch adds the check back, as before.
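The restored check amounts to only restricting cpuset.mems when the mode
is strict; a condensed sketch (simplified enum, not the real numatune
structures):

#include <stdbool.h>

enum numa_mode { NUMA_STRICT, NUMA_PREFERRED, NUMA_INTERLEAVE };

/* Return true only when cpuset.mems should be restricted to the
 * configured nodeset, i.e. for mode='strict'.  'preferred' and
 * 'interleave' must leave cpuset.mems alone so the kernel may still
 * fall back to other nodes. */
static bool should_set_cpuset_mems(enum numa_mode mode, bool has_nodeset)
{
    return has_nodeset && mode == NUMA_STRICT;
}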
Signed-off-by: Wang Rui <moon.wangrui@huawei.com>
If the memory mode in numatune is specified as 'preferred' with one node
(such as nodeset='0'), the domain's memory is not necessarily all on node 0.
Assuming node 0 doesn't have enough memory, memory can be allocated
on node 1 when the qemu process starts up. If we then set cpuset.mems to
'0', it may trigger the OOM killer.
Commit 1a7be8c600 changed the former logic of
checking the memory mode in virDomainNumatuneGetNodeset. This patch adds
the check back, as before.
Signed-off-by: Wang Rui <moon.wangrui@huawei.com>
This patch fixes the following issues:
1) When an invalid wwn is given, libvirt reports
"Malformed wwn: %s" with the template left unsubstituted.
2) The "target" option for dompmsuspend and the "xml" option for
save-image-define are required options and should use
VSH_OT_DATA instead of VSH_OT_STRING as the option type.
3) A typo.
Signed-off-by: Hao Liu <hliu@redhat.com>
Coverity found out that commit cd490086 introduced a possible NULL pointer
dereference. This is due to the fact that phyp_driver may be NULL at the
time the socket is closed, unlike connection_data, which kept the
socket before the mentioned commit and could not be NULL.
However, internal_socket is still the local socket and can be
closed, even unconditionally, if we initialize it to -1.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Check the availability of the options with the current qemu binary;
add them to the opt variable if available, print a message if not.
Signed-off-by: Matthias Gatto <matthias.gatto@outscale.com>
Detect whether the qemu binary currently in use supports the bps_max
option. If it does, add it to the command; if not, just ignore the option.
We don't print an error here, because the check for invalid arguments
has already been made in qemu_driver.c.
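The command-line side boils down to appending the extra throttling
parameters only when the capability is present; a trimmed-down sketch
(the buffer handling and field names are illustrative):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct iotune_sketch {
    unsigned long long total_bytes_sec;
    unsigned long long total_bytes_sec_max;   /* the new bps_max value */
};

/* Format -drive throttling options; silently skip bps_max when the
 * running qemu doesn't know about it (the driver already rejected
 * invalid combinations earlier). */
static void format_iotune(char *buf, size_t buflen,
                          const struct iotune_sketch *info,
                          bool qemu_supports_bps_max)
{
    int n = snprintf(buf, buflen, ",bps=%llu", info->total_bytes_sec);

    if (qemu_supports_bps_max && info->total_bytes_sec_max &&
        n > 0 && (size_t)n < buflen)
        snprintf(buf + n, buflen - n, ",bps_max=%llu",
                 info->total_bytes_sec_max);
}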
Signed-off-by: Matthias Gatto <matthias.gatto@outscale.com>
Add support for bps_max and friends in the driver part.
In the code that checks whether a qemu is running, check if the running
binary supports bps_max; if not, print an error message, otherwise add
it to the "info" variable.
Signed-off-by: Matthias Gatto <matthias.gatto@outscale.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Add the capability to detect whether the qemu binary is able
to use bps_max and friends.
Add a value to the virQEMUCapsFlags enum for the qemu capability.
Set it with virQEMUCapsSet if the binary supports bps_max and friends.
Signed-off-by: Matthias Gatto <matthias.gatto@outscale.com>
Modify the _virDomainBlockIoTuneInfo structure to support the new
options.
Change the initialization of the expectedInfo variable in
qemumonitorjsontest.c to avoid a compilation problem.
Add documentation about the new XML options.
Signed-off-by: Matthias Gatto <matthias.gatto@outscale.com>
nodeSetMemoryParameters() calls nodeSetMemoryParameterValue()
to set parameters, but it only treats a return code of '-2' as a
failure. We should instead report an error whenever rc is negative.
https://bugzilla.redhat.com/show_bug.cgi?id=1161541
Signed-off-by: Jincheng Miao <jmiao@redhat.com>
The CPU NUMA topology implicitly expects memory to be specified in 'KiB'.
Enable it to accept the 'unit' in which the memory is specified.
This allows users to specify memory in the unit of their choice, while it
is still listed in 'KiB' -- just like other 'memory' elements in the XML.
<numa>
<cell cpus='0-3' memory='1024' unit='MiB' />
<cell cpus='4-7' memory='1024' unit='MiB' />
</numa>
Also augment the test cases to correctly model the NUMA memory
specification; this adds the unit='KiB' attribute alongside the memory
attribute in NUMA cells.
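For example, a cell declared as <cell cpus='0-3' memory='1024' unit='MiB'/>
is listed back as memory='1048576' unit='KiB' (1024 * 1024 KiB).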
Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Commit 01b4de2b9f abstracted virDomainParseMemory()
for use by other functions in domain_conf.c.
Extend it for use by functions outside of this file.
Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Store version numbers in this format:
version = 1000000 * major + 1000 * minor + micro
as produced by virParseVersionString, instead of in dedicated enums.
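For example, "5.1.0" becomes 5 * 1000000 + 1 * 1000 + 0 = 5001000.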
Split the complex esxVI_ProductVersion enum into a simpler
esxVI_ProductLine enum and a product version number.
Relax the API and product version number checks to accept everything that
is equal to or greater than the supported minimum version. VMware ESX
went through 3 major versions and the vSphere API always stayed
backward compatible. This commit assumes that this will also be true
for future VMware ESX versions.
Also reword error messages in esxConnectTo* to say what was expected
and what was found instead (suggested by Richard W.M. Jones).
While reviewing patches upstream it occurred to me that we have two
functions doing nearly the same thing: virDomainParseMemory, which
expects XML in the following format:
<memory unit='MiB'>1337</memory>
The other function is virDomainHugepagesParseXML, which expects the
following format:
<someElement size='1337' unit='MiB'/>
It wouldn't matter to have two functions handle two different
scenarios like this if we didn't have to copy the code that handles
32-bit arches around. So this code merges the common parts into
one by introducing a new @units_xpath argument to
virDomainParseMemory, which allows overriding the default location
of the @unit attribute in the XML. With this change both scenarios above
can be parsed with virDomainParseMemory.
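Schematically, the shared helper only differs in where the unit is
looked up; the scaling it then performs is sketched below (the XPath
plumbing is omitted and the overflow cap is illustrative):

#include <limits.h>
#include <stdbool.h>

/* Scale a value of the given unit to KiB, refusing results that would
 * not fit the platform -- the 32-bit handling both call sites used to
 * duplicate. */
static bool scale_to_kib(unsigned long long value,
                         unsigned long long unit_in_bytes,
                         unsigned long long *kib)
{
    unsigned long long limit = ULLONG_MAX;

    if (sizeof(long) == 4)
        limit = ULONG_MAX;                 /* tighter cap on 32-bit hosts */

    if (unit_in_bytes == 0 || value > limit / unit_in_bytes)
        return false;                      /* would overflow */
    *kib = value * unit_in_bytes / 1024;
    return true;
}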
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If detect_scsi_host_caps reports errors but keeps libvirtd going on
startup, the user is misled by the error messages. Transforming them
into warnings still shows the problems, but indicates they are not fatal.
Introduced by commit c63ef0452b: when nodeset is NULL, validation
passes in virNumaSetupMemoryPolicy, but virBitmapNextSetBit must be
given a non-NULL bitmap, otherwise it might cause a segmentation fault.
This patch fixes that.
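The fix is just a guard before iterating; conceptually (with a trivial
stand-in for virBitmap):

/* Minimal stand-in for virBitmap: a 64-bit mask of NUMA nodes. */
static int next_set_bit(unsigned long long mask, int pos)
{
    for (int i = pos + 1; i < 64; i++)
        if (mask & (1ULL << i))
            return i;
    return -1;
}

/* Guard added by the fix: only walk the nodeset when one was given. */
static int setup_memory_policy(const unsigned long long *nodeset)
{
    if (!nodeset)
        return 0;                  /* no nodeset given, keep default policy */

    for (int bit = next_set_bit(*nodeset, -1); bit >= 0;
         bit = next_set_bit(*nodeset, bit)) {
        /* ... add node 'bit' to the NUMA memory mask ... */
    }
    return 0;
}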
Signed-off-by: Chen Fan <chen.fan.fnst@cn.fujitsu.com>
The shared netcf driver is stateful and inside the daemon so
there is no need to use the networkPrivateData field to get the
driver handle. Just access the global driver handle directly.
The shared network driver is stateful and inside the daemon so
there is no need to use the networkPrivateData field to get the
driver handle. Just access the global driver handle directly.
Many places already directly accessed the global driver handle
in any case, so the code could never work without relying on
this.