The libxl driver always uses virDomainObj->def when formatting
the domain XML description. Use virDomainObj->newDef when the
--inactive flag is set.
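A minimal sketch of the intended selection, assuming the usual libvirt
field and flag names (the exact formatting call in the libxl driver may
differ):

    /* Prefer the persistent definition only when the caller asked for
     * the inactive XML and one actually exists. */
    virDomainDefPtr def = vm->def;

    if ((flags & VIR_DOMAIN_XML_INACTIVE) && vm->newDef)
        def = vm->newDef;

    ret = virDomainDefFormat(def, flags);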
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
libxlDomainCreateXML() would remove a persistent domain if
libxlDomainStart() failed. Check whether the domain is persistent
before removing it.
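A hedged sketch of the fix, with helper names following common libvirt
driver conventions (assumptions, not the exact libxl code):

    /* After a failed start, forget the freshly created domain object
     * only if it is transient; a persistent definition must survive. */
    if (!vm->persistent) {
        virDomainObjListRemove(driver->domains, vm);
        vm = NULL;
    }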
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
When restarting libvirtd and reconnecting to running domains,
libxlReconnectDomain() would unconditionally set the domain state
to VIR_DOMAIN_RUNNING, overwriting the state maintained in
$statedir/<domname>.xml. A paused domain, for example, would have its
state changed to running even though it was still paused.
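A minimal sketch of the idea; the exact condition used in
libxlReconnectDomain() may differ:

    /* Trust the state loaded from $statedir/<domname>.xml and only mark
     * the domain running when no state was recorded at all. */
    if (virDomainObjGetState(vm, NULL) == VIR_DOMAIN_NOSTATE)
        virDomainObjSetState(vm, VIR_DOMAIN_RUNNING,
                             VIR_DOMAIN_RUNNING_UNKNOWN);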
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1201760
When "<on_crash>coredump-destroy</on_crash>" is set, the domain
wasn't being destroyed; rather, it was being rebooted.
Add VIR_DOMAIN_LIFECYCLE_CRASH_COREDUMP_DESTROY to the list of
on_crash types that cause "-no-reboot" to be added to the qemu
command line.
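Roughly, with the surrounding condition abbreviated (a sketch, not the
full qemu command line builder logic):

    /* coredump-destroy now counts among the on_crash values that force
     * -no-reboot, alongside plain destroy. */
    if (def->onCrash == VIR_DOMAIN_LIFECYCLE_CRASH_DESTROY ||
        def->onCrash == VIR_DOMAIN_LIFECYCLE_CRASH_COREDUMP_DESTROY)
        virCommandAddArg(cmd, "-no-reboot");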
Although the two enums are defined the same way, fortunately there hadn't
been any deviation between them. Ensure any assignments to onCrash use the
VIR_DOMAIN_LIFECYCLE_CRASH_* defs and not the VIR_DOMAIN_LIFECYCLE_* defs.
https://bugzilla.redhat.com/show_bug.cgi?id=1232606
Since an mpath pool contains all the Multipath devices on a host, defining
more than one such pool on a host at a time should be disallowed, following
the policy of disallowing duplicate source pools for the host.
Adjust the docs to clarify the Multipath target path value usage for both
the storage driver (only one pool per host) and the formatstorage references
(the target element is ignored in favor of the default target mapping of
/dev/mapper).
https://bugzilla.redhat.com/show_bug.cgi?id=1230664
Per the devmapper docs, use "/dev/mapper" or "/dev/dm-n" in order to
determine whether a device is under the control of DM Multipath.
So add "/dev/mapper" to the virFileExists check, leaving "/dev/mpath"
as a "legacy" option, since it appears to have been the preferred
mechanism for a while but is no longer maintained.
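A sketch of the check; the out-parameter follows the storage backend
checkPool convention and is an assumption here:

    /* DM Multipath is considered usable if either control directory
     * exists; /dev/mpath is kept only as a legacy fallback. */
    *isActive = virFileExists("/dev/mapper") ||
                virFileExists("/dev/mpath");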
https://bugzilla.redhat.com/show_bug.cgi?id=1201143
The formatdomain.html description for <disk> device 'lun' indicates that
it must be either a type 'block' or type 'network' with protocol 'iscsi';
however, we did not make that check until domain startup.
This caused issues for virt-manager, which hit an unexpected failure at
run time rather than at config time.
This patch adds a check in the post-parse disk device checking for the
specific, supported lun types, and adjusts the affected test failure to
occur at config parse time rather than at run time.
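A hedged sketch of the post-parse check (enum and field names follow
the domain and storage source definitions; the error wording is
illustrative):

    if (disk->device == VIR_DOMAIN_DISK_DEVICE_LUN &&
        !(disk->src->type == VIR_STORAGE_TYPE_BLOCK ||
          (disk->src->type == VIR_STORAGE_TYPE_NETWORK &&
           disk->src->protocol == VIR_STORAGE_NET_PROTOCOL_ISCSI))) {
        virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                       _("disk device 'lun' requires a block source or "
                         "an iSCSI network source"));
        return -1;
    }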
Libvirt periodically refreshes all volumes in a storage pool, including
the volumes being cloned.
While cloning a storage volume from parent, we drop pool locks. Subsequent
volume refresh sometimes changes allocation for an ongoing copy, and leads
to corrupt images.
Fix: Introduce a shadow volume that isolates the volume object under refresh
from the base volume, which has an ongoing copy.
Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Spice events have mostly similar information present in the event JSON
but they differ in the name of the element containing the port.
The JSON event also provides connection ID which might be useful in the
future.
This patch splits the event parser code into two functions; the SPICE
one reimplements the event parsing with the correct names and drops the
VNC-only fields.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1236585
https://bugzilla.redhat.com/show_bug.cgi?id=1227664
If the requested format type for the new entry in the file system pool
is a 'dir', then be sure to set the vol->type correctly as would be done
when the pool is refreshed.
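A minimal sketch of the assignment, mirroring what the refresh path
does (field and enum names from the storage driver):

    /* A 'dir' format entry is a directory volume, not a plain file. */
    if (vol->target.format == VIR_STORAGE_FILE_DIR)
        vol->type = VIR_STORAGE_VOL_DIR;
    else
        vol->type = VIR_STORAGE_VOL_FILE;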
Make sure we only assign the default spicevmc channel name to spicevmc
virtio channels. Caused by commits 3269ee65 and 1133ee2b, which moved
the assignment from the XML parsing code to the QEMU driver but failed
to keep the logic intact.
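A sketch of the restored logic, with field access abbreviated and the
surrounding QEMU driver context omitted:

    /* Only a virtio channel backed by spicevmc gets the default SPICE
     * agent name. */
    if (chr->targetType == VIR_DOMAIN_CHR_CHANNEL_TARGET_TYPE_VIRTIO &&
        chr->source.type == VIR_DOMAIN_CHR_TYPE_SPICEVMC &&
        !chr->target.name &&
        VIR_STRDUP(chr->target.name, "com.redhat.spice.0") < 0)
        return -1;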
https://bugzilla.redhat.com/show_bug.cgi?id=1179680
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Proper Haswell CPU model handling is tested in several
qemuxml2argv-cpu-* tests, which are run in a special environment. Let's
remove the CPU model from the other tests to make them less fragile.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Don't listen on the admin socket in the daemon and comment the admin
devel files out of the specfile.
The library is still compiled and installed so that it can be linked
easily, without any disruptive modifications to the daemon code.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Its only file must be included in the daemon package anyway, since the
daemon is linked with the admin library. That leaves it an empty package
until we have a virt-admin binary, at which point we can decide whether
to move it to the client package or create a new package for it.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
While re-reading what I wrote for commit id '785a8940e', I realized
I needed to clarify that, in order to present as a 'lun', the mode
property of the pool source element needs to be "host" (or empty)
and not "direct".
It was described correctly later in the mode host description, but
this just ensures it's not missed here as well.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Also move the mention of version numbers for the various PCI
controller models up to the end of the sentence where they are first
given, to avoid confusion.
Certain PCI buses don't support hotplug, and when automatically
assigning PCI addresses for devices, libvirt is very conservative in
its assumptions about whether or not a device will need to be
hotplugged/unplugged in the future. But if the user manually assigns
an address, they likely are aware of any hotplug requirements of the
device (or at least they should be).
In short, after this patch, automatic PCI address assignment will
assume that the device must be plugged into a hot-pluggable slot, but
manual assignment can place the device on any bus that is
compatible, regardless of whether or not it supports hotplug. If the
user makes a mistake and plugs the device into a bus that doesn't
support hotplug, then later tries to do a hot-unplug, qemu will give
an appropriate error.
(In the future we may want to add a "hotpluggable" attribute to all
devices, with the default being "yes" for auto-assignment and "no" for
manual assignment.)
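A minimal sketch of the distinction, where 'info' is the device's
address info; the flag name follows the existing VIR_PCI_CONNECT_*
convention and the surrounding code is omitted:

    /* Only devices without a user-supplied address must land on a
     * hot-pluggable slot; a manual <address> is taken at face value. */
    if (info->type == VIR_DOMAIN_DEVICE_ADDRESS_TYPE_NONE)
        flags |= VIR_PCI_CONNECT_HOTPLUGGABLE;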
When support for the pcie-root and dmi-to-pci-bridge buses on a Q35
machinetype was added, I was concerned that even though qemu at the
time allowed plugging a PCI device into a PCIe port, it might not
be supported in the future. To prevent painful backtracking in the
possible future where this happened, I disallowed such connections
except in a few specific cases requested by qemu developers (indicated
in the code with the flag VIR_PCI_CONNECT_TYPE_EITHER_IF_CONFIG).
Now that a couple years have passed, there is a clear message from
qemu that there is no danger in allowing PCI devices to be plugged
into PCIe ports. This patch eliminates
VIR_PCI_CONNECT_TYPE_EITHER_IF_CONFIG and changes the code to always
allow a PCI->PCIe or PCIe->PCI connection when the PCI address is
explicitly specified in the config. (For newly added devices that haven't yet
been given a PCI address, the auto-placement still prefers using the
correct type of bus).
The PCI case of the switch statement in this function contains another
switch statement with a case for each model. Currently every model
except pci-root and pcie-root has a check for index > 0 (since only
those two can have index==0), and the function should never be called
for those two anyway. If we move the check for !pci[e]-root to the top
of the pci case, then we can move the check for index > 0 out of the
individual model cases. This will save repeating that check for the
three new controller models about to be added.
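A sketch of the hoisted check, placed at the top of the PCI case (the
error message wording is illustrative):

    /* Only pci-root and pcie-root may use index 0, so validate that once
     * instead of repeating it in every model's case below. */
    if (def->model != VIR_DOMAIN_CONTROLLER_MODEL_PCI_ROOT &&
        def->model != VIR_DOMAIN_CONTROLLER_MODEL_PCIE_ROOT &&
        def->idx == 0) {
        virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                       _("index for this pci controller model must be > 0"));
        goto error;
    }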
Instead of using qemuMonitorJSONDevGetBlockExtent (which I plan to
remove later) extract the data in place.
Additionally, add a flag that is set when the wr_highest_offset was
extracted correctly so that callers can act accordingly.
The test case addition should help make sure that everything works.
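A sketch of the in-place extraction; the stats structure fields are
assumptions based on the QMP reply layout:

    /* Record whether the counter was actually present so callers can
     * tell a real 0 apart from "not provided". */
    if (virJSONValueObjectGetNumberUlong(stats, "wr_highest_offset",
                                         &bstats->wr_highest_offset) == 0)
        bstats->wr_highest_offset_valid = true;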
Function prlsdkGetStatsParam was missing a prototype or the static
keyword. I went with static since it built successfully.
Pushed as a build breaker fix.
Implemented counters:
VIR_DOMAIN_MEMORY_STAT_SWAP_IN
VIR_DOMAIN_MEMORY_STAT_SWAP_OUT
VIR_DOMAIN_MEMORY_STAT_MINOR_FAULT
VIR_DOMAIN_MEMORY_STAT_MAJOR_FAULT
VIR_DOMAIN_MEMORY_STAT_AVAILABLE
VIR_DOMAIN_MEMORY_STAT_ACTUAL_BALLOON
VIR_DOMAIN_MEMORY_STAT_UNUSED
Comments.
1. Use the vzDomObjFromDomainRef/virDomainObjEndAPI pair to get the
domain object, as we use prlsdkGetStatsParam; see the previous
statistics comments and the sketch after this list.
2. Balloon statistics are not applicable to containers. Fault
statistics for containers are not provided in PCS6 yet.
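A minimal sketch of the locking pattern from comment 1 (error handling
trimmed):

    virDomainObjPtr dom = NULL;

    /* Take a referenced domain object so it survives prlsdkGetStatsParam
     * dropping the lock, then release both lock and reference. */
    if (!(dom = vzDomObjFromDomainRef(domain)))
        goto cleanup;

    /* ... calls that may temporarily drop the object lock ... */

 cleanup:
    virDomainObjEndAPI(&dom);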
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Comments.
Replace the vzDomObjFromDomain/virObjectUnlock pair
with vzDomObjFromDomainRef/virDomainObjEndAPI, as we
use prlsdkGetStatsParam. See the previous statistics
comments.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Make the net device lookup by MAC return an SDK handle
instead of the rather ephemeral enumeration index. After
this change there is no longer any need for a special
function that removes a device by enumeration index.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Populate the counters the SDK currently supports:
rx_bytes
rx_packets
tx_bytes
tx_packets
Comments.
Use the vzDomObjFromDomainRef/virDomainObjEndAPI pair to get the domain
object, as we use prlsdkGetStatsParam, which can release the domain
object lock; thus we need a reference in case the domain
is deleted meanwhile.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
In my previous fix (1310b1358) I tried to solve an ordering
issue. Well, while it worked, it had the side effect of keeping a
temporary file around. My patch was buggy in that sense. Solve
this by properly marking the dependency, without any side effect.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
So far the argument has had little meaning and was practically ignored.
This is not good, since when doing memory hotplug, the size of the desired
hugepage backing is passed in that argument. Taking a closer look at the
tests I'm fixing reveals the bug. For instance, while the following is
in the test:
  <memory model='dimm'>
    <source>
      <nodemask>1-3</nodemask>
      <pagesize unit='KiB'>4096</pagesize>
    </source>
    <target>
      <size unit='KiB'>524287</size>
      <node>0</node>
    </target>
    <address type='dimm' slot='0' base='0x100000000'/>
  </memory>
the generated commandline corresponding to this XML was:
-object memory-backend-ram,id=memdimm0,size=536870912,\
host-nodes=1-3,policy=bind
Have you noticed? Yes, memory-backend-ram! Nothing could be further
from the right answer. The hugepage backing is requested in the XML
and we happily ignore it. This is just not right. It's
memory-backend-file which should have been used:
-object memory-backend-file,id=memdimm0,prealloc=yes,\
mem-path=/dev/hugepages4M/libvirt/qemu,size=536870912,\
host-nodes=1-3,policy=bind
The problem is that @pagesize passed to qemuBuildMemoryBackendStr
(where this part of the command line is built) was ignored. The hugepage
to back the memory was looked up solely by NUMA node pinning, which
works only for regular guest NUMA nodes.
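A rough sketch of the intended decision in qemuBuildMemoryBackendStr;
the variable names are illustrative and the mem-path construction is
omitted:

    /* An explicitly requested page size forces the file backend; only
     * without it do we fall back to matching <hugepages> against the
     * guest NUMA node pinning. */
    if (pagesize || hugepage_matched_by_nodeset)
        backend = "memory-backend-file";  /* needs a hugetlbfs mem-path */
    else
        backend = "memory-backend-ram";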
Then, I'm changing the hugepage size in the test XMLs too. This is
simply because in the test suite we create dummy mount points only for
2M and 1G hugepages, and in the test 4M was requested. I'm sticking to
2M, but 1G should just work too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1196644
This function constructs the backend (host facing) part of the
memory device. At the beginning, the configured hugepages are
searched to find the best match for the given guest NUMA node.
Configured hugepages can have a @nodeset attribute to specify on
which guest NUMA nodes the hugepage backing should be used.
There is, however, one corner case: users may just say 'use
hugepages to back all the nodes'. In other words:
  <memoryBacking>
    <hugepages/>
  </memoryBacking>
  <cpu>
    <numa>
      <cell id='0' cpus='0-1' memory='1024000' unit='KiB'/>
    </numa>
  </cpu>
Our code fails in this case: since there's no @nodeset (nor
any <page/> child element of <hugepages/>) we fail to look up the
default hugepage size to use.
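A hedged sketch of the fallback; getDefaultHugepageSizeKiB() is a
hypothetical helper standing in for whatever the driver actually uses:

    /* Hugepages requested globally (no @nodeset, no <page/> children):
     * fall back to the default hugepage size instead of failing. */
    if (!pagesize && def->mem.nhugepages &&
        def->mem.hugepages[0].size == 0)
        pagesize = getDefaultHugepageSizeKiB();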
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1235116
According to our XML definition, zero is as valid as any other value,
mainly because it should be kernel-agnostic.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Drop internal data structures and use the proper fields in virDomainDef.
This allows the code to be greatly simplified and the private data
structure that was holding just redundant data to be removed.
This patch also fixes the bogus output where we'd report that a fresh
VM without vCPU pinning would not run on all vCPUs.