If any disk of a VM was involved in a block copy job we refused to take
a snapshot. Since block jobs other than copy also interlock with
snapshots, and the interlocking applies to individual disks only, we can
make the check per disk and interlock all block job types supported by
libvirt.
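A rough sketch of the per-disk check described above (the helper names
are illustrative, not actual libvirt API):
  for (i = 0; i < vm->def->ndisks; i++) {
      virDomainDiskDefPtr disk = vm->def->disks[i];

      /* disks not touched by the snapshot don't interlock */
      if (!diskIsPartOfSnapshot(snapdef, i))      /* hypothetical helper */
          continue;

      /* any active block job type on the disk blocks the snapshot */
      if (diskHasActiveBlockJob(disk)) {          /* hypothetical helper */
          virReportError(VIR_ERR_OPERATION_INVALID,
                         _("disk '%s' has an active block job"), disk->dst);
          return -1;
      }
  }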
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1203628
In the order of appearance:
* MAX_LISTEN - never used
added by 23ad665c (qemud) and addec57 (lock daemon)
* NEXT_FREE_CLASS_ID - never used, added by 07d1b6b
* virLockError - never used, added by eb8268a4
* OPENVZ_MAX_ARG, CMDBUF_LEN, CMDOP_LEN
unused since the removal of ADD_ARG_LIT in d8b31306
* QEMU_NB_PER_CPU_STAT_PARAM - unused since 897808e
* QEMU_CMD_PROMPT, QEMU_PASSWD_PROMPT - unused since 1dc10a7
* TEST_MODEL_WORDSIZE - unused since c25c18f7
* TEMPDIR - never used, added by 714bef5
* NSIG - workaround for old headers
added by commit 60ed1d2
unused since virExec was moved by commit 02e8691
* DO_TEST_PARSE - never used, added by 9afa006
* DIFF_MSEC, GETTIMEOFDAY - unused since eee6eb6
Two places would call qemuPrepareCpumap() with priv->autoNodeset to
convert it to a cpuset. Remove the function and use the prepared cpuset
automatically.
When the default cpuset or automatic numa placement is used, libvirt
would place the whole parent cgroup in the specified cpuset. This then
made it impossible to re-pin the vcpus to a different cpu.
This patch pins only the vcpu threads to the default cpuset and thus
allows re-pinning them later.
The following config would fail to start:
<domain type='kvm'>
...
<vcpu placement='static' cpuset='0-1' current='2'>4</vcpu>
<cputune>
<vcpupin vcpu='0' cpuset='2-3'/>
...
This is a regression since a39f69d2b.
When the synchronous pivot option is selected, libvirt would not update
the backing chain until the job was exited. Some applications then
received invalid data as their job serialized first.
This patch removes polling to wait for the ABORT/PIVOT job completion
and replaces it with a condition. If a synchronous operation is
requested the update of the XML is executed in the job of the caller of
the synchronous request. Otherwise the monitor event callback uses a
separate worker to update the backing chain with a new job.
This is a regression since 1a92c71910
When the ABORT job is finished synchronously you get the following call
stack:
#0 qemuBlockJobEventProcess
#1 qemuDomainBlockJobImpl
#2 qemuDomainBlockJobAbort
#3 virDomainBlockJobAbort
While previously or while using the _ASYNC flag you'd get:
#0 qemuBlockJobEventProcess
#1 processBlockJobEvent
#2 qemuProcessEventHandler
#3 virThreadPoolWorker
Later on I'll be adding a condition that will allow synchronising a
SYNC block job abort. The approach will require this code to be called
from two different places so it has to be extracted into a helper.
Commit 1a92c719 moved code to handle block job events to a different
function that is executed in a separate thread. The caller of
processBlockJob handles locking and unlocking of @vm, so we should
not do it in the function itself.
The block copy API takes the speed in bytes/s rather than the MiB/s used
by the older virDomainBlockRebase API. We correctly converted the
speed to bytes/s in the old API but we still called the common helper
virDomainBlockCopyCommon with the unadjusted variable.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1207122
When getting info on NUMA parameters for a domain,
virCgroupGetCpusetMems() may be called. However, as of 43b67f2e
the call is guarded by a check whether the memory controller is present.
Even though it may not be instantly obvious, NUMA parameters are
stored under the cpuset controller. Therefore the check needs to look
like this:
if (!virCgroupHasController(priv->cgroup,
VIR_CGROUP_CONTROLLER_CPUSET) ||
virCgroupGetCpusetMems(priv->cgroup, &nodeset) < 0) {
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Blockcopy to a non-file destination is not supported according to the
code, but a 'goto endjob' was missing after checking the destination.
This led to calling drive-mirror with wrong parameters.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1206406
Signed-off-by: Shanzhi Yu <shyu@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Starting a qemu VM with a memory module that has the base address
specified results in the following error:
error: internal error: early end of file from monitor: possible problem:
2015-03-26T03:45:52.338891Z qemu-kvm: -device pc-dimm,node=0,memdev=memdimm0,
id=dimm0,slot=0,base=4294967296: Property '.base' not found
The correct property name for the base address is 'addr'.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Because of the microcode update to Haswell/Broadwell CPUs, existing
domains using these CPUs may fail to start even though they used to run
just fine. To help users solve this issue we try to suggest switching to
-noTSX variant of the CPU model:
virsh # start cd
error: Failed to start domain cd
error: unsupported configuration: guest and host CPU are not
compatible: Host CPU does not provide required features: rtm, hle;
try using 'Haswell-noTSX' CPU model
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Use the virStorageSourceIsEmpty helper to determine whether the drive
source is empty rather than checking for src->path. This fixes starting
a VM with an empty network cdrom, which previously failed without
reporting any error.
The function that formats the string for network drives would return an
error code but did not set an error message when called on a storage
source with VIR_STORAGE_NET_PROTOCOL_LAST or _NONE.
Report an error in this case should it ever be called that way.
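A hedged sketch of such a defensive branch, assuming the surrounding
switch over src->protocol in the formatter (not shown):
  case VIR_STORAGE_NET_PROTOCOL_NONE:
  case VIR_STORAGE_NET_PROTOCOL_LAST:
      virReportError(VIR_ERR_INTERNAL_ERROR,
                     _("unexpected network protocol %d"), src->protocol);
      return -1;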
While adding tests for status XML parsing and formatting I've noticed
that the device alias list is leaked.
==763001== 81 (48 direct, 33 indirect) bytes in 1 blocks are definitely lost in loss record 414 of 514
==763001== at 0x4C2B8F0: calloc (vg_replace_malloc.c:623)
==763001== by 0x6ACF70F: virAllocN (viralloc.c:191)
==763001== by 0x447B64: qemuDomainObjPrivateXMLParse (qemu_domain.c:727)
==763001== by 0x6B848F9: virDomainObjParseXML (domain_conf.c:15491)
==763001== by 0x6B84CAC: virDomainObjParseNode (domain_conf.c:15608)
When starting a VM with hotpluggable memory devices the user may specify
an invalid source NUMA node. Libvirt would pass through the error from
qemu:
# virsh start test3
error: Failed to start domain test3
error: internal error: process exited while connecting to monitor:
2015-03-25T01:12:17.205913Z qemu-kvm: -object memory-backend-ram,id=memdimm0
,size=536870912,host-nodes=1-3,policy=bind: cannot bind memory to host NUMA nodes:
Invalid argument
This patch adds a check that allows reporting a better error:
# virsh start test3
error: Failed to start domain test3
error: configuration unsupported: NUMA node 1 is unavailable
Signed-off-by: Luyao Huang <lhuang@redhat.com>
This is very helpful when we want to log and report why we could not
acquire a state change lock. Reporting what job keeps it locked helps
with understanding the issue. Moreover, after calling
virDomainGetControlInfo, it's possible to tell whether libvirt is just
stuck somewhere within the API (or it just forgot to cleanup the job) or
whether libvirt is waiting for QEMU to reply.
The error message will look like the following:
# virsh resume cd
error: Failed to resume domain cd
error: Timed out during operation: cannot acquire state change lock
(held by remoteDispatchDomainSuspend)
https://bugzilla.redhat.com/show_bug.cgi?id=853839
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
We don't have to modify cpuset.mems on hosts without NUMA. This also
fixes an error message that you get instead of success if you try to
update the vcpus of a guest on a host without NUMA:
error: internal error: NUMA isn't available on this host
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
We should call virDomainLiveConfigHelperMethod ASAP because this
function translates VIR_DOMAIN_AFFECT_CURRENT into VIR_DOMAIN_AFFECT_LIVE
or VIR_DOMAIN_AFFECT_CONFIG. All other additional checks for those two
flags should take into account that the user may have given us
VIR_DOMAIN_AFFECT_CURRENT.
Remove the unnecessary check whether the domain is live in case of
VIR_DOMAIN_VCPU_GUEST because this check is done by
virDomainLiveConfigHelperMethod.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Improve the error message by rewriting it completely from:
error: unsupported configuration: virtio only support device address
type 'PCI'
to:
error: unsupported configuration: virtio disk cannot have an address of type
drive
since we now support CCW addresses as well.
While debugging the support for responding to qemu RX_FILTER_CHANGED
events, I had changed the "ignoring this event" log message from
VIR_DEBUG to VIR_WARN, but forgot to change it back before
pushing. Since many guest OSes make enough changes to multicast lists
and/or promiscuous mode settings to trigger this message, it's
starting to show up as a red herring in bug reports.
Add a few helpers that allow operating on memory device definitions in
the domain config and use them to implement memory device coldplug in
the qemu driver.
Add support for starting a qemu instance with a 'pc-dimm' device. Thanks to the
refactors we are able to reuse the existing function to determine the
parameters.
Make sure that libvirt has all vital information needed to reliably
represent configuration of guest's memory devices in case of a
migration.
This patch forbids migration in case the required slot number and module
base address are not present (failed to be loaded from qemu via
monitor).
When using 'dimm' memory devices with qemu, some of the information
like the slot number and base address needs to be reloaded from qemu
after process start so that it reflects the actual state. The state then
allows using memory devices across migrations.
This patch adds code that parses and formats configuration for memory
devices.
A simple configuration would be:
<memory model='dimm'>
<target>
<size unit='KiB'>524287</size>
<node>0</node>
</target>
</memory>
A complete configuration of a memory device:
<memory model='dimm'>
<source>
<pagesize unit='KiB'>4096</pagesize>
<nodemask>1-3</nodemask>
</source>
<target>
<size unit='KiB'>524287</size>
<node>1</node>
</target>
</memory>
This patch preemptively forbids use of the <memory> device in individual
drivers so the users are warned right away that the device is not
supported.
To enable memory hotplug the maximum memory size and slot count need to
be specified. As qemu now supports units other than mebibytes when
specifying memory, use the new interface in this case.
Add an XML element that will allow specifying the maximum supportable
memory and the count of memory slots to use with memory hotplug.
To avoid possible confusion and misuse of the new element this patch
also explicitly forbids the use of the maxMemory setting in individual
drivers' post parse callbacks. This limitation will be lifted when the
support is implemented.
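For illustration, the element is expected to look roughly like this (the
'slots' and 'unit' attributes and the value are made up for this example):
  <maxMemory slots='16' unit='KiB'>16777216</maxMemory>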
In the last section of the function, if the config is determined to be
invalid because QEMU doesn't support the memory device, the JSON config
object would still be returned even though it doesn't make sense.
Assign the object to be returned only on success.
When no model is specified in the domain definition for a scsi
controller and the architecture is s390, then virtio-scsi is set as the
default model.
Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Daniel Hansel <daniel.hansel@linux.vnet.ibm.com>
Reviewed-by: Stefan Zimmermann <stzi@linux.vnet.ibm.com>
Reviewed-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Commit cf54c60699 introduced the ability
to create missing storage volumes during migration. For network disks,
however, we may not necessarily be able to detect whether they already
exist -- there is no straight-forward way to map the disk to a storage
volume, and even if there were it's possible no configured storage pool
actually contains the disk.
It is better to assume the network disk exists in this case, rather than
aborting the migration completely. If the volume really is missing, QEMU
will generate an appropriate error later in the migration.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
Wikipedia's list of common misspellings [1] has a machine-readable
version. This patch fixes those misspellings mentioned in the list
which don't have multiple correct variants (e.g. "accension", which can
be both "accession" and "ascension"); such misspellings are left
untouched. The list of changes was manually re-checked for false
positives.
[1] https://en.wikipedia.org/wiki/Wikipedia:Lists_of_common_misspellings/For_machines
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
We've never set the cpuset.memory_migrate value to anything, keeping it
at its default. However, we allow changing cpuset.mems on a live domain.
That setting, however, doesn't have any consequence on a domain unless
it's going to allocate new memory.
I managed to make 'virsh numatune' move all the memory to any node I
wanted even without disabling libnuma's numa_set_membind(), so this
should be safe to use with it as well.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1198497
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1196934
When qemu exits during startup, libvirt includes the error from
/var/log/libvirt/qemu/vm.log in the error message:
$ virsh start test3
error: Failed to start domain test3
error: internal error: early end of file from monitor: possible problem:
2015-02-27T03:03:16.985494Z qemu-kvm: -numa memdev is not supported by
machine rhel6.5.0
The check for domain liveness added to qemuDomainObjExitMonitor
in commit dc2fd51f sometimes overwrites this error:
$ virsh start test3
error: Failed to start domain test3
error: operation failed: domain is no longer running
Fix the check to only report an error if there is none set.
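A minimal sketch of the intended check, assuming the surrounding code of
qemuDomainObjExitMonitor:
  if (!virDomainObjIsActive(obj) && !virGetLastError())
      virReportError(VIR_ERR_OPERATION_FAILED, "%s",
                     _("domain is no longer running"));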
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Issue #1 - A call to virBitmapNew did not check if the allocation
failed, which could lead to a NULL dereference.
Issue #2 - When deleting the pin entries from the config file, the
code loops from the number of elements down to the "new" vcpu count;
however, the pin id values are numbered 0..n-1 not 1..n, so the "first"
pin attempt would never work. Luckily the check was for whether the
incoming 'n' (vcpu id) matched the entry in the array from 0..arraysize
rather than a dereference of the 'n' entry
In qemu 2.3, the migration status will include 'cancelling' in the
window between when an asynchronous cancel has been requested and
when the migration is actually halted. Previously, qemu hid this
state and reported 'active'. Libvirt manages the sequence okay
even when the string is unrecognized (that is, it will report an
unknown state:
Migration: [ 69 %]^Cerror: internal error: unexpected migration status in cancelling.
but the migration is still cancelled), but recognizing the string
makes for a smoother user experience.
* src/qemu/qemu_monitor.h
(QEMU_MONITOR_MIGRATION_STATUS_CANCELLING): Add enum.
* src/qemu/qemu_monitor.c (qemuMonitorMigrationStatus): Map it.
* src/qemu/qemu_migration.c (qemuMigrationUpdateJobStatus): Adjust
clients.
* src/qemu/qemu_monitor_json.c
(qemuMonitorJSONGetMigrationStatusReply): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
virnetdevopenvswitch.h declares a few functions that can be called to
add ports to and remove them from OVS bridges, and retrieve the
migration data for a port. It does not contain any data definitions
that are used by domain_conf.h. But for some reason, domain_conf.h was
#including virnetdevopenvswitch.h; the few files that actually need
virnetdevopenvswitch.h should be directly #including it. This adds a
few lines to the project, but saves all the files that don't need it
from the extra computing, and makes the dependencies more clear cut.
Problem Description:
When we set the boot order for a vhost-user network interface, we found
that the boot index doesn't work.
Cause of the Problem:
The function qemuBuildVhostuserCommandLine() forcibly set the bootindex
argument of qemuBuildNicDevStr() to 0. Thus, the bootindex parameter got
lost.
Solution:
Pass the bootindex argument down.
Signed-off-by: Gao Haifeng <gaohaifeng.gao@huawei.com>
Signed-off-by: Zhang Bo <oscar.zhangbo@huawei.com>
When libvirt is starting a domain, it reports the state as SHUTOFF until
it's RUNNING. This is not ideal because domain startup may take a long
time (usually because of some configuration issues, firewalls blocking
access to network disks, etc.) and domain lists provided by libvirt look
awkward. One can see weird shutoff domains with IDs in a list of active
domains or even shutoff transient domains. In any case, it looks more
like a bug in libvirt than a normal state a domain goes through.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The function needs a pointer to the network to get list of DHCP
leases. The pointer is obtained via virNetworkLookupByName() which
requires callers to free the returned network once no longer needed.
Otherwise it's leaked.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Now that we allow HW address to be not present on our RPC layer,
don't error out if qemu-ga hasn't provided any.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1199182 documents that
after a series of disk snapshots into existing destination images,
followed by active commits of the top image, it is possible for
qemu 2.2 and earlier to end up tracking a different name for the
image than what it would have had when opening the chain afresh.
That is, when starting with the chain 'a <- b <- c', the name
associated with 'b' is how it was spelled in the metadata of 'c',
but when starting with 'a', taking two snapshots into 'a <- b <- c',
then committing 'c' back into 'b', the name associated with 'b' is
now the name used when taking the first snapshot.
Sadly, older qemu doesn't know how to treat different spellings of
the same filename as identical files (it uses strcmp() instead of
checking for the same inode), which means libvirt's attempt to
commit an image using solely the names learned from qcow2 metadata
fails with a cryptic:
error: internal error: unable to execute QEMU command 'block-commit': Top image file /tmp/images/c/../b/b not found
even though the file exists. Trying to teach libvirt the rules on
which name qemu will expect is not worth the effort (besides, we'd
have to remember it across libvirtd restarts, and track whether a
file was opened via metadata or via snapshot creation for a given
qemu process); it is easier to just always directly ask qemu what
string it expects to see in the first place.
As a safety valve, we validate that any name returned by qemu
still maps to the same local file as we have tracked it, so that
a compromised qemu cannot accidentally cause us to act on an
incorrect file.
* src/qemu/qemu_monitor.h (qemuMonitorDiskNameLookup): New
prototype.
* src/qemu/qemu_monitor_json.h (qemuMonitorJSONDiskNameLookup):
Likewise.
* src/qemu/qemu_monitor.c (qemuMonitorDiskNameLookup): New function.
* src/qemu/qemu_monitor_json.c (qemuMonitorJSONDiskNameLookup)
(qemuMonitorJSONDiskNameLookupOne): Likewise.
* src/qemu/qemu_driver.c (qemuDomainBlockCommit)
(qemuDomainBlockJobImpl): Use it.
Signed-off-by: Eric Blake <eblake@redhat.com>
Use the utilities introduced in the previous patches so the qemu
driver is able to create tap devices that are bound (and unbound
on domain destruction) to Midonet virtual ports.
Signed-off-by: Antoni Segura Puimedon <toni+libvirt@midokura.com>
Only selected fields from the disk source were copied when cold updating
the source of a CDROM drive. When such a drive was backed by a network
file this resulted in corruption of the definition:
<disk type='network' device='cdrom'>
<driver name='qemu' type='raw' cache='none'/>
<source protocol='gluster' name='gluster-vol1(null)'>
<host name='localhost'/>
</source>
<target dev='vdc' bus='virtio'/>
<readonly/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
</disk>
Update the whole source instead of cherry-picking elements.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1166024
By querying the qemu guest agent with the QMP command
"guest-network-get-interfaces" and converting the received JSON
output to structured objects.
Although "ifconfig" is deprecated, IP aliases created by "ifconfig"
are supported by this API. The legacy syntax of an IP alias is:
"<ifname>:<alias-name>". Since we want all aliases to be clubbed
under parent interface, simply stripping ":<alias-name>" suffices.
Note that IP aliases formed by "ip" aren't visible to "ifconfig",
and aliases created by "ip" do not have any specific name. But
we are lucky, as qemu guest agent detects aliases created by both.
src/qemu/qemu_agent.h:
* Define qemuAgentGetInterfaces
src/qemu/qemu_agent.c:
* Implement qemuAgentGetInterfaces
src/qemu/qemu_driver.c:
* New function qemuGetDHCPInterfaces
* New function qemuDomainInterfaceAddresses
src/remote_protocol-structs:
* Define new structs
tests/qemuagenttest.c:
* Add new test: testQemuAgentGetInterfaces
Test cases for IP aliases, 0 or multiple ipv4/ipv6 address(es)
Signed-off-by: Nehal J Wani <nehaljw.kkd1@gmail.com>
We're parsing the memballoon status period as an unsigned int, but when
we're trying to set it, both we and qemu use a signed int. That means
large values will get wrapped around to negative ones, resulting in an
error.
Basically the same problem as commit e3a7b874 was dealing with when
updating a live domain.
QEMU changed the accepted value to int64 in commit 1f9296b5, but even
values as large as INT_MAX don't make sense since the value passed means
seconds. Hence adding a capability flag for this change isn't worth it.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1140958
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
In order not to leave old error messages set, this patch refactors the
code so the error is reported only when acted upon. The only such place
already rewrites any error, so cleaning up all the error reporting in
qemuMonitorSetMemoryStatsPeriod() is enough.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Patch 51f9f03a4c introduced a regression where, if a blockCommit
operation fails, the disk is still marked as being
part of a block job but can't be unmarked later.
As pointed out by jtomko in his review of the IOThreads pinning code:
http://www.redhat.com/archives/libvir-list/2015-March/msg00495.html
there are some comments sprinkled in indicating IOThreads were using
the same structure as the VcpuPin code...
This is the first patch of a few that will change the virDomainVcpuPin*
structures and code to just virDomainPin* - starting with the data
structure naming...
During his review of the iothreads pin setting code, Pavel noted that
there was a potential memory leak with respect to how the newVcpuPin
is handled and the goto endjob's in failure paths which would not free
the memory. For reference, See:
http://www.redhat.com/archives/libvir-list/2015-March/msg00415.html
The memory sizes in qemu are aligned up to 1 MiB boundaries. There were
two places where this was done: once for the total size and once for the
individual NUMA cell sizes.
Add a function that will align the sizes in one place so that it's clear
where the sizes are aligned.
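A hedged sketch of such a helper, assuming sizes are kept in KiB and
libvirt's VIR_ROUND_UP macro (the function name is illustrative):
  static unsigned long long
  qemuDomainMemoryAlign(unsigned long long mem_kib)
  {
      /* round up to the next 1 MiB (1024 KiB) boundary */
      return VIR_ROUND_UP(mem_kib, 1024);
  }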
As there are two possible approaches to define a domain's memory size -
one used with legacy, non-NUMA VMs configured via the <memory> element,
and a per-node approach on NUMA machines - the user needs to make
sure that both are specified correctly in the NUMA case.
To avoid this burden on the user I'd like to replace the NUMA case with
automatic totaling of the memory size. To achieve this I need to replace
direct access to the virDomainMemtune's 'max_balloon' field with
two separate getters depending on the desired size.
The two sizes are needed as:
1) Startup memory size doesn't include memory modules in some
hypervisors.
2) After startup these count as the usable memory size.
Note that the comments for the functions are future aware and document
state that will be present after a few later patches.
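Illustrative shape of the two getters (names follow the description
above and are not necessarily the final ones):
  /* startup size: may exclude memory modules on some hypervisors */
  unsigned long long virDomainDefGetMemoryInitial(const virDomainDef *def);
  /* usable size after startup: includes the memory modules */
  unsigned long long virDomainDefGetMemoryActual(const virDomainDef *def);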
Surprisingly we did not grab a VM job when a block job finished and we'd
happily rewrite the backing chain data. This made it possible to crash
libvirt when two backing chain updates were queued in quick succession,
among other badness.
To fix it, add yet another handler to the helper thread that handles
monitor events that require a job.
We interpret port values as a signed int (converting them from char *),
so if a negative value is provided in a network disk's configuration,
we accept it as valid; however, an 'unknown cause' error is raised later.
This error is only accidental because we return the port value in the
return code.
This patch adds just a minor tweak to the already existing check so we
reject negative values the same way as we reject non-numerical strings.
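A hedged sketch of the tightened check, assuming the port string is
parsed with virStrToLong_i() (variable names are illustrative):
  if (virStrToLong_i(portstr, NULL, 10, &port) < 0 || port <= 0) {
      virReportError(VIR_ERR_XML_ERROR,
                     _("malformed port number '%s'"), portstr);
      goto cleanup;
  }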
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1163553
A helper that never returns an error and treats bits out of bitmap range
as false.
Use it everywhere we use ignore_value on virBitmapGetBit, or loop over
the bitmap size.
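A sketch of the helper's semantics (close to, though not necessarily
identical with, the real implementation):
  bool
  virBitmapIsBitSet(virBitmapPtr bitmap, size_t b)
  {
      bool result;

      /* bits outside the bitmap's range simply read as unset */
      if (b >= virBitmapSize(bitmap))
          return false;

      ignore_value(virBitmapGetBit(bitmap, b, &result));
      return result;
  }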
Now that qemuDomainBlocksStatsGather provides the functionality of both
qemuMonitorGetBlockStatsParamsNumber and qemuMonitorGetBlockStatsInfo we
can reuse it and kill a lot of code.
Additionally, as a bonus, qemuDomainBlockStatsFlags will now support
summary statistics, so add a statement to the virsh man page about that.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1142636
In the LXC driver, if the disk path is not provided the API returns
total statistics for all disks of the domain. With the new text monitor
implementation this can now be done in the qemu driver too.
Add code that will total the stats for all disks if the path is not
provided.
Extract the code to look up the disk alias and return the block stats
struct so that it can be reused later in qemuDomainBlockStatsFlags.
The function uses qemuMonitorGetAllBlockStatsInfo instead of
qemuMonitorGetBlockStatsInfo.
Our virDomainBlockStatsFlags API uses the old approach where, when it's
called without the typed parameter array, it returns the count of parameters
supported by qemu.
The supported parameter count is obtained via separate monitor calls
which is a waste since we can calculate it when gathering the data.
This patch adds code to the qemuMonitorGetAllBlockStatsInfo workers that
allows tracking the count of supported fields reported by qemu and will
allow removing the old duplicate code.
The function that is extracting block stats data from the QMP monitor
reply contains a lot of repeated code. Since I'd be changing each of the
copies in the next patch, lets convert it to a macro right away.
Add a different version of parser for "info blockstats" that basically
parses the same information as the existing copy of the function.
This will allow us to remove the single device version
qemuMonitorGetBlockStatsInfo in the future.
The new implementation uses a few new helpers so it should be more
understandable and provides a test case to verify that it works.
Allocate the hash table in the monitor wrapper function instead of the
worker itself so that the text monitor impl that will be added in the
next patch doesn't have to duplicate it.
Error messages are already set in all code paths returning -1 from
networkGetNetworkAddress, so we don't want to overwrite them.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: John Ferlan <jferlan@redhat.com>
When creating qemu capabilities, a dummy virDomainObj is created just
because our monitor code expects that. However, the object is created
locked already. Then, under cleanup label, we simply unref the object
which results in whole domain object to be disposed. The object lock
is destroyed subsequently, but hey - it's still locked:
==24845== Thread #14's call to pthread_mutex_destroy failed
==24845== with error code 16 (EBUSY: Device or resource busy)
==24845== at 0x4C3024E: pthread_mutex_destroy (in /usr/lib64/valgrind/vgpreload_helgrind-amd64-linux.so)
==24845== by 0x531F72E: virMutexDestroy (virthread.c:83)
==24845== by 0x5302977: virObjectLockableDispose (virobject.c:237)
==24845== by 0x5302A89: virObjectUnref (virobject.c:265)
==24845== by 0x1DD37866: virQEMUCapsInitQMP (qemu_capabilities.c:3397)
==24845== by 0x1DD37CC6: virQEMUCapsNewForBinary (qemu_capabilities.c:3481)
==24845== by 0x1DD381E2: virQEMUCapsCacheLookup (qemu_capabilities.c:3609)
==24845== by 0x1DD30F8A: virQEMUCapsInitGuest (qemu_capabilities.c:744)
==24845== by 0x1DD31889: virQEMUCapsInit (qemu_capabilities.c:1020)
==24845== by 0x1DD7DD36: virQEMUDriverCreateCapabilities (qemu_conf.c:888)
==24845== by 0x1DDC57C0: qemuStateInitialize (qemu_driver.c:803)
==24845== by 0x53DC743: virStateInitialize (libvirt.c:777)
==24845==
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Commit 4bbe1029f fixed a problem in commit f7afeddc by moving the call
to virNetDevGetIndex() to a location common to all interface types (so
that the nicindex array would be filled in for macvtap as well as tap
interfaces), but the location was *too* common, as the original call
to virNetDevGetIndex() had been in a section qualified by "if
(cfg->privileged)". The result was that the "fixed" libvirtd would try
to call virNetDevGetIndex() even for session mode libvirtd, and end up
failing with the log message:
Unable to open control socket: Operation not permitted
To remedy that, this patch qualifies the call to virNetDevGetIndex()
in its new location with cfg->privileged.
This resolves https://bugzilla.redhat.com/show_bug.cgi?id=1198244
After adding a call to virBitmapToData and a check of its return value
in the IOThreads code, my Coverity checker lets me know that qemuDomainHelperGetVcpus
also needs to check the status...
Depending on the flags passed, either attempt to return the active/live
IOThread data for the domain or the config data.
The active/live path will call into the Monitor in order to get the
IOThread data and then correlate the thread_id's returned from the
monitor to the currently running system/threads in order to ascertain
the affinity for each iothread_id.
The config path will map each of the configured IOThreads and return
any configured iothreadspin data
Signed-off-by: John Ferlan <jferlan@redhat.com>
There was a mess in the way we store the unlimited value for memory
limits and how we handled values provided by the user. Internally there
were two possible ways to store the unlimited value: as 0 or as
VIR_DOMAIN_MEMORY_PARAM_UNLIMITED. Because we chose to store memory
limits as unsigned long long, we cannot use -1 to represent unlimited.
It's much easier for us to say that everything greater than
VIR_DOMAIN_MEMORY_PARAM_UNLIMITED means unlimited and leave 0 as a valid
value despite that it makes no sense to set a limit to 0.
Remove the unnecessary function virCompareLimitUlong. The test update
prevents 0 from being misused as unlimited in the future.
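A minimal sketch of the resulting semantics, assuming (as in the public
API) that VIR_DOMAIN_MEMORY_PARAM_UNLIMITED itself also means unlimited;
the helper name is illustrative:
  static bool
  memLimitIsSet(unsigned long long value)
  {
      /* 0 stays a valid (if pointless) limit; only values at or above
       * the cutoff are treated as unlimited */
      return value < VIR_DOMAIN_MEMORY_PARAM_UNLIMITED;
  }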
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1146539
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Pass the TPM file descriptor to QEMU via command line.
Instead of passing /dev/tpm0 we now pass /dev/fdset/10 and the additional
parameters -add-fd set=10,fd=20.
This addresses the use case when QEMU is started with non-root privileges
and QEMU cannot open /dev/tpm0 for example.
Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
When the domain's source disk type is network and the source protocol is
rbd or sheepdog, the 'if() ... break' ends the current case early, which
skips the check whether the driver type is raw or qcow2. Libvirt would
thus allow creating an internal snapshot for a running domain with a
raw-format disk backed by rbd storage.
While both protocols support internal snapshots of the disk, qemu is not
able to use them here as it requires some place to store the memory
image. The check whether the disk is backed by a qcow2 image needs to be
executed
always.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1179533
Signed-off-by: Shanzhi Yu <shyu@redhat.com>
Previously when a domain would get stuck in a domain job due to a
programming mistake we'd report the following control state:
$ virsh domcontrol domain
occupied (1424343406.150s)
The timestamp is invalid as the monitor was not entered for that domain.
We can use that to detect that the domain has an active job and report a
better error instead:
$ virsh domcontrol domain
error: internal (locking) error
https://bugzilla.redhat.com/show_bug.cgi?id=1197600
Libvirt uses a pid file to track the pid of a started qemu process.
Whenever a domain is started, its pid is put into the corresponding pid
file. The pid file path is generated based on the domain name and stored
in the domain object internals. However, it's not stored in the status
XML and is therefore lost on daemon restarts. Hence, later, when the
domain is being shut down, the daemon does not know which pid file to
unlink, and the pid file is left behind. To avoid this, let's generate
the pid file path again in qemuProcessReconnect().
Reported-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Instead of checking defaultMode for every channel that has no mode
configured, test it only once outside of the channel loop. This fixes a
bug where, if for example all possible channels are set to insecure but
defaultMode is set to secure, we wouldn't auto-generate a TLS port, which
resulted in a failure while starting the guest.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1143832
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
We have two different places that need to be updated when touching the
code for allocating spice ports. Add a bool option to the
'qemuProcessSPICEAllocatePorts' function to switch between real and fake
allocation so we can use this function also in qemu_driver to generate
native domain definition.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Since adding the support for scheduler policy settings in commit
8680ea97, there have been two enums with the same information. That was
caused by rewriting the patch after the first draft.
Found thanks to clang; there was no impact whatsoever.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
The problem here was that when opening a channel, we were checking
whether the given channel matches by alias (which can't be NULL for a
running domain) or by its name, which can be NULL (for example with
spicevmc). In case of such a domain qemuDomainOpenChannel() made the
daemon crash.
STREQ_NULLABLE() is safe to use since the code in question is wrapped in
"if (name)" and is more readable, so use that instead of checking for
non-NULL "vm->def->channels[i]->target.name".
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1142631
This patch resolves a situation where the same "<target dev='$name'...>"
can be used for multiple disks in the domain.
While the $name is "mostly" advisory regarding the expected order that
the disk is added to the domain and not guaranteed to map to the device
name in the guest OS, it still should be unique enough such that other
domblk* type operations can be performed.
Without the patch, the domblklist will list the same Target twice:
$ virsh domblklist $dom
Target Source
------------------------------------------------
sda /var/lib/libvirt/images/file.qcow2
sda /var/lib/libvirt/images/file.img
Additionally, getting domblkstat, domblkerror, domblkinfo, and other block*
type calls will not be able to reference the second target.
Fortunately, hotplug disallows adding a "third" sda value:
$ qemu-img create -f raw /var/lib/libvirt/images/file2.img 10M
$ virsh attach-disk $dom /var/lib/libvirt/images/file2.img sda
error: Failed to attach disk
error: operation failed: target sda already exists
$
But since 'sdb' doesn't exist, one would get the following on the same
hotplug attempt when using 'sdb' instead of 'sda':
$ virsh attach-disk $dom /var/lib/libvirt/images/file2.img sdb
error: Failed to attach disk
error: internal error: unable to execute QEMU command 'device_add': Duplicate ID 'scsi0-0-1' for device
$
Since we cannot fix this issue at parsing time, the best that can be done so
as to not "lose" a domain is to make the check prior to starting the guest
with the results as follows:
$ virsh start $dom
error: Failed to start domain $dom
error: XML error: target 'sda' duplicated for disk sources '/var/lib/libvirt/images/file.qcow2' and '/var/lib/libvirt/images/file.img'
$
Running 'make check' found a few more instances in the tests where this
duplicated target dev value was being used. These also exhibited some
duplicated 'id=' values (negating the uniqueness argument of aliases) in
the corresponding .args file and of course the *xmlout version of a few
input XML files.
NUMA enabled guest configuration explicitly specifies memory sizes for
individual nodes. Allowing the virDomainSetMemoryFlags API (and friends)
to change the total doesn't make sense as the individual node configs
are not updated in that case.
Forbid use of the API in case NUMA is specified.
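A hedged sketch of the guard (the accessor name is assumed from the NUMA
refactoring mentioned elsewhere in this series):
  if (virDomainNumaGetNodeCount(def->numa) > 0) {
      virReportError(VIR_ERR_OPERATION_INVALID, "%s",
                     _("initial memory size of a domain with NUMA nodes "
                       "cannot be modified with this API"));
      goto cleanup;
  }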
If we combine the boot order on the command line with other
boot options, we prepend order= in front of it.
Instead of checking if the number of added arguments is between
0 and 2, separate the strings for boot order and options
and prepend boot order only if both strings are not empty.
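For illustration (values made up): with a boot order of 'nc' and the
boot menu enabled the expected result is '-boot order=nc,menu=on', while
with only the menu enabled it should be just '-boot menu=on'.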
Commit f7afeddc added code to report to systemd an array of interface
indexes for all tap devices used by a guest. Unfortunately it not only
didn't add code to report the ifindexes for macvtap interfaces
(interface type='direct') or the tap devices used by type='ethernet',
it ended up sending "-1" as the ifindex for each macvtap or hostdev
interface. This resulted in a failure to start any domain that had a
macvtap or hostdev interface (or actually any type other than
"network" or "bridge").
This patch does the following with the nicindexes array:
1) Modify qemuBuildInterfaceCommandLine() to only fill in the
nicindexes array if given a non-NULL pointer to an array (and modifies
the test jig calls to the function to send NULL). This is because
there are tests in the test suite that have type='ethernet' and still
have an ifname specified, but that device of course doesn't actually
exist on the test system, so attempts to call virNetDevGetIndex() will
fail.
2) Even then, only add an entry to the nicindexes array for
appropriate types, and do so for all appropriate types ("network",
"bridge", and "direct"), but only if the ifname is known (since that
is required to call virNetDevGetIndex()).
libvirt was unconditionally calling virNetDevBandwidthClear() for
every interface (and network bridge) of a type that supported
bandwidth, whether it actually had anything set or not. This doesn't
hurt anything (unless ifname == NULL!), but is wasteful.
This patch makes sure that all calls to virNetDevBandwidthClear() are
qualified by checking that the interface really had some bandwidth
setup done, and checks for a null ifname inside
virNetDevBandwidthClear(), silently returning success if it is null
(as well as removing the ATTRIBUTE_NONNULL from that function's
prototype, since we can't guarantee that it is never null,
e.g. sometimes a type='ethernet' interface has no ifname as it is
provided on the fly by qemu).
If the qemu binary on x86 does not support lsi SCSI controller,
but it supports virtio-scsi, we reject the virtio-specific attributes
for no reason.
Move the default controller assignment before the check.
https://bugzilla.redhat.com/show_bug.cgi?id=1168849
https://bugzilla.redhat.com/show_bug.cgi?id=1183869
So, you've successfully started yourself a domain. And since you want
to use it on your host exclusively you are confident enough to
passthrough the host CPU model, like this:
<cpu mode='host-passthrough'/>
Then, after a while, you want to save the domain into a file (e.g.
virsh save dom dom.save). And here comes the trouble. The file consists
of two parts: Libvirt header (containing domain XML among other
things), and qemu migration data. Now, the domain XML in the header is
formatted using special flags (VIR_DOMAIN_XML_SECURE |
VIR_DOMAIN_XML_UPDATE_CPU | VIR_DOMAIN_XML_INACTIVE |
VIR_DOMAIN_XML_MIGRATABLE).
Then, on your way back from the bar, you think of changing something
in the XML in the saved file (we have a command for it after all), say
listen address for graphics console. So you successfully type in the
command:
virsh save-image-edit dom.save
Change all the bits, and exit the editor. But instead of success
you're left with sad error message:
error: unsupported configuration: Target CPU model <null> does not
match source Pentium Pro
Sigh. Digging into the code you see lines, where we check for ABI
stability. The new XML you've produced is compared with the old one
from the saved file to see if qemu ABI will break or not. Wait, what?
We are using different flags to parse the XML you've provided so we
were just lucky it worked in some cases? Yep, that's right.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
In commit cc41c648 I've re-factored qemuMonitorFindBalloonObjectPath, but
missed that there is a memory leak. The "nextpath" variable is
overwritten in each iteration of the for loop, so we have to free it
before the next iteration.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Making use of the ARCH_IS_S390 macro introduced with
e808357528
Signed-off-by: Stefan Zimmermann <stzi@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Since s390 does not support usb, the default creation of a usb controller
for a domain should not occur.
Also adjust the s390 test cases by removing usb device instances, since
usb devices are no longer created by default for s390.
Signed-off-by: Stefan Zimmermann <stzi@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
For historical reasons data regarding NUMA configuration were split
between the CPU definition and numatune. We cannot do anything about the
XML still being split, but we certainly can at least store the relevant
data in one place.
This patch moves the NUMA stuff to the right place.
As virDomainNumatuneSet no longer allocates the virDomainNuma object,
it's not necessary to pass a pointer to a pointer to store the object as
it will not change.
While touching the parameter definitions I've also changed the name of
the parameter to "numa".
Name it virNumaMemAccess and add it to conf/numa_conf.[ch]
Note that to avoid a circular dependency the type of the NUMA cell
memAccess variable was changed to int. It will be turned back later
once the circular dependency no longer exists.
Not all machine types support all devices, device properties, backends,
etc. So until we create a matrix of [machineType, qemuCaps], lets just
filter out some capabilities before we return them to the consumer
(which is going to make decisions based on them straight away).
Currently, as qemu is unable to tell which capabilities are (not)
enabled for given machine types, it's us who has to hardcode the matrix.
One day maybe the hardcoding will go away and we can create the matrix
dynamically on the fly based on a few monitor calls.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1179678
When migrating with storage, libvirt iterates over the domain's disks and
instructs qemu to migrate the ones we are interested in (shared, RO and
source-less disks are skipped). The disks are migrated in series. No
new disk is transferred until the previous one has been quiesced.
This is checked on the qemu monitor via 'query-jobs' command. If the
disk has been quiesced, it practically went from copying its content
to a mirroring state, where all disk writes are mirrored to the other
side of the migration too. Having said that, there's one inherent error
in the design. The monitor command we use reports only active jobs. So
if the job fails for whatever reason, we will not see it anymore in the
command output. And this can happen fairly simply: just try to migrate
a domain with storage. If the storage migration fails (e.g. due to
ENOSPC on the destination) we resume the guest on the destination and
let it run on a partly copied disk.
The proper fix is what even the comment in the code says: listen for
qemu events instead of polling. If storage migration changes state an
event is emitted and we can act accordingly: either consider disk
copied and continue the process, or consider disk mangled and abort
the migration.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Upon BLOCK_JOB_COMPLETED event delivery, we check if the job has
completed (in qemuMonitorJSONHandleBlockJobImpl()). To give a better
picture, the event looks something like this:
"timestamp": {"seconds": 1423582694, "microseconds": 372666}, "event":
"BLOCK_JOB_COMPLETED", "data": {"device": "drive-virtio-disk0", "len":
8412790784, "offset": 409993216, "speed": 8796093022207, "type":
"mirror", "error": "No space left on device"}}
If "len" does not equal "offset" it's considered an error, and we can
clearly see "error" field filled in. However, later in the event
processing this case was handled no differently to case of job being
aborted via separate API. It's time that we start differentiate these
two because of the future work.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Currently, upon a BLOCK_JOB_* event, disk->mirrorState is not updated
each time. The callback code handling the events checks if a blockjob
was started via our public APIs prior to setting the mirrorState.
However, some block jobs may be started internally (e.g. during
storage migration), in which case we don't bother with setting
disk->mirror (there's nothing we can set it to anyway), or other
fields. But it will come in handy if we update the mirrorState in these
cases too. The event wasn't delivered just for fun - we've started the
job after all.
So, in this commit, the mirrorState is set to whatever job status
we've obtained. Of course, there are some actions on some statuses
that we want to perform. But instead of if {} else if {} else {} ...
enumeration, let's move to switch().
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If 'virNumaGetHostNodeset()' fails then the error path will try to free
uninitialized pointer mem_mask. Introduced by commit af2a1f058.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
PowerPC : Forbid NULL CPU model with 'host-model' mode in qemu command line.
This ensures that an XML such as following:
...
<cpu mode='host-model'>
<model fallback='allow'/>
</cpu>
...
will not generate a '-cpu host,compat=(null)' command line with qemu-system-ppc64.
Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com>
PowerPC : Explicitly associate 'qemu-system-ppc64' as the
default emulator for all 64-bit PowerPC guests ( both Big & Little Endian )
Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1126762
Commit 43b67f introduced a deadlock issue when we use numatune
to change the numa settings of a vm in session mode.
Jump to endjob instead of cleanup.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
So, when building the '-numa' command line, the
qemuBuildMemoryBackendStr() function does quite a lot of checks to
choose the best backend, or to check if one is in fact needed. However,
it reported that a backend is needed even for this little fella:
<numatune>
<memory mode="strict" nodeset="0,2"/>
</numatune>
This can be guaranteed via CGroups entirely, there's no need to use
memory-backend-ram to let qemu know where to get memory from. Well, as
long as there's no <memnode/> element, which explicitly requires the
backend. Long story short, we wouldn't have to care, as qemu works
either way. However, the problem is migration (as always). Previously,
libvirt would have started qemu with:
-numa node,memory=X
in this case and restricted memory placement in CGroups. Today, libvirt
creates more complicated command line:
-object memory-backend-ram,id=ram-node0,size=X
-numa node,memdev=ram-node0
Again, one wouldn't find anything wrong with these two approaches.
Both work just fine. Unless you try to migrate from the older libvirt
to the newer one. These two approaches are, unfortunately, not
compatible. My suggestion is, in order to allow users to migrate, let's
use the older approach for as long as the newer one is not needed.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
We do have a check for a valid per-domain security model, however we
still permit an invalid security model for a domain's device (those
which are specified with the <source> element).
This patch introduces a new function virSecurityManagerCheckAllLabel
which compares user specified security model against currently
registered security drivers. That being said, it also permits 'none'
being specified as a device security model.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1165485
Signed-off-by: Ján Tomko <jtomko@redhat.com>
The qemuDomainHelperGetVcpus code attempted to report an error when the
vcpupids info was NULL. Unfortunately earlier code would clamp the
value of 'maxinfo' to 0 when nvcpupids was 0, so the error reporting
would end up being skipped.
This led to 'virsh vcpuinfo <dom>' just returning an empty list
instead of giving the user a clear error.
In a previous commit I fixed the incorrect handling of vcpu pids
for TCG mode QEMU:
commit b07f3d821d
Author: Daniel P. Berrange <berrange@redhat.com>
Date: Thu Dec 18 16:34:39 2014 +0000
Don't setup fake CPU pids for old QEMU
The code assumes that def->vcpus == nvcpupids, so when we setup
fake CPU pids for old QEMU with nvcpupids == 1, we cause the
later code to read off the end of the array. This has fun results
like sched_setaffinity(0, ...) which changes libvirtd's own CPU
affinity, or even better sched_setaffinity($RANDOM, ...) which
changes the affinity of a random OS process.
The intent was that this would merely disable the ability to set
per-vCPU affinity. It should still have been possible to set VM
level host CPU affinity.
Unfortunately, when you set <vcpu cpuset='0-1'>4</vcpu>, the XML
parser will internally take this & initialize an entry in the
def->cputune.vcpupin array for every VCPU. IOW this is implicitly
being treated as
<cputune>
<vcpupin cpuset='0-1' vcpu='0'/>
<vcpupin cpuset='0-1' vcpu='1'/>
<vcpupin cpuset='0-1' vcpu='2'/>
<vcpupin cpuset='0-1' vcpu='3'/>
</cputune>
Even more fun, the faked cputune elements are hidden from view when
querying the live XML, because their cpuset mask is the same as the
VM default cpumask.
The upshot was that it was impossible to set VM level CPU affinity.
To fix this we must update qemuProcessSetVcpuAffinities so that it
only reports a fatal error if the per-VCPU cpu mask is different
from the VM level cpu mask.
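A rough sketch of the relaxed check (variable names are illustrative;
def->cpumask is assumed to hold the VM level mask):
  /* without per-vCPU affinity support, only a per-vCPU mask that differs
   * from the VM level mask is a hard error */
  if (!virBitmapEqual(def->cpumask, vcpupin->cpumask)) {
      virReportError(VIR_ERR_OPERATION_INVALID, "%s",
                     _("cpu affinity is not supported"));
      return -1;
  }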
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
In the event we're falling into the code that tries to create the file
in a forked environment (VIR_FILE_OPEN_FORK) we pass different mode bits,
but those are never set because virFileOpenForceOwnerMode checks whether
the OPEN_FORCE_MODE bit is set before attempting to change the mode.
Since this is a special case it seems reasonable to set u+rw,g+rw,o
https://bugzilla.redhat.com/show_bug.cgi?id=1191355
When we attempt to migrate a vm with a migrateuri that has no scheme:
# virsh migrate test4 --live qemu+ssh://lhuang/system --migrateuri 127.0.0.1
target libvirtd will crash because uri->scheme is NULL in
qemuMigrationPrepareDirect on this line:
if (STRNEQ(uri->scheme, "tcp") &&
Add a value check before this line. Also fix a bug like this in
doNativeMigrate, that could only happen when destination libvirtd
returned an incorrect URI.
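A minimal sketch of the added guard, assuming the uri variable in
qemuMigrationPrepareDirect:
  if (!uri->scheme) {
      virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                     _("missing scheme in migration URI"));
      goto cleanup;
  }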
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Export the required helpers and add backend code to hotplug RNG devices.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
As the RNG device uses an -object as its backend, refactor the code to
use the JSON to commandline generator so that we can reuse the code
later in hotplug.
Move the alias name right after the object type for rng-egd backend so
that we can later use the JSON to commandline generator to create the
command line.
Libvirt didn't prefix the random number generator backend object alias
with any string thus the device alias and object alias were identical.
To avoid possible problems, rename the alias for the backend object and
tweak tests to comply with the change.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Rename qemuBuildRNGDeviceArgs to qemuBuildRNGDevStr and change the
return type so that it can be reused in the device hotplug code later.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
This function is used to assign an alias for a RNG device. It will be
later reused when hotplugging RNGs.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
It is only usable for NETWORK and BRIDGE type interfaces.
Error out when trying to start a domain where the custom
tap device path is specified for interfaces of other types,
or when the daemon is not privileged.
Note that this cannot be checked at definition time, because
the comparison is against actual type.
https://bugzilla.redhat.com/show_bug.cgi?id=1147195
It is often helpful to know which version of libvirt and QEMU
was present when a guest was first launched. Ensure this info
is written into the QEMU log file for each guest.
Add the missing jump to the error label when the uuid in the
migration cookie XML does not match the uuid of the migrated
domain.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Some code paths have special logic depending on the page size
reported by sysconf, which in turn affects the test results.
We must mock this so tests always have a consistent page size.
The change done by commit f309db1f4d wrongly
assumed that qemu can start with a combination of NUMA nodes specified
with the "memdev" option and the appropriate backends, and the legacy
way of specifying only "mem" as a size argument. QEMU rejects such a
command line though:
$ /usr/bin/qemu-system-x86_64 -S -M pc -m 1024 -smp 2 \
-numa node,nodeid=0,cpus=0,mem=256 \
-object memory-backend-ram,id=ram-node1,size=12345 \
-numa node,nodeid=1,cpus=1,memdev=ram-node1
qemu-system-x86_64: -numa node,nodeid=1,cpus=1,memdev=ram-node1: qemu: memdev option must be specified for either all or no nodes
To fix this issue we need to check if any of the nodes requires the new
definition with the backend and if so, then all other nodes have to use
it too.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1182467
With the new JSON to argv formatter we are now able to represent the
memory backend definitions in the JSON object format that is reusable
for monitor use (hotplug) and then convert it into the shell string.
This will avoid having two separate instances of the same code that
would create the different formats.
Previous refactors now allow to make this step without changes to the
test suite.
QEMU's command line visitor as well as the JSON interface take bytes by
default for memory object sizes. Convert mebibytes to bytes so that we
can later refactor the existing code for hotplug purposes.
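The conversion itself is a one-liner, sketched here assuming the size
arrives in MiB:
  unsigned long long size_bytes = size_mib * 1024ULL * 1024ULL;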
QEMU's qapi visitor code allows yes/on/y for true and no/off/n for false
values of boolean properties. Unify the used style so that we can
generate it later and fix test cases.
Extract the memory backend device code into a separate function so that
it can be later easily refactored and reused.
Few small changes for future reusability, namely:
- new (currently unused) parameter for user specified page size
- size of the memory is specified in kibibytes, divided up in the
function
- new (currently unused) parameter for user specified source nodeset
- option to enforce capability check
Unlike -device, qemu uses a JSON object to add backend "objects" via the
monitor rather than the string that would be passed on the commandline.
To be able to reuse code parts that configure backends for various
devices, this patch adds a helper that will allow generating the command
line representations from the JSON property object.
This patch enables synchronization of the host macvtap
device options with the guest device's in response to the
NIC_RX_FILTER_CHANGED event.
The following device options will be synchronized:
* PROMISC
* MULTICAST
* ALLMULTI
Signed-off-by: Tony Krowiak <akrowiak@linux.vnet.ibm.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1158034
If we're expecting to create a file somewhere and that fails for some
reason during qemuOpenFileAs, then we unlink the path we're attempting
to create leaving no way to determine what the "existing" privileges,
protections, or labels are that caused the failure (open, change owner
and group, change mode, etc.).
Furthermore, if we fall into the path where we'll be opening / creating
the file using VIR_FILE_OPEN_FORK, we need to first unlink/delete the file
we created in the first path; otherwise, the attempt by the child process
to open as some specific user:group may fail because the file was already
created using nfsnobody:nfsnobody. Again, if we didn't create the file we
don't want to blindly delete what already exists. Thus there is a second
reason for the original check to set need_unlink to false when we find
the file already existing even though O_CREAT was requested.
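A minimal sketch of the intended behaviour (not the real qemuOpenFileAs,
which also handles ownership changes and the fork path):

#include <fcntl.h>
#include <stdbool.h>
#include <sys/stat.h>
#include <unistd.h>

static int
open_for_create(const char *path, int oflags, mode_t mode, bool *need_unlink)
{
    bool existed = (access(path, F_OK) == 0);
    int fd = open(path, oflags | O_CREAT, mode);

    /* Only mark the path for unlink-on-failure if we actually created it;
     * removing a pre-existing file would both destroy the evidence needed
     * to diagnose the failure and delete data we do not own. */
    *need_unlink = (fd >= 0 && !existed);
    return fd;
}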
Signed-off-by: John Ferlan <jferlan@redhat.com>
Commit id '540c339a' to fix issues with reference counting and transient
domains moved the qemuDomainObjEndAsyncJob call prior to the attempt to
restart the guest CPUs, resulting in an error:
error: Failed to save domain rhel70 to /tmp/pl/rhel70.save
error: internal error: unexpected async job 3
when (ret != 0) - e.g., the error path from qemuDomainSaveMemory.
This patch will adjust the logic to call the EndAsyncJob only after
we've tried to restart the guest CPUs. It also needs to adjust the
test for qemuDomainRemoveInactive to add the ret == 0 condition.
Additionally, if we get to endjob: because of some error earlier, then
we need to save that error in the event the CPU restart logic fails.
We don't want to return the error from CPU restart failure, rather we
want to return the error from the failed save that caused us to fall
into the retry to start the CPU logic.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Well, even though users can pass the list of UEFI:NVRAM pairs at
configure time, we may maintain the list of widely available UEFI
firmware ourselves too. And as arm64 begins to rise, OVMF has been ported
there too, with a slight name change - it's called AAVMF, with AAVMF_CODE.fd
being the UEFI firmware and AAVMF_VARS.fd being the NVRAM store file.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Up until now there are just two ways to specify UEFI paths to
libvirt. The first one is editing qemu.conf, the other is editing
qemu_conf.c and recompiling, which is not that convenient. So a new
configure option is introduced: --with-loader-nvram, which takes a
list of pairs of UEFI firmware and NVRAM store. This way, the
compiled-in defaults can be set at compile time without the
need to change the code itself.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1183890
When we try to update the XML stored in a save image file, we clear the
graphics password settings, because we do not pass VIR_DOMAIN_XML_SECURE
to qemuDomainDefCopy and therefore qemuDomainDefFormatBuf won't format the
password. Add the VIR_DOMAIN_XML_SECURE flag when we call qemuDomainDefCopy
in qemuDomainSaveImageUpdateDef.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1161024
This way the device is in vmdef only if ret == 0 and the caller
(qemuDomainAttachDeviceFlags) does not free it.
Otherwise it might get double freed by qemuProcessStop
and qemuDomainAttachDeviceFlags if the domain crashed
in monitor after we've added it to vm->def.
Do the allocation first, then add the actual device.
The second part should never fail. This is good
for live hotplug where we don't want to remove the device
on OOM after the monitor command succeeded.
The only change in behavior is that on failure, the
vmdef->consoles array is freed, not just the first console.
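The pattern is the usual "reserve first, commit later" split; a minimal
generic sketch (not the actual hotplug code) is below:

#include <stdbool.h>
#include <stdlib.h>

typedef struct { void **items; size_t count; } dev_list;

static bool
dev_list_reserve(dev_list *list, size_t extra)
{
    /* the only step that can fail happens before the monitor call */
    void **tmp = realloc(list->items,
                         (list->count + extra) * sizeof(*tmp));
    if (!tmp)
        return false;
    list->items = tmp;
    return true;
}

static void
dev_list_append(dev_list *list, void *dev)
{
    /* space was reserved up front, so appending cannot fail */
    list->items[list->count++] = dev;
}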
For stateless, client side drivers, it is never correct to
probe for secondary drivers. It is only ever appropriate to
use the secondary driver that is associated with the
hypervisor in question. As a result the ESX & HyperV drivers
have both been forced to do hacks where they register no-op
drivers for the ones they don't implement.
For stateful, server side drivers, we always just want to
use the same built-in shared driver. The exception is
virtualbox which is really a stateless driver and so wants
to use its own server side secondary drivers. To deal with
this virtualbox has to be built as 3 separate loadable
modules to allow registration to work in the right order.
This can all be simplified by introducing a new struct
recording the precise set of secondary drivers each
hypervisor driver wants:
struct _virConnectDriver {
virHypervisorDriverPtr hypervisorDriver;
virInterfaceDriverPtr interfaceDriver;
virNetworkDriverPtr networkDriver;
virNodeDeviceDriverPtr nodeDeviceDriver;
virNWFilterDriverPtr nwfilterDriver;
virSecretDriverPtr secretDriver;
virStorageDriverPtr storageDriver;
};
Instead of registering the hypervisor driver, we now
register a virConnectDriver. This allows
us to remove all probing of secondary drivers. Once we
have chosen the primary driver, we immediately know the
correct secondary drivers to use.
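A hedged sketch of what a driver's registration might then look like; the
"foo" driver names below are hypothetical and the exact signature of the
registration call is assumed, not quoted from the patch:

static virConnectDriver fooConnectDriver = {
    .hypervisorDriver = &fooHypervisorDriver,
    .networkDriver = &fooNetworkDriver,
    .storageDriver = &fooStorageDriver,
    /* secondary drivers the hypervisor does not provide stay NULL */
};

if (virRegisterConnectDriver(&fooConnectDriver) < 0)
    return -1;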
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The code modifies the domain configuration but doesn't take a MODIFY
type job to do so.
This patch also fixes a few very long lines of code around the touched
parts.
For distros that want to add versioned machine types, they will add
(downstream) machine types like "virt-foo-1.2.3". Detect these as
MMIO too.
Signed-off-by: Richard W.M. Jones <rjones@redhat.com>
Previous patch of this series fixed the issue with adding a new PCI bridge
when all the slots were reserved by devices with user specified addresses.
In case there are still some PCI devices waiting to get a slot reserved
by qemuAssignDevicePCISlots, this means a new bus needs to be
created along with a corresponding bridge controller. By adding an
additional check, this scenario now results in a reasonable error
instead of generating a wrong qemu command line.
Commit 93c8ca tried to fix the issue with auto-adding of a PCI bridge
controller, but didn't work properly in all scenarios.
This patch provides a better fix of the issue when all slots on a PCI bus
are reserved by devices with user specified addresses and no additional
bridges need to be created.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1132900
In order to be able to test for fully reserved PCI buses, assignment of
PCI slots for integrated devices needs to be moved to a separate function.
This also might be a good preparation if we decide to add support for
other chipsets as well.
Move qemuDomainAssignPCIAddresses after the definition
of the static function qemuDomainValidateDevicePCISlotsQ35.
This lets us define a new static function using
qemuDomainValidateDevicePCISlots* and use it in
qemuDomainAssignPCIAddresses without a forward declaration.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
As it turned out, the dead code fix 419a22 changed the affected condition
from "never true" to "always true", so a better fix is to change the
return code of virDomainMaybeAddController from 0 to 1 when
a new bridge has been added, thus distinguishing the case when we didn't
need to add any controller from the case when we successfully added one.
The return code is changed in the next commit.
The ACL check didn't check the VIR_DOMAIN_XML_SECURE flag and the
appropriate permission for it. Found via code inspection while fixing
permissions for save images.
https://bugzilla.redhat.com/show_bug.cgi?id=1164627
When using 'virsh attach-device' to hotplug an unsupported console type
into a qemu guest the attachment would succeed as the command line
formatter didn't report an error in that case.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1130390
The listen address is not mandatory for <interface type='server'>
but when it's not specified, we've been formatting it as:
-netdev socket,listen=(null):5558,id=hostnet0
which failed with:
Device 'socket' could not be initialized
Omit the address completely and only format the port in the listen
attribute.
Also fix the schema to allow specifying a model.
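A minimal sketch of the formatting change (illustrative helper, not the
actual code), so that a missing address yields e.g. listen=:5558 rather
than listen=(null):5558:

#include <stdio.h>

static void
format_listen(char *buf, size_t buflen, const char *addr, int port)
{
    if (addr)
        snprintf(buf, buflen, "listen=%s:%d", addr, port);
    else
        snprintf(buf, buflen, "listen=:%d", port);  /* omit the address */
}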
Using the same driver multiple times is pointless and
it can result in confusing errors:
$ virsh start test
error: Failed to start domain test
error: internal error: security label already defined for VM
https://bugzilla.redhat.com/show_bug.cgi?id=1153891
Depending on the context, either error out if the domain
has disappeared in the meantime, or just ignore the value
to allow marking the function as ATTRIBUTE_RETURN_CHECK.
https://bugzilla.redhat.com/show_bug.cgi?id=1161024
If the domain crashed while we were in monitor,
we cannot rely on the REALLOC done on live definition,
since vm->def now points to the persistent definition.
Skip adding the attached devices to domain definition
if the domain crashed.
In AttachChrDevice, the chardev was already added to the
live definition and freed by qemuProcessStop in the case
of a crash. Skip the device removal in that case.
Also skip audit if the domain crashed in the meantime.
https://bugzilla.redhat.com/show_bug.cgi?id=1161024
In the device type-specific functions, exit early
if the domain has disappeared, because the cleanup
should have been done by qemuProcessStop.
Check the return value in processDeviceDeletedEvent
and qemuProcessUpdateDevices.
Skip audit and removing the device from live def because
it has already been cleaned up.
Ploop is a pseudo device which makes it possible to access
an image in a file as a block device. It is like loop devices,
but with additional features, such as snapshots, a write tracker
and no double caching.
It is used in PCS for containers and in OpenVZ. You can manage
ploop devices and images with the ploop utility
(http://git.openvz.org/?p=ploop).
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
We tested for a positive return value from virDomainMaybeAddController,
but it returns only 0 or -1, resulting in dead code.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
In case we find out that there are more PCI devices to be connected
than there are available slots on the default PCI bus, we automatically add a
new bus and a related PCI bridge controller as well. As there are no free slots
left on the default PCI bus, the PCI bridge controller gets a free slot on the
newly created PCI bus, which causes qemu to refuse to start the guest.
This fix introduces a new function qemuDomainPCIBusFullyReserved which
is checked right before we possibly try to reserve a slot for PCI bridge
controller.
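The new check amounts to "is every slot on this bus already taken"; an
illustrative stand-in (the real code also accounts for slots reserved by
the chipset) could be:

#include <stdbool.h>
#include <stddef.h>

#define PCI_SLOTS 32

typedef struct { bool slot_in_use[PCI_SLOTS]; } pci_bus;

static bool
pci_bus_fully_reserved(const pci_bus *bus)
{
    /* slot 0 is assumed taken by the chipset, so only 1..31 can hold devices */
    for (size_t slot = 1; slot < PCI_SLOTS; slot++)
        if (!bus->slot_in_use[slot])
            return false;
    return true;
}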
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1132900
The virDomainDefineXMLFlags and virDomainCreateXML APIs both
gain new flags allowing them to be told to validate XML.
This updates all the drivers to turn on validation in the
XML parser when the flags are set.
systemd-machined introduced a new method CreateMachineWithNetwork
that obsoletes CreateMachine. It expects to be given a list of
VETH/TAP device indexes for the host side device(s) associated
with a container/machine.
This falls back to the old CreateMachine method when the new
one is not supported.
https://bugzilla.redhat.com/show_bug.cgi?id=1181182
When we hit an error in qemuMigrationPrepareAny and go to
cleanup with rc < 0, we forget to clear priv->origname. This
makes the next migration of this vm fail, because the stale
origname left in priv generates a wrong cookie for the next
migration attempt.
This patch makes sure priv->origname is NULL when migration fails
on the target host.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Make local copy of the disk alias in qemuProcessInitPasswords,
instead of referencing the one in domain definition, which
might get freed if the domain crashes while we're in monitor.
Also copy the memballoon period value.
Make a local copy of the disk alias instead of pointing
to the domain definition, which might get freed if
the domain dies while we're in monitor.
Also exit early if that happens.
Exit the monitor right after we're done with it to get
the virDomainObjPtr lock back, otherwise we might be accessing
vm->def while it's being cleaned up by qemuProcessStop.
If the domain crashed while we were in the monitor, exit
early instead of changing vm->def which is now the persistent
definition.
The domain might disappear during the time in monitor when
the virDomainObjPtr is unlocked, so the caller needs to check
if it's still alive.
Since most of the callers are going to need it, put the
check inside qemuDomainObjExitMonitor and return -1 if
the domain died in the meantime.
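With the check folded in, a typical caller can then collapse to something
like the following (a sketch of the usage pattern, not a quote from the
patch; the monitor call in the middle is a placeholder):

    qemuDomainObjEnterMonitor(driver, vm);
    ret = qemuMonitorDoSomething(priv->mon, ...);
    if (qemuDomainObjExitMonitor(driver, vm) < 0)
        goto cleanup;   /* the domain died while we were in the monitor */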
QEMU internally updates the size of video memory if the domain XML
provided too low a memory size or there are some dependencies for a QXL
device's 'vgamem' and 'ram' size. We need to know about the changes and
store them into the status XML to not break migration or managedsave
through different libvirt versions.
The values would be loaded only if the "vgamem_mb" property exists for
the device. The presence of "vgamem_mb" also tells us that
"ram_size" and "vram_size" exist for QXL devices.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
The search is done recursively, but only through QOM objects whose type is
prefixed with "child<", as this indicates that the QOM object is a parent of
other QOM objects.
The usage is that you give a known device name together with the starting
path where to search.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Commit e3435caf fixed hot-plugging of vcpus with strict memory pinning
on NUMA hosts, but unfortunately it also broke updating the number of vcpus
for offline guests using our API.
The issue is that we try to create a cpu cgroup for a non-running guest,
which fails as there are no cgroups for that domain. We should create
cgroups and update cpuset.mems only if we are hot-plugging.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1165993
So, there are still plenty of vNIC types that we don't know how to set
bandwidth on. Let's warn explicitly in case the user has requested it
instead of pretending everything was set.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When creating an inactive external snapshot, virDomainSaveConfig needs to be
called after updating the disk definitions; otherwise, after restarting
libvirtd, the new snapshot file definitions will be lost from the XML.
Steps to reproduce:
1. prepare a shut off guest
$ virsh domstate rhel7 && virsh domblklist rhel7
shut off
Target Source
------------------------------------------------
vda /var/lib/libvirt/images/rhel7.img
2. create external disk snapshot
$ virsh snapshot-create rhel7 --disk-only && virsh domblklist rhel7
Domain snapshot 1417882967 created
Target Source
------------------------------------------------
vda /var/lib/libvirt/images/rhel7.1417882967
3. restart libvirtd then check guest source file
$ service libvirtd restart && virsh domblklist rhel7
Redirecting to /bin/systemctl restart libvirtd.service
Target Source
------------------------------------------------
vda /var/lib/libvirt/images/rhel7.img
This was first reported by Eric Blake
http://www.redhat.com/archives/libvir-list/2014-December/msg00369.html
Signed-off-by: Shanzhi Yu <shyu@redhat.com>
The virDomainDefParse* and virDomainDefFormat* methods both
accept the VIR_DOMAIN_XML_* flags defined in the public API,
along with a set of other VIR_DOMAIN_XML_INTERNAL_* flags
defined in domain_conf.c.
This is seriously confusing & error prone for a number of
reasons:
- VIR_DOMAIN_XML_SECURE, VIR_DOMAIN_XML_MIGRATABLE and
VIR_DOMAIN_XML_UPDATE_CPU are only relevant for the
formatting operation
- Some of the VIR_DOMAIN_XML_INTERNAL_* flags only apply
to parse or to format, but not both.
This patch cleanly separates out the flags. There are two
distinct VIR_DOMAIN_DEF_PARSE_* and VIR_DOMAIN_DEF_FORMAT_*
flags that are used by the corresponding methods. The
VIR_DOMAIN_XML_* flags received via public API calls must
be converted to the VIR_DOMAIN_DEF_FORMAT_* flags where
needed.
The various calls to virDomainDefParse which hardcoded the
use of the VIR_DOMAIN_XML_INACTIVE flag change to use the
VIR_DOMAIN_DEF_PARSE_INACTIVE flag.
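Where the public flags need to cross into the formatter, the conversion is
a straightforward mapping; a hedged sketch (the VIR_DOMAIN_DEF_FORMAT_*
constant names are assumed to mirror the public ones):

static unsigned int
domain_xml_flags_to_format_flags(unsigned int xmlflags)
{
    unsigned int fmt = 0;

    if (xmlflags & VIR_DOMAIN_XML_SECURE)
        fmt |= VIR_DOMAIN_DEF_FORMAT_SECURE;
    if (xmlflags & VIR_DOMAIN_XML_MIGRATABLE)
        fmt |= VIR_DOMAIN_DEF_FORMAT_MIGRATABLE;
    if (xmlflags & VIR_DOMAIN_XML_UPDATE_CPU)
        fmt |= VIR_DOMAIN_DEF_FORMAT_UPDATE_CPU;

    return fmt;
}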
https://bugzilla.redhat.com/show_bug.cgi?id=1135339 documents some
confusing behavior when a user tries to start an inactive block
commit in a second connection while there is already an on-going
active commit from a first connection. Eventually, qemu will
support multiple simultaneous block jobs, but as of now, it does
not; furthermore, libvirt also needs an overhaul before we can
support simultaneous jobs. So, the best way to avoid confusing
ourselves is to quit relying on qemu to tell us about the situation
(where we risk getting in weird states) and instead forbid a
duplicate block commit ourselves.
Note that we are still relying on qemu to diagnose attempts to
interrupt an inactive commit (since we only track XML of an active
commit), but as inactive commit is less confusing for libvirt to
manage, there is less that can go wrong by leaving that detection
up to qemu.
* src/qemu/qemu_driver.c (qemuDomainBlockCommit): Hoist check for
active commit to occur earlier outside of conditions.
Signed-off-by: Eric Blake <eblake@redhat.com>
QEMU supports feature specification with -cpu host and we just skip
using that. Since QEMU developers themselves would like to use this
feature, this patch modifies the code to make it work.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1178850
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
The default value should be 16 MiB instead of 8 MiB. Only really old
versions of upstream QEMU used 8 MiB as the default vga framebuffer size.
Without this change, if you update to a libvirt that introduces the
"vgamem" attribute for the QXL video device, the value will be set to 8 MiB,
even though your guest previously had 16 MiB because we didn't pass any value
on the QEMU command line, which means QEMU used its own 16 MiB default.
This affects all users with a guest display resolution higher than
1920x1080.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
In one of my previous commits (311b4a67) I tried to allow
passing regular system pages to <hugepages>. However, there was a
little bug that wasn't caught. If a domain has a guest NUMA topology
defined, the qemuBuildNumaArgStr() function takes care of generating the
corresponding command line. The hugepages backing for guest NUMA
nodes is handled there too. And here comes the bug: the hugepages
setting from the XML is stored in KiB internally, while the system
page size was queried and stored in bytes. So the check whether
these two are equal was failing even when it shouldn't.
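The fix is just a matter of comparing like with like; a minimal standalone
sketch of the unit handling:

#include <stdbool.h>
#include <unistd.h>

static bool
matches_system_page_size(unsigned long long requested_kib)
{
    /* sysconf() reports the page size in bytes, the XML value is KiB */
    long sys_bytes = sysconf(_SC_PAGESIZE);
    if (sys_bytes <= 0)
        return false;
    return requested_kib == (unsigned long long)sys_bytes / 1024;
}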
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
In commit 540c339a25 the whole domain
reference counting was refactored in the qemu driver. Domain jobs now
don't need to reference the domain object as they now expect the
reference from the calling function.
However, the patch forgot to remove the unref call in case we exit the
monitor when we were acquiring a nested job. This caused the daemon to
crash on a subsequent access to the domain object once we've done an
operation requiring a nested job for a monitor access.
An easy reproducer case:
1) Start a vm with qcow disks
2) virsh snapshot-create-as DOMNAME
3) virsh dumpxml DOMNAME
4) daemon crashes in a semi-random spot while accessing a now-removed VM
object.
Fortunately, the commit wasn't released yet, so there are no security
implications.
Reported-by: Shanzi Yu <shyu@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1177723
When setting new bandwidth limits via
virDomainSetInterfaceParameters, the old ones are cleared first.
However, if setting the new ones fails, the old ones are already gone
and the interface is left in an inconsistent state. Therefore, right
before failing we ought to try to restore the old bandwidth.
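The shape of the fix is the classic save/try/restore pattern; a rough
standalone sketch with a stand-in setter (not the real libvirt bandwidth
plumbing):

#include <stdbool.h>

typedef struct { unsigned long long average, peak, burst; } bandwidth;

/* stand-in for the real code that programs the interface limits */
static bool apply_bandwidth(const char *ifname, const bandwidth *b);

static bool
set_bandwidth(const char *ifname, bandwidth *current, const bandwidth *wanted)
{
    bandwidth old = *current;          /* remember the old limits first */

    if (apply_bandwidth(ifname, wanted)) {
        *current = *wanted;
        return true;
    }

    /* Setting the new limits failed: put the old ones back so the
     * interface is not left with no limits at all. */
    apply_bandwidth(ifname, &old);
    return false;
}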
Signed-off-by: Luyao Huang <lhuang@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1178652
We get a warning when we have a guest in paused
status (caused by a kernel panic) and restart libvirtd;
the warning message looks like this:
Qemu reported unknown VM status: 'guest-panicked'
This is because we set a wrong status name in
qemu_monitor.c; from the qemu qapi-schema.json file
we know this status should be named 'guest-panicked'.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Add the possibility to have more than one IP address configured for a
domain network interface. IP addresses can also have a prefix to define
the corresponding netmask.
There is one problem that causes various errors in the daemon. When
domain is waiting for a job, it is unlocked while waiting on the
condition. However, if that domain is for example transient and being
removed in another API (e.g. cancelling incoming migration), it gets
unref'd. If the first call, that was waiting, fails to get the job, it
unref's the domain object, and because it was the last reference, it
causes clearing of the whole domain object. However, when finishing the
call, the domain must be unlocked, but there is no way for the API to
know whether it was cleaned or not (unless there is some ugly temporary
variable, but let's scratch that).
The root cause is that our APIs don't ref the objects they are using and
all use the implicit reference that the object has when it is in the
domain list. That reference can be removed when the API is waiting for
a job. And because each API doesn't do its own ref'ing, it results in
the ugly checking of the return value of virObjectUnref() that we have
everywhere.
This patch changes qemuDomObjFromDomain() to ref the domain (using
virDomainObjListFindByUUIDRef()) and adds qemuDomObjEndAPI() which
should be the only function in which the return value of
virObjectUnref() is checked. This makes all reference counting
deterministic and makes the code a bit clearer.
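In practice this means every API entry point follows the same shape (a
usage sketch, with the actual work elided):

    virDomainObjPtr vm = NULL;
    int ret = -1;

    if (!(vm = qemuDomObjFromDomain(dom)))   /* looks up and takes a reference */
        return -1;

    /* ... job handling, monitor calls, etc. ... */
    ret = 0;

    qemuDomObjEndAPI(&vm);   /* releases the reference; per the commit, the
                              * only place the virObjectUnref() result is
                              * checked */
    return ret;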
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Although QMP returns info about vCPU threads in TCG mode, the
data it returns is mostly lies. Only the first vCPU has a valid
thread_id returned. The thread_id given for the other vCPUs is
in fact the main emulator thread. All vCPUs actually run under
the same thread in TCG mode.
Our vCPU pinning code is not at all able to cope with this
so if you try to set CPU affinity per-vCPU you end up with
weird errors:
error: Failed to start domain instance-00000007
error: cannot set CPU affinity on process 24365: Invalid argument
Since few people will care about the performance of TCG with
strict CPU pinning, let's just disable that for now, so we get
a clear error message:
error: Failed to start domain instance-00000007
error: Requested operation is not valid: cpu affinity is not supported
The code assumes that def->vcpus == nvcpupids, so when we set up
fake CPU pids for old QEMU with nvcpupids == 1, we cause the
later code to read off the end of the array. This has fun results
like sched_setaffinity(0, ...) which changes libvirtd's own CPU
affinity, or even better sched_setaffinity($RANDOM, ...) which
changes the affinity of a random OS process.