Support for drive-reopen was never present in the upstream code so we
don't need to pause the VM when doing the block pivot. Kill all the
code related to this semi-upstream artifact.
Currently we check qemuCaps before starting the block job. But qemuCaps
isn't available on a stopped domain, which means we get a misleading
error message in this case:
# virsh domstate example
shut off
# virsh blockjob example vda
error: unsupported configuration: block jobs not supported with this QEMU binary
Move the qemuCaps check into the block job so that we are guaranteed the
domain is running.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
https://bugzilla.redhat.com/show_bug.cgi?id=1206479
As described in virDomainBlockCopy() parameters description, the
VIR_DOMAIN_BLOCK_COPY_GRANULARITY parameter may require the value to
have some specific attributes (e.g. be a power of two or fall within a
certain range). In qemu's case, a power of two is required. However, our
code does not check that and lets the qemu operation fail. Moreover, the
virsh man page is not as exact as it could be in this respect.
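A sanity check of roughly this shape (a sketch, not necessarily the exact
code that ends up in the driver) rejects bad granularity values before the
command ever reaches qemu:

    #include <stdbool.h>
    #include <stdint.h>

    /* qemu's drive-mirror requires the granularity to be a power of two */
    static bool
    granularityIsPowerOfTwo(uint64_t granularity)
    {
        return granularity != 0 && (granularity & (granularity - 1)) == 0;
    }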
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
virDomainHasDiskMirror() currently detects only jobs that add the mirror
elements. Since some operations, such as migration, are interlocked by
existing block jobs on the given domain, the check needs to be extended
to cover regular block jobs too.
This patch renames virDomainHasDiskMirror to virDomainHasDiskBlockjob
and adds an argument that allows selecting whether it returns true only
for block copy jobs, as those interlock making the domain persistent.
The other two uses trigger on any block job type.
Signed-off-by: Shanzhi Yu <shyu@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
If any disk of a VM was involved in a (copy) block job we refused to do
a snapshot. Since block job types other than copy also interlock
snapshots, and the interlocking applies to individual disks only, we can
perform the check per disk and interlock all block job types supported
by libvirt.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1203628
In the order of appearance:
* MAX_LISTEN - never used
added by 23ad665c (qemud) and addec57 (lock daemon)
* NEXT_FREE_CLASS_ID - never used, added by 07d1b6b
* virLockError - never used, added by eb8268a4
* OPENVZ_MAX_ARG, CMDBUF_LEN, CMDOP_LEN
unused since the removal of ADD_ARG_LIT in d8b31306
* QEMU_NB_PER_CPU_STAT_PARAM - unused since 897808e
* QEMU_CMD_PROMPT, QEMU_PASSWD_PROMPT - unused since 1dc10a7
* TEST_MODEL_WORDSIZE - unused since c25c18f7
* TEMPDIR - never used, added by 714bef5
* NSIG - workaround around old headers
added by commit 60ed1d2
unused since virExec was moved by commit 02e8691
* DO_TEST_PARSE - never used, added by 9afa006
* DIFF_MSEC, GETTIMEOFDAY - unused since eee6eb6
When the synchronous pivot option is selected, libvirt would not update
the backing chain until the job had exited. Some applications then
received invalid data as their job serialized first.
This patch removes polling to wait for the ABORT/PIVOT job completion
and replaces it with a condition. If a synchronous operation is
requested the update of the XML is executed in the job of the caller of
the synchronous request. Otherwise the monitor event callback uses a
separate worker to update the backing chain with a new job.
This is a regression since 1a92c71910
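The shape of the mechanism, sketched with plain pthreads rather than
libvirt's internal wrappers (all names here are illustrative): the event
callback signals a condition once the job completes, and the synchronous
caller waits on it instead of polling the monitor.

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t vmLock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t blockJobSyncCond = PTHREAD_COND_INITIALIZER;
    static bool blockJobCompleted;

    /* called from the monitor event callback */
    static void
    blockJobNotify(void)
    {
        pthread_mutex_lock(&vmLock);
        blockJobCompleted = true;
        pthread_cond_signal(&blockJobSyncCond);
        pthread_mutex_unlock(&vmLock);
    }

    /* called by the synchronous ABORT/PIVOT path instead of polling */
    static void
    blockJobWait(void)
    {
        pthread_mutex_lock(&vmLock);
        while (!blockJobCompleted)
            pthread_cond_wait(&blockJobSyncCond, &vmLock);
        pthread_mutex_unlock(&vmLock);
    }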
When the ABORT job is finished synchronously you get the following call
stack:
#0 qemuBlockJobEventProcess
#1 qemuDomainBlockJobImpl
#2 qemuDomainBlockJobAbort
#3 virDomainBlockJobAbort
While previously or while using the _ASYNC flag you'd get:
#0 qemuBlockJobEventProcess
#1 processBlockJobEvent
#2 qemuProcessEventHandler
#3 virThreadPoolWorker
Later on I'll be adding a condition that will allow synchronising a
SYNC block job abort. The approach will require this code to be called
from two different places so it has to be extracted into a helper.
Commit 1a92c719 moved code to handle block job events to a different
function that is executed in a separate thread. The caller of
processBlockJob handles locking and unlocking of @vm, so we should
not do it in the function itself.
The block copy API takes the speed in bytes/s rather than the MiB/s that
the prior virDomainBlockRebase API used. We correctly converted the
speed to bytes/s in the old API but we still called the common helper
virDomainBlockCopyCommon with the unadjusted variable.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1207122
When getting info on NUMA parameters for domain,
virCgroupGetCpusetMems() may be called. However, as of 43b67f2e
the call is guarded by check if memory controller is present.
Even though it may not be instantly obvious, NUMA parameters are
stored under the cpuset controller. Therefore the check needs to look
like this:
if (!virCgroupHasController(priv->cgroup,
                            VIR_CGROUP_CONTROLLER_CPUSET) ||
    virCgroupGetCpusetMems(priv->cgroup, &nodeset) < 0) {
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Blockcopy to a non-file destination is not supported according to the
code, but a 'goto endjob' is missing after checking the destination.
This leads to calling drive-mirror with wrong parameters.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1206406
Signed-off-by: Shanzhi Yu <shyu@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
We don't have to modify cpuset.mems on hosts without NUMA. It also
fixes an error message that you get instead of success if you try to
update vcpus of a guest on a host without NUMA.
error: internal error: NUMA isn't available on this host
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
We should call virDomainLiveConfigHelperMethod ASAP because this
function transfers VIR_DOMAIN_AFFECT_CURRENT to VIR_DOMAIN_AFFECT_LIVE
or VIR_DOMAIN_AFFECT_CONFIG. All other additional checks for those two
flags should consider that the user may give us VIR_DOMAIN_AFFECT_CURRENT.
Remove the unnecessary check whether the domain is live in case of
VIR_DOMAIN_VCPU_GUEST because this check is done by
virDomainLiveConfigHelperMethod.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
While debugging the support for responding to qemu RX_FILTER_CHANGED
events, I had changed the "ignoring this event" log message from
VIR_DEBUG to VIR_WARN, but forgot to change it back before
pushing. Since many guest OSes make enough changes to multicast lists
and/or promiscuous mode settings to trigger this message, it's
starting to show up as a red herring in bug reports.
Add a few helpers that allow operating on memory device definitions in
the domain config and use them to implement memory device coldplug in
the qemu driver.
This patch adds code that parses and formats configuration for memory
devices.
A simple configuration would be:
<memory model='dimm'>
  <target>
    <size unit='KiB'>524287</size>
    <node>0</node>
  </target>
</memory>
A complete configuration of a memory device:
<memory model='dimm'>
  <source>
    <pagesize unit='KiB'>4096</pagesize>
    <nodemask>1-3</nodemask>
  </source>
  <target>
    <size unit='KiB'>524287</size>
    <node>1</node>
  </target>
</memory>
This patch preemptively forbids use of the <memory> device in individual
drivers so the users are warned right away that the device is not
supported.
Wikipedia's list of common misspellings [1] has a machine-readable
version. This patch fixes those misspellings mentioned in the list
which don't have multiple right variants (e.g. "accension", which can
be either "accession" or "ascension"); such misspellings are left
untouched. The list of changes was manually re-checked for false
positives.
[1] https://en.wikipedia.org/wiki/Wikipedia:Lists_of_common_misspellings/For_machines
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Issue #1 - A call to virBitmapNew did not check if the allocation
failed which could lead to a NULL dereference
Issue #2 - When deleting the pin entries from the config file, the
code loops from the number of elements down to the "new" vcpu count;
however, the pin id values are numbered 0..n-1 not 1..n, so the "first"
pin attempt would never work. Luckily the check was for whether the
incoming 'n' (vcpu id) matched the entry in the array from 0..arraysize
rather than a dereference of the 'n' entry
The function needs a pointer to the network to get the list of DHCP
leases. The pointer is obtained via virNetworkLookupByName() which
requires callers to free the returned network once no longer needed.
Otherwise it's leaked.
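The required pattern with the public API looks roughly like this (error
handling trimmed); the point is that virNetworkFree() runs once the lookup
succeeded, on every path:

    #include <stdlib.h>
    #include <libvirt/libvirt.h>

    static int
    countDHCPLeases(virConnectPtr conn, const char *netname)
    {
        virNetworkPtr net = virNetworkLookupByName(conn, netname);
        virNetworkDHCPLeasePtr *leases = NULL;
        int nleases;

        if (!net)
            return -1;

        nleases = virNetworkGetDHCPLeases(net, NULL, &leases, 0);
        if (nleases > 0) {
            for (int i = 0; i < nleases; i++)
                virNetworkDHCPLeaseFree(leases[i]);
        }
        free(leases);

        virNetworkFree(net);   /* without this the network object leaks */
        return nleases;
    }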
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1199182 documents that
after a series of disk snapshots into existing destination images,
followed by active commits of the top image, it is possible for
qemu 2.2 and earlier to end up tracking a different name for the
image than what it would have had when opening the chain afresh.
That is, when starting with the chain 'a <- b <- c', the name
associated with 'b' is how it was spelled in the metadata of 'c',
but when starting with 'a', taking two snapshots into 'a <- b <- c',
then committing 'c' back into 'b', the name associated with 'b' is
now the name used when taking the first snapshot.
Sadly, older qemu doesn't know how to treat different spellings of
the same filename as identical files (it uses strcmp() instead of
checking for the same inode), which means libvirt's attempt to
commit an image using solely the names learned from qcow2 metadata
fails with a cryptic:
error: internal error: unable to execute QEMU command 'block-commit': Top image file /tmp/images/c/../b/b not found
even though the file exists. Trying to teach libvirt the rules on
which name qemu will expect is not worth the effort (besides, we'd
have to remember it across libvirtd restarts, and track whether a
file was opened via metadata or via snapshot creation for a given
qemu process); it is easier to just always directly ask qemu what
string it expects to see in the first place.
As a safety valve, we validate that any name returned by qemu
still maps to the same local file as we have tracked it, so that
a compromised qemu cannot accidentally cause us to act on an
incorrect file.
* src/qemu/qemu_monitor.h (qemuMonitorDiskNameLookup): New
prototype.
* src/qemu/qemu_monitor_json.h (qemuMonitorJSONDiskNameLookup):
Likewise.
* src/qemu/qemu_monitor.c (qemuMonitorDiskNameLookup): New function.
* src/qemu/qemu_monitor_json.c (qemuMonitorJSONDiskNameLookup)
(qemuMonitorJSONDiskNameLookupOne): Likewise.
* src/qemu/qemu_driver.c (qemuDomainBlockCommit)
(qemuDomainBlockJobImpl): Use it.
Signed-off-by: Eric Blake <eblake@redhat.com>
Only selected fields from the disk source were copied when cold-updating
the source of a CDROM drive. When such a drive was backed by a network
file this resulted in corruption of the definition:
<disk type='network' device='cdrom'>
<driver name='qemu' type='raw' cache='none'/>
<source protocol='gluster' name='gluster-vol1(null)'>
<host name='localhost'/>
</source>
<target dev='vdc' bus='virtio'/>
<readonly/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
</disk>
Update the whole source instead of cherry-picking elements.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1166024
By querying the qemu guest agent with the QMP command
"guest-network-get-interfaces" and converting the received JSON
output to structured objects.
Although "ifconfig" is deprecated, IP aliases created by "ifconfig"
are supported by this API. The legacy syntax of an IP alias is:
"<ifname>:<alias-name>". Since we want all aliases to be clubbed
under parent interface, simply stripping ":<alias-name>" suffices.
Note that IP aliases formed by "ip" aren't visible to "ifconfig",
and aliases created by "ip" do not have any specific name. But
we are lucky, as qemu guest agent detects aliases created by both.
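The stripping itself is trivial; a sketch (not the exact agent-parsing
code), assuming the interface name arrives as a writable string:

    #include <string.h>

    /* "eth0:virtual0" -> "eth0"; names without a ':' are left untouched */
    static void
    stripIPAliasSuffix(char *ifname)
    {
        char *sep = strchr(ifname, ':');

        if (sep)
            *sep = '\0';
    }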
src/qemu/qemu_agent.h:
* Define qemuAgentGetInterfaces
src/qemu/qemu_agent.c:
* Implement qemuAgentGetInterfaces
src/qemu/qemu_driver.c:
* New function qemuGetDHCPInterfaces
* New function qemuDomainInterfaceAddresses
src/remote_protocol-structs:
* Define new structs
tests/qemuagenttest.c:
* Add new test: testQemuAgentGetInterfaces
Test cases for IP aliases, 0 or multiple ipv4/ipv6 address(es)
Signed-off-by: Nehal J Wani <nehaljw.kkd1@gmail.com>
Patch 51f9f03a4c introduces a regression
where if a blockCommit operation fails the disk is still marked as being
part of a block job but can't be unmarked later.
As pointed out by jtomko in his review of the IOThreads pinning code:
http://www.redhat.com/archives/libvir-list/2015-March/msg00495.html
there are some comments sprinkled in indicating IOThreads were using
the same structure as the VcpuPin code...
This is the first patch of a few that will change the virDomainVcpuPin*
structures and code to just virDomainPin* - starting with the data
structure naming...
During his review of the iothreads pin setting code, Pavel noted that
there was a potential memory leak with respect to how newVcpuPin is
handled: the 'goto endjob' jumps in failure paths would not free the
memory. For reference, see:
http://www.redhat.com/archives/libvir-list/2015-March/msg00415.html
As there are two possible approaches to define a domain's memory size -
one used with legacy, non-NUMA VMs configured in the <memory> element
and a per-node based approach on NUMA machines - the user needs to make
sure that both are specified correctly in the NUMA case.
To avoid this burden on the user I'd like to replace the NUMA case with
automatic totaling of the memory size. To achieve this I need to replace
direct access to the virDomainMemtune's 'max_balloon' field with
two separate getters depending on the desired size.
The two sizes are needed as:
1) Startup memory size doesn't include memory modules in some
hypervisors.
2) After startup these count as the usable memory size.
Note that the comments for the functions are future aware and document
state that will be present after a few later patches.
Surprisingly we did not grab a VM job when a block job finished and we'd
happily rewrite the backing chain data. This made it possible to crash
libvirt when two backing chain updates were queued in quick succession,
among other badness.
To fix it, add yet another handler to the helper thread that handles
monitor events that require a job.
Now that qemuDomainBlocksStatsGather provides the functionality of both
qemuMonitorGetBlockStatsParamsNumber and qemuMonitorGetBlockStatsInfo we
can reuse it and kill a lot of code.
Additionally as a bonus qemuDomainBlockStatsFlags will now support
summary statistics so add a statement to the virsh man page about that.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1142636
In the LXC driver, if the disk path is not provided the API returns
total statistics for all disks of the domain. With the new text monitor
implementation this can now be done in the qemu driver too.
Add code that will total the stats for all disks if the path is not
provided.
Extract the code to look up the disk alias and return the block stats
struct so that it can be reused later in qemuDomainBlockStatsFlags.
The function uses qemuMonitorGetAllBlockStatsInfo instead of
qemuMonitorGetBlockStatsInfo.
By adding a call to virBitmapToData and a check of its return value in
the IOThreads code, my Coverity checker lets me know qemuDomainHelperGetVcpus
also needs to check the status...
Depending on the flags passed, either attempt to return the active/live
IOThread data for the domain or the config data.
The active/live path will call into the Monitor in order to get the
IOThread data and then correlate the thread_id's returned from the
monitor to the currently running system/threads in order to ascertain
the affinity for each iothread_id.
The config path will map each of the configured IOThreads and return
any configured iothreadspin data
Signed-off-by: John Ferlan <jferlan@redhat.com>
There was a mess in the way we store the unlimited value for memory
limits and how we handled values provided by the user. Internally there
were two possible ways to store the unlimited value: as 0 or as
VIR_DOMAIN_MEMORY_PARAM_UNLIMITED. Because we chose to store memory
limits as unsigned long long, we cannot use -1 to represent unlimited.
It's much easier for us to say that everything at or above
VIR_DOMAIN_MEMORY_PARAM_UNLIMITED means unlimited and leave 0 as a valid
value, despite the fact that it makes no sense to set a limit to 0.
Remove the now unnecessary function virCompareLimitUlong. The test update
is there to prevent 0 from being misused as unlimited in the future.
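With that convention the check collapses to a single comparison against
the public constant; a sketch (the helper name is illustrative):

    #include <stdbool.h>
    #include <libvirt/libvirt.h>

    /* values at or above VIR_DOMAIN_MEMORY_PARAM_UNLIMITED mean "no limit" */
    static bool
    memLimitIsSet(unsigned long long value)
    {
        return value < VIR_DOMAIN_MEMORY_PARAM_UNLIMITED;
    }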
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1146539
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
When the domain's source disk type is network and the source protocol is
rbd or sheepdog, the 'if().. break' ends the current case, which means
the check whether the driver type is raw or qcow2 is skipped. Libvirt
would thus allow creating an internal snapshot for a running domain with
a raw-format disk backed by rbd storage.
While both protocols support internal snapshots of the disk, qemu is not
able to use them here as it requires some place to store the memory
image. The check whether the disk is backed by a qcow2 image needs to be
executed always.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1179533
Signed-off-by: Shanzhi Yu <shyu@redhat.com>
Previously when a domain would get stuck in a domain job due to a
programming mistake we'd report the following control state:
$ virsh domcontrol domain
occupied (1424343406.150s)
The timestamp is invalid as the monitor was not entered for that domain.
We can use that to detect that the domain has an active job and report a
better error instead:
$ virsh domcontrol domain
error: internal (locking) error
We have two different places that need to be updated when touching the
code for allocating spice ports. Add a bool option to the
'qemuProcessSPICEAllocatePorts' function to switch between true and fake
allocation so we can use this function also in qemu_driver to generate
native domain definition.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
The problem here was that when opening a channel, we were checking
whether the given channel is an alias (which can't be NULL for a running
domain) or its name, which can be NULL (for example with spicevmc). In
the case of such a domain qemuDomainOpenChannel() made the daemon crash.
STREQ_NULLABLE() is safe to use since the code in question is wrapped in
"if (name)" and is more readable, so use that instead of checking for
non-NULL "vm->def->channels[i]->target.name".
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
NUMA enabled guest configuration explicitly specifies memory sizes for
individual nodes. Allowing the virDomainSetMemoryFlags API (and friends)
to change the total doesn't make sense as the individual node configs
are not updated in that case.
Forbid use of the API in case NUMA is specified.
Commit f7afeddc added code to report to systemd an array of interface
indexes for all tap devices used by a guest. Unfortunately it not only
didn't add code to report the ifindexes for macvtap interfaces
(interface type='direct') or the tap devices used by type='ethernet',
it ended up sending "-1" as the ifindex for each macvtap or hostdev
interface. This resulted in a failure to start any domain that had a
macvtap or hostdev interface (or actually any type other than
"network" or "bridge").
This patch does the following with the nicindexes array:
1) Modify qemuBuildInterfaceCommandLine() to only fill in the
nicindexes array if given a non-NULL pointer to an array (and modifies
the test jig calls to the function to send NULL). This is because
there are tests in the test suite that have type='ethernet' and still
have an ifname specified, but that device of course doesn't actually
exist on the test system, so attempts to call virNetDevGetIndex() will
fail.
2) Even then, only add an entry to the nicindexes array for
appropriate types, and to do so for all appropriate types ("network",
"bridge", and "direct"), but only if the ifname is known (since that
is required to call virNetDevGetIndex()).
https://bugzilla.redhat.com/show_bug.cgi?id=1183869
Soo. you've successfully started yourself a domain. And since you want
to use it on your host exclusively you are confident enough to
passthrough the host CPU model, like this:
<cpu mode='host-passthrough'/>
Then, after a while, you want to save the domain into a file (e.g.
virsh save dom dom.save). And here comes the trouble. The file consists
of two parts: the libvirt header (containing the domain XML among other
things), and qemu migration data. Now, the domain XML in the header is
formatted using special flags (VIR_DOMAIN_XML_SECURE |
VIR_DOMAIN_XML_UPDATE_CPU | VIR_DOMAIN_XML_INACTIVE |
VIR_DOMAIN_XML_MIGRATABLE).
Then, on your way back from the bar, you think of changing something
in the XML in the saved file (we have a command for it after all), say
listen address for graphics console. So you successfully type in the
command:
virsh save-image-edit dom.save
Change all the bits, and exit the editor. But instead of success
you're left with sad error message:
error: unsupported configuration: Target CPU model <null> does not
match source Pentium Pro
Sigh. Digging into the code you see the lines where we check for ABI
stability. The new XML you've produced is compared with the old one
from the saved file to see if qemu ABI will break or not. Wait, what?
We are using different flags to parse the XML you've provided so we
were just lucky it worked in some cases? Yep, that's right.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
As virDomainNumatuneSet no longer allocates the virDomainNuma object,
it's not necessary to pass a pointer to a pointer to store the object,
as it will not change any more.
While touching the parameter definitions I've also changed the name of
the parameter to "numa".
https://bugzilla.redhat.com/show_bug.cgi?id=1126762
Commit 43b67f introduced a deadlock issue when we use numatune
to change the numa settings of a vm in session mode.
Jump to endjob instead of jumping to cleanup.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
The qemuDomainHelperGetVcpus attempted to report an error when the
vcpupids info was NULL. Unfortunately earlier code would clamp the
value of 'maxinfo' to 0 when nvcpupids was 0, so the error reporting
would end up being skipped.
This led to 'virsh vcpuinfo <dom>' just returning an empty list
instead of giving the user a clear error.
In the event we're falling into the code that tries to create the file
in a forked environment (VIR_FILE_OPEN_FORK) we pass different mode bits,
but those are never set because virFileOpenForceOwnerMode checks whether
the OPEN_FORCE_MODE bit is set before attempting to change the mode.
Since this is a special case it seems reasonable to set u+rw,g+rw,o
Export the required helpers and add backend code to hotplug RNG devices.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Some code paths have special logic depending on the page size
reported by sysconf, which in turn affects the test results.
We must mock this so tests always have a consistent page size.
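One way to do that, sketched as a stand-alone LD_PRELOAD shim rather than
the mock helpers the test suite actually uses: intercept sysconf(), return
a fixed page size, and forward everything else to the real function.

    #define _GNU_SOURCE
    #include <unistd.h>
    #include <dlfcn.h>

    long
    sysconf(int name)
    {
        static long (*realSysconf)(int);

        if (name == _SC_PAGESIZE)
            return 4096;          /* consistent page size for all tests */

        if (!realSysconf)
            realSysconf = (long (*)(int)) dlsym(RTLD_NEXT, "sysconf");
        return realSysconf(name);
    }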
This patch enables synchronization of the host macvtap
device options with the guest device's in response to the
NIC_RX_FILTER_CHANGED event.
The following device options will be synchronized:
* PROMISC
* MULTICAST
* ALLMULTI
Signed-off-by: Tony Krowiak <akrowiak@linux.vnet.ibm.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1158034
If we're expecting to create a file somewhere and that fails for some
reason during qemuOpenFileAs, then we unlink the path we're attempting
to create leaving no way to determine what the "existing" privileges,
protections, or labels are that caused the failure (open, change owner
and group, change mode, etc.).
Furthermore, if we fall into the path where we'll be opening / creating
the file using VIR_FILE_OPEN_FORK, we need to first unlink/delete the file
we created in the first path; otherwise, the attempt by the child process
to open as some specific user:group may fail because the file was already
created using nfsnobody:nfsnobody. Again, if we didn't create the file we
don't want to blindly delete what already exists. Thus, a second reason for
the original check to set need_unlink to false when we find the file with
CREAT set, but already existing.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Commit id '540c339a' to fix issues with reference counting and transient
domains moved the qemuDomainObjEndAsyncJob call prior to the attempt to
restart the guest CPUs, resulting in an error:
error: Failed to save domain rhel70 to /tmp/pl/rhel70.save
error: internal error: unexpected async job 3
when (ret != 0) - e.g. the error path from qemuDomainSaveMemory.
This patch will adjust the logic to call the EndAsyncJob only after
we've tried to restart the guest CPUs. It also needs to adjust the
test for qemuDomainRemoveInactive to add the ret == 0 condition.
Additionally, if we get to endjob: because of some error earlier, then
we need to save that error in the event the CPU restart logic fails.
We don't want to return the error from CPU restart failure, rather we
want to return the error from the failed save that caused us to fall
into the retry to start the CPU logic.
Signed-off-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1183890
When we try to update the XML in a saved image file, we end up clearing
the graphics passwd settings: because we do not pass VIR_DOMAIN_XML_SECURE
to qemuDomainDefCopy, qemuDomainDefFormatBuf won't format the passwd.
Add VIR_DOMAIN_XML_SECURE flag when we call qemuDomainDefCopy
in qemuDomainSaveImageUpdateDef.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
For stateless, client side drivers, it is never correct to
probe for secondary drivers. It is only ever appropriate to
use the secondary driver that is associated with the
hypervisor in question. As a result the ESX & HyperV drivers
have both been forced to do hacks where they register no-op
drivers for the ones they don't implement.
For stateful, server side drivers, we always just want to
use the same built-in shared driver. The exception is
virtualbox which is really a stateless driver and so wants
to use its own server side secondary drivers. To deal with
this virtualbox has to be built as 3 separate loadable
modules to allow registration to work in the right order.
This can all be simplified by introducing a new struct
recording the precise set of secondary drivers each
hypervisor driver wants:
struct _virConnectDriver {
    virHypervisorDriverPtr hypervisorDriver;
    virInterfaceDriverPtr interfaceDriver;
    virNetworkDriverPtr networkDriver;
    virNodeDeviceDriverPtr nodeDeviceDriver;
    virNWFilterDriverPtr nwfilterDriver;
    virSecretDriverPtr secretDriver;
    virStorageDriverPtr storageDriver;
};
Instead of registering the hypervisor driver, we now
just register a virConnectDriver instead. This allows
us to remove all probing of secondary drivers. Once we
have chosen the primary driver, we immediately know the
correct secondary drivers to use.
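Registration then reduces to populating one such table per driver and
handing it to virRegisterConnectDriver(); the driver table names below are
made up for illustration:

    static virConnectDriver fooConnectDriver = {
        .hypervisorDriver = &fooHypervisorDriver,
        .networkDriver = &fooNetworkDriver,
        .storageDriver = &fooStorageDriver,
        /* secondary drivers 'foo' does not provide simply stay NULL;
         * no no-op driver registration is needed any more */
    };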
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The code modifies the domain configuration but doesn't take a MODIFY
type job to do so.
This patch also fixes a few very long lines of code around the touched
parts.
The ACL check didn't check the VIR_DOMAIN_XML_SECURE flag and the
appropriate permission for it. Found via code inspection while fixing
permissions for save images.
Depending on the context, either error out if the domain
has disappeared in the meantime, or just ignore the value
to allow marking the function as ATTRIBUTE_RETURN_CHECK.
https://bugzilla.redhat.com/show_bug.cgi?id=1161024
In the device type-specific functions, exit early
if the domain has disappeared, because the cleanup
should have been done by qemuProcessStop.
Check the return value in processDeviceDeletedEvent
and qemuProcessUpdateDevices.
Skip audit and removing the device from live def because
it has already been cleaned up.
The virDomainDefineXMLFlags and virDomainCreateXML APIs both
gain new flags allowing them to be told to validate XML.
This updates all the drivers to turn on validation in the
XML parser when the flags are set
Make a local copy of the disk alias instead of pointing
to the domain definition, which might get freed if
the domain dies while we're in monitor.
Also exit early if that happens.
Exit the monitor right after we've done with it to get
the virDomainObjPtr lock back, otherwise we might be accessing
vm->def while it's being cleaned up by qemuProcessStop.
If the domain crashed while we were in the monitor, exit
early instead of changing vm->def which is now the persistent
definition.
Commit e3435caf fixed hot-plugging of vcpus with strict memory pinning
on NUMA hosts, but unfortunately it also broke updating the number of
vcpus for offline guests using our API.
The issue is that we try to create a cpu cgroup for non-running guest
which fails as there are no cgroups for that domain. We should create
cgroups and update cpuset.mems only if we are hot-plugging.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
When creating an inactive external snapshot, virDomainSaveConfig needs to
be called after the disk definitions are updated; otherwise the new
snapshot file definitions in the XML are lost after a libvirtd restart.
Reproduce steps:
1. prepare a shut off guest
$ virsh domstate rhel7 && virsh domblklist rhel7
shut off
Target Source
------------------------------------------------
vda /var/lib/libvirt/images/rhel7.img
2. create external disk snapshot
$ virsh snapshot-create rhel7 --disk-only && virsh domblklist rhel7
Domain snapshot 1417882967 created
Target Source
------------------------------------------------
vda /var/lib/libvirt/images/rhel7.1417882967
3. restart libvirtd then check guest source file
$ service libvirtd restart && virsh domblklist rhel7
Redirecting to /bin/systemctl restart libvirtd.service
Target Source
------------------------------------------------
vda /var/lib/libvirt/images/rhel7.img
This was first reported by Eric Blake
http://www.redhat.com/archives/libvir-list/2014-December/msg00369.html
Signed-off-by: Shanzhi Yu <shyu@redhat.com>
The virDomainDefParse* and virDomainDefFormat* methods both
accept the VIR_DOMAIN_XML_* flags defined in the public API,
along with a set of other VIR_DOMAIN_XML_INTERNAL_* flags
defined in domain_conf.c.
This is seriously confusing & error prone for a number of
reasons:
- VIR_DOMAIN_XML_SECURE, VIR_DOMAIN_XML_MIGRATABLE and
VIR_DOMAIN_XML_UPDATE_CPU are only relevant for the
formatting operation
- Some of the VIR_DOMAIN_XML_INTERNAL_* flags only apply
to parse or to format, but not both.
This patch cleanly separates out the flags. There are two
distinct VIR_DOMAIN_DEF_PARSE_* and VIR_DOMAIN_DEF_FORMAT_*
flags that are used by the corresponding methods. The
VIR_DOMAIN_XML_* flags received via public API calls must
be converted to the VIR_DOMAIN_DEF_FORMAT_* flags where
needed.
The various calls to virDomainDefParse which hardcoded the
use of the VIR_DOMAIN_XML_INACTIVE flag change to use the
VIR_DOMAIN_DEF_PARSE_INACTIVE flag.
https://bugzilla.redhat.com/show_bug.cgi?id=1135339 documents some
confusing behavior when a user tries to start an inactive block
commit in a second connection while there is already an on-going
active commit from a first connection. Eventually, qemu will
support multiple simultaneous block jobs, but as of now, it does
not; furthermore, libvirt also needs an overhaul before we can
support simultaneous jobs. So, the best way to avoid confusing
ourselves is to quit relying on qemu to tell us about the situation
(where we risk getting in weird states) and instead forbid a
duplicate block commit ourselves.
Note that we are still relying on qemu to diagnose attempts to
interrupt an inactive commit (since we only track XML of an active
commit), but as inactive commit is less confusing for libvirt to
manage, there is less that can go wrong by leaving that detection
up to qemu.
* src/qemu/qemu_driver.c (qemuDomainBlockCommit): Hoist check for
active commit to occur earlier outside of conditions.
Signed-off-by: Eric Blake <eblake@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1177723
When setting new bandwidth limits via
virDomainSetInterfaceParameters, the old ones are cleared first.
However, if setting the new ones fails, the old ones are already gone
and the interface is left in an inconsistent state. Therefore, right
before failing we ought to try to restore the old bandwidth.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Add the possibility to have more than one IP address configured for a
domain network interface. IP addresses can also have a prefix to define
the corresponding netmask.
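A configuration with several addresses then looks roughly like this (the
addresses are examples):

    <interface type='ethernet'>
      <ip address='192.168.122.5' prefix='24'/>
      <ip address='192.168.122.6' prefix='24'/>
      <target dev='veth0'/>
    </interface>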
There is one problem that causes various errors in the daemon. When
domain is waiting for a job, it is unlocked while waiting on the
condition. However, if that domain is for example transient and being
removed in another API (e.g. cancelling incoming migration), it gets
unref'd. If the first call, that was waiting, fails to get the job, it
unrefs the domain object, and because it was the last reference, it
causes clearing of the whole domain object. However, when finishing the
call, the domain must be unlocked, but there is no way for the API to
know whether it was cleaned or not (unless there is some ugly temporary
variable, but let's scratch that).
The root cause is that our APIs don't ref the objects they are using and
all use the implicit reference that the object has when it is in the
domain list. That reference can be removed when the API is waiting for
a job. And because each domain doesn't do its ref'ing, it results in
the ugly checking of the return value of virObjectUnref() that we have
everywhere.
This patch changes qemuDomObjFromDomain() to ref the domain (using
virDomainObjListFindByUUIDRef()) and adds qemuDomObjEndAPI() which
should be the only function in which the return value of
virObjectUnref() is checked. This makes all reference counting
deterministic and makes the code a bit clearer.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Coverity flagged commit 0282ca45 as introducing a memory leak;
in all my refactoring to make capacity probing conditional on
whether the image is non-raw, I missed deleting the unconditional
probe.
* src/qemu/qemu_driver.c (qemuStorageLimitsRefresh): Drop
redundant assignment.
Signed-off-by: Eric Blake <eblake@redhat.com>
Wire up backing chain recursion. For the first time, it is now
possible to get libvirt to expose that qemu tracks read statistics
on backing files, as well as report maximum extent written on a
backing file during a block-commit operation.
For a running domain, where one of the two images has a backing
file, I see the traditional output:
$ virsh domstats --block testvm2
Domain: 'testvm2'
block.count=2
block.0.name=vda
block.0.path=/tmp/wrapper.qcow2
block.0.rd.reqs=1
block.0.rd.bytes=512
block.0.rd.times=28858
block.0.wr.reqs=0
block.0.wr.bytes=0
block.0.wr.times=0
block.0.fl.reqs=0
block.0.fl.times=0
block.0.allocation=0
block.0.capacity=1310720000
block.0.physical=200704
block.1.name=vdb
block.1.path=/dev/sda7
block.1.rd.reqs=0
block.1.rd.bytes=0
block.1.rd.times=0
block.1.wr.reqs=0
block.1.wr.bytes=0
block.1.wr.times=0
block.1.fl.reqs=0
block.1.fl.times=0
block.1.allocation=0
block.1.capacity=1310720000
vs. the new output:
$ virsh domstats --block --backing testvm2
Domain: 'testvm2'
block.count=3
block.0.name=vda
block.0.path=/tmp/wrapper.qcow2
block.0.rd.reqs=1
block.0.rd.bytes=512
block.0.rd.times=28858
block.0.wr.reqs=0
block.0.wr.bytes=0
block.0.wr.times=0
block.0.fl.reqs=0
block.0.fl.times=0
block.0.allocation=0
block.0.capacity=1310720000
block.0.physical=200704
block.1.name=vda
block.1.path=/dev/sda6
block.1.backingIndex=1
block.1.rd.reqs=0
block.1.rd.bytes=0
block.1.rd.times=0
block.1.wr.reqs=0
block.1.wr.bytes=0
block.1.wr.times=0
block.1.fl.reqs=0
block.1.fl.times=0
block.1.allocation=327680
block.1.capacity=786432000
block.2.name=vdb
block.2.path=/dev/sda7
block.2.rd.reqs=0
block.2.rd.bytes=0
block.2.rd.times=0
block.2.wr.reqs=0
block.2.wr.bytes=0
block.2.wr.times=0
block.2.fl.reqs=0
block.2.fl.times=0
block.2.allocation=0
block.2.capacity=1310720000
I may later do a patch that trims the output to avoid 0 stats,
particularly for backing files (which are more likely to have
0 stats, at least for write statistics when no block-commit
is performed). Also, I still plan to expose physical size
information (qemu doesn't expose it yet, so it requires a stat,
and for block devices, a further open/seek operation). But
this patch is good enough without worrying about that yet.
* src/qemu/qemu_driver.c (QEMU_DOMAIN_STATS_BACKING): New internal
enum bit.
(qemuConnectGetAllDomainStats): Recognize new user flag, and pass
details to...
(qemuDomainGetStatsBlock): ...here, where we can do longer recursion.
(qemuDomainGetStatsOneBlock): Output new field.
Signed-off-by: Eric Blake <eblake@redhat.com>
In order to report stats on backing chains, we need to separate
the output of stats for one block from how we traverse blocks.
* src/qemu/qemu_driver.c (qemuDomainGetStatsBlock): Split...
(qemuDomainGetStatsOneBlock): ...into new helper.
Signed-off-by: Eric Blake <eblake@redhat.com>
A coming patch will make it optionally possible to list backing
chain block stats; in this mode of operation, block.count is no
longer the number of <disks> in the domain, but the number of
blocks in the array being reported. We still want block.count
listed first, but rather than iterate the tree twice (once to
count, and once to list stats), it's easier to just touch things
up after the fact.
* src/qemu/qemu_driver.c (qemuDomainGetStatsBlock): Compute count
after the fact.
Signed-off-by: Eric Blake <eblake@redhat.com>
The prior refactoring can now be put to use. With the same domain
as the earlier commit 7b49926 (one qcow2 disk and an empty
cdrom drive):
$ virsh domstats --block foo
Domain: 'foo'
block.count=2
block.0.name=hda
block.0.path=/var/lib/libvirt/images/foo.qcow2
block.0.allocation=1309614080
block.0.capacity=42949672960
block.0.physical=1309671424
block.1.name=hdc
* src/qemu/qemu_driver.c (qemuDomainGetStatsBlock): Use
qemuStorageLimitsRefresh to report offline statistics.
Signed-off-by: Eric Blake <eblake@redhat.com>
Create a helper function that can be reused for gathering block
info from virDomainListGetStats.
* src/qemu/qemu_driver.c (qemuDomainGetBlockInfo): Split guts...
(qemuStorageLimitsRefresh): ...into new helper function.
Signed-off-by: Eric Blake <eblake@redhat.com>
The documentation for virDomainBlockInfo was confusing: it stated
that 'physical' was the size of the container, then gave an example
of it being the amount of storage used by a sparse file (that is,
for a sparse raw image on a regular file, the wording implied
capacity==physical, while allocation was smaller; but the example
instead claimed physical==allocation). Since we use 'physical' for
the last offset of a block device, we should do likewise for
regular files.
Furthermore, the example claimed that for a qcow2 regular file,
allocation==physical. At the time the code was first written,
this was true (qcow2 files were allocated sequentially, and were
never sparse, so the last sector written happened to also match
the disk space occupied); but modern qemu does much better and
can punch holes for a qcow2 with allocation < physical.
Basically, after this patch, the three fields are now reliably
mapped as:
'capacity' - how much storage the guest can see (equal to
physical for raw images, determined by image metadata otherwise)
'allocation' - how much storage the image occupies (similar to
what 'du' would report)
'physical' - the last offset of the image (similar to what 'ls'
would report)
'capacity' can be larger than 'physical' (such as for a qcow2
image that does not vary much from a backing file) or smaller
(such as for a qcow2 file with lots of internal snapshots).
Likewise, 'allocation' can be (slightly) larger than 'physical'
(such as counting the tail of cluster allocations required to
round a file size up to filesystem granularity) or smaller
(for a sparse file). A block-resize operation changes capacity
(which, for raw images, also changes physical); many non-raw
images automatically grow physical and allocation as necessary
when starting with an allocation smaller than capacity; and even
when capacity and physical stay unchanged, allocation can change
when converting sectors from holes to data or back.
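Seen through the public API, all three values come back from a single
call; for example (error handling trimmed):

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    static void
    printBlockInfo(virDomainPtr dom, const char *disk)
    {
        virDomainBlockInfo info;

        if (virDomainGetBlockInfo(dom, disk, &info, 0) == 0)
            printf("capacity=%llu allocation=%llu physical=%llu\n",
                   info.capacity, info.allocation, info.physical);
    }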
Note that this does not change semantics for qcow2 images stored
on block devices; there, we still rely on qemu to report the
highest written extent for allocation. So using this API to
track when to extend a block device because a qcow2 image is
about to exceed a threshold will not see any changes.
Also, note that virStorageVolInfo is unfortunately limited to
just 'capacity' and 'allocation' (we can't expand it to add
'physical', although we can expand the XML to add it there);
historically, that struct's 'allocation' value has reported
file size for qcow2 files (what this patch terms 'physical'
for a domain block device), but disk usage for raw files (what
this patch terms 'allocation'). So follow-up patches will be
needed to make storage volumes report the same allocation
values and get at physical values, where those differ.
* include/libvirt/libvirt-domain.h (_virDomainBlockInfo): Tweak
documentation to match saner definition.
* src/qemu/qemu_driver.c (qemuDomainGetBlockInfo): For regular
files, physical size is capacity, not allocation.
Signed-off-by: Eric Blake <eblake@redhat.com>
Ultimately, we want to avoid read()ing a file while qemu is running.
We still have to open() block devices to determine their physical
size, but that is safer. This patch rearranges code to group
together all code that reads the image, to make it easier for later
patches to skip the metadata collection when possible.
* src/qemu/qemu_driver.c (qemuDomainGetBlockInfo): Check for empty
disk up front. Place metadata reading next to use.
Signed-off-by: Eric Blake <eblake@redhat.com>
A future patch will allow recursion into backing chains when
collecting block stats. This patch should not change behavior,
but merely moves out the common code that will be reused once
recursion is enabled, and adds the parameter that will turn on
recursion.
* src/qemu/qemu_monitor.h (qemuMonitorGetAllBlockStatsInfo)
(qemuMonitorBlockStatsUpdateCapacity): Add recursion parameter,
although it is ignored for now.
* src/qemu/qemu_monitor.h (qemuMonitorGetAllBlockStatsInfo)
(qemuMonitorBlockStatsUpdateCapacity): Likewise.
* src/qemu/qemu_monitor_json.h
(qemuMonitorJSONGetAllBlockStatsInfo)
(qemuMonitorJSONBlockStatsUpdateCapacity): Likewise.
* src/qemu/qemu_monitor_json.c
(qemuMonitorJSONGetAllBlockStatsInfo)
(qemuMonitorJSONBlockStatsUpdateCapacity): Add parameter, and
split...
(qemuMonitorJSONGetOneBlockStatsInfo)
(qemuMonitorJSONBlockStatsUpdateCapacityOne): ...into helpers.
(qemuMonitorJSONGetBlockStatsInfo): Update caller.
* src/qemu/qemu_driver.c (qemuDomainGetStatsBlock): Update caller.
* src/qemu/qemu_migration.c (qemuMigrationCookieAddNBD): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Right now, grabbing blockinfo always calls stat on the disk, then
opens the image to determine the capacity, using a throw-away
virStorageSourcePtr. This has a couple of drawbacks:
1. We are calling stat and opening a file on every invocation of
the API. However, there are cases where the stats should NOT be
changing between successive calls (if a domain is running, no
one should be changing the physical size of a block device or raw
image behind our backs; capacity of read-only files should not
be changing; and we are the gateway to the block-resize command
to know when the capacity of read-write files should be changing).
True, we still have to use stat in some cases (a sparse raw file
changes allocation if it is read-write and the amount of holes is
changing, and a read-write qcow2 image stored in a file changes
physical size if it was not fully pre-allocated). But for
read-only images, even this should be something we can remember
from the previous time, rather than repeating every call.
2. We want to enhance the power of virDomainListGetStats, by
sharing code. But we already have a virStorageSourcePtr for
each disk, and it would be easier to reuse the common structure
than to have to worry about the one-off virDomainBlockInfoPtr.
While this patch does not optimize reuse of information in point
1, it does get us closer to being able to do so; by updating a
structure that survives between consecutive calls.
* src/util/virstoragefile.h (_virStorageSource): Add physical, to
mirror virDomainBlockInfo; rearrange fields to match public struct.
(virStorageSourceCopy): Copy the new field.
* src/qemu/qemu_driver.c (qemuDomainGetBlockInfo): Store into
storage source, then copy to block info.
Signed-off-by: Eric Blake <eblake@redhat.com>
In order for a future patch to virDomainListGetStats to reuse
some code for determining disk usage of offline domains, we
need to make it easier to pull out part of the guts of grabbing
blockinfo. The current implementation grabs a job fairly late
in the game, while getstats will already own a job; reordering
things so that the job is always grabbed up front in both
functions will make it easier to pull out the common code.
This patch results in grabbing a job in cases where one was not
previously needed, but as it is a query job, it should not be
noticeably slower.
This patch touches the same code as the fix for CVE-2014-6458
(commit b799259); in that patch, we avoided hotplug changing
a disk reference during the time of obtaining a monitor lock
by copying all data we needed and no longer referencing disk;
this patch goes the other way and ensures that by holding the
job, the disk cannot be changed so we no longer need to worry
about the disk being invalidated across the monitor lock.
* src/qemu/qemu_driver.c (qemuDomainGetBlockInfo): Rearrange job
control to be outside of disk information.
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit e3435caf added cleanup code to qemuDomainSetVcpusFlags() that was
not supposed to reset the error. The usual procedure was followed, saving
the error to a temporary variable, but it was never freed and leaked instead.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
When hot-plugging a VCPU into the guest, kvm needs to allocate some data
from the DMA zone, which might be in a memory node that's not allowed in
cpuset.mems. Basically the same problem as there was with starting the
domain and due to which commit 7e72ac7878
exists. This patch just extends it to hotplugging as well.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1161540
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Instead of setting the value of cpuset.mems once when the domain starts
and then re-calculating the value every time we need to change the child
cgroup values, leave the cgroup alone and rather set the child data
every time there is new cgroup created. We don't leave any task in the
parent group anyway. This will ease both current and future code.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1174154
When we use attach-device to add a hostdev or chr device which has an
iscsi address or similar (such as a guest agent channel or a subsys iscsi
disk...), we will find there is no basic controller for our newly attached
device. Sometimes this means the guest cannot start after we add them
(although it can start on the second attempt).
Signed-off-by: Luyao Huang <lhuang@redhat.com>
We can change the vnc password by using the virDomainUpdateDeviceFlags API
with the live flag, but it can't be changed with the config flag. The error
is reported as below.
error: Operation not supported: persistent update of device 'graphics' is not supported
This patch adds support for changing the graphics configuration with the config flag.
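From an application's point of view, a call like the following now also
works against the persistent definition (a sketch; the XML is abbreviated
and the password is an example):

    #include <libvirt/libvirt.h>

    static int
    setVNCPassword(virDomainPtr dom)
    {
        const char *xml =
            "<graphics type='vnc' port='-1' autoport='yes' passwd='examplepw'/>";

        return virDomainUpdateDeviceFlags(dom, xml, VIR_DOMAIN_AFFECT_CONFIG);
    }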
Signed-off-by: Wang Rui <moon.wangrui@huawei.com>
A logic bug in qemuConnectGetAllDomainStats makes the code mark the
monitor as available when qemuDomainObjBeginJob fails, instead of when
it succeeds, as the correct flow requires.
This patch fixes the check and updates the code documentation
accordingly.
Broken by commit 57023c0a3a.
Signed-off-by: Francesco Romani <fromani@redhat.com>
When a user doesn't have read access on one of the domains he requested,
the for loop could exit abruptly or continue and overwrite a pointer which
pointed to a locked object.
This patch fixes two issues at once. One is that domflags might have had
QEMU_DOMAIN_STATS_HAVE_JOB even when there was no job started (this is
fixed by doing domflags |= QEMU_DOMAIN_STATS_HAVE_JOB only when the job
was acquired and by clearing domflags at every start of the loop).
The second one is that the domain is kept locked when
virConnectGetAllDomainStatsCheckACL() fails and the loop continues
instead of ending. Adding a simple virObjectUnlock() and clearing the
pointer ought to do.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Avoid leaving the domain locked on a failed ACL check in
qemuDomainMigratePerform() and qemuDomainMigrateFinish2().
Introduced in commit abf75aea24 (Add ACL checks into the QEMU driver).
I'm about to make block stats optionally more complex to cover
backing chains, where block.count will no longer equal the number
of <disks> for a domain. For these reasons, it is nicer if the
statistics output includes the source path (for local files).
This patch doesn't add anything for network disks, although we
may decide to add that later.
With this patch, I now see the following for the same domain as
in the previous patch (one qcow2 file, and an empty cdrom drive):
$ virsh domstats --block foo
Domain: 'foo'
block.count=2
block.0.name=hda
block.0.path=/var/lib/libvirt/images/foo.qcow2
block.1.name=hdc
* src/libvirt-domain.c (virConnectGetAllDomainStats): Document
new field.
* tools/virsh.pod (domstats): Document new field.
* src/qemu/qemu_driver.c (qemuDomainGetStatsBlock): Return the new
stat for local files/block devices.
(QEMU_ADD_NAME_PARAM): Add parameter.
(qemuDomainGetStatsInterface): Update caller.
Signed-off-by: Eric Blake <eblake@redhat.com>
I noticed that for an offline domain, 'virsh domstats --block $dom'
was producing just the domain name, with no stats. But the older
'virsh domblkinfo' works just fine on offline domains. This patch
starts to get us closer, by at least reporting the disk names for
an offline domain.
With this patch, I now see the following for an offline domain
with one qcow2 disk and an empty cdrom drive:
$ virsh domstats --block foo
Domain: 'foo'
block.count=2
block.0.name=hda
block.1.name=hdc
* src/qemu/qemu_driver.c (qemuDomainGetStatsBlock): Don't short-circuit
output of block name.
Signed-off-by: Eric Blake <eblake@redhat.com>
qemuDomainGetStatsBlock() could leak a stats hash table if it
encountered OOM while populating the virTypedParameters.
Oddly, the fix doesn't even touch qemuDomainGetStatsBlock :)
* src/qemu/qemu_driver.c (QEMU_ADD_COUNT_PARAM)
(QEMU_ADD_NAME_PARAM): Don't return early.
(qemuDomainGetStatsInterface): Adjust caller.
Signed-off-by: Eric Blake <eblake@redhat.com>
When attempting to create an internal system checkpoint with a passthrough
device, qemu will report the following error:
error: operation failed: Error -22 while writing VM
This patch calls the function to check if migration is possible with
given VM and thus improves the error to:
error: Requested operation is not valid: domain has assigned non-USB host devices
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=874418#c19
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
If someone removes the blockcopy storage file while still in the mirroring
phase and then requests a blockjob abort using pivot, the virsh command
freezes. This is not an issue with older qemu versions which did not
support asynchronous jobs (which we prefer by default).
As we have reached the mirroring phase successfully, polling the monitor
for blockjob info always returns 1 and the loop never ends.
This fix introduces a check of the qemuDomainBlockPivot return code, possibly
skipping the asynchronous waiting completely, if an error occurred and
asynchronous waiting was the preferred method.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1139567
Reconnecting to the VM is a possibly long-running job spawned in a separate
thread. We should reload the snapshot defs and managedsave state prior
to spawning the thread to avoid blocking of the daemon startup which
would serialize on the VM lock.
Also the reloading code would violate the domain job held while
reconnecting as the loader functions don't create jobs.
Since virDomainSnapshotFree will call virObjectUnref anyway, let's just use
that directly so as to avoid the possibility that we inadvertently clear out
a pending error message when using the public API.
Since virDomainFree will call virObjectUnref anyway, let's just use that
directly so as to avoid the possibility that we inadvertently clear out
a pending error message when using the public API.
There is a race condition between the fopen and fscanf calls
in qemuGetProcessInfo. If fopen succeeds, there is a small
possibility that the file no longer exists before reading from it.
Now, if either fopen or fscanf calls fail, the function will behave
just as only fopen had failed.
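Schematically (not the exact qemuGetProcessInfo code), the fix treats a
failed fscanf() exactly like a failed fopen():

    #include <stdio.h>

    /* returns 0 on success, -1 as if the file had never existed */
    static int
    readFirstValue(const char *path, unsigned long long *value)
    {
        FILE *fp = fopen(path, "r");
        int ok;

        if (!fp)
            return -1;

        /* the file can disappear between fopen() and the read */
        ok = (fscanf(fp, "%llu", value) == 1);
        fclose(fp);

        return ok ? 0 : -1;
    }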
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1169055
Signed-off-by: Eric Blake <eblake@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1160084
As of b6d4dad11b (1.2.5) we are trying to keep the status of FSFreeze
in the guest. Even though I've tried to fix a couple of corner cases
(6ea54769ba), it occurred to me just recently that the approach is
broken by design. Firstly, there are many other ways to talk to
qemu-ga (even through libvirt) through which filesystems can be thawed
(e.g. qemu-agent-command) without libvirt noticing. Moreover, there are
plenty of ways to thaw filesystems without even qemu-ga noticing (yes,
qemu-ga keeps internal track of the FSFreeze status). So, instead of
keeping track ourselves, or asking qemu-ga for stale state, it's
best to let qemu-ga deal with that (and possibly let the guest kernel
propagate an error).
Moreover, there's one bug in the current approach: if the fsfreeze
command failed, we executed fsthaw subsequently. So issuing
domfsfreeze in virsh gave the following result:
virsh # domfsfreeze gentoo
Froze 1 filesystem(s)
virsh # domfsfreeze gentoo
error: Unable to freeze filesystems
error: internal error: unable to execute QEMU agent command 'guest-fsfreeze-freeze': The command guest-fsfreeze-freeze has been disabled for this instance
virsh # domfsfreeze gentoo
Froze 1 filesystem(s)
virsh # domfsfreeze gentoo
error: Unable to freeze filesystems
error: internal error: unable to execute QEMU agent command 'guest-fsfreeze-freeze': The command guest-fsfreeze-freeze has been disabled for this instance
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Get the list of mounted filesystems, which contains hardware info about the
disks and their controllers, from QEMU guest agent 2.2+. Then convert the hardware info
to corresponding device aliases for the disks.
Signed-off-by: Tomoki Sekiyama <tomoki.sekiyama@hds.com>
New qemu added a new event that is emitted when a virtio serial channel
is opened in the guest OS. This allows us to update the state of the
port in the output-only XML element.
This patch implements the monitor callbacks and necessary handlers to
update the state in the definition.
When creating a disk image snapshot the libvirt code would blindly copy
the parent's label to the newly created image. This runs into problems
when you start a VM from an image hosted on NFS (or other storage system
that doesn't support selinux labels) and the snapshot destination is on
a storage system that does support selinux labels. Libvirt's code in
that case generates a different security label for the image hosted on
NFS. This label is valid only for NFS images and doesn't allow access in
case of a locally stored image.
To fix this issue libvirt needs to refrain from copying security
information in cases where the default domain seclabel is a better
choice.
This patch repurposes the now unused @force argument of
virStorageSourceInitChainElement to denote whether a copy of the
security labelling stuff should be attempted or not. This allows
fine-grained control of the copy operation for cases where we need to keep the
label of the old disk vs. the cases where we need to keep the label
unset to use the default domain imagelabel.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1151718
I noticed this while working on qemuDomainGetBlockInfo. Assigning
a bool value to an int variable compiles fine, but raises red flags
on the maintenance front as it becomes too easy to assign -1 or 2
or any other non-bool value to the same variable.
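The pattern being outlawed, and its fix, in miniature (the expression on
the right-hand side is illustrative):

    /* before: compiles fine, but nothing stops a later "active = 2;" */
    int active = vm->def->id != -1;

    /* after: the variable can only ever hold true or false */
    bool active = vm->def->id != -1;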
* cfg.mk (sc_prohibit_int_assign_bool): New rule.
* src/conf/snapshot_conf.c (virDomainSnapshotRedefinePrep): Fix
offenders.
* src/qemu/qemu_driver.c (qemuDomainGetBlockInfo)
(qemuDomainSnapshotCreateXML): Likewise.
* src/test/test_driver.c (testDomainSnapshotAlignDisks):
Likewise.
* src/util/vircgroup.c (virCgroupSupportsCpuBW): Likewise.
* src/util/virpci.c (virPCIDeviceBindToStub): Likewise.
* src/util/virutil.c (virIsCapableVport): Likewise.
* tools/virsh-domain-monitor.c (cmdDomMemStat): Likewise.
* tools/virsh-domain.c (cmdBlockResize, cmdScreenshot)
(cmdInjectNMI, cmdSendKey, cmdSendProcessSignal)
(cmdDetachInterface): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
For some reason, commit id '72b4151f' triggered a Coverity uninitialized
'reply' variable check when referenced within the for loop.
It seems Coverity doesn't know that flags will have to be either AFFECT_LIVE
or AFFECT_CONFIG after the virDomainLiveConfigHelperMethod call.
By adding a "sa_assert()" to confirm that fact, Coverity is happy again.
https://bugzilla.redhat.com/show_bug.cgi?id=1164080
After a disk is hot-unplugged, a subsequent call to qemuDomainGetBlockIoTune
to get the --config settings of that disk will fail because the disk is no
longer found by qemuDiskPathToAlias, causing an unexpected failure.
Since only the --live flag needs to have the disk device pointer, move the
fetch inside the (flags & VIR_DOMAIN_AFFECT_LIVE) condition. This will also
affect the results if no flags are provided or the --current flag is provided.
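For reference, the --config query in question corresponds to a public API
call along these lines (a sketch; disk name and error handling are
illustrative only):
#include <stdlib.h>
#include <libvirt/libvirt.h>
/* sketch: fetch the persisted (--config) iotune settings of disk "vda" */
static int
getConfigIoTune(virDomainPtr dom, virTypedParameterPtr *params, int *nparams)
{
    *nparams = 0;
    /* first call with NULL params just asks how many parameters exist */
    if (virDomainGetBlockIoTune(dom, "vda", NULL, nparams,
                                VIR_DOMAIN_AFFECT_CONFIG) < 0 || *nparams == 0)
        return -1;
    if (!(*params = calloc(*nparams, sizeof(**params))))
        return -1;
    return virDomainGetBlockIoTune(dom, "vda", *params, nparams,
                                   VIR_DOMAIN_AFFECT_CONFIG);
}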
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Commit 6e5c79a1 tried to fix deadlock between nwfilter{Define,Undefine}
and starting of guest, but this same deadlock exists for
updating/attaching network device to domain.
The deadlock was introduced by removing the global QEMU driver lock,
because nwfilter was counting on this lock to ensure that all driver
locks are held inside of nwfilter{Define,Undefine}.
This patch extends usage of virNWFilterReadLockFilterUpdates to prevent
the deadlock for all possible paths in QEMU driver. LXC and UML drivers
still have global lock.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1143780
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Add support for bps_max and friends in the driver part.
In the code checking whether qemu is running, check if the running binary
supports bps_max; if not, print an error message; if it does, add it to
the "info" variable
Signed-off-by: Matthias Gatto <matthias.gatto@outscale.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1160084
As of b6d4dad1 (1.2.5) libvirt keeps track of whether domain disks have
been frozen. However, this falls into the set of information which
doesn't survive domain restart. Therefore, we need to clear the flag upon some
state transitions. Moreover, once we clear the flag we must update the
status file too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Since there was a valid note on patch 43b67f2e about the best spot to
check for the bandwidth set call while having the libvirt daemon run in
session mode, this patch reverts the previous changes dealing with
bandwidth in qemu_driver.c and qemu_command.c (it also reverts adding
the variable @cfg in qemuDomainGetNumaParameters, which has no use at
the moment other than getting and unreferencing the driver's config).
There will be another patch in the series which introduces the fix itself.
https://bugzilla.redhat.com/show_bug.cgi?id=1159219
Users might want to update startupPolicy via the
virDomainUpdateDeviceFlags API too. This patch
implements the feature on config layer.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Particularly in qemuBuildNumaArgStr(), there was a need for numad's
advice due to memory backing, which needs to know the nodeset it will
be pinned to. With newer qemu this caused the following error when
starting a domain:
error: internal error: Advice from numad is needed in case of
automatic numa placement
even when starting a perfectly valid domain, e.g.:
...
<vcpu placement='auto'>4</vcpu>
<numatune>
<memory mode='strict' placement='auto'/>
</numatune>
<cpu>
<numa>
<cell id='0' cpus='0' memory='524288'/>
<cell id='1' cpus='1' memory='524288'/>
</numa>
</cpu>
...
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1138545
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Now that all offenders have been cleaned, turn on a syntax-check
rule to prevent future offenders.
* cfg.mk (sc_prohibit_static_zero_init): New rule.
* src/qemu/qemu_driver.c (qemuDomainBlockJobImpl): Avoid false
positive.
Signed-off-by: Eric Blake <eblake@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1141621
As part of attach processing, assign the device aliases by calling
qemuAssignDeviceAliases during qemuDomainQemuAttach once all the devices
are found after the qemuParseCommandLinePid processing.
This will alleviate a symptom that caused a libvirtd crash during an
attempted device detach.
This patch adds functionality to processNicRxFilterChangedEvent().
The old and new multicast lists are compared and the filters in
the macvtap are programmed to match the guest's filters.
Signed-off-by: Tony Krowiak <akrowiak@linux.vnet.ibm.com>
https://bugzilla.redhat.com/show_bug.cgi?id=956506 documents that
given a domain where an internal snapshot parent has an external
snapshot child, we lacked a safety check when trying to use the
--children-only option to snapshot-delete:
$ virsh start dom
$ virsh snapshot-create-as dom internal
$ virsh snapshot-create-as dom external --disk-only
$ virsh snapshot-delete dom external
error: Failed to delete snapshot external
error: unsupported configuration: deletion of 1 external disk snapshots not supported yet
$ virsh snapshot-delete dom internal --children
error: Failed to delete snapshot internal
error: unsupported configuration: deletion of 1 external disk snapshots not supported yet
$ virsh snapshot-delete dom internal --children-only
Domain snapshot internal children deleted
While I'd still like to see patches that actually do proper external
snapshot deletion, we should at least fix the inconsistency in the
meantime. With this patch:
$ virsh snapshot-delete dom internal --children-only
error: Failed to delete snapshot internal
error: unsupported configuration: deletion of 1 external disk snapshots not supported yet
* src/qemu/qemu_driver.c (qemuDomainSnapshotDelete): Fix condition.
Signed-off-by: Eric Blake <eblake@redhat.com>
To prepare for introducing a single global driver, rename the
virDriver struct to virHypervisorDriver and the registration
API to virRegisterHypervisorDriver()
Tuning NUMA or network interface parameters requires root
privileges to manage cgroups. Thus an attempt to set some of these
parameters in session mode on a running domain should fail with an
error. An example might be memory tuning, which already raises an
error in such a case.
The following behavior in session mode will be present after applying
this patch:
Tuning | SET | GET |
----------|---------------|--------|
NUMA | shut off only | always |
Memory | never | never |
Interface | never | always |
Resolves https://bugzilla.redhat.com/show_bug.cgi?id=1126762
The documentation for the restore hook states that returning an empty
XML is equivalent to copying the input. There was a bug in the code
checking the returned string: it checked the string pointer instead of
its contents. Use the new helper to check if the string is empty.
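The check now boils down to something like this sketch (assuming the new
helper is virStringIsEmpty; surrounding names are illustrative only):
    /* an empty or whitespace-only hook result means: keep the incoming XML */
    if (virStringIsEmpty(xml))
        xml = original_xml;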
virt-manager on Fedora sets up i686 hosts with "/usr/bin/qemu-kvm" emulator,
which in turn unconditionally execs qemu-system-x86_64 querying capabilities
then fails:
Error launching details: invalid argument: architecture from emulator 'x86_64' doesn't match given architecture 'i686'
Traceback (most recent call last):
File "/usr/share/virt-manager/virtManager/engine.py", line 748, in _show_vm_helper
details = self._get_details_dialog(uri, vm.get_connkey())
File "/usr/share/virt-manager/virtManager/engine.py", line 726, in _get_details_dialog
obj = vmmDetails(conn.get_vm(connkey))
File "/usr/share/virt-manager/virtManager/details.py", line 399, in __init__
self.init_details()
File "/usr/share/virt-manager/virtManager/details.py", line 784, in init_details
domcaps = self.vm.get_domain_capabilities()
File "/usr/share/virt-manager/virtManager/domain.py", line 518, in get_domain_capabilities
self.get_xmlobj().os.machine, self.get_xmlobj().type)
File "/usr/lib/python2.7/site-packages/libvirt.py", line 3492, in getDomainCapabilities
if ret is None: raise libvirtError ('virConnectGetDomainCapabilities() failed', conn=self)
libvirtError: invalid argument: architecture from emulator 'x86_64' doesn't match given architecture 'i686'
Journal:
Oct 16 21:08:26 goatlord.localdomain libvirtd[1530]: invalid argument: architecture from emulator 'x86_64' doesn't match given architecture 'i686'
After setting a running domain's numa parameters, saving the change
into the live XML is needed to survive restarting libvirtd
(same story as bug 1146511); meanwhile add calls to
qemuDomainObjBeginJob/qemuDomainObjEndJob in qemuDomainSetNumaParameters
Signed-off-by: Shanzhi Yu <shyu@redhat.com>
After setting the blkio parameters for a running domain, saving the change
into the live XML is needed to survive restarting libvirtd (same story
as bug 1146511); meanwhile add calls to qemuDomainObjBeginJob/qemuDomainObjEndJob
in qemuDomainSetBlkioParameters
Signed-off-by: Shanzhi Yu <shyu@redhat.com>
This patch fills in the functionality of
processNicRxFilterChangedEvent(). It now checks if it is appropriate
to respond to the NIC_RX_FILTER_CHANGED event (based on device type
and configuration) and takes appropriate action. Currently it checks
if the guest interface has been configured with
trustGuestRxFilters='yes', and if the host side device is macvtap. If
so, and the MAC address on the guest has changed, the MAC address of
the macvtap device is changed to match.
The result of this is that networking from the guest will continue to
work if the mac address of a macvtap-connected network device is
changed from within the guest, as long as trustGuestRxFilters='yes'
(previously changing the MAC address in the guest would break
networking).
NIC_RX_FILTER_CHANGED is sent by qemu any time a NIC driver in the
guest modifies the NIC's RX filter (for example, if the MAC address of
the NIC is changed by the guest).
This patch doesn't do anything useful with that event; it just sets up
all the plumbing to get news of the event into a worker thread with
all proper locking/reference counting, and provide an easy place to
add in desired functionality.
See src/qemu/EVENTHANDLERS.txt for information/instructions on adding
a libvirt-internal handler for a qemu event (using
NIC_RX_FILTER_CHANGED as an example).
Prior patch removed the need for the virConnectPtr in the unplug
detach host path which caused ripple effect to remove in multiple
callers. The previous patch just left things as ATTRIBUTE_UNUSED -
this patch will remove the variable.
Check the domain's status before calling virQEMUCapsGet so we report an
accurate error when the domain is shut off
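The added check is the usual liveness guard, roughly (a sketch; the exact
error wording may differ):
    if (!virDomainObjIsActive(vm)) {
        virReportError(VIR_ERR_OPERATION_INVALID, "%s",
                       _("domain is not running"));
        goto cleanup;
    }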
Resolve: https://bugzilla.redhat.com/show_bug.cgi?id=1147847
Signed-off-by: Shanzhi Yu <shyu@redhat.com>
Up until now, we set the memballoon period in the monitor successfully,
however we did not update the domain definition structure, thus dumpxml
was omitting the period attribute in the memballoon element
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1140960
When trying to update bandwidth limits on a running domain, the limits
get updated in our internal structures, however the XML parser reads
the bandwidth limits from the network's 'actual' definition. With this
patch the bandwidth in the 'actual' definition is updated as well,
thus updating the domain's runtime XML.
Management software wants to be able to allocate disk space on demand.
To support this it needs to keep track of the space occupied by the
block device. This information is reported by qemu as part of the block
stats.
This patch extends the block information in the bulk stats with the
allocation information.
To keep the same behaviour a helper is extracted from
qemuMonitorJSONGetBlockExtent in order to get per-device allocation
information.
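For consumers, the new data shows up as additional block.<n> fields in the
bulk stats output; a minimal sketch of fetching them through the public API:
#include <stdio.h>
#include <libvirt/libvirt.h>
/* sketch: print the block stats field names reported for every domain */
static void
dumpBlockStats(virConnectPtr conn)
{
    virDomainStatsRecordPtr *records = NULL;
    int nrecords = virConnectGetAllDomainStats(conn, VIR_DOMAIN_STATS_BLOCK,
                                               &records, 0);
    for (int i = 0; i < nrecords; i++) {
        for (int j = 0; j < records[i]->nparams; j++)
            printf("%s: %s\n", virDomainGetName(records[i]->dom),
                   records[i]->params[j].field);
    }
    if (records)
        virDomainStatsRecordListFree(records);
}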
Signed-off-by: Francesco Romani <fromani@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
This removes the artificial and unnecessary restriction that
virDomainSetMaxDowntime() only be called while a migration is in
progress.
https://bugzilla.redhat.com/show_bug.cgi?id=1146618
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The current block stats code matched up the disk name with the actual
stats by the order in the data returned from qemu. This unfortunately
isn't right as qemu may return the disks in any order. Fix this by
returning a hash of stats indexed by the disk alias.
For the new VIR_DOMAIN_EVENT_ID_TUNABLE event we have a bunch of
constants added
VIR_DOMAIN_EVENT_CPUTUNE_<blah>
VIR_DOMAIN_EVENT_BLKDEVIOTUNE_<blah>
This naming convention is bad for two reasons
- There is no common prefix unique for the events to both
relate them, and distinguish them from other event
constants
- The values associated with the constants were chosen
to match the names used with virConnectGetAllDomainStats
so having EVENT in the constant name is not applicable in
that respect
This patch proposes renaming the constants to
VIR_DOMAIN_TUNABLE_CPU_<blah>
VIR_DOMAIN_TUNABLE_BLKDEV_<blah>
i.e., giving them a common VIR_DOMAIN_TUNABLE prefix.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Use the universal tunable event to report changes to the user. All
blkdeviotune values are prefixed with "blkdeviotune".
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
When you updated some blkdeviotune values for a running domain the
values were stored only internally, but not saved into the live XML,
so they wouldn't survive restarting libvirtd.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Request erroring out from the backing chain traveller and drop qemu's
internal backing chain integrity tester.
The backing chain traveller reports errors by itself with possibly more
detail than qemuDiskChainCheckBroken ever could.
We also need to make sure that we reconnect to existing qemu instances
even at the cost of losing the backing chain info (this really should be
stored in the XML rather than reloaded from disk, but that needs some
work).
Now we have the universal tunable event so we can use it for reporting
changes to the user. The cputune values will be prefixed with "cputune"
to distinguish them from other tunable events.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Add a new parameter that allows returning the XML stored in the save
image for further manipulation, and adjust the callers. This option will
be used in later patches.
We are not detecting the presence of FIPS from QEMU, but from procfs,
and that means it's not a QEMU capability. It was decided that we will
pass this flag to QEMU even if it's not supported by old QEMU binaries.
This patch also reverts the changes done by commit a21cfb0f to
qemucapabilitiestest and implements a new test case in qemuxml2argvtest.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1135431
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Commit f05b6a91 added a virQEMUDriverConfigPtr argument to the
virQEMUCapsFillDomainCaps function and it uses forward declarations
of virQEMUDriverConfig and virQEMUDriverConfigPtr, which causes the
clang build to fail:
gmake[3]: Entering directory `/usr/home/novel/code/libvirt/src'
CC qemu/libvirt_driver_qemu_impl_la-qemu_capabilities.lo
In file included from qemu/qemu_capabilities.c:43:
In file included from qemu/qemu_hostdev.h:27:
qemu/qemu_conf.h:63:37: error: redefinition of typedef 'virQEMUDriverConfig'
is a C11 feature [-Werror,-Wtypedef-redefinition]
typedef struct _virQEMUDriverConfig virQEMUDriverConfig;
^
qemu/qemu_capabilities.h:328:37: note: previous definition is here
typedef struct _virQEMUDriverConfig virQEMUDriverConfig;
^
Fix that by passing the loader and nloader config attributes directly
instead of passing the complete config.
Clean up all _virDomainMemoryStat.
Signed-off-by: James <james.wangyufei@huawei.com>
Signed-off-by: Wang Rui <moon.wangrui@huawei.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Clean up all _virDomainBlockStats.
Signed-off-by: James <james.wangyufei@huawei.com>
Signed-off-by: Wang Rui <moon.wangrui@huawei.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Clean up all _virDomainInterfaceStats.
Signed-off-by: Wang Yufei <james.wangyufei@huawei.com>
Signed-off-by: Wang Rui <moon.wangrui@huawei.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
The live definition was used to look up the disk index while the
persistent one was indexed, leading to a crash in
qemuDomainGetBlockIoTune. Use the correct def and report a nice error.
Unfortunately it's accessible via a read-only connection, though it can
only crash libvirtd in the cases where the guest is hot-plugging disks
without reflecting those changes in the persistent definition. So
avoiding hotplug, or doing hotplug where the persistent definition is
always modified alongside the live one, will avoid the out-of-bounds
access.
Introduced in: eca96694a7f992be633d48d5ca03cedc9bbc3c9aa (v0.9.8)
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1140724
Reported-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
This patch implements the VIR_DOMAIN_STATS_BLOCK group of statistics.
To do so, a helper function to get the block stats of all the disks of
a domain is added.
Signed-off-by: Francesco Romani <fromani@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
This patch implements the VIR_DOMAIN_STATS_INTERFACE group of
statistics.
Signed-off-by: Francesco Romani <fromani@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
This patch implements the VIR_DOMAIN_STATS_VCPU group of statistics. To
do so, this patch also extracts a helper to gather the vCPU information.
Signed-off-by: Francesco Romani <fromani@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
This patch implements the VIR_DOMAIN_STATS_CPU_TOTAL group of
statistics.
Signed-off-by: Francesco Romani <fromani@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Future patches which will implement more bulk stats groups for QEMU will
need to access the connection object.
To accommodate that, a few changes are needed:
* enrich internal prototype to pass qemu driver object
* add per-group flag to mark if one collector needs monitor access or not
* If at least one collector of the requested stats needs monitor access
we must start a query job for each domain. The specific collectors
will run nested monitor jobs inside that.
* If the job can't be acquired we pass flags to the collector so
specific collectors that need monitor access can be skipped in order
to gather as much data as is possible.
Signed-off-by: Francesco Romani <fromani@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Check to see if the UEFI binary mentioned in qemu.conf actually
exists, and if so expose it in domcapabilities like
<loader ...>
<value>/path/to/ovmf</value>
</loader>
We introduce some generic domcaps infrastructure for handling
a dynamic list of string values; it may be of use for future bits.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Up till now virQEMUCapsFillDomainCaps() returned void as
there was no way for it to fail. This is, however, going to
change in the next commit.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Commit b606bbb4 broke reporting of errors when setting the guest time
via the guest agent fails, as the return value is not checked and is
later overwritten by the return value of qemuMonitorRTCResetReinjection().
Fix this by checking the return value before resetting the RTC
reinjection.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1142294
Modify qemuProcessStart() in order to allow setting affinity to
specific CPUs for IOThreads. The process followed is similar to
that for the vCPUs.
This involves adding a function to fetch the IOThread IDs via
qemuMonitorGetIOThreads() and adding them to the iothreadpids[] list,
then making sure all the cgroup data has been properly set up and
finally assigning affinity.
We stupidly modeled block job bandwidth after migration
bandwidth, which in turn was an 'unsigned long' and therefore
subject to 32-bit vs. 64-bit interpretations. To work around
the fact that 10-gigabit interfaces are possible but don't fit
within 32 bits, the original interface took the number scaled
as MiB/sec. But this scaling is rather coarse, and it might
be nice to tune bandwidth finer than in megabyte chunks.
Several of the block job calls that can set speed are fed
through a common interface, so it was easier to adjust them all
at once. Note that there is intentionally no flag for the new
virDomainBlockCopy; there, since the API already uses a 64-bit
type always, instead of a possible 32-bit type, and is brand
new, it was easier to just avoid scaling issues. As with the
previous patch that adjusted the query side (commit db33cc24),
omitting the new flag preserves old behavior, and the
documentation now mentions limits of what happens when a 32-bit
machine is on either client or server side.
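From the API side, the new flags are used like this sketch (disk name and
rate are illustrative only):
#include <libvirt/libvirt.h>
/* sketch: cap an active block job on "vda" at 10 MB/s, expressed in bytes/s */
static int
limitJob(virDomainPtr dom)
{
    return virDomainBlockJobSetSpeed(dom, "vda", 10 * 1000 * 1000,
                                     VIR_DOMAIN_BLOCK_JOB_SPEED_BANDWIDTH_BYTES);
}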
* include/libvirt/libvirt.h.in (virDomainBlockJobSetSpeedFlags)
(virDomainBlockPullFlags)
(VIR_DOMAIN_BLOCK_REBASE_BANDWIDTH_BYTES)
(VIR_DOMAIN_BLOCK_COMMIT_BANDWIDTH_BYTES): New enums.
* src/libvirt.c (virDomainBlockJobSetSpeed, virDomainBlockPull)
(virDomainBlockRebase, virDomainBlockCommit): Document them.
* src/qemu/qemu_driver.c (qemuDomainBlockJobSetSpeed)
(qemuDomainBlockPull, qemuDomainBlockRebase)
(qemuDomainBlockCommit, qemuDomainBlockJobImpl): Support new flag.
Signed-off-by: Eric Blake <eblake@redhat.com>
Upstream qemu 1.4 added some drive-mirror tunables not present
when it was first introduced in 1.3. Management apps may want
to set these in some cases (for example, without tuning
granularity down to sector size, a copy may end up occupying
more bytes than the original because an entire cluster is
copied even when only a sector within the cluster is dirty,
although tuning it down results in more CPU time to do the
copy). I haven't personally needed to use the parameters, but
since they exist, and since the new API supports virTypedParams,
we might as well expose them.
Since the tuning parameters aren't often used, and omitted from
the QMP command when unspecified, I think it is safe to rely on
qemu 1.3 to issue an error about them being unsupported, rather
than trying to create a new capability bit in libvirt.
Meanwhile, all versions of qemu from 1.4 to 2.1 have a bug where
a bad granularity (such as non-power-of-2) gives a poor message:
error: internal error: unable to execute QEMU command 'drive-mirror': Invalid parameter 'drive-virtio-disk0'
because of abuse of QERR_INVALID_PARAMETER (which is supposed to
name the parameter that was given a bad value, rather than the
value passed to some other parameter). I don't see that a
capability check will help, so we'll just live with it (and it
has since been improved in upstream qemu).
* src/qemu/qemu_monitor.h (qemuMonitorDriveMirror): Add
parameters.
* src/qemu/qemu_monitor.c (qemuMonitorDriveMirror): Likewise.
* src/qemu/qemu_monitor_json.h (qemuMonitorJSONDriveMirror):
Likewise.
* src/qemu/qemu_monitor_json.c (qemuMonitorJSONDriveMirror):
Likewise.
* src/qemu/qemu_driver.c (qemuDomainBlockCopyCommon): Likewise.
(qemuDomainBlockRebase, qemuDomainBlockCopy): Adjust callers.
* src/qemu/qemu_migration.c (qemuMigrationDriveMirror): Likewise.
* tests/qemumonitorjsontest.c (qemuMonitorJSONDriveMirror): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
The hard part of managing the disk copy is already coded; all
this had to do was convert the XML and virTypedParameters into
the internal representation.
With this patch, all blockcopy operations that used the old
API should also work via the new API. Additional extensions,
such as supporting the granularity tunable or a network rather
than file destination, will be added as later patches.
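A minimal sketch of driving the new API (destination XML, disk name and
bandwidth value are illustrative only):
#include <libvirt/libvirt.h>
/* sketch: start a block copy of "vda" to a raw file, capped at 10 MB/s */
static int
startCopy(virDomainPtr dom)
{
    const char *destxml =
        "<disk type='file'>"
        "<source file='/var/lib/libvirt/images/copy.img'/>"
        "<driver type='raw'/>"
        "</disk>";
    virTypedParameterPtr params = NULL;
    int nparams = 0;
    int maxparams = 0;
    int ret;
    if (virTypedParamsAddULLong(&params, &nparams, &maxparams,
                                VIR_DOMAIN_BLOCK_COPY_BANDWIDTH,
                                10 * 1000 * 1000) < 0)
        return -1;
    ret = virDomainBlockCopy(dom, "vda", destxml, params, nparams, 0);
    virTypedParamsFree(params, nparams);
    return ret;
}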
* src/qemu/qemu_driver.c (qemuDomainBlockCopy): New function.
Signed-off-by: Eric Blake <eblake@redhat.com>
In order to implement the new virDomainBlockCopy, the existing
block copy internal implementation needs to be adjusted. The
new function will parse XML into a storage source, and parse
typed parameters into integers, then call into the same common
backend. For now, it's easier to keep the same implementation
limits that only local file destinations are supported, but now
the check needs to be explicit. Similar to qemuDomainBlockJobImpl
consuming 'vm', this code also consumes the caller's 'mirror'
description of the destination.
* src/qemu/qemu_driver.c (qemuDomainBlockCopy): Rename...
(qemuDomainBlockCopyCommon): ...and adjust parameters.
(qemuDomainBlockRebase): Adjust caller.
Signed-off-by: Eric Blake <eblake@redhat.com>
When a domain is undefined, there are options to remove its
managed save state or snapshots. However, there's another file
that libvirt creates per domain: the NVRAM variable store file.
Make sure that the file is not left behind if the domain is
undefined.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Test suites using the port allocator don't want to have different
behaviour depending on whether a port is in use on the host. Add
a VIR_PORT_ALLOCATOR_SKIP_BIND_CHECK which test suites can use
to skip the bind() test. The port allocator will thus only track
ports in use by the test suite process itself. This is fine when
using the port allocator to generate guest configs which won't
actually be launched.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Coverity notes that if the virConnectListAllDomains returns a negative
value then the loop at the cleanup label that ends on numDomains will
have issues.
Signed-off-by: John Ferlan <jferlan@redhat.com>
If we jump to cleanup before allocating 'result', then the call
to virBlkioDeviceArrayClear will dereference result, causing a problem.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Coverity complains about checking for !domlist after setting doms = domlist
and dereferencing doms just above.
It seems the call in question was intended to be made in the case that
'doms' was passed in and not when the virDomainObjListExport() call
allocated domlist and already called virConnectGetAllDomainStatsCheckACL().
Thus rather than checking for !domlist, check that "doms != domlist" in
order to avoid the Coverity message.
Signed-off-by: John Ferlan <jferlan@redhat.com>
In qemuDomainSetBlkioParameters(), Coverity points out that the calls
to qemuDomainParseBlkioDeviceStr() are slightly different and suggests
there may be a cut-n-paste error.
In the first call (AFFECT_LIVE), the second parameter is "param->field";
however, for the second call (AFFECT_CONFIG), the second parameter is
"params->field". It seems "param->field" is correct, especially since
each path has a setting of "param" to "&params[i]". Furthermore, there
were a few more instances of using "params[i]" instead of "param->"
which I cleaned up.
Signed-off-by: John Ferlan <jferlan@redhat.com>
virDomainGetJobStats gains new VIR_DOMAIN_JOB_STATS_COMPLETED flag that
can be used to fetch statistics of a completed job rather than a
currently running job.
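Usage from the client side looks roughly like this sketch:
#include <libvirt/libvirt.h>
/* sketch: fetch statistics of the most recently completed job (e.g. migration) */
static int
getCompletedJobStats(virDomainPtr dom)
{
    int type;
    virTypedParameterPtr params = NULL;
    int nparams = 0;
    if (virDomainGetJobStats(dom, &type, &params, &nparams,
                             VIR_DOMAIN_JOB_STATS_COMPLETED) < 0)
        return -1;
    /* inspect params[0..nparams-1] here */
    virTypedParamsFree(params, nparams);
    return 0;
}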
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Job statistics data were tracked in several structures and variables.
Let's make a new qemuDomainJobInfo structure which can be used as a
single source of statistics data as a preparation for storing data about
a completed job.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Return failure right away when the domain object can't be looked up
instead of jumping to cleanup. This allows removing the condition
before unlocking the domain object.
The code would look up the snapshot object before acquiring the job. This
could lead to a crash as one thread could delete the snapshot object
while a second thread already had a reference to it.
Signed-off-by: Jincheng Miao <jmiao@redhat.com>
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Creating snapshots modifies the domain state. Currently we wouldn't
enter the job for certain operations although they would modify the
state. Refactor job handling so that everything is covered by an async
job.
To date, anyone performing a block copy and pivot ends up with
the destination being treated as <disk type='file'>. While this
works for data access for a block device, it has at least one
noticeable shortcoming: virDomainGetBlockInfo() reports allocation
differently for block devices visited as files (the size of the
device) than for block devices visited as <disk type='block'>
(the maximum sector used, as reported by qemu); and this difference
is significant when trying to manage qcow2 format on block devices
that can be grown as needed.
Of course, the more powerful virDomainBlockCopy() API can already
express the ability to set the <disk> type. But a new API can't
be backported, while a new flag to an existing API can; and it is
also rather inconvenient to have to resort to the full power of
generating XML when just adding a flag to the older call will do
the trick. So this patch enhances blockcopy to let the user flag
when the resulting XML after the copy must list the device as
type='block'.
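From the API, the new flag combines with the existing copy flags roughly
as follows (a sketch; the device path is illustrative):
#include <libvirt/libvirt.h>
/* sketch: shallow-copy "vda" onto a block device and record it as type='block' */
static int
copyToBlockDev(virDomainPtr dom)
{
    unsigned int flags = VIR_DOMAIN_BLOCK_REBASE_COPY |
                         VIR_DOMAIN_BLOCK_REBASE_SHALLOW |
                         VIR_DOMAIN_BLOCK_REBASE_COPY_DEV;
    return virDomainBlockRebase(dom, "vda", "/dev/sdb", 0, flags);
}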
* include/libvirt/libvirt.h.in (VIR_DOMAIN_BLOCK_REBASE_COPY_DEV):
New flag.
* src/libvirt.c (virDomainBlockRebase): Document it.
* tools/virsh-domain.c (opts_block_copy, blockJobImpl): Add
--blockdev option.
* tools/virsh.pod (blockcopy): Document it.
* src/qemu/qemu_driver.c (qemuDomainBlockRebase): Allow new flag.
(qemuDomainBlockCopy): Remember the flag, and make sure it is only
used on actual block devices.
* tests/qemuxml2argvdata/qemuxml2argv-disk-mirror.xml: Test it.
Signed-off-by: Eric Blake <eblake@redhat.com>
While reviewing the new virDomainBlockCopy API, Peter Krempa
pointed out that our existing design of using MiB/s for block
job bandwidth is rather coarse, especially since qemu tracks
it in bytes/s; so virDomainBlockCopy only accepts bytes/s.
But once the new API is implemented for qemu, we will be in
the situation where it is possible to set a value that cannot
be accurately reflected back to the user, because the existing
virDomainGetBlockJobInfo defaults to the coarser units.
Fortunately, we have an escape hatch; and one that has already
served us well in the past: we can use the flags argument to
specify which scale to use (see virDomainBlockResize for prior
art). This patch fixes the query side of the API; made easier
by previous patches that split the query side out from the
modification code. Later patches will address the virsh
interface, as well retrofitting all other blockjob APIs to
also accept a flag for toggling bandwidth units.
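On the query side the new flag is used like this sketch:
#include <libvirt/libvirt.h>
/* sketch: report the bandwidth limit of the job on "vda" in bytes/s */
static long long
queryJobBandwidth(virDomainPtr dom)
{
    virDomainBlockJobInfo info;
    if (virDomainGetBlockJobInfo(dom, "vda", &info,
                                 VIR_DOMAIN_BLOCK_JOB_INFO_BANDWIDTH_BYTES) <= 0)
        return -1;
    return (long long)info.bandwidth;
}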
* include/libvirt/libvirt.h.in (_virDomainBlockJobInfo)
(VIR_DOMAIN_BLOCK_COPY_BANDWIDTH): Document sizing issues.
(virDomainBlockJobInfoFlags): New enum.
* src/libvirt.c (virDomainGetBlockJobInfo): Document new flag.
* src/qemu/qemu_monitor.h (qemuMonitorBlockJobInfo): Add parameter.
* src/qemu/qemu_monitor.c (qemuMonitorBlockJobInfo): Likewise.
* src/qemu/qemu_monitor_json.h (qemuMonitorJSONBlockJobInfo):
Likewise.
* src/qemu/qemu_monitor_json.c (qemuMonitorJSONBlockJobInfo)
(qemuMonitorJSONGetBlockJobInfoOne): Likewise. Don't scale here.
* src/qemu/qemu_migration.c (qemuMigrationDriveMirror): Update
callers.
* src/qemu/qemu_driver.c (qemuDomainBlockPivot)
(qemuDomainBlockJobImpl): Likewise.
(qemuDomainGetBlockJobInfo): Likewise, and support new flag.
Signed-off-by: Eric Blake <eblake@redhat.com>
qemu treats blockjob bandwidth as a 64-bit number, in the units
of bytes/second. But we stupidly modeled block job bandwidth
after migration bandwidth, which in turn was an 'unsigned long'
and therefore subject to 32-bit vs. 64-bit interpretations, and
with a scale of MiB/s. Our code already has to convert between
the two scales, and report overflow as appropriate; although
this conversion currently lives in the monitor code. In fact,
our conversion code limited things to 63 bits, because we
checked against LLONG_MAX and reject what would be negative
bandwidth if treated as signed.
On the bright side, our use of MiB/s means that even with a
32-bit unsigned long, we still have no problem representing a
bandwidth of 2GiB/s, which is starting to be more feasible as
10-gigabit or even faster interfaces are used. And once you
get past the physical speeds of existing interfaces, any larger
bandwidth number behaves the same - effectively unlimited.
But on the low side, the granularity of 1MiB/s tuning is rather
coarse. So the new virDomainBlockJob API decided to go with
a direct 64-bit bytes/sec number instead of the scaled number
that prior blockjob APIs had used. But there is no point in
rounding this number to MiB/s just to scale it back to bytes/s
for handing to qemu.
In order to make future code sharing possible between the old
virDomainBlockRebase and the new virDomainBlockCopy, this patch
moves the scaling and overflow detection into the driver code.
Several of the block job calls that can set speed are fed
through a common interface, so it was easier to adjust all block
jobs at once, for consistency. This patch is just code motion;
there should be no user-visible change in behavior.
* src/qemu/qemu_monitor.h (qemuMonitorBlockJob)
(qemuMonitorBlockCommit, qemuMonitorDriveMirror): Change
parameter type and scale.
* src/qemu/qemu_monitor.c (qemuMonitorBlockJob)
(qemuMonitorBlockCommit, qemuMonitorDriveMirror): Move scaling
and overflow detection...
* src/qemu/qemu_driver.c (qemuDomainBlockJobImpl)
(qemuDomainBlockRebase, qemuDomainBlockCommit): ...here.
(qemuDomainBlockCopy): Use bytes/sec.
Signed-off-by: Eric Blake <eblake@redhat.com>
Another layer of overly-multiplexed code that deserves to be
split into obviously separate paths for query vs. modify.
This continues the cleanup started in commit cefe0ba.
In the process, make some tweaks to simplify the logic when
parsing the JSON reply. There should be no user-visible
semantic changes.
* src/qemu/qemu_monitor.h (qemuMonitorBlockJob): Drop parameter.
(qemuMonitorBlockJobInfo): New prototype.
(BLOCK_JOB_INFO): Drop enum.
* src/qemu/qemu_monitor_json.h (qemuMonitorJSONBlockJob)
(qemuMonitorJSONBlockJobInfo): Likewise.
* src/qemu/qemu_monitor.c (qemuMonitorBlockJob): Split...
(qemuMonitorBlockJobInfo): ...into second function.
* src/qemu/qemu_monitor_json.c (qemuMonitorJSONBlockJob): Move
block info portions...
(qemuMonitorJSONGetBlockJobInfo): ...here, and rename...
(qemuMonitorJSONBlockJobInfo): ...and export.
(qemuMonitorJSONGetBlockJobInfoOne): Alter return semantics.
* src/qemu/qemu_driver.c (qemuDomainBlockPivot)
(qemuDomainBlockJobImpl, qemuDomainGetBlockJobInfo): Adjust
callers.
* src/qemu/qemu_migration.c (qemuMigrationDriveMirror)
(qemuMigrationCancelDriveMirror): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
The qemu implementation for virDomainGetBlockJobInfo() has a
minor bug: it grabs the qemu job with intent to QEMU_JOB_MODIFY,
which means it cannot be run in parallel with any other
domain-modifying command. Among others, virDomainBlockJobAbort()
is such a modifying command, and it defaults to being
synchronous, and can wait as long as several seconds to ensure
that the job has actually finished. Due to the job rules, this
means a user cannot obtain status about the job during that
timeframe, even though we know that some client management code
exists which is using a polling loop on status to see when a job
finishes.
This bug has been present ever since blockpull support was first
introduced (commit b976165, v0.9.4 in Jul 2011), all because we
stupidly tried to cram too much multiplexing through a single
helper routine, but was made worse in 97c59b9 (v1.2.7) when
BlockJobAbort was fixed to wait longer. It's time to disentangle
some of the mess in qemuDomainBlockJobImpl, and in the process
relax block job query to use QEMU_JOB_QUERY, since it can safely
be used in parallel with any long running modify command.
Technically, there is one case where getting block job info can
modify domain XML - we do snooping to see if a 2-phase job has
transitioned into the second phase, for an optimization in the
case of old qemu that lacked an event for the transition. I
claim this optimization is safe (the jobs are all about modifying
qemu state, not necessarily xml state); but if it proves to be
a problem, we could use the difference between the capabilities
QEMU_CAPS_BLOCKJOB_{ASYNC,SYNC} to determine whether we even
need snooping, and only request a modifying job in the case of
older qemu.
* src/qemu/qemu_driver.c (qemuDomainBlockJobImpl): Move info
handling...
(qemuDomainGetBlockJobInfo): ...here, and relax job type.
(qemuDomainBlockJobAbort, qemuDomainBlockJobSetSpeed)
(qemuDomainBlockRebase, qemuDomainBlockPull): Adjust callers.
Signed-off-by: Eric Blake <eblake@redhat.com>
The existing virDomainBlockRebase code rejected the combination of
_RELATIVE and _COPY flags, but only by accident. It makes sense
to add support for the combination someday, at least for the case
of _SHALLOW and not _REUSE_EXT; but to implement it, libvirt would
have to pre-create the file with a relative backing name, and I'm
not ready to code that in yet.
Meanwhile, the code to forward on to the block copy code is getting
longer, and reorganizing the function to have the block pull done
early makes it easier to add even more block copy prep code.
This patch should have no semantic difference other than the quality
of the error message on the unsupported flag combination. Pre-patch:
error: unsupported flags (0x10) in function qemuDomainBlockCopy
Post-patch:
error: argument unsupported: Relative backing during copy not supported yet
* src/qemu/qemu_driver.c (qemuDomainBlockRebase): Reorder code,
and improve error message of relative copy.
Signed-off-by: Eric Blake <eblake@redhat.com>
In qemuDomainSnapshotCreateDiskActive() if we jumped to cleanup from a
failed actions = virJSONValueNewArray(), then 'cfg' would be NULL.
So just return -1, which in turn removes the need for the cleanup label.
Implement the API function for virDomainListGetStats and
virConnectGetAllDomainStats in a modular way and implement the
VIR_DOMAIN_STATS_STATE group of statistics.
Although the function may look universal, I'd rather not
expose it to other drivers, as the coming stats groups are likely to do
qemu-specific stuff to obtain the stats.
qemuDomainRevertToSnapshot() checks snap->def->state.
But when the state is PMSUSPENDED/NOSTATE/BLOCKED, it forgets to
call qemuDomainObjEndJob.
https://bugzilla.redhat.com/show_bug.cgi?id=1134154
Bug introduced in commit 1e833899.
Signed-off-by: Jincheng Miao <jmiao@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Let's fix this before we bake in a painful API. Since we know
that we have exactly one non-negative fd on success, we might
as well return the fd directly instead of forcing the user to
pass in a pointer. Furthermore, I found some memory and fd
leaks while reviewing the code - the idea is that on success,
libvirtd will have handed two fds in two different directions:
one to qemu, and one to the RPC client.
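With the adjusted interface a caller ends up with something like this sketch:
#include <unistd.h>
#include <libvirt/libvirt.h>
/* sketch: obtain a connected socket to the domain's first graphics device */
static int
openGraphics(virDomainPtr dom)
{
    int fd = virDomainOpenGraphicsFD(dom, 0, 0);
    if (fd < 0)
        return -1;      /* error already reported by libvirt */
    /* hand 'fd' to a VNC/SPICE client here */
    close(fd);
    return 0;
}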
* include/libvirt/libvirt.h.in (virDomainOpenGraphicsFD): Drop
unneeded parameter.
* src/driver.h (virDrvDomainOpenGraphicsFD): Likewise.
* src/libvirt.c (virDomainOpenGraphicsFD): Adjust interface to
return fd directly.
* daemon/remote.c (remoteDispatchDomainOpenGraphicsFd): Adjust
semantics.
* src/qemu/qemu_driver.c (qemuDomainOpenGraphicsFD): Likewise,
and plug fd leak.
* src/remote/remote_driver.c (remoteDomainOpenGraphicsFD):
Likewise, and plug memory and fd leak.
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit b606bbb41 reminded me that any time we drop locks to run
back-to-back guest interaction commands, we have to check that
the guest didn't disappear in between the two commands. A quick
audit found a couple of spots that were missing this check.
* src/qemu/qemu_driver.c (qemuDomainShutdownFlags)
(qemuDomainSetVcpusFlags): Check that domain is still up.
Signed-off-by: Eric Blake <eblake@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1078126
Using 'virsh attach-device --config' (or --persistent) to attach a
file backed lun device will succeed; however, subsequent domain restarts
will result in failure because the configuration of a file backed lun
is not supported.
Although allowing 'illegal configurations' is something that could be
permitted, it may not be practical in this case. Generally, attaching
a device to a domain means the domain must be running. A way around
this is using the --config (or --persistent) option. When an attach
is done to a running domain, a temporary configuration is modified
first, followed by the live update. The live update will make a number
of disk validity checks when building the qemu command to attach the
disk. If any fail, then the change is rejected.
Rather than allow a potentially illegal combination, adjust the code
in the configuration path to make the same checks as the running path
will make with respect to disk validity checks. This way we avoid
having the potential for some subsequent start/reboot to fail because
an illegal combination was allowed.
NB: The live path still checks the configuration since it is possible
to just do --live guest modification...
https://bugzilla.redhat.com/show_bug.cgi?id=1103245
Advice appeared on the qemu-devel list [1]. When a domain is
suspended and then resumed, the guest kernel is not aware of this. So we've
introduced virDomainSetTime API that resets the time within guest
using qemu-ga. On the other hand, qemu itself is trying to make RTC
beat faster to catch the difference. But if we don't tell qemu that
guest's time was reset via the other method, both mechanisms are
applied resulting in again wrong guest time. In order to avoid summing
both corrections we need to tell qemu that it should not use the RTC
injection if the guest time is set via guest agent.
1: http://www.mail-archive.com/qemu-devel@nongnu.org/msg236435.html
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When a user would try changing the persistent IO tuning settings for a
disk that was hotplugged to a vm in a transient way, the
qemuDomainSetBlockIoTune API would use the same index for both the
live and config disk array. The disk was missing from the config array
though causing a crash of libvirtd.
To fix the issue, determine the indexes separately.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1131819
Pass the source of the changed media instead of a complete disk
definition.
Note that the @disk argument now contains what @olddisk would contain.
The new source is passed as a virStorageSource struct.
When we are changing media (or doing other hotplug operations) we need
to setup cgroups, locking and seclabels on the new disk. This is a
multi-step process where every piece can fail. To simplify dealing with
this, introduce qemuDomainPrepareDisk, which, similarly to
qemuDomainPrepareDiskChainElement, initializes/tears down a whole new
disk to be used with the domain.
Additionally the function supports passing a different source struct for
media changes of cdroms that will be refactored later.
Currently, qemu driver uses qemuTranslateDiskSourcePool()
to translate disk volume information. This function is
general enough and could be used for other drivers as well,
so move it to conf/domain_conf.c along with its helpers.
- qemuTranslateDiskSourcePool: move to storage/storage_driver.c
and rename to virStorageTranslateDiskSourcePool,
- qemuAddISCSIPoolSourceHost: move to storage/storage_driver.c
and rename to virStorageAddISCSIPoolSourceHost,
- qemuTranslateDiskSourcePoolAuth: move to storage/storage_driver.c
and rename to virStorageTranslateDiskSourcePoolAuth,
- Update users of qemuTranslateDiskSourcePool to use a
new name.
In commit 45ad1adb I added a nicer message for tunings that need
cgroups when those are unavailable (unprivileged), but I added this
check for I/O tuning of block devices as well, which doesn't need
cgroups because it is done by QEMU, so let's fix that.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Remove the pinning info when removing the CPU, otherwise when the VM
is started our code will try to pin non-existing vcpus as the definition
wasn't updated.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1129372
During a QEMU live migration several warning messages about job
handling could be written to syslog on the destination host:
"entering monitor without asking for a nested job is dangerous"
The messages are written because the job handling during migration
uses hard coded asyncJob values in several places that are incorrect.
This patch passes the required asyncJob value around and prevents
the warnings as well as any issues that the warnings may be referring
to.
https://bugzilla.redhat.com/show_bug.cgi?id=1130089
Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Saving a shutoff VM doesn't make sense and libvirtd crashes while
attempting to do that. Check that the domain is alive after entering
the save async job.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1129207
The virDomainSetInterfaceParameters implementation in qemu with
VIR_DOMAIN_AFFECT_CONFIG doesn't work as expected. When trying to
clear out the bandwidth settings for an interface, it has no
actual effect:
virsh # domiftune --config $domain $interface
inbound.average: 100
inbound.peak : 0
inbound.burst : 0
outbound.average: 10
outbound.peak : 0
outbound.burst : 0
virsh domiftune --config $domain $interface 0 0
virsh # domiftune --config $domain $interface
inbound.average: 100
inbound.peak : 0
inbound.burst : 0
outbound.average: 10
outbound.peak : 0
outbound.burst : 0
But according to virsh man page:
To clear inbound or outbound settings, use --inbound or
--outbound respectfully with average value of zero.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Commit febf84c2 tried to delay in-memory modification of the actual
domain disk structure until after the qemu event was received.
However, I missed that the code for block pivot had been temporarily
setting disk->src = disk->mirror prior to the qemu command, in order
to label the backing chain of a reused external blockcopy disk;
and calls into qemu while still in that state before finally undoing
things at the cleanup label. Since the qemu event handler then does:
virStorageSourceFree(disk->src);
disk->src = disk->mirror;
we have the sad race that a fast enough qemu event can cause a leak of
the original disk->src, as well as a use-after-free of the disk->mirror
contents, bad enough to crash libvirtd in some of my test runs, even
though the common case of the qemu event being much later won't trip
the race.
I'll go wear the brown paper bag of shame, for introducing a crasher
in between rc1 and rc2 of the freeze for 1.2.7 :( My only
consolation is that virDomainBlockJobAbort requires the domain:write
ACL, so it is not a CVE.
The valgrind report when the race occurs looks like:
==25612== Invalid read of size 4
==25612== at 0x50E7C90: virStorageSourceGetActualType (virstoragefile.c:1948)
==25612== by 0x209C0B18: qemuDomainDetermineDiskChain (qemu_domain.c:2473)
==25612== by 0x209D7F6A: qemuProcessHandleBlockJob (qemu_process.c:1087)
==25612== by 0x209F40C9: qemuMonitorEmitBlockJob (qemu_monitor.c:1357)
...
==25612== Address 0xe4b5610 is 0 bytes inside a block of size 200 free'd
==25612== at 0x4A07577: free (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==25612== by 0x50839E9: virFree (viralloc.c:582)
==25612== by 0x50E7E51: virStorageSourceFree (virstoragefile.c:2015)
==25612== by 0x209D7EFF: qemuProcessHandleBlockJob (qemu_process.c:1073)
==25612== by 0x209F40C9: qemuMonitorEmitBlockJob (qemu_monitor.c:1357)
* src/qemu/qemu_driver.c (qemuDomainBlockPivot): Don't corrupt
disk->src, and only label chain for blockcopy.
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit 232a31b munged job info to report 'active commit' instead of
'commit' when generating events, but forgot to also munge the polling
variant of the command.
* src/qemu/qemu_driver.c (qemuDomainBlockJobImpl): Adjust type as
needed.
Signed-off-by: Eric Blake <eblake@redhat.com>
Otherwise this beautiful error would be overwritten when
the function is called with a really high rate number:
2014-07-28 12:51:47.920+0000: 2304: error : virCommandWait:2399 :
internal error: Child process (/sbin/tc class add dev vnet0 parent 1:
classid 1:1 htb rate 4294968kbps) unexpected exit status 1: Illegal "rate"
Usage: ... qdisc add ... htb [default N] [r2q N]
default minor id of class to which unclassified packets are sent {0}
r2q DRR quantums are computed as rate in Bps/r2q {10}
debug string of 16 numbers each 0-3 {0}
... class add ... htb rate R1 [burst B1] [mpu B] [overhead O]
[prio P] [slot S] [pslot PS]
[ceil R2] [cburst B2] [mtu MTU] [quantum Q]
rate rate allocated to this class (class can still borrow)
burst max bytes burst which can be accumulated during idle period {computed}
mpu minimum packet size used in rate computations
overhead per-packet size overhead used in rate computations
linklay adapting to a linklayer e.g. atm
ceil definite upper class rate (no borrows) {rate}
cburst burst but for ceil {computed}
mtu max packet size we create rate map for {1600}
prio priority of leaf; lowe
https://bugzilla.redhat.com/show_bug.cgi?id=1043735
With this in place, I can (finally!) now do:
virsh blockcommit $dom vda --shallow --verbose --pivot
and watch qemu shorten the backing chain by one, followed by
libvirt automatically updating the dumpxml output, effectively
undoing the work of virsh snapshot-commit --no-metadata --disk-only.
Commit is SOOOO much faster than blockpull, when I'm still fairly
close in time to when the temporary qcow2 wrapper file was created
via a snapshot operation!
* src/qemu/qemu_driver.c (qemuDomainBlockCommit): Implement live
commit.
Signed-off-by: Eric Blake <eblake@redhat.com>
A future patch is going to wire up qemu active block commit jobs;
but as they have similar events and are canceled/pivoted in the
same way as block copy jobs, it is easiest to track all bookkeeping
for the commit job by reusing the <mirror> element. This patch
adds domain XML to track which job was responsible for creating a
mirroring situation, and adds a job='copy' attribute to all
existing uses of <mirror>. Along the way, it also massages the
qemu monitor backend to read the new field in order to generate
the correct type of libvirt job (even though it requires a
future patch to actually cause a qemu event that can be reported
as an active commit). It also prepares to update persistent XML
to match changes made to live XML when a copy completes.
* docs/schemas/domaincommon.rng: Enhance schema.
* docs/formatdomain.html.in: Document it.
* src/conf/domain_conf.h (_virDomainDiskDef): Add a field.
* src/conf/domain_conf.c (virDomainBlockJobType): String conversion.
(virDomainDiskDefParseXML): Parse job type.
(virDomainDiskDefFormat): Output job type.
* src/qemu/qemu_process.c (qemuProcessHandleBlockJob): Distinguish
active from regular commit.
* src/qemu/qemu_driver.c (qemuDomainBlockCopy): Set job type.
(qemuDomainBlockPivot, qemuDomainBlockJobImpl): Clean up job type
on completion.
* tests/qemuxml2xmloutdata/qemuxml2xmlout-disk-mirror-old.xml:
Update tests.
* tests/qemuxml2argvdata/qemuxml2argv-disk-mirror.xml: Likewise.
* tests/qemuxml2argvdata/qemuxml2argv-disk-active-commit.xml: New
file.
* tests/qemuxml2xmltest.c (mymain): Drive new test.
Signed-off-by: Eric Blake <eblake@redhat.com>
We were not directly saving the domain XML to file after starting
or finishing a blockcopy. Without the startup write, a libvirtd
restart in the middle of a copy job would forget that the job was
underway. Then at pivot, we were indirectly writing new XML in
reaction to events that occur as we stop and restart the guest CPUs.
But there was a race: since pivot is an async action, it is possible
that libvirtd is restarted before the pivot completes, so if XML
changes during the event, that change was not written. The original
blockcopy code cleared out the <mirror> element prior to restarting
the CPUs, but this is also a race, observed if a user does an async
pivot and a dumpxml before the event occurs. Furthermore, this race
will interfere with active commit in a future patch, because that
code will rely on the <mirror> element at the time of the qemu event
to determine whether to inform the user of a normal commit or an
active commit.
Fix things by saving state any time we modify live XML, while
delaying XML disk modifications until after the event completes. We
still need to teach libvirtd restarts to examine all existing
<mirror> elements to see if the job completed in the meantime (that
is, if libvirtd misses the event, the updated state still needs to be
updated in live XML), but that will be a later patch, in part because
we also need to start taking advantage of newer qemu's ability to
keep the job around after completion rather than the current usage
where the job disappears both on error and on success.
* src/qemu/qemu_driver.c (qemuDomainBlockCopy): Track XML change
on disk.
(qemuDomainBlockJobImpl, qemuDomainBlockPivot): Move job-end XML
rewrites...
* src/qemu/qemu_process.c (qemuProcessHandleBlockJob): ...here.
Signed-off-by: Eric Blake <eblake@redhat.com>
Doing a blockcopy operation across a libvirtd restart is not very
robust at the moment. In particular, we are clearing the <mirror>
element prior to telling qemu to finish the job. Also, thanks to the
ability to request async completion, the user can easily regain
control prior to qemu actually finishing the effort, and they should
be able to poll the domain XML to see if the job is still going.
A future patch will fix things to actually wait until qemu is done
before modifying the XML to reflect the job completion. But since
qemu issues identical BLOCK_JOB_COMPLETE events regardless of whether
the job was cancelled (kept the original disk) or completed (pivoted
to the new disk), we have to track which of the two operations were
used to end the job. Furthermore, we'd like to avoid attempts to
end a job where we are already waiting on an earlier request to qemu
to end the job. Likewise, if we miss the qemu event (perhaps because
it arrived during a libvirtd restart), we still need enough state
recorded to be able to determine how to modify the domain XML once
we reconnect to qemu and manually learn whether the job still exists.
Although this patch doesn't actually fix the problem, it is a
preliminary step that makes it possible to track whether a job
has already begun steps towards completion.
* src/conf/domain_conf.h (virDomainDiskMirrorState): New enum.
(_virDomainDiskDef): Convert bool mirroring to new enum.
* src/conf/domain_conf.c (virDomainDiskDefParseXML)
(virDomainDiskDefFormat): Handle new values.
* src/qemu/qemu_process.c (qemuProcessHandleBlockJob): Adjust
client.
* src/qemu/qemu_driver.c (qemuDomainBlockPivot)
(qemuDomainBlockJobImpl): Likewise.
* docs/schemas/domaincommon.rng (diskMirror): Expose new values.
* docs/formatdomain.html.in (elementsDisks): Document it.
* tests/qemuxml2argvdata/qemuxml2argv-disk-mirror.xml: Test it.
Signed-off-by: Eric Blake <eblake@redhat.com>
Use better detection of hugetlbfs mount points. Yes, there can be
multiple mount points, each serving a different huge page size.
Since we already have ability to override the mount point in the
qemu.conf file, this crazy backward compatibility code is brought in.
Now we allow multiple mount points, so the "hugetlbfs_mount" option
must take a list of strings (mount points). But previously, it was
just a string, so we must accept both types now.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Since 24e5cafba6 (thankfully unreleased),
when a VM with an empty disk drive was started the code would call
stat() on a NULL path, as a check was missing from the callback,
rendering such machines unstartable.
Report success when the path is empty (denoting an empty drive).
If the user hasn't provided any @emulatorbin, the qemuCaps are
looked up by the @arch provided (which in fact can be guessed from the
host). However, there's no guarantee that the qemu binary for
@arch will exist. Therefore the qemu capabilities may be nonexistent
too. If that's the case, we should report an error prior to
jumping to the 'cleanup' label, as the helper lookup function
remains silent when there is no search result.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
To integrate the security driver with the storage driver we need to
pass a callback for a function that will chown storage volumes.
Introduce and document the callback prototype.
Up to now, users have to pass at least two arguments: the domain virt
type ('qemu' vs 'kvm') and one of emulatorbin or architecture. This is
not very user friendly. Nowadays users mostly use KVM and share the
host architecture with the guest. So now, the API (and subsequently the
virsh command) can be called with all NULLs (without any arguments).
Before this patch:
# virsh domcapabilities
error: failed to get emulator capabilities
error: virttype_str in qemuConnectGetDomainCapabilities must not be NULL
# virsh domcapabilities kvm
error: failed to get emulator capabilities
error: invalid argument: at least one of emulatorbin or architecture fields must be present
After:
# virsh domcapabilities
<domainCapabilities>
  <path>/usr/bin/qemu-system-x86_64</path>
  <domain>kvm</domain>
  <machine>pc-i440fx-2.1</machine>
  <arch>x86_64</arch>
  <vcpu max='255'/>
</domainCapabilities>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This patch adds back the virDomainDef typedef into domain_conf and
makes all the numatune_conf functions independent of any virDomainDef
definitions.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1122205
Although the edits were changing in-memory XML, it was not flushed
to disk; so unless some other action changes XML, a libvirtd restart
would lose the changed information.
* src/conf/domain_conf.c (virDomainObjSetMetadata): Add parameter,
to save live status across restarts.
(virDomainSaveXML): Allow for test driver.
* src/conf/domain_conf.h (virDomainObjSetMetadata): Adjust
signature.
* src/bhyve/bhyve_driver.c (bhyveDomainSetMetadata): Adjust caller.
* src/lxc/lxc_driver.c (lxcDomainSetMetadata): Likewise.
* src/qemu/qemu_driver.c (qemuDomainSetMetadata): Likewise.
* src/test/test_driver.c (testDomainSetMetadata): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Before:
virsh # dominfo chx3
State: shut off
Max memory: 92160 KiB
Used memory: 92160 KiB
After:
virsh # dominfo container1
State: shut off
Max memory: 92160 KiB
Used memory: 0 KiB
Similar to the qemu case.
Signed-off-by: Chen Hanxiao <chenhanxiao@cn.fujitsu.com>
Convert the target snapshot state selector to a switch statement
enumerating all possible values. This points out a few mistakes in the
original selector.
The logic of the code is preserved until later patches.
As with local SCSI passthrough devices, qemu can't support snapshots
on those as the block ops are handled by the device. This is also true
for iSCSI backing of the disk. Remove the check for the local block
device and just forbid snapshots when the disk is of type 'lun'.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1073368
There were numerous places where numatune configuration (and thus
the domain config as well) was changed in different ways. In some
places this even resulted in the persistent domain definition not
being stable (it would change with the daemon's restart).
In order to uniformly change how numatune config is dealt with, all
the internals are now accessible directly only in numatune_conf.c and
outside this file accessors must be used.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Since there was already public virDomainNumatune*, I changed the
private virNumaTune to match the same, so all the uses are unified and
public API is kept:
s/vir\(Domain\)\?Numa[tT]une/virDomainNumatune/g
then shrunk long lines, and mainly functions, that were created after
that:
sed -i 's/virDomainNumatuneMemPlacementMode/virDomainNumatunePlacement/g'
And to cope with the enum name, I had to change the constants as
well:
s/VIR_NUMA_TUNE_MEM_PLACEMENT_MODE/VIR_DOMAIN_NUMATUNE_PLACEMENT/g
Last thing I did was at least a little shortening of already long
name:
s/virDomainNumatuneDef/virDomainNumatune/g
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1119173 documents that
commit eaba79d was flawed in the implementation of the
VIR_DOMAIN_BLOCK_JOB_ABORT_ASYNC flag when it comes to completing
a blockcopy. Basically, the qemu pivot action is async (the QMP
command returns immediately, but the user must wait for the
BLOCK_JOB_COMPLETE event to know that all I/O related to the job
has finally been flushed), but the libvirt command was documented
as synchronous by default. As active block commit will also be
using this code, it is worth fixing now.
* src/qemu/qemu_driver.c (qemuDomainBlockJobImpl): Don't skip wait
loop after pivot.
Signed-off-by: Eric Blake <eblake@redhat.com>
Rename linuxDomainInterfaceStats to virNetInterfaceStats in order
to allow adding platform-specific implementations without
making the consumer worry about which specific implementation is used.
Also, rename util/virstatslinux.c to util/virstats.c so that placing
other platform-specific implementations into this file doesn't
look unexpected given the file name.
4cc1f1a01f introduced a crash when doing a
block copy, as virStorageSourceInitChainElement was called on
"disk->mirror", which is still NULL at that point, instead of "mirror",
which temporarily holds the mirror source struct until it's fully
initialized. This resulted in a crash when the NULL was dereferenced.
Reported by: Shanzi Yu <shyu@redhat.com>
Commit id '3ea661de' refactored the code to use the 'disk->src->path'
instead of getting the path from virDomainDiskGetSource(). The one
call to qemuOpenFile() didn't use the disk source path, rather it used
the path as passed from the caller (in this case 'vda') - this caused
a failure with the virt-test/tp-libvirt as follows:
$ virsh domblkinfo virt-tests-vm1 vda
error: cannot stat file '/home/virt-test/shared/data/images/jeos-20-64.qcow2': Bad file descriptor
$
The default graphics channel mode is 'any', and so is the defaultMode
attribute. If both defaultMode and the channel mode are the default
value 'any', qemuConnectDomainXMLToNative will set TLSPort.
But in qemuBuildGraphicsSPICECommandLine, if spice_tls is not enabled, libvirtd
will report an error to tell the user that spice TLS is disabled in qemu.conf.
So qemuConnectDomainXMLToNative should check whether spice_tls is enabled
before deciding to allocate a tlsPort number for this graphics device.
If the user-specified defaultMode is 'secure', qemuConnectDomainXMLToNative
can still allocate a tlsPort, and then let qemuBuildGraphicsSPICECommandLine
report the spice_tls disabled error.
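A minimal sketch of the intended check (the surrounding code and the
tlsPort allocation are simplified here):

  /* only auto-allocate a TLS port when spice_tls is enabled in qemu.conf,
   * or when the user explicitly asked for a secure channel, in which case
   * qemuBuildGraphicsSPICECommandLine will report the error */
  if (cfg->spiceTLS ||
      graphics->data.spice.defaultMode ==
      VIR_DOMAIN_GRAPHICS_SPICE_CHANNEL_MODE_SECURE)
      graphics->data.spice.tlsPort = tlsPort++;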
The related bug is:
https://bugzilla.redhat.com/show_bug.cgi?id=1113868
Signed-off-by: Jincheng Miao <jmiao@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Now that cgroups/security driver/locking driver support labelling of
individual images and tolerate network storage we don't have to refrain
from passing all image files to them. This allows removing the checking
code as we already make sure that the snapshot function won't be called
with unsupported options.
Now that security, cgroup and locking APIs support working on individual
images and we track the backing chain security info on a per-image basis
we can finally kill swapping the disk source in virDomainDiskDef and use
the virStorageSource directly.
Until now we were changing information about the disk source via
multiple steps of copying data. Now that we use a pointer to
store the disk source we can use it to change how the data is
tracked.
Additionally this will allow proper tracking of the backing chain.
When pivoting to a new disk source after a block commit (and possibly
after a soon-to-be-added active block commit) we changed just a few
fields to the new target. In case we'd copy a network disk to a local
file we'd not change the type properly.
To avoid such problems, switch to tracking the source by replacing the
complete source struct with the one holding the mirroring info.
Use the source struct and the corresponding function so that we can
avoid using the path separately. Now that
qemuDomainPrepareDiskChainElementPath isn't used anywhere, we can safely
remove it.
Additionally, the removal fixes a misaligned comment as the removed
function was added under a comment for a different function.
In the future we might need to track state of individual images. Move
the readonly and shared flags to the virStorageSource struct so that we
can keep them on a per-image basis.
Now that we are able to select images from the backing chain via indexed
access we should also convert possible network sources to
qemu-compatible strings before passing them to qemu.
The qemu block info function relied on working with local storage. Break
this assumption by adding support for remote volumes. Unfortunately we
still need to take a hybrid approach as some of the operations require a
file descriptor.
Previously you'd get:
$ virsh domblkinfo gl vda
error: cannot stat file '/img10': Bad file descriptor
Now you get some stats:
$ virsh domblkinfo gl vda
Capacity: 10485760
Allocation: 197120
Physical: 197120
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1110198
For the regular dump operation we migrate the VM to a file. This won't
work when the VM has passthrough devices assigned. Rather than reporting
a cryptic error from qemu, run our own check of whether it can be migrated.
This does not influence the memory-only dump that is allowed with
passthrough devices.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=874418
To allow changing the name that is recorded in the top of the current
image chain used in a block pull/rebase operation, we need to specify
the backing name to qemu. This is done via the "backing-file" attribute
to the block-stream command.
To allow changing the name that is recorded in the overlay of the TOP
image used in a block commit operation, we need to specify the backing
name to qemu. This is done via the "backing-file" attribute to the
block-commit command.
Use the probing functionality added in the last patch to turn on
a capability bit when active commit is present, and gate active
commit on that capability.
For my own reference: the difference between BLOCKJOB_SYNC and
BLOCKJOB_ASYNC is whether qemu generated an event at the
conclusion of blockpull; basically, RHEL 6.2 was the only release
of qemu that has the sync semantics and lacks the event. RHEL
6.3 added blockcopy, but also picked up on the upstream style
of qemu generating events. As no one is likely to backport
active commit to RHEL 6.2, it's safe for blockcommit to always
require async blockjob support.
Modifying qemucapabilitiestest is painful; the .replies files would
be so much easier if they had comments correlating which command
generated the given reply. Maybe I'll fix that up later...
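As a sketch of how the gating described above looks from the driver
side (the error wording and jump label are illustrative):

  if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_ACTIVE_COMMIT)) {
      virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                     _("active commit not supported with this QEMU binary"));
      goto endjob;
  }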
* src/qemu/qemu_capabilities.h (QEMU_CAPS_ACTIVE_COMMIT): New
capability.
* src/qemu/qemu_driver.c (qemuDomainBlockCommit): Use the new bit
* src/qemu/qemu_capabilities.c (virQEMUCaps): Name the new bit.
(virQEMUCapsProbeQMPCommands): Set it.
* tests/qemucapabilitiesdata/caps_1.3.1-1.replies: Update.
* tests/qemucapabilitiesdata/caps_1.4.2-1.replies: Likewise.
* tests/qemucapabilitiesdata/caps_1.5.3-1.replies: Likewise.
* tests/qemucapabilitiesdata/caps_1.6.0-1.replies: Likewise.
* tests/qemucapabilitiesdata/caps_1.6.50-1.replies: Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
So far only information on disks and host devices is exposed in the
capabilities XML. Well, at least something. Even a new test is
introduced. The qemu capabilities are stolen from the already existing
qemucapabilities test. There's one tricky point though: functions that
check the host's KVM and VFIO capabilities are impossible to mock
currently. So in the test, we are setting the capabilities by hand.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Replace:
if (virBufferError(&buf)) {
virBufferFreeAndReset(&buf);
virReportOOMError();
...
}
with:
if (virBufferCheckError(&buf) < 0)
...
This should not be a functional change (unless some callers
misused the virBuffer APIs - a different error would be reported
then)
So far, we only report an error if formatting the siblings bitmap
in NUMA topology fails.
Be consistent and always report an error in virCapabilitiesFormatXML.
We have the following matrix of possible arguments handled by the logic
statement touched by this patch:
       | flags & _REUSE_EXT | !(flags & _REUSE_EXT)
-------+--------------------+----------------------
 format|        (1)         |          (2)
-------+--------------------+----------------------
!format|        (3)         |          (4)
-------+--------------------+----------------------
In cases 1 and 2 the user provided a format, in cases 3 and 4 not. The
user requests to use a pre-existing image in 1 and 3 and libvirt will
create a new image in 2 and 4.
The difference between cases 3 and 4 is that for 3 the format is probed
from the user-provided image, whereas in 4 we just use the existing disk
format.
The current code would treat cases 1, 3 and 4 correctly but in case 2 the
format provided by the user would be ignored.
The particular piece of code was broken in commit 35c7701c64
but since it was introduced a few commits before that it was never
released as working.
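A minimal sketch of the intended behaviour (variable and field names
are illustrative):

  if (format) {
      /* cases 1 and 2: always honour the format the user asked for */
      mirror->format = virStorageFileFormatTypeFromString(format);
  } else if (flags & VIR_DOMAIN_BLOCK_REBASE_REUSE_EXT) {
      /* case 3: probe the format from the pre-existing image */
      mirror->format = VIR_STORAGE_FILE_AUTO;
  } else {
      /* case 4: we create the image ourselves, reuse the current disk format */
      mirror->format = disk->src->format;
  }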
Commit 55bbb011b9 introduced a regression
where we forgot to save the persistent domain configuration after an
external snapshot. This would make libvirt forget the snapshots and
effectively revert to the previous state in the following scenario:
1) Start VM
2) Take snapshot
3) Destroy VM
4) Restart libvirtd
Also fix a spurious blank line added by the patch mentioned above.
When creating a new disk mirror the new struct is stored in a separate
variable until everything goes well. The removed hunk would actually
remove existing mirror information, for example when the API was run
while a mirror still existed.
I'm going to add functions that will deal with individual image files
rather than whole disks. Rename the security function to make room for
the new one.
The new VIR_CONNECT_COMPARE_CPU_FAIL_INCOMPATIBLE flag for
virConnectCompareCPU can be used to get an error
(VIR_ERR_CPU_INCOMPATIBLE) describing the incompatibility instead of the
usual VIR_CPU_COMPARE_INCOMPATIBLE return code.
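A client-side sketch of using the new flag (includes and full error
handling omitted):

  int rc = virConnectCompareCPU(conn, cpuXML,
                                VIR_CONNECT_COMPARE_CPU_FAIL_INCOMPATIBLE);
  if (rc == VIR_CPU_COMPARE_ERROR) {
      virErrorPtr err = virGetLastError();
      if (err && err->code == VIR_ERR_CPU_INCOMPATIBLE)
          fprintf(stderr, "CPU is incompatible: %s\n", err->message);
  }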
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When CPU comparison APIs return VIR_CPU_COMPARE_INCOMPATIBLE, the caller
has no clue why the CPU is considered incompatible with the host CPU. And in
some cases, it would be nice to be able to get such info in a client
rather than having to look in logs.
To achieve this, the APIs can be told to return VIR_ERR_CPU_INCOMPATIBLE
error for incompatible CPUs and the reason will be described in the
associated error message.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
As we are doing with the enum structures, a cleanup of the "src/qemu/"
directory was done now. All the enums that were defined in the
header files were converted to typedefs in this directory. This
patch includes all the adjustments needed to remove conflicts when
making this kind of change. Enum-to-typedef conversions were made in
"src/qemu/qemu_{capabilities, domain, migration, hotplug}.h".
Signed-off-by: Julio Faracco <jcfaracco@gmail.com>
I'm going to add functions that will deal with individual image files
rather than whole disks. Rename the security function to make room for
the new one.
For future work we want to get info on not only the free memory
but the overall memory size too. That's why the function must have
a new signature as well.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When the block job event was first added, it was for block pull,
where the active layer of the disk remains the same name. It was
also in a day when we only cared about local files, and so we
always had a canonical absolute file name. But two things have
changed since then: we now have network disks, where determining
a single absolute string does not really make sense; and we have
two-phase jobs (copy and active commit) where the name of the
active layer changes between the first event (ready, on the old
name) and second (complete, on the pivoted name).
Adam Litke reported that having an unstable string between events
makes life harder for clients. Furthermore, all of our APIs that
operate on a particular disk of a domain accept multiple strings:
not only the absolute name of the active layer, but also the
destination device name (such as 'vda'). As this latter name is
stable, even for network sources, it serves as a better string
to supply in block job events.
But backwards-compatibility demands that we should not change the
name handed to users unless they explicitly request it. Therefore,
this patch adds a new event, BLOCK_JOB_2 (alas, I couldn't think of
any nicer name - but at least Migrate2 and Migrate3 are precedent
for a number suffix). We must double up on emitting both old-style
and new-style events according to what clients have registered for
(see also how IOError and IOErrorReason emit double events, but
there the difference was a larger struct rather than changed
meaning of one of the struct members).
Unfortunately, adding a new event isn't something that can easily
be broken into pieces, so the commit is rather large.
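For clients, registering for the new event looks roughly like this
(the callback itself is omitted; its type is the one documented below):

  virConnectDomainEventRegisterAny(conn, dom,
                                   VIR_DOMAIN_EVENT_ID_BLOCK_JOB_2,
                                   VIR_DOMAIN_EVENT_CALLBACK(myBlockJobCb),
                                   NULL, NULL);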
* include/libvirt/libvirt.h.in (virDomainEventID): Add a new id
for VIR_DOMAIN_EVENT_ID_BLOCK_JOB_2.
(virConnectDomainEventBlockJobCallback): Document new semantics.
* src/conf/domain_event.c (_virDomainEventBlockJob): Rename field,
to ensure we catch all clients.
(virDomainEventBlockJobNew): Add parameter.
(virDomainEventBlockJobDispose)
(virDomainEventBlockJobNewFromObj)
(virDomainEventBlockJobNewFromDom)
(virDomainEventDispatchDefaultFunc): Adjust clients.
(virDomainEventBlockJob2NewFromObj)
(virDomainEventBlockJob2NewFromDom): New functions.
* src/conf/domain_event.h: Add new prototypes.
* src/libvirt_private.syms (domain_event.h): Export new functions.
* src/qemu/qemu_driver.c (qemuDomainBlockJobImpl): Generate two
different events.
* src/qemu/qemu_process.c (qemuProcessHandleBlockJob): Likewise.
* src/remote/remote_protocol.x
(remote_domain_event_block_job_2_msg): New struct.
(REMOTE_PROC_DOMAIN_EVENT_BLOCK_JOB_2): New RPC.
* src/remote/remote_driver.c
(remoteDomainBuildEventBlockJob2): New handler.
(remoteEvents): Register new event.
* daemon/remote.c (remoteRelayDomainEventBlockJob2): New handler.
(domainEventCallbacks): Register new event.
* tools/virsh-domain.c (vshEventCallbacks): Likewise.
(vshEventBlockJobPrint): Adjust client.
* src/remote_protocol-structs: Regenerate.
Signed-off-by: Eric Blake <eblake@redhat.com>
The block commit code looks for an explicit base file relative
to the discovered top file; so for a chain of:
base <- snap1 <- snap2 <- snap3
and a command of:
virsh blockcommit $dom vda --base snap2 --top snap1
we got a sane message (here from libvirt 1.0.5):
error: invalid argument: could not find base 'snap2' below 'snap1' in chain for 'vda'
Meanwhile, recent refactoring has slightly reduced the quality of the
libvirt error messages, by losing the phrase 'below xyz':
error: invalid argument: could not find image 'snap2' in chain for 'snap3'
But we had an off-by-one, where we were not excluding the top file
itself in searching for the base; thankfully qemu still reports
the error, but the quality is worse:
virsh blockcommit $dom vda --base snap2 --top snap2
error: internal error unable to execute QEMU command 'block-commit': Base '/snap2' not found
Fix the off-by-one in blockcommit by changing the semantics of name
lookup - if a starting point is specified, then the result must
be below that point, rather than including that point. The only
other call to chain lookup was blockpull code, which was already
forcing the lookup to omit the active layer and only needs a
tweak to use the new semantics.
This also fixes the bug exposed in the testsuite, where when doing
a lookup pinned to an intermediate point in the chain, we were
unable to return the name of the parent also in the chain.
* src/util/virstoragefile.c (virStorageFileChainLookup): Change
semantics for non-NULL startFrom.
* src/qemu/qemu_driver.c (qemuDomainBlockJobImpl): Adjust caller,
to keep existing semantics.
* tests/virstoragetest.c (mymain): Adjust to expose new semantics.
Signed-off-by: Eric Blake <eblake@redhat.com>
For block devices used as snapshot source the new snapshot code would
set the reuse flag. This prevents taking a snapshot without specially
preparing the block image before taking the snapshot.
Fortunately this is not a regression, as only the new way of specifying
the snapshot source is affected.
For the following snapshot XML:
<domainsnapshot>
  <disks>
    <disk name='vda' type='block'>
      <driver type='qcow2'/>
      <source dev="/dev/andariel/testsnap" />
    </disk>
  </disks>
</domainsnapshot>
You'd get:
error: internal error: unable to execute QEMU command 'transaction': Image is not in qcow2 format
After this patch the snapshot is created successfully.
A future patch will add two-phase block commit jobs; as the
mechanism for managing them is similar to managing a block copy
job, existing errors should be made generic enough to occur
for either job type.
* src/conf/domain_conf.c (virDomainHasDiskMirror): Update
comment.
* src/qemu/qemu_driver.c (qemuDomainDefineXML)
(qemuDomainSnapshotCreateXML, qemuDomainRevertToSnapshot)
(qemuDomainBlockJobImpl, qemuDomainBlockCopy): Update error
message.
* src/qemu/qemu_hotplug.c (qemuDomainDetachDiskDevice): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit f586965 accidentally changed the semantics of the
virDomainBlockCommit command; where it previously looked for
an explicit top argument from the top of the chain, it now
starts from the backing file of the top. Of course, until
we allow active commits, the only difference it makes is in
the quality of the error message, but with code for active
commit coming soon, we need to support an explicit mention
of the active layer.
* src/qemu/qemu_driver.c (qemuDomainBlockCommit): Start looking
from top of chain.
Signed-off-by: Eric Blake <eblake@redhat.com>
When saving a domain with relabel=no, the file that gets created must have
the context set anyway. That way restore can succeed without the need
to relabel the file.
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.vnet.ibm.com>
Now that qemu 2.0 allows commit of the active layer, people are
attempting to use virsh blockcommit and getting into a stuck
state, because libvirt is unprepared to handle the two-phase
commit required by qemu.
Stepping back a bit, there are two valid semantics for a
commit operation:
1. Maintain a 'golden' base, and a transient overlay. Make
changes in the overlay, and if everything appears to work,
commit those changes into the base, but still keep the overlay
for the next round of changes; repeat the cycle as desired.
2. Create an external snapshot, then back up the stable state
in the backing file. Once the backup is complete, commit the
overlay back into the base, and delete the temporary snapshot.
Since qemu doesn't know up front which of the two styles is
preferred, a block commit of the active layer merely gets
the job into a synchronized state, and sends an event; then
the user must either cancel (case 1) or complete (case 2),
where qemu then sends a second event that actually ends the
job. However, until commit e6bcbcd, libvirt was blindly
assuming the semantics that apply to a commit of an
intermediate image, where there is only one sane conclusion
(the job automatically ends with fewer elements in the chain);
and getting stuck because it wasn't prepared for qemu to enter
a second phase of the job.
This patch adds a flag to the libvirt API that a user MUST
supply in order to acknowledge that they will be using two-phase
semantics. It might be possible to have a mode where if the
flag is omitted, we automatically do the case 2 semantics on
the user's behalf; but before that happens, I must do additional
patches to track the fact that we are doing an active commit
in the domain XML. Later patches will add support for the flag,
and once 2-phase semantics are working, we can then decide
whether to relax things to allow an omitted flag to cause an
automatic pivot.
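At the API level the intended flow, once later patches wire up the
flag, is roughly (disk name chosen for illustration, error handling
omitted):

  /* start the two-phase commit of the active layer */
  virDomainBlockCommit(dom, "vda", NULL, NULL, 0,
                       VIR_DOMAIN_BLOCK_COMMIT_ACTIVE);
  /* ... wait for the ready event, then either ... */
  virDomainBlockJobAbort(dom, "vda", 0);                                /* case 1 */
  /* ... or ... */
  virDomainBlockJobAbort(dom, "vda", VIR_DOMAIN_BLOCK_JOB_ABORT_PIVOT); /* case 2 */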
* include/libvirt/libvirt.h.in (VIR_DOMAIN_BLOCK_COMMIT_ACTIVE)
(VIR_DOMAIN_BLOCK_JOB_TYPE_ACTIVE_COMMIT): New enums.
* src/libvirt.c (virDomainBlockCommit): Document two-phase job
when committing active layer, through new flag.
(virDomainBlockJobAbort): Document that pivot also occurs after
active commit.
* tools/virsh-domain.c (vshDomainBlockJob): Cover new job.
* src/qemu/qemu_driver.c (qemuDomainBlockCommit): Explicitly
reject active copy; later patches will add it in.
Signed-off-by: Eric Blake <eblake@redhat.com>
The current implementation of 'virsh blockcopy' (virDomainBlockRebase)
is limited to copying to a local file name. But future patches want
to extend it to also copy to network disks. This patch converts over
to a virStorageSourcePtr, although it should have no semantic change
visible to the user, in anticipation of those future patches being
able to use more fields for non-file destinations.
* src/conf/domain_conf.h (_virDomainDiskDef): Change type of
mirror information.
* src/conf/domain_conf.c (virDomainDiskDefParseXML): Localize
mirror parsing into new object.
(virDomainDiskDefFormat): Adjust clients.
* src/qemu/qemu_domain.c (qemuDomainDeviceDefPostParse):
Likewise.
* src/qemu/qemu_driver.c (qemuDomainBlockPivot)
(qemuDomainBlockJobImpl, qemuDomainBlockCopy): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
As part of the work on backing chains, I'm finding that it would
be easier to directly manipulate chains of pointers (adding a
snapshot merely adjusts pointers to form the correct list) rather
than copy data from one struct to another. This patch converts
domain disk source to be a pointer.
In this patch, the pointer is ALWAYS allocated (thanks in part to
the previous patch forwarding all disk def allocation through a
common point), and all other changes are just mechanical fallout of
the new type; there should be no functional change. It is possible
that we may want to leave the pointer NULL for a cdrom with no
medium in a later patch, but as that requires a closer audit of the
source to ensure we don't fault on a null dereference, I didn't do
it here.
* src/conf/domain_conf.h (_virDomainDiskDef): Change type of src.
* src/conf/domain_conf.c: Adjust all clients.
* src/security/security_selinux.c: Likewise.
* src/qemu/qemu_domain.c: Likewise.
* src/qemu/qemu_command.c: Likewise.
* src/qemu/qemu_conf.c: Likewise.
* src/qemu/qemu_process.c: Likewise.
* src/qemu/qemu_migration.c: Likewise.
* src/qemu/qemu_driver.c: Likewise.
* src/lxc/lxc_driver.c: Likewise.
* src/lxc/lxc_controller.c: Likewise.
* tests/securityselinuxlabeltest.c: Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
As part of the work on backing chains, I'm finding that it would
be easier to directly manipulate chains of pointers (adding a
snapshot merely adjusts pointers to form the correct list) rather
than copy data from one struct to another. This patch converts
snapshot source to be a pointer.
In this patch, the pointer is ALWAYS allocated (any code that
increases ndisks now also allocates a source pointer for each
new disk), and all other changes are just mechanical fallout of
the new type; there should be no functional change. It is
possible that we may want to leave the pointer NULL for internal
snapshots in a later patch, but as that requires a closer audit
of the source to ensure we don't fault on a null dereference, I
didn't do it here.
* src/conf/snapshot_conf.h (_virDomainSnapshotDiskDef): Change
type of src.
* src/conf/snapshot_conf.c: Adjust all clients.
* src/qemu/qemu_conf.c: Likewise.
* src/qemu/qemu_driver.c: Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
This simplifies the usage in {libxl,qemu}DomainGetNumaParameters
and it's needed for consistent error reporting in virBitmapFormat.
Also remove the forgotten ATTRIBUTE_NONNULL marker.
Currently, we do not acquire any job when removing a device after
DEVICE_DELETED event was received from QEMU. This means that if there is
another API running at the time DEVICE_DELETED is delivered and the API
acquired a job, we may happily change the definition of the domain the
API is working with whenever it unlocks the domain object (e.g., to talk
with its monitor). That said, we have to acquire a job before finishing
device removal to make things safe. However, doing so in the main event
loop would cause a deadlock so we need to move most of the event handler
into a separate thread.
Another good reason for both acquiring a job and handling the event in a
separate thread is that we currently remove a device backend immediately
after removing its frontend while we should only remove the backend once
we already received DEVICE_DELETED event. That is, we will have to talk
to QEMU monitor from the event handler.
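A rough sketch of the shape this takes (the flow is simplified and the
function name is only illustrative):

  static void
  processDeviceDeletedEvent(virQEMUDriverPtr driver,
                            virDomainObjPtr vm,
                            const char *devAlias)
  {
      /* runs in a worker thread, so acquiring a job cannot deadlock
       * the main event loop */
      if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0)
          return;

      /* look up the device by alias, remove its backend and only then
       * drop it from the domain definition */

      qemuDomainObjEndJob(driver, vm);
  }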
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Currently we don't support mixed (external + internal) snapshots. The
code detecting the snapshot type didn't make sure that the memory image
was consistent with the snapshot type, leading to a strange error
message:
$ virsh snapshot-create-as --domain VM --diskspec vda,snapshot=internal --memspec snapshot=external,file=/tmp/blah
error: internal error: unexpected code path
Fix the mixed detection code to detect this kind of mistake:
$ virsh snapshot-create-as --domain VM --diskspec vda,snapshot=internal --memspec snapshot=external,file=/tmp/blah
error: unsupported configuration: mixing internal and external targets for a snapshot is not yet supported
An internal snapshot of an active VM with the memory snapshot explicitly
disabled would actually still take the memory snapshot. Reject it
explicitly.
Before:
$ virsh snapshot-create-as --domain VM --diskspec vda,snapshot=internal --memspec snapshot=no
Domain snapshot 1401353155 created
After:
$ virsh snapshot-create-as --domain VM --diskspec vda,snapshot=internal --memspec snapshot=no
error: Operation not supported: internal snapshot of a running VM must include the memory state
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1083345