This code handles the XML bits and the internal definition, so it
belongs in the conf directory.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Same as virStorageFileBackend, it doesn't belong in the util directory.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Up until now we had runtime code and XML-related code in the same
source file inside the util directory.
This patch extracts the runtime part into the new storage_file
directory.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
This code is not directly relevant to virStorageSource, so move it to
a separate file.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Similarly to startup of the VM, qemu doesn't like setting throttling
for an empty drive. Just skip it, since we do the correct thing once
new media is inserted.
Resolves: https://gitlab.com/libvirt/libvirt/-/issues/117
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Tested-by: Han Han <hhan@redhat.com>
When an interface has a bandwidth limitation set (its root qdisc is
htb in that case) but this gets cleared via a public API call
(virDomainSetInterfaceParameters() or virDomainUpdateDeviceFlags()),
then virNetDevBandwidthSet() clears whatever qdiscs were set on the
interface and the kernel places the default qdisc at the root. What we
need to do next is replace the root qdisc with the one we want.
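Conceptually, at the tc level this boils down to something like the
following (interface name, handle and the htb example are illustrative
only, not taken from the actual code):

  # clearing the limits removes our qdiscs; the kernel then installs
  # its default qdisc at the root
  tc qdisc del dev vnet0 root
  # so the desired qdisc has to be put back at the root explicitly
  tc qdisc replace dev vnet0 root handle 1: htb default 1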
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1329644
Fixes: 0b66196d86ea375234cb0ee99289c486f9921820
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
While previously we returned 0, this is not correct. We have to
return a negative value to indicate an error.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Even though we are getting driver capabilities with refresh=false
(so that it is not expensive), we should still do the ACL check first
because there is no point in bothering with the capabilities if the
caller doesn't have permission to call the API.
Also, this way the comment makes more sense.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The code we have there to copy the seclabel model or doi can be
replaced by virStrcpy() calls, which do exactly the same checks.
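For illustration, the change boils down to the following pattern (the
exact buffer names and error handling in the code may differ):

  /* before: open-coded length check followed by a plain copy */
  if (strlen(model) >= VIR_SECURITY_MODEL_BUFLEN - 1)
      goto error;
  strcpy(secmodel->model, model);

  /* after: virStrcpy() performs the same bounds check itself */
  if (virStrcpy(secmodel->model, model, VIR_SECURITY_MODEL_BUFLEN) < 0)
      goto error;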
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
In recent patches new members were introduced to the
_qemuAgentDiskAddress struct to keep the optional CCW address sent by
the guest agent. These two members are a struct to store the CCW
address in and a boolean to track whether the CCW address is valid.
Well, we can hold the same information with a pointer - instead of
storing the CCW address structure, let's keep just a pointer to it.
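In other words (member names follow the description above; the exact
declarations in the code may differ slightly):

  /* before: embedded struct plus a validity flag */
  virDomainDeviceCCWAddress ccw_addr;
  bool has_ccw_addr;

  /* after: a single pointer; NULL means no CCW address was reported */
  virDomainDeviceCCWAddress *ccw_addr;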
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
On s390x, devices are accessed via the channel subsystem by default,
so we need to look up the devices via their CCW address there instead
of using PCI.
This fixes "virsh domfsinfo" on s390x for virtio-block devices (the first
attempt from commit f8333b3b0a7 did it in the wrong way, reporting the
device name on the guest side instead of the target name on the host side).
Fixes: f8333b3b0a ("qemu: Fix domfsinfo for non-PCI device information ...")
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1858771
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
It must be used when the migration URI uses the `unix:` transport
because otherwise we cannot just guess where to connect for the disk
migration.
https://bugzilla.redhat.com/show_bug.cgi?id=1638889
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
virDomainDefValidateAliases() is one of the static functions that
needs to be handled before moving virDomainDefValidateInternal().
Let's move all related validate functions to domain_validate.c
at the same time.
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
We decided not to do metadata-less checkpoints, and checking whether
the metadata is consistent is done once the data is actually needed.
Remove the comment.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Repeating the alignment step is not really necessary since we fully
populate the definition. In the case of checkpoints it was a relic
needed for populating the 'idx' to match a checkpoint disk to a
definition disk, but that was already removed.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Commit f5e8715a8b4 added logic which adds some fake job info when qemu
didn't return anything, but in that case the job type would not be set.
Since we already have the proper job type recorded in the
qemuBlockJobDataPtr which the caller fetched, we can use it and also
remove the lookup from the disk which was necessary prior to the
conversion to qemuBlockJobDataPtr.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
A nodename may be associated with a disk backup job; add support for
looking it up in that chain too. This is specifically useful for the
BLOCK_WRITE_THRESHOLD event, which can be registered for any nodename.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
In v6.0.0-rc1~439 (and friends) we tried to cache NUMA
capabilities because we assumed they are immutable. And to some
extent they are (NUMA hotplug is not a thing, is it). However,
our capabilities also contain some runtime info that can change,
e.g. hugepage pool allocation sizes or the total amount of memory
per node (host-side memory hotplug might change the value).
Because of the caching we might not be reporting the correct
runtime info in 'virsh capabilities'.
The NUMA caps are used in three places:
1) 'virsh capabilities'
2) domain startup, when parsing numad reply
3) parsing domain private data XML
In cases 2) and 3) we need the NUMA caps to construct the list of
physical CPUs that belong to NUMA nodes from the numad reply. And
while this may seem static, it's not, really, because of possible
CPU hotplug on the physical host.
There are two possible approaches:
1) build a mechanism that would invalidate the cached NUMA caps
when they become stale, or
2) drop the caching and construct NUMA caps from scratch on
each use.
In this commit, the latter approach is implemented, because it's
easier.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1819058
Fixes: 1a1d848694f6c2f1d98a371124928375bc3bb4a3
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
If the job has finished, but we didn't yet process the completion,
pretend that it's still incomplete so that apps which decided to poll
qemuDomainGetBlockJobInfo rather than use events can be sure that the
XML update was completed.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Replace qemuMonitorGetBlockJobInfo with qemuMonitorGetAllBlockJobInfo
and a hash table lookup. This basically open-codes
qemuMonitorGetBlockJobInfo, but it will be removed in the next patch.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Let's pass along / fill @niothreads rather than trying to make the
return value do double duty as a status and a thread count.
This resolves a Coverity issue detected in qemuDomainGetIOThreadsMon
where, if qemuDomainObjExitMonitor failed, -1 was returned and
overwrote @niothreads, causing a memory leak.
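A hypothetical sketch of the pattern (the function name is made up;
these are not the exact libvirt prototypes):

  /* before: the return value is both status and count, so an error
   * path returning -1 can clobber a count that was already filled in
   * and leak the list it describes */
  int collectIOThreads(qemuMonitorIOThreadInfo ***iothreads);

  /* after: the count travels via an out-parameter and the return
   * value is strictly 0 or -1 */
  int collectIOThreads(qemuMonitorIOThreadInfo ***iothreads,
                       int *niothreads);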
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Existing practice with the filesystem fields reported for the
virDomainGetGuestInfo API is to use the singular form for
field names. Ensure the disk info follows this practice.
Fixes
commit 05a75ca2ce743bc0bb119fb8d532ff84646fafa3
Author: Marc-André Lureau <marcandre.lureau@redhat.com>
Date: Fri Nov 20 22:09:46 2020 +0400
domain: add disk informations to virDomainGetGuestInfo
commit 0cb2d9f05d00497a715352f6ea28cf8fb6921731
Author: Marc-André Lureau <marcandre.lureau@redhat.com>
Date: Fri Nov 20 22:09:47 2020 +0400
qemu_driver: report guest disk informations
commit 172b8304352b1945e328394e61290a24446280dd
Author: Marc-André Lureau <marcandre.lureau@redhat.com>
Date: Fri Nov 20 22:09:48 2020 +0400
virsh: add --disk informations to guestinfo command
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
If there is an error getting info from the guest agent, then control
in qemuDomainGetGuestInfo() jumps to the 'exitagent' label and
subsequently continues at 'endagentjob'. Both labels are hit in the
success case too. Control then continues by attempting to match the
fetched info (e.g. disk addresses) with the domain def. But this is
needless - the API will return an error regardless.
To return early from the function, move both the 'exitagent' and
'endagentjob' labels to the end of the function and jump straight to
'cleanup' afterwards. This allows us to set 'ret = 0' later - only
when we know we succeeded.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Tested-by: Han Han <hhan@redhat.com>
To match the QGA schema name (we are introducing a qemuAgentDiskInfo
struct again for a different purpose).
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Tested-by: Han Han <hhan@redhat.com>
When executing the hypervisor-cpu-baseline command with only a single
CPU definition present in the XML file, the baseline handler will exit
early and libvirt will print an unhelpful message:
"error: An error occurred, but the cause is unknown"
This is due to no CPU definition ever being "baselined", since the
handler expects at least two CPU models.
Let's fix this by performing a CPU model expansion on the single CPU
definition and returning the result to the caller. This will also
ensure the CPU model's feature set is sane if any features were
provided in the file.
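For instance, a file containing just one CPU definition now produces a
usable baseline (the file name and model below are made up for
illustration):

  $ cat cpu.xml
  <cpu mode='custom' match='exact'>
    <model fallback='forbid'>z14-base</model>
  </cpu>
  $ virsh hypervisor-cpu-baseline cpu.xml --features

The command now prints the expanded CPU definition instead of the
unknown-cause error.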
Signed-off-by: Collin Walling <walling@linux.ibm.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Check the provided CPU models against the CPU models
known by the hypervisor before baselining and print
an error if an unrecognized model is found.
Signed-off-by: Collin Walling <walling@linux.ibm.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
When executing the hypervisor-cpu-baseline command with an XML file
that contains a CPU definition without a model name, or an invalid
CPU definition, the command will fail and return an error message
from the QMP response.
Let's clean this up by checking for a valid definition and the
presence of a model name.
This code is copied from virCPUBaseline.
Signed-off-by: Collin Walling <walling@linux.ibm.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
Hypervisor-cpu-baseline requires the cpu-model-expansion
capability when expanding CPU model features if the
--features flag is provided.
Signed-off-by: Collin Walling <walling@linux.ibm.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
When introducing the API I mistakenly used the 'int' type for the
@nkeys argument, which does nothing more than tell the API how many
items there are in the @keys array. Obviously, negative values are
not expected and therefore 'unsigned int' should have been used.
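Illustrated on a made-up signature following the same pattern (this is
not the actual API declaration):

  /* before */
  int virDomainSomeKeysAPI(virDomainPtr dom, const char **keys,
                           int nkeys, unsigned int flags);

  /* after: a count of items can never be negative */
  int virDomainSomeKeysAPI(virDomainPtr dom, const char **keys,
                           unsigned int nkeys, unsigned int flags);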
Reported-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
A bad merge while rebasing 74b2834333a caused the @event variable
to be defined twice, inside the 'cleanup' label, causing Coverity
errors.
This code was originally moved outside of the label by commit
773c7c43611a. Delete the unintended code in the 'cleanup'
label.
Fixes: 74b2834333ab3bf500f870e0a6d4e8309379d96a
Reported-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Some labels became obsolete after the previous patches.
Reviewed-by: Jonathon Jongsma <jjongsma@redhat.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Due to failures to unlink on a previous rename/undefine we can already
have autolink etc. files for the domain being defined. Remove them.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Let's move the objlist restore to the cleanup section so that we can
handle failure of actions between virDomainObjListAdd and
virDomainDefSave. We are going to add such actions in the next patch.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
This basically just saves the checkpoint metadata on disk after the
name is changed in memory, as the path to the domain checkpoint
directory depends on the name. After that the old checkpoint directory
is deleted along with the checkpoint metadata files.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
This basically just saves the snapshot metadata on disk after the
name is changed in memory, as the path to the domain snapshot
directory depends on the name. After that the old snapshot directory
is deleted along with the snapshot metadata files.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
This patch also changes functionality a bit.
First, if unlinking the old config file fails, we previously rolled
back and returned an error; now we return success. I don't think this
makes much difference. I guess in both cases on libvirtd restart we
have to deal with both the new and the old config existing on disk
with different names but the same uuid.
Second, if unlinking the old autolink fails, we previously rolled
back, which was not right as at this point we have already unlinked
the old config file. So this is fixed now.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Going to the cleanup label is a mere 'return -1', thus let's just
return instead of jumping to this label.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
We can simplify the cleanup section by moving the sending of events
to the success path only, because only on the success path are the
events non-NULL.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
For example, if saving the config file with the new name fails, we
currently send a false undefine event.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>