All APIs which expect a list of CPU models supported by hypervisors were
switched from char **models and int models to just accept a pointer to
a virDomainCapsCPUModels object stored in domain capabilities. This avoids
the need to transform virDomainCapsCPUModelsPtr into a NULL-terminated
list of model names and also allows the various cpu driver APIs to
access additional details (such as usability) about each CPU model.
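For illustration, a minimal sketch of how a consumer might walk such an
object; the field and constant names (nmodels, models, name, usable,
VIR_DOMCAPS_CPU_USABLE_YES) follow my reading of domain_capabilities.h
and may not match exactly:

    #include <stdio.h>

    /* Hedged sketch: iterate a virDomainCapsCPUModels object directly
     * instead of a NULL-terminated char ** list of model names. */
    static void
    printUsableModels(virDomainCapsCPUModelsPtr cpuModels)
    {
        size_t i;

        for (i = 0; i < cpuModels->nmodels; i++) {
            virDomainCapsCPUModelPtr model = cpuModels->models + i;

            if (model->usable == VIR_DOMCAPS_CPU_USABLE_YES)
                printf("%s\n", model->name);
        }
    }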
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1497396
In 0d3d020ba6c4f I added the capability for the API to accept MAC
addresses too. However, the implementation was faulty: it needs to
look up the corresponding interface in the domain definition and
pass the ifname instead of the MAC address.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1497396
The other APIs accept both an ifname and a MAC address. There's no
reason virDomainInterfaceStats can't do the same.
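For context, a small usage sketch of the public API; with this series a
MAC address from the domain XML should work in place of the ifname
(error handling trimmed):

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    /* Query interface statistics either by tap/interface name or, with
     * this change, by the interface's MAC address. */
    static void
    showIfaceStats(virDomainPtr dom, const char *ifnameOrMac)
    {
        virDomainInterfaceStatsStruct stats;

        if (virDomainInterfaceStats(dom, ifnameOrMac, &stats,
                                    sizeof(stats)) < 0)
            return;

        printf("%s: rx_bytes=%lld tx_bytes=%lld\n",
               ifnameOrMac, stats.rx_bytes, stats.tx_bytes);
    }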
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Every caller reports the error itself. We might as well move the
error reporting into the function and thus unify it.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1447169
Since a domain can have at most one watchdog, this simplifies
things a bit. However, since we must be able to set the watchdog
action as well, a new monitor command needs to be used.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Similarly to the previous patch, for some types of interface the
domain and the host are on the same side of the RX/TX barrier. In
that case, we need to set up the QoS differently: swapped.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1497410
The comment in the virNetDevTapInterfaceStats() implementation for
Linux states that packets transmitted by the domain are received by
the host and vice versa. Well, this is true, but not for all types
of interfaces. For instance, for macvtaps, when the TAP device is
hooked right onto a physical device, any packet the domain sends
also looks like a packet sent to the host. Therefore, we should
allow the caller to choose whether the returned stats should be a
straight copy or swapped.
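A minimal sketch of what the swapping could look like; the actual
parameter name and its plumbing through virNetDevTapInterfaceStats()
may differ:

    #include <stdbool.h>
    #include <libvirt/libvirt.h>

    #define SWAP_LL(a, b) \
        do { long long tmp_ = (a); (a) = (b); (b) = tmp_; } while (0)

    /* Hedged sketch: when the caller asks for swapped stats (e.g. for a
     * macvtap attached directly to a physical device), exchange the RX
     * and TX counters collected from the host's point of view. */
    static void
    maybeSwapStats(virDomainInterfaceStatsPtr stats, bool swapped)
    {
        if (!swapped)
            return;

        SWAP_LL(stats->rx_bytes, stats->tx_bytes);
        SWAP_LL(stats->rx_packets, stats->tx_packets);
        SWAP_LL(stats->rx_errs, stats->tx_errs);
        SWAP_LL(stats->rx_drop, stats->tx_drop);
    }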
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Users might have configured an interface so that its type is
'network', but the corresponding network plugs interfaces into an
OVS bridge. Therefore, we have to check the actual type of the
interface instead of the configured one.
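A hedged sketch of the kind of check this implies; the helper names and
struct layout are written from memory and may differ:

    /* Sketch: decide based on the runtime-resolved type rather than the
     * configured one, so a type='network' interface backed by an OVS
     * bridge still takes the OVS-specific path. */
    static bool
    ifaceUsesOpenvswitch(virDomainNetDefPtr net)
    {
        virNetDevVPortProfilePtr vport =
            virDomainNetGetActualVirtPortProfile(net);

        return virDomainNetGetActualType(net) == VIR_DOMAIN_NET_TYPE_BRIDGE &&
               vport &&
               vport->virtPortType == VIR_NETDEV_VPORT_PROFILE_OPENVSWITCH;
    }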
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Seeing a log message saying 'flags=93' is ambiguous & confusing unless
you happen to know that libvirt always prints flags as hex. Change our
debug messages so that they always add a '0x' prefix when printing flags,
and '0' prefix when printing mode. A few other misc places gain a '0x'
prefix in error messages too.
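For example, in the style the change describes (the exact messages
touched vary across the tree):

    /* Before: flags printed as bare hex digits, easily read as decimal. */
    VIR_DEBUG("dom=%p, flags=%x", dom, flags);

    /* After: the 0x prefix makes the base explicit. */
    VIR_DEBUG("dom=%p, flags=0x%x", dom, flags);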
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1448268
When migrating to a file (e.g. when doing 'virsh save file'),
a couple of things happen in the thread that is executing the API:
1) the domain obj is locked
2) iohelper is spawned as a separate process to handle all I/O
3) the thread waits for iohelper to finish
4) the domain obj is unlocked
Now, the problem is that the wait in step 3 may take ages because
iohelper calls fdatasync(). And unfortunately, we wait the whole
time with the domain locked. So if another thread wants to jump in
and, say, copy the domain name ('virsh list' for instance), it is
stuck.
The solution is to unlock the domain whenever we wait for I/O and
lock it back again once the wait has finished.
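A simplified sketch of the pattern, assuming the usual
virObjectLock/virObjectUnlock helpers; the real save/dump code paths
are more involved:

    /* Hedged sketch: drop the domain lock while the iohelper child does
     * the (possibly fdatasync()-bound, very slow) I/O, then take it
     * back before touching the domain object again. */
    static int
    runIOHelperUnlocked(virDomainObjPtr vm, virCommandPtr cmd)
    {
        int ret;

        virObjectUnlock(vm);
        ret = virCommandRun(cmd, NULL);   /* blocks until iohelper exits */
        virObjectLock(vm);

        return ret;
    }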
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Since commit v2.2.0-199-g7ce711a30e libvirt stores an updated guest CPU
in the domain's live definition and there's no need to update it every time
we want to format the definition. The commit itself tried to address
this in qemuDomainFormatXML, but forgot to fix qemuDomainDefFormatLive.
Not to mention that masking a previously set flag is only acceptable if
the flag was set by a public API user. Internally, libvirt should have
never set the flag in the first place.
https://bugzilla.redhat.com/show_bug.cgi?id=1485022
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
In the past we updated host-model CPUs with host CPU data by adding a
model and features, but keeping the host-model mode. And since the CPU
model is not normally formatted for host-model CPU defs, we had to pass
the updateCPU flag to the formatting code to be able to properly output
updated host-model CPUs. Libvirt doesn't do this anymore: host-model
CPUs are turned into custom mode CPUs once updated with host CPU data,
and thus there's no reason to keep the hacks inside the CPU XML
formatters.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Add a new virStorageNetProtocol for Veritas HyperScale (VxHS) disks
Signed-off-by: Ashish Mittal <Ashish.Mittal@veritas.com>
Signed-off-by: John Ferlan <jferlan@redhat.com>
Documentation states:
"'offset' and 'size' represent an area which must lie entirely within
the device or file." Enforce that the requested area lies fully within
the device or file.
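A minimal sketch of such a bounds check, written so the addition cannot
overflow; the variable names are illustrative:

    /* 'total' is the size of the device or file being peeked at. */
    if (offset > total || size > total - offset) {
        virReportError(VIR_ERR_INVALID_ARG, "%s",
                       _("requested range must lie within the device or file"));
        goto cleanup;
    }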
Commit 3956af495e broke the blockPeek API since virStorageFileRead
allocates a return buffer and fills it with the data, while the API
fills a user-provided buffer. This did not get caught by the compiler
since the API prototype uses a 'void *'.
Fix it by transferring the data from the allocated buffer to the user
provided buffer.
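Roughly, the fix amounts to the following sketch; the
virStorageFileRead signature is written from memory and may differ
slightly:

    char *tmp = NULL;
    ssize_t nread;

    /* virStorageFileRead() allocates and fills 'tmp' ... */
    if ((nread = virStorageFileRead(disk->src, offset, size, &tmp)) < 0)
        goto cleanup;

    /* ... so the data must be copied into the caller-provided buffer. */
    memcpy(buffer, tmp, nread);
    VIR_FREE(tmp);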
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1491217
The host CPU definitions reported in the capabilities XML may contain
CPU features unknown to QEMU, but the result of virConnectBaselineCPU is
supposed to be directly usable as a guest CPU definition and thus it
should only contain features QEMU knows about.
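For reference, a usage sketch of the affected public API; error
handling trimmed:

    #include <libvirt/libvirt.h>

    /* Compute a baseline CPU from two host CPU definitions; with this
     * change the returned XML should only contain features QEMU knows
     * about. */
    static char *
    computeBaseline(virConnectPtr conn,
                    const char *hostCPU1,
                    const char *hostCPU2)
    {
        const char *xmlCPUs[] = { hostCPU1, hostCPU2 };

        return virConnectBaselineCPU(conn, xmlCPUs, 2, 0);
    }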
https://bugzilla.redhat.com/show_bug.cgi?id=1450317
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The implementation of virConnectBaselineCPU may be different for each
hypervisor. Thus it shouldn't really be implemented in the cpu code.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Block job QMP commands with underscores rather than dashes were never
released in upstream qemu (they were added, but modified in the same
release [1]), yet a certain distro managed to backport the intermediate
version.
The change also slightly modified the semantics of the abort command,
which left us with a lot of code that was only ever relevant for
certain downstream distros.
Clean the legacy cruft out of the upstream code and support only the
upstream implementations.
[1] See qemu commit v1.0-2176-gdb58f9c060
Reviewed-by: Eric Blake <eblake@redhat.com>
The VIR_SECURITY_MANAGER_MOUNT_NAMESPACE flag informs the DAC driver
whether mount namespaces are in use for the VM. It will be used by
future changes.
Wire it up in the qemu driver.
Commit 703abf1d7 changed the logic so that we don't attempt to re-create
the image if it's a block device. This was done by modifying the
'reuse' variable. Unfortunately, one of the later uses of that variable
is to infer whether we should probe the disk format. After the changes
in the commit mentioned above we would attempt the probe if the target
of the copy was a block device and the format was not provided
explicitly, rather than using the format of the disk.
Fix it by explicitly checking whether the user requested a reuse of the
disk rather than the modified boolean flag.
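A hedged sketch of the distinction; the internal variable and field
names are illustrative, and the flag shown is the public one accepted
by virDomainBlockCopy:

    /* 'reuse' may have been cleared internally for block devices, so it
     * no longer reflects the user's request; consult the flags when
     * deciding whether to fall back to the disk's own format. */
    bool reuseRequested = !!(flags & VIR_DOMAIN_BLOCK_COPY_REUSE_EXT);

    if (!format && !reuseRequested)
        format = disk->src->format;   /* assumed field layout */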
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1490826
https://bugzilla.redhat.com/show_bug.cgi?id=1447169
With this patch users can cold plug a watchdog. Things are pretty
simple because a domain can have at most one watchdog device.
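A usage sketch of cold plugging through the public API, with an
illustrative watchdog element:

    #include <libvirt/libvirt.h>

    /* Attach a watchdog to the persistent configuration only. */
    static int
    coldplugWatchdog(virDomainPtr dom)
    {
        const char *xml = "<watchdog model='i6300esb' action='reset'/>";

        return virDomainAttachDeviceFlags(dom, xml,
                                          VIR_DOMAIN_AFFECT_CONFIG);
    }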
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When destroying a domain libvirt marks it internally with a
beingDestroyed flag to make sure the qemuDomainDestroyFlags API itself
cleans up after the domain rather than letting an uninformed EOF handler
do it. However, when the domain is being started at the moment libvirt
was asked to destroy it, only the starting thread can properly clean up
after the domain and thus it ignores the beingDestroyed flag. Once
qemuDomainDestroyFlags finally gets a job, the domain may not be running
anymore, which should not be reported as an error if the domain has been
starting up.
https://bugzilla.redhat.com/show_bug.cgi?id=1445600
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
In case of real migration (not migrating to a file on save, dump,
etc.) the migration info is not complete at the time qemu finishes
the migration in normal (non-postcopy) mode. We need to update disk
stats, downtime info, etc. Thus let's not expose this job status as
completed.
To achieve this, let's set the status to 'qemu completed' after qemu
reports the migration is finished. This is not visible to clients as
a completed job. The cookie code in the confirm phase will finally
turn the job into completed. As there is nothing more to do when
migrating to a file, the status is set to 'completed' right away in
that case, as before.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When getting job info, fetch the mirror stats from qemu in case the
mirror has not reached the ready phase yet. Otherwise the mirror
stats are already saved in the current job.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
It looks simpler to drop this optimization: we are going to add
fetching of disk stats during migration by querying the qemu
process, and checking whether we have to acquire the job condition
becomes more complicated.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Instead of checking stats.status, let's set the status to migrating
as soon as the migrate command is sent (waiting for completion would
be a good place too).
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Setting the status to none has little value - getting the job status
will not return even the elapsed time.
After this patch, getting job stats stays correct in the sense that
it will not fetch migration stats, because it consults stats.status
before doing the fetch.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Querying destination migration statistics may result either in a
failure or in an elapsed time value, depending on the stats.status
value, which is odd. Instead, let's always fail. Clients should be
ready to handle this, as the period during which the query currently
fails can be considerable.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuMigrationFetchJobStatus is rather inconvenient. Some of its
callers don't need the status to be updated, some don't need to
update the elapsed time right away. So let's update the status or
elapsed time in the callers instead.
This patch drops updating the job status when a client fetches job
stats. This way we will not report status 'completed' before it has
been updated by the migration routine.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Let's introduce a QEMU_DOMAIN_JOB_STATUS_POSTCOPY state for
job.current->status instead of checking job.current->stats.status.
The latter can change when migration statistics are fetched. Keeping
only the store role of stats.status and dropping its use for state
tracking seems more manageable.
This patch removes all state-checking usage of stats except for
qemuDomainGetJobStatsInternal. That place will be handled separately.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This patch simply switches the code from using VIR_DOMAIN_JOB_* to
the newly introduced QEMU_DOMAIN_JOB_STATUS_*. Later this gives us
the freedom to introduce states for the postcopy and mirroring
phases.
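For orientation only, one plausible shape of the new enum; the actual
list in qemu_domain.h may differ, and the postcopy/'qemu completed'
entries are the ones later patches in this series add:

    typedef enum {
        QEMU_DOMAIN_JOB_STATUS_NONE = 0,
        QEMU_DOMAIN_JOB_STATUS_ACTIVE,
        QEMU_DOMAIN_JOB_STATUS_MIGRATING,
        QEMU_DOMAIN_JOB_STATUS_QEMU_COMPLETED, /* qemu done, libvirt not */
        QEMU_DOMAIN_JOB_STATUS_POSTCOPY,
        QEMU_DOMAIN_JOB_STATUS_COMPLETED,
        QEMU_DOMAIN_JOB_STATUS_FAILED,
        QEMU_DOMAIN_JOB_STATUS_CANCELED,
    } qemuDomainJobStatus;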
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1439991
Whenever a device is being updated via the
virDomainUpdateDeviceFlags() API, we parse the device XML and
ideally run some generic checks to validate the configuration
(e.g. whether the device defines a per-device boot order while the
domain already has an os/boot element). Well, that's the theory -
due to a missing check we jumped out of that check function too
early.
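For context, a usage sketch of the API in question; the XML is
illustrative, and with this fix the per-device <boot order='...'/>
shown is validated against (and rejected if it conflicts with) the
domain's os/boot configuration:

    #include <libvirt/libvirt.h>

    /* Update a CD-ROM device on a running domain. */
    static int
    updateCdrom(virDomainPtr dom)
    {
        const char *xml =
            "<disk type='file' device='cdrom'>"
            "  <source file='/var/lib/libvirt/images/new.iso'/>"
            "  <target dev='hdc' bus='ide'/>"
            "  <boot order='2'/>"
            "</disk>";

        return virDomainUpdateDeviceFlags(dom, xml, VIR_DOMAIN_AFFECT_LIVE);
    }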
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
This commit adds the qemu driver implementation for editing the XML
configuration stored in a domain's managed save state file.
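A usage sketch of the public entry points this implements, doing a
get-edit-define round trip; the signatures of
virDomainManagedSaveGetXMLDesc and virDomainManagedSaveDefineXML are
written from memory, and error handling is trimmed:

    #include <stdlib.h>
    #include <libvirt/libvirt.h>

    /* Fetch the XML stored in the managed save image, let the caller
     * tweak it, and write it back. */
    static int
    editManagedSaveXML(virDomainPtr dom, char *(*edit)(const char *xml))
    {
        char *xml = virDomainManagedSaveGetXMLDesc(dom, 0);
        char *newxml;
        int ret;

        if (!xml)
            return -1;

        newxml = edit(xml);
        ret = virDomainManagedSaveDefineXML(dom, newxml, 0);

        free(xml);
        free(newxml);
        return ret;
    }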
Signed-off-by: Kothapally Madhu Pavan <kmp@linux.vnet.ibm.com>
This commit adds the qemu driver implementation for getting the XML
description of a domain's managed save state.
Signed-off-by: Kothapally Madhu Pavan <kmp@linux.vnet.ibm.com>
This API definitely modifies the state of @vm. Therefore it should
grab a job.
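The usual job-guarded pattern in the qemu driver looks roughly like
this sketch:

    if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0)
        goto cleanup;

    /* ... modify @vm here ... */

    qemuDomainObjEndJob(driver, vm);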
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
In some places we either already have a synchronous job or we have
just released it. Also, some APIs might want to use this code
without having to release their job. Therefore, the job acquiring
code is moved out to qemuDomainRemoveInactiveJob so that
qemuDomainRemoveInactive does just what it promises.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1467245
Currently, there's a bug when undefining a domain with an NVRAM
store. Basically, the unlink() of the NVRAM store file happens
during the undefine procedure iff the domain is inactive. So, if the
domain is running and undefine is called, the file is left behind.
It won't be removed in the domain cleanup process either
(qemuProcessStop). One of the solutions is to remove it regardless
of the domain state and rely on qemu having the file open. This
still has the downside that if the domain is defined back, the NVRAM
store file is going to be new; any changes to the current one are
lost (just like with any other file that is deleted while a process
has it open). But is that really a downside?
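For reference, callers request NVRAM removal through the public
undefine flags; a small usage sketch:

    #include <libvirt/libvirt.h>

    /* Undefine a domain and ask for its NVRAM store file to be removed;
     * with this change the file is unlinked even while the domain is
     * still running. */
    static int
    undefineWithNvram(virDomainPtr dom)
    {
        return virDomainUndefineFlags(dom, VIR_DOMAIN_UNDEFINE_NVRAM);
    }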
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1458630
Introduce virQEMUDriverConfigTLSDirResetDefaults in order to check
whether defaultTLSx509certdir was changed and, if so, change the
default for any other *TLSx509certdir that was not explicitly set.
Introduce virQEMUDriverConfigValidate to validate the existence of
any of the *_tls_x509_cert_dir values that were uncommented/set,
including the default.
Update the qemu.conf description of the default to describe the
consequences if the default directory path does not exist.
Signed-off-by: John Ferlan <jferlan@redhat.com>
The switch contains a considerable amount of changes:
virQEMUCapsRememberCached() is removed because this is now handled
by virFileCacheSave().
virQEMUCapsInitCached() is removed because this is now handled by
virFileCacheLoad().
virQEMUCapsNewForBinary() is split into two functions,
virQEMUCapsNewData() which creates new data if there is nothing
cached and virQEMUCapsLoadFile() which loads the cached data.
This is now handled by virFileCacheNewData().
virQEMUCapsCacheValidate() is removed because this is now handled by
virFileCacheValidate().
virQEMUCapsCacheFree() is removed because it's no longer required.
Add virCapsPtr into virQEMUCapsCachePriv because for each call of
virFileCacheLookup*() we need to use the current virCapsPtr.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>