The features were added to QEMU by commit v2.4.0-1690-gf7fda28094 as
Skylake Server features.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Add functionality to libvirt that allows querying an interface for the
availability of the switchdev offloading NIC capability.
Switchdev mode was introduced in kernel 4.8; the iproute2 devlink
command can retrieve the switchdev state of a NIC, for example:
devlink dev eswitch show pci/0000:03:00.0
This feature is needed by OpenStack so it can make a scheduling decision
based on whether the NIC is in hardware offload (switchdev) or regular
SR-IOV (legacy) mode, and select an appropriate hypervisor with the
requested capability; see [1].
[1] - https://specs.openstack.org/openstack/nova-specs/specs/pike/approved/enable-sriov-nic-features.html
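In libvirt the capability shows up as a feature flag in the node device
XML of the interface; a minimal sketch (interface name and the rest of
the feature list are hypothetical):
  <device>
    <name>net_enp3s0f0</name>
    <capability type='net'>
      <interface>enp3s0f0</interface>
      <feature name='rx'/>
      <feature name='switchdev'/>
    </capability>
  </device>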
Reviewed-by: Laine Stump <laine@laine.org>
Reviewed-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1075520
Apart from generic checks, we need to constrain the netmask/prefix
length a bit. The thing is, with the current implementation QEMU
needs to be able to 'assign' some IP addresses in the virtual
network: for instance, the default gateway is at x.x.x.2, DNS is at
x.x.x.3, and the default DHCP range is x.x.x.15-x.x.x.30. Since we
don't expose these settings yet, it's safer to require a shorter
prefix to leave room for the defaults.
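As an illustration (addresses are hypothetical), a guest network
configured with
  <ip family='ipv4' address='172.17.2.0' prefix='24'/>
leaves room for those defaults: gateway 172.17.2.2, DNS 172.17.2.3,
and the DHCP range 172.17.2.15-172.17.2.30. A prefix close to /32
would leave no such room.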
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: laine@laine.org
https://bugzilla.redhat.com/show_bug.cgi?id=1075520
Currently, all that users can specify for an interface of type
'user' is the common attributes: PCI address, NIC model (and
that's basically it). However, some users need to configure an
address range other than the default one.
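A minimal sketch of the intended configuration (address values are
illustrative):
  <interface type='user'>
    <model type='virtio'/>
    <ip family='ipv4' address='172.17.2.4' prefix='24'/>
  </interface>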
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: laine@laine.org
Only the feature policy is checked on s390; this was previously done in
virCPUUpdate, but that's not the correct place for the check once we
have virCPUValidateFeatures.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This new API may be used to check whether all features used in a CPU
definition are valid (e.g., libvirt knows their name, their policy is
supported, etc.). Leaving this API unimplemented in an arch subdriver
means libvirt does not restrict CPU features usable on the associated
architectures.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The host CPU definitions reported in the capabilities XML may contain
CPU features unknown to QEMU, but the result of virConnectBaselineCPU is
supposed to be directly usable as a guest CPU definition and thus it
should only contain features QEMU knows about.
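For illustration, the filtered baseline should look like a regular
guest CPU definition (model and feature below are picked arbitrarily):
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Skylake-Server</model>
    <feature policy='require' name='vme'/>
  </cpu>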
https://bugzilla.redhat.com/show_bug.cgi?id=1450317
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The filter only needs to know the CPU architecture. Passing
virQEMUCapsPtr as opaque is a bit overkill.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The implementation of virConnectBaselineCPU may be different for each
hypervisor. Thus it shouldn't really be implemented in the cpu code.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Commit id 'e02ff020cac' neglected to use the attrBuf and childBuf
in the virDomainDiskSourceFormatNetwork call, so make the necessary
alterations to use them.
Rather than checking during XML processing, move the check for
valid <encryption> into virDomainDiskDefParseValidate and alter
the text of the message slightly to be a bit more correct.
Rather than checking during XML processing, move the checks for correct
and valid auth into virDomainDiskDefParseValidate. This introduces
virDomainDiskSourceDefParseAuthValidate to check that the authdef
stored in the virStorageSource is valid. This can then be expanded
to service backingStore sources as well.
Alter the message text slightly as well to distinguish between an
unknown name and an incorrectly used name. Since type is not a
mandatory field, add NULLSTR() around the output in the unknown
error. NB, a config using an unknown format would fail virschematest
since it only accepts 'iscsi' and 'ceph' as "valid" types.
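For reference, a sketch of an <auth> element that passes the new
validation (username and usage are illustrative):
  <auth username='myname'>
    <secret type='iscsi' usage='mycluster_myname'/>
  </auth>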
Some operations done to rollback disk image labelling and locking might
overwrite (or clear) the actual error. Remember the original error when
tearing down disk access so that it's not obscured.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1461301
Some cleanup paths overwrite a useful error message with a less useful
one and we then try to preserve the original message. The handlers added
in this patch will simplify these operations since they are designed
exactly for this purpose.
Block job QMP commands with underscores rather than dashes were never
released in upstream qemu (they were added, but modified within the
same release [1]), yet a certain distro managed to backport the
intermediate version.
The change also slightly modified the semantics of the abort command,
which left us with a lot of code that was only ever relevant for
certain downstream distros.
Remove the legacy cruft from the upstream code and support only the
upstream implementations.
[1] See qemu commit v1.0-2176-gdb58f9c060
Reviewed-by: Eric Blake <eblake@redhat.com>
No need to pass a @driver parameter since all that's done with it is
dereferencing @cfg, especially since the only caller can just pass an
already referenced @cfg.
Also, it looks like commit id '0298531b' at one time had a different
name for the API, so I took the liberty of fixing the comments too
since I would already be updating them for the @cfg variable.
For a logged-in user, a path like /dev/dri/renderD128 will have
default ownership root:video, which won't work for the qemu:qemu user,
so we need to chown it.
We only do this when mount namespaces are enabled in the qemu driver,
so the chown'ing doesn't interfere with other users of the shared
render node path.
https://bugzilla.redhat.com/show_bug.cgi?id=1460804
The VIR_SECURITY_MANAGER_MOUNT_NAMESPACE flag informs the DAC driver
whether mount namespaces are in use for the VM. It will be used by
future changes.
Wire it up in the qemu driver.
https://bugzilla.redhat.com/show_bug.cgi?id=1464313
If a Disk pool was defined/created using XML that either didn't
specify a specific format or specified format type='unknown', then
restarting the pool after an initial disk backend build with overwrite
would fail after a libvirtd restart for a non-autostarted pool.
This is because the persistent pool data is not updated during pool
build with overwrite processing to have the VIR_STORAGE_POOL_DISK_DOS
default format.
So in addition to the alteration done during disk build processing,
alter the default expectation for disk pool startup to be DOS if
nothing has been defined yet. That will either succeed, if the pool
had been successfully built previously using the default DOS format,
or fail with a message indicating the format is something else that
does not match the expected format 'dos'.
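A minimal sketch of a disk pool with the format spelled out explicitly
(name and paths are illustrative):
  <pool type='disk'>
    <name>disk-pool</name>
    <source>
      <device path='/dev/sdb'/>
      <format type='dos'/>
    </source>
    <target>
      <path>/dev</path>
    </target>
  </pool>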
https://bugzilla.redhat.com/show_bug.cgi?id=1477880
If the "/#" is missing from the provided iSCSI path, then we need
to provide the default LUN of /0; otherwise, QEMU will fail to parse
the URL causing a failure to either create the guest or hotplug
attach the storage.
During post parse, for any iSCSI disk or hostdev, scan the source
path looking for the presence of '/', if found, then we can assume
the LUN is provided. If not found, alter the input XML to add the
"/0". This will cause the generated XML to have the generated
value when the domain config is saved after post parse.
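As an illustration (IQN and host are hypothetical), a source such as
  <source protocol='iscsi' name='iqn.2017-08.org.example:iscsi'>
    <host name='example.org' port='3260'/>
  </source>
would be altered during post parse to use
name='iqn.2017-08.org.example:iscsi/0'.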
Commit 703abf1d7 changed the logic so that we don't attempt to re-create
the image if it's a block device. This was done by modifying the
'reuse' variable. Unfortunately, one of the later uses of that variable
was to infer whether we should probe the disk format. After the changes
in the commit mentioned above, we would attempt the probe if the target
of the copy is a block device and the format was not provided
explicitly, rather than using the format of the disk.
Fix it by explicitly checking whether the user requested reuse of the
disk rather than checking the modified boolean flag.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1490826
https://bugzilla.redhat.com/show_bug.cgi?id=1447169
With this patch users can cold plug a watchdog. Things are pretty
simple because a domain can have at most one watchdog device.
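For example, persistently attaching a device XML like the following
(model and action are illustrative) cold plugs the watchdog:
  <watchdog model='i6300esb' action='reset'/>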
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If there was an error when constructing the buffer, NULL is
returned. The buffer is never freed though.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When destroying a domain libvirt marks it internally with a
beingDestroyed flag to make sure the qemuDomainDestroyFlags API itself
cleans up after the domain rather than letting an uninformed EOF handler
do it. However, when the domain is being started at the moment libvirt
was asked to destroy it, only the starting thread can properly clean up
after the domain and thus it ignores the beingDestroyed flag. Once
qemuDomainDestroyFlags finally gets a job, the domain may not be running
anymore, which should not be reported as an error if the domain has been
starting up.
https://bugzilla.redhat.com/show_bug.cgi?id=1445600
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
This option requires:
<ioapic driver='qemu'/>
Report an error in case someone tries to combine
it with a different ioapic setting.
Setting 'eim' on without enabling 'intremap' does not make sense.
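A sketch of a consistent configuration, abridged to the relevant
elements:
  <features>
    <ioapic driver='qemu'/>
  </features>
  <devices>
    <iommu model='intel'>
      <driver intremap='on' eim='on'/>
    </iommu>
  </devices>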
https://bugzilla.redhat.com/show_bug.cgi?id=1457610
Add a 'cleanup' label and improve the readability of one of the
checks by making it conform to our formatting standard and moving
the corresponding comment.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Laine Stump <laine@laine.org>
TPM 2 does not implement sysfs files for cancellation of commands.
We therefore use /dev/null for the cancel path passed to QEMU.
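The domain XML stays unchanged; a sketch of a passthrough backend
(device path illustrative), for which the cancel path passed to QEMU
is now /dev/null on TPM 2:
  <tpm model='tpm-tis'>
    <backend type='passthrough'>
      <device path='/dev/tpm0'/>
    </backend>
  </tpm>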
Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
Tested-by: Javier Martinez Canillas <javierm@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Add a new CPU model called 'EPYC' to model processors from the AMD EPYC
family (which includes EPYC 76xx, 75xx, 74xx, 73xx and 72xx).
The following feature bits have been added/removed compared to
Opteron_G5:
Added: monitor, movbe, rdrand, mmxext, ffxsr, rdtscp, cr8legacy, osvw,
       fsgsbase, bmi1, avx2, smep, bmi2, rdseed, adx, smap, clflushopt,
       sha-ni, xsaveopt, xsavec, xgetbv1, arat
Removed: xop, fma4, tbm
This patch depends on the EPYC CPU model support introduced in qemu [1].
[1] https://patchwork.kernel.org/patch/9902205/
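With this change a guest can request the model explicitly, e.g.:
  <cpu mode='custom' match='exact'>
    <model fallback='forbid'>EPYC</model>
  </cpu>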
Cc: Tom Lendacky <Thomas.Lendacky@amd.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
In case of real migration (not migrating to a file on save, dump,
etc.) the migration info is not complete at the time qemu finishes
the migration in normal (non-postcopy) mode. We need to update disk
stats, downtime info, etc. Thus let's not expose this job status as
completed.
To achieve this, let's set the status to 'qemu completed' after
qemu reports the migration is finished. It is not visible as a
completed job to clients. The cookie code in the confirm phase will
finally turn the job into a completed one. As there is nothing more
to do when migrating to a file, the status is set to 'completed' as
before in that case.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When getting job info, fetch mirror stats from qemu in case the mirror
has not reached the ready phase yet. Otherwise the mirror stats are
already saved in the current job.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
It looks simpler to drop this optimization: we are going to add
fetching of disk stats during migration by querying the qemu process,
and checking whether we have to acquire the job condition becomes more
complicated.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Instead of checking stat.status, let's set the status to migrating
as soon as the migrate command is sent (waiting for completion
is a good place too).
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Setting the status to none has little value - getting the job status
will not return even the elapsed time.
After this patch, getting job stats stays correct in the sense that
it will not fetch migration stats, because it consults stats.status
before doing the fetch.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Querying destination migration statistics may result in getting
a failure or getting an elapsed time value depending on the
stats.status value, which is odd. Instead, let's always fail. Clients
should be ready to handle this, as the period during which they
currently get a failure can be considerable.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>