Since commit 02e8d0cfdf we are using 'scsi-hd' and 'scsi-cd' instead of
'scsi-disk'. We should also probe capabilities for 'scsi-hd' instead of
'scsi-disk'. This is safe to do since 'scsi-disk' is the common
denominator of both 'scsi-hd' and 'scsi-cd', so all of its properties
are common to both.
For now the test data are modified by just changing the appropriate type
when probing for caps.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
To avoid changes to the filled-in microcode in case we change the caps
replies file for any reason, make the number depend on the filename.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
This flag tells virDomainMigrateSetMaxSpeed and
virDomainMigrateGetMaxSpeed APIs to work on post-copy migration
bandwidth.
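As a rough usage sketch (error handling trimmed, and the 10 MiB/s value
is purely illustrative), the flag would be passed like this:

  #include <libvirt/libvirt.h>

  /* Adjust and query the post-copy bandwidth of a running migration.
   * 'dom' is assumed to be a valid virDomainPtr; bandwidth is in MiB/s. */
  static int
  examplePostcopyBandwidth(virDomainPtr dom)
  {
      unsigned long bw = 0;

      /* set post-copy bandwidth to 10 MiB/s (illustrative value) */
      if (virDomainMigrateSetMaxSpeed(dom, 10,
                                      VIR_DOMAIN_MIGRATE_MAX_SPEED_POSTCOPY) < 0)
          return -1;

      /* read back the currently configured post-copy bandwidth */
      if (virDomainMigrateGetMaxSpeed(dom, &bw,
                                      VIR_DOMAIN_MIGRATE_MAX_SPEED_POSTCOPY) < 0)
          return -1;

      return 0;
  }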
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
This typed parameter for virDomainMigrate3 and virDomainMigrateToURI3
APIs may be used for setting maximum post-copy migration bandwidth.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
This patch adds a new VIR_MIGRATE_PARAM_BANDWIDTH_POSTCOPY typed
parameter for virDomainMigrate3 and virDomainMigrateToURI3 for setting
maximum post-copy migration bandwidth.
In case the initial VIR_MIGRATE_PARAM_BANDWIDTH_POSTCOPY value turns out
to be suboptimal, a new VIR_DOMAIN_MIGRATE_MAX_SPEED_POSTCOPY flag for
virDomainMigrateSetMaxSpeed and virDomainMigrateGetMaxSpeed may be used
to set/get the maximum post-copy migration bandwidth while migration is
already running.
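A minimal sketch of how the new typed parameter could be supplied at
migration start (the 50 MiB/s value and the flag combination are just
placeholders, not a recommendation):

  #include <libvirt/libvirt.h>

  /* Start a post-copy-capable migration with an explicit post-copy
   * bandwidth. 'dom' and 'dconn' are assumed to be valid. */
  static virDomainPtr
  exampleMigrateWithPostcopyBandwidth(virDomainPtr dom, virConnectPtr dconn)
  {
      virTypedParameterPtr params = NULL;
      int nparams = 0;
      int maxparams = 0;
      virDomainPtr ddom = NULL;

      /* post-copy bandwidth in MiB/s (placeholder value) */
      if (virTypedParamsAddULLong(&params, &nparams, &maxparams,
                                  VIR_MIGRATE_PARAM_BANDWIDTH_POSTCOPY, 50) < 0)
          return NULL;

      ddom = virDomainMigrate3(dom, dconn, params, nparams,
                               VIR_MIGRATE_LIVE | VIR_MIGRATE_POSTCOPY);

      virTypedParamsFree(params, nparams);
      return ddom;
  }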
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
So far, migration parameters were changed only at the beginning of
migration, mostly via an automatic translation from flags and typed
parameters. We need to export a few more functions to support APIs which
may set migration parameters while migration is already running.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Let's make the code flow easier to follow and get rid of the ugly endjob
label inside an if branch.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Some migration parameters supported by libvirt may use units that differ
from the units used by QEMU for the corresponding parameters. For
example, libvirt defines migration bandwidth in MiB/s while QEMU expects
B/s. Let's add a unit field to qemuMigrationParamsTPMapItem for
automatic conversion when translating between libvirt's migration typed
parameters and QEMU's migration parameters.
This patch is a preparation for future parameters, as the existing
VIR_MIGRATE_PARAM_BANDWIDTH parameter is set using the "migrate_set_speed"
QMP command rather than "migrate-set-parameters" for backward
compatibility.
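A minimal sketch of the idea (the structure below is illustrative only,
not libvirt's actual qemuMigrationParamsTPMapItem definition):

  /* Illustrative-only mapping entry: each item links a libvirt typed
   * parameter to a QEMU migration parameter and carries a multiplier
   * that converts the libvirt value into the unit QEMU expects. */
  typedef struct {
      const char *typedParam;   /* libvirt typed parameter name */
      const char *qemuParam;    /* QEMU migration parameter name */
      unsigned long long unit;  /* multiply libvirt value by this */
  } exampleTPMapItem;

  static const exampleTPMapItem exampleMap[] = {
      /* libvirt takes MiB/s, QEMU's max-postcopy-bandwidth expects B/s */
      { "bandwidth.postcopy", "max-postcopy-bandwidth", 1024ULL * 1024 },
  };

During translation the libvirt value would simply be multiplied by the
unit factor before being sent to QEMU and divided by it when reading
the parameters back.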
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
qemuDomainBlockPivot and qemuDomainBlockJobAbort need the job name for
cancelling or pivoting but were generating it locally instead of
accessing the existing copy in the job data structure.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The writing to an image actually starts when the copy job is initiated,
so checking this at the time of the pivot operation is too late.
Move the check to qemuDomainBlockCopyCommon. Note that modern qemu would
have prevented two writers to a qcow2 image anyway, so the slim
possibility of a job that was started by a libvirtd without this patch
and thus missed the check is not really worth worrying about.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
For copy and active commit jobs we record the state of the mirror so
that we can recover. The status XML was not saved in case of
qemuDomainBlockPivot due to an oversight.
Always save the XML when invoking qemuDomainBlockJobAbort, even if
the job is not currently tracking any state. This will change later,
and this is not a particularly hot code path anyway.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
If the container is really a simple one (init is just bash and
the whole root is passed through) then virDomainReboot and
virDomainShutdown will talk to the actual init within the host.
Therefore, 'virsh shutdown $dom' will result in shutting down the
host. True, at that point the container is shut down too, but that
looks a bit harsh to me.
The solution is to check whether the init inside the container is
the same as the init running on the host.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
So far virInitctlSetRunLevel() is fully automatic: it finds the
correct fifo to use to talk to the init and sets the desired
runlevel. However, callers (so far there is just one) will need to
inspect the fifo just before the runlevel is set.
Therefore, expose the internal list of fifos and also allow a
caller to explicitly use one.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Due to a bug the seclabels are restored before any PID in the
container is killed. This should be done afterwards in
virLXCProcessCleanup.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Prior to the rewrite of the cgroup code we only had one backend to
try. After the rewrite, virCgroupBackendGetAll() returns both
backends (for v1 and v2). However, both do not necessarily have to be
present on the system, which results in the killRecursive callback
failing, which in turn might mean we won't try the other backend.
On top of that, this function reports no error even though it should.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Not that it would matter, because the LXC driver doesn't differentiate
the job types so far, but nevertheless Destroy() should grab
LXC_JOB_DESTROY.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
The number of iothreads is not part of the vm state sent during
migration, nor exposed to the guest ABI, so this restriction is
a mistake in libvirt. Let's remove that bit of code.
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Signed-off-by: Jie Wang <wangjie88@huawei.com>
The device attribute does not have the dotted "portAddr" format.
Instead it has the single-number format described by "usbAddr", which
corresponds to the device parsing code in
virDomainHostdevSubsysUSBDefParseXML.
It looks like [1] mistakenly changed the device format for hostdev
devices, and [2] copy-and-pasted this for hostdev network interfaces.
[1] 31710a53 Modify USB port to be defined as a port path
[2] 3b1c191f conf: parse/format type='hostdev' network interfaces
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
The checks and error messages are mostly the same across
all virtio-input devices, so we can avoid having multiple
copies of the same code.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Turns out different versions of QEMU on the same architecture
produce the same output, so we can have a single output file
per architecture instead of duplicating the same data over and
over again.
Spotted-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
It will not work. This breaks qemu capabilities probing as a user.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
For normal starts (no incoming migration) the refresh of the QEMU
state must be done before the vCPUs are started, since otherwise
there might be a race condition between a possible shutdown of the
guest OS and the QEMU monitor queries.
This fixes "qemu: migration: Refresh device information after
transferring state" (93db7eea1b).
Signed-off-by: Marc Hartmayer <mhartmay@linux.ibm.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
If a domain has a disk that is type='network' we require a specific
cache mode to allow migration with it (either 'directsync' or
'none'). This doesn't make much sense, since network disks are
supposed to be safe to migrate by default.
At the same time, we should be checking the actual source type, not
the apparent type set in the domain XML.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Storage pools might want to specify the format of the image when
translating the volume, thus we can't add any default format when
parsing the XML.
Add an explicit format when starting the VM if the format was provided
neither by the user nor by the storage pool translation function.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The post-parse callback adds the 'raw' type only for local files.
Remote files can also have a backing store (even a local one), so we
should do this also for network-backed storage.
Note that virStorageFileGetMetadata always considers files with no
type to be raw, so we will not accidentally traverse the backing chain
and allow unexpected files to be labelled with sVirt labels.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Modify some existing tests of network-based disks to omit the storage
format specification.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
In commit f80eae8c2a I was too aggressive in removing properties of
-drive for empty drives. It turns out that qemu actually persists the
state of 'readonly' and the throttling information even for the empty
drive.
Removing 'readonly' thus made qemu open any subsequent images added via
the 'change' command as RW which was forbidden by selinux thanks to the
restrictive sVirt label for readonly media.
Fix this by formatting the property again, bump the tests, and leave
a note detailing why the rest of the properties need to be skipped.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Missing semicolon at the end of macros can confuse some analyzers
(like cppcheck <filename>). VIR_ONCE_GLOBAL_INIT is almost
exclusively called without an ending semicolon, but let's
standardize on using one like the other macros.
Add a dummy struct definition at the end of the macro, so
the compiler will require callers to add a semicolon.
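A minimal illustration of the trick (not libvirt's actual
VIR_ONCE_GLOBAL_INIT definition): ending the macro with a dummy struct
forward declaration means the expansion is only valid C once the caller
supplies the trailing semicolon.

  /* Illustrative-only macro: the final dummy struct declaration is
   * left unterminated so that the caller's semicolon completes it. */
  #define EXAMPLE_GLOBAL_INIT(name) \
      static void name ## Initialize(void) \
      { \
          /* one-time setup would go here */ \
      } \
      struct name ## SemicolonDummy

  EXAMPLE_GLOBAL_INIT(myModule);  /* OK: semicolon completes the declaration */
  /* EXAMPLE_GLOBAL_INIT(myModule)   without one, this would not compile */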
Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
Missing semicolon at the end of macros can confuse some analyzers
(like cppcheck <filename>), and we have a mix of semicolon and
non-semicolon usage through the code. Let's standardize on using
a semicolon for VIR_LOG_INIT calls.
Drop the semicolon from the final statement of the macro, so
the compiler will require callers to add a semicolon.
Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
Missing semicolon at the end of macros can confuse some analyzers
(like cppcheck <filename>), and we have a mix of semicolon and
non-semicolon usage through the code. Let's standardize on using
a semicolon for VIR_ENUM_IMPL calls.
Move the verify() statement to the end of the macro and drop
the semicolon, so the compiler will require callers to add a
semicolon.
While we are touching these call sites, standardize on putting
the closing parenthesis on its own line, as discussed here:
https://www.redhat.com/archives/libvir-list/2019-January/msg00750.html
Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
Missing semicolon at the end of macros can confuse some analyzers
(like cppcheck <filename>), and we have a mix of semicolon and
non-semicolon usage through the code. Let's standardize on using
a semicolon for VIR_ENUM_DECL calls.
Drop the semicolon from the final statement of the macro, so
the compiler will require callers to add a semicolon.
Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
Just before pushing the series containing commit 3bba4825 I had added
a "return true" to the top of virFirewallDZoneExists() to measure the
impact of calling that function once per network during startup. I
found that the effect was minimal, but forgot to remove the "return
true" before pushing. This unfortunately causes a failure to start
networks on systems that have a firewalld version that doesn't support
our libvirt zone file (i.e. pretty much everyone).
This patch removes the unintended line.
Signed-off-by: Laine Stump <laine@laine.org>
Document that using bhyve:commandline is not fully
supported and may cause issues.
Signed-off-by: Roman Bogorodskiy <bogorodskiy@gmail.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
When using custom command line arguments, warn that
this configuration is not fully supported.
Signed-off-by: Roman Bogorodskiy <bogorodskiy@gmail.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
- Remove ATTRIBUTE_UNUSED for the "buf" argument, it's not unused
- Indent fix
Signed-off-by: Roman Bogorodskiy <bogorodskiy@gmail.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Since we're setting the zone anyway, it will be useful to allow
setting a different (custom) zone for each network. This will be done
by adding a "zone" attribute to the "bridge" element, e.g.:
...
<bridge name='virbr0' zone='myzone'/>
...
If a zone is specified in the config and it can't be honored, this
will be an error.
Signed-off-by: Laine Stump <laine@laine.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
This patch restores broken guest network connectivity after a host
firewalld is switched to using an nftables backend. It does this by
adding libvirt networks' bridge interfaces to the new "libvirt" zone
in firewalld.
After this patch, the bridge interface of any network created by
libvirt (when firewalld is active) will be added to the firewalld
zone called "libvirt" if it exists (regardless of the firewalld
backend setting). This behavior does *not* depend on whether or not
libvirt has installed the libvirt zone file (set with
"--with[out]-firewalld-zone" during the configure phase of the package
build).
If the libvirt zone doesn't exist (either because the package was
configured to not install it, or possibly it was installed, but
firewalld doesn't support rule priorities, resulting in a parse
error), the bridge will remain in firewalld's default zone, which
could be innocuous (in the case that the firewalld backend is
iptables, guest networking will still function properly with the
bridge in the default zone), or it could be disastrous (if the
firewalld backend is nftables, we can be assured that guest networking
will fail). In order to be unobtrusive in the former case, and
informative in the latter, when the libvirt zone doesn't exist we
then check the firewalld version to see if it's new enough to support
the nftables backend, and then if the backend is actually set to
nftables, before logging an error (and failing the net-start
operation, since the network couldn't possibly work anyway).
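Roughly, the resulting logic looks like the sketch below; the helper
signatures, the backend constants, and the version threshold are
assumptions for illustration, not the exact code.

  #include <stdbool.h>

  /* assumed prototypes for the firewalld helpers used by this series */
  bool virFirewallDZoneExists(const char *zone);
  int virFirewallDInterfaceSetZone(const char *iface, const char *zone);
  int virFirewallDGetVersion(unsigned long *version);
  int virFirewallDGetBackend(void);

  /* illustrative backend values only */
  enum { EXAMPLE_BACKEND_IPTABLES = 1, EXAMPLE_BACKEND_NFTABLES = 2 };

  static int
  exampleSetupBridgeZone(const char *bridge)
  {
      unsigned long version = 0;

      /* preferred path: move the bridge into the dedicated "libvirt" zone */
      if (virFirewallDZoneExists("libvirt"))
          return virFirewallDInterfaceSetZone(bridge, "libvirt");

      /* no "libvirt" zone: only treat this as fatal if firewalld is new
       * enough to have an nftables backend and is actually using it, since
       * then guest networking cannot work from the default zone */
      if (virFirewallDGetVersion(&version) == 0 &&
          version >= 6000 /* assumed first version with nftables backend */ &&
          virFirewallDGetBackend() == EXAMPLE_BACKEND_NFTABLES)
          return -1;  /* caller logs the error and fails net-start */

      return 0;       /* iptables backend: default zone is good enough */
  }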
When the libvirt zone is used, network behavior is *slightly*
different from behavior of previous libvirt. In the past, libvirt
network behavior would be affected by the configuration of firewalld's
default zone (usually "public"), but now it is affected only by the
"libvirt" zone), and thus almost surely warrants a release note for
any distro upgrading to libvirt 5.1 or above. Although it's
unfortunate that we have to deal with a mandatory behavior change, the
architecture of multiple hooks makes it impossible to *not* change
behavior in some way, and the new behavior is arguably better (since
it will now be possible to manage access to the host from virtual
machines vs from public interfaces separately).
Creates-and-Resolves: https://bugzilla.redhat.com/1650320
Resolves: https://bugzilla.redhat.com/1638342
Signed-off-by: Laine Stump <laine@laine.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
In the past (when both libvirt and firewalld used iptables), if either
libvirt's rules *OR* firewalld's rules accepted a packet, it would
be accepted. This was because libvirt and firewalld rules were
processed during the same kernel hook, and a single ACCEPT result
would terminate the rule traversal and cause the packet to be
accepted.
But now firewalld can use nftables for its backend, while libvirt's
firewall rules are still using iptables; iptables rules are still
processed, but at a different time during packet processing
(i.e. during a different hook) than the firewalld nftables rules. The
result is that a packet must be accepted by *BOTH* the libvirt
iptables rules *AND* the firewalld nftable rules in order to be
accepted.
This causes pain because
1) libvirt always adds rules to permit DNS and DHCP (and sometimes
TFTP) from guests to the host network's bridge interface. But
libvirt's bridges are in firewalld's "default" zone (which is usually
the zone called "public"). The public zone allows ssh, but doesn't
allow DNS, DHCP, or TFTP. So even though libvirt's rules allow the
DHCP and DNS traffic, the firewalld rules (now processed during a
different hook) don't, thus guests connected to libvirt's bridges can't
acquire an IP address from DHCP, nor can they make DNS queries to the
DNS server libvirt has setup on the host. (This could be solved by
modifying the default firewalld zone to allow DNS and DHCP, but that
would open *all* interfaces in the default zone to those services,
which is most likely not what the host's admin wants.)
2) Even though libvirt adds iptables rules to allow forwarded traffic
to pass the iptables hook, firewalld's higher level "rich rules" don't
yet have the ability to configure the acceptance of forwarded traffic
(traffic that is going somewhere beyond the host), so any traffic that
needs to be forwarded from guests to the network beyond the host is
rejected during the nftables hook by the default zone's "default
reject" policy (which rejects all traffic in the zone not specifically
allowed by the rules in the zone, whether that traffic is destined to
be forwarded or locally received by the host).
libvirt can't send "direct" nftables rules (firewalld only supports
direct/passthrough rules for iptables), so we can't solve this problem
by just sending explicit nftables rules instead of explicit iptables
rules (which, if it could be done, would place libvirt's rules in the
same hook as firewalld's native rules, and thus eliminate the need for
packets to be accepted by both libvirt's and firewalld's own rules).
However, we can take advantage of a quirk in firewalld zones that have
a default policy of "accept" (meaning any packet that doesn't match a
specific rule in the zone will be *accepted*) - this default accept will
also accept forwarded traffic (not just traffic destined for the host).
Of course we don't want to modify firewalld's default zone in that
way, because that would affect the filtering of traffic coming into
the host from other interfaces using that zone. Instead, we will
create a new zone called "libvirt". The libvirt zone will have a
default policy of accept so that forwarded traffic can pass and list
specific services that will be allowed into the host from guests (DNS,
DHCP, SSH, and TFTP).
But the same default accept policy that fixes forwarded traffic also
causes *all* traffic from guest to host to be accepted. To close this
new hole, the libvirt zone can take advantage of a new feature in
firewalld (currently slated for firewalld-0.7.0) - priorities for rich
rules - to add a low priority rule that rejects all local traffic (but
leaves alone all forwarded traffic).
So, our new zone will start with a list of services that are allowed
(dhcp, dns, tftp, and ssh to start, but configurable via any firewalld
management application, or direct editing of the zone file in
/etc/firewalld/zones/libvirt.xml), followed by a low priority
<reject/> rule (to reject all other traffic from guest to host), and
finally with a default policy of accept (to allow forwarded traffic).
This patch only creates the zonefile for the new zone, and implements
a configure.ac option to selectively enable/disable installation of
the new zone. A separate patch contains the necessary code to actually
place bridge interfaces in the libvirt zone.
Why do we need a configure option to disable installation of the new
libvirt zone? It uses a new firewalld attribute that sets the priority
of a rich rule; this feature first appears in firewalld-0.7.0 (unless
it has been backported to an earlier firewalld by a downstream
maintainer). If the file were installed on a system with firewalld
that didn't support rule priorities, firewalld would log an error
every time it restarted, causing confusion and lots of extra bug
reports.
So we add two new configure.ac switches to avoid polluting the system
logs with this error on systems that don't support rule priorities -
"--with-firewalld-zone" and "--without-firewalld-zone". A package
builder can use these to include/exclude the libvirt zone file in the
installation. If firewalld is enabled (--with-firewalld), the default
is --with-firewalld-zone, but it can be disabled during configure
(using --without-firewalld-zone). Targets that are using a firewalld
version too old to support the rule priority setting in the libvirt
zone file can simply add --without-firewalld-zone to their configure
commandline.
These switches only affect whether or not the libvirt zone file is
*installed* in /usr/lib/firewalld/zones, but have no effect on whether
or not libvirt looks for a zone called libvirt and tries to use it.
NB: firewalld zones can only be added to the permanent config of
firewalld, and won't be loaded/enabled until firewalld is restarted,
so at package install/upgrade time we have to restart firewalld. For
rpm-based distros, this is done in the libvirt.spec file by calling
the %firewalld_restart rpm macro, which is a part of the
firewalld-filesystem package. (For distros that don't use rpm
packages, the command "firewall-cmd --reload" will have the same
effect).
Signed-off-by: Laine Stump <laine@laine.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
virFirewallDGetBackend() reports whether firewalld is currently using
an iptables or an nftables backend.
virFirewallDGetVersion() learns the version of the firewalld running
on this system and returns it as 1000000*major + 1000*minor + micro.
virFirewallDGetZones() gets a list of all currently active firewalld
zones.
virFirewallDInterfaceSetZone() sets the firewalld zone of the given
interface.
virFirewallDZoneExists() can be used to learn whether or not a
particular zone is present and active in firewalld.
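For instance, with the encoding described above, a check for
firewalld >= 0.7.0 (the first release with rich-rule priorities) might
look like this sketch; the prototype is an assumption, not copied from
the header.

  #include <stdbool.h>

  int virFirewallDGetVersion(unsigned long *version);  /* assumed prototype */

  static bool
  exampleFirewallDHasRulePriorities(void)
  {
      unsigned long version = 0;

      if (virFirewallDGetVersion(&version) < 0)
          return false;

      /* 1000000 * major + 1000 * minor + micro, so 0.7.0 encodes as 7000 */
      return version >= 7000;
  }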
Signed-off-by: Laine Stump <laine@laine.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
In preparation for adding several other firewalld-specific functions,
separate the code that's unique to firewalld from the more-generic
"firewall" file.
Signed-off-by: Laine Stump <laine@laine.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Support for firewalld is a feature that can be selectively enabled or
disabled (using --with-firewalld/--without-firewalld), not merely
something that must be accounted for in the code if it is present with
no exceptions. It is more consistent with other usage in libvirt to
use WITH_FIREWALLD rather than HAVE_FIREWALLD.
Signed-off-by: Laine Stump <laine@laine.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1657468
Commit be1bb6c95 changed the way volumes were stored from a forward
linked list to a hash table. In doing so, it required that each vol
object would have 3 unique values as keys into tables - key, name,
and path. Due to how vHBA/NPIV LUNs are created/used this resulted
in a failure to utilize all the LUNs found during processing.
During virStorageBackendSCSINewLun processing, fetch the key (or
serial value) for NPIV LUNs using virStorageFileGetNPIVKey, which
will formulate a more unique key based on the serial value and
the port for the LUN.
Signed-off-by: John Ferlan <jferlan@redhat.com>
ACKed-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>