When a system has enabled the iptables/ip6tables services rather than
firewalld, there is no explicit ordering of the start of those
services vs. libvirtd. This creates a problem when libvirtd.service is
started before ip[6]tables, as the latter, when it finally is started,
will remove all of the iptables rules that had previously been added
by libvirt, including the custom chains where libvirt's rules are
kept. This results in an error message similar to the following when a
user subsequently tries to start a new libvirt network:
"Error while activating network: Call to virNetworkCreate failed:
internal error: Failed to apply firewall rules
/usr/sbin/ip6tables -w --table filter --insert LIBVIRT_FWO \
--in-interface virbr2 --jump REJECT:
ip6tables: No chain/target/match by that name."
(Prior to logging this error, it also would have caused a failure to
forward (or block) traffic in some cases, e.g. for guests on a NATed
network, since libvirt's forward/block rules had all been deleted and
libvirt didn't know about it, so it couldn't fix the problem.)
When this happens, the problem can be remedied by simply restarting
libvirtd.service (which has the side-effect of reloading all
libvirt-generated firewall rules).
Instead, we can explicitly state in the libvirtd.service file that
libvirtd.service should start after iptables.service and
ip6tables.service, eliminating the race condition that leads to the
error.
There is also nothing (that I can see) in the systemd .service files
to guarantee that firewalld.service will be started (if enabled) prior
to libvirtd.service. The same error scenario given above would occur
if libvirtd.service started before firewalld.service. Even before
that, though, libvirtd would have detected that firewalld.service was
disabled and turned off all firewalld support. So, for example,
firewalld's libvirt zone wouldn't be used, and most likely traffic
from guests would therefore be blocked (all with no external
indication of the source of the problem other than a debug-level log,
emitted when libvirtd was started, saying that firewalld wasn't in
use); libvirtd also wouldn't notice when firewalld reloaded its rules
(which simultaneously deletes all of libvirt's rules).
I'm not aware of any reports that have been traced back to
libvirtd.service starting before firewalld.service, but I have seen
that error reported multiple times, and I also don't see an existing
dependency that would guarantee firewalld.service starts before
libvirtd.service, so it's possible this has been happening and we just
haven't gotten to the bottom of it.
This patch adds an After= line to the libvirtd.service file for each
of iptables.service, ip6tables.service, and firewalld.service, which
should guarantee that libvirtd.service isn't started until systemd has
started whichever of the others is enabled.
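As a rough sketch (the placement and exact set of lines are my
assumption of what such a change looks like, not a quote of the
patch), the [Unit] section of libvirtd.service would gain
ordering-only dependencies:

  [Unit]
  # Ordering only: none of these services is pulled in if disabled,
  # but whichever one is enabled must finish starting before libvirtd.
  After=firewalld.service
  After=iptables.service
  After=ip6tables.service

Since After= expresses ordering rather than requirement, hosts that
use none of these firewall services are unaffected.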
This race was diagnosed, and patch proposed, by Jason Montleon in
https://bugzilla.redhat.com/1723698 . At the time (April 2019) danpb
agreed with him that this change to libvirtd.service was a reasonable
thing to do, but I guess everyone thought someone else was going to
post a patch, so in the end nobody did.
Signed-off-by: Laine Stump <laine@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
The default memlock limit is 64k, which is not enough to start a single
VM. The requirement for one VM is 12k (8k for the eBPF map and 4k for
the eBPF program), yet with a 64k limit creating the eBPF map and
program fails. Through testing I figured out that the minimal limit to
start a single VM with functional eBPF is 80k, and adding another 12k
lets me start one more.
This leads to the following calculation: an 80k memlock limit was
enough to start a VM with eBPF, which means there is 68k of locked
memory that I was not able to attribute to anything specific. So, to
get a number for 4096 VMs:
68k + 12k * 4096 = 49220k
Rounding that up gives a 64M memory lock limit, enough to support 4096
VMs with the default map size, which can hold 64 entries for devices.
This should be good enough as a sane default, and users can change it
if they need to.
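To spell out the arithmetic, and as a sketch of how the default could
be expressed with systemd's standard LimitMEMLOCK directive (its
placement in the service file is my assumption):

  # per-VM eBPF cost: 8k (map) + 4k (program) = 12k
  # unattributed baseline observed at startup:  68k
  # 68k + 12k * 4096 VMs = 49220k, rounded up to 64M
  [Service]
  LimitMEMLOCK=64M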
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1807090
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
We need to give users the ability to customize the length of the
shutdown timeout, or even disable timeouts entirely. Thus we must move
the timeout arg into the sysconf file, instead of the service unit.
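A sketch of the resulting split, assuming the usual
EnvironmentFile/sysconfig arrangement (paths and the example value are
illustrative, not quoted from the patch):

  # libvirtd.service
  [Service]
  EnvironmentFile=-/etc/sysconfig/libvirtd
  ExecStart=/usr/sbin/libvirtd $LIBVIRTD_ARGS

  # /etc/sysconfig/libvirtd
  # Exit after this many seconds of inactivity; remove the option to
  # disable the shutdown timeout entirely.
  LIBVIRTD_ARGS="--timeout 120"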
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
To facilitate upgrades from earlier versions of libvirt which did not
use socket activation for libvirtd, we want to allow the libvirtd socket
units to be disabled (masked). This can only be supported if we use the
weaker Wants statement instead of Requires.
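Roughly, in libvirtd.service (the socket unit names listed are
assumptions for illustration):

  [Unit]
  # Wants= (unlike Requires=) still lets the daemon start when a user
  # has masked the socket units, e.g. after upgrading from a libvirt
  # that predates socket activation.
  Wants=libvirtd.socket
  Wants=libvirtd-ro.socket
  Wants=libvirtd-admin.socket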
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Substitute in the @sysconfigdir@ value instead of /etc.
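For example (a minimal sketch; the specific line is illustrative), a
reference like

  EnvironmentFile=-/etc/sysconfig/libvirtd

in the unit template becomes

  EnvironmentFile=-@sysconfigdir@/libvirtd

so the build-time configuration directory is respected.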
Reviewed-by: Christophe de Dinechin <dinechin@redhat.com>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The xencommons service provides all the essential services such as
xenstored, xenconsoled, etc. needed by the libvirt Xen driver, so
libvirtd should be started after xencommons.
The xendomains service uses Xen's xl tool to operate on any domains it
finds running, even those managed by libvirt. Add a Conflicts on the
xendomains service to ensure it is not enabled when libvirtd is enabled.
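A sketch of the added directives in libvirtd.service (section
placement is my assumption):

  [Unit]
  # Wait for xenstored, xenconsoled, etc. before the Xen driver starts.
  After=xencommons.service
  # xendomains drives domains with xl behind libvirt's back, so the
  # two services must not be enabled at the same time.
  Conflicts=xendomains.service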
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Shutting down the daemon after 30 seconds of being idle is a little bit
too aggressive. Especially when using 'virsh' in single-shot mode, as
opposed to interactive shell mode, it would not be unusual to have
more than 30 seconds between commands. This leads to the daemon
repeatedly shutting down and starting up in the middle of a series of
commands.
Increasing the shutdown timer to 2 minutes will make it less likely that
the daemon will shutdown while the user is in the middle of a series of
commands.
Reviewed-by: Jim Fehlig <jfehlig@suse.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Since we have socket activation available now, we can let the system
libvirtd exit when it is idle. This allows it to still perform
autostart when the host boots up, but if nothing was started it will
quickly exit again until some management app connects to the socket.
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
We don't do socket activation of libvirtd, since we need to
unconditionally start libvirtd in order to perform autostart. This
doesn't mean we can't have systemd socket units. Some use cases will
not need libvirt's autostart and are thus free to use activation.
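A minimal sketch of such a socket unit (the unit name and socket path
are assumptions for illustration):

  # libvirtd.socket
  [Socket]
  ListenStream=/run/libvirt/libvirt-sock
  Service=libvirtd.service

  [Install]
  WantedBy=sockets.target

Enabling it remains optional, since libvirtd is still started
unconditionally to handle autostart.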
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
At daemon startup we query logind for host PM support status. Without
a service dependency, host startup can trigger libvirtd errors like:
error : virNodeSuspendSupportsTarget:336 : internal error: Cannot probe for
supported suspend types
warning : virQEMUCapsInit:949 : Failed to get host power management
capabilities
https://bugzilla.redhat.com/show_bug.cgi?id=1588288
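The fix is an ordering dependency on logind in libvirtd.service;
roughly (the exact form is my assumption):

  [Unit]
  # Ensure logind is up before libvirtd probes host PM capabilities.
  After=systemd-logind.service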
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>