libvirt was unconditionally calling virNetDevBandwidthClear() for
every interface (and network bridge) of a type that supported
bandwidth, whether it actually had anything set or not. This doesn't
hurt anything (unless ifname == NULL!), but is wasteful.
This patch makes sure that all calls to virNetDevBandwidthClear() are
qualified by checking that the interface really had some bandwidth
setup done, and checks for a null ifname inside
virNetDevBandwidthClear(), silently returning success if it is null
(as well as removing the ATTRIBUTE_NONNULL from that function's
prototype, since we can't guarantee that it is never null,
e.g. sometimes a type='ethernet' interface has no ifname as it is
provided on the fly by qemu).
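The qualified-call pattern ends up looking roughly like this (a simplified sketch, not the literal patch; the field names are illustrative):
/* inside virNetDevBandwidthClear(): a NULL ifname is silently a no-op */
int
virNetDevBandwidthClear(const char *ifname)
{
    if (!ifname)
        return 0;   /* e.g. a type='ethernet' iface whose tap qemu creates later */
    /* ... run the tc commands that tear down the root qdisc/filters ... */
    return 0;
}
/* callers: only clear when some bandwidth was actually configured */
if (def->bandwidth &&
    virNetDevBandwidthClear(def->ifname) < 0)
    goto error;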
The function that parses the <forward> subelement of a network used to
fail/log an error if the network definition contained both a <pf>
element as well as at least one <interface> or <address> element. That
check was present because the configuration of a network should have
either one <pf>, one or more <interface>, or one or more <address>,
but never combinations of multiple kinds.
This caused a problem when libvirtd was restarted with a network
already active - when a network with a <pf> element is started, the
referenced PF (Physical Function of an SRIOV-capable network card) is
checked for VFs (Virtual Functions), and the <forward> is filled in
with a list of all VFs for that PF either in the form of their PCI
addresses (a list of <address>) or their netdev names (a list of
<interface>); the <pf> element is not removed though. When libvirtd is
restarted, it parses the network status and finds both the original
<pf> from the config, as well as the list of either <address> or
<interface>, fails the parse, and the network is not added to the
active list. This failure is often obscured because the network is
marked as autostart so libvirt immediately restarts it.
It seems odd to me that <interface> and <address> are stored in the
same array rather than keeping two separate arrays, and having
separate arrays would have made the check much simpler. However,
changing to use two separate arrays would have required changes in
more places, potentially creating more conflicts and (more
importantly) more possible regressions in the event of a backport, so
I chose to keep the existing data structure in order to localize the
change.
It appears that this problem has been in the code ever since support
for <pf> was added (0.9.10), but until commit
34cc3b2f10 (first in libvirt 1.2.4)
networks with interface pools were not properly marked as active on
restart anyway, so there is no point in backporting this patch any
further than that.
Not all files we want to find using virFileFindResource{,Full} are
generated when libvirt is built; some of them (such as RNG schemas) are
distributed with the sources. The current API was not able to find source
files if libvirt was built in VPATH.
Both the RNG schemas and cpu_map.xml are distributed in the source tarball.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When defining and creating networks, we have been checking to make
sure there is only a single "default" portgroup, but haven't verified
that no two portgroups have the same name. We *do* check for multiple
definitions when updating the portgroups in an existing network
though.
This patch adds a check to networkValidate(), which is called when a
network is defined or created, to disallow duplicate names. It would
actually make sense to do this in the network XML parser (since it's
not really "something that might make sense but isn't supported by
this driver", but is instead "something that should never be
allowed"), but doing that carries the danger of causing errors when
rereading the config of existing networks when libvirtd is restarted
after an upgrade, and that would result in networks disappearing from
libvirt's list. (I'm thinking I should change the error to "XML_ERROR"
instead of "UNSUPPORTED", even though that's not the type of error
that networkValidate is intended for)
This resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1115858
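A minimal sketch of the kind of check added to networkValidate() (the real patch may differ in error code and wording; nPortGroups/portGroups follow the existing virNetworkDef field names):
size_t i, j;
for (i = 0; i < def->nPortGroups; i++) {
    for (j = i + 1; j < def->nPortGroups; j++) {
        if (STREQ(def->portGroups[i].name, def->portGroups[j].name)) {
            virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
                           _("multiple <portgroup> elements with the same name (%s)"),
                           def->portGroups[i].name);
            return -1;
        }
    }
}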
For stateless, client side drivers, it is never correct to
probe for secondary drivers. It is only ever appropriate to
use the secondary driver that is associated with the
hypervisor in question. As a result the ESX & HyperV drivers
have both been forced to do hacks where they register no-op
drivers for the ones they don't implement.
For stateful, server side drivers, we always just want to
use the same built-in shared driver. The exception is
virtualbox which is really a stateless driver and so wants
to use its own server side secondary drivers. To deal with
this virtualbox has to be built as 3 separate loadable
modules to allow registration to work in the right order.
This can all be simplified by introducing a new struct
recording the precise set of secondary drivers each
hypervisor driver wants:
struct _virConnectDriver {
    virHypervisorDriverPtr hypervisorDriver;
    virInterfaceDriverPtr interfaceDriver;
    virNetworkDriverPtr networkDriver;
    virNodeDeviceDriverPtr nodeDeviceDriver;
    virNWFilterDriverPtr nwfilterDriver;
    virSecretDriverPtr secretDriver;
    virStorageDriverPtr storageDriver;
};
Instead of registering the hypervisor driver, we now
just register a virConnectDriver instead. This allows
us to remove all probing of secondary drivers. Once we
have chosen the primary driver, we immediately know the
correct secondary drivers to use.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
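With this in place a driver registers a single virConnectDriver; a hedged sketch of what a stateless driver's registration might look like (all names are illustrative, assuming the virRegisterConnectDriver() helper that accompanies this struct):
static virConnectDriver fooConnectDriver = {
    .hypervisorDriver = &fooHypervisorDriver,
    .interfaceDriver = &fooInterfaceDriver,
    .networkDriver = &fooNetworkDriver,
    .storageDriver = &fooStorageDriver,
    /* secondary drivers the hypervisor doesn't implement simply stay NULL */
};
int
fooRegister(void)
{
    return virRegisterConnectDriver(&fooConnectDriver, false);
}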
This adds a new "localOnly" attribute on the domain element of the
network xml. With this set to "yes", DNS requests under that domain
will only be resolved by libvirt's dnsmasq, never forwarded upstream.
This was how it worked before commit f69a6b987d, and I found that
functionality useful. For example, I have my host's NetworkManager
dnsmasq configured to forward that domain to libvirt's dnsmasq, so I can
easily resolve guest names from outside. But if libvirt's dnsmasq
doesn't know a name and forwards it to the host, I'd get an endless
forwarding loop. Now I can set localOnly="yes" to prevent the loop.
Signed-off-by: Josh Stone <jistone@redhat.com>
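For example (an illustrative snippet based on the description above), a network whose definition contains <domain name='example.com' localOnly='yes'/> will have libvirt's dnsmasq answer names under example.com itself rather than forwarding unresolved names in that domain to the host.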
Moving the code for parsing and formatting network routes to
networkcommon_conf makes it possible to reuse those routes for domains. The
route definition has been hidden to help reduce the number of unnecessary
checks in the format function.
Lack of a lease (whether mac is given or not) is a normal expected
scenario, since we are already filling in rv with nleases (which is
okay as 0 if there is no lease). There is no need to raise an error.
This fixes:
> virsh # net-dhcp-leases --mac 00:50:56:c0:00:01 default
> error: Failed to get leases info for default
> error: internal error: no lease with matching MAC address: 00:50:56:c0:00:01
Signed-off-by: Nehal J Wani <nehaljw.kkd1@gmail.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
When the bridge device for a network has macTableManager='libvirt' the
intent is that all kernel management of the bridge's MAC table
(Forwarding Database, or fdb, in the case of a Linux Host Bridge) be
disabled, with libvirt handling updates to the table instead. The
setup required for the bridge itself is:
1) set the "vlan_filtering" property of the bridge device to 1.
2) If the bridge has a "Dummy" tap device used to set a fixed MAC
address on the bridge (which is always the case for a bridge created
by libvirt, and never the case for a bridge created by the host system
network config), turn off learning and unicast_flood on this tap (this
is needed even though this tap is never IFF_UP, because the kernel
ignores the IFF_UP flag of devices when using their settings to
automatically decide whether or not to turn off promiscuous mode for
any attached device).
(1) is done both for libvirt-created/managed bridges, and for bridges
that are created by the host system config, while (2) is done only for
bridges created by libvirt (i.e. for forward modes of nat, routed, and
isolated bridges)
There is no attempt to turn vlan_filtering off when destroying the
network because in the case of a libvirt-created bridge, the bridge is
about to be destroyed anyway, and in the case of a system bridge, if
the other devices attached to the bridge could operate properly before
destroying libvirt's network object, they will continue to operate
properly (this is similar to the way that libvirt will enable
ip_forwarding whenever a routed/natted network is started, but will
never attempt to disable it when the network is stopped).
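In kernel terms, steps (1) and (2) above come down to a few sysfs writes on the bridge and its dummy tap (shown with the typical libvirt names virbr0/virbr0-nic; this is only a rough sketch, as the real code goes through libvirt's virNetDevBridge helpers rather than writing sysfs directly):
#include <stdio.h>
static int
write_sysfs(const char *path, const char *val)
{
    FILE *fp = fopen(path, "w");
    if (!fp)
        return -1;
    int ret = (fputs(val, fp) == EOF) ? -1 : 0;
    if (fclose(fp) != 0)
        ret = -1;
    return ret;
}
static int
setup_bridge_fdb_control(void)
{
    /* (1) enable VLAN filtering on the bridge itself */
    if (write_sysfs("/sys/class/net/virbr0/bridge/vlan_filtering", "1") < 0)
        return -1;
    /* (2) only for a libvirt-created bridge: disable learning/flooding on
     * the dummy tap that holds the bridge's fixed MAC address */
    if (write_sysfs("/sys/class/net/virbr0/brif/virbr0-nic/learning", "0") < 0 ||
        write_sysfs("/sys/class/net/virbr0/brif/virbr0-nic/unicast_flood", "0") < 0)
        return -1;
    return 0;
}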
At the time that the network driver allocates a connection to a
network, the tap device that will be used hasn't yet been created -
that will be done later by qemu (or lxc or whoever) - but if the
network has macTableManager='libvirt', then when we do get around to
creating the tap device, we will need to add an entry for it to the
network bridge's fdb (forwarding database) *and* turn off learning and
unicast_flood for that tap device in the bridge's sysfs settings. This
means that qemu needs to know both the bridge name as well as the
setting of macTableManager, so we either need to create a new API to
retrieve that info, or just pass it back in the ActualNetDef that is
created during networkAllocateActualDevice. We choose the latter
method, since it's already done for the bridge device, and it has the
side effect of making the information available in domain status.
(NB: in the future, I think that the tap device should actually be
created by networkAllocateActualDevice(), as that will solve several
other problems, but that is a battle for another day, and this
information will still be useful outside the network driver)
When the actualType of a virDomainNetDef is "network", it means that
we are connecting to a libvirt-managed network (routed, natted, or
isolated) which does use a bridge device (created by libvirt). In the
past we have required drivers such as qemu to call the public API to
retrieve the bridge name in this case (even though it is available in
the NetDef's ActualNetDef if the actualType is "bridge", i.e. an
externally-created bridge that isn't managed by libvirt). There is no
real reason for this difference, and as a matter of fact it
complicates things for qemu. Also, there is another bridge-related
attribute (macTableManager) that will need to be available in both
cases, so this makes things consistent.
In order to avoid problems when restarting libvirtd after an update
from an older version that *doesn't* store the network's bridgename in
the ActualNetDef, we also need to put it in place during
networkNotifyActualDevice() (this function is run for each interface
of each domain whenever libvirtd is restarted).
Along with making the bridge name available in the internal object, it
is also now reported in the <source> element of the <interface> state
XML (or the <actual> subelement in the internally-stored format).
The one oddity about this change is that usually there is a separate
union for every different "type" in a higher level object (e.g. in the
case of a virDomainNetDef there are separate "network" and "bridge"
members of the union that pivots on the type), but in this case
network and bridge types both have exactly the same attributes, so the
"bridge" member is used for both type==network and type==bridge.
https://bugzilla.redhat.com/show_bug.cgi?id=1115292
In one of the previous commits (eafb53fe) we disallowed
network-wide bandwidth for some network types. However, we
forgot about <portgroups/>, which can have <bandwidth/> too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Coverity pointed out that in other places we always check the return
value from virJSONValueObjectGetNumberLong() but not in the new addition
in leaseshelper. To solve the issue, and to be more robust in case
somebody corrupts the file, skip outputting the lease entry when
the expiry time is missing.
This patch enables the helper program to detect events triggered when
there is a change in lease length, expiry, or client-id. This
transfers complete control of leases database to libvirt and obsoletes
use of the lease database file (<network-name>.leases). That file will
not be created, read, or written. This is achieved by adding the option
--leasefile-ro to dnsmasq and passing a custom env var to leaseshelper,
which helps us map events related to leases with their corresponding
network bridges, no matter what the event is.
Also, this requires the addition of a new non-lease entry in our custom
lease database: "server-duid". It is required to identify a DHCPv6
server.
Now that dnsmasq doesn't maintain its own leases database, it relies on
our helper program to tell it about previous leases and server duid.
Thus, this patch makes our leases program honor an extra action: "init",
in which it sends the known info in a particular format to dnsmasq
by printing it to stdout.
The drawback of this change is that upgrading to this new approach does
not transfer the existing leases for the network if the leaseshelper
wasn't already used.
Starting from libvirt-1.2.4, network state XML files moved to another
directory (see commit b9e95491) and libvirt automatically migrates the
network state files to a new location. However, the code used
dirent.d_type which is not supported by all filesystems. Thus, when
libvirt was upgraded on a host which used such filesystem, network state
XMLs were not properly moved and running networks disappeared from
libvirt.
This patch falls back to lstat() whenever dirent.d_type is DT_UNKNOWN to
fix this issue.
https://bugzilla.redhat.com/show_bug.cgi?id=1167145
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
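The fallback itself is the standard readdir()/lstat() idiom (a generic sketch, not the literal libvirt code, which uses its own error reporting and path helpers):
#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <sys/stat.h>
static int
entry_is_regular_file(const char *dirpath, const struct dirent *ent)
{
    struct stat sb;
    char path[PATH_MAX];
    if (ent->d_type != DT_UNKNOWN)
        return ent->d_type == DT_REG;   /* filesystem filled in d_type */
    /* d_type not supported on this filesystem: ask it explicitly */
    snprintf(path, sizeof(path), "%s/%s", dirpath, ent->d_name);
    if (lstat(path, &sb) < 0)
        return -1;
    return S_ISREG(sb.st_mode);
}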
The shared network driver is stateful and inside the daemon so
there is no need to use the networkPrivateData field to get the
driver handle. Just access the global driver handle directly.
Many places already directly accessed the global driver handle
in any case, so the code could never work without relying on
this.
As is done with other items such as vlan, virtualport, and bandwidth,
set the actual trustGuestRxFilters value to be used by a domain
interface according to a merge of the same attribute in the interface,
portgroup, and network in use. The interface setting always takes
precedence (if specified), followed by portgroup, and finally the
setting in the network is used if it's not specified in the interface
or portgroup.
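A sketch of the merge order, using libvirt's tri-state convention where "absent" means the attribute was not specified at that level (exact field and accessor names in the patch may differ):
virTristateBool trust = iface->trustGuestRxFilters;
if (trust == VIR_TRISTATE_BOOL_ABSENT && portgroup)
    trust = portgroup->trustGuestRxFilters;
if (trust == VIR_TRISTATE_BOOL_ABSENT)
    trust = netdef->trustGuestRxFilters;
iface->data.network.actual->trustGuestRxFilters = trust;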
If the VIR_STRDUP(exptime,...) fails, then we will jump to cleanup, so
there is no need to check whether exptime is set. That check is what
caused Coverity to issue a complaint about the virStrToLong_ll call:
exptime was not checked for NULL there, while it was checked for the
reference right after the VIR_STRDUP().
Signed-off-by: John Ferlan <jferlan@redhat.com>
Our style overwhelmingly uses hanging braces (the open brace
hangs at the end of the compound condition, rather than on
its own line), with the primary exception of the top level function
body. Fix the few remaining outliers, before adding a syntax
check in a later patch.
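For illustration, the layout being enforced looks like this (not taken from any of the files listed below):
/* preferred: the brace hangs at the end of the (possibly multi-line) condition */
if (some_condition &&
    another_condition) {
    do_something();
}
/* the top-level function body keeps its brace on its own line */
int
some_function(void)
{
    return 0;
}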
* src/interface/interface_backend_netcf.c (netcfStateReload)
(netcfInterfaceClose, netcf_to_vir_err): Correct use of { in
compound statement.
* src/conf/domain_conf.c (virDomainHostdevDefFormatSubsys)
(virDomainHostdevDefFormatCaps): Likewise.
* src/network/bridge_driver.c (networkAllocateActualDevice):
Likewise.
* src/util/virfile.c (virBuildPathInternal): Likewise.
* src/util/virnetdev.c (virNetDevGetVirtualFunctions): Likewise.
* src/util/virnetdevmacvlan.c
(virNetDevMacVLanVPortProfileCallback): Likewise.
* src/util/virtypedparam.c (virTypedParameterAssign): Likewise.
* src/util/virutil.c (virGetWin32DirectoryRoot)
(virFileWaitForDevices): Likewise.
* src/vbox/vbox_common.c (vboxDumpNetwork): Likewise.
* tests/seclabeltest.c (main): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Martin Kletzander pointed out in email that my commit 2a193f64
introduced a crash in networkCreateInterfacePool() during startup of
any network that doesn't have a <pf> subelement of its <forward>
element. He also supplied a patch.
http://www.redhat.com/archives/libvir-list/2014-August/msg00655.html
I expanded on that patch by cleaning up now-extraneous checks in the
callers of networkCreateInterfacePool().
Fortunately the offending patch hasn't been in any release, and hasn't
been (to my knowledge) backported to any other branch.
When a network is defined with "<pf dev='xyz'/>", libvirt will query
sysfs to learn the list of all virtual functions (VF) associated with
that Physical Function (PF) then populate the network's interface pool
accordingly. This action was previously done only when the first guest
actually requested an interface from the network. This patch changes
it to populate the pool immediately when the network is started. This
way any problems with the PF or its VFs will become apparent sooner.
Note that we can't remove the old calls to networkCreateInterfacePool
that happen whenever a guest requests an interface - doing so would be
asking for failures on hosts that had libvirt upgraded with a network
that had been started but not yet used.
This resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1047818
networkCreateInterfacePool was a bit loose in its error cleanup, which
could result in a network definition with interfaces in the pool that
were NULL. This would in turn lead to a libvirtd crash when a guest
tried to attach an interface using the network with that pool.
In particular this would happen when creating a pool to be used for
macvtap connections. macvtap needs the netdev name of the virtual
function in order to use it, and each VF only has a netdev name if it
is currently bound to a network driver. If one of the VFs of a PF
happened to be bound to the pci-stub or vfio-pci driver (indicating
it's already in use for PCI passthrough), or no driver at all, it
would have no name. In this case networkCreateInterfacePool would
return an error, but would leave the netdef->forward.nifs set to the
total number of VFs in the PF. The interface attach that triggered
calling of networkCreateInterfacePool (it uses a "lazy fill" strategy)
would simply fail, but the very next attempt to attach an interface
using the same network pool would result in a crash.
This patch refactors networkCreateInterfacePool to bring it more in
line with current coding practices (label name, use of a switch with
no default case) as well as providing the following two changes to
behavior:
1) If a VF with no netdev name is encountered, just log a warning and
continue; only fail if exactly 0 devices are found to put in the pool.
2) If the function fails, clean up any partial interface pool and set
netdef->forward.nifs to 0.
This resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1111455
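A condensed sketch of the resulting loop (names simplified; the real function uses libvirt's VF-enumeration helpers and richer error reporting):
size_t nifs = 0;
for (i = 0; i < num_virt_fns; i++) {
    if (!vf_name[i]) {   /* VF bound to vfio-pci/pci-stub, or to no driver at all */
        VIR_WARN("skipping VF %zu of PF '%s': no netdev name", i, pfname);
        continue;
    }
    if (VIR_STRDUP(netdef->forward.ifs[nifs].device.dev, vf_name[i]) < 0)
        goto cleanup;
    nifs++;
}
if (nifs == 0) {
    virReportError(VIR_ERR_INTERNAL_ERROR,
                   _("no usable VFs found for PF '%s'"), pfname);
    goto cleanup;   /* cleanup frees the partial pool and resets forward.nifs to 0 */
}
netdef->forward.nifs = nifs;   /* count only what was actually filled in */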
Otherwise this beautiful error would be overwritten when
the function is called with a really high rate number:
2014-07-28 12:51:47.920+0000: 2304: error : virCommandWait:2399 :
internal error: Child process (/sbin/tc class add dev vnet0 parent 1:
classid 1:1 htb rate 4294968kbps) unexpected exit status 1: Illegal "rate"
Usage: ... qdisc add ... htb [default N] [r2q N]
default minor id of class to which unclassified packets are sent {0}
r2q DRR quantums are computed as rate in Bps/r2q {10}
debug string of 16 numbers each 0-3 {0}
... class add ... htb rate R1 [burst B1] [mpu B] [overhead O]
[prio P] [slot S] [pslot PS]
[ceil R2] [cburst B2] [mtu MTU] [quantum Q]
rate rate allocated to this class (class can still borrow)
burst max bytes burst which can be accumulated during idle period {computed}
mpu minimum packet size used in rate computations
overhead per-packet size overhead used in rate computations
linklay adapting to a linklayer e.g. atm
ceil definite upper class rate (no borrows) {rate}
cburst burst but for ceil {computed}
mtu max packet size we create rate map for {1600}
prio priority of leaf; lowe
https://bugzilla.redhat.com/show_bug.cgi?id=1043735
libvirt previously only touched an interface's disable_ipv6 setting in
sysfs if it needed to be set to 1, assuming that 0 is the
default. Apparently that isn't always the case though (kernel 3.15.7-1
in Arch Linux reportedly defaults a new interface's disable_ipv6
setting to 1) so this patch explicitly sets it to 0 or 1 as
appropriate.
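The knob in question is the per-interface disable_ipv6 entry under /proc/sys; the change amounts to always writing an explicit value instead of only ever writing "1" (a generic sketch; the real code uses libvirt's file utilities):
#include <stdio.h>
static int
set_disable_ipv6(const char *ifname, int disable)
{
    char path[256];
    FILE *fp;
    int ret = 0;
    snprintf(path, sizeof(path),
             "/proc/sys/net/ipv6/conf/%s/disable_ipv6", ifname);
    if (!(fp = fopen(path, "w")))
        return -1;
    if (fprintf(fp, "%d\n", disable ? 1 : 0) < 0)   /* explicit 0 as well as 1 */
        ret = -1;
    if (fclose(fp) != 0)
        ret = -1;
    return ret;
}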
Replace:
    if (virBufferError(&buf)) {
        virBufferFreeAndReset(&buf);
        virReportOOMError();
        ...
    }
with:
    if (virBufferCheckError(&buf) < 0)
        ...
This should not be a functional change (unless some callers
misused the virBuffer APIs - a different error would be reported
then)
Instead of maintaining two very similar APIs, add the "@mac" parameter
to virNetworkGetDHCPLeases and kill virNetworkGetDHCPLeasesForMAC. Both
of those functions would return data the same way, so making @mac an
optional filter simplifies a lot of stuff.
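From an application's point of view one call now covers both cases; a small example against the public API (error handling trimmed), where a NULL @mac returns every lease of the network:
#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>
int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    virNetworkPtr net = virNetworkLookupByName(conn, "default");
    virNetworkDHCPLeasePtr *leases = NULL;
    /* pass a MAC to filter, or NULL to fetch all leases of the network */
    int n = virNetworkGetDHCPLeases(net, "52:54:00:11:56:b3", &leases, 0);
    for (int i = 0; i < n; i++) {
        printf("%s -> %s\n", leases[i]->mac, leases[i]->ipaddr);
        virNetworkDHCPLeaseFree(leases[i]);
    }
    free(leases);
    virNetworkFree(net);
    virConnectClose(conn);
    return n < 0;
}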
Don't free individual JSON array members as the array will be freed at
the end. This may potentially lead to a crash although it didn't crash
on my setup.
Query the network driver for the path of the custom leases file for the given
virtual network and parse it to retrieve info.
src/network/bridge_driver.c:
* Implement networkGetDHCPLeases
* Implement networkGetDHCPLeasesForMAC
* Implement networkGetDHCPLeasesHelper
When copying entries from the old lease file into the new array the old
code would copy the pointer of the json object into the second array
without removing it from the first. Afterwards when both arrays were
freed this might lead to a crash due to access of already freed memory.
Refactor the code to use the new array item stealing helper added to the
json code so that the entry resides just in one array.
We create 'lease_new' when adding a new lease entry; later
in the code we add 'lease_new' into 'leases_array_new', which
leads to a crash because we double-free 'lease_new'.
To prevent the double free we set 'lease_new' to NULL after it has been
successfully appended to 'leases_array_new'.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Commit baafe668 introduced the new leaseshelper with a crash caused by
freeing an env string. Calling 'getenv()' inside 'virGetEnvAllowSUID()' may
return a static string and we definitely should not free it.
The author probably wanted to free a copy of that string.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
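The ownership rule being fixed, in a nutshell (a generic sketch; the real code uses virGetEnvAllowSUID() and VIR_STRDUP()):
#include <stdlib.h>
#include <string.h>
/* getenv() returns a pointer into the environment: never free it.
 * Duplicate it first if you need a string you own (and may free). */
static char *
copy_env(const char *name)
{
    const char *val = getenv(name);      /* borrowed, do NOT free */
    return val ? strdup(val) : NULL;     /* caller frees only the copy */
}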
If the leasehelper_path couldn't be found the code would leak the
freshly constructed command structure. Re-arrange code to avoid the
problem.
Found by Coverity, broken by baafe668fa.
In "src/conf/domain_conf.h" there are many enum declarations. The
cleanup in this header file was started, but it wasn't enough and
there are many other files that have enum variables declared. So, the
commit was starting to get big. This commit finishes the cleanup in this
header file and in other files that have enum variables, parameters,
or functions declared.
Signed-off-by: Julio Faracco <jcfaracco@gmail.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
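The pattern being applied is the usual libvirt one (purely illustrative names):
/* before: "enum virFooBar"; after: a typedef, so declarations read naturally */
typedef enum {
    VIR_FOO_BAR_NONE = 0,
    VIR_FOO_BAR_SOMETHING,
    VIR_FOO_BAR_LAST
} virFooBar;
void virFooSetBar(virFooBar bar);   /* instead of "enum virFooBar bar" */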
Introduce a helper program to catch events from dnsmasq and maintain a custom
lease file per network. It supports DHCPv4 and DHCPv6. The file is saved as
"<interface-name>.status".
Each lease contains the following info:
<expiry-time (epoch time)> <mac> <iaid> <ip-address> <hostname> <clientid>
Example of custom leases file content:
[
    {
        "iaid": "1221229",
        "ip-address": "2001:db8:ca2:2:1::95",
        "mac-address": "52:54:00:12:a2:6d",
        "hostname": "Fedora20",
        "client-id": "00:04:1a:c1:d9:6b:5a:0a:e2:bc:f8:4b:1e:37:2e:38:22:55",
        "expiry-time": 1393244216
    },
    {
        "ip-address": "192.168.150.208",
        "mac-address": "52:54:00:11:56:b3",
        "hostname": "Wani-PC",
        "client-id": "01:52:54:00:11:56:b3",
        "expiry-time": 1393244248
    }
]
src/Makefile.am:
* Add options to compile the helper program
src/network/bridge_driver.c:
* Introduce networkDnsmasqLeaseFileNameCustom()
* Invoke helper program along with dnsmasq
* Delete the .status file when the corresponding network is destroyed.
src/network/leaseshelper.c
* Helper program to create the custom lease file
In "src/conf/" there are many enumeration (enum) declarations.
Similar to the recent cleanup of the "src/util" directory, it's
better to use a typedef for variable types, function types and
other usages. Other enumerations and folders will be changed to
typedefs in the future. Most of the files changed in this commit
are related to Network (network_conf.* and interface_conf.*) enums.
Signed-off-by: Julio Faracco <jcfaracco@gmail.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
The check for a network being active during interface attach was being
done individually in several places (by both the lxc driver and the
qemu driver), but those places were too specific, leading to it *not*
being checked when allocating a connection/device from a macvtap or
hostdev network.
This patch puts a single check in networkAllocateActualDevice(), which
is always called before any network interface is attached to any
type of domain. It also removes all the other now-redundant checks
from the lxc and qemu drivers.
NB: the following patches are prerequisites for this patch, in the
case that it is backported to any branch:
440beeb network: fix virNetworkObjAssignDef and persistence
8aaa5b6 network: create statedir during driver initialization
b9e9549 network: change location of network state xml files
411c548 network: set macvtap/hostdev networks active if their state
file exists
This fixes:
https://bugzilla.redhat.com/show_bug.cgi?id=880483
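The consolidated check is essentially the following, near the top of networkAllocateActualDevice() (a sketch; the exact message and label differ in the real patch):
if (!virNetworkObjIsActive(network)) {
    virReportError(VIR_ERR_OPERATION_INVALID,
                   _("network '%s' is not active"),
                   network->def->name);
    goto error;
}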
libvirt attempts to determine at startup time which networks are
already active, and set their active flags. Previously it has done
this by assuming that all networks are inactive, then setting the
active flag if the network has a bridge device associated with it and
that bridge device exists. This is not useful for macvtap and hostdev
based networks, since they do not use a bridge device.
Of course the reason that such a check had to be done was that the
presence of a status file in the network "stateDir" couldn't be
trusted as an indicator of whether or not a network was active. This
was due to the network driver mistakenly using
/var/lib/libvirt/network to store the status files, rather than
/var/run/libvirt/network (similar to what is done by every other
libvirt driver that stores status xml for its objects). The difference
is that /var/run is cleared out when the host reboots, so you can be
assured that the state file you are seeing isn't just left over from a
previous boot of the host.
Now that the network driver has been switched to using
/var/run/libvirt/network for status, we can also modify it to assume
that any network with an existing status file is by definition active
- we do this when reading the status file. To fine tune the results,
networkFindActiveConfigs() is changed to networkUpdateAllState(),
and only sets active = 0 if the conditions for particular network
types are *not* met.
The result is that during the first run of libvirtd after the host
boots, there are no status files, so no networks are active. Any time
libvirtd is restarted, any network with a status file will be marked
as active (unless the network uses a bridge device and that device for
some reason doesn't exist).
For some reason these have been stored in /var/lib, although other
drivers (e.g. qemu and lxc) store their state files in /var/run.
It's much nicer to store state files in /var/run because it is
automatically cleared out when the system reboots. We can then use
existence of the state file as a convenient indicator of whether or
not a particular network is active.
Since changing the location of the state files by itself will cause
problems in the case of a *live* upgrade from an older libvirt that
uses /var/lib (because current status of active networks will be
lost), the network driver initialization has been modified to migrate
any network state files from /var/lib to /var/run.
This will not help those trying to *downgrade*, but in practice this
will only be problematic in two cases:
1) If there are networks with network-wide bandwidth limits configured
*and in use* by a guest during a downgrade to "old" libvirt. In this
case, the class ID's used for that network's tc rules, as well as
the currently in-use bandwidth "floor" will be forgotten.
2) If someone does this: 1) upgrade libvirt, 2) downgrade libvirt, 3)
modify running state of network (e.g. add a static dhcp host, etc),
4) upgrade. In this case, the modifications to the running network
will be lost (but not any persistent changes to the network's
config).
This directory should be created when the network driver is first
started up, not just when a dhcp daemon is run. This hasn't posed a
problem in the past, because the directory has always been
pre-existing.
Experimentation showed that if virNetworkCreateXML() was called for a
network that was already defined, and then the network was
subsequently shut down, the network would continue to be persistent
after the shutdown (expected/desired), but the original config would
be lost in favor of the transient config sent in with
virNetworkCreateXML() (which would then be the new persistent config)
(obviously unexpected/not desired).
To fix this, virNetworkObjAssignDef() has been changed to:
1) properly save/free network->def and network->newDef for all the
various combinations of live/active/persistent, including some
combinations that were previously considered to be an error but didn't
need to be (e.g. setting a "live" config for a network that isn't yet
active but soon will be - that was previously considered an error,
even though in practice it can be very useful).
2) automatically set the persistent flag whenever a new non-live
config is assigned to the network (and clear it when the non-live
config is set to NULL). The libvirt network driver no longer directly
manipulates network->persistent, but instead relies entirely on
virNetworkObjAssignDef() to do the right thing automatically.
After this patch, the following sequence will behave as expected:
virNetworkDefineXML(X)
virNetworkCreateXML(X') (same name but some config different)
virNetworkDestroy(X)
At the end of these calls, the network config will remain as it was
after the initial virNetworkDefine(), whereas previously it would take
on the changes given during virNetworkCreateXML().
Another effect of this tighter coupling between a) setting a !live def
and b) setting/clearing the "persistent" flag, is that future patches
which change the details of network lifecycle management
(e.g. upcoming patches to fix detection of "active" networks when
libvirtd is restarted) will find it much more difficult to break
persistence functionality.
The networkCheckRouteCollision, networkAddFirewallRules and
networkRemoveFirewallRules APIs all take a virNetworkObjPtr
instance, but only ever access the 'def' member. It thus
simplifies testing if the APIs are changed to just take a
virNetworkDefPtr instead
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Update the iptablesXXXX methods so that instead of directly
executing iptables commands, they populate rules in an
instance of virFirewallPtr. The bridge driver can thus
construct the ruleset and then invoke it in one operation
having rollback handled automatically.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
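The usage pattern of the virFirewall API, roughly (a sketch of the idea rather than code lifted from the patch; the rule shown mimics one of the DHCP input rules):
virFirewallPtr fw = virFirewallNew();
virFirewallStartTransaction(fw, 0);
virFirewallAddRule(fw, VIR_FIREWALL_LAYER_IPV4,
                   "--table", "filter",
                   "--insert", "INPUT",
                   "--in-interface", "virbr0",
                   "--protocol", "udp",
                   "--destination-port", "67",
                   "--jump", "ACCEPT", NULL);
/* ... queue the rest of the ruleset ... */
if (virFirewallApply(fw) < 0) {
    /* anything already added by this transaction has been rolled back */
}
virFirewallFree(fw);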
Since it is an abbreviation, PCI should always be fully
capitalized or full lower case, never Pci.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
A patch submitted by Steven Malin last week pointed out a problem with
libvirt's DNS SRV record configuration:
https://www.redhat.com/archives/libvir-list/2014-March/msg00536.html
When searching for that message later, I found another series that had
been posted by Guannan Ren back in 2012 that somehow slipped between
the cracks:
https://www.redhat.com/archives/libvir-list/2012-July/msg00236.html
That patch was very much out of date, but also pointed out some real
problems.
This patch fixes all the noted problems by refactoring
virNetworkDNSSrvDefParseXML() and networkDnsmasqConfContents(), then
verifies those fixes by added several new records to the test case.
Problems fixed:
* both service and protocol now have an underscore ("_") prepended on
the commandline, as required by RFC2782.
<srv service='sip' protocol='udp' domain='example.com'
target='tests.example.com' port='5060' priority='10'
weight='150'/>
before: srv-host=sip.udp.example.com,tests.example.com,5060,10,150
after: srv-host=_sip._udp.example.com,tests.example.com,5060,10,150
* if "domain" wasn't specified in the <srv> element, the extra
trailing "." will no longer be added to the dnsmasq commandline.
<srv service='sip' protocol='udp' target='tests.example.com'
port='5060' priority='10' weight='150'/>
before: srv-host=sip.udp.,tests.example.com,5060,10,150
after: srv-host=_sip._udp,tests.example.com,5060,10,150
* when optional attributes aren't specified, the separating comma is
also now not placed on the dnsmasq commandline. If optional
attributes in the middle of the line are not specified, they are
replaced with a default value in the commandline (1 for port, 0 for
priority and weight).
<srv service='sip' protocol='udp' target='tests.example.com'
port='5060'/>
before: srv-host=sip.udp.,tests.example.com,5060,,
after: srv-host=_sip._udp,tests.example.com,5060
(actually that would have generated an error, because "optional"
attributes weren't really optional.)
* The allowed characters for both service and protocol are now limited
to alphanumerics, plus a few special characters that are found in
existing names in /etc/services and /etc/protocols. (One exception
is that both of these files contain names with an embedded ".", but
"." can't be used in these fields of an SRV record because it is
used as a field separator and there is no method to escape a "."
into a field.) (Previously only the strings "tcp" and "udp" were
allowed for protocol, but this restriction has been removed, since
RFC2782 specifically says that it isn't limited to those, and that
anyway it is case insensitive.)
* the "domain" attribute is no longer required in order to recognize
the port, priority, and weight attributes during parsing. Only
"target" is required for this.
* if "target" isn't specified, port, priority, and weight are not
allowed (since they are meaningless - an empty target means "this
service is *not available* for this domain").
* port, priority, and weight are now truly optional, as the comments
originally suggested, but which was not actually true.
Any source file which calls the logging APIs now needs
to have a VIR_LOG_INIT("source.name") declaration at
the start of the file. This provides a static variable
of the virLogSource type.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
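For example, near the top of a source file (the module name follows the "directory.file" convention):
#include <config.h>
#include "virlog.h"
VIR_LOG_INIT("network.bridge_driver");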
Coverity found an issue in lxc_driver and uml_driver: we don't
check the return value of register functions.
I've also updated all other places and unified the way we check the
return value.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
The bridge_driver_platform.h defines many functions that
a platform driver must implement. Only two of these
functions are actually called from the main bridge driver
code. The remainder can be made internal to the linux
driver only.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
According to commit b4e0299d if networkAllocateActualDevice() was
successful, it will *always* allocate an iface->data.network.actual,
so we can use this during networkReleaseActualDevice() to know if
there is really anything to undo. We were properly using this
information to only decrement the network connections counter if it
had previously been incremented, but we were unconditionally
unplugging bandwidth and calling the "unplugged" network hook for
*all* interfaces (during qemuProcessStop()) whether they had been
previously plugged or not. This caused problems if a domain failed to
start at some time prior to all interfaces being allocated. (I
encountered this when an interface had a bandwidth floor set but no
inbound QoS).
This patch changes both the call to networkUnplugBandwidth() and the
call to networkRunHook() to only be called if there was a previous
call to "plug" for the same interface.
networkAllocateActualDevice() is called for *all* interfaces, not just
those with type='network'. In that case, it will jump down to its
validate: label immediately, without allocating anything. After
validation is done, two counters are potentially updated (one for the
network, and one for any particular physical device that is chosen),
and then networkRunHook() is called.
This patch refactors that code a slight bit so that networkRunHook()
doesn't get called if netdef is NULL (i.e. type != network) and to
place the conditional increment of dev->connections inside the "if
(netdef)" as well - dev can never be non-null if netdef is null
(because "dev" is the pointer to a device in a network's pool of
devices), so this doesn't have any functional effect, it just makes
the code clearer.
The network hook script gets called whenever an interface is plugged
into or unplugged from a network, but even though the full XML of both
the network and the domain is included, there is no reasonable way to
determine what exact resources the plugged interface is using:
1) Prior to a recent patch which modified the status XML of interfaces
to include the information about actual hardware resources used, it
would be possible to scan through the domain XML output sent to the
hook, and from there find the correct interface, but that interface
definition would not include any runtime info (e.g. bandwidth or vlan
taken from a portgroup, or which physdev was used in case of a macvtap
network).
2) After the patch modifying the status XML of interfaces, the network
name would no longer be included in the domain XML, so it would be
completely impossible to determine which interface was the one being
plugged.
To solve that problem, this patch includes a single <interface>
element at the beginning of the XML sent to the network hook for
"plugged" and "unplugged" (just inside <hookData>) that is the status
XML of the interface being plugged. This XML will include all info
gathered from the chosen network and portgroup.
NB: due to hardcoded spaces in all of the device *Format() functions,
the <interface> element inside the <hookData> will be indented by 6
spaces rather than 2. I had intended to fix this, but it turns out
that to make virDomainNetDefFormat() indentation relative, I would
have to do the same to virDomainDeviceInfoFormat(), and that function
is called from 19 places - making that a prerequisite of this patch
would cause too many merge difficulties if we needed to backport
network hooks, so I chose to ignore the problem here and fix the
problem for *all* devices in a followup later.
Currently, networkRunHook() is called in networkAllocateActualDevice and
friends. These functions, however, don't necessarily work on networks.
For example, if a domain's interface is defined in this fashion:
<interface type='bridge'>
  <mac address='52:54:00:0b:3b:16'/>
  <source bridge='virbr1'/>
  <model type='rtl8139'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
</interface>
The networkAllocateActualDevice jumps directly onto the 'validate' label as
the interface is not of type 'network'. Hence, @network is left
initialized to NULL and networkRunHook(network, ...) is called. One of
the things that the hook function does is dereference @network. Sigh.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The networkNotifyActualDevice function is accepting two arguments, not
one:
qemu/qemu_process.c: In function 'qemuProcessNotifyNets':
qemu/qemu_process.c:2776:47: error: macro "networkNotifyActualDevice" passed 2 arguments, but takes just 1
if (networkNotifyActualDevice(def, net) < 0)
^
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Basically, the idea is copied from the domain code, where tainting
has existed for a while. Currently, only one taint reason exists -
VIR_NETWORK_TAINT_HOOK to mark those networks which caused invoking
of hook script.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
There might be some use cases, where user wants to prepare the host or
its environment prior to starting a network and do some cleanup after
the network has been shut down. Consider all the functionality that
libvirt doesn't currently have as an example of what a hook script can
possibly do.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The lack of debug printouts might be frustrating in the future.
Moreover, this function doesn't follow the usual pattern we have in the
rest of the code:
    int ret = -1;
    /* do some work */
    ret = 0;
 cleanup:
    /* some cleanup work */
    return ret;
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1058839
Commit f9f56340 for CVE-2014-0028 almost had the right idea - we
need to check the ACL rules to filter which events to send. But
it overlooked one thing: the event dispatch queue is running in
the main loop thread, and therefore does not normally have a
current virIdentityPtr. But filter checks can be based on current
identity, so when libvirtd.conf contains access_drivers=["polkit"],
we ended up rejecting access for EVERY event due to failure to
look up the current identity, even if it should have been allowed.
Furthermore, even for events that are triggered by API calls, it
is important to remember that the point of events is that they can
be copied across multiple connections, which may have separate
identities and permissions. So even if events were dispatched
from a context where we have an identity, we must change to the
correct identity of the connection that will be receiving the
event, rather than basing a decision on the context that triggered
the event, when deciding whether to filter an event to a
particular connection.
If there were an easy way to get from virConnectPtr to the
appropriate virIdentityPtr, then object_event.c could adjust the
identity prior to checking whether to dispatch an event. But
setting up that back-reference is a bit invasive. Instead, it
is easier to delay the filtering check until lower down the
stack, at the point where we have direct access to the RPC
client object that owns an identity. As such, this patch ends
up reverting a large portion of the framework of commit f9f56340.
We also have to teach 'make check' to special-case the fact that
the event registration filtering is done at the point of dispatch,
rather than the point of registration. Note that even though we
don't actually use virConnectDomainEventRegisterCheckACL (because
the RegisterAny variant is sufficient), we still generate the
function for the purposes of documenting that the filtering
takes place.
Also note that I did not entirely delete the notion of a filter
from object_event.c; I still plan on using that for my upcoming
patch series for qemu monitor events in libvirt-qemu.so. In
other words, while this patch changes ACL filtering to live in
remote.c and therefore we have no current client of the filtering
in object_event.c, the notion of filtering in object_event.c is
still useful down the road.
* src/check-aclrules.pl: Exempt event registration from having to
pass checkACL filter down call stack.
* daemon/remote.c (remoteRelayDomainEventCheckACL)
(remoteRelayNetworkEventCheckACL): New functions.
(remoteRelay*Event*): Use new functions.
* src/conf/domain_event.h (virDomainEventStateRegister)
(virDomainEventStateRegisterID): Drop unused parameter.
* src/conf/network_event.h (virNetworkEventStateRegisterID):
Likewise.
* src/conf/domain_event.c (virDomainEventFilter): Delete unused
function.
* src/conf/network_event.c (virNetworkEventFilter): Likewise.
* src/libxl/libxl_driver.c: Adjust caller.
* src/lxc/lxc_driver.c: Likewise.
* src/network/bridge_driver.c: Likewise.
* src/qemu/qemu_driver.c: Likewise.
* src/remote/remote_driver.c: Likewise.
* src/test/test_driver.c: Likewise.
* src/uml/uml_driver.c: Likewise.
* src/vbox/vbox_tmpl.c: Likewise.
* src/xen/xen_driver.c: Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1057321
pointed out that we weren't honoring the <bandwidth> element in
libvirt networks using <forward mode='bridge'/>. In fact, these
networks are just a method of giving a libvirt network name to an
existing Linux host bridge on the system, and libvirt doesn't have
enough information to know where to set such limits. We are working on
a method of supporting network bandwidths for some specific cases of
<forward mode='bridge'/>, but currently libvirt doesn't support it. So
the proper thing to do now is just log an error when someone tries to
put a <bandwidth> element in that type of network. (It's unclear if we
will be able to do proper bandwidth limiting for macvtap networks, and
most definitely we will not be able to support it for hostdev
networks).
While looking through the network XML documentation and comparing it
to the networkValidate function, I noticed that we also ignore the
presence of a mac address in the config in the same cases, rather than
failing so that the user will understand that their desired action has
not been taken.
This patch updates networkValidate() (which is called any time a
persistent network is defined, or a transient network created) to log
an error and fail if it finds either a <bandwidth> or <mac> element
and the network forward mode is anything except 'route', 'nat', or
nothing. (Yes, neither of those elements is acceptable for any macvtap
mode, nor for a hostdev network).
NB: This does *not* cause failure to start any existing network that
contains one of those elements, so someone might have erroneously
defined such a network in the past, and that network will continue to
function unmodified. I considered it too disruptive to suddenly break
working configs on the next reboot after a libvirt upgrade.
The previous patch fixed "forwardPlainNames" so that it really is
doing only what is intended, but left the default to be
"forwardPlainNames='no'". Discussion around the initial version of
that patch led to the decision that the default should instead be
"forwardPlainNames='yes'" (i.e. the original behavior before commit
f3886825). This patch makes that change to the default.
In commit f386825 we began adding the options
--domain-needed
--local=/$mydomain/
to all dnsmasq commandlines with the stated reason of preventing
forwarding of DNS queries for names that weren't fully qualified
domain names ("FQDN", i.e. a name that included some "."s and a domain
name). This was later changed to
domain-needed
local=/$mydomain/
when we moved the options from the dnsmasq commandline to a conf file.
The original patch on the list, and discussion about it, is here:
https://www.redhat.com/archives/libvir-list/2012-August/msg01594.html
When a domain name isn't specified (mydomain == ""), the addition of
"domain-needed local=//" will prevent forwarding of domain-less
requests to the virtualization host's DNS resolver, but if a domain
*is* specified, the addition of "local=/domain/" will prevent
forwarding of any requests for *qualified* names within that domain
that aren't resolvable by libvirt's dnsmasq itself.
An example of the problems this causes - let's say a network is
defined with:
<domain name='example.com'/>
<dhcp>
  ...
  <host mac='52:54:00:11:22:33' ip='1.2.3.4' name='myguest'/>
</dhcp>
This results in "local=/example.com/" being added to the dnsmasq options.
If a guest requests "myguest" or "myguest.example.com", that will be
resolved by dnsmasq. If the guest asks for "www.example.com", dnsmasq
will not know the answer, but instead of forwarding it to the host, it
will return NOT FOUND to the guest. In most cases that isn't the
behavior an admin is looking for.
A later patch (commit 4f595ba) attempted to remedy this by adding a
"forwardPlainNames" attribute to the <dns> element. The idea was that
if forwardPlainNames='yes' (default is 'no'), we would allow
unresolved names to be forwarded. However, that patch was botched, in
that it only removed the "domain-needed" option when
forwardPlainNames='yes', and left the "local=/mydomain/".
Really we should have been just including the option "--domain-needed
--local=//" (note the lack of domain name) regardless of the
configured domain of the network, so that requests for names without a
domain would be treated as "local to dnsmasq" and not forwarded, but
all others (including those in the network's configured domain) would
be forwarded. We also shouldn't include *either* of those options if
forwardPlainNames='yes'. This patch makes those corrections.
This patch doesn't remedy the fact that default behavior was changed
by the addition of this feature. That will be handled in a subsequent
patch.
This reverts commit 2996e6be19
and some parts of 2636dc8c4d.
The former one tried to implement QoS setting on bridgeless networks.
However, as discussed upstream [1], the patch is far away from being
useful in even a single case. The whole idea of network QoS is to have
aggregated limits over several interfaces. This patch is doing
completely the opposite when merging two QoS settings (from the network
and the domain interface) into one which is then set at the domain
interface itself, not the network.
The latter one is the test for the previous one. Now none of them makes
sense.
1: https://www.redhat.com/archives/libvir-list/2014-January/msg01441.html
Conflicts:
tests/virnetdevbandwidthtest.c: New test has been introduced since
then.
https://bugzilla.redhat.com/show_bug.cgi?id=1055484
Currently, libvirt's XML schema of network allows QoS to be defined for
every network even though it has no bridge. For instance:
<network>
  <name>vdsm-no-bridge</name>
  <forward mode='passthrough'>
    <interface dev='em1.10'/>
  </forward>
  <bandwidth>
    <inbound average='1000' peak='5000' burst='1024'/>
    <outbound average='1000' burst='1024'/>
  </bandwidth>
</network>
The bandwidth limitations can be, however, applied even on such
networks. In fact, they are going to be applied on the interface that
will be connected to the network on a domain startup. This approach,
however, has one limitation. With bridged networks, there are two points
where QoS can be set: bridge and domain interface. The lower limit of
the two is enforced then. For instance, if the interface has 10Mbps
average, but the network only 1Mbps, there's no way for interface to
transmit packets faster than the 1Mbps limit. With two points this is
enforced by kernel. With only one point, we must combine both QoS
settings into one which is set afterwards. Look at
virNetDevBandwidthMinimal() and you'll understand immediately what I
mean.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Ever since ACL filtering was added in commit 7639736 (v1.1.1), a
user could still use event registration to obtain access to a
domain that they could not normally access via virDomainLookup*
or virConnectListAllDomains and friends. We already have the
framework in the RPC generator for creating the filter, and
previous cleanup patches got us to the point that we can now
wire the filter through the entire object event stack.
Furthermore, whether or not domain:getattr is honored, use of
global events is a form of obtaining a list of domains, which
is covered by connect:search_domains added in a93cd08 (v1.1.0).
Ideally, we'd have a way to enforce connect:search_domains when
doing global registrations while omitting that check on a
per-domain registration. But this patch just unconditionally
requires connect:search_domains, even when no list could be
obtained, based on the following observations:
1. Administrators are unlikely to grant domain:getattr for one
or all domains while still denying connect:search_domains - a
user that is able to manage domains will want to be able to
manage them efficiently, but efficient management includes being
able to list the domains they can access. The idea of denying
connect:search_domains while still granting access to individual
domains is therefore not adding any real security, but just
serves as a layer of obscurity to annoy the end user.
2. In the current implementation, domain events are filtered
on the client; the server has no idea if a domain filter was
requested, and must therefore assume that all domain event
requests are global. Even if we fix the RPC protocol to
allow for server-side filtering for newer client/server combos,
making the connect:search_domains ACL check conditional on
whether the domain argument was NULL won't benefit older clients.
Therefore, we choose to document that connect:search_domains
is a pre-requisite to any domain event management.
Network events need the same treatment, with the obvious
change of using connect:search_networks and network:getattr.
* src/access/viraccessperm.h
(VIR_ACCESS_PERM_CONNECT_SEARCH_DOMAINS)
(VIR_ACCESS_PERM_CONNECT_SEARCH_NETWORKS): Document additional
effect of the permission.
* src/conf/domain_event.h (virDomainEventStateRegister)
(virDomainEventStateRegisterID): Add new parameter.
* src/conf/network_event.h (virNetworkEventStateRegisterID):
Likewise.
* src/conf/object_event_private.h (virObjectEventStateRegisterID):
Likewise.
* src/conf/object_event.c (_virObjectEventCallback): Track a filter.
(virObjectEventDispatchMatchCallback): Use filter.
(virObjectEventCallbackListAddID): Register filter.
* src/conf/domain_event.c (virDomainEventFilter): New function.
(virDomainEventStateRegister, virDomainEventStateRegisterID):
Adjust callers.
* src/conf/network_event.c (virNetworkEventFilter): New function.
(virNetworkEventStateRegisterID): Adjust caller.
* src/remote/remote_protocol.x
(REMOTE_PROC_CONNECT_DOMAIN_EVENT_REGISTER)
(REMOTE_PROC_CONNECT_DOMAIN_EVENT_REGISTER_ANY)
(REMOTE_PROC_CONNECT_NETWORK_EVENT_REGISTER_ANY): Generate a
filter, and require connect:search_domains instead of weaker
connect:read.
* src/test/test_driver.c (testConnectDomainEventRegister)
(testConnectDomainEventRegisterAny)
(testConnectNetworkEventRegisterAny): Update callers.
* src/remote/remote_driver.c (remoteConnectDomainEventRegister)
(remoteConnectDomainEventRegisterAny): Likewise.
* src/xen/xen_driver.c (xenUnifiedConnectDomainEventRegister)
(xenUnifiedConnectDomainEventRegisterAny): Likewise.
* src/vbox/vbox_tmpl.c (vboxDomainGetXMLDesc): Likewise.
* src/libxl/libxl_driver.c (libxlConnectDomainEventRegister)
(libxlConnectDomainEventRegisterAny): Likewise.
* src/qemu/qemu_driver.c (qemuConnectDomainEventRegister)
(qemuConnectDomainEventRegisterAny): Likewise.
* src/uml/uml_driver.c (umlConnectDomainEventRegister)
(umlConnectDomainEventRegisterAny): Likewise.
* src/network/bridge_driver.c
(networkConnectNetworkEventRegisterAny): Likewise.
* src/lxc/lxc_driver.c (lxcConnectDomainEventRegister)
(lxcConnectDomainEventRegisterAny): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
While comparing network and domain events, I noticed that the
test driver had to do a cast in one place and not the other.
For consistency, we should hide the necessary casting as low
as possible in the stack, with everything else using saner
types.
* src/conf/network_event.h (virNetworkEventStateRegisterID): Alter
type.
* src/conf/network_event.c (virNetworkEventStateRegisterID): Hoist
cast here.
* src/test/test_driver.c (testConnectNetworkEventRegisterAny):
Simplify callers.
* src/remote/remote_driver.c
(remoteConnectNetworkEventRegisterAny): Likewise.
* src/network/bridge_driver.c
(networkConnectNetworkEventRegisterAny): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
When the host is configured with very restrictive firewall (default policy
is DROP for all chains, including OUTPUT), the bridge driver for Linux
adds netfilter entries to allow DHCP and DNS requests to go from the VM
to the dnsmasq of the host.
The issue that this commit fixes is the fact that a DROP policy on the OUTPUT
chain blocks the DHCP replies from the host’s dnsmasq to the VM.
As DHCP replies are sent in UDP, they are not caught by any --ctstate ESTABLISHED
rule and so need to be explicitly allowed.
Signed-off-by: Lénaïc Huard <lenaic@lhuard.fr.eu.org>
Ever since their introduction (commit 1509b80 in v0.5.0 for
virConnectDomainEventRegister, commit 4445723 in v0.8.0 for
virConnectDomainEventDeregisterAny), the event deregistration
functions have been documented as returning 0 on success;
likewise for older registration (only the newer RegisterAny
must return a non-zero callbackID). And now that we are
adding virConnectNetworkEventDeregisterAny for v1.2.1, it
should have the same semantics.
Fortunately, all of the stateful drivers have been obeying
the docs and returning 0, thanks to the way the remote_driver
tracks things (in fact, the RPC wire protocol is unable to
send a return value for DomainEventRegisterAny, at least not
without adding a new RPC number). Well, except for vbox,
which was always failing deregistration, due to failure to
set the return value to anything besides its initial -1.
But for local drivers, such as test:///default, we've been
returning non-zero numbers; worse, the non-zero numbers have
differed over time. For example, in Fedora 12 (libvirt 0.8.2),
calling Register twice would return 0 and 1 [the callbackID
generated under the hood]; while in Fedora 20 (libvirt 1.1.3),
it returns 1 and 2 [the number of callbacks registered for
that event type]. Since we have changed the behavior over
time, and since it differs by local vs. remote, we can safely
argue that no one could have been reasonably relying on any
particular behavior, so we might as well obey the docs, as well
as prepare callers that might deal with older clients to not be
surprised if the docs are not strictly followed.
For consistency, this patch fixes the code for all drivers,
even though it only makes an impact for vbox and for local
drivers. By fixing all drivers, future copy and paste from
a remote driver to a local driver is less likely to
reintroduce the bug.
Finally, update the testsuite to gain some coverage of the
issue for local drivers, including the first test of old-style
domain event registration via function pointer instead of
event id.
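A hedged sketch of the documented contract against the public API
(the connection URI is a placeholder and the callback body is a stub):

#include <stdio.h>
#include <libvirt/libvirt.h>

static int
lifecycleCb(virConnectPtr conn, virDomainPtr dom,
            int event, int detail, void *opaque)
{
    (void)conn; (void)dom; (void)event; (void)detail; (void)opaque;
    return 0;
}

int
main(void)
{
    virConnectPtr conn;

    if (virEventRegisterDefaultImpl() < 0 ||
        !(conn = virConnectOpen("test:///default")))
        return 1;

    /* old-style registration and both deregistration calls are documented
     * to return 0 on success (not a callbackID, not a callback count) */
    if (virConnectDomainEventRegister(conn, lifecycleCb, NULL, NULL) != 0)
        fprintf(stderr, "register did not return 0\n");
    if (virConnectDomainEventDeregister(conn, lifecycleCb) != 0)
        fprintf(stderr, "deregister did not return 0\n");

    virConnectClose(conn);
    return 0;
}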
* src/libvirt.c (virConnectDomainEventRegister)
(virConnectDomainEventDeregister)
(virConnectDomainEventDeregisterAny): Clarify docs.
* src/libxl/libxl_driver.c (libxlConnectDomainEventRegister)
(libxlConnectDomainEventDeregister)
(libxlConnectDomainEventDeregisterAny): Match documentation.
* src/lxc/lxc_driver.c (lxcConnectDomainEventRegister)
(lxcConnectDomainEventDeregister)
(lxcConnectDomainEventDeregisterAny): Likewise.
* src/test/test_driver.c (testConnectDomainEventRegister)
(testConnectDomainEventDeregister)
(testConnectDomainEventDeregisterAny)
(testConnectNetworkEventDeregisterAny): Likewise.
* src/uml/uml_driver.c (umlConnectDomainEventRegister)
(umlConnectDomainEventDeregister)
(umlConnectDomainEventDeregisterAny): Likewise.
* src/vbox/vbox_tmpl.c (vboxConnectDomainEventRegister)
(vboxConnectDomainEventDeregister)
(vboxConnectDomainEventDeregisterAny): Likewise.
* src/xen/xen_driver.c (xenUnifiedConnectDomainEventRegister)
(xenUnifiedConnectDomainEventDeregister)
(xenUnifiedConnectDomainEventDeregisterAny): Likewise.
* src/network/bridge_driver.c
(networkConnectNetworkEventDeregisterAny): Likewise.
* tests/objecteventtest.c (testDomainCreateXMLOld): New test.
(mymain): Run it.
(testDomainCreateXML): Check return values.
Signed-off-by: Eric Blake <eblake@redhat.com>
While the public API & wire protocol included the 'detail'
arg for network lifecycle events, the internal event handling
code did not process it. This meant that if a future libvirtd
server started sending non-0 'detail' args, the current libvirt
client would not process them.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
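For example, a client using the public API sees the propagated value
directly in its lifecycle callback; a minimal sketch (the URI is a
placeholder):

#include <stdio.h>
#include <libvirt/libvirt.h>

static void
netLifecycleCb(virConnectPtr conn, virNetworkPtr net,
               int event, int detail, void *opaque)
{
    (void)conn; (void)opaque;
    /* without this fix, the 'detail' value sent by the server would not
     * reach this parameter */
    printf("network %s: event=%d detail=%d\n",
           virNetworkGetName(net), event, detail);
}

int
main(void)
{
    virConnectPtr conn;
    int cbid;

    if (virEventRegisterDefaultImpl() < 0 ||
        !(conn = virConnectOpen("test:///default")))
        return 1;

    cbid = virConnectNetworkEventRegisterAny(conn, NULL,
                                             VIR_NETWORK_EVENT_ID_LIFECYCLE,
                                             VIR_NETWORK_EVENT_CALLBACK(netLifecycleCb),
                                             NULL, NULL);
    if (cbid >= 0)
        virConnectNetworkEventDeregisterAny(conn, cbid);
    virConnectClose(conn);
    return 0;
}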
This patch resolves:
https://bugzilla.redhat.com/show_bug.cgi?id=1035336
The basic problem is that during a network update, the required
iptables rules sometimes change, and this was being handled by simply
removing and re-adding the rules. However, the removal of the old
rules was done based on the *new* state of the network, which would
mean that some of the rules would not match those currently in the
system, so the old rules wouldn't be removed.
This patch removes the old rules prior to updating the network
definition then adds the new rules as soon as the definition is
updated. Note that this could lead to a stray packet or two during the
interim, but that was already a problem before (the period of limbo is
now just slightly longer).
While moving the location for the rules, I added a few more sections
that should result in the iptables rules being redone:
DHCP_RANGE and DHCP_HOST - these are needed because adding/removing a dhcp
host entry could lead to the dhcp service being started/stopped, which
would require the mangle rule that fixes up dhcp response
checksums to be added or removed, and this wasn't being done.
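A pseudocode-level sketch of the ordering the fix establishes (types
and helper names below are simplified stand-ins, not the bridge
driver's actual internals):

typedef struct { const char *name; } networkDef;
typedef struct { networkDef *def; } networkObj;

static void removeFirewallRules(const networkDef *def) { (void)def; /* iptables -D ... */ }
static int  addFirewallRules(const networkDef *def)    { (void)def; /* iptables -I ... */ return 0; }

static int
networkUpdateFirewall(networkObj *net, networkDef *newDef)
{
    /* delete the old rules while they still match what the old
     * definition actually installed in the kernel */
    removeFirewallRules(net->def);

    /* only now switch to the updated definition */
    net->def = newDef;

    /* re-add rules for the new definition; as noted above, a stray
     * packet or two may slip through between removal and re-add */
    return addFirewallRules(net->def);
}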
Most of our code base uses space after comma but not before;
fix the remaining uses before adding a syntax check.
* src/network/bridge_driver.c: Consistently use commas.
* src/node_device/node_device_hal.c: Likewise.
* src/node_device/node_device_udev.c: Likewise.
* src/storage/storage_backend_rbd.c: Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
This resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1020135
If networkAllocateActualDevice() had failed due to a pool of hostdev
or direct devices being depleted, the calling function could still
call networkReleaseActualDevice() as part of its cleanup, and that
function would then unconditionally decrement the connections count
for the network, even though it hadn't been incremented (due to
failure of allocate). This *was* necessary because the .actual member
of the netdef was allocated with a "lazy" algorithm, only being
created if there was a need to store data there (e.g. if a device was
allocated from a pool, or bandwidth was allocated for the device), so
there was no simple way for networkReleaseActualDevice() to tell if
something really had been allocated (i.e. if "connections++" had been
executed).
This patch changes networkAllocateActualDevice() to *always* allocate an
actual device for any netdef of type='network', even if it isn't
needed for any other reason. This has no ill effects anywhere else in
the code (except for using a small amount of memory), and
networkReleaseActualDevice() can then determine if there was a
previous successful allocate by checking for .actual != NULL (if not,
it skips the "connections--").
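Sketched in simplified form (the struct layout and names here are
illustrative assumptions, not the real virDomainNetDef internals):

#include <stddef.h>

typedef struct {
    int type;        /* e.g. the equivalent of type='network' */
    void *actual;    /* now always set by a successful allocate */
} netDef;

typedef struct { int connections; } netObj;

static void
releaseActualDevice(netObj *network, netDef *iface)
{
    /* since .actual is always allocated on a successful allocate, its
     * absence means allocation never succeeded, so there is no
     * connections++ to undo */
    if (!iface->actual)
        return;

    network->connections--;
    /* free the actual-device data here */
    iface->actual = NULL;
}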
Currently, we ignore whether dnsmasqCapsRefresh succeeds or fails. We
shouldn't do that, as we may otherwise generate a wrong dnsmasq command
line (which is built just a few lines below).
Signed-off-by: Hongwei Bi <hwbi2008@gmail.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Packets sent by guests on virbrN, *or* by dnsmasq on the same, to
- 255.255.255.255/32 (netmask-independent local network broadcast
address), or to
- 224.0.0.0/24 (local subnetwork multicast range)
are never forwarded, hence it is not necessary to masquerade them.
In fact we must not masquerade them: translating their source addresses or
source ports (where applicable) may confuse receivers on virbrN.
One example is the DHCP client in OVMF (= UEFI firmware for virtual
machines):
http://thread.gmane.org/gmane.comp.bios.tianocore.devel/1506/focus=2640
It expects DHCP replies to arrive from remote source port 67. Even though
dnsmasq conforms to that, the destination address (255.255.255.255) and
the source address (e.g. 192.168.122.1) in the reply allow the UDP
masquerading rule to match, which rewrites the source port to or above
1024. This prevents the DHCP client in OVMF from accepting the packet.
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=709418
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
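A hedged illustration of the exemption (the rule form, the nat
POSTROUTING chain, and the default 192.168.122.0/24 subnet are
assumptions; the driver adds the real rules through its own iptables
code):

#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
    static const char *dsts[] = { "255.255.255.255/32", "224.0.0.0/24" };
    char cmd[256];
    size_t i;

    for (i = 0; i < sizeof(dsts) / sizeof(dsts[0]); i++) {
        /* RETURN ahead of the MASQUERADE rule, so these packets keep
         * their original source address and source port */
        snprintf(cmd, sizeof(cmd),
                 "iptables -t nat -I POSTROUTING -s 192.168.122.0/24 -d %s -j RETURN",
                 dsts[i]);
        if (system(cmd) != 0)
            return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}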
Useful to set custom forwarders instead of using the contents of
/etc/resolv.conf. It helps me set up dnsmasq as a local nameserver to
resolve VM domain names from domain 0 when the domain option is used.
Signed-off-by: Diego Woitasen <diego.woitasen@vhgroup.net>
Signed-off-by: Eric Blake <eblake@redhat.com>
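For instance, a network using the new setting might be defined with XML
along these lines, shown here inside a hedged public-API sketch (URI,
addresses and names are placeholders):

#include <stdio.h>
#include <libvirt/libvirt.h>

int
main(void)
{
    const char *xml =
        "<network>"
        "  <name>fwd-example</name>"
        "  <forward mode='nat'/>"
        "  <bridge name='virbr10'/>"
        "  <dns>"
        /* forwarders queried instead of the contents of /etc/resolv.conf */
        "    <forwarder addr='192.168.1.1'/>"
        "    <forwarder addr='192.168.1.2'/>"
        "  </dns>"
        "  <ip address='192.168.150.1' netmask='255.255.255.0'/>"
        "</network>";
    virConnectPtr conn = virConnectOpen("test:///default");
    virNetworkPtr net;

    if (!conn)
        return 1;
    if ((net = virNetworkDefineXML(conn, xml))) {
        printf("defined network %s\n", virNetworkGetName(net));
        virNetworkFree(net);
    }
    virConnectClose(conn);
    return 0;
}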