Mark Asselstine 72d99b094b Avoid unnecessary error messages handling udev events
The udev monitor thread "udevEventHandleThread()" will lag behind the
actual/real view of devices in sysfs, as it serially processes udev
monitor events. So for instance, if you were to run the following cmd
to create a new veth pair and rename one of the veth endpoints:
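
Such a command might look like this (the exact invocation and the
interface names v0/v1/v2 are assumptions, chosen to match the event
trace below):

  # create a veth pair v0 <-> v1, then immediately rename v0 to v2
  ip link add v0 type veth peer name v1 && ip link set v0 name v2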

you might see the following timeline of monitor events (left) against
what is actually happening in sysfs (right):

                                     time
                                      |    create v0 sysfs entry
wake udevEventHandleThread            |    create v1 sysfs entry
udev_monitor_receive_device(v1-add)   |    move v0 sysfs to v2
udevHandleOneDevice(v1)               |
udev_monitor_receive_device(v0-add)   |
udevHandleOneDevice(v0)               | <--- error msgs in virNetDevGetLinkInfo()
udev_monitor_receive_device(v2-move)  |      as v0 no longer exists
udevHandleOneDevice(v2)               |
                                     \/

As you can see, the changes in sysfs can take place well before we get
to act on the corresponding events in udevEventHandleThread(), so by the
time we get around to processing the v0 'add' event, the sysfs entry has
already been moved to v2.

To work around this we check whether the sysfs entry still exists before
attempting to read it, and don't bother trying to read the link info if
it doesn't. This is safe since we will never read a sysfs entry before
it exists, i.e. if the entry is not there it has either been removed in
the time since we enumerated the device or something bigger is busted.
In either case: no sysfs entry, no link info. In the case described
above we will eventually get the link info as we work through the queue
of monitor events and reach the 'move' event.
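
A minimal sketch of the idea (plain C/POSIX only, not the actual libvirt
change; the function name and path handling are illustrative):

#include <stdio.h>
#include <unistd.h>

static int
read_link_info(const char *ifname)
{
    char path[128];

    /* the sysfs entry that the rename makes disappear */
    snprintf(path, sizeof(path), "/sys/class/net/%s", ifname);

    if (access(path, F_OK) != 0) {
        /* device renamed/removed after the udev event was queued:
         * report "no link info" quietly instead of logging an error */
        return 0;
    }

    /* ... entry still exists, safe to read operstate/speed here ... */
    return 0;
}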

https://bugzilla.redhat.com/show_bug.cgi?id=1557902

Signed-off-by: Mark Asselstine <mark.asselstine@windriver.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

       libvirt library code README
       ===========================

This directory provides the bulk of the libvirt codebase: everything
except for the libvirtd daemon and the client tools. The build uses a
large number of libtool convenience libraries, one for each child
directory, and then links them together for the final libvirt.so,
although some bits get linked directly into the libvirtd daemon instead.

The files directly in this directory support the public API
entry points & data structures.

There are three core shared modules to be aware of:

 * util/  - a collection of shared APIs that can be used by any
            code. This directory is always in the include path
            for all things built

 * conf/  - APIs for parsing / manipulating all the official XML
            files used by the public API. This directory is only
            in the include path for driver implementation modules

 * vmx/   - VMware VMX config handling (used by esx/ and vmware/)
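
For illustration, driver code pulls in headers from util/ and conf/
directly by file name, with no path prefix, because those directories
are on the include path (the header names below exist in the tree, but
the snippet itself is only a sketch):

#include "virfile.h"      /* from util/ */
#include "domain_conf.h"  /* from conf/ */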


Then there are the hypervisor implementations:

 * bhyve/        - The BSD Hypervisor
 * esx/          - VMware ESX and GSX support using vSphere API over SOAP
 * hyperv/       - Microsoft Hyper-V support using WinRM
 * lxc/          - Linux Native Containers
 * openvz/       - OpenVZ containers using CLI tools
 * qemu/         - QEMU / KVM using qemu CLI/monitor
 * remote/       - Generic libvirt native RPC client
 * test/         - A "mock" driver for testing
 * vbox/         - VirtualBox using the native API
 * vmware/       - VMware Workstation and Player using the vmrun tool
 * xen/          - Xen using hypercalls, XenD SEXPR & XenStore


Finally, there are some secondary drivers that are shared by several
hypervisors. Currently these are used by the LXC, OpenVZ, QEMU and Xen
drivers. The ESX, Hyper-V, Remote, Test & VirtualBox drivers all
implement the secondary drivers directly.

 * cpu/          - CPU feature management
 * interface/    - Host network interface management
 * network/      - Virtual NAT networking
 * nwfilter/     - Network traffic filtering rules
 * node_device/  - Host device enumeration
 * secret/       - Secret management
 * security/     - Mandatory access control drivers
 * storage/      - Storage management drivers


Since both the hypervisor and secondary drivers can be built as
dlopen()able modules, it is *FORBIDDEN* to have build dependencies
between these directories. Drivers are only allowed to depend on
the public API and on the internal APIs in the util/ and conf/
directories.
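
To make that rule concrete, here is a generic sketch of how a
dlopen()able driver module gets loaded and its registration entry point
resolved at runtime. This is not libvirt's actual loader code, and the
symbol-name convention is assumed purely for illustration.

#include <dlfcn.h>
#include <stdio.h>

typedef int (*driver_register_fn)(void);

static int
load_driver_module(const char *path, const char *regsym)
{
    void *handle = dlopen(path, RTLD_NOW | RTLD_LOCAL);
    driver_register_fn reg;

    if (!handle) {
        fprintf(stderr, "failed to load %s: %s\n", path, dlerror());
        return -1;
    }

    /* A module exposes only its own registration function; it must not
     * rely on build-time symbols from any other driver module. */
    reg = (driver_register_fn) dlsym(handle, regsym);
    if (!reg) {
        fprintf(stderr, "missing %s in %s: %s\n", regsym, path, dlerror());
        dlclose(handle);
        return -1;
    }

    return reg();
}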