So far only QEMU_MONITOR_MIGRATION_CAPS_POSTCOPY was reset, and only in
a single code path, leaving post-copy enabled in quite a few cases.
https://bugzilla.redhat.com/show_bug.cgi?id=1425003
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
It's only called from qemuMigrationReset now so it doesn't need to be
exported and {tls,sec}Alias are always NULL.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This new API is supposed to reset all migration parameters to make sure
future migrations won't accidentally use them. This patch makes the
first step and moves qemuMigrationResetTLS call inside
qemuMigrationReset.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Migration parameters are either reset by the main migration code path or
from qemuProcessRecoverMigration* in case libvirtd is restarted during
migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
A finished qemuMigrationRun does not mean the migration itself has finished
(it might have just switched to post-copy mode). While resetting TLS
parameters is probably OK at this point even if migration is still
running, we want to consolidate the code which resets various migration
parameters. Thus qemuMigrationResetTLS will be called from the Confirm
phase (or at the end of the Perform phase in case of v2 protocol), when
migration is either canceled or finished.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuProcessRecoverMigrationOut doesn't explicitly call
qemuMigrationResetTLS, relying on two things:
- qemuMigrationCancel resets TLS parameters
- our migration code resets TLS before entering
QEMU_MIGRATION_PHASE_PERFORM3_DONE phase
But this is not obvious and the assumptions will be broken soon. Let's
explicitly reset TLS parameters on all paths which do not kill the
domain.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
There is no async job running when a freshly started libvirtd is trying
to recover from an interrupted incoming migration. While at it, let's
call qemuMigrationResetTLS every time we don't kill the domain. This is
not strictly necessary since TLS is not supported when v2 migration
protocol is used, but doing so makes more sense.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Because of the changes done in the previous commit, @host is already a
migratable CPU and there's no need to do any additional filtering.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
We will need to store two more host CPU models and nested structs look
better than separate items with long complicated names.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This new internal API makes a copy of virCPUDef while removing all
features which would block migration. It uses cpu_map.xml as a database
of such features, which should only be used as a fallback when we cannot
get the data from a hypervisor. The main goal of this API is to decouple
this filtering from virCPUUpdate so that the hypervisor driver can
filter the features according to the hypervisor.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
If formatting NUMA topology fails, the function returns immediately,
but the buffer structure allocated on the stack references a lot of
heap-allocated memory which would be leaked in that case.
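A minimal sketch of the fix pattern (the formatting call is a hypothetical
stand-in; virBufferFreeAndReset is the existing helper):
virBuffer buf = VIR_BUFFER_INITIALIZER;

if (formatNUMATopology(&buf, def) < 0) {    /* illustrative call that can fail */
    /* drop the heap memory referenced by the stack-allocated buffer */
    virBufferFreeAndReset(&buf);
    return -1;
}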
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
qemuProcessVerifyHypervFeatures is supposed to check whether all
requested hyperv features were actually honored by QEMU/KVM. This is
done by checking the corresponding CPUID bits reported by the virtual
CPU. In other words, it doesn't work for string properties, such as
VIR_DOMAIN_HYPERV_VENDOR_ID (there is no CPUID bit we could check). We
could theoretically check all 96 bits corresponding to the vendor
string, but luckily we don't have to check the feature at all. If QEMU
is too old to support hyperv features, the domain won't even start.
Otherwise, it is always supported.
Without this patch, libvirt refuses to start a domain which contains
<features>
<hyperv>
<vendor_id state='on' value='...'/>
</hyperv>
</features>
reporting internal error: "unknown CPU feature __kvm_hv_vendor_id".
This regression was introduced by commit v3.1.0-186-ge9dbe7011, which
(by fixing the virCPUDataCheckFeature condition in
qemuProcessVerifyHypervFeatures) revealed an old bug in the feature
verification code. It's been there ever since the verification was
implemented by commit v1.3.3-rc1-5-g95bbe4bf5, which effectively did not
check VIR_DOMAIN_HYPERV_VENDOR_ID at all.
https://bugzilla.redhat.com/show_bug.cgi?id=1439424
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Introduce STRICT_FRAME_LIMIT_CFLAGS that will be used for
production code and RELAXED_FRAME_LIMIT_CFLAGS for tests.
Raising the limit for tests allows building them with clang
with optimizations disabled.
This header file has been created so that we can expose
internal functions to the test suite without making them
public: those in qemu_capabilities.h bearing the comment
/* Only for use by test suite */
are obvious candidates for being moved over.
This function runs an iscsi command and parses its output.
However, due to the nature of things, the virISCSIExtractSession()
callback can be called multiple times. In each run it would
allocate new memory and overwrite the variable holding the
pointer to it, thus leaking the previous allocations.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Even though the virMacMap object is not necessarily created at
the same time as the network object, the former makes no sense
without the latter and thus should be unref'd in the network
object dispose function.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Imagine that this function is called twice over the same disk
source. While in the first run all allocated memory is freed, not
all pointers are set to NULL (e.g. def->srcpool). So when called
again, these pointers are freed again, resulting in a double free.
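A hedged sketch of the pattern, assuming the srcpool member mentioned above
(the free helper name is illustrative):
virStorageSourcePoolDefFree(def->srcpool);
def->srcpool = NULL;   /* without this, a second call frees it again */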
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
After a restart of libvirtd the 'checkPool' method is supposed to validate
that the pool is online. Since libvirt then refreshes the pool contents
anyway, just return whether the pool was supposed to be online so that
the code can be reached. This is necessary since a pool that does not
implement the method is automatically considered inactive.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1436065
qemu requires that the topology equals the maximum vcpu count.
Document this along with the API to set maximum vcpu count and the XML
element.
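For example (values illustrative), the product of the topology must match the
maximum vcpu count rather than the current one:
<vcpu placement='static' current='2'>8</vcpu>
<cpu>
  <topology sockets='2' cores='2' threads='2'/>
</cpu>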
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1426220
Use the relative lookup specifier rather than the global one. Otherwise
only the first name would be looked up. Add a test case to cover the
scenario.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1436574
The native gluster pool source list data differs from the data used for
attaching gluster volumes as netfs pools. Currently the only difference
is the format. Since native pools don't use it, and there will be more
differences later, add a more deterministic way to switch between the
types instead.
Similar to commit b202c39 ignore the warning that breaks the build
with clang:
util/virnetlink.c:365:52: error: cast from 'char *' to 'struct nlmsghdr *'
increases required alignment from 1 to 4 [-Werror,-Wcast-align]
for (msg = resp; NLMSG_OK(msg, len); msg = NLMSG_NEXT(msg, len)) {
^~~~~~~~~~~~~~~~~~~~
/usr/include/linux/netlink.h:87:7: note: expanded from macro 'NLMSG_NEXT'
(struct nlmsghdr*)(((char*)(nlh)) + NLMSG_ALIGN((nlh)->nlmsg_len)))
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The buggy condition meant that vcpu0 would not be iterated in the checks.
Since it's not hotpluggable anyway, we would not be able to break the
configuration of a live VM.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1437013
As for all other devices, add the 'id' option for mdevs as well. The patch
also adjusts the test accordingly.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1438431
Signed-off-by: Erik Skultety <eskultet@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1371892
The 'capacity' value (i.e. the guest logical size) for a LUKS volume is
smaller than the 'physical' value of the file in the file system, so
we need to account for that.
When peeking at the encryption information about the volume add a fetch
of the payload_offset which is described as the offset to the start of
the volume data (in 512 byte sectors) in QEMU's QCryptoBlockLUKSHeader.
Then adjust the ->capacity appropriately when we determine that the
volume target encryption has a payload_offset value.
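In other words, roughly (field names illustrative; payload_offset is in
512-byte sectors):
if (payload_offset > 0)
    vol->target.capacity = vol->target.physical - payload_offset * 512;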
Add a check for more than one RTA_OIF, even though this is rather
unlikely.
Get rid of the buggy switch / break as this code won't need to
handle more attributes.
Use VIR_WARNINGS_NO_CAST_ALIGN to silence a warning that cannot otherwise be fixed:
util/virnetdevip.c:560:17: error: cast increases required alignment of target type [-Werror=cast-align]
Depending on the architecture, requirements for ACPI and UEFI can
be different; more specifically, while on x86 UEFI requires ACPI,
on aarch64 it's the other way around.
Enforce these requirements when validating the domain, and make
the error message more accurate by mentioning that they're not
necessarily applicable to all architectures.
Several aarch64 test cases had to be tweaked because they would
have failed the validation step otherwise.
The capabilities used in test cases should match those used
during normal operation for the tests to make any sense.
This results in the generated command line for a few test
cases (most notably non-x86 test cases that were wrongly
assuming they could use -no-acpi) changing.
Instead of having a single function that probes the
architecture from the monitor and then sets a bunch of
basic capabilities based on it, have a separate function
for each part: virQEMUCapsInitQMPArch() only sets the
architecture, and virQEMUCapsInitQMPBasicArch() only sets
the capabilities.
This split will be useful later on, when we will want to
set basic capabilities from the test suite without having
to go through the pain of mocking the monitor.
If a transient storage pool is deemed inactive after libvirtd restart it
would not be deleted from the list. Reuse virStoragePoolUpdateInactive
along with a refactor necessary to properly update the state.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1242801
After a pool is made inactive the definition objects need to be updated
(if a new definition is prepared) and transient pools need to be
completely removed. Split out the code doing these steps into a separate
function for later reuse.
When registering a storage pool backend, the code would use
virStorageTypeToString instead of virStoragePoolTypeToString. The
following message would be logged:
virDriverLoadModuleFunc:71 : Lookup function 'virStorageBackendSCSIRegister'
virStorageBackendRegister:174 : Registering storage backend '(null)'
Currently, if we want to zero out a disk source (e.g. due to
startupPolicy when starting up a domain) we use
virDomainDiskSetSource(disk, NULL). This works well for file
based storage (storage type file, dir, or block). But it doesn't
work at all for other types like volume and network.
So imagine that you have a domain with a CDROM configured
whose source is a volume from an inactive pool. Because it is
startupPolicy='optional', the CDROM is empty when the domain
starts. However, the source element is not cleared out in the
status XML and thus when the daemon restarts and tries to
reconnect to the domain it refreshes the disks (which fails - the
storage pool is still not running) and thus the domain is killed.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The virMacMap module is there for dumping [domain, <list of its
MACs>] pairs into a file so that the libvirt_guest NSS module can use
it. Whenever an interface is allocated from a network (e.g. on
domain startup or NIC hotplug), the network is notified and so,
subsequently, is the virMacMap module. The module update functions
networkMacMgrAdd() and networkMacMgrDel() gracefully handle the
case when there's no module. The problem is, the module is
created if and only if the network is freshly started, or if the
daemon restarts and the network previously had the module.
This is not very user friendly - if users want to use the NSS
module they need to destroy their network and bring it up again
(and subsequently all the domains using it).
One disadvantage of the approach implemented here is that one
may get just partial results: an already running network does
not record MAC maps, thus only newly plugged domains will be
stored in the module. The network restart scenario is of course
not affected by this. But one can argue that older libvirts had
never recorded the MAC maps anyway.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
So far our code is full of the following pattern:
dom = virGetDomain(conn, name, uuid);
if (dom)
    dom->id = 42;
There is no reason why it couldn't be just:
dom = virGetDomain(conn, name, uuid, id);
After all, the client domain representation consists of the tuple (name,
uuid, id).
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
There was an unhandled 'open' call which resulted in:
"error: Library function returned error but did not set virError"
Even if this happens during the daemon's start, when we still don't have
any set of outputs defined yet, we can safely report an error, since we
automatically fall back to stderr. That is fine both when running as a
daemonized process, since this happens before the daemon forks into the
background, and when running as a systemd service, since systemd
redirects standard outputs to journald by default.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1436060
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Commit 9e2465834 added a check that denies internal snapshots when a pflash
based loader is configured for the domain. However, if there's
none and a user tries to do an internal snapshot, they will
witness a daemon crash as in that case vm->def->os.loader is NULL
and we dereference it unconditionally.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
CPU features which change their value from disabled to enabled between
two calls to query-cpu-model-expansion (the first with no extra
properties set and the second with 'migratable' property set to false)
can be marked as enabled and non-migratable in qemuMonitorCPUModelInfo.
Since the code consuming qemuMonitorCPUModelInfo currently ignores the
migratable flag, this change is effectively changing the CPU model
advertised in domain capabilities to contain all features (even those
which block migration). And this matches what we do for QEMU older than
2.9.0, when we detect all CPUID bits ourselves without asking QEMU.
As a result of this change
<cpu mode='host-model'>
<feature name='invtsc' policy='require'/>
</cpu>
will work with all QEMU versions. Such CPU definition would be forbidden
with QEMU >= 2.9.0 without this patch.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
If calling query-cpu-model-expansion on the 'host'/'max' CPU model with
'migratable' property set to false succeeds, we know QEMU is able to
tell us which features would disable migration. Thus we can mark all
enabled features as migratable.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
QEMU is able to tell us whether a CPU feature would block migration or
not. This patch adds support for storing such features in
qemuMonitorCPUModelInfo.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When idx is 0 virStorageFileChainLookup returns the base (bottom) of the
backing chain rather than the top. This is expected by the callers of
qemuDomainGetStorageSourceByDevstr.
Add a special case for idx == 0.
One of the problems with our virGetDomain function is that it
copies just the domain name and domain UUID. Therefore it's very
easy to forget about the domain ID. This can cause bugs, like
virConnectGetAllDomainStats not reporting proper domain IDs.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1434882
Imagine the following scenario:
1) virsh net-start default
2) virsh start myFavouriteDomain
3) virsh net-destroy default
4) virsh destroy myFavouriteDomain
(assuming myFavouriteDomain has an interface from default
network)
Regardless of how unlikely this scenario looks, we should
not crash. The problem is, on net-destroy in
networkShutdownNetworkVirtual() the virMacMap module is unrefed,
but the stale pointer is kept around. Thus when the domain
destroy procedure comes in, networkReleaseActualDevice() and
subsequently networkMacMgrDel() are called. The latter sees the
stale pointer and starts calling the virMacMap module APIs, which
then operate on freed memory.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
off_t is signed and its size is the same as long only on 64-bit archs.
Thus it cannot be formatted as %lu.
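The portable idiom is to cast to long long and use the matching conversion;
a sketch (the call site is illustrative):
virBufferAsprintf(&buf, "size=%lld", (long long) offset);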
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The value we use internally to represent the lack of a memory
locking limit, VIR_DOMAIN_MEMORY_PARAM_UNLIMITED, doesn't
match the value setrlimit() and prlimit() use for the same
purpose, RLIM_INFINITY, so we have to handle the translation
ourselves.
Partially-resolves: https://bugzilla.redhat.com/1431793
For guests that use <memoryBacking><locked>, our only option
is to remove the memory locking limit altogether.
Partially-resolves: https://bugzilla.redhat.com/1431793
Instead of having a separate function, we can simply return
zero from the existing qemuDomainGetMemLockLimitBytes() to
signal the caller that the memory locking limit doesn't need
to be set for the guest.
Having a single function instead of two makes it less likely
that we will use the wrong value, which is exactly what
happened when we started applying the limit that was meant
for VFIO-using guests to <memoryBacking><locked>-using
guests.
This reverts commit c2e60ad0e5.
Turns out this check is excessively strict: there are ways
other than <memtune><hard_limit> to raise the memory locking
limit for QEMU processes, one prominent example being
tweaking /etc/security/limits.conf.
Partially-resolves: https://bugzilla.redhat.com/1431793
If if_indextoname is not defined, the whole function using it should
not be defined either. Add a stub to fix the build on mingw.
Caused by 5dd607059d
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Creating a copy of the definition we want to add in a migration cookie
makes the code cleaner and less prone to memory leaks or double free
errors.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1398087
Clean up the virsh man page description for --pool-create-as in order
to better describe how the various arguments are used when creating
(or defining) a logical pool.
Also modify the storage pool XML parsing algorithm to check for the
mismatched "name" and "source-name".
While parsing, if the storage source is not present, a default format
was not set. This could lead to oddities such as seeing an "unknown" format
in the output for a "logical" pool even though the only format the pool could
support would be "lvm2".
This does "put a label" on other pool defaults as follows:
File System: FS_AUTO
Network File System: NETFS_AUTO
Disk: UNKNOWN
Each of which is the "0" value for their respective pools and thus
would be no "real" change.
QEMU allows for TSC frequency to be explicitly set to enable migration
with invtsc (migration fails if the destination QEMU cannot set the
exact same frequency used when starting the domain on the source host).
Libvirt already supports setting the TSC frequency in the XML using
<clock>
<timer name='tsc' frequency='1234567890'/>
</clock>
which will be transformed into
-cpu Model,tsc-frequency=1234567890
on the QEMU command line.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The frequency is documented and formatted as an attribute of the <timer>
element rather than a nested <frequency> element expected by the parser.
Luckily enough, timer frequency has not been used by any driver so far.
And users were not able to set it in the XML either.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
- Make virMediatedDeviceNew() stub args match its prototype
- Fix typo: virRerportError -> virReportError
- Move MDEV_SYSFS_DEVICES definition out of the #ifdef __linux__ block
so we don't have to stub virMediatedDeviceGetSysfsPath()
https://bugzilla.redhat.com/show_bug.cgi?id=1430679
As it turns out some file headers (e.g. ext4) may be larger/longer than
the 512 bytes of zeros being written prior to a pvcreate, so let's write
out 2048 bytes, similar to how the pvcreate sources would peek at the first
4 sectors of the device.
Also make sure there are enough bytes on the device to clear before
doing the clear - just to be sure.
This adds a few validations to the devices listed for a hostdev network:
* devices must be listed by PCI address, not by netdev name
* listing a device by PCI address is valid only for hostdev networks, not
for other types of network (e.g. macvtap passthrough).
* each device in a hostdev pool must be an SR-IOV VF
Resolves: https://bugzilla.redhat.com/1004676
Previously, this function had to be called only on Linux in order
to fail gracefully. That led to an #ifdef mess in callers, so the
function was redesigned so that it failed gracefully on non-existing
files. However, because that commit forgot to define the function outside
the __linux__ ifdef, it broke non-Linux builds.
Caused by c67e04e25f.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
The public API flags are handled by the cpuBaselineXML wrapper. The
internal cpuBaseline API only needs to know whether it is supposed to
drop non-migratable features.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
cpuBaseline is responsible for computing a baseline CPU while feature
expansion is done by virCPUExpandFeatures. The cpuBaselineXML wrapper
(used by hypervisor drivers to implement virConnectBaselineCPU API)
calls cpuBaseline followed by virCPUExpandFeatures if requested by
VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES flag.
The features in the three changed test files had to be sorted using
"sort -k 3" because virCPUExpandFeatures returns a sorted list of
features.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Having to use cpuBaseline with VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES
flag to expand CPU features is strange. Not to mention that cpuBaseline
can only expand host CPU definitions (i.e., it completely ignores
feature policies). The new virCPUExpandFeatures API is designed to work
with both host and guest CPU definitions.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This is the maximum for many reasons, for starters because index ==
bus number, and a controller's bus number is 8 bits.
This incidentally resolves: https://bugzilla.redhat.com/1329090
Having this information available will make it easier to determine the
culprit when MAC or vlan tag appear to not be set, eg.:
https://bugzilla.redhat.com/1364073
(This patch doesn't fix that bug, just makes it easier to diagnose)
If an SRIOV VF has previously been used for VFIO device assignment,
the "admin MAC" that is stored in the PF driver's table of VF info
will have been set to the MAC address that the virtual machine wanted
the device to have. Setting the admin MAC for a VF also sets a flag in
the PF that is loosely called the "administratively set" flag. Once
that flag is set, it is no longer possible for the net driver of the
VF (either on the host or in a virtual machine) to directly set the
VF's MAC again; this flag isn't reset until the *PF* driver is
restarted, and that requires taking *all* VFs offline, so it's not
really feasible to do.
If the same SRIOV VF is later used for macvtap passthrough mode, the
VF's MAC address must be set, but normally we don't unbind the VF from
its host net driver (since we actually need the host net driver in
this case). Since setting the VF MAC directly will fail, in the past
"we" ("I") had tried to fix the problem by simply setting the admin MAC
(via the PF) instead. This *appeared* to work (and might have at one
time, due to promiscuous mode being turned on somewhere or something),
but it currently creates a non-working interface because only the
value for admin MAC is set to the desired value, *not* the actual MAC
that the VF is using.
Earlier patches in this series reverted that behavior, so that we once
again set the MAC of the VF itself for macvtap passthrough operation,
not the admin MAC. But that brings back the original bug - if the
interface has been used for VFIO device assignment, you can no longer
use it for macvtap passthrough.
This patch solves that problem by noticing when virNetDevSetMAC()
fails for a VF, and in that case it sets the admin MAC (via the PF) to
the desired value, then "bounces" the VF driver (by unbinding it and then
immediately rebinding it to the VF). This causes the VF's MAC to be
reinitialized from the admin MAC, and everybody is happy (until the
*next* time someone wants to set the VF's MAC address, since the
"administratively set" bit is still turned on).
Some PF drivers allow setting the admin MAC (that is the MAC address
that the VF will be initialized to the next time the VF's driver is
loaded) to 00:00:00:00:00:00, and some don't. Multiple drivers
initialize the admin MACs to all 0, but don't allow setting it to that
very same value. It has been an uphill battle convincing the driver
people that it's reasonable to expect this to work. The argument that's used is
that an all-0 MAC address on a device is invalid; however, from
an outsider's point of view, when the admin MAC is set to 0 at the
time the VF driver is loaded, the VF's MAC is *not* set to 0, but to a
random non-0 value. But that's beside the point - even if I could
convince one or two SRIOV driver maintainers to permit setting the
admin MAC to 0, there are still several other drivers.
So rather than fighting that losing battle, this patch checks for a
failure to set the admin MAC due to an all 0 value, and retries it
with 02:00:00:00:00:00. That won't result in a random value being set
in the VF MAC at next VF driver init, but that's okay, because we
always want to set a specific value anyway. Rather, the "almost 0"
setting makes it easy to visually detect from the output of "ip link
show" which VFs are currently in use and which are free.
The global functions virNetDevReplaceMacAddress(),
virNetDevReplaceNetConfig(), virNetDevRestoreMacAddress(), and
virNetDevRestoreNetConfig() are no longer used, as their functionality
has been replaced by virNetDev(Save|Read|Set)NetConfig().
The static functions virNetDevReplaceVfConfig() and
virNetDevRestoreVfConfig() were only used by the above-named global
functions that were removed.
It takes longer to explain this than to fix it...
In the past we weren't able to save the VF's own MAC address *at all*
when using it for hostdev assignment, because we had already unbound
the VF from the host net driver prior to saving its config. With the
previous patch, that problem has been solved, so we now have the VF's
MAC address saved and can move on to the *next* problem, which is twofold:
1) during teardown we restore the config before we've re-bound, so the
VF doesn't have a net driver, and thus we can't set its MAC address
directly.
2) even if we delay restoring the config until the VF is bound to a
net driver, the request to set its MAC address would fail, since
(during device setup) we had set the "admin MAC" for the VF via an
RTM_SETLINK to the PF - once you've set the admin MAC for a VF, the
VF driver (either on host or on guest) is not allowed to change the
VF's MAC address "forever" (well, until you reload the PF driver,
but that requires destroying and recreating every single VF, which
isn't something you can require).
The solution is to keep the restoration of config at the same place,
but to set the *admin MAC* to the address you want the VF to have -
when the VF net driver is later initialized (as a part of re-binding
to the VF net driver) its MAC will be initialized to the current value
of the admin MAC.
In order to properly restore the original state of an SRIOV VF when
we're finished with it, we need to save the MAC address of the VF
itself (not just the admin MAC address for the VF that is stored in
the PF). But that can only be done when the VF is still bound to the
host's netdev driver, and we have always done the saving of device
config after the VF is already bound to vfio-pci. This patch prepares
us for adding a save of the VF's MAC by calling the function that
saves netconfig earlier in the device preparation, before we've
unbound it from the host netdev driver.
These two operations will need to be separated so that saving of the
original config is done before detaching the host net driver, and
setting the new config is done after attaching vfio-pci. This patch
splits the single function into two, but for now calls them together
(to make bisecting easier if there is a regression).
virHostdevNetConfigReplace() and virHostdevNetConfigRestore() are
modified to use the new virNetDev*NetConfig() functions.
Note that due to the VF's original MAC addresses being saved after it
has already been un-bound from the host net driver, the actual current
VF MAC address won't be saved (because it no longer exists) - only the
"admin MAC" will be saved. This reflects existing behavior that will
be fixed in an upcoming patch.
This patch modifies the macvtap passthrough setup to use
virNetDevSaveNetConfig()+virNetDevSetConfig() instead of
virNetDevReplaceNetConfig() or virNetDevReplaceMacAddress(), and the
teardown to use virNetDevReadNetConfig()+virNetDevSetConfig() instead
of virNetDevRestoreNetConfig() or virNetDevRestoreMacAddress().
Since the older functions only saved/restored the admin MAC and vlan
tag (which is incorrect) and the new functions save/restore the VF's
own MAC address and vlan tag (correct), this actually fixes a bug
(which was introduced by commit cb3fe38c7, which was itself supposed
to be a fix for https://bugzilla.redhat.com/1113474 ).
The downside to this patch is that it causes an *apparent* regression
in that bug (because there will once again be an error reported if the
interface had previously been used for VFIO device assignment), but in
reality, the code hasn't been working for *any* case before this
current patch (at least not with any recent kernel). Anyway, that
"regression" will be fixed with an upcoming patch that fixes it the
*right* way.
These three functions are destined to replace
virNetDev(Replace|Restore)NetConfig() and
virNetDev(Replace|Restore)MacAddress(), which both do the save and set
together as a single step. We need to separate the save, read, and set
steps because there will be situations where we need to do something
else in between (in particular, we will need to rebind a VF's driver
after save but before set).
This patch creates the new functions, but doesn't call them - that
will come in a subsequent patch. Note that the new functions to
read/write the file that stores the original network config now use
JSON rather than plaintext (they still recognize the old format as well
though, so they won't get confused during an upgrade).
The hyperv panic notifier reports additional data in the form of 5 registers
that are reported in the crash event from qemu. Log them into the VM log
file and report them as a warning so that admins can see the cause of
the crash of their Windows VMs.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1426176
For certain kinds of panic notifiers (notably hyper-v) qemu is able to
report some data regarding the crash passed from the guest.
Make the data accessible to the callback in qemu so that it can be
processed further.
Format the mediated devices on the qemu command line as
-device vfio-pci,sysfsdev='/path/to/device/in/sysfs'.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Since mdevs are just another type of VFIO devices, we should increase
the memory locking limit the same way we do for VFIO PCI devices.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
As goes for all the other hostdev device types, grant the qemu process
access to /dev/vfio/<mediated_device_iommu_group>.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Keep track of the assigned mediated devices the same way we do it for
the rest of hostdevs. Methods like 'Prepare', 'Update', and 'ReAttach'
are introduced by this patch.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
So far, the official support is for x86_64 arch guests so unless a
different device API than vfio-pci is available let's only turn on
support for PCI address assignment. Once a different device API is
introduced, we can enable another address type easily.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
This merely introduces the virDomainHostdevMatchSubsysMediatedDev method,
which is supposed to check whether a device being cold-plugged does not
already exist in the domain configuration.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
This patch updates all of our security drivers to start labeling the
VFIO IOMMU devices under /dev/vfio/ as well.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
A mediated device will be identified by the UUID (with 'model' now being
a mandatory <hostdev> attribute to represent the mediated device API) of
the user pre-created mediated device. We also need to make sure that if a
user explicitly provides a guest address for an mdev device, the address
type matches the device API supported on that specific mediated device,
erroring out with a message about the incorrect XML otherwise.
The resulting device XML:
<devices>
<hostdev mode='subsystem' type='mdev' model='vfio-pci'>
<source>
<address uuid='c2177883-f1bb-47f0-914d-32a22e3a8804'/>
</source>
</hostdev>
</devices>
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Besides creation, disposal, getter, and setter methods, the module exports
methods to work with lists of mediated devices.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Just to make the code a bit cleaner, move hostdev specific post parse
code to its own function just in case it grows in the future.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Just a tiny wrapper over the SCSI def clearing logic to drop some
if-else branches from a switch, mainly because extending the switch in
the future would render the current code with branching less readable.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Enforce virDomainHostdevSubsysType checking during compilation. Again,
one of a few spots in our code where we should enforce the typecast to
the enum type, thus not forgetting to update *all* switch occurrences
dealing with the given enum.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
We keep forgetting that older setups don't like 'index':
CC util/libvirt_util_la-virsysinfo.lo
cc1: warnings being treated as errors
util/virstoragefile.c: In function 'virStorageSourceFindByNodeName':
util/virstoragefile.c:3804: error: declaration of 'index' shadows a global declaration [-Wshadow]
/usr/include/string.h:489: error: shadowed declaration is here [-Wshadow]
Signed-off-by: Eric Blake <eblake@redhat.com>
This way more drivers can utilize the functionality without copying
the code. And we can therefore test it in one place for all of them.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
That file has only two exported functions and each one of them has
different naming. virNode is what all the other files use, so let's
use it. It wasn't used before because of the clash with public API
naming, so let's fix that by shortening the name (there is no other
private variant of it anyway).
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
There is no "node driver" as there was before, drivers have to do
their own ACL checking anyway, so they all specify their functions and
nodeinfo is basically just extending conf/capabilities. Hence moving
the code to src/conf/ is the right way to go.
Also that way we can de-duplicate some code that is in virsysfs and/or
virhostcpu that got duplicated during the virhostcpu.c split. And
some cleanup is done throughout the changes, like adding the vir*
prefix etc.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
There is no reason for it not to be in the utils; all global symbols
in that file already have the vir* prefix and there is no reason for it
to be part of DRIVER_SOURCES because that is just a leftover from
older days (pre-driver modules era, I believe).
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
While on that, drop support for kernels from RHEL-5 era (missing
cpu/present file). Also add some useful functions and export them.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
By using this we are able to easily switch the sysfs path being
used (fake it). This will not only help tests in the future but can
be also used from files where the code is duplicated currently.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
These helpers just do a read and convert the value, but they
properly size the read limit, handle additional whitespace characters,
and unify error reporting.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Commits eaf18f4c2b and 86dd9fac0f separated util/host{cpu,mem}
stuff from nodeinfo, but did not adjust the syms file.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
It is everywhere else. I even remember one of our scripts failing if
the newline is missing, but it doesn't happen currently.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Don't leak the guest if adding it to virCapabilities fails. Also return
NULL and not a pointer to a freed object with zero references in such
a case.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Guests are handled in callers, but if something goes wrong (when it
cannot be added to virCapabilities, for example), there's no way for
them to free it properly.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Both QEMU and bhyve are using the same function for setting up the CPU
in virCapabilities, so de-duplicate it, save code and time, and help
other drivers adopt it.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
STREQ_NULLABLE returns true if both parameters are NULL. And that's
not what we want here. We just want to skip comparing source nodes
that don't have that info set. The function wouldn't make much sense
with nodeName == NULL, so we don't need to check that. Moreover, the
function's declaration uses ATTRIBUTE_NONNULL for nodeName, which not
only means that the function expects the parameter not to be NULL, but
actually tells the compiler that it can optimize out the NULL checks.
That way it could end up calling strcmp on NULL (either nodeformat or
nodebacking). GCC figures this out if libvirt is compiled with
lv_cv_static_analysis=yes; unfortunately, not everyone uses that.
Caused by cbc6d53513.
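A simplified sketch of the intended matching logic, based on the fields
mentioned above, so that two missing node names no longer produce a bogus match:
/* skip sources with no node name recorded instead of relying on
 * STREQ_NULLABLE, which treats NULL == NULL as a match */
if ((src->nodeformat && STREQ(src->nodeformat, nodeName)) ||
    (src->nodebacking && STREQ(src->nodebacking, nodeName)))
    return src;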
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Management tools may want to check whether the threshold is still set if
they missed an event. Add the data to the bulk stats API where they can
also query the current backing size at the same time.
To allow updating stats based on the node name, add a helper function
that will fetch the required data from 'query-named-block-nodes' and
return it in a hash table for easy lookup.
Detect the node names when setting the block threshold, when reconnecting,
and when they are cleared after a block job finishes. This operation will
become a no-op once we fully support node names.
To allow matching the node names gathered via 'query-named-block-nodes'
we need to query and then use the top level nodes from 'query-block'.
Add the data to the structure returned by qemuMonitorGetBlockInfo.
qemu for some time already sets node names automatically for the block
nodes. This patch adds code that attempts a best-effort detection of the
node names for the backing chain from the output of
'query-named-block-nodes'. The only drawback is that the data provided
by qemu needs to be matched by the filename as seen by qemu and thus
if two disks share a single backing store file the detection won't work.
This will allow us to use qemu commands such as
'block-set-write-threshold' which only accepts node names.
In this patch only the detection code is added, it will be used later.
Add monitor tooling for calling query-named-block-nodes. The monitor
code returns the data as the raw JSON array received from the
monitor.
Unfortunately the logic to extract the node names for a complete backing
chain will be so complex that I won't be able to extract any meaningful
subset of the data in the monitor code.
The code is currently simple, but if we later add node names, it will be
necessary to generate the names based on the node name. Add a helper so
that there's a central point to fix once we add self-generated node
names.
The event is fired when a given block backend node (identified by the
node name) experiences a write beyond the bound set via the
block-set-write-threshold QMP command. This wires up the monitor code to
extract the data, allowing us to receive the events, and adds the capability.
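For reference, the QMP exchange looks roughly like this (node name and
numbers are illustrative):
-> { "execute": "block-set-write-threshold",
     "arguments": { "node-name": "node-a", "write-threshold": 10737418240 } }
<- { "event": "BLOCK_WRITE_THRESHOLD",
     "data": { "node-name": "node-a",
               "amount-exceeded": 65536,
               "write-threshold": 10737418240 } }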
When using thin provisioning, management tools need to resize the disk
in certain cases. To avoid having them to poll disk usage introduce an
event which will be fired when a given offset of the storage is written
by the hypervisor. Together with the API which will be added later, it
will allow registering thresholds for given storage backing volumes and
this event will then notify management if the threshold is exceeded.
The function has very specific semantics. Split out the part that parses
the backing store specification string into a separate helper so that it
can be reused later while keeping the wrapper with existing semantics.
Note that virStorageFileParseChainIndex is pretty well covered by the
test suite.
Along with video and VNC support, bhyve has introduced USB tablet
support as an input device. This tablet is exposed to a guest
as a device on an XHCI controller.
At present, tablet is the only supported device on the XHCI controller
in bhyve, so to make things simple, it's allowed to only have a
single XHCI controller with a single tablet device.
In detail, this commit:
- Introduces a new capability bit for XHCI support in bhyve
- Adds an XHCI controller and tablet support with a 1:1 mapping
between them
- Adds a couple of unit tests
There are a number of functions in bhyve_capabilities.c that probe
hypervisor capabilities by executing the bhyve(1) binary with the
specific device argument, checking the error message (if any) and setting the
proper capability bit. As those are extremely similar, move this logic
into a helper function and convert existing functions to use that.
* Extract filling bhyve capabilities from virBhyveDomainCapsBuild()
into a new function virBhyveDomainCapsFill() to make testing
easier by not having to mock firmware directory listing and
hypervisor capabilities probing
* Also, just the presence of the firmware files is not sufficient
to enable os.loader.supported, hypervisor should support UEFI
boot too
* Add tests to domaincapstest for the main caps possible flows:
- when UEFI bootrom is supported
- when video (fbuf) is supported
- when neither of the above is supported
qemuMigrationResetTLS() does not initialize 'ret' by default,
so when it jumps to 'cleanup' on error, the 'ret' variable will be
uninitialized, which clang complains about.
Set it to '-1' by default.
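A minimal sketch of the pattern being fixed (the failing call is illustrative):
int ret = -1;   /* previously left uninitialized */

if (do_reset() < 0)   /* stands in for the actual work that can fail */
    goto cleanup;

ret = 0;
 cleanup:
    return ret;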
https://bugzilla.redhat.com/show_bug.cgi?id=1300769
If the migration flags indicate this migration will be using TLS,
then while we have a connection in the Begin phase, check and set up the
TLS environment that will be used by qemuMigrationRun during the Perform
phase for the source to configure TLS.
Processing adds an "-object tls-creds-x509,endpoint=client,..." and
possibly an "-object secret,..." to handle the passphrase response.
Then it sets the 'tls-creds' and possibly 'tls-hostname' migration
parameters.
qemuMigrationCancel will clean up and reset the environment as it
was originally found.
Signed-off-by: John Ferlan <jferlan@redhat.com>
If the migration flags indicate this migration will be using TLS,
then set up the destination during the prepare phase once the target
domain has been started to add the TLS objects to perform the migration.
This will create at least an "-object tls-creds-x509,endpoint=server,..."
for TLS credentials and potentially an "-object secret,..." to handle the
passphrase response to access the TLS credentials. The alias/id used for
the TLS objects will contain "libvirt_migrate".
Once the objects are created, the code will set the "tls-creds" and
"tls-hostname" migration parameters to signify usage of TLS.
During the Finish phase we'll be sure to attempt to clear the
migration parameters and delete those objects (whether or not they
were created). We'll also perform the same reset during recovery
if we've reached FINISH3.
If the migration isn't using TLS, then be sure to check if the
migration parameters exist and clear them if so.
Add an asyncJob argument for add/delete TLS Objects. A future patch will
add/delete TLS objects from a migration which may have a job to join.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Add the fields to support setting tls-creds and tls-hostname during
a migration (either source or target). Modify the query migration
function to check for the presence and set the field for future
consumers to determine which of the 3 conditions is being met (NULL,
present and set to "", or present and set to something). These
correspond to qemu commit id '4af245dc3' which added support to
default the value to "" and allow setting (or resetting) to ""
in order to disable. This reset option allows libvirt to properly
use the tls-creds and tls-hostname parameters.
Modify code paths that either allocate or use stack space in order
to call qemuMigrationParamsClear or qemuMigrationParamsFree for cleanup.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Add a new TLS X.509 certificate type - "migrate". This will handle the
creation of a TLS certificate capability (and possibly repository) to
be used for migrations. Similar to chardevs, credentials will be handled
via libvirt secrets; however, unlike chardevs, enablement and usage
will be via a CLI flag instead of a conf flag and a domain XML attribute.
The migrations using the *x509_verify flag require the client-cert.pem
and client-key.pem files to be present in the TLS directory - so let's
also be sure to note that in the qemu.conf file.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Fix typo in virNetDevPFGetVF() stub:
ATTRUBUTE_UNUSED -> ATTRIBUTE_UNUSED.
While here, use common indent style for arguments in
virNetDevGetVirtualFunctionIndex() stub.
If the variable store (<nvram>) file is raw, qemu can't take a snapshot of
it and thus the snapshot fails. QEMU rejects such a snapshot with a message
which would not be properly interpreted as an error by libvirt.
Additionally, allowing the use of a qcow2 variable store backing file would
solve this issue, but then it would become eligible to become a target of
the memory dump.
An offline internal snapshot would be incomplete too with either storage
format since libvirt does not handle the pflash file in this case.
Forbid such snapshots so that we can avoid these problems.
commit 00d28a78 added a check to see if there were any IPv6 routes
added by RA (Router Advertisement) via an interface that had accept_ra
set to something other than "2". The check was being done
unconditionally, but it's only relevant if IPv6 forwarding is going to
be turned on, and that will only happen if the network has an IPv6
address.
Given an SRIOV PF netdev name (e.g. "enp2s0f0") and VF#, this new
function returns the netdev name of the referenced VF device
(e.g. "enp2s11f6"), or NULL if the device isn't bound to a net driver.
We will want to allow silent failure of virNetDevSetMAC() in the case
that the SIOCSIFHWADDR ioctl fails with errno == EADDRNOTAVAIL. (Yes,
that is very specific, but we really *do* want a logged failure in all
other circumstances, and don't want to duplicate code in the caller
for the other possibilities).
This patch renames the 3 different virNetDevSetMAC() functions to
virNetDevSetMACInternal(), adding a 3rd arg called "quiet" and making
them static (because this extra control will only be needed within
virnetdev.c). A new global virNetDevSetMAC() is defined that calls
whichever of the three *Internal() functions gets compiled with quiet
= false. Callers in virnetdev.c that want to notice a failure with
errno == EADDRNOTAVAIL and retry with a different strategy rather than
immediately failing, can call virNetDevSetMACInternal(..., true).
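A sketch of the new global wrapper described above:
int
virNetDevSetMAC(const char *ifname, const virMacAddr *macaddr)
{
    /* the quiet flag is only available to callers inside virnetdev.c */
    return virNetDevSetMACInternal(ifname, macaddr, false);
}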
This function unbinds a device from its driver, then immediately
rebinds it to its driver again. The code for this new function is just
the 2nd half of virPCIDeviceBindWithDriverOverride(), so that
function's 2nd half is replaced with a call to virPCIDeviceRebind().
...and clean up the callers to report it when it *is* an error.
In many cases it's useful for virPCIGetNetName() to not log an error
and simply return a NULL pointer when the given device isn't bound to
a net driver (e.g. we're looking at a VF that is permanently bound to
vfio-pci). The existing code would silently return an error in this
case, which could eventually lead to the dreaded "An error occurred
but the cause is unknown" log message.
This patch changes virPCIGetNetName() to still return success if the
device simply isn't bound to a net driver, and adjusts all the callers
that require a non-null netname to check for that condition and log an
error when it happens.
vf in virNetDevMacVLanDeleteWithVPortProfile() is initialized to -1
and never set. It's not set for a good reason - because it doesn't
make sense during macvtap device setup to refer to a VF device as
"PF:VF#". This patch replaces the two uses of "vf" with "-1", and
removes the local variable, so that it's more clear we are always
calling the utility functions with vf set to -1.
This function is only called in two places, and the ifindex,
nltarget_kernel, and getPidFunc args are never used (and never will
be).
ifindex - we always know the name of the device, and never know the
ifindex - if we really did need the ifindex we would have to get it
from the name using virNetDevGetIndex(). In practice, we just send -1
to virNetDevSetVfConfig(), which doesn't bother to learn the real
ifindex (you only need a name *or* an ifindex for the netlink command
to succeed, not both).
nltarget_kernel - messages to set the config of an SRIOV VF will
always go to netlink in the kernel, not to another user process, so
this arg is always true (there are other uses of netlink messages
where the message might need to go to another user process, but never
in the case of RTM_SETLINK for SRIOV).
getPidFunc - this arg is only used if nltarget_kernel is false, and it
never is.
None of this has any functional effect, it just makes it easier to
follow what's happening when virNetDevSetVfConfig() is called.
virNetDevParseVfConfig() assumed that both the MAC address and VLAN
tag pointers were valid, so even if you only wanted one or the other,
you would need a variable to hold the returned value for both. This
patch checks each for a NULL pointer before filling it in.
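Roughly, the function now only fills in what the caller asked for; in this
sketch vf_mac and vf_vlan stand for data already extracted from the netlink
attributes:
if (mac)
    virMacAddrSetRaw(mac, vf_mac->mac);   /* skipped if the caller passed NULL */
if (vlanid)
    *vlanid = vf_vlan->vlan;              /* likewise for the vlan tag */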
The source code will check for NULL arguments for 'macvtap_macaddr' and
'vmuuid', so no need for the NONNULL in the prototypes. Following the stack
for both arguments to virNetDevVPortProfileOpSetLink also shows that the
called functions would handle a NULL value.
Additionally, modified the prototype to use the same 'macvtap_macaddr'
name as the source code for consistency.
Since the code checks 'mgr == NULL' anyway, there's no need for the prototype
to have the NONNULL arg check. Also add an error message to indicate what
the failure is, so that there isn't a generic "failed for some reason" error.
Signed-off-by: John Ferlan <jferlan@redhat.com>
The comparison code used STREQ_NULLABLE anyway for both 'drv_name' and
'dom_name', so no need. Add a NULLSTR on the 'dom_name' too.
Signed-off-by: John Ferlan <jferlan@redhat.com>
The comparison code used STREQ_NULLABLE anyway for both 'drv_name' and
'dom_name', so no need. Add a NULLSTR on the 'dom_name' too.
Signed-off-by: John Ferlan <jferlan@redhat.com>
The code checks and handles a NULL 'str', so just remove the NONNULL.
Update the error message to add the NULLSTR() around 'str' also.
Signed-off-by: John Ferlan <jferlan@redhat.com>
The 'mon' argument validity is checked in the QEMU_CHECK_MONITOR for the
following functions, so they don't need the NONNULL on their prototype:
qemuMonitorUpdateVideoMemorySize
qemuMonitorUpdateVideoVram64Size
qemuMonitorGetAllBlockStatsInfo
qemuMonitorBlockStatsUpdateCapacity
Signed-off-by: John Ferlan <jferlan@redhat.com>
The prototype requires not passing a NULL in the parameter and the callers
all would fail far before this code would fail if 'vm' was NULL, so just
remove the check.
Signed-off-by: John Ferlan <jferlan@redhat.com>
The prototype requires a NONNULL argument and the only caller passes in
a non-NULL parameter. Besides, the "else if" condition would dereference it anyway.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Since the code checks and handles NULL parameters, remove the NONNULL
from the prototype.
Also fix the comment in the source to reference the right name.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Since the code checks and handles a NULL 'node' before proceeding
there's no need for the prototype with the NONNULL(2).
Signed-off-by: John Ferlan <jferlan@redhat.com>
If a network is destroyed and restarted, or its bridge is changed, any
tap devices that had previously been connected to the bridge will no
longer be connected. As a first step in automating a reconnection of
all tap devices when this happens, this patch modifies
networkNotifyActualDevice() (which is called once for every
<interface> of every active domain whenever libvirtd is restarted) to
reconnect any tap devices that it finds disconnected.
With this patch in place, you will need to restart libvirtd to
reconnect all the taps to their proper bridges. A future patch will
add a callback that hypervisor drivers can register with the network
driver so that the network driver can trigger this behavior
automatically whenever a network is started.
This patch splits out the part of virNetDevTapCreateInBridgePort()
that would need to be re-done if an existing tap device had to be
re-attached to a bridge, and puts it into a separate function. This
can be used both when an existing domain interface config is updated
to change its connection, and also to re-attach to the "same" bridge
when a network has been stopped and restarted. So far it is used for
nothing.
This function provides the bridge/bond device that the given network
device is attached to. The return value is 0 or -1, and the master
device is a char** argument to the function - this is needed in order
to allow for a "success" return from a device that has no master.
The only reason that the ethtool features weren't being retrieved in
an unprivileged libvirtd was because they required ioctl(), and the
ioctl was using an AF_PACKET socket, which requires root. Now that we
are using AF_UNIX for ioctl(), this restriction can be removed.
The exact family of the socket created for the fd used by ioctl(7)
doesn't matter, it just needs to be a socket and not a file. But for
some reason when macvtap support was added, it used
AF_PACKET/SOCK_DGRAM sockets for its ioctls; we later used the same
AF_PACKET/SOCK_DGRAM socket for new ioctls we added, and eventually
modified the other pre-existing ioctl sockets (for creating/deleting
bridges) to also use AF_PACKET/SOCK_DGRAM (that code originally used
AF_UNIX/SOCK_STREAM).
The problem with using AF_PACKET (intended for sending/receiving "raw"
packets, i.e. packets that can be some protocol other than TCP or UDP)
is that it requires root privileges. This meant that none of the
ioctls in virnetdev.c or virnetdevip.c would work when running
libvirtd unprivileged.
This patch solves that problem by changing the family to AF_UNIX when
creating the socket used for any ioctl().
For some drivers the domain's machine type makes no sense. They
just don't use it. A great example is the bhyve driver. Therefore it
makes little sense to report the machine type in the domain capabilities
XML.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When enabling IPv6 on all interfaces, we may get the host Router
Advertisement routes discarded. To avoid this, the user needs to set
accept_ra to 2 for the interfaces with such routes.
See https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt
on this topic.
To avoid users mistakenly losing routes on their hosts, check the
accept_ra values before enabling IPv6 forwarding. If an RA route is
detected but neither the corresponding device nor the global accept_ra
is set to 2, the network will fail to start.
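A rough sketch of the check (not the actual libvirt implementation; the
sysctl path handling is simplified):

    #include <stdio.h>

    /* Return 1 if accept_ra for @dev allows forwarding to be enabled
     * without losing RA routes, 0 if it does not, -1 on error. */
    static int
    acceptRAAllowsForwarding(const char *dev)
    {
        char path[256];
        FILE *fp;
        int val = -1;

        snprintf(path, sizeof(path),
                 "/proc/sys/net/ipv6/conf/%s/accept_ra", dev);
        if (!(fp = fopen(path, "r")))
            return -1;
        if (fscanf(fp, "%d", &val) != 1)
            val = -1;
        fclose(fp);

        if (val < 0)
            return -1;
        return val == 2 ? 1 : 0;
    }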
virNetlinkCommand() processes only one response message, while some
netlink commands, like route dumping, need to process several.
Add virNetlinkDumpCommand() as a virNetlinkCommand() sister.
Allow reusing as much as possible of virNetlinkCommand(). This
commit prepares for the introduction of virNetlinkDumpCommand(),
which differs only in how it handles the responses.
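Conceptually, handling a dump reply means looping until NLMSG_DONE,
because the kernel answers with a stream of messages. A simplified raw
netlink sketch of that loop (libvirt's actual code sits on top of libnl
and looks different):

    #include <stdbool.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <linux/netlink.h>

    static int
    readDumpReply(int fd, void (*handle)(struct nlmsghdr *msg))
    {
        char buf[16384];
        bool done = false;

        while (!done) {
            ssize_t n = recv(fd, buf, sizeof(buf), 0);

            if (n <= 0)
                return -1;

            int len = (int)n;
            for (struct nlmsghdr *nh = (struct nlmsghdr *)buf;
                 NLMSG_OK(nh, len);
                 nh = NLMSG_NEXT(nh, len)) {
                if (nh->nlmsg_type == NLMSG_ERROR)
                    return -1;
                if (nh->nlmsg_type == NLMSG_DONE) {
                    done = true;
                    break;
                }
                handle(nh);   /* one route/link/... entry per message */
            }
        }
        return 0;
    }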
Commit id '85af0b8' added a 'timeout' as the 4th parameter to
qemuMonitorOpen, but neglected to update the ATTRIBUTE_NONNULL(4)
to be (5) for the cb parameter.
It was pointed out here:
https://bugzilla.redhat.com/show_bug.cgi?id=1331796#c4
that we shouldn't be adding a "no-resolv" to the dnsmasq.conf file for
a network if there isn't any <forwarder> element that specifies an IP
address but no qualifying domain. If there is such an element, it will
handle all DNS requests that weren't otherwise handled by one of the
forwarder entries with a matching domain attribute. If not, then DNS
requests that don't match the domain of any <forwarder> would not be
resolved if we added no-resolv.
So, only add "no-resolv" when there is at least one <forwarder>
element that specifies an IP address but no qualifying domain.
We already report an error in the caller virQEMUCapsCacheLookupByArch,
so the same error message in qemuConnectGetDomainCapabilities
is useless.
Signed-off-by: Chen Hanxiao <chenhanxiao@gmail.com>
Calling virCPUUpdateLive on a domain with no guest CPU configuration
does not make sense. Especially when doing so would crash libvirtd.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Use "virStoragePoolObj" as a prefix for any external API in virstorageobj.
Also a couple of functions were local to virstorageobj.c, so remove their
external defs in virstorageobj.h.
NB: The virStorageVolDef* API's won't change.
Signed-off-by: John Ferlan <jferlan@redhat.com>
In an effort to be consistent with the source module, alter the function
prototypes to follow the same style as the source, with the "type" on
one line followed by the function name and arguments on subsequent
lines, each argument getting its own line.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Alter the format of the code to follow more recent style guidelines:
two empty lines between functions, and function decls with
"[static] type" on one line followed by the function name, with each
argument on its own line.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Move all the StoragePoolObj related APIs into their own module,
virstorageobj, out of storage_conf.
Purely code motion at this point, plus adjustments to build cleanly.
Signed-off-by: John Ferlan <jferlan@redhat.com>
The log and lock protocols don't have an extra handshake to close the
connection. Instead they just close the socket. Unfortunately that
resulted in a lot of spurious garbage logged to the system log files:
2017-03-17 14:00:09.730+0000: 4714: error : virNetSocketReadWire:1800 : End of file while reading data: Input/output error
or in the journal as:
Mar 13 16:19:33 xxxx virtlogd[32360]: End of file while reading data: Input/output error
Use the new facility in the netserverclient to suppress the IO error
report from the virNetSocket layer.
When starting a domain with custom guest CPU specification QEMU may add
or remove some CPU features. There are several reasons for this, e.g.,
QEMU/KVM does not support some requested features or the definition of
the requested CPU model in libvirt's cpu_map.xml differs from the one
QEMU is using. We can't really avoid this because CPU models are allowed
to change with machine types and libvirt doesn't know (and probably
doesn't even want to know) about such changes.
Thus when we want to make sure guest ABI doesn't change when a domain
gets migrated to another host, we need to update our live CPU definition
according to the CPU QEMU created. Once updated, we will change CPU
checking to VIR_CPU_CHECK_FULL to make sure the virtual CPU created
after migration exactly matches the one on the source.
https://bugzilla.redhat.com/show_bug.cgi?id=822148
https://bugzilla.redhat.com/show_bug.cgi?id=824989
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuMonitorGetGuestCPU can now optionally create CPU data from
filtered-features in addition to feature-words.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The checks are now in a dedicated qemuProcessVerifyHypervFeatures
function.
In addition to moving the code this patch also fixes a few bugs: the
original code was leaking cpuFeature and the return value of
virCPUDataCheckFeature was not checked properly.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The attribute can be used to request a specific way of checking whether
the virtual CPU created by the hypervisor matches the specification in
the domain XML.
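For illustration (a made-up guest XML snippet, not taken from the
patch), requesting the strictest checking might look like:

    <cpu mode='custom' match='exact' check='full'>
      <model>Haswell-noTSX</model>
    </cpu>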
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The disk tuning group parameter is ignored by qemu if no other
throttling options are set. Reject such configuration, since the name
would not be honored after setting parameters via the live tuning API.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1433180
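For illustration (disk path and values are made up), a configuration
that remains valid pairs <group_name> with at least one throttling
limit; a <group_name> with no sibling limits is what gets rejected:

    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/guest.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <iotune>
        <total_bytes_sec>10485760</total_bytes_sec>
        <group_name>limit0</group_name>
      </iotune>
    </disk>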
When checking capabilities for qemu we need to check whether subsets of
the disk throttling settings are supported. Extract the checks into
separate functions as they will be reused in the next patch.
While the code path that queries the monitor allocates a separate copy
of the 'group_name' string the path querying the config would not copy
it. The call to virTypedParameterAssign would then steal the pointer
(without clearing it) and the RPC layer freed it. Any subsequent call
resulted in a crash.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1433183
ioh3420 is emulated Intel hardware, so it always looked
quite out of place in aarch64/virt guests. Even for x86/q35
guests, the recently-introduced pcie-root-port is a better
choice because, unlike ioh3420, it doesn't require IO space
(a fairly constrained resource) to work.
If pcie-root-port is available in QEMU, use it; ioh3420 is
still used as fallback for when pcie-root-port is not
available.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1408808
QEMU 2.9 introduces the pcie-root-port device, which is
a generic version of the existing ioh3420 device.
Make the new device available to libvirt users.
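For reference, a guest can also request the model explicitly in its XML
(the index value is illustrative):

    <controller type='pci' index='1' model='pcie-root-port'/>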
If the SASL config does not have any mechanisms we currently
just report an empty list to the client which will then
fail to identify a usable mechanism. This is a server config
error, so we should fail immediately on the server side.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Linux still defaults to a 1024 open file handle limit. This causes
scalability problems for libvirtd / virtlockd / virtlogd on large
hosts which might want > 1024 guests to be running. In fact, if each
guest needs > 1 FD, we can't even get to 500 guests. This is not
good enough when we see machines with hundreds of physical cores and
TBs of RAM.
In comparison to other memory requirements of libvirtd & related
daemons, the resource usage associated with open file handles
is essentially line noise. It is thus reasonable to increase the
limits unconditionally for all installs.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
There were a couple of reports on the list (e.g. [1]) that guests
with huge amounts of RAM are unable to start because libvirt
kills qemu in the initialization phase. The problem is that if the
guest is configured to use hugepages, the kernel has to zero them all
out before handing them over to the qemu process. For instance, 402GiB
worth of 1GiB pages took around 105 seconds (~3.8GiB/s). Since we
do not want to make the timeout for connecting to the monitor
configurable, we have to teach libvirt to account for this
fact. This commit implements the "1s per each 1GiB of RAM" approach
suggested here [2].
1: https://www.redhat.com/archives/libvir-list/2017-March/msg00373.html
2: https://www.redhat.com/archives/libvir-list/2017-March/msg00405.html
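A minimal sketch of the resulting calculation (the 30 second base is an
assumption standing in for the existing fixed timeout, not the exact
value used by the code):

    /* Scale the monitor-connect timeout with guest memory:
     * roughly one extra second per GiB of RAM on top of a fixed base. */
    static unsigned long long
    monitorTimeoutSeconds(unsigned long long memKiB)
    {
        unsigned long long base = 30;             /* assumed fixed part */

        return base + memKiB / (1024 * 1024);     /* +1s per GiB */
    }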
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
While connecting to qemu monitor, the first thing we do is wait
for it to show up. However, we are doing it with some timeout to
avoid indefinite waits (e.g. when qemu doesn't create the monitor
socket at all). After beaa447a29 we are using an exponential back-off
timeout, meaning that after the first connection attempt we wait
1ms, then 2ms, then 4ms, and so on. This allows us to bring down
wait time for small domains where qemu initializes quickly.
However, on the other end of this scale are some domains with
huge amounts of guest memory. Now imagine that we've gotten up to
wait time of 15 seconds. The next one is going to be 30 seconds,
and the one after that whole minute. Well, okay - with current
code we are not going to wait longer than 30 seconds in total,
but this is going to change in the next commit.
The exponential back-off is usable only for the first few iterations.
Then it needs to be capped (one second was chosen as the limit) and we
switch to a constant wait time.
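In rough pseudo-C (the readiness check is a made-up placeholder and the
constants simply mirror the description above, not the exact code):

    #include <stdbool.h>
    #include <unistd.h>

    extern bool monitorSocketExists(void);  /* hypothetical readiness check */

    static void
    waitForMonitorSocket(void)
    {
        unsigned long long timeout = 1000;  /* start at 1ms, in microseconds */

        while (!monitorSocketExists()) {
            usleep(timeout);
            timeout *= 2;                   /* exponential back-off ... */
            if (timeout > 1000 * 1000)
                timeout = 1000 * 1000;      /* ... capped at one second */
        }
    }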
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Fix a "bug" in the storage pool test driver code which "assumed"
testStoragePoolObjSetDefaults should fill in the configFile for
both the Define/Create (persistent) and CreateXML (transient) pools
by just VIR_FREE()'ing it during CreateXML. Because the configFile
was filled in, the pool wouldn't be freed during Destroy, which
could cause issues for future patches which add tests to validate
vHBA creation for the storage pool using the same name.
Commit id 'bb74a7ffe' added a fairly non-specific message when providing
only the <parent wwnn='xxx'/> or <parent wwpn='xxx'/> instead of providing
both wwnn and wwpn. This patch just modifies the message to be more specific
about which was missing.
Rather than returning true/false and having the caller check if the
vHBA was actually created, let's do that check within the CreateVport
function. That way the caller can faithfully assume success based on a
returned name and start the thread looking for the LUNs. Prior to this
change it was possible that the vHBA wasn't really created (e.g. if the
call to virVHBAGetHostByWWN returned NULL); we'd claim success, but in
reality there'd be no vHBA for the pool. This also fixes a second, as
yet unseen, issue: if the nodedev was present, but the parent by name
wasn't provided (perhaps parent by wwnn/wwpn or by fabric_name), then a
failure would be returned. For this path it shouldn't be an error - we
should
just be happy that something else is managing the device and we don't
have to create/delete it.
The end result is that the createVport code can now just start the
refresh thread once it gets a non-NULL name back.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Move the bulk of createVport and rename to virNodeDeviceCreateVport.
Remove the deleteVport entirely and replace with virNodeDeviceDeleteVport
Signed-off-by: John Ferlan <jferlan@redhat.com>
The function is actually in virutil.c, but prototyped in virfile.h.
This patch fixes that by renaming the function to virWaitForDevices,
adding the prototype in virutil.h and libvirt_private.syms, and then
changing the callers to use the new name.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Move the virStoragePoolSourceAdapter from storage_conf.h and rename
to virStorageAdapter.
Continue with code realignment for brevity and flow.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rework the code to use the new FCHost specific adapter structures.
Also rework the parameters to only pass what's needed and leave logic in
the caller for the adapter type and the need to call the helpers.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rework the helpers/APIs to use the FCHost and SCSIHost adapter types.
Continue to realign the code for shorter lines.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rework the helpers/APIs to use the FCHost and SCSIHost adapter types.
Continue to realign the code for shorter lines.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rather than have lots of ugly inline code, create helpers to try and
make things more readable. While creating the helpers realign the code
as necessary.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rather than have lots of ugly inline code, create helpers to try and
make things more readable. While creating the helpers realign the code
as necessary.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rather than use virXPathString, pass along a virXPathNode and alter
the parsing to use virXMLPropString.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Split out the code that munges through the storage pool adapter into
helpers - it's about to be moved into its own source file.
This is purely code motion at this point.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Commit id 'bb74a7ffe' added some new fields to search for a fchost by
parent wwnn/wwpn or parent_fabric_name, but neglected to validate that
the data within the fields was valid at parse time. This could lead to
eventual failure at run time, so rather than have the failure then, let's
validate now.
Signed-off-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1428209
Commit id 'bb74a7ffe' neglected to check that both the parent_wwnn and
parent_wwpn are in the XML if one or the other is provided, similar to
how the node device code checked (commit id '2b13361bc').
If only one is provided, the "default" is to use a vHBA capable
adapter (see commit id '78be2e8b'), so the vHBA could start, but
perhaps not on the expected adapter.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Some users might want to pass a blockdev or a chardev as the backend
for an NVDIMM. In fact, this is expected to be the most common
configuration. Therefore libvirt should allow the device in the devices
CGroup.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Now that we have APIs for relabeling memdevs on hotplug, fill in the
missing implementation in qemu hotplug code.
The qemuSecurity wrappers might look like overkill for now,
because qemu namespace code does not deal with the nvdimms yet.
Nor does our cgroup code. But hey, there's the cgroup_device_acl
variable in qemu.conf. If users add their /dev/pmem* device there, the
device is allowed in cgroups and created in the namespace so they can
successfully pass it through to the domain.
It doesn't look like overkill after all, does it?
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When a domain is being started up, we ought to relabel the host
side of the NVDIMM so qemu has access to it.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When a domain is being started up, we ought to relabel the host
side of the NVDIMM so qemu has access to it.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
For NVDIMM devices it is optionally possible to specify the size
of internal storage for namespaces. Namespaces are a feature that
allows users to partition the NVDIMM for different uses.
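An illustrative <memory> element (the path and sizes are placeholders)
showing the optional label size next to the backing path that the
earlier patches relabel and allow in the devices CGroup:

    <memory model='nvdimm'>
      <source>
        <path>/dev/pmem0</path>
      </source>
      <target>
        <size unit='KiB'>524288</size>
        <node>0</node>
        <label>
          <size unit='KiB'>128</size>
        </label>
      </target>
    </memory>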
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>