Both RHEL and Fedora build with the storage driver and
most of its sub-drivers enabled at all times.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Both RHEL and Fedora build with driver modules enabled by
default, so there is no need for any conditional.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
A client-only build dates back to RHEL5, where some architectures
did not build the libvirtd daemon, only the clients. Since RHEL5
support was dropped this is no longer required.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Simplify conditionals to assume Fedora >= 20 or RHEL >= 6
The %prep section will explicitly check the version and
refuse to run if it is too old.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Commit id 'df1011ca8' modified virStorageBackendDiskDeleteVol to use
"dmsetup remove --force" to remove the volume, but left things in an
inconsistent state since the partition still existed on the disk and
only the device mapper device (/dev/dm-#) was removed.
Prior to commit '1895b421' (or '1ffd82bb' and '471e1c4e'), this could
go unnoticed since virStorageBackendDiskRefreshPool wasn't called.
However, the pool would be unusable since the /dev/dm-# device had
been removed while the partition remained, unless a multipathd
restart reset the link. In that case the volume would of course
reappear in the pool after a refresh, or after a pool start
following a libvirt reload.
This patch removes the 'dmsetup' logic and re-implements the partition
deletion logic for device mapper devices. Removing the partition
via 'parted rm --script #' triggers the udev device change logic,
allowing multipathd to handle removal of the dm-* device associated
with the partition.
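As a rough sketch of the new approach (illustrative only, not libvirt's
actual helper; the real code builds the command via libvirt's command
helpers and reports errors properly):

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical helper: remove partition 'partnum' from 'dev' with
     * parted in non-interactive mode, instead of "dmsetup remove
     * --force".  udev/multipathd then observe the change and drop the
     * now-stale dm-* node themselves. */
    static int
    delete_partition(const char *dev, unsigned int partnum)
    {
        char cmd[256];

        if (snprintf(cmd, sizeof(cmd), "parted --script %s rm %u",
                     dev, partnum) >= (int) sizeof(cmd))
            return -1;                     /* command too long */

        return system(cmd) == 0 ? 0 : -1;  /* non-zero exit => failure */
    }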
https://bugzilla.redhat.com/show_bug.cgi?id=1265694
Commit id '020135dc' didn't quite get the algorithm correct when a
device mapper source ended with a non-numeric character (e.g. ended
with a letter).
This patch modifies the 'part_separator' logic to add the "p" separator
to the attempted target path name only when specified as part_separator='yes'.
For a source name that already ends with a number, the logic doesn't change
as the part separator would need to be there.
For a source name that ends with something other than a number, this allows
the possibility that a "p" separator can be added. The default for one of
these source devices is to not add the separator.
The key for device mapper and the need for a partition separator "p" is
the presence of a number in the last character of the device name link
in /dev/mapper. A name such as "/dev/mapper/mpatha1" would generate
a "/dev/mapper/mpatha1p1" partition, while "/dev/mapper/mpatha" would
generate partition "/dev/mapper/mpatha1". Similarly for a device
mapper entry not using friendly names or an alias, a device such as
"/dev/mapper/3600a0b80005b10ca00005ad656fd8d93" would generate a
partition "/dev/mapper/3600a0b80005b10ca00005ad656fd8d93p1", while
a device such as "/dev/mapper/3600a0b80005b10ca00005e115729093f" would
generate a partition "/dev/mapper/3600a0b80005b10ca00005e115729093f1".
The long number is the WWID of the device. It's also possible to assign
an alias for a device mapper entry; that alias follows the same rules
with respect to whether it ends with a number or not when a "p" is
added to create the target device path.
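The rule can be summarised with a small standalone sketch (illustrative
only; the parameter names are mine, not libvirt's, and the real logic
lives in the disk storage backend):

    #include <ctype.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Append "p" before the partition number when the device name
     * already ends in a digit, or when the config forces it with
     * part_separator='yes'. */
    static void
    make_partition_path(const char *dev, unsigned int partnum,
                        bool force_separator, char *buf, size_t buflen)
    {
        size_t len = strlen(dev);
        bool need_p = force_separator ||
                      (len > 0 && isdigit((unsigned char) dev[len - 1]));

        snprintf(buf, buflen, "%s%s%u", dev, need_p ? "p" : "", partnum);
    }

    /* e.g. "/dev/mapper/mpatha1" -> "/dev/mapper/mpatha1p1"
     *      "/dev/mapper/mpatha"  -> "/dev/mapper/mpatha1"   */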
Prior to calling the 'refreshPool' during CreatePool or UploadPool
operations, we need to clear the pool; otherwise, the pool will
have duplicated entries.
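Schematically, the fix is just an ordering change (abbreviated fragment
using libvirt's helper names; error handling and locking omitted):

    /* Clear any stale volume list first so refreshPool repopulates it
     * from scratch instead of appending duplicates. */
    virStoragePoolObjClearVols(pool);
    if (backend->refreshPool(conn, pool) < 0)
        goto cleanup;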
https://bugzilla.redhat.com/show_bug.cgi?id=1318993
Commit id 'dd519a294' caused a regression cloning a volume into a
logical pool by removing just the 'allocation' adjustment during
storageVolCreateXMLFrom. Combined with the change to not require the
new volume input XML to have a capacity listed (commit id 'e3f1d2a8'),
this left the possibility that a zero allocation value (e.g., not
provided) would create a thin/sparse logical volume. When a thin LV
becomes fully populated, LVM sets the partition 'inactive' and the
subsequent fdatasync() fails.
Add a new 'has_allocation' flag to be set at XML parse time to indicate
that allocation was provided. This is done so that, if allocation is
not provided, the create-from code uses the capacity value, since we
document that when omitted, the volume will be fully allocated at
creation time.
For a logical backend, that creation time is 'createVol', while for a
file backend, creation doesn't set the size, but the 'createRaw' called
during buildVolFrom will decide whether the file is sparse or not based
on the provided capacity and allocation value.
For volume clones that provide different allocation and capacity values
to allow for sparse files, there is no change.
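A minimal sketch of the resulting sizing rule (the struct and names
here are illustrative, not libvirt's exact ones):

    #include <stdbool.h>

    struct vol_sizes {
        unsigned long long capacity;
        unsigned long long allocation;
        bool has_allocation;  /* set by the parser when <allocation> is present */
    };

    /* If <allocation> was omitted, fall back to capacity so the new
     * volume is fully allocated rather than created thin/sparse. */
    static unsigned long long
    effective_allocation(const struct vol_sizes *vol)
    {
        return vol->has_allocation ? vol->allocation : vol->capacity;
    }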
Usage of this keyword in front of a function declaration that is exported
via a header file is unnecessary, since this has been the default for most
compilers for quite some time.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
libvirt may automatically add a pci-root or pcie-root controller to a
domain, depending on the arch/machinetype, and it hopefully always
makes the right decision about which to add (since in all cases these
controllers are an implicit part of the virtual machine).
But it's always possible that someone will create a config that
explicitly supplies the wrong type of PCI controller for the selected
machinetype. In the past that would lead to an error later when
libvirt was trying to assign addresses to other devices, for example:
XML error: PCI bus is not compatible with the device at
0000:00:02.0. Device requires a PCI Express slot, which is not
provided by bus 0000:00
(that's the error message that appears if you replace the pcie-root
controller in a Q35 domain with a pci-root controller).
This patch adds a check at the same place that the implicit
controllers are added (to ensure that the same logic is used to check
which type of pci root is correct). If a pci controller with index='0'
is already present, we verify that it is of the model that we would
have otherwise added automatically; if not, an error is logged:
The PCI controller with index='0' must be model='pcie-root' for
this machine type, but model='pci-root' was found instead.
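A simplified sketch of the check (types and names are illustrative; the
real code reuses the same machine-type logic that decides which root
controller to auto-add):

    #include <stddef.h>

    typedef enum { PCI_ROOT, PCIE_ROOT } root_model;

    struct controller { int idx; root_model model; };

    /* Return 0 if there is no index-0 PCI controller or it matches the
     * model libvirt would have auto-added; -1 if the wrong one was
     * supplied by the user. */
    static int
    validate_pci_root(const struct controller *ctrls, size_t nctrls,
                      root_model expected)
    {
        for (size_t i = 0; i < nctrls; i++) {
            if (ctrls[i].idx == 0 && ctrls[i].model != expected)
                return -1;  /* e.g. pci-root given on a Q35 (pcie-root) machine */
        }
        return 0;
    }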
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1004602
Similar to "support Xen migration stream V2 in save/restore",
add support for indicating the migration stream version in
the migration code. To accomplish this, add a minimal migration
cookie in the libxl driver that is passed between source and
destination hosts. Initially, the cookie is only used in
the Begin and Prepare phases of migration to communicate the
version of the migration stream produced by the source.
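A minimal sketch of what such a cookie carries (purely illustrative;
the real cookie's layout and wire format are not reproduced here):

    #include <stdint.h>

    /* Produced in Begin on the source and parsed in Prepare on the
     * destination, so the destination knows whether the incoming
     * stream is legacy or V2 before calling libxl. */
    typedef struct {
        uint32_t xenMigStreamVer;   /* migration stream version of the source */
    } libxlMigrationCookieSketch;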
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Xen 4.6 introduced a new migration stream commonly referred to as
"migration V2". Xen 4.6 and newer always produce this new stream,
whereas Xen 4.5 and older always produce the legacy stream.
Support for migration stream V2 can be detected at build time with
LIBXL_HAVE_SRM_V2 from libxl.h. The legacy and V2 streams are not
compatible, but a V2 host can accept and convert a legacy stream.
Commit e7440656 changed the libxl driver to use the lowest libxl
API version possible (version 0x040200) to ensure the driver
builds against older Xen releases. The old 4.2 restore API does
not support specifying a stream version and assumes a legacy
stream, even if the incoming stream is migration V2. Thinking it
has been given a legacy stream, libxl will fail to convert an
incoming stream that is already V2, which causes the entire
restore operation to fail. Xen's libvirt-related OSSTest has been
failing since commit e7440656 landed in libvirt.git master. One
of the more recent failures can be seen here:
http://lists.xenproject.org/archives/html/xen-devel/2016-05/msg00071.html
This patch changes the call to libxl_domain_create_restore() to
include the stream version if LIBXL_HAVE_SRM_V2 is defined. The
version field of the libxlSavefileHeader struct is also updated
to '2' when LIBXL_HAVE_SRM_V2 is defined, ensuring the stream
version in the header matches the actual stream version produced
by Xen. Along with bumping the libxl API requirement to 0x040400,
this patch fixes save/restore on a migration V2 Xen host.
Oddly, migration has never used the libxlSavefileHeader. It
handles passing configuration in the Begin and Prepare phases,
and then calls libxl directly to transfer domain state/memory
in the Perform phase. A subsequent patch will add stream
version handling in the Begin and Prepare phase handshaking,
which will fix the migration related OSSTest failures.
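A sketch of the guarded call-site (abbreviated; the stream_version
field guarded by LIBXL_HAVE_SRM_V2 and the surrounding restore
variables such as hdr and aop_console_how are assumptions here, and
the exact argument list can differ between libxl versions):

    libxl_domain_restore_params params;

    libxl_domain_restore_params_init(&params);
    #ifdef LIBXL_HAVE_SRM_V2
        /* Tell libxl which stream format the incoming fd carries, so a
         * V2 stream is not mistaken for a legacy one. */
        params.stream_version = hdr.version;   /* 2 when built with SRM V2 */
    #endif
        ret = libxl_domain_create_restore(ctx, &d_config, &domid, restore_fd,
                                          &params, NULL, &aop_console_how);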
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
In LIBXL_API_VERSION 0x040400, the libxl_domain_create_restore API
gained a parameter for specifying restore parameters. Switch to
using version 0x040400, which will be useful in a subsequent commit
to specify the Xen migration stream version when restoring.
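In practice this just means requesting the 4.4-era API when libxl.h is
pulled in; a sketch (libvirt sets the value through its build, shown
inline here only for illustration):

    /* Request the 4.4 libxl API so libxl_domain_create_restore takes a
     * libxl_domain_restore_params argument. */
    #define LIBXL_API_VERSION 0x040400
    #include <libxl.h>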
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Remove the possibility that a NULL hostdev->privateData or
disk->privateData could crash libvirtd by checking for NULL before
dereferencing it to reach the secinfo structure in the
qemuDomainSecret{Disk|Hostdev}Destroy functions. The hostdevPriv
could be NULL if qemuProcessNetworkPrepareDevices adds a new
hostdev during virDomainNetGetActualHostdev that then gets
inserted via virDomainHostdevInsert. The hostdevPriv was added
by commit id '27726d8' and is currently only used by scsi hostdev.
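The shape of the fix is simply an early return when the private data is
missing; a name-agnostic sketch (types and helpers here are stand-ins,
not the actual qemu driver ones):

    struct secinfo;                                  /* opaque in this sketch */
    struct dev_private { struct secinfo *secinfo; };

    void secinfo_free(struct secinfo **secinfo);     /* stand-in free helper */

    /* Guard both the private-data pointer and its secinfo member, so a
     * disk/hostdev without privateData can no longer crash libvirtd. */
    static void
    secret_destroy(struct dev_private *priv)
    {
        if (!priv || !priv->secinfo)
            return;

        secinfo_free(&priv->secinfo);
    }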
SRIOV VFs used in macvtap passthrough mode can take advantage of the
SRIOV card's transparent vlan tagging. All the code was there to set
the vlan tag, and it has been used for SRIOV VFs assigned to hostdev
interfaces for several years, but for some reason the vlan tag for
macvtap passthrough devices was stubbed out with a -1.
This patch moves a bit of common validation down to a lower level
(virNetDevReplaceNetConfig()) so it is shared by hostdev and macvtap
modes, and updates the macvtap caller to actually send the vlan config
instead of -1.
Our tests should use either the VIRT_TEST_MAIN() or
VIRT_TEST_MAIN_PRELOAD() macro, which creates the main() function and
then calls the passed callback. This is important because the wrapper
that calls the callback also performs important setup, such as
configuring logging based on environment variables.
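A typical test skeleton therefore looks like this (sketch; the actual
test registration varies per test file):

    #include <stdlib.h>
    #include "testutils.h"

    static int
    mymain(void)
    {
        int ret = 0;

        /* register individual test cases here, setting ret = -1 on failure */

        return ret == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
    }

    VIRT_TEST_MAIN(mymain)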
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Once we're able to list and identify all clients connected to a specific
server, we can then support force-closing a connection. This patch introduces
a simple API calling virNetServerClientClose on a specific client, which
can later be extended easily, e.g. by sending an event once the client is
disconnected successfully.
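The server-side core of the new API is tiny; a hedged sketch (only
virNetServerClientClose is the existing helper named above, the lookup
helper below is an assumption for illustration):

    virNetServerClientPtr clnt;

    /* Assumed lookup helper: resolve the client object from its ID
     * before closing it. */
    if (!(clnt = virNetServerGetClient(srv, clientID)))
        return -1;

    virNetServerClientClose(clnt);   /* drop the connection */
    virObjectUnref(clnt);
    return 0;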
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Unlike in the previous commit, we do actually support one client-side-only
flag, VIR_CONNECT_NO_ALIASES. So, besides removing the check for flags,
this flag has to be masked out before sending a message to the daemon;
otherwise it would trigger an error when flags are checked on the daemon
side.
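In other words, the client keeps accepting VIR_CONNECT_NO_ALIASES but
strips it before the RPC is sent; roughly (fragment, remote-driver
plumbing omitted):

    /* VIR_CONNECT_NO_ALIASES is handled purely on the client side, so
     * mask it out of the flags forwarded to the daemon. */
    args.flags = flags & ~VIR_CONNECT_NO_ALIASES;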
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Commit 5ed235c6 added an unnecessary redefinition of
virDomainCapsDeviceHostdev in conf/domain_capabilities.h. This breaks
build with clang 3.4:
In file included from conf/domain_capabilities.c:25:
conf/domain_capabilities.h:88:44: error: redefinition of typedef
'virDomainCapsDeviceHostdev' is a C11 feature
[-Werror,-Wtypedef-redefinition]
typedef struct _virDomainCapsDeviceHostdev virDomainCapsDeviceHostdev;
^
conf/domain_capabilities.h:86:44: note: previous definition is here
typedef struct _virDomainCapsDeviceHostdev virDomainCapsDeviceHostdev;
So drop one of those.
If the call to virXPathNodeSet used to set naddresses fails, Coverity
notes that the subsequent VIR_ALLOC_N cannot be given a negative value
(even though in practice it probably wouldn't be negative).
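A guard of roughly this shape addresses it (sketch; the XPath
expression and variable names are placeholders):

    if ((naddresses = virXPathNodeSet("./address", ctxt, &nodes)) < 0)
        goto cleanup;                  /* don't let -1 reach the allocation */

    if (VIR_ALLOC_N(addresses, naddresses) < 0)
        goto cleanup;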
Signed-off-by: John Ferlan <jferlan@redhat.com>
Coverity noted that in adminServerListClients, if virNetServerGetClients
returns -1 into ret, then the call to virObjectListFreeCount in the
cleanup path will not be very happy.
Adjust the code to skip the cleanup label and just return -1 if
virNetServerGetClients fails.
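Roughly (fragment; variable names abbreviated):

    /* Return straight away on failure: the client list was never
     * populated, so there is nothing for virObjectListFreeCount()
     * to clean up. */
    if ((ret = virNetServerGetClients(srv, &clts)) < 0)
        return -1;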
Signed-off-by: John Ferlan <jferlan@redhat.com>
Both instances use VIR_WARN() to print the error from a failed
virDBusGetSystemBus() call. Rather than using virGetLastError and needing
to check for a valid returned error pointer, just use
virGetLastErrorMessage.
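So each warning collapses to roughly (sketch):

    if (!(sysbus = virDBusGetSystemBus())) {
        VIR_WARN("Unable to get DBus system bus connection: %s",
                 virGetLastErrorMessage());
        return;   /* or whatever the caller's failure path is */
    }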
Signed-off-by: John Ferlan <jferlan@redhat.com>