https://bugzilla.redhat.com/show_bug.cgi?id=1373711
Add support and documentation for the [NO_]OVERWRITE flags for the
logical backend.
Update virsh.pod with a description of how the flags are used when
building the pool's volume group.
Signed-off-by: John Ferlan <jferlan@redhat.com>
If the build fails, then we need to ensure that we've run pvremove
on any devices which we've run pvcreate on; otherwise, a subsequent
build could fail since running pvcreate twice on a device requires
special force arguments.
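As a rough sketch of the cleanup path (using libvirt's virCommand
helpers; the pvPaths/nPvPaths names here are hypothetical stand-ins
for the devices we already ran pvcreate on):

    /* Roll back: pvremove every device we pvcreate'd earlier */
    size_t i;
    for (i = 0; i < nPvPaths; i++) {
        virCommandPtr cmd = virCommandNewArgList(PVREMOVE,
                                                 pvPaths[i], NULL);
        if (virCommandRun(cmd, NULL) < 0)
            VIR_WARN("cannot clear physical volume '%s'", pvPaths[i]);
        virCommandFree(cmd);
    }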
Signed-off-by: John Ferlan <jferlan@redhat.com>
Currently, as long as the disk is formatted using a known parted format
type, the algorithm is happy to continue. However, that leaves a scenario
whereby a disk formatted using "pc98" could be used by a pool that's defined
using "dvh" (or vice versa). Alter the check to distinguish between a
matching and a differing label, and adjust the caller accordingly.
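Conceptually the check now distinguishes three outcomes instead of
two (the enum below is illustrative, not the actual identifiers):

    typedef enum {
        DISK_LABEL_NONE,      /* no label found on the device */
        DISK_LABEL_MATCH,     /* label matches the pool's format */
        DISK_LABEL_DIFFERENT, /* label is some other known format */
    } diskLabelResult;

so a "different" label can be rejected rather than silently accepted.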
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rather than have the Disk code use PARTED to determine if there's
something on the device, let's use virStorageBackendDeviceProbe
and only fall back to the PARTED probing if the BLKID code isn't built in.
This will also provide a mechanism for the other current caller (File
System Backend) to utilize a PARTED parsing algorithm in the event that
BLKID isn't built in, to at least see if *something* exists on the disk
before blindly trying to use it. The PARTED error checking will not find
file system types, but if there is a partition table set on the device,
it will at least cause a failure.
Move virStorageBackendDiskValidLabel and virStorageBackendDiskFindLabel
to storage_backend and rename/rework the code to fit the new model.
Update the virsh.pod description to provide a more generic description
of the process since we could now use either blkid or parted to find
data on the target device.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Prior to starting up, let's be sure the target volume device is
formatted as we expect; otherwise, inhibit the start.
Signed-off-by: John Ferlan <jferlan@redhat.com>
It's possible that the API could be called from a startup path in
order to check whether the label on the device matches our format.
In order to handle that condition, add a 'writelabel' boolean to the
API to indicate whether a write or just a read is about to happen.
This alters two "error" conditions that need to know the difference.
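The adjusted prototype would look something like this (a sketch only;
the final signature may differ):

    /* writelabel: true if the caller is about to (re)write the label,
     * false if it merely wants to verify the existing one */
    bool
    virStorageBackendDeviceIsEmpty(const char *devpath,
                                   const char *format,
                                   bool writelabel);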
Signed-off-by: John Ferlan <jferlan@redhat.com>
A device may be formatted using some sort of disk partition format type.
We can check that using the blkid_* APIs as well - so alter the logic to
allow checking the device for both a filesystem and a disk partition.
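A minimal sketch of the idea using the libblkid calls:

    blkid_probe probe = blkid_new_probe_from_filename(device);
    if (!probe)
        return -1;
    /* look for a filesystem superblock *and* a partition table */
    blkid_probe_enable_superblocks(probe, 1);
    blkid_probe_enable_partitions(probe, 1);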
Signed-off-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1363586
Commit id '27758859' introduced the "NO_OVERWRITE" flag check for
file system backends; however, the implementation, documentation,
and algorithm were inconsistent. In the "flag" description for the
API, the flag was described as "Do not overwrite existing pool";
however, within the storage backend code the flag is described
as "it probes to determine if a filesystem already exists on the
target device, returning an error if one exists".
The code itself was implemented using the paradigm of setting up the
superblock probe with a filter that would cause the code
to only search for the provided format type. If that type wasn't
found, then the algorithm would return success, allowing the caller
to format the device. If the format type already existed on the
device, then the code would fail, indicating that a filesystem
of the same type existed on the device.
The result is that if someone had a file system of one type on the
device, it was possible to overwrite it if a different format type
was specified in the updated XML, effectively trashing whatever was on
the device already.
This patch alters what NO_OVERWRITE does for a file system backend
to be more realistic and consistent with what should be expected when
the caller requests to not overwrite the data on the disk.
Rather than filter results based on the expected format type, the
code will allow success/failure to be determined solely by whether the
blkid_do_probe call finds some known format on the device. This
adjustment also allows removal of the underutilized
virStoragePoolProbeResult enum.
If it does find a formatted file system, different errors will be
generated indicating that a file system of the specified type already
exists or that a file system of some other type already exists.
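In libblkid terms the adjusted logic boils down to something like the
following sketch (error paths trimmed, and the error code and messages
shown are illustrative; blkid_do_probe() returns 1 when nothing is
recognized, 0 when it finds a format, and a negative value on error):

    int rc = blkid_do_probe(probe);
    if (rc == 1) {
        ret = 0; /* device is empty - OK to format */
    } else if (rc == 0) {
        const char *type = NULL;
        blkid_probe_lookup_value(probe, "TYPE", &type, NULL);
        if (type && STREQ(type, format))
            virReportError(VIR_ERR_STORAGE_POOL_BUILT,
                           _("Device '%s' already formatted using '%s'"),
                           device, type);
        else
            virReportError(VIR_ERR_STORAGE_POOL_BUILT,
                           _("Device '%s' contains an existing '%s' format"),
                           device, NULLSTR(type));
    }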
In the original virsh support commit id 'ddcd5674', the description
for '--no-overwrite' within the 'pool-build' command help output
has an ambiguous "of this type" included in the short description.
Compared to the longer description within the "Build a given pool."
section of the virsh.pod file, it's more apparent that this flag
causes a failure if a probe of the target already finds a filesystem.
So this patch also modifies the short description to just be the
antecedent of the 'overwrite' flag, which matches the API description.
This patch also fixes the grammar in virsh.pod for no-overwrite
and reworks the paragraph formatting to make it easier to read.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rename virStorageBackendFileSystemProbe to virStorageBackendBLKIDFindFS
and move it to the more common storage_backend module.
Create a shim, virStorageBackendDeviceIsEmpty, which makes the call
to virStorageBackendBLKIDFindFS and checks the return value.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Again, there is no need to create the /var/lib/libvirt/$domain.*
directories in CreateNamespace(). It is sufficient to create them
as soon as we need them, which is in BuildNamespace(). This way we
don't leave them around for the whole lifetime of the domain.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Commit c1140eb9e got me thinking. We don't want to special-case /dev
in qemuDomainGetPreservedMounts(), yet in all other places in the
code we special-case it anyway. I mean, the
/var/run/libvirt/$domain.dev path is constructed separately just
so that it is not constructed here. That makes only a little sense
(if any at all).
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If something goes wrong in this function we try a rollback; that
is, removing all the directories we created earlier. For some weird
reason unlink() was called instead of rmdir().
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
So far, if qemu is spawned under a separate mount namespace, then in
order to relabel everything the security driver needs to run in
that namespace too. This has a very nasty downside -
the relabelling is run in a separate process, so any internal state
transition is NOT reflected in the daemon. This can lead to many
sleepless nights. Therefore, use the transaction APIs so that
libvirt developers can sleep tight again.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
With our new qemu namespace code in place, the relabelling of
devices is not done as well as it could be: a child process is
spawned, it enters the mount namespace of the qemu process and
then runs the desired API of the security driver.
The problem with this approach is that an internal state transition of
the security driver done in the child process is not reflected in
the parent process. While currently it wouldn't matter that much,
it is fairly easy to forget about that. We should take the extra
step now while this limitation is still fresh in our minds.
Three new APIs are introduced here:
virSecurityManagerTransactionStart()
virSecurityManagerTransactionCommit()
virSecurityManagerTransactionAbort()
The Start() is going to be used to let the security driver know that
we are starting a new transaction. During a transaction no
security labels are actually touched; rather, the changes are recorded
and only at the Commit() phase are they actually applied. Should
something go wrong, Abort() aborts the transaction, freeing up all
memory allocated by the transaction.
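The intended usage pattern looks roughly like this (a sketch under
the assumption that Commit() takes the pid whose mount namespace
should be entered; exact signatures may differ):

    if (virSecurityManagerTransactionStart(mgr) < 0)
        goto abort;
    /* the usual secdriver APIs now merely record the label changes */
    if (virSecurityManagerSetAllLabel(mgr, vm->def, NULL) < 0)
        goto abort;
    /* the recorded changes are replayed inside qemu's mount namespace */
    if (virSecurityManagerTransactionCommit(mgr, vm->pid) < 0)
        goto abort;
    return 0;
 abort:
    virSecurityManagerTransactionAbort(mgr);
    return -1;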
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The code at the very bottom of the DAC secdriver that calls
chown() should be fine with read-only data. If something needs to
be prepared it should have been done beforehand.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
virtio-pci is the way forward for aarch64 guests: it's faster
and less alien to people coming from other architectures.
Now that guest support is finally getting there (Fedora 24,
CentOS 7.3, Ubuntu 16.04 and Debian testing all support
virtio-pci out of the box), we'd like to start using it by
default instead of virtio-mmio.
Users and applications can already opt-in by explicitly using
<address type='pci'/>
inside the relevant elements, but that's kind of cumbersome and
requires all users and management applications to adapt, which
we'd really like to avoid.
What we can do instead is use virtio-mmio only if the guest
already has at least one virtio-mmio device, and use virtio-pci
in all other situations.
That means existing virtio-mmio guests will keep using the old
addressing scheme, and new guests will automatically be created
using virtio-pci instead. Users can still override the default
in either direction.
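The decision itself is cheap; conceptually (where
qemuDomainHasVirtioMMIODevice() is a hypothetical helper standing in
for a scan of the guest's existing device addresses):

    if (qemuDomainHasVirtioMMIODevice(def))
        info->type = VIR_DOMAIN_DEVICE_ADDRESS_TYPE_VIRTIO_MMIO;
    else
        info->type = VIR_DOMAIN_DEVICE_ADDRESS_TYPE_PCI;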
Existing tests such as aarch64-aavmf-virtio-mmio and
aarch64-virtio-pci-default already cover all possible
scenarios, so no additions to the test suites are necessary.
When coldplugging vcpus to a VM that already has a few hotpluggable
vcpus, the code might generate an invalid configuration, as
non-hotpluggable cpus need to be clustered starting from vcpu 0.
This fix forces the added vcpus to be hotpluggable in such a case.
Fixes a corner case described in:
https://bugzilla.redhat.com/show_bug.cgi?id=1370357
The virsh manpage lists "shutdown" and "dying" as two of the possible
domain states that could be listed in the output of the "virsh list"
command. However, a domain that is being shut down will be listed as
"in shutdown", and the "dying" state doesn't even exist (and never
has, as far as I can tell from looking through git history - it was
listed in the original import of the virsh.pod file in 2006, but there
was no VIR_DOMAIN_DYING state then, there wasn't one when those lines
of virsh.pod were tweaked in 2008, and there still isn't one today.
Apparently it was just something that sounded like a good idea
to someone at some time, but was never implemented...)
Resolves: https://bugzilla.redhat.com/1408778
This patch adds support and documentation for
cache_l1d, a generalized hardware cache perf event.
Signed-off-by: Nitesh Konkar <nitkon12@linux.vnet.ibm.com>
The virsh manpage lists options --uuid and --name as
mutually exclusive with option --table, when actually it is
the option --table that is mutually exclusive and can't be combined
with options --uuid and/or --name. This patch rewords the
virsh manpage to state the correct meaning.
Signed-off-by: Nitesh Konkar <nitkon12@linux.vnet.ibm.com>
When setting perf events, the enabled/disabled perf events are not
listed. Since we know which events were changed, it's possible to
print out the values on a successful set, such as:
virsh perf Domain --enable instructions --disable cache_misses
instructions : enabled
cache_misses : disabled
Created a helper to print the messages - it uses vshPrintExtra to
adhere to the --quiet|-q option that may be set by some script. With
--quiet the code will print nothing, but will still return
success/failure.
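The helper itself is trivial; roughly (the name is illustrative):

    static void
    virshPrintPerfStatus(vshControl *ctl, const char *name, bool enabled)
    {
        /* vshPrintExtra output is suppressed when --quiet is in effect */
        vshPrintExtra(ctl, "%-15s: %s\n", name,
                      enabled ? _("enabled") : _("disabled"));
    }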
Signed-off-by: Nitesh Konkar <nitkon12@linux.vnet.ibm.com>
The public virSecret object has a single "usage_id" field,
while the virSecretDef object has a different 'char *' field
for each usage type, even though the code all assumes every usage
type has a corresponding single string. Get rid of the
pointless union in virSecretDef and just use "usage_id"
everywhere. This doesn't impact the public XML format, only
the internal handling.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The handler for the device removal failed event was using
the struct for the device added event. Fortunately the
layout was the same, so this was harmless.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
When changing the metadata via virDomainSetMetadata, we now
emit an event to notify the app of the changes. This is useful
when coordinating different applications' reads and writes of
custom metadata.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Currently when spawning containers with systemd, the container PID 1
will get moved into the systemd machine slice. Libvirt then manually
moves the libvirt_lxc and qemu-nbd processes into the cgroups associated
with the slice, but skips the systemd controller cgroup. This means that
from systemd's POV, libvirt_lxc and qemu-nbd are still part of the
libvirtd.service unit.
On systemctl daemon-reload, it will notice that libvirt_lxc & qemu-nbd
are in the libvirtd.service unit for the systemd controller, but in the
machine cgroups for resources. Systemd will thus move them back into
the libvirtd.service resource cgroups next time libvirtd is restarted.
This causes libvirtd to kill off the container due to incorrect cgroup
placement.
The solution is to ensure that when moving libvirt_lxc & qemu-nbd, we
also move the systemd cgroup controller placement. Normally this is
not something we ever want to do, but this is a special case as we are
intentionally moving them to a different systemd unit.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Currently, there's only a Linux implementation of
virGetFCHostNameByFabricWWN(). Since the symbol is exported among
our private symbols, we ought to have an implementation for other
platforms too. Its absence also triggers a compilation error on FreeBSD:
../src/.libs/libvirt_driver_storage_impl.a(libvirt_driver_storage_impl_la-storage_backend_scsi.o): In function `createVport':
/usr/home/jenkins/libvirt-master/systems/libvirt-freebsd/build/src/../../src/storage/storage_backend_scsi.c:740: undefined reference to `virGetFCHostNameByFabricWWN'
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Add a test case for when the QEMU_CAPS_NO_KVM_PIT capability is set.
This capability is mutually exclusive with QEMU_CAPS_KVM_PIT_TICK_POLICY
and results in the same output regardless of whether "discard" or
"delay" was specified in the guest XML for 'tickpolicy'.
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
Separate out the "policy=discard" into its own specific
qemu command line.
Rename the "kvm-pit-device" test case to "kvm-pit-discard"
since it has the syntax we'd be using.
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
By mistake, commit id '1569fa14' used 'discard' instead of 'delay'
when creating the qemu command line for
VIR_DOMAIN_TIMER_TICKPOLICY_DELAY.
Test "kvm-pit-delay" is fixed accordingly to show the correct
option being generated.
Remove the (now) redundant kvm-pit-device tests. As it turns
out, there is no need to specify both QEMU_CAPS_NO_KVM_PIT and
QEMU_CAPS_KVM_PIT_TICK_POLICY since they are mutually exclusive,
and "kvm-pit-device" becomes just the same as "kvm-pit-delay".
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1349696
As it turns out, using only the 'parent' to achieve the goal of a
consistent vHBA parent has issues across reboots, where the scsi_hostX
parent could change to scsi_hostY, causing either failure to create
the vHBA or usage of the wrong HBA for our vHBA.
Thus, add the ability to search for the "parent" by the parent
wwnn/wwpn values, or by just a fabric_name if someone only cares to
ensure usage of the same SAN for the vHBA.
Add new fields to the fchost structure to allow creation of a vHBA via
the storage pool when a parent_wwnn/parent_wwpn or parent_fabric_wwn is
supplied in the storage pool XML.
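Sketched out, the fchost part of the adapter source grows roughly
along these lines (a fragment; field names follow the description
above, surrounding members elided):

    struct {
        char *parent;            /* existing: explicit scsi_hostN name */
        char *parent_wwnn;       /* new: select the parent by its WWNN... */
        char *parent_wwpn;       /* ...and WWPN */
        char *parent_fabric_wwn; /* new: any HBA on the given fabric */
        char *wwnn;
        char *wwpn;
    } fchost;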