Rework the logic to remove the need for the @ok_to_mklabel boolean.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Let's make use of the auto __cleanup capabilities, cleaning up any
now unnecessary goto paths.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Let's make use of the auto __cleanup capabilities, cleaning up any
now unnecessary goto paths.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Rather than having an error path, let's rework the code to allocate
and fill into an @def variable and then steal that into @ret when we
are successful, leaving just a cleanup: path.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Let's make use of the auto __cleanup capabilities, cleaning up any
now unnecessary goto paths.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Rather than having an error path, let's rework the code to allocate
and fill into an @def variable and then steal that into @ret when we
are successful, leaving just a cleanup: path.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Let's make use of the auto __cleanup capabilities, cleaning up any
now unnecessary goto paths.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Let's make use of the auto __cleanup capabilities, cleaning up any
now unnecessary goto paths.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Rather than having an error path, let's rework the code to allocate
and fill into an @authdef variable and then steal that into @ret when
we are successful, leaving just a cleanup: path.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Since commit a7424faff QMP is always used.
Also, commit 932534e8 removed the last use of this apart from:
* parsing/formatting this in the caps cache
* using it as a temporary variable to know when to report an error
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Jiri Denemark <jdenemar@redhat.com>
No functional change, but this will allow us to mock out the function
in the test suite.
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
If two threads call abort, for example, then one of them
will hang, because the client will send two abort messages and
the server will reply only to the first of them; the second will be
ignored. On the server's reply the client marks only one of the
abort messages as complete, so the second will hang forever.
There are other similar issues.
We should complete all messages waiting for a reply if we get an
error or the expected abort/finish reply from the server. Also, if one
thread sends finish and another sends abort, one of them will win
the race and the server will either abort or finish the stream. If the
stream is aborted, then the thread that requested finishing should
report an error. In order to achieve this, let's keep the stream
closing reason in the @closed field. If we receive a VIR_NET_OK message
for the stream, then the stream is finished if the oldest (closest to
the queue end) message in the stream queue is a finish message, and the
stream is aborted if the oldest message is an abort message. Otherwise
it is a protocol error.
By the way, we need to fix the case of receiving a VIR_NET_CONTINUE
message. Currently we take the oldest message in the queue and check
whether it is a dummy message. If one thread first sends abort and a
second thread then receives data, the oldest message is the abort
message and the second thread won't be notified when the data arrives.
Let's find the oldest dummy message instead.
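To illustrate the queue walk (all type and field names below are made up
for this sketch, not libvirt's actual internals), "oldest" meaning
"closest to the queue end" translates to keeping the last matching entry
seen while walking from the head:

#include <stdbool.h>
#include <stddef.h>

struct exampleCall {
    bool isDummy;                  /* placeholder waiting for stream data */
    struct exampleCall *next;      /* entries closer to the head are newer */
};

/* Return the oldest (closest to the queue end) dummy entry, or NULL. */
static struct exampleCall *
exampleFindOldestDummy(struct exampleCall *head)
{
    struct exampleCall *oldest = NULL;
    struct exampleCall *cur;

    for (cur = head; cur; cur = cur->next) {
        if (cur->isDummy)
            oldest = cur;          /* remember the last match seen */
    }
    return oldest;
}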
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If we call virStreamFinish and virStreamAbort from two distinct
threads, for example, we can end up accessing freed memory:
when virStreamFinish finishes, virStreamAbort may not yet have
finished and still accesses the virNetClientStreamPtr object
in stream->privateData.
Also it does not make sense to clear the @driver field. After the
stream is finished/aborted it is better to have an appropriate
error message instead of "unsupported error".
This commit reverts [1], otherwise virNetClientStreamPtr and
virStreamPtr would never be unrefed due to the cyclic dependency.
Before this patch we don't have leaks, because on all execution
paths we call virStreamFinish or virStreamAbort.
[1] 8b6ffe40 : virNetClientStreamNew: Track origin stream
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Mixing errors and the EOF condition in one flag is odd.
Instead, let's check st->err.code where appropriate.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Checking virNetClientStreamRaiseError without the client lock
is racy, which is fixed in [1] for example. Thus let's remove such checks
when we are sending a message to the server. In other cases
(like virNetClientStreamRecvHole, for example) let's move the check
into the client stream code.
virNetClientStreamRecvPacket already holds the stream lock, so we could
introduce another error checking function like virNetClientStreamRaiseErrorLocked,
but as the error is set while both the client and stream locks are held we
can remove the locking from virNetClientStreamRaiseError, because all
callers hold either the client or the stream lock.
Also let's split virNetClientStreamRaiseErrorLocked into a function
checking the stream state and a function checking the message send
status. They are the same for now.
[1] 1b6a29c21: rpc: fix race on stream abort/finish and server side abort
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
A stream server error is not propagated if the thread does not have the
buck. In case we have the buck we are OK due to the code added in [1].
Let's check for a stream error on all paths. Now we don't need
to raise the error in virNetClientCallDispatchStream.
The old code reported the error only if the first message in the wait
queue awaited a reply, which is odd as it depends on the wait queue
situation. For example, if we have only a TX
message in the queue and in one loop iteration we both send the
message and receive the error, then the thread sending the TX message
did not receive the error. On the other hand, if we have an RX message
(first) and a TX message (second) in the queue and in one loop iteration
we both send the TX message and receive the error, then the thread
sending the TX message did receive the error. In short,
it was inconsistent. Let's report the error whenever
we receive it and for every type of message, as it makes
sense to report errors as early as possible.
[1] 16c6e2b41: Fix propagation of RPC errors from streams
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
In the next patches we'll add stream state checks to this
function that are applicable to all call paths. This is a handy
place because we hold the client lock here.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Stream abort/finish can hang because we can receive an abort message
from the server and still send an abort/finish message to the server. The
latter will never be answered, because after the server sends an abort
message it forgets the stream, and messages for an unknown stream are
simply ignored.
We check for a stream error at the very beginning of
remoteStreamFinish/remoteStreamAbort, but the stream error can be set
after the check by another thread operating on the stream. Let's check
for the stream error under the client lock, similar to what's done in [1].
[1] 833b901cb: stream: Check for stream EOF
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
These functions do mostly the same things, and it would be
preferable if they did them in mostly the same ways. This
also fixes a few violations of our code style guidelines.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
The function operates on a virDomainDef and is not tied to
device address assignment in any way, so it makes more sense
for it to live along with qemuDomainIs*() and the like.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Ideally we'd make all of them static, but there are a few
cases where we don't have a virDomainDef instance handy and
so they are the only option.
For the few ones we're forced to keep exporting, document
through comments that the alternative is preferred.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Now that we have added architecture checks to all
qemuDomainIs*() functions, we no longer need to perform the
same checks separately.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
There is very little overlap in the machine types available
on different architectures, so broadly speaking checking the
machine type is usually enough; regardless, it's better to
check the architecture as well.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
We want the signatures to be consistent, and also we're
going to start using the additional parameter next.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Make sure related functions, eg. all qemuDomainIs*(), are
close together instead of being sprinkled throughout both
the header and implementation file, and also that all
qemuDomainMachine*() functions are declared first since
we're going to make a bunch of them static later on.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
While the chances of the current checks resulting in false
positives are basically zero, it's still nicer to check for
the full prefix instead of the prefix's prefix.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
For consistency, let's use the semicolon for all definitions.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
QEMU plans to deprecate 'query-events' as it's non-extensible. Events
are also described by 'query-qmp-schema' so we can use that one instead.
This patch adds detection of events to
virQEMUCapsProbeQMPSchemaCapabilities using the same structure declaring
them for the old approach (virQEMUCapsEvents). This is possible as the
name is the same in the QMP schema and our detector supports that
trivially.
For any complex queries virQEMUCapsQMPSchemaQueries can be used in the
future.
For now we still call 'query-events' and discard the result so that it's
obvious that the tests pass. This will be cleaned up later.
https://bugzilla.redhat.com/show_bug.cgi?id=1673320
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
QEMU accidentally exposed the id of -drive (or the same value as the disk
serial, if provided) in one of the identifiers visible from the guest.
To avoid a regression once -blockdev is used, we need to
always specify it ourselves.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
The property allows controlling the guest-visible content of the vendor
specific designator of the 'Device Identification' page of a SCSI
device's VPD (vital product data).
QEMU was leaking the id string of -drive as the value if the 'serial' of
the disk was not specified. Switching to -blockdev would impose an ABI
change.
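For illustration only: assuming the QEMU property in question is
'device_id' on the scsi-hd/scsi-cd devices (the property is not named in
this message, so treat the name and values below as an assumption), the
generated command line would gain something along the lines of:
-device scsi-hd,bus=scsi0.0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,device_id=drive-scsi0-0-0-0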
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
For SCSI, IDE, and AHCI cdroms the appropriate device types which select
the correct media are used. In qemu there's one other code path that
looks at -drive media=cdrom in the XEN pv code. Thankfully we don't
support it with qemu (see qemuBuildDiskDeviceStr). All other devices
ignore it as the comment states, thus we can drop that code.
As expected, the test fallout is only in the test added for uncommon
cdrom types.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Attempting to create an empty virtio-blk drive results in:
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0xc,drive=drive-virtio-disk1,id=virtio-disk1: Device needs media, but drive is empty
Attempting to eject media from a virtio-blk-based drive results in:
error: internal error: unable to execute QEMU command 'eject': Device 'drive-virtio-disk0' is not removable
Forbid configurations where users would attempt to use cdroms on the
virtio bus.
Fix a few wrong examples which are not really relevant to the tested code.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Cast disk->bus to proper type and add missing values to the enum so it's
more obvious what types are supported.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
The split of ide-disk into the two separate devices was introduced by
qemu commit 1f56e32a7f4b3 released in qemu v0.15.
Note that, compared to the previous commit, which made sure that no
disk-related tests were touched, this one is not as careful.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
The split of scsi-disk into the two separate devices was introduced by
qemu commit b443ae67 released in qemu v0.15.
None of the changes to the test files are really related to disk testing,
thanks to the previous refactors.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Since commit a4cda054e7 we are using 'ide-hd' and 'ide-cd' instead of
'ide-drive'. We also should probe capabilities for 'ide-hd' instead of
'ide-drive'. It is safe to do so, as 'ide-drive' is the common denominator
of both 'ide-hd' and 'ide-cd', so all the properties were common.
For now the test data are modified by just changing the appropriate type
when probing for caps.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Since commit 02e8d0cfdf we are using 'scsi-hd' and 'scsi-cd' instead of
'scsi-disk'. We also should probe capabilities for 'scsi-hd' instead of
'scsi-disk'. It is safe to do so, as 'scsi-disk' is the common denominator
of both 'scsi-hd' and 'scsi-cd', so all the properties were common.
For now the test data are modified by just changing the appropriate type
when probing for caps.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
This flag tells virDomainMigrateSetMaxSpeed and
virDomainMigrateGetMaxSpeed APIs to work on post-copy migration
bandwidth.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
This typed parameter for virDomainMigrate3 and virDomainMigrateToURI3
APIs may be used for setting maximum post-copy migration bandwidth.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
This patch adds a new VIR_MIGRATE_PARAM_BANDWIDTH_POSTCOPY typed
parameter for virDomainMigrate3 and virDomainMigrateToURI3 for setting
maximum post-copy migration bandwidth.
In case the initial VIR_MIGRATE_PARAM_BANDWIDTH_POSTCOPY value turns out
to be suboptimal, a new VIR_DOMAIN_MIGRATE_MAX_SPEED_POSTCOPY flag for
virDomainMigrateSetMaxSpeed and virDomainMigrateGetMaxSpeed may be used
to set/get the maximum post-copy migration bandwidth while migration is
already running.
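A minimal usage sketch of the new flag (the bandwidth value of 128 MiB/s
and the error handling are illustrative only):

/* Adjust the post-copy bandwidth of an already running migration. */
if (virDomainMigrateSetMaxSpeed(dom, 128,
                                VIR_DOMAIN_MIGRATE_MAX_SPEED_POSTCOPY) < 0)
    fprintf(stderr, "failed to set post-copy bandwidth: %s\n",
            virGetLastErrorMessage());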
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
So far migration parameters were changed only at the beginning of
migration mostly via an automatic translation from flags and typed
parameters. We need to export a few more functions to support APIs which
may set migration parameters while migration is already running.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Let's make the code flow easier to follow and get rid of the ugly endjob
label inside an if branch.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Some migration parameters supported by libvirt may use units that differ
from the units used by QEMU for the corresponding parameters. For
example, libvirt defines migration bandwidth in MiB/s while QEMU expects
B/s. Let's add a unit field to qemuMigrationParamsTPMapItem for
automatic conversion when translating between libvirt's migration typed
parameters and QEMU's migration parameters.
This patch is a preparation for future parameters as the existing
VIR_MIGRATE_PARAM_BANDWIDTH parameter is set using "migrate_set_speed"
QMP command rather than "migrate-set-parameters" for backward
compatibility.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
qemuDomainBlockPivot and qemuDomainBlockJobAbort need the job name for
cancelling or pivoting but were generating it locally instead of
accessing the existing copy in the job data structure.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The writing to an image actually starts when the copy job is initiated,
so checking this at the time of the pivot operation is too late.
Move the check to qemuDomainBlockCopyCommon. Note that modern qemu would
have prevented two writers with qcow2, so the slim possibility of a job
started by a libvirtd without this patch missing the check is not really
worth worrying about.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
For copy and active commit jobs we record the state of the mirror so
that we can recover. The status XML was not saved in case of
qemuDomainBlockPivot due to an oversight.
Always save the XML when invoking qemuDomainBlockJobAbort, even if
the job is not currently tracking any state. This will change later, and
this is not a particularly hot code path anyway.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
If the container is really a simple one (init is just bash and
the whole root is passed through) then virDomainReboot and
virDomainShutdown will talk to the actual init within the host.
Therefore, 'virsh shutdown $dom' will result in shutting down the
host. True, at that point the container is shut down too, but that
looks a bit harsh to me.
The solution is to check if the init inside the container is or
is not the same as the init running on the host.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
So far virInitctlSetRunLevel() is fully automatic: it finds
the correct fifo to use to talk to the init and sets the
desired runlevel. However, callers (so far there is just one) will
need to inspect the fifo a bit just before the runlevel is set.
Therefore, expose the internal list of fifos and also allow a
caller to explicitly use one.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Due to a bug the seclabels are restored before any PID in the
container is killed. This should be done afterwards in
virLXCProcessCleanup.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Prior to the rewrite of the cgroup code we only had one backend to try.
After the rewrite, virCgroupBackendGetAll() returns both
backends (for v1 and v2). However, both don't necessarily have to be
present on the system, which results in the killRecursive callback
failing, which in turn might mean we won't try the other backend.
At the same time, this function reports no error as it should.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Not that it would matter, because the LXC driver doesn't differentiate
the job types so far, but nevertheless the Destroy() should grab
LXC_JOB_DESTROY.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
The number of iothreads is not part of the vm state sent during
migration, nor exposed to the guest ABI, so this restriction is
a mistake in libvirt. Let's remove that bit of code.
Reviewed-by: Cole Robinson <crobinso@redhat.com>
Signed-off-by: Jie Wang <wangjie88@huawei.com>
The checks and error messages are mostly the same across
all virtio-input devices, so we can avoid having multiple
copies of the same code.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
It will not work. This breaks qemu capabilities probing as a user.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
For normal starts (no incoming migration) the refresh of the QEMU
state must be done before the vCPUs are started, since otherwise
there might be a race condition between a possible shutdown of the
guest OS and the QEMU monitor queries.
This fixes "qemu: migration: Refresh device information after
transferring state" (93db7eea1b).
Signed-off-by: Marc Hartmayer <mhartmay@linux.ibm.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
If a domain has a disk that is type='network' we require a specific
cache mode to allow migration with it (either 'directsync' or
'none'). This doesn't make much sense, since network disks are
supposed to be safe to migrate by default.
At the same time, we should be checking the actual source
type, not the apparent type set in the domain XML.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Storage pools might want to specify the format of the image when
translating the volume, thus we can't add any default format when
parsing the XML.
Add an explicit format when starting the VM if the format is not
provided either by the user or by the storage pool translation
function.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The post-parse callback adds the 'raw' type only for local files. Remote
files can also have a backing store (even a local one), so we should do
this also for network-backed storage.
Note that virStorageFileGetMetadata always considers files with no type
as raw, so we will not accidentally traverse the backing chain and allow
unexpected files to be labelled with svirt labels.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
In commit f80eae8c2a I was too aggressive in removing properties of
-drive for empty drives. It turns out that qemu actually persists the
state of 'readonly' and the throttling information even for the empty
drive.
Removing 'readonly' thus made qemu open any subsequent images added via
the 'change' command as RW which was forbidden by selinux thanks to the
restrictive sVirt label for readonly media.
Fix this by formatting the property again, bump the tests, and leave a
note detailing why the rest of the properties need to be skipped.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
A missing semicolon at the end of macros can confuse some analyzers
(like cppcheck <filename>). VIR_ONCE_GLOBAL_INIT is almost
exclusively called without an ending semicolon, but let's
standardize on using one like the other macros.
Add a dummy struct definition at the end of the macro, so
the compiler will require callers to add a semicolon.
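A sketch of the trick (not the actual VIR_ONCE_GLOBAL_INIT definition):
ending the expansion with a dummy struct forward declaration means the
declaration is only complete once the caller supplies the trailing
semicolon.

#define EXAMPLE_ONCE_GLOBAL_INIT(classname) \
    static void classname ## OnceInit(void) \
    { \
        /* one-time initialization goes here */ \
    } \
    struct classname ## SemicolonDummy  /* needs the caller's ';' */

EXAMPLE_ONCE_GLOBAL_INIT(virExample);   /* compiles */
/* EXAMPLE_ONCE_GLOBAL_INIT(virExample) without the ';' would not */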
Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
A missing semicolon at the end of macros can confuse some analyzers
(like cppcheck <filename>), and we have a mix of semicolon and
non-semicolon usage throughout the code. Let's standardize on using
a semicolon for VIR_LOG_INIT calls.
Drop the semicolon from the final statement of the macro, so
the compiler will require callers to add a semicolon.
Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
A missing semicolon at the end of macros can confuse some analyzers
(like cppcheck <filename>), and we have a mix of semicolon and
non-semicolon usage throughout the code. Let's standardize on using
a semicolon for VIR_ENUM_IMPL calls.
Move the verify() statement to the end of the macro and drop
the semicolon, so the compiler will require callers to add a
semicolon.
While we are touching these call sites, standardize on putting
the closing parenthesis on its own line, as discussed here:
https://www.redhat.com/archives/libvir-list/2019-January/msg00750.html
Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
A missing semicolon at the end of macros can confuse some analyzers
(like cppcheck <filename>), and we have a mix of semicolon and
non-semicolon usage throughout the code. Let's standardize on using
a semicolon for VIR_ENUM_DECL calls.
Drop the semicolon from the final statement of the macro, so
the compiler will require callers to add a semicolon.
Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
Just before pushing the series containing commit 3bba4825 I had added
a "return true" to the top of virFirewallDZoneExists() to measure the
impact of calling that function once per network during startup. I
found that the effect was minimal, but forgot to remove the "return
true" before pushing. This unfortunately causes a failure to start
networks on systems that have a firewalld version that doesn't support
our libvirt zone file (i.e. pretty much everyone).
This patch removes the unintended line.
Signed-off-by: Laine Stump <laine@laine.org>
When using custom command line arguments, warn that
this configuration is not fully supported.
Signed-off-by: Roman Bogorodskiy <bogorodskiy@gmail.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
- Remove ATTRIBUTE_UNUSED for the "buf" argument, it's
not unused
- Indent fix
Signed-off-by: Roman Bogorodskiy <bogorodskiy@gmail.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Since we're setting the zone anyway, it will be useful to allow
setting a different (custom) zone for each network. This will be done
by adding a "zone" attribute to the "bridge" element, e.g.:
...
<bridge name='virbr0' zone='myzone'/>
...
If a zone is specified in the config and it can't be honored, this
will be an error.
Signed-off-by: Laine Stump <laine@laine.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
This patch restores broken guest network connectivity after a host
firewalld is switched to using an nftables backend. It does this by
adding libvirt networks' bridge interfaces to the new "libvirt" zone
in firewalld.
After this patch, the bridge interface of any network created by
libvirt (when firewalld is active) will be added to the firewalld
zone called "libvirt" if it exists (regardless of the firewalld
backend setting). This behavior does *not* depend on whether or not
libvirt has installed the libvirt zone file (set with
"--with[out]-firewalld-zone" during the configure phase of the package
build).
If the libvirt zone doesn't exist (either because the package was
configured to not install it, or possibly it was installed, but
firewalld doesn't support rule priorities, resulting in a parse
error), the bridge will remain in firewalld's default zone, which
could be innocuous (in the case that the firewalld backend is
iptables, guest networking will still function properly with the
bridge in the default zone), or it could be disastrous (if the
firewalld backend is nftables, we can be assured that guest networking
will fail). In order to be unobtrusive in the former case, and
informative in the latter, when the libvirt zone doesn't exist we
then check the firewalld version to see if it's new enough to support
the nftables backend, and then if the backend is actually set to
nftables, before logging an error (and failing the net-start
operation, since the network couldn't possibly work anyway).
When the libvirt zone is used, network behavior is *slightly*
different from behavior of previous libvirt. In the past, libvirt
network behavior would be affected by the configuration of firewalld's
default zone (usually "public"), but now it is affected only by the
"libvirt" zone), and thus almost surely warrants a release note for
any distro upgrading to libvirt 5.1 or above. Although it's
unfortunate that we have to deal with a mandatory behavior change, the
architecture of multiple hooks makes it impossible to *not* change
behavior in some way, and the new behavior is arguably better (since
it will now be possible to manage access to the host from virtual
machines vs from public interfaces separately).
Creates-and-Resolves: https://bugzilla.redhat.com/1650320
Resolves: https://bugzilla.redhat.com/1638342
Signed-off-by: Laine Stump <laine@laine.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
In the past (when both libvirt and firewalld used iptables), if either
libvirt's rules *OR* firewalld's rules accepted a packet, it would
be accepted. This was because libvirt and firewalld rules were
processed during the same kernel hook, and a single ACCEPT result
would terminate the rule traversal and cause the packet to be
accepted.
But now firewalld can use nftables for its backend, while libvirt's
firewall rules are still using iptables; iptables rules are still
processed, but at a different time during packet processing
(i.e. during a different hook) than the firewalld nftables rules. The
result is that a packet must be accepted by *BOTH* the libvirt
iptables rules *AND* the firewalld nftable rules in order to be
accepted.
This causes pain because
1) libvirt always adds rules to permit DNS and DHCP (and sometimes
TFTP) from guests to the host network's bridge interface. But
libvirt's bridges are in firewalld's "default" zone (which is usually
the zone called "public"). The public zone allows ssh, but doesn't
allow DNS, DHCP, or TFTP. So even though libvirt's rules allow the
DHCP and DNS traffic, the firewalld rules (now processed during a
different hook) don't, thus guests connected to libvirt's bridges can't
acquire an IP address from DHCP, nor can they make DNS queries to the
DNS server libvirt has set up on the host. (This could be solved by
modifying the default firewalld zone to allow DNS and DHCP, but that
would open *all* interfaces in the default zone to those services,
which is most likely not what the host's admin wants.)
2) Even though libvirt adds iptables rules to allow forwarded traffic
to pass the iptables hook, firewalld's higher level "rich rules" don't
yet have the ability to configure the acceptance of forwarded traffic
(traffic that is going somewhere beyond the host), so any traffic that
needs to be forwarded from guests to the network beyond the host is
rejected during the nftables hook by the default zone's "default
reject" policy (which rejects all traffic in the zone not specifically
allowed by the rules in the zone, whether that traffic is destined to
be forwarded or locally received by the host).
libvirt can't send "direct" nftables rules (firewalld only supports
direct/passthrough rules for iptables), so we can't solve this problem
by just sending explicit nftables rules instead of explicit iptables
rules (which, if it could be done, would place libvirt's rules in the
same hook as firewalld's native rules, and thus eliminate the need for
packets to be accepted by both libvirt's and firewalld's own rules).
However, we can take advantage of a quirk in firewalld zones that have
a default policy of "accept" (meaning any packet that doesn't match a
specific rule in the zone will be *accepted*) - this default accept will
also accept forwarded traffic (not just traffic destined for the host).
Of course we don't want to modify firewalld's default zone in that
way, because that would affect the filtering of traffic coming into
the host from other interfaces using that zone. Instead, we will
create a new zone called "libvirt". The libvirt zone will have a
default policy of accept so that forwarded traffic can pass and list
specific services that will be allowed into the host from guests (DNS,
DHCP, SSH, and TFTP).
But the same default accept policy that fixes forwarded traffic also
causes *all* traffic from guest to host to be accepted. To close this
new hole, the libvirt zone can take advantage of a new feature in
firewalld (currently slated for firewalld-0.7.0) - priorities for rich
rules - to add a low priority rule that rejects all local traffic (but
leaves alone all forwarded traffic).
So, our new zone will start with a list of services that are allowed
(dhcp, dns, tftp, and ssh to start, but configurable via any firewalld
management application, or direct editing of the zone file in
/etc/firewalld/zones/libvirt.xml), followed by a low priority
<reject/> rule (to reject all other traffic from guest to host), and
finally with a default policy of accept (to allow forwarded traffic).
This patch only creates the zonefile for the new zone, and implements
a configure.ac option to selectively enable/disable installation of
the new zone. A separate patch contains the necessary code to actually
place bridge interfaces in the libvirt zone.
Why do we need a configure option to disable installation of the new
libvirt zone? It uses a new firewalld attribute that sets the priority
of a rich rule; this feature first appears in firewalld-0.7.0 (unless
it has been backported to an earlier firewalld by a downstream
maintainer). If the file were installed on a system with firewalld
that didn't support rule priorities, firewalld would log an error
every time it restarted, causing confusion and lots of extra bug
reports.
So we add two new configure.ac switches to avoid polluting the system
logs with this error on systems that don't support rule priorities -
"--with-firewalld-zone" and "--without-firewalld-zone". A package
builder can use these to include/exclude the libvirt zone file in the
installation. If firewalld is enabled (--with-firewalld), the default
is --with-firewalld-zone, but it can be disabled during configure
(using --without-firewalld-zone). Targets that are using a firewalld
version too old to support the rule priority setting in the libvirt
zone file can simply add --without-firewalld-zone to their configure
commandline.
These switches only affect whether or not the libvirt zone file is
*installed* in /usr/lib/firewalld/zones, but have no effect on whether
or not libvirt looks for a zone called libvirt and tries to use it.
NB: firewalld zones can only be added to the permanent config of
firewalld, and won't be loaded/enabled until firewalld is restarted,
so at package install/upgrade time we have to restart firewalld. For
rpm-based distros, this is done in the libvirt.spec file by calling
the %firewalld_restart rpm macro, which is a part of the
firewalld-filesystem package. (For distros that don't use rpm
packages, the command "firewall-cmd --reload" will have the same
effect).
Signed-off-by: Laine Stump <laine@laine.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
virFirewallDGetBackend() reports whether firewalld is currently using
an iptables or an nftables backend.
virFirewallDGetVersion() learns the version of the firewalld running
on this system and returns it as 1000000*major + 1000*minor + micro.
virFirewallDGetZones() gets a list of all currently active firewalld
zones.
virFirewallDInterfaceSetZone() sets the firewalld zone of the given
interface.
virFirewallDZoneExists() can be used to learn whether or not a
particular zone is present and active in firewalld.
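A hedged sketch of the version encoding mentioned above (the helper name
is made up; only the formula comes from this message):

/* Pack a firewalld version triple into the single number returned by
 * virFirewallDGetVersion(); e.g. firewalld 0.7.0 encodes to 7000. */
static unsigned long
exampleEncodeFirewallDVersion(unsigned int major,
                              unsigned int minor,
                              unsigned int micro)
{
    return major * 1000000UL + minor * 1000UL + micro;
}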
Signed-off-by: Laine Stump <laine@laine.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
In preparation for adding several other firewalld-specific functions,
separate the code that's unique to firewalld from the more-generic
"firewall" file.
Signed-off-by: Laine Stump <laine@laine.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Support for firewalld is a feature that can be selectively enabled or
disabled (using --with-firewalld/--without-firewalld), not merely
something that must be accounted for in the code if it is present with
no exceptions. It is more consistent with other usage in libvirt to
use WITH_FIREWALLD rather than HAVE_FIREWALLD.
Signed-off-by: Laine Stump <laine@laine.org>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1657468
Commit be1bb6c95 changed the way volumes were stored from a forward
linked list to a hash table. In doing so, it required that each vol
object would have 3 unique values as keys into tables - key, name,
and path. Due to how vHBA/NPIV LUNs are created/used this resulted
in a failure to utilize all the LUNs found during processing.
During virStorageBackendSCSINewLun processing, fetch the key (or
serial value) for NPIV LUNs using virStorageFileGetNPIVKey, which
will formulate a more unique key based on the serial value and
the port for the LUN.
Signed-off-by: John Ferlan <jferlan@redhat.com>
ACKed-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The vHBA/NPIV LUNs created via the udev processing of the
VPORT_CREATE command end up using the same serial value
as seen/generated by /lib/udev/scsi_id and returned
by virStorageFileGetSCSIKey. Therefore, in order to
generate a unique enough key to be used when adding the
LUN as a volume during virStoragePoolObjAddVol, a more
unique key needs to be generated for an NPIV volume.
The problem is illustrated by the following example, where
scsi_host5 is a vHBA used with the following LUNs:
$ lsscsi -tg
...
[5:0:4:0] disk fc:0x5006016844602198,0x101f00 /dev/sdh /dev/sg23
[5:0:5:0] disk fc:0x5006016044602198,0x102000 /dev/sdi /dev/sg24
...
Calling virStorageFileGetSCSIKey would return:
/lib/udev/scsi_id --device /dev/sdh --whitelisted --replace-whitespace /dev/sdh
350060160c460219850060160c4602198
/lib/udev/scsi_id --device /dev/sdi --whitelisted --replace-whitespace /dev/sdi
350060160c460219850060160c4602198
Note that although /dev/sdh and /dev/sdi are separate LUNs, they
end up with the same serial number used for the vol->key value.
When virStoragePoolFCRefreshThread calls virStoragePoolObjAddVol
the second LUN fails to be added with the following message
getting logged:
virHashAddOrUpdateEntry:341 : internal error: Duplicate key
To resolve this, virStorageFileGetNPIVKey will use a similar call
sequence as virStorageFileGetSCSIKey, except that it will add the
"--export" option to the call. This results in more detailed output
which needs to be parsed in order to formulate a unique enough key
to be used. In order to be unique enough, the returned value will
concatenate the target port as returned in the "ID_TARGET_PORT"
field from the command to the "ID_SERIAL" value.
Signed-off-by: John Ferlan <jferlan@redhat.com>
ACKed-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Alter the code to use the virStorageFileGetSCSIKey helper
to fetch the unique key for the SCSI disk. Alter the logic
to follow the former code, which would return a duplicate
of @dev either when virCommandRun succeeded but returned
an empty string, or when WITH_UDEV was not true.
Signed-off-by: John Ferlan <jferlan@redhat.com>
ACKed-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Alter the "real" code to return -2 on virCommandRun failure.
Alter the comments and function header to describe the function
and its returns.
Signed-off-by: John Ferlan <jferlan@redhat.com>
ACKed-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1503284
The way we currently start qemu from CPU affinity POV is as
follows:
1) the child process is set affinity to all online CPUs (unless
some vcpu pinning was given in the domain XML)
2) Once qemu is running, cpuset cgroup is configured taking
memory pinning into account
The problem is that we let qemu allocate its memory just anywhere in
1) and then rely on 2) to be able to move the memory to the
configured NUMA nodes. This might not always be possible (e.g.
qemu might lock some parts of its memory) and is very suboptimal
(copying large amounts of memory between NUMA nodes takes a
significant amount of time).
The solution is to set affinity to one of (in priority order):
- The CPUs associated with NUMA memory affinity mask
- The CPUs associated with emulator pinning
- All online host CPUs
Later (once QEMU has allocated its memory) we then change this
again to (again in priority order):
- The CPUs associated with emulator pinning
- The CPUs returned by numad
- The CPUs associated with vCPU pinning
- All online host CPUs
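A hedged sketch of the first fallback chain above (the helper and
parameter names are made up; virBitmapPtr is the real libvirt bitmap
type):

static virBitmapPtr
examplePickStartupAffinity(virBitmapPtr numaMemoryCpus, /* NUMA memory affinity mask */
                           virBitmapPtr emulatorPin,    /* emulator pinning */
                           virBitmapPtr allOnlineCpus)  /* all online host CPUs */
{
    if (numaMemoryCpus)
        return numaMemoryCpus;
    if (emulatorPin)
        return emulatorPin;
    return allOnlineCpus;
}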
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
This is mainly about /dev/sev and its default permissions 0600. Of
course, the rule of 'tinfoil' would be that we can't trust anything, but
the probing code in QEMU is considered safe from a security perspective.
Additionally, we can't create a udev rule for this at the moment, because
ioctls and file system permissions aren't cross-checked in the kernel and
therefore a user with read permissions could issue a 'privileged'
operation on SEV which is currently limited to root.
https://bugzilla.redhat.com/show_bug.cgi?id=1665400
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
The default permissions (0600 root:root) are of no use to the qemu
process so we need to change the owner to qemu iff running with
namespaces.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Instead of exposing /dev/sev to every domain, do it selectively.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
SEV has a limit on the number of concurrent guests. From a security POV
we should only expose resources (any resources, for that matter) to
domains that truly need them.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
We should not give domains access to something they don't necessarily
need by default. Remove it from the qemu driver docs too.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
virtio-mmio is still used by default, so if PCI is desired
it's necessary to explicitly opt-in by adding an appropriate
<address type='pci' domain='0x0000' ... />
element to the corresponding device.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
This adds an additional directive to the dnsmasq configuration file that
notifies clients via DHCP about the link's MTU. Guests can then choose to
adjust their link accordingly.
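For illustration (the exact dnsmasq syntax here is an assumption, not
quoted from this patch), the generated configuration would gain a line
along the lines of:
dhcp-option=option:mtu,9000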
Signed-off-by: Casey Callendrello <cdc@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
The 'qemu' binary used to provide the i386 emulator until it was renamed
to qemu-system-i386 in QEMU 1.0. Since we don't support such old
versions we don't need to check for 'qemu' when probing capabilities.
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The custom namespaces were originally registered against the storage
pool source struct, but during review this was changed to the top level
storage pool struct. The namespace URIs were not updated to match, so
had a redundant '/source' component.
Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Some clients poll virDomainGetBlockJobInfo rather than wait for the
VIR_DOMAIN_BLOCK_JOB_READY event. In some cases qemu can get to 100% and
still not reach the synchronised phase. Initiating a pivot in that case
will fail.
Given that computers are interacting here, the error that the job
can't be finalized yet is not handled very well by those specific
implementations.
Our docs now correctly state to use the event. We already do a similar
output adjustment in case when the progress is not available from qemu
as in that case we'd report 0 out of 0, which some apps also incorrectly
considered as 100% complete.
In this case we subtract 1 from the progress if the ready state has not
been signalled by qemu and the progress would otherwise be at 100%.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
A copy+paste mistake meant the wrong enum -> string convertor
function was used for the error when an incorrect feature capability was
used.
Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
PEP 8 says:
"Comparisons to singletons like None should always be done
with 'is' or 'is not', never the equality operators."
There are potentially semantics differences, though in the case of this
libvirt code its merely a style change:
http://jaredgrubb.blogspot.com/2009/04/python-is-none-vs-none.html
Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The virDomainDeviceInfo parameter is a large struct so it is preferable
to pass it by reference instead of by value.
Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The struct _virStorageBackendQemuImgInfo is quite large so it is
preferable to pass it by reference instead of by value. This requires
us to stop modifying the "compat" field.
Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The 'rv' variable is never changed after being declared, so can be
removed.
Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
'val' is initialized from virDomainCapsFeatureTypeFromString and a
few lines earlier there was already a check for 'val < 0'.
The 'val >= 0' check is thus always true. The enum conversion similarly
ensures that val will be less than VIR_DOMAIN_CAPS_FEATURE_LAST,
so 'val < VIR_DOMAIN_CAPS_FEATURE_LAST' is thus always true too.
Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Be more sensible when setting labels of the target of a
virDomainBlockCopy operation. Previously we'd relabel everything in case
it's a copy job even if there's no unlabelled backing chain. Since we
are also not sure whether the backing chain is shared we don't relabel
the chain on completion of the blockjob. This certainly won't play nice
with the image permission relabelling feature.
While this does not fix the case where the image is reused and has a
backing chain, it certainly sanitizes all the other cases. Later on it
will also allow doing the correct thing in cases where only one layer
was introduced.
The change is necessary because, once -blockdev is used, we will
need to hotplug the backing chain and thus labeling needs to be set up in
advance and not only at the time of the pivot. To avoid multiple code
paths, move the labeling now.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Rather than passing in a virStorageSource which would override the
originally passed disk->src we can now drop passing in a disk completely
as all functions called inside here require a virStorageSource.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Use the functions designed to deal with single images as the *Disk
functions were just wrappers.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Previously there weren't any suitable functions which would allow
setting up the host side of a full disk chain, so we'd opted to replace
the 'src' in a virDomainDiskDef with the new image source.
That is no longer necessary, so remove the munging.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Now that we have replacement in the form of the image labeling function
we can drop the unnecessary functions by replacing all callers.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
The same can be achieved by using qemuSecurity[Set|Restore]ImageLabel.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
The flag will control the VIR_SECURITY_DOMAIN_IMAGE_LABEL_BACKING_CHAIN
flag of the security driver image labeling APIs.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Security labeling of disks consists of labeling the disk image
itself and its backing chain. Modify
virSecurityManager[Set|Restore]ImageLabel to take a boolean flag that
will label the full chain rather than just the top image.
This allows deleting/unifying some parts of the code and will also
simplify callers in some cases.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Since the disk is necessary only to get the source, modify the functions
to take the source directly and rename them to
qemu[Setup|Teardown]ImageChainCgroup.
Additionally drop a pointless comment containing the old function name.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
When we need to detect a chain for an image which will become the new
source for a disk (e.g. after a disk media change or a blockjob) we'd
need to replace disk->src temporarily to do so.
Move the 'disksrc' temporary variable to an argument and adjust callers.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
The function at first validates the top image of the chain, then
traverses the chain as declared in the XML (if any) and then proceeds to
detect the rest of the chain from images. All of the steps have their
own temporary iterator.
Clarify the use scope of the steps by introducing a new temp variable
holding the top level source and adding comments.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Allow adjustment of RBD configuration options via the Storage
Pool XML namespace. When namespace arguments are
used to start the pool, add a VIR_WARN to indicate that the
startup was tainted by custom config_opts.
Based off original patch/concept:
https://www.redhat.com/archives/libvir-list/2014-May/msg00940.html
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
If the Storage Pool Namespace XML data exists, format the mount
options on the MOUNT command line and issue a VIR_WARN to indicate
that the storage pool was tainted by custom mount_opts.
When the pool is started, the options will be generated on the
command line along with the options already defined.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Introduce the virStoragePoolFSMountOptionsDef to be used to
manage the Storage Pool XML Namespace for mount options.
Using a new virStorageBackendNamespaceInit function, set the
virStoragePoolXMLNamespace into the _virStoragePoolOptions when
the storage backend is loaded.
Modify the storagepool.rng to allow for the usage of a different
XML namespace to parse the fs_mount_opts to be included with
the fs and netfs storage pool definitions.
Modify the storagepoolxml2xmltest to utilize a properly modified
XML file to parse and format the namespace for a netfs storage pool.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Introduce the infrastructure necessary to manage a Storage Pool XML
Namespace. The general concept is similar to virDomainXMLNamespace,
except that for Storage Pools the storage backend specific details
can be stored within the _virStoragePoolOptions, unlike the domain
processing code, which manages its xmlopts via the virDomainXMLOption
that is allocated/passed around for each domain.
This patch adds the parse, format, free, and href methods
required to process the XML and calls out from the Storage Pool Def
parse, format, and free APIs to perform the action on the XML data
for/from the backend.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
If protocolVer is present, add the -o nfsvers=# option to the command
line for the NFS Storage Pool.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Add an optional way to define which NFS server version will be
used to contact the target NFS server.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1584663
Modify the command generation to add some default options to the
fs/netfs storage pools based on the OS type. For Linux, it'll be
the "nodev, nosuid, noexec". For FreeBSD, it'll be "nosuid, noexec".
For others, just leave the options alone.
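As an illustration only (host name and paths are made up), the generated
mount command on Linux would then look roughly like:
/usr/bin/mount -t nfs -o nodev,nosuid,noexec nfs.example.com:/export/images /var/lib/libvirt/images/netfspool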
Modify the storagepoolxml2argvtest to handle the fact that the
same input XML could generate different output XML based on whether
Linux, FreeBSD, or other was being built.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Rather than deref off of "caps->guests", let's pass "caps->guests" and
caps->nguests to have the helper use "guests[i]->" instead.
Signed-off-by: John Ferlan <jferlan@redhat.com>
ACKed-by: Michal Privoznik <mprivozn@redhat.com>
Let's extract the <guest> code into its own method/helper.
NB: One minor change between the two is the usage of "buf" instead
of "&buf" in the new code, since we pass the address of buf (&buf)
to the helper.
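A tiny sketch of the pattern being described (the helper names are made
up; the virBuffer APIs are real):

static void
exampleFormatGuest(virBufferPtr buf)        /* receives &buf from the caller */
{
    virBufferAddLit(buf, "<guest>\n");      /* so it uses "buf" here ... */
    virBufferAddLit(buf, "</guest>\n");     /* ... rather than "&buf" */
}

static char *
exampleFormatCaps(void)
{
    virBuffer buf = VIR_BUFFER_INITIALIZER;

    exampleFormatGuest(&buf);               /* the caller passes "&buf" */
    return virBufferContentAndReset(&buf);
}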
Signed-off-by: John Ferlan <jferlan@redhat.com>
ACKed-by: Michal Privoznik <mprivozn@redhat.com>
Rather than deref off of "caps->host.", let's pass "&caps->host"
and make the helper use "host->" instead.
Signed-off-by: John Ferlan <jferlan@redhat.com>
ACKed-by: Michal Privoznik <mprivozn@redhat.com>
Let's extract the <host> code into its own method/helper.
NB: One minor change between the two is the usage of "buf" instead
of "&buf" in the new code, since we pass the address of buf (&buf)
to the helper.
Signed-off-by: John Ferlan <jferlan@redhat.com>
ACKed-by: Michal Privoznik <mprivozn@redhat.com>
This reverts commit 8b035c84d8.
The MTTCG impl in QEMU does allow pinning vCPUs.
When the guest is running we already check if pinning is
possible in the qemuDomainPinVcpuLive method, so this
check was adding no benefit.
When the guest is not running, we cannot know whether the
subsequent launch will use MTTCG or TCG, so we must allow
the pinning request. If the guest does use TCG on the next
launch it will fail, but this is no worse than if the user
had done a virDomainDefineXML with an XML doc specifying
vCPU pinning.
Reviewed-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
MTTCG is the new multi-threaded impl of TCG which follows
KVM in having one host OS thread per vCPU. Historically
we have discarded all PIDs reported for TCG guests, but
we must now selectively honour this data.
We don't have anything in the domain XML that indicates
whether a guest is using TCG or MTTCG. While QEMU does
have an option (-accel tcg,thread=single|multi), it is
not desirable to expose this in libvirt. QEMU will
automatically use MTTCG when the host/guest architecture
pairing is known to be safe. Only developers of QEMU TCG
have a strong reason to override this logic.
Thus we use two sanity checks to decide if the vCPU
PID information is usable. First we see if the PID
duplicates the main emulator PID, and second we see
if the PID duplicates any other vCPUs.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The transition to the ready state is best observed by events, as it's
asynchronous and does not nudge users towards polling. As currently only
the qemu driver supports block copy and block commit, and the ready state
event was introduced by qemu 1.3, we can fully switch to the new
approach.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
The documentation was only referring to a copy job, but in fact any
running blockjob will have the same results.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Add documentation that the 'VIR_DOMAIN_BLOCK_COPY_TRANSIENT_JOB' flag
is auto-assumed if the block copy job is started while the VM is
transient, and remove the restriction on defining the domain while the copy
is running.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Historically firewall rules for virtual networks were added straight
into the base chains. This works but has a number of bugs and design
limitations:
- It is inflexible for admins wanting to add extra rules ahead
of libvirt's rules, via hook scripts.
- It is not clear to the admin that the rules were created by
libvirt
- Each rule must be deleted by libvirt individually since they
are all directly in the builtin chains
- The ordering of rules in the forward chain is incorrect
when multiple networks are created, allowing traffic to
mistakenly flow between networks in one direction.
To address all of these problems, libvirt needs to move to creating
rules in its own private chains. In the top level builtin chains,
libvirt will add links to its own private top level chains.
Addressing the traffic ordering bug requires some extra steps. With
everything going into the FORWARD chain there was interleaving of rules
for outbound traffic and inbound traffic for each network:
-A FORWARD -d 192.168.3.0/24 -o virbr1 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 192.168.3.0/24 -i virbr1 -j ACCEPT
-A FORWARD -i virbr1 -o virbr1 -j ACCEPT
-A FORWARD -o virbr1 -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -i virbr1 -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -d 192.168.2.0/24 -o virbr0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 192.168.2.0/24 -i virbr0 -j ACCEPT
-A FORWARD -i virbr0 -o virbr0 -j ACCEPT
-A FORWARD -o virbr0 -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -i virbr0 -j REJECT --reject-with icmp-port-unreachable
The rule allowing outbound traffic from virbr1 would mistakenly
allow packets from virbr1 to virbr0, before the rule denying input
to virbr0 gets a chance to run.
What we really need to do is group the forwarding rules into three
distinct sets:
* Cross rules - LIBVIRT_FWX
-A FORWARD -i virbr1 -o virbr1 -j ACCEPT
-A FORWARD -i virbr0 -o virbr0 -j ACCEPT
* Incoming rules - LIBVIRT_FWI
-A FORWARD -d 192.168.3.0/24 -o virbr1 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o virbr1 -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -d 192.168.2.0/24 -o virbr0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o virbr0 -j REJECT --reject-with icmp-port-unreachable
* Outgoing rules - LIBVIRT_FWO
-A FORWARD -s 192.168.3.0/24 -i virbr1 -j ACCEPT
-A FORWARD -i virbr1 -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -s 192.168.2.0/24 -i virbr0 -j ACCEPT
-A FORWARD -i virbr0 -j REJECT --reject-with icmp-port-unreachable
There is thus no risk of outgoing rules for one network mistakenly
allowing incoming traffic for another network, as all incoming rules
are evaluated first.
With this in mind, we'll thus need three distinct chains linked from
the FORWARD chain, so we end up with:
INPUT --> LIBVIRT_INP (filter)
OUTPUT --> LIBVIRT_OUT (filter)
FORWARD +-> LIBVIRT_FWX (filter)
+-> LIBVIRT_FWO
\-> LIBVIRT_FWI
POSTROUTING --> LIBVIRT_PRT (nat & mangle)
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Some of the query callbacks want to know the firewall layer that was
being used for triggering the query to avoid duplicating that data.
Reviewed-by: Laine Stump <laine@laine.org>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Allow the platform driver impls to run logic before and after the
firewall reload process.
Reviewed-by: Laine Stump <laine@laine.org>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
disk->mirror would not be cleared while the local pointer was freed in
qemuDomainBlockCommit if qemuDomainObjExitMonitor or qemuBlockJobDiskNew
returned a failure.
Since block job handling is executed in the separate handler which needs
a qemu job, we don't need to pre-set the mirror state prior to starting
the job. Similarly the block copy job does not do that.
Move the setting of the data after starting the job so that we avoid
this problem.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
While this should not be necessary as we clear it in the event handler,
let's be sure and clear it prior to starting the job.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Switching a block job to some states (e.g. QEMU_BLOCKJOB_STATE_READY)
might not require a job, thus if it becomes ready asynchronously we
should not overwrite the state any more.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
While the callers should make sure that they don't call
qemuBlockJobEmitEvents for any internal state or job, let's add checks
that prevent us from emitting wrong events altogether.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1665553
Ceph can be mounted just like any other filesystem, and it is in fact
a shared, cluster filesystem. The filesystem magic constant
was taken from kernel sources as it is not in magic.h yet.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
We have this very handy macro called VIR_STEAL_PTR() which steals
one pointer into the other and sets the other to NULL. The
following coccinelle patch was used to create this commit:
@ rule1 @
identifier a, b;
@@
- b = a;
...
- a = NULL;
+ VIR_STEAL_PTR(b, a);
Some places were cleaned up afterwards to make syntax-check happy
(e.g. some curly braces were removed where the body became a one-liner).
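For reference, the macro itself boils down to roughly the following (a
sketch, not the exact definition in viralloc.h):

# define VIR_STEAL_PTR(lval, ptr) \
    do { \
        (lval) = (ptr); \
        (ptr) = NULL; \
    } while (0)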
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Implement support for passing custom command line arguments
to bhyve using the 'bhyve:commandline' element:
<bhyve:commandline>
<bhyve:arg value='-newarg'/>
</bhyve:commandline>
* Define virDomainXMLNamespace for the bhyve driver, which
at this point supports only the 'commandline' element
described above,
* Update command generation code to inject these command line
arguments between driver-generated arguments and the vmname
positional argument.
Signed-off-by: Roman Bogorodskiy <bogorodskiy@gmail.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
networkMigrateStateFiles was added nearly 5 years ago when the network
state directory was moved from /var/lib/libvirt to /var/run/libvirt
(just prior to libvirt-1.2.4). It was only required to maintain proper
state information for networks that were active during an upgrade that
didn't involve rebooting the host. At this point the likelihood of
anyone upgrading their libvirt from pre-1.2.4 directly to 5.0.0 or
later *without rebooting the host* is probably so close to 0 that no
properly informed bookie would take *any* odds on it happening, so it
seems appropriate to remove this pointless code.
Signed-off-by: Laine Stump <laine@laine.org>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>