We already support several ways of setting migration bandwidth, and this
patch is not adding another one. It just makes it possible to read and
write this parameter using query-migrate-parameters and
migrate-set-parameters in a single call together with all other parameters.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
The parameters used a "migrate" prefix, which is pretty redundant: the
qemuMonitorMigrationParams structure is our internal representation of
QEMU migration parameters and is supposed to use names which match the
QEMU ones.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
We already support setting the maximum downtime with a dedicated
virDomainMigrateSetMaxDowntime API. This patch does not implement
another way of setting the downtime by adding a new public migration
parameter. It just makes sure any parameter we are able to get from the
QEMU monitor via query-migrate-parameters can be passed back to QEMU
through migrate-set-parameters.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
The second CHECK macro was used for string parameters. Let's rename it
to CHECK_STR and move it up to have all checks in one place.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
The first CHECK macro in the test is used for checking integer values.
Let's make it a bit more generic so it is usable for any numeric type,
and use it to implement a new CHECK_INT macro.
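For illustration, a minimal sketch of what such a generalized macro might
look like (the field names and the surrounding test harness are
assumptions):

    /* Hypothetical sketch: a numeric check parameterized by a
     * printf-style format string; CHECK_INT becomes a thin wrapper. */
    #define CHECK_NUM(FIELD, VALUE, FMT) \
        do { \
            if (params.FIELD != VALUE) { \
                fprintf(stderr, "Invalid value of " #FIELD ": " FMT \
                        ", expected " FMT "\n", params.FIELD, VALUE); \
                goto cleanup; \
            } \
        } while (0)

    #define CHECK_INT(FIELD, VALUE) CHECK_NUM(FIELD, VALUE, "%d")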
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
The check can easily be replaced with a simple test in the JSON
implementation, which means we don't need to update it every time a new
parameter is added.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
The macro (renamed to PARSE_SET) is now usable for any type which needs
a *_set bool to indicate a valid value.
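A minimal sketch of the resulting macro (the getter, key, and field names
are assumptions):

    /* Hypothetical sketch: parse a value from the JSON reply and record
     * its validity in the matching *_set bool. */
    #define PARSE_SET(GETTER, KEY, FIELD) \
        do { \
            if (GETTER(result, KEY, &params->FIELD) == 0) \
                params->FIELD##_set = true; \
        } while (0)

    PARSE_SET(virJSONValueObjectGetNumberInt, "compress-level", compressLevel);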
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
The Linux kernel shows our "cmt" feature as "cqm". Let's mention that
name in cpu_map.xml to make it easier to find.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Some tests require JSON_MODELS to be parsed into qemuCaps and applied
when computing CPU models, and such tests cannot succeed if the QEMU
driver is disabled. Let's mark the tests with JSON_MODELS_REQUIRED and
skip the appropriate parts when building without QEMU.
On the other hand, CPU tests with JSON_MODELS should succeed even if
model definitions from QEMU are not parsed and applied. Let's explicitly
test this by repeating the tests without JSON_MODELS set.
This fixes the build with the QEMU driver disabled, e.g., on some
architectures on RHEL/CentOS.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
When upgrading libvirt packages, there's no strict ordering for the
installation or removal of the individual libvirt sub packages. Thus
libvirt-daemon may be upgraded (and its %postun scriptlet started)
before all sub packages with driver libraries are upgraded. When
libvirt-daemon's %postun scriptlet restarts the daemon, old drivers may
still be lying around and the daemon may crash when it tries to use
them.
Let's restart the daemon in %posttrans to make sure libvirtd is
restarted only after all sub packages are at the same version.
https://bugzilla.redhat.com/show_bug.cgi?id=1464300
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Since the vhostuser type is really a tap device that is just plugged
into a different type of bridge, supporting QoS is trivial.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
For instance, NET_TYPE_MCAST doesn't support setting QoS. Instead
of claiming success and doing nothing, we should be explicit
about that and report an error.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1427049
Use virStorageBackendCreateVolUsingQemuImg to apply the LUKS information
to the logical volume just created. As part of processing the lvcreate
command, add 2MB to the capacity to account for the LUKS header when the
volume is determined to use encryption.
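A minimal sketch of the capacity adjustment (the exact placement in the
lvcreate handling is an assumption):

    /* Hypothetical sketch: reserve room for the LUKS header. */
    unsigned long long capacity = vol->target.capacity;
    if (vol->target.encryption &&
        vol->target.encryption->format == VIR_STORAGE_ENCRYPTION_FORMAT_LUKS)
        capacity += 2 * 1024 * 1024;    /* 2MB for the LUKS header */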
Refactor to extract out the LVCREATE command. This also removes the
need for the local @created since the error path can now only be reached
after the creation of the logical volume.
Signed-off-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1490279
Turns out virStorageBackendVolResizeLocal did not differentiate whether
the target volume was a LUKS volume or not and just blindly did the
ftruncate() on the target volume.
Follow the general volume creation logic and build a qemu-img resize
command to resize the LUKS target volume, ensuring that the --object
secret is provided as well as the '--image-opts' used by the qemu-img
resize logic to describe the path and secret, so that the luks driver is
used on the volume.
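The resulting invocation, built via virCommand, looks roughly like this
(the secret id and the paths are placeholders):

    /* Hypothetical sketch of the qemu-img resize command being built. */
    cmd = virCommandNewArgList("qemu-img", "resize",
                               "--object",
                               "secret,id=sec0,file=/path/to/secretfile",
                               "--image-opts",
                               "driver=luks,file.filename=/dev/pool/vol,"
                               "key-secret=sec0",
                               NULL);
    virCommandAddArgFormat(cmd, "%llu", capacity);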
Since all that was really needed was a couple of fields, and building
the object can be more generic, let's alter the args a bit. This will be
useful shortly when adding the secret object for a resize operation on a
luks volume.
Rather than passing just the path, pass the virStorageVolDefPtr as we're
going to need it shortly.
Also fix the order of code and stack variables in the calling function
virStorageBackendVolResizeLocal.
By default (without -d) the tests will only print failures, so the log
should follow the general "no message is a good message" style. But the
testfw checks always emit the skip info to stdout. Instead they should
use the redirection that is controlled by -d.
This avoids messages like the following cluttering the log:
Skipping FW AAVMF32 test. Could not find /usr/share/AAVMF/AAVMF32_CODE.fd
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Some globbing characters in the domain name could be used to break out
of apparmor rules, so let's forbid these in virt-aa-helper.
Also add a test to ensure all those cases are detected as bad characters.
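A rough sketch of the kind of check involved (the exact forbidden
character set is an assumption):

    #include <stdbool.h>
    #include <string.h>

    /* Hypothetical sketch: reject domain names containing characters
     * that have special meaning in AppArmor rule globs. */
    static bool
    name_is_safe(const char *name)
    {
        const char *globbing = "*?[]{}^,\"";    /* assumed set */
        return name[strcspn(name, globbing)] == '\0';
    }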
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Don't leak @blockNodes in the loop.
==226576== 7,120 bytes in 60 blocks are definitely lost in loss record 122 of 125
==226576== at 0x4835214: calloc (vg_replace_malloc.c:711)
==226576== by 0x4950D7B: virAllocN (viralloc.c:191)
==226576== by 0x49EB5BB: virXPathNodeSet (virxml.c:676)
==226576== by 0x104DB67: virQEMUCapsLoadCPUModels (qemu_capabilities.c:3738)
==226576== by 0x105510D: virQEMUCapsLoadCache (qemu_capabilities.c:3929)
==226576== by 0x104459F: qemuTestParseCapabilities (testutilsqemu.c:498)
==226576== by 0x1040DC9: testQemuCapsCopy (qemucapabilitiestest.c:105)
==226576== by 0x1041F07: virTestRun (testutils.c:180)
==226576== by 0x1040B45: mymain (qemucapabilitiestest.c:181)
==226576== by 0x104320F: virTestMain (testutils.c:1119)
==226576== by 0x1041149: main (qemucapabilitiestest.c:193)
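The fix amounts to releasing the node set allocated by virXPathNodeSet()
in every loop iteration, roughly (the surrounding loop and XPath are
assumptions):

    for (i = 0; i < nmodels; i++) {
        xmlNodePtr *blockNodes = NULL;
        int nblockers = virXPathNodeSet("./blocker", ctxt, &blockNodes);
        /* ... process the nblockers entries ... */
        VIR_FREE(blockNodes);    /* previously leaked */
    }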
Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Commit 6c43149c removed the minsize directive from the qemu logrotate
file but missed other hypervisors. Remove minsize from the libxl, lxc,
and uml logrotate files as well.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
On a cloud host it is possible to create hundreds of unique instances
per day, each leaving behind a /var/log/libvirt/qemu/instance-name.log
file that is < 100k. With the current 'minsize 100k' directive, these
files are never rotated and hence never removed. Over months, tens of
thousands of these files can accumulate on the host.
Dropping 'minsize 100k' allows rotating small files, which will
increase the number of log files, but 'rotate 4' ensures they will
be removed after a month.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
In bf3a4140 "virt-aa-helper: fix libusb access to udev usb data" the
libusb access needed to properly detect device/bus IDs was fixed.
The files under /run/udev/data/+usb* contain a subset of the information
we already allow to be read and are currently not needed for the
functionality qemu uses libusb for. But libusb still reads all those
files on initialization, so a lot of apparmor denials can be seen when
using USB host devices, like:
apparmor="DENIED" operation="open" name="/run/udev/data/+usb:2-1.2:1.0"
comm="qemu-system-x86" requested_mask="r" denied_mask="r"
Today we could silence the warnings with a deny rule without breaking
current use cases. But since the data in there is only a subset of what
can already be read, allowing it exposes no additional information. On
the other hand, a future udev/libusb/qemu combination might need it, so
allow the access in the default apparmor profile.
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Hot-adding disks does not parse the full XML to generate apparmor rules.
Instead it uses -f <PATH> to append a generic rule for that file path.
580cdaa7: "virt-aa-helper: locking disk files for qemu 2.10" implemented
the qemu 2.10 requirement to allow locking on disks images that are part of
the domain xml.
But on attach-device a user will still trigger an apparmor deny by going
through virt-aa-helper -f, to fix that add the lock "k" permission to the
append file case of virt-aa-helper.
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
When adding CPU usability blockers I forgot to properly free them in
virDomainCapsCPUModelsDispose.
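A minimal sketch of the missing cleanup (the structure layout is
assumed):

    /* Hypothetical sketch: free each model's blockers in dispose. */
    static void
    virDomainCapsCPUModelsDispose(void *obj)
    {
        virDomainCapsCPUModelsPtr cpuModels = obj;
        size_t i;

        for (i = 0; i < cpuModels->nmodels; i++) {
            VIR_FREE(cpuModels->models[i].name);
            virStringListFree(cpuModels->models[i].blockers);
        }
        VIR_FREE(cpuModels->models);
    }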
Reported-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The assumption so far was an average of 4 disks per guest. But some
architectures, like s390x, still often use plenty of smaller disks. To
include those in the considerations, an assumption of an average of 10
disks per guest is more reasonable.
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
The initial assumption was ~2 files per guest, but some common setups
like OpenStack drive this up to 4 files per guest, e.g. on Arm, where
the following XML leads to 4 file handles:
<serial type='file'>
  <source path='/var/lib/nova/instances/7c0dcd78-.../console.log'/>
  <target port='0'/>
  <alias name='serial0'/>
</serial>
<console type='file'>
  <source path='/var/lib/nova/instances/7c0dcd78-.../console.log'/>
  <target type='serial' port='0'/>
  <alias name='serial0'/>
</console>
With that in mind, and the target of supporting 4k guests by default, we
should raise the limit to 16k (4 files x 4096 guests = 16384).
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
QEMU identified a race condition between the device state serialization
and the end of storage migration. Both QEMU and libvirt need to be
updated to fix this.
Our migration workflow is modified so that after starting the migration
we wait for QEMU to enter the "pre-switchover", "postcopy-active", or
"completed" state. Once there, we cancel all block jobs as usual. But if
QEMU is in "pre-switchover", we need to resume the migration afterwards
and wait again for the real end (either the "postcopy-active" or
"completed" state).
Old QEMU will just enter either "postcopy-active" or "completed"
directly, which is still correctly handled even by new libvirt. The
"pre-switchover" state will only be entered if QEMU supports it and the
pause-before-switchover capability was enabled. Thus all combinations of
libvirt and QEMU will work, but only new QEMU with new libvirt will
avoid the race condition.
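In rough pseudocode (function names are illustrative; the QMP command
used to resume is migrate-continue):

    startMigration();
    waitForState(PRE_SWITCHOVER | POSTCOPY_ACTIVE | COMPLETED);
    cancelBlockJobs();                     /* as before */
    if (state == PRE_SWITCHOVER) {
        continueMigration();               /* QMP migrate-continue */
        waitForState(POSTCOPY_ACTIVE | COMPLETED);
    }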
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This new capability enables a pause before device state serialization so
that we can finish all block jobs without racing with the end of the
migration. The pause is indicated by the "pre-switchover" state. Once
we're done, QEMU enters the "device" migration state.
This patch just defines the new capability and QEMU migration states and
their mapping to our job states.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
For multipath disks it might be useful to have the same WWN for multiple
disks; it's the user's choice to do so. Since we dropped the check that
disallowed using duplicate WWNs, drop the docs as well.
https://bugzilla.redhat.com/show_bug.cgi?id=1464975
VirtualBox has an IVRDEServerInfo structure available that gives the
effective runtime port the VM is using while it's running. This is
useful when the "TCP/Ports" VBox property was set to a port range (e.g.
via autoport = "yes" or via VBoxManage), in which case it would be
impossible to get the "active" port otherwise.
Originally, autoport in the vbox driver was setting the port to the
default value (3389), which caused multiple VM instances to use the same
port. Since libvirt XML does not allow setting port ranges, this patch
changes the "autoport" behavior to set VBox's "TCP/Ports" property to an
arbitrary port range (3389-3689) to avoid that issue.
The VBOX_SESSION_OPEN/CLOSE macros are only called in
_vboxDomainSnapshotRestore and they are inflexible because they:
* assume the caller will have a variable named "data"
* can only create the Write lock type
Given the above, it's not that hard to simply use the VBOX API directly.
Commit fdeac7a05f tried to fix the output of 'virsh domxml-to-native
--help' by switching types around. One of the changes broke the option
parser: VSH_OT_ARGV should only be used for a variable argument count,
not to make the help generator look pretty. The correct option type in
this case is VSH_OT_STRING, as the option is no longer mandatory since
it can be substituted by --domain.
This makes --help for this command look incorrect, but the parser works
as it should.
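For reference, a sketch of the corrected option definition (other
entries omitted; the option name is an assumption):

    static const vshCmdOptDef opts_domxml_to_native[] = {
        {.name = "xml",
         .type = VSH_OT_STRING,    /* was VSH_OT_ARGV */
         .help = N_("xml data file to export from")
        },
        {.name = NULL}
    };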
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1494400
When starting a domain with a managed save image, we try to restore it
first. If the image is corrupted, we silently unlink it and just start
the domain normally. At this point the domain has no managed save image,
yet we did not reset the hasManagedSave flag.
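The fix boils down to clearing the flag once the corrupt image has been
unlinked, roughly (variable names assumed):

    /* Hypothetical sketch: reset the flag after removing the image. */
    if (unlink(managedSavePath) == 0)
        vm->hasManagedSave = false;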
https://bugzilla.redhat.com/show_bug.cgi?id=1460962
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
One of the use cases of iohelper is to read from a pipe and write to a
file with O_DIRECT. As we read from a pipe we can get a partial read,
and then we fail to write this data because the output file is open with
O_DIRECT and the buffer size is not aligned.
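A minimal sketch of the idea behind the fix, assuming a helper that
keeps reading until the buffer is full:

    #include <errno.h>
    #include <unistd.h>

    /* Hypothetical sketch: refill the buffer until it is full (or EOF)
     * so that all O_DIRECT writes except the last stay block-aligned. */
    static ssize_t
    read_full(int fd, char *buf, size_t len)
    {
        size_t got = 0;
        while (got < len) {
            ssize_t r = read(fd, buf + got, len - got);
            if (r < 0) {
                if (errno == EINTR)
                    continue;
                return -1;
            }
            if (r == 0)    /* EOF */
                break;
            got += r;
        }
        return got;
    }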
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>