If the function can never return an error, this is now a clippy failure:
error: this function's return value is unnecessarily wrapped by `Result`
--> virtio-devices/src/watchdog.rs:215:5
|
215 | / fn set_state(&mut self, state: &WatchdogState) -> io::Result<()> {
216 | | self.common.avail_features = state.avail_features;
217 | | self.common.acked_features = state.acked_features;
218 | | // When restoring enable the watchdog if it was previously enabled. We reset the timer
... |
223 | | Ok(())
224 | | }
| |_____^
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#unnecessary_wraps
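A minimal sketch of the usual fix, assuming no trait bound forces the
Result wrapper: drop the `io::Result<()>` and return nothing, updating
callers accordingly.

    // Sketch only: the surrounding struct and WatchdogState are assumed
    // to match the snippet above.
    fn set_state(&mut self, state: &WatchdogState) {
        self.common.avail_features = state.avail_features;
        self.common.acked_features = state.acked_features;
        // When restoring, enable the watchdog if it was previously enabled.
    }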
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add the ability for cloud-hypervisor to create, manage and monitor a
pty for serial and/or console I/O from a user. The reasoning for
having cloud-hypervisor create the ptys is that clients, libvirt for
example, can exit and later re-open the pty without causing I/O
issues. If the clients were responsible for creating the pty, the main
pty fd would close when they exited and cause cloud-hypervisor to get
I/O errors on writes.
Ideally the main and subordinate pty fds would be kept in the main
vmm's Vm structure. However, because the device manager owns parsing
the configuration for the serial and console devices, the information
is instead stored in new fields under the DeviceManager structure
directly.
From there, hooking up the main fd is intended to look as close as
possible to handling stdin and stdout on the tty (there is some future
work ahead for perhaps moving support for the pty into the
vmm_sys_utils crate).
The main fd is used for reading user input and writing the output of
the VM device. The subordinate fd is used to set up raw mode and is
kept open in order to avoid I/O errors when clients open and close the
pty device.
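A hedged sketch of the pty creation itself (not the actual
implementation), using `libc::openpty` and wrapping both ends in File
so the subordinate end stays open for the lifetime of the device:

    use std::fs::File;
    use std::os::unix::io::FromRawFd;

    // Sketch only: create a pty pair; the main end drives guest I/O, the
    // subordinate end is kept open so clients can connect and disconnect
    // without the pty going away.
    fn create_pty() -> std::io::Result<(File, File)> {
        let mut main_fd: libc::c_int = -1;
        let mut sub_fd: libc::c_int = -1;
        let ret = unsafe {
            libc::openpty(
                &mut main_fd,
                &mut sub_fd,
                std::ptr::null_mut(), // subordinate's path is not needed here
                std::ptr::null(),     // default termios
                std::ptr::null(),     // default window size
            )
        };
        if ret != 0 {
            return Err(std::io::Error::last_os_error());
        }
        // Wrapping the raw fds in File gives RAII close semantics.
        Ok(unsafe { (File::from_raw_fd(main_fd), File::from_raw_fd(sub_fd)) })
    }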
The ability to handle multiple inputs as part of this change is
intentional. The current code allows serial and console ptys to be
created and both be used as input. There was an implementation gap,
though: queue_input_bytes needed to be modified so the pty handlers
for serial and console could access the methods on the serial and
console structures directly. Without this change only a single input
source could be processed, as the console would switch based on its
input type (this is still valid for tty and isn't otherwise
modified).
Signed-off-by: William Douglas <william.r.douglas@gmail.com>
Let's create a fixed VHD disk file from the existing RAW file with
qemu-img, and create a new integration test to validate that
Cloud-Hypervisor can boot a VHD disk image.
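A sketch of how the conversion might be driven from the integration
test (paths and the helper name are illustrative); `vpc` is qemu's
name for the VHD format and `subformat=fixed` requests a fixed-size
VHD:

    use std::process::Command;

    // Illustrative helper: convert the existing RAW image into a fixed VHD.
    fn create_fixed_vhd(raw_path: &str, vhd_path: &str) {
        let status = Command::new("qemu-img")
            .args(&[
                "convert", "-f", "raw", "-O", "vpc", "-o", "subformat=fixed",
                raw_path, vhd_path,
            ])
            .status()
            .expect("failed to launch qemu-img");
        assert!(status.success());
    }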
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By creating the TAP interface with `net_util::open_tap`, the interface
will be deleted when the returned value (`net_util::Tap`) is dropped.
Signed-off-by: Bo Chen <chen.bo@intel.com>
The Windows image is quite large (about 20 GiB), hence copying it for
every test (in order to avoid potential corruption) takes some time.
One way to mitigate that without compromising on safety between each
test is by using device mapper. By creating a read-only base, we ensure
the image won't be modified by any of the tests, and by creating one
snapshot for each test, we avoid copying the entire image each time.
A dedicated Copy On Write disk image is created to handle any change
that might be performed on the base image, letting the tests behave as
expected.
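A hedged sketch of the device-mapper layout this describes, driven
through dmsetup (device names, sector counts and the snapshot chunk
size below are placeholders, not the values used by the CI):

    use std::process::Command;

    fn dmsetup(args: &[&str]) {
        let status = Command::new("dmsetup").args(args).status().unwrap();
        assert!(status.success());
    }

    // Read-only base over the loop device holding the Windows image, plus
    // one snapshot per test whose writes land in a per-test COW loop device.
    fn setup_windows_disk(base_sectors: u64) {
        let base_table = format!("0 {} linear /dev/loop0 0", base_sectors);
        dmsetup(&["create", "windows-base", "--readonly", "--table", base_table.as_str()]);

        let snap_table = format!(
            "0 {} snapshot /dev/mapper/windows-base /dev/loop1 P 8",
            base_sectors
        );
        dmsetup(&["create", "windows-snapshot-0", "--table", snap_table.as_str()]);
    }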
Fixes: #2155
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By relying on the Guest object, Windows dedicated tests copy the
Windows guest image before booting from it, the point being to avoid
corruption between multiple tests. This is already how the rest of the
integration tests work; Windows tests were the only ones missing this
feature.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This image does not have the pollinate service, which can sometimes
fail and prevent SSH from starting since it marks itself as a
prerequisite. This service would never fully succeed anyway, as it
tries to make a network connection that will fail inside our test VMs.
Fixes: #2113
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Using --net=host is not necessary for any of the integration tests, so
let's use the default network option called "bridge".
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Some sporadic failures were due to connecting to the VM too early,
while it was not fully ready. Increasing the sleep times fixes these
issues.
Fixes: #2104
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Given we already check that the connected IP address matches the
expected guest IP address, the check on the "booted" message is not
needed.
Fixes: #2117
Signed-off-by: Bo Chen <chen.bo@intel.com>
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This test is very flaky and regularly causes CI failures. Until we can
identify the root cause we should disable this test.
See: #2103
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Simplify our image handling by not copying both QCOW2 and raw images
for every test. Allow the test to choose QCOW2 or raw by specifying
the image name manually. A follow-on patch will add explicit QCOW2
tests.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
When an SSH command fails we want to be able to see, via a panic(),
why and where it failed. Replace the use of .unwrap_or_default() on
SSH command calls to ensure that we can see the location of the panic.
Also enhance the existing SSH output code to show the error if there
is one.
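A small sketch of the difference (the ssh_command closure below is a
stand-in for the real test helper, not its actual signature):

    use std::io;

    fn check_output(ssh_command: impl Fn(&str) -> io::Result<String>) {
        // Before: an SSH failure was silently replaced by an empty string.
        let _silent = ssh_command("cat /proc/cmdline").unwrap_or_default();
        // After: a failure panics right here, so the test report points at
        // this line and carries the underlying error.
        let out = ssh_command("cat /proc/cmdline").unwrap();
        assert!(!out.is_empty());
    }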
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Tests using direct kernel boot start significantly quicker than those
booting via the firmware and stock kernel, as the latter triggers a
reboot during the boot process due to the initrd handling.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
When doing a direct kernel boot, only have console=ttyS0 in the
command line if we are explicitly testing the serial output. The
default behaviour is `--serial null`, so this output would not be
visible but would still trigger a KVM exit for every byte, which is
very costly when running under nested virtualization.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Starting the virtio device threads from the VMM thread has slowed down
the start of the VM when running on a highly contended system like the
CI.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
On the CI we are seeing that the epoll sometimes returns these errors,
which do not indicate a failure but rather that we should retry.
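A minimal sketch of the retry loop, assuming the errors in question
are interruptions (EINTR) surfaced as `io::ErrorKind::Interrupted` by
the epoll wrapper in use:

    use std::io;

    // Keep calling the wait closure until it returns something other than
    // an interruption, which is not a real failure.
    fn wait_retrying(mut wait: impl FnMut() -> io::Result<usize>) -> io::Result<usize> {
        loop {
            match wait() {
                Err(e) if e.kind() == io::ErrorKind::Interrupted => continue,
                other => return other,
            }
        }
    }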
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
As we switched to focal for this test, we no longer get any output
during boot unless serial is used over virtio-console.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
There have been a lot of flakes around tests such as
test_virtio_fs_hotplug_dax_on_w_vhost_user_fs_daemon() or
test_virtio_fs_hotplug_dax_on(), which all try to hotplug memory.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
With the removal of vhost-user self-spawning support, we should
migrate the tests to use the backend binaries so that we can remove
the functionality from the cloud-hypervisor binary itself.
See: #1925
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
While the addressable space size reduction of 4k is necessary due to
the Linux bug, the 64k alignment of the addressable space size is
required by Windows. This patch satisfies both.
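One way to satisfy both constraints, sketched with illustrative
constant names: shrink the size by 4 KiB and then align it down to a
64 KiB boundary.

    const FOUR_KIB: u64 = 4 << 10;
    const SIXTY_FOUR_KIB: u64 = 64 << 10;

    // Returns a size that is at least 4 KiB smaller than the input and is
    // a multiple of 64 KiB.
    fn addressable_size(size: u64) -> u64 {
        (size - FOUR_KIB) & !(SIXTY_FOUR_KIB - 1)
    }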
Signed-off-by: Anatol Belski <anbelski@linux.microsoft.com>
Make the test case test_snapshot_restore X86-only, instead of
excluding it from the test command line.
The command line option was added because we used to support migration
with Virtio-MMIO, but not Virtio-PCI.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Tests not ported include 1) the ones that start guest VMs without
network (e.g. test_net_hotplug, test_initramfs), and 2) test_vfio,
which involves an L2 guest. Also, some tests that use the bionic guest
image are given an extended timeout (120s) for 'wait_vm_boot'.
Signed-off-by: Bo Chen <chen.bo@intel.com>
Instead of waiting blindly for a fixed amount of time, we can use the
`wait-timeout` crate to explicitly wait for VM shutdown (with a
timeout). This can reduce the execution time of some tests
substantially. Also, this patch increases the `shutdown` timeout for
'test_reboot', which should fix the recent sporadic failures of this
test.
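A sketch of the `wait-timeout` usage (the timeout value is a
placeholder):

    use std::process::Child;
    use std::time::Duration;
    use wait_timeout::ChildExt;

    // Wait for the cloud-hypervisor child process to exit, but never
    // longer than `secs` seconds.
    fn wait_for_exit(child: &mut Child, secs: u64) {
        match child.wait_timeout(Duration::from_secs(secs)).unwrap() {
            Some(status) => assert!(status.success()),
            None => {
                // Still running after the timeout: kill it and fail the test.
                child.kill().unwrap();
                panic!("child did not exit within {}s", secs);
            }
        }
    }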
Signed-off-by: Bo Chen <chen.bo@intel.com>
Instead of blindly waiting for 20-40s for the guest VM to boot, this
patch explicitly waits for a notification from the guest VM by using a
simple TcpListener on the host and a custom systemd service in the
guest.
This patch also ports a few tests to use this new mechanism, while
more tests are to be ported.
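A sketch of the host side of this mechanism (the port number is a
placeholder); the guest-side systemd unit simply opens a TCP
connection back to the host once boot has completed:

    use std::net::TcpListener;

    // Block until the guest's boot-notification service connects.
    fn wait_vm_boot(port: u16) -> std::io::Result<()> {
        let listener = TcpListener::bind(("0.0.0.0", port))?;
        let (_stream, peer) = listener.accept()?;
        println!("guest at {} reported boot complete", peer);
        Ok(())
    }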
Signed-off-by: Bo Chen <chen.bo@intel.com>