This should address any flakiness as the VMM process will have
completely terminated and all files closed.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Many of the tests attempted to SSH into the VM and then run "shutdown",
but did not actually check that the VM had shut down correctly before
proceeding to kill the child process. Remove the associated SSH commands
and sleeps from those tests that are not explicitly checking the shutdown
behaviour.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Only some tests require the output to be captured, so default to not
capturing the output to a pipe and instead make it controllable.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Many of the tests use an identical network configuration. Add a
GuestCommand::default_net() to generate this configuration and use it
wherever possible.
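A rough sketch of the helper's shape (the field names and the --net
argument format here are assumptions, not the actual test code):

    struct NetInfo { guest_mac: String, host_ip: String }
    struct GuestCommand { args: Vec<String>, net: NetInfo }

    impl GuestCommand {
        // Append the network configuration shared by most tests,
        // returning &mut Self so calls chain like Command's builder API.
        fn default_net(&mut self) -> &mut Self {
            let net = format!(
                "tap=,mac={},ip={},mask=255.255.255.0",
                self.net.guest_mac, self.net.host_ip
            );
            self.args.push("--net".into());
            self.args.push(net);
            self
        }
    }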
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Many of the tests use an identical disk configuration. Add a
GuestCommand::default_disks() to generate this configuration and use it
wherever possible.
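Following the same pattern, a sketch of the disk helper (the image names
and --disk syntax are placeholders):

    struct GuestCommand { args: Vec<String>, disk_dir: std::path::PathBuf }

    impl GuestCommand {
        // Append the OS and cloud-init disks shared by most tests.
        fn default_disks(&mut self) -> &mut Self {
            let os = self.disk_dir.join("osdisk.img");
            let ci = self.disk_dir.join("cloudinit.img");
            self.args.push("--disk".into());
            self.args.push(format!("path={}", os.display()));
            self.args.push(format!("path={}", ci.display()));
            self
        }
    }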
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This is a thin wrapper over std::process::Command which currently only
specifies the default binary but in future will handle more default
behaviour.
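A minimal sketch of such a wrapper (the binary name resolution is an
assumption; the real code locates the built cloud-hypervisor binary):

    use std::process::{Child, Command};

    struct GuestCommand {
        command: Command,
    }

    impl GuestCommand {
        fn new() -> Self {
            // Pin the default binary so individual tests don't have to.
            Self { command: Command::new("cloud-hypervisor") }
        }

        // Forward the builder-style API of std::process::Command.
        fn args<I, S>(&mut self, args: I) -> &mut Self
        where
            I: IntoIterator<Item = S>,
            S: AsRef<std::ffi::OsStr>,
        {
            self.command.args(args);
            self
        }

        fn spawn(&mut self) -> std::io::Result<Child> {
            self.command.spawn()
        }
    }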
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
In anticipation of the support for device hotplug, this commit moves the
DeviceManager object into an Arc<Mutex<>> when the DeviceManager is
being created. The reason is, we need the DeviceManager to implement the
BusDevice trait and then provide it to the IO bus, so that IO accesses
related to device hotplug can be handled correctly.
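In outline (the BusDevice trait here is simplified from the real
definition):

    use std::sync::{Arc, Mutex};

    trait BusDevice: Send {
        fn read(&mut self, offset: u64, data: &mut [u8]);
        fn write(&mut self, offset: u64, data: &[u8]);
    }

    struct DeviceManager { /* ... */ }

    impl BusDevice for DeviceManager {
        fn read(&mut self, _offset: u64, _data: &mut [u8]) {
            // Report hotplug state back to the guest here.
        }
        fn write(&mut self, _offset: u64, _data: &[u8]) {
            // Handle hotplug requests written by the guest here.
        }
    }

    fn create() -> Arc<Mutex<DeviceManager>> {
        // Wrapping in Arc<Mutex<>> lets the VMM keep a reference while
        // the same object is registered on the IO bus as a BusDevice.
        Arc::new(Mutex::new(DeviceManager { /* ... */ }))
    }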
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
We weren't processing events arriving at the HIPRIO queue, which
meant ignoring FUSE_INTERRUPT, FUSE_FORGET, and FUSE_BATCH_FORGET
requests.
One effect of this issue was that file descriptors weren't closed on
the server, so it eventually hits RLIMIT_NOFILE. Additionally, the
guest OS may hang while attempting to unmount the filesystem.
Signed-off-by: Sergio Lopez <slp@redhat.com>
There is no reason to give special capabilities to the Rust version of
virtiofsd since it behaves slightly differently and requires neither
DAC_OVERRIDE nor SYS_ADMIN.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
vhost_user_fs doesn't really support all vhost protocol features, just
MQ and SLAVE_REQ, so return that in protocol_features().
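The change, in outline (a fragment from the backend's VhostUserBackend
impl; the bitflags type comes from the vhost crates):

    fn protocol_features(&self) -> VhostUserProtocolFeatures {
        // Only advertise what the fs backend actually implements.
        VhostUserProtocolFeatures::MQ | VhostUserProtocolFeatures::SLAVE_REQ
    }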
Signed-off-by: Sergio Lopez <slp@redhat.com>
Indirect descriptors are a virtio feature that allows the driver to
store a table of descriptors anywhere in memory, pointing to it from a
virtqueue ring's descriptor with a particular flag.
We can't seamlessly transition from an iterator over a conventional
descriptor chain to an indirect chain, so Queue users need to
explicitly support this feature by calling Queue::is_indirect() and
Queue::new_from_indirect().
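A sketch of the calling convention (names as described above; surrounding
types and error handling elided, and the exact signatures may differ):

    for avail_desc in queue.iter(&mem) {
        if avail_desc.is_indirect() {
            // Build a fresh chain from the indirect table the
            // descriptor points to, and walk that instead.
            let indirect = Queue::new_from_indirect(&avail_desc, &mem)?;
            for desc in indirect {
                // handle each descriptor of the indirect chain
            }
        } else {
            // handle avail_desc as a conventional chain
        }
    }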
Signed-off-by: Sergio Lopez <slp@redhat.com>
We gate the slave_fs_cache module behind the vhost-user-slave feature,
but not the matching self::slave_fs_cache::SlaveFsCacheReq import.
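The fix, in outline:

    #[cfg(feature = "vhost-user-slave")]
    mod slave_fs_cache;
    // The import must be gated the same way as the module itself:
    #[cfg(feature = "vhost-user-slave")]
    use self::slave_fs_cache::SlaveFsCacheReq;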
Signed-off-by: Eryu Guan <eguan@linux.alibaba.com>
We want to prevent interrupts from being lost while they are masked.
They can be lost because of the way they are wired up through KVM: an
eventfd is registered to a specific GSI, and then a
route is associated with this same GSI.
The current code adds/removes a route whenever a mask/unmask action
happens. The problem with this approach is that KVM consumes the eventfd
but cannot find an associated route, so it eventually fails to deliver
the interrupt.
That's why this patch introduces a different way of masking/unmasking
the interrupts, simply by registering/unregistering the eventfd with the
GSI. This way, when the vector is masked, the eventfd is going to be
written but nothing will happen because KVM won't consume the event.
Whenever the unmask happens, the eventfd will be registered with a
specific GSI, and if there are pending events, KVM will trigger them,
based on the route associated with the GSI.
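A sketch of the new mask/unmask path using the kvm-ioctls API (error
handling elided):

    use kvm_ioctls::VmFd;
    use vmm_sys_util::eventfd::EventFd;

    fn mask(vm: &VmFd, irq_fd: &EventFd, gsi: u32) {
        // Detach the eventfd from the GSI: writers can still signal it,
        // but KVM no longer consumes the events, so they stay pending
        // instead of being silently dropped.
        vm.unregister_irqfd(irq_fd, gsi).unwrap();
    }

    fn unmask(vm: &VmFd, irq_fd: &EventFd, gsi: u32) {
        // Re-attach the eventfd: any pending event is picked up and
        // delivered through the route associated with the GSI.
        vm.register_irqfd(irq_fd, gsi).unwrap();
    }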
Suggested-by: Liu Jiang <gerry@linux.alibaba.com>
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
We should not assume the offset produced by ECAM is identical to the
CONFIG_ADDRESS register of legacy PCI port I/O enumeration.
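For reference, the two encodings differ (these are the PCI spec layouts,
not project code):

    // ECAM: offset into the memory-mapped configuration window.
    fn ecam_offset(bus: u64, device: u64, function: u64, register: u64) -> u64 {
        (bus << 20) | (device << 15) | (function << 12) | register
    }

    // Legacy port I/O: value written to CONFIG_ADDRESS (0xCF8).
    fn config_address(bus: u32, device: u32, function: u32, register: u32) -> u32 {
        0x8000_0000 | (bus << 16) | (device << 11) | (function << 8) | (register & 0xfc)
    }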
Signed-off-by: Qiu Wenbo <qiuwenbo@phytium.com.cn>
This option improves the security of the guest by randomising the start
address of the kernel in physical memory. We should turn it on to ensure
all our functionality, such as memory hotplug and kernel loading, keeps
working, as this option is widely used in production.
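The knob in question is presumably KASLR's kernel configuration option:

    CONFIG_RANDOMIZE_BASE=y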
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Recently, vhost_user_block gained the ability to actively poll the
queue, a feature that can be disabled with the poll_queue property.
This change adds this property to DiskConfig, so it can be used
through the "disk" argument.
For the moment, it can only be used when vhost_user=true, but this
will change once virtio-block gets the poll_queue feature too.
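A sketch of the shape of the change (the serde attributes and defaults
shown are assumptions):

    use serde::{Deserialize, Serialize};
    use std::path::PathBuf;

    #[derive(Deserialize, Serialize)]
    struct DiskConfig {
        path: PathBuf,
        // ...
        vhost_user: bool,
        // New: allow disabling active queue polling per disk.
        #[serde(default = "default_diskconfig_poll_queue")]
        poll_queue: bool,
    }

    fn default_diskconfig_poll_queue() -> bool {
        true
    }

On the command line this would look something like:

    --disk path=disk.img,vhost_user=true,socket=/tmp/vub.sock,poll_queue=false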
Fixes: #787
Signed-off-by: Sergio Lopez <slp@redhat.com>
Fix "readonly" and "wce" defaults in cloud-hypervisor.yaml to match
their respective defaults in config.rs:DiskConfig.
Signed-off-by: Sergio Lopez <slp@redhat.com>
This is a perfectly acceptable situation as it causes the backend to
exit because the VMM has closed the connection. This addresses the
rather ugly reporting of errors from the backend that appears
interleaved with the output from the VMM.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Return an error when recvmsg() returns without a message, using the
libc::ECONNRESET error so that the upper levels will correctly
interpret this as the connection being broken.
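In outline (the surrounding names are assumed):

    fn check_recv(bytes_read: usize) -> std::io::Result<usize> {
        if bytes_read == 0 {
            // recvmsg() returned without a message: the peer closed
            // the connection, so report it as such.
            return Err(std::io::Error::from_raw_os_error(libc::ECONNRESET));
        }
        Ok(bytes_read)
    }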
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
It's missing a few knobs (readonly, vhost, wce) that should be exposed
through the REST API.
Fixes: #790
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The kernel does not adhere to the ACPI specification (probably to work
around broken hardware): rather than busy looping after requesting an
ACPI reset, it will attempt to reset through other mechanisms (such as an
i8042 reset).
In order to trigger a reset, the devices write to an EventFd (called
reset_evt). This is used by the VMM to identify that a reset has been
requested and to make the VM reboot. As the reset_evt is part of the VMM
and reused for both the old and the new VM, it is possible for the newly
booted VM to be immediately reset because an old event is still sitting
in the EventFd.
The simplest solution is to "drain" the reset_evt EventFd on reboot to
make sure there are no spurious events in the EventFd.
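In outline (assuming the EventFd is created non-blocking, so read()
simply fails when there is nothing to consume):

    use vmm_sys_util::eventfd::EventFd;

    fn drain_reset_event(reset_evt: &EventFd) {
        // Consume any event left over from the previous VM so the
        // new one doesn't observe a spurious reset request.
        let _ = reset_evt.read();
    }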
Fixes: #783
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Now that vhost_user_backend and vm-virtio do support EVENT_IDX, use it
in vhost_user_block to reduce the number of notifications sent between
the driver and the device.
This is especially useful when using active polling on the virtqueue,
which will be implemented by a future patch.
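For background, the suppression check that EVENT_IDX enables is the one
from the virtio spec (not project code): a notification is only sent when
the new index crosses the event index published by the other side.

    fn vring_need_event(event_idx: u16, new_idx: u16, old_idx: u16) -> bool {
        // All arithmetic is modulo 2^16, hence wrapping_sub.
        new_idx.wrapping_sub(event_idx).wrapping_sub(1)
            < new_idx.wrapping_sub(old_idx)
    }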
This is a snapshot of kvm_stat while generating ~60K IOPS with fio on
the guest without EVENT_IDX:
Event Total %Total CurAvg/s
kvm_entry 393454 20.3 62494
kvm_exit 393446 20.3 62494
kvm_apic_accept_irq 378146 19.5 60268
kvm_msi_set_irq 369720 19.0 58881
kvm_fast_mmio 370497 19.1 58817
kvm_hv_timer_state 10197 0.5 1715
kvm_msr 8770 0.5 1443
kvm_wait_lapic_expire 7018 0.4 1118
kvm_apic 2768 0.1 538
kvm_pv_tlb_flush 2028 0.1 360
kvm_vcpu_wakeup 1453 0.1 278
kvm_apic_ipi 1384 0.1 269
kvm_fpu 1148 0.1 164
kvm_pio 574 0.0 82
kvm_userspace_exit 574 0.0 82
kvm_halt_poll_ns 24 0.0 3
And this is the snapshot while doing the same thing with EVENT_IDX:
Event Total %Total CurAvg/s
kvm_entry 35506 26.0 3873
kvm_exit 35499 26.0 3873
kvm_hv_timer_state 14740 10.8 1672
kvm_apic_accept_irq 13017 9.5 1438
kvm_msr 12845 9.4 1421
kvm_wait_lapic_expire 10422 7.6 1118
kvm_apic 3788 2.8 502
kvm_pv_tlb_flush 2708 2.0 340
kvm_vcpu_wakeup 1992 1.5 258
kvm_apic_ipi 1894 1.4 251
kvm_fpu 1476 1.1 164
kvm_pio 738 0.5 82
kvm_userspace_exit 738 0.5 82
kvm_msi_set_irq 701 0.5 69
kvm_fast_mmio 238 0.2 4
kvm_halt_poll_ns 50 0.0 1
kvm_ple_window_update 28 0.0 0
kvm_page_fault 4 0.0 0
As can be seen, the number of VM exits per second, especially those
related to notifications (kvm_fast_mmio and kvm_msi_set_irq), is
drastically lower.
Signed-off-by: Sergio Lopez <slp@redhat.com>