On tests that expect a clean shutdown there is no need to try to kill
the child after wait() has returned, as the process has already exited.
Furthermore, there is no need to sleep before wait(), as wait() will
block until the VM and VMM shutdown is complete.
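For illustration, a minimal sketch of the resulting pattern (the helper
is hypothetical, not the actual test code):

    use std::process::Child;

    // wait() blocks until the VMM process has fully exited, so no sleep
    // before it and no kill() after it are needed.
    fn expect_clean_shutdown(mut child: Child) {
        let status = child.wait().expect("failed to wait on the VMM");
        assert!(status.success());
    }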
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This should address any flakiness as the VMM process will have
completely terminated and all files closed.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Many of the tests SSH into the VM and run "shutdown" but do not
actually check that the VM has shut down correctly before proceeding to
kill the child process. Remove the associated SSH commands and sleeps
from those tests that are not explicitly checking the shutdown
behaviour.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Only some tests need their output to be captured, so default to not
capturing the output to a pipe and instead make capturing controllable
per test.
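Something along these lines (names and shape are illustrative, not the
actual test code):

    use std::process::{Child, Command, Stdio};

    struct GuestCommand {
        command: Command,
        capture_output: bool,
    }

    impl GuestCommand {
        // Only the tests that need the output ask for it to be piped.
        fn capture_output(&mut self) -> &mut Self {
            self.capture_output = true;
            self
        }

        fn spawn(&mut self) -> std::io::Result<Child> {
            if self.capture_output {
                self.command.stdout(Stdio::piped()).stderr(Stdio::piped());
            }
            self.command.spawn()
        }
    }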
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Many of the tests use an identical network configuration. Add a
GuestCommand::default_net() to generate this configuration and use it
wherever possible.
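As a sketch, assuming the thin GuestCommand wrapper introduced further
down and purely illustrative argument values:

    impl GuestCommand {
        // Network configuration shared by most tests (values illustrative).
        fn default_net(&mut self) -> &mut Self {
            self.command
                .arg("--net")
                .arg("tap=,mac=,ip=192.168.2.1,mask=255.255.255.0");
            self
        }
    }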
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Many of the tests use an identical disk configuration. Add a
GuestCommand::default_disks() to generate this configuration and use it
wherever possible.
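Again as a sketch on the same wrapper, with placeholder image paths:

    impl GuestCommand {
        // Disk configuration shared by most tests (paths are placeholders).
        fn default_disks(&mut self) -> &mut Self {
            self.command
                .arg("--disk")
                .arg("path=osdisk.img")
                .arg("path=cloudinit.img");
            self
        }
    }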
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This is a thin wrapper over std::process::Command which currently only
specifies the default binary, but in future it will handle more default
behaviour.
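Roughly (the binary name below stands in for however the tests locate
the real binary):

    use std::process::Command;

    struct GuestCommand {
        command: Command,
    }

    impl GuestCommand {
        fn new() -> Self {
            Self {
                command: Command::new("cloud-hypervisor"),
            }
        }

        fn args<I, S>(&mut self, args: I) -> &mut Self
        where
            I: IntoIterator<Item = S>,
            S: AsRef<std::ffi::OsStr>,
        {
            self.command.args(args);
            self
        }
    }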
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
In anticipation of device hotplug support, this commit moves the
DeviceManager object into an Arc<Mutex<>> when it is created. This is
needed because the DeviceManager must implement the BusDevice trait and
be provided to the IO bus, so that IO accesses related to device hotplug
can be handled correctly.
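A simplified sketch of the shape this enables (the trait definition and
registration step are stand-ins, not the real code):

    use std::sync::{Arc, Mutex};

    // Simplified stand-in for the real BusDevice trait.
    trait BusDevice: Send {
        fn read(&mut self, base: u64, offset: u64, data: &mut [u8]);
        fn write(&mut self, base: u64, offset: u64, data: &[u8]);
    }

    struct DeviceManager;

    impl BusDevice for DeviceManager {
        fn read(&mut self, _base: u64, _offset: u64, data: &mut [u8]) {
            // A real implementation would report hotplug status here.
            data.iter_mut().for_each(|b| *b = 0);
        }
        fn write(&mut self, _base: u64, _offset: u64, _data: &[u8]) {
            // A real implementation would handle a hotplug request here.
        }
    }

    fn main() {
        // Wrapping the DeviceManager in Arc<Mutex<>> lets the same instance
        // be handed to the IO bus as a BusDevice while the VMM keeps its
        // own reference.
        let dm = Arc::new(Mutex::new(DeviceManager));
        let for_io_bus: Arc<Mutex<dyn BusDevice>> = dm.clone();
        // Something like io_bus.insert(for_io_bus, base, len) registers it.
        drop(for_io_bus);
    }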
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
We weren't processing events arriving at the HIPRIO queue, which meant
that FUSE_INTERRUPT, FUSE_FORGET, and FUSE_BATCH_FORGET requests were
being ignored.
One effect of this issue was that file descriptors weren't closed on
the server, so it eventually hit RLIMIT_NOFILE. Additionally, the
guest OS could hang while attempting to unmount the filesystem.
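In outline (event names and queue indexes are illustrative, not the
actual backend code):

    const HIPRIO_QUEUE_EVENT: u16 = 0;
    const REQ_QUEUE_EVENT: u16 = 1;

    fn handle_event(device_event: u16) {
        match device_event {
            // This arm was effectively missing, so hiprio requests such as
            // FUSE_FORGET were silently dropped.
            HIPRIO_QUEUE_EVENT => process_queue(0),
            REQ_QUEUE_EVENT => process_queue(1),
            _ => panic!("unexpected device event: {}", device_event),
        }
    }

    fn process_queue(_queue_index: usize) {
        // Pop the available descriptors and hand them to the FUSE server.
    }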
Signed-off-by: Sergio Lopez <slp@redhat.com>
There is no reason to give special capabilities to the Rust version
of virtiofsd since it behaves slightly differently and requires neither
DAC_OVERRIDE nor SYS_ADMIN.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
vhost_user_fs doesn't really support all vhost-user protocol features,
just MQ and SLAVE_REQ, so return only those from protocol_features().
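Conceptually (the exact import path depends on the vhost crate in use):

    use vhost_rs::vhost_user::message::VhostUserProtocolFeatures;

    fn protocol_features() -> VhostUserProtocolFeatures {
        // Only advertise what the backend actually implements.
        VhostUserProtocolFeatures::MQ | VhostUserProtocolFeatures::SLAVE_REQ
    }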
Signed-off-by: Sergio Lopez <slp@redhat.com>
Indirect descriptors are a virtio feature that allows the driver to
store a table of descriptors anywhere in memory, pointing to it from a
descriptor in the virtqueue ring that carries the VIRTQ_DESC_F_INDIRECT
flag.
We can't seamlessly transition from an iterator over a conventional
descriptor chain to an indirect chain, so Queue users need to
explicitly support this feature by calling Queue::is_indirect() and
Queue::new_from_indirect().
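As an illustrative mock of the control flow a consumer needs to add
(these are not the real vm-virtio types):

    struct DescriptorChain {
        indirect: bool,
    }

    impl DescriptorChain {
        fn is_indirect(&self) -> bool {
            self.indirect
        }
        fn new_from_indirect(&self) -> Result<DescriptorChain, ()> {
            // The real implementation re-reads the table from guest memory.
            Ok(DescriptorChain { indirect: false })
        }
    }

    fn process(chain: DescriptorChain) {
        let chain = if chain.is_indirect() {
            chain.new_from_indirect().expect("invalid indirect table")
        } else {
            chain
        };
        // ... walk `chain` as a conventional descriptor chain ...
        let _ = chain;
    }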
Signed-off-by: Sergio Lopez <slp@redhat.com>
The slave_fs_cache mod is imported under "vhost-user-slave" feature
control, but the self::slave_fs_cache::SlaveFsCacheReq import is not.
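The fix is presumably to gate both the same way, along these lines:

    #[cfg(feature = "vhost-user-slave")]
    mod slave_fs_cache;
    #[cfg(feature = "vhost-user-slave")]
    use self::slave_fs_cache::SlaveFsCacheReq;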
Signed-off-by: Eryu Guan <eguan@linux.alibaba.com>
We want to prevent losing interrupts while they are masked. The
way they can be lost is due to the internals of how they are connected
through KVM. An eventfd is registered to a specific GSI, and then a
route is associated with this same GSI.
The current code adds/removes a route whenever a mask/unmask action
happens. The problem with this approach is that KVM will consume the
eventfd but won't be able to find an associated route, and eventually
it won't be able to deliver the interrupt.
That's why this patch introduces a different way of masking/unmasking
the interrupts, simply by registering/unregistering the eventfd with the
GSI. This way, when the vector is masked, the eventfd is going to be
written but nothing will happen because KVM won't consume the event.
Whenever the unmask happens, the eventfd will be registered with a
specific GSI, and if there are pending events, KVM will trigger them
based on the route associated with the GSI.
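A sketch of the resulting mask/unmask handling, assuming a kvm-ioctls
VmFd and an EventFd already routed to the GSI (error handling elided):

    use kvm_ioctls::VmFd;
    use vmm_sys_util::eventfd::EventFd;

    fn mask(vm: &VmFd, fd: &EventFd, gsi: u32) {
        // Unregistering means KVM stops consuming the eventfd; writes to it
        // simply stay pending while the vector is masked.
        vm.unregister_irqfd(fd, gsi).unwrap();
    }

    fn unmask(vm: &VmFd, fd: &EventFd, gsi: u32) {
        // Re-registering lets KVM consume any pending event and deliver the
        // interrupt through the route still associated with the GSI.
        vm.register_irqfd(fd, gsi).unwrap();
    }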
Suggested-by: Liu Jiang <gerry@linux.alibaba.com>
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
We should not assume the offset produced by ECAM is identical to the
CONFIG_ADDRESS register of legacy PCI port I/O enumeration.
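For reference, the two encodings differ like this (standard PCI field
layouts, not the project's actual decoding code):

    fn decode_ecam_offset(offset: u64) -> (u8, u8, u8, u16) {
        let bus = ((offset >> 20) & 0xff) as u8;
        let device = ((offset >> 15) & 0x1f) as u8;
        let function = ((offset >> 12) & 0x07) as u8;
        let register = (offset & 0xfff) as u16; // byte offset, 4KiB space
        (bus, device, function, register)
    }

    fn decode_config_address(addr: u32) -> (u8, u8, u8, u16) {
        let bus = ((addr >> 16) & 0xff) as u8;
        let device = ((addr >> 11) & 0x1f) as u8;
        let function = ((addr >> 8) & 0x07) as u8;
        let register = (addr & 0xfc) as u16; // dword-aligned, 256B space
        (bus, device, function, register)
    }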
Signed-off-by: Qiu Wenbo <qiuwenbo@phytium.com.cn>
This option improves the security of the guest by randomising the start
address of the kernel in physical memory. We should turn it on to ensure
that all our functionality, such as memory hotplug and kernel loading,
works with it, as this option is widely used in production.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Recently, vhost_user_block gained the ability to actively poll the
queue, a feature that can be disabled with the poll_queue property.
This change adds this property to DiskConfig, so it can be used
through the "disk" argument.
For the moment, it can only be used when vhost_user=true, but this
will change once virtio-block gets the poll_queue feature too.
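The rough shape of the addition (existing DiskConfig fields are elided
and the default shown here is an assumption):

    use serde::Deserialize;

    #[derive(Clone, Debug, Deserialize)]
    pub struct DiskConfig {
        // ... existing fields (path, readonly, vhost_user, ...) ...
        // New: allows disabling the active polling of the queue; only
        // honoured when vhost_user=true for now.
        #[serde(default = "default_diskconfig_poll_queue")]
        pub poll_queue: bool,
    }

    fn default_diskconfig_poll_queue() -> bool {
        true
    }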
Fixes: #787
Signed-off-by: Sergio Lopez <slp@redhat.com>
Fix "readonly" and "wce" defaults in cloud-hypervisor.yaml to match
their respective defaults in config.rs:DiskConfig.
Signed-off-by: Sergio Lopez <slp@redhat.com>
This is a perfectly acceptable situation, as it simply causes the
backend to exit because the VMM has closed the connection. This
addresses the rather ugly reporting of errors from the backend that
appears interleaved with the output from the VMM.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Return a libc::ECONNRESET error when recvmsg() returns without a
message so that the upper levels will correctly interpret this as the
connection being broken.
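For illustration (the surrounding socket code is elided):

    fn check_recvmsg(ret: isize) -> std::io::Result<usize> {
        if ret == 0 {
            // No message: the peer (the VMM) has closed the connection.
            return Err(std::io::Error::from_raw_os_error(libc::ECONNRESET));
        }
        if ret < 0 {
            return Err(std::io::Error::last_os_error());
        }
        Ok(ret as usize)
    }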
Signed-off-by: Rob Bradford <robert.bradford@intel.com>