When a VM is created with a pty device, on reboot the pty fd (sub
only) will only be associated with the VMM through the epoll event
loop. The fd being polled will have been closed because the VM itself
dropped the pty files (potentially reassigning the fd number to a
different item, making things quite confusing), and new pty fds will
be opened but not polled for input.
This change creates a structure to encapsulate the information about
the pty fd (main File, sub File and the path to the sub File). On
reboot, a copy of the console and serial pty structs is then passed
down to the new Vm instance which will be used instead of creating a
new pty device.
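As a rough sketch of the kind of structure this introduces (field
names and types here are illustrative, not the exact ones in the
tree):

    use std::fs::File;
    use std::path::PathBuf;

    // Encapsulates everything the VMM needs to keep about a pty.
    pub struct PtyPair {
        pub main: File,    // main side, polled by the VMM event loop
        pub sub: File,     // subordinate side, kept open across client reconnects
        pub path: PathBuf, // path to the subordinate device node
    }

    impl PtyPair {
        // On reboot, hand duplicated fds to the new Vm instead of opening
        // a fresh pty device.
        fn clone_for_reboot(&self) -> std::io::Result<PtyPair> {
            Ok(PtyPair {
                main: self.main.try_clone()?,
                sub: self.sub.try_clone()?,
                path: self.path.clone(),
            })
        }
    }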
This resolves the underlying issue from #2316.
Signed-off-by: William Douglas <william.r.douglas@gmail.com>
Now that virtio-mem devices can update VFIO mappings through dedicated
handlers, let's provide them from the DeviceManager.
It is important to note that these handlers should be provided either
to the virtio-mem devices or to the unique virtio-iommu device; the
two options are mutually exclusive.
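A minimal sketch of that constraint, using a hypothetical helper
(names do not match the actual DeviceManager code):

    // Either the unique virtio-iommu device gets the VFIO mapping handlers,
    // or the virtio-mem devices do -- never both.
    fn assign_vfio_handlers<H>(iommu_present: bool, handlers: Vec<H>) -> (Vec<H>, Vec<H>) {
        if iommu_present {
            (handlers, Vec::new()) // handlers owned by virtio-iommu
        } else {
            (Vec::new(), handlers) // handlers owned by virtio-mem devices
        }
    }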
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Instead of letting the VfioPciDevice decide how and when to perform
the DMA mapping and unmapping, we move this responsibility to the
DeviceManager.
The point is to let the DeviceManager choose which guest memory regions
should be mapped or not. In particular, we don't want the virtio-mem
region to be mapped/unmapped, as it is the virtio-mem device's
responsibility to do so.
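A simplified sketch of the decision that now lives in the
DeviceManager (types and names are illustrative, not the real API):

    struct Region {
        gpa: u64,
        host_addr: u64,
        size: u64,
    }

    // Map every guest memory region for VFIO except the virtio-mem region,
    // which the virtio-mem device maps/unmaps itself as memory is plugged.
    fn vfio_map_regions<E>(
        regions: &[Region],
        virtio_mem_gpa: Option<u64>,
        map: &mut dyn FnMut(u64, u64, u64) -> Result<(), E>,
    ) -> Result<(), E> {
        for r in regions {
            if Some(r.gpa) == virtio_mem_gpa {
                continue;
            }
            map(r.gpa, r.host_addr, r.size)?;
        }
        Ok(())
    }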
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
When memory is resized through ACPI, a new region is added to the guest
memory. This region must also be added to the corresponding memory zone
in order to keep everything in sync.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In particular, update for the vmm-sys-util upgrade and all the other
dependent packages. This requires an updated forked version of
kvm-bindings (due to the updated vfio-ioctls) but allows the removal
of our forked version of kvm-ioctls.
The changes to the API from kvm-ioctls and vmm-sys-util required some
other minor changes to the code.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This commit moves both pci and vmm code from the internal vfio-ioctls
crate to the upstream one from the rust-vmm project.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Now that ExternalDmaMapping is defined in vm-device, let's use it from
there.
This commit also defines the function get_host_address_range() to move
away from the vfio-ioctls dependency.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The help information displayed for our `--disk` option is incorrect
and incomplete, e.g. it is missing the `direct` and `poll_queue`
fields.
Signed-off-by: Bo Chen <chen.bo@intel.com>
The main idea behind this commit is to remove all the complexity
associated with TX/RX handling for virtio-net. By using the writev()
and readv() syscalls, we can get rid of intermediate buffers for both
queues.
The TAP registration has been simplified as well. The RX queue is only
processed when some data is ready to be read from the TAP. The event
related to the RX queue getting more descriptors only serves the
purpose of registering the TAP file if it is not registered already.
With all these simplifications, the code is not only more readable but
also more performant. We can see an improvement of 10% for a single
queue device.
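To illustrate the idea (this is a sketch, not the actual virtio-net
code): the guest buffers of a descriptor chain are gathered into
iovecs and handed to readv(), so TAP data lands directly in guest
memory without an intermediate copy:

    use std::os::unix::io::RawFd;

    fn rx_from_tap(tap_fd: RawFd, guest_bufs: &mut [&mut [u8]]) -> std::io::Result<usize> {
        let iovecs: Vec<libc::iovec> = guest_bufs
            .iter_mut()
            .map(|buf| libc::iovec {
                iov_base: buf.as_mut_ptr() as *mut libc::c_void,
                iov_len: buf.len(),
            })
            .collect();
        // SAFETY: the iovecs point to valid, writable buffers for the
        // duration of the call.
        let ret = unsafe { libc::readv(tap_fd, iovecs.as_ptr(), iovecs.len() as libc::c_int) };
        if ret < 0 {
            return Err(std::io::Error::last_os_error());
        }
        Ok(ret as usize)
    }

Writing to the TAP with writev() follows the same pattern with
read-only buffers.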
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This function can then be used by the TDX code to allocate the memory at
specific locations required for the TDVF to run from.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Update for clippy in Rust 1.50.0:
error: Unnecessary nested match
--> vmm/src/vm.rs:419:17
|
419 | / if let vm_device::BusError::MissingAddressRange = e {
420 | | warn!("Guest MMIO write to unregistered address 0x{:x}", gpa);
421 | | }
| |_________________^
|
= note: `-D clippy::collapsible-match` implied by `-D warnings`
help: The outer pattern can be modified to include the inner pattern.
--> vmm/src/vm.rs:418:17
|
418 | Err(e) => {
| ^ Replace this binding
419 | if let vm_device::BusError::MissingAddressRange = e {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ with this pattern
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#collapsible_match
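The fix is to fold the inner `if let` pattern into the outer match
arm; a self-contained sketch of the shape of the change (not the
literal vm.rs code):

    enum BusError {
        MissingAddressRange,
        Other,
    }

    fn log_mmio_error(res: Result<(), BusError>, gpa: u64) {
        match res {
            Ok(()) => {}
            // Collapsed form: the inner `if let` on the bound error is
            // folded into the outer pattern, which is what clippy asks for.
            Err(BusError::MissingAddressRange) => {
                eprintln!("Guest MMIO write to unregistered address 0x{:x}", gpa)
            }
            Err(_) => {}
        }
    }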
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
If a function can never return an error, this is now a clippy failure:
error: this function's return value is unnecessarily wrapped by `Result`
--> virtio-devices/src/watchdog.rs:215:5
|
215 | / fn set_state(&mut self, state: &WatchdogState) -> io::Result<()> {
216 | | self.common.avail_features = state.avail_features;
217 | | self.common.acked_features = state.acked_features;
218 | | // When restoring enable the watchdog if it was previously enabled. We reset the timer
... |
223 | | Ok(())
224 | | }
| |_____^
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#unnecessary_wraps
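The fix is to drop the Result wrapper and return unit directly,
updating callers accordingly; a trimmed-down sketch (field and type
names simplified from the real watchdog code):

    struct CommonState {
        avail_features: u64,
        acked_features: u64,
    }

    struct WatchdogState {
        avail_features: u64,
        acked_features: u64,
    }

    struct Watchdog {
        common: CommonState,
    }

    impl Watchdog {
        // No fallible operation left in the body, so no Result wrapper needed.
        fn set_state(&mut self, state: &WatchdogState) {
            self.common.avail_features = state.avail_features;
            self.common.acked_features = state.acked_features;
        }
    }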
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Depending on the host OS, the code that looks up the time for the CMOS
may require extra syscalls to be permitted for the vCPU thread.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
With all the preliminary work done in the previous commits, we can
update the VFIO implementation to support INTx along with MSI and MSI-X.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Here we are adding the PCI routing table, commonly called _PRT, to the
ACPI DSDT. For simplification reasons, we chose not to implement PCI
links, as this involves dynamic decisions from the guest OS, which
results in lots of complexity both from an AML perspective and from a
device manager perspective.
That's why the _PRT creates a static list of 32 entries, each assigned
with the IRQ number previously reserved by the device manager.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In order to support INTx for PCI devices, each PCI device must be
assigned an IRQ. This is preliminary work to reserve 8 IRQs which will
be shared across the 32 PCI devices.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In anticipation of accessing the legacy interrupt manager from the
function creating a VFIO PCI device, we store it as part of the
DeviceManager, to make it available for all methods.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The DeviceManager already holds the MSI interrupt manager, therefore
there's no need to pass it through every function. Instead, let's
simplify the code by using the attribute from the DeviceManager
instance.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Both GIC and IOAPIC must implement a new method notifier() in order to
provide the caller with an EventFd corresponding to the IRQ it refers
to.
This is needed in anticipation for supporting INTx with VFIO PCI
devices.
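A hedged sketch of what such a method could look like (the exact
trait and signature in the tree may differ):

    use vmm_sys_util::eventfd::EventFd;

    pub trait InterruptController {
        // Return the EventFd that, when written to, triggers the given IRQ.
        // This lets a VFIO PCI device wire its INTx eventfd directly to the
        // interrupt controller.
        fn notifier(&self, irq: usize) -> Option<EventFd>;
    }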
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In anticipation of supporting the notifier function for the legacy
interrupt source group, we need this function to return an EventFd
instead of a reference to this same EventFd.
The reason is that we can't return a reference when there's an
Arc<Mutex<>> involved in the call chain, since the returned reference
would borrow from a temporary lock guard.
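A small sketch of why the owned value is needed (illustrative types,
using EventFd::try_clone() to hand back a duplicate):

    use std::sync::{Arc, Mutex};
    use vmm_sys_util::eventfd::EventFd;

    struct Group {
        event: EventFd,
    }

    // Returning &EventFd here would borrow from the MutexGuard, which is
    // dropped when the function returns; returning a duplicated EventFd works.
    fn notifier(group: &Arc<Mutex<Group>>) -> Option<EventFd> {
        group.lock().unwrap().event.try_clone().ok()
    }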
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Swap the last two parameters of guest_mem_{read,write} to be consistent
with other read / write functions.
Use more descriptive parameter names.
No functional change.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
This reflects that it generates CPUID state used across all vCPUs.
Further ensure that errors from this function get correctly propagated.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Move the code for populating the CPUID with KVM HyperV emulation details from
the per-vCPU CPUID handling code to the shared CPUID handling code.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Move the code for populating the CPUID with details of the CPU
identification from the per-vCPU CPUID handling code to the shared CPUID
handling code.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Move the code for populating the CPUID with details of the maximum
address space from the per-vCPU CPUID handling code to the shared CPUID
handling code.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add the ability for cloud-hypervisor to create, manage and monitor a
pty for serial and/or console I/O from a user. The reasoning for
having cloud-hypervisor create the ptys is so that clients, libvirt
for example, could exit and later re-open the pty without causing I/O
issues. If the clients were responsible for creating the pty, then
when they exited, the main pty fd would close and cause
cloud-hypervisor to get I/O errors on writes.
Ideally the main and subordinate pty fds would be kept in the main
vmm's Vm structure. However, because the device manager owns parsing
the configuration for the serial and console devices, the information
is instead stored in new fields under the DeviceManager structure
directly.
From there, hooking up the main fd is intended to look as close as
possible to handling stdin and stdout on the tty (there is some future
work ahead for perhaps moving pty support into the vmm_sys_utils
crate).
The main fd is used for reading user input and for writing the output
of the Vm device. The subordinate fd is used to set up raw mode and is
kept open in order to avoid I/O errors when clients open and close the
pty device.
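A minimal sketch of the scheme, using libc directly (the actual
implementation may rely on different helpers): open a pty pair, keep
both ends, and put the subordinate end into raw mode:

    use std::fs::File;
    use std::os::unix::io::{AsRawFd, FromRawFd};

    fn open_pty_pair() -> std::io::Result<(File, File)> {
        let (mut main, mut sub): (libc::c_int, libc::c_int) = (0, 0);
        // SAFETY: openpty fills in both fds on success.
        let ret = unsafe {
            libc::openpty(
                &mut main,
                &mut sub,
                std::ptr::null_mut(),
                std::ptr::null(),
                std::ptr::null(),
            )
        };
        if ret < 0 {
            return Err(std::io::Error::last_os_error());
        }
        // SAFETY: both fds are valid and owned by us from this point on.
        let (main, sub) = unsafe { (File::from_raw_fd(main), File::from_raw_fd(sub)) };

        // Raw mode on the subordinate side so guest serial output is untouched.
        unsafe {
            let mut termios: libc::termios = std::mem::zeroed();
            libc::tcgetattr(sub.as_raw_fd(), &mut termios);
            libc::cfmakeraw(&mut termios);
            libc::tcsetattr(sub.as_raw_fd(), libc::TCSANOW, &termios);
        }
        Ok((main, sub))
    }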
The ability to handle multiple inputs as part of this change is
intentional. The current code allows serial and console ptys to be
created and both to be used as input. There was an implementation gap,
though: queue_input_bytes needed to be modified so the pty handlers
for serial and console could access the methods on the serial and
console structures directly. Without this change, only a single input
source could be processed, as the console would switch based on its
input type (this is still valid for tty and isn't otherwise modified).
Signed-off-by: William Douglas <william.r.douglas@gmail.com>
Use the newly added hugepages_size option, if provided by the user, to
pick a huge page size when creating the memfd region. If none is
specified, use the system default.
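A rough sketch of how a user-supplied size can be encoded into the
memfd_create() flags (constants per linux/memfd.h; the real code path
in the VMM differs in detail):

    use std::ffi::CString;

    fn create_guest_ram_fd(hugepage_size: Option<u64>) -> std::io::Result<libc::c_int> {
        // From linux/memfd.h: the huge page size is passed as log2(size)
        // shifted into the upper flag bits.
        const MFD_HUGE_SHIFT: libc::c_uint = 26;

        let mut flags = libc::MFD_CLOEXEC | libc::MFD_HUGETLB;
        if let Some(size) = hugepage_size {
            flags |= (size.trailing_zeros() as libc::c_uint) << MFD_HUGE_SHIFT;
        } // otherwise the kernel picks the default huge page size

        let name = CString::new("guest_ram").unwrap();
        // SAFETY: name is a valid C string and flags are valid for memfd_create.
        let fd = unsafe { libc::memfd_create(name.as_ptr(), flags) };
        if fd < 0 {
            return Err(std::io::Error::last_os_error());
        }
        Ok(fd)
    }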
Sadly, different huge page sizes cannot be tested by an integration
test, as creating a pool of a non-default size cannot be done at
runtime (it requires the kernel to be booted with certain parameters).
TEST=Manually tested with a kernel booted with both 1GiB and 2MiB huge
pages (hugepagesz=1G hugepages=1 hugepagesz=2M hugepages=512)
Fixes: #2230
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This allows the user to use an alternative huge page size; otherwise
the default size will be used.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This commit introduces new information in the VirtioMemZone structure
in order to know if the memory zone is backed by huge pages.
Based on this new information, the virtio-mem device can now determine
whether madvise(MADV_DONTNEED) should be performed or not. The madvise
documentation specifies that the MADV_DONTNEED advice will fail if the
memory range has been allocated with huge pages.
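A sketch of that decision (illustrative, not the exact virtio-mem
code):

    // Only punch holes with madvise(MADV_DONTNEED) when the zone is not
    // backed by huge pages; the kernel rejects the advice on hugetlb
    // mappings.
    fn discard_range(
        host_addr: *mut libc::c_void,
        size: usize,
        hugepages: bool,
    ) -> std::io::Result<()> {
        if hugepages {
            return Ok(());
        }
        // SAFETY: the caller guarantees [host_addr, host_addr + size) is a
        // valid mapping owned by this process.
        let ret = unsafe { libc::madvise(host_addr, size, libc::MADV_DONTNEED) };
        if ret < 0 {
            return Err(std::io::Error::last_os_error());
        }
        Ok(())
    }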
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Signed-off-by: Hui Zhu <teawater@antfin.com>
By introducing a ResizeSender object, we avoid having a Resize clone
with different content from the original Resize object.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Relying on the simplified version of the synchronous support for RAW
disk files, the new fixed_vhd_sync module in the block_util crate
introduces the synchronous support for fixed VHD disk files.
With this patch, the fixed VHD support is complete as it is implemented
in both synchronous and asynchronous versions.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By using preadv and pwritev directly, we can simply use a RawFd
instead of a File, and we don't need to use the more complex
implementation from the qcow crate.
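For illustration (a sketch of the approach, not the exact code): a
positioned vectored read straight into the guest buffers needs nothing
more than the RawFd:

    use std::os::unix::io::RawFd;

    fn read_at(fd: RawFd, offset: u64, bufs: &mut [&mut [u8]]) -> std::io::Result<usize> {
        let iovecs: Vec<libc::iovec> = bufs
            .iter_mut()
            .map(|b| libc::iovec {
                iov_base: b.as_mut_ptr() as *mut libc::c_void,
                iov_len: b.len(),
            })
            .collect();
        // SAFETY: the iovecs reference valid writable memory for the
        // duration of the call.
        let ret = unsafe {
            libc::preadv(fd, iovecs.as_ptr(), iovecs.len() as libc::c_int, offset as libc::off_t)
        };
        if ret < 0 {
            return Err(std::io::Error::last_os_error());
        }
        Ok(ret as usize)
    }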
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This commit adds asynchronous support for fixed VHD disk files.
It introduces FixedVhd as a new ImageType, moving the image type
detection to the block_util crate (instead of the qcow crate).
It creates a new vhd module in the block_util crate in order to handle
the VHD footer, following the VHD specification.
It creates a new fixed_vhd_async module in the block_util crate to
implement the asynchronous version of the fixed VHD disk file, relying
on io_uring.
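To give an idea of what following the specification involves (a
sketch with simplified error handling; offsets are from the VHD spec,
not from the actual vhd module):

    use std::fs::File;
    use std::io::{Read, Seek, SeekFrom};

    // Read the 512-byte footer at the end of a fixed VHD file and return
    // the virtual disk size ("Current Size", big-endian u64 at offset 48).
    fn fixed_vhd_size(file: &mut File) -> std::io::Result<u64> {
        let mut footer = [0u8; 512];
        file.seek(SeekFrom::End(-512))?;
        file.read_exact(&mut footer)?;

        // Every VHD footer starts with the "conectix" cookie.
        if &footer[0..8] != b"conectix" {
            return Err(std::io::Error::new(
                std::io::ErrorKind::InvalidData,
                "not a VHD footer",
            ));
        }
        let mut size = [0u8; 8];
        size.copy_from_slice(&footer[48..56]);
        Ok(u64::from_be_bytes(size))
    }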
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>