Move the code for populating the CPUID with details of the maximum
address space from the per-vCPU CPUID handling code to the shared CPUID
handling code.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add the ability for cloud-hypervisor to create, manage and monitor a
pty for serial and/or console I/O from a user. The reasoning for
having cloud-hypervisor create the ptys is so that clients, libvirt
for example, can exit and later re-open the pty without causing I/O
issues. If the clients were responsible for creating the pty, the main
pty fd would close when they exited, causing cloud-hypervisor to get
I/O errors on writes.
Ideally the main and subordinate pty fds would be kept in the main
vmm's Vm structure. However, because the device manager owns parsing
the configuration for the serial and console devices, the information
is instead stored in new fields under the DeviceManager structure
directly.
From there hooking up the main fd is intended to look as close to
handling stdin and stdout on the tty as possible (there is some future
work ahead for perhaps moving support for the pty into the
vmm_sys_utils crate).
The main fd is used for reading user input and writing output from the
Vm device. The subordinate fd is used to set up raw mode and is kept
open in order to avoid I/O errors when clients open and close the pty
device.
The ability to handle multiple inputs as part of this change is
intentional. The current code allows serial and console ptys to be
created and both to be used as input. There was an implementation gap,
though: queue_input_bytes needed to be modified so the pty handlers
for serial and console could access the methods on the serial and
console structures directly. Without this change only a single input
source could be processed, as the console would switch based on its
input type (this is still valid for tty and isn't otherwise modified).
Signed-off-by: William Douglas <william.r.douglas@gmail.com>
This thread is virtio-net specific, so it is not handled in the common
virtio device code.
The non-vhost implementation resumes the thread itself. Do the same
thing for vhost-user-net.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
Use the newly added hugepages_size option, if provided by the user, to
pick a huge page size when creating the memfd region. If none is
specified, the system default is used.
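As a sketch of the flag selection (constant values follow the Linux
UAPI; the actual plumbing in the memory manager differs):

const MFD_HUGETLB: u32 = 0x0004;
const MFD_HUGE_SHIFT: u32 = 26;

// Build memfd_create() flags from an optional huge page size: encode
// log2(size) in the upper bits so the kernel picks that pool, or fall
// back to the system default huge page size when none is given.
fn hugepage_memfd_flags(hugepage_size: Option<u64>) -> u32 {
    match hugepage_size {
        Some(size) => MFD_HUGETLB | (size.trailing_zeros() << MFD_HUGE_SHIFT),
        None => MFD_HUGETLB,
    }
}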
Sadly, different huge page sizes cannot be exercised by an integration
test, as creating a pool of a non-default size cannot be done at
runtime (it requires the kernel to be booted with certain parameters).
TEST=Manually tested with a kernel booted with both 1GiB and 2MiB huge
pages (hugepagesz=1G hugepages=1 hugepagesz=2M hugepages=512)
Fixes: #2230
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Including a warning that the user is responsible for ensuring that they
have sufficient pages of the specified size.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This allows the user to use an alternative huge page size; otherwise
the default size will be used.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This commit introduces new information to the VirtioMemZone structure
in order to know if the memory zone is backed by hugepages.
Based on this new information, the virtio-mem device is now able to
determine if madvise(MADV_DONTNEED) should be performed or not. The
madvise documentation specifies that the MADV_DONTNEED advice will fail
if the memory range has been allocated with hugepages.
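A minimal sketch of the resulting check (function and parameter names
are illustrative, not the actual virtio-mem device code):

// Skip MADV_DONTNEED for hugepage-backed regions, where the kernel
// rejects the advice; otherwise discard the range as before.
fn discard_range(addr: *mut libc::c_void, len: usize, hugepages: bool) -> std::io::Result<()> {
    if hugepages {
        return Ok(());
    }
    let ret = unsafe { libc::madvise(addr, len, libc::MADV_DONTNEED) };
    if ret < 0 {
        return Err(std::io::Error::last_os_error());
    }
    Ok(())
}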
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Signed-off-by: Hui Zhu <teawater@antfin.com>
This commit performs some refactoring to turn all functions into
methods on a specific object, in particular methods on MemEpollHandler.
The point is to simplify the code and make it more readable.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Adjust the code to comply better with the virtio-mem specification by
adding some validation for the virtio-mem configuration, but also by
updating the virtio-mem configuration itself.
Nowhere in the virtio-mem specification is it stated that the usable
region size must be adjusted every time the plugged size changes. For
the sake of simplification, and without going against the
specification, the usable region size is now kept static, setting its
value to the size of the whole region.
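To illustrate the simplification with a hypothetical struct mirroring
the virtio-mem config fields:

struct VirtioMemConfig {
    region_size: u64,
    usable_region_size: u64,
    plugged_size: u64,
}

impl VirtioMemConfig {
    fn new(region_size: u64) -> Self {
        VirtioMemConfig {
            region_size,
            // Kept static: the whole region is always reported as
            // usable, regardless of how plugged_size evolves.
            usable_region_size: region_size,
            plugged_size: 0,
        }
    }
}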
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By introducing a ResizeSender object, we avoid having a Resize clone
with different contents from the original Resize object.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
MOV R/RM is a special case of MOVZX, so we generalize the mov_r_rm macro
to make it support both instructions.
Fixes: #2227
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The vhd module is the implementation of the VHD specification, which is
why it is important to unit test it.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Let's create a fixed VHD disk file from the existing RAW file with
qemu-img, and add a new integration test to validate that
Cloud-Hypervisor can boot from a VHD disk image.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Relying on the simplified version of the synchronous support for RAW
disk files, the new fixed_vhd_sync module in the block_util crate
introduces the synchronous support for fixed VHD disk files.
With this patch, the fixed VHD support is complete as it is implemented
in both synchronous and asynchronous versions.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By using preadv and pwritev directly, we can simply use a RawFd instead
of a File, and we don't need the more complex implementation from the
qcow crate.
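A hedged sketch of a read on a RawFd via preadv(2) (the real
fixed_vhd_sync code is structured differently):

use std::os::unix::io::RawFd;

// preadv() takes an explicit offset and leaves the file cursor alone,
// so a bare RawFd is enough and no File wrapper is needed.
fn read_at(fd: RawFd, buf: &mut [u8], offset: u64) -> std::io::Result<usize> {
    let iov = libc::iovec {
        iov_base: buf.as_mut_ptr() as *mut libc::c_void,
        iov_len: buf.len(),
    };
    let ret = unsafe { libc::preadv(fd, &iov, 1, offset as libc::off_t) };
    if ret < 0 {
        return Err(std::io::Error::last_os_error());
    }
    Ok(ret as usize)
}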
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This commit adds the asynchronous support for fixed VHD disk files.
It introduces FixedVhd as a new ImageType, moving the image type
detection to the block_util crate (instead of the qcow crate).
It creates a new vhd module in the block_util crate in order to handle
the VHD footer, following the VHD specification.
It creates a new fixed_vhd_async module in the block_util crate to
implement the asynchronous version of fixed VHD disk files. It relies
on io_uring.
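For illustration, detecting a fixed VHD from its footer could look like
the sketch below (the real vhd module handles more of the footer
fields):

use std::fs::File;
use std::io::{Read, Seek, SeekFrom};

// The VHD specification places a 512-byte footer at the end of the
// image: an 8-byte "conectix" cookie at offset 0 and a 4-byte disk
// type at offset 60, where the value 2 means "fixed hard disk".
fn is_fixed_vhd(file: &mut File) -> std::io::Result<bool> {
    let mut footer = [0u8; 512];
    file.seek(SeekFrom::End(-512))?;
    file.read_exact(&mut footer)?;
    let cookie_ok = &footer[0..8] == b"conectix";
    let disk_type = u32::from_be_bytes([footer[60], footer[61], footer[62], footer[63]]);
    Ok(cookie_ok && disk_type == 2)
}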
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The _EJx built-in methods should not return a value.
dsdt.dsl 813: Return (CEJ0 (0x00))
Warning 3104 - ^ Reserved method should not return a value (_EJ0)
dsdt.dsl 813: Return (CEJ0 (0x00))
Error 6080 - ^ Called method returns no value
Fixes: #2216
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The mutex timeout should be 0xffff rather than 0xfff to disable the
timeout feature.
dsdt.dsl 745: Acquire (\_SB.PRES.CPLK, 0x0FFF)
Warning 3130 - ^ Result is not used, possible operator timeout will be missed
dsdt.dsl 767: Acquire (\_SB.PRES.CPLK, 0x0FFF)
Warning 3130 - ^ Result is not used, possible operator timeout will be missed
dsdt.dsl 775: Acquire (\_SB.PRES.CPLK, 0x0FFF)
Warning 3130 - ^ Result is not used, possible operator timeout will be missed
Fixes: #2216
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
By using `net_util::open_tap` to create the TAP interface, the created
interface will be deleted when the returned variable (`net_util::Tap`)
is dropped.
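Generic illustration of the RAII behaviour relied upon here (not the
actual net_util implementation):

use std::os::unix::io::RawFd;

struct Tap {
    fd: RawFd,
}

impl Drop for Tap {
    fn drop(&mut self) {
        // Closing the last fd of a non-persistent TAP device makes the
        // kernel remove the interface, so dropping the wrapper cleans
        // up the device.
        unsafe { libc::close(self.fd) };
    }
}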
Signed-off-by: Bo Chen <chen.bo@intel.com>
This patch enables multi-queue support for creating virtio-net devices by
accepting multiple TAP fds, e.g. '--net fds=3:7'.
Fixes: #2164
Signed-off-by: Bo Chen <chen.bo@intel.com>
This helper can open a TAP device and configure the interface on it. If
the device needs to be opened multiple times for MQ then it also handles
that correctly.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Because of the behavior of the NVIDIA proprietary driver, we can't
expect NVIDIA cards with only MSI support to function correctly after
they've been passed through with Cloud-Hypervisor.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Building with 1.51 nightly produces the following warning:
warning: unnecessary trailing semicolon
--> vmm/src/device_manager.rs:396:6
|
396 | };
| ^ help: remove this semicolon
|
= note: `#[warn(redundant_semicolons)]` on by default
warning: 1 warning emitted
Signed-off-by: Wei Liu <liuwe@microsoft.com>