In anticipation of supporting the resize of individual memory zones,
this commit introduces a new 'hotplug_size' option to the
'--memory-zone' parameter. It defines the amount of memory that can be
added through each specific memory zone.
Because memory zone resize is tied to virtio-mem, we make sure the user
selects the 'virtio-mem' hotplug method, otherwise an error is returned.
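For illustration only (zone name and sizes are made up; 'hotplug_method'
is the existing --memory option selecting virtio-mem):
./cloud-hypervisor ... --memory size=0,hotplug_method=virtio-mem --memory-zone id=mem0,size=1G,hotplug_size=1G  # illustrative values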
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Both MemoryManager and DeviceManager are updated through this commit to
handle the creation of multiple virtio-mem devices if needed. For now,
only the framework is in place, but the behavior remains the same, which
means only the memory zone created from '--memory' generates a
virtio-mem region that can be used for resize.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In order to anticipate the need for storing memory regions along with
virtio-mem information for each memory zone, we create a new structure
MemoryZone that will replace Vec<Arc<GuestRegionMmap>> in the hash map
MemoryZones.
This makes things more logical, as MemoryZones becomes a list of
MemoryZone entries sorted by their identifiers.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Inject CPUID leaves for advertising KVM HyperV support when the
"kvm_hyperv" toggle is enabled. Currently we only enable a selection of
features required to boot.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Currently we don't need to do anything to service these exits but when
the synthetic interrupt controller is active an exit will be triggered
to notify the VMM of details of the synthetic interrupt page.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Some of the io_uring setup happens upon activation of the virtio-blk
device, which is initially triggered through an MMIO VM exit. That's why
the vCPU threads must be allowed to issue the io_uring related syscalls.
This commit ensures the virtio-blk io_uring implementation can be used
along with the seccomp filters enabled.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Extract common code for adding devices to the PCI bus into its own
function from the VFIO and VIRTIO code paths.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This removes the dependency of the pci crate on the devices crate which
now only contains the device implementations themselves.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The goal of this commit is to rename the existing NUMA option 'id' to
'guest_numa_id'. This is done without any modification to the way this
option behaves.
The reason for the rename is the observation that all other parameters
with an option called 'id' expect a string to be provided. Because in
this particular case we expect a u32 representing a proximity domain
from the ACPI specification, it's better to give the option a more
explicit name.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The way to describe guest NUMA nodes has been updated through previous
commits, letting the user describe the full NUMA topology through the
--numa parameter (or NumaConfig).
That's why we can remove the deprecated and unused 'guest_numa_node'
option.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the previous changes introducing new options for both memory
zones and NUMA configuration, this patch changes the behavior of the
NUMA node definition. Instead of relying on the memory zones to define
the guest NUMA nodes, everything goes through the --numa parameter. This
allows for defining NUMA nodes without associating any particular memory
range to them. In case one wants to associate one or multiple memory
ranges with a node, the expectation is to describe a list of memory
zones through the --numa parameter, as illustrated below.
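As a sketch of the expected usage (the 'memory_zones' option name is
assumed from the related patches, and the zone name is made up):
./cloud-hypervisor ... --memory size=0 --memory-zone id=mem0,size=1G --numa guest_numa_id=0,memory_zones=mem0  # assumed syntax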
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This new option provides a new way to describe the memory associated
with a NUMA node. This is the first step before we can remove the
'guest_numa_node' option from the --memory-zone parameter.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Now that we have an identifier per memory zone, and in order to keep
track of the memory regions associated with the memory zones, we create
and store a map referencing the list of memory regions per memory zone
ID.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In anticipation of allowing memory zones to be removed, but also in
anticipation of the NUMA parameter refactoring, we introduce a mandatory
'id' option to the --memory-zone parameter.
This forces the user to provide a unique identifier for each memory zone
so that we can refer to them later.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By introducing the SLIT (System Locality Distance Information Table), we
provide the guest with the distance between each node. This lets the
user describe the NUMA topology in detail, so that slower
memory backing the VM can be exposed as being further away from other
nodes.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the NumaConfig, which now provides distance information, we can
internally update the list of NUMA nodes with the exact distances
separating them from other nodes.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By introducing the 'distances' option, we let the user describe a list
of destination NUMA nodes along with their distances relative to the
current node (defined through 'id').
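A hypothetical invocation (the 'destination@distance' entry format and
the ':' list separator are assumptions, following the colon-separated
list convention used by 'cpus'):
./cloud-hypervisor ... --numa id=0,cpus=0-1,distances=1@15:2@25  # assumed list syntax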
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the list of CPUs related to each NUMA node, Processor Local
x2APIC Affinity structures are created and included into the SRAT table.
This describes which CPUs are part of each node.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Relying on the list of CPUs defined through the NumaConfig, this patch
will update the internal list of CPUs attached to each NUMA node.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Through this new parameter, we give users the opportunity to specify a
set of CPUs attached to a NUMA node that has been previously created
from the --memory-zone parameter.
This parameter will be extended in the future to describe the distance
between multiple nodes.
For instance, if a user wants to attach CPUs 0, 1, 2 and 6 to a NUMA
node, here are two different ways of doing so:
Either
./cloud-hypervisor ... --numa id=0,cpus=0-2:6
Or
./cloud-hypervisor ... --numa id=0,cpus=0:1:2:6
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The SRAT table (System Resource Affinity Table) is needed to describe
NUMA nodes and how memory ranges and CPUs are attached to them.
For now it simply attaches a list of Memory Affinity structures based on
the list of NUMA nodes created from the VMM.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the 'guest_numa_node' option, we create and store a list of
NUMA nodes in the MemoryManager. The point being to associate a list of
memory regions to each node, so that we can later create the ACPI tables
with the proper memory range information.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
With the introduction of this new option, the user will be able to
describe whether a particular memory zone should belong to a specific
NUMA node from a guest perspective.
For instance, using '--memory-zone size=1G,guest_numa_node=2' would let
the user describe that a 1G memory zone in the guest should be exposed
as being associated with NUMA node 2.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Given that ACPI uses u32 as the type for the Proximity Domain, we can
use u32 instead of u64 as the type for 'host_numa_node' option.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
"struct MemoryConfig" has balloon_size but not in MemoryConfig
of cloud-hypervisor.yaml.
This commit adds it.
Signed-off-by: Hui Zhu <teawater@antfin.com>
Let's narrow down the limitation related to mbind() by allowing shared
mappings backed by a RAM-based file. This leaves the restriction in
place only for mappings backed by a regular file.
With this patch, a host NUMA node can be specified even when using
vhost-user devices.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Relying on the new option 'host_numa_node' from the 'memory-zone'
parameter, the user can now define which NUMA node from the host
should be used to back the current memory zone.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Since memory zones have been introduced, it is now possible for a user
to specify multiple backends for the guest RAM. By adding a new option
'host_numa_node' to the 'memory-zone' parameter, we allow the guest RAM
to be backed by memory that might come from a specific NUMA node on the
host.
The option expects a node identifier, specifying which NUMA node should
be used to allocate the memory associated with a specific memory zone.
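For example, a zone backed by host NUMA node 1 might be requested as
follows (the size is illustrative):
./cloud-hypervisor ... --memory size=0 --memory-zone size=1G,host_numa_node=1  # illustrative values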
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The 'mergeable' flag should only apply to the entire guest RAM, which is
why it is removed from the MemoryZoneConfig, as it is already defined as
a global parameter at the MemoryConfig level.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The 'cmdline' parameter should not be required as it is not needed when
the 'kernel' parameter is the rust-hypervisor-fw, which means the kernel
and the associated command line will be found from the EFI partition.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Factorize the codepath between simple memory and multiple memory zones.
This simplifies the way regions are memory mapped, as everything relies
on the same codepath. This is performed by creating a memory zone on the
fly for the specific use case where --memory is used with a non-zero
size. Internally, the code can rely on memory zones to create the memory
regions forming the guest memory.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
After the introduction of user defined memory zones, we can now remove
the deprecated 'file' option from --memory parameter. This makes this
parameter simpler, letting more advanced users define their own custom
memory zones through the dedicated parameter.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
User defined memory regions can now support being snapshotted and
restored, therefore this commit removes the restrictions that were
applied through an earlier commit.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By factorizing a lot of code into create_ram_region(), this commit
achieves the simplification of the restore codepath. Additionally, it
makes user defined memory zones compatible with snapshot/restore.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
First thing, this patch introduces a new function to identify if a file
descriptor is linked to any hard link on the system. This can let the
VMM know if the file can be accessed by the user, or if the file will
be destroyed as soon as the VMM releases the file descriptor.
Based on this information, combined with the knowledge of whether the
region is mapped as MAP_SHARED, the VMM can now decide to skip the copy
of the memory region content. If the user has access to the file from
the filesystem, and if the file has been mapped as MAP_SHARED, we can
consider the guest memory region content to be present in this file at
any point in time. That's why, in this specific case, there's no need to
copy the memory region content into a dedicated file.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Let's not assume that a backing file is going to be the result from
a snapshot for each memory region. These regions might be backed by
a file on the host filesystem (not a temporary file in host RAM), which
means they don't need to be copied and stored into dedicated files.
That's why this commit prepares for further changes by introducing an
optional PathBuf associated with the snapshot of each memory region.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
There will be some cases where the implementation of the snapshot()
function from the Snapshottable trait will require modifying some
internal data, therefore we make this possible by updating the trait
definition to snapshot(&mut self).
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In case the memory size is 0, this means the user defined memory
zones are used as a way to specify how to back the guest memory.
This is the first step in supporting complex use cases where the user
can define exactly which type of memory from the host should back the
memory from the guest.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In anticipation of the need to map part of a file with the function
create_ram_region(), it is extended to accept a file offset as an
argument.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In case the provided backing file is an actual file and not a directory,
we should not truncate it, as we expect the file to already be the right
size.
This change will be important once we try to map the same file through
multiple memory mappings. We can't let the file be truncated as the
second mapping wouldn't work properly.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Introduce a new CLI option --memory-zone, letting the user specify
custom memory zones. When this option is present, the --memory size
must be explicitly set to 0.
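For instance (size and path are only examples; a mandatory 'id' option
is added by a later commit in this series):
./cloud-hypervisor ... --memory size=0 --memory-zone size=1G,file=/dev/shm  # illustrative values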
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
It otherwise seems to be able to cause resource conflicts with
Windows ACPI_HAL. The OS might do a better job of assigning resources
to this device without them being requested explicitly. 0xcf8 and
0xcfc are all that is certainly needed for the PCI device enumeration.
Signed-off-by: Anatol Belski <anatol.belski@microsoft.com>
Some OSes might check for duplicates and bail out if they can't create a
distinct mapping. According to ACPI 5.0 section 6.1.12, while _UID is
optional, it becomes required when there are multiple devices with the
same _HID.
Signed-off-by: Anatol Belski <ab@php.net>
The brk syscall is not always called as the system might not need it.
But when it's needed from the API thread, this causes the thread to
terminate as it is not part of the authorized list of syscalls.
This should fix some sporadic failures on the CI with the musl build.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Add mprotect to the API thread rules. This prevents the VMM from being
killed when mprotect is used.
Signed-off-by: Jose Carlos Venegas Munoz <jose.carlos.venegas.munoz@intel.com>
This patch adds the seccomp_filter module to the virtio-devices crate
by taking reference code from the vmm crate. It also adds an
allowed list for the virtio-block worker thread.
Partially fixes: #925
Signed-off-by: Bo Chen <chen.bo@intel.com>
This patch propagates the SeccompAction value from main to the
Vm struct constructor (i.e. Vm::new_from_memory_manager), so that we can
use it to construct the DeviceManager and CpuManager struct for
controlling the behavior of the seccomp filters for vcpu/virtio-device
worker threads.
Signed-off-by: Bo Chen <chen.bo@intel.com>
This patch extends the CLI option '--seccomp' to accept the 'log'
parameter in addition to 'true/false'. It also refactors the
vmm::seccomp_filters module to support both "SeccompAction::Trap" and
"SeccompAction::Log".
Fixes: #1180
Signed-off-by: Bo Chen <chen.bo@intel.com>
This patch replaces the usage of 'SeccompLevel' with 'SeccompAction',
which is the first step to support the 'log' action over system
calls that are not on the allowed list of seccomp filters.
Signed-off-by: Bo Chen <chen.bo@intel.com>
By adding a new io_uring feature gate, we give users the possibility to
choose whether they want to enable the io_uring improvements or not.
Since the io_uring feature depends on the availability of recent host
kernels, it's better if we leave it off for now.
As soon as our CI has support for a 5.6 kernel with all the features
needed from io_uring, we'll enable this feature gate permanently.
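Assuming the Cargo feature carries the same name as the gate described
here, an opt-in build would look like:
cargo build --release --features io_uring  # feature name assumed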
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In case the host supports io_uring and the specific io_uring options
needed, the VMM will choose the asynchronous version of virtio-blk.
This will enable better I/O performance compared to the default
synchronous version.
It is also important to note that the VMM won't be able to use the
asynchronous version if the backing image is in QCOW format.
Cloud Hypervisor allows either the serial or the virtio console to
output to the TTY, but TTY input is pushed to both.
This is not correct. When the Linux guest is configured to spawn TTYs on
both ttyS0 and hvc0, the user effectively issues the same commands twice
in different TTYs.
Fix this by directing input only to the one device that is using the
host side TTY.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
This commit fixes an "Bad syscall" error when shutting down the VM
on AArch64 by adding the SYS_unlinkat syscall to the seccomp
whitelist.
Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>
Ensure that the width of the I/O port is correctly set to 32 bits in the
generic address used for the X_PM_TMR_BLK. Do this by type
parameterising the GenericAddress::io_port_address() function.
TEST=Boot with clocksource=acpi_pm and observe no errors in the dmesg.
Fixes: #1496
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This is a counter exposed via an I/O port that runs at 3.579545MHz. Here
we use a hardcoded I/O port and expose the details through the FADT
table.
TEST=Boot Linux kernel and see the following in dmesg:
[ 0.506198] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
We store the device passthrough handler, so we should use it through our
internal API and only carry the passed through device configuration.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Shrink the GICDevice trait to contain hypervisor agnostic APIs only,
which are used in generating the FDT.
Move all KVM specific logic into KvmGICDevice trait.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Make set_gsi_routing take a list of IrqRoutingEntry. The construction of
hypervisor specific structure is left to set_gsi_routing.
Now set_gsi_routes, which is part of the interrupt module, is only
responsible for constructing a list of routing entries.
This further splits hypervisor specific code from hypervisor agnostic
code.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
That function is going to return a handle for passthrough related
operations.
Move create_kvm_device code there.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
SGX expects the EPC region to be reported as "reserved" from the e820
table. This patch adds a new entry to the table if SGX is enabled.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The support for SGX is exposed to the guest through CPUID 0x12. KVM
passes static subleaves 0 and 1 from the host to the guest, without
needing any modification from the VMM itself.
But SGX also relies on dynamic subleaves 2 through N, used for
describing each EPC section. This is not handled by KVM, which means
the VMM is in charge of setting each subleaf starting from index 2
up to index N, depending on the number of EPC sections.
These subleaves 2 through N are not listed as part of the supported
CPUID entries from KVM. But it's important to set them as long as index
0 and 1 are present and indicate that SGX is supported.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Instead of passing the GuestMemoryMmap directly to the CpuManager upon
its creation, it's better to pass a reference to the MemoryManager. This
way we will be able to know if an SGX EPC region, along with one or
multiple sections, is present.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The SGX EPC region must be exposed through the ACPI tables so that the
guest can detect its presence. The guest only gets the full range from
ACPI, as the specific EPC sections are directly described through the
CPUID of each vCPU.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the presence of one or multiple SGX EPC sections from the VM
configuration, the MemoryManager will allocate a contiguous block of
guest address space to hold the entire EPC region. Within this EPC
region, each EPC section is memory mapped.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Introducing the new CLI option --sgx-epc along with the OpenAPI
structure SgxEpcConfig, so that a user can now enable one or multiple
SGX Enclave Page Cache sections within a contiguous region from the
guest address space.
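A hypothetical example (the size value is illustrative; actual
constraints depend on the host EPC configuration):
./cloud-hypervisor ... --sgx-epc size=64M  # illustrative value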
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In this commit we save the BDF of a PCI device and set it as the
"devid" field in the GSI routing entry, because this field is mandatory
for GICv3-ITS.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Move the definition of RawFile from virtio-devices crate into qcow
crate. All the code that consumes RawFile also already depends on the
qcow crate for image file type detection so this change breaks the
need for the qcow crate to depend on the very large virtio-devices
crate.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This commit stores the balloon size in the MemoryConfig.
After a reboot, virtio-balloon can use this size to inflate back to the
size it had before the reboot.
Signed-off-by: Hui Zhu <teawater@antfin.com>
Remove the vmm dependency from vhost_user_block and vhost_user_net,
where it only existed to use config::OptionParser. By moving the
OptionParser to its own crate at the top level, we can remove the very
heavy dependency that these vhost-user backends had.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Refactored the construction of KVM IOCTL rules for Seccomp.
Separating the rules by architecture can reduce the risk of bugs and
attacks.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
In order to move the hypervisor specific parts of the VM exit handling
path, we're defining a generic, hypervisor agnostic VM exit enum.
This is what the hypervisor's Vcpu run() call should return when the VM
exit can not be completely handled through the hypervisor specific bits.
For KVM based hypervisors, this means directly forwarding the IO related
exits back to the VMM itself. For other hypervisors that e.g. rely on the
VMM to decode and emulate instructions, this means the decoding itself
would happen in the hypervisor crate exclusively, and the rest of the VM
exit handling would be handled through the VMM device model implementation.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Fix test_vm unit test by using the new abstraction and dropping some
dead code.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
The code is purely for maintaining an internal counter. It is not really
tied to KVM.
No functional change.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
The _fd suffix is KVM specific. But since it now points to a hypervisor
agnostic hypervisor::Vm implementation, we should just rename it to vm.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The _fd suffix is KVM specific. But since it now points to a hypervisor
agnostic hypervisor::Vm implementation, we should just rename it to vm.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The fd naming is quite KVM specific. Since we're now using the
hypervisor crate abstractions, we can rename those into something more
readable and meaningful.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The fd naming is quite KVM specific. Since we're now using the
hypervisor crate abstractions, we can rename those into something more
readable and meaningful.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
The fd naming is quite KVM specific. Since we're now using the
hypervisor crate abstractions, we can rename those into something more
readable and meaningful. Like e.g. vcpu or vm.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Split the generic virtio code (queues and device type) from the
VirtioDevice trait, transport and device implementations.
This also simplifies the feature handling in vhost_user_backend as the
vm-virtio crate no longer has any features.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
On x86 architecture, we need to save a list of MSRs as part of the vCPU
state. By providing the full list of MSRs supported by KVM, this patch
fixes the remaining snapshot/restore issues, as the vCPU is restored
with all its previous states.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Some vCPU states such as MP_STATE can be modified while retrieving
other states. For this reason, it's important to follow a specific
order that will ensure a state won't be modified after it has been
saved. Comments about ordering requirements have been copied over
from Firecracker commit 57f4c7ca14a31c5536f188cacb669d2cad32b9ca.
This patch also sets the previously saved VCPU_EVENTS, as this was
missing from the restore codepath.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The logic can be shared among hypervisor implementations.
The 'static bound is used so that we don't need to deal with an extra
lifetime parameter everywhere. It should be okay because we know the
entry type E doesn't contain any reference.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
This trait contains a function which produces an interrupt routing
entry.
Implement that trait for KvmRoutingEntry and rewrite the update
function.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
The observation is that only the route entry is hypervisor dependent.
Keep a definition of KvmMsiInterruptManager to avoid too much code
churn.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
The observation is that only the route field is hypervisor specific.
Provide a new function in blanket implementation. Also redefine
KvmRoutingEntry with RoutingEntry to avoid code churn.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
The observation is that the GSI hashmap remains untouched before getting
passed into the MSI interrupt manager. We can create that hashmap
directly in the interrupt manager's new function.
This drops one import from the interrupt module.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
This is a hash table of string to hash tables of u64s. In JSON these
hash tables are object types.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Because we don't want the guest to miss any event triggered by the
emulation of devices, it is important to resume all vCPUs before we can
resume the DeviceManager with all its associated devices.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
We need consistency between pause/resume and snapshot/restore
operations. The symmetrical behavior of pausing/snapshotting
and restoring/resuming has been introduced recently, and we must
now ensure that no matter if we're using pause/resume or
snapshot/restore features, the resulting VM should be running in
the exact same way.
That's why the vCPU state is now stored upon VM pausing, the snapshot
operation being a simple serialization of the previously saved state.
In the same way, the vCPU state is now restored upon VM resuming, the
restore operation being a simple deserialization of the previously
restored state.
It's interesting to note that this patch ensures time consistency from a
guest perspective, no matter which clocksource is being used. From a
previous patch, the KVM clock was saved/restored upon VM pause/resume.
We now have the same behavior for the TSC, as the TSC from the vCPUs is
saved/restored upon VM pause/resume too.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Instead of calling the resume() function from the CpuManager, which
involves more than what is needed from the shutdown codepath, and
potentially ends up with a deadlock, we replace it with a subset.
The full resume operation is reserved for a VM that has been paused.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
We want each Vcpu to store the vCPU state upon VM pausing. This is the
reason why we need to explicitly implement the Pausable trait for the
Vcpu structure.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
When set_user_memory_region was moved to the hypervisor crate, it was
turned into a safe function that wrapped around an unsafe call. All but
one call site had their safety statements removed, but the safety
statement was not moved inside the wrapper function.
Add the safety statement back to help reasoning in the future. Also
remove the one last instance where the safety statement is not needed.
No functional change.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
That removes one more KVM-ism in VMM crate.
Note that there are more KVM specific code in those files to be split
out, but we're not at that stage yet.
No functional change.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
Collate the virtio device counters in the DeviceManager for each device
that exposes any, and expose them through the recently added HTTP API.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The counters are a hash of device name to hash of counter name to u64
value. Currently the API is only implemented with a stub that returns an
empty set of counters.
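Assuming the VMM was started with '--api-socket /tmp/ch.sock' and that
the endpoint is exposed as 'vm.counters', a query could look like:
curl --unix-socket /tmp/ch.sock http://localhost/api/v1/vm.counters  # socket path and endpoint name assumed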
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The structure is tightly coupled with KVM. It uses KVM specific
structures and calls. Add Kvm prefix to it.
Microsoft hypervisor will implement its own interrupt group(s) later.
No functional change intended.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
Now that the VMM uses KVM_KVMCLOCK_CTRL from the KVM API, it must be
added to the seccomp filters list.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Through the newly added API notify_guest_clock_paused(), this patch
improves the vCPU pause operation by letting the guest know that each
vCPU is being paused. This is important to avoid the soft lockup
detection in the guest that could otherwise trigger because the VM has
been paused for more than 20 seconds.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
It's important that on the restore path, the CpuManager's vCPU list gets
filled with each new vCPU that is being created. In order to cover both
the boot and restore paths, the list is filled from the common function
create_vcpu().
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Now that the VMM uses both KVM_GET_CLOCK and KVM_SET_CLOCK from the KVM
API, they must be added to the seccomp filters list.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In order to maintain correct time when doing pause/resume and
snapshot/restore operations, this patch stores the clock value
on pause and restores it on resume. Because snapshot/restore
expects a VM to be paused before the snapshot and paused after
the restore, this covers the migration use case too.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
When a request is made to increase the number of vCPUs in the VM,
attempt to reuse any previously removed (and hence inactive) vCPUs
before creating new ones.
This ensures that the APIC ID is not reused for a different KVM vCPU
(which is not allowed) and that the APIC IDs are also sequential.
The two key changes to support this are:
* Clearing the "kill" bit on the old vCPU state so that it does not
immediately exit upon thread recreation.
* Using the length of the vcpus vector (the number of allocated vcpus)
rather than the number of active vCPUs (.present_vcpus()) to determine
how many should be created.
This change also introduces some new info!() debugging on the vCPU
creation/removal path to aid further development in the future.
TEST=Expanded test_cpu_hotplug test.
Fixes: #1338
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
After the vCPU has been ejected and the thread shutdown it is useful to
clear the "kill" flag so that if the vCPU is reused it does not
immediately exit upon thread recreation.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
These messages are intended to be useful to support debugging related to
vCPU hotplug/unplug issues.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The same way the VM and the vCPUs are restored in a paused state, all
devices associated with the device manager must be restored in the same
paused state.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Because we need to pause the VM before it is snapshotted, it should be
restored in a paused state to keep the sequence symmetrical. That's the
reason why the state machine regarding valid VM state transitions needed
to be updated accordingly.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
To follow a symmetrical model, and avoid potential race conditions, it's
important to restore a previously snapshotted VM in a "paused" state.
The snapshot operation is only valid if the VM has previously been
paused.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
When the hypervisor crate was introduced, a few places that handled
errors were commented out in favor of unwrap, but that's bad practice.
Restore proper error handling in those places in this patch.
We cannot use from_raw_os_error anymore because it is wrapped deep
inside the hypervisor crate. Create new custom errors instead.
Fixes: e4dee57e81 ("arch, pci, vmm: Initial switch to the hypervisor crate")
Signed-off-by: Wei Liu <liuwe@microsoft.com>
This commit fixes some warnings introduced in the previous hypervisor
crate PR and removes some unused variables from the arch/aarch64
module.
Signed-off-by: Muminul Islam <muislam@microsoft.com>
Start moving the vmm, arch and pci crates to being hypervisor agnostic
by using the hypervisor trait and abstractions. This is not a complete
switch and there are still some remaining KVM dependencies.
Signed-off-by: Muminul Islam <muislam@microsoft.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
There are two CPUID leaves for handling CPU topology, 0xb and 0x1f. The
difference between the two is that the 0x1f leaf (Extended Topology
Leaf) supports exposing multiple die packages.
Fixes: #1284
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The extended topology leaf (0x1f) also needs to have the APIC ID (which
is the KVM cpu ID) set. This mirrors the APIC ID set on the 0xb topology
leaf.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Rather than saving the individual parts into the CpuManager, save the
full struct as it now also contains the topology data.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This allows the user to optionally specify the desired CPU topology. All
parts of the topology must be specified and the product of all parts
must match the maximum vCPUs.
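A sketch of a possible invocation, assuming the topology value is
ordered as threads_per_core:cores_per_die:dies_per_package:packages and
that its product must equal the maximum vCPU count:
./cloud-hypervisor ... --cpus boot=8,max=8,topology=1:4:1:2  # field ordering assumed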
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Its test case calls remove unconditionally. Instead of making the test
code call remove conditionally, removing the pci_support dependency
simplifies things -- that function is just a wrapper around HashMap's
remove function anyway.
Signed-off-by: Wei Liu <liuwe@microsoft.com>
Now that PCI device hotplug returns a response, the OpenAPI definition
must reflect it, describing what is expected to be received.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In order to provide a more comprehensive b/d/f to the user, the
serialization of PciDeviceInfo is implemented manually to control the
formatting.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This patch completes the series by connecting the dots between the HTTP
frontend and the device manager backend.
Any request to hotplug a VFIO, disk, fs, pmem, net, or vsock device will
now return a response including the device name and the place of the
device in the PCI topology.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Pass from the device manager to the calling code the information about
the PCI device that has just been hotplugged.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In order to provide the device name and PCI b/d/f associated with a
freshly hotplugged device, the hotplugging functions from the device
manager return a new structure called PciDeviceInfo.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
X86 and AArch64 work in different ways to shut down a VM:
X86 exits the VMM event loop through an ACPI device, while
AArch64 needs to exit from the CPU loop upon a SystemEvent.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Screened out the IO bus because it is not used on AArch64.
Enabled the Serial, RTC and Virtio devices with the MMIO transport
option.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Implemented GSI allocator and system allocator for AArch64.
Renamed some layout definitions to align more code between architectures.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
In order to work around a Linux bug that happens when we place devices
at the end of the physical address space on recent hardware (52 bit
limit), we reduce the MMIO address space by one 4k page. This way,
nothing gets allocated in the last 4k of the address space, which is
negligible given the amount of space available in the address space.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The action of "vm.delete" should not report errors on non-booted
VMs. This patch also revised the "docs/api.md" to reflect the right
'Prerequisites' of different API actions, e.g. on "vm.delete" and
"vm.boot".
Fixes: #1110
Signed-off-by: Bo Chen <chen.bo@intel.com>
This removes the need to use CAP_NET_ADMIN privileges; instead the host
MAC address is either provided by the user or alternatively retrieved
from the kernel.
TEST=Run cloud-hypervisor without CAP_NET_ADMIN permission and a
preconfigured tap device:
sudo ip tuntap add name tap0 mode tap
sudo ifconfig tap0 192.168.249.1 netmask 255.255.255.0 up
cargo clean
cargo build
target/debug/cloud-hypervisor --serial tty --console off --kernel ~/src/rust-hypervisor-firmware/target/target/release/hypervisor-fw --disk path=~/workloads/clear-33190-kvm.img --net tap=tap0
VM was also rebooted to check that works correctly.
Fixes: #1274
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This allows an existing TAP interface to be used without needing
CAP_NET_ADMIN permissions on the Cloud Hypervisor binary as the ioctl to
bring up the interface is avoided.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
There is a much stronger PCI dependency from vfio_pci.rs than a VFIO one
from pci/src/vfio.rs. It seems more natural to have the PCI specific
VFIO implementation in the PCI crate rather than the other way around.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Now the flow of both architectures is aligned to:
1. load kernel
2. create VCPU's
3. configure system
4. start VCPU's
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Between X86 and AArch64, there is some difference in booting a VM:
- X86_64 can set up the IOAPIC before creating any VCPU.
- AArch64 has to create the VCPUs before creating the GIC.
The old process is:
1. load_kernel()
   load kernel binary
   configure system
2. activate_vcpus()
   create & start VCPU's
So we need to separate "activate_vcpus" into "create_vcpus" and
"activate_vcpus" (to start vcpus only), and set up the GIC and create
the FDT between the two steps.
The new procedure is:
1. load_kernel()
   load kernel binary
   (X86_64) configure system
2. create VCPU's
3. (AArch64) setup GIC
4. (AArch64) configure system
5. start VCPU's
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Instead of responding with only headers and an error code, we now return
complete error responses to HTTP requests with errors (e.g. undefined
endpoints and InternalServerError).
Fixes: #472
Signed-off-by: Bo Chen <chen.bo@intel.com>
When doing self spawning, the child will attempt to set the umask()
again. Let it through the seccomp rules as long as it is the safe mask
again.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This commit only implements the InterruptController trait on AArch64.
The device specific part for the GIC is to be added.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
The IOAPIC, an X86 specific interrupt controller, is referenced by the
device manager and the CPU manager. To work with more architectures, a
common type for all architectures is needed.
This commit introduces trait InterruptController to provide architecture
agnostic functions. Device manager and CPU manager can use it without
caring what the underlying device is.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
This is a preparatory commit to build and test CH on AArch64. All
building issues were fixed, but no functionality was introduced.
For X86, the logic of the code was not changed at all.
For ARM, the architecture specific part is still empty, and we applied
some tricks to work around lint warnings. Such code will be replaced
later by other commits with real functionality.
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
This config option provided very little value and instead we now enable
this feature (which then lets the guest control the cache mode)
unconditionally.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
An explicit call to 'close()' is required on file descriptors allocated
from 'epoll::create()', which was missing for 'EpollContext' and
'VringWorker'. This patch ensures the file descriptors are closed by
reusing the Drop trait of the 'File' struct.
Signed-off-by: Bo Chen <chen.bo@intel.com>
Add a new "host_mac" parameter to "--net" and "--net-backend" and use
this to set the MAC address on the tap interface. If no address is given
one is randomly assigned and is stored in the config.
Support for vhost-user-net self spawning was also included.
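For example (both MAC addresses are placeholders):
./cloud-hypervisor ... --net tap=tap0,mac=12:34:56:78:90:ab,host_mac=32:10:54:76:98:ba  # placeholder addresses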
Fixes: #1177
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The clock_nanosleep system call needs to be whitelisted since the commit
12e00c0f45 introduced the use of a sleep()
function. Without this patch, we can see an error when the VM is paused
or killed.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Like the actions that don't take data, such as "pause" or "resume", use
a common handler implementation to remove duplicated code for handling
simple endpoints like the hotplug ones.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Many of the API requests take a similar form with a single data item
(i.e. the config for a device hotplug). Expand the VmAction enum to
handle those actions and add a single function to dispatch those API
events.
For now port the existing helper functions to use this new API. In the
future the HTTP layer can create the VmAction directly avoiding the
extra layer of indirection.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Rather than save a function pointer and use that, save the underlying
action instead. This is useful for two reasons:
1. We can ensure that we generate HttpErrors in the same way as the
other endpoints where API error variant should be determined by the
request being made not the underlying error.
2. It can be extended to handle other generic actions where the function
prototype differs slightly.
As a result of this refactoring it was found that the "vm.delete"
endpoint was not connected, so address that issue.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Extend the EndpointHandler trait to include automatic support for
handling PUT requests. This will allow the removal of lots of duplicated
code in the following commit from the API handling code.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The ch branch has been rebased to incorporate the latest upstream code
requiring a small change to the unit tests.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
By passing a reference of the DeviceTree to the AddressManager, we can
now update the DeviceTree whenever a PCI BAR is reprogrammed. This is
mandatory to maintain the correct resources information related to each
virtio-pci device, which will ensure correct information will be stored
upon VM snapshot.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
We want to be able to share the same DeviceTree across multiple threads,
particularly to handle the use case where PCI BAR reprogramming might
need to update the tree while a new device is being added to it from
another thread.
That's why this patch moves the DeviceTree instance into an Arc<Mutex<>>
so that we can later share a reference of the same mutable tree with the
AddressManager responsible for handling PCI BAR reprogramming.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
By using the vector of resources provided by the DeviceNode, the device
manager can store the information related to PCI BARs from a virtio-pci
device. Based on this, and upon VM restoration, the device manager can
restore the BARs in the expected location in the guest address space.
One thing to note is that we only need to provide the VirtioPciDevice
with the configuration BAR (BAR 0), since the SharedMemory BAR info
comes from the virtio device directly.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the new field "pci_bdf", a virtio-pci device can be restored at
the same place on the PCI bus it was located before the VM snapshot.
This ensures consistent placement on the PCI bus, based on the stored
information related to each device.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
We need a way to store the information about where a PCI device was
placed on the PCI bus before the VM was snapshotted. The way to do this
is by adding an extra field to the DeviceNode structure.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Switch to using the recently added OptionParser in the code that parses
the block backend.
Fixes: #1092
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Switch to using the recently added OptionParser in the code that parses
the network backend.
Whilst doing this, also update the net-backend syntax to use "sock"
rather than "socket".
Fixes: #1092
Partially fixes: #1091
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
To avoid a race condition where the signal might "miss" the KVM_RUN
ioctl(), instead repeatedly try sending a signal until the vCPU run is
interrupted (as indicated by a new per vCPU atomic being set).
It is important to also clear this atomic when coming out of a paused
state.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
To ensure that the DeviceManager threads (such as those used for virtio
devices) are cleaned up it is necessary to unpark them so that they get
cleanly terminated as part of the shutdown.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
After setting the kill signal flag for the vCPU thread, release the
pause flag and unpark the threads. This ensures that the vCPU thread
will wake up and check the kill signal flag if the VM is paused.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Rather than immediately entering the vCPU run() code check if the kill
signal is set. This allows paused VMs to be shutdown.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This module will be dedicated to DeviceNode and DeviceTree definitions
along with some dedicated unit tests.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This iterator will let the VMM enumerate the resources associated
with the DeviceManager, allowing for introspection.
Moreover, by implementing a double ended iterator, we can get the
hierarchy from the leaves to the root of the tree, which is very
helpful in the context of restoring the devices in the right order.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Now that the device tree fully replaced the need for a dedicated list of
migratable devices, this commit cleans up the codebase by removing it
from the DeviceManager.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This commit switches from migratable_devices to device_tree in order to
restore devices exclusively based on the device tree.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This commit adds an extra field to the DeviceNode so that the structure
can hold a Migratable device. The long term plan is to be able to remove
the dedicated table of migratable devices, but instead rely only on the
device tree.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In order to hide the complexity chosen for the device tree stored in
the DeviceManager, we introduce a new DeviceTree structure.
For now, this structure is a simple passthrough of a HashMap, but it can
be extended to handle some DeviceTree specific operations.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This device has a dedicated memory region in the guest address space,
which means that in case of snapshot/restore, it must be restored at the
exact same location it occupied during the snapshot.
It is through the resources that we can describe the location of this
extra memory region, allowing the device to be restored correctly.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This device has a dedicated memory region in the guest address space,
which means that in case of snapshot/restore, it must be restored at the
exact same location it occupied during the snapshot.
It is through the resources that we can describe the location of this
extra memory region, allowing the device to be restored correctly.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the device tree, retrieve the resources associated with a
virtio-mmio device to restore it at the right location in guest address
space. Also, the IRQ number is correctly restored.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Instead of splitting the MMIO allocation and the device creation into
separate functions for virtio-mmio devices, it is easier to move
everything into the same function, as we'll be able to gather resources
in the same place for the same device.
These resources will be stored in the device tree in a follow-up patch.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In case the VM is created from scratch, the devices should be created
after the DeviceManager has been created. But this should not affect the
restore codepath, as in this case the devices should be created as part
of the restore() function.
It's necessary to perform this differentiation as the restore must go
through the following steps:
- Create the DeviceManager
- Restore the DeviceManager with the right state
- Create the devices based on the restored DeviceManager's device tree
- Restore each device based on the restored DeviceManager's device tree
That's why this patch leverages the recent split of the DeviceManager's
creation to achieve what's needed.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This commit performs the split of the DeviceManager's creation into two
separate functions by moving anything related to device's creation after
the DeviceManager structure has been initialized.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Based on the device tree, we now ensure the restore can be done in the
right order, as it will respect the dependencies between nodes.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The DeviceManager itself must be snapshotted in order to store the
information regarding the devices associated with it, which effectively
means we need to store the device tree.
The mechanics to snapshot and restore the DeviceManagerState are added
to the existing snapshot() and restore() implementations.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The DeviceManager now creates a tree of devices in order to store the
resources associated with each device, but also to track dependencies
between devices.
This is a key part for proper introspection, but also to support
snapshot and restore correctly.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
It's not possible to call UnixListener::bind() on an existing file, so
unlink the created socket when shutting down the Vsock device.
This will allow the VM to be rebooted with a vsock device.
Fixes: #1083
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Generate an error during validation if an attempt is made to place a
device behind an IOMMU or to use a VFIO device when not using PCI.
Fixes: #751
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Let's put an underscore "_" in front of each device name to identify
when it has been set internally.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The virtio-console was not added to the list of Migratable devices,
which is fixed by this patch.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This identifier is chosen from the DeviceManager so that it will manage
all identifiers across the VM, which will ensure uniqueness.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This identifier is chosen from the DeviceManager so that it will manage
all identifiers across the VM, which will ensure uniqueness.
It is based off the name from the virtio device attached to this
transport layer.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This identifier is chosen from the DeviceManager so that it will manage
all identifiers across the VM, which will ensure uniqueness.
It is based off the name from the virtio device attached to this
transport layer.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Because we know we will need every virtio device to be identified with a
unique id, we can simplify the code by making the identifier mandatory.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This identifier is chosen from the DeviceManager so that it will manage
all identifiers across the VM, which will ensure uniqueness.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This identifier is chosen from the DeviceManager so that it will manage
all identifiers across the VM, which will ensure uniqueness.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This identifier is chosen from the DeviceManager so that it will manage
all identifiers across the VM, which will ensure uniqueness.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This identifier is chosen from the DeviceManager so that it will manage
all identifiers across the VM, which will ensure uniqueness.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This identifier is chosen from the DeviceManager so that it will manage
all identifiers across the VM, which will ensure uniqueness.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This identifier is chosen from the DeviceManager so that it will manage
all identifiers across the VM, which will ensure uniqueness.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This identifier is chosen from the DeviceManager so that it will manage
all identifiers across the VM, which will ensure uniqueness.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This identifier is chosen from the DeviceManager so that it will manage
all identifiers across the VM, which will ensure uniqueness.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This identifier is chosen from the DeviceManager so that it will manage
all identifiers across the VM, which will ensure uniqueness.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This identifier is chosen from the DeviceManager so that it will manage
all identifiers across the VM, which will ensure uniqueness.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This identifier is chosen from the DeviceManager so that it will manage
all identifiers across the VM, which will ensure uniqueness.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This identifier is chosen from the DeviceManager so that it will manage
all identifiers across the VM, which will ensure uniqueness.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Even in the context of "mmio" feature, we need the next device name to
be generated as we need to identify virtio-mmio devices to support
snapshot and restore functionalities.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This will be later used to identify each device used by the VM in order
to perform introspection and snapshot/restore properly.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This will be later used to identify each device used by the VM in order
to perform introspection and snapshot/restore properly.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This will be later used to identify each device used by the VM in order
to perform introspection and snapshot/restore properly.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This will be later used to identify each device used by the VM in order
to perform introspection and snapshot/restore properly.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This will be later used to identify each device used by the VM in order
to perform introspection and snapshot/restore properly.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
If the virtio-console device is supposed to be placed behind the virtual
IOMMU, this must be explicitly propagated through the code.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
If the virtio-rng device is supposed to be placed behind the virtual
IOMMU, this must be explicitly propagated through the code.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
If the virtio-vsock device is supposed to be placed behind the virtual
IOMMU, this must be explicitly propagated through the code.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Now that all virtio devices are assigned with identifiers, they could
all be removed from the VM. This is not something that we want to allow
because it does not make sense for some devices. That's why based on the
device type, we remove the device or we return an error to the user.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
By giving the devices ids this effectively enables the removal of the
device.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The parameters regarding the attachment to the virtio-iommu device were
not propagated correctly, and any modification to the configuration was
not stored back into it.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
It's possible to have multiple vsock devices so in preparation for
hotplug/unplug it is important to be able to have a unique identifier
for each device.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
If the current state is paused, that means most of the handles got
killed by pthread_kill. We need to unpark those threads to make the
shutdown work, otherwise the shutdown API hangs and the API is not
responsive afterwards. So before the shutdown call we need to resume the
VM to make it succeed.
Fixes: #817
Signed-off-by: Muminul Islam <muislam@microsoft.com>
If a size is specified, use it (in particular, this is required if the
destination is a directory); otherwise seek in the file to get the size
of the file.
Add a new check that the size is a multiple of 2MiB otherwise the kernel
will reject it.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Check that if any device using vhost-user (net & disk with
vhost_user=true) or virtio-fs is enabled, then shared memory is also
enabled.
Fixes: #848
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The new 'shared' and 'hugepages' controls aim to replace the 'file'
option in MemoryConfig. This patch also updates all related integration
tests to use the new controls (instead of providing explicit paths to
"/dev/shm" or "/dev/hugepages").
Fixes: #1011
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Signed-off-by: Bo Chen <chen.bo@intel.com>
Replace alignment calculation of start address with functionally
equivalent version that does not assume that the block size is a power
of two.
Signed-off-by: Martin Xu <martin.xu@intel.com>
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
When a virtio device is dynamically removed from the VM through the
hot-unplug mechanism, every mapping associated with it must be properly
removed.
Based on the previous patches letting a VirtioDevice expose the list of
userspace mappings associated with it, this patch can now remove all the
KVM userspace memory regions through the MemoryManager.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
The same way we added a helper for creating userspace memory mappings
from the MemoryManager, this patch adds a new helper to remove some
previously added mappings.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>