We fixed the L2 and L3 cache sharing issues and confirmed that the
L2 cache is independent (not shared), while the L3 cache is shared per socket.
See: #5505
Signed-off-by: zhongbingnan <zhongbingnan@bytedance.com>
Commit b92fe648e9 (vmm: cpu: Disable KVM_FEATURE_ASYNC_PF_INT in
CPUID) disabled the APF (Asynchronous Page Fault) mechanism to address
a problem that made the vCPU thread spin at 100%. The actual issue is in
KVM and was fixed by commit 2f15d027c05f (KVM: x86: Properly
handle APF vs disabled LAPIC situation), merged in 2021, so it is okay to
re-enable APF now.
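As a rough illustration (not the exact patch), re-enabling APF amounts to no longer clearing the KVM_FEATURE_ASYNC_PF_INT bit when building the guest CPUID. The leaf and bit numbers below follow the upstream kvm_para.h definitions; the helper itself is a hedged sketch:
```rust
// Hedged sketch: where the guest CPUID entries are patched, simply stop
// clearing the async-PF-interrupt feature bit. Leaf and bit values follow
// arch/x86/include/uapi/asm/kvm_para.h.
const KVM_CPUID_FEATURES_LEAF: u32 = 0x4000_0001;
const KVM_FEATURE_ASYNC_PF_INT: u32 = 14;

fn filter_kvm_pv_features(leaf: u32, eax: &mut u32, disable_apf_int: bool) {
    if leaf == KVM_CPUID_FEATURES_LEAF && disable_apf_int {
        // Previously this bit was unconditionally cleared; with the KVM fix
        // (2f15d027c05f) merged, callers can now pass `disable_apf_int = false`.
        *eax &= !(1 << KVM_FEATURE_ASYNC_PF_INT);
    }
}
```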
Signed-off-by: Yi Wang <foxywang@tencent.com>
Using the data from sysfs, forward the host cache layout to the
guest using the FDT.
TEST=The host cache layout (from sysfs) can be seen inside the guest
using lscpu.
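For illustration, a minimal sketch of the sysfs parsing involved; the paths follow the standard Linux cacheinfo layout, while the struct and helper names are hypothetical rather than the patch's actual API:
```rust
use std::fs;

// Hedged sketch: read the host cache description for one CPU from the
// standard Linux cacheinfo sysfs layout. The real patch forwards equivalent
// data into the guest FDT.
#[derive(Debug)]
struct CacheEntry {
    level: u8,
    kind: String,            // "Data", "Instruction" or "Unified"
    size: String,            // e.g. "32K"
    shared_cpu_list: String, // which host CPUs share this cache
}

fn host_cache_info(cpu: usize) -> std::io::Result<Vec<CacheEntry>> {
    let base = format!("/sys/devices/system/cpu/cpu{cpu}/cache");
    let mut entries = Vec::new();
    for index in fs::read_dir(&base)? {
        let path = index?.path();
        // Only the index0, index1, ... directories describe caches.
        if !path
            .file_name()
            .map_or(false, |n| n.to_string_lossy().starts_with("index"))
        {
            continue;
        }
        let read = |f: &str| -> std::io::Result<String> {
            Ok(fs::read_to_string(path.join(f))?.trim().to_string())
        };
        entries.push(CacheEntry {
            level: read("level")?.parse().unwrap_or(0),
            kind: read("type")?,
            size: read("size")?,
            shared_cpu_list: read("shared_cpu_list")?,
        });
    }
    Ok(entries)
}
```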
Signed-off-by: zhongbingnan <zhongbingnan@bytedance.com>
This patch clarifies the assumptions we have regarding the guest address
space layout when creating memory mappings in the E820 table on x86_64 and
the FDT on aarch64. It also explicitly checks these assumptions and reports
errors if they do not hold.
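A hedged sketch of the kind of explicit check this refers to; the constant and error strings below are placeholders, not the ones introduced by the patch:
```rust
// Hedged sketch: reject a RAM layout that violates the assumed address space
// split instead of silently producing a bogus E820/FDT entry. Names and
// addresses below are illustrative placeholders.
const HYPOTHETICAL_32BIT_HOLE_START: u64 = 0xc000_0000; // assumed MMIO hole start

fn check_ram_region(start: u64, size: u64) -> Result<(), String> {
    let end = start
        .checked_add(size)
        .ok_or_else(|| "RAM region overflows the address space".to_string())?;
    if start < HYPOTHETICAL_32BIT_HOLE_START && end > HYPOTHETICAL_32BIT_HOLE_START {
        return Err(format!(
            "RAM region {start:#x}-{end:#x} crosses the 32-bit device hole"
        ));
    }
    Ok(())
}
```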
Signed-off-by: Bo Chen <chen.bo@intel.com>
The previous `arch_memory_regions` function provided memory regions of
the specified memory size and filled each region completely before using
the next one, but sometimes there is no need to fill up the previous
region, e.g. when it should be aligned with the hugepage size.
This commit makes the `arch_memory_regions` function take no parameters
and return the maximum available regions, so the memory manager can use
them on demand.
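A hedged sketch of the shape of this change, with placeholder addresses: the arch code only describes the maximum usable RAM windows, and the memory manager decides how much of each window to back:
```rust
// Hedged sketch with placeholder addresses: the arch code now just reports
// every usable RAM window, and the memory manager fills them on demand.
type GuestAddress = u64;

fn arch_memory_regions() -> Vec<(GuestAddress, usize)> {
    vec![
        // RAM below the 32-bit MMIO hole (placeholder bound).
        (0, 0xc000_0000),
        // RAM above 4 GiB, with an arbitrary illustrative upper size.
        (0x1_0000_0000, usize::MAX / 2),
    ]
}

fn place_ram(mut remaining: usize) -> Vec<(GuestAddress, usize)> {
    let mut placed = Vec::new();
    for (start, max_len) in arch_memory_regions() {
        if remaining == 0 {
            break;
        }
        // Use only as much of this window as is still needed; the window does
        // not have to be filled up (e.g. to preserve hugepage alignment).
        let len = remaining.min(max_len);
        placed.push((start, len));
        remaining -= len;
    }
    placed
}
```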
Fixes: #5463
Signed-off-by: Yu Li <liyu.yukiteru@bytedance.com>
The original code did not consider that the previous memory region
might not be full and always set it to the maximum size.
This commit fixes the problem by creating memory mappings based on
the actual memory details in both the E820 table on x86_64 and the FDT
on aarch64.
Fixes: #5463
Signed-off-by: Yu Li <liyu.yukiteru@bytedance.com>
Program the APIC ID (CPUID leaf 0x1 EBX) with the CPU id. This resolves
an issue where the EDKII firmware expects the APIC ID to vary per CPU.
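As a hedged sketch of the mechanism (CPUID leaf 0x1 reports the initial APIC ID in bits 31:24 of EBX; the helper name is hypothetical):
```rust
// Hedged sketch: when building the per-vCPU CPUID table, patch leaf 0x1 so
// the initial APIC ID field (EBX bits 31:24) carries the vCPU id.
fn patch_initial_apic_id(leaf: u32, ebx: &mut u32, cpu_id: u8) {
    if leaf == 0x1 {
        *ebx = (*ebx & 0x00ff_ffff) | ((cpu_id as u32) << 24);
    }
}
```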
Fixes: #5475
Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>
Bump to the latest rust-vmm crates, including vm-memory, vfio,
vfio-bindings, vfio-user, virtio-bindings, virtio-queue, linux-loader,
vhost, and vhost-user-backend.
Signed-off-by: Bo Chen <chen.bo@intel.com>
This will allow the SIGWINCH listener to run on kernels older than
5.5, although on those kernels it will have to make 64 syscalls to
reset all the signal handlers.
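For illustration, the pre-5.5 fallback (no CLONE_CLEAR_SIGHAND) boils down to resetting every signal disposition individually, roughly as in this hedged sketch using the libc crate:
```rust
// Hedged sketch using the libc crate: without CLONE_CLEAR_SIGHAND (added in
// Linux 5.5), the child resets every signal disposition back to SIG_DFL, one
// sigaction(2) call per signal.
unsafe fn reset_all_signal_handlers() {
    for signum in 1..=libc::SIGRTMAX() {
        // SIGKILL and SIGSTOP cannot have their dispositions changed.
        if signum == libc::SIGKILL || signum == libc::SIGSTOP {
            continue;
        }
        let mut act: libc::sigaction = std::mem::zeroed();
        act.sa_sigaction = libc::SIG_DFL;
        // Errors for unused signal numbers in the range are ignored.
        let _ = libc::sigaction(signum, &act, std::ptr::null_mut());
    }
}
```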
Signed-off-by: Alyssa Ross <hi@alyssa.is>
This doesn't need to be rendered in the HTML API documentation, and
wouldn't be formatted correctly if it was.
Signed-off-by: Alyssa Ross <hi@alyssa.is>
This hypervisor leaf includes details of the TSC frequency if it is
available from KVM. This can be used to efficiently calculate elapsed
time when there is an invariant TSC.
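A hedged sketch of the idea using the kvm-ioctls get_tsc_khz() call; treating leaf 0x4000_0010 with EAX holding the TSC frequency in kHz as the layout is an assumption here, not a copy of the patch:
```rust
use kvm_ioctls::VcpuFd;

// Hedged sketch: if KVM can report the TSC frequency, expose it through a
// hypervisor CPUID leaf. The leaf number and field layout are assumed.
const HYPERVISOR_TSC_INFO_LEAF: u32 = 0x4000_0010; // where the tuple would be installed

fn tsc_frequency_leaf(vcpu: &VcpuFd) -> Option<(u32, u32, u32, u32)> {
    match vcpu.get_tsc_khz() {
        // (eax, ebx, ecx, edx) for the assumed leaf layout: EAX = TSC kHz.
        Ok(khz) => Some((khz, 0, 0, 0)),
        // e.g. a very old KVM without KVM_CAP_GET_TSC_KHZ.
        Err(_) => None,
    }
}
```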
TEST=Run `cpuid` in the guest and observe the frequency populated.
Fixes: #5178
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Before Linux v6.0, AArch64 didn't support "socket" in the "cpu-map"
(CPU topology) node of the FDT.
We found that clusters could be used in the same way as sockets, and that
is how we implemented the socket settings in Cloud Hypervisor. But in
fact it was a bug.
Linux commit 26a2b7 fixed the mistake, so cluster nodes can no
longer act as sockets. And in a following commit dea8c0, sockets were
supported.
This patch fixes the way sockets are configured. In each socket, a default
cluster is added to contain all the cores, because the cluster layer is
mandatory in the CPU topology on AArch64.
This fix will break the socket settings on guests whose kernel version is
lower than v6.0. In that case, if the number of sockets is set to more
than 1, the kernel will treat that as an FDT mistake and all the CPUs
will be put in a single cluster of a single socket.
The patch only impacts the case of using FDT, not ACPI.
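For illustration, the resulting cpu-map shape (one default cluster per socket holding that socket's cores) can be sketched with the vm-fdt crate; node and property names follow the Linux cpu-map binding, but treat this as a hedged sketch rather than the exact in-tree code:
```rust
use vm_fdt::{FdtWriter, FdtWriterResult};

// Hedged sketch: build a "cpu-map" with one default cluster per socket, since
// the cluster level is mandatory on AArch64. `cpu_phandles` is assumed to hold
// the phandle of each /cpus/cpu@N node, indexed by vCPU id.
fn write_cpu_map(
    fdt: &mut FdtWriter,
    sockets: usize,
    cores_per_socket: usize,
    cpu_phandles: &[u32],
) -> FdtWriterResult<()> {
    let cpu_map = fdt.begin_node("cpu-map")?;
    for s in 0..sockets {
        let socket = fdt.begin_node(&format!("socket{s}"))?;
        // Default cluster wrapping all the cores of this socket.
        let cluster = fdt.begin_node("cluster0")?;
        for c in 0..cores_per_socket {
            let core = fdt.begin_node(&format!("core{c}"))?;
            fdt.property_u32("cpu", cpu_phandles[s * cores_per_socket + c])?;
            fdt.end_node(core)?;
        }
        fdt.end_node(cluster)?;
        fdt.end_node(socket)?;
    }
    fdt.end_node(cpu_map)?;
    Ok(())
}
```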
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
e.g. on QEMU on KVM:
cloud-hypervisor: 17.079406ms: <vmm> INFO:arch/src/x86_64/mod.rs:565 -- Running under nested virtualisation. Hypervisor string: KVMKVMKVM
Or under Azure:
cloud-hypervisor: 3.881263ms: <vmm> INFO:arch/src/x86_64/mod.rs:565 -- Running under nested virtualisation. Hypervisor string: Microsoft Hv
Fixes: #5067
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Without breaking the former way of declaring them. This is simply based
on the presence of the GUID for the TDX Metadata offset. If it is not
present, we consider the firmware to be quite old and therefore fall back
on the previous way of exposing memory resources.
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
In particular, update to the latest linux-loader release and point to the
latest vfio repository for both crates hosted there.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The preferred way of retrieving the offset at which to find the TDVF
descriptor structure is to go through a table of GUIDs that can be
found at a specific offset in the firmware file. If the expected GUIDs
can't be found, we can fall back on the former way, which is to read
the value directly at a specific offset in the file.
This patch implements the new mechanism without breaking compatibility
with older firmwares, as it keeps supporting the previous mechanism.
As a reference, here is the documentation from the EDK2 code, and
particularly from the OvmfPkg/ResetVector/Ia16/ResetVectorVtf0.asm file:
```
GUIDed structure. To traverse this you should first verify the
presence of the table footer guid
(96b582de-1fb2-45f7-baea-a366c55a082d) at 0xffffffd0. If that
is found, the two bytes at 0xffffffce are the entire table length.
The table is composed of structures with the form:
Data (arbitrary bytes identified by guid)
length from start of data to end of guid (2 bytes)
guid (16 bytes)
so work back from the footer using the length to traverse until you
either find the guid you're looking for or run off the beginning of
the table.
```
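A hedged sketch of that traversal over an in-memory copy of the firmware image; the function name is hypothetical and the metadata GUID to search for is left to the caller:
```rust
// Hedged sketch of the traversal described above, over an in-memory copy of
// the firmware image (which is assumed to end at the 4 GiB boundary, so the
// footer GUID at 0xffffffd0 is 0x30 bytes from the end of the file).
const TABLE_FOOTER_GUID: [u8; 16] = [
    // 96b582de-1fb2-45f7-baea-a366c55a082d in its on-disk (mixed-endian) form.
    0xde, 0x82, 0xb5, 0x96, 0xb2, 0x1f, 0xf7, 0x45, 0xba, 0xea, 0xa3, 0x66, 0xc5, 0x5a, 0x08, 0x2d,
];

/// Returns the data bytes of the entry identified by `wanted_guid`, if any.
fn find_table_entry<'a>(firmware: &'a [u8], wanted_guid: &[u8; 16]) -> Option<&'a [u8]> {
    let image_end = firmware.len();
    if image_end < 0x32 {
        return None;
    }
    // Verify the footer GUID (0x30 bytes from the end, 16 bytes long).
    if &firmware[image_end - 0x30..image_end - 0x20] != &TABLE_FOOTER_GUID[..] {
        return None;
    }
    // The two bytes just below it hold the entire table length.
    let table_len =
        u16::from_le_bytes([firmware[image_end - 0x32], firmware[image_end - 0x31]]) as usize;
    let table_end = image_end - 0x20; // just past the footer GUID
    let table_start = table_end.checked_sub(table_len)?;
    // Each entry is <data> <2-byte length> <16-byte guid>, with the length
    // covering the whole entry. Work back from the entry below the footer.
    let mut entry_end = table_end - 18;
    while entry_end >= table_start + 18 {
        let guid = &firmware[entry_end - 16..entry_end];
        let len =
            u16::from_le_bytes([firmware[entry_end - 18], firmware[entry_end - 17]]) as usize;
        if len < 18 || len > entry_end - table_start {
            return None; // malformed entry
        }
        if guid == &wanted_guid[..] {
            return Some(&firmware[entry_end - len..entry_end - 18]);
        }
        entry_end -= len;
    }
    None
}
```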
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
When configuring the vCPUs, it is only necessary to provide the guest
memory when booting fresh (for populating the guest memory). As such,
refactor the vCPU configuration to remove the use of the
GuestMemoryMmap stored on the CpuManager.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
If the KVM version is too old (pre Linux 5.7), then fetch the CPUID
information from the host and use that in the guest. We prefer the KVM
version over the host version, as the host version uses the CPUID of the
CPU that runs this code, which might differ from other CPUs due to a
hybrid topology.
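For illustration only, the host-side fallback is essentially the raw CPUID instruction executed by the VMM itself (hypothetical helper; the real code feeds this into the guest CPUID table):
```rust
// Hedged sketch: query a CPUID leaf directly on the host CPU running this
// code. This is the fallback when KVM (pre Linux 5.7) cannot provide the
// leaf itself; on hybrid parts the answer may differ between host CPUs.
#[cfg(target_arch = "x86_64")]
fn host_cpuid(leaf: u32, subleaf: u32) -> (u32, u32, u32, u32) {
    // CPUID is always available in 64-bit mode.
    let r = unsafe { std::arch::x86_64::__cpuid_count(leaf, subleaf) };
    (r.eax, r.ebx, r.ecx, r.edx)
}
```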
Fixes: #4918
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Add the TPM CRB interface-specific address ranges to the layouts.
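A hedged sketch of what such layout constants look like; 0xFED4_0000 is the conventional TPM MMIO base, but the exact names and window size used by the patch may differ:
```rust
// Hedged sketch, placeholder names: the TPM CRB registers live in the
// conventional fixed MMIO window starting at 0xFED4_0000.
pub const TPM_START: u64 = 0xfed4_0000;
pub const TPM_SIZE: u64 = 0x1000; // assumed size of the CRB register window
```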
Signed-off-by: Praveen K Paladugu <prapal@linux.microsoft.com>
Co-authored-by: Sean Yoo <t-seanyoo@microsoft.com>