The current instructions are incorrect: there is now a new profile called
'dev-opt' to build the debug version of TD-SHIM.
Signed-off-by: Jinank Jain <jinankjain@microsoft.com>
Under the fuzzer this code appears dead:
error: field `0` is never read
--> /home/rob/src/cloud-hypervisor/arch/src/x86_64/mod.rs:128:32
|
128 | struct MemmapTableEntryWrapper(hvm_memmap_table_entry);
| ----------------------- ^^^^^^^^^^^^^^^^^^^^^^
| |
| field in this struct
|
= note: `MemmapTableEntryWrapper` has a derived impl for the trait `Clone`, but this is intentionally ignored during dead code analysis
= note: `-D dead-code` implied by `-D warnings`
= help: to override `-D warnings` add `#[allow(dead_code)]`
help: consider changing the field to be of unit type to suppress this warning while preserving the field numbering, or remove the field
|
128 | struct MemmapTableEntryWrapper(());
| ~~
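One way to keep the wrapper while silencing the lint under the fuzzer, as the
diagnostic's help suggests (a sketch, not necessarily the committed fix):
```
// Sketch: the field is read in non-fuzzing builds, so keep it and opt
// the struct out of the dead_code lint. hvm_memmap_table_entry is the
// existing boot-protocol binding.
#[allow(dead_code)]
#[derive(Clone)]
struct MemmapTableEntryWrapper(hvm_memmap_table_entry);
```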
Signed-off-by: Rob Bradford <rbradford@rivosinc.com>
Add a 'rate_limit_groups' field to VmConfig that defines a set of
named RateLimiterGroups.
When the 'rate_limit_group' field of DiskConfig is defined, all
virtio-blk queues will be rate-limited by a shared RateLimiterGroup.
The lifecycle of all RateLimiterGroups is tied to the Vm.
A RateLimiterGroup may exist even if no Disks are configured to use
the RateLimiterGroup. Disks may be hot-added or hot-removed from the
RateLimiterGroup.
When the 'rate_limiter' field of DiskConfig is defined, we construct
an anonymous RateLimiterGroup whose lifecycle is tied to the Disk.
This is primarily done for API backward compatibility. Importantly,
the behavior is not the same! This implementation rate-limits the
aggregate bandwidth / iops of an individual disk rather than the
bandwidth / iops of an individual queue of a disk.
When neither the 'rate_limit_group' nor the 'rate_limiter' field of
DiskConfig is defined, the Disk is not rate-limited.
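A rough sketch of how the three cases relate (field names from this message;
the exact types and nesting are assumptions, not the committed definitions):
```
// Hypothetical shapes, for illustration only.
struct VmConfig {
    // Named groups whose lifecycle is tied to the Vm; a group may have
    // no disks, and disks can be hot-added to or hot-removed from it.
    rate_limit_groups: Option<Vec<RateLimiterGroupConfig>>,
    // ...
}

struct DiskConfig {
    // Case 1: reference a named group; every virtio-blk queue of this
    // disk then shares that group's budget.
    rate_limit_group: Option<String>,
    // Case 2: legacy per-disk limiter; an anonymous group tied to the
    // Disk, limiting the disk's aggregate bandwidth / iops.
    rate_limiter: Option<RateLimiterConfig>,
    // Case 3: neither is set, so the disk is not rate-limited.
    // ...
}
```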
Signed-off-by: Thomas Barrett <tbarrett@crusoeenergy.com>
Add a 'rate_limiter/group' module that defines the RateLimiterGroup
and RateLimiterGroupHandle types.
The RateLimiterGroupHandle can be used in place of a RateLimiter to
limit the aggregate bandwidth and/or ops of multiple virtio-blk or
virtio-net queues.
Each RateLimiterGroup has an associated worker thread that broadcasts
an event to each RateLimiterGroupHandle when a RateLimiter is unblocked.
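A minimal usage sketch, assuming illustrative constructor and method names
rather than the module's confirmed API:
```
// Hypothetical flow: several virtio-blk queues share one group budget.
let group = Arc::new(RateLimiterGroup::new(
    "storage",      // group id
    bandwidth_cfg,  // shared bytes/s token bucket
    ops_cfg,        // shared ops/s token bucket
)?);
// One handle per queue; each handle stands in where a RateLimiter
// would be, drawing tokens from the shared buckets. When the shared
// limiter unblocks, the worker thread wakes every handle.
let q0_limiter = group.new_handle()?;
let q1_limiter = group.new_handle()?;
```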
Signed-off-by: Thomas Barrett <tbarrett@crusoeenergy.com>
CI reports errors:
error: writing `&Vec` instead of `&[_]` involves a new object where a slice will do
--> arch/src/x86_64/mod.rs:1351:19
|
1351 | epc_sections: &Vec<SgxEpcSection>,
| ^^^^^^^^^^^^^^^^^^^ help: change this to: `&[SgxEpcSection]`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#ptr_arg
= note: `-D clippy::ptr-arg` implied by `-D warnings`
= help: to override `-D warnings` add `#[allow(clippy::ptr_arg)]`
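The fix is the one clippy suggests: accept a slice, which callers holding a
Vec still satisfy via deref coercion. A sketch (the surrounding function is a
stand-in):
```
// Before: the parameter forced callers to hold a Vec.
//     epc_sections: &Vec<SgxEpcSection>
// After: a slice accepts &Vec<SgxEpcSection> and arrays alike.
fn describe_epc(epc_sections: &[SgxEpcSection]) {
    for _section in epc_sections {
        // ... build the SGX EPC description as before ...
    }
}
```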
Signed-off-by: Yi Wang <foxywang@tencent.com>
CI reports clippy errors:
error: argument to `Path::join` starts with a path separator
--> tests/integration.rs:4076:58
|
4076 | let serial_socket = guest.tmp_dir.as_path().join("/tmp/serial.socket");
| ^^^^^^^^^^^^^^^^^^^^
|
= note: joining a path starting with separator will replace the path instead
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#join_absolute_paths
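Joining an absolute path replaces the base path entirely, so the socket would
land in /tmp instead of the per-test directory; the fix is to join a relative
component. A small demonstration:
```
use std::path::Path;

let tmp_dir = Path::new("/some/test/dir"); // stands in for guest.tmp_dir
// Before: join("/tmp/serial.socket") returns "/tmp/serial.socket",
// silently discarding tmp_dir.
// After: a relative component stays under the test directory.
let serial_socket = tmp_dir.join("serial.socket");
assert_eq!(serial_socket, Path::new("/some/test/dir/serial.socket"));
```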
Signed-off-by: Yi Wang <foxywang@tencent.com>
CI reports clippy errors:
error: in a `match` scrutinee, avoid complex blocks or closures with blocks; instead, move the block or closure higher and bind it with a `let`
--> test_infra/src/lib.rs:93:51
|
93 | match (|| -> Result<(), WaitForBootError> {
| ___________________________________________________^
94 | | let listener =
95 | | TcpListener::bind(listen_addr.as_str()).map_err(WaitForBootError::Listen)?;
96 | | listener
... |
145 | | }
146 | | })() {
| |_________^
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#blocks_in_conditions
= note: `-D clippy::blocks-in-conditions` implied by `-D warnings`
= help: to override `-D warnings` add `#[allow(clippy::blocks_in_conditions)]`
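The shape clippy asks for binds the closure's result to a local and matches on
that. A trimmed sketch (WaitForBootError and the real body live in test_infra):
```
use std::net::TcpListener;

let boot_result = (|| -> Result<(), WaitForBootError> {
    let _listener =
        TcpListener::bind(listen_addr.as_str()).map_err(WaitForBootError::Listen)?;
    // ... wait for the guest to report boot completion ...
    Ok(())
})();
match boot_result {
    Ok(()) => { /* booted */ }
    Err(_e) => { /* handle the error as before */ }
}
```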
Signed-off-by: Yi Wang <foxywang@tencent.com>
This PR addresses a bug in which the cpu topology of a guest
with a non-power-of-two number of cores is incorrect. For example,
in some contexts, a virtual machine with 2 sockets and 12 cores
per socket will incorrectly believe that 16 cores are on socket 1
and 8 cores are on socket 2. In other cases, common topology enumeration
software such as hwloc will crash.
The root of the problem was the way that cloud-hypervisor generates
apic_id. On x86_64, the (x2) apic_id embeds information about cpu
topology. The cpuid instruction is primarily used to discover the
number of sockets, dies, cores, threads, etc. Using this information,
the (x2) apic_id is masked to determine which {core, die, socket} the
cpu is on. When the cpu topology is not a power of two
(e.g. a 12-core machine), this requires non-contiguous (x2) apic_ids.
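Concretely, 12 cores need a 4-bit core-id field (the smallest power-of-two
range holding 12), so socket boundaries fall every 16 apic_ids and ids 12-15
go unused. A sketch of the arithmetic, assuming one thread per core:
```
// Worked example: 2 sockets x 12 cores, 1 thread per core.
let cores_per_socket: u32 = 12;
// Width of the core-id field: ceil(log2(12)) = 4 bits.
let core_bits = 32 - (cores_per_socket - 1).leading_zeros();
for socket in 0..2u32 {
    for core in 0..cores_per_socket {
        // Non-contiguous x2apic ids: socket 0 -> 0..=11, socket 1 -> 16..=27.
        let x2apic_id = (socket << core_bits) | core;
        let _ = x2apic_id;
    }
}
```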
Signed-off-by: Thomas Barrett <tbarrett@crusoeenergy.com>
The following tests have been temporarily disabled:
1. Live upgrade/migration test with ovs-dpdk (#5532);
2. Disk hotplug tests on Windows guests (#6037);
This patch has been tested with PR #6048.
Signed-off-by: Ravi kumar Veeramally <ravikumar.veeramally@intel.com>
Signed-off-by: Michael Zhao <michael.zhao@arm.com>
Tested-by: Bo Chen <chen.bo@intel.com>
For SEV-SNP guests we need to provide the extended memory. It follows a
very simple layout, very similar to that of other x86 guests.
First segment: [HIGH_RAM_START - MEM_32BIT_RESERVED_START]
PCI hole: [MEM_32BIT_RESERVED_START - RAM_64BIT_START]
Second segment: [RAM_64BIT_START - RAM_END]
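A sketch of how the two segments might be assembled, using the constant names
above (illustrative, not the committed code):
```
// Illustrative only: the layout constants are those named above, and
// ram_end stands for the top of guest RAM.
let mem_ranges = vec![
    // First segment, below the PCI hole.
    (layout::HIGH_RAM_START, layout::MEM_32BIT_RESERVED_START),
    // The PCI hole [MEM_32BIT_RESERVED_START - RAM_64BIT_START] is skipped.
    // Second segment, above the PCI hole.
    (layout::RAM_64BIT_START, ram_end),
];
```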
Fixes #5993
Signed-off-by: Jinank Jain <jinankjain@microsoft.com>
The 'test_vfio_user' test is prone to failing when the system is under
heavy load, with errors:
```
Error while connecting to /var/tmp/spdk.sock
Is SPDK application running?
Error details: Invalid or non-existing address: '/var/tmp/spdk.sock'
```
This is because SPDK is not fully functional before we request to
create an NVMe device using the vfio_user protocol. This patch
stabilizes the test by allowing retries when executing host commands.
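A minimal sketch of the retry pattern (the helper name and its parameters are
hypothetical):
```
use std::{process::Command, thread, time::Duration};

// Hypothetical helper: retry a host command until it succeeds, giving
// SPDK time to finish starting before the vfio-user device is created.
fn exec_host_command_with_retries(cmd: &str, retries: u32, delay: Duration) -> bool {
    for _ in 0..retries {
        let ok = Command::new("bash")
            .args(["-c", cmd])
            .status()
            .map(|s| s.success())
            .unwrap_or(false);
        if ok {
            return true;
        }
        thread::sleep(delay);
    }
    false
}
```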
Signed-off-by: Bo Chen <chen.bo@intel.com>
This patch defines a new function, 'generate_ram_ranges', to generate
usable physical memory ranges for the guest based on the existing guest
memory managed by the VMM. This function is also made public so that it
can be reused, say by the IGVM loader in the future [1].
No functional change.
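A rough sketch of the function's shape, with plain (start, size) tuples
standing in for the VMM's guest memory regions (the real signature may
differ):
```
// Hypothetical shape, for illustration: walk the guest's memory
// regions and emit (start, end) pairs of usable RAM so other callers,
// e.g. an IGVM loader, can reuse the same ranges.
pub fn generate_ram_ranges(regions: &[(u64, u64)]) -> Vec<(u64, u64)> {
    regions
        .iter()
        .map(|&(start, size)| (start, start + size))
        .collect()
}
```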
See: #6020
Signed-off-by: Bo Chen <chen.bo@intel.com>