As an optimisation to avoid allocating byte vectors, add a trait method
that appends to an existing vector. Further, to support the transition,
add a default implementation of Aml::to_aml_bytes() that uses the newly
added Aml::append_aml_bytes().
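A minimal sketch of the resulting trait shape (the exact definitions in
the tree may differ in detail):

    pub trait Aml {
        // New method: append the AML encoding of self to an existing
        // vector, avoiding an intermediate allocation per object.
        fn append_aml_bytes(&self, bytes: &mut Vec<u8>);

        // Default implementation kept for the transition: build a fresh
        // vector by delegating to append_aml_bytes().
        fn to_aml_bytes(&self) -> Vec<u8> {
            let mut bytes = Vec::new();
            self.append_aml_bytes(&mut bytes);
            bytes
        }
    }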
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Now that vfio-ioctls correctly exposes the list of capabilities related
to each region, Cloud Hypervisor can decide to mmap a region based on
the presence or absence of MSIX_MAPPABLE. Instead of blindly mmap'ing
the region, we check if the MSI-X table or PBA is present on the BAR,
and if that's the case, we look for MSIX_MAPPABLE.
If MSIX_MAPPABLE is present, we can go ahead and mmap the entire region.
If MSIX_MAPPABLE is not present, we simply skip mmap'ing this region,
as it wouldn't be supported.
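The decision boils down to something like the following (a hypothetical
helper for illustration, not the actual vfio-ioctls API):

    // `bar_has_msix` is true when the MSI-X table or PBA lives on this
    // BAR; `msix_mappable` is true when the region advertises the
    // MSIX_MAPPABLE capability.
    fn can_mmap_region(bar_has_msix: bool, msix_mappable: bool) -> bool {
        if bar_has_msix {
            // Only mmap the whole region if the kernel says it is mappable.
            msix_mappable
        } else {
            // Regions without MSI-X structures are mapped as before.
            true
        }
    }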
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
As v19.0 was released almost a month ago, let's also update the
project's spec file accordingly.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
By merging receive buffers through the VIRTIO_NET_F_MRG_RXBUF feature,
as well as enabling the use of indirect descriptors through the
VIRTIO_RING_F_INDIRECT_DESC feature, we achieve better throughput for
the virtio-net device without hurting its latency.
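For reference, the two feature bits involved (values from the virtio
specification; how they are folded into the advertised feature set is
shown only illustratively):

    // Feature bit positions from the virtio specification.
    const VIRTIO_NET_F_MRG_RXBUF: u64 = 15;
    const VIRTIO_RING_F_INDIRECT_DESC: u64 = 28;

    // Illustrative: both bits OR'ed into the device's available features.
    const EXTRA_NET_FEATURES: u64 =
        (1u64 << VIRTIO_NET_F_MRG_RXBUF) | (1u64 << VIRTIO_RING_F_INDIRECT_DESC);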
Signed-off-by: Sebastien Boeuf <sebastien.boeuf@intel.com>
This allocator allocates 64-bit MMIO addresses for use with platform
devices, e.g. ACPI control devices, and ensures there is no overlap
with PCI address space ranges, which can cause issues with PCI device
remapping.
Use this allocator for the ACPI platform devices.
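Conceptually (a toy sketch only, not the project's actual allocator
API):

    // Toy bump allocator over a range reserved for platform devices,
    // kept disjoint from the PCI device areas so PCI BAR remapping can
    // never collide with it.
    struct PlatformMmioAllocator {
        next: u64,
        end: u64,
    }

    impl PlatformMmioAllocator {
        fn new(start: u64, size: u64) -> Self {
            Self { next: start, end: start + size }
        }

        // Hand out MMIO windows for ACPI platform devices.
        fn allocate(&mut self, size: u64) -> Option<u64> {
            let addr = self.next;
            if addr.checked_add(size)? > self.end {
                return None;
            }
            self.next = addr + size;
            Some(addr)
        }
    }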
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Rather than use the system MMIO allocator for RAM, use an allocator
that covers the full RAM range.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This is because the SGX region will be placed between the end of RAM
and the start of the device area.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
With the segment id now encoded in the bdf, it is no longer necessary
to have a separate field for it.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Refactor the existing virtio-fs test to support controlling the PCI
segment the device should be added to, and use this for a multiple
segment test.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Refactor the existing net hotplug test to support controlling the PCI
segment the device should be added to, and use this for a multiple
segment test.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Refactor the existing pmem hotplug test to support controlling the PCI
segment the device should be added to, and use this for a multiple
segment test.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
In particular, use the accessor for getting the device id from the
bdf. As a side effect, the VIOT table is now segment-aware.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This allows the code that handles bdf parsing to be made simpler and
removes the need for manual shifting.
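Something along these lines (a sketch only; field widths follow the
PCI spec: segment 16 bits, bus 8, device 5, function 3):

    // Structured BDF type replacing open-coded shifts and masks.
    #[derive(Copy, Clone, PartialEq, Eq)]
    pub struct PciBdf {
        segment: u16,
        bus: u8,
        device: u8,
        function: u8,
    }

    impl From<u32> for PciBdf {
        fn from(v: u32) -> Self {
            Self {
                segment: (v >> 16) as u16,
                bus: ((v >> 8) & 0xff) as u8,
                device: ((v >> 3) & 0x1f) as u8,
                function: (v & 0x7) as u8,
            }
        }
    }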
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Since each segment must have a non-overlapping memory range associated
with it, the device memory must be equally divided amongst all
segments. A new allocator is used for each segment to ensure that BARs
are allocated from the correct address ranges. This requires changes
to PciDevice::allocate/free_bars to take that allocator, and when
reallocating BARs the correct allocator must be identified from the
ranges.
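The split of the device area can be pictured as follows (illustrative
helper only):

    // Carve the 64-bit device area into equal, non-overlapping
    // per-segment ranges; one allocator is then built over each range.
    fn per_segment_ranges(start: u64, size: u64, segments: u64) -> Vec<(u64, u64)> {
        let per_segment = size / segments;
        (0..segments)
            .map(|i| (start + i * per_segment, per_segment))
            .collect()
    }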
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
For all the devices that support being hotplugged (disk, net, pmem, fs
and vsock), add a "pci_segment" option and propagate it through to
where the device is added onto the PCI buses.
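Illustratively, each of these device configs gains a field along these
lines (the excerpt is hypothetical):

    #[derive(Clone, Debug, Default)]
    pub struct PmemConfig {
        // ... existing fields elided ...
        pub pci_segment: u16, // 0 == default segment
    }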
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Move the decision on whether to use a 64-bit BAR up to the
DeviceManager so that it can use both the device type (e.g. block) and
the PCI segment ID to decide what size BAR should be used.
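A sketch of the kind of policy this enables (illustrative only, given
that supplemental segments advertise no 32-bit range):

    enum DeviceClass {
        Block,
        Net,
        Pmem,
        Other,
    }

    // Devices on supplemental segments must use 64-bit BARs; on the
    // default segment the device type decides.
    fn use_64bit_bar(class: DeviceClass, pci_segment_id: u16) -> bool {
        pci_segment_id != 0
            || matches!(class, DeviceClass::Block | DeviceClass::Net | DeviceClass::Pmem)
    }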
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Generate a set of 8 IRQs and distribute them round-robin over all the
slots of a bus. The same set of IRQs is then used for all PCI
segments.
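The distribution amounts to (illustrative):

    // 8 IRQs shared round-robin across the 32 device slots of a bus;
    // the same array is reused on every PCI segment.
    fn irq_for_slot(irqs: &[u32; 8], device_slot: u8) -> u32 {
        irqs[(device_slot as usize) % irqs.len()]
    }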
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
The platform config may specify the number of PCI segments to use; if
this is greater than 1, we add supplemental PCI segments as well as
the default segment.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
This currently contains only the number of PCI segments to create.
This is limited to 16 at the moment, which should allow 496
user-specified PCI devices (31 free device slots on each of the 16
segments).
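A sketch of the shape of this config and its bound (names and the
exact validation are illustrative):

    const MAX_NUM_PCI_SEGMENTS: u16 = 16;

    pub struct PlatformConfig {
        pub num_pci_segments: u16,
    }

    impl PlatformConfig {
        pub fn validate(&self) -> Result<(), String> {
            if self.num_pci_segments == 0 || self.num_pci_segments > MAX_NUM_PCI_SEGMENTS {
                return Err(format!(
                    "num_pci_segments must be between 1 and {}",
                    MAX_NUM_PCI_SEGMENTS
                ));
            }
            Ok(())
        }
    }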
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
For the bus scanning, the GED AML code now calls into a PSCN method
that scans all buses. This approach was chosen since it correctly
handles the case where one GED interrupt is serviced for two hotplugs
on distinct segments.
The PCIU and PCID field values are now determined by the PSEG field,
which is used to select which segment those values apply to.
Similarly, _EJ0 will notify based on the value of _SEG.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Replace the hardcoded zero PCI segment id when adding devices to the bus
and extend the DeviceTree to hold the PCI segment id.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Since each segment must have disjoint address spaces, only advertise
address space in the 32-bit range and the PIO address space on the
default (zero) segment.
Signed-off-by: Rob Bradford <robert.bradford@intel.com>
Split PciSegment::new_default_segment() into a separate
PciSegment::new() and the parts required only for the default segment
(the PIO PCI config device).
Signed-off-by: Rob Bradford <robert.bradford@intel.com>