All callers of this function called virStorageFileParseChainIndex
before. Internalize the logic of that function to prevent multiple calls
and passing around unnecessary temporary variables.
This is achieved by calling virStorageFileParseBackingStoreStr and using
it to fill the values internally.
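The call sites collapse roughly like this (a hedged sketch; variable
names are illustrative, not the exact code):

  char *target = NULL;
  unsigned int chainIndex = 0;

  /* parse a "vdb[2]"-style string into target and chain index here,
   * instead of making every caller do it first */
  if (virStorageFileParseBackingStoreStr(str, &target, &chainIndex) < 0)
      return -1;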
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
A terminated chain has a virStorageSource with type ==
VIR_STORAGE_TYPE_NONE at the end. Since virStorageSourceHasBacking
explicitly returns false in that case, we'd probe the chain
needlessly. Just check whether src->backingStore is non-NULL.
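Illustratively (a hedged sketch; the probing helper is hypothetical):

  /* before: a chain explicitly terminated by a VIR_STORAGE_TYPE_NONE
   * source fails virStorageSourceHasBacking() and would be re-probed */
  if (!virStorageSourceHasBacking(src))
      probeBackingChain(src);

  /* after: probe only when no terminator was recorded at all */
  if (!src->backingStore)
      probeBackingChain(src);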
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
'virDomainDiskGetSource' effectively returns src->path. Checking
whether a disk is empty is done via 'virStorageSourceIsEmpty'.
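Call sites thus change along these lines (hedged sketch):

  /* before: a NULL path does not necessarily mean the disk is empty */
  if (!virDomainDiskGetSource(disk))
      return 0;

  /* after: */
  if (virStorageSourceIsEmpty(disk->src))
      return 0;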
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Use g_autoptr for the hash table and remove the 'ret' variable.
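A minimal sketch of the pattern, using plain GLib calls for
illustration (fillTable() is a hypothetical helper):

  g_autoptr(GHashTable) table =
      g_hash_table_new_full(g_str_hash, g_str_equal, g_free, g_free);

  if (fillTable(table) < 0)
      return -1;   /* freed automatically; no 'ret' or cleanup label */

  return 0;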
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The parsers for the backing store strings are relatively
self-contained and form a rather massive piece of code. Move them to a
new module called
storage_source_backingstore.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
There are no other files using it. Move it and make the functions
static.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Rename the function to virStorageSourceFetchRelativeBackingPath and
return relative paths only. The function is only used to restore the
relative relationship between images, so there's no need for it to be
universal.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Move it together with virStorageSourceGetRelativeBackingPath, which is
the main reason it exists. An upcoming patch will modify the comment
and arguments referring to virStorageSourceGetRelativeBackingPath, so
it's better if they are together.
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
As explained in QEMU commit 4c257911dcc7c4189768e9651755c849ce9db4e8,
the intel-pt feature should never have been included in the CPU
models: it was not supported by KVM back then, and even once it
started to be supported, users had to enable it by passing the
pt_mode=1 parameter to the kvm_intel module. The Icelake-* CPU models
with intel-pt included were added in QEMU 3.1.0 and removed right in
the following 4.0.0 release (and even in the 3.1.1 maintenance
release).
In libvirt 6.10.0 I introduced the 'removed' attribute for features
included in our CPU model definitions, which we can use to drop
intel-pt from the Icelake-* CPU models. Back then I explained we could
safely do so only for features which could never be enabled, which is
not the case for intel-pt.
Theoretically, it could be possible to create an environment in which
QEMU would enable intel-pt without asking for it explicitly: it would
need to use a new enough kernel (not available at the time of QEMU
3.1.0) and the pt_mode KVM parameter in combination with QEMU 3.1.0
running a domain with the q35 machine type, and all that on a CPU
which didn't really exist at that time.
Migrating such a domain to a host with a newer SW stack, including
libvirt with this patch applied, would result in an incompatible guest
ABI (the virtual CPU would lose intel-pt). However, QEMU changed its
CPU models
unconditionally and thus migration would not work even without this
patch. That said, it is safe to follow QEMU and remove the feature from
Icelake-* CPU models in our cpu_map.
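In cpu_map this amounts to marking the feature in the Icelake-* model
definitions, roughly:

  <feature name='intel-pt' removed='yes'/>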
https://bugzilla.redhat.com/show_bug.cgi?id=1853972
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Tim Wiederhake <twiederh@redhat.com>
dtrace invokes the C compiler, so when cross-building we need
to make sure that $CC is set in the environment and that it
points to the cross-compiler rather than the native one.
Until https://github.com/mesonbuild/meson/issues/266
is addressed, the workaround is to call dtrace via env(1).
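The resulting invocation looks roughly like this (the cross-compiler
name is purely illustrative):

  $ env CC=aarch64-linux-gnu-gcc dtrace -G -s probes.d -o probes.o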
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=980334
Signed-off-by: Helmut Grohne <helmut@subdivi.de>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
virPCIGetNetName is used to get the name of the netdev associated with
a particular PCI device. This is used when we have a VF name, but need
the PF name in order to send a netlink command (e.g. in order to
get/set the MAC address of the VF).
In simple cases there is a single netdev associated with any PCI
device, so it is easy to figure out the PF netdev for a VF - just look
for the PCI device that has the VF listed in its "virtfns" directory;
the only name in the "net" subdirectory of that PCI device's sysfs
directory is the PF netdev that is upstream of the VF in question.
In some cases there can be more than one netdev in a PCI device's net
directory though. In the past, the only case of this was for SR-IOV
NICs that could have multiple PFs per PCI device. In this case, all
PF netdevs associated with a PCI address would be listed in the "net"
subdirectory of the PCI device's directory in sysfs. At the same time,
all VF netdevs and all PF netdevs have a phys_port_id in their sysfs,
so the way to learn the correct PF netdev for a particular VF netdev
is to search through the list of devices in the net subdirectory of
the PF's PCI device, looking for the one netdev with a "phys_port_id"
matching that of the VF netdev.
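A hedged sketch of that search (readNetAttr() is a hypothetical helper
reading /sys/class/net/<name>/<attribute>):

  GDir *dir = g_dir_open(pfNetDirPath, 0, NULL);
  const char *name;

  while (dir && (name = g_dir_read_name(dir))) {
      g_autofree char *portID = readNetAttr(name, "phys_port_id");

      if (g_strcmp0(portID, vfPhysPortID) == 0) {
          pfName = g_strdup(name);   /* the PF upstream of our VF */
          break;
      }
  }
  if (dir)
      g_dir_close(dir);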
But starting in kernel 5.8, the NVIDIA Mellanox driver began linking
the VFs' representor netdevs to the PF's PCI address [1], and so the
VF representor netdevs also show up in the net subdirectory. However,
all of the devices that do so have only a single PF netdev for any
given PCI address.
This means that the net directory of the PCI device can still hold
multiple net devices, but only one of them will be the PF netdev (the
others are VF representors):
$ ls '/sys/bus/pci/devices/0000:82:00.0/net'
ens1f0 eth0 eth1
In this case the way to find the PF device is to look at the
"phys_port_name" attribute of each netdev in sysfs. All PF devices
have a phys_port_name matching a particular regex:
(p[0-9]+$)|(p[0-9]+s[0-9]+$)
Since there can only be one PF in the entire list of devices, once we
match that regex, we've found the PF netdev.
[1] - https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/
commit/?id=123f0f53dd64b67e34142485fe866a8a581f12f1
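A hedged sketch of the PF check (using POSIX regex for illustration;
the helper name is hypothetical):

  #include <regex.h>
  #include <stdbool.h>

  /* true if phys_port_name marks a PF, e.g. "p0" or "p0s0" */
  static bool
  physPortNameIsPF(const char *physPortName)
  {
      regex_t re;
      bool found;

      if (regcomp(&re, "(p[0-9]+$)|(p[0-9]+s[0-9]+$)",
                  REG_EXTENDED | REG_NOSUB) != 0)
          return false;

      found = regexec(&re, physPortName, 0, NULL, 0) == 0;
      regfree(&re);
      return found;
  }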
Co-Authored-by: Moshe Levi <moshele@nvidia.com>
Signed-off-by: Dmytro Linkin <dlinkin@nvidia.com>
Reviewed-by: Adrian Chiris <adrianc@nvidia.com>
Reviewed-by: Laine Stump <laine@redhat.com>
This commit adds virNetDevGetPhysPortName to read a netdevice's
phys_port_name from sysfs. It also refactors the code so that
virNetDevGetPhysPortName and virNetDevGetPhysPortID use the same
method to read the netdevice sysfs attributes.
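A minimal sketch of reading such an attribute (path construction is
illustrative; plain GLib calls):

  #include <glib.h>

  /* read /sys/class/net/<ifname>/phys_port_name; NULL if absent */
  static char *
  readPhysPortName(const char *ifname)
  {
      g_autofree char *path =
          g_strdup_printf("/sys/class/net/%s/phys_port_name", ifname);
      char *contents = NULL;

      if (!g_file_get_contents(path, &contents, NULL, NULL))
          return NULL;   /* not all drivers expose the attribute */

      return g_strchomp(contents);
  }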
Signed-off-by: Moshe Levi <moshele@nvidia.com>
Reviewed-by: Laine Stump <laine@redhat.com>
Fixes a memory leak when hypervCreateInvokeParamsList() fails.
Signed-off-by: Matt Coleman <matt@datto.com>
Reviewed-by: Laine Stump <laine@redhat.com>
The code handles XML bits and the internal definition, so it should
live in the conf directory.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
The code handles XML bits and the internal definition, so it should
live in the conf directory.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Same as virStorageFileBackend, it doesn't belong in the util
directory.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
It's used only by the storage file code, so it doesn't make sense to
have it in the util directory.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Up until now we had runtime code and XML-related code in the same
source file inside the util directory.
This patch takes the runtime part and extracts it into the new
storage_file directory.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
This code is not directly relevant to virStorageSource, so move it to
a separate file.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Introduce a new storage_file directory where we will keep
storage-file-related code. Add a backend prefix to the file name to
separate it from other future files with the 'storage_file' prefix.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
This will allow following patches to move virStorageSource into conf
directory and virStorageDriverData into a new storage_file directory.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
These files use functions from virstoragefile.h but are missing an
explicit include.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
When adding a rule for an image file, and that image file has a chain
of backing files, we need to add a rule for each of those files. To do
that, iterate over the backing file chain the same way dac/selinux
already do, and add a label for each.
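The iteration looks roughly like this (a hedged sketch; the
rule-adding helper is hypothetical):

  virStorageSource *src;

  for (src = disk->src; virStorageSourceIsBacking(src);
       src = src->backingStore) {
      if (src->path)
          addAppArmorImageRule(src->path);
  }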
Fixes: https://gitlab.com/libvirt/libvirt/-/issues/118
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Jim Fehlig <jfehlig@suse.com>
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Since Hyper-V allows multiple VMs to be created with the same name,
some commands produce unpredictable results due to
hypervDomainLookupByName's WMI query selecting the wrong domain.
For example, this patch prevents `virsh dumpxml` from outputting XML
for the wrong domain.
Signed-off-by: Matt Coleman <matt@datto.com>
Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
It's better to fill in missing values in post parse callbacks
than during parsing.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
The _virDomainMemoryDef structure has a @uuid member which is needed
for PPC64 guests. No other architectures use it. Since the member is
VIR_UUID_BUFLEN bytes long, the structure is unnecessarily big. If the
member is just a pointer then we can replace some calls to
virUUIDIsValid() with a plain test against NULL and also simplify the
formatter code, which can now check the pointer against NULL too.
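Sketched out (surrounding members omitted):

  struct _virDomainMemoryDef {
      /* ... */
      unsigned char *uuid;  /* was: unsigned char uuid[VIR_UUID_BUFLEN] */
  };

  /* a virUUIDIsValid(def->uuid) check becomes a plain NULL test: */
  if (def->uuid) {
      /* format the <uuid/> element */
  }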
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Now we have everything prepared for generating the command line. The
device alias prefix was chosen to be 'virtiopmem'.
Since the virtio-pmem-pci device goes onto the PCI bus, generating the
device alias had to be changed slightly: qemuAssignDeviceMemoryAlias()
might have used the DIMM slot number to generate the alias, which
obviously won't work here, so the "old" way (which uses
qemuDomainDeviceAliasIndex()) must be used.
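A hedged sketch of that "old" way (mirroring the pattern used for
other PCI devices, not the exact code):

  size_t i;
  int maxidx = 0;

  for (i = 0; i < def->nmems; i++) {
      int idx = qemuDomainDeviceAliasIndex(&def->mems[i]->info,
                                           "virtiopmem");
      if (idx >= maxidx)
          maxidx = idx + 1;
  }
  mem->info.alias = g_strdup_printf("virtiopmem%d", maxidx);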
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1735375
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Some users might want to have virtio-pmem backed by a block device
in which case we have to create the device in the domain private
namespace.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Some users might want to have virtio-pmem backed by a block
device in which case we have to allow the device in CGroups.
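Roughly (a hedged sketch; the member holding the source path is
illustrative):

  if (virCgroupAllowDevicePath(priv->cgroup, mem->nvdimmPath,
                               VIR_CGROUP_DEVICE_RW, false) < 0)
      return -1;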
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Just like with the NVDIMM model, we have to relabel the path backing
the virtio-pmem device so that QEMU can access it.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
virtio-pmem is a virtio variant of NVDIMM and, just like NVDIMM,
virtio-pmem allows accessing host pages while bypassing the guest page
cache. The difference is that if a regular file is used to back the
guest's NVDIMM (model='nvdimm'), the persistence of guest writes might
not be guaranteed, while with virtio-pmem it is.
To express this new model at domain XML level, I've chosen the
following:
  <memory model='virtio-pmem' access='shared'>
    <source>
      <path>/tmp/virtio_pmem</path>
    </source>
    <target>
      <size unit='KiB'>524288</size>
    </target>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
  </memory>
Another difference between NVDIMM and virtio-pmem is that while the
former supports NUMA node locality, the latter doesn't. Also, the
latter goes onto the PCI bus rather than into a DIMM module.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
This commit introduces a new capability that reflects virtio-pmem-pci
device support in QEMU:
QEMU_CAPS_DEVICE_VIRTIO_PMEM_PCI, /* -device virtio-pmem-pci */
The virtio-pmem-pci device was introduced in QEMU 4.1.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>