While processing the volume, a failure in lseek, virFileReadHeaderFD,
or virStorageFileGetMetadataFromBuf would cause an error to be
reported, but ret would not be set. That resulted in an error message
being sent while a successful status was still returned.
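A minimal sketch of the problem, using hypothetical names
(processVolume and readHeader stand in for the real code paths): the
error is reported, but because ret is never set to -1 the caller still
sees success. The fix is the assignment marked below.

    #include <stdio.h>

    /* Stand-in for virFileReadHeaderFD & co.; always fails here. */
    static int
    readHeader(int fd)
    {
        (void)fd;
        return -1;
    }

    static int
    processVolume(int fd)
    {
        int ret = 0;

        if (readHeader(fd) < 0) {
            fprintf(stderr, "failed to read header\n");
            ret = -1;          /* previously missing: without this, the   */
            goto cleanup;      /* function reported an error yet returned */
        }                      /* success to its caller                   */

     cleanup:
        return ret;
    }

    int
    main(void)
    {
        return processVolume(0) < 0 ? 1 : 0;
    }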
Make it clearer what to expect on input and what types of return
values can be generated. These were loosely copied from the existing
virStorageBackendUpdateVolTargetInfoFD.
Similar to openflags, which allow VIR_STORAGE_VOL_OPEN_NOERROR to be
passed to avoid open errors, add a 'readflags' variable so that in the
future read failures can also be ignored.
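A minimal sketch of the idea with assumed names (the enum values and
updateVolInfo are illustrative, not libvirt's actual identifiers):
openflags already lets callers suppress open() errors, and a parallel
readflags parameter would let future callers suppress read errors too.

    #include <fcntl.h>
    #include <unistd.h>

    enum {
        VOL_OPEN_NOERROR = 1 << 0,  /* like VIR_STORAGE_VOL_OPEN_NOERROR   */
        VOL_READ_NOERROR = 1 << 1,  /* hypothetical future read equivalent */
    };

    int
    updateVolInfo(const char *path, unsigned int openflags,
                  unsigned int readflags)
    {
        char buf[512];
        int fd = open(path, O_RDONLY);

        if (fd < 0)
            return (openflags & VOL_OPEN_NOERROR) ? 0 : -1;

        if (read(fd, buf, sizeof(buf)) < 0) {
            close(fd);
            return (readflags & VOL_READ_NOERROR) ? 0 : -1;
        }

        close(fd);
        return 0;
    }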
This updates the test program to make it consistent with recent changes
to the mock libraries, and also opens up the possibility of mocking more
than just /sys in the future.
Instead of fakesysfsdir, which is very generic, use fakesysfspcidir and
fakesysfscgroupdir. This makes it explicit what part of the fake sysfs
filesystem they're referring to, and also leaves open the possibility of
handling files in two unrelated parts of the fake sysfs filesystem.
No functional changes.
We might need to mock files living outside SYSFS_PREFIX later on,
so it's better to treat the temporary directory we are passed via
the environment as the root of the fake filesystem and create
SYSFS_PREFIX inside it.
The environment variable name will be changed to reflect the new use
we're making of it in a later commit.
We might need to mock files living outside PCI_SYSFS_PREFIX later on,
so it's better to treat the temporary directory we are passed via
the environment as the root of the fake filesystem and create
PCI_SYSFS_PREFIX inside it.
The environment variable name will be changed to reflect the new use
we're making of it in a later commit.
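A rough sketch of the layout change described in the two messages
above, with an assumed environment variable name and prefix value: the
directory from the environment is treated as the root of the fake
filesystem and the sysfs prefix is appended inside it.

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>

    #define FAKE_SYSFS_PCI_PREFIX "/sys/bus/pci"   /* illustrative value */

    /* Build "<fake root>/sys/bus/pci" from the temporary directory
     * passed via the environment (variable name is an assumption). */
    char *
    fakeSysfsPciPath(void)
    {
        const char *root = getenv("LIBVIRT_FAKE_ROOT_DIR");
        char *path = NULL;

        if (!root || asprintf(&path, "%s%s", root, FAKE_SYSFS_PCI_PREFIX) < 0)
            return NULL;

        return path;
    }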
Add qemuDomainHasVCpuPids to do the checking and replace the in-place
checks with it.
We no longer need to check whether the thread list contains fake data
(vcpupids[0] == vm->pid), since that check was removed in b07f3d821d
and 65686e5a81.
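A minimal sketch of what such a predicate looks like; the structure
and field names here are assumptions, not the qemu driver's actual
private data.

    #include <stdbool.h>
    #include <stddef.h>
    #include <sys/types.h>

    struct vcpuPrivData {
        size_t nvcpupids;   /* number of detected vCPU thread IDs */
        pid_t *vcpupids;    /* the thread IDs themselves           */
    };

    /* True if valid vCPU thread IDs were detected for the domain. */
    bool
    domainHasVcpuPids(const struct vcpuPrivData *priv)
    {
        return priv->nvcpupids > 0 && priv->vcpupids != NULL;
    }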
The vCPU thread check makes sense in the counterparts that set the
vCPU bandwidth/quota, not in the emulator one, since the emulator
tunables are set in all cases anyway.
Drop the extra check and remove the now unneeded vm argument.
Since commit 0c04906fa the check for priv->cgroup doesn't make sense,
as the calls to virCgroupHasController return the same information.
Remove it and move part of its comment to the new check.
The already spurious check was later copied to the iothreads code as
well.
Once more data is moved into the vCPU data structure it will
occasionally be necessary to retrieve a specific vCPU. Add a helper
that will simplify this task.
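A sketch of the kind of accessor meant here, using a hypothetical vCPU
structure and function name:

    #include <stdbool.h>
    #include <stddef.h>

    struct vcpuInfo {
        bool online;
        int tid;
    };

    /* Return the vCPU at the given index, or NULL if it is out of range. */
    struct vcpuInfo *
    domainGetVcpu(struct vcpuInfo *vcpus, size_t nvcpus, size_t idx)
    {
        if (idx >= nvcpus)
            return NULL;

        return &vcpus[idx];
    }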
Refactor the code flow so that the 'exit_monitor:' label can be
removed.
This patch moves the auditing functions to places where it is certain
whether hotunplug succeeded or not, and reports errors from
qemuMonitorGetCPUInfo properly.
Refactor the code flow so that the 'exit_monitor:' label can be
removed.
This patch also moves the auditing and setting of the new vCPU count
right to the place where the hotplug happens, since it's possible that
the hotplug succeeds and adds a CPU while subsequent steps fail.
Lastly, failures of qemuMonitorGetCPUInfo are now reported rather than
ignored. The function returns 0 if it "successfully" detected 0
threads.
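A rough sketch of the reworked flow, with stand-in names for the
monitor and audit helpers (none of these are libvirt's real
functions): the monitor section ends right after the call, so no
'exit_monitor:' label is needed, and the audit plus vCPU count update
sit exactly where the hotplug outcome is known.

    struct dom { unsigned int def_vcpus; };

    /* Placeholder declarations for the monitor/audit helpers. */
    void enterMonitor(struct dom *vm);
    int  exitMonitor(struct dom *vm);
    int  monitorAddCpu(struct dom *vm, unsigned int cpu);
    void auditVcpu(struct dom *vm, unsigned int cpu, int success);

    int
    hotplugOneVcpu(struct dom *vm, unsigned int cpu)
    {
        int rc;

        enterMonitor(vm);
        rc = monitorAddCpu(vm, cpu);      /* the actual hotplug */
        if (exitMonitor(vm) < 0)
            return -1;

        auditVcpu(vm, cpu, rc == 0);      /* audit where the outcome is known */
        if (rc < 0)
            return -1;

        vm->def_vcpus++;                  /* the CPU exists even if later steps fail */
        return 0;
    }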
qemuDomainHotplugVcpus/qemuDomainHotunplugVcpus are complex enough
when dealing with just a single vCPU. Additionally, it will be
desirable to reuse those functions later for hotplug of specific
vCPUs.
Move the loops for adding vCPUs into qemuDomainSetVcpusFlags so that
the helpers can be made simpler and more straightforward.
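A sketch of the split, with illustrative names: the API entry point
keeps the loop over the requested range while the helper handles
exactly one vCPU.

    struct dom;                                            /* opaque domain handle   */
    int hotplugOneVcpu(struct dom *vm, unsigned int cpu);  /* helper adding one vCPU */

    /* Simplified shape of the entry point after the refactor. */
    int
    setVcpus(struct dom *vm, unsigned int current, unsigned int target)
    {
        unsigned int i;

        for (i = current; i < target; i++) {
            if (hotplugOneVcpu(vm, i) < 0)
                return -1;    /* stop at the first failure */
        }

        return 0;
    }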
The CPU hotplug helper functions used negative error handling in
parts, but some code added later didn't properly set the error code in
some cases. This resulted in improper error messages when we couldn't
modify the NUMA CPU mask and in a few other cases.
Fix the logic by converting it to the regularly used pattern.
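A minimal sketch of the regularly used pattern referred to here, with
placeholder step functions: ret starts at -1, every failure jumps to a
single cleanup label, and ret only becomes 0 once all steps have
succeeded.

    /* Placeholder steps; each returns < 0 on error. */
    int updateCgroup(void);
    int updateNumaMask(void);
    int updateCpuMask(void);

    int
    hotplugFinishSetup(void)
    {
        int ret = -1;

        if (updateCgroup() < 0)
            goto cleanup;

        if (updateNumaMask() < 0)   /* previously a failure here could leave */
            goto cleanup;           /* a stale "success" return value        */

        if (updateCpuMask() < 0)
            goto cleanup;

        ret = 0;
     cleanup:
        return ret;
    }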
With very unfortunate timing, the agent might vanish while the locks
were down, before we make the second call. Re-check that the agent is
available before attempting it again.
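A rough sketch of the re-check, with hypothetical helpers: once the
locks are re-acquired, the agent pointer is validated again before the
second call, since it may have gone away in the meantime.

    struct dom { void *agent; };

    /* Placeholder declarations for the lock and agent helpers. */
    void lockDomain(struct dom *vm);
    void unlockDomain(struct dom *vm);
    int  agentCall(struct dom *vm);

    int
    secondAgentCall(struct dom *vm)
    {
        int ret = -1;

        lockDomain(vm);

        /* The agent may have vanished while the locks were down. */
        if (!vm->agent)
            goto cleanup;

        ret = agentCall(vm);

     cleanup:
        unlockDomain(vm);
        return ret;
    }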
We should make a copy of the current definition to preserve the
persistent definition, because we later update the definition with
live changes. The live definition is discarded on domain shutdown and
replaced by the copy we make before starting the domain.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
This change ensures that driver-specific post-parse code is called to
modify the domain definition after parsing the hypervisor config, the
same way it is called after parsing XML.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
This change ensures that driver-specific post-parse code is called to
modify the domain definition after parsing the hypervisor config, the
same way it is called after parsing XML.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
This change ensures that driver-specific post-parse code is called to
modify the domain definition after parsing the hypervisor config, the
same way it is called after parsing XML.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
This reverts commit d2e5538b16.
A migration regression was introduced by this commit. When migrating
a domain, its active XML is sent to the destination libvirtd, where
it is parsed as inactive XML. d2e5538b copied the libxl generated
interface name into the active config, which was being passed to the
migration destination and being parsed into inactive config. Attempting
to start the config could result in failure if an interface with the
same generated name already exists.
The qemu driver behaves similarly, but the parser contains a hack to
skip interface names starting with 'vnet' when parsing inactive XML.
We could extend the hack to skip names starting with 'vif' too, but a
better fix would be to expose these hypervisor-specific interface name
prefixes in capabilities. See the following discussion thread for more
details:
https://www.redhat.com/archives/libvir-list/2015-December/msg00262.html
For the pending 1.3.0 release, it is best to revert d2e5538b. It can
be added again post release, after moving the prefix to capabilities.
When installing the libvirt-daemon RPM, we have a %post rule to
enable the libvirtd.service, virtlockd.socket and virtlogd.socket
files. This is only done, however, when the RPM is first installed,
not when upgrading RPMs. So virtlogd will not get activated on
upgrade, which is a problem as the libvirt qemu driver will expect
it to be available by default.
This adds a trigger that is run when uninstalling a libvirt-daemon
package older than 1.3.0; it enables & starts virtlogd.socket if
libvirtd is enabled and/or started. Using the trigger rather than
%post ensures that it only runs once, allowing admins to explicitly
disable it thereafter without future upgrades re-enabling it.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
When someone does 'systemctl enable libvirtd.service' we should
also enable virtlockd.socket/virtlogd.socket, so that they can
be auto-activated when libvirtd tries to access the sockets.
Without this, people have to manually enable the units themselves
via 'systemctl enable virtlogd.socket'.
This also ensures that if a distro uses 'systemctl preset' for
enabling 'libvirtd.service', then virtlogd.socket gets enabled
without having to wait for the distro to update its presets file.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>