https://bugzilla.redhat.com/show_bug.cgi?id=1197600
libvirt uses a pid file to track the pid of each started qemu process.
Whenever a domain is started, its pid is written into the corresponding
pid file. The pid file path is generated from the domain name and stored
in the domain object internals. However, it is not stored in the status
XML and is therefore lost on daemon restart. Hence, later, when the
domain is being shut down, the daemon does not know which pid file to
unlink, and the stale pid file is left behind. To avoid this, let's
generate the pid file path again in qemuProcessReconnect().
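As a hedged sketch of the idea (standalone stand-in code, not the actual
libvirt helper; the real path is built from the daemon's state directory
and the domain name):
#define _GNU_SOURCE
#include <stdio.h>

/* Stand-in for regenerating "<stateDir>/<domain name>.pid" during
 * reconnect, since the path was never written to the status XML.
 * The caller frees the result. */
static char *
rebuild_pidfile_path(const char *state_dir, const char *dom_name)
{
    char *path = NULL;

    if (asprintf(&path, "%s/%s.pid", state_dir, dom_name) < 0)
        return NULL;
    return path;
}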
Reported-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Instead of checking defaultMode for every channel that has no mode
configured, test it only once, outside the channel loop. This fixes a
bug where, if all possible channels are, for example, set to insecure
but defaultMode is set to secure, we would not auto-generate the TLS
port, which results in a failure while starting the guest.
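In outline, the decision now looks like this (simplified, self-contained
sketch; the MODE_* constants stand in for the real SPICE channel mode
values):
#include <stddef.h>

enum { MODE_ANY, MODE_SECURE, MODE_INSECURE };

/* Consult defaultMode exactly once, outside the per-channel loop, so a
 * secure defaultMode triggers TLS-port auto-generation even when every
 * channel is explicitly insecure. */
static int
needs_tls_port(const int *channel_modes, size_t nchannels, int default_mode)
{
    size_t i;

    if (default_mode == MODE_SECURE)
        return 1;

    for (i = 0; i < nchannels; i++) {
        if (channel_modes[i] == MODE_SECURE)
            return 1;
    }
    return 0;
}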
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1143832
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
We have two different places that need to be updated when touching the
code for allocating SPICE ports. Add a bool option to the
'qemuProcessSPICEAllocatePorts' function to switch between real and fake
allocation so we can also use this function in qemu_driver to generate
the native domain definition.
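A rough, hypothetical sketch of what the new flag switches between
(local stand-in types, not the real qemu driver structures):
struct spice_ports { int port; int tls_port; };

/* allocate=0 ("fake" allocation) only marks the ports as auto-allocated,
 * which is enough for building a native domain definition; allocate=1
 * would reserve real ports from the port allocator. */
static int
spice_allocate_ports(struct spice_ports *p, int need_tls, int allocate)
{
    if (!allocate) {
        p->port = -1;
        p->tls_port = need_tls ? -1 : 0;
        return 0;
    }
    p->port = 5900;                     /* stand-in for a real reservation */
    p->tls_port = need_tls ? 5901 : 0;  /* stand-in for a real reservation */
    return 0;
}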
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Since adding the support for scheduler policy settings in commit
8680ea97, there are two enums with the same information. That was
caused by the patch being rewritten after its first draft.
Found out thanks to clang; there was no functional impact whatsoever.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Upon BLOCK_JOB_COMPLETED event delivery, we check whether the job has
completed (in qemuMonitorJSONHandleBlockJobImpl()). To give a better
picture, the event looks something like this:
{"timestamp": {"seconds": 1423582694, "microseconds": 372666}, "event":
"BLOCK_JOB_COMPLETED", "data": {"device": "drive-virtio-disk0", "len":
8412790784, "offset": 409993216, "speed": 8796093022207, "type":
"mirror", "error": "No space left on device"}}
If "len" does not equal "offset" it's considered an error, and we can
clearly see the "error" field filled in. However, later in the event
processing this case was handled no differently from a job being
aborted via the separate API. It's time we start differentiating these
two cases, because of future work.
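A hedged sketch of the distinction (field names taken from the event
above; the helper itself is hypothetical):
#include <stddef.h>

enum job_outcome { JOB_COMPLETED, JOB_FAILED };

/* A BLOCK_JOB_COMPLETED event where "len" != "offset" (usually with an
 * "error" string attached) is a genuine failure; a job cancelled via
 * the separate API arrives through a different event and should no
 * longer be lumped together with this case. */
static enum job_outcome
classify_completed_event(long long len, long long offset, const char *error)
{
    if (error != NULL || len != offset)
        return JOB_FAILED;
    return JOB_COMPLETED;
}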
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Currently, disk->mirrorState is not updated on every BLOCK_JOB_* event.
The callback code handling the events checks whether a block job was
started via our public APIs prior to setting the mirrorState. However,
some block jobs may be started internally (e.g. during storage
migration), in which case we don't bother setting disk->mirror (there's
nothing we can set it to anyway) or the other fields. But it will come
in handy if we update the mirrorState in these cases too; the event
wasn't delivered just for fun - we did start the job after all.
So, in this commit, the mirrorState is set to whatever job status we've
obtained. Of course, there are actions we want to perform for some
statuses, but instead of an if {} else if {} else {} ... enumeration,
let's move to a switch().
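In outline (indicative only; the real handler maps the status onto
disk->mirrorState and related bookkeeping):
#include <libvirt/libvirt.h>

/* Record the reported status unconditionally, then handle any
 * status-specific work in a switch rather than an if/else-if chain. */
static void
handle_block_job_status(int *mirror_state, int status)
{
    *mirror_state = status;

    switch (status) {
    case VIR_DOMAIN_BLOCK_JOB_COMPLETED:
    case VIR_DOMAIN_BLOCK_JOB_FAILED:
    case VIR_DOMAIN_BLOCK_JOB_CANCELED:
        /* the job is gone: tear down per-disk mirror bookkeeping here */
        break;
    case VIR_DOMAIN_BLOCK_JOB_READY:
        /* mirror is ready: remember it so a later pivot can be done */
        break;
    default:
        break;
    }
}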
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
We do have a check for a valid per-domain security model, however we
still permit an invalid security model for a domain's devices (those
specified with a <source> element).
This patch introduces a new function, virSecurityManagerCheckAllLabel,
which compares the user-specified security model against the currently
registered security drivers. It also permits 'none' as a device
security model.
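Conceptually, the new check boils down to something like this
(hypothetical helper; the real function iterates the per-device
seclabels in the domain definition):
#include <stddef.h>
#include <string.h>

/* A device security model is acceptable if it is the explicit "none"
 * model or matches one of the currently registered security drivers. */
static int
device_model_is_valid(const char *model,
                      const char *const *registered, size_t nregistered)
{
    size_t i;

    if (model == NULL || strcmp(model, "none") == 0)
        return 1;
    for (i = 0; i < nregistered; i++) {
        if (strcmp(model, registered[i]) == 0)
            return 1;
    }
    return 0;
}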
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1165485
Signed-off-by: Ján Tomko <jtomko@redhat.com>
In a previous commit I fixed the incorrect handling of vcpu pids
for TCG mode QEMU:
commit b07f3d821dfb11a118ee75ea275fd6ab737d9500
Author: Daniel P. Berrange <berrange@redhat.com>
Date: Thu Dec 18 16:34:39 2014 +0000
Don't setup fake CPU pids for old QEMU
The code assumes that def->vcpus == nvcpupids, so when we setup
fake CPU pids for old QEMU with nvcpupids == 1, we cause the
later code to read off the end of the array. This has fun results
like sched_setaffinity(0, ...) which changes libvirtd's own CPU
affinity, or even better sched_setaffinity($RANDOM, ...) which
changes the affinity of a random OS process.
The intent was that this would merely disable the ability to set
per-vCPU affinity. It should still have been possible to set VM
level host CPU affinity.
Unfortunately, when you set <vcpu cpuset='0-1'>4</vcpu>, the XML
parser will internally take this & initialize an entry in the
def->cputune.vcpupin array for every VCPU. IOW this is implicitly
being treated as
<cputune>
  <vcpupin cpuset='0-1' vcpu='0'/>
  <vcpupin cpuset='0-1' vcpu='1'/>
  <vcpupin cpuset='0-1' vcpu='2'/>
  <vcpupin cpuset='0-1' vcpu='3'/>
</cputune>
Even more fun, the faked cputune elements are hidden from view when
querying the live XML, because their cpuset mask is the same as the
VM default cpumask.
The upshot was that it was impossible to set VM level CPU affinity.
To fix this we must update qemuProcessSetVcpuAffinities so that it
only reports a fatal error if the per-VCPU cpu mask is different
from the VM level cpu mask.
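A simplified sketch of the relaxed check (stand-in types; the real code
compares virBitmap cpumasks):
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Only treat the inability to apply per-vCPU pinning as fatal if the
 * vCPU actually requests a mask different from the VM-level mask. */
static bool
vcpu_pinning_required(const unsigned char *vcpu_mask,
                      const unsigned char *vm_mask, size_t mask_len)
{
    if (vcpu_mask == NULL)
        return false;                  /* nothing requested for this vCPU */
    if (vm_mask == NULL)
        return true;                   /* explicit per-vCPU request */
    return memcmp(vcpu_mask, vm_mask, mask_len) != 0;
}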
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
It is often helpful to know which version of libvirt and QEMU
was present when a guest was first launched. Ensure this info
is written into the QEMU log file for each guest.
Depending on the context, either error out if the domain
has disappeared in the meantime, or just ignore the value
to allow marking the function as ATTRIBUTE_RETURN_CHECK.
https://bugzilla.redhat.com/show_bug.cgi?id=1161024
In the device type-specific functions, exit early
if the domain has disappeared, because the cleanup
should have been done by qemuProcessStop.
Check the return value in processDeviceDeletedEvent
and qemuProcessUpdateDevices.
Skip audit and removing the device from live def because
it has already been cleaned up.
Make local copy of the disk alias in qemuProcessInitPasswords,
instead of referencing the one in domain definition, which
might get freed if the domain crashes while we're in monitor.
Also copy the memballoon period value.
QEMU internally updates the size of video memory if the domain XML
provided too small a memory size or if there are dependencies between a
QXL device's 'vgamem' and 'ram' sizes. We need to know about these
changes and store them in the status XML so as not to break migration
or managedsave across different libvirt versions.
The values are loaded only if the "vgamem_mb" property exists for the
device. The presence of "vgamem_mb" also indicates that "ram_size" and
"vram_size" exist for QXL devices.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
There is one problem that causes various errors in the daemon: when a
domain is waiting for a job, it is unlocked while waiting on the
condition. However, if that domain is, for example, transient and is
being removed by another API (e.g. cancelling an incoming migration),
it gets unref'd. If the first call, the one that was waiting, fails to
get the job, it unrefs the domain object, and because that was the
last reference, the whole domain object is cleared. However, when the
call finishes, the domain must be unlocked, but there is no way for
the API to know whether it was cleaned up or not (unless we add some
ugly temporary variable, but let's scratch that).
The root cause is that our APIs don't ref the objects they are using;
they all rely on the implicit reference the object holds while it is
in the domain list. That reference can be removed while the API is
waiting for a job. And because the callers don't do their own ref'ing,
we end up with the ugly checking of the return value of
virObjectUnref() that we have everywhere.
This patch changes qemuDomObjFromDomain() to ref the domain (using
virDomainObjListFindByUUIDRef()) and adds qemuDomObjEndAPI() which
should be the only function in which the return value of
virObjectUnref() is checked. This makes all reference counting
deterministic and makes the code a bit clearer.
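The intended calling pattern, sketched with stand-in types (not the real
virDomainObj machinery; error handling elided):
#include <stddef.h>

struct dom { int refs; int locked; };

/* qemuDomObjFromDomain-like lookup: returns the object locked and with
 * an extra reference held by the caller. */
static struct dom *
dom_lookup_ref(struct dom *d)
{
    d->refs++;
    d->locked = 1;
    return d;
}

/* qemuDomObjEndAPI-like helper: the single place where the caller's
 * reference is dropped; the caller's pointer is always cleared, so the
 * return value of the unref never needs to be checked elsewhere. */
static void
dom_end_api(struct dom **d)
{
    if (*d == NULL)
        return;
    (*d)->locked = 0;
    (*d)->refs--;            /* last-reference cleanup would happen here */
    *d = NULL;
}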
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Although QMP returns info about vCPU threads in TCG mode, the
data it returns is mostly lies. Only the first vCPU has a valid
thread_id returned. The thread_id given for the other vCPUs is
in fact the main emulator thread. All vCPUs actually run under
the same thread in TCG mode.
Our vCPU pinning code is not at all able to cope with this
so if you try to set CPU affinity per-vCPU you end up with
weird errors:
error: Failed to start domain instance-00000007
error: cannot set CPU affinity on process 24365: Invalid argument
Since few people will care about the performance of TCG with
strict CPU pinning, let's just disable that for now, so we get
a clear error message
error: Failed to start domain instance-00000007
error: Requested operation is not valid: cpu affinity is not supported
The code assumes that def->vcpus == nvcpupids, so when we setup
fake CPU pids for old QEMU with nvcpupids == 1, we cause the
later code to read off the end of the array. This has fun results
like sched_setaffinity(0, ...) which changes libvirtd's own CPU
affinity, or even better sched_setaffinity($RANDOM, ...) which
changes the affinity of a random OS process.
We now have a qemuInterfaceStartDevices() which does the final
activation needed for the host-side tap/macvtap devices that are used
for qemu network connections. It will soon make sense to have the
converse qemuInterfaceStopDevices() which will undo whatever was done
during qemuInterfaceStartDevices().
A function to "stop" a single device has also been added, and is
called from the appropriate place in qemuDomainDetachNetDevice(),
although this is currently unnecessary - the device is going to
immediately be deleted anyway, so any extra "deactivation" will be for
naught. The call is included for completeness, though, in anticipation
that in the future there may be some required action that *isn't*
nullified by deleting the device.
This patch is a part of a more complete fix for:
https://bugzilla.redhat.com/show_bug.cgi?id=1081461
The patch that added qemuInterfaceStartDevices() (upstream commit
82977058f5b1d143a355079900029e9cbfee2fe4) had an extra conditional to
prevent calling it if the reason for starting the CPUs was
VIR_DOMAIN_RUNNING_UNPAUSED or VIR_DOMAIN_RUNNING_SAVE_CANCELED. This
was put in by the author as the result of a reviewer asking if it was
necessary to ifup the interfaces on *all* occasions (because these
were the two cases where the CPU would have already been started (and
stopped) once, so the interface would already be ifup'ed).
It turns out that, as long as there is no corresponding
qemuInterfaceStopDevices() to ifdown the interfaces anytime the CPUs
are stopped, neglecting to ifup when reason is RUNNING_UNPAUSED or
RUNNING_SAVE_CANCELED doesn't cause any problems (because it just
happens that the interface will have already been ifup'ed by a prior
call when the CPU was previously started for some other reason).
However, it also doesn't *help*, and there will soon be a
qemuInterfaceStopDevices() function which *will* ifdown these
interfaces when the guest CPUs are stopped, and once that is done, the
interfaces will be left down in some cases when they should be up (for
example, if a domain is paused and then unpaused).
So, this patch removes the condition in favor of always calling
qemuInterfaceStartDevices() when the guest CPUs are started.
This patch (and the aforementioned patch) resolve:
https://bugzilla.redhat.com/show_bug.cgi?id=1081461
Currently, MAC registration occurs during device creation, which is
early enough that, during live migration, you end up with duplicate
MAC addresses on still-running source and target devices, even though
the target device isn't actually being used yet.
This patch proposes to defer MAC registration until right before
the guest can actually use the device -- in other words, right
before starting guest CPUs.
Signed-off-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Signed-off-by: Laine Stump <laine@laine.org>
3ecebf07110ca8d3413072557f29137943e848e3 breaks the build as it adds a
way to jump to cleanup before the 'cfg' object is retrieved and 'priv'
is initialized.
Move entering the job into the thread to simplify the program flow. Also
as the code holds a separate reference to the domain object some
conditions can be simplified.
After this patch qemuDomainObjTransferJob is no longer needed so this
patch removes it.
There are some small issues in qemuProcessAttach:
1. Fix virSecurityManagerGetProcessLabel always being called with pid = 0:
   move 'vm->pid = pid' before the call to virSecurityManagerGetProcessLabel.
2. Use virSecurityManagerGenLabel to get the image label.
3. Fix always setting an SELinux label even when another security driver
   is in use.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
When a block{commit,copy} job was aborted on a domain, the block job
handler did not process it correctly, leaving a phantom job in the
background. Any further call to any blockjob API then fails with a
"block <jobtype> still active" error. This patch fixes the blockjob
handler so that it checks not only for the VIR_DOMAIN_BLOCK_JOB_FAILED
status, but for VIR_DOMAIN_BLOCK_JOB_CANCELED as well, followed by our
existing cleanup routine.
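In essence, using the public block job status constants (indicative
only):
#include <libvirt/libvirt.h>

/* Run the same teardown for a cancelled job as for a failed one, so no
 * phantom job is left registered against the disk. */
static int
blockjob_needs_cleanup(int status)
{
    return status == VIR_DOMAIN_BLOCK_JOB_FAILED ||
           status == VIR_DOMAIN_BLOCK_JOB_CANCELED;
}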
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1135169
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1160084
As of b6d4dad11b (1.2.5) we are trying to keep track of the FSFreeze
status in the guest. Even though I've tried to fix a couple of corner
cases (6ea54769ba18), it occurred to me just recently that the approach
is broken by design. Firstly, there are many other ways to talk to
qemu-ga (even through libvirt, e.g. qemu-agent-command) by which
filesystems can be thawed without libvirt noticing. Moreover, there are
plenty of ways to thaw filesystems without even qemu-ga noticing (yes,
qemu-ga keeps internal track of the FSFreeze status). So, instead of
keeping track ourselves, or asking qemu-ga for possibly stale state,
it's best to let qemu-ga deal with it (and possibly let the guest
kernel propagate an error).
Moreover, there was one bug in the previous approach: if the fsfreeze
command failed, we subsequently executed fsthaw. So issuing
domfsfreeze in virsh gave the following result:
virsh # domfsfreeze gentoo
Froze 1 filesystem(s)
virsh # domfsfreeze gentoo
error: Unable to freeze filesystems
error: internal error: unable to execute QEMU agent command 'guest-fsfreeze-freeze': The command guest-fsfreeze-freeze has been disabled for this instance
virsh # domfsfreeze gentoo
Froze 1 filesystem(s)
virsh # domfsfreeze gentoo
error: Unable to freeze filesystems
error: internal error: unable to execute QEMU agent command 'guest-fsfreeze-freeze': The command guest-fsfreeze-freeze has been disabled for this instance
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Improve the monitor function to also retrieve the guest state of the
character devices (if provided) so that we can refresh the state of
virtio-serial channels and perhaps react to changes in the state in
future patches.
This patch changes the returned data from qemuMonitorGetChardevInfo to
return a structure containing the pty path and the state for all the
character devices.
The change to the testsuite makes sure that the data is parsed
correctly.
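The per-chardev record in the returned hash has roughly this shape
(illustrative field names, not the exact libvirt structure):
/* One entry per character device, keyed by its alias in the hash. */
typedef struct {
    char *ptyPath;    /* backing pty path, if the chardev is a pty */
    int   state;      /* guest-side connection state, if reported */
} chardev_info;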
Newer qemu added an event that is emitted when a virtio serial channel
is opened in the guest OS. This allows us to update the state of the
port in the output-only XML element.
This patch implements the monitor callbacks and necessary handlers to
update the state in the definition.
To unify future additions that require information from "query-chardev"
rename qemuMonitorGetPtyPaths and friends to qemuMonitorGetChardevInfo
and move the allocation of the returned hash into the top level
function.
When creating a disk image snapshot, the libvirt code would blindly copy
the parent's label to the newly created image. This runs into problems
when you start a VM from an image hosted on NFS (or other storage system
that doesn't support selinux labels) and the snapshot destination is on
a storage system that does support selinux labels. Libvirt's code in
that case generates a different security label for the image hosted on
NFS. This label is valid only for NFS images and doesn't allow access in
case of a locally stored image.
To fix this issue libvirt needs to refrain from copying security
information in cases where the default domain seclabel is a better
choice.
This patch repurposes the now unused @force argument of
virStorageSourceInitChainElement to denote whether a copy of the
security labelling information should be attempted or not. This allows
fine-grained control of the copy operation for cases where we need to keep the
label of the old disk vs. the cases where we need to keep the label
unset to use the default domain imagelabel.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1151718
Ethernet interfaces in libvirt currently do not support bandwidth
settings. For example, the bandwidth settings in the following
interface XML are not applied to the corresponding qdiscs.
<interface type="ethernet">
  <mac address="02:36:1d:18:2a:e4"/>
  <model type="virtio"/>
  <script path=""/>
  <target dev="tap361d182a-e4"/>
  <bandwidth>
    <inbound average="984" peak="1024" burst="64"/>
    <outbound average="2000" peak="2048" burst="128"/>
  </bandwidth>
</interface>
Signed-off-by: Anirban Chakraborty <abchak@juniper.net>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Commit 6e5c79a1 tried to fix a deadlock between nwfilter{Define,Undefine}
and guest startup, but the same deadlock exists for updating/attaching a
network device to a domain.
The deadlock was introduced by the removal of the global QEMU driver
lock: nwfilter was relying on that lock to ensure that all driver locks
are held inside nwfilter{Define,Undefine}.
This patch extends the usage of virNWFilterReadLockFilterUpdates to
prevent the deadlock on all possible paths in the QEMU driver. The LXC
and UML drivers still have a global lock.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1143780
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
In one of my previous patches (3a3c3780b) I tried to fix the problem
of the nvram path disappearing on a domain that has been started and
shut down again. I fixed this by explicitly saving the domain's config
file. However, I was a bit clumsy and didn't take into account that we
don't save config files for transient domains. Hence, any domain using
UEFI inadvertently became persistent.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1160084
As of b6d4dad1 (1.2.5) libvirt keeps track of whether domain disks have
been frozen. However, this is part of the state that doesn't survive a
domain restart. Therefore, we need to clear the flag on certain state
transitions. Moreover, once we clear the flag we must update the
status file too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This is a reaction to Michal's fix [1] for non-NUMA systems that also
splits conf/ code out of util/, because libvirt_util shouldn't require
libvirt_conf; it should be the other way around. This particular use
case worked, but it's something we're trying to avoid, as mentioned [2]
many times.
The only functions from virnuma.c that needed numatune_conf were
virDomainNumatuneNodesetIsAvailable() and virNumaSetupMemoryPolicy().
The first one should be in numatune_conf as it works with
virDomainNumatune, the second one just needs nodeset and mode, both of
which can be passed without the need of numatune_conf.
Apart from fixing that, this patch also fixes recently added
code (between commits d2460f85^..5c8515620) that doesn't support
non-contiguous nodesets. It uses the new function
virNumaNodesetIsAvailable(), which doesn't need a stub as it doesn't use
any libnuma functions, to check that every specified node is available.
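A simplified, self-contained stand-in for that check (the real
virNumaNodesetIsAvailable() walks a virBitmap and consults the host):
#include <stdbool.h>
#include <stddef.h>

/* Every node in the requested (possibly non-contiguous) nodeset must be
 * available on the host; node_available stands in for the per-node
 * host check. */
static bool
nodeset_is_available(const int *nodes, size_t nnodes,
                     bool (*node_available)(int node))
{
    size_t i;

    for (i = 0; i < nnodes; i++) {
        if (!node_available(nodes[i]))
            return false;   /* the real code also reports which node */
    }
    return true;
}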
[1] https://www.redhat.com/archives/libvir-list/2014-November/msg00118.html
[2] http://www.redhat.com/archives/libvir-list/2011-June/msg01040.html
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Particularly in qemuBuildNumaArgStr(), there was a need for numad's
advice due to memory backing, which needs to know the nodeset it will
be pinned to. With newer qemu this caused the following error when
starting a domain:
error: internal error: Advice from numad is needed in case of
automatic numa placement
even when starting a perfectly valid domain, e.g.:
...
<vcpu placement='auto'>4</vcpu>
<numatune>
  <memory mode='strict' placement='auto'/>
</numatune>
<cpu>
  <numa>
    <cell id='0' cpus='0' memory='524288'/>
    <cell id='1' cpus='1' memory='524288'/>
  </numa>
</cpu>
...
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1138545
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Commit 3e1e16aa8d4238241a1806cb9bdb3b9ad60db777 (Use a port from the
migration range for NBD as well) changed NBD port allocation from
remotePorts to migrationPorts, but did not change the port releasing
code accordingly, which causes an error after migrating several times
(more than 64):
error: internal error: Unable to find an unused port in range
'migration' (49152-49215)
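The shape of the fix, as a hedged sketch with stand-in names (the real
code hands the port back through libvirt's port allocator):
#include <stdbool.h>

struct port_range { int base; bool used[64]; };

/* The NBD port must be released back to the same range it was taken
 * from: the migration range, not the remote-display range. */
static void
release_port(struct port_range *range, int port)
{
    if (port >= range->base && port < range->base + 64)
        range->used[port - range->base] = false;
}

/* before: release_port(&remote_ports,    nbd_port);  wrong range       */
/* after:  release_port(&migration_ports, nbd_port);  matches the alloc */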
https://bugzilla.redhat.com/show_bug.cgi?id=1159245
Signed-off-by: Weiwei Li <nuonuoli@tencent.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
If a VM is configured with many devices (including passthrough devices)
and large memory, libvirtd will take seconds (in the worst case) waiting
for the monitor. In this period the qemu process may run on any pCPU,
even though I intend to pin the emulator to the specified pCPU in the
XML configuration. The qemu process actually has high CPU usage during
VM startup, so strict CPU isolation is not guaranteed in this case.
Signed-off-by: Zhou yimin <zhouyimin@huawei.com>
NIC_RX_FILTER_CHANGED is sent by qemu any time a NIC driver in the
guest modifies the NIC's RX filter (for example, if the MAC address of
the NIC is changed by the guest).
This patch doesn't do anything useful with that event; it just sets up
all the plumbing to get news of the event into a worker thread with
all proper locking/reference counting, and provide an easy place to
add in desired functionality.
See src/qemu/EVENTHANDLERS.txt for information/instructions on adding
a libvirt-internal handler for a qemu event (using
NIC_RX_FILTER_CHANGED as an example).
If we don't properly clean up all processes in the
machine-<vmname>.scope, systemd won't remove the cgroup and subsequent
VM starts fail with
'CreateMachine: File exists'
Additional processes can e.g. be added via
echo $PID > /sys/fs/cgroup/systemd/machine.slice/machine-${VMNAME}.scope/tasks
but there are other cases like
http://bugs.debian.org/761521
Invoke TerminateMachine to be on the safe side since systemd tracks the
cgroup anyway. This is a noop if all processes have terminated already.