The structure actually contains migration statistics rather than just
the status as the name suggests. Renaming it to
qemuMonitorMigrationStats removes the confusion.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
My commit 674afcb09e3d33500cfbbcf870ebf92cb99ecfa3 moved computing the
default listen address from qemuMigrationPrepareAny to
qemuMigrationPrepareIncoming. However, I didn't notice listenAddress was
later passed to qemuMigrationStartNBDServer. Thus, it would be called
with the original value of listenAddress (NULL).
Let's add the updated listen address to qemuProcessIncomingDef and use
it when starting NBD servers.
Reported-by: Michael Chapman <mike@very.puzzling.org>
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
In commit 686eb7a24f7d, the break was not made part of the condition,
so the loop stopped after the first node when searching.
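Illustratively (this is the general bug pattern, not the actual code
touched by that commit; nodeMatches() is a hypothetical helper):

    /* Buggy: the break looks guarded by the if, but without braces it
     * runs unconditionally, so the loop stops after the first node. */
    for (i = 0; i < nnodes; i++) {
        if (nodeMatches(nodes[i]))
            found = nodes[i];
            break;              /* BUG: executes on every iteration */
    }

    /* Fixed: the break is part of the condition. */
    for (i = 0; i < nnodes; i++) {
        if (nodeMatches(nodes[i])) {
            found = nodes[i];
            break;
        }
    }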
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
The machine cgroup is a superset, a parent to the emulator and vcpuX
cgroups. The parent cgroup should never have any tasks directly in it.
In fact the parent cpuset might contain way more cpus than the sum of
emulatorpin and vcpupins. So putting tasks in the superset will allow
them to run outside of <cputune>.
Signed-off-by: Henning Schild <henning.schild@siemens.com>
When user configures vhost-user interface and forgets to also configure
any shared memory, the search for the root cause of non-operational
interface might take an unpleasantly long time. Let's enhance the user
experience by emitting a warning in the logs.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1266982
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Add qemuDomainHasVCpuPids to do the checking and replace the in-place
checks with it.
We no longer need to check whether vcpupids contains fake data
(vcpupids[0] == vm->pid), since that was removed in commits
b07f3d821dfb11a118ee75ea275fd6ab737d9500 and
65686e5a81d654d834d338fceeaf0229b2ca4f0d.
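A minimal sketch of such a helper (assuming the private data keeps the
vcpupids array together with an nvcpupids count; the exact field names
may differ):

    /* Sketch only: true when vCPU thread IDs were actually detected. */
    static bool
    qemuDomainHasVCpuPids(virDomainObjPtr vm)
    {
        qemuDomainObjPrivatePtr priv = vm->privateData;

        return priv->nvcpupids > 0;
    }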
Often when debugging bug reports one is given a copy of the file
from /var/log/libvirt/qemu/$NAME.log along with other supporting
files. In a number of cases I've been given sets of files which
were from different machines. Including the hostname in the QEMU
log file will help identify when the bug reporter is providing
bad information.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
If VM A is shut down by the qemu agent at approximately the same time
an agent EOF for VM A happens, there's a chance that a deadlock may
occur:

qemuProcessHandleAgentEOF in main thread
A) priv->agent = NULL; // A happened before B
   // deadlock when we try to take the agent lock, which is held by
   // the worker thread
   qemuAgentClose(agent);

qemuDomainObjExitAgent called by qemuDomainShutdownFlags in worker thread
B) hasRefs = virObjectUnref(priv->agent); // priv->agent is NULL,
                                          // returns false
   if (hasRefs)
       virObjectUnlock(priv->agent); // agent lock will not be released here

To resolve this, during EOF handling close the agent first, then set
priv->agent to NULL, which fixes the deadlock.
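A simplified sketch of the reordered EOF handler, following the
description in this message rather than the exact code:

    /* qemuProcessHandleAgentEOF (sketch): close the agent, which needs
     * the agent lock, before clearing priv->agent, so that
     * qemuDomainObjExitAgent in the worker thread still sees a non-NULL
     * agent and releases the agent lock as usual. */
    virObjectLock(vm);
    priv = vm->privateData;

    if (!priv->agent || priv->beingDestroyed) {
        virObjectUnlock(vm);
        return;
    }

    qemuAgentClose(priv->agent);   /* close first ... */
    priv->agent = NULL;            /* ... then drop the pointer */

    virObjectUnlock(vm);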
This essentially reverts commit id '1020a504'. It's also worth noting
that commit id '362d0477' describes a possible/rare deadlock similar to
what was seen in the monitor in commit id '25f582e3'. However, it seems
intervening changes including commit id 'd960d06f' should remove the
deadlock issue.
With this change, if EOF is called:
Get VM lock
Check if !priv->agent || priv->beingDestroyed, then unlock VM
Call qemuAgentClose
Unlock VM
When qemuAgentClose is called
Get Agent lock
If Agent->fd open, close it
Unlock Agent
Unref Agent
qemuDomainObjEnterAgent
Enter with VM lock
Get Agent lock
Increase Agent refcnt
Unlock VM
After running agent command, calling qemuDomainObjExitAgent
Enter with Agent lock
Unref Agent
If not last reference, unlock Agent
Get VM lock
If we were in the middle of an EnterAgent, Agent command, ExitAgent
sequence and the EOF code is triggered, then the EOF code can get the
VM lock, make its checks against !priv->agent || priv->beingDestroyed,
and call qemuAgentClose. The qemuAgentClose would wait to get the
Agent lock. The other thread will eventually call ExitAgent, release
the Agent lock, and unref the Agent. Once ExitAgent releases the Agent
lock, qemuAgentClose will get the Agent lock, close the fd, unlock the
Agent, and unref the Agent. The final unref causes deletion of the
Agent.
Signed-off-by: Wang Yufei <james.wangyufei@huawei.com>
Reviewed-by: Ren Guannan <renguannan@huawei.com>
Currently the QEMU monitor is given an FD to the logfile. This
won't work in the future with virtlogd, so it needs to use the
qemuDomainLogContextPtr instead, but it shouldn't directly
access that object either. So define a callback that the
monitor can use for reporting errors from the log file.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
When the qemuProcessAttach/Stop methods write a marker into
the log file, they can use qemuDomainLogContextWrite to
write a formatted message.
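For example (a sketch; the marker text and the timestamp/reason locals
are illustrative, assuming qemuDomainLogContextWrite takes printf-style
arguments):

    /* Sketch: write a shutdown marker through the log context instead
     * of formatting into a raw log fd. */
    qemuDomainLogContextWrite(logCtxt, "%s: shutting down, reason=%s\n",
                              timestamp, reason);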
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Instead of writing directly to a log file descriptor, change
qemuLogOperation to use qemuDomainLogContextWrite().
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The qemuDomainTaint APIs currently expect to be passed a log file
descriptor. Change them to instead use a qemuDomainLogContextPtr
to hide the implementation details.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Convert the places which create/open log files to use the new
qemuDomainLogContextPtr object instead.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
There are two pretty similar functions, qemuProcessReadLog and
qemuProcessReadChildErrors. Both read from the QEMU log file
and try to strip out libvirt messages. The latter then reports
an error, while the former lets the callers report an error.
Re-write qemuProcessReadLog so that it uses a single read
into a dynamically allocated buffer. Then introduce a new
qemuProcessReportLogError that calls qemuProcessReadLog
and reports an error.
Convert all callers to use qemuProcessReportLogError.
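A rough sketch of the intended shape (names and signatures here are
illustrative, not the exact implementation):

    /* Sketch: read the whole log in one go into a dynamically
     * allocated buffer, then report whatever QEMU printed as a
     * libvirt error. */
    static int
    qemuProcessReportLogError(qemuDomainLogContextPtr logCtxt,
                              const char *msgprefix)
    {
        char *logmsg = NULL;

        if (qemuProcessReadLog(logCtxt, &logmsg) < 0)
            return -1;

        virReportError(VIR_ERR_INTERNAL_ERROR, "%s: %s",
                       msgprefix, logmsg);
        VIR_FREE(logmsg);
        return 0;   /* error has been reported */
    }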
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Once qemuProcessInit has been called, qemuProcessLaunch will launch a
new QEMU process with stopped virtual CPUs.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuProcessStart is going to be split in three parts: qemuProcessInit,
qemuProcessLaunch, and qemuProcessFinish so that migration Prepare phase
can insert additional code in the process. qemuProcessStart will be a
small wrapper for all other callers.
qemuProcessInit prepares the domain up to the point when priv->qemuCaps
is initialized.
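In outline, the intended call sequence would be (illustrative; the real
argument lists are longer):

    qemuProcessStart (wrapper for ordinary callers):
        qemuProcessInit      prepare the domain, init priv->qemuCaps
        qemuProcessLaunch    start QEMU with vCPUs stopped
        qemuProcessFinish    finish startup (e.g., resume vCPUs)

    migration Prepare phase:
        calls the three functions itself and runs migration-specific
        setup in between, before the virtual CPUs are started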
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Now that new domains are started inside a QEMU_ASYNC_JOB_START job,
we need to pass it down to qemuProcessStartCPUs too.
This removes the warning:
qemuDomainObjEnterMonitorInternal:1750 : This thread seems to be the
async job owner; entering monitor without asking for a nested job is
dangerous
Introduced by commit 04c721f, before that this code path was only
executed with QEMU_ASYNC_JOB_NONE.
(This code is not executed on migration, because qemuMigrationPrepareAny
sets the VIR_QEMU_PROCESS_START_PAUSED flag.)
Remembering to call qemuMonitorSetDomainLog in the right paths before
calling qemuProcessStop is annoying and easy to forget. And I already
forgot to do so in commit v1.2.8-52-g0389060: logfd may be leaked if
the QEMU process dies between the Prepare and Finish migration phases.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuProcessStart is so big that any nontrivial code should be moved to
dedicated functions to make the code easier to read and maintain.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuProcessStart is so big that any nontrivial code should be moved to
dedicated functions to make the code easier to read and maintain.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuProcessStart is so big that any nontrivial code should be moved to
dedicated functions to make the code easier to read and maintain.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuProcessStart is so big that any nontrivial code should be moved to
dedicated functions to make the code easier to read and maintain.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Traditionally, we pass the incoming migration URI on the QEMU command
line, which has some drawbacks. Depending on the URI, QEMU may
initialize its migration state immediately without giving us a chance
to set any additional migration parameters (this applies mainly to
fd: URIs). For
some URIs the monitor may be completely blocked from the beginning until
migration is finished, which means we may be stuck in qmp_capabilities
command without being able to send any QMP commands.
QEMU solved this by introducing the "defer" parameter for the
-incoming command line option. This tells QEMU to prepare for an
incoming migration while the actual incoming URI is sent using the
migrate-incoming QMP command. Before calling this command we can
normally talk to the
monitor and even set any migration parameters which will be honored by
the incoming migration.
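Illustratively (the URI below is just an example):

    Old flow:
        qemu ... -incoming tcp:0.0.0.0:4444
        (migration state is set up before we can issue any QMP command)

    New flow:
        qemu ... -incoming defer
        (QMP) set migration capabilities/parameters as needed
        (QMP) migrate-incoming with uri=tcp:0.0.0.0:4444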
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
We only started an async job for incoming migration from another host.
When we were starting a domain from scratch or restoring from a saved
state (migration from file) we didn't set any async job. Let's introduce
a new QEMU_ASYNC_JOB_START for these cases.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Incoming migration may require quite a few parameters (URI, fd, path) to
be considered while starting QEMU and we will soon add another one.
Let's group all of them in a single struct.
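A sketch of the idea (member names are illustrative; the real struct
may differ):

    typedef struct _qemuProcessIncomingDef qemuProcessIncomingDef;
    struct _qemuProcessIncomingDef {
        char *uri;          /* value passed to QEMU for -incoming */
        int fd;             /* for fd: migration */
        const char *path;   /* path associated with fd */
        /* further members (e.g. the listen address mentioned earlier
         * in this series) are added by later patches */
    };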
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Make callers of qemuBuildCommandLine responsible for providing the URI
which should be passed as a parameter for -incoming.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
As of QEMU 0.11.0 the 'info chardev' monitor command can be
used to report on allocated chardev paths, so we can drop
support for parsing QEMU stderr to locate the PTY paths.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
nodeset should be freed in both the success and failure paths.
While tmppath is freed immediately after it's consumed, moving it from
the error label to the cleanup label is a bit more consistent and
robust.
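The general shape of the change (the free functions shown are
assumptions about the variables' types, used only to illustrate the
cleanup-label pattern):

        ret = 0;
     cleanup:
        /* runs on both success and failure */
        virBitmapFree(nodeset);    /* assuming nodeset is a virBitmapPtr */
        VIR_FREE(tmppath);
        return ret;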
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Generally, we use a "ret" variable for storing the value we are going
to return at the end of a function, but this is not the case in
qemuProcessStart. Let's rename its "ret" to "rv".
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuProcessStart was passing the char * migrateFrom as the third
argument to qemuPrepareNVRAM. We should explicitly convert the pointer
to a bool, which is what the function expects.
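In other words, something along these lines (the leading arguments are
shown only schematically and are assumptions):

    qemuPrepareNVRAM(cfg, vm, migrateFrom != NULL);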
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This calls the PCI-, USB- and SCSI-specific functions just
like qemuHostdev{Prepare,ReAttach}DomainDevices() already do,
and was the missing piece for the qemuHostdev API to nicely
mirror the virHostdev API.
Update qemuProcessReconnect() to use the new function.
Adopt the same names used for virHostdevUpdateActive*Devices() for
consistency's sake and to make it easier to jump between the two.
No functional changes.
Adopt the same names used for virHostdevReAttach*Devices() for
consistency's sake and to make it easier to jump between the two.
No functional changes.
https://bugzilla.redhat.com/show_bug.cgi?id=1249981
When qemuDomainPinIOThread was added in commit id 'fb562614', a check
for the IOThread capability was not needed since a check for iothreadpids
covered the condition where the support for IOThreads was not present.
The iothreadpids array was only created if qemuProcessDetectIOThreadPIDs
was able to query the monitor for IOThreads. It would only do that if
the QEMU_CAPS_OBJECT_IOTHREAD capability was set.
However, when iothreadids were added in commit id '8d4614a5' and the
check for iothreadpids was replaced by a search through the
iothreadids[] array for the matching iothread_id, that left open the
possibility that an iothreadids[] array was defined, but the entries
essentially pointed to elements with only the 'iothread_id' defined,
leaving the 'thread_id' value at 0 and eventually the cpumap entry
NULL.
This was because the original IOThreads commit id '72edaae7' only
checked whether IOThreads were defined and, if the emulator had the
IOThreads capability, added IOThread objects at startup. The
"capability failure" check was only done when a disk was assigned to an
IOThread in qemuCheckIOThreads. This was because the initial
implementation had no way to dynamically add IOThreads, but it was
possible to dynamically add a disk to the domain. So the decision was:
if the domain supported it, then add the IOThread objects. Then if a
disk with an IOThread defined was added, it could check the capability
and fail to add if not there. This just meant the 'iothreads' value was
essentially ignored.
Eventually commit id 'a27ed6e7' allowed for the dynamic addition and
deletion of IOThread objects. So it was no longer necessary to generate
IOThread objects to dynamically attach a disk to. However, the startup
and disk check code was not modified to reflect this.
This patch will move the capability failure check to when IOThread
objects are being added to the command line. Thus a domain that has
IOThreads defined will not be started if the emulator doesn't support
the capability. This means when qemuCheckIOThreads is called to add
a disk, it's no longer necessary to check the capability. Instead the
code can use the IOThreadFind call to indicate that the IOThread
doesn't exist.
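A sketch of where the new check lives (the error message and the exact
condition are illustrative):

    /* When building -object iothread,... on the command line: */
    if (def->niothreadids &&
        !virQEMUCapsGet(qemuCaps, QEMU_CAPS_OBJECT_IOTHREAD)) {
        virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                       "IOThreads not supported for this QEMU binary");
        return -1;
    }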
Finally, because a domain started before this change could still be
running with an iothreadids[] array whose elements are mostly empty
when libvirtd is restarted, qemuProcessDetectIOThreadPIDs will check
whether niothreadids is non-zero when the QEMU_CAPS_OBJECT_IOTHREAD
capability check fails, and remove the elements and the array if they
exist.
With these changes in place, it turns out the cputune-numatune test
was failing because the right capability bit wasn't set in the test.
So I used the opportunity to fix that and to create a test that is
expected to fail when iothreads are defined and used, but the correct
capability is not present.
Although theoretically both should be the same value, the niothreadids
should be used in favor of iothreads when performing comparisons. This
leaves the iothreads as a purely numeric value to be saved in the config
file. The one exception to the rule is virDomainIOThreadIDDefArrayInit
where the iothreadids are being generated from the iothreads count since
iothreadids were added after initial iothreads support.
Coverity notices that net->ifname is potentially referenced after a
VIR_FREE(). Since net->ifname will eventually be freed during
virDomainDefFree when calling virDomainNetDefFree, let's just let that
processing take care of the free.
Signed-off-by: John Ferlan <jferlan@redhat.com>