The machine cgroup is a superset, a parent to the emulator and vcpuX
cgroups. The parent cgroup should never have any tasks directly in it.
In fact, the parent cpuset might contain far more CPUs than the sum of
emulatorpin and vcpupin. So putting tasks in the superset would allow
them to run outside of <cputune>.
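For illustration, the per-domain cgroup hierarchy looks roughly like
this (names simplified; the exact scope/partition naming varies):

    machine-qemu\x2dNAME.scope/   <- "machine" parent; must hold no tasks
        emulator/                 <- <emulatorpin> cpuset applies here
        vcpu0/                    <- <vcpupin> cpuset applies here
        vcpu1/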
Signed-off-by: Henning Schild <henning.schild@siemens.com>
When a user configures a vhost-user interface and forgets to also
configure shared memory, the search for the root cause of the
non-operational interface might take an unpleasantly long time. Let's
enhance the user experience by emitting a warning in the logs.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1266982
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Add qemuDomainHasVCpuPids to do the checking and replace the in-place
checks with it.
We no longer need to check whether the array contains fake data
(vcpupids[0] == vm->pid) since that was removed in commits b07f3d821d
and 65686e5a81.
Often when debugging bug reports one is given a copy of the file
from /var/log/libvirt/qemu/$NAME.log along with other supporting
files. In a number of cases I've been given sets of files which
were from different machines. Including the hostname in the QEMU
log file will help identify when the bug reporter is providing
bad information.
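For example, the first line of the log file might then look something
like this (exact format illustrative):

    2015-10-12 09:00:00.000+0000: starting up libvirt version: 1.2.21, qemu version: 2.4.0, hostname: builder.example.com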
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
If VM A is shut down by the qemu agent at approximately the same time
an agent EOF for VM A happens, there's a chance that a deadlock may occur:
qemuProcessHandleAgentEOF in main thread
A) priv->agent = NULL; //A happened before B
// deadlock when we get the agent lock, which is held by the worker thread
qemuAgentClose(agent);
qemuDomainObjExitAgent called by qemuDomainShutdownFlags in worker thread
B) hasRefs = virObjectUnref(priv->agent); // priv->agent is NULL,
// return false
if (hasRefs)
virObjectUnlock(priv->agent); //agent lock will not be released here
To resolve the deadlock, have the EOF handler close the agent first and
only then set priv->agent to NULL.
This essentially reverts commit id '1020a504'. It's also of note that commit
id '362d0477' notes a possible/rare deadlock similar to what was seen in
the monitor in commit id '25f582e3'. However, it seems intervening changes
including commit id 'd960d06f' should remove the deadlock issue.
With this change, if EOF is called:
Get VM lock
Check if !priv->agent || priv->beingDestroyed, then unlock VM
Call qemuAgentClose
Unlock VM
When qemuAgentClose is called
Get Agent lock
If Agent->fd open, close it
Unlock Agent
Unref Agent
qemuDomainObjEnterAgent
Enter with VM lock
Get Agent lock
Increase Agent refcnt
Unlock VM
After running agent command, calling qemuDomainObjExitAgent
Enter with Agent lock
Unref Agent
If not last reference, unlock Agent
Get VM lock
If we are in the middle of an EnterAgent, call Agent command, ExitAgent
sequence and the EOF code is triggered, then the EOF code can get the
VM lock, make its checks against !priv->agent || priv->beingDestroyed,
and call qemuAgentClose. The AgentClose would wait to get the Agent
lock. The other thread will then eventually call ExitAgent, release the
Agent lock, and unref the Agent. Once ExitAgent releases the Agent lock,
AgentClose will get the Agent lock, close the fd, unlock the Agent, and
unref the Agent. The final unref causes deletion of the Agent.
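A minimal C sketch of the ExitAgent side of this ordering (shape assumed
from the description above, not the exact libvirt code):

    void qemuDomainObjExitAgent(virDomainObjPtr obj)
    {
        qemuDomainObjPrivatePtr priv = obj->privateData;
        qemuAgentPtr agent = priv->agent;

        /* drop our reference; unlock only if other references remain */
        if (virObjectUnref(agent))
            virObjectUnlock(agent);

        /* reacquire the VM lock last */
        virObjectLock(obj);
    }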
Signed-off-by: Wang Yufei <james.wangyufei@huawei.com>
Reviewed-by: Ren Guannan <renguannan@huawei.com>
Currently the QEMU monitor is given an FD to the logfile. This
won't work in the future with virtlogd, so it needs to use the
qemuDomainLogContextPtr instead, but it shouldn't directly
access that object either. So define a callback that the
monitor can use for reporting errors from the log file.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
When the qemuProcessAttach/Stop methods write a marker into
the log file, they can use qemuDomainLogContextWrite to
write a formatted message.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Instead of writing directly to a log file descriptor, change
qemuLogOperation to use qemuDomainLogContextWrite().
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The qemuDomainTaint APIs currently expect to be passed a log file
descriptor. Change them to instead use a qemuDomainLogContextPtr
to hide the implementation details.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Convert the places which create/open log files to use the new
qemuDomainLogContextPtr object instead.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
There are two pretty similar functions qemuProcessReadLog and
qemuProcessReadChildErrors. Both read from the QEMU log file
and try to strip out libvirt messages. The latter then reports
an error, while the former lets the callers report an error.
Re-write qemuProcessReadLog so that it uses a single read
into a dynamically allocated buffer. Then introduce a new
qemuProcessReportLogError that calls qemuProcessReadLog
and reports an error.
Convert all callers to use qemuProcessReportLogError.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Once qemuProcessInit has been called, qemuProcessLaunch will launch a new
QEMU process with stopped virtual CPUs.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuProcessStart is going to be split into three parts: qemuProcessInit,
qemuProcessLaunch, and qemuProcessFinish, so that the migration Prepare
phase can insert additional code in the process. qemuProcessStart will
become a small wrapper for all other callers.
qemuProcessInit prepares the domain up to the point when priv->qemuCaps
is initialized.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Now that new domains are started inside a QEMU_ASYNC_JOB_START job,
we need to pass it down to qemuProcessStartCPUs too.
This removes the warning:
qemuDomainObjEnterMonitorInternal:1750 : This thread seems to be the
async job owner; entering monitor without asking for a nested job is
dangerous
The warning was introduced by commit 04c721f; before that, this code path
was only executed with QEMU_ASYNC_JOB_NONE.
(This code is not executed on migration, because qemuMigrationPrepareAny
sets the VIR_QEMU_PROCESS_START_PAUSED flag.)
Calling qemuMonitorSetDomainLog in the right paths before calling
qemuProcessStop is annoying and easy to forget. And I already forgot to
do so in commit v1.2.8-52-g0389060: logfd may be leaked if the QEMU
process dies between the Prepare and Finish migration phases.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuProcessStart is so big that any nontrivial code should be moved to
dedicated functions to make the code easier to read and maintain.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuProcessStart is so big that any nontrivial code should be moved to
dedicated functions to make the code easier to read and maintain.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuProcessStart is so big that any nontrivial code should be moved to
dedicated functions to make the code easier to read and maintain.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuProcessStart is so big that any nontrivial code should be moved to
dedicated functions to make the code easier to read and maintain.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Traditionally, we pass incoming migration URI on QEMU command line,
which has some drawbacks. Depending on the URI QEMU may initialize its
migration state immediately without giving us a chance to set any
additional migration parameters (this applies mainly for fd: URIs). For
some URIs the monitor may be completely blocked from the beginning until
migration is finished, which means we may be stuck in qmp_capabilities
command without being able to send any QMP commands.
QEMU solved this by introducing "defer" parameter for -incoming command
line option. This will tell QEMU to prepare for an incoming migration
while the actual incoming URI is sent using migrate-incoming QMP
command. Before calling this command we can normally talk to the
monitor and even set any migration parameters which will be honored by
the incoming migration.
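Schematically, the new flow looks something like this (QMP messages
abbreviated; the fd value is made up for illustration):

    qemu ... -incoming defer
    { "execute": "qmp_capabilities" }
    { "execute": "migrate_set_speed", "arguments": { "value": 33554432 } }
    { "execute": "migrate-incoming", "arguments": { "uri": "fd:42" } }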
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
We only started an async job for incoming migration from another host.
When we were starting a domain from scratch or restoring from a saved
state (migration from file) we didn't set any async job. Let's introduce
a new QEMU_ASYNC_JOB_START for these cases.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Incoming migration may require quite a few parameters (URI, fd, path) to
be considered while starting QEMU and we will soon add another one.
Let's group all of them in a single struct.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Make callers of qemuBuildCommandLine responsible for providing the URI
which should be passed as a parameter for -incoming.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
As of QEMU 0.11.0 the 'info chardev' monitor command can be
used to report on allocated chardev paths, so we can drop
support for parsing QEMU stderr to locate the PTY paths.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
nodeset should be freed in both the success and failure paths.
While tmppath is freed immediately after it's consumed, moving it from
the error label to the cleanup label is a bit more consistent and robust.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Generally, we use the "ret" variable for storing the value we are going
to return at the end of a function, but this is not the case in
qemuProcessStart. Let's rename its "ret" as "rv".
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuProcessStart was passing char * migrateFrom as the third argument to
qemuPrepareNVRAM. We should explicitly convert the pointer to the bool
which the function expects.
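A minimal illustration of the intended conversion (call shape assumed):

    qemuPrepareNVRAM(cfg, vm, migrateFrom != NULL);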
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This calls the PCI-, USB- and SCSI-specific functions just
like qemuHostdev{Prepare,ReAttach}DomainDevices() already do,
and was the missing piece for the qemuHostdev API to nicely
mirror the virHostdev API.
Update qemuProcessReconnect() to use the new function.
Adopt the same names used for virHostdevUpdateActive*Devices() for
consistency's sake and to make it easier to jump between the two.
No functional changes.
Adopt the same names used for virHostdevReAttach*Devices() for
consistency's sake and to make it easier to jump between the two.
No functional changes.
https://bugzilla.redhat.com/show_bug.cgi?id=1249981
When qemuDomainPinIOThread was added in commit id 'fb562614', a check
for the IOThread capability was not needed since a check for iothreadpids
covered the condition where the support for IOThreads was not present.
The iothreadpids array was only created if qemuProcessDetectIOThreadPIDs
was able to query the monitor for IOThreads. It would only do that if
the QEMU_CAPS_OBJECT_IOTHREAD capability was set.
However, when iothreadids were added in commit id '8d4614a5' and the
check for iothreadpids was replaced by a search through the iothreadids[]
array for the matching iothread_id, that left open the possibility that
an iothreadids[] array was defined whose entries had only the
'iothread_id' field set, leaving the 'thread_id' value at 0 and
eventually the cpumap entry NULL.
This was because the original IOThreads commit id '72edaae7' only
checked whether IOThreads were defined and the emulator had the
IOThreads capability; if so, IOThread objects were added at startup.
The "capability failure" check was only done when a disk was assigned
to an IOThread in qemuCheckIOThreads. This was because the initial
implementation had no way
to dynamically add IOThreads, but it was possible to dynamically add a
disk to the domain. So the decision was if the domain supported it, then
add the IOThread objects. Then if a disk with an IOThread defined was
added, it could check the capability and fail to add if not there. This
just meant the 'iothreads' value was essentially ignored.
Eventually commit id 'a27ed6e7' allowed for the dynamic addition and
deletion of IOThread objects. So it was no longer necessary to generate
IOThread objects to dynamically attach a disk to. However, the startup
and disk check code was not modified to reflect this.
This patch will move the capability failure check to when IOThread
objects are being added to the command line. Thus a domain that has
IOThreads defined will not be started if the emulator doesn't support
the capability. This means when qemuCheckIOThreads is called to add
a disk, it's no longer necessary to check the capability. Instead the
code can use the IOThreadFind call to indicate that the IOThread
doesn't exist.
Finally, because it is possible for a domain started before this change
to still be running with an iothreadids[] array of mostly empty elements
when libvirtd is restarted, qemuProcessDetectIOThreadPIDs will check
whether niothreadids is non-zero when the QEMU_CAPS_OBJECT_IOTHREAD
capability check fails, and remove the elements and the array if they
exist.
With these changes in place, it turns out the cputune-numatune test
was failing because the right bit wasn't set in the test. So I used the
opportunity to fix that and to create a test that expects to fail when
some sort of iothreads is defined and used without the corresponding
capability.
Although theoretically both should be the same value, the niothreadids
should be used in favor of iothreads when performing comparisons. This
leaves the iothreads as a purely numeric value to be saved in the config
file. The one exception to the rule is virDomainIOThreadIDDefArrayInit
where the iothreadids are being generated from the iothreads count since
iothreadids were added after initial iothreads support.
Coverity notices that net->ifname is potentially referenced after a
VIR_FREE(). Since net->ifname will eventually be freed during
virDomainDefFree when calling virDomainNetDefFree, let's just let that
processing take care of the free.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Since we'd otherwise disallow migration of a guest whose config is
possibly invalid but which still works, relax the WWN check so that it
is performed only on fresh starts of the VM.
Coverity complains that the return value of virHookCall is not checked
in one place in qemuProcessStop. Since the comment notes that we cannot
stop the operation even if it fails, just add ignore_value().
So far we have the following pattern occurring over and over
again:
if (!vm->persistent)
qemuDomainRemoveInactive(driver, vm);
It's safe to put the check into the function and save some LoC.
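A minimal sketch of the resulting helper (assumed shape; the removal
logic itself is unchanged):

    void
    qemuDomainRemoveInactive(virQEMUDriverPtr driver,
                             virDomainObjPtr vm)
    {
        if (vm->persistent)
            return;  /* persistent domains stay in the list */

        /* ... existing removal logic ... */
    }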
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
QEMU unfortunately doesn't update its internal state right after
migration, so the actual balloon size as returned by 'query-balloon' is
invalid for a while after the CPUs are started after migration. If we
refreshed our internal state at this point we would report an invalid
current memory size until the next balloon event arrived.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1242940
Every single call to qemuDomainEventQueue() uses the following pattern:
if (event)
qemuDomainEventQueue(driver, event);
Let's move the check for valid event to qemuDomainEventQueue and
simplify all callers.
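A minimal sketch of the simplified helper (assumed shape):

    void
    qemuDomainEventQueue(virQEMUDriverPtr driver,
                         virObjectEventPtr event)
    {
        if (event)
            virObjectEventStateQueue(driver->domainEventState, event);
    }

Callers can then queue unconditionally with
qemuDomainEventQueue(driver, event).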
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Commit id 'f1f68ca33' added code to remove the directory paths for
auto-generated sockets, but that code could be called before the
paths were created resulting in generating error messages from
virFileDeleteTree indicating that the file doesn't exist.
Rather than "enforce" all callers to make the non-NULL and existence
checks, modify the virFileDeleteTree API to silently ignore NULL on
input and non-existent directory trees.
Commit f1f68ca334 did not report an error if virFileMakePath()
returned -1. Well, who would've guessed a function whose name starts
with 'vir' sets errno instead of reporting an error the libvirt way.
Anyway, let's fix it, so the output changes from:
$ virsh start arm
error: Failed to start domain arm
error: An error occurred, but the cause is unknown
to:
$ virsh start arm
error: Failed to start domain arm
error: Cannot create directory '/var/lib/libvirt/qemu/domain-arm': Not
a directory
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1146886
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Commit f1f68ca334 overused mdir_name() even though it was not needed in
the latest version, hence labelling the directory one level up in the
tree rather than the one it should have.
If anyone with SELinux managed to try to run a domain with the guest
agent set up, it's highly possible that they will need to run
'restorecon -F /var/lib/libvirt/qemu/channel/target' to fix what was
done.
Reported-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
We are automatically generating some socket paths for domains, but all
those paths end up in a directory that's the same for multiple domains.
The problem is that multiple domains can each run with different
seclabels (users, selinux contexts, etc.). The idea here is to create a
per-domain directory labelled in a way that each domain can access its
own unix sockets.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1146886
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
The virDomainObjListRemove() function unlocks a domain that it's given
due to legacy code. And because of that code, which should be
refactored, that last virObjectUnlock() cannot be just removed. So
instead, lock it right back for qemu for now. All calls to
qemuDomainRemoveInactive() are followed by code that unlocks the domain
again, plus the domain should be locked during qemuDomainObjEndJob(), so
the right place to lock it is right after virDomainObjListRemove().
The only place where this would cause a problem is the autodestroy
callback, so we need to get another reference there and unref+unlock it
afterwards. Luckily, returning NULL from that function doesn't mean an
error; it only means that the object doesn't need to be unlocked anymore.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
When stopping a domain on the destination host after a failed migration,
we need to avoid resetting security labels since the domain is still
running on the source host. While we were correctly doing so in some
cases, there were still some paths which got this wrong.
https://bugzilla.redhat.com/show_bug.cgi?id=1242904
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
In addition to checking the current asynchronous job
qemuMigrationJobIsActive reports an error if the current job does not
match the one we asked for. Let's just check the job directly since we
are not interested in the error in qemuProcessHandleMonitorEOF.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
In commit 641a145d73 I've added code that
resets the balloon memory value to full size prior to resuming the vCPUs
since the size certainly was not reduced at that point.
Since qemuProcessStart is also used in code paths with already booted
up guests (migration, save/restore), the assumption is not entirely true
since the guest might have already been running before.
This patch adds a function that queries the monitor rather than using
the full size since a balloon event would not be reissued in case we are
recovering a saved migration state.
Additionally the new function is used also when reconnecting to a VM
after libvirtd restart since we might have missed a few balloon events
while libvirtd was not running.
After Jirka's migration patches libvirt listens for migration events
from qemu instead of actively polling the monitor. There is, however,
a little regression (introduced in 6d2edb6a42). The problem is that the
current status of the migration job is updated in
qemuProcessHandleMigrationStatus if and only if a migration job was
started. But eventually every asynchronous job may result in migration.
Therefore, since such a job is not strictly a migration job, the
internal state was not updated and later checks failed:
virsh # save fedora22 /tmp/fedora22_ble.save
error: Failed to save domain fedora22 to /tmp/fedora22_ble.save
error: operation failed: domain save job: is not active
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If QEMU fails during incoming migration, the domain disappears including
a possibly useful error message read from QEMU log file. Let's remember
the error in virQEMUDriver so that Finish can report more than just "no
such domain".
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Since we already support the MIGRATION event, we just need to make sure
the domain condition is signalled whenever a p2p connection drops or the
domain is paused due to IO error and we can avoid waking up every 50 ms
to check whether something happened.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
We don't need to call query-migrate every 50ms when we get the current
migration state via MIGRATION event.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Even if QEMU supports migration events it doesn't send them by default.
We have to enable them by calling migrate-set-capabilities. Let's enable
migration events every time we can and clear QEMU_CAPS_MIGRATION_EVENT in
case migrate-set-capabilities does not support events.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Some guests lock the tray and the QEMU eject command will simply fail to
eject the media. But the guest OS can handle this attempt to eject the
media, unlock the tray and open it. In this case, we should try again to
actually eject the media.
If the first attempt fails and we don't receive a tray_open event, we
fail with the error from the monitor. If we do receive that event, we
know that the guest properly reacted to the eject request, unlocked the
tray and opened it. In this case, we need to run the command again to
actually eject the media from the device. The reason to call it again is
that QEMU doesn't wait for the guest to react; it reports the error that
the tray is locked right away.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1147471
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
QEMU_CAPS_SEAMLESS_MIGRATION capability says QEMU supports
SPICE_MIGRATE_COMPLETED event. Thus we can just drop all code which
polls query-spice and replace it with waiting for the event.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When libvirtd is restarted during migration, we properly cancel the
ongoing migration (unless it managed to almost finish before the
restart). But if we were also migrating storage using NBD, we would
completely forget about the running disk mirrors.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
By switching block jobs to use domain conditions, we can drop some
pretty complicated code in NBD storage migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
After libvirt issues the balloon resize command, the current balloon
size needs to be changed to the maximum memory size since the vCPUs were
not started and thus the balloon driver could not return the memory.
Since GetXMLDesc and other APIs return the balloon size without updating
it when they are unable to obtain the job, and the memory balloon does
not support the asynchronous event, the size might be incorrect.
Since the monitor code now supports ullongs when setting balloon size,
drop the legacy code with overflow checking.
Additionally the comment mentioning that the job is treated as a sync
job does not make sense any more since the monitor is entered
asynchronously.
Store the emulator pinning CPU mask as a pure virBitmap rather than a
virDomainPinDef, since only the bitmap part is used, and refactor
qemuDomainPinEmulator to do the same operations in a much saner way.
As a side effect virDomainEmulatorPinAdd and virDomainEmulatorPinDel can
be removed since they don't add any value.
When <vcpu ... cpuset=""> is not specified, the vcpupin array is not
guaranteed to be allocated with def->vcpus elements. This would cause a
crash for TCG since it does not report thread IDs for vCPUs.
Most virDomainDiskIndexByName callers do not care about the index; what
they really want is a disk def pointer.
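A rough sketch of the helper this enables (assumed shape):

    virDomainDiskDefPtr
    virDomainDiskByName(virDomainDefPtr def,
                        const char *name,
                        bool allow_ambiguous)
    {
        int idx = virDomainDiskIndexByName(def, name, allow_ambiguous);

        return idx < 0 ? NULL : def->disks[idx];
    }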
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When starting a domain, we fail if the domain specifies security drivers
we do not have loaded. However, we don't check for this during reconnect,
so any operation relying on security driver functionality would fail.
If someone e.g. starts a domain with the selinux driver loaded, then
changes the security driver to 'none' in the config, restarts the daemon
and calls dump/save/.., QEMU will return an error.
As we shouldn't kill the domain, we should at least log an error to let
the user know that the domain reconnect wasn't completely clean.
https://bugzilla.redhat.com/show_bug.cgi?id=1183893
So far, we are not reporting whether numatune was even defined. The
value of zero is blindly returned (which maps onto
VIR_DOMAIN_NUMATUNE_MEM_STRICT). Unfortunately, we are making
decisions based on this value. Instead, we should not only return
the correct value, but also report to the caller whether the value is
valid at all.
For better viewing of this patch use '-w'.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Other threads may be blocked in qemuBlockJobSyncWait. Ensure that
they're woken up when the domain is stopped.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
If we received zero iothreads from the monitor, but were perhaps
expecting to receive something, then the code was skipping the check
to ensure that what's in the monitor matches our expectations. So invert
the checks: first check that what we get back matches expectations, and
then check whether zero iothreads were returned.
Rather than have a separate routine that parses the alias of an iothread
returned from qemu in order to get the iothread_id value, parse the alias
when returning and just return the iothread_id in
qemuMonitorIOThreadInfoPtr.
This set of patches removes the function, changes the "char *name" to
"unsigned int" and handles all the fallout.
Remove the iothreadspin array from cputune and replace with a cpumask
to be stored in the iothreadids list.
Adjust the test output because our printing goes in order of the iothreadids
list now.
Add 'thread_id' to the virDomainIOThreadIDDef as a means to store the
'thread_id' as returned from the live qemu monitor data.
Remove the iothreadpids list from _qemuDomainObjPrivate and replace with
the new iothreadids 'thread_id' element.
Rather than use the default numbering scheme of 1..number of iothreads
defined for the domain, use the iothreadids list for the iothread_id.
Since the iothreadids list keeps track of the iothread_ids, these are
now used in the many places where a for loop would previously "know"
that the ID was "+ 1" from the array index.
The new tests ensure usage of the <iothreadid> values for an exact number
of iothreads and the usage of a smaller number of <iothreadid> values than
iothreads that exist (and usage of the default numbering scheme).
If a user hot-attaches the guest agent channel libvirt would ignore it
until the restart of libvirtd or shutdown/destroy and start of the VM
itself.
This patch adds code that opens or closes the guest agent connection
according to the state of the guest agent channel, based on
connect/disconnect events.
To allow opening the channel from the event handler qemuConnectAgent
needed to be exported.
When the guest agent channel gets hotplugged to a VM, libvirt would
still report that "QEMU guest agent is not configured" rather than
stating that the connection was not established yet.
Currently the code won't be able to connect to the agent after hotplug
but that will change in a later patch.
As the qemuFindAgentConfig() helper is quite helpful in this case, move
it to a more usable place and export it.
This is basically turning qemuDomObjEndAPI into a more general
function. Other drivers which get a reference to domain objects may
benefit from this function too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1198645
Once upon a time, there was a little domain. And the domain was pinned
onto a NUMA node and hasn't fully allocated its memory:
<memory unit='KiB'>2355200</memory>
<currentMemory unit='KiB'>1048576</currentMemory>
<numatune>
<memory mode='strict' nodeset='0'/>
</numatune>
Oh little me, said the domain, what will I do with so little memory.
If I only had a few megabytes more. But the old admin noticed the
whimpering, barely audible to the untrained human ear. And a good admin
he was: he gave the domain yet more memory. But the old NUMA topology
witch forbade allocating more memory on node zero. So he decided to
allocate it on a different node:
virsh # numatune little_domain --nodeset 0-1
virsh # setmem little_domain 2355200
The little domain was happy. For a while. Until bad, sharp teeth
shaped creature came. Every process in the system was afraid of him.
The OOM Killer they called him. Oh no, he's after the little domain.
There's no escape.
Do you kids know why? Because when the little domain was born, her
father, Libvirt, called numa_set_membind(). So even though the admin
allowed her to allocate memory from other nodes in the cgroups, the
membind() forbade it.
So what's the lesson? Libvirt should rely on cgroups whenever
possible and use numa_set_membind() only as a last-ditch effort.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The destination libvirt daemon in a migration may segfault if the client
disconnects immediately after the migration has begun:
# virsh -c qemu+tls://remote/system list --all
Id Name State
----------------------------------------------------
...
# timeout --signal KILL 1 \
virsh migrate example qemu+tls://remote/system \
--verbose --compressed --live --auto-converge \
--abort-on-error --unsafe --persistent \
--undefinesource --copy-storage-all --xml example.xml
Killed
# virsh -c qemu+tls://remote/system list --all
error: failed to connect to the hypervisor
error: unable to connect to server at 'remote:16514': Connection refused
The crash is in:
1531 void
1532 qemuDomainObjEndJob(virQEMUDriverPtr driver, virDomainObjPtr obj)
1533 {
1534 qemuDomainObjPrivatePtr priv = obj->privateData;
1535 qemuDomainJob job = priv->job.active;
1536
1537 priv->jobs_queued--;
Backtrace:
#0 at qemuDomainObjEndJob at qemu/qemu_domain.c:1537
#1 in qemuDomainRemoveInactive at qemu/qemu_domain.c:2497
#2 in qemuProcessAutoDestroy at qemu/qemu_process.c:5646
#3 in virCloseCallbacksRun at util/virclosecallbacks.c:350
#4 in qemuConnectClose at qemu/qemu_driver.c:1154
...
qemuDomainRemoveInactive calls virDomainObjListRemove, which in this
case is holding the last remaining reference to the domain.
qemuDomainRemoveInactive then calls qemuDomainObjEndJob, but the domain
object has been freed and poisoned by then.
This patch bumps the domain's refcount until qemuDomainRemoveInactive
has completed. We also ensure qemuProcessAutoDestroy does not return the
domain to virCloseCallbacksRun to be unlocked in this case. There is
similar logic in bhyveProcessAutoDestroy and lxcProcessAutoDestroy
(which call virDomainObjListRemove directly).
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
Instead of always using controller 0 and incrementing port number,
respect the maximum port numbers of controllers and use all of them.
Ports for virtio consoles are quietly reserved, but not formatted
(neither in XML nor on QEMU command line).
Also rejects duplicate virtio-serial addresses.
https://bugzilla.redhat.com/show_bug.cgi?id=890606
https://bugzilla.redhat.com/show_bug.cgi?id=1076708
Test changes:
* virtio-auto.args
  Filling out the port when just the controller is specified:
  switched from using 'maxport + 1' to the first free port on the
  controller.
* virtio-autoassign.args
  Filling out the address when no <address> is specified:
  started using all the controllers instead of just controller 0, and
  the bus value is discarded.
* xml -> xml output of virtio-auto
  The port assignment is no longer done as a part of XML parsing,
  so the unspecified values stay 0.
Two places would call qemuPrepareCpumap() with priv->autoNodeset to
convert it to a cpuset. Remove the function and use the prepared cpuset
automatically.
When the synchronous pivot option is selected, libvirt would not update
the backing chain until the job was exited. Some applications then
received invalid data as their job serialized first.
This patch removes polling to wait for the ABORT/PIVOT job completion
and replaces it with a condition. If a synchronous operation is
requested the update of the XML is executed in the job of the caller of
the synchronous request. Otherwise the monitor event callback uses a
separate worker to update the backing chain with a new job.
This is a regression since 1a92c71910
When the ABORT job is finished synchronously you get the following call
stack:
#0 qemuBlockJobEventProcess
#1 qemuDomainBlockJobImpl
#2 qemuDomainBlockJobAbort
#3 virDomainBlockJobAbort
While previously or while using the _ASYNC flag you'd get:
#0 qemuBlockJobEventProcess
#1 processBlockJobEvent
#2 qemuProcessEventHandler
#3 virThreadPoolWorker
When using 'dimm' memory devices with qemu, some of the information,
like the slot number and base address, needs to be reloaded from qemu
after process start so that it reflects the actual state. The state then
allows using memory devices across migrations.
virnetdevopenvswitch.h declares a few functions that can be called to
add ports to and remove them from OVS bridges, and to retrieve the
migration data for a port. It does not contain any data definitions
that are used by domain_conf.h, yet for some reason domain_conf.h was
#including it; instead, any file that actually needs
virnetdevopenvswitch.h should be directly #including it. This adds a
few lines to the project, but saves all the files that don't need it
from the extra work, and makes the dependencies more clear cut.
When libvirt is starting a domain, it reports the state as SHUTOFF until
it's RUNNING. This is not ideal because domain startup may take a long
time (usually because of some configuration issues, firewalls blocking
access to network disks, etc.) and domain lists provided by libvirt look
awkward. One can see weird shutoff domains with IDs in a list of active
domains or even shutoff transient domains. In any case, it looks more
like a bug in libvirt than a normal state a domain goes through.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Use the utilities introduced in the previous patches so the qemu
driver is able to create tap devices that are bound (and unbound
on domain destruction) to Midonet virtual ports.
Signed-off-by: Antoni Segura Puimedon <toni+libvirt@midokura.com>
We're parsing the memballoon status period as an unsigned int, but when
we're trying to set it, both we and qemu use a signed int. That means
large values will get wrapped around to negative ones, resulting in an
error. Basically the same problem as commit e3a7b874 was dealing with
when updating a live domain.
QEMU changed the accepted value to int64 in commit 1f9296b5, but even
values as large as INT_MAX don't make sense since the value passed means
seconds. Hence adding a capability flag for this change isn't worth it.
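A tiny C illustration of the wraparound (values made up):

    unsigned int period = 4294967295U;  /* parsed from the XML */
    int wire = (int) period;            /* becomes -1 when set as a signed int */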
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1140958
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
As pointed out by jtomko in his review of the IOThreads pinning code:
http://www.redhat.com/archives/libvir-list/2015-March/msg00495.html
there are some comments sprinkled in indicating IOThreads were using
the same structure as the VcpuPin code...
This is the first patch of a few that will change the virDomainVcpuPin*
structures and code to just virDomainPin* - starting with the data
structure naming...
As there are two possible approaches to defining a domain's memory size -
one used with legacy, non-NUMA VMs configured in the <memory> element,
and a per-node based approach on NUMA machines - the user needs to make
sure that both are specified correctly in the NUMA case.
To avoid this burden on the user I'd like to replace the NUMA case with
automatic totaling of the memory size. To achieve this I need to replace
direct access to the virDomainMemtune's 'max_balloon' field with
two separate getters depending on the desired size.
The two sizes are needed as:
1) Startup memory size doesn't include memory modules in some
hypervisors.
2) After startup these count as the usable memory size.
Note that the comments for the functions are future aware and document
state that will be present after a few later patches.
Surprisingly, we did not grab a VM job when a block job finished, and
we'd happily rewrite the backing chain data. This made it possible to
crash libvirt when two backing chain updates were queued tightly, among
other badness.
To fix it, add yet another handler to the helper thread that handles
monitor events that require a job.
https://bugzilla.redhat.com/show_bug.cgi?id=1197600
So, libvirt uses a pid file to track the pid of a started qemu process.
Whenever a domain is started, its pid is put into the corresponding pid
file. The pid file path is generated based on the domain name and stored
in the domain object internals. However, it's not stored in the status
XML and is therefore lost on daemon restart. Hence, later, when the
domain is being shut down, the daemon does not know which pid file to
unlink, and the pid file is left behind. To avoid this, let's generate
the pid file path again in qemuProcessReconnect().
Reported-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Instead of checking defaultMode for every channel that has no mode
configured, test it only once outside of the channel loop. This fixes a
bug: in case all possible channels are, for example, set to insecure,
but defaultMode is set to secure, we wouldn't auto-generate a TLS port,
which results in a failure while starting the guest.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1143832
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
We have two different places that need to be updated when touching the
code for allocating SPICE ports. Add a bool option to the
'qemuProcessSPICEAllocatePorts' function to switch between real and fake
allocation so we can use this function also in qemu_driver to generate
the native domain definition.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Since adding the support for scheduler policy settings in commit
8680ea97, there are two enums with the same information. That was
caused by rewriting the patch since the first draft.
Found out thanks to clang, but there was no impact whatsoever.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Upon BLOCK_JOB_COMPLETED event delivery, we check if the job has
completed (in qemuMonitorJSONHandleBlockJobImpl()). To paint a better
picture, the event looks something like this:
{"timestamp": {"seconds": 1423582694, "microseconds": 372666}, "event":
"BLOCK_JOB_COMPLETED", "data": {"device": "drive-virtio-disk0", "len":
8412790784, "offset": 409993216, "speed": 8796093022207, "type":
"mirror", "error": "No space left on device"}}
If "len" does not equal "offset" it's considered an error, and we can
clearly see "error" field filled in. However, later in the event
processing this case was handled no differently to case of job being
aborted via separate API. It's time that we start differentiate these
two because of the future work.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Currently, upon a BLOCK_JOB_* event, disk->mirrorState is not updated
each time. The callback code handling the events checks whether a
blockjob was started via our public APIs prior to setting the
mirrorState. However, some block jobs may be started internally (e.g.
during storage migration), in which case we don't bother with setting
disk->mirror (there's nothing we can set it to anyway) or other fields.
But it will come in handy if we update the mirrorState in these cases
too. The event wasn't delivered just for fun - we've started the job
after all.
So, in this commit, the mirrorState is set to whatever job status
we've obtained. Of course, there are some actions on some statuses
that we want to perform. But instead of if {} else if {} else {} ...
enumeration, let's move to switch().
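A rough sketch of the resulting shape (status and state names from the
public API; per-status actions simplified):

    switch ((virConnectDomainEventBlockJobStatus) status) {
    case VIR_DOMAIN_BLOCK_JOB_COMPLETED:
    case VIR_DOMAIN_BLOCK_JOB_FAILED:
    case VIR_DOMAIN_BLOCK_JOB_CANCELED:
        disk->mirrorState = VIR_DOMAIN_DISK_MIRROR_STATE_NONE;
        break;

    case VIR_DOMAIN_BLOCK_JOB_READY:
        disk->mirrorState = VIR_DOMAIN_DISK_MIRROR_STATE_READY;
        break;
    }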
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
We do have a check for valid per-domain security model, however we still
do permit an invalid security model for a domain's device (those which
are specified with <source> element).
This patch introduces a new function virSecurityManagerCheckAllLabel
which compares user specified security model against currently
registered security drivers. That being said, it also permits 'none'
being specified as a device security model.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1165485
Signed-off-by: Ján Tomko <jtomko@redhat.com>
In a previous commit I fixed the incorrect handling of vcpu pids
for TCG mode QEMU:
commit b07f3d821d
Author: Daniel P. Berrange <berrange@redhat.com>
Date: Thu Dec 18 16:34:39 2014 +0000
Don't setup fake CPU pids for old QEMU
The code assumes that def->vcpus == nvcpupids, so when we setup
fake CPU pids for old QEMU with nvcpupids == 1, we cause the
later code to read off the end of the array. This has fun results
like sched_setaffinity(0, ...) which changes libvirtd's own CPU
affinity, or even better sched_setaffinity($RANDOM, ...) which
changes the affinity of a random OS process.
The intent was that this would merely disable the ability to set
per-vCPU affinity. It should still have been possible to set VM
level host CPU affinity.
Unfortunately, when you set <vcpu cpuset='0-1'>4</vcpu>, the XML
parser will internally take this & initialize an entry in the
def->cputune.vcpupin array for every VCPU. IOW this is implicitly
being treated as
<cputune>
<vcpupin cpuset='0-1' vcpu='0'/>
<vcpupin cpuset='0-1' vcpu='1'/>
<vcpupin cpuset='0-1' vcpu='2'/>
<vcpupin cpuset='0-1' vcpu='3'/>
</cputune>
Even more fun, the faked cputune elements are hidden from view when
querying the live XML, because their cpuset mask is the same as the
VM default cpumask.
The upshot was that it was impossible to set VM level CPU affinity.
To fix this we must update qemuProcessSetVcpuAffinities so that it
only reports a fatal error if the per-VCPU cpu mask is different
from the VM level cpu mask.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
It is often helpful to know which version of libvirt and QEMU
was present when a guest was first launched. Ensure this info
is written into the QEMU log file for each guest.
Depending on the context, either error out if the domain
has disappeared in the meantime, or just ignore the value
to allow marking the function as ATTRIBUTE_RETURN_CHECK.
https://bugzilla.redhat.com/show_bug.cgi?id=1161024
In the device type-specific functions, exit early
if the domain has disappeared, because the cleanup
should have been done by qemuProcessStop.
Check the return value in processDeviceDeletedEvent
and qemuProcessUpdateDevices.
Skip audit and removing the device from live def because
it has already been cleaned up.
Make a local copy of the disk alias in qemuProcessInitPasswords,
instead of referencing the one in the domain definition, which
might get freed if the domain crashes while we're in the monitor.
Also copy the memballoon period value.
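A minimal sketch of the pattern (function names assumed for
illustration):

    char *alias = NULL;

    if (VIR_STRDUP(alias, disk->info.alias) < 0)
        goto cleanup;

    /* the copy stays valid even if the domain def is freed while in monitor */
    qemuDomainObjEnterMonitor(driver, vm);
    ret = qemuMonitorSetDrivePassphrase(priv->mon, alias, secret);
    ignore_value(qemuDomainObjExitMonitor(driver, vm));
    VIR_FREE(alias);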
QEMU internally updates the size of video memory if the domain XML
provided too low a memory size or there are some dependencies for a QXL
device's 'vgamem' and 'ram' sizes. We need to know about the changes and
store them in the status XML so as not to break migration or managedsave
across different libvirt versions.
The values are loaded only if the "vgamem_mb" property exists for
the device. The presence of "vgamem_mb" also tells us that the
"ram_size" and "vram_size" exist for QXL devices.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
There is one problem that causes various errors in the daemon. When a
domain is waiting for a job, it is unlocked while waiting on the
condition. However, if that domain is, for example, transient and being
removed in another API (e.g. cancelling incoming migration), it gets
unref'd. If the first call, the one that was waiting, fails to get the
job, it unrefs the domain object, and because it was the last reference,
this causes clearing of the whole domain object. However, when finishing
the call, the domain must be unlocked, but there is no way for the API
to know whether it was cleaned up or not (unless there is some ugly
temporary variable, but let's scratch that).
The root cause is that our APIs don't ref the objects they are using and
all use the implicit reference that the object has while it is in the
domain list. That reference can be removed while the API is waiting for
a job. And because the APIs don't do their own ref'ing, we end up with
the ugly checking of the return value of virObjectUnref() that we have
everywhere.
This patch changes qemuDomObjFromDomain() to ref the domain (using
virDomainObjListFindByUUIDRef()) and adds qemuDomObjEndAPI() which
should be the only function in which the return value of
virObjectUnref() is checked. This makes all reference counting
deterministic and makes the code a bit clearer.
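A minimal sketch of the new helper (assumed shape):

    void
    qemuDomObjEndAPI(virDomainObjPtr *vm)
    {
        /* the single place where virObjectUnref's return value is checked */
        if (virObjectUnref(*vm))
            virObjectUnlock(*vm);
        *vm = NULL;
    }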
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Although QMP returns info about vCPU threads in TCG mode, the
data it returns is mostly lies. Only the first vCPU has a valid
thread_id returned. The thread_id given for the other vCPUs is
in fact the main emulator thread. All vCPUs actually run under
the same thread in TCG mode.
Our vCPU pinning code is not at all able to cope with this,
so if you try to set CPU affinity per-vCPU you end up with
weird errors
error: Failed to start domain instance-00000007
error: cannot set CPU affinity on process 24365: Invalid argument
Since few people will care about the performance of TCG with
strict CPU pinning, let's just disable that for now, so we get
a clear error message
error: Failed to start domain instance-00000007
error: Requested operation is not valid: cpu affinity is not supported
The code assumes that def->vcpus == nvcpupids, so when we setup
fake CPU pids for old QEMU with nvcpupids == 1, we cause the
later code to read off the end of the array. This has fun results
like sched_setaffinity(0, ...) which changes libvirtd's own CPU
affinity, or even better sched_setaffinity($RANDOM, ...) which
changes the affinity of a random OS process.
We now have a qemuInterfaceStartDevices() which does the final
activation needed for the host-side tap/macvtap devices that are used
for qemu network connections. It will soon make sense to have the
converse qemuInterfaceStopDevices() which will undo whatever was done
during qemuInterfaceStartDevices().
A function to "stop" a single device has also been added, and is
called from the appropriate place in qemuDomainDetachNetDevice(),
although this is currently unnecessary - the device is going to
immediately be deleted anyway, so any extra "deactivation" will be for
naught. The call is included for completeness, though, in anticipation
that in the future there may be some required action that *isn't*
nullified by deleting the device.
This patch is a part of a more complete fix for:
https://bugzilla.redhat.com/show_bug.cgi?id=1081461
The patch that added qemuInterfaceStartDevices() (upstream commit
82977058f5) had an extra conditional to
prevent calling it if the reason for starting the CPUs was
VIR_DOMAIN_RUNNING_UNPAUSED or VIR_DOMAIN_RUNNING_SAVE_CANCELED. This
was put in by the author as the result of a reviewer asking whether it
was necessary to ifup the interfaces on *all* occasions (because these
were the two cases where the CPUs would have already been started (and
stopped) once, so the interfaces would already be ifup'ed).
It turns out that, as long as there is no corresponding
qemuInterfaceStopDevices() to ifdown the interfaces anytime the CPUs
are stopped, neglecting to ifup when reason is RUNNING_UNPAUSED or
RUNNING_SAVE_CANCELED doesn't cause any problems (because it just
happens that the interface will have already been ifup'ed by a prior
call when the CPU was previously started for some other reason).
However, it also doesn't *help*, and there will soon be a
qemuInterfaceStopDevices() function which *will* ifdown these
interfaces when the guest CPUs are stopped, and once that is done, the
interfaces will be left down in some cases when they should be up (for
example, if a domain is paused and then unpaused).
So, this patch removes the condition in favor of always calling
qemuInterfaceStartDevices() when the guest CPUs are started.
This patch (and the aforementioned patch) resolve:
https://bugzilla.redhat.com/show_bug.cgi?id=1081461
Currently, MAC registration occurs during device creation, which is
early enough that, during live migration, you end up with duplicate
MAC addresses on still-running source and target devices, even though
the target device isn't actually being used yet.
This patch proposes to defer MAC registration until right before
the guest can actually use the device -- In other words, right
before starting guest CPUs.
Signed-off-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Signed-off-by: Laine Stump <laine@laine.org>
Move entering the job into the thread to simplify the program flow. Also
as the code holds a separate reference to the domain object some
conditions can be simplified.
After this patch qemuDomainObjTransferJob is no longer needed so this
patch removes it.
There are some small issues in qemuProcessAttach:
1. Fix virSecurityManagerGetProcessLabel always getting pid = 0 by
moving 'vm->pid = pid' before the call to
virSecurityManagerGetProcessLabel.
2. Use virSecurityManagerGenLabel to get the image label.
3. Fix always setting the selinux label even when another security
driver's label is in use.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
When a block{commit,copy} job was aborted on a domain, the block job
handler did not process it correctly, leaving a phantom job in the
background. Any further call to any blockjob then caused a "block
<jobtype> still active" error. This patch fixes the blockjob handler so
that it checks not only for the VIR_DOMAIN_BLOCK_JOB_FAILED status, but
for the VIR_DOMAIN_BLOCK_JOB_CANCELED status as well, followed by our
existing cleanup routine.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1135169
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1160084
As of b6d4dad11b (1.2.5) we are trying to keep track of the FSFreeze
status of the guest. Even though I've tried to fix a couple of corner
cases (6ea54769ba), it occurred to me just recently that the approach
is broken by design. Firstly, there are many other ways to talk to
qemu-ga (even through libvirt) by which filesystems can be thawed (e.g.
qemu-agent-command) without libvirt noticing. Moreover, there are
plenty of ways to thaw filesystems without even qemu-ga noticing (yes,
qemu-ga keeps its own internal track of the FSFreeze status). So,
instead of keeping track ourselves, or asking qemu-ga for possibly
stale state, it's best to let qemu-ga deal with it (and possibly let
the guest kernel propagate an error).
Moreover, there was one bug with the previous approach: if the fsfreeze
command failed, we executed fsthaw subsequently. So issuing
domfsfreeze in virsh gave the following result:
virsh # domfsfreeze gentoo
Froze 1 filesystem(s)
virsh # domfsfreeze gentoo
error: Unable to freeze filesystems
error: internal error: unable to execute QEMU agent command 'guest-fsfreeze-freeze': The command guest-fsfreeze-freeze has been disabled for this instance
virsh # domfsfreeze gentoo
Froze 1 filesystem(s)
virsh # domfsfreeze gentoo
error: Unable to freeze filesystems
error: internal error: unable to execute QEMU agent command 'guest-fsfreeze-freeze': The command guest-fsfreeze-freeze has been disabled for this instance
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Improve the monitor function to also retrieve the guest state of
character device (if provided) so that we can refresh the state of
virtio-serial channels and perhaps react to changes in the state in
future patches.
This patch changes the returned data from qemuMonitorGetChardevInfo to
return a structure containing the pty path and the state for all the
character devices.
The change to the testsuite makes sure that the data is parsed
correctly.
Newer qemu added a new event that is emitted when a virtio serial channel
is opened in the guest OS. This allows us to update the state of the
port in the output-only XML element.
This patch implements the monitor callbacks and necessary handlers to
update the state in the definition.
To unify future additions that require information from "query-chardev"
rename qemuMonitorGetPtyPaths and friends to qemuMonitorGetChardevInfo
and move the allocation of the returned hash into the top level
function.
When creating a disk image snapshot the libvirt code would blindly copy
the parent's label to the newly created image. This runs into problems
when you start a VM from an image hosted on NFS (or other storage system
that doesn't support selinux labels) and the snapshot destination is on
a storage system that does support selinux labels. Libvirt's code in
that case generates a different security label for the image hosted on
NFS. This label is valid only for NFS images and doesn't allow access in
case of a locally stored image.
To fix this issue libvirt needs to refrain from copying security
information in cases where the default domain seclabel is a better
choice.
This patch repurposes the now unused @force argument of
virStorageSourceInitChainElement to denote whether a copy of the
security labelling information should be attempted or not. This allows
fine-grained control of the copy operation for cases where we need to
keep the label of the old disk vs. cases where we need to leave the
label unset to use the default domain imagelabel.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1151718
Ethernet interfaces in libvirt currently do not support bandwidth
setting. For example, the following XML for an interface will not apply
these settings to the corresponding qdiscs:
<interface type="ethernet">
<mac address="02:36:1d:18:2a:e4"/>
<model type="virtio"/>
<script path=""/>
<target dev="tap361d182a-e4"/>
<bandwidth>
<inbound average="984" peak="1024" burst="64"/>
<outbound average="2000" peak="2048" burst="128"/>
</bandwidth>
</interface>
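For reference, the settings above translate into tc invocations roughly
like the following (sketch; exact options may differ):

    tc qdisc add dev tap361d182a-e4 root handle 1: htb default 1
    tc class add dev tap361d182a-e4 parent 1: classid 1:1 htb rate 984kbps ceil 1024kbps burst 64kb
    tc qdisc add dev tap361d182a-e4 ingress
    tc filter add dev tap361d182a-e4 parent ffff: protocol all u32 match u32 0 0 police rate 2000kbps burst 128kb drop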
Signed-off-by: Anirban Chakraborty <abchak@juniper.net>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Commit 6e5c79a1 tried to fix the deadlock between nwfilter{Define,Undefine}
and the starting of a guest, but the same deadlock also exists for
updating/attaching a network device to a domain.
The deadlock was introduced by removing the global QEMU driver lock,
because nwfilter was counting on that lock to ensure that all driver
locks were held inside nwfilter{Define,Undefine}.
This patch extends usage of virNWFilterReadLockFilterUpdates to prevent
the deadlock for all possible paths in QEMU driver. LXC and UML drivers
still have global lock.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1143780
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
In one of my previous patches (3a3c3780b) I tried to fix the
problem of the nvram path disappearing on a domain that's been
started and shut down again. I fixed this by explicitly saving
the domain's config file. However, I did it a bit clumsily, without
realizing that we have transient domains for which we don't save the
config file. Hence, any domain using UEFI became persistent.
https://bugzilla.redhat.com/show_bug.cgi?id=1160084
As of b6d4dad1 (1.2.5) libvirt keeps track of whether domain disks have
been frozen. However, this falls into the set of information which
doesn't survive domain restart. Therefore, we need to clear the flag
upon certain state transitions. Moreover, once we clear the flag we
must update the status file too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This is a reaction to Michal's fix [1] for non-NUMA systems that also
splits conf/ out of util/, because libvirt_util shouldn't require
libvirt_conf; it's supposed to be the other way around. This particular
use case worked, but we're trying to avoid it, as mentioned [2] many
times.
The only functions from virnuma.c that needed numatune_conf were
virDomainNumatuneNodesetIsAvailable() and virNumaSetupMemoryPolicy().
The first one should be in numatune_conf as it works with
virDomainNumatune; the second one just needs a nodeset and mode, both of
which can be passed without the need for numatune_conf.
Apart from fixing that, this patch also fixes recently added
code (between commits d2460f85^..5c8515620) that doesn't support
non-contiguous nodesets. It uses the new function
virNumaNodesetIsAvailable() (sketched below), which doesn't need a stub
as it doesn't use any libnuma functions itself, to check that every
specified node is available.
[1] https://www.redhat.com/archives/libvir-list/2014-November/msg00118.html
[2] http://www.redhat.com/archives/libvir-list/2011-June/msg01040.html
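A rough sketch of the new check (shape assumed; exact error reporting
elided):

    bool
    virNumaNodesetIsAvailable(virBitmapPtr nodeset)
    {
        ssize_t bit = -1;

        if (!nodeset)
            return true;

        while ((bit = virBitmapNextSetBit(nodeset, bit)) >= 0) {
            if (virNumaNodeIsAvailable(bit))
                continue;
            virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
                           _("NUMA node %zd is unavailable"), bit);
            return false;
        }
        return true;
    }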
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Particularly in qemuBuildNumaArgStr(), the advice was needed for
memory backing, which needs to know the nodeset it will be pinned
to. With newer qemu this caused the following error when starting a
domain:
error: internal error: Advice from numad is needed in case of
automatic numa placement
even when starting a perfectly valid domain, e.g.:
...
<vcpu placement='auto'>4</vcpu>
<numatune>
<memory mode='strict' placement='auto'/>
</numatune>
<cpu>
<numa>
<cell id='0' cpus='0' memory='524288'/>
<cell id='1' cpus='1' memory='524288'/>
</numa>
</cpu>
...
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1138545
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Commit 3e1e16aa8d (Use a port from the
migration range for NBD as well) changed NBD port allocation from
remotePorts to migrationPorts, but did not change the port releasing
process, which causes an error after migrating several times (more than 64):
error: internal error: Unable to find an unused port in range
'migration' (49152-49215)
https://bugzilla.redhat.com/show_bug.cgi?id=1159245
Signed-off-by: Weiwei Li <nuonuoli@tencent.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
If a VM is configured with many devices (including passthrough devices)
and large memory, libvirtd will take seconds (in the worst case) waiting
for the monitor. During this period the qemu process may run on any
PCPU, even though the emulator is meant to be pinned to the PCPU
specified in the XML configuration.
The qemu process actually consumes a lot of CPU during VM startup,
so strict CPU isolation is not achieved in this case.
Signed-off-by: Zhou yimin <zhouyimin@huawei.com>
NIC_RX_FILTER_CHANGED is sent by qemu any time a NIC driver in the
guest modifies the NIC's RX filter (for example, if the MAC address of
the NIC is changed by the guest).
This patch doesn't do anything useful with that event; it just sets up
all the plumbing to get news of the event into a worker thread with
all proper locking/reference counting, and provides an easy place to
add the desired functionality.
See src/qemu/EVENTHANDLERS.txt for information/instructions on adding
a libvirt-internal handler for a qemu event (using
NIC_RX_FILTER_CHANGED as an example).
If we don't properly clean up all processes in the
machine-<vmname>.scope, systemd won't remove the cgroup, and subsequent
VM starts fail with
'CreateMachine: File exists'
Additional processes can be added via e.g.:
echo $PID > /sys/fs/cgroup/systemd/machine.slice/machine-${VMNAME}.scope/tasks
but there are other cases like
http://bugs.debian.org/761521
Invoke TerminateMachine to be on the safe side, since systemd tracks the
cgroup anyway. This is a no-op if all processes have already terminated.
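For reference, the same cleanup can be requested by hand (assuming
systemd's machinectl tool is available; the machine name is the one
shown by 'machinectl list'):
machinectl terminate <machine-name>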
Commit fba6bc4 introduced the non-migratable invtsc feature,
breaking save/migration with host-model and host-passthrough.
On hosts with this feature present it was automatically included
in the CPU definition, regardless of QEMU support.
Commit de0aeaf stopped including it by default for host-model,
but failed to fix host-passthrough.
This commit skips the checking of CPU features with host-passthrough,
since we don't pass them to QEMU (only -cpu host is passed),
allowing domains using host-passthrough that were saved with
the broken version of libvirtd to be restored.
https://bugzilla.redhat.com/show_bug.cgi?id=1147584
On domain startup, the variable store path is generated if needed.
The path is intended to be generated only once. However, the updated
domain definition is saved only into the state XML, not into the
config dir. So later, whenever the domain is destroyed and the daemon is
restarted, the generated path is forgotten and the file may be left
behind on the virDomainUndefine() call.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Request erroring out from the backing chain traverser and drop qemu's
internal backing chain integrity tester.
The backing chain traverser reports errors by itself, with possibly more
detail than qemuDiskChainCheckBroken ever could.
We also need to make sure that we reconnect to existing qemu instances
even at the cost of losing the backing chain info (this really should be
stored in the XML rather than reloaded from disk, but that needs some
work).
Commit id '9a2f36ec' added a build conditional on CAP_SYS_RAWIO
in order to determine whether or not a disk definition using rawio
should be allowed on platforms without CAP_SYS_RAWIO. If one was
found, virReportError was used, but the code didn't goto cleanup.
This patch adds the goto.
We are not detecting the presence of FIPS from QEMU but from procfs,
which means it's not a QEMU capability. It was decided that we will pass
this flag to QEMU even if it's not supported by old QEMU binaries.
This patch also reverts the changes done by commit a21cfb0f to
qemucapabilitiestest and implements a new test case in qemuxml2argvtest.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1135431
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
If there are no iothreads, then return from qemuProcessDetectIOThreadPIDs
without error; otherwise, the following occurs:
error: Failed to start domain $dom
error: An error occurred, but the cause is unknown
Modify qemuProcessStart() in order to allow setting the affinity of
IOThreads to specific CPUs. The process followed is similar to
that for the vCPUs.
This involves adding a function to fetch the IOThread IDs via
qemuMonitorGetIOThreads() and adding them to the iothreadpids[] list,
then making sure all the cgroup data has been properly set up and
finally assigning affinity.
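A rough sketch of the detection step described above (helper and field
names follow the prose and are simplified, not the exact code):
qemuMonitorIOThreadsInfoPtr *iothreads = NULL;
int niothreads;
int i;

/* ask qemu for its IOThread list via the monitor */
if ((niothreads = qemuMonitorGetIOThreads(priv->mon, &iothreads)) < 0)
    goto cleanup;

/* remember the thread ids so cgroup setup and affinity can use them */
for (i = 0; i < niothreads; i++)
    priv->iothreadpids[i] = iothreads[i]->thread_id;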
In qemuProcessInitPCIAddresses(), if qemuMonitorGetAllPCIAddresses()
returns a negative (or zero) value, then there is no need to call
qemuProcessDetectPCIAddresses().
Signed-off-by: John Ferlan <jferlan@redhat.com>
When using a split UEFI image, it may come in handy if libvirt manages the
per-domain _VARS file automatically. While the _CODE file is RO and can be
shared among multiple domains, you certainly don't want to do that with
the _VARS file; that one needs to be per domain. So during the
domain startup process, if it's determined that the domain needs a _VARS
file, it's copied from a master _VARS file. The location of the
master file is configurable in qemu.conf.
Temporarily, on a per-domain basis, the location of the master NVRAM file
can be overridden by the @template attribute I'm introducing on the
<nvram/> element. All it does is hold the path to the master NVRAM file
from which the local copy is created. In that case, the map in
qemu.conf is not consulted.
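To illustrate (the firmware paths are only examples; the exact location
varies by distro):
<os>
  <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
  <nvram template='/usr/share/OVMF/OVMF_VARS.fd'>/var/lib/libvirt/qemu/nvram/guest_VARS.fd</nvram>
</os>
and the corresponding qemu.conf map consulted when no template is given:
nvram = [ "/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd" ]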
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Acked-by: Laszlo Ersek <lersek@redhat.com>
The total time of a migration and the total downtime transferred from a
source to a destination host do not include the transfer time to the
destination host or the time elapsed before guest CPUs are
resumed. Thus, the source libvirtd remembers when the migration started
and when guest CPUs were paused. Both timestamps are transferred to the
destination libvirtd, which uses them to compute the total migration time
and total downtime. Obviously, this requires the time to be synchronized
between the two hosts. The reported times are useless otherwise, but they
would be equally useless if we didn't do this recomputation, so we don't
lose anything by doing it.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When QEMU fails during incoming migration after we successfully started
it (i.e., during Perform or Finish phase), we report a rather unhelpful
message
Unable to read from monitor: Connection reset by peer
We already have code that takes error messages from QEMU's error
output, but we disable it once QEMU successfully starts. This patch
postpones this until the end of the Finish phase during incoming migration
so that we can report a much better error message:
internal error: early end of file from monitor: possible problem:
Unknown savevm section or instance '0000:00:05.0/virtio-balloon' 0
load of migration failed
https://bugzilla.redhat.com/show_bug.cgi?id=1090093
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Add a umask to _virCommand, allowing the user to set the umask for a
command. Set umask(002) on the qemu process to override the default umask
of 022 set by many distros, so that unix sockets created for
virtio-serial have the expected permissions.
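A minimal sketch of the intended usage (assuming the setter is exposed
as virCommandSetUmask, per the description above):
cmd = virCommandNew(binary);
/* override the distro default of 022 so group members can connect */
virCommandSetUmask(cmd, 0002);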
Fix problem reported here:
https://sourceware.org/bugzilla/show_bug.cgi?id=13078#c11
https://bugzilla.novell.com/show_bug.cgi?id=888166
To use a virtio-serial device, the unix socket created for the chardev
with the default umask (022) has insufficient permissions.
e.g.:
-device virtio-serial \
-chardev socket,path=/tmp/foo,server,nowait,id=foo \
-device virtserialport,chardev=foo,name=org.fedoraproject.port.0
srwxr-xr-x 1 qemu qemu 0 21. Jul 14:19 /tmp/foo
Other users in the same group (like real user, test engines, etc)
cannot write to this socket.
Signed-off-by: Chunyan Liu <cyliu@suse.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
The 'min_guarantee' element is used by the VMware ESX and OpenVZ drivers;
with qemu, however, libvirt should report an error when starting a domain,
because this element is not supported there.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1122455
Currently, the qemu driver uses qemuTranslateDiskSourcePool()
to translate disk volume information. This function is
general enough to be used by other drivers as well,
so move it to storage/storage_driver.c along with its helpers:
- qemuTranslateDiskSourcePool: move to storage/storage_driver.c
and rename to virStorageTranslateDiskSourcePool,
- qemuAddISCSIPoolSourceHost: move to storage/storage_driver.c
and rename to virStorageAddISCSIPoolSourceHost,
- qemuTranslateDiskSourcePoolAuth: move to storage/storage_driver.c
and rename to virStorageTranslateDiskSourcePoolAuth,
- Update users of qemuTranslateDiskSourcePool to use the
new names.
Pin existing vcpus rather than existing vcpu pinning infos. This
increases the complexity of the lookup, but avoids pinning cpus that are
not actually enabled.
When editing a guest's XML (on QEMU), it was possible to add multiple
listen elements to the graphics parent element. However, QEMU does not
support listening on multiple addresses. The configuration is now checked
for multiple listen addresses, and if found, an error is raised.
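For example, a configuration along these lines (two listen addresses on
one graphics device; the addresses are illustrative) is now rejected:
<graphics type='spice' autoport='yes'>
  <listen type='address' address='192.168.122.1'/>
  <listen type='address' address='10.0.0.1'/>
</graphics>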
https://bugzilla.redhat.com/show_bug.cgi?id=1119212
During a QEMU live migration several warning messages about job
handling could be written to syslog on the destination host:
"entering monitor without asking for a nested job is dangerous"
The messages are written because the job handling during migration
uses hard-coded asyncJob values in several places that are incorrect.
This patch passes the required asyncJob value around and prevents
the warnings as well as any issues that the warnings may be referring
to.
https://bugzilla.redhat.com/show_bug.cgi?id=1130089
Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
A future patch is going to wire up qemu active block commit jobs;
but as they have similar events and are canceled/pivoted in the
same way as block copy jobs, it is easiest to track all bookkeeping
for the commit job by reusing the <mirror> element. This patch
adds domain XML to track which job was responsible for creating a
mirroring situation, and adds a job='copy' attribute to all
existing uses of <mirror>. Along the way, it also massages the
qemu monitor backend to read the new field in order to generate
the correct type of libvirt job (even though it requires a
future patch to actually cause a qemu event that can be reported
as an active commit). It also prepares to update persistent XML
to match changes made to live XML when a copy completes.
* docs/schemas/domaincommon.rng: Enhance schema.
* docs/formatdomain.html.in: Document it.
* src/conf/domain_conf.h (_virDomainDiskDef): Add a field.
* src/conf/domain_conf.c (virDomainBlockJobType): String conversion.
(virDomainDiskDefParseXML): Parse job type.
(virDomainDiskDefFormat): Output job type.
* src/qemu/qemu_process.c (qemuProcessHandleBlockJob): Distinguish
active from regular commit.
* src/qemu/qemu_driver.c (qemuDomainBlockCopy): Set job type.
(qemuDomainBlockPivot, qemuDomainBlockJobImpl): Clean up job type
on completion.
* tests/qemuxml2xmloutdata/qemuxml2xmlout-disk-mirror-old.xml:
Update tests.
* tests/qemuxml2argvdata/qemuxml2argv-disk-mirror.xml: Likewise.
* tests/qemuxml2argvdata/qemuxml2argv-disk-active-commit.xml: New
file.
* tests/qemuxml2xmltest.c (mymain): Drive new test.
Signed-off-by: Eric Blake <eblake@redhat.com>
We were not directly saving the domain XML to file after starting
or finishing a blockcopy. Without the startup write, a libvirtd
restart in the middle of a copy job would forget that the job was
underway. Then at pivot, we were indirectly writing new XML in
reaction to events that occur as we stop and restart the guest CPUs.
But there was a race: since pivot is an async action, it is possible
that libvirtd is restarted before the pivot completes, so if XML
changes during the event, that change was not written. The original
blockcopy code cleared out the <mirror> element prior to restarting
the CPUs, but this is also a race, observed if a user does an async
pivot and a dumpxml before the event occurs. Furthermore, this race
will interfere with active commit in a future patch, because that
code will rely on the <mirror> element at the time of the qemu event
to determine whether to inform the user of a normal commit or an
active commit.
Fix things by saving state any time we modify live XML, while
delaying XML disk modifications until after the event completes. We
still need to teach libvirtd restarts to examine all existing
<mirror> elements to see if the job completed in the meantime (that
is, if libvirtd misses the event, the state still needs to be
updated in the live XML), but that will be a later patch, in part because
we also need to start taking advantage of newer qemu's ability to
keep the job around after completion rather than the current usage
where the job disappears both on error and on success.
* src/qemu/qemu_driver.c (qemuDomainBlockCopy): Track XML change
on disk.
(qemuDomainBlockJobImpl, qemuDomainBlockPivot): Move job-end XML
rewrites...
* src/qemu/qemu_process.c (qemuProcessHandleBlockJob): ...here.
Signed-off-by: Eric Blake <eblake@redhat.com>
Doing a blockcopy operation across a libvirtd restart is not very
robust at the moment. In particular, we are clearing the <mirror>
element prior to telling qemu to finish the job. Also, thanks to the
ability to request async completion, the user can easily regain
control prior to qemu actually finishing the effort, and they should
be able to poll the domain XML to see if the job is still going.
A future patch will fix things to actually wait until qemu is done
before modifying the XML to reflect the job completion. But since
qemu issues identical BLOCK_JOB_COMPLETE events regardless of whether
the job was cancelled (kept the original disk) or completed (pivoted
to the new disk), we have to track which of the two operations was
used to end the job. Furthermore, we'd like to avoid attempts to
end a job where we are already waiting on an earlier request to qemu
to end the job. Likewise, if we miss the qemu event (perhaps because
it arrived during a libvirtd restart), we still need enough state
recorded to be able to determine how to modify the domain XML once
we reconnect to qemu and manually learn whether the job still exists.
Although this patch doesn't actually fix the problem, it is a
preliminary step that makes it possible to track whether a job
has already begun steps towards completion.
* src/conf/domain_conf.h (virDomainDiskMirrorState): New enum.
(_virDomainDiskDef): Convert bool mirroring to new enum.
* src/conf/domain_conf.c (virDomainDiskDefParseXML)
(virDomainDiskDefFormat): Handle new values.
* src/qemu/qemu_process.c (qemuProcessHandleBlockJob): Adjust
client.
* src/qemu/qemu_driver.c (qemuDomainBlockPivot)
(qemuDomainBlockJobImpl): Likewise.
* docs/schemas/domaincommon.rng (diskMirror): Expose new values.
* docs/formatdomain.html.in (elementsDisks): Document it.
* tests/qemuxml2argvdata/qemuxml2argv-disk-mirror.xml: Test it.
Signed-off-by: Eric Blake <eblake@redhat.com>
Use better detection of hugetlbfs mount points. Yes, there can be
multiple mount points, each serving a different huge page size.
Since we already have the ability to override the mount point in the
qemu.conf file, some backward compatibility code is brought in.
Now that we allow multiple mount points, the "hugetlbfs_mount" option
must take a list of strings (mount points). But previously it was
just a string, so we must accept both types now.
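Both forms are therefore accepted in qemu.conf (the paths are just
examples):
hugetlbfs_mount = "/dev/hugepages"
hugetlbfs_mount = ["/dev/hugepages2M", "/dev/hugepages1G"]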
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When a domain is started with the numatune memory mode 'strict' and the
nodeset does not include the host NUMA node with the DMA and DMA32 zones,
KVM initialization fails. This is because the cgroup restricts even kernel
allocations. We are already doing numa_set_membind(), which does the
same thing, except that it does not restrict kernel allocations.
This patch leaves the userspace numa_set_membind() in place and moves
the cpuset.mems setting after the point where monitor comes up, but
before vcpu and emulator sub-groups are created.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
There were numerous places where the numatune configuration (and thus
the domain config as well) was changed in different ways. In some
places this even resulted in the persistent domain definition not being
stable (it would change with the daemon's restart).
In order to unify how the numatune config is dealt with, all
the internals are now accessible directly only in numatune_conf.c, and
accessors must be used outside that file.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Since there was already a public virDomainNumatune*, I changed the
private virNumaTune to match it, so all the uses are unified and the
public API is kept:
s/vir\(Domain\)\?Numa[tT]une/virDomainNumatune/g
then shortened long lines, mainly function names, that were created after
that:
sed -i 's/virDomainNumatuneMemPlacementMode/virDomainNumatunePlacement/g'
And to cope with the enum name, I had to change the constants as
well:
s/VIR_NUMA_TUNE_MEM_PLACEMENT_MODE/VIR_DOMAIN_NUMATUNE_PLACEMENT/g
The last thing I did was at least a little shortening of an already long
name:
s/virDomainNumatuneDef/virDomainNumatune/g
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
This saves a few lines of code and catches the error when:
<spice autoport='yes' defaultMode='any' ..>
<channel name='main' mode='secure'/>
</spice>
is specified with spice_tls = 0 in qemu.conf.
Instead of this error in qemuBuildGraphicsSPICECommandLine:
error: unsupported configuration: spice secure channels set in XML
configuration, but TLS port is not provided
an error is reported in qemuProcessSPICEAllocatePorts:
error: unsupported configuration: Auto allocation of spice TLS port
requested but spice TLS is disabled in qemu.conf
Inspired by:
https://www.redhat.com/archives/libvir-list/2014-June/msg01408.html
As we are doing with the enum structures, a cleanup of the "src/qemu/"
directory was done. All the enums that were defined in the
header files in this directory were converted to typedefs. This
patch includes all the adjustments needed to avoid conflicts when making
this kind of change. Enum-to-typedef conversions were made in
"src/qemu/qemu_{capabilities, domain, migration, hotplug}.h".
Signed-off-by: Julio Faracco <jcfaracco@gmail.com>
When looking for a port to allocate, the port allocator didn't take into
consideration ports that are statically set by the user. Defining
these two graphics elements in the XML would cause an error, as the
port allocator would try to use the same port for the spice graphics
element:
<graphics type='spice' autoport='yes'/>
<graphics type='vnc' port='5900' autoport='no'/>
The new *[pP]ortReserved variables keep track of the ports that the port
allocator successfully marked as used but that weren't actually
bound.
Closes: https://bugzilla.redhat.com/show_bug.cgi?id=1081881
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
When the block job event was first added, it was for block pull,
where the active layer of the disk remains the same name. It was
also in a day where we only cared about local files, and so we
always had a canonical absolute file name. But two things have
changed since then: we now have network disks, where determining
a single absolute string does not really make sense; and we have
two-phase jobs (copy and active commit) where the name of the
active layer changes between the first event (ready, on the old
name) and second (complete, on the pivoted name).
Adam Litke reported that having an unstable string between events
makes life harder for clients. Furthermore, all of our APIs that
operate on a particular disk of a domain accept multiple strings:
not only the absolute name of the active layer, but also the
destination device name (such as 'vda'). As this latter name is
stable, even for network sources, it serves as a better string
to supply in block job events.
But backwards-compatibility demands that we should not change the
name handed to users unless they explicitly request it. Therefore,
this patch adds a new event, BLOCK_JOB_2 (alas, I couldn't think of
any nicer name - but at least Migrate2 and Migrate3 are precedent
for a number suffix). We must double up on emitting both old-style
and new-style events according to what clients have registered for
(see also how IOError and IOErrorReason emits double events, but
there the difference was a larger struct rather than changed
meaning of one of the struct members).
Unfortunately, adding a new event isn't something that can easily
be broken into pieces, so the commit is rather large.
* include/libvirt/libvirt.h.in (virDomainEventID): Add a new id
for VIR_DOMAIN_EVENT_ID_BLOCK_JOB_2.
(virConnectDomainEventBlockJobCallback): Document new semantics.
* src/conf/domain_event.c (_virDomainEventBlockJob): Rename field,
to ensure we catch all clients.
(virDomainEventBlockJobNew): Add parameter.
(virDomainEventBlockJobDispose)
(virDomainEventBlockJobNewFromObj)
(virDomainEventBlockJobNewFromDom)
(virDomainEventDispatchDefaultFunc): Adjust clients.
(virDomainEventBlockJob2NewFromObj)
(virDomainEventBlockJob2NewFromDom): New functions.
* src/conf/domain_event.h: Add new prototypes.
* src/libvirt_private.syms (domain_event.h): Export new functions.
* src/qemu/qemu_driver.c (qemuDomainBlockJobImpl): Generate two
different events.
* src/qemu/qemu_process.c (qemuProcessHandleBlockJob): Likewise.
* src/remote/remote_protocol.x
(remote_domain_event_block_job_2_msg): New struct.
(REMOTE_PROC_DOMAIN_EVENT_BLOCK_JOB_2): New RPC.
* src/remote/remote_driver.c
(remoteDomainBuildEventBlockJob2): New handler.
(remoteEvents): Register new event.
* daemon/remote.c (remoteRelayDomainEventBlockJob2): New handler.
(domainEventCallbacks): Register new event.
* tools/virsh-domain.c (vshEventCallbacks): Likewise.
(vshEventBlockJobPrint): Adjust client.
* src/remote_protocol-structs: Regenerate.
Signed-off-by: Eric Blake <eblake@redhat.com>
The current implementation of 'virsh blockcopy' (virDomainBlockRebase)
is limited to copying to a local file name. But future patches want
to extend it to also copy to network disks. This patch converts over
to a virStorageSourcePtr, although it should have no semantic change
visible to the user, in anticipation of those future patches being
able to use more fields for non-file destinations.
* src/conf/domain_conf.h (_virDomainDiskDef): Change type of
mirror information.
* src/conf/domain_conf.c (virDomainDiskDefParseXML): Localize
mirror parsing into new object.
(virDomainDiskDefFormat): Adjust clients.
* src/qemu/qemu_domain.c (qemuDomainDeviceDefPostParse):
Likewise.
* src/qemu/qemu_driver.c (qemuDomainBlockPivot)
(qemuDomainBlockJobImpl, qemuDomainBlockCopy): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
As part of the work on backing chains, I'm finding that it would
be easier to directly manipulate chains of pointers (adding a
snapshot merely adjusts pointers to form the correct list) rather
than copy data from one struct to another. This patch converts
domain disk source to be a pointer.
In this patch, the pointer is ALWAYS allocated (thanks in part to
the previous patch forwarding all disk def allocation through a
common point), and all other changes are just mechanical fallout of
the new type; there should be no functional change. It is possible
that we may want to leave the pointer NULL for a cdrom with no
medium in a later patch, but as that requires a closer audit of the
source to ensure we don't fault on a null dereference, I didn't do
it here.
* src/conf/domain_conf.h (_virDomainDiskDef): Change type of src.
* src/conf/domain_conf.c: Adjust all clients.
* src/security/security_selinux.c: Likewise.
* src/qemu/qemu_domain.c: Likewise.
* src/qemu/qemu_command.c: Likewise.
* src/qemu/qemu_conf.c: Likewise.
* src/qemu/qemu_process.c: Likewise.
* src/qemu/qemu_migration.c: Likewise.
* src/qemu/qemu_driver.c: Likewise.
* src/lxc/lxc_driver.c: Likewise.
* src/lxc/lxc_controller.c: Likewise.
* tests/securityselinuxlabeltest.c: Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Currently, we do not acquire any job when removing a device after the
DEVICE_DELETED event was received from QEMU. This means that if there is
another API running at the time DEVICE_DELETED is delivered and the API
acquired a job, we may happily change the definition of the domain the
API is working with whenever it unlocks the domain object (e.g., to talk
with its monitor). That said, we have to acquire a job before finishing
device removal to make things safe. However, doing so in the main event
loop would cause a deadlock so we need to move most of the event handler
into a separate thread.
Another good reason for both acquiring a job and handling the event in a
separate thread is that we currently remove a device backend immediately
after removing its frontend while we should only remove the backend once
we already received DEVICE_DELETED event. That is, we will have to talk
to QEMU monitor from the event handler.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
If QEMU supports the DEVICE_DELETED event, we always call
qemuDomainRemoveDevice from the event handler. However, since we will need
to push this call away from the main event loop and begin a job for it (see
the following commit), we need to make sure the device is fully removed
by the original thread (and within its existing job) in case the
DEVICE_DELETED event arrives before qemuDomainWaitForDeviceRemoval times
out.
Without this patch, device removals would be guaranteed to never finish
before the timeout, because the code would be blocked by the original
job still being active.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1088787
Clean up unix socket files for chardevs using mode='bind',
like we clean up the monitor socket.
They are created by QEMU on startup and not really useful
after shutting it down.
Commit e31b5cf393 attempted to fix libvirt's
VIR_DOMAIN_EVENT_ID_RTC_CHANGE, which is documented to always
provide the new offset of the domain's real time clock from UTC. The
problem was that, in the case that qemu is provided with an "-rtc
base=x" where x is an absolute time (rather than "utc" or
"localtime"), the offset sent by qemu's RTC_CHANGE event is *not* the
new offset from UTC, but rather the sum of all changes to the
domain's RTC since it was started with base=x.
So, despite what was said in commit e31b5cf393, if we assume that
the original value stored in "adjustment" was the offset from UTC at
the time the domain was started, we can always determine the current
offset from UTC by simply adding the most recent (i.e. current) offset
from qemu to that original adjustment.
This patch accomplishes that by storing the initial adjustment in the
domain's status as "adjustment0". Each time a new RTC_CHANGE event is
received from qemu, we simply add adjustment0 to the value sent by
qemu, store that as the new adjustment, and forward that value on to
any event handler.
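In other words (a sketch, with illustrative names):
/* at domain start: */
adjustment0 = initial offset from UTC
/* on each RTC_CHANGE event carrying 'offset' from qemu: */
adjustment = adjustment0 + offset   /* stored and forwarded to handlers */
So if the domain started 3600s ahead of UTC and the guest later moves its
RTC forward by another 60s, qemu reports 60 and libvirt emits 3660.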
This patch (*not* e31b5cf393, which should be reverted prior to
applying this patch) fixes:
https://bugzilla.redhat.com/show_bug.cgi?id=964177
(for the case where basis='utc'. It does not fix basis='localtime')
This reverts commit e31b5cf393.
This commit attempted to work around a bug in the offset value
reported by qemu's RTC_CHANGE event in the case that a variable base
date was given on the qemu commandline. The patch mixed up the math
involved in arriving at the corrected offset to report, and in the
process added an unnecessary private attribute to the clock
element. Since that element is private/internal and not used by anyone
else, it makes sense to simplify things by removing it.
Refresh the disk backing chains when reconnecting to a qemu process
after daemon restart. There are a few internal fields that don't get
refreshed from the XML. Until we are able to do that, let's reload all
the metadata using the backing chain crawler.
As a side effect, the return value of qemuDomainObjEnterMonitorAsync is
not directly used as the return value of qemuProcess{Start,Stop}CPUs.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Move the sharable PCI handling functions to domain_addr.[ch], and
change their prefix from 'qemu' to 'vir':
- virDomainPCIAddressAsString;
- virDomainPCIAddressBusSetModel;
- virDomainPCIAddressEnsureAddr;
- virDomainPCIAddressFlagsCompatible;
- virDomainPCIAddressGetNextSlot;
- virDomainPCIAddressReleaseSlot;
- virDomainPCIAddressReserveAddr;
- virDomainPCIAddressReserveNextSlot;
- virDomainPCIAddressReserveSlot;
- virDomainPCIAddressSetFree;
- virDomainPCIAddressSetGrow;
- virDomainPCIAddressSlotInUse;
- virDomainPCIAddressValidate;
The only change here is the function names; the implementation itself
stays untouched.
Extract common allocation code from DomainPCIAddressSetCreate
into virDomainPCIAddressSetAlloc.
In "src/util/" there are many enumeration (enum) declarations.
Sometimes, it's better using a typedef for variable types,
function types and other usages. Other enumeration will be
changed to typedef's in the future.
Signed-off-by: Julio Faracco <jcfaracco@gmail.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
I almost wrote a hash value free function that just called
VIR_FREE, then realized I couldn't be the first person to
do that. Sure enough, it was worth factoring into a common
helper routine.
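The helper is trivial; a sketch matching the description (the second
argument is the hash key, unused here):
void
virHashValueFree(void *value,
                 const void *name ATTRIBUTE_UNUSED)
{
    VIR_FREE(value);
}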
* src/util/virhash.h (virHashValueFree): New function.
* src/util/virhash.c (virHashValueFree): Implement it.
* src/util/virobject.h (virObjectFreeHashData): New function.
* src/libvirt_private.syms (virhash.h, virobject.h): Export them.
* src/nwfilter/nwfilter_learnipaddr.c (virNWFilterLearnInit): Use
common function.
* src/qemu/qemu_capabilities.c (virQEMUCapsCacheNew): Likewise.
* src/qemu/qemu_command.c (qemuDomainCCWAddressSetCreate):
Likewise.
* src/qemu/qemu_monitor.c (qemuMonitorGetBlockInfo): Likewise.
* src/qemu/qemu_process.c (qemuProcessWaitForMonitor): Likewise.
* src/util/virclosecallbacks.c (virCloseCallbacksNew): Likewise.
* src/util/virkeyfile.c (virKeyFileParseGroup): Likewise.
* tests/qemumonitorjsontest.c
(testQemuMonitorJSONqemuMonitorJSONGetBlockInfo): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
In order to reuse the newly-created host-side disk struct in
the virstoragefile backing chain code, I first have to move
it to util/. This starts the process, by first moving the
security label structures.
* src/conf/domain_conf.h (virDomainDefGenSecurityLabelDef)
(virDomainDiskDefGenSecurityLabelDef, virSecurityLabelDefFree)
(virSecurityDeviceLabelDefFree, virSecurityLabelDef)
(virSecurityDeviceLabelDef): Move...
* src/util/virseclabel.h: ...to new file.
(virSecurityLabelDefNew, virSecurityDeviceLabelDefNew): Rename the
GenSecurity functions.
* src/qemu/qemu_process.c (qemuProcessAttach): Adjust callers.
* src/security/security_manager.c (virSecurityManagerGenLabel):
Likewise.
* src/security/security_selinux.c
(virSecuritySELinuxSetSecurityFileLabel): Likewise.
* src/util/virseclabel.c: New file.
* src/conf/domain_conf.c: Move security code, and fix fallout.
* src/Makefile.am (UTIL_SOURCES): Build new file.
* src/libvirt_private.syms (domain_conf.h): Move symbols...
(virseclabel.h): ...to new section.
Signed-off-by: Eric Blake <eblake@redhat.com>
It's finally time to start tracking disk backing chains in
<domain> XML. The first step is to start refactoring code
so that we have an object more convenient for representing
each host source resource in the context of a single guest
<disk>. Ultimately, I plan to move the new type into src/util
where it can be reused by virStorageFile, but to make the
transition easier to review, this patch just creates the
new type then fixes everything until it compiles again.
* src/conf/domain_conf.h (_virDomainDiskDef): Split...
(_virDomainDiskSourceDef): ...to new struct.
(virDomainDiskAuthClear): Use new type.
* src/conf/domain_conf.c (virDomainDiskDefFree): Split...
(virDomainDiskSourceDefClear): ...to new function.
(virDomainDiskGetType, virDomainDiskSetType)
(virDomainDiskGetSource, virDomainDiskSetSource)
(virDomainDiskGetDriver, virDomainDiskSetDriver)
(virDomainDiskGetFormat, virDomainDiskSetFormat)
(virDomainDiskAuthClear, virDomainDiskGetActualType)
(virDomainDiskDefParseXML, virDomainDiskSourceDefFormat)
(virDomainDiskDefFormat, virDomainDiskDefForeachPath)
(virDomainDiskDefGetSecurityLabelDef)
(virDomainDiskSourceIsBlockType): Adjust all users.
* src/lxc/lxc_controller.c (virLXCControllerSetupDisk):
Likewise.
* src/lxc/lxc_driver.c (lxcDomainAttachDeviceMknodHelper):
Likewise.
* src/qemu/qemu_command.c (qemuAddRBDHost, qemuParseRBDString)
(qemuParseDriveURIString, qemuParseGlusterString)
(qemuParseISCSIString, qemuParseNBDString)
(qemuDomainDiskGetSourceString, qemuBuildDriveStr)
(qemuBuildCommandLine, qemuParseCommandLineDisk)
(qemuParseCommandLine): Likewise.
* src/qemu/qemu_conf.c (qemuCheckSharedDevice)
(qemuAddISCSIPoolSourceHost, qemuTranslateDiskSourcePool):
Likewise.
* src/qemu/qemu_driver.c (qemuDomainUpdateDeviceConfig)
(qemuDomainPrepareDiskChainElement)
(qemuDomainSnapshotCreateInactiveExternal)
(qemuDomainSnapshotPrepareDiskExternalBackingInactive)
(qemuDomainSnapshotPrepareDiskInternal)
(qemuDomainSnapshotPrepare)
(qemuDomainSnapshotCreateSingleDiskActive)
(qemuDomainSnapshotUndoSingleDiskActive)
(qemuDomainBlockPivot, qemuDomainBlockJobImpl)
(qemuDomainBlockCopy, qemuDomainBlockCommit): Likewise.
* src/qemu/qemu_migration.c (qemuMigrationIsSafe): Likewise.
* src/qemu/qemu_process.c (qemuProcessGetVolumeQcowPassphrase)
(qemuProcessInitPasswords): Likewise.
* src/security/security_selinux.c
(virSecuritySELinuxSetSecurityFileLabel): Likewise.
* src/storage/storage_driver.c (virStorageFileInitFromDiskDef):
Likewise.
* tests/securityselinuxlabeltest.c (testSELinuxLoadDef):
Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Wire up all the pieces to send arbitrary qemu events to a
client using libvirt-qemu.so. If the extra bookkeeping of
generating event objects even when no one is listening turns
out to be noticeable, we can try to further optimize things
by adding a counter for how many connections are using events,
and only dump events when the counter is non-zero; but for
now, I didn't think it was worth the code complexity.
* src/qemu/qemu_driver.c
(qemuConnectDomainQemuMonitorEventRegister)
(qemuConnectDomainQemuMonitorEventDeregister): New functions.
* src/qemu/qemu_monitor.h (qemuMonitorEmitEvent): New prototype.
(qemuMonitorDomainEventCallback): New typedef.
* src/qemu/qemu_monitor_json.c (qemuMonitorJSONIOProcessEvent):
Report events.
* src/qemu/qemu_monitor.c (qemuMonitorEmitEvent): New function, to
pass events through.
* src/qemu/qemu_process.c (qemuProcessHandleEvent): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
Any source file which calls the logging APIs now needs
to have a VIR_LOG_INIT("source.name") declaration at
the start of the file. This provides a static variable
of the virLogSource type.
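For example, at the top of a source file (the module name is chosen to
match the file's location):
#include <config.h>
#include "virlog.h"

VIR_LOG_INIT("qemu.qemu_process");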
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
We have to explicitly destroy TAP devices on FreeBSD because
they're not freed after being closed, otherwise we end up with
orphaned TAP devices after destroying a domain.
Change any method names with Usb, Pci or Scsi to use
USB, PCI and SCSI since they are abbreviations.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
In order to extract the hostdev code from qemu_hostdev.c into a common
library, change the qemu-specific COLD_BOOT handling to be a flag, and
pass it to the hostdev functions.
In order to extract the hostdev code from qemu_hostdev.c into a common
library, change the qemu-specific cfg->relaxedACS handling to be a flag,
and pass it to the hostdev functions.
When attaching to a QEMU process, the def->seclabels array is
going to be empty. The qemuProcessAttach method must thus
populate it with data for the security drivers.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The qemu_bridge_filter.c file had some helpers for calling
the ebtablesXXX functions to do bridge filtering. The only
thing these helpers did was to overwrite the original error
message from the ebtables code. For added fun, the callers
of these helpers overwrote the errors yet again. For even
more fun, one of the helpers called another helper and
overwrote its errors too.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
There might be some use cases where the user wants to prepare the host or
its environment prior to starting a network and do some cleanup after
the network has been shut down. Consider all the functionality that
libvirt doesn't currently have as an example of what a hook script can
possibly do.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
On some platforms, like IBM PowerNV, the NUMA node numbers can be
non-sequential. For example, the numactl --hardware output from such a
machine looks as given below:
node distances:
node 0 1 16 17
0: 10 40 40 40
1: 40 10 40 40
16: 40 40 10 40
17: 40 40 40 10
The NUMA nodes are 0, 1, 16, and 17.
Libvirt uses a sequential index as the NUMA node number, and this can
result in crashes or incorrect results.
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.vnet.ibm.com>
Signed-off-by: Pradipta Kr. Banerjee <bpradip@in.ibm.com>
The code took into account only the global permissions. The domains now
support per-vm DAC labels and per-image DAC labels. Use the most
specific label available.
The public virConnectRef and virConnectClose APIs are just thin
wrappers around virObjectRef/virObjectUnref, with added object
validation and an error reset. Within our backend drivers, use
of the object validation is just an inefficiency since we always
pass valid objects. More important to think about is what
happens with the error reset; our uses of virConnectRef happened
to be safe (since we hadn't encountered any earlier errors), but
in several cases the use of virConnectClose could lose a real
error.
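A sketch of the substitution within the drivers (conn handling only):
/* before: validates 'conn' and resets the last error */
virConnectRef(conn);
...
virConnectClose(conn);
/* after: plain reference counting, the last error is left intact */
virObjectRef(conn);
...
virObjectUnref(conn);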
Ideally, we should also avoid calling virConnectOpen() from
within backend drivers - but that is a known situation that
needs much more design work.
* src/qemu/qemu_process.c (qemuProcessReconnectHelper)
(qemuProcessReconnect): Avoid nested public API call.
* src/qemu/qemu_driver.c (qemuAutostartDomains)
(qemuStateInitialize, qemuStateStop): Likewise.
* src/qemu/qemu_migration.c (doPeer2PeerMigrate): Likewise.
* src/storage/storage_driver.c (storageDriverAutostart):
Likewise.
* src/uml/uml_driver.c (umlAutostartConfigs): Likewise.
* src/lxc/lxc_process.c (virLXCProcessAutostartAll): Likewise.
(virLXCProcessReboot): Likewise, and avoid leaking conn on error.
Signed-off-by: Eric Blake <eblake@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1047659
If a VM dies very early during an attempted connect to the guest agent,
while the locks are down, the domain monitor object will be freed. The
object is then accessed later, as any failure during guest agent startup
isn't considered fatal.
In the current upstream version this doesn't lead to a crash, as
virObjectLock, called when entering the monitor in
qemuProcessDetectVcpuPIDs, checks the pointer before attempting to
dereference (lock) it. The NULL pointer is then caught in the monitor
helper code.
Before the introduction of virObjectLockable - observed on 0.10.2 - the
pointer is locked directly via virMutexLock leading to a crash.
To avoid this problem we need to differentiate between the guest agent
not being present and the VM quitting when the locks were down. The fix
reorganizes the code in qemuConnectAgent to add the check and then adds
special handling to the callers.
Currently, qemuProcessStop tries to open the domain log file
and saves the original error afterwards. Then all the cleanup is
done, after which the error is restored. This has, however, one
flaw: if opening the log file fails, an error is reported,
which results in the previous error being overwritten (the useful
one, e.g. "PCI device XXXX:XXXX could not be found"). Hence, the user
sees something like:
error: failed to create logfile /var/log/libvirt/qemu/ovirt_usb.log: No such file or directory
instead of:
error: internal error: Did not find USB device 8644:8003
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reported-by: Zhou Yimin <zhouyimin@huawei.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1035955
There's a window when starting a qemu process, between fork() and exec(),
during which we are doing things that may fail without tunnelling the
error to the daemon. This is basically all within qemuProcessHook().
So whenever we fail at something, e.g. placing a process onto a NUMA node,
users are left with:
error: Child quit during startup handshake: Input/output error
while the original error is thrown into the domain log:
libvirt: error : internal error: NUMA memory tuning in 'preferred'
mode only supports single node
Hence, we should read the log file, search for the error message, and
report it to users.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Report the error in virPortAllocatorAcquire instead
of doing it in every caller.
The error contains the port range name instead of the intended
use for the port, e.g.:
Unable to find an unused port in range 'display' (65534-65535)
instead of:
Unable to find an unused port for SPICE
This also adds error reporting when the QEMU driver could not
find an unused port for VNC, VNC WebSockets or NBD migration.
In qemuProcessReconnectHelper() a new thread is spawned that does all the
interesting work. The rationale is to not block the daemon
startup process in case of an unresponsive qemu. However, the thread
handle is a local variable which gets lost once control goes out of
scope; hence the thread is leaked. We can avoid this if the thread
isn't made joinable.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When starting a VM, the qemu process may filter out some requested
features of a domain that are not supported either by the host or by
qemu. Libvirt didn't check whether this happened, which might end up
changing the guest ABI when migrating.
The proof-of-concept implementation adds a check for the recently
introduced kvm_pv_unhalt cpuid feature bit. This feature depends on both
qemu and host kernel support and thus increases the possibility of guest
ABI breakage.
One of my previous patches (c7ac2519b7) tried to fix the issue of a
domain dying too soon during migration. However, this clumsy approach was
missing the removal of qemuProcessHandleMonitorDestroy, resulting in double
unrefing of mon->vm and hence producing a daemon crash:
==11843== Invalid read of size 4
==11843== at 0x50C28C5: virObjectUnref (virobject.c:255)
==11843== by 0x1148F7DB: qemuMonitorDispose (qemu_monitor.c:258)
==11843== by 0x50C2991: virObjectUnref (virobject.c:262)
==11843== by 0x50C2D13: virObjectFreeCallback (virobject.c:388)
==11843== by 0x509C37B: virEventPollCleanupHandles (vireventpoll.c:583)
==11843== by 0x509C711: virEventPollRunOnce (vireventpoll.c:652)
==11843== by 0x509A620: virEventRunDefaultImpl (virevent.c:274)
==11843== by 0x520D21C: virNetServerRun (virnetserver.c:1112)
==11843== by 0x11F368: main (libvirtd.c:1513)
==11843== Address 0x13b88864 is 4 bytes inside a block of size 136 free'd
==11843== at 0x4A07F5C: free (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==11843== by 0x5079A2F: virFree (viralloc.c:580)
==11843== by 0x50C29E3: virObjectUnref (virobject.c:270)
==11843== by 0x114770E4: qemuProcessHandleMonitorDestroy (qemu_process.c:1103)
==11843== by 0x1148F7CB: qemuMonitorDispose (qemu_monitor.c:257)
==11843== by 0x50C2991: virObjectUnref (virobject.c:262)
==11843== by 0x50C2D13: virObjectFreeCallback (virobject.c:388)
==11843== by 0x509C37B: virEventPollCleanupHandles (vireventpoll.c:583)
==11843== by 0x509C711: virEventPollRunOnce (vireventpoll.c:652)
==11843== by 0x509A620: virEventRunDefaultImpl (virevent.c:274)
==11843== by 0x520D21C: virNetServerRun (virnetserver.c:1112)
==11843== by 0x11F368: main (libvirtd.c:1513)
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Include a reference to the VM object pointer and name in the debug
logs for the QEMU start/stop functions. Also make sure we log the
PID that we started, since it isn't available elsewhere in the
logs.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>