Many of the functions follow the pattern:
virSecurity.*Security.*Label
Remove the second 'Security' from the names; it should be obvious
that the virSecurity* functions deal with security labels even
without it.
Commit 256496e1 introduced detection of "is locked" in the error reply
from the qemu monitor. Commit c4073657 fixed a memory leak, but Peter
pointed out that this could be done more cleanly without stringifying
the reply.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
The return value of virJSONValueToString() should be freed when
no longer needed. This is not the case after 256496e1.
==26902== 138 bytes in 2 blocks are definitely lost in loss record 1,051 of 1,239
==26902== at 0x4C29F80: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==26902== by 0xAA5F599: strdup (in /lib64/libc-2.21.so)
==26902== by 0x552BAD9: virStrdup (virstring.c:726)
==26902== by 0x54F60A7: virJSONValueToString (virjson.c:1790)
==26902== by 0x1DF6EBB9: qemuMonitorJSONEjectMedia (qemu_monitor_json.c:2225)
==26902== by 0x1DF57A4C: qemuMonitorEjectMedia (qemu_monitor.c:1985)
==26902== by 0x1DF1EF2D: qemuDomainChangeEjectableMedia (qemu_hotplug.c:199)
==26902== by 0x1DF90314: qemuDomainChangeDiskLive (qemu_driver.c:7985)
==26902== by 0x1DF90476: qemuDomainUpdateDeviceLive (qemu_driver.c:8030)
==26902== by 0x1DF91ED7: qemuDomainUpdateDeviceFlags (qemu_driver.c:8677)
==26902== by 0x561785F: virDomainUpdateDeviceFlags (libvirt-domain.c:8559)
==26902== by 0x134210: remoteDispatchDomainUpdateDeviceFlags (remote_dispatch.h:10966)
==26902== 106 bytes in 1 blocks are definitely lost in loss record 1,033 of 1,239
==26902== at 0x4C29F80: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==26902== by 0xAA5F599: strdup (in /lib64/libc-2.21.so)
==26902== by 0x552BAD9: virStrdup (virstring.c:726)
==26902== by 0x54F60A7: virJSONValueToString (virjson.c:1790)
==26902== by 0x1DF6EC0C: qemuMonitorJSONEjectMedia (qemu_monitor_json.c:2227)
==26902== by 0x1DF57A4C: qemuMonitorEjectMedia (qemu_monitor.c:1985)
==26902== by 0x1DF1EF2D: qemuDomainChangeEjectableMedia (qemu_hotplug.c:199)
==26902== by 0x1DF90314: qemuDomainChangeDiskLive (qemu_driver.c:7985)
==26902== by 0x1DF90476: qemuDomainUpdateDeviceLive (qemu_driver.c:8030)
==26902== by 0x1DF91ED7: qemuDomainUpdateDeviceFlags (qemu_driver.c:8677)
==26902== by 0x561785F: virDomainUpdateDeviceFlags (libvirt-domain.c:8559)
==26902== by 0x134210: remoteDispatchDomainUpdateDeviceFlags (remote_dispatch.h:10966)
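For illustration, a minimal sketch of the fix pattern (the variable names are hypothetical, not the actual qemu_monitor_json.c code):
char *cmdstr = virJSONValueToString(cmd, false);
char *replystr = virJSONValueToString(reply, false);
VIR_DEBUG("cmd=%s reply=%s", NULLSTR(cmdstr), NULLSTR(replystr));
VIR_FREE(cmdstr);    /* without these two frees the strings leak */
VIR_FREE(replystr);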
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Moving tasks to cgroups implied sched_setaffinity. Changing the cpus in
a set implies the same for all tasks in the group.
The old code put the thread into the cpuset inherited from the
machine cgroup, which allowed it to run outside of vcpupin for a short
while.
Signed-off-by: Henning Schild <henning.schild@siemens.com>
The machine cgroup is a superset, a parent to the emulator and vcpuX
cgroups. The parent cgroup should never have any tasks directly in it.
In fact the parent cpuset might contain way more cpus than the sum of
emulatorpin and vcpupins. So putting tasks in the superset will allow
them to run outside of <cputune>.
Signed-off-by: Henning Schild <henning.schild@siemens.com>
virCgroupNewMachine used to add the pidleader to the newly created
machine cgroup. Do not do this implicitly anymore.
Signed-off-by: Henning Schild <henning.schild@siemens.com>
Firstly, there's a bug (or typo) in the only place where we call
this function: @multiqueue is set whenever @tapfdSize is greater
than zero, while in fact the condition should have been 'greater
than one'.
Secondly, since the condition depends on just one variable, which
we are already passing down to the function, we can move the
condition into the function and drop the useless argument.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When a user configures a vhost-user interface and forgets to also
configure any shared memory, the search for the root cause of the
non-operational interface might take an unpleasantly long time. Let's
enhance the user experience by emitting a warning in the logs.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1266982
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Some older systems, e.g. RHEL-6, do not have the IFF_MULTI_QUEUE flag
which we use to enable the multiqueue feature. Therefore one gets the
following compile error there:
CC util/libvirt_util_la-virnetdevmacvlan.lo
util/virnetdevmacvlan.c: In function 'virNetDevMacVLanTapSetup':
util/virnetdevmacvlan.c:338: error: 'IFF_MULTI_QUEUE' undeclared (first use in this function)
util/virnetdevmacvlan.c:338: error: (Each undeclared identifier is reported only once
util/virnetdevmacvlan.c:338: error: for each function it appears in.)
make[3]: *** [util/libvirt_util_la-virnetdevmacvlan.lo] Error 1
So, whenever a user wants us to enable the feature on such systems,
we will just throw a runtime error instead.
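A minimal sketch of the intended pattern (the error message and surrounding code are illustrative only):
#ifdef IFF_MULTI_QUEUE
    if (multiqueue)
        ifr.ifr_flags |= IFF_MULTI_QUEUE;
#else
    if (multiqueue) {
        virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                       _("multiqueue is not supported on this system"));
        return -1;
    }
#endif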
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The libvirt file system storage driver determines what file to
act on by concatenating the pool location with the volume name.
If a user is able to pick names like "../../../etc/passwd", then
they can escape the bounds of the pool. For that matter,
virStoragePoolListVolumes() doesn't descend into subdirectories,
so a user really shouldn't use a name with a slash.
Normally, only privileged users can coerce libvirt into creating
or opening existing files using the virStorageVol APIs; and such
users already have full privilege to create any domain XML (so it
is not an escalation of privilege). But in the case of
fine-grained ACLs, it is feasible that a user can be granted
storage_vol:create but not domain:write, and it violates
assumptions if such a user can abuse libvirt to access files
outside of the storage pool.
Therefore, prevent all use of volume names that contain "/",
whether or not such a name is actually attempting to escape the
pool.
This changes things from:
$ virsh vol-create-as default ../../../../../../etc/haha --capacity 128
Vol ../../../../../../etc/haha created
$ rm /etc/haha
to:
$ virsh vol-create-as default ../../../../../../etc/haha --capacity 128
error: Failed to create vol ../../../../../../etc/haha
error: Requested operation is not valid: volume name '../../../../../../etc/haha' cannot contain '/'
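A minimal sketch of the kind of check involved (illustrative, not the exact libvirt code):
if (strchr(vol->name, '/')) {
    virReportError(VIR_ERR_OPERATION_INVALID,
                   _("volume name '%s' cannot contain '/'"), vol->name);
    return -1;
}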
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit id '56e2171c6' removed a variable from the argument list, but
neglected to update the ATTRIBUTE_NONNULL values, so when commit id
'08da97bfb' added a couple of arguments, the values were off.
Always return LLONG_MAX even on 32 bit systems. The limitation
originates from our use of "unsigned long" in several APIs. The internal
data type is unsigned long long. Make the test suite deterministic by
removing the architecture difference.
The flaw was introduced in 645881139b where
I added a test that uses numbers that are too large.
https://bugzilla.redhat.com/show_bug.cgi?id=1240439
Ta-da! Now that we know how to open a macvtap device multiple
times, we can finally enable the multiqueue feature. Everything
else is already prepared (e.g. command line generation) from the
previous iteration where the feature was implemented for
TUN/TAP devices.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
For the multiqueue on macvtaps we are going to need to open
the device multiple times. Currently, this is not supported.
Rework the function, so that upper layers can be reworked too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Like we are doing for TUN/TAP devices, we should do the same for
macvtaps. Although it's not as critical as in that case, we
should do it for consistency.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
For the multiqueue on macvtaps we are going to need to open
the device multiple times. Currently, this is not supported.
Rework the function, so that upper layers can be reworked too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
For the multiqueue on macvtaps we are going to need to open
the device multiple times. Currently, this is not supported.
Rework the function, so that upper layers can be reworked too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
There are a few outdated things. Firstly, we don't need to undergo
the torture of fopen, fscanf and fclose just to get the interface
index when we have a nice wrapper for that: virNetDevGetIndex.
Secondly, we don't need to have a statically allocated buffer for
the path we are opening.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
So yet again, one of the integer arguments is used as a boolean.
Since the argument list of the function is already unbearably long,
let's turn those booleans into flags.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
On the very first log message we send to any output, we include
the libvirt version number and package string. In some bug reports
we have been given libvirtd.log files that came from a different
host than the corresponding /var/log/libvirt/qemu log files. So
extend the initial log message to include the hostname too.
e.g. on the first log message we would now see:
$ libvirtd
2015-12-04 17:35:36.610+0000: 20917: info : libvirt version: 1.3.0
2015-12-04 17:35:36.610+0000: 20917: info : hostname: dhcp-1-180.lcy.redhat.com
2015-12-04 17:35:36.610+0000: 20917: error : qemuMonitorIO:687 : internal error: End of file from monitor
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1276198
Prior to commit id '98322052' failure to saferead the block device would
cause an error to be logged and the device to be skipped while attempting
to discover/create a stable target path for a new LUN (NPIV).
This was because virStorageBackendSCSIFindLUs ignored errors from
processLU and virStorageBackendSCSINewLun.
Ignoring the failure allowed a multipath device with an "active" and
"ghost" to be present on the host with the "ghost" block device being
ignored. This patch will return a -2 to the caller indicating the desire
to ignore the block device since it cannot be used directly rather than
fail the pool startup.
I found this useful while processing a volume that wouldn't end up
showing up in the resulting list of block volumes. In this case, the
partition type wasn't found in the disk_types table.
Similar to the openflags VIR_STORAGE_VOL_OPEN_NOERROR processing, if some
read processing operation fails, check the readflags for the corresponding
error flag being set. If so, rather than causing an error, use VIR_WARN
to flag the error, but return -2 which some callers can use to perform
specific actions. Use a new VIR_STORAGE_VOL_READ_NOERROR flag in a new
VolReadErrorMode enum.
While processing the volume for lseek, virFileReadHeaderFD, and
virStorageFileGetMetadataFromBuf - failure would cause an error,
but ret would not be set. That would result in an error message being
sent, but successful status being returned.
Just so it's clearer what to expect upon input and what types of return
values could be generated. These were loosely copied from existing
virStorageBackendUpdateVolTargetInfoFD.
Similar to the openflags which allow VIR_STORAGE_VOL_OPEN_NOERROR to be
passed to avoid open errors, add a 'readflags' variable so that in the
future read failures could also be ignored.
Add qemuDomainHasVCpuPids to do the checking and replace the in-place
checks with it.
We no longer need to check whether the thread contains fake data
(vcpupids[0] == vm->pid) since this was removed in b07f3d821d
and 65686e5a81.
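A minimal sketch of what such a helper boils down to (the private-data field names are assumptions):
bool
qemuDomainHasVCpuPids(virDomainObjPtr vm)
{
    qemuDomainObjPrivatePtr priv = vm->privateData;

    return priv->nvcpupids > 0;
}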
The vCPU threads make sense in the counterparts that set the vCPU
bandwidth/quota, not in the emulator one. The emulator tunables are set
all the time anyways.
Drop the extra check and remove the now unneeded vm argument.
Since commit 0c04906fa the check for priv->cgroup doesn't make sense as
the calls to virCgroupHasController return the same information. Remove
it and move its comment partially to the new check.
The already spurious check was also later copied to the iothreads code.
Once more stuff will be moved into the vCPU data structure it will be
necessary to get a specific one on some occasions. Add a helper that will
simplify this task.
Refactor the code flow so that 'exit_monitor:' can be removed.
This patch moves the auditing functions into places where it's certain
that hotunplug was or was not successful and reports errors from
qemuMonitorGetCPUInfo properly.
Refactor the code flow so that 'exit_monitor:' can be removed.
This patch also moves the auditing and setting of the new vCPU count
right to the place where the hotplug happens, since it's possible that
the hotplug succeeds and adds a cpu while other stuff fails.
Lastly, failures of qemuMonitorGetCPUInfo are now reported rather than
ignored. The function returns 0 if it "successfully" detected 0 threads.
qemuDomainHotplugVcpus/qemuDomainHotunplugVcpus are complex enough just
in regard to adding one CPU. Additionally it will be desirable to reuse
those functions later with specific vCPU hotplug.
Move the loops for adding vCPUs into qemuDomainSetVcpusFlags so that the
helpers can be made simpler and more straightforward.
The cpu hotplug helper functions used negative error handling in a part
of them, although some code that was added later didn't properly set the
error codes in some cases. This would cause improper error messages in
cases where we couldn't modify the numa cpu mask and a few other cases.
Fix the logic by converting it to the regularly used pattern.
With a very unfortunate timing, the agent might vanish before we do the
second call while the locks were down. Re-check that the agent is
available before attempting it again.
We should make a copy of current definition to preserve a persistent
definition, because we later update the definition with live changes.
The live definition is discarded on domain shutdown and replaced by the
copy we make before starting the domain.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
This change ensures that driver-specific post-parse code is called to
modify the domain definition after parsing the hypervisor config, the
same way we do after parsing XML.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
This change ensures that driver-specific post-parse code is called to
modify the domain definition after parsing the hypervisor config, the
same way we do after parsing XML.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
This change ensures that driver-specific post-parse code is called to
modify the domain definition after parsing the hypervisor config, the
same way we do after parsing XML.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
This reverts commit d2e5538b16.
A migration regression was introduced by this commit. When migrating
a domain, its active XML is sent to the destination libvirtd, where
it is parsed as inactive XML. d2e5538b copied the libxl generated
interface name into the active config, which was being passed to the
migration destination and being parsed into inactive config. Attempting
to start the config could result in failure if an interface with the
same generated name already exists.
The qemu driver behaves similarly, but the parser contains a hack to
skip interface names starting with 'vnet' when parsing inactive XML.
We could extend the hack to skip names starting with 'vif' too, but a
better fix would be to expose these hypervisor-specific interface name
prefixes in capabilities. See the following discussion thread for more
details
https://www.redhat.com/archives/libvir-list/2015-December/msg00262.html
For the pending 1.3.0 release, it is best to revert d2e5538b. It can
be added again post release, after moving the prefix to capabilities.
The virtlogd RPC messages all have a flags parameter. For the
sake of future error reporting we should be verifying that
these are all 0 for now.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The current virtlogd RPC protocol provides the ability to
handle log files associated with QEMU stdout/err. The log
protocol messages take the virt driver, domain name and
use that to form a log file path. This is quite restrictive
as it prevents us re-using the same RPC protocol messages
for logging to char device backends where the filename
can be arbitrarily user specified. It is also bad because
it means we have 2 separate locations which have to decide
on the logfile name.
This change alters the RPC protocol so that we pass the
desired log file path along when opening the log file
initially. Now the virt driver is exclusively in charge
of deciding the log filename.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The virt driver, dom name and uuid associated with a log
file are important pieces of metadata to keep around for the
sake of future enhancements to virtlogd. Currently we
discard them after opening the log file, but we should
preserve them, even across restarts.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
virDomainSetMemory is documented to change only the runtime configuration
of a running domain. However, that's not true for all supported hypervisors.
Seems as though when commit id '0f2e50be5' added the current flag, the
function description should have been updated similar to when commit id
'c1795c52' updated the virDomainSetMaxMemory description. Especially since
commit id '80427f1d' updated the virsh 'setmem' description to indicate
"behavior is different depending on hypervisor."
This patch will update the description to match current functionality.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
When a SCSI disk is hotplugged to a domain that does not have the required
SCSI controller already defined and loaded, the following internal error occurs:
error: Failed to attach device from scsi_disk.xml
error: internal error: Could not find scsi controller with index 0 required for device
Commit 0260506c added in method qemuBuildDriveDevStr a lookup of the controller
alias. The internal error occurs because in method qemuDomainAttachSCSIDisk
the automatic creation of the potentially missing SCSI controller occurs after
calling qemuBuildDriveDevStr.
This patch reverses the calling sequence.
Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
Reviewed-by: Stefan Zimmermann <stzi@linux.vnet.ibm.com>
Often when debugging bug reports one is given a copy of the file
from /var/log/libvirt/qemu/$NAME.log along with other supporting
files. In a number of cases I've been given sets of files which
were from different machines. Including the hostname in the QEMU
log file will help identify when the bug reporter is providing
bad information.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The log file descriptor associated with the virRotatingFile
struct should be marked close-on-exec, as even when virtlogd
re-exec's itself it expects to open the log file fresh. It
does not need to preserve the logfile handles, only the network
client FDs.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Since libvirt for dubious historical reasons stores memory size as
kibibytes, it's possible that the alignments done in the qemu code
overflow the maximum representable size in bytes. The XML parser
code handles them in bytes in some stages. Prevent this by doing
overflow checks when aligning the size and add a test case.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1260576
This patch reverts parts of commits 0d8b24f6b and 0785966d dealing with
the addition of a controller during virDomainHostdevAssignAddress. This
caused a regression for the hostdev hotplug path which assumes the
qemuDomainFindOrCreateSCSIDiskController will add the new controller
during qemuDomainAttachHostSCSIDevice to both the running domain and
the domain def controller list when the controller doesn't yet exist
(whether due to no SCSI controllers existing or the addition of a new
controller because existing ones are full).
Since commit id 0d8b24f6 will call virDomainHostdevAssignAddress during
virDomainDeviceDefPostParseInternal which is called either during domain
definition post processing (via an iterator during virDomainDefPostParse)
or directly from virDomainDeviceDefParse during hotplug, the change
broke the "side effect" of being able to add both a hostdev and controller
to the running domain.
The regression would only be seen if the running domain didn't have a
SCSI controller already defined or if the existing SCSI controller was
"full" of devices and a new controller needed to be created.
This patch will also add some extra comments to the code to avoid a
similar future change.
Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
Reviewed-by: Stefan Zimmermann <stzi@linux.vnet.ibm.com>
Instead of comparing garbage strings against real MAC addresses,
introduce an error message for unparsable ones:
$ virsh net-dhcp-leases default --mac t12
error: Failed to get leases info for default
error: invalid MAC address: t12
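A minimal sketch of the validation (illustrative variable names):
virMacAddr mac;

if (virMacAddrParse(macstr, &mac) < 0) {
    virReportError(VIR_ERR_INVALID_ARG,
                   _("invalid MAC address: %s"), macstr);
    return -1;
}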
https://bugzilla.redhat.com/show_bug.cgi?id=1261432
Introduce support for domainInterfaceStats API call for querying
network interface statistics. Consequently it also enables the
use of `virsh domifstat <dom> <interface name>` command plus
seeing the interface names instead of "-" when doing
`virsh domiflist <dom>`.
After successful guest creation we fill in the network
interface names based on domain and device id, and append a suffix
if it's emulated, in the following form: vif<domid>.<devid>[-emu].
We extract the network interfaces info from the libxl_domain_config
object in libxlDomainCreateIfaceNames() to generate ifname. On domain
cleanup we also clear ifname, in case it was set by libvirt (i.e.
being prefixed with "vif"). We also skip these two steps in case the name
of the interface was manually inserted by the administrator.
For getting the interface statistics we resort to virNetInterfaceStats
and let libvirt handle the platform specific nits. Note that the latter
is not yet supported in FreeBSD.
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Commit 0f7436ca54 "network: wait for DAD to finish for bridge IPv6 addresses"
results in:
CC util/libvirt_util_la-virnetdevmacvlan.lo
util/virnetdev.c: In function 'virNetDevParseDadStatus':
util/virnetdev.c:1319:188: error: cast increases required alignment of target type [-Werror=cast-align]
util/virnetdev.c:1332:41: error: cast increases required alignment of target type [-Werror=cast-align]
util/virnetdev.c:1334:92: error: cast increases required alignment of target type [-Werror=cast-align]
cc1: all warnings being treated as errors
on at least ARM platforms.
The three macros involved (NLMSG_NEXT, IFA_RTA and RTA_NEXT) all appear to
correctly take care of alignment, therefore suppress Wcast-align around their
uses.
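For illustration, a sketch of scoping the suppression with plain GCC pragmas (libvirt's own warning-suppression macros can be used instead):
#include <linux/netlink.h>

static void
walkNetlinkMessages(char *buf, unsigned int len)
{
    struct nlmsghdr *nh;

#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wcast-align"
    for (nh = (struct nlmsghdr *)buf; NLMSG_OK(nh, len); nh = NLMSG_NEXT(nh, len)) {
        /* the netlink macros handle alignment internally */
    }
#pragma GCC diagnostic pop
}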
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Maxim Perevedentsev <mperevedentsev@virtuozzo.com>
Cc: Laine Stump <laine@laine.org>
Cc: Dario Faggioli <dario.faggioli@citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
The problem is that in some mingw header DATADIR is used but
gnulib defines it too. This leads to the following compile error:
CC locking/libvirt_driver_la-lock_manager.lo
In file included from /usr/i686-w64-mingw32/sys-root/mingw/include/objbase.h:66:0,
from /usr/i686-w64-mingw32/sys-root/mingw/include/ole2.h:17,
from /usr/i686-w64-mingw32/sys-root/mingw/include/wtypes.h:12,
from /usr/i686-w64-mingw32/sys-root/mingw/include/winscard.h:10,
from /usr/i686-w64-mingw32/sys-root/mingw/include/windows.h:97,
from /usr/i686-w64-mingw32/sys-root/mingw/include/winsock2.h:23,
from ../gnulib/lib/unistd.h:48,
from ../../src/util/virutil.h:29,
from ../../src/logging/log_manager.c:30:
/usr/i686-w64-mingw32/sys-root/mingw/include/objidl.h:12275:2: error: expected identifier or '(' before string constant
} DATADIR;
^
Makefile:7888: recipe for target 'logging/libvirt_driver_la-log_manager.lo' failed
The fix is to include configmake.h at the end of includes.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Commit 48cd3dfa66 introduced a configuration
file for libvirt-admin but forgot to distribute it. Also the change
made to libvirt.conf in commit dbecb87f94
should've been removed thanks to the introduction of a separate config file.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
virAdmConnect was named after virConnect, but after some discussions,
most of the APIs called will be working with the remote daemon, so starting
them with virAdmDaemon will make more sense. The only possibly controversial
name is the CloseCallback (de)registration, and connecting to the daemon (which
will still be Open/Close), but even this makes sense if one thinks about
the daemon being opened and closed, e.g. as a file, etc.
This way all the APIs working with the daemon will start with
virAdmDaemon prefix, they will accept virAdmDaemonPtr as first parameter
and that will better suit with other namings as well (virDomain*,
virAdmServer*, etc.).
Because in virt-admin, the connection name does not refer to a struct
that would have a connect in its name, also adjust 'connname' in
clients. And because it is not used anywhere in the vsh code, move it
from there into each client.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
If VM A is shut down by the qemu agent at approximately the same time
an agent EOF of VM A happens, there's a chance that a deadlock may occur:
qemuProcessHandleAgentEOF in main thread
A) priv->agent = NULL; //A happened before B
//deadlock when we get agent lock which's held by worker thread
qemuAgentClose(agent);
qemuDomainObjExitAgent called by qemuDomainShutdownFlags in worker thread
B) hasRefs = virObjectUnref(priv->agent); // priv->agent is NULL,
// return false
if (hasRefs)
virObjectUnlock(priv->agent); //agent lock will not be released here
In order to resolve, during EOF close the agent first, then set priv->agent
to NULL to fix the deadlock.
This essentially reverts commit id '1020a504'. It's also of note that commit
id '362d0477' notes a possible/rare deadlock similar to what was seen in
the monitor in commit id '25f582e3'. However, it seems intervening changes
including commit id 'd960d06f' should remove the deadlock issue.
With this change, if EOF is called:
Get VM lock
Check if !priv->agent || priv->beingDestroyed, then unlock VM
Call qemuAgentClose
Unlock VM
When qemuAgentClose is called
Get Agent lock
If Agent->fd open, close it
Unlock Agent
Unref Agent
qemuDomainObjEnterAgent
Enter with VM lock
Get Agent lock
Increase Agent refcnt
Unlock VM
After running agent command, calling qemuDomainObjExitAgent
Enter with Agent lock
Unref Agent
If not last reference, unlock Agent
Get VM lock
If we were in the middle of an EnterAgent, call Agent command, and
ExitAgent sequence and the EOF code is triggered, then the EOF code
can get the VM lock, make its checks against !priv->agent ||
priv->beingDestroyed, and call qemuAgentClose. The AgentClose
would wait to get the agent lock. The other thread then will eventually
call ExitAgent, release the Agent lock and unref the Agent. Once
ExitAgent releases the Agent lock, AgentClose will get the Agent
lock, close the fd, unlock the agent, and unref the agent.
The final unref would cause deletion of the agent.
Signed-off-by: Wang Yufei <james.wangyufei@huawei.com>
Reviewed-by: Ren Guannan <renguannan@huawei.com>
The parameter --disable-dependency-tracking is supposed to speed up a
one-time build due to the fact that it disables some dependency
extractors that, apparently, take a longer time to execute. That is a
problem for code that is generated into builddir (especially some
specific subdirectory) because the directory it should be installed to
does not exist in VPATH and without the dependency tracking it is not
created. Generating such a file hence fails with -ENOENT. In order to
keep generating files into builddir instead of srcdir, we must create
the directory ourselves. This should finally fix the problem that is
being fixed multiple times since its introduction in commit a9fe620372
and let us continue with cleaning those parts of Makefiles that depend
on generating files into the srcdir rather than builddir as it should
be.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Our domain_conf.* files are big enough. Not only do they contain XML
parsing code, but they have served as storage for all functions whose
name is virDomain prefixed. This is just wrong as it gathers unrelated
functions (and modules) into one big file which is then
harder to maintain. Split the virDomainObjList module into a separate
file called virdomainobjlist.[ch].
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
To be used by the family of virtio input devices:
<input type='mouse' bus='virtio'/>
<input type='tablet' bus='virtio'/>
<input type='keyboard' bus='virtio'/>
https://bugzilla.redhat.com/show_bug.cgi?id=1231114
Add capabilities for virtio-keyboard, virtio-mouse
and virtio-tablet devices:
name "virtio-keyboard-device", bus virtio-bus
name "virtio-keyboard-pci", bus PCI
name "virtio-mouse-device", bus virtio-bus
name "virtio-mouse-pci", bus PCI
name "virtio-tablet-device", bus virtio-bus
name "virtio-tablet-pci", bus PCI
Map both -device and -pci versions of the device to one capability.
https://bugzilla.redhat.com/show_bug.cgi?id=1231114
If for some reason there is an existing log file that is larger than
the maximum log file length, we need to roll that file over immediately.
Trying to figure out how much data we could write would result in an
overflow of the unsigned variable 'towrite' and this leads to a segfault.
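A sketch of the hazard and the fix (the field names and the rollover helper are hypothetical):
/* Roll the file over first if it already exceeds the limit; otherwise
 * 'maxlen - len' wraps around for an unsigned type. */
if (file->len >= file->maxlen) {
    if (virRotatingFileWriterRollover(file) < 0)   /* hypothetical helper */
        return -1;
    file->len = 0;
}
towrite = file->maxlen - file->len;   /* now cannot underflow */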
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Introduce a new API to get the libvirt version. It is worth noting that
libvirt-admin and libvirt share the same version number. Unfortunately,
our existing API isn't generic enough to be used with virAdmConnectPtr
as well. Also this patch wires up this API to the virt-admin client
as a generic cmdVersion command.
As we need a client disconnect handler, we also need a mechanism to register
such handlers for a client. This patch introduces both the close callbacks and
also the client vshAdmCatchDisconnect handler to be registered with it. By
registering the handler we still need to make sure the client can react to
the daemon's events like disconnect or keepalive, so asynchronous I/O event
polling needs to be enabled too.
Now that we introduced URI support in libvirt-admin, we should also support URI
aliases during the connection establishment phase. After applying this patch,
virAdmConnectOpen will also support VIR_CONNECT_NO_ALIASES flag.
As we need to provide support for URI aliases in libvirt-admin as well, URI
alias matching needs to be internally visible. Since
virConnectOpenResolveURIAlias does have a compatible signature, it could be
easily reused by libvirt-admin. This patch moves URI alias matching to util,
renaming it accordingly.
As we plan to add more and more logic to remote connecting methods,
these cannot be generated from admin_protocol.x anymore. Instead,
this patch implements these two methods explicitly.
By moving the remote version into a separate module, we gain a slightly
better maintainability in the long run than just by leaving it in one
place with the existing libvirt-admin library which can start getting
pretty messy later on.
Since most of our APIs rely on an active functional connection to a daemon and
we have such a mechanism in libvirt already, there's a need to have such a way in
libvirt-admin as well. By introducing a new public API, this patch provides
support to check for an active connection.
Unfortunately, client side version retrieval API virGetVersion uses
one-time initialization (due to the fact we might not have initialized the
library by calling connect prior to this) which is not completely compatible
with admin initialization. This API is rather simplistic and reimplementing
it for admin might be the preferred method of reusing it. Note that even though
the method will be reimplemented, the version number is still the same for both
the libvirt and libvirt-admin library.
virConnectGetConfig and virConnectGetConfigPath were static libvirt
methods, merely because there hasn't been any need for having them
internally exported yet. Since libvirt-admin also needs to reference
its config file, 'xGetConfig' should be exported.
Besides moving, this patch also renames the methods accordingly,
as they are libvirt config specific.
Check if virtio-gpu provides the virgl option, and add a qemu command line
formatter.
It is enabled with the existing accel3d attribute:
<model type='virtio' heads='1'>
<acceleration accel3d='yes'/>
</model>
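With that enabled, the generated QEMU command line would contain something along the lines of (exact device name depends on the rest of the configuration):
-device virtio-vga,virgl=on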
Signed-off-by: Marc-André Lureau <marcandre.lureau@gmail.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
qemu 2.5 provides a virtio video device. It can be used with -device
virtio-vga for primary devices, or -device virtio-gpu for non-vga
devices. However, only the primary device (VGA) is supported with this
patch.
Reference:
https://bugzilla.redhat.com/show_bug.cgi?id=1195176
Signed-off-by: Marc-André Lureau <marcandre.lureau@gmail.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This allows having the extra undefined/default state.
Signed-off-by: Marc-André Lureau <marcandre.lureau@gmail.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Machine name escaping follows the same rules as service name escaping,
except that '.' and '-' must not be escaped in machine names, due
to a bug in systemd-machined.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1282846
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
The libvirt_logd.aug and test_libvirt_logd.aug.in files
have never existed so shouldn't be in EXTRA_DIST. It was
a copy+paste mistake when cloning virtlogd from virtlockd.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
With some versions of GLibC / GCC, a variable called 'daemon'
will result in a warning about clashing with the function also
named 'daemon'. Rename it to 'dmn' to avoid the clash.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Otherwise we fail on 32bit with:
CC logging/virtlogd-log_daemon_dispatch.o
logging/log_daemon_dispatch.c: In function 'virLogManagerProtocolDispatchDomainReadLogFile':
logging/log_daemon_dispatch.c:120:9: error: format '%zu' expects argument of type 'size_t', but argument 7 has type 'uint64_t' [-Werror=format]
The virtlogd daemon is launched with a 30 second timeout for
unprivileged users. Unfortunately the timeout is only inhibited
while RPC clients are connected, and they only connect for a
short while to open the log file descriptor. We need to hold
an inhibition for as long as the log file descriptor itself
is open.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Currently the QEMU stdout/stderr streams are written directly to
a regular file (eg /var/log/libvirt/qemu/$GUEST.log). While those
can be rotated by logrotate (using copytruncate option) this is
not very efficient. It also leaves open a window of opportunity
for a compromised/broken QEMU to DOS the host filesystem by
writing lots of text to stdout/stderr.
This makes it possible to connect the stdout/stderr file handles
to a pipe that is provided by virtlogd. The virtlogd daemon will
read from this pipe and write data to the log file, performing
file rotation whenever a pre-determined size limit is reached.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Currently the QEMU monitor is given an FD to the logfile. This
won't work in the future with virtlogd, so it needs to use the
qemuDomainLogContextPtr instead, but it shouldn't directly
access that object either. So define a callback that the
monitor can use for reporting errors from the log file.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
When the qemuProcessAttach/Stop methods write a marker into
the log file, they can use qemuDomainLogContextWrite to
write a formatted message.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Instead of writing directly to a log file descriptor, change
qemuLogOperation to use qemuDomainLogContextWrite().
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The qemuDomainTaint APIs currently expect to be passed a log file
descriptor. Change them to instead use a qemuDomainLogContextPtr
to hide the implementation details.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Convert the places which create/open log files to use the new
qemuDomainLogContextPtr object instead.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Introduce a qemuDomainLogContext object to encapsulate
handling of I/O to/from the domain log file. This will
hide details of the log file implementation from the
rest of the driver, making it easier to introduce
support for virtlogd later.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
There are two pretty similar functions qemuProcessReadLog and
qemuProcessReadChildErrors. Both read from the QEMU log file
and try to strip out libvirt messages. The latter then reports
an error, while the former lets the callers report an error.
Re-write qemuProcessReadLog so that it uses a single read
into a dynamically allocated buffer. Then introduce a new
qemuProcessReportLogError that calls qemuProcessReadLog
and reports an error.
Convert all callers to use qemuProcessReportLogError.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The rename operation only works on inactive virtual machines,
but it none the less writes to the log file used by the QEMU
processes. This log file is not intended to provide a general
purpose audit trail of operations performed on VMs. The audit
subsystem has recording of important operations. If we want
to extend that to cover all significant public APIs that is
a valid thing to consider, but we shouldn't arbitrarily log
specific APIs into the QEMU log file in the meantime.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Add the virLogManager API which allows for communication with
the virtlogd daemon's RPC program. This provides the client
side API to open log files for guest domains.
The virtlogd daemon is setup to auto-spawn on first use when
running unprivileged. For privileged usage, systemd socket
activation is used instead.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Define a new RPC protocol for the virtlogd daemon that provides
for handling of logs. The initial RPC method defined allows a
client to obtain a file handle to use for writing to a log
file for a guest domain. The file handle passed back will not
actually refer to the log file, but rather an anonymous pipe.
The virtlogd daemon will forward I/O between them, ensuring
file rotation happens when required.
Initially the log setup is hardcoded to cap log files at
128 KB, and keep 3 backups when rolling over, which gives
a max usage of 512 KB per guest.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Copy the virtlockd codebase across to form the initial virtlogd
code. Simple search & replace of s/lock/log/ and gut the remote
protocol & dispatcher. This gives us a daemon that starts up
and listens for connections, but does nothing with them.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Add virRotatingFileReader and virRotatingFileWriter objects
which allow reading & writing from/to files with automatic
rotation to N backup files when a size limit is reached. This
is useful for guest logging when a guaranteed finite size
limit is required. Use of external tools like logrotate is
inadequate since it leaves the possibility for the guest to DOS
the host in between invocations of logrotate.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
According to the documentation, CreateMachine accepts only 7-bit ASCII
characters in the machinename parameter, so let's make sure we can start
machines with unicode names with systemd. We already have a function
for that, we just forgot to use it.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1062943
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1282846
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Using qemuProcess{Init,Launch,FinishStartup} allows us to run
pre-migration commands on destination before asking QEMU to wait for
incoming migration data.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
NBD storage migration will not work with offline migration anyway and we
already checked that the user did not ask for it. Thus it doesn't make
sense to keep the code after 'done' label where we jump in case of
offline migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Some failure paths in qemuMigrationPrepareAny forgot to kill the just
started QEMU process. This patch fixes this by combining the 'stop' and
'endjob' labels into a new label 'stopjob'. This name was chosen to avoid
confusion with the most common semantics of 'endjob'. Normally, 'endjob'
is always called at the end of an API to stop the job we entered at the
beginning. In qemuMigrationPrepareAny we only want to stop the job in
failure path; on success we need to carry the job over to the Finish
phase.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Once qemuProcessInit was called, qemuProcessLaunch will launch a new
QEMU process with stopped virtual CPUs.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuProcessStart is going to be split into three parts: qemuProcessInit,
qemuProcessLaunch, and qemuProcessFinish so that migration Prepare phase
can insert additional code in the process. qemuProcessStart will be a
small wrapper for all other callers.
qemuProcessInit prepares the domain up to the point when priv->qemuCaps
is initialized.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
A 'model' attribute was added to the panic device, but only one panic
device is allowed. This patch changes panic device presence
from 'optional' to 'zeroOrMore'.
Panic device type used depends on 'model' attribute.
If no model is specified then device type depends on hypervisor
and guest arch. 'pseries' model is used for pSeries guest and
'isa' model is used in other cases.
XML:
<devices>
<panic model='hyperv'/>
</devices>
QEMU command line:
qemu -cpu <cpu_model>,hv_crash
Libvirt already has two types of panic devices - pvpanic and pSeries firmware.
This patch introduces the 'model' attribute and a new type of panic device.
'isa' model is for ISA pvpanic device.
'pseries' model is a default value for pSeries guests.
'hyperv' model is the new type. It's used for Hyper-V crash.
Schema and docs are updated for the new attribute.
A PCI device may have the capability to set up virtual functions (VFs)
but have them all currently disabled. Prior to this patch, if that was
the case the node device XML for the device wouldn't report any
virtual_functions capability.
With this patch, if a file called "sriov_totalvfs" is found in the
device's sysfs directory, its contents will be interpreted as a
decimal number, and that value will be reported as "maxCount" in a
capability element of the device's XML, e.g.:
<capability type='virtual_functions' maxCount='7'/>
This will be reported regardless of whether or not any VFs are
currently enabled for the device.
NB: sriov_numvfs (the number of VFs currently active) is also
available in sysfs, but that value is implied by the number of items
in the list that is inside the capability element, so there is no
reason to explicitly provide it as an attribute.
sriov_totalvfs and sriov_numvfs are available in kernels at least as far
back as the 2.6.32 that is in RHEL6.7, but in the case that they
simply aren't there, libvirt will behave as it did prior to this patch
- no maxCount will be displayed, and the virtual_functions capability
will be absent from the device's XML when 0 VFs are enabled.
Report the maximum possible number of VFs for an SRIOV PF, like this:
<capability type='virtual_functions' maxCount='7'>
...
</capability>
I've just discovered that the virtual_functions and physical_functions
capabilities are not supported in the virNodeDeviceParse functions,
only in virNodeDeviceFormat (I suppose because they are only reported,
not set from XML). This should probably be remedied, but is less
immediately useful than the current patch.
The checked predicate is a deduction from the following checks:
1) maximum cpu id is checked for every parsed <vcpusched> element
2) the resulting bitmaps are checked for overlaps
3) there has to be at least one cpu per <vcpusched>
From the above checks we can indeed deduce that if we have one
<vcpusched> element per CPU we will have at most 'maxvcpus' of them.
Drop the explicit check since it's redundant.
Now that new domains are started inside a QEMU_ASYNC_JOB_START job,
we need to pass it down to qemuProcessStartCPUs too.
This removes the warning:
qemuDomainObjEnterMonitorInternal:1750 : This thread seems to be the
async job owner; entering monitor without asking for a nested job is
dangerous
Introduced by commit 04c721f, before that this code path was only
executed with QEMU_ASYNC_JOB_NONE.
(This code is not executed on migration, because qemuMigrationPrepareAny
sets the VIR_QEMU_PROCESS_START_PAUSED flag.)
The domain definition is not needed in any of these functions.
Only pass it to qemuSetupChardevCgroup, which is used as a callback
for virDomainChrDefForeach.
Use the right type for passing virDomainObjPtr instead of
void* where possible.
https://bugzilla.redhat.com/show_bug.cgi?id=1282288
Rather than using just open on the path, allow for the possibility that
the path to be opened resides on an NFS root-squash target and was created
under a different uid/gid.
Without using virFileOpenAs an attempt to get the volume size data may fail
if the current user doesn't have permissions to read the volume, such as
would be the case if mode wasn't supplied in the volume XML and the default
VIR_STORAGE_DEFAULT_VOL_PERM_MODE (e.g. 0600) was used. Under this scenario
the owner/group is not root:root, thus this path run under root would fail
to open/read the volume.
NB: The virFileOpenAs code using OPEN_FORK will only work when the failure
is not EACCES/EPERM and the path resolves to a shared file system.
https://bugzilla.redhat.com/show_bug.cgi?id=1282288
Although commit id '77346f27' resolves part of the problem regarding creating
a qemu-img image in an NFS root-squash environment, it really didn't fix the
entire problem. Unfortunately it only masked the problem. It seems qemu-img
must open/create the image using 0644, which if used by target.perms would
result in the chmod not being called since the mode desired and set match.
Although qemu-img could conceivably ignore the mode when creating, libvirt
has more knowledge of the environment and can make the adjustment to the
mode far more easily by using virFileOpenAs with VIR_FILE_OPEN_FORCE_MODE.
If that's successful, then we know on return the file will have the right
owner and mode, so we can declare success.
The amount of memory a ppc64 domain might need to lock is different
than that of an equally-sized x86 domain, so we need to check the
domain's architecture and act accordingly.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1273480
The function is used everywhere else to check whether the locked
memory limit should be set / updated, and it should be used here
as well.
Moreover, qemuDomainGetMlockLimitBytes() expects the hostdev to
have already been added to the domain definition, but we only do
that at the end of qemuDomainAttachHostPCIDevice(). Work around
the issue by adding the hostdev before adjusting the locked memory
limit and removing it immediately afterwards.
Commit 6472e54a unlocks the virDomainObj even if libxlDomainObjEndJob
returns false, indicating that its refcnt has dropped to 0.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Commits b6e19cf4 and 6472e54a missed unref'ing the
libxlDriverConfig object. Add missing calls to virObjectUnref.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Remembering to call qemuMonitorSetDomainLog in the right paths before
calling qemuProcessStop is annoying and easy to forget. And I already
forgot to do so in commit v1.2.8-52-g0389060: logfd may be leaked if
QEMU process dies between Prepare and Finish migration phases.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuProcessStart is so big that any nontrivial code should be moved to
dedicated functions to make the code easier to read and maintain.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuProcessStart is so big that any nontrivial code should be moved to
dedicated functions to make the code easier to read and maintain.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuProcessStart is so big that any nontrivial code should be moved to
dedicated functions to make the code easier to read and maintain.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuProcessStart is so big that any nontrivial code should be moved to
dedicated functions to make the code easier to read and maintain.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Traditionally, we pass the incoming migration URI on the QEMU command line,
which has some drawbacks. Depending on the URI, QEMU may initialize its
migration state immediately without giving us a chance to set any
additional migration parameters (this applies mainly for fd: URIs). For
some URIs the monitor may be completely blocked from the beginning until
migration is finished, which means we may be stuck in qmp_capabilities
command without being able to send any QMP commands.
QEMU solved this by introducing "defer" parameter for -incoming command
line option. This will tell QEMU to prepare for an incoming migration
while the actual incoming URI is sent using migrate-incoming QMP
command. Before calling this command we can normally talk to the
monitor and even set any migration parameters which will be honored by
the incoming migration.
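For example (illustrative URI), instead of passing the URI on the command line the flow becomes:
qemu ... -incoming defer
(QMP) { "execute": "migrate-incoming", "arguments": { "uri": "tcp:0.0.0.0:4444" } }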
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
We only started an async job for incoming migration from another host.
When we were starting a domain from scratch or restoring from a saved
state (migration from file) we didn't set any async job. Let's introduce
a new QEMU_ASYNC_JOB_START for these cases.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Incoming migration may require quite a few parameters (URI, fd, path) to
be considered while starting QEMU and we will soon add another one.
Let's group all of them in a single struct.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Make callers of qemuBuildCommandLine responsible for providing the URI
which should be passed as a parameter for -incoming.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Introduce a new helper function "virDiskNameParse" which extends
virDiskNameToIndex by handling both the disk index and the partition index.
Also rework virDiskNameToIndex to be based on virDiskNameParse.
A test is also added for this function testing both valid and
invalid disk names.
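A hedged usage sketch (assuming the helper takes the name plus output pointers for the disk and partition indexes):
int idx;
int partition;

/* "sdb3" -> disk index 1, partition 3; a trailing partition number is optional */
if (virDiskNameParse("sdb3", &idx, &partition) < 0)
    return -1;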
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Introduce support for domainMemoryStats API call, which
consequently enables the use of `virsh dommemstat` command to
query for memory statistics of a domain. We support
the following statistics: balloon info, available and currently
in use. swap-in, swap-out, major-faults and minor-faults require
cooperation of the guest and thus are currently not supported.
We build on the data returned from libxl_domain_info and deliver
it in the virDomainMemoryStat format.
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
The LXC driver uses virSetUIDGID() to become UID/GID 0.
It passes an empty groups list to virSetUIDGID()
to get rid of all supplementary groups from the host side.
But virSetUIDGID() calls setgroups() only if the supplied list
is larger than 0.
This leads to a container root with unrelated supplementary groups.
In most cases this issue goes unnoticed as libvirtd runs as UID/GID 0
without any supplementary groups.
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Instead of creating symlinks, bind mount the devices to
/dev/pts/XY.
Using bind mounts it is no longer needed to add pts devices
to files like /etc/securetty.
Signed-off-by: Richard Weinberger <richard@nod.at>
Userspace does not expect that the initial console
is a controlling TTY; systemd can deal with that, others cannot.
On sysv init distros getty will fail to spawn a controlling TTY on
/dev/console or /dev/tty1, which will cause the whole container
to reboot upon ctrl-c.
This patch changes the behavior of libvirt to match the kernel
behavior where the initial TTY is also not controlling.
The only user visible change should be that a container with
bash as PID 1 would complain. But this matches exactly the kernel
behavior with init=/bin/bash.
To get a controlling TTY for bash just run "setsid /bin/bash".
Signed-off-by: Richard Weinberger <richard@nod.at>
https://bugzilla.redhat.com/show_bug.cgi?id=1251190
So, if a domain loses access to storage, sanlock tries to kill it
after some timeout. So far, the default is 80 seconds. But for
some scenarios this might not be enough. We should allow users to
adjust the timeout according to their needs.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Adjust the config code so that it does not enforce that the target memory
node is specified. To avoid breakage, adjust the qemu memory hotplug
config checker to disallow such config for now.
Since we already make sure before that the domain configuration is
valid we may execute it always at the cost of doing 0 iterations of the
for loop.
This patch will simplify later refactor as it will avoid whitespace
changes.
Make the function usable so that -1 can be passed to it as cell ID so
that we can later enable memory hotplug on non-NUMA guests for certain
architectures.
This patch addresses BZ 1244895.
Adapt the sysfs TPM command cancel path for the TPM driver that
does not use a miscdevice anymore since Linux 4.0. Support old
and new paths and check their availability.
Add a mockup for the test cases to avoid the testing for
availability of the cancel path.
Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
Introduce support for domainGetCPUStats API call and consequently
allow us to use `virsh cpu-stats`. The latter returns a more brief
output than the one provided by `virsh vcpuinfo`.
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Current monitor code overrides domain object's privateData, e.g.
in virBhyveProcessStart():
vm->privateData = bhyveMonitorOpen(vm, driver);
where bhyveMonitorOpen() returns a bhyveMonitorPtr.
This is not the right thing to do, so make bhyveMonitorPtr
a part of the bhyveDomainObjPrivate struct and change related code
accordingly.
https://bugzilla.redhat.com/show_bug.cgi?id=1277781
The virStoragePoolFCRefreshThread had passed a pointer to the pool obj
in the virStoragePoolFCRefreshInfoPtr; however, we cannot assume that
the pool still exists since we don't keep the pool lock throughout
the duration of the thread.
Therefore, instead of passing the pool obj pointer, pass the UUID of
the pool and perform a lookup. If found, then we can perform the
refresh using the locked pool obj pointer; otherwise, we just exit
the thread since the pool is now gone.
Logging the current async job while in BeginJob is useful, but the async job
we want to start is even more interesting.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Fixes several style issues and removes "DEF" (what is it supposed to
mean anyway?) from debug messages.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
USB controllers can share the same 'index', which indicates that there
is some sort of master-companion relationship. Reorder the controllers
in the XML to place the master controller before its companions. This is
required by QEMU to not fail with error message:
error: internal error: process exited while connecting to monitor:
2015-10-26T16:25:17.630265Z qemu-system-x86_64:
-device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x6:
USB bus 'usb.0' not found
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1166452
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Just a straightforward patch.
Use reference counting for privdom as stats internally could drop the domain lock.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
The previous commit
commit 4e8993a250
Author: Daniel P. Berrange <berrange@redhat.com>
Date: Mon Nov 9 16:20:08 2015 +0000
qemu: assume various QEMU 0.10 features are always available
Added broken handling of -sdl. Instead of duplicating existing
SDL handling code, just ensure it is invoked in the right
scenarios.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The -sdl and -net ...name=XXX arguments were both introduced
in QEMU 0.10, so the QEMU driver can assume they are always
available.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
As of QEMU 0.10.0 the -vga argument was introduced, so the
QEMU driver can assume it is always available.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
As of QEMU 0.10.0 the -drive format= parameter was added,
so the QEMU driver can assume it is always available.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
As of QEMU 0.10.0, the -drive cache option stopped using
the on/off value names, so the QEMU driver can assume
use of the new value names.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Since we require QEMU 0.12.0, we can assume that QEMU supports
all of the fd, tcp, unix and exec migration protocols.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
We have twice previously attempted to remove Xenner
support
commit de9be0ab4d
Author: Daniel P. Berrange <berrange@redhat.com>
Date: Wed Aug 22 17:29:01 2012 +0100
Remove xenner support
commit 92572c3d71
Author: Ján Tomko <jtomko@redhat.com>
Date: Wed Feb 18 16:33:50 2015 +0100
Remove code handling the QEMU_CAPS_DOMID capability
This change really does remove the last traces of it
in the capabilities handling code
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
As of QEMU 0.9.1 the -drive argument can be used to configure
all disks, so the QEMU driver can assume it is always available
and drop support for -hda/-cdrom/etc.
Many of the tests need updating because a great many were
running without CAPS_DRIVE set, so using the -hda legacy
syntax.
Fixing the tests uncovered a bug in the argv -> xml
converter which failed to handle disks with if=floppy.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The QEMU argv -> virDomainDef conversion code was not handling
-drive arguments using the floppy bus. This caused them to be
added as hard disks instead.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The -no-reboot arg was added in QEMU 0.9.0, so the QEMU driver
can now assume it is always present.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
As of QEMU 0.11.0 the 'info chardev' monitor command can be
used to report on allocated chardev paths, so we can drop
support for parsing QEMU stderr to locate the PTY paths.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
As of QEMU 0.9.0 the -vnc option accepts a ':' to separate port
from listen address, so the QEMU driver can assume that support
for listen addresses is always available.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The kQEMU accelerator was deleted in QEMU 0.12, so we no
longer need to support it in the QEMU driver.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Check the QEMU version and refuse to work with QEMU versions
older than 0.12.0. This is approximately the vintage of QEMU
that is available in RHEL-6 era distros.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
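For illustration only, such a gate boils down to something like the
following sketch (assuming the version is encoded as
major * 1,000,000 + minor * 1,000 + micro and exposed via
virQEMUCapsGetVersion(), as elsewhere in the capabilities code):
if (virQEMUCapsGetVersion(qemuCaps) < 12000) {
    virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                   _("QEMU versions older than 0.12.0 are not supported"));
    return -1;
}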
If mlock is required, either due to use of VFIO hostdevs or due to the
fact that it's explicitly enabled, it needs to be tweaked prior to adding new
memory or after removing a memory module. Add a helper to determine when it's
necessary and reuse it both on hotplug and hotunplug.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1273491
Use virNetDevSetupControl instead of open coding using socket(AF_LOCAL...)
and clearing virIfreq.
By using virNetDevSetupControl, the socket is then opened using
AF_PACKET which requires being privileged (effectively root) in
order to complete successfully. Since that's now a requirement,
then the ioctl(SIOCETHTOOL) should not fail with EPERM, thus it
is removed from the filtered list of failure codes.
Signed-off-by: John Ferlan <jferlan@redhat.com>
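For reference, a rough sketch of the resulting call pattern (the helper
returns an open control socket fd and fills in the virIfreq; variable
names here are illustrative):
int fd = -1;
virIfreq ifr;

if ((fd = virNetDevSetupControl(ifname, &ifr)) < 0)
    return -1;
/* ... ioctl(fd, SIOCETHTOOL, &ifr) ... */
VIR_FORCE_CLOSE(fd);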
Since the SIOCETHTOOL ioctl only works for privileged daemons, if called
when not root, then virNetDevGetFeatures will VIR_DEBUG a message and
return 0 as if the functions were not available for the architecture.
This effectively returns an empty bitmap indicating no features available.
Introduced by commit id 'c9027d8f4'
Signed-off-by: John Ferlan <jferlan@redhat.com>
In commit id 'c9027d8f4', when updating the posted patch to generate
a bitmap instead of an array of named feature bits, adjustment of
the args was missed.
Recently reverted commit id '6f2a0198' showed a need to add extra
comments when dealing with filtering of potential "non-issues".
Scanning through upstream patch postings indicates early on the
reasons for the filtering of specific ioctl failures were provided;
however, when converted from causing an error to VIR_DEBUGs, the
reasons were missing. A future read/change of the code incorrectly
assumed they could or should be removed.
This reverts commit 6f2a0198e9.
This commit removed error reporting from virNetDevSendEthtoolIoctl
pushing responsibility onto the callers. This is wrong, however,
since virNetDevSendEthtoolIoctl calls virNetDevSetupControl
which can still report errors. So as a result virNetDevSendEthtoolIoctl
may or may not report errors depending on which bit of it fails, and as
a result callers now overwrite some errors.
It also introduced a regression causing unprivileged libvirtd to
spew error messages to the console due to inability to query the
NIC features, an error which was previously ignored.
virNetDevSetupControlFull:148 : Cannot open network interface control socket: Operation not permitted
virNetDevFeatureAvailable:3062 : Cannot get device wlp3s0 flags: Operation not permitted
virNetDevSetupControlFull:148 : Cannot open network interface control socket: Operation not permitted
virNetDevFeatureAvailable:3062 : Cannot get device wlp3s0 flags: Operation not permitted
virNetDevSetupControlFull:148 : Cannot open network interface control socket: Operation not permitted
virNetDevFeatureAvailable:3062 : Cannot get device wlp3s0 flags: Operation not permitted
virNetDevSetupControlFull:148 : Cannot open network interface control socket: Operation not permitted
virNetDevFeatureAvailable:3062 : Cannot get device wlp3s0 flags: Operation not permitted
Looking back at the original posting I see no explanation of why
this refactoring was needed, so reverting the clearly broken
error reporting logic looks like the best option.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The code reported that a migration flag is unsupported but didn't jump
to the error label. Probably an oversight in commit f88af9dc that
introduced the flag checking.
Since the flag was not enabled when 'eating' the migration cookie,
libvirt reported a bogus error when memory hotplug was enabled:
unsupported migration cookie feature memory-hotplug
The error was ignored though due to a bug in the code so it slipped
through testing.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1278404
Commit id '0f7436ca' added virNetDevWaitDadFinish using ATTRIBUTE_NONNULL
for both arguments, although one is a non-null argument. A Coverity build
balks at that.
Rather than "if (virNetDevFeatureAvailable(ifname, &cmd))" change the
success criteria to "if (virNetDevFeatureAvailable(ifname, &cmd) == 1)".
The called helper returns -1 on failure, 0 on not found, and 1 on found.
Thus a failure was setting bits.
Introduced by commit ac3ed20, which changed the helper's return
values without adjusting its callers.
Signed-off-by: John Ferlan <jferlan@redhat.com>
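In other words, the caller now looks roughly like this (sketch; 'feat'
stands for whichever feature bit is being probed):
/* -1 == failure, 0 == feature not found, 1 == feature found */
if (virNetDevFeatureAvailable(ifname, &cmd) == 1)
    ignore_value(virBitmapSetBit(*out, feat));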
https://bugzilla.redhat.com/show_bug.cgi?id=1233003
Commit id 'fdda3760' only managed a symptom where it was possible to
create a file in a pool without libvirt's knowledge, so it was reverted.
The real fix is to have all the createVol API's which actually create
a volume (disk, logical, zfs) and the buildVol API's which handle the
real creation of some volume file (fs, rbd, sheepdog) manage deleting
any volume which they create when there is some sort of error in
processing the volume.
This way the onus isn't left up to the storage_driver to determine whether
the buildVol failure was due to some failure as a result of adjustments
made to the volume after creation such as getting sizes, changing ownership,
changing volume protections, etc., or simply a failure in creation.
Without needing to consider that the volume has to be removed, the
buildVol failure path only needs to remove the volume from the pool.
This way if a creation failed due to duplicate name, libvirt wouldn't
remove a volume that it didn't create in the pool target.
This reverts commit fdda37608a.
This commit only manages a symptom of finding a buildRet failure
where a volume was not listed in the pool, but someone created the
volume outside of libvirt in the pool being managed by libvirt.
After successfully returning from virFileOpenAs, if subsequent calls fail,
then we need to remove the file since our caller expects that failures after
creation will remove the created file.
After a successful qemu-img/qcow-create of the backing file, if we
fail to stat the file, change it owner/group, or mode, then the
cleanup path should remove the file.
Currently the code does not handle the NFS root squash environment
properly since if the file gets created, then the subsequent chmod
will fail in a root squash environment where we're creating a file
in the pool with qemu tools, such as seen via:
$ virsh vol-create-from $pool $file.xml file.img --inputpool $pool
assuming $file.xml is creating a file of "<format type='qcow2'>" from
an existing file.img in the pool of "<format type='raw'>".
This patch will utilize the virCommandSetUmask when creating the file
in the NETFS pool. The virCommandSetUmask API was added in commit id
'0e1a1a8c4', which was after the original code was developed in commit
id 'e1f27784' to attempt to handle the root squash environment.
Also, rather than blindly attempting to chmod, check to see if the
st_mode bits from the stat match what we're trying to set and only
make the chmod if they don't.
Also, a slight adjustment to the fallback algorithm to move the
virCommandSetUID/virCommandSetGID inside the if (!filecreated) since
they're only useful if we need to attempt to create the file again.
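A rough sketch of how virCommandSetUmask fits into the NETFS build path
(illustrative only; the exact qemu-img arguments and mask value are not
necessarily what the patch uses):
cmd = virCommandNewArgList("qemu-img", "create", "-f", "qcow2",
                           vol->target.path, NULL);
/* size argument and further error handling omitted for brevity */
virCommandSetUmask(cmd, S_IWGRP | S_IWOTH);
if (virCommandRun(cmd, NULL) < 0)
    goto cleanup;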
VIR_DEBUG and VIR_WARN will automatically add a new line to the message,
having "\n" at the end or at the beginning of the message results in
empty lines.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
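In other words (illustrative):
VIR_DEBUG("Starting job: %s", job);     /* good - the macro adds the newline */
VIR_DEBUG("Starting job: %s\n", job);   /* bad - produces an empty log line */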
nodeset should be freed in both success and failure paths.
While tmppath is freed immediately after it's consumed, moving it from
error to cleanup label is a bit more consistent and robust.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Generally, we use "ret" variable for storing the value we are going to
return at the end of a function, but this is not the case in
qemuProcessStart. Let's rename "ret" as "rv".
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuProcessStart was passing char * migrateFrom as the third argument to
qemuPrepareNVRAM. We should explicitly convert the pointer to bool which
is what the function expects.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This was originally set to 5 seconds, but times of 5.5 to 7 seconds
were experienced. Since it's an arbitrary number intended to prevent
an infinite hang, having it a bit too high won't hurt anything, and 20
seconds looks to be adequate (i.e. I think/hope we don't need to make
it tunable in libvirtd.conf)
If DAD is not finished in 5 seconds, the user will get an
unknown error like this:
# virsh net-start ipv6
error: Failed to start network ipv6
error: An error occurred, but the cause is unknown
Call virReportError to set an error.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Build on non-Linux fails because the virNetDevWaitDadFinish() stub
has unused parameters. Fix by adding appropriate ATTRIBUTE_UNUSED
for these parameters.
Pushing under build-breaker rule.
commit db488c79 assumed that dnsmasq would complete IPv6 DAD before
daemonizing, but in reality it doesn't wait, which creates problems
when libvirt's bridge driver sets the matching "dummy tap device" to
IFF_DOWN prior to DAD completing.
This patch waits for DAD completion by periodically polling the kernel
using netlink to check whether there are any IPv6 addresses assigned
to bridge which have a 'tentative' state (if there are any in this
state, then DAD hasn't yet finished). After DAD is finished, execution
continues. To avoid an endless hang in case something was wrong with
the kernel's DAD, we wait a maximum of 5 seconds.
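The waiting loop amounts to roughly the following outline (sketch;
dadInProgress() is a hypothetical stand-in for the netlink query of
tentative addresses, and 'started' is the start timestamp):
while (dadInProgress(addrs, naddrs)) {
    if (virTimeMillisNow(&now) < 0 ||
        now - started > 5000)
        return -1;                  /* give up after ~5 seconds */
    usleep(1000 * 20);              /* illustrative polling interval */
}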
https://bugzilla.redhat.com/show_bug.cgi?id=1270715
Commit id '9deb96f' removed the code to fetch the nodeset from the
CpusetMems cgroup for a running vm in favor of using the return from
virDomainNumatuneFormatNodeset introduced by commit id '43b67f2e7'.
However, that API will return the value of the passed 'auto_nodeset'
when placement is VIR_DOMAIN_NUMATUNE_PLACEMENT_AUTO, which happens
to be NULL.
Since commit id 'c74d58ad' started using priv->autoNodeset in order
to manage the auto placement value during qemuProcessStart, it should
be passed along in order to return the correct value if the domain
requests the auto placement.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
When a RBD volume has snapshots it can not be removed.
This patch introduces a new flag to force volume removal,
VIR_STORAGE_VOL_DELETE_WITH_SNAPSHOTS.
With this flag any existing snapshots will be removed prior to
removing the volume.
No existing mechanism in libvirt allowed us to pass such information,
so that's why a new flag was introduced.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
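From an API consumer's point of view the flag would be used roughly like
this (sketch; assumes 'vol' was looked up beforehand):
if (virStorageVolDelete(vol, VIR_STORAGE_VOL_DELETE_WITH_SNAPSHOTS) < 0)
    fprintf(stderr, "deleting volume including its snapshots failed\n");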
After commit a26669d7, we only jump to error when
virDomainMigrateUnmanagedParams returns a value less than -1.
This makes the migration result always report success even when we
hit some problem.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
In commit f41be296, we moved the vm->persistent check into
qemuDomainRemoveInactive, but in some places we didn't change
vm->persistent before calling qemuDomainRemoveInactive and just
called it to remove the inactive vm.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Let's use the wrapper functions virLockDaemonLock and
virLockDaemonUnlock instead of virMutexLock and virMutexUnlock.
This has no functional impact, but it's easier to read (at least
for me).
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This calls the PCI-, USB- and SCSI-specific functions just
like qemuHostdev{Prepare,ReAttach}DomainDevices() already do,
and was the missing piece for the qemuHostdev API to nicely
mirror the virHostdev API.
Update qemuProcessReconnect() to use the new function.
Adopt the same names used for virHostdevUpdateActive*Devices() for
consistency's sake and to make it easier to jump between the two.
No functional changes.
Adopt the same names used for virHostdevReAttach*Devices() for
consistency's sake and to make it easier to jump between the two.
No functional changes.
Fix a cut-n-paste error from commit id '35eecdde' where the previous
check for max_sectors seems to have been copied, but the error message
parameter was not updated to be ioeventfd.
Commit id '1c24cfe9' added error messages for virNumaSetPagePoolSize;
however, virNumaGetHugePageInfo also uses virNumaGetHugePageInfoPath
in order to build the path, but it never checked upon return if
the built path exists which could lead to an error message as follows:
$ virsh freepages 0 1
error: Failed to open file
'/sys/devices/system/node/node0/hugepages/hugepages-1kB/free_hugepages':
No such file or directory
Rather than add the same message for the other two callers, adjust
the virNumaGetHugePageInfoPath in order not only build the path, but
also check if the built path exists. If the path does not exist,
then generate the error message and return failure.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Commit id '1c24cfe9' added new checks and error messages for failure
scenarios. Let's move those error messages to after the call to
virNumaGetHugePageInfoPath in order to provide a more specific error
message depending on node and page_size.
After this patch:
# virsh allocpages --pagesize 2047 --pagecount 1 --cellno 0
error: operation failed: page size 2047 is not available on node 0
# virsh allocpages --pagesize 2047 --pagecount 1
error: operation failed: page size 2047 is not available
Signed-off-by: Luyao Huang <lhuang@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1265114
Refactor helper virNumaGetHugePageInfoPath to handle returning a directory
path when passed a page_size of 0 and suffix == NULL into a new helper
virNumaGetHugePageInfoDir which will only be called when a directory
path is expected to be returned. This solves the issue where the helper
was called with page_size == 0 expecting a file path in return, but
instead got a directory path and failed in virFileReadAll with:
error : virFileReadAll:1358 : Failed to read file
'/sys/devices/system/node/node0/hugepages/': Is a directory
Since virNumaGetPages API expects to return a directory by passing
page_size == 0 and suffix == NULL, it will now call the new helper.
Callers to virNumaGetHugePageInfoPath expect to return a file path
which could then be used in the call to virFileReadAll.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
We have macros for both positive and negative string matching.
Therefore there is no need to use !STREQ or !STRNEQ. At the same
time as we are dropping this, new syntax-check rule is
introduced to make sure we won't introduce it again.
Signed-off-by: Ishmanpreet Kaur Khera <khera.ishman@gmail.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
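For illustration (with minimal stand-ins for the macros from internal.h):
#define STREQ(a, b)  (strcmp(a, b) == 0)
#define STRNEQ(a, b) (strcmp(a, b) != 0)

if (STRNEQ(a, b))      /* preferred */
    handle_mismatch();
if (!STREQ(a, b))      /* flagged by the new syntax-check rule */
    handle_mismatch();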
The following functions are implemented:
vzDomainIsUpdated, vzDomainGetVcpusFlags and vzDomainGetMaxVcpus.
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
The following functions were implemented:
vzNodeGetCPUStats, vzNodeGetMemoryStats,
vzNodeGetCellsFreeMemory and vzNodeGetFreeMemory.
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
Because we have no limitation on the maximal number of vcpus in containers,
we report 1028 as the maximum just for the sake of common sense.
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
Even though the APIs are not implemented yet, they create a
skeleton that can be filled in later.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This function should really be called only when we want to change
ownership of a file (or disk source). Let's switch to calling a
wrapper function which will eventually record the current owner
of the file and call virSecurityDACSetOwnershipInternal
subsequently.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This is pure code adjustment. The structure is going to be needed
later as it will hold a reference that will be used to talk to
virtlockd. However, so far this is no functional change just code
preparation.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This is pure code adjustment. The structure is going to be needed
later as it will hold a reference that will be used to talk to
virtlockd. However, so far this is no functional change just code
preparation.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
It's better if we stat() the file that we are about to chown() first
and check whether there's something we need to change. Not that
it would make much difference, but for the upcoming patches we
need to be doing stat() anyway. Moreover, if we do things this
way, we can drop @chown_errno variable which will become
redundant.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
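Roughly (a sketch of the idea, not the exact patch):
struct stat sb;

if (stat(path, &sb) < 0)
    return -1;
if (sb.st_uid == uid && sb.st_gid == gid)
    return 0;                       /* nothing to change */
if (chown(path, uid, gid) < 0)
    return -1;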
Correctly mark the places where we need to remember and recall
file ownership. We don't want to mislead any potential developer.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
So we have this mechanism that on SIGUSR1 the virtlockd dumps its
internal state into a JSON file, re-execs itself and then reloads
the internal state back. However, there's a bug in our
implementation:
(gdb) signal SIGUSR1
Continuing with signal SIGUSR1.
[Thread 0x7fd094f7b700 (LWP 10602) exited]
process 10600 is executing new program: /home/zippy/work/libvirt/libvirt.git/src/virtlockd
warning: Could not load shared library symbols for linux-vdso.so.1.
Do you need "set solib-search-path" or "set sysroot"?
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
[New Thread 0x7fb28bc3c700 (LWP 14501)]
Program received signal SIGSEGV, Segmentation fault.
0x00007fb29133d530 in virExpandN (ptrptr=0x70, size=8, countptr=0x68, add=1, report=true, domcode=7, filename=0x7fb29138aeab "rpc/virnetserver.c", funcname=0x7fb29138b680 <__FUNCTION__.15821> "virNetServerAddProgram", linenr=661) at util/viralloc.c:288
288 if (*countptr + add < *countptr) {
(gdb) bt
#0 0x00007fb29133d530 in virExpandN (ptrptr=0x70, size=8, countptr=0x68, add=1, report=true, domcode=7, filename=0x7fb29138aeab "rpc/virnetserver.c", funcname=0x7fb29138b680 <__FUNCTION__.15821> "virNetServerAddProgram", linenr=661) at util/viralloc.c:288
#1 0x00007fb29132a267 in virNetServerAddProgram (srv=0x0, prog=0x7fb2915d08b0) at rpc/virnetserver.c:661
#2 0x00007fb29131f27f in main (argc=1, argv=0x7fff8f771298) at locking/lock_daemon.c:1445
Notice the NULL @srv passed to frame 2? Usually, the @srv
variable is initialized on fresh start. However, in case of
daemon reload, the code path that is responsible for initializing
the value was not triggered and therefore we crashed immediately.
Fix this by always setting the variable.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Tunnelled migration can hang if the destination qemu exits despite all the
ABI checks. This happens whenever the destination qemu exits before the
complete transfer is noticed by source qemu. The savevm state checks at
runtime can fail at destination and cause qemu to error out.
The source qemu can't notice it as the EPIPE is not propagated to it.
The qemuMigrationIOFunc() notices the stream being broken from virStreamSend()
and it cleans up the stream alone. The qemuMigrationWaitForCompletion() would
never get to 100% transfer completion.
The qemuMigrationWaitForCompletion() never breaks out as well since
the ssh connection to destination is healthy, and the source qemu also thinks
the migration is ongoing as the FD to which it transfers is never
closed or broken. So, the migration will hang forever. Even Ctrl-C on the
virsh migrate wouldn't be honoured. Close the source side FD when there is
an error in the stream. That way, the source qemu updates itself and
qemuMigrationWaitForCompletion() notices the failure.
Close the FD for all kinds of errors to be sure. The error message is not
copied for EPIPE so that the destination error is copied instead later.
Note:
Reproducible with repeated migrations between Power hosts running in different
subcores-per-core modes.
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.vnet.ibm.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1264008
The existing algorithm assumed that someone was making small, incremental
changes; however, it is possible to change iothreads from 0 (or relatively
small number) to some really large number and the algorithm would possibly
spin its wheels doing unnecessary searches.
So, optimize the algorithm using a bitmap to find available iothread_id's
starting at 1 that aren't already defined by a "<thread id='#'>" and
filling in the iothreadids array with those iothread_id values.
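The idea, sketched with libvirt's bitmap helpers (simplified, error
handling omitted):
virBitmapPtr map = virBitmapNew(iothreads + 1);  /* ids start at 1 */
size_t i;

/* mark every iothread_id already named by a <thread id='#'/> element */
for (i = 0; i < def->niothreadids; i++)
    ignore_value(virBitmapSetBit(map, def->iothreadids[i]->iothread_id));

/* then hand out the lowest clear bits to the remaining threads,
 * starting the search at 1 via virBitmapNextClearBit(map, 0) */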
https://bugzilla.redhat.com/show_bug.cgi?id=1249981
When qemuDomainPinIOThread was added in commit id 'fb562614', a check
for the IOThread capability was not needed since a check for iothreadpids
covered the condition where the support for IOThreads was not present.
The iothreadpids array was only created if qemuProcessDetectIOThreadPIDs
was able to query the monitor for IOThreads. It would only do that if
the QEMU_CAPS_OBJECT_IOTHREAD capability was set.
However, when iothreadids were added in commit id '8d4614a5' and the
check for iothreadpids was replaced by a search through the iothreadids[]
array for the matching iothread_id, that left open the possibility that
an iothreadids[] array was defined, but the entries essentially pointed
to elements with only the 'iothread_id' defined leaving the 'thread_id'
value of 0 and eventually the cpumap entry of NULL.
This was because the original IOThreads commit id '72edaae7' only
checked if IOThreads were defined and if the emulator had the IOThreads
capability, then IOThread objects were added at startup. The "capability
failure" check was only done when a disk was assigned to an IOThread in
qemuCheckIOThreads. This was because the initial implementation had no way
to dynamically add IOThreads, but it was possible to dynamically add a
disk to the domain. So the decision was if the domain supported it, then
add the IOThread objects. Then if a disk with an IOThread defined was
added, it could check the capability and fail to add if not there. This
just meant the 'iothreads' value was essentially ignored.
Eventually commit id 'a27ed6e7' allowed for the dynamic addition and
deletion of IOThread objects. So it was no longer necessary to generate
IOThread objects to dynamically attach a disk to. However, the startup
and disk check code was not modified to reflect this.
This patch will move the capability failure check to when IOThread
objects are being added to the command line. Thus a domain that has
IOThreads defined will not be started if the emulator doesn't support
the capability. This means when qemuCheckIOThreads is called to add
a disk, it's no longer necessary to check the capability. Instead the
code can use the IOThreadFind call to indicate that the IOThread
doesn't exist.
Finally because it could be possible to have a domain running with the
iothreadids[] defined prior to this change if libvirtd is restarted each
having mostly empty elements, qemuProcessDetectIOThreadPIDs will check
if there are niothreadids when the QEMU_CAPS_OBJECT_IOTHREAD capability
check fails and remove the elements and array if it exists.
With these changes in place, it turns out the cputune-numatune test
was failing because the right bit wasn't set in the test. So I used the
opportunity to fix that and create a test that would expect to fail
with some sort of iothreads defined and used, but not having the
correct capability.
Although theoretically both should be the same value, the niothreadids
should be used in favor of iothreads when performing comparisons. This
leaves the iothreads as a purely numeric value to be saved in the config
file. The one exception to the rule is virDomainIOThreadIDDefArrayInit
where the iothreadids are being generated from the iothreads count since
iothreadids were added after initial iothreads support.
Event implementations need to be registered before a connection to the
Hypervisor is opened, otherwise event handling can be impaired (e.g.
delayed messages). This fact is referenced in an e-mail [1], but should
also be noted in the documentation of the registration functions.
[1] https://www.redhat.com/archives/libvirt-users/2014-April/msg00011.html
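For example, a client using the default event loop implementation should
register it before opening the connection (order matters):
if (virEventRegisterDefaultImpl() < 0)
    goto error;
if (!(conn = virConnectOpen("qemu:///system")))
    goto error;
/* only now register domain event callbacks and run virEventRunDefaultImpl() */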
Create a separate local API that will fill in the iothreadid array
entries that were not defined by <iothread id='#'> entries in the XML.
Signed-off-by: John Ferlan <jferlan@redhat.com>
After a successful creation of a directory, if some other call results
in returning a failure, let's remove the directory we created to
prevent another round trip or confusion in the caller. In particular, this
function can be called during a storage backend buildVol, so in order
to ensure that caller doesn't need to distinguish between failed create
or some other failure after create, just remove the directory we created.
Signed-off-by: John Ferlan <jferlan@redhat.com>
After a successful creation of a file, if some other call results
in returning a failure, let's unlink the file we created to prevent
another round trip or confusion in the caller. In particular, this
function can be called during a storage backend buildVol, so in order
to ensure that caller doesn't need to distinguish between failed create
or some other failure after create, just remove the volume we created.
Signed-off-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1233003
Track when the logical volume was successfully created in order to
properly handle the call to virStorageBackendLogicalDeleteVol. It's
possible that the failure to create was because someone created an
LV in the pool outside of libvirt's knowledge. In this case, we don't
want to delete that LV. A subsequent or future refresh of the pool
will find the volume and cause an earlier failure
Signed-off-by: John Ferlan <jferlan@redhat.com>
Commit id '1b5685da' refactored the code to move buildvoldef inside
the buildVol conditional; however, the VIR_FREE of the memory was
left only when 'buildret' failed, thus we're leaking memory.
Signed-off-by: John Ferlan <jferlan@redhat.com>
As of commit id '155ca616' a 'refreshVol' is called after a buildVol
succeeds in storageVolCreateXML, thus a volStorageBackendSheepdogRefreshVolInfo
call in virStorageBackendSheepdogBuildVol is no longer necessary.
Additionally, the 'conn' parameter becomes unused.
Signed-off-by: John Ferlan <jferlan@redhat.com>
As of commit id '155ca616' a 'refreshVol' is called after the buildVol
succeeds in storageVolCreateXML, thus the volStorageBackendRBDRefreshVolInfo
call in virStorageBackendRBDBuildVol is no longer necessary.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Without this, building on cygwin fails with:
CC libvirt_admin_la-libvirt-admin.lo
libvirt-admin.c:25:21: fatal error: rpc/rpc.h: No such file or directory
#include <rpc/rpc.h>
^
Reported-by: Yaakov Selkowitz <yselkowi@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Our apibuild.py script does not cope with ATTRIBUTE_NONNULL:
Parse Error: parsing function type, ')' expected
Got token ('name', 'char')
Last token: ('name', 'char')
Token queue: [('op', '*'), ('name', 'dconnuri'), ('sep', ')')]
Line 3297 end:
Makefile:2441: recipe for target '../../docs/apibuild.py.stamp' failed
Let's drop it. Moreover, up until e17ae3ccc2 where it was
introduced we did not really care about the NULL-ity of dconnuri. Moreover,
ATTRIBUTE_NONNULL merely catches static calls that pass a literal
NULL; it won't catch the dynamic ones, where a NULL is
passed via a variable at runtime.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1256999
After creating a copy of the 'authdef' in a pool -> disk translation,
unconditionally clear the 'authType' in the resulting disk auth def
structure since that's used for a storage pool and not a disk. This
ensures virStorageAuthDefFormat will properly format the <auth> XML
for a <disk> (e.g. it won't have a <auth type='%s'.../>).
Check that dconnuri is not NULL or we will hit a null pointer dereference later.
I hope this makes Coverity happy.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
virDomainMigrateUnmanagedParams is not a good candidate for this functionality
as it is used by the migrate family of functions too and has its own checks that
are a superset of the extracted ones, so we don't need to check twice.
Actually the name of the function is slightly misleading as there is also a check
for consistency of the flags parameter alone. So it could be refactored further and
reused by all migrate functions, but for now let that be a matter of a different
patchset.
It is *not* a pure refactoring patch as it introduces an offline check for older
versions. It looks like it must be done that way and no one will be broken by it either.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Finally, with this step we get what we were aiming for - toURI{1, 2} (and
the migration{*} APIs too) can now work through the V3_PARAMS protocol. The
execution path goes through the unchanged virDomainMigrateUnmanaged adapter
function which is called by all target places.
Note that we keep the fact that direct migration never works
through the V3_PARAMS protocol. We can't change this aspect without
further investigation.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Move virDomainMigrateUnmanagedProto* expected params list check into
function itself and use common virTypedParamsCheck for this purpose.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Extract parameter adaptation and checking which is protocol dependent into
designated functions. Leave only branching and common checks in
virDomainMigrateUnmanagedParams.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Let's put the main functionality into the params version of virDomainMigrateUnmanaged
as a preparation step for merging it with virDomainMigratePeer2PeerParams.
virDomainMigrateUnmanaged then does nothing more than just adapt the arguments.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
The plain p2p and direct functions are good candidates for code reuse. Their main
purpose is the same - to branch among different versions of the migration protocol -
and the implementation of this logic is also the same. They also share other common
functionality in lesser aspects. So let's merge them.
But as they have different signatures we have to agree on a convention for how to
pass the direct migration 'uri' in 'dconnuri' and 'miguri'. Fortunately we already
have such a convention in the parameters passed to the toURI2 function, so let's follow
it: 'uri' is passed in miguri and dconnuri is ignored.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
We use the name miguri for this parameter in other places, so
make the naming more consistent.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Direct migration should work if *perform3 is present but *perform
is not. This is the situation when a driver's migration support is implemented
after the new version of the driver function was introduced. We should not
be forced to support the old version too as its parameter space is a
subspace of the newer one.
This is more structured code so it will be easier to add a branch for the _PARAMS
protocol here. Strictly speaking it is not a pure refactoring as we remove
scenarios for broken cases where a driver defines the V3 feature and implements the
perform function. So it is additionally more solid code.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
The 'useParams' parameter usage is an example of control coupling. Most of the work
inside the function is done differently except for the uri check. Let's split
this function into two: one with an extensible parameter set and one with a hardcoded
parameter set.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
The internal representation of a JSON array counts the items in
size_t. However, for some reason, when asking for the count it's
reported as int. Firstly, we need the function to return a signed
type as it's returning -1 on an error. But not every system has
int the same size as size_t. Therefore, let's return ssize_t.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
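I.e. the prototype changes along these lines (sketch, parameter spelling
approximate):
/* before */
int     virJSONValueArraySize(virJSONValuePtr array);
/* after */
ssize_t virJSONValueArraySize(virJSONValuePtr array);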
Commit 4e8032272f used $(builddir) in the header search
path to fix a build issue; however, $(builddir) is not defined
by old autoconf versions such as the one available in CentOS 5,
resulting in the following error:
cc1: error: /util: No such file or directory
make[3]: *** [libvirt_driver_la-fdstream.lo] Error 1
Since $(builddir) is defined to always be '.', just use that
value directly instead.
Since a9fe620372, we are generating virkeymaps.h at build
time; however, we are not including $(builddir)/util in the
header search path, so when doing a VPATH build the compiler
is unable to locate the file.
make[2]: Entering directory
`/home/jenkins/libvirt/systems/libvirt-fedora-20/build/src'
GEN util/virkeymaps.h
...
CC util/libvirt_util_la-virkeycode.lo
CC util/libvirt_util_la-virkeyfile.lo
CC util/libvirt_util_la-virlockspace.lo
CC util/libvirt_util_la-virlog.lo
../../src/util/virkeycode.c:27:24: fatal error: virkeymaps.h: No such file or directory
#include "virkeymaps.h"
^
compilation terminated.
Coverity notices that net->ifname is potentially referenced after a
VIR_FREE(). Since the net->ifname will eventually be free'd during
virDomainDefFree when calling virDomainNetDefFree, let's just let that
processing take care of the free.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Since the strtok_r call in libxlCapsInitGuests expects a non NULL first
parameter when the third parameter is NULL, we need to check that
the returned 'capabilities' from a libxl_get_version_info call is
not NULL and error out if so since the code expects it.
Signed-off-by: John Ferlan <jferlan@redhat.com>
So imagine you want to create a new security manager:
if (!(mgr = virSecurityManagerNew("selinux", "QEMU", false, true, false, true)));
Hard to parse, right? What about this:
if (!(mgr = virSecurityManagerNew("selinux", "QEMU",
VIR_SECURITY_MANAGER_DEFAULT_CONFINED |
VIR_SECURITY_MANAGER_PRIVILEGED)));
Now that's better! This is what the commit does.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This gets rid of the partially enforced alignment and makes it less
likely for a bogus value to be introduced in the enumeration.
Capabilities are divided in five-element groups for better readability.
Use #define for QEMU_CAPS_NET_NAME and QEMU_CAPS_HOST_NET_ADD, both
of which are aliases for QEMU_CAPS_0_10.
qemuMigrationIsAllowed would disallow offline migration if the VM
contained host devices or memory modules. Since during offline migration
we don't transfer any state we can safely migrate VMs with such
configuration.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1265049
Use the migration @flags for checking various migration aspects rather
than picking them out as booleans. Document the new semantics in the
function header.
Now that qemuMigrationIsAllowed is always called with @vm, we can drop
the @def argument and simplify the control flow.
Additionally the comment is invalid so drop it.
Extract the hostdev check from qemuMigrationIsAllowed into a separate
function since that is the only part that needs to be done in the v2
migration protocol prepare phase on the destination. All other checks
were added when the v3 protocol existed so they don't need to be
extracted.
This change will allow dropping the @def argument for
qemuMigrationIsAllowed and further simplify the function.
In fact, it was never used since vz has no features supporting it.
That is why there will be no harm to anyone if we just remove this code to
prevent further misunderstanding and efforts to support dead code.
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
At the time this code was added we had intentions to support libvirt interface
to manage vz networks. In fact, it was never implemented completely enough to work
correctly, which makes me think that there will be no harm to anyone if we just
rip it out. Moreover, in vz7 we started to use libvirt bridge network driver to
manage networks.
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
Even though QEMU on the source host reports completed migration and thus
we move to the Finish phase, QEMU on the destination host may still be
processing migration data. Thus before we can start guest CPUs on the
destination, we have to wait for a completed migration event.
https://bugzilla.redhat.com/show_bug.cgi?id=1265902
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
With new QEMU which supports migration events,
qemuMigrationCheckJobStatus needs to explicitly query QEMU for migration
statistics once migration is completed to make sure the caller sees
up-to-date statistics with both old and new QEMU. However, some callers
are not interested in the statistics at all and once we start waiting
for a completed migration on the destination host too, checking the
statistics would even fail. Let's push the decision whether to update
the statistics or not to the caller.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The function already has two bool parameters and we will need to add a
new one. Let's switch to flags to make the callers readable.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The destination host gets detailed statistics about the current
migration from the source host via the migration cookie and copies them to
the domain object so that they can be queried using
virDomainGetJobStats. However, we should only copy statistics to the
domain object when migration finished successfully.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Even if we are migrating a domain with VIR_MIGRATE_PAUSED flag set, we
should still update the total time of the migration. Updating downtime
doesn't hurt either, even though we don't actually start guest CPUs.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
We are distributing virkeymaps.h and all the tools needed to rebuild
that file. On top of that, we are generating that file into the
$(srcdir) and that sometimes fails when trying to do make dist in VPATH
on rawhide fedora. And we don't clean the file when maintainer-clean
make target is requested. So let's not distribute the file and rather
let everyone rebuild it when needed and clean it when appropriate.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
The profile_status function was not distinguishing between error
cases and unconfined profiles. The problem with this approach is that
dominfo was throwing an error on unconfined domains.
Our docs state that subelements of <metadata> shall have a namespace
and the metadata APIs expect that too. To avoid inaccessible
<metadata> sub-elements, just remove those that don't conform to the
documentation.
Apart from adding the new condition this patch renames the function and
refactors the code flow to allow the changes.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1245525
https://bugzilla.redhat.com/show_bug.cgi?id=1247987
Calculation of the extended and logical partition values for the disk
pool is complex. As the bz points out an extended partition should have
its allocation initialized to 0 (zero) and keep the capacity as the size
dictated by the extents read. Then for each logical partition found,
adjust the allocation of the extended partition.
Finally, previous logic tried to avoid recalculating things if a logical
partition was deleted; however, since we now have special logic to handle
the allocation of the extended partition, just make life easier by reading
the partition table again - rather than doing the reverse adjustment.
https://bugzilla.redhat.com/show_bug.cgi?id=1251461
When 'starting' up a disk pool, we need to make sure the label on the
device is valid; otherwise, the followup refreshPool will assume the
disk has been properly formatted for use. If we don't find the valid
label, then refuse the start and give a proper reason.
Let's check to ensure we can find the Partition Table in the label
and that libvirt actually recognizes that type; otherwise, when we
go to read the partitions during a refresh operation we may not be
reading what we expect.
This will expand upon the types of errors or reason that a build
would fail, so we can create more direct error messages.
Modify virStorageBackendDiskValidLabel to add a 'writelabel' parameter.
While initially for the purpose of determining whether the label should
be written during DiskBuild, a future use during DiskStart could determine
whether the pool should be started using the label found. Augment the
error messages also to give a hint as to what someone may need to do
or why the command failed.
Create a new function virStorageBackendDiskValidLabel to handle checking
whether there is a label on the device and whether it's valid or not.
While initially for the purpose of determining whether the label can be
overwritten during DiskBuild, a future use during DiskStart could determine
whether the pool should be started using the label found.
https://bugzilla.redhat.com/show_bug.cgi?id=1233003
Although perhaps bordering on a don't do that type scenario, if
someone creates a volume in a pool outside of libvirt, then uses that
same name to create a volume in the pool via libvirt, then the creation
will fail and in some cases cause the same name volume to be deleted.
This patch will refresh the pool just prior to checking whether the
named volume exists prior to creating the volume in the pool. While
it's still possible to have a timing window to create a file after the
check - at least we tried. At that point, someone is being malicious.
As it turns out the caller in this case expects a return < 0 for failure
and to get/use "errno" rather than using the negative of returned status.
Again different than the create path.
If someone "deleted" a file from the pool without using virsh vol-delete,
then the unlink/rmdir would return an error (-1) and set errno to ENOENT.
The caller checks errno for ENOENT when determining whether to throw an
error message indicating the failure. Without the change, the error
message is:
error: Failed to delete vol $vol
error: cannot unlink file '/$pathto/$vol': Success
This patch thus allows the fork path to follow the non-fork path
where unlink/rmdir return -1 and errno.
Unlike create options, if the file to be removed is already in the
pool, then the uid/gid will come from the pool. If it's the same as the
currently running process, then just do the unlink/rmdir directly
rather than going through the fork processing unnecessarily
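The direct (non-fork) path then boils down to (sketch):
/* let unlink()/rmdir() fail on their own; errno (e.g. ENOENT) is
 * preserved so the caller can decide whether to report an error */
if (unlink(path) < 0)
    return -1;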
qemu-kvm can be used to run ppc64 guests on ppc64le hosts and vice
versa, since the hardware is actually the same and the endianness
is chosen by the guest kernel.
Up until now, however, libvirt didn't allow the use of qemu-kvm
to run guests if their endianness didn't match the host's.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1267882
Commit 792f81a40e caused a regression in the libssh2 host key
verification code by changing the variable type of 'i' to unsigned.
Since one of the loops used -1 as a special value if the asking
callback was found the conversion made a subsequent test always fail.
The bug was stealth enough to pass review, compilers and coverity.
Refactor the condition to avoid problems.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1047861
Since we'd disallow migration of a guest that would have possibly
invalid config but still be able to work, relax the WWN check to be
performed only on new starts of the VM.
If a system has a large number of active or inactive interfaces, it can
be a big waste of time to retrieve and qualify all interfaces if the
caller only wanted one subset. Since netcf has a simple flag for this,
translate the libvirt flag into a netcf flag and let netcf pre-filter.
Getting the MAC address of an interface is actually fairly expensive,
and we've already gotten it and stored it into def, so just keep def
around a bit longer and retrieve it from there.
This reduces the time for "virsh iface-list --all" from 28 to 23
seconds when there are 400 interfaces.
The spec for virConnectListAllInterfaces says that if the pointer that
is supposed to hold the list of interfaces is NULL, the function
should just return the count of interfaces that matched the filter,
but the code never increments the count if the list pointer is NULL.
We are using memory-backing-file even when it's not needed, for example
if user requests hugepages for memory backing, but does not specify any
pagesize or memory node pinning. This causes migrations to fail when
migrating from older libvirt that did not do this. So similarly to
commit 7832fac847 which does it for
memory-backend-ram, this commit makes it more generic and
backend-agnostic, so the backend is not used if there is no specific
pagesize of hugepages requested, no nodeset the memory node should be
bound to, no memory access change required, and so on.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1266856
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
So since the introduction of the memory-backend-file object until now we
only added '-mem-path' for non-NUMA guests and we used the parameters of
the memory-backend-file object to specify the path to the hugetlbfs
mount. But hugepages can be also used without memory-backend-file
object, as it used to be before its introduction. Let's just get this
part of the code back and properly append the '-mem-path' for NUMA
guests as well, but only when the memory backend is not needed.
This parameter is already being applied when no numa is requested and
because we still use memory-backend-file unconditionally for
hugepage-backed NUMA guests, this should not fire until later.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
That function is called qemuBuildMemPathStr() and will be used in
other places in the future. The change in the test suite is proper due
to the fact that -mem-prealloc only makes sense with -mem-path (from
qemu documentation -- html/qemu-doc.html).
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Support for GICv3 has been recently introduced in qemu using gic-version
option for the 'virt' machine. The option can actually take values of
'2', '3' and 'host', however, since in libvirt this is a numeric
parameter, we limit it only to 2 and 3. Value of 2 is not added to the
command line in order to keep backward compatibility with older qemu
versions.
Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
Unfortunately qemu currently doesn't offer introspection for machine types,
so we have to rely on version number, similar to QEMU_CAPS_MACHINE_USB_OPT.
Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
Commit 307fb904 (Sep 10) added a 'privileged' variable when creating
the DAC driver:
@@ -153,6 +157,7 @@ virSecurityManagerNewDAC(const char *virtDriver,
bool defaultConfined,
bool requireConfined,
bool dynamicOwnership,
+ bool privileged,
virSecurityManagerDACChownCallback chownCallback)
But argument order is mixed up at the caller, swapping dynamicOwnership
and privileged values. This corrects the argument order.
https://bugzilla.redhat.com/show_bug.cgi?id=1266628
Since commit e0139e3, we update the pool allocation with
the user-provided allocation values.
For qcow2, the allocation is ignored for volume building,
but we still subtracted it from the pool's allocation.
This can result in interesting values if the user-provided
allocation is large enough:
Capacity: 104.71 GiB
Allocation: 109.13 GiB
Available: 16.00 EiB
We already do a VolRefresh on volume creation. Also refresh
the volume after creating and use the new value to update the pool.
https://bugzilla.redhat.com/show_bug.cgi?id=1163091
Commit id '7383b8cc' changed virDomainDef 'virtType' to an enum, which
caused a build failure on some archs due to comparing an unsigned value
to < 0. Adjust the fetch of 'type' to be into temporary 'int virtType'
and then assign that virtType to the def->virtType
Introduce VIR_DOMAIN_VIRT_NONE to give domaintype the default value of zero.
This is specially helpful in constructing better error messages
when we don't want to look up the default emulator by virtType.
The test data in vircapstest.c is also modified to reflect this change.
As of commit 6992994, we set graphics/@listen attribute according to the
first listen child element even if that element is of type='network'.
This was done for backward compatibility with applications which only
support the original listen attribute. However, by doing so we broke
migration to older libvirt which tried to check that the listen
attribute matches one of the listen child elements but which did not
take type='network' elements into account.
We are not concerned about compatibility with old applications when
formatting domain XML for migration for two reasons. The XML is consumed
only by libvirtd and the IP address associated with type='network'
listen address on the source host is just useless on the destination
host. Thus, we can safely avoid propagating the type='network' IP
address to graphics/@listen attribute when creating migratable XML.
https://bugzilla.redhat.com/show_bug.cgi?id=1265111
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This seemed to be more of a false positive as for some reason Coverity
was missing the "ret < 0" goto error condition and somehow believing that
event could be overwritten. At first I thought it was just the ret != 0
condition difference, but it wasn't.
In any case, make use of the recent change to qemuDomainEventQueue to
check event == NULL and just pass it as a parameter directly in the
error path. That avoids the error.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Coverity complains that return from virHookCall is not checked in
one place in qemuProcessStop. Since the comment notes that we cannot
stop the operation even it if fails, just added the ignore_value.
So while working on my previous patches, I've noticed that
the virDomainRestore implementation in the qemu and test drivers has the
same problem as the one I am fixing.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
So far we have the following pattern occurring over and over
again:
if (!vm->persistent)
qemuDomainRemoveInactive(driver, vm);
It's safe to put the check into the function and save some LoC.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
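I.e. the check moves into the helper itself, roughly (sketch):
void
qemuDomainRemoveInactive(virQEMUDriverPtr driver,
                         virDomainObjPtr vm)
{
    if (vm->persistent) {
        /* short-circuit, never remove a persistent domain */
        return;
    }
    /* ... existing removal logic ... */
}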
https://bugzilla.redhat.com/show_bug.cgi?id=871452
So, you want to create a domain from XML. The domain already
exists in libvirt's database of domains. It's okay, because name
and UUID matches. However, on domain startup, internal
representation of the domain is overwritten with your XML even
though we claim that the XML you've provided is a transient one.
The bug is to be found across nearly all the drivers.
Le sigh.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=871452
Okay, so we allow users to 'virsh create' an already existing
domain, providing completely different XML than the one stored in
Libvirt. Well, as long as name and UUID matches. However, in some
drivers the code that handles errors unconditionally removes the
domain that failed to start even though the domain might have
been persistent. Fortunately, the domain is removed just from the
internal list of domains and the config file is kept around.
Steps to reproduce:
1) virsh dumpxml $dom > /tmp/dom.xml
2) change XML so that it is still parse-able but won't boot, e.g.
change guest agent path to /foo/bar
3) virsh create /tmp/dom.xml
4) virsh dumpxml $dom
5) Observe "No such domain" error
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
I initially added this in order to make the code less error-prone for
following additions, but it seems it's still frowned upon.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Now that virQEMUDriverCreateXMLConf is never called with NULL
(after 086f37e97a) we can safely drop the useless check in
qemuDomainDeviceDefPostParse as we are guaranteed to be always
called with the driver initialized. Therefore checking if driver
is NULL makes no sense, especially when we mix it with direct driver
dereferences. And after that, we are sure that @cfg won't be
NULL either, therefore we can drop the checks for that too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Qemu unfortunately doesn't update internal state right after migration
and so the actual balloon size as returned by 'query-balloon' is
invalid for a while after the CPUs are started after migration. If we'd
refresh our internal state at this point we would report invalid current
memory size until the next balloon event would arrive.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1242940
My original implementation was based on a qemu version that still did
not have all the checks in place. Using sizes that would align to odd
megabyte increments will produce the following error:
qemu-kvm: -device pc-dimm,node=0,memdev=memdimm0,id=dimm0: backend memory size must be multiple of 0x200000
qemu-kvm: -device pc-dimm,node=0,memdev=memdimm0,id=dimm0: Device 'pc-dimm' could not be initialized
Introduce an alignment retrieval function for memory devices and use it
to align the devices separately and modify a test case to verify it.
Even though we hit an error in client's IO loop, we still want to
process any pending data. So instead of reporting the error right away,
we can finish the current iteration and report the error once we're done
with it. Note that the error is stored in client->error by
virNetClientMarkClose so we don't need to worry about it being reset or
rewritten by any API we call in the meantime.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Whenever a connection was closed due to keepalive timeout, we would log
a warning but the interrupted API would return rather useless generic
error:
internal error: received hangup / error event on socket
Let's report a proper keepalive timeout error and make sure it is
propagated to all pending APIs. The error should be better now:
internal error: connection closed due to keepalive timeout
Based on an old patch from Martin Kletzander.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Build fails with an error like this:
CC qemu/libvirt_driver_qemu_impl_la-qemu_command.lo
qemu/qemu_capabilities.c:46:27: fatal error: qemu_capspriv.h: No such file or directory
#include "qemu_capspriv.h"
Add qemu_capspriv.h to the sources.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
For some machine types ppc64 machines now require that memory sizes are
aligned to 256MiB increments (due to the dynamically reconfigurable
memory). Since we now treat existing configs reasonably with regard to
migration, we can round all the sizes unconditionally. The only drawback
is that the memory size of a VM can potentially increase by
(256MiB - 1 byte) * number_of_NUMA_nodes.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1249006
When we are starting a qemu process for an incoming migration or
snapshot reloading we should not modify the memory sizes in the domain
since we could potentially change the guest ABI that was tediously
checked before. Additionally the function now updates the initial memory
size according to the NUMA node size, which should not happen if we are
restoring state.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1252685
When implementing memory hotplug I opted to recalculate the initial
memory size (the contents of the <memory> element) as a sum of the sizes
of NUMA nodes when NUMA was enabled. This was based on the assumption
that qemu did not allow starting when the NUMA node size total didn't
equal the initial memory size. Unfortunately that check was added to
qemu only recently.
This patch uses the new XML parser flag to decide whether it's safe to
update the memory size total from the NUMA cell sizes or not.
As an additional improvement we now report an error when the size of
hotplug memory would exceed the total memory size.
The rest of the changes ensures that the function is called with the
correct flags.
Add 'initial_memory' member to struct virDomainMemtune so that the
memory size can be pre-calculated once instead of inferring it always
again and again.
Separating the fields will also allow finer-grained decisions in later
patches, where it will allow keeping the old initial memory value in
cases where we are handling incoming migration from older versions that
did not always update the size from NUMA as the code did previously.
The change also requires modification of the qemu memory alignment
function since at the point where we are modifying the size of NUMA
nodes the total size needs to be recalculated too.
The refactoring done in this patch also fixes a crash in the hyperv
driver that did not properly initialize def->numa and thus
virDomainNumaGetMemorySize(def->numa) crashed.
In summary this patch should have no functional impact at this point.
The post parse func is growing rather large. Since later patches will
introduce more logic in the memory post parse code, split it into a
separate handler.
Add a new parser flag that will mark code paths that parse XML files
which will not be used with existing VM state, so that post parse
callbacks can possibly make ABI-incompatible changes if needed.
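A sketch of the kind of flag bit meant here (the identifier and the enum
name are illustrative assumptions, not necessarily the ones introduced):
    typedef enum {
        /* (name hypothetical) the XML being parsed is not tied to existing
         * VM state, so post parse callbacks may perform ABI-incompatible
         * adjustments such as updating memory sizes */
        VIR_DOMAIN_DEF_PARSE_ABI_UPDATE = 1 << 0,
    } virDomainDefParseFlagsSketch;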
The flag was used only for formatting the XML and once the parser and
formatter flags were split in 0ecd685109
it doesn't make sense any more to have it.
Extract the size determination into a separate function and reuse it
across the memory device alignment functions. Since later we will need
to decide the alignment size according to architecture let's pass def to
the functions.
Since the test suite now correctly creates the capabilities cache, the
hack is not needed any more.
Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
The main purpose of this patch is to introduce a test mode to
virQEMUCapsCacheLookup(). This is done by adding a global variable which
effectively overrides the binary name. This variable is supposed to be
set by the test suite.
The second addition is the qemuTestCapsCacheInsert() function, which
allows the test suite to actually populate the cache.
Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
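A rough sketch of the override mechanism being described (the variable and
helper names below are assumptions for illustration):
    /* Set by the test suite; when non-NULL, every capabilities cache
     * lookup resolves to this binary instead of the requested one. */
    static const char *testCapsOverrideBinary;

    static const char *
    capsEffectiveBinary(const char *binary)
    {
        return testCapsOverrideBinary ? testCapsOverrideBinary : binary;
    }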
Invalid read of size 4
at 0x945CA30: __pthread_mutex_unlock_full (in /lib64/libpthread-2.20.so)
by 0x4F0404B: virMutexUnlock (virthread.c:94)
by 0x4F7161B: virStoragePoolObjUnlock (storage_conf.c:2603)
by 0x4FE0476: testStoragePoolUndefine (test_driver.c:4328)
by 0x4FCF086: virStoragePoolUndefine (libvirt-storage.c:656)
by 0x15A7F5: cmdPoolUndefine (virsh-pool.c:1721)
by 0x12F48D: vshCommandRun (vsh.c:1212)
by 0x132AA7: main (virsh.c:943)
Address 0xfda56a0 is 16 bytes inside a block of size 104 free'd
at 0x4C2BA6C: free (vg_replace_malloc.c:473)
by 0x4EA5C96: virFree (viralloc.c:582)
by 0x4F70B69: virStoragePoolObjFree (storage_conf.c:412)
by 0x4F7167B: virStoragePoolObjRemove (storage_conf.c:437)
by 0x4FE0468: testStoragePoolUndefine (test_driver.c:4323)
by 0x4FCF086: virStoragePoolUndefine (libvirt-storage.c:656)
by 0x15A7F5: cmdPoolUndefine (virsh-pool.c:1721)
by 0x12F48D: vshCommandRun (vsh.c:1212)
by 0x132AA7: main (virsh.c:943)
Similar to commit id '35847860', it's possible to attempt to create
a 'netfs' directory in an NFS root-squash environment, which will cause
the 'vol-delete' command to fail. It's also possible that error paths
from 'vol-create' would result in an error removing a created directory
if the permissions were incorrect (and disallowed root access).
Thus rename virFileUnlink to virFileRemove to match the C API
functionality, adjust the code to use rmdir or unlink depending on the
path type, and then use/call it for the VIR_STORAGE_VOL_DIR
Since not every call of prlsdkUUIDParse assumes a correct UUID is
supplied, there is no point in complaining about a wrong format inside
it. Otherwise our log is flooded with false error messages.
For instance, calling prlsdkUUIDParse from prlsdkEventsHandler
works as a filter and, in case the UUID is absent for an event issuer,
we simply know that we shouldn't continue further processing.
Instead of logging an error for all calls we should explicitly take
into account where the function is called from.
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
So far this function was not kept in sync with changing
virDomainDiskDef. Fill in all the missing checks and reorganize
their order so it's easier to track which items are not being
checked for.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
I always felt like this function is qemu specific rather than
libvirt-wide. Other drivers may act differently on a virDomainDef
change and in fact may require talking to the underlying hypervisor
even if something other than disk->src has changed. I know that
the function is still incomplete, but let's break that into two
commits that are easier to review. This one is pure code
movement.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Firstly, our coding guidelines suggest using the 'cleanup' label
instead of 'end'. Then, @ret should be set to a value representing
success as the last statement before the 'cleanup' label.
And while I am at this function, let's enumerate all the possible
enum items (virDomainDiskDevice) and avoid using 'default' in
switch(). Also, nothing bad happens if we look up the disk
to change in the domain upfront. In fact, it's going to be
helpful later when we want to keep some old values for performing
a rollback.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This new private API should return true iff the sources of two disks
differ in the sense that qemu should be instructed to change the
disk backend. For instance, ejecting a CDROM is such a case, as is
pointing the disk to a different ISO location, and so on.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
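An illustrative comparison along those lines (a simplified sketch; real
disk sources carry far more than a path, and the names here are assumptions):
    #include <stdbool.h>
    #include <string.h>

    static bool
    diskSourceDiffers(const char *oldpath, const char *newpath)
    {
        /* inserting or ejecting media: exactly one side has a source */
        if (!oldpath || !newpath)
            return oldpath != newpath;

        /* pointing the disk at a different image location */
        return strcmp(oldpath, newpath) != 0;
    }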
While we currently only allow changing the media in a disk, this is
going to change in a while, so the function name would become
inaccurate. Moreover, the old name does not match the pattern laid
out by other update functions.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1159219
So, in 11e058ca58 I've tried to make UpdateDevice update
startupPolicy too. And it worked well until somebody came around
and pushed d0dc6c0369 which accidentally removed my
contribution. Redo my commit.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When persistently migrating a domain to a destination host where the
same domain already exists (i.e., it is persistent and shutdown at the
destination), we would happily throw away the original persistent
definition without properly freeing it. And when updating the definition
fails for some reason we don't properly revert to the original state
leaving the domain broken.
In addition to fixing these issues, the patch also makes sure the domain
definition parsed from a migration cookie is either used or freed.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
For quite a long time we don't need to postpone queueing events until
the end of the function since we no longer have the big driver lock.
Let's make the code of qemuMigrationFinish simpler by queuing events at
the time we generate them.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Every single call to qemuDomainEventQueue() uses the following pattern:
    if (event)
        qemuDomainEventQueue(driver, event);
Let's move the check for valid event to qemuDomainEventQueue and
simplify all callers.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
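The simplification amounts to tolerating NULL inside the helper, roughly
like this (a sketch of the expected shape, not the verbatim patch):
    void
    qemuDomainEventQueue(virQEMUDriverPtr driver,
                         virObjectEventPtr event)
    {
        /* accept NULL so callers can drop their own checks */
        if (event)
            virObjectEventStateQueue(driver->domainEventState, event);
    }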
Finish is the final state in v2 of our migration protocol. If something
fails, we have no option to abort the migration and resume the original
domain. Non-fatal errors (such as a failure to start guest CPUs or make
the domain persistent) have to be treated as success. Keeping the domain
running while reporting the failure was just asking for trouble.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Whenever something fails during incoming migration in Finish phase
before we started guest CPUs, we need to kill the domain in addition to
reporting the failure.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When we save status XML at the point during migration where we have
already started the domain on destination, we can't really go back and
abort migration. Thus the only thing we can do is to log a warning and
report success.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Offline migration is quite special because we don't really need to do
anything but make the domain persistent. Let's do it separately from
normal migration to avoid cluttering the code with
!(flags & VIR_MIGRATE_OFFLINE).
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
After attaching a <hostdev> with --config, the new device doesn't
show up in dumpxml or in the guest.
To fix that, set dev->data.hostdev = NULL after the work is done so
that the pointer is not freed, since vmdef holds the pointer and still
needs it.
Signed-off-by: Chunyan Liu <cyliu@suse.com>
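The shape of the fix is roughly the following (a sketch based on the
description above; error handling around it is elided):
    if (virDomainHostdevInsert(vmdef, hostdev) < 0)
        return -1;
    /* vmdef now owns the hostdev; clear our reference so the caller's
     * cleanup path does not free memory that is still in use */
    dev->data.hostdev = NULL;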
Tools such as libguestfs need the datacenter path to get access to disk
images. The ESX driver knows the correct datacenter path, but this
information cannot be accessed through the libvirt API yet. Also, it
cannot be deduced from the connection URI in a robust way.
Expose the datacenter path in the domain XML as a <vmware:datacenterpath>
node, similar to the way the <qemu:commandline> node works. The new node
is ignored while parsing the domain XML. In contrast to <qemu:commandline>
it is output only.
Commit id 'f1f68ca33' added code to remove the directory paths for
auto-generated sockets, but that code could be called before the
paths were created, resulting in error messages from
virFileDeleteTree indicating that the file doesn't exist.
Rather than forcing all callers to perform the non-NULL and existence
checks, modify the virFileDeleteTree API to silently ignore NULL
input and non-existent directory trees.
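A sketch of the tolerant entry check (assuming virFileExists; the recursive
removal itself is unchanged and elided here):
    int
    virFileDeleteTree(const char *dir)
    {
        /* silently ignore NULL input and non-existent directory trees */
        if (!dir || !virFileExists(dir))
            return 0;

        /* ... recursive removal as before ... */
        return 0;
    }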
When looking for a QEMU binary suitable for running ppc64le guests
we have to take into account the fact that we use the QEMU target
as key for the hash, so direct comparison is not good enough.
Factor out the logic from virQEMUCapsFindBinaryForArch() to a new
virQEMUCapsFindTarget() function and use that both when looking
for QEMU binaries available on the system and when looking up
QEMU capabilities later.
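The factored-out mapping is essentially this (a sketch; the real helper may
cover additional architecture pairs):
    static virArch
    virQEMUCapsFindTarget(virArch hostarch, virArch guestarch)
    {
        /* ppc64le guests are served by the ppc64 QEMU target, so hash
         * lookups must be keyed on the target, not the guest arch */
        if (guestarch == VIR_ARCH_PPC64LE)
            return VIR_ARCH_PPC64;

        return guestarch;
    }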
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1260753
libxl/libxl_conf.c: In function 'libxlDriverConfigNew':
libxl/libxl_conf.c:1560:30: error: 'log_level' may be used uninitialized
in this function [-Werror=maybe-uninitialized]
Instead of a hardcoded DEBUG log level, use the overall
daemon log level specified in libvirtd.conf when opening
a log stream with libxl. libxl is very verbose when DEBUG
log level is set, resulting in huge log files that can
potentially fill a disk. Control of libxl verbosity should
be placed in the administrator's hands.
Fixes the following error when attempting to add a disk with bus='virtio'
to a machine which actually supports virtio-mmio (caught with ARM virt):
virtio disk cannot have an address of type 'virtio-mmio'
The problem was likely introduced by
e8d5517254. Before that,
qemuAssignDevicePCISlots() was never called for the ARM "virt" machine.
Signed-off-by: Pavel Fedin <p.fedin@samsung.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1124841
If running in session mode it may happen that we fail to set the
correct SELinux label, but the image may still be readable to
the qemu process. Take this into account.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
We may want to make some decisions in drivers based on whether we
are running as a privileged user or not. Propagate this info there.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
We have plenty of callbacks in the driver. Some of these
callbacks require more than one argument to be passed. For that
we currently have one data type (struct) per callback. Well,
so far only one - SELinuxSCSICallbackData. But let's give it a
more general name so it can be reused in other callbacks too,
instead of each one introducing a new, duplicate data type.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Commit f1f68ca334 tried fixing running multiple domains under various
users, but if the user can't browse the directory, it's hard for the
qemu running under that user to create the monitor socket.
The permissions need to be fixed in two places in the spec file due to
support for both installations with and without driver modules.
Creating a directory with '$(MKDIR_P) -m' shouldn't fail even on systems
where autoconf needs to fall back to 'install-sh -d'.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1146886
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Commit 8125113c added code that should remove the disk backend if the
frontend hotplug failed for any reason. The code had a bug though, as it
used the disk string for unplug rather than the backend alias. Fix the
code by pre-creating an alias string and using it instead of the disk
string. In cases where qemu does not support QEMU_CAPS_DEVICE, we skip
the unplug of the backend since we can't really create an alias in that
case.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1262399
There are a couple of reports of things failing in this area (bug 1259070),
but it's tough to tell what's going wrong without stderr from
qemu-bridge-helper. So let's report stderr in the error message.
A couple of new examples:
virbr0 is inactive:
internal error: /usr/libexec/qemu-bridge-helper --use-vnet --br=virbr0 --fd=21: failed to communicate with bridge helper: Transport endpoint is not connected
stderr=failed to get mtu of bridge `virbr0': No such device
bridge isn't on the ACL:
internal error: /usr/libexec/qemu-bridge-helper --use-vnet --br=br0 --fd=21: failed to communicate with bridge helper: Transport endpoint is not connected
stderr=access denied by acl file
The xenXMConfigCacheRefresh method scans /etc/xen and loads
all config files it finds. It then scans its internal hash
table and purges any (previously) loaded config files whose
refresh timestamp does not match the timestamp recorded at
the start of xenXMConfigCacheRefresh(). There is unfortunately
a subtle flaw in this, because if loading the config files
takes longer than 1 second, some of the config files will
have a refresh timestamp that is 1 or more seconds different
(newer) than is checked for. So we immediately purge a bunch
of valid config files we just loaded.
To avoid this flaw, we must pass the timestamp we record at
the start of xenXMConfigCacheRefresh() into the
xenXMConfigCacheAddFile() method, instead of letting the
latter call time(NULL) again.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
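In other words, the refresh loop takes the shape below (a sketch of the
fix as described above, not the verbatim code):
    time_t refreshedAt = time(NULL);  /* recorded once, before the scan */

    /* for every config file found under /etc/xen ... */
    xenXMConfigCacheAddFile(conn, filename, refreshedAt);

    /* ... entries whose timestamp differs from refreshedAt are purged
     * afterwards, so slow scans no longer discard freshly loaded files */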
commit 4b53d0d4ac "libxl: don't remove persistent domain on start
failure" cleans up the vm object and sets it to NULL if the vm is not
persistent; however, during end job the vm (now NULL) is dereferenced via
the call to libxlDomainObjEndJob. Avoid this by skipping "endjob" and
going straight to "cleanup" in this case.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Up until now, the default has been rtl8139, but no check was in
place to make sure that device was actually available.
Now we try rtl8139, e1000 and virtio-net in turn, checking for
availability before using any of them: this means we have a much
better chance for the guest to be able to boot.
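A sketch of the fallback order (the capability flag names are assumptions
for illustration):
    /* pick the first NIC model the QEMU binary actually provides */
    if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_DEVICE_RTL8139))
        model = "rtl8139";
    else if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_DEVICE_E1000))
        model = "e1000";
    else if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_DEVICE_VIRTIO_NET))
        model = "virtio";
    else
        model = NULL;  /* no known NIC model available */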