Tunnelled migration doesn't require any extra network connections
besides the libvirt daemon connection. It is capable of strong
encryption and is the default option in openstack-nova.
This patch adds tunnelled migration (Tunnel3params) support to
libxl. On the source side, the data flow is:
  libxlDoMigrateSend() -> pipe; libxlTunnel3MigrationFunc() polls the
  pipe and writes the data out to the destination stream.
While on the destination side it is:
  stream -> pipe -> recvfd of libxlDomainStartRestore
The usage is the same as p2p migration, except adding one extra
'--tunnelled' flag to the libvirt p2p migration command.
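For illustration only (hypothetical domain and host names; the --p2p and
--tunnelled flags are existing virsh migrate options):

    virsh migrate --live --p2p --tunnelled domU-test xen+ssh://desthost/system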
Signed-off-by: Bob Liu <bob.liu@oracle.com>
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
The newly introduced function libxlDomainMigrationPrepareAny
will be shared between P2P and tunnelled variations.
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
This function returns a boolean, therefore a check for '< 0'
makes no sense. It should have been
'!qemuDomainNamespaceAvailable'.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The bare fact that the mount namespace is available is not enough for
us to allow/enable the qemu namespaces feature. There are other
requirements: we must copy all the ACL & SELinux labels, otherwise
we might grant access that is administratively forbidden, or vice
versa.
At the same time, the check for namespace prerequisites is moved
from domain startup time to the qemu.conf parser, as it doesn't make
much sense to allow users to start a misconfigured libvirtd just to
find out they can't start a single domain.
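For reference, the feature itself is toggled in qemu.conf (illustrative
snippet, assuming the option keeps its 'namespaces' name and the mount
namespace type):

    namespaces = [ "mount" ]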
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If the apparmor security driver is loaded/enabled and domain config
contains a <seclabel> element whose type attribute is not 'apparmor',
starting the domain fails when attempting to label resources such
as tap FDs.
Many of the apparmor driver entry points attempt to retrieve the
apparmor security label from the domain def, returning failure if
not found. Functions such as AppArmorSetFDLabel fail even though the
domain config contains an explicit 'none' security driver, e.g.
<seclabel type='none' model='none'/>
Change the entry points to succeed if the domain config <seclabel>
is not apparmor. This matches the behavior of the selinux driver.
Commit 2a8d40f4ec refactored qemuMonitorJSONGetCPUx86Data and replaced
virJSONValueObjectGet(reply, "return") with virJSONValueObjectGetArray.
While the former is guaranteed to always return a non-NULL pointer,
the latter may return NULL if the returned JSON object is not an array.
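A minimal sketch of the kind of check that becomes necessary (simplified
snippet, not the exact code in qemu_monitor_json.c):

    virJSONValuePtr data;

    if (!(data = virJSONValueObjectGetArray(reply, "return"))) {
        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                       _("reply was missing return data"));
        goto cleanup;
    }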
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
mknod() is affected by the current umask, so we're not
guaranteed the newly-created device node will have the
right permissions.
Call chmod(), which is not affected by the current umask,
immediately afterwards to solve the issue.
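A minimal sketch of the pattern, with a hypothetical helper name (the
real change lives in the qemu namespace code):

    #include <sys/types.h>
    #include <sys/stat.h>

    static int
    createDeviceNode(const char *path, mode_t mode, dev_t dev)
    {
        /* mknod() applies the umask to 'mode'... */
        if (mknod(path, S_IFCHR | mode, dev) < 0)
            return -1;
        /* ...so force the intended permissions with chmod(),
         * which ignores the umask. */
        if (chmod(path, mode) < 0)
            return -1;
        return 0;
    }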
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1421036
For the namespaces feature to work properly we need to be able
to make a perfect copy of the original /dev, including ACLs.
By adding a BuildRequires on libacl-devel we ensure that ACL
support will be enabled at configure time and made available
to the QEMU driver.
To make sure bit 'b' fits into the bitmap, we need to allocate b+1
bits, since we number from 0.
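In code form (illustrative arithmetic only, not the actual virBitmap
source):

    /* Bits are numbered from 0, so setting bit 'b' requires the bitmap
     * to hold at least b + 1 bits; e.g. bit 16 needs 17 bits, which is
     * why a map grown only to 'b' bits is one short. */
    size_t bitsNeeded = b + 1;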
Adjust the bitmap test to set a bit at a multiple of 16.
That way the test fails without this fix, because the VIR_REALLOC
call clears the newly added memory even if the original pointer
has not changed.
Move the range check introduced by commit 2650d5e into
virDomainUSBAddressFindPort. That way both virDomainUSBAddressRelease
and virDomainUSBAddressSetAddHub can benefit from it.
Reported-by: Michal Privoznik <mprivozn@redhat.com>
Due to a logic error, the autofilling of USB port when a bus is
specified:
<address type='usb' bus='0'/>
does not work for non-hub devices on domain startup.
Fix the logic in qemuDomainAssignUSBPortsIterator to also
assign ports for USB addresses that do not yet have one.
https://bugzilla.redhat.com/show_bug.cgi?id=1374128
We have to allocate first and, if and only if that was successful,
set the count. A segfault occurred in
virNetServerServiceNewPostExecRestart() when VIR_ALLOC_N(svc->socks,
n) failed but svc->nsocks = n had already been set. Thus
virObjectUnref(svc) was called and therefore it was possible that
virNetServerServiceDispose() was called => segmentation fault. For
safety, NULL pointer checks were added in
virNetServerServiceDispose().
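A minimal sketch of the allocate-before-count ordering (simplified from
the actual code):

    /* Allocate first; record the size only on success, so that an
     * error path which unrefs the service never iterates over an
     * array that was never allocated. */
    if (VIR_ALLOC_N(svc->socks, n) < 0)
        goto error;
    svc->nsocks = n;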
Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
Recently e1000 NIC support was added to bhyve; implement that in
the bhyve driver:
- Add a capability check by analyzing output of the 'bhyve -s 0,e1000'
command
- Modify bhyveBuildNetArgStr() to support e1000 and also pass
virConnectPtr so it can call bhyveDriverGetCaps() to check whether this
NIC is supported
- Modify command parsing code to add support for e1000 and adjust tests
- Add net-e1000 test
If either the "if (STRPREFIX(mem_tokens[j], "max:"))" is never entered
or the "if (virStrToLong_ull(mem_tokens[j] + 4, &p, 10, maxmem) < 0)" break
is hit, control goes back to the outer loop processing 'cmd_tokens' and
it's possible that the 'mem_tokens' would be overwritten.
Found by Coverity
Right now, we use simple string comparison both on the source paths
(mount's output vs pool's source) and the target (mount's mnt_dir vs
pool's target). The problem is symlinks: mount indeed returns
symlinks in its output, e.g. /dev/mapper/lvm_symlink. The same goes for
the pool's source/target, so in order to successfully compare these two,
replace the plain string comparison with virFileComparePaths, which will
resolve all symlinks and canonicalize the paths prior to comparison.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1417203
Signed-off-by: Erik Skultety <eskultet@redhat.com>
So rather than comparing two paths (strings) as they are, which can very
easily lead to unnecessary errors (e.g. in the storage driver) claiming
that the paths are not the same when in fact they are just symlinks to
the same location, we should put our best effort into resolving any
symlinks and canonicalizing the paths, and only then compare the two
paths for equality.
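A simplified sketch of the helper (the real implementation in virfile.c
also falls back to plain string comparison when resolution fails):

    int
    virFileComparePaths(const char *p1, const char *p2)
    {
        int ret = -1;
        char *res1 = NULL;
        char *res2 = NULL;

        /* Resolve symlinks and canonicalize both paths before comparing. */
        if (virFileResolveLink(p1, &res1) < 0 ||
            virFileResolveLink(p2, &res2) < 0)
            goto cleanup;

        ret = STREQ_NULLABLE(res1, res2);

     cleanup:
        VIR_FREE(res1);
        VIR_FREE(res2);
        return ret;
    }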
Signed-off-by: Erik Skultety <eskultet@redhat.com>
When an FS pool's source is already mounted on the target location, we
fail with an error stating that the source is indeed already mounted on
the target, instead of simply marking the pool as active and thus
starting it.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Seeing a similar error to commit id '997be5c27', where the inability
to find the libvirt_event_poll_purge_timeout_semaphore symbol
causes a virusbtest failure.
On a system with 697 SCSI disks each configured with 8 paths the command
virsh nodedev-list fails with
error: Failed to list node devices
error: internal error: Too many node_devices '16816' for limit '16384'
Increasing the upper limit on lists of node devices from 16K to 64K.
Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Although this is currently documented in the virsh man page
and virsh help, an explicit mention in the error message
is helpful for tools using the API directly.
Signed-off-by: Nitesh Konkar <nitkon12@linux.vnet.ibm.com>
==11846== 240 bytes in 1 blocks are definitely lost in loss record 81 of 107
==11846== at 0x4C2BC75: calloc (vg_replace_malloc.c:624)
==11846== by 0x18C74242: virAllocN (viralloc.c:191)
==11846== by 0x4A05E8: qemuMonitorCPUModelInfoCopy (qemu_monitor.c:3677)
==11846== by 0x446E3C: virQEMUCapsNewCopy (qemu_capabilities.c:2171)
==11846== by 0x437335: testQemuCapsCopy (qemucapabilitiestest.c:108)
==11846== by 0x437CD2: virTestRun (testutils.c:180)
==11846== by 0x437AD8: mymain (qemucapabilitiestest.c:176)
==11846== by 0x4397B6: virTestMain (testutils.c:992)
==11846== by 0x437B44: main (qemucapabilitiestest.c:188)
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
==22187== 77 (56 direct, 21 indirect) bytes in 1 blocks are definitely lost in loss record 23 of 37
==22187== at 0x4C2BC75: calloc (vg_replace_malloc.c:624)
==22187== by 0x4E75685: virAlloc (viralloc.c:144)
==22187== by 0x4F0613A: virUSBDeviceNew (virusb.c:332)
==22187== by 0x4F05BA2: virUSBDeviceSearch (virusb.c:183)
==22187== by 0x4F05F95: virUSBDeviceFind (virusb.c:296)
==22187== by 0x403514: testUSBList (virusbtest.c:209)
==22187== by 0x403BD8: virTestRun (testutils.c:180)
==22187== by 0x4039E5: mymain (virusbtest.c:285)
==22187== by 0x4056BC: virTestMain (testutils.c:992)
==22187== by 0x403A4A: main (virusbtest.c:293)
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
We are using a couple of functions from there (e.g. virStrdup) and
rely on the fact that the binary linking us already has libvirt_utils
linked in. Well, this makes valgrind sad.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
A lot of our tests re-execute themselves after loading their
mock library. This, however, makes valgrind sad because currently
we do not tell it to trace the process after exec().
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This commit removes the handcrafted code for
remoteDomainCreateWithFlags() and lets it be auto-generated.
A little bit of history repeating...
Commit 03d813bbcd removed the auto generation of
remoteDomainCreateWithFlags() because it was thought that the design
flaw in the remote protocol for virDomainCreate is also within the
remote protocol for virDomainCreateWithFlags. As the commit message of
ddaf15d7a3 mentions, this is not the case, therefore we
can auto-generate the client part.
Even worse, there was a typo in remoteDomainCreateWithFlags(): it
declared 'remote_domain_create_with_flags_args ret;' but in fact it has
to be 'remote_domain_create_with_flags_ret ret;'.
Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
After freeing the data structures we have to reset the counters to
zero. This fixes a segmentation fault when virNetDevIPInfoClear() is
called twice, which is possible in virDomainNetDefParseXML(): if
virDomainNetIPInfoParseXML(...) fails with ret < 0, this leads to a
first call of virNetDevIPInfoClear(&def->guestIP), and the resulting
call of virDomainNetDefFree(def) in the error path of
virDomainNetDefParseXML() leads to a second call of
virNetDevIPInfoClear(&def->guestIP), and finally to the segmentation
fault.
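A simplified sketch of the resulting clear function (the real one also
frees the route list in the same fashion):

    void
    virNetDevIPInfoClear(virNetDevIPInfoPtr ip)
    {
        size_t i;

        for (i = 0; i < ip->nips; i++)
            VIR_FREE(ip->ips[i]);
        VIR_FREE(ip->ips);
        ip->nips = 0;   /* reset so a second call becomes a harmless no-op */
    }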
Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
If virDomainChrSourceDefNew(xmlopt) fails, the uninitialized pointer
'bus' ends up being free()d. The fix for this is to initialize 'bus'
to NULL.
Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
Check whether virQEMUCapsNewCopy(...) has failed, thus avoiding a
segmentation fault in virQEMUCapsFilterByMachineType(...).
Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
Using libvirt to do a live migration over RDMA via an IPv6 address failed.
For example:
rhel73_host1_guest1 qemu+ssh://[deba::2222]/system --verbose
root@deba::2222's password:
error: internal error: unable to execute QEMU command 'migrate': RDMA
ERROR: could not rdma_getaddrinfo address deba
As we can see, the IPv6 address used by rdma_getaddrinfo() has only
the "deba" part because we didn't properly enclose the IPv6 address in
[] and passed rdma:deba::2222:49152 as the migration URI in
qemuMonitorMigrateToHost.
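A minimal sketch of the idea behind the fix (simplified snippet; the
actual URI construction lives in the qemu migration code):

    char *uri = NULL;

    /* Raw IPv6 addresses contain ':', so wrap them in [] to keep the
     * port from being parsed as part of the address. */
    if (strchr(hostname, ':')) {
        if (virAsprintf(&uri, "rdma:[%s]:%d", hostname, port) < 0)
            return -1;
    } else if (virAsprintf(&uri, "rdma:%s:%d", hostname, port) < 0) {
        return -1;
    }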
Signed-off-by: David Dai <zdai@linux.vnet.ibm.com>
When the libxl driver is initialized, it creates a virDomainDef
object for dom0 and adds it to the list of domains. Total memory
for dom0 was being set from the max_memkb field of the libxl_dominfo
struct retrieved from libxl, but this field can be set to
LIBXL_MEMKB_DEFAULT (~0ULL) if dom0 maximum memory has not been
explicitly set by the user.
This patch adds some simple parsing of the Xen commandline,
looking for a dom0_mem parameter that also specifies a 'max' value.
If not specified, dom0 maximum memory is effectively all physical
host memory.
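A hedged sketch of the parsing idea (hypothetical helper with simplified
handling; the real code also has to deal with unit suffixes and option
boundaries):

    #include <stdlib.h>
    #include <string.h>

    /* Return the raw number following "max:" in a dom0_mem= option,
     * or 0 if no maximum was specified on the Xen command line. */
    static unsigned long long
    parseDom0MemMax(const char *xenCmdline)
    {
        const char *opt = strstr(xenCmdline, "dom0_mem=");
        const char *max;

        if (!opt || !(max = strstr(opt, "max:")))
            return 0;
        return strtoull(max + 4, NULL, 10);
    }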
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
The libxl driver reports different values of maximum memory depending
on state of a domain. If inactive, maximum memory value is reported
correctly. When active, maximum memory is derived from max_pages value
returned by the XEN_SYSCTL_getdomaininfolist sysctl operation. But
max_pages can be changed by toolstacks and does not necessarily
represent the maximum memory a domain can use during its active
lifetime.
A better location for determining a domain's maximum memory is the
/local/domain/<id>/memory/static-max node in xenstore. This value
is set from the libxl_domain_build_info.max_memkb field when creating
the domain. Currently it cannot be changed nor can its value be
exceeded by a balloon operation. From libvirt's perspective, always
reporting maximum memory with virDomainDefGetMemoryTotal() will produce
the same results as reading the static-max node in xenstore.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
When a user does not explicitly set a <driver> in the disk config,
libvirt defers selection of a default to libxl. This approach works
fine when starting a domain with such configuration or attaching a
disk to a running domain. But when detaching such a disk, libxl
will fail with "unrecognized disk backend type: 0". libxl makes no
attempt to recalculate a default backend (driver) on detach and
simply fails when uninitialized.
This patch updates the libvirt disk config with the backend selected
by libxl when starting a domain or attaching a disk to a running
domain. Another benefit of this approach is that the live XML is
also updated with the backend driver selected by libxl.
When starting a domain, a libxl_domain_config object is created from
the virDomainDef. Any virDomainDiskDef devices with a format of
VIR_STORAGE_FILE_NONE are mapped to LIBXL_DISK_FORMAT_RAW in the
corresponding libxl_disk_device, but the virDomainDiskDef format is
never updated to reflect the change.
A better place to set a default format for disk devices is the
device post-parse callback, ensuring the virDomainDiskDef object
reflects the default format.
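A minimal sketch of the post-parse default (simplified; the real change
lives in the libxl device post-parse callback):

    if (dev->type == VIR_DOMAIN_DEVICE_DISK) {
        virDomainDiskDefPtr disk = dev->data.disk;

        /* Record the default in the virDomainDiskDef itself instead of
         * only mapping it when building the libxl_device_disk. */
        if (virDomainDiskGetFormat(disk) == VIR_STORAGE_FILE_NONE)
            virDomainDiskSetFormat(disk, VIR_STORAGE_FILE_RAW);
    }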
Since successful completion of the calls to openvswitch is expected,
it should be possible to choose a longer timeout to account for loaded
systems.
Therefore this patch provides the ability to specify the timeout value for
openvswitch calls in the libvirtd configuration file.
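For example (illustrative snippet, assuming the new option is named
ovs_timeout and takes a value in seconds):

    # /etc/libvirt/libvirtd.conf
    ovs_timeout = 10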
Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
This patch allows setting the timeout value used for all
openvswitch calls. The default timeout value remains
5 seconds, as before.
Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Provide the ability to specify a default timeout value for
successful completion of openvswitch calls in the libvirtd
configuration file.
Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This patch adds support for file memory backing on NUMA topology.
The access mode specified in memoryBacking can be overridden
by specifying the memAccess attribute in a numa cell.
This part introduces new XML elements for file-based
memory backing support and their parsing.
(It allows vhost-user to be used without hugepages.)
New XML elements:
<memoryBacking>
  <source type="file|anonymous"/>
  <access mode="shared|private"/>
  <allocation mode="immediate|ondemand"/>
</memoryBacking>
Add a new parameter, memory_backing_dir, specifying where files will be
stored when the memoryBacking source is set to 'file'.
The value is stored in char *memoryBackingDir.
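For example (illustrative path; the actual default may differ):

    # /etc/libvirt/qemu.conf
    memory_backing_dir = "/var/lib/libvirt/qemu/ram"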
Rename to avoid duplicate code, because virDomainMemoryAccess will be
used in memoryBacking for setting the default behaviour.
NOTE: The enum cannot be moved to qemu/domain_conf because of a header
dependency.
Strings associated with virDomainHyperv values in domain_conf.c are used to
construct HyperV CPU feature names to be compared with names defined in
cpu_x86_data.h, and the names for the HyperV "spinlocks" feature don't match.
This leads to a misleading warning:
"host doesn't support hyperv 'spinlocks' feature", even when it is supported.
Let's fix it and, along with it, rename VIR_CPU_x86_KVM_HV_SPINLOCK to
VIR_CPU_x86_KVM_HV_SPINLOCKS.
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>