Call the internal driver callbacks rather than the public APIs to avoid
unnecessarily invoking the error dispatching code and overwriting the
error messages provided by the APIs. Those messages are good enough to
describe which secret is missing, either by UUID or by usage (basically
its name).
For a few cases where we handle secret information it's good to clear
the buffers containing sensitive data before freeing them.
Introduce VIR_DISPOSE, VIR_DISPOSE_N and VIR_DISPOSE_STRING that allow
simple clearing of the buffers holding sensitive information on cleanup
paths.
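A rough usage sketch on a cleanup path (hypothetical function and variable
names; assuming VIR_DISPOSE_STRING wipes and frees a NUL-terminated string,
VIR_DISPOSE_N wipes and frees a buffer of the given element count, and both
reset the pointer to NULL):

#include <stdint.h>
#include "viralloc.h"   /* VIR_DISPOSE_N / VIR_DISPOSE_STRING */

static int
exampleUseSecret(void)
{
    char *password = NULL;     /* hypothetical plain-text secret */
    uint8_t *key = NULL;       /* hypothetical binary secret */
    size_t keylen = 0;
    int ret = -1;

    /* ... fetch the secrets and pass them on to qemu ... */
    ret = 0;

    /* wipe the sensitive bytes before the memory is handed back
     * to the allocator, then free and NULL the pointers */
    VIR_DISPOSE_STRING(password);
    VIR_DISPOSE_N(key, keylen);
    return ret;
}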
When -cpu host is supported by a QEMU binary, a user can use
<cpu mode='host-passthrough'/> in domain XML even when libvirtd failed
to find a matching model for the host CPU. Let's make it obvious by
advertising <cpuselection/> guest capability whenever -cpu host is
supported.
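For illustration, the relevant guest capabilities fragment would then look
roughly like this (abbreviated sketch):

<guest>
  <os_type>hvm</os_type>
  <arch name='x86_64'>
    ...
  </arch>
  <features>
    ...
    <cpuselection/>
  </features>
</guest>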
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When probing the <emulator> with '-help' to determine if
it is the old qemu, errors are reported if the emulator
doesn't exist:
libvirt: error : internal error: Child process
(/usr/lib/xen/bin/qemu-dm -help) unexpected exit status 127:
libvirt: error : cannot execute binary /usr/lib/xen/bin/qemu-dm:
No such file or directory
Avoid the probe if the specified emulator doesn't exist,
squelching the error. There is no behavior change since
libxlDomainGetEmulatorType() would return
LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN if the probe failed
via virCommandRun().
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Move some parts of virStorageFileRemoveLastPathComponent
into a separate function so they can be reused.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Further follow-up discussions on the list about commit 192a53e concluded
that we should be leaving out the USB controller only for
i440fx machines, as on q35 the default USB controller can be placed
by someone at random slots.
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.vnet.ibm.com>
Move filling out the default video (v)ram to DeviceDefPostParse.
This means it can be removed from virDomainVideoDefParseXML
and qemuParseCommandLine. Also, we no longer need to special case
VIR_DOMAIN_VIRT_XEN, since the per-driver callback gets called
before the generic one.
Commit 6879be48 moved adding of an implicit video device after XML
parsing. As a result, libxlDomainDeviceDefPostParse() is no longer
called to set the default vram when adding an implicit device.
Commit 6879be48 assumes virDomainVideoDefaultRAM() will set the
default vram, but it returns 0 if the domain virtType is
VIR_DOMAIN_VIRT_XEN. Attempting to start an HVM domain with vram=0
results in
error: unsupported configuration: videoram must be at least 4MB for CIRRUS
The default vram setting for Xen HVM domains depends on the device
model used (qemu-xen vs qemu-traditional), hence setting the
default is deferred to libxlDomainDeviceDefPostParse().
Call the device post-parse callback even for implicit video,
to fill out the default vram even for VIR_DOMAIN_VIRT_XEN.
https://bugzilla.redhat.com/show_bug.cgi?id=1334557
Most-of-commit-message-by: Jim Fehlig <jfehlig@suse.com>
Both virGetLastError and virGetLastErrorMessage call the virLastErrorObject
method, which returns a thread-local error object. However, if a direct call
to malloc or pthread_setspecific fails (the latter probably also due to
malloc, since it sets ENOMEM), virLastErrorObject returns NULL which, although
incorrectly interpreted by virGetLastError as no error, still requires the
caller to check for a NULL pointer. This isn't the case with
virGetLastErrorMessage, which also treated it incorrectly as no error, but
returned the literal "no error".
This patch tweaks the checks in the virGetLastErrorMessage function so that
if virLastErrorObject fails, "unknown error" is returned instead of the
misleading "no error".
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Commit id 'df1011ca8' modified virStorageBackendDiskDeleteVol to use
"dmsetup remove --force" to remove the volume, but left things in an
inconsistent state since the partition still existed on the disk and
only the device mapper device (/dev/dm-#) was removed.
Prior to commit '1895b421' (or '1ffd82bb' and '471e1c4e'), this could
go unnoticed since virStorageBackendDiskRefreshPool wasn't called.
However, the pool would be unusable since the /dev/dm-# device would
be removed even though the partition was not removed unless a multipathd
restart reset the link. That would of course make the volume appear again
in the pool after a refresh or pool start after libvirt reload.
This patch removes the 'dmsetup' logic and re-implements the partition
deletion logic for device mapper devices. The removal of the partition
via 'parted rm --script #' will cause udev device change logic to allow
multipathd to handle removing the dm-* device associated with the partition.
https://bugzilla.redhat.com/show_bug.cgi?id=1265694
Commit id '020135dc' didn't quite get the algorithm correct when a
device mapper source ended with a non-numeric value (e.g. ends with
a letter).
This patch modifies the 'part_separator' logic to add the "p" separator
to the attempted target path name only when specified as part_separator='yes'.
For a source name that already ends with a number, the logic doesn't change
as the part separator would need to be there.
For a source name that ends with something other than a number, this allows
the possibility that a "p" separator can be added. The default for one of
these source devices is to not add the separator.
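For reference, the attribute lives on the pool source's device element,
roughly like this (sketch with a hypothetical device path):

<pool type='disk'>
  ...
  <source>
    <device path='/dev/mapper/mpatha' part_separator='yes'/>
    <format type='gpt'/>
  </source>
  ...
</pool>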
The key for device mapper and the need for a partition separator "p" is
the presence of a number in the last character of the device name link
in /dev/mapper. A name such as "/dev/mapper/mpatha1" would generate
a "/dev/mapper/mpatha1p1" partition, while "/dev/mapper/mpatha" would
generate partition "/dev/mapper/mpatha1". Similarly for a device
mapper entry not using friendly names or an alias, a device such as
"/dev/mapper/3600a0b80005b10ca00005ad656fd8d93" would generate a
paritition "/dev/mapper/3600a0b80005b10ca00005ad656fd8d93p1", while
a device such as "/dev/mapper/3600a0b80005b10ca00005e115729093f" would
generate a partition "/dev/mapper/3600a0b80005b10ca00005e115729093f1".
The long number is the WWID of the device. It's also possible to assign
an alias for a device mapper entry, that alias follows the same rules
with respect to ending with a number or not when adding a "p" to create
the target device path.
Prior to calling the 'refreshPool' during CreatePool or UploadPool
operations, we need to clear the pool; otherwise, the pool will
have duplicated entries.
https://bugzilla.redhat.com/show_bug.cgi?id=1318993
Commit id 'dd519a294' caused a regression cloning a volume into a
logical pool by removing just the 'allocation' adjustment during
storageVolCreateXMLFrom. Combined with the change to not require the
new volume input XML to have a capacity listed (commit id 'e3f1d2a8')
left the possibility that a zero allocation value (e.g., not provided)
would create a thin/sparse logical volume. When a thin lv becomes fully
populated, then LVM sets the partition 'inactive' and the subsequent
fdatasync() fails.
Add a new 'has_allocation' flag to be set at XML parse time to indicate
that allocation was provided. This is done so that if it's not provided
the create-from code uses the capacity value since we document that if
omitted, the volume will be fully allocated at time of creation.
For a logical backend, that creation time is 'createVol', while for a
file backend, creation doesn't set the size, but the 'createRaw' called
during buildVolFrom will decide whether the file is sparse or not based
on the provided capacity and allocation value.
For volume clones that provide different allocation and capacity values
to allow for sparse files, there is no change.
Usage of this keyword in front of a function declaration that is exported via
a header file is unnecessary, since internally this has been the default for
most compilers for quite some time.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
libvirt may automatically add a pci-root or pcie-root controller to a
domain, depending on the arch/machinetype, and it hopefully always
makes the right decision about which to add (since in all cases these
controllers are an implicit part of the virtual machine).
But it's always possible that someone will create a config that
explicitly supplies the wrong type of PCI controller for the selected
machinetype. In the past that would lead to an error later when
libvirt was trying to assign addresses to other devices, for example:
XML error: PCI bus is not compatible with the device at
0000:00:02.0. Device requires a PCI Express slot, which is not
provided by bus 0000:00
(that's the error message that appears if you replace the pcie-root
controller in a Q35 domain with a pci-root controller).
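For reference, a config that triggers it looks roughly like this (sketch;
the pcie-root controller of a q35 machine replaced by hand):

<os>
  <type arch='x86_64' machine='q35'>hvm</type>
</os>
...
<devices>
  <controller type='pci' index='0' model='pci-root'/>
  ...
</devices>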
This patch adds a check at the same place that the implicit
controllers are added (to ensure that the same logic is used to check
which type of pci root is correct). If a pci controller with index='0'
is already present, we verify that it is of the model that we would
have otherwise added automatically; if not, an error is logged:
The PCI controller with index='0' must be model='pcie-root' for
this machine type, but model='pci-root' was found instead.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1004602
Similar to "support Xen migration stream V2 in save/restore",
add support for indicating the migration stream version in
the migration code. To accomplish this, add a minimal migration
cookie in the libxl driver that is passed between source and
destination hosts. Initially, the cookie is only used in
the Begin and Prepare phases of migration to communicate the
version of the migration stream produced by the source.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Xen 4.6 introduced a new migration stream commonly referred to as
"migration V2". Xen 4.6 and newer always produce this new stream,
whereas Xen 4.5 and older always produce the legacy stream.
Support for migration stream V2 can be detected at build time with
LIBXL_HAVE_SRM_V2 from libxl.h. The legacy and V2 streams are not
compatible, but a V2 host can accept and convert a legacy stream.
Commit e7440656 changed the libxl driver to use the lowest libxl
API version possible (version 0x040200) to ensure the driver
builds against older Xen releases. The old 4.2 restore API does
not support specifying a stream version and assumes a legacy
stream, even if the incoming stream is migration V2. Thinking it
has been given a legacy stream, libxl will fail to convert an
incoming stream that is already V2, which causes the entire
restore operation to fail. Xen's libvirt-related OSSTest has been
failing since commit e7440656 landed in libvirt.git master. One
of the more recent failures can be seen here:
http://lists.xenproject.org/archives/html/xen-devel/2016-05/msg00071.html
This patch changes the call to libxl_domain_create_restore() to
include the stream version if LIBXL_HAVE_SRM_V2 is defined. The
version field of the libxlSavefileHeader struct is also updated
to '2' when LIBXL_HAVE_SRM_V2 is defined, ensuring the stream
version in the header matches the actual stream version produced
by Xen. Along with bumping the libxl API requirement to 0x040400,
this patch fixes save/restore on a migration V2 Xen host.
Oddly, migration has never used the libxlSavefileHeader. It
handles passing configuration in the Begin and Prepare phases,
and then calls libxl directly to transfer domain state/memory
in the Perform phase. A subsequent patch will add stream
version handling in the Begin and Prepare phase handshaking,
which will fix the migration related OSSTest failures.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
In LIBXL_API_VERSION 0x040400, the libxl_domain_create_restore API
gained a parameter for specifying restore parameters. Switch to
using version 0x040400, which will be useful in a subsequent commit
to specify the Xen migration stream version when restoring.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Remove the possibility that a NULL hostdev->privateData or a
disk->privateData could crash libvirtd by checking for NULL
before dereferencing for the secinfo structure in the
qemuDomainSecret{Disk|Hostdev}Destroy functions. The hostdevPriv
could be NULL if qemuProcessNetworkPrepareDevices adds a new
hostdev during virDomainNetGetActualHostdev that then gets
inserted via virDomainHostdevInsert. The hostdevPriv was added
by commit id '27726d8' and is currently only used by scsi hostdev.
SRIOV VFs used in macvtap passthrough mode can take advantage of the
SRIOV card's transparent vlan tagging. All the code was there to set
the vlan tag, and it has been used for SRIOV VFs used for hostdev
interfaces for several years, but for some reason, the vlan tag for
macvtap passthrough devices was stubbed out with a -1.
This patch moves a bit of common validation down to a lower level
(virNetDevReplaceNetConfig()) so it is shared by hostdev and macvtap
modes, and updates the macvtap caller to actually send the vlan config
instead of -1.
Once we're able to list and identify all clients connected to a specific
server, we can then support force-closing a connection. This patch introduces
a simple API calling virNetServerClientClose on a specific client, which
can be later extended easily, e.g. by sending an event once the client is
disconnected successfully.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Unlike the previous commit, we do actually support one client-side only flag,
VIR_CONNECT_NO_ALIASES, so besides removing the check for flags, this flag
has to be masked out before sending a message to the daemon; otherwise it
would trigger an error when checking flags on the daemon side.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Commit 5ed235c6 added an unnecessary redefinition of
virDomainCapsDeviceHostdev in conf/domain_capabilities.h. This breaks
the build with clang 3.4:
In file included from conf/domain_capabilities.c:25:
conf/domain_capabilities.h:88:44: error: redefinition of typedef
'virDomainCapsDeviceHostdev' is a C11 feature
[-Werror,-Wtypedef-redefinition]
typedef struct _virDomainCapsDeviceHostdev virDomainCapsDeviceHostdev;
^
conf/domain_capabilities.h:86:44: note: previous definition is here
typedef struct _virDomainCapsDeviceHostdev virDomainCapsDeviceHostdev;
So drop one of those.
If the call to virXPathNodeSet to set naddresses fails, Coverity notes
that the subsequent VIR_ALLOC_N cannot have a negative value (well it
probably wouldn't be negative per se).
Signed-off-by: John Ferlan <jferlan@redhat.com>
Both instances use VIR_WARN() to print the error from a failed
virDBusGetSystemBus() call. Rather than using virGetLastError
and needing to check for a valid returned error pointer, just use
virGetLastErrorMessage.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Requires adding the plumbing for <device><video>
The value is <enum name='modelType'> to match the associated domain
XML of <video><model type='XXX'/>
Wire it up for qemu too
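The resulting domain capabilities output looks roughly like this (sketch;
the exact value list depends on the qemu binary):

<devices>
  <video supported='yes'>
    <enum name='modelType'>
      <value>vga</value>
      <value>cirrus</value>
      <value>vmvga</value>
      <value>qxl</value>
      <value>virtio</value>
    </enum>
  </video>
  ...
</devices>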
Commit 0b36b0e9 broke polkit agent startup when attempting to fix a
coverity warning. Refactor it properly so that we don't need the 'cmd'
intermediate variable.
qemuDomainCheckDiskPresence has short-circuit code to skip the
determination of the disk backing chain for storage formats that can't
have backing volumes. The code treats VIR_STORAGE_FILE_NONE as not
having backing chain and skips the call to qemuDomainDetermineDiskChain.
This is wrong as qemuDomainDetermineDiskChain is responsible for storage
format detection and has logic to determine the default type if format
detection is disabled.
This allows storage passed via <disk type="volume"> to circumvent the
enforcement of having a correct storage format, or of defaulting to
format='raw', since we don't set the default type via the post-parse
callback for "volume"-backed disks, as the translation code could come up
with a better guess.
This resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1328003
Extract the relevant parts of the existing checker and reuse them for
blockcopy since copying to a non-block device creates an invalid
configuration.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1209802
qemuCheckDiskConfig would now use virDomainDiskSourceIsBlockType just
as a glorified version of virStorageSourceIsBlockLocal that reports
error messages. Replace it with the latter, including the error message,
for clarity.
Commit c820fbff9f added support for iSCSI
disk as backing for <disk device='lun'>. We would not use it for a disk
type="volume" with direct access mode which basically maps to direct
iSCSI usage. Fix it by adding the storage source type accessor that
resolves the volume type.
Commit 36025c552 tried to improve error reporting for <disk type="lun">
but reused the code in LXC which doesn't care about the actual disk
type. The error messages would then contain a bogus hint that the
config for the 'lun' device is invalid, which might not be the case.
Re-do the relevant portion of the commit with the original message.
For disks sources described by a libvirt volume we don't need to do a
complicated check since virStorageTranslateDiskSourcePool already
correctly determines the actual disk type.
Replace the checks using a new accessor that does not open-code the
whole logic.
In 7884d089d2 I've started to refactor qemu_monitor_json.c.
Thing is, its current structure is nothing like the rest of our
code. The @ret variable is rewritten all the time, if()-s are
nested instead of using goto, and so on.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Commit 36785c7e refactored the code for input devices but introduced a
bug where we removed all keyboards from the migratable XML. We have to
remove only implicit keyboards like PS2 or XEN.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Move adding the default config listen type=address (if there is none) to
qemuProcessPrepareDomain and move the check for multiple listens to
qemuProcessStartValidate.
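For reference, the default being filled in corresponds to domain XML along
these lines (sketch):

<graphics type='vnc' port='-1' autoport='yes'>
  <listen type='address' address='127.0.0.1'/>
</graphics>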
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
From c3bd0019c0 on, instead of creating the following path for
cgroups:
/sys/fs/cgroupX/$name.libvirt-$driver
we generate a rather more verbose one:
/sys/fs/cgroupX/$driver-$id-$name.libvirt-$driver
where $name is optional and included iff it contains only allowed chars.
See the original commit for more reasoning. Now, the problem with the
original commit is that we are unable to start any LXC domain
after it, because when starting an LXC container, the CGroup layout
is created by our lxc_controller process and then detected and
validated by libvirtd. The validation is done by trying to match the
detected layout against all the possible patterns for cgroup
paths that we've ever had. And the commit in question forgot to
update this part of the code.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Add the data structure and infrastructure to support initialization
vector (IV) secrets. The IV secret generation will need to have access
to the domain private master key, so let's make sure the prepare disk
and hostdev functions can accept that now.
Anywhere that needs to make a decision over which secret type to use
in order to fill in or use the IV secret has a switch added.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Create helper APIs in order to build the network URI, as shortly we will
be adding a new SecretInfo type.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rather than needing to call qemuDomainSecretDestroy after any call to
qemuProcessLaunch, let's do the destroy in qemuProcessLaunch since
that's where the command line is eventually generated and processed. Once
it's generated, we can clear out the secrets.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Commit id '40d8e2ba3' added the function to qemuProcessStart because
in order to set up some secrets in the future we will need the master
key. However, since the previous patch split the master key creation
into two parts (create just the key and create the file), we can now
call qemuDomainSecretPrepare from qemuProcessPrepareDomain since the
file is not necessary.
Signed-off-by: John Ferlan <jferlan@redhat.com>
A recent review of related changes noted that we should split the creation
(or generation) of the master key into the qemuProcessPrepareDomain and leave
the writing of the master key for qemuProcessPrepareHost.
Made the adjustment and modified some comments to functions that have
changed calling parameters, but didn't change the intro doc.
Signed-off-by: John Ferlan <jferlan@redhat.com>
From a review after push, add the "_TYPE" into the name.
Also use qemuDomainSecretInfoType in the struct rather than int
with the comment field containing the struct name
Signed-off-by: John Ferlan <jferlan@redhat.com>
This removes the opencoded payload freeing in the client, to use
the shared virNetMessageClearPayload call. Two changes:
- ClearPayload sets nfds=0, which fixes a potential crash if
an error path called virNetMessageFree/Clear on the message
after fds was free'd
- We drop the inner loop VIR_FORCE_CLOSE... this may mean fds are
kept open a little bit longer if the call is blocking but in
practice I don't think it will have any effect
I've noticed this while trying to compile libvirt on my arm box.
CC rpc/libvirt_net_rpc_server_la-virnetserverclient.lo
rpc/virnetserverclient.c: In function 'virNetServerClientNewPostExecRestart':
rpc/virnetserverclient.c:516:45: error: cast increases required alignment of target type [-Werror=cast-align]
(long long *) &timestamp) < 0) {
^
cc1: all warnings being treated as errors
Problem is, @timestamp is defined as time_t which is 32 bits long,
and we are typecasting it to long long which is 64 bits long.
The solution is to make @timestamp a long long. At the same
time, we can make @conn_time in the _virNetServerClient struct a long
long too. There is no need for it to be of type time_t.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
In this function, @id is defined as unsigned long long. When
passing this variable (well, its address) to
virJSONValueObjectGetNumberUlong(), it's typecast to ull*. There
is no need for that. It's the same story with @nrequests_max.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
json_reformat uses two spaces when indenting nested objects, let's
do the same. The result of virJSONValueToString will be exactly the same
as json_reformat would produce.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
virQEMUCapsNewForBinary unconditionally loads data from cache and probes
using both QMP and -help parsing, which is suboptimal when we want to
use it in tests.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Bhyve supports ACPI shutdown by sending the SIGTERM signal to the bhyve
process.
Add the bhyveDomainShutdown() function and a virBhyveProcessShutdown()
helper function that just sends SIGTERM to the VM's bhyve process. If
the guest supports ACPI shutdown, the process will be terminated and
this event will be noticed by the bhyve monitor code, which will handle
setting the proper status and cleaning up the VM's resources by calling
virBhyveProcessStop().
Current implementation of domainDestroy for bhyve calls
virProcessKillPainfully() for the bhyve process and then
executes "bhyvectl --destroy".
This is wrong for two reasons:
* bhyvectl --destroy alone is sufficient because it terminates
the process
* virProcessKillPainfully() first sends SIGTERM and after a few
attempts sends SIGKILL. As SIGTERM triggers an ACPI shutdown that
we're not interested in, it creates an unwanted side effect in
domainDestroy.
Also, destroy the monitor only after the "bhyvectl --destroy" command
succeeded, to avoid a case where the command fails and the domain remains
running but is not being monitored anymore.
Since nparams can be technically negative, it is a good practice throughout
our code to check if nparams actually has a non-negative value. The same effect
would be achieved by converting our internal typed params serializer argument
to 'unsigned' type, but it definitely would not be the path of least resistance.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1286709
Now that we have all the pieces in place, we can add the 'iothread=#' to
the command line for the (two) controllers that support it (virtio-scsi-pci
and virtio-scsi-ccw). Add the tests as well...
Rather than an if statement, use a switch.
The switch will also catch the illegal usage of 'iothread' with some other
kind of unsupported bus configuration.
Add the ability to add an 'iothread' to the controller which will be how
virtio-scsi-pci and virtio-scsi-ccw iothreads have been implemented in qemu.
Describe the new functionality and add tests to parse/validate that the
new attribute can be added.
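A sketch of the new XML, assuming the attribute ends up on the controller's
<driver> subelement as in released libvirt (the referenced iothread must be
defined via <iothreads> in the domain):

<iothreads>2</iothreads>
...
<controller type='scsi' index='0' model='virtio-scsi'>
  <driver iothread='2'/>
</controller>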
An iothread for virtio-scsi is a property of the controller. Add a lookup
of the 'virtio-scsi-pci' and 'virtio-scsi-ccw' device properties and parse
the output. For both, support for the iothread was added in qemu 2.4
while support for virtio-scsi in general was added in qemu 1.4.
Modify the various mock capabilities replies (by hand) to reflect
when virtio-scsi was supported and then specifically when the iothread
property was added. For versions prior to 1.4, use the no-device error
return for virtio-scsi. For versions 1.4 to before 2.4, add some data
for virtio-scsi-pci; even though it isn't complete, we're not looking for
anything specific there anyway. For 2.4 to 2.6, add a more complete reply.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Expose a public API to retrieve some identity and connection information about
a client connected to the specified server on the daemon. The identity info
retrieved is mostly connection-transport dependent, i.e. there won't be any
socket address returned for a local (UNIX socket) connection, while on the
other hand, when connected through TLS or unencrypted TCP, obviously no UNIX
process identification will be present in the returned data. All supported
values that can be returned in typed params are exposed and documented in
include/libvirt/libvirt-admin.h
Signed-off-by: Erik Skultety <eskultet@redhat.com>
This method just aggregates various client object attributes, like socket
address, connection type (RO/RW), and some TCP/TLS/UNIX identity in an atomic
manner.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
We do have a similar method, serving the same purpose, for TLS, but we lack
one for SASL. So introduce one, in order for other modules to be able to find
out if a SASL session is active, or better said, whether a SASL session exists
at all.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Our socket address string is in a rather non-standard format, and that is
because the sasl library requires the IP address and service to be delimited
by a semicolon. The string form is a completely internal matter, however once
the admin interfaces to retrieve client identity information are merged, we
should return the socket address string in a common format, e.g. the format
defined by URI RFC 3986, i.e. the IP address and service delimited by a colon
and, in case of an IPv6 address, square brackets added:
Examples:
127.0.0.1:1234
[::1]:1234
This patch changes our default format to the one described above, while adding
separate methods to request the non-standard SASL format using a semicolon as
the delimiter.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Just like with server-related APIs, before any of client-based APIs can be
called, a reference to a client-side client object needs to be obtained. For
this purpose, a lookup method should exist. Apart from the client retrieval
logic, a new error code for non-existent client had to be added as well.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
In the majority of our functions we have this variable @ret that is
overwritten a lot. In other areas of the code we use 'goto
cleanup;' just so that this doesn't happen. But not here.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This adds a ports= attribute to usb controller XML, like
<controller type='usb' model='nec-xhci' ports='8'/>
This maps to:
qemu -device nec-usb-xhci,p2=8,p3=8
Meaning, 8 ports that support both usb2 and usb3 devices. Gerd
suggested to just expose them as one knob.
https://bugzilla.redhat.com/show_bug.cgi?id=1271408
In the functions I'm fixing here, we call
qemuMonitorJSONCheckError() followed by another check whether the qemu
reply contains a 'return' object. If it didn't, the former
CheckError() function would error out and the flow would not even
get to the latter check.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Usually, the flow in this area of the code is as follows:
qemuMonitorJSONMakeCommand()
qemuMonitorJSONCommand()
qemuMonitorJSONCheckError()
parseReply()
But in this function, for some reason, the last two steps were
swapped. This makes no sense.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
In qemuDomainDefAddDefaultDevices we check for a non-NULL
def->os.machine for x86 archs, but not the others.
Moreover, the only caller - qemuDomainDefPostParse
already checks for it and even then it can happen only
if /etc/libvirt contains an XML without a machine type.
We do not need to propagate the exact return values
and the only possible ones are 0 and -1 anyway.
Remove the temporary variable and use the usual pattern:
if (f() < 0)
    return -1;
https://bugzilla.redhat.com/show_bug.cgi?id=1139766
Thing is, for some reason you can have your domain's RTC set to
something different than UTC. More weirdly, it's not only a time
zone that you can shift it by, but an arbitrary value. So, if a
domain is configured that way, libvirt will correctly put it onto
the qemu cmd line and moreover track this offset as it changes during
the domain's lifetime (e.g. because the guest OS decides the best thing
to do is set a new time in the RTC). Anyway, the way in which this
tracking is implemented is events. But we've got a problem if a
change in the guest's RTC occurs while the daemon is not running. The
event is lost and we end up reporting an invalid value in the domain
XML. Therefore, when the daemon is starting up again and it is
reconnecting to all running domains, re-fetch their RTC so the
correct offset value can be computed.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Although we document 6 types of transport that we support, internally we can
only differentiate between TCP, TLS, and UNIX transports, since both the SSH
and libssh2 transports, due to using netcat, behave in exactly the same
way as a UNIX socket.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
For now, the list copy is done simply by locking the whole server, walking the
original and increasing the refcount on each object. We may want to change
the list to a lockable object (like list of domains) later in the future if
we discover some performance issues related to locking the whole server in
order to walk the whole list of clients, possibly issuing some 'ForEach'
callback.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Now that libvirt-admin supports another client-side object, and provided that
we want to generate as many of both the client-side and server-side RPC
dispatchers as possible, support for this needs to be added to gendispatch.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Besides the ID, the object also stores static data like the connection
transport and connection timestamp, since once a list of all clients connected
to a server has been obtained, from the user's perspective it would be nice to
know whether a given client is remote or local only and when it connected to
the daemon.
Along with the object introduction, all client-side methods necessary
to work with the object are added as well.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Besides the ID, libvirt should provide several parameters to help the user
distinguish two clients from each other. One of them is the connection
timestamp. This patch also adds a testcase for proper JSON formatting of the
new attribute (proper formatting of older clients that did not support
this attribute yet is included in the existing tests) - in order for
testGenerateJSON to work, a mock of time_t time(time_t *timer) needed to be
created.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
The admin API needs a way of addressing specific clients. Unlike servers, which
we are happy to address by names, both because a name reflects a server's
purpose (to some extent) and because we only have two of them (so far), naming
clients doesn't make any sense, since a) each client is anonymous, i.e. not
recognized after a disconnect followed by a reconnect, b) we can't predict what
kind of requests it's going to send to the daemon, and c) there are loads of
them coming and going, so the only viable option is to use an ID which is of a
reasonably wide data type.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
If a panic device is defined without a model in a domain,
the default value is always overwritten with the ISA model. An ISA
bus does not exist on S390 and therefore specifying a panic device
results in an unsupported configuration.
Since the S390 architecture inherently provides a crash detection
capability, it should be possible to define the panic device in the
domain xml.
This patch adds an s390 panic device model and prevents setting a
device address on it.
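The resulting domain XML is simply (sketch; no <address> sub-element is
allowed for this model):

<devices>
  <panic model='s390'/>
</devices>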
Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
The iohelper dies on SIGPIPE if the stream is closed before all data
is processed. IMO this should be an error condition for virStreamFinish
according to docs like:
* This method is a synchronization point for all asynchronous
* errors, so if this returns a success code the application can
* be sure that all data has been successfully processed.
However for virStreamAbort, not so much:
* Request that the in progress data transfer be cancelled
* abnormally before the end of the stream has been reached.
* For output streams this can be used to inform the driver
* that the stream is being terminated early. For input
* streams this can be used to inform the driver that it
* should stop sending data.
Without this, virStreamAbort will realistically always error for
active streams like domain console. So, treat the SIGPIPE case
as non-fatal if abort is requested.
Note, this will only affect an explicit user requested abort. An
abnormal abort, like from a server error, always raises an error
in the daemon.
libvirt-daemon-config-nwfilter will put a bunch of xml configs
into /etc/libvirt/nwfilter. These configs don't hardcode a UUID
and depend on libvirt to generate one. However the generated UUID
is never saved to disk, unless the user manually calls Define.
This makes daemon reload quite noisy with many errors like:
error : virNWFilterObjAssignDef:3101 : operation failed: filter 'allow-incoming-ipv4' already exists with uuid 50def3b5-48d6-46a3-b005-cc22df4e5c5c
A new UUID is generated every time the config is read from
disk, so libvirt constantly thinks it's finding a new nwfilter.
Detect if we generated a UUID when the config file is loaded; if so,
resave the new contents to disk to ensure the UUID is persistent.
This is similar to what was done in commit a47ae7c0 with virtual
networks and generated MAC addresses
In virNWFilterObjLoad we can still fail after virNWFilterObjAssignDef,
but we don't unlock and free the created virNWFilterObjPtr in the
cleanup path.
The bit we are trying to do after AssignDef is just STRDUP in the
configFile path. However caching the configFile in the NWFilterObj
is largely redundant and doesn't follow the same pattern we use
for domain and network objects.
So just remove all the configFile caching which fixes the latent
bug as a side effect.
We will segfault if a daemon reload picks up a new network config
that needs to be autostarted. We shouldn't be passing NULL for
network_driver here. This seems like it was missed in the larger
rework in commit 1009a61e
The default USB controller is not sent to the destination as older versions
of libvirt (0.9.4 or earlier, as I see in the commit log of 409b5f54) didn't
support it. Archs where the support started much later can
safely send the USB controller without this worry. So, send the controller
to the destination for all archs except x86. Moreover, this is not very
applicable to x86 as the USB controller has model ich9_ehci1 on q35, and on
pc-i440fx there can't be any slots before USB as it is fixed on slot 1.
The patch fixes a bug where, if the USB controller happens to occupy
a slot after disks/interfaces and one of them is hot-unplugged, the
default USB controller added on the destination takes the smallest slot
number, which leads to a savestate mismatch and migration
failure. Seen and verified on PPC64.
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.vnet.ibm.com>
We historically format runtime seclabel selinux/apparmor values,
however we skip formatting runtime DAC values. This was added in
commit 990e46c454
Author: Marcelo Cerri <mhcerri@linux.vnet.ibm.com>
Date: Fri Aug 31 13:40:41 2012 +0200
conf: Avoid formatting auto-generated DAC labels
to maintain migration compatibility with libvirt < 0.10.0.
However the formatting was skipped unconditionally. Instead only
skip formatting in the VIR_DOMAIN_DEF_FORMAT_MIGRATABLE case.
https://bugzilla.redhat.com/show_bug.cgi?id=1215833
Trying to define a pool name containing an embedded '/'
will immediately fail when trying to write the XML to disk.
This patch explicitly rejects names containing a '/'
Besides our stateful driver, there are two other storage impls:
esx and phyp. esx doesn't support pool creation, so this shouldn't
apply.
phyp does support pool creation, and the name is passed to the
'mksp' tool; google doesn't reveal whether it accepts '/'
or not. IMO the likelihood of this impacting any users is near zero
Trying to define a network name containing an embedded '/'
will immediately fail when trying to write the XML to disk.
This patch explicitly rejects names containing a '/'
Besides the network bridge driver, the only other network
implementation is a very thin one for virtualbox, which seems to
use the network name as a host interface name, which won't
accept '/' anyways, so I think this is fine to do unconitionally.
https://bugzilla.redhat.com/show_bug.cgi?id=787604
Trying to define a domain name containing an embedded '/'
will immediately fail when trying to write the XML to disk for
our stateful drivers. This patch explicitly rejects names
containing a '/', and provides an xmlopt feature for drivers
to avoid this validation check, which is enabled in every
non-stateful driver that already has xmlopt handling wired up.
(Technically this could reject a previously accepted vmname like
'/foo', however at least for the qemu driver that falls over
later when starting qemu)
https://bugzilla.redhat.com/show_bug.cgi?id=639923
We were lacking tests that check the completeness of our
nodedev XMLs and also whether we output properly formatted ones. This
patch adds parsing for the capability elements inside the <capability
type='pci'> element. Also a bunch of tests are added to show everything
works properly.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
We had both and the only difference was that the latter also included
information about multifunction setting. The problem with that was that
we couldn't use functions made for only one of the structs (e.g.
parsing). To consolidate those two structs, use the one in virpci.h,
include that in domain_conf.h and add the multifunction member in it.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Rather than take username and password as parameters, now take
a qemuDomainSecretInfoPtr and decode within the function.
NB: Having secinfo implies having the username for a plain type
from a successful virSecretGetSecretString
Signed-off-by: John Ferlan <jferlan@redhat.com>
Similar to qemuDomainSecretDiskPrepare, generate the secret
for the hostdevs prior to calling qemuProcessLaunch, which calls
qemuBuildCommandLine. Additionally, since the secret is no longer
added as part of building the command, the hotplug code will need
to make the call to add the secret in the hostdevPriv.
Since this then is the last requirement to pass a virConnectPtr
to qemuBuildCommandLine, we now can remove that as part of these
changes. That removal has cascading effects through various callers.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Modeled after the qemuDomainDiskPrivatePtr logic, create a privateData
pointer in the _virDomainHostdevDef to allow storage of private data
for a hypervisor in order to at least temporarily store auth/secrets
data for usage during qemuBuildCommandLine.
NB: Since the qemu_parse_command (qemuParseCommandLine) code is not
expecting to restore the auth/secret data, there's no need to add
code to handle this new structure there.
Updated copyrights for modules touched. Some didn't have updates in a
couple years even though changes have been made.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rather than needing to pass the conn parameter to various command
line building APIs, add qemuDomainSecretPrepare just prior to
qemuProcessLaunch, which calls qemuBuildCommandLine. The function
must be called after qemuProcessPrepareHost since it's expected
to eventually need the domain masterKey generated during the prepare
host call. Additionally, future patches may require device aliases
(assigned during the prepare domain call) in order to associate
the secret objects.
The qemuDomainSecretDestroy is called after the qemuProcessLaunch
finishes in order to clear and free memory used by the secrets
that were recently prepared, so they are not kept around in memory
too long.
Placing the setup here is beneficial for future patches which will
need the domain masterKey in order to generate an encrypted secret
along with an initialization vector to be saved and passed (since
the masterKey shouldn't be passed around).
Finally, since the secret is not added during command line build,
the hotplug code will need to get the secret into the private disk data.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Introduce a new private structure to hold qemu domain auth/secret data.
This will be stored in the qemuDomainDiskPrivate as a means to store the
auth and fetched secret data rather than generating during building of
the command line.
The initial changes will handle the current username and secret values
for rbd and iscsi disks (in their various forms). The rbd secret is
stored as a base64 encoded value, while the iscsi secret is stored as
a plain text value. Future changes will store encoded/encrypted secret
data as well as an initialization vector needed to be given to qemu
in order to decrypt the encoded password along with the domain masterKey.
The initial assumption will be that VIR_DOMAIN_SECRET_INFO_PLAIN is
being used.
Although it's expected that the cleanup of the secret data will be
done immediately after command line generation, reintroduce the object
dispose function qemuDomainDiskPrivateDispose to handle removing
memory associated with the structure for "normal" cleanup paths.
Signed-off-by: John Ferlan <jferlan@redhat.com>
After killing one of the conditionals it's now guaranteed to have
@drivealias populated when calling the monitor, so the code attempting
to cleanup can be simplified.
For strange reasons, if a perf event type was not supported or failed to
be enabled at VM start, libvirt would ignore the failure.
On the other hand, on restart, if the event could not be re-enabled,
libvirt would fail to reconnect to the VM and kill it.
Neither really makes sense. Fix it by failing to start the VM if the
event is not supported and by changing the event to disabled if it can't be
reconnected (unlikely).
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1329045
Both disk->src->shared and disk->src->readonly can't be modified when
changing the disk source for floppy and cdrom drives, since both are
passed as arguments of the disk rather than of the image in qemu.
Historically these fields have only two possible values since they are
represented in the XML; thus we need to ignore the case where the user did
not provide them and treat them as false.
If qemu doesn't support DEVICE_TRAY_MOVED event the code that attempts
to change media would attempt to re-eject the tray even if it wouldn't
be notified when the tray opened. Add a capability bit and skip retrying
for old qemus.
Empty floppy drives start with tray in "open" state and libvirt did not
refresh it after startup. The code that inserts media into the tray then
waited until the tray was open before inserting the media and thus
floppies could not be inserted.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1326660
These are wrappers over virStreamRecv and virStreamSend so that
users have to care about nothing but writing data into / reading
data from a sink (typically a file). Note that these wrappers
are used exclusively on the client side as the daemon has a slightly
different approach. Anyway, the wrappers allocate a buffer and
use it for intermediate data storage until the data is passed to the
stream to send, or to the client application. So far, we are
using a 64KB buffer. This is enough, but suboptimal because the server
can send messages up to VIR_NET_MESSAGE_LEGACY_PAYLOAD_MAX bytes
big (262120 B, roughly 256KB). So if we make the buffer this big,
a single message containing the data is sent instead of four,
which is the current situation. This means lower overhead, because
each message contains a header which needs to be processed, each
message takes roughly the same amount of time to process regardless of
its size, fewer bytes need to be sent over the wire, and so on.
Note that since the server will never send us a stream message bigger
than VIR_NET_MESSAGE_LEGACY_PAYLOAD_MAX there's no point in
sizing up the client buffer past this threshold.
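For context, a minimal sketch of the client-side receive loop such a wrapper
implements (simplified error handling; names are hypothetical, and the buffer
size mirrors the payload limit discussed above):

#include <stdlib.h>
#include <unistd.h>
#include <libvirt/libvirt.h>

#define RECV_BUF_LEN (256 * 1024)  /* roughly the maximum stream payload */

static int
recvStreamToFd(virStreamPtr st, int fd)
{
    char *buf = malloc(RECV_BUF_LEN);
    int got;

    if (!buf)
        return -1;

    /* pull data off the stream in payload-sized chunks, write it to 'fd' */
    while ((got = virStreamRecv(st, buf, RECV_BUF_LEN)) > 0) {
        char *p = buf;
        while (got > 0) {
            ssize_t written = write(fd, p, got);
            if (written < 0) {
                free(buf);
                return -1;
            }
            p += written;
            got -= written;
        }
    }

    free(buf);
    return got == 0 ? 0 : -1;  /* 0 means end of stream, < 0 is an error */
}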
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
There are two functions on the client that handle incoming stream
data. The first one virNetClientStreamQueuePacket() is a low
level function that just processes the incoming stream data from
the socket and stores it into an internal structure. This happens
in the client event loop therefore the shorter the callbacks are,
the better. The second function virNetClientStreamRecvPacket()
then handles copying data from internal structure into a client
provided buffer.
The change introduced in this commit does just that: a new queue for
incoming stream packets is introduced. Then, instead of copying
data into an intermediate internal buffer and then copying it into
the user buffer, incoming stream messages are queued and the
data is copied just once - in the upper layer function
virNetClientStreamRecvPacket(). In the end, there's just one
copying of data and therefore a shorter event loop callback. This
should boost performance, which has proven to be the case in
my testing.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This reverts commit d9c9e138f2.
Unfortunately, things are going to be handled differently so this
commit must go.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This reverts commit 6e244c659f, which
added support to qemu for the "peer" attribute in domain interface <ip>
elements.
It's being removed temporarily for the release of libvirt 1.3.4
because the feature doesn't work, and there are concerns that it may
need to be modified in an externally visible manner which could create
backward compatibility problems.
Conflicts:
tests/qemuxml2argvmock.c - a mock of virNetDevSetOnline() was added
which may be assumed by other tests added since the original commit,
so it isn't being reverted.
This reverts commit afee47d07c, which
added support to lxc for the "peer" attribute in domain interface <ip>
elements.
It's being removed temporarily for the release of libvirt 1.3.4
because the feature doesn't work, and there are concerns that it may
need to be modified in an externally visible manner which could create
backward compatibility problems.
This reverts commit 690969af9c, which
added the domain config parts to support a "peer" attribute in domain
interface <ip> elements.
It's being removed temporarily for the release of libvirt 1.3.4
because the feature doesn't work, and there are concerns that it may
need to be modified in an externally visible manner which could create
backward compatibility problems.
FD passing APIs like CreateXMLWithFiles or OpenGraphicsFD will leak
file descriptors. The user passes in an fd, which is dup()'d in
virNetClientProgramCall. The new fd is what is transfered to the
server virNetClientIOWriteMessage.
Once all the fds have been written though, the parent msg->fds list
is immediately free'd, so the individual fds are never closed.
This closes each FD as it's sent to the server, so all fds have been
closed by the time msg->fds is free'd.
https://bugzilla.redhat.com/show_bug.cgi?id=1159766
If we want to delete all disks for a container or vm
we should make a loop from 0 to NumberOfDisks and always
use index zero in PrlVmCfg_GetHardDisk to get the disk handle.
After we delete the first disk, the numbers of the other disks
change, now running from 0 to NumberOfDisks-1.
That's why we should always use index zero.
Similarly to what commit 7140807917 did with some internal paths,
clear vnc socket paths that were generated by us. Having such path in
the definition can cause trouble when restoring the domain. The path is
generated to the per-domain directory that contains the domain ID.
However, that ID will be different upon restoration, so qemu won't be
able to create that socket because the directory will not be prepared.
To be able to migrate to older libvirt, skip formatting the socket path
in migratable XML if it was autogenerated. And mark it as autogenerated
if it already exists and we're parsing live XML.
Best viewed with '-C'.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1326270
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
When the domain definition describes a machine with NUMA, setting the
maximum vCPU count via the API might lead to an invalid config.
Add a check that will forbid this until we add more advanced cpu config
capabilities.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1327499
Instead of setting the default qemu stdio logging approach in
virQEMUDriverConfigLoadFile set it in virQEMUDriverConfigNew so that
it's properly set even when the config is not present.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1325075
If the domain name is long enough, the timestamp can push the
filename for the automatic coredump past the filesystem's limit.
Simply shorten it like we do in other places. The timestamp helps with
the unification, but having the ID in the name won't hurt.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1289363
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Add virDomainObjGetShortName() and use it. For now that's used in one
place, but we should expose it so that future patches can use it.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Currently we only allow /dev/random and /dev/hwrng as host input
for <rng><backend model='random'/> device. This was added after
various upstream discussions in commit 4932ef45
However this restriction has generated quite a few complaints over
the years, so a new discussion was initiated:
http://www.redhat.com/archives/libvir-list/2016-April/msg00987.html
Several people suggested removing the restriction, and nobody really
spoke up to defend it. So this patch drops the path restriction
entirely
https://bugzilla.redhat.com/show_bug.cgi?id=1074464
If you compile a client --without-polkit, and connect to a URI that needs
polkit auth, the connection will fail with:
$ ./tools/virsh --connect qemu+ssh://crobinso@machine/system
error: failed to connect to the hypervisor
error: authentication failed: unsupported authentication type 2
This is because the client side portion of the polkit handling is
compiled out. However, nothing polkit specific is actually required
of the client.
Fix that error by unconditionally compiling the basic polkit client
handling.
https://bugzilla.redhat.com/show_bug.cgi?id=635529
Introduce the final accessors to _virSecretObject data and move the
structure from virsecretobj.h to virsecretobj.c
The virSecretObjSetValue logic will handle setting both the secret
value and the value_size. Some slight adjustments to the error path
over what was in secretSetValue were made.
Additionally, a slight logic change in secretGetValue where we'll
check for the internalFlags and error out before checking for
and erroring out for a NULL secret->value. That way, it won't be
obvious to anyone that the secret value wasn't set; rather, they'll
just know they cannot get the secret value since it's private.
Move and rename the secretRewriteFile, secretSaveDef, and secretSaveValue
from secret_driver to virsecretobj
Need to make some slight adjustments since the secretSave* functions
called secretEnsureDirectory, but otherwise mostly just a move of code.
Move and rename secretDeleteSaved from secret_driver into virsecretobj and
split it up into two parts since there is error path code that looks to
just delete the secret data file
Move to secret_conf.c and rename to virSecretLoadAllConfigs. Also includes
moving/renaming the supporting virSecretLoad, virSecretLoadValue, and
virSecretLoadValidateUUID.
This patch replaces most of the guts of secret_driver.c with recently
added secret_conf.c APIs in order to manage secret lists and objects
using the hashed virSecretObjList* lookup API's.
Add function to return a "match" filtered list of secret objects. This
function replaces the guts of secretConnectListAllSecrets.
Need to also move and make global virSecretUsageIDForDef since it'll
be used by both secret_driver.c and secret_conf.c
Add the functions to add/remove elements from the hashed secret obj list.
These will replace secret_driver functions secretAssignDef and secretObjRemove.
The virSecretObjListAddLocked will perform the necessary lookups and
decide whether to replace an existing hash entry or create a new one.
This includes setting up the configPath and base64Path as well as being
able to support the caller's need to restore from a previous definition
in case something goes wrong in the caller.
New APIs, including unlocked and locked versions, in order to be able
to use them in either manner.
Support for searching hash object lists instead of linked lists will
replace existing secret_driver functions secretFindByUUID and
secretFindByUsage
Move virSecretObj from secret_driver.c to virsecretobj.h
To support being able to create a hashed secrets list, move the
virSecretObj to virsecretobj.h so that the code can at least find
the definition.
This should be a temporary situation while the virsecretobj.c code
is patched in order to support a hashed secret object while still
having the linked list support in secret_driver.c. Eventually, the
goal is to move the virSecretObj into virsecretobj.c, although it
is notable that the existing model from which virSecretObj was
derived has virDomainObj in src/conf/domain_conf.h and virNetworkObj
in src/conf/network_conf.h, so virSecretObj wouldn't be unique if
it were to remain in virsecretobj.h. Still, adding accessors to fetch
and store hashed object data will be the end goal.
Add definitions and infrastructure in virsecretobj.c to create and
handle a hashed virSecretObj and virSecretObjList including the class,
object, lock setup, and disposal API's. Nothing will call these yet.
This infrastructure will replace the forward linked list logic
within the secret_driver, eventually.
This function - in contrast with qemuBuildCommandLine - merely
constructs our internal command representation of a domain. This
is then later compared against expected output. Or, this function
is also used in virConnectDomainXMLToNative(). But due to a copy-paste
error this function - just like the function it mimics - has a @forceFips
argument that, if enabled, forces FIPS and otherwise mimics the FIPS
state of the host. If FIPS is enabled or forced, the generated command
line is different from the one generated with FIPS disabled. Problem is,
while this could be desired in the virConnectDomainXMLToNative()
case, it is undesirable in the test suite as it will produce
unpredictable results.
The solution is to rename the argument to @enableFips to
specifically tell whether we expect the command line to be built in
either of the fashions, and to make the virConnectDomainXMLToNative()
implementation fetch the FIPS state and pass it to
qemuProcessCreatePretendCmd().
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This error message was too specific, based on the incorrect assumption
that any error was caused by auto-added bridges:
failed to create PCI bridge on bus 2: too many devices
with fixed addresses
In practice you can't know if a bridge with an index <= the bus it's
connecting to was added automatically, or if it was a mistake in
explicit config, and the auto-add problem is going to be dealt with in
a different way in an upcoming patch. The new message is this:
PCI Controller at index 1 (0x01) has bus='0x02', but bus must be <= index
(note that index is given in both decimal and hex because it is
formatted as decimal in the XML, but bus is formatted as hex, and
displaying the hex value of index makes it easier to see the problem
when index > 9 (which will often be the case with PCIe, since most
controllers only have a single port, not 32 slots as with standard
PCI)).
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1004593
We can't use e.g. @sysconfdir@ directly in the .pod file, because
pod2man(1) will interpret that as a variable name and format it
accordingly.
Instead, we use e.g. SYSCONFDIR and use a subsequent sed(1) call
to turn it into the expected @sysconfdir@.
After this commit, all man pages are generated using the same two
steps:
1. Process a source $command.pod file with pod2man(1) to obtain
a valid man page in $command.$section.in
2. Process $command.$section.in with sed(1) to obtain the final
man page in $command.$section
The values are currently limited to LLONG_MAX which causes some
problems. QEMU conveniently changed their maximum to 1e15 (1 PB) which
is enough for some time and we need to adapt to that so that we don't
throw "Unknown error" messages. Strictly limiting these values actually
fixes some corner case values (off-by-one checks in QEMU probably).
Since values out of the new specified range do not overflow anything,
change the type of error as well.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1317531
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
$ echo -n 'log_level=1' > ~/.config/libvirt/libvirtd.conf
$ libvirtd --timeout=10
2014-10-10 10:30:56.394+0000: 6626: info : libvirt version: 1.1.3.6, package: 1.fc20 (Fedora Project, 2014-09-08-17:50:42, buildvm-05.phx2.fedoraproject.org)
2014-10-10 10:30:56.394+0000: 6626: error : main:1261 : Can't load config file: configuration file syntax error: /home/rjones/.config/libvirt/libvirtd.conf:1: expecting a value: /home/rjones/.config/libvirt/libvirtd.conf
Rather than try to fix this in the depths of the parser, just catch
the case when a config file doesn't end in a newline, and manually
append a newline to the content before parsing
https://bugzilla.redhat.com/show_bug.cgi?id=1151409
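A rough sketch of the fix (variable names are illustrative, not the actual
parser code):
/* Ensure the configuration text ends with a newline before handing it to
 * the parser, so a missing trailing '\n' does not trigger a bogus syntax
 * error on the last line. */
if (len > 0 && content[len - 1] != '\n') {
    if (VIR_REALLOC_N(content, len + 2) < 0)
        goto error;
    content[len] = '\n';
    content[len + 1] = '\0';
}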
According to the dnsmasq manpage, the netmask for IPv4 address ranges
will be auto-determined from the interface dnsmasq is listening on,
but it can't do this for IPv6 for some reason - it instead assumes a
network prefix of 64 for all IPv6 address ranges. If this is
incorrect, dnsmasq will refuse to give out an address to clients,
instead logging this message:
dnsmasq-dhcp[2380]: no address range available for DHCPv6 request via virbr0
The solution is for libvirt to add ",$prefix" to all IPv6 dhcp-range
arguments when building the dnsmasq.conf file.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1033739
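Conceptually the change amounts to appending the prefix when formatting
each IPv6 range; a sketch only, with made-up variable names around the
real virBufferAsprintf() helper:
/* When the range is IPv6, append ",<prefix>" so dnsmasq does not fall
 * back to its assumed /64. */
if (VIR_SOCKET_ADDR_IS_FAMILY(&range->start, AF_INET6))
    virBufferAsprintf(&cfgbuf, "dhcp-range=%s,%s,%d\n",
                      startstr, endstr, prefix);
else
    virBufferAsprintf(&cfgbuf, "dhcp-range=%s,%s\n", startstr, endstr);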
FreeBSD's sed(1) doesn't support using "\n" to insert a newline,
so the installed default.xml file ends up containing a literal
"n" between tags; to work around this problem, add a tr(1)
invocation as suggested by the sed FAQ[1].
[1] http://sed.sourceforge.net/sedfaq4.html (4.1 c)
The stream serial number is the serial number of the RPC call
that initiated a data transfer. And as such can never be
negative. Moreover, when looking up internal state for a stream,
the serial numbers are compared. But hey, the serial number in
message header is unsigned too!
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
According to the autoconf manual, using '$(LN_S) -f' is not
portable; remove the target explicitly beforehand to work around
this limitation.
Adjust some slightly awkward indentation while at it.
My commit 0d1579572 crashes on a URI without a scheme, like via
'virsh --connect frob'
Add a check on uri->server too while we are at it, and centralize
them all
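The centralized check boils down to guarding both fields before they are
dereferenced; a sketch (whether the real code reports an error or simply
declines the URI at this point is an assumption):
/* Both fields may legitimately be NULL for URIs like "frob", so never
 * dereference them without checking. */
if (!uri->scheme || !uri->server)
    goto done;   /* nothing to match against; treat as not remote */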
The current rule fails if the target already exists:
cd /home/jenkins/build/libvirt/lib && \
ln -s libnss_libvirt.so.1 nss_libvirt.so.1
ln: nss_libvirt.so.1: File exists
Makefile:3357: recipe for target 'install-exec-hook' failed
However, all other rules concerned with installation are
idempotent and will happily overwrite an existing target,
so this one should as well.
We don't have input devices in the SDK, thus for define/dumpxml
operations to be consistent we need to:
1. on dumpxml: infer input devices from other parts of the config.
It is already done in prlsdkLoadDomain.
2. on define: check that the input devices are the same ones that
will be inferred back on the dumpxml operation.
The second part is what this patch addresses.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Remove all the plumbing needed for the different qcow-create/kvm-img
non-raw file creation.
We can drop the error messages because CreateQemuImg will throw an
error for us but with slightly less fidelity (unable to find qemu-img),
which I think is acceptable given the unlikeliness of that error in
practice.
This is an ubuntu/debian packaging convention. At one point it may have
been an actually different binary, but at least as of ubuntu precise
(the oldest supported ubuntu distro, released april 2012) kvm-img is
just a symlink to qemu-img for back compat.
I think it's safe to drop support for it
qcow-create was a crippled qemu-img impl that shipped with xen. I
think supporting this was only relevant for really old distros
that didn't have a proper qemu package, like early RHEL5. I think
it's fair to drop support
VIR_ERR_NO_SUPPORT maps to the error string
this function is not supported by the connection driver
and is largely only used for when a driver doesn't have any
implementation for a public API. So its usage with invalid
net-update requests is a bit out of place. Instead use
VIR_ERR_OPERATION_UNSUPPORTED which maps to:
Operation not supported
And is what qemu's hotplug routines use in similar scenarios
QEMU introduced the query-gic-capabilities QMP command
with commit 4468d4e0f383: use the command, if available,
to probe available GIC capabilities.
The information obtained is stored in a virQEMUCaps
instance, and will be later used to fill in a
virDomainCaps instance.
The struct contains a single boolean field, 'supported':
the meaning of this field is too generic to be limited to
devices only, and in fact it's already being used for
other things like loaders and OSs.
Instead of trying to come up with a more generic name just
get rid of the struct altogether.
libvirt handles an empty source as NULL, while the vz sdk uses "",
thus we need a bit of conversion.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
The current implementation does not detect all incompatible configurations.
For example, we may have the boot order "cdrom1, cdrom0" in vzsdk (that is
"hdb, hda" in the case of IDE cdroms) while the cdroms have no disk
images inserted. In this case the boot order check code fails to
distinguish them at all, as PrlVmDev_GetFriendlyName gives "" for both.
The consequences are only missing warnings, but as we have just
introduced all the necessary tools to face the problem, let's fix it.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Actually using disk PrlVmDev_GetFriendlyName as id on
detaching volumes is not a problem. We can only detach
hard disks and these can not have empty friendly names.
But the upcoming update-device functionality for cdroms cannot use the
disk source as an id at all, as the update operation typically changes
this very source value. Thus we will need to use the cdrom bus and cdrom
target name as the cdrom id. So, in an attempt to use the same id scheme
for all purposes, let's fix the hard disk detach function to use the new
id.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Our intention is to use disk bus and disk target name pair
as disk id instead of name returned by PrlVmDev_GetFriendlyName.
We already have the code that extracts this pair from vzsdk
data. Let's factor it out into a function.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Prior to this patch we didn't make any attempt to prevent two entries
in the array of interfaces/PCI devices from pointing to the same
device.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1002423
The common idiom in the driver API implementations is roughly:
- ACL check
- BeginJob (if needed)
- AgentAvailable (if needed)
- !IsActive
A few calls had an extra !IsActive before BeginJob, which doesn't
seem to serve much use. Drop them
It isn't implemented and does not work:
error: internal error: guest failed to start: /usr/lib/libvirt/libvirt_lxc: option '--veth' requires an argument
syntax: /usr/lib/libvirt/libvirt_lxc [OPTIONS] ...
We previously threw an explicit error, but this changed in
22cff52a2b, which I suspect was
untested for LXC
So in glibc-2.23 sys/sysmacros.h is no longer included from sys/types.h
and we don't build because of the usage of major/minor/makedev macros.
Autoconf already has the AC_HEADER_MAJOR macro that checks where exactly
these functions/macros are defined, so let's use that.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
These APIs already support VIR_DOMAIN_AFFECT_* flags. But the
documentation does not mention it. Eww.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The threadpool increments the current number of threads according to the
current load, i.e. how many jobs are waiting in the queue. The count,
however, is constrained by the max and min worker limits. The logic of
this new API works like this (a rough sketch follows the list):
1) setting the minimum
a) When the limit is increased, depending on the current number of
threads, new threads are possibly spawned if the current number of
threads is less than the new minimum limit
b) Decreasing the minimum limit has no possible effect on the current
number of threads
2) setting the maximum
a) Increasing the maximum limit has no immediate effect on the current
number of threads, it only allows the threadpool to spawn more
threads when new jobs, that would otherwise end up queued, arrive.
b) Decreasing the maximum limit may affect the current number of
threads if the current number of threads exceeds the new
maximum limit. Since there may be some ongoing time-consuming jobs
that would effectively block this API from killing any threads,
this API is asynchronous with best-effort execution,
i.e. the necessary number of workers will be terminated once they
finish their previous job, unless other workers had already
terminated, decreasing the number of threads to the requested value.
3) setting priority workers
- both increasing and decreasing the count of these workers has an
immediate impact on the current number of workers: new ones will be
spawned or some of them will be terminated, respectively.
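A very rough sketch of the minimum-limit path described in 1a); every name
apart from virThreadPoolPtr is an assumption, and locking/error reporting
are omitted:
/* When the minimum is raised above the current thread count, spawn the
 * missing workers immediately; lowering it changes nothing right away. */
int
virThreadPoolSetMinWorkers(virThreadPoolPtr pool, size_t min)
{
    pool->minWorkers = min;                       /* illustrative field */

    while (pool->nWorkers < pool->minWorkers) {   /* illustrative fields */
        if (virThreadPoolExpand(pool, 1, false) < 0)  /* assumed helper */
            return -1;
    }
    return 0;
}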
Signed-off-by: Erik Skultety <eskultet@redhat.com>
New API to retrieve current server workerpool specs. Since it uses typed
parameters, more specs to retrieve can be further included in the pool of
supported ones.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
In order for the client to see all thread counts and limits, current total
and free worker count getters need to be introduced. Client might also be
interested in the job queue length, so provide a getter for that too. As with
the other getters, preparing for the admin interface, mutual exclusion is used
within all getters.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
So far, the values the affected getters retrieve are static, i.e. there's no
way of changing them during runtime. But admin interface will later enable
not only getting but changing them as well. So to prevent phenomena like
torn reads or concurrent reads and writes of unaligned values, use mutual
exclusion when getting these values (writes do, understandably, use them
already).
Signed-off-by: Erik Skultety <eskultet@redhat.com>
When either creating a threadpool, or creating a new thread to accomplish a job
that had been placed into the jobqueue, every time thread-specific data need to
be allocated, threadpool needs to be (re)-allocated and thread count indicators
updated. Make the code clearer to read by compressing these operations into a
more complex one.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
virTypedParamsValidate currently uses an index based check to find
duplicate parameters. This check does not work. Consider the following
simple example:
We have only 2 keys
A (multiples allowed)
B (multiples NOT allowed)
We are given the following list of parameters to check:
A
A
B
If you work through the validation loop you will see that our last iteration
through the loop has i=2 and j=1. In this case, i > j and keys[j].value.i will
indicate that multiples are not allowed. Both conditionals are satisfied so
an incorrect error will be given: "parameter '%s' occurs multiple times"
This patch replaces the index based check with code that remembers
the name of the last parameter seen and only triggers the error case if
the current parameter name equals the last one. This works because the
list is sorted and duplicate parameters will be grouped together.
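The fix boils down to comparing against the previously seen name instead
of an index test, roughly (a sketch, not the literal patch; the
multiples_allowed() helper is a placeholder for the keys lookup):
/* The params array is sorted by name, so duplicates are adjacent:
 * remember the last name and compare it with the current one. */
const char *last = NULL;
size_t i;

for (i = 0; i < nparams; i++) {
    if (last && STREQ(last, params[i].field) &&
        !multiples_allowed(params[i].field)) {
        virReportError(VIR_ERR_INVALID_ARG,
                       _("parameter '%s' occurs multiple times"),
                       params[i].field);
        return -1;
    }
    last = params[i].field;
}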
In reality, we hit this bug while using selective block migration to migrate
a guest with 5 disks. 5 was apparently just the right number to push i > j
and hit this bug.
virsh migrate --live guestname --copy-storage-all
--migrate-disks vdb,vdc,vdd,vde,vdf
qemu+ssh://dsthost/system
Signed-off-by: Jason J. Herne <jjherne@linux.vnet.ibm.com>
Reviewed-by: Eric Farman <farman@linux.vnet.ibm.com>
The migration API allows specifying a destination domain configuration.
An offline domain has only inactive XML, and it is replaced by the
configuration specified using the VIR_MIGRATE_PARAM_DEST_XML param. In
the case of live migration the VIR_MIGRATE_PARAM_DEST_XML param is
applied to the active XML.
This commit introduces the new VIR_MIGRATE_PARAM_PERSIST_XML param
that can be used within live migration to replace persistent/inactive
configuration.
Required for: https://bugzilla.redhat.com/show_bug.cgi?id=835300
By default, `zfs create -V ...` reserves space for the entire volsize,
plus some extra (which attempts to account for overhead).
If `zfs create -s -V ...` is used instead, zvols are (fully) sparse.
A middle ground (partial allocation) can be achieved with
`zfs create -s -o refreservation=... -V ...`. Both libvirt and ZFS
support this approach, so the ZFS storage backend should support it.
Signed-off-by: Richard Laager <rlaager@wiktel.com>
Commit id '4b75237f' seems to have triggered Coverity into finding
at least one memory leak in xen_xl.c in the cleanup error path, where
listenAddr would be leaked. Reviewing other callers, it seems that
qemu_parse_command.c would have the same issue, so fix it too.
To ensure the libvirt libxl driver will build with future versions
of Xen where the libxl API may change in incompatible ways,
explicitly use LIBXL_API_VERSION 0x040200. The libxl driver
does use new libxl APIs that have been added since Xen 4.2, but
currently it does not make use of any changes made to existing
APIs such as libxl_domain_create_restore or libxl_set_vcpuaffinity.
The version can be bumped if/when the libxl driver consumes the
changed APIs.
Further details can be found in the following discussion thread
https://www.redhat.com/archives/libvir-list/2016-April/msg00178.html
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
When creating the master key, we used mode 0600 (which we should) but
because we were creating it as root, the file is not readable by any
qemu running as non-root. Fortunately, it's just a matter of labelling
the file. We are generating the file path a few times already, so let's
label it in the same function that has access to the path already.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
In a few places in libvirt we busy-wait for events, for example qemu
creating a monitor socket. This is problematic because:
- We need to choose a sufficiently small polling period so that
libvirt doesn't add unnecessary delays.
- We need to choose a sufficiently large polling period so that
busy-waiting doesn't put unnecessary load on the system.
The solution to this conflict is to use an exponential backoff.
This patch adds two functions to hide the details, and modifies a few
places where we currently busy-wait.
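The shape of the two helpers is roughly the following (a sketch under
assumed names and limits; the real functions may differ):
/* Exponential-backoff wait: start with a short sleep and double it up to
 * a cap, so fast events are caught quickly while slow ones don't burn CPU. */
typedef struct {
    unsigned long long next;    /* next sleep, in ms */
    unsigned long long limit;   /* maximum sleep, in ms */
} backOff;

static void backOffStart(backOff *b)
{
    b->next = 1;
    b->limit = 500;
}

static void backOffWait(backOff *b)
{
    usleep(b->next * 1000);     /* ms -> us */
    if (b->next < b->limit)
        b->next *= 2;
}

/* Usage: backOffStart(&b); while (!socketReady()) backOffWait(&b); */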
Signed-off-by: Richard W.M. Jones <rjones@redhat.com>
In the case of a ploop volume, the target path of the volume is the path
to the directory that contains the image file named root.hds and
DiskDescriptor.xml. When using the uploadVol and downloadVol callbacks we
need to open root.hds itself.
Upload and download operations with a ploop volume are only allowed when
the image does not have snapshots; otherwise the operation fails.
Signed-off-by: Olga Krishtal <okrishtal@virtuozzo.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Refreshes meta-information such as allocation, capacity, format, etc.
Ploop volumes differ from other volume types: the path to the volume is
the path to a directory with the image file root.hds and DiskDescriptor.xml.
https://openvz.org/Ploop/format
Due to this fact, the operations of opening the volume have to be done
once again to get the information.
To decide whether a given volume is a ploop one, it is necessary to check
for the presence of the root.hds and DiskDescriptor.xml files in the
volume's directory. Only in this case can the volume be manipulated as a
ploop one. This strategy helps us resolve problems that might occur when
we upload some other volume type from a ploop source.
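The check itself is simple; a sketch (the helper name is made up):
/* A volume is treated as ploop only when its target directory contains
 * both root.hds and DiskDescriptor.xml. */
static bool
volumeLooksLikePloop(const char *dir)
{
    char *root = NULL, *desc = NULL;
    bool ret = false;

    if (virAsprintf(&root, "%s/root.hds", dir) < 0 ||
        virAsprintf(&desc, "%s/DiskDescriptor.xml", dir) < 0)
        goto cleanup;

    ret = virFileExists(root) && virFileExists(desc);

 cleanup:
    VIR_FREE(root);
    VIR_FREE(desc);
    return ret;
}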
Signed-off-by: Olga Krishtal <okrishtal@virtuozzo.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Recursively deletes the whole directory of a ploop volume.
To delete a ploop image, it has to be unmounted.
Signed-off-by: Olga Krishtal <okrishtal@virtuozzo.com>
These callbacks let us create ploop volumes in dir, fs and similar pools.
If a ploop volume was created via the buildVol callback, then this volume
is an empty ploop device with DiskDescriptor.xml.
If the volume was created via .buildFrom, then its content is similar to
the input volume's content.
Signed-off-by: Olga Krishtal <okrishtal@virtuozzo.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
A ploop image consists of a directory with two files: the ploop image
itself, called root.hds, and DiskDescriptor.xml, which contains information
about the ploop device: https://openvz.org/Ploop/format.
Such volumes are difficult to manipulate in terms of the existing volume
types because they are neither single files nor directories.
This patch introduces a new volume type - ploop. This volume type is used
by ploop volumes exclusively.
Signed-off-by: Olga Krishtal <okrishtal@virtuozzo.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Rather than trying some magic calculations on our side query the monitor
for the current size of the memory balloon both on hotplug and
hotunplug.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1220702
Now that there is just one format of the memory balloon command line
used, the code can be merged into a single function.
Additionally with some tweaks to the control flow the code is easier to
read.
The change that made qemu not add the memballoon by default happened
prior to 0.12.0. Additionally the comment was misleading due to the code
that was added below. Since we always need to add a balloon on the
commandline drop the comment.
The only caller of this code is:
for (i = 0; i < dom->def->ngraphics; i++) {
if (dom->def->graphics[i]->type == VIR_DOMAIN_GRAPHICS_TYPE_SPICE) {
if (!(mig->graphics =
qemuMigrationCookieGraphicsAlloc(driver, dom->def->graphics[i])))
return -1;
mig->flags |= QEMU_MIGRATION_COOKIE_GRAPHICS;
break;
}
}
So this is never triggered for VNC, and in fact VNC has no support for
seamless migration anyways so that seems correct. Drop the dead VNC
handling.
Since the vz driver now lives as a part of the daemon, we can benefit
from this fact and allow vz clients to use the shared driver APIs for
storage, network, nwfilter etc.
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
This is backed by the qemu device pxb-pcie, which will be available in
qemu 2.6.0.
As with pci-expander-bus (which uses qemu's pxb device), the busNr
attribute and <node> subelement of <target> are used to set the bus_nr
and numa_node options.
During post-parse we validate that the domain's machinetype is
q35-based (since the device shows up for 440fx-based machinetypes, but
is unusable), as well as checking that <node> specifies a node that is
actually configured on the guest.
This controller provides a single PCIe port on a new root. It is
similar to pci-expander-bus, intended to provide a bus that can be
associated with a guest-identifiable NUMA node, but is for
machinetypes with PCIe rather than PCI (e.g. q35-based machinetypes).
Aside from PCIe vs. PCI, the other main difference is that a
pci-expander-bus has a companion pci-bridge that is automatically
attached along with it, but pcie-expander-bus has only a single port,
and that port will only connect to a pcie-root-port, or to a
pcie-switch-upstream-port. In order for the bus to be of any use in
the guest, it must have either a pcie-root-port or a
pcie-switch-upstream-port attached (and one or more
pcie-switch-downstream-ports attached to the
pcie-switch-upstream-port).
The pxb device is a PCIe expander bus that can be added to any
Q35-based machinetype. A single PCIe port (*not* hotpluggable) is
provided; if more than one device is desired, or if hotplug
support is needed, either a pcie-root-port, or some combination of
pcie-switch-upstream-port and pcie-switch-downstream-ports must be
added to it. It can have a NUMA node number associated with it, as
well as a bus number.
This is backed by the qemu device "pxb".
The pxb device always includes a pci-bridge that is at the bus number
of the pxb + 1.
busNr and <node> from the <target> subelement are used to set the
bus_nr and numa_node options for pxb.
During post-parse we validate that the domain's machinetype is
440fx-based (since the pxb device only works on 440fx-based machines),
and <node> also gets a sanity check to assure that the NUMA node
specified for the pxb (if any - it's optional) actually exists on the
guest.
This is a standard PCI root bus (not a bridge) that can be added to a
440fx-based domain. Although it uses a PCI slot, this is *not* how it
is connected into the PCI bus hierarchy, but is only used for
control. Each pci-expander-bus provides 32 slots (0-31) that can
accept hotplug of standard PCI devices.
The usefulness of pci-expander-bus relative to a pci-bridge is that
the NUMA node of the bus can be specified with the <node> subelement
of <target>. This gives guest-side visibility to the NUMA node of
attached devices (presuming that management apps only assign a device
to a bus that has a NUMA node number matching the node number of the
device on the host).
Each pci-expander-bus also has a "busNr" attribute. The expander-bus
itself will take the busNr specified, and all buses that are connected
to this bus (including the pci-bridge that is automatically added to
any expander bus of model "pxb" (see the next commit)) will use
busNr+1, busNr+2, etc, and the pci-root (or the expander-bus with next
lower busNr) will use bus numbers lower than busNr.
The pxb device is a PCI expander bus that can be added to any
440fx-based machinetype. The PCI bus that is created has 32 standard
PCI slots (hotpluggable). It can have a NUMA node number associated
with it, as well as a bus number.
There are two places in qemu_domain_address.c where we have a switch
statement to convert PCI controller models
(VIR_DOMAIN_CONTROLLER_MODEL_PCI*) into the connection type flag that
is matched when looking for an upstream connection for that model of
controller (VIR_PCI_CONNECT_TYPE_*). This patch makes a utility
function in conf/domain_addr.c to do that, so that when a new PCI
controller is added, we only need to add the new model-->connect-type
in a single place.
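The new helper is essentially a single switch, along these lines (a
sketch; the exact function name, flag names and model coverage may
differ):
/* Map a PCI controller model to the connect-type flag used when looking
 * for a bus it can plug into. */
virDomainPCIConnectFlags
virDomainPCIControllerModelToConnectType(virDomainControllerModelPCI model)
{
    switch (model) {
    case VIR_DOMAIN_CONTROLLER_MODEL_PCI_BRIDGE:
        /* a pci-bridge plugs in wherever an endpoint PCI device can */
        return VIR_PCI_CONNECT_TYPE_PCI_DEVICE;
    case VIR_DOMAIN_CONTROLLER_MODEL_PCIE_ROOT_PORT:
        return VIR_PCI_CONNECT_TYPE_PCIE_ROOT_PORT;
    /* ... one case per controller model ... */
    default:
        return 0;
    }
}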
The flags used to determine which devices could be plugged into which
controllers were quite confusing, as they tried to create classes of
connections, then put particular devices into possibly multiple
classes, while sometimes setting multiple flags for the controllers
themselves. The attempt to have a single flag indicate, e.g. that a
root-port or a switch-downstream-port could connect was not only
confusing, it was leading to a situation where it would be impossible
to specify exactly the right combinations for a new controller.
The solution is for the VIR_PCI_CONNECT_TYPE_* flags to have a 1:1
correspondence with each type of PCI controller, plus a flag for a PCI
endpoint device and another for a PCIe endpoint device (the only
exception to this is that pci-bridge and pcie-expander-bus controllers
have their upstream connection classified as
VIR_PCI_CONNECT_TYPE_PCI_DEVICE since they can be plugged into
*exactly* the same ports as any endpoint device). Each device then
has a single flag for connect type (plus the HOTPLUG flag if that
device can be hotplugged), and each controller sets the CONNECT bits
for all controllers that can be plugged into it, as well as for either
type of endpoint device that can be plugged in (and the HOTPLUG flag
if it can accept hotplugged devices).
With this change, it is *slightly* easier to understand the matching
of connections (as long as you remember that the flag for a
device/upstream-facing connection of a controller is the same as that
device's type, while the flags for a controller's downstream
connections is the OR of all device types that can be plugged into
that controller). More importantly, it will be possible to correctly
specify what can be plugged into a pcie-switch-expander-bus, when
support for it is added.
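With that scheme the matching check reduces to a bitwise test, roughly
(a sketch; the mask macro name is an assumption):
/* A device, or a controller's upstream side, fits on a bus when the bus
 * advertises that device's connect-type flag; hotplug is an extra bit
 * checked only when the device is being hotplugged. */
if (!(busFlags & devFlags & VIR_PCI_CONNECT_TYPES_MASK)) {
    /* report: device cannot be plugged into this controller */
}
if (hotplug && !(busFlags & VIR_PCI_CONNECT_HOTPLUGGABLE)) {
    /* report: controller does not accept hotplugged devices */
}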
When support for dmi-to-pci-bridge was added, it was assumed that,
just as with the pci-root bus, slot 0 was reserved. This is not the
case - it can be used to connect a device just like any other slot, so
remove the restriction and update the test cases that auto-assign an
address on a dmi-to-pci-bridge.
Every other maxSlot was either set to 0 or to
VIR_PCI_ADDRESS_SLOT_LAST, but this one was for some reason set to the
literal value 31 (which is the same as VIR_PCI_ADDRESS_SLOT_LAST).
This makes them all consistent.
We use device-mapper to enumerate all dm devices, and filter out
the list of multipath devices by checking the target_type string
name. The code however cancels all scanning if we encounter
target_type=NULL
I don't know how to reproduce that situation, but a user was hitting
it in their setup, and inspecting the lvm2/device-mapper code shows
many places where !target_type is explicitly ignored and processing
continues on to the next device. So I think we should do the same
https://bugzilla.redhat.com/show_bug.cgi?id=1069317
The watchdog cli refactoring in 4666b762 dropped the temporary variable
we used to convert action=dump to action=pause for the qemu cli, and
stored the converted value in the domain structure. Our other watchdog
handling code then treated it as though the user requested action=pause,
which broke action=dump handling.
Revive the temporary variable to fix things.
I tried compiling libvirt with older gcc and probably because I used
different configure options I got some shadowed declarations.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Add code to support creating a domain with a network definition that
assigns an SRIOV VF from a pool.
Signed-off-by: Chunyan Liu <cyliu@suse.com>
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
commit 30c61901 added new functions to libvirt_private.syms
not alphabetically sorted and erroneously added vz sources to
STATEFUL_DRIVER_SOURCE_FILES, which triggered check-aclrules
running while vz driver isn't ready for it yet.
Pushing under build-breaker rule.
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
The SDK does not allocate memory when getting strings, thus we need to
call every function that returns a string twice: first to obtain the
string length, second to obtain the string itself. This is tedious, so
let's create helper functions both for the cases when we know the length
of the result beforehand and for those when we do not.
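The helper wraps the usual two-call pattern, roughly like this (a sketch;
abstracting the SDK getter behind a callback and the exact names are
assumptions, and error checking is omitted):
/* Call the SDK getter once to learn the required length, allocate, then
 * call it again to fetch the string itself. */
typedef PRL_RESULT (*prlsdkStringGetter)(PRL_HANDLE h, char *buf,
                                         PRL_UINT32 *len);

static char *
prlsdkGetStringVar(prlsdkStringGetter getter, PRL_HANDLE handle)
{
    PRL_UINT32 len = 0;
    char *str = NULL;

    getter(handle, NULL, &len);      /* first call: query the length */
    if (VIR_ALLOC_N(str, len + 1) < 0)
        return NULL;
    getter(handle, str, &len);       /* second call: fetch the data */
    return str;
}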
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Remove the unnecessary vzConnectClose prototype and make the local
structure vzDomainDefParserConfig static.
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
We don't need them anymore, as none of the pointers within the vzDriver
structure change during its lifetime.
Where we still need to synchronize, we use virObjectLock/Unlock, since
vzDriver is a lockable object.
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
Lock the driver when a new domain is created in prlsdkNewDomainByHandle
and try to find it in the list again under the lock, because this can
race with vzDomainDefineXMLFlags when a domain with the same uuid is
added both via the vz dispatcher directly and via a libvirt define.
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
This patch introduces a new 'vzDriver' lockable object, provides helper
functions to allocate/destroy it, and passes it to prlsdkXxx functions
instead of virConnectPtr.
Now we store domain related objects such as the domain list, capabilities,
etc. within a single vz_driver vzDriver structure, which is shared by
all driver connections. It is allocated during daemon initialization or
in a lazy manner when a new connection to the 'vz' driver is established.
When a connection to the vz daemon drops, vzDestroyConnection is called,
which in turn relays the disconnect event to all connections to the 'vz'
driver.
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
Make it possible to build the vz driver as a module and don't link it
into libvirt.so statically.
Remove registering it on the client's side, since we now rely on the
daemon.
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
GCC in RHEL-6 complains about listen:
../../src/conf/domain_conf.c:23718: error: declaration of 'listen' shadows a global declaration [-Wshadow]
/usr/include/sys/socket.h:204: error: shadowed declaration is here [-Wshadow]
This renames all the listen to gListen.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Trying to reload/SIGUSR1 virtlogd or virtlockd fails with:
error : virNetDaemonRun:747 : internal error: Not all servers restored, cannot run server
Commit 252610f7 changed the daemon state json to allow tracking
multiple servers. However it missed clearing dmn->srvObject after
the json is empty, like the previous code paths handled. Later on in
virNetDaemonRun, dmn->srvObject is expected to be empty, otherwise we
throw the above error.
https://bugzilla.redhat.com/show_bug.cgi?id=1311013
Since qemu is now able to notify us that the guest rejected the memory
unplug operation we can relay this to the user and make the API fail
right away.
Additionally document the possible values from the ACPI docs for future
reference.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1320447
The event is emitted on ACPI OSPM Status Indication events.
ACPI standard documentation describes the method as:
This object is an optional control method that is invoked by OSPM to
indicate processing status to the platform. During device ejection,
device hot add, or other event processing, OSPM may need to perform
specific handshaking with the platform. OSPM may also need to indicate
to the platform its inability to complete a requested operation; for
example, when a user presses an ejection button for a device that is
currently in use or is otherwise currently incapable of being ejected.
In this case, the processing of the ACPI Eject Request notification by
OSPM fails. OSPM may indicate this failure to the platform through the
invocation of the _OST control method. As a result of the status
notification indicating ejection failure, the platform may take certain
action including reissuing the notification or perhaps turning on an
appropriate indicator light to signal the failure to the user.
Since we didn't opt to use one single event for device lifecycle for a
VM we are missing one last event if the device removal failed. This
event will be emitted once we asked to eject the device but for some
reason it is not possible.
Similarly to the DEVICE_DELETED event we will be able to tell when
unplug of certain device types will be rejected by the guest OS. Wire up
the device deletion signalling code to allow handling this.
Neither of the callers cares whether the DEVICE_DELETED event isn't
supported or the event was received. Simplify the code and callers by
unifying the two values and changing the return value constants so that
a temporary variable can be omitted.
Callers ignore if this function returns -1 and continue as though the
DEVICE_DELETED event was not received. Since we can't be sure that the
event was not received we should behave as if the event was not
supported and remove the device definition right away. The error
fortunately won't really happen here.
The address assigning code might add new pci bridges.
We need them to have an alias when building the command line.
In real world usage, this is not a problem because all the code
paths already call qemuDomainAssignAddresses. However moving
this call lets us remove one extra call from qemuxml2argvtest.
Essentially revert commit 3a6204c which added these to allow the test
suite to pass without depending on the host system state.
Since commit 4b527c1 we already mock virSCSIDeviceGetSgName, so these
callbacks are useless.
Instead of calling the virDomainGraphicsListensParseXML function for all
graphics types and ignoring the wrong ones, move the call only to the
graphics types where we support listen elements.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Those are the last two places that uses the getter functions. Use a
direct access instead and remove those getters.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Removes the check for graphics type; this is not a public API, the
developer knows what they're doing, and the check makes no sense. It also
removes the ability to allocate a new array if there is none. That was
used by the virDomainGraphicsListenAdd* functions and isn't used anymore.
This is now a simple getter with a simple check for the presence of the
listens array and whether the index is out of bounds.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
This effectively removes virDomainGraphicsListenSetAddress, which was
used only to change the address of a listen structure and possibly change
the listen type. The new function will auto-expand the listens array
and append a new listen.
The old function was used on a pre-allocated array of listens and in most
cases it only "added" a new listen. The two remaining uses can access the
listen structure directly.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
The rest of the fields of the iotune data structure did not check for
malformed integers. Use the previously defined macro to extract them
which will simplify the code and add error reporting.
Since the structure was pre-initialized to 0 we don't need to set every
single member to 0 if it's not present in the XML. Additionally if we
put the name of the field into the error message the code can be
simplified using a macro to parse the members.
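The macro pattern is roughly the following (a sketch, not the exact macro
from the patch):
/* One macro per iotune member keeps the XPath query, the integer parsing
 * and the error message in a single place. */
#define PARSE_IOTUNE(val)                                               \
    if (virXPathULongLong("string(./iotune/" #val ")",                  \
                          ctxt, &def->blkdeviotune.val) == -2) {        \
        virReportError(VIR_ERR_XML_ERROR,                               \
                       _("disk iotune field '%s' must be an integer"),  \
                       #val);                                           \
        goto error;                                                     \
    }

    PARSE_IOTUNE(total_bytes_sec);
    PARSE_IOTUNE(read_bytes_sec);
    /* ... */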
There is no need to remember the connection name and have a corresponding
domain type to keep backward compatibility with the former 'parallels'
driver. It is enough to be able to accept the 'parallels' URI and domain
types.
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>