Rather than returning a "char *", which may or may not be a
NUL-terminated set of characters of some size, alter the function to
return 0 or -1 for success/failure and add two parameters to handle
returning the buffer and its size.
The function no longer encodes the returned secret; rather, it returns
the unencoded secret, forcing callers to make the necessary adjustments.
Alter the callers to handle the adjusted model.
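As a rough sketch of the adjusted shape (hypothetical function name
and parameter types, not the exact prototype):

    /* before: the caller gets a pointer and has to guess its size */
    char *getSecretString(...);

    /* after: 0/-1 result, buffer and its size returned via out params */
    int getSecretString(..., uint8_t **secret, size_t *secret_size);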
Signed-off-by: John Ferlan <jferlan@redhat.com>
Call the internal driver callbacks rather than the public APIs to avoid
unnecessarily invoking the error dispatching code, and don't overwrite
the error messages provided by the APIs. They are good enough to
describe which secret is missing, either by UUID or by usage (basically
the name).
For a few cases where we handle secret information it's good to clear
the buffers containing sensitive data before freeing them.
Introduce VIR_DISPOSE, VIR_DISPOSE_N and VIR_DISPOSE_STRING that allow
simple clearing of the buffers holding sensitive information on cleanup
paths.
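A minimal usage sketch on a cleanup path (assuming the macros zero the
memory before freeing it and reset the pointer, analogous to VIR_FREE):

     cleanup:
        /* wipe key material before returning memory to the allocator */
        VIR_DISPOSE_N(secret, secretlen);
        VIR_DISPOSE_STRING(password);
        return ret;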
When -cpu host is supported by a QEMU binary, a user can use
<cpu mode='host-passthrough'/> in domain XML even when libvirtd failed
to find a matching model for the host CPU. Let's make it obvious by
advertising <cpuselection/> guest capability whenever -cpu host is
supported.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When probing the <emulator> with '-help' to determine if
it is the old qemu, errors are reported if the emulator
doesn't exist:
libvirt: error : internal error: Child process
(/usr/lib/xen/bin/qemu-dm -help) unexpected exit status 127:
libvirt: error : cannot execute binary /usr/lib/xen/bin/qemu-dm:
No such file or directory
Avoid the probe if the specified emulator doesn't exist,
squelching the error. There is no behavior change since
libxlDomainGetEmulatorType() would return
LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN if the probe failed
via virCommandRun().
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Move some parts of virStorageFileRemoveLastPathComponent
into a separate function so they can be reused.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Further follow-up discussions on the list about commit 192a53e
concluded that we should be leaving out the USB controller only for
i440fx machines, since on q35 someone may be using the default USB
controller at random slots.
Signed-off-by: Shivaprasad G Bhat <sbhat@linux.vnet.ibm.com>
Move filling out the default video (v)ram to DeviceDefPostParse.
This means it can be removed from virDomainVideoDefParseXML
and qemuParseCommandLine. Also, we no longer need to special case
VIR_DOMAIN_VIRT_XEN, since the per-driver callback gets called
before the generic one.
Commit 6879be48 moved adding of an implicit video device after XML
parsing. As a result, libxlDomainDeviceDefPostParse() is no longer
called to set the default vram when adding an implicit device.
Commit 6879be48 assumes virDomainVideoDefaultRAM() will set the
default vram, but it returns 0 if the domain virtType is
VIR_DOMAIN_VIRT_XEN. Attempting to start an HVM domain with vram=0
results in
error: unsupported configuration: videoram must be at least 4MB for CIRRUS
The default vram setting for Xen HVM domains depends on the device
model used (qemu-xen vs qemu-traditional), hence setting the
default is deferred to libxlDomainDeviceDefPostParse().
Call the device post-parse callback even for implicit video,
to fill out the default vram even for VIR_DOMAIN_VIRT_XEN.
https://bugzilla.redhat.com/show_bug.cgi?id=1334557
Most-of-commit-message-by: Jim Fehlig <jfehlig@suse.com>
Both virGetLastError and virGetLastErrorMessage call the
virLastErrorObject method, which returns a thread-local error object.
However, if a direct call to malloc or pthread_setspecific fails
(probably also due to malloc, since it sets ENOMEM), virLastErrorObject
returns NULL. virGetLastError incorrectly interprets that as "no error"
and returns NULL, which at least forces the caller to check for a NULL
pointer. virGetLastErrorMessage also treated it incorrectly as no
error, but returned the literal "no error".
This patch tweaks the checks in the virGetLastErrorMessage function so
that if virLastErrorObject fails, or the error message itself is NULL,
"unknown error" is returned instead of the misleading "no error".
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Commit id 'df1011ca8' modified virStorageBackendDiskDeleteVol to use
"dmsetup remove --force" to remove the volume, but left things in an
inconsistent state since the partition still existed on the disk and
only the device mapper device (/dev/dm-#) was removed.
Prior to commit '1895b421' (or '1ffd82bb' and '471e1c4e'), this could
go unnoticed since virStorageBackendDiskRefreshPool wasn't called.
However, the pool would be unusable since the /dev/dm-# device would
be removed even though the partition was not removed unless a multipathd
restart reset the link. That would of course make the volume appear again
in the pool after a refresh or pool start after libvirt reload.
This patch removes the 'dmsetup' logic and re-implements the partition
deletion logic for device mapper devices. The removal of the partition
via 'parted rm --script #' will cause udev device change logic to allow
multipathd to handle removing the dm-* device associated with the partition.
https://bugzilla.redhat.com/show_bug.cgi?id=1265694
Commit id '020135dc' didn't quite get the algorithm correct when a
device mapper source ended with a non-numeric value (e.g. ends with
an alphabetic character).
This patch modifies the 'part_separator' logic to add the "p" separator
to the attempted target path name only when specified as part_separator='yes'.
For a source name that already ends with a number, the logic doesn't change
as the part separator would need to be there.
For a source name that ends with something other than a number, this allows
the possibility that a "p" separator can be added. The default for one of
these source devices is to not add the separator.
The key for device mapper and the need for a partition separator "p" is
the presence of a number in the last character of the device name link
in /dev/mapper. A name such as "/dev/mapper/mpatha1" would generate
a "/dev/mapper/mpatha1p1" partition, while "/dev/mapper/mpatha" would
generate partition "/dev/mapper/mpatha1". Similarly for a device
mapper entry not using friendly names or an alias, a device such as
"/dev/mapper/3600a0b80005b10ca00005ad656fd8d93" would generate a
partition "/dev/mapper/3600a0b80005b10ca00005ad656fd8d93p1", while
a device such as "/dev/mapper/3600a0b80005b10ca00005e115729093f" would
generate a partition "/dev/mapper/3600a0b80005b10ca00005e115729093f1".
The long number is the WWID of the device. It's also possible to assign
an alias for a device mapper entry, that alias follows the same rules
with respect to ending with a number or not when adding a "p" to create
the target device path.
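As a standalone sketch of that naming rule (a hypothetical helper in
plain C, not the actual storage backend code):

    #include <ctype.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Build the expected partition path for a device mapper source. */
    static int
    partitionPath(const char *dev, int partnum, bool part_separator,
                  char *buf, size_t buflen)
    {
        size_t len = strlen(dev);
        /* a "p" separator is needed when the device name ends in a digit,
         * or when the pool explicitly requests it via part_separator='yes' */
        bool add_p = part_separator ||
                     (len > 0 && isdigit((unsigned char) dev[len - 1]));

        return snprintf(buf, buflen, "%s%s%d",
                        dev, add_p ? "p" : "", partnum);
    }

With this rule "/dev/mapper/mpatha1" yields "/dev/mapper/mpatha1p1",
while "/dev/mapper/mpatha" yields "/dev/mapper/mpatha1", matching the
examples above.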
Prior to calling the 'refreshPool' during CreatePool or UploadPool
operations, we need to clear the pool; otherwise, the pool will
have duplicated entries.
https://bugzilla.redhat.com/show_bug.cgi?id=1318993
Commit id 'dd519a294' caused a regression cloning a volume into a
logical pool by removing just the 'allocation' adjustment during
storageVolCreateXMLFrom. Combined with the change to not require the
new volume input XML to have a capacity listed (commit id 'e3f1d2a8')
left the possibility that a zero allocation value (e.g., not provided)
would create a thin/sparse logical volume. When a thin lv becomes fully
populated, then LVM sets the partition 'inactive' and the subsequent
fdatasync() fails.
Add a new 'has_allocation' flag to be set at XML parse time to indicate
that allocation was provided. This is done so that if it's not provided
the create-from code uses the capacity value since we document that if
omitted, the volume will be fully allocated at time of creation.
For a logical backend, that creation time is 'createVol', while for a
file backend, creation doesn't set the size, but the 'createRaw' called
during buildVolFrom will decide whether the file is sparse or not based
on the provided capacity and allocation value.
For volume clones that provide different allocation and capacity values
to allow for sparse files, there is no change.
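The create-from decision then boils down to something like the
following (field names are illustrative only):

    /* if the new volume's XML omitted <allocation>, fall back to the
     * capacity so the clone ends up fully allocated as documented */
    if (!newvol->target.has_allocation)
        newvol->target.allocation = newvol->target.capacity;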
Usage of this keyword in front of a function declaration that is
exported via a header file is unnecessary, since internally this has
been the default for most compilers for quite some time.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
libvirt may automatically add a pci-root or pcie-root controller to a
domain, depending on the arch/machinetype, and it hopefully always
makes the right decision about which to add (since in all cases these
controllers are an implicit part of the virtual machine).
But it's always possible that someone will create a config that
explicitly supplies the wrong type of PCI controller for the selected
machinetype. In the past that would lead to an error later when
libvirt was trying to assign addresses to other devices, for example:
XML error: PCI bus is not compatible with the device at
0000:00:02.0. Device requires a PCI Express slot, which is not
provided by bus 0000:00
(that's the error message that appears if you replace the pcie-root
controller in a Q35 domain with a pci-root controller).
This patch adds a check at the same place that the implicit
controllers are added (to ensure that the same logic is used to check
which type of pci root is correct). If a pci controller with index='0'
is already present, we verify that it is of the model that we would
have otherwise added automatically; if not, an error is logged:
The PCI controller with index='0' must be model='pcie-root' for
this machine type, but model='pci-root' was found instead.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1004602
Similar to "support Xen migration stream V2 in save/restore",
add support for indicating the migration stream version in
the migration code. To accomplish this, add a minimal migration
cookie in the libxl driver that is passed between source and
destination hosts. Initially, the cookie is only used in
the Begin and Prepare phases of migration to communicate the
version of the migration stream produced by the source.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Xen 4.6 introduced a new migration stream commonly referred to as
"migration V2". Xen 4.6 and newer always produce this new stream,
whereas Xen 4.5 and older always produce the legacy stream.
Support for migration stream V2 can be detected at build time with
LIBXL_HAVE_SRM_V2 from libxl.h. The legacy and V2 streams are not
compatible, but a V2 host can accept and convert a legacy stream.
Commit e7440656 changed the libxl driver to use the lowest libxl
API version possible (version 0x040200) to ensure the driver
builds against older Xen releases. The old 4.2 restore API does
not support specifying a stream version and assumes a legacy
stream, even if the incoming stream is migration V2. Thinking it
has been given a legacy stream, libxl will fail to convert an
incoming stream that is already V2, which causes the entire
restore operation to fail. Xen's libvirt-related OSSTest has been
failing since commit e7440656 landed in libvirt.git master. One
of the more recent failures can be seen here
http://lists.xenproject.org/archives/html/xen-devel/2016-05/msg00071.html
This patch changes the call to libxl_domain_create_restore() to
include the stream version if LIBXL_HAVE_SRM_V2 is defined. The
version field of the libxlSavefileHeader struct is also updated
to '2' when LIBXL_HAVE_SRM_V2 is defined, ensuring the stream
version in the header matches the actual stream version produced
by Xen. Along with bumping the libxl API requirement to 0x040400,
this patch fixes save/restore on a migration V2 Xen host.
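With LIBXL_HAVE_SRM_V2 the restore call then ends up looking roughly
like this (a sketch; the exact variable names differ):

    libxl_domain_restore_params params;

    libxl_domain_restore_params_init(&params);
    #ifdef LIBXL_HAVE_SRM_V2
        /* tell libxl which stream format the incoming data uses */
        params.stream_version = hdr.version >= 2 ? 2 : 1;
    #endif
        ret = libxl_domain_create_restore(ctx, &d_config, &domid,
                                          restore_fd, &params,
                                          NULL, &aop_console_how);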
Oddly, migration has never used the libxlSavefileHeader. It
handles passing configuration in the Begin and Prepare phases,
and then calls libxl directly to transfer domain state/memory
in the Perform phase. A subsequent patch will add stream
version handling in the Begin and Prepare phase handshaking,
which will fix the migration related OSSTest failures.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
In LIBXL_API_VERSION 0x040400, the libxl_domain_create_restore API
gained a parameter for specifying restore parameters. Switch to
using version 0x040400, which will be useful in a subsequent commit
to specify the Xen migration stream version when restoring.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Remove the possibility that a NULL hostdev->privateData or a
disk->privateData could crash libvirtd by checking for NULL
before dereferencing for the secinfo structure in the
qemuDomainSecret{Disk|Hostdev}Destroy functions. The hostdevPriv
could be NULL if qemuProcessNetworkPrepareDevices adds a new
hostdev during virDomainNetGetActualHostdev that then gets
inserted via virDomainHostdevInsert. The hostdevPriv was added
by commit id '27726d8' and is currently only used by scsi hostdev.
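The destroy helpers now bail out early, roughly like this (a sketch;
helper names approximate):

    static void
    qemuDomainSecretDiskDestroy(virDomainDiskDefPtr disk)
    {
        qemuDomainDiskPrivatePtr diskPriv = QEMU_DOMAIN_DISK_PRIVATE(disk);

        if (!diskPriv || !diskPriv->secinfo)
            return;

        qemuDomainSecretInfoFree(&diskPriv->secinfo);
    }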
SRIOV VFs used in macvtap passthrough mode can take advantage of the
SRIOV card's transparent vlan tagging. All the code was there to set
the vlan tag, and it has been used for SRIOV VFs used for hostdev
interfaces for several years, but for some reason, the vlan tag for
macvtap passthrough devices was stubbed out with a -1.
This patch moves a bit of common validation down to a lower level
(virNetDevReplaceNetConfig()) so it is shared by hostdev and macvtap
modes, and updates the macvtap caller to actually send the vlan config
instead of -1.
Once we're able to list and identify all clients connected to a specific
server, we can then support force-closing a connection. This patch introduces
a simple API calling virNetServerClientClose on a specific client, which
can be later extended easily, e.g. by sending an event once the client is
disconnected successfully.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Unlike the previous commit, we do actually support one client-side only
flag, VIR_CONNECT_NO_ALIASES, so besides removing the check for flags,
this flag has to be masked out before sending a message to the daemon;
otherwise it would trigger an error when checking flags on the daemon
side.
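In other words, something along these lines happens before the RPC
call is made (an illustrative fragment only):

    /* VIR_CONNECT_NO_ALIASES is handled purely on the client side,
     * so strip it before passing the flags on to the daemon */
    flags &= ~VIR_CONNECT_NO_ALIASES;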
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Commit 5ed235c6 added unnecessary redefinition of
virDomainCapsDeviceHostdev in conf/domain_capabilities.h. This breaks
build with clang 3.4:
In file included from conf/domain_capabilities.c:25:
conf/domain_capabilities.h:88:44: error: redefinition of typedef
'virDomainCapsDeviceHostdev' is a C11 feature
[-Werror,-Wtypedef-redefinition]
typedef struct _virDomainCapsDeviceHostdev virDomainCapsDeviceHostdev;
^
conf/domain_capabilities.h:86:44: note: previous definition is here
typedef struct _virDomainCapsDeviceHostdev virDomainCapsDeviceHostdev;
So drop one of those.
If the call to virXPathNodeSet to set naddresses fails, Coverity notes
that the subsequent VIR_ALLOC_N cannot have a negative value (well it
probably wouldn't be negative per se).
Signed-off-by: John Ferlan <jferlan@redhat.com>
Both instances use VIR_WARN() to print the error from a failed
virDBusGetSystemBus() call. Rather than using virGetLastError and
needing to check for a valid error pointer, just use
virGetLastErrorMessage.
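Both call sites then reduce to something of this shape (illustrative;
the warning text is not verbatim):

    if (!(sysbus = virDBusGetSystemBus())) {
        VIR_WARN("Unable to get DBus system bus: %s",
                 virGetLastErrorMessage());
        return 0;
    }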
Signed-off-by: John Ferlan <jferlan@redhat.com>
Requires adding the plumbing for <device><video>.
The value is <enum name='modelType'> to match the associated domain
XML of <video><model type='XXX'/>.
Wire it up for qemu too.
Commit 0b36b0e9 broke polkit agent startup when attempting to fix a
coverity warning. Refactor it properly so that we don't need the 'cmd'
intermediate variable.
qemuDomainCheckDiskPresence has short-circuit code to skip the
determination of the disk backing chain for storage formats that can't
have backing volumes. The code treats VIR_STORAGE_FILE_NONE as not
having backing chain and skips the call to qemuDomainDetermineDiskChain.
This is wrong as qemuDomainDetermineDiskChain is responsible for storage
format detection and has logic to determine the default type if format
detection is disabled.
This allows storage passed via <disk type="volume"> to circumvent the
enforcement of having a correct storage format, or of defaulting to
format='raw', since we don't set the default type via the post-parse
callback for "volume"-backed disks, as the translation code could come
up with a better guess.
This resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1328003
Extract the relevant parts of the existing checker and reuse them for
blockcopy since copying to a non-block device creates an invalid
configuration.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1209802
qemuCheckDiskConfig would now use virDomainDiskSourceIsBlockType just
as a glorified version of virStorageSourceIsBlockLocal that reports
error messages. Replace it with the latter, including the message, for
clarity.
Commit c820fbff9f added support for an iSCSI
disk as backing for <disk device='lun'>. We would not use it for a disk
type="volume" with direct access mode, which basically maps to direct
iSCSI usage. Fix it by adding the storage source type accessor that
resolves the volume type.
Commit 36025c552 tried to improve error reporting for <disk type="lun">
but reused the code in LXC which doesn't care about the actual disk
type. The error messages would then contain a bogus hint that the
config for the 'lun' device is invalid which might not be the case.
Re-do the relevant portion of the commit with the original message.
For disks sources described by a libvirt volume we don't need to do a
complicated check since virStorageTranslateDiskSourcePool already
correctly determines the actual disk type.
Replace the checks using a new accessor that does not open-code the
whole logic.
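The accessor in question is essentially the following (a sketch of its
shape):

    bool
    virStorageSourceIsBlockLocal(const virStorageSource *src)
    {
        return virStorageSourceGetActualType(src) == VIR_STORAGE_TYPE_BLOCK;
    }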
In 7884d089d2 I've started to refactor qemu_monitor_json.c.
Thing is, it's current structure is nothing like the rest of our
code. The @ret variable is rewritten all the time, if()-s are
nested instead of using goto and so on.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Commit 36785c7e refactored the code for input devices but introduced a
bug where we removed all keyboards from the migratable XML. We have to
remove only implicit keyboards, like PS2 or XEN.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Move adding the config listen type=address if there is none in
qemuProcessPrepareDomain and move check for multiple listens to
qemuProcessStartValidate.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
From c3bd0019c0 on, instead of creating the following path for
cgroups:
/sys/fs/cgroupX/$name.libvirt-$driver
we generate a rather more verbose one:
/sys/fs/cgroupX/$driver-$id-$name.libvirt-$driver
where $name is optional and included iff it contains allowed chars.
See original commit for more reasoning. Now, problem with the
original commit is that we are unable to start any LXC domain
after it. Because when starting LXC container, the CGroup layout
is created by our lxc_controller process and then detected and
validated by libvirtd. The validation is done by trying to match
detected layout against all the possible patterns for cgroup
paths that we've ever had. And the commit in question forgot to
update this part of the code.
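A standalone sketch of the kind of matching the validation code now
needs to do (a hypothetical helper in plain C, not the actual virCgroup
code):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Does @partition match either of the cgroup naming schemes? */
    static bool
    machineNameMatches(const char *partition, const char *name,
                       const char *driver, int id)
    {
        char legacy[256];
        char current[256];

        /* old layout: $name.libvirt-$driver */
        snprintf(legacy, sizeof(legacy), "%s.libvirt-%s", name, driver);
        /* new layout: $driver-$id-$name.libvirt-$driver */
        snprintf(current, sizeof(current), "%s-%d-%s.libvirt-%s",
                 driver, id, name, driver);

        return strcmp(partition, legacy) == 0 ||
               strcmp(partition, current) == 0;
    }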
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Add the data structure and infrastructure to support initialization
vector (IV) secrets. The IV secret generation will need to have access
to the domain private master key, so let's make sure the prepare disk
and hostdev functions can accept that now.
Anywhere that needs to make a decision over which secret type to use
in order to fill in or use the IV secret has a switch added.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Create helper APIs in order to build the network URI, as shortly we
will be adding a new SecretInfo type.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Rather than need to call qemuDomainSecretDestroy after any call to
qemuProcessLaunch, let's do the destroy in qemuProcessLaunch since
that's where command line is eventually generated and processed. Once
it's generated, we can clear out the secrets.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Commit id '40d8e2ba3' added the function to qemuProcessStart because
in order to set up some secrets in the future we will need the master
key. However, since the previous patch split the master key creation
into two parts (create just the key and create the file), we can now
call qemuDomainSecretPrepare from qemuProcessPrepareDomain since the
file is not necessary.
Signed-off-by: John Ferlan <jferlan@redhat.com>
A recent review of related changes noted that we should split the creation
(or generation) of the master key into the qemuProcessPrepareDomain and leave
the writing of the master key for qemuProcessPrepareHost.
Made the adjustment and modified some comments to functions that have
changed calling parameters, but didn't change the intro doc.
Signed-off-by: John Ferlan <jferlan@redhat.com>
From a review after push, add the "_TYPE" into the name.
Also use qemuDomainSecretInfoType in the struct rather than int
with the comment field containing the struct name
Signed-off-by: John Ferlan <jferlan@redhat.com>
This removes the opencoded payload freeing in the client, to use
the shared virNetMessageClearPayload call. Two changes:
- ClearPayload sets nfds=0, which fixes a potential crash if
an error path called virNetMessageFree/Clear on the message
after fds was free'd
- We drop the inner loop VIR_FORCE_CLOSE... this may mean fds are
kept open a little bit longer if the call is blocking but in
practice I don't think it will have any effect
I've noticed this while trying to compile libvirt on my arm box.
CC rpc/libvirt_net_rpc_server_la-virnetserverclient.lo
rpc/virnetserverclient.c: In function 'virNetServerClientNewPostExecRestart':
rpc/virnetserverclient.c:516:45: error: cast increases required alignment of target type [-Werror=cast-align]
(long long *) &timestamp) < 0) {
^
cc1: all warnings being treated as errors
Problem is, @timestamp is defined as time_t which is 32 bits long,
and we are typecasting it to long long which is 64 bits long.
Solution is to make @timestamp type of long long. At the same
time, we can make @conn_time in _virNetServerClient struct long
long too. There is no need for it to be type of time_t.
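The fix therefore boils down to the following (illustrative; the JSON
key name is approximate):

    /* before: the cast trips -Wcast-align when time_t is 32 bits wide */
    time_t timestamp;
    if (virJSONValueObjectGetNumberLong(object, "conn_time",
                                        (long long *) &timestamp) < 0)
        goto error;

    /* after: the variable already has the type the API expects */
    long long timestamp;
    if (virJSONValueObjectGetNumberLong(object, "conn_time", &timestamp) < 0)
        goto error;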
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
In this function, @id is defined as unsigned long long. When passing
this variable (well, the address of this variable) to
virJSONValueObjectGetNumberUlong(), it's typecast to ull*. There is no
need for that. It's the same story with @nrequests_max.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
json_reformat uses two spaces when indenting nested objects; let's do
the same. The result of virJSONValueToString will then be exactly the
same as json_reformat would produce.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
virQEMUCapsNewForBinary unconditionally loads data from cache and probes
using both QMP and -help parsing, which is suboptimal when we want to
use it in tests.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Bhyve supports ACPI shutdown by issuing SIGTERM signal to a bhyve
process.
Add the bhyveDomainShutdown() function and virBhyveProcessShutdown()
helper function that just sends SIGTERM to the VM's bhyve process. If
a guest supports ACPI shutdown, the process will be terminated and
this event will be noticed by the bhyve monitor code, which will handle
setting the proper status and cleaning up the VM's resources by calling
virBhyveProcessStop().
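A minimal sketch of the new helper (names as described above; the body
is approximate):

    int
    virBhyveProcessShutdown(virDomainObjPtr vm)
    {
        /* bhyve turns SIGTERM into an ACPI poweroff request */
        if (kill(vm->pid, SIGTERM) != 0) {
            virReportSystemError(errno,
                                 _("Failed to send SIGTERM to bhyve process %d"),
                                 (int) vm->pid);
            return -1;
        }
        return 0;
    }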
Current implementation of domainDestroy for bhyve calls
virProcessKillPainfully() for the bhyve process and then
executes "bhyvectl --destroy".
This is wrong for two reasons:
* bhyvectl --destroy alone is sufficient because it terminates
the process
* virProcessKillPainfully() first sends SIGTERM and after a few
attempts sends SIGKILL. As SIGTERM triggers ACPI shutdown that
we're not interested in, it creates an unwanted side effect in
domainDestroy.
Also, destroy the monitor only after the "bhyvectl --destroy" command
succeeds, to avoid the case where the command fails and the domain
remains running but is not monitored anymore.
Since nparams can technically be negative, it is good practice throughout
our code to check if nparams actually has a non-negative value. The same effect
would be achieved by converting our internal typed params serializer argument
to 'unsigned' type, but it definitely would not be the path of least resistance.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1286709
Now that we have all the pieces in place, we can add the 'iothread=#' to
the command line for the (two) controllers that support it (virtio-scsi-pci
and virtio-scsi-ccw). Add the tests as well...
Rather than an if statement, use a switch.
The switch will also catch the illegal usage of 'iothread' with some other
kind of unsupported bus configuration.
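The validation switch then looks roughly like this (model and field
names are approximate):

    if (def->iothread > 0) {
        switch ((virDomainControllerModelSCSI) def->model) {
        case VIR_DOMAIN_CONTROLLER_MODEL_SCSI_VIRTIO_SCSI:
            break;  /* maps to virtio-scsi-pci or virtio-scsi-ccw */
        default:
            virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                           _("iothreads are only supported with virtio-scsi controllers"));
            return -1;
        }
    }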
Add the ability to add an 'iothread' to the controller which will be how
virtio-scsi-pci and virtio-scsi-ccw iothreads have been implemented in qemu.
Describe the new functionality and add tests to parse/validate that the
new attribute can be added.
An iothread for virtio-scsi is a property of the controller. Add a lookup
of the 'virtio-scsi-pci' and 'virtio-scsi-ccw' device properties and parse
the output. For both, support for the iothread was added in qemu 2.4
while support for virtio-scsi in general was added in qemu 1.4.
Modify the various mock capabilities replies (by hand) to reflect
when virtio-scsi was supported and then specifically when the iothread
property was added. For versions prior to 1.4, use the "no device"
error return for virtio-scsi. For versions 1.4 up to 2.4, add some data
for virtio-scsi-pci; even though it isn't complete, we're not looking
for anything specific there anyway. For 2.4 to 2.6, add a more complete
reply.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Expose a public API to retrieve some identity and connection information about
a client connected to the specified server on the daemon. The identity info
retrieved is mostly connection transport dependent, i.e. there won't be any
socket address returned for a local (UNIX socket) connection, while on the
other hand, when connected through TLS or unencrypted TCP, obviously no UNIX
process identification will be present in the returned data. All supported
values that can be returned in typed params are exposed and documented in
include/libvirt/libvirt-admin.h
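From the client side, usage is expected to look roughly like this (a
sketch using the API as exposed in libvirt-admin.h):

    virTypedParameterPtr params = NULL;
    int nparams = 0;
    int i;

    if (virAdmClientGetInfo(clnt, &params, &nparams, 0) < 0)
        return -1;

    for (i = 0; i < nparams; i++)
        printf("%s\n", params[i].field);

    virTypedParamsFree(params, nparams);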
Signed-off-by: Erik Skultety <eskultet@redhat.com>
This method just aggregates various client object attributes, like socket
address, connection type (RO/RW), and some TCP/TLS/UNIX identity in an atomic
manner.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
We do have a similar method, serving the same purpose, for TLS, but we
lack one for SASL. So introduce one, in order for other modules to be
able to find out if a SASL session is active, or better said, whether a
SASL session exists at all.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Our socket address string is in a rather non-standard format and that
is because the SASL library requires the IP address and service to be
delimited by a
semicolon. The string form is a completely internal matter, however once the
admin interfaces to retrieve client identity information are merged, we should
return the socket address string in a common format, e.g. format defined by
URI rfc-3986, i.e. the IP address and service are delimited by a colon and
in case of an IPv6 address, square brackets are added:
Examples:
127.0.0.1:1234
[::1]:1234
This patch changes our default format to the one described above, while adding
separate methods to request the non-standard SASL format using semicolon as a
delimiter.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Just like with the server-related APIs, before any of the client-based APIs can be
called, a reference to a client-side client object needs to be obtained. For
this purpose, a lookup method should exist. Apart from the client retrieval
logic, a new error code for non-existent client had to be added as well.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
In the majority of our functions we have this variable @ret that is
overwritten a lot. In other areas of the code we use 'goto
cleanup;' just so that this wouldn't happen. But not here.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>