Resolves a CI integration test failure in a RHEL6/CentOS6 environment.
In order to use a LUKS encrypted device, the design decision was to
generate an encrypted secret based on the master key. However, commit
id 'da86c6c' missed checking for that capability specifically.
When qemuDomainSecretSetup was implemented, a design decision was made
to "fall back" to a plain text secret setup if either the specific cipher
(e.g. virCryptoHaveCipher(VIR_CRYPTO_CIPHER_AES256CBC)) or the
QEMU_CAPS_OBJECT_SECRET capability was not available. For the luks
encryption setup there is no fallback to a plaintext secret, thus if
that is what qemuDomainSecretSetup ends up creating, we need to fail.
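A minimal sketch of the intended check (the secinfo type constant and the
error-handling label are assumptions based on the existing qemu secret
code, not the literal patch):

    if (secinfo->type != VIR_DOMAIN_SECRET_INFO_TYPE_AES) {
        virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                       _("luks encryption requires encrypted secret support"));
        goto error;
    }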
Also, while qemuxml2argvtest sets the QEMU_CAPS_OBJECT_SECRET bit, it
did not take into account the second requirement: that generating the
encrypted secret is actually possible. So modify the test to not attempt
to run the luks-disk case if we know we don't have the encryption
algorithm.
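On the test side that means roughly the following (the exact capability
arguments passed to DO_TEST are an assumption):

    /* only run the luks-disk case when the cipher is actually available */
    if (virCryptoHaveCipher(VIR_CRYPTO_CIPHER_AES256CBC))
        DO_TEST("luks-disk", QEMU_CAPS_DRIVE, QEMU_CAPS_OBJECT_SECRET);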
Most of the lines we look at are not going to match one of the
driver types contained in $groups_regex.
Move on to the next line early if it does not contain any of them.
This speeds up the script execution by 50%, since this simple regex
does not have any capture groups.
The %groups hash contains all the driver types (e.g.
virHypervisorDriver or virSecretDriver).
When searching for all the APIs that are implemented by a driver
of that specific driver type, we keep iterating over the %groups
hash on every line we look at, then matching against the driver type.
This is inefficient because it prevents Perl from caching the regex
and it executes the regex once for every driver type, even though a
match on one driver type excludes all the others, since all the driver
types are different.
Construct the regex containing all the driver types upfront to save
about 6.4s (~98%) of the script execution time.
When generating the hvsupport.html.in file, we parse the -api.xml
files generated by apibuild.py to find out which HTML file each API
function is documented in.
Doing an XPath query for every single 'function' element in the
file is inefficient.
Since the XML file is generated by another of our build scripts
(apibuild.py, which writes it out via plain 'output.write' calls),
just find the function name -> file mapping with a regex upfront.
Also add a note about this next to the line that generates the file
in apibuild.py, and stop checking whether XML::XPath is installed in
bootstrap, since we no longer use it.
Any error happening after the handshake in the lxc controller will not
result in a reported failure, because errors are only checked during the
handshake. Move the handshake after the last possible point of failure.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1301021
Generate the luks command line using the AES secret key to encrypt the
luks secret. A luks secret object will be added in addition to any AES
secret. For hotplug, check whether the encinfo exists and, if so, add the
AES secret carrying the passphrase for the secret object used to decrypt
the device.
Modify/augment the fakeSecret* functions in qemuxml2argvtest in order to
handle finding a uuid or a volume usage with a specific path prefix in
the XML (corresponding to the already generated XML tests). Add an error
message when the 'usageID' is not 'mycluster_myname'. Commit id '1d632c39'
altered the error message generation to rely on the errors from the
secret_driver (or its faked replacement).
Add the .args output for adding the LUKS disk to the domain.
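Illustratively (not the literal .args content; the alias, paths and
encoded data are invented), the generated arguments gain a secret object
holding the master-key-encrypted passphrase, which is then referenced by
the drive:

    -object secret,id=virtio-disk0-luks-secret0,format=base64,\
    data=<base64 ciphertext>,keyid=masterKey0,iv=<base64 iv> \
    -drive file=/path/to/disk.img,format=luks,\
    key-secret=virtio-disk0-luks-secret0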
Signed-off-by: John Ferlan <jferlan@redhat.com>
Soon we will be adding luks encryption support. Since a volume could
require both a luks secret and a secret given to the server in order to
use the device, alter the alias generation to create slightly different
aliases so that we don't end up with two objects sharing the same alias.
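A rough sketch of the idea (the helper shape and alias spellings are
assumptions, not necessarily the final names):

    /* one disk may need two secret objects, so give them distinct aliases,
     * e.g. "virtio-disk0-secret0" (server/AES secret) vs.
     * "virtio-disk0-luks-secret0" (luks passphrase secret) */
    if (isLuks)
        rc = virAsprintf(&alias, "%s-luks-secret0", srcalias);
    else
        rc = virAsprintf(&alias, "%s-secret0", srcalias);
    if (rc < 0)
        return NULL;
    return alias;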
Signed-off-by: John Ferlan <jferlan@redhat.com>
Commit id 'a1344f70a' added AES secret processing for RBD when starting
up a guest. As such, when the hotplug code calls qemuDomainSecretDiskPrepare,
an AES secret could be added to the disk about to be hotplugged. If an AES
secret was added, the hotplug code needs to generate the secret object,
because qemuBuildDriveStr adds "password-secret=" to the returned
'driveStr' rather than the base64-encoded password.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Partially resolves:
https://bugzilla.redhat.com/show_bug.cgi?id=1301021
If the volume XML asks to create a luks volume, take the necessary
steps in order to make that happen.
The processing will be (an illustrative qemu-img invocation follows the
list):
1. create a temporary file (virStorageBackendCreateQemuImgSecretPath)
1a. use the storage driver state dir path that uses the pool and
volume name as a base.
2. create a secret object (virStorageBackendCreateQemuImgSecretObject)
2a. use an alias combining the volume name and "_luks0"
2b. add the file to the object
3. create/add luks options to the command line (virQEMUBuildLuksOpts)
3a. at the very least a "key-secret=%s" using the secret object alias
3b. if found in the XML, the various "cipher" and "ivgen" options
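Putting the steps together, the resulting invocation looks roughly like
this (the secret file path, alias, volume path and size are invented for
illustration):

    qemu-img create -f luks \
        --object secret,id=vol1_luks0,file=/statedir/pool_vol1.secret \
        -o key-secret=vol1_luks0,cipher-alg=aes-256,ivgen-alg=plain64 \
        /pool/path/vol1 5G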
Signed-off-by: John Ferlan <jferlan@redhat.com>
The 'res' variable was only being initialized to NULL in the
if (!state) path; however, that path never used res, and eventually
res is assigned one of two results based on an if / else if pair of
conditions. If for some reason neither of those paths was taken and
the (!state) path wasn't taken either, then 'res' would be indeterminate.
Found by Coverity; probably a false positive based on the code paths, but
better safe than sorry for the future.
Signed-off-by: John Ferlan <jferlan@redhat.com>
When formatting the graphics data for TYPE_SPICE, check whether glisten
is NULL before blindly dereferencing it.
Found by Coverity
Signed-off-by: John Ferlan <jferlan@redhat.com>
We cannot assume virGetLastError returns a non-NULL value; modify the
code to fetch err and check 'err && err->code'.
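i.e. the check becomes something like (a sketch, not the exact hunk):

    virErrorPtr err = virGetLastError();
    if (err && err->code) {
        /* act on the reported error; err may legitimately be NULL */
    }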
Found by Coverity
Signed-off-by: John Ferlan <jferlan@redhat.com>
Commit id '740e4d70' altered the logic to fetch the sysconf values and
added a new virConfGetValueStringList, which returns -1 on failure, 0 if
the setting is missing, and 1 if the value was present.
However, the caller only checked !shargv, which caught Coverity's
attention since the following VIR_ALLOC_N(*shargv, 2) would be a NULL
pointer dereference.
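A sketch of the corrected pattern (the setting name and the surrounding
code are assumptions; 'shargv' is the char *** output parameter):

    int rv = virConfGetValueStringList(conf, "shell", true, shargv);
    if (rv < 0)
        return -1;                    /* real failure */
    if (rv == 0) {                    /* setting absent: build a default */
        if (VIR_ALLOC_N(*shargv, 2) < 0)
            return -1;
        /* ... fill in the default shell ... */
    }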
Signed-off-by: John Ferlan <jferlan@redhat.com>
Internally, all the data are represented as unsigned int; it is also
documented in the header file that users should use our exported
constants, which likewise indicate that the data should be unsigned int.
However, when polling for the current server threadpool's configuration,
virt-admin uses the incorrect printf formatting parameter '%d'. Instead,
virt-admin should use the formatting parameter '%u'.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1356769
Signed-off-by: Erik Skultety <eskultet@redhat.com>
A recent adjustment to qemuDomainAttachRNGDevice to properly clean up
the props object after a qemuMonitorAddObject call would also affect this
code. Alter the cleanup to be similar to the RNG changes.
Based on a recent review comment: rather than have a spate of goto
failxxxx labels, change to a boolean-based model. This ensures that the
original error can be preserved and that cleanup is a bit more orderly if
more objects are added.
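The shape of the pattern is roughly the following (a sketch, not the
literal diff; the monitor call arguments are simplified):

    bool secobjAdded = false;
    ...
    if (qemuMonitorAddObject(priv->mon, "secret", secAlias, secProps) < 0)
        goto exit_monitor;
    secobjAdded = true;
    ...
 exit_monitor:
    orig_err = virSaveLastError();      /* keep the original error */
    if (secobjAdded)
        ignore_value(qemuMonitorDelObject(priv->mon, secAlias));
    if (orig_err) {
        virSetError(orig_err);
        virFreeError(orig_err);
    }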
Commit da665fbd introduced the following condition into
virLXCProcessEnsureRootFS and openvzReadFSConf:
if (!(<some_var> = virDomainFSDefNew()) < 0)
which broke the build on Fedora with GCC 5.3.1: "logical not is only
applied to the left hand side of comparison".
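Presumably the intended check is simply:

    if (!(<some_var> = virDomainFSDefNew()))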
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Commit da665fbd introduced a virStorageSourcePtr inside the structure
_virDomainFSDef. This causes an error when libvirt is compiled:
make[3]: Entering directory `/media/julio/8d65c59c-6ade-4740-9cdc-38016a4cb8ae
/home/julio/Desktop/virt/libvirt/src'
CC security/virt_aa_helper-virt-aa-helper.o
security/virt-aa-helper.c: In function 'get_files':
security/virt-aa-helper.c:1087:13: error: passing argument 2 of 'vah_add_path'
from incompatible pointer type [-Werror]
if (vah_add_path(&buf, fs->src, "rw", true) != 0)
^
security/virt-aa-helper.c:732:1: note: expected 'const char *' but argument is
of type 'virStorageSourcePtr'
vah_add_path(virBufferPtr buf, const char *path, const char *perms, bool
recursive)
^
cc1: all warnings being treated as errors
Using the "path" attribute from virStorageSourcePtr fixes this issue.
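i.e. the call becomes:

    if (vah_add_path(&buf, fs->src->path, "rw", true) != 0)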
Signed-off-by: Julio Faracco <jcfaracco@gmail.com>
vz supports only a subset of tcp and udp parameters:
1. the tcp type supports only the 'raw' protocol.
2. the udp type supports only identical 'host' and 'service' parameters
for 'bind' and 'connect'.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Once a domain is in the domains list, let's keep it there. This is the
approach taken by the qemu driver, and by vz's vzDomainMigrateFinish3Params
too. It is quite reasonable: the driver's domain object is fully
constructed and can be discovered by a client later.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
9c14a9ab introduced the vzNewDomain function to enlist the libvirt
domain object before actually creating the vz sdk domain. That fix was
meant to fix a race on the vz sdk 'domain added' event, where the libvirt
domain object is enlisted too. But later eb5e9c1e added locked checks for
adding the libvirt domain object to the list on the vz sdk 'domain added'
event. Thus the approach of 9c14a9ab is now unnecessarily complicated.
Note that we have the otherwise useless prlsdkGetDomainIds function only
to create a minimal domain definition for creating the libvirt domain
object. Also, vzNewDomain is difficult to use as it creates a partially
constructed domain object.
Let's move back to the original approach where prlsdkLoadDomain does
all the necessary work. Another benefit is that we can now take the
driver lock for the bare minimum of time and in a single place. Reducing
the locking time has the small disadvantage of double parsing on race
conditions, which is typical if a domain is added through the vz driver.
We inevitably have this double parse with the current vz sdk API on any
domain update anyway, so I would not take it seriously here.
The performance events subscription is done before the locked check and
therefore could happen twice on a race, but this is not a problem.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Vz containers are able to use ploop volumes from storage pools to work
on top of.
To use a filesystem type volume, the pool name and volume name should be
specified in <source>:
  <filesystem type='volume' accessmode='passthrough'>
    <driver type='ploop' format='ploop'/>
    <source pool='guest_images' volume='TEST_POOL_CT'/>
    <target dir='/'/>
  </filesystem>
The information about the pool and volume is stored in the ct domain
configuration:
  <StorageURL>libvirt://localhost/pool_name/vol_name</StorageURL>
and can be easily obtained via the PrlVmDevHd_GetStorageURL sdk call.
The only shortcoming: if the storage pool is moved somewhere, the ct
should be redefined in order to refresh the information about the path
to root.hdd.
Signed-off-by: Olga Krishtal <okrishtal@virtuozzo.com>
We do not need to check the domain fs type there,
because it is done in prlsdkCheckUnsupportedParams.
Signed-off-by: Olga Krishtal <okrishtal@virtuozzo.com>
A new type of <filesystem> device, <filesystem type='volume'>, is
introduced. This patch allows volumes to be used for storing the
filesystem that is accessed from the guest, e.g. the root directory of a
container.
To take advantage of volumes as a filesystem backend, the volume and
pool names should be specified:
  <filesystem type='volume'>
    <source pool='pool name' volume='volume name'/>
Signed-off-by: Olga Krishtal <okrishtal@virtuozzo.com>
Adding the domain to the domain list in the prepare step is not correct.
First, the domain is not fully constructed: the domain definition is
missing. Second, we can't use the VIR_MIGRATE_PARAM_DEST_XML parameter
to parse the definition, as the vz sdk can patch it by itself. Let's
add/remove the domain in the finish step instead. This is for
synchronization purposes only, so that the domain is present/absent on
the destination after migration completes. Actually, the domain object
will probably be created right after the actual vz sdk migration starts,
by the vz sdk 'domain defined' event.
We cannot and should not sync the domain cache on the error path of the
migration finish step. We cannot, as we really don't know the reason for
the cancellation, and we should not, as the user should not make
assumptions about the state on the error path. What we should do is clean
up any temporary migration state induced by the prepare step, but we
don't have any. Thus cancellation should be a no-op.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
The libvirt 'domain defined' event is issued only on the corresponding
vz sdk event. But if the event is delivered before the domain is added
to the domain list, we can mistakenly skip this event, because
prlsdkNewDomainByHandle returns NULL when the domain is discovered in the
list under the driver lock. Let's return the domain object in this case.
Now prlsdkNewDomainByHandle returns NULL only in case of error, which is
more convenient.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
The first version of the migration cookie was rather dumb, resulting in
passing empty or unused fields here and there. Add flags to specify what
to bake into and eat from the cookie, so that we deal only with
meaningful data. However, for backwards compatibility we still need to
pass at least some faked fields sometimes.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
Older libvirt versions send the persistent XML in a migration cookie
even when the VIR_MIGRATE_PERSIST_DEST flag is not used, but current
libvirt properly fails if the cookie contains unexpected flags. Thus
migration from an old libvirt fails with
    internal error: Unsupported migration cookie feature persistent
unless the VIR_MIGRATE_PERSIST_DEST flag is set.
https://bugzilla.redhat.com/show_bug.cgi?id=1320500
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
A nodedev device definition like this is required for testing
NodeDeviceCreateXML and NodeDeviceDestroy. So unless it's part
of the stock test:///default set, there's no way to actually
invoke those functions for the default URI.