Before this commit, SessionIsActive was not used because ESX(i)
doesn't implement it. vCenter supports SessionIsActive, so use
it here, but keep the fallback mechanism for ESX(i) and GSX.
QueryVirtualDiskUuid is only available on an ESX(i) server. vCenter
returns a NotImplemented fault and a GSX server lacks the
VirtualDiskManager completely. Therefore, only use QueryVirtualDiskUuid
with an ESX(i) server and fall back to the path as the storage volume
key for vCenter and GSX servers.
VirtualDisks are .vmdk file based. Other files in a datastore
like .iso or .flp files don't have a UUID attached; fall back
to the path as key for them.
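A minimal sketch of the resulting key selection; the flags and helper below are invented for illustration and are not the driver's API:

  #include <stdbool.h>
  #include <stdlib.h>
  #include <string.h>

  /* Illustrative only: on an ESX(i) server a .vmdk file gets a UUID-based
   * key via QueryVirtualDiskUuid; everything else falls back to the path. */
  static char *
  storageVolumeKey(bool isEsxOrEsxi, bool isVmdk, const char *datastorePath,
                   const char *uuid /* result of QueryVirtualDiskUuid */)
  {
      if (isEsxOrEsxi && isVmdk && uuid != NULL) {
          /* ESX(i) server: QueryVirtualDiskUuid is available */
          return strdup(uuid);
      }

      /* vCenter (NotImplemented fault), GSX (no VirtualDiskManager) and
       * non-.vmdk files (.iso, .flp) without a UUID: use the path as key */
      return strdup(datastorePath);
  }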
Instead of splitting the path part of a datastore path into
directory and file name, keep this in one piece. An example:
"[datastore] directory/file"
was split into this before:
datastoreName = "datastore"
directoryName = "directory"
fileName = "file"
Now it's split into this:
datastoreName = "datastore"
directoryName = "directory"
directoryAndFileName = "directory/file"
This simplifies code using esxUtil_ParseDatastorePath, because
directoryAndFileName is used more often than fileName. Also the
old approach expected the datastore path to reference an actual
file, but this isn't always correct, especially when listing
volumes. In that case esxUtil_ParseDatastorePath is used to parse
a path that references a directory. This fails for a vpx://
connection because vCenter returns directory paths with a
trailing '/'. The new approach is robust against this, and the
decision whether the datastore path should reference a file or
a directory is left to the caller of esxUtil_ParseDatastorePath.
Update the tests accordingly.
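As an illustration of the new split, a simplified self-contained sketch (the real function is esxUtil_ParseDatastorePath and uses libvirt's string and error helpers; NULL checks are omitted here):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* Illustrative only: split "[datastore] directory/file" into the three
   * parts described above. */
  static int
  parseDatastorePath(const char *path, char **datastoreName,
                     char **directoryName, char **directoryAndFileName)
  {
      const char *closing;
      const char *rest;
      const char *slash;

      if (path[0] != '[' || (closing = strchr(path, ']')) == NULL)
          return -1;

      *datastoreName = strndup(path + 1, closing - (path + 1));

      rest = closing + 1;

      while (*rest == ' ')
          ++rest;

      /* keep "directory/file" in one piece ... */
      *directoryAndFileName = strdup(rest);

      /* ... but still provide the directory part on its own */
      slash = strrchr(rest, '/');
      *directoryName = slash != NULL ? strndup(rest, slash - rest) : strdup("");

      return 0;
  }

  int
  main(void)
  {
      char *datastoreName = NULL;
      char *directoryName = NULL;
      char *directoryAndFileName = NULL;

      if (parseDatastorePath("[datastore] directory/file", &datastoreName,
                             &directoryName, &directoryAndFileName) == 0) {
          printf("datastoreName        = \"%s\"\n", datastoreName);
          printf("directoryName        = \"%s\"\n", directoryName);
          printf("directoryAndFileName = \"%s\"\n", directoryAndFileName);
      }

      free(datastoreName);
      free(directoryName);
      free(directoryAndFileName);

      return 0;
  }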
The check was altered in 8c48743b97
and became too strict; I've no clue how that snuck in. This check
makes every attempt to open a connection using the ESX driver fail
with an invalid argument error.
Revert the change and add a comment to prevent future
mistakes with this check.
Instead of using one big traversal spec for lookups, use a set of
more fine-grained traversal specs that are selected based on the
actual needs of the lookup.
This gives up to a 20% speedup for certain operations like domain
listing due to less HTTP(S) traffic.
In case an optional object cannot be found, the lookup function
returns early and the cleanup code is not executed.
This pattern occurs in some other functions too.
Don't rely on summary.url anymore, because its value is different
between an esx:// and vpx:// connection. Use host.mountInfo.path
instead.
Don't fall back to lookup by UUID (actually lookup by absolute path)
in esxVI_LookupDatastoreByName when lookup by name fails. Add a
separate function for this: esxVI_LookupDatastoreByAbsolutePath.
Now a vpx:// connection has an explicitly specified host. This
allows several functions to be enabled for a vpx:// connection
again, such as host UUID, hostname, general node info, max vCPU
count, free memory, migration, and defining new domains.
Look up the datacenter, compute resource, resource pool, and host
system once and cache them. This simplifies the rest of the
code and reduces overall HTTP(S) traffic a bit.
esx:// and vpx:// can be mixed freely for a migration.
Ensure that migration source and destination refer to the
same vCenter. Also directly encode the resource pool and
host system object IDs into the migration URI in the prepare
function. Then directly build managed object references in
the perform function instead of re-looking up already known
information.
esxVI_WaitForTaskCompletion can take a UUID to look up the
corresponding domain and check whether the current task for it
is blocked by a question. It calls another function to do
this: esxVI_LookupAndHandleVirtualMachineQuestion looks up
the VirtualMachine and checks for a question. If there is
a question, it calls esxVI_HandleVirtualMachineQuestion to
handle it.
If there was no question or it has been answered, the call
to esxVI_LookupAndHandleVirtualMachineQuestion returns 0.
If any error occurred during the lookup and answering
process, -1 is returned. The problem with this is that -1
is also returned when there was no error but the question
could not be answered. So esxVI_WaitForTaskCompletion cannot
distinguish between these two situations and reports that a
question is blocking the task even when there was actually
another problem.
This inherent problem didn't surface until vSphere 4.1 when
trying to define a new domain. The driver tries to look up
the domain that is just in the process of being registered.
There seems to be some kind of race condition, and the driver
manages to issue a lookup command before the ESX server has
been able to register the domain. This used to work before.
Due to the return value problem described above, the driver
reported a false error message in that case.
To solve this, esxVI_WaitForTaskCompletion now takes an
additional occurrence parameter that describes whether or
not to expect the domain to exist. Also add a new
parameter to esxVI_LookupAndHandleVirtualMachineQuestion
that allows the caller to distinguish whether the call
returned -1 because of an actual error or because the
question could not be answered.
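A self-contained sketch of the idea behind the new output parameter; the names and signatures here are invented for illustration, not the driver's:

  #include <stdbool.h>
  #include <stdio.h>

  /* 'blocked' lets the caller tell "a question is blocking the task and
   * was not answered" apart from a genuine lookup/answer error, although
   * both cases return -1. */
  static int
  lookupAndHandleQuestion(bool questionPending, bool autoAnswer, bool *blocked)
  {
      *blocked = false;

      if (questionPending) {
          if (!autoAnswer) {
              *blocked = true;   /* not an error, but the task cannot proceed */
              return -1;
          }

          /* answer the question with its default choice here */
      }

      return 0;
  }

  int
  main(void)
  {
      bool blocked;

      if (lookupAndHandleQuestion(true, false, &blocked) < 0) {
          if (blocked)
              printf("task is blocked by a question\n");
          else
              printf("lookup or answering failed\n");
      }

      return 0;
  }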
Eliminate almost all backward jumps by replacing this common pattern:
  int
  some_random_function(void)
  {
      int result = 0;

      ...

    cleanup:
      <unconditional cleanup code>

      return result;

    failure:
      <cleanup code in case of an error>

      result = -1;

      goto cleanup;
  }
with this simpler pattern:
  int
  some_random_function(void)
  {
      int result = -1;

      ...

      result = 0;

    cleanup:
      if (result < 0) {
          <cleanup code in case of an error>
      }

      <unconditional cleanup code>

      return result;
  }
Add a bool success variable that can be used for the new pattern in
functions that don't have an int result.
Also remove some unnecessary memsets in error paths.
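For example, a function returning a pointer can follow the same shape; a sketch in the same placeholder style as above:

  char *
  some_other_function(void)
  {
      bool success = false;
      char *string = NULL;

      ...

      success = true;

    cleanup:
      if (!success) {
          <cleanup code in case of an error, leaving 'string' as NULL>
      }

      <unconditional cleanup code>

      return string;
  }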
Allows listing existing pools and requesting information about them.
Alter the esxVI_ProductVersion enum in a way that allows checking the
product type by masking.
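A hedged sketch of the masking idea with invented names (the real members live in the esxVI_ProductVersion enum):

  #include <stdio.h>

  /* Illustrative only: the product type lives in the upper bits and the
   * concrete version in the lower bits, so one mask answers "is this any
   * ESX(i) version?" without listing every member. */
  enum exampleProductVersion {
      exampleProductVersion_GSX   = (1 << 0) << 16,
      exampleProductVersion_GSX20 = exampleProductVersion_GSX | 1,

      exampleProductVersion_ESX   = (1 << 1) << 16,
      exampleProductVersion_ESX35 = exampleProductVersion_ESX | 1,
      exampleProductVersion_ESX40 = exampleProductVersion_ESX | 2,

      exampleProductVersion_VPX   = (1 << 2) << 16,
      exampleProductVersion_VPX40 = exampleProductVersion_VPX | 1,
  };

  int
  main(void)
  {
      enum exampleProductVersion version = exampleProductVersion_ESX40;

      if (version & exampleProductVersion_ESX)
          printf("some ESX version\n");

      return 0;
  }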
An empty root snapshot list was considered an error condition. As a
result, creating a new snapshot would fail if the domain didn't have
any snapshots yet, because the snapshot-create function looks up the
list of existing snapshots in order to verify that the snapshot name
is unique, and that lookup failed for a domain without snapshots.
Removing the NULL check from esxVI_LookupRootSnapshotTreeList fixes this.
Generate almost all SOAP method mapping code.
Update the driver code to use the complete parameter list of some methods
that had parameters skipped before.
Improve the ESX_VI__METHOD macro to do automatic output deserialization
based on output occurrence. Also incorporate automatic _this binding and
output pointer checking.
Fix invalid code generation in esx_vi_generator.py for deep copy
types that contain enum properties.
Add strptime and timegm to bootstrap.conf. Both are used to convert an
xsd:dateTime value to calendar time.
Add a test case for the xsd:dateTime conversion.
Also define ESX_ERROR and ESX_VI_ERROR in a central place, instead of
defining them in each source file.
Add ESX_ERROR and ESX_VI_ERROR to the msg_gen_function list in cfg.mk.
Update po/POTFILES.in accordingly.
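For illustration, a minimal conversion along those lines; this is a sketch that only covers the plain UTC form, while the driver's real converter also handles fractional seconds and UTC offsets:

  #define _GNU_SOURCE /* for strptime and timegm declarations with glibc */

  #include <stdio.h>
  #include <string.h>
  #include <time.h>

  static int
  convertDateTime(const char *dateTime, long long *secondsSinceEpoch)
  {
      struct tm tm;

      memset(&tm, 0, sizeof(tm));

      /* strptime parses the broken-down time ... */
      if (strptime(dateTime, "%Y-%m-%dT%H:%M:%S", &tm) == NULL)
          return -1;

      /* ... and timegm interprets it as UTC */
      *secondsSinceEpoch = (long long)timegm(&tm);

      return 0;
  }

  int
  main(void)
  {
      long long seconds;

      if (convertDateTime("2010-04-08T05:45:30Z", &seconds) == 0)
          printf("%lld\n", seconds);

      return 0;
  }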
The Python script generates the mappings based on the type descriptions
in the esx_vi_generator.input file.
This also improves the inheritance handling and allows getting rid of the
ugly, inflexible, and error-prone _base/_super approach. Now every struct
that represents a SOAP type contains a _type member that allows
recreating C++-like dynamic dispatch for "method" calls in C.
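A stripped-down illustration of the _type idea; all names below are invented for this sketch and are not the generated code:

  #include <stdio.h>

  /* Every struct that stands for a SOAP type starts with the same _type
   * tag, so generic code can inspect the tag at runtime and dispatch to
   * the right "subclass" behavior -- a hand-rolled form of dynamic
   * dispatch in C. */
  typedef enum {
      exampleType_VirtualMachine,
      exampleType_HostSystem,
  } exampleType;

  typedef struct {
      exampleType _type;           /* must be the first member */
      const char *name;
      int memoryMB;
  } exampleVirtualMachine;

  typedef struct {
      exampleType _type;           /* must be the first member */
      const char *name;
  } exampleHostSystem;

  static void
  examplePrint(void *object)
  {
      switch (*(exampleType *)object) {   /* read the shared _type tag */
        case exampleType_VirtualMachine: {
          exampleVirtualMachine *vm = object;

          printf("VirtualMachine %s with %d MB\n", vm->name, vm->memoryMB);
          break;
        }

        case exampleType_HostSystem: {
          exampleHostSystem *host = object;

          printf("HostSystem %s\n", host->name);
          break;
        }
      }
  }

  int
  main(void)
  {
      exampleVirtualMachine vm = { exampleType_VirtualMachine, "vm1", 1024 };
      exampleHostSystem host = { exampleType_HostSystem, "host1" };

      examplePrint(&vm);
      examplePrint(&host);

      return 0;
  }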
If esxVI_String_DeepCopyValue or esxVI_SelectionSpec_AppendToList fails,
selectionSpec would leak. Add a free call in the failure path to
fix the leak.
This is actually a consequence of the reworked required parameter
checking: Unify the required parameter check into a Validate function
instead of doing it separately in the (de)serialization part.
The required parameter checking for the mapped methods' parameters was
done in the (de)serialize functions before. Now it's done explicitly
in the mapped method itself.
Currently only the faultcode and faultstring are deserialized; the
detail part is ignored. The implementation of many new SOAP types
would be necessary to deserialize the detail part correctly. As an
intermediate solution, the raw response is dumped to the debug log.
The data passed to the callback is not guaranteed to be zero-terminated;
take care of that by copying the data and adding a zero terminator.
Also dump the data for types other than CURLINFO_TEXT.
Set CURLOPT_VERBOSE to 1 so that the debug callback gets called.
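A sketch of such a callback; the CURLOPT_DEBUGFUNCTION prototype is libcurl's, while the copying and logging details here are illustrative:

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  #include <curl/curl.h>

  /* The data passed in is not zero terminated, so copy it and terminate
   * it before handing it to any string-based logging function.  A real
   * implementation would also label the different curl_infotype values
   * and escape binary payloads. */
  static int
  debugCallback(CURL *curl, curl_infotype type, char *data, size_t size,
                void *userdata)
  {
      char *buffer = malloc(size + 1);

      (void)curl;
      (void)type;
      (void)userdata;

      if (buffer != NULL) {
          memcpy(buffer, data, size);
          buffer[size] = '\0';

          fprintf(stderr, "DEBUG: %s\n", buffer);

          free(buffer);
      }

      return 0;
  }

  static void
  setupDebug(CURL *curl)
  {
      curl_easy_setopt(curl, CURLOPT_DEBUGFUNCTION, debugCallback);
      curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L); /* otherwise the callback is never called */
  }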
Questions can block tasks; to handle them automatically the driver can answer
them with the default answer. The auto_answer query parameter allows enabling
this automatic question handling.
* src/esx/README: add a detailed explanation for automatic question handling
* src/esx/esx_driver.c: add automatic question handling for all task related
driver functions
* src/esx/esx_util.[ch]: add handling for the auto_answer query parameter
* src/esx/esx_vi.[ch], src/esx/esx_vi_methods.[ch], src/esx/esx_vi_types.[ch]:
add new VI API methods and types and additional helper functions for
automatic question handling
* src/esx/esx_vi.c (esxVI_List_CastFromAnyType): For invalid
inputs, fail right away. Do not "goto failure" where a NULL
input pointer would be dereferenced.
Replace free(virBufferContentAndReset()) with virBufferFreeAndReset().
Update documentation and replace all remaining calls to free() with
calls to VIR_FREE(). Also add missing calls to virBufferFreeAndReset()
and virReportOOMError() in OOM error cases.
If an error occurs between the allocation of an item and appending it
to the list, the item leaks. Free such orphaned items in error cases.
* src/esx/esx_vi.c: free orphaned items in error cases
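A minimal sketch of the pattern with invented names:

  #include <stdlib.h>
  #include <string.h>

  typedef struct _exampleItem exampleItem;

  struct _exampleItem {
      char *value;
      exampleItem *next;
  };

  /* Until the item is linked into the list, the error path still owns it
   * and must free it; otherwise a failure in the middle (here: strdup)
   * leaks the freshly allocated item. */
  static int
  exampleList_Add(exampleItem **list, const char *value)
  {
      exampleItem *item;

      if ((item = calloc(1, sizeof(*item))) == NULL)
          return -1;

      if ((item->value = strdup(value)) == NULL)
          goto failure;             /* item is not on the list yet */

      item->next = *list;           /* the list takes ownership here */
      *list = item;

      return 0;

    failure:
      free(item->value);
      free(item);                   /* free the orphaned item */

      return -1;
  }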