For parsing, try to match by datastore mount path first; if that
fails, fall back to /vmfs/volumes/<datastore>/<path> parsing. This
also fixes problems with GSX on Windows, because GSX on Windows
doesn't use /vmfs/volumes/-style file names.
For formatting, use the datastore mount path too, instead of using
/vmfs/volumes/<datastore>/<path> as a fixed format.
Introduce esxVMX_Context, containing function pointers to
glue both parts together in a generic way.
Move the ESX-specific part to esx_driver.c.
This is a step towards making the VMX code reusable in a
potential VMware Workstation and VMware Player driver.
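As a rough illustration, such a context could look like this (the
member and callback names here are assumptions for the sketch, not
the actual libvirt definitions):

/* Sketch of a VMX glue context; names are illustrative assumptions. */
typedef struct _esxVMX_Context esxVMX_Context;

struct _esxVMX_Context {
    void *opaque;  /* driver-private data passed to the callbacks */

    /* Translate between VMX file names and datastore-related paths,
     * so the generic VMX code never hard-codes /vmfs/volumes/... */
    char *(*parseFileName)(const char *fileName, void *opaque);
    char *(*formatFileName)(const char *src, void *opaque);
};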
Don't rely on summary.url anymore, because its value differs
between esx:// and vpx:// connections. Use host.mountInfo.path
instead.
Don't fall back to lookup by UUID (actually lookup by absolute path)
in esxVI_LookupDatastoreByName when lookup by name fails. Add a
separate function for this: esxVI_LookupDatastoreByAbsolutePath.
Now a vpx:// connection has an explicitly specified host. This
allows re-enabling several functions for a vpx:// connection,
such as host UUID, hostname, general node info, max vCPU
count, free memory, migration and defining new domains.
Look up the datacenter, compute resource, resource pool and host
system once and cache them. This simplifies the rest of the
code and reduces overall HTTP(S) traffic a bit.
esx:// and vpx:// can be mixed freely for a migration.
Ensure that migration source and destination refer to the
same vCenter. Also, directly encode the resource pool and
host system object IDs into the migration URI in the prepare
function, then build managed object references directly in
the perform function instead of looking up already-known
information again.
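For illustration only, the prepare function could encode those IDs
along these lines (the vpxmigr:// scheme and layout shown here are
an assumption about the driver's internals, not a documented format):

#include <stdio.h>

/* Hypothetical sketch: pack the resource pool and host system managed
 * object IDs into the URI handed from prepare to perform. */
static int
buildMigrationURI(char *buffer, size_t size, const char *vCenterIPAddress,
                  const char *resourcePoolId, const char *hostSystemId)
{
    int n = snprintf(buffer, size, "vpxmigr://%s/%s/%s",
                     vCenterIPAddress, resourcePoolId, hostSystemId);

    return (n < 0 || (size_t)n >= size) ? -1 : 0;
}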
esxVI_WaitForTaskCompletion can take a UUID to look up the
corresponding domain and check whether the current task for it
is blocked by a question. It calls another function to do
this: esxVI_LookupAndHandleVirtualMachineQuestion looks up
the VirtualMachine and checks for a question. If there is
a question, it calls esxVI_HandleVirtualMachineQuestion to
handle it.
If there was no question or it has been answered, the call
to esxVI_LookupAndHandleVirtualMachineQuestion returns 0.
If any error occurred during the lookup and answering
process, -1 is returned. The problem with this is that -1
is also returned when there was no error but the question
could not be answered. So esxVI_WaitForTaskCompletion cannot
distinguish between these two situations and reports that a
question is blocking the task even when there was actually
another problem.
This inherent problem didn't surface until vSphere 4.1, when
trying to define a new domain. The driver tries to look up the
domain that is just in the process of being registered.
There seems to be some kind of race condition and the driver
manages to issue a lookup command before the ESX server has
registered the domain. This used to work before.
Due to the return value problem described above the driver
reported a false error message in that case.
To solve this, esxVI_WaitForTaskCompletion now takes an
additional occurrence parameter that describes whether or
not to expect the domain to exist. Also add a new parameter
to esxVI_LookupAndHandleVirtualMachineQuestion that allows
the caller to distinguish whether the call returned -1
because of an actual error or because the question could not
be answered.
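A sketch of how the adjusted declarations could look (the parameter
names are assumptions, not the exact libvirt declarations, and the
surrounding esxVI_* types are elided):

typedef enum {
    esxVI_Occurrence_RequiredItem, /* the domain must already exist */
    esxVI_Occurrence_OptionalItem  /* it may not be registered yet */
} esxVI_Occurrence;

int esxVI_WaitForTaskCompletion(esxVI_Context *ctx,
                                esxVI_ManagedObjectReference *task,
                                const unsigned char *uuid,
                                esxVI_Occurrence occurrence,
                                esxVI_Boolean autoAnswer,
                                esxVI_TaskInfoState *finalState);

/* blocked is set to true only when -1 is returned because an
 * unanswered question blocks the task; on any other error it stays
 * false, letting the caller tell the two failure cases apart. */
int esxVI_LookupAndHandleVirtualMachineQuestion(esxVI_Context *ctx,
                                                const unsigned char *uuid,
                                                esxVI_Boolean autoAnswer,
                                                bool *blocked);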
Add a pointer to the primary context of a connection and use it in all
driver functions that don't depend on the context type. This includes
almost all functions that deal with a virDomainPtr. Therefore, using
a vpx:// connection allows you to perform all the usual domain-related
actions like start, destroy, suspend, resume, dumpxml etc.
Some functions that require an explicitly specified ESX server don't work
yet. This includes the host UUID, the hostname, the general node info, the
max vCPU count and the free memory. Also not working yet are migration and
defining new domains.
Since 070f61002f the vcenter query
parameter has been ignored, because the refactoring to use
esxUtil_ParseQuery was incomplete. This effectively broke migration,
because the vcenter query parameter is essential for a migration.
Add the library entry point for the new virDomainQemuMonitorCommand()
API. Because this is not part of the "normal" libvirt API,
it gets its own header file and library file, and will eventually
get its own over-the-wire protocol later in the series.
Changes since v1:
- Go back to using the virDriver table for qemuDomainMonitorCommand, due to
linking issues
- Added versioning information to the libvirt-qemu.so
Changes since v2:
- None
Changes since v3:
- Add LGPL header to libvirt-qemu.c
- Make virLibConnError and virLibDomainError macros instead of function calls
Changes since v4:
- Move exported symbols to libvirt_qemu.syms
Signed-off-by: Chris Lalancette <clalance@redhat.com>
Also don't abuse the disk driver name to specify the SCSI controller
model anymore:
<driver name='buslogic'/>
Use the newly added model attribute of the controller element for this:
<controller type='scsi' index='0' model='buslogic'/>
The disk driver name approach is deprecated now, but still works for
backward compatibility reasons.
Update the documentation and tests accordingly.
Fix usage of the words controller and id in the VMX handling code. Use
controller, bus and unit properly.
The domain XML parsing code autogenerates disk address and
controller elements when they are not explicitly specified.
The code assumes a narrow SCSI bus (7 units per bus). ESX
uses a wide SCSI bus (16 units per bus).
This is a step towards controller support for the ESX driver.
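As an aside, the address computation difference can be sketched like
this (illustrative C, not the actual libvirt code):

/* Map a flat disk index to a controller/unit address on a wide SCSI
 * bus.  Unit 7 is the controller's own SCSI ID, so only 15 of the 16
 * units are usable (a narrow bus would simply use index / 7 and
 * index % 7). */
enum { WIDE_SCSI_USABLE_UNITS = 15 };

static void
assignWideScsiAddress(int index, int *controller, int *unit)
{
    *controller = index / WIDE_SCSI_USABLE_UNITS;
    *unit = index % WIDE_SCSI_USABLE_UNITS;

    if (*unit >= 7)
        ++*unit; /* skip unit 7, reserved for the controller itself */
}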
Eliminate almost all backward jumps by replacing this common pattern:
int
some_random_function(void)
{
    int result = 0;

    ...

  cleanup:
    <unconditional cleanup code>

    return result;

  failure:
    <cleanup code in case of an error>

    result = -1;

    goto cleanup;
}
with this simpler pattern:
int
some_random_function(void)
{
    int result = -1;

    ...

    result = 0;

  cleanup:
    if (result < 0) {
        <cleanup code in case of an error>
    }

    <unconditional cleanup code>

    return result;
}
Add a bool success variable in functions that don't have an int result
that can be used for the new pattern.
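An illustrative example of the bool variant (a hypothetical function,
not from the libvirt sources):

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void
print_doubled_line(const char *input)
{
    bool success = false;
    char *copy = strdup(input);

    if (copy == NULL)
        goto cleanup;

    printf("%s%s\n", copy, copy);

    success = true;

  cleanup:
    if (!success) {
        fprintf(stderr, "print_doubled_line failed\n");
    }

    free(copy);
}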
Also remove some unnecessary memsets in error paths.
This defines the internal driver API and stubs out each driver
* src/driver.h: Define virDrvDomainGetBlockInfo signature
* src/libvirt.c, src/libvirt_public.syms: Glue public API to drivers
* src/esx/esx_driver.c, src/lxc/lxc_driver.c, src/opennebula/one_driver.c,
src/openvz/openvz_driver.c, src/phyp/phyp_driver.c,
src/test/test_driver.c, src/uml/uml_driver.c, src/vbox/vbox_tmpl.c,
src/xen/xen_driver.c, src/xenapi/xenapi_driver.c: Stub out driver
Generate almost all SOAP method mapping code.
Update the driver code to use the complete parameter list of some methods
that had parameters skipped before.
Improve the ESX_VI__METHOD macro to do automatic output deserialization
based on output occurrence. Also incorporate automatic _this binding and
an output pointer check.
Fix invalid code generation in esx_vi_generator.py regarding deep copy
types that contain enum properties.
Add strptime and timegm to bootstrap.conf. Both are used to convert an
xsd:dateTime to calendar time.
Add a testcase for the xsd:dateTime conversion.
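The core of that conversion can be sketched as follows (real
xsd:dateTime values can also carry fractional seconds and timezone
offsets, which the actual conversion has to handle):

#define _GNU_SOURCE /* for strptime and timegm */
#include <stdio.h>
#include <time.h>

int
main(void)
{
    struct tm tm = { 0 };
    const char *dateTime = "2010-04-08T05:45:50Z"; /* example value */

    /* strptime fills the broken-down time, timegm converts it to
     * calendar time (time_t), interpreting it as UTC */
    if (strptime(dateTime, "%Y-%m-%dT%H:%M:%S", &tm) == NULL)
        return 1;

    printf("%lld\n", (long long)timegm(&tm));

    return 0;
}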
Also define ESX_ERROR and ESX_VI_ERROR in a central place, instead of
defining them in each source file.
Add ESX_ERROR and ESX_VI_ERROR to the msg_gen_function list in cfg.mk.
Update po/POTFILES.in accordingly.
virDomainManagedSave() is to be run on a running domain. Once the call
completes, the domain is stopped, as with virDomainSave(), but there
is no restore counterpart: any subsequent request to start the domain
through the API will load the state from the managed save file; the
same happens if the domain is autostarted when libvirtd starts.
Once a domain has been restarted its managed save image is destroyed;
basically, a managed save image can only exist for a stopped domain,
since for a running domain it would by definition be outdated data.
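A short usage sketch of the new calls (the domain name is an example;
error handling is trimmed):

#include <libvirt/libvirt.h>

/* Save a running domain to a libvirt-managed image; the next start
 * then restores from that image, which is destroyed afterwards. */
static void
managedSaveExample(virConnectPtr conn)
{
    virDomainPtr dom = virDomainLookupByName(conn, "example-domain");

    if (dom == NULL)
        return;

    if (virDomainManagedSave(dom, 0) == 0 &&
        virDomainHasManagedSaveImage(dom, 0) == 1) {
        virDomainCreate(dom); /* restores from the managed image */
    }

    virDomainFree(dom);
}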
* include/libvirt/libvirt.h.in src/libvirt.c src/libvirt_public.syms:
adds the new entry points virDomainManagedSave(),
virDomainHasManagedSaveImage() and virDomainManagedSaveRemove()
* src/driver.h src/esx/esx_driver.c src/lxc/lxc_driver.c
src/opennebula/one_driver.c src/openvz/openvz_driver.c
src/phyp/phyp_driver.c src/qemu/qemu_driver.c src/vbox/vbox_tmpl.c
src/remote/remote_driver.c src/test/test_driver.c src/uml/uml_driver.c
src/xen/xen_driver.c: add corresponding new internal drivers entry
points
virParseVersionString uses virStrToLong_ui instead of sscanf.
This also fixes a bug in the UML driver, which always returned 0
as the version number.
Introduce STRSKIP to check if a string has a certain prefix and
to skip this prefix.
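A sketch of what such a helper could look like (the actual libvirt
macro may be defined differently):

#include <string.h>

/* Return a pointer just past the prefix, or NULL if str does not
 * start with it. */
static const char *
strSkip(const char *str, const char *prefix)
{
    size_t len = strlen(prefix);

    return strncmp(str, prefix, len) == 0 ? str + len : NULL;
}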
virStrToLong* guarantees (via strtol) that the end pointer will be set
to the point at which parsing stopped (even on failure, this point is
the start of the input string).
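For example, with the underlying strtol semantics:

#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
    const char *ok = "42abc";
    const char *bad = "abc";
    char *end;

    long value = strtol(ok, &end, 10);
    printf("parsed %ld, stopped at \"%s\"\n", value, end); /* 42, "abc" */

    strtol(bad, &end, 10);
    /* nothing parsed: end points back at the start of the input */
    printf("end == bad: %d\n", end == bad);

    return 0;
}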
* src/esx/esx_driver.c (esxGetVersion): Remove pointless
conditional.
* src/qemu/qemu_conf.c (qemuParseCommandLinePCI)
(qemuParseCommandLineUSB, qemuParseCommandLineSmp): Likewise.
* src/qemu/qemu_monitor_text.c
(qemuMonitorTextGetMigrationStatus): Likewise.
The Python script generates the mappings based on the type descriptions
in the esx_vi_generator.input file.
This also improves the inheritance handling and makes it possible to
get rid of the ugly, inflexible, and error-prone _base/_super approach.
Now every struct that represents a SOAP type contains a _type member,
which allows recreating C++-like dynamic dispatch for "method" calls in C.
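A condensed sketch of the idea (names are illustrative, not the
generated code):

#include <stdio.h>

/* Every struct representing a SOAP type starts with a _type tag,
 * so a "base class" pointer can be dispatched on at runtime. */
typedef enum {
    esxVI_Type_VirtualMachine,
    esxVI_Type_HostSystem
} esxVI_Type;

typedef struct {
    esxVI_Type _type; /* must be the first member of every subtype */
} esxVI_Object;

typedef struct {
    esxVI_Type _type;
    const char *name;
} esxVI_VirtualMachine;

/* Dispatch on the tag, emulating a C++ virtual call. */
static void
esxVI_Object_Print(esxVI_Object *object)
{
    switch (object->_type) {
    case esxVI_Type_VirtualMachine:
        printf("vm: %s\n", ((esxVI_VirtualMachine *)object)->name);
        break;

    case esxVI_Type_HostSystem:
        printf("host system\n");
        break;
    }
}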
The current virDomainAttachDevice API can be (ab)used to change
the media of an existing CDROM/Floppy device. Going forward there
will be more devices that can be configured on the fly and overloading
virDomainAttachDevice for this is not too pleasant. This patch adds
a new virDomainUpdateDeviceFlags() explicitly just for modifying
existing devices.
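A usage sketch for swapping CDROM media via the new call (the device
XML and target names are examples):

#include <stdio.h>
#include <libvirt/libvirt.h>

/* Describe the updated CDROM device; libvirt modifies the existing
 * device instead of attaching a new one. */
static int
changeCdromMedia(virDomainPtr dom, const char *isoPath)
{
    char xml[1024];

    snprintf(xml, sizeof(xml),
             "<disk type='file' device='cdrom'>"
             "<source file='%s'/>"
             "<target dev='hdc' bus='ide'/>"
             "</disk>", isoPath);

    return virDomainUpdateDeviceFlags(dom, xml, 0 /* default flags */);
}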
* include/libvirt/libvirt.h.in: Add virDomainUpdateDeviceFlags
* src/driver.h: Internal API for virDomainUpdateDeviceFlags
* src/libvirt.c, src/libvirt_public.syms: Glue public API to
driver API
* src/esx/esx_driver.c, src/lxc/lxc_driver.c, src/opennebula/one_driver.c,
src/openvz/openvz_driver.c, src/phyp/phyp_driver.c, src/qemu/qemu_driver.c,
src/remote/remote_driver.c, src/test/test_driver.c, src/uml/uml_driver.c,
src/vbox/vbox_tmpl.c, src/xen/xen_driver.c, src/xenapi/xenapi_driver.c: Add
stubs for new driver entry point
The current API for domain events has a number of problems
- Only allows for domain lifecycle change events
- Does not allow the same callback to be registered multiple times
- Does not allow filtering of events to a specific domain
This introduces a new, more general-purpose domain events API:

typedef enum {
    VIR_DOMAIN_EVENT_ID_LIFECYCLE = 0,  /* virConnectDomainEventCallback */
    ...more events later...
} virDomainEventID;

int virConnectDomainEventRegisterAny(virConnectPtr conn,
                                     virDomainPtr dom, /* Optional, to filter */
                                     int eventID,
                                     virConnectDomainEventGenericCallback cb,
                                     void *opaque,
                                     virFreeCallback freecb);

int virConnectDomainEventDeregisterAny(virConnectPtr conn,
                                       int callbackID);
Since different event types can receive different data in the callback,
the API is defined with a generic callback. Specific events will each
have a custom signature for their callback. Thus when registering an
event it is necessary to cast the callback to the generic signature,
e.g.
int myDomainEventCallback(virConnectPtr conn,
                          virDomainPtr dom,
                          int event,
                          int detail,
                          void *opaque)
{
    ...
}

virConnectDomainEventRegisterAny(conn, NULL,
                                 VIR_DOMAIN_EVENT_ID_LIFECYCLE,
                                 VIR_DOMAIN_EVENT_CALLBACK(myDomainEventCallback),
                                 NULL, NULL);
The VIR_DOMAIN_EVENT_CALLBACK() macro simply does a "bad" cast
to the generic signature.
* include/libvirt/libvirt.h.in: Define new APIs for registering
domain events
* src/driver.h: Internal driver entry points for new events APIs
* src/libvirt.c: Wire up public API to driver API for events APIs
* src/libvirt_public.syms: Export new APIs
* src/esx/esx_driver.c, src/lxc/lxc_driver.c, src/opennebula/one_driver.c,
src/openvz/openvz_driver.c, src/phyp/phyp_driver.c,
src/qemu/qemu_driver.c, src/remote/remote_driver.c,
src/test/test_driver.c, src/uml/uml_driver.c,
src/vbox/vbox_tmpl.c, src/xen/xen_driver.c,
src/xenapi/xenapi_driver.c: Stub out new API entries
This is actually a consequence of the reworked required-parameter
checking: unify the required parameter check into a Validate function
instead of doing it separately in the (de)serialization part.
The required parameter checking for the mapped methods' parameters was
done in the (de)serialize functions before. Now it's explicitly done
in the mapped method itself.
Another warning caught by coverity. Continue to perform best-effort
closing and resource release, but warn the caller about the failure.
* src/esx/esx_driver.c (esxClose): Return an error on failure to close.
This provides the internal glue for the driver API
* src/driver.h: Internal API contract
* src/libvirt.c, src/libvirt_public.syms: Connect public API
to driver API
* src/esx/esx_driver.c, src/lxc/lxc_driver.c, src/opennebula/one_driver.c,
src/openvz/openvz_driver.c, src/phyp/phyp_driver.c,
src/qemu/qemu_driver.c, src/remote/remote_driver.c,
src/test/test_driver.c, src/uml/uml_driver.c, src/vbox/vbox_tmpl.c,
src/xen/xen_driver.c: Stub out entry points
The internal glue layer for the new public API
* src/driver.h: Define internal driver API contract
* src/libvirt.c, src/libvirt_public.syms: Wire up public
API to internal driver API
* src/esx/esx_driver.c, src/lxc/lxc_driver.c, src/opennebula/one_driver.c,
src/openvz/openvz_driver.c, src/phyp/phyp_driver.c,
src/qemu/qemu_driver.c, src/remote/remote_driver.c,
src/test/test_driver.c, src/uml/uml_driver.c, src/vbox/vbox_tmpl.c,
src/xen/xen_driver.c: Stub new entry point