Use it in all places where a memory or storage request size is converted
to a larger granularity. This avoids requesting memory or storage sizes
that are too small, which could result from the truncation done by a
simple division. This extends the round-up fix in 6002e0406c to the
whole codebase (the idiom is sketched below).
Instead of reporting errors for odd values in the VMX code, round them up.
Update the QEMU Argv tests accordingly, as the original memory size 219200
isn't an even multiple of 1024 and is now rounded up to 215 megabytes.
Change it to 219100 and 219136. Two different values are used intentionally
to make sure that rounding up works.
Update virsh.pod accordingly, as rounding down and rejecting are replaced
by rounding up.
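The idiom itself is simple; a minimal sketch, assuming a helper macro
shaped like the one this series uses (the name DIV_UP here is
illustrative):

/* Round-up integer division: plain division truncates, so
 * 219200 / 1024 yields 214 and silently shrinks the request;
 * adding (size - 1) before dividing rounds up instead. */
#define DIV_UP(count, size) (((count) + (size) - 1) / (size))

/* DIV_UP(219200, 1024) == 215, DIV_UP(219136, 1024) == 214 */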
The VMware driver no longer depends on the ESX driver.
Add a WITH_VMX option that depends on WITH_ESX and WITH_VMWARE.
Also add a libvirt_vmx.syms file.
Move some escaping functions from esx_util.c to vmx.c.
Adapt the test suite and the ESX and VMware drivers to the new code layout.
Instead of just reporting that a task failed, get the localized message
from the TaskInfo error and include it in the reported error message.
Implement minimal deserialization support for the MethodFault type in
order to obtain the actual fault type (see the sketch below).
For example, this changes the reported error message when trying to
create a volume with zero size from

  Could not create volume

to

  Could not create volume: InvalidArgument - A specified parameter was
  not correct.
Not perfect yet, but better than before.
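A rough illustration of the "minimal deserialization" idea, assuming the
fault type arrives as an xsi:type attribute on the fault node; the helper
name is hypothetical, not the driver's actual code:

#include <libxml/tree.h>

/* Hypothetical helper: read the dynamic fault type from a fault node
 * such as <fault xsi:type="InvalidArgument">. Knowing just this type
 * name plus the localized message is enough to build the improved
 * error string shown above. */
static char *
faultTypeName(xmlNodePtr fault)
{
    xmlChar *type =
        xmlGetNsProp(fault, BAD_CAST "type",
                     BAD_CAST "http://www.w3.org/2001/XMLSchema-instance");

    return (char *)type; /* caller frees with xmlFree() */
}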
Except for the LXC and UML drivers, the implementations in all other
drivers simply return 0, because those drivers don't keep a config both
in memory and on disk, so there is no need to track whether their domains
have been updated (a sketch of such a stub follows the file list below).
Rename "xenUnifiedDomainisPersistent" to "xenUnifiedDomainIsPersistent"
* esx/esx_driver.c
* lxc/lxc_driver.c
* opennebula/one_driver.c
* openvz/openvz_driver.c
* phyp/phyp_driver.c
* test/test_driver.c
* uml/uml_driver.c
* vbox/vbox_tmpl.c
* xen/xen_driver.c
* xenapi/xenapi_driver.c
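A sketch of the trivial stub these drivers gained; the ESX function name
is an assumption here, and ATTRIBUTE_UNUSED is libvirt's usual
unused-parameter marker:

static int
esxDomainIsUpdated(virDomainPtr dom ATTRIBUTE_UNUSED)
{
    /* ESX keeps no separate in-memory vs. on-disk config that could
     * diverge, so a domain here is never considered "updated". */
    return 0;
}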
To enable virsh console (or equivalent) to be used remotely, it is
necessary to provide remote access to the /dev/pts/XXX pseudo-TTY
associated with the console/serial/parallel device in the guest. The
virStream API provides a bi-directional I/O stream capability that can
be used for this purpose. This patch thus introduces a
virDomainOpenConsole API that uses the stream APIs (a usage sketch
follows the file lists below).
* src/libvirt.c, src/libvirt_public.syms,
include/libvirt/libvirt.h.in, src/driver.h: Define the
new virDomainOpenConsole API
* src/esx/esx_driver.c, src/lxc/lxc_driver.c,
src/opennebula/one_driver.c, src/openvz/openvz_driver.c,
src/phyp/phyp_driver.c, src/qemu/qemu_driver.c,
src/remote/remote_driver.c, src/test/test_driver.c,
src/uml/uml_driver.c, src/vbox/vbox_tmpl.c,
src/xen/xen_driver.c, src/xenapi/xenapi_driver.c: Stub
API entry point
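A hedged usage sketch of the new API from a client's point of view;
passing NULL as the device name is assumed to select the first console:

#include <libvirt/libvirt.h>

/* Attach a stream to a guest console; data then flows through
 * virStreamSend()/virStreamRecv(). Error reporting trimmed. */
static virStreamPtr
openConsole(virConnectPtr conn, virDomainPtr dom)
{
    virStreamPtr st = virStreamNew(conn, 0);

    if (st == NULL)
        return NULL;

    if (virDomainOpenConsole(dom, NULL, st, 0) < 0) {
        virStreamFree(st);
        return NULL;
    }

    return st;
}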
Note - this wrapping is completely mechanical; the old API will
function identically, since the old API passes exactly the flags that
the new API validates. On a per-driver basis, it may make sense to have
the old API pass a different set of flags, but that should be done in
the per-driver patch that implements the full range of flag support in
the new API (the wrapper shape is sketched after the file list).
* src/esx/esx_driver.c (esxDomainSetVcpus, esxDomainGetMaxVcpus):
Move guts...
(esxDomainSetVcpusFlags, esxDomainGetVcpusFlags): ...to new
functions.
(esxDriver): Trivially support the new API.
* src/openvz/openvz_driver.c (openvzDomainSetVcpus)
(openvzDomainSetVcpusFlags, openvzDomainGetMaxVcpus)
(openvzDomainGetVcpusFlags, openvzDriver): Likewise.
* src/phyp/phyp_driver.c (phypDomainSetCPU)
(phypDomainSetVcpusFlags, phypGetLparCPUMAX)
(phypDomainGetVcpusFlags, phypDriver): Likewise.
* src/qemu/qemu_driver.c (qemudDomainSetVcpus)
(qemudDomainSetVcpusFlags, qemudDomainGetMaxVcpus)
(qemudDomainGetVcpusFlags, qemuDriver): Likewise.
* src/test/test_driver.c (testSetVcpus, testDomainSetVcpusFlags)
(testDomainGetMaxVcpus, testDomainGetVcpusFlags, testDriver):
Likewise.
* src/vbox/vbox_tmpl.c (vboxDomainSetVcpus)
(vboxDomainSetVcpusFlags, vboxDomainGetMaxVcpus)
(vboxDomainGetVcpusFlags, virDriver): Likewise.
* src/xen/xen_driver.c (xenUnifiedDomainSetVcpus)
(xenUnifiedDomainSetVcpusFlags, xenUnifiedDomainGetMaxVcpus)
(xenUnifiedDomainGetVcpusFlags, xenUnifiedDriver): Likewise.
* src/xenapi/xenapi_driver.c (xenapiDomainSetVcpus)
(xenapiDomainSetVcpusFlags, xenapiDomainGetMaxVcpus)
(xenapiDomainGetVcpusFlags, xenapiDriver): Likewise.
(xenapiError): New helper macro.
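The wrapper shape, sketched with the ESX names (the flag constants are
the VIR_DOMAIN_VCPU_* values from the new public API):

static int
esxDomainSetVcpus(virDomainPtr domain, unsigned int nvcpus)
{
    /* the old API behaves like the new one with the 'live' flag */
    return esxDomainSetVcpusFlags(domain, nvcpus, VIR_DOMAIN_VCPU_LIVE);
}

static int
esxDomainGetMaxVcpus(virDomainPtr domain)
{
    return esxDomainGetVcpusFlags(domain, VIR_DOMAIN_VCPU_LIVE |
                                          VIR_DOMAIN_VCPU_MAXIMUM);
}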
Public API to set/get memory tunables supported by the hypervisors
(a usage sketch follows the version notes below).
dv:
* some cleanups in libvirt.c
* adding extra checks in the new libvirt.c entry points
v4:
* Move exporting public API to this patch
* Add unsigned int flags to the public api for future extensions
v3:
* Add domainGetMemoryParameters and NULL in all the driver interfaces
v2:
* Initialize domainSetMemoryParameters to NULL in all the driver
interface structures.
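A usage sketch of the new entry points; current libvirt types the
parameter array as virTypedParameter (this series used an earlier struct
of the same shape), so treat the exact types as illustrative:

#include <string.h>
#include <libvirt/libvirt.h>

/* Set the memory hard limit (in KiB) for a domain. */
static int
setHardLimit(virDomainPtr dom, unsigned long long kib)
{
    virTypedParameter param;

    memset(&param, 0, sizeof(param));
    strncpy(param.field, VIR_DOMAIN_MEMORY_HARD_LIMIT,
            sizeof(param.field) - 1);
    param.type = VIR_TYPED_PARAM_ULLONG;
    param.value.ul = kib;

    /* the unsigned int flags argument is 0 for now, reserved for
     * future extensions as noted above */
    return virDomainSetMemoryParameters(dom, &param, 1, 0);
}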
Instead of splitting the path part of a datastore path into
directory and file name, keep this in one piece. An example:
"[datastore] directory/file"
was split into this before:
datastoreName = "datastore"
directoryName = "directory"
fileName = "file"
Now it's split into this:
datastoreName = "datastore"
directoryName = "directory"
directoryAndFileName = "directory/file"
This simplifies code using esxUtil_ParseDatastorePath, because
directoryAndFileName is used more often than fileName. Also, the old
approach expected the datastore path to reference an actual file, but
this isn't always correct, especially when listing volumes. In that case
esxUtil_ParseDatastorePath is used to parse a path that references a
directory. This fails for a vpx:// connection, because vCenter returns
directory paths with a trailing '/'. The new approach is robust against
this, and the actual decision whether the datastore path references a
file or a directory is left to the caller of esxUtil_ParseDatastorePath.
Update the tests accordingly.
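A sketch of the new split in use; the signature is paraphrased from the
description above, so treat it as illustrative:

static void
example(void)
{
    char *datastoreName = NULL;        /* -> "datastore" */
    char *directoryName = NULL;        /* -> "directory" */
    char *directoryAndFileName = NULL; /* -> "directory/file"; for a
                                        *    vCenter directory listing it
                                        *    may end in a trailing '/' */

    if (esxUtil_ParseDatastorePath("[datastore] directory/file",
                                   &datastoreName, &directoryName,
                                   &directoryAndFileName) < 0) {
        return; /* parse error */
    }

    /* whether this refers to a file or a directory is now up to us */
}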
Instead of using one big traversal spec for lookup use a set of
more fine grained traversal specs that are selected based on the
actual needs of the lookup.
This gives up to 20% speedup for certain operations like domain
listing due to less HTTP(S) traffic.
For parsing, try to match by datastore mount path first; if that fails,
fall back to /vmfs/volumes/<datastore>/<path> parsing. This also fixes
problems with GSX on Windows, because GSX on Windows doesn't use
/vmfs/volumes/-style file names.
For formatting, use the datastore mount path too, instead of using
/vmfs/volumes/<datastore>/<path> as a fixed format.
Introduce esxVMX_Context, containing function pointers to glue both
parts together in a generic way (a rough shape is sketched below).
Move the ESX specific part to esx_driver.c.
This is a step towards making the VMX code reusable in a
potential VMware Workstation and VMware Player driver.
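A rough shape of such a context; the member names are assumptions based
on this description, not the exact header:

typedef struct _esxVMX_Context esxVMX_Context;

struct _esxVMX_Context {
    void *opaque; /* driver-private data handed back to the callbacks */

    /* translate between datastore paths and hypervisor-specific file
     * names, so the generic VMX code never touches ESX specifics */
    char *(*parseFileName)(const char *fileName, void *opaque);
    char *(*formatFileName)(const char *src, void *opaque);
};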
Don't rely on summary.url anymore, because its value is different
between an esx:// and vpx:// connection. Use host.mountInfo.path
instead.
Don't fall back to lookup by UUID (actually lookup by absolute path)
in esxVI_LookupDatastoreByName when lookup by name fails. Add a
separate function for this: esxVI_LookupDatastoreByAbsolutePath
Now a vpx:// connection has an explicitly specified host. This allows
several functions to be enabled again for a vpx:// connection, such as
host UUID, hostname, general node info, max vCPU count, free memory,
migration, and defining new domains.
Lookup datacenter, compute resource, resource pool and host
system once and cache them. This simplifies the rest of the
code and reduces overall HTTP(S) traffic a bit.
esx:// and vpx:// can be mixed freely for a migration.
Ensure that migration source and destination refer to the
same vCenter. Also directly encode the resource pool and
host system object IDs into the migration URI in the prepare
function. Then directly build managed object references in
the perform function instead of re-looking up already known
information.
esxVI_WaitForTaskCompletion can take a UUID to look up the
corresponding domain and check whether the current task for it
is blocked by a question. It calls another function to do
this: esxVI_LookupAndHandleVirtualMachineQuestion looks up
the VirtualMachine and checks for a question. If there is
a question it calls esxVI_HandleVirtualMachineQuestion to
handle it.
If there was no question, or it has been answered, the call to
esxVI_LookupAndHandleVirtualMachineQuestion returns 0. If any error
occurred during the lookup and answering process, -1 is returned. The
problem with this is that -1 is also returned when there was no error
but the question could not be answered. So esxVI_WaitForTaskCompletion
cannot distinguish between these two situations and reports that a
question is blocking the task even when there was actually another
problem.
This inherent problem didn't surface until vSphere 4.1, when trying to
define a new domain. The driver tries to look up the domain that is just
in the process of being registered. There seems to be some kind of race
condition, and the driver manages to issue a lookup command before the
ESX server has been able to register the domain. This used to work
before. Due to the return value problem described above, the driver
reported a false error message in that case.
To solve this, esxVI_WaitForTaskCompletion now takes an additional
occurrence parameter that describes whether or not to expect the domain
to exist. Also add a new parameter to
esxVI_LookupAndHandleVirtualMachineQuestion that allows the caller to
distinguish whether the call returned -1 because of an actual error or
because the question could not be answered (sketched below).
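The disambiguation can be illustrated with a stripped-down stand-in for
esxVI_LookupAndHandleVirtualMachineQuestion; all names and the 'blocked'
out-parameter are simplifications of what the real code does:

#include <stdbool.h>

/* Returns 0 if no question was pending or it was answered, -1 otherwise.
 * '*blocked' tells the caller whether -1 means "a question is blocking
 * the task" rather than a genuine lookup or answering error. */
static int
lookupAndHandleQuestion(bool questionPending, bool autoAnswer,
                        bool *blocked)
{
    *blocked = false;

    if (!questionPending)
        return 0;

    if (!autoAnswer) {
        *blocked = true; /* question exists but cannot be answered */
        return -1;
    }

    /* ... answer the question, returning -1 on real errors ... */
    return 0;
}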
Add a pointer to the primary context of a connection and use it in all
driver functions that don't depend on the context type. This includes
almost all functions that deal with a virDomainPtr. Therefore, using
a vpx:// connection allows you to perform all the usual domain related
actions like start, destroy, suspend, resume, dumpxml etc.
Some functions that require an explicitly specified ESX server don't work
yet. This includes the host UUID, the hostname, the general node info, the
max vCPU count and the free memory. Also not working yet are migration and
defining new domains.
Since 070f61002f the vcenter query
parameter has been ignored, because the refactoring to use
esxUtil_ParseQuery was incomplete. This effectively broke migration,
because the vcenter query parameter is essential for a migration.
Add the library entry point for the new virDomainQemuMonitorCommand()
API. Because this is not part of the "normal" libvirt API, it gets its
own header file and library file, and will eventually get its own
over-the-wire protocol later in the series (a usage sketch follows the
version notes below).
Changes since v1:
- Go back to using the virDriver table for qemuDomainMonitorCommand, due to
linking issues
- Added versioning information to the libvirt-qemu.so
Changes since v2:
- None
Changes since v3:
- Add LGPL header to libvirt-qemu.c
- Make virLibConnError and virLibDomainError macros instead of function calls
Changes since v4:
- Move exported symbols to libvirt_qemu.syms
Signed-off-by: Chris Lalancette <clalance@redhat.com>
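A usage sketch; note the separate libvirt-qemu header, underlining that
this lives outside the normal API:

#include <libvirt/libvirt-qemu.h>

/* Send a raw monitor command and hand the reply to the caller,
 * who frees it. Returns NULL on failure. */
static char *
runMonitorCommand(virDomainPtr dom, const char *cmd)
{
    char *reply = NULL;

    if (virDomainQemuMonitorCommand(dom, cmd, &reply, 0) < 0)
        return NULL;

    return reply;
}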
Also don't abuse the disk driver name to specify the SCSI controller
model anymore:
  <driver name='buslogic'/>
Use the newly added model attribute of the controller element for this:
  <controller type='scsi' index='0' model='buslogic'/>
The disk driver name approach is deprecated now, but still works for
backward compatibility reasons.
Update the documentation and tests accordingly.
Fix usage of the words controller and id in the VMX handling code. Use
controller, bus and unit properly.
The domain XML parsing code autogenerates disk address and
controller elements when they are not explicitly specified.
The code assumes a narrow SCSI bus (7 units per bus). ESX
uses a wide SCSI bus (16 units per bus).
This is a step towards controller support for the ESX driver.
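The difference boils down to the address arithmetic; a sketch for the
wide-bus case (on both bus widths, unit 7 is reserved for the controller
itself):

/* Map a flat disk index to (controller, unit) on a wide SCSI bus:
 * 15 usable units per controller, because unit 7 is the controller
 * itself. A narrow bus would offer only 7 usable units instead. */
static void
wideScsiAddress(int index, int *controller, int *unit)
{
    *controller = index / 15;
    *unit = index % 15;

    if (*unit >= 7)
        ++*unit; /* skip unit 7, reserved for the controller */
}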
Eliminate almost all backward jumps by replacing this common pattern:
int
some_random_function(void)
{
    int result = 0;

    ...

cleanup:
    <unconditional cleanup code>

    return result;

failure:
    <cleanup code in case of an error>

    result = -1;

    goto cleanup;
}
with this simpler pattern:
int
some_random_function(void)
{
    int result = -1;

    ...

    result = 0;

cleanup:
    if (result < 0) {
        <cleanup code in case of an error>
    }

    <unconditional cleanup code>

    return result;
}
Add a bool success variable in functions that don't have an int result
so that they can use the new pattern (sketched below).
Also remove some unnecessary memsets in error paths.
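For completeness, the bool success variant mentioned above looks like
this, in the same pseudocode style:

char *
some_other_function(void)
{
    bool success = false;
    char *string = NULL;

    ...

    success = true;

cleanup:
    if (!success) {
        <cleanup code in case of an error>
    }

    <unconditional cleanup code>

    return string;
}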
This defines the internal driver API and stubs out each driver
(a usage sketch follows the file list).
* src/driver.h: Define virDrvDomainGetBlockInfo signature
* src/libvirt.c, src/libvirt_public.syms: Glue public API to drivers
* src/esx/esx_driver.c, src/lxc/lxc_driver.c, src/opennebula/one_driver.c,
src/openvz/openvz_driver.c, src/phyp/phyp_driver.c,
src/test/test_driver.c, src/uml/uml_driver.c, src/vbox/vbox_tmpl.c,
src/xen/xen_driver.c, src/xenapi/xenapi_driver.c: Stub out driver
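Once a driver replaces its stub, the API is used like this; a hedged
sketch:

#include <stdio.h>
#include <libvirt/libvirt.h>

/* Print the three sizes the API reports for one guest disk. */
static int
printBlockInfo(virDomainPtr dom, const char *disk)
{
    virDomainBlockInfo info;

    if (virDomainGetBlockInfo(dom, disk, &info, 0) < 0)
        return -1;

    /* capacity: logical size seen by the guest; allocation: host
     * storage in use; physical: size of the container file/device */
    printf("capacity=%llu allocation=%llu physical=%llu\n",
           info.capacity, info.allocation, info.physical);

    return 0;
}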