New APIs are added allowing streaming of content to/from
storage volumes.
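A sketch of how a client might drive the new upload API once a driver implements it (a minimal sketch; conn and vol are assumed to exist, and offset/length of 0 are assumed to mean "the whole volume"):

#include <libvirt/libvirt.h>

int
upload_volume(virConnectPtr conn, virStorageVolPtr vol)
{
    virStreamPtr st = virStreamNew(conn, 0);

    if (st == NULL || virStorageVolUpload(vol, st, 0, 0, 0) < 0) {
        if (st != NULL)
            virStreamFree(st);
        return -1;
    }

    /* push data with virStreamSend(), then call virStreamFinish(st) */
    return 0;
}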
* include/libvirt/libvirt.h.in: Add virStorageVolUpload and
virStorageVolDownload APIs
* src/driver.h, src/libvirt.c, src/libvirt_public.syms: Stub
code for new APIs
* src/storage/storage_driver.c, src/esx/esx_storage_driver.c:
Add dummy entries in driver table for new APIs
It is possible to set a migration speed limit when starting a
migration. This new API allows the speed limit to be changed
on the fly, to adjust to changing conditions.
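A sketch of the intended call, assuming dom refers to a domain that is currently migrating (the bandwidth argument is in MiB/s):

if (virDomainMigrateSetMaxSpeed(dom, 32, 0) < 0)
    fprintf(stderr, "unable to adjust migration speed\n");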
* src/driver.h, src/libvirt.c, src/libvirt_public.syms,
include/libvirt/libvirt.h.in: Add virDomainMigrateSetMaxSpeed
* src/esx/esx_driver.c, src/lxc/lxc_driver.c,
src/opennebula/one_driver.c, src/openvz/openvz_driver.c,
src/phyp/phyp_driver.c, src/qemu/qemu_driver.c,
src/remote/remote_driver.c, src/test/test_driver.c,
src/uml/uml_driver.c, src/vbox/vbox_tmpl.c,
src/vmware/vmware_driver.c, src/xen/xen_driver.c,
src/libxl/libxl_driver.c: Stub new API
This patch introduces a new libvirt API (virDomainSetMemoryFlags) and
a set of flags (virDomainMemoryModFlags).
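A sketch of the new call, assuming dom exists (memory is in KiB, flags come from the virDomainMemoryModFlags enum):

if (virDomainSetMemoryFlags(dom, 1048576,
                            VIR_DOMAIN_MEM_LIVE | VIR_DOMAIN_MEM_CONFIG) < 0)
    fprintf(stderr, "unable to change memory allocation\n");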
Signed-off-by: Taku Izumi <izumi.taku@jp.fujitsu.com>
Etienne Gosset reported that libvirt fails to connect to his ESX
server because it failed to parse the server's malformed host UUID,
which contains an additional space and lacks one hex digit in the
last group:
xxxxxxxx-xxxx-xxxx-xxxx- xxxxxxxxxxx
Don't treat this as a fatal error; just ignore it.
Use it in all places where a memory or storage request size is converted
to a larger granularity. This avoids requesting too-small memory or storage
sizes that could result from the truncation done by a simple division.
This extends the round-up fix in 6002e0406c
to the whole codebase.
Instead of reporting errors for odd values in the VMX code, round them up.
Update the QEMU argv tests accordingly, as the original memory size 219200
isn't an even multiple of 1024 and is now rounded up to 215 megabytes. Change
it to 219100 and 219136. Use two different values intentionally to make
sure that rounding up works.
Update virsh.pod accordingly, as rounding down and rejecting are replaced
by rounding up.
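For reference, the helper in question rounds an integer division up instead of truncating; a minimal sketch, assuming it is libvirt's VIR_DIV_UP macro:

#define VIR_DIV_UP(count, size) (((count) + (size) - 1) / (size))

/* 219200 / 1024 truncates to 214; VIR_DIV_UP(219200, 1024) yields 215 */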
Now the VMware driver doesn't depend on the ESX driver anymore.
Add a WITH_VMX option that depends on WITH_ESX and WITH_VMWARE.
Also add a libvirt_vmx.syms file.
Move some escaping functions from esx_util.c to vmx.c.
Adapt the test suite, ESX and VMware driver to the new code layout.
Connecting to an ESX(i) server that is part of a cluster failed
when the connection also involved a vCenter.
Accept ClusterComputeResource type in addition to ComputeResource
type in the object lookup function.
Reported by Guillaume Le Louët.
Instead of just reporting that a task failed, get the
localized message from the TaskInfo error and include
it in the reported error message.
Implement minimal deserialization support for the
MethodFault type in order to obtain the actual fault
type.
For example, this changes the reported error message
when trying to create a volume with zero size from
Could not create volume
to
Could not create volume: InvalidArgument - A specified parameter was not correct.
Not perfect yet, but better than before.
Except for the LXC and UML drivers, the implementations of all other
drivers simply return 0, because these drivers don't keep the config
both in memory and on disk, so there is no need to track whether their
domains have been updated or not.
Rename "xenUnifiedDomainisPersistent" to "xenUnifiedDomainIsPersistent"
* esx/esx_driver.c
* lxc/lxc_driver.c
* opennebula/one_driver.c
* openvz/openvz_driver.c
* phyp/phyp_driver.c
* test/test_driver.c
* uml/uml_driver.c
* vbox/vbox_tmpl.c
* xen/xen_driver.c
* xenapi/xenapi_driver.c
This is more flexible regarding the location of the python binary,
but doesn't allow passing the -u flag. The -i flag can be passed
from inside the script using the PYTHONINSPECT env variable.
This fixes a problem with the esx_vi_generator.py on FreeBSD.
To enable virsh console (or equivalent) to be used remotely
it is necessary to provide remote access to the /dev/pts/XXX
pseudo-TTY associated with the console/serial/parallel device
in the guest. The virStream API provides a bi-directional I/O
stream capability that can be used for this purpose. This
patch thus introduces a virDomainOpenConsole API that uses
the stream APIs.
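A sketch of a client using the new API (conn and dom are assumed; a NULL device name selects the first console):

virStreamPtr st = virStreamNew(conn, 0);

if (st == NULL || virDomainOpenConsole(dom, NULL, st, 0) < 0)
    return -1;

/* console I/O now flows through virStreamSend()/virStreamRecv() */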
* src/libvirt.c, src/libvirt_public.syms,
include/libvirt/libvirt.h.in, src/driver.h: Define the
new virDomainOpenConsole API
* src/esx/esx_driver.c, src/lxc/lxc_driver.c,
src/opennebula/one_driver.c, src/openvz/openvz_driver.c,
src/phyp/phyp_driver.c, src/qemu/qemu_driver.c,
src/remote/remote_driver.c, src/test/test_driver.c,
src/uml/uml_driver.c, src/vbox/vbox_tmpl.c,
src/xen/xen_driver.c, src/xenapi/xenapi_driver.c: Stub
API entry point
This extends the XML syntax for <graphics> to allow a password
expiry time to be set, e.g.
<graphics type='vnc' port='5900' autoport='yes' keymap='en-us' passwd='12345' passwdValidTo='2010-04-09T15:51:00'/>
The timestamp is in UTC.
* src/conf/domain_conf.h: Pull passwd out into separate struct
virDomainGraphicsAuthDef to allow sharing between VNC & SPICE
* src/conf/domain_conf.c: Add parsing/formatting of new passwdValidTo
argument
* src/opennebula/one_conf.c, src/qemu/qemu_conf.c, src/qemu/qemu_driver.c,
src/xen/xend_internal.c, src/xen/xm_internal.c: Update for changed
struct containing VNC password
Although this patch adds a distinction between maximum vcpus and
current vcpus in the XML, the values should be identical for all
drivers at this point. Only in subsequent per-driver patches will
a distinction be made.
In general, virDomainGetInfo should prefer the current vcpus.
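The resulting XML form carries the maximum as element content and the current count as an attribute, e.g.
<vcpu current='2'>4</vcpu>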
* src/conf/domain_conf.h (_virDomainDef): Adjust vcpus to unsigned
short, to match virDomainGetInfo limit. Add maxvcpus member.
* src/conf/domain_conf.c (virDomainDefParseXML)
(virDomainDefFormat): parse and print out vcpu details.
* src/xen/xend_internal.c (xenDaemonParseSxpr)
(xenDaemonFormatSxpr): Manage both vcpu numbers, and require them
to be equal for now.
* src/xen/xm_internal.c (xenXMDomainConfigParse)
(xenXMDomainConfigFormat): Likewise.
* src/phyp/phyp_driver.c (phypDomainDumpXML): Likewise.
* src/openvz/openvz_conf.c (openvzLoadDomains): Likewise.
* src/openvz/openvz_driver.c (openvzDomainDefineXML)
(openvzDomainCreateXML, openvzDomainSetVcpusInternal): Likewise.
* src/vbox/vbox_tmpl.c (vboxDomainDumpXML, vboxDomainDefineXML):
Likewise.
* src/xenapi/xenapi_driver.c (xenapiDomainDumpXML): Likewise.
* src/xenapi/xenapi_utils.c (createVMRecordFromXml): Likewise.
* src/esx/esx_vmx.c (esxVMX_ParseConfig, esxVMX_FormatConfig):
Likewise.
* src/qemu/qemu_conf.c (qemuBuildSmpArgStr)
(qemuParseCommandLineSmp, qemuParseCommandLine): Likewise.
* src/qemu/qemu_driver.c (qemudDomainHotplugVcpus): Likewise.
* src/opennebula/one_conf.c (xmlOneTemplate): Likewise.
Note - this wrapping is completely mechanical; the old API will
function identically, since the new API validates that the exact
same flags are provided by the old API. On a per-driver basis,
it may make sense to have the old API pass a different set of flags,
but that should be done in the per-driver patch that implements
the full range of flag support in the new API.
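The wrapping looks like this in each driver (a sketch; the flag choice shown is the live-domain semantics the old API implies):

static int
esxDomainSetVcpus(virDomainPtr domain, unsigned int nvcpus)
{
    return esxDomainSetVcpusFlags(domain, nvcpus, VIR_DOMAIN_VCPU_LIVE);
}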
* src/esx/esx_driver.c (esxDomainSetVcpus, esxDomainGetMaxVcpus):
Move guts...
(esxDomainSetVcpusFlags, esxDomainGetVcpusFlags): ...to new
functions.
(esxDriver): Trivially support the new API.
* src/openvz/openvz_driver.c (openvzDomainSetVcpus)
(openvzDomainSetVcpusFlags, openvzDomainGetMaxVcpus)
(openvzDomainGetVcpusFlags, openvzDriver): Likewise.
* src/phyp/phyp_driver.c (phypDomainSetCPU)
(phypDomainSetVcpusFlags, phypGetLparCPUMAX)
(phypDomainGetVcpusFlags, phypDriver): Likewise.
* src/qemu/qemu_driver.c (qemudDomainSetVcpus)
(qemudDomainSetVcpusFlags, qemudDomainGetMaxVcpus)
(qemudDomainGetVcpusFlags, qemuDriver): Likewise.
* src/test/test_driver.c (testSetVcpus, testDomainSetVcpusFlags)
(testDomainGetMaxVcpus, testDomainGetVcpusFlags, testDriver):
Likewise.
* src/vbox/vbox_tmpl.c (vboxDomainSetVcpus)
(vboxDomainSetVcpusFlags, virDomainGetMaxVcpus)
(virDomainGetVcpusFlags, virDriver): Likewise.
* src/xen/xen_driver.c (xenUnifiedDomainSetVcpus)
(xenUnifiedDomainSetVcpusFlags, xenUnifiedDomainGetMaxVcpus)
(xenUnifiedDomainGetVcpusFlags, xenUnifiedDriver): Likewise.
* src/xenapi/xenapi_driver.c (xenapiDomainSetVcpus)
(xenapiDomainSetVcpusFlags, xenapiDomainGetMaxVcpus)
(xenapiDomainGetVcpusFlags, xenapiDriver): Likewise.
(xenapiError): New helper macro.
ESX(i) uses UTF-8, but a Windows-based GSX server writes
Windows-1252-encoded VMX files.
Add a test case to ensure that libxml2 provides Windows-1252
to UTF-8 conversion.
Add parsing code for memory tunables in the domain XML file;
also change the internal definition structures used for domain
memory information.
Adds a new specific test.
Public API to set/get memory tunables supported by the hypervisors.
dv:
* some cleanups in libvirt.c
* adding extra checks in the new libvirt.c entry points
v4:
* Move exporting public API to this patch
* Add unsigned int flags to the public api for future extensions
v3:
* Add domainGetMemoryParameters, set to NULL, in all the driver interfaces
v2:
* Initialize domainSetMemoryParameters to NULL in all the driver
interface structures.
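A sketch of the setter from the caller's side, written with the current libvirt type names (virTypedParameter and the field macros are assumed to match today's headers; the original series used the equivalent virMemoryParameter struct):

virTypedParameter param;

memset(&param, 0, sizeof(param));
strncpy(param.field, VIR_DOMAIN_MEMORY_SOFT_LIMIT, sizeof(param.field) - 1);
param.type = VIR_TYPED_PARAM_ULLONG;
param.value.ul = 524288; /* KiB */

if (virDomainSetMemoryParameters(dom, &param, 1, 0) < 0)
    fprintf(stderr, "unable to set memory tunables\n");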
Since version 4.1, ESX(i) can expose virtual serial devices over TCP.
Add support for this in the VMX handling code, add test cases to cover
it, and add links to some documentation.
ESX supports two additional protocols: TELNETS and TLS. Add them to
the list of serial-over-TCP protocols.
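A VMX fragment of the kind this covers (values illustrative):

serial0.present = "true"
serial0.fileType = "network"
serial0.fileName = "telnet://:4555"
serial0.network.endPoint = "server"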
Before this commit SessionIsActive was not used because ESX(i)
doesn't implement it. vCenter supports SessionIsActive, so use
it here, but keep the fallback mechanism for ESX(i) and GSX.
QueryVirtualDiskUuid is only available on an ESX(i) server. vCenter
returns a NotImplemented fault and a GSX server is missing the
VirtualDiskManager completely. Therefore, only use QueryVirtualDiskUuid
with an ESX(i) server and fall back to the path as storage volume key
for vCenter and GSX servers.
VirtualDisks are .vmdk-file based. Other files in a datastore,
like .iso or .flp files, don't have a UUID attached; fall back
to the path as the key for them.
Instead of splitting the path part of a datastore path into
directory and file name, keep this in one piece. An example:
"[datastore] directory/file"
was split into this before:
datastoreName = "datastore"
directoryName = "directory"
fileName = "file"
Now it's split into this:
datastoreName = "datastore"
directoryName = "directory"
directoryAndFileName = "directory/file"
This simplifies code using esxUtil_ParseDatastorePath, because
directoryAndFileName is used more often than fileName. Also, the
old approach expected the datastore path to reference an actual
file, but this isn't always correct, especially when listing
volumes. In that case esxUtil_ParseDatastorePath is used to parse
a path that references a directory. This fails for a vpx://
connection because vCenter returns directory paths with a
trailing '/'. The new approach is robust against this, and the
decision whether the datastore path should reference a file or
a directory is up to the caller of esxUtil_ParseDatastorePath.
Update the tests accordingly.
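A sketch of the new calling convention, with the prototype inferred from the description above:

char *datastoreName = NULL;
char *directoryName = NULL;
char *directoryAndFileName = NULL;

if (esxUtil_ParseDatastorePath("[datastore] directory/file", &datastoreName,
                               &directoryName, &directoryAndFileName) < 0)
    return -1;

/* datastoreName = "datastore", directoryName = "directory",
 * directoryAndFileName = "directory/file" */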
The check was altered in 8c48743b97
and became too strict; I've no clue how that snuck in. This check
makes every attempt to open a connection using the ESX driver fail
with an invalid argument error.
Revert the change to the check and add a comment to prevent future
mistakes with this check.
Instead of using one big traversal spec for lookup, use a set of
more fine-grained traversal specs that are selected based on the
actual needs of the lookup.
This gives up to 20% speedup for certain operations, like domain
listing, due to less HTTP(S) traffic.
With the previous storage pool UUID source, not all storage pools
had a proper UUID, especially GSX storage pools. The mount path
is unique per host and cannot change during the lifetime of the
datastore. Therefore, its MD5 sum can be used as the UUID.
Use gnulib's crypto/md5 module to generate the MD5 sum.
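A sketch of the UUID derivation (mountPath and the pool definition def are placeholders; md5_buffer comes from gnulib's md5.h and writes a 16-byte digest, which matches VIR_UUID_BUFLEN):

#include "md5.h"

unsigned char md5[MD5_DIGEST_SIZE]; /* 16 bytes */

md5_buffer(mountPath, strlen(mountPath), md5);
memcpy(def.uuid, md5, VIR_UUID_BUFLEN);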
In case an optional object cannot be found, the lookup function is
left early and the cleanup code is not executed.
This pattern occurs in some other functions too.
floppy0.present defaults to true. Therefore, it needs to be
explicitly set to false when the XML config doesn't specify the
corresponding floppy device.
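i.e. the generated VMX should then contain an explicit line like:

floppy0.present = "false"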
Also update tests accordingly.
For parsing, try to match by datastore mount path first; if that
fails, fall back to /vmfs/volumes/<datastore>/<path> parsing. This
also fixes problems with GSX on Windows, because GSX on Windows
doesn't use /vmfs/volumes/-style file names.
For formatting, use the datastore mount path too, instead of using
/vmfs/volumes/<datastore>/<path> as a fixed format.
Introduce esxVMX_Context, containing function pointers to
glue both parts together in a generic way.
Move the ESX specific part to esx_driver.c.
This is a step towards making the VMX code reusable in a
potential VMware Workstation and VMware Player driver.
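A rough sketch of such a context (illustrative shape, not the exact struct): the callbacks let each driver supply its own file name handling while the parser/formatter stays generic.

typedef struct _esxVMX_Context esxVMX_Context;

struct _esxVMX_Context {
    void *opaque;  /* driver-private data passed to the callbacks */
    char *(*parseFileName)(const char *fileName, void *opaque);
    char *(*formatFileName)(const char *src, void *opaque);
};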
Don't rely on summary.url anymore, because its value is different
between an esx:// and vpx:// connection. Use host.mountInfo.path
instead.
Don't fall back to lookup by UUID (actually lookup by absolute path)
in esxVI_LookupDatastoreByName when lookup by name fails. Add a
separate function for this: esxVI_LookupDatastoreByAbsolutePath
Now a vpx:// connection has an explicitly specified host. This
allows several functions to be enabled again for a vpx://
connection, like host UUID, hostname, general node info, max vCPU
count, free memory, migration and defining new domains.
Lookup datacenter, compute resource, resource pool and host
system once and cache them. This simplifies the rest of the
code and reduces overall HTTP(S) traffic a bit.
esx:// and vpx:// can be mixed freely for a migration.
Ensure that migration source and destination refer to the
same vCenter. Also directly encode the resource pool and
host system object IDs into the migration URI in the prepare
function. Then directly build managed object references in
the perform function instead of re-looking up already known
information.
esxVI_WaitForTaskCompletion can take a UUID to look up the
corresponding domain and check if the current task for it
is blocked by a question. It calls another function to do
this: esxVI_LookupAndHandleVirtualMachineQuestion looks up
the VirtualMachine and checks for a question. If there is
a question, it calls esxVI_HandleVirtualMachineQuestion to
handle it.
If there was no question or it has been answered, the call
to esxVI_LookupAndHandleVirtualMachineQuestion returns 0.
If any error occurred during the lookup and answering
process, -1 is returned. The problem with this is that -1
is also returned when there was no error but the question
could not be answered. So esxVI_WaitForTaskCompletion cannot
distinguish between these two situations and reports that a
question is blocking the task even when there was actually
another problem.
This inherent problem didn't surface until vSphere 4.1, when
you try to define a new domain. The driver tries to look up
the domain that is just in the process of being registered.
There seems to be some kind of race condition and the driver
manages to issue a lookup command before the ESX server was
able to register the domain. This used to work before.
Due to the return value problem described above the driver
reported a false error message in that case.
To solve this, esxVI_WaitForTaskCompletion now takes an
additional occurrence parameter that describes whether or
not to expect the domain to exist. Also add a new
parameter to esxVI_LookupAndHandleVirtualMachineQuestion
that allows the caller to distinguish whether the call
returned -1 because of an actual error or because the
question could not be answered.
There is actually a difference between the character device type (serial,
parallel, channel, ...) and the target type (virtio, guestfwd). Currently
they are awkwardly conflated.
Start to pull them apart by renaming targetType -> deviceType. This is
an entirely mechanical change.
Signed-off-by: Cole Robinson <crobinso@redhat.com>
Add a pointer to the primary context of a connection and use it in all
driver functions that don't depend on the context type. This includes
almost all functions that deal with a virDomainPtr. Therefore, using
a vpx:// connection allows you to perform all the usual domain-related
actions like start, destroy, suspend, resume, dumpxml etc.
Some functions that require an explicitly specified ESX server don't work
yet. This includes the host UUID, the hostname, the general node info, the
max vCPU count and the free memory. Also not working yet are migration and
defining new domains.
Since 070f61002f the vcenter query
parameter has been ignored, because the refactoring to use
esxUtil_ParseQuery was incomplete. This effectively broke migration,
because the vcenter query parameter is essential for a migration.
Add the library entry point for the new virDomainQemuMonitorCommand()
API. Because this is not part of the "normal" libvirt API,
it gets its own header file and library file, and will eventually
get its own over-the-wire protocol later in the series.
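A sketch from the client's perspective (dom is assumed; the HMP flag is taken from current libvirt-qemu.h and may postdate this patch):

#include <libvirt/libvirt-qemu.h>

char *reply = NULL;

if (virDomainQemuMonitorCommand(dom, "info status", &reply,
                                VIR_DOMAIN_QEMU_MONITOR_COMMAND_HMP) == 0) {
    printf("%s\n", reply);
    free(reply);
}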
Changes since v1:
- Go back to using the virDriver table for qemuDomainMonitorCommand, due to
linking issues
- Added versioning information to the libvirt-qemu.so
Changes since v2:
- None
Changes since v3:
- Add LGPL header to libvirt-qemu.c
- Make virLibConnError and virLibDomainError macros instead of function calls
Changes since v4:
- Move exported symbols to libvirt_qemu.syms
Signed-off-by: Chris Lalancette <clalance@redhat.com>
Also don't abuse the disk driver name to specify the SCSI controller
model anymore:
<driver name='buslogic'/>
Use the newly added model attribute of the controller element for this:
<controller type='scsi' index='0' model='buslogic'/>
The disk driver name approach is deprecated now, but still works for
backward compatibility reasons.
Update the documentation and tests accordingly.
Fix usage of the words controller and id in the VMX handling code. Use
controller, bus and unit properly.
The domain XML parsing code autogenerates disk address and
controller elements when they are not explicitly specified.
The code assumes a narrow SCSI bus (7 units per bus). ESX
uses a wide SCSI bus (16 units per bus).
This is a step towards controller support for the ESX driver.
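Illustrative only, not code from this patch: mapping a flat disk index onto a wide bus, assuming the controller itself occupies unit 7 and must be skipped:

static void
diskIndexToAddress(int diskIndex, int *bus, int *unit)
{
    int usable = 16 - 1; /* wide bus, minus the controller's own slot */

    *bus = diskIndex / usable;
    *unit = diskIndex % usable;

    if (*unit >= 7)
        ++*unit; /* skip over the controller at unit 7 */
}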
Eliminate almost all backward jumps by replacing this common pattern:
int
some_random_function(void)
{
    int result = 0;
    ...
cleanup:
    <unconditional cleanup code>
    return result;
failure:
    <cleanup code in case of an error>
    result = -1;
    goto cleanup;
}
with this simpler pattern:
int
some_random_function(void)
{
    int result = -1;
    ...
    result = 0;
cleanup:
    if (result < 0) {
        <cleanup code in case of an error>
    }
    <unconditional cleanup code>
    return result;
}
Add a bool success variable in functions that don't have an int result
that can be used for the new pattern.
Also remove some unnecessary memsets in error paths.
Allows listing existing pools and requesting information about them.
Alter the esxVI_ProductVersion enum in a way that allows checking the
product type by masking.
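A sketch of such an encoding (values illustrative, not the actual enum): the product type lives in a high-bit mask and the concrete version in the low bits, so a product check becomes a single mask test.

enum {
    PRODUCT_GSX = (1 << 0) << 16,
    PRODUCT_ESX = (1 << 1) << 16,
    PRODUCT_VPX = (1 << 2) << 16,

    PRODUCT_ESX_35 = PRODUCT_ESX | 1,
    PRODUCT_ESX_40 = PRODUCT_ESX | 2,
};

/* usage: (version & PRODUCT_ESX) != 0 matches any ESX release */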
This defines the internal driver API and stubs out each driver
* src/driver.h: Define virDrvDomainGetBlockInfo signature
* src/libvirt.c, src/libvirt_public.syms: Glue public API to drivers
* src/esx/esx_driver.c, src/lxc/lxc_driver.c, src/opennebula/one_driver.c,
src/openvz/openvz_driver.c, src/phyp/phyp_driver.c,
src/test/test_driver.c, src/uml/uml_driver.c, src/vbox/vbox_tmpl.c,
src/xen/xen_driver.c, src/xenapi/xenapi_driver.c: Stub out driver
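Once drivers implement it, a client call would look like this (dom and the disk path are assumed; the struct fields follow the public header):

virDomainBlockInfo info;

if (virDomainGetBlockInfo(dom, "/var/lib/libvirt/images/guest.img",
                          &info, 0) == 0)
    printf("capacity=%llu allocation=%llu physical=%llu\n",
           info.capacity, info.allocation, info.physical);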