Store version numbers in the format
version = 1000000 * major + 1000 * minor + micro
as produced by virParseVersionString, instead of in dedicated enums.
Split the complex esxVI_ProductVersion enum into a simpler
esxVI_ProductLine enum and a product version number.
Relax the API and product version number checks to accept everything
that is equal to or greater than the supported minimum version. VMware
ESX went through 3 major versions and the vSphere API always stayed
backward compatible. This commit assumes that this will also be true
for future VMware ESX versions.
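A minimal sketch of the relaxed check; the macro name is illustrative
and the minimum version of 2.5 is just an example:

  /* "4.1.0" parsed by virParseVersionString yields
   * 4 * 1000000 + 1 * 1000 + 0 == 4001000 */
  #define ESX_VI_MIN_API_VERSION (2 * 1000000 + 5 * 1000) /* 2.5.0 */

  if (apiVersion < ESX_VI_MIN_API_VERSION) {
      /* report: expected API version >= 2.5, found something older */
      return -1;
  }

  /* anything equal to or newer than the minimum is accepted */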
Also reword error messages in esxConnectTo* to say what was expected
and what was found instead (suggested by Richard W.M. Jones).
This makes it possible to implement libvirt functions that use streams,
such as virDomainScreenshot, without having to store the downloaded data
in a temporary file first. The stream driver directly interacts with
libcurl to send and receive data.
The driver uses the libcurl multi interface, which allows a transfer to
be done across multiple curl_multi_perform() calls. The easy interface
would do the whole transfer in a single curl_easy_perform() call. This
doesn't work with the libvirt stream API, which is driven by multiple
calls to the virStreamSend() and virStreamRecv() functions.
The curl_multi_wait() function is used to do blocking operations, but it
was only added in libcurl 7.28.0. For older versions it is emulated
using the socket callback of the multi interface.
The current driver only supports blocking operations. There is already
some code in place for non-blocking mode, but it is not complete.
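A rough sketch of such a blocking transfer loop, independent of the
actual driver code (the URL is made up and error handling is omitted):

  #include <curl/curl.h>

  CURLM *multi = curl_multi_init();
  CURL *easy = curl_easy_init();
  int running = 1;

  curl_easy_setopt(easy, CURLOPT_URL, "https://esx.example.com/datastore");
  curl_multi_add_handle(multi, easy);

  while (running > 0) {
      /* make as much progress on the transfer as possible right now */
      curl_multi_perform(multi, &running);

      if (running > 0) {
          int numfds = 0;

          /* block until libcurl's sockets become ready, at most 1s;
           * requires libcurl >= 7.28.0, older versions have to use
           * the socket callback instead */
          curl_multi_wait(multi, NULL, 0, 1000, &numfds);
      }
  }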
If there should be some sort of separator, it is better to use a
comment with the filename, copyright, description, license information
and authors.
Found by:
git grep -nH '^$' | grep '\.[ch]:1:'
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
The ESX code has a method esxVI_Alloc which would call
virAllocN directly, instead of using the VIR_ALLOC_N
macro. Remove this method and make the callers just
use VIR_ALLOC as is normal practice.
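The change amounts to the following pattern ('item' standing in for
any caller-owned pointer):

  /* before: dedicated wrapper around virAllocN */
  if (esxVI_Alloc((void **)&item, sizeof(*item)) < 0)
      return -1;

  /* after: the standard libvirt allocation macro */
  if (VIR_ALLOC(item) < 0)
      return -1;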
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The patch adds a backend driver to support iSCSI format storage pools
and volumes for an ESX host. The mapping of ESX iSCSI specifics to
libvirt is as follows:
1. ESX static iSCSI target <------> Libvirt Storage Pools
2. ESX iSCSI LUNs <------> Libvirt Storage Volumes.
The above understanding is based on http://libvirt.org/storage.html.
The operations supported on iSCSI pools include:
1. List storage pools & volumes.
2. Get XML descriptor operation on pools & volumes.
3. Lookup operation on pools & volumes by name, UUID and path (if applicable).
iSCSI pools do not support operations such as: create / remove pools
and volumes.
The patch refactors the current ESX storage driver for the following reasons:
1. Given that most of the public APIs exposed by the storage driver in
libvirt remain the same, the ESX storage driver should not implement
logic specific to only one supported format (the current implementation
only supports VMFS).
2. Decoupling the interface from a specific storage implementation gives
us an extensible design to hook in implementations for other supported
storage formats.
This patch refactors the current driver to implement it as a facade
pattern, i.e. the driver exposes all the public libvirt APIs, but uses
backend drivers to get the required task done. The backend drivers
provide the implementation specific to the type of storage device; a
rough sketch follows the file overview below.
File changes:
------------------
esx_storage_driver.c ----> esx_storage_driver.c (base storage driver)
|
|---> esx_storage_backend_vmfs.c (VMFS backend)
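A minimal sketch of the facade idea, assuming a dispatch table of
function pointers; the type and helper names here are illustrative,
not the actual ones used in the driver:

  typedef struct _esxStorageBackend esxStorageBackend;

  struct _esxStorageBackend {
      int (*connectNumOfStoragePools)(virConnectPtr conn);
      virStoragePoolPtr (*storagePoolLookupByName)(virConnectPtr conn,
                                                   const char *name);
      /* ... one entry per public storage API ... */
  };

  /* the facade selects the matching backend and forwards the call */
  static virStoragePoolPtr
  esxStoragePoolLookupByName(virConnectPtr conn, const char *name)
  {
      esxStorageBackend *backend;

      backend = esxSelectBackend(conn, name); /* hypothetical helper */

      return backend->storagePoolLookupByName(conn, name);
  }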
Also remove warnings for upcoming versions. There haven't been any
compatibility problems with new ESX versions over the whole lifetime
of the ESX driver, so I don't expect any in the future.
Update documentation to mention vSphere 5.x support.
https://www.gnu.org/licenses/gpl-howto.html recommends that
the 'If not, see <url>.' phrase be a separate sentence.
* tests/securityselinuxhelper.c: Remove doubled line.
* tests/securityselinuxtest.c: Likewise.
* globally: s/; If/. If/
An ESX server has one or more PhysicalNics that represent the actual
hardware NICs. Those can be listed via the interface driver.
A libvirt virtual network is mapped to a HostVirtualSwitch. On the
physical side a HostVirtualSwitch can be connected to PhysicalNics.
On the virtual side a HostVirtualSwitch has HostPortGroups that are
mapped to a libvirt virtual network's portgroups. Typically there is a
HostPortGroup named 'VM Network' that is used to connect virtual
machines to a HostVirtualSwitch. A second HostPortGroup, typically
named 'Management Network', is used to connect the hypervisor itself
to the HostVirtualSwitch. This one is not mapped to a libvirt virtual
network's portgroup. There can be more HostPortGroups than those
typical two on a HostVirtualSwitch.
+---------------+-------------------+
...---| | | +-------------+
| HostPortGroup | |---| PhysicalNic |
| VM Network | | | vmnic0 |
...---| | | +-------------+
+---------------+ HostVirtualSwitch |
| vSwitch0 |
+---------------+ |
| HostPortGroup | |
...---| Management | |
| Network | |
+---------------+-------------------+
The virtual counterparts of the PhysicalNic are the HostVirtualNic for
the hypervisor and the VirtualEthernetCard for the virtual machines,
which are grouped into HostPortGroups.
+---------------------+ +---------------+---...
| VirtualEthernetCard |---| |
+---------------------+ | HostPortGroup |
+---------------------+ | VM Network |
| VirtualEthernetCard |---| |
+---------------------+ +---------------+
|
+---------------+
+---------------------+ | HostPortGroup |
| HostVirtualNic |---| Management |
+---------------------+ | Network |
+---------------+---...
The currently implemented network driver can list, define and undefine
HostVirtualSwitches including HostPortGroups for virtual machines.
Existing HostVirtualSwitches cannot be edited yet. This will be added
in a followup patch.
Lists available PhysicalNic devices. A PhysicalNic is always active
and can neither be defined nor undefined.
A PhysicalNic is used to bridge a HostVirtualSwitch to the physical
network.
Since the FSF address can change from time to time, GNU now recommends
the following (http://www.gnu.org/licenses/gpl-howto.html):
You should have received a copy of the GNU General Public License
along with Foobar. If not, see <http://www.gnu.org/licenses/>.
This patch removes the explicit FSF address and uses the above instead
(of course, inserting 'Lesser' before 'General' where appropriate).
Except for a bunch of security driver files, all others were changed
automatically; the copyright headers of the security files are not
complete, which is why they were done manually:
src/security/security_selinux.h
src/security/security_driver.h
src/security/security_selinux.c
src/security/security_apparmor.h
src/security/security_apparmor.c
src/security/security_driver.c
Update the ESX driver to use virReportError instead of
the ESX_ERROR & ESX_VI_ERROR custom macros
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Allow the datacenter and compute resource parts of the path
to be prefixed with folders. Therefore, the way the path is
parsed has changed. Before, it was split into 2 or 3 items and
the items' meanings were determined by their positions. Now
the path can have 2 or more items and the vCenter server
is asked whether a folder, datacenter or compute resource
with the specified name exists at the current hierarchy level.
Before, the datacenter and compute resource lookup automatically
traversed folders. This logic got removed and folders now have
to be specified explicitly, as in the example below.
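For example (with hypothetical names), a compute resource in a folder
is now addressed as

  vpx://vcenter.example.com/folder1/dc1/cluster1

where folder1 has to be given explicitly instead of being discovered
by automatic traversal.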
The proper datacenter path including folders is now used when
accessing a datastore over HTTPS. This makes virsh dumpxml
and define work for datacenters in folders.
https://bugzilla.redhat.com/show_bug.cgi?id=732676
Commit 9f5e53e introduced the ability to filter snapshots to
just roots, but it was never implemented for ESX until now.
* src/esx/esx_vi.h (esxVI_GetNumberOfSnapshotTrees)
(esxVI_GetSnapshotTreeNames): Add parameter.
* src/esx/esx_vi.c (esxVI_GetNumberOfSnapshotTrees)
(esxVI_GetSnapshotTreeNames): Allow choice of recursion or not.
* src/esx/esx_driver.c (esxDomainSnapshotNum)
(esxDomainSnapshotListNames): Use it to limit to roots.
When the session has expired, multiple threads can race while
reestablishing it.
This race condition is not that critical, as it requires a special
usage pattern to be triggered. It can only happen when an application
doesn't do API calls for quite some time (the session expires after
30 min of inactivity) and then multiple threads do simultaneous API
calls, ending up in simultaneous calls to esxVI_EnsureSession.
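The fix is to serialize reestablishing the session, roughly like this
sketch, where the lock field and helper names are made up:

  int esxVI_EnsureSession(esxVI_Context *ctx)
  {
      int result;

      virMutexLock(&ctx->sessionLock);   /* illustrative field name */

      if (esxSessionHasExpired(ctx))     /* hypothetical helper */
          result = esxRelogin(ctx);      /* hypothetical helper */
      else
          result = 0;

      virMutexUnlock(&ctx->sessionLock);

      return result;
  }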
Now the VMware driver doesn't depend on the ESX driver anymore.
Add a WITH_VMX option that depends on WITH_ESX and WITH_VMWARE.
Also add a libvirt_vmx.syms file.
Move some escaping functions from esx_util.c to vmx.c.
Adapt the test suite and the ESX and VMware drivers to the new code
layout.
Instead of just reporting that a task failed, get the localized
message from the TaskInfo error and include it in the reported
error message.
Implement minimal deserialization support for the
MethodFault type in order to obtain the actual fault
type.
For example, this changes the reported error message
when trying to create a volume with zero size from
Could not create volume
to
Could not create volume: InvalidArgument - A specified parameter was not correct.
Not perfect yet, but better than before.
Before this commit, SessionIsActive was not used because ESX(i)
doesn't implement it. vCenter supports SessionIsActive, so use
it here, but keep the fallback mechanism for ESX(i) and GSX.
QueryVirtualDiskUuid is only available on an ESX(i) server. vCenter
returns a NotImplemented fault and a GSX server is missing the
VirtualDiskManager completely. Therefore, only use QueryVirtualDiskUuid
with an ESX(i) server and fall back to the path as storage volume key
for vCenter and GSX servers.
VirtualDisks are .vmdk file based. Other files in a datastore,
like .iso or .flp files, don't have a UUID attached; fall back
to the path as key for them.
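The resulting key selection looks roughly like this (pseudocode, not
the literal driver code; the helper names are made up):

  if (esxVI_IsESXServer(ctx)) {            /* hypothetical check */
      /* ESX(i): a stable UUID is available for .vmdk files */
      key = queryVirtualDiskUuid(ctx, datastorePath);
  } else {
      /* vCenter and GSX: no usable UUID, use the datastore path */
      key = datastorePath;
  }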
Instead of using one big traversal spec for lookups, use a set of
more fine-grained traversal specs that are selected based on the
actual needs of the lookup.
This gives up to a 20% speedup for certain operations, like domain
listing, due to less HTTP(S) traffic.
Don't rely on summary.url anymore, because its value differs
between esx:// and vpx:// connections. Use host.mountInfo.path
instead.
Don't fall back to lookup by UUID (actually lookup by absolute path)
in esxVI_LookupDatastoreByName when lookup by name fails. Add a
separate function for this: esxVI_LookupDatastoreByAbsolutePath
Now a vpx:// connection has an explicitly specified host. This
allows several functions to be enabled again for a vpx:// connection,
like host UUID, hostname, general node info, max vCPU count, free
memory, migration and defining new domains.
Look up the datacenter, compute resource, resource pool and host
system once and cache them. This simplifies the rest of the
code and reduces overall HTTP(S) traffic a bit.
esx:// and vpx:// can be mixed freely for a migration.
Ensure that migration source and destination refer to the
same vCenter. Also directly encode the resource pool and
host system object IDs into the migration URI in the prepare
function. Then directly build managed object references in
the perform function instead of re-looking up already known
information.
esxVI_WaitForTaskCompletion can take a UUID to look up the
corresponding domain and check whether the current task for it
is blocked by a question. It calls another function to do
this: esxVI_LookupAndHandleVirtualMachineQuestion looks up
the VirtualMachine and checks for a question. If there is
a question, it calls esxVI_HandleVirtualMachineQuestion to
handle it.
If there was no question, or it has been answered, the call
to esxVI_LookupAndHandleVirtualMachineQuestion returns 0.
If any error occurred during the lookup and answering
process, -1 is returned. The problem with this is that -1
is also returned when there was no error but the question
could not be answered. So esxVI_WaitForTaskCompletion cannot
distinguish between these two situations and reports that a
question is blocking the task even when there was actually
another problem.
This inherent problem didn't surface until vSphere 4.1, when
trying to define a new domain. The driver tries to look up
the domain that is just in the process of being registered.
There seems to be some kind of race condition, and the driver
manages to issue a lookup command before the ESX server has
registered the domain. This used to work before.
Due to the return value problem described above, the driver
reported a false error message in that case.
To solve this, esxVI_WaitForTaskCompletion now takes an
additional occurrence parameter that describes whether or
not the domain is expected to exist. Also add a new
parameter to esxVI_LookupAndHandleVirtualMachineQuestion
that allows distinguishing whether the call returned -1
because of an actual error or because the question could
not be answered.
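Sketched as a signature (parameter names and types are illustrative,
the actual ones may differ):

  int
  esxVI_LookupAndHandleVirtualMachineQuestion(esxVI_Context *ctx,
                                              const unsigned char *uuid,
                                              esxVI_Occurrence occurrence,
                                              bool autoAnswer,
                                              bool *blocked);

  /* returns -1 with *blocked set to true when the question could not
   * be answered, and -1 with *blocked set to false on a real error,
   * so esxVI_WaitForTaskCompletion can tell the two cases apart */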
Allows listing existing pools and requesting information about them.
Alter the esxVI_ProductVersion enum in a way that allows checking for
the product type by masking.
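For example, something along these lines, where the concrete values
are illustrative:

  enum _esxVI_ProductVersion {
      /* one bit per product type, shifted past the version numbers */
      esxVI_ProductVersion_GSX = (1 << 0) << 16,
      esxVI_ProductVersion_ESX = (1 << 1) << 16,
      esxVI_ProductVersion_VPX = (1 << 2) << 16,

      esxVI_ProductVersion_ESX40 = esxVI_ProductVersion_ESX | 1,
      esxVI_ProductVersion_ESX41 = esxVI_ProductVersion_ESX | 2,
  };

  /* matches any ESX version regardless of the concrete release */
  if (ctx->productVersion & esxVI_ProductVersion_ESX) {
      /* ESX-specific code path */
  }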