Now that every caller is using virDomainObjListFindByUUIDRef,
let's just remove the old non-Ref implementation and rename the
Ref variant back to virDomainObjListFindByUUID.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Jim Fehlig <jfehlig@suse.com>
For vmwareDomObjFromDomainLocked and vmwareDomainLookupByID,
let's return a locked and referenced @vm object so that callers
can then use the common and more consistent virDomainObjEndAPI
to handle cleanup, rather than needing to know that the returned
object is only locked and to call virObjectUnlock themselves.
The LookupByName already returns the ref counted and locked object,
so this will make things more consistent.
For vmwareDomainUndefineFlags and vmwareDomainShutdownFlags, since
virDomainObjListRemove returns an unlocked object, we need to
relock before making the EndAPI call.
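For illustration, the resulting caller pattern then looks roughly like
this (a sketch only; everything except the libvirt APIs is illustrative):
    static int
    vmwareDomainSomeAPI(virDomainPtr dom)
    {
        struct vmware_driver *driver = dom->conn->privateData;
        virDomainObjPtr vm = NULL;
        int ret = -1;

        /* returns @vm locked and referenced */
        if (!(vm = vmwareDomObjFromDomainLocked(driver, dom->uuid)))
            return -1;

        /* ... perform the API's real work on @vm ... */
        ret = 0;

        /* one call unlocks, unrefs and clears @vm */
        virDomainObjEndAPI(&vm);
        return ret;
    }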
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
If vmwareDomainLookupByID or vmwareDomainLookupByName fails
to find a vm, let's be a bit more descriptive by providing
the failing id or name in the error message.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Rather than repeating code throughout, create and use a couple of
accessors to look up a domain by UUID.
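A sketch of what such an accessor can look like (shape and error
details are illustrative, not the exact committed code):
    static virDomainObjPtr
    vmwareDomObjFromDomainLocked(struct vmware_driver *driver,
                                 const unsigned char *uuid)
    {
        virDomainObjPtr vm;
        char uuidstr[VIR_UUID_STRING_BUFLEN];

        if (!(vm = virDomainObjListFindByUUID(driver->domains, uuid))) {
            virUUIDFormat(uuid, uuidstr);
            virReportError(VIR_ERR_NO_DOMAIN,
                           _("no domain with matching uuid '%s'"), uuidstr);
            return NULL;
        }

        return vm;
    }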
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
virDomainObjListFindByName returns a locked and reffed domain
object; all we did was unlock it, leaving an extra ref. Use
virDomainObjEndAPI to clean up instead.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
The vmware driver wants to execute vmware-vmx from the same directory in
which vmrun was found. However, on VMware Fusion 10 vmrun at
/Applications/VMware Fusion.app/Contents/Public/vmrun is a symlink
pointing to ../Library/vmrun. vmware-vmx cannot be found, as
it is not in PATH, but only in this Library directory.
Therefore, follow the vmrun symlink and use the resulting path. Then the
assumption that vmware-vmx is right next to it will still work.
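A minimal sketch of the idea using plain libc/POSIX calls for
illustration (the driver itself uses libvirt's own path helpers):
    #define _GNU_SOURCE   /* for asprintf on glibc */
    #include <limits.h>
    #include <stdlib.h>
    #include <stdio.h>
    #include <libgen.h>

    /* Resolve the vmrun symlink first, then look for vmware-vmx next
     * to the resolved binary rather than next to the symlink itself. */
    static char *
    vmwareVmxPathFromVmrun(const char *vmrunPath)
    {
        char resolved[PATH_MAX];
        char *vmxPath = NULL;

        if (!realpath(vmrunPath, resolved))
            return NULL;

        /* dirname() may modify its argument; @resolved is our own copy */
        if (asprintf(&vmxPath, "%s/vmware-vmx", dirname(resolved)) < 0)
            return NULL;

        return vmxPath;
    }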
Signed-off-by: Rainer Müller <raimue@codingfarm.de>
Avoid the need for the drivers to explicitly check for a NULL path by
making sure it is at least the empty string.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Ensuring that we don't call the virDrvConnectOpen method with a NULL URI
means that the drivers can drop various checks for NULL URIs. These
have not been needed since the probe functionality was split out.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Declare what URI schemes a driver supports in its virConnectDriver
struct. This allows us to skip trying to open the driver entirely
if the URI scheme doesn't match.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Add a localOnly flag to the virConnectDriver struct which allows a
driver to indicate whether it is local-only, or permits remote
connections. Stateful drivers running inside libvirtd are generally
local only. This allows us to remove the check for uri->server != NULL
from most drivers.
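For example, a driver could now declare something like the following,
together with the uriSchemes field added earlier (the scheme list shown
is an assumption, not copied from the tree):
    static virConnectDriver vmwareConnectDriver = {
        .localOnly = true,      /* reject URIs with a server part */
        .uriSchemes = (const char *[]){ "vmwareplayer", "vmwarews",
                                        "vmwarefusion", NULL },
        .hypervisorDriver = &vmwareHypervisorDriver,
    };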
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
virDomainXMLOption gains driver specific callbacks for parsing and
formatting save cookies.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
While checking for ABI stability, drivers might impose additional
checks that are not valid for the general case. For instance, the qemu
driver might check some memory backing attributes because of how qemu
works, but those attributes may work well with other drivers.
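A hedged sketch of what such a driver-specific check could look like
(the hook name and the field compared are assumptions):
    static bool
    qemuDomainABIStability(const virDomainDef *src,
                           const virDomainDef *dst)
    {
        /* qemu cares whether the memory backing source changed;
         * other drivers may tolerate this just fine */
        if (src->mem.source != dst->mem.source) {
            virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                           _("target memoryBacking source does not match source"));
            return false;
        }

        return true;
    }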
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
So far our code is full of the following pattern:
    dom = virGetDomain(conn, name, uuid);
    if (dom)
        dom->id = 42;
There is no reason why it couldn't be just:
    dom = virGetDomain(conn, name, uuid, id);
After all, the client representation of a domain consists of the tuple
(name, uuid, id).
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
A newline character in a domain name is now forbidden because it
messes up virsh output and can be confusing for users.
Validation of the name is done in the drivers, after parsing the XML,
to avoid making already existing domains that were created with a
newline character in their name disappear.
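The check the drivers gain boils down to something like this (function
name and error code are illustrative):
    #include <string.h>

    static int
    checkDomainNameHasNoNewline(virDomainDefPtr def)
    {
        if (strchr(def->name, '\n')) {
            virReportError(VIR_ERR_XML_ERROR, "%s",
                           _("domain name must not contain a newline character"));
            return -1;
        }

        return 0;
    }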
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Just like virDomainDefPostParseCallback has gained a new
parseOpaque argument, we need to follow suit with
virDomainDeviceDefPostParse.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
We want to pass the proper opaque pointer instead of NULL to
virDomainDefParse and subsequently virDomainDefParseNode too.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Some callers might want to pass yet another pointer to opaque
data to post parse callbacks. The driver generic one is not
enough because two threads executing post parse callback might
want to see different data (e.g. domain object pointer that
domain def belongs to).
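A sketch of how a post-parse callback can use the new argument (the
exact signature shown is an assumption):
    static int
    qemuDomainDefPostParse(virDomainDefPtr def,
                           virCapsPtr caps ATTRIBUTE_UNUSED,
                           unsigned int parseFlags ATTRIBUTE_UNUSED,
                           void *opaque ATTRIBUTE_UNUSED,  /* still the driver struct */
                           void *parseOpaque)
    {
        virDomainObjPtr vm = parseOpaque;   /* per-call data, e.g. the domain
                                             * object that @def belongs to */

        if (!vm) {
            /* some callers have no domain object yet, e.g. at define time */
            return 0;
        }

        /* ... adjustments that need to know which object @def belongs to,
         * for example comparing vm->def with a newly parsed @def ... */
        return 0;
    }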
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
The virConnectOpenInternal method opens the libvirt client
config file and uses it to resolve things like URI aliases.
There may be driver specific things that are useful to
store in the config file too, so rather than have them
re-parse the same file, pass the virConfPtr down to the
drivers.
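A hedged sketch of a driver open callback using the passed config (the
setting name is hypothetical):
    static virDrvOpenStatus
    exampleConnectOpen(virConnectPtr conn,
                       virConnectAuthPtr auth ATTRIBUTE_UNUSED,
                       virConfPtr conf,
                       unsigned int flags)
    {
        char *setting = NULL;

        virCheckFlags(VIR_CONNECT_RO, VIR_DRV_OPEN_ERROR);

        /* no need to re-open and re-parse the client config file */
        if (conf &&
            virConfGetValueString(conf, "example_default_uri", &setting) < 0)
            return VIR_DRV_OPEN_ERROR;

        /* ... use @setting if present, then set up conn->privateData ... */
        VIR_FREE(setting);
        return VIR_DRV_OPEN_SUCCESS;
    }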
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Introduce a helper to check supported device and domain config and move
the memory hotplug checks to it.
The advantage of this approach is that, by default, all new features
are considered unsupported by all hypervisors unless specifically
enabled, rather than the previous approach where every hypervisor
would need to declare that a given feature is unsupported.
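A hedged sketch of the shape of such a helper (the feature flag and
helper names are assumptions):
    /* hypothetical per-driver capability bit */
    #define EXAMPLE_FEATURE_MEMORY_HOTPLUG (1 << 0)

    static int
    exampleCheckUnsupportedConfig(const virDomainDef *def,
                                  unsigned int supportedFeatures)
    {
        /* anything a driver did not explicitly enable is rejected */
        if (def->mem.max_memory &&
            !(supportedFeatures & EXAMPLE_FEATURE_MEMORY_HOTPLUG)) {
            virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                           _("memory hotplug isn't supported by this driver"));
            return -1;
        }

        return 0;
    }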
And use the newly added caps->host.netprefix (if it exists) for
interface names that match the autogenerated target names.
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
This change ensures that driver-specific post-parse code is called to
modify the domain definition after parsing the hypervisor's native
config, the same way we do after parsing XML.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=871452
So, you want to create a domain from XML. The domain already
exists in libvirt's database of domains. That's okay, because the name
and UUID match. However, on domain startup, the internal
representation of the domain is overwritten with your XML even
though we claim that the XML you've provided is a transient one.
The bug is to be found across nearly all the drivers.
Le sigh.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=871452
Okay, so we allow users to 'virsh create' an already existing
domain, providing completely different XML than the one stored in
Libvirt. Well, as long as the name and UUID match. However, in some
drivers the code that handles errors unconditionally removes the
domain that failed to start even though the domain might have
been persistent. Fortunately, the domain is removed just from the
internal list of domains and the config file is kept around.
Steps to reproduce:
1) virsh dumpxml $dom > /tmp/dom.xml
2) change XML so that it is still parse-able but won't boot, e.g.
change guest agent path to /foo/bar
3) virsh create /tmp/dom.xml
4) virsh dumpxml $dom
5) Observe "No such domain" error
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Tools such as libguestfs need the datacenter path to get access to disk
images. The ESX driver knows the correct datacenter path, but this
information cannot be accessed via the libvirt API yet. Also, it cannot
be deduced from the connection URI in a robust way.
Expose the datacenter path in the domain XML as <vmware:datacenterpath>
node similar to the way the <qemu:commandline> node works. The new node
is ignored while parsing the domain XML. In contrast to <qemu:commandline>
it is output only.
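For example, the formatted XML could contain something like this (the
namespace URI shown is an assumption):
    <domain type='vmware' xmlns:vmware='http://libvirt.org/schemas/domain/vmware/1.0'>
      ...
      <vmware:datacenterpath>folder1/datacenter1</vmware:datacenterpath>
    </domain>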
This needs to be specified in way too many places for a simple validation
check. The ostype/arch/virttype validation checks later in
DomainDefParseXML should catch most of the cases that this was covering.
Add an XML element that will allow specifying the maximum supportable
memory and the count of memory slots to use with memory hotplug.
To avoid possible confusion and misuse of the new element, this patch
also explicitly forbids the use of the maxMemory setting in individual
drivers' post-parse callbacks. This limitation will be lifted when the
support is implemented.
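For example (values are illustrative):
    <domain type='kvm'>
      ...
      <maxMemory slots='16' unit='KiB'>16777216</maxMemory>
      <memory unit='KiB'>4194304</memory>
      ...
    </domain>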
As there are two possible approaches to define a domain's memory size -
one used with legacy, non-NUMA VMs configured via the <memory> element,
and a per-node based approach on NUMA machines - the user needs to make
sure that both are specified correctly in the NUMA case.
To avoid this burden on the user I'd like to replace the NUMA case with
automatic totaling of the memory size. To achieve this I need to replace
direct access to the virDomainMemtune's 'max_balloon' field with
two separate getters depending on the desired size.
The two sizes are needed as:
1) Startup memory size doesn't include memory modules in some
hypervisors.
2) After startup these count as the usable memory size.
Note that the comments for the functions are future aware and document
state that will be present after a few later patches.
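A hedged sketch of the two accessors (names and exact semantics are
assumptions based on the description above):
    unsigned long long
    virDomainDefGetMemoryInitial(const virDomainDef *def)
    {
        /* startup size only; memory modules are not included */
        return def->mem.max_balloon;
    }

    unsigned long long
    virDomainDefGetMemoryActual(const virDomainDef *def)
    {
        size_t i;
        unsigned long long ret = virDomainDefGetMemoryInitial(def);

        /* once the guest is running, plugged modules add to usable memory */
        for (i = 0; i < def->nmems; i++)
            ret += def->mems[i]->size;

        return ret;
    }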
In each driver that doesn't support managed save, return 0 instead
of ERR_NO_SUPPORT, or -1 if the domain does not exist.
This avoids spamming daemon logs when 'virsh dominfo' is run.
https://bugzilla.redhat.com/show_bug.cgi?id=1095637
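The per-driver change boils down to a pattern like this (the lookup
helper name is illustrative):
    static int
    exampleDomainHasManagedSaveImage(virDomainPtr dom, unsigned int flags)
    {
        virDomainObjPtr vm;

        virCheckFlags(0, -1);

        if (!(vm = exampleDomObjFromDomain(dom)))
            return -1;                /* domain does not exist */

        virDomainObjEndAPI(&vm);
        return 0;                     /* no managed save image, but no error */
    }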
For stateless, client side drivers, it is never correct to
probe for secondary drivers. It is only ever appropriate to
use the secondary driver that is associated with the
hypervisor in question. As a result the ESX & HyperV drivers
have both been forced to do hacks where they register no-op
drivers for the ones they don't implement.
For stateful, server side drivers, we always just want to
use the same built-in shared driver. The exception is
virtualbox which is really a stateless driver and so wants
to use its own server side secondary drivers. To deal with
this virtualbox has to be built as 3 separate loadable
modules to allow registration to work in the right order.
This can all be simplified by introducing a new struct
recording the precise set of secondary drivers each
hypervisor driver wants:
    struct _virConnectDriver {
        virHypervisorDriverPtr hypervisorDriver;
        virInterfaceDriverPtr interfaceDriver;
        virNetworkDriverPtr networkDriver;
        virNodeDeviceDriverPtr nodeDeviceDriver;
        virNWFilterDriverPtr nwfilterDriver;
        virSecretDriverPtr secretDriver;
        virStorageDriverPtr storageDriver;
    };
Instead of registering the hypervisor driver, we now
just register a virConnectDriver. This allows
us to remove all probing of secondary drivers. Once we
have chosen the primary driver, we immediately know the
correct secondary drivers to use.
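A hedged example of the new registration pattern (the driver and helper
names are illustrative):
    static virConnectDriver exampleConnectDriver = {
        .hypervisorDriver = &exampleHypervisorDriver,
        /* secondary driver fields left NULL */
    };

    int
    exampleRegister(void)
    {
        /* a stateful, in-daemon driver asks for the shared secondary
         * drivers to be filled in; a stateless one would pass false */
        return virRegisterConnectDriver(&exampleConnectDriver, true);
    }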
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The virDomainDefineXMLFlags and virDomainCreateXML APIs both
gain new flags allowing them to be told to validate XML.
This updates all the drivers to turn on validation in the
XML parser when the flags are set.
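From the client side the new public flags are used like this (error
handling omitted):
    #include <libvirt/libvirt.h>

    static void
    createAndDefineValidated(virConnectPtr conn, const char *xml)
    {
        /* validate the XML while creating a transient domain ... */
        virDomainPtr dom = virDomainCreateXML(conn, xml,
                                              VIR_DOMAIN_START_VALIDATE);

        /* ... or while defining a persistent one */
        virDomainPtr dom2 = virDomainDefineXMLFlags(conn, xml,
                                                    VIR_DOMAIN_DEFINE_VALIDATE);

        virDomainFree(dom);
        virDomainFree(dom2);
    }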
The virDomainDefParse* and virDomainDefFormat* methods both
accept the VIR_DOMAIN_XML_* flags defined in the public API,
along with a set of other VIR_DOMAIN_XML_INTERNAL_* flags
defined in domain_conf.c.
This is seriously confusing & error prone for a number of
reasons:
- VIR_DOMAIN_XML_SECURE, VIR_DOMAIN_XML_MIGRATABLE and
VIR_DOMAIN_XML_UPDATE_CPU are only relevant for the
formatting operation
- Some of the VIR_DOMAIN_XML_INTERNAL_* flags only apply
to parse or to format, but not both.
This patch cleanly separates out the flags. There are two
distinct VIR_DOMAIN_DEF_PARSE_* and VIR_DOMAIN_DEF_FORMAT_*
flags that are used by the corresponding methods. The
VIR_DOMAIN_XML_* flags received via public API calls must
be converted to the VIR_DOMAIN_DEF_FORMAT_* flags where
needed.
The various calls to virDomainDefParse which hardcoded the
use of the VIR_DOMAIN_XML_INACTIVE flag change to use the
VIR_DOMAIN_DEF_PARSE_INACTIVE flag.
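The conversion on the format path is essentially the following (the
helper name is an assumption):
    static unsigned int
    convertDomainXMLFlags(unsigned int xmlFlags)
    {
        unsigned int formatFlags = 0;

        if (xmlFlags & VIR_DOMAIN_XML_SECURE)
            formatFlags |= VIR_DOMAIN_DEF_FORMAT_SECURE;
        if (xmlFlags & VIR_DOMAIN_XML_MIGRATABLE)
            formatFlags |= VIR_DOMAIN_DEF_FORMAT_MIGRATABLE;
        if (xmlFlags & VIR_DOMAIN_XML_UPDATE_CPU)
            formatFlags |= VIR_DOMAIN_DEF_FORMAT_UPDATE_CPU;
        if (xmlFlags & VIR_DOMAIN_XML_INACTIVE)
            formatFlags |= VIR_DOMAIN_DEF_FORMAT_INACTIVE;

        return formatFlags;
    }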
To prepare for introducing a single global driver, rename the
virDriver struct to virHypervisorDriver and the registration
API to virRegisterHypervisorDriver().
Rather than walking the possible driver backends by handle, use a helper
function. Additionally I've done a bit of refactoring in the code over
the past few commits so add myself to the copyright line.
Currently the VMware version check code only supports two types of
VMware backends, Workstation and Player. But in the near future we will
have an additional one so we need to support more. Additionally, we
discover and cache the path to the vmrun binary so we should use that
path when using the corresponding binary from the VMware VIX SDK.
Rather than looking up the path to vmrun each time we call it, look it
up once and save it. This sets up the ability for us to detect where the
path is on Mac OS X and not have to look it up each time we execute it.
The VMware driver supports multiple backends, VMware Player and
VMware Workstation. Convert this logic into an enum and use
VIR_ENUM_IMPL() to provide conversions to and from strings.
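The conversion looks roughly like this (the exact enum constants and
strings are assumptions):
    typedef enum {
        VMWARE_DRIVER_PLAYER,       /* VMware Player */
        VMWARE_DRIVER_WORKSTATION,  /* VMware Workstation */

        VMWARE_DRIVER_LAST
    } vmwareDriverType;

    VIR_ENUM_DECL(vmwareDriver);
    VIR_ENUM_IMPL(vmwareDriver, VMWARE_DRIVER_LAST,
                  "player",
                  "ws");

    /* vmwareDriverTypeToString()/vmwareDriverTypeFromString() now come for free */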