libvirt/src/libvirt_internal.h

/*
 * libvirt_internal.h: internally exported APIs, not for public use
 *
 * Copyright (C) 2006-2014 Red Hat, Inc.
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this library. If not, see
 * <http://www.gnu.org/licenses/>.
 *
 * NB This file is ABI sensitive. Things here impact the wire
 * protocol ABI in the remote driver. Same rules as for things in
 * include/libvirt/libvirt.h apply, i.e. this file is *append* only.
 */

#pragma once

#include "internal.h"

typedef void (*virStateInhibitCallback)(bool inhibit,
                                        void *opaque);

int virStateInitialize(bool privileged,
                       bool mandatory,
                       const char *root,
                       bool monolithic,
                       virStateInhibitCallback inhibit,
                       void *opaque);
int virStateShutdownPrepare(void);
int virStateShutdownWait(void);
int virStateCleanup(void);
int virStateReload(void);
int virStateStop(void);
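
/*
 * Illustrative only: a minimal sketch of how a daemon might drive the state
 * driver entry points declared above.  The example* names are hypothetical,
 * the specific argument values are arbitrary, and "< 0 means failure" is an
 * assumption based on the int return types, not a contract stated here.
 */
#if 0
static void exampleInhibit(bool inhibit, void *opaque)
{
    /* e.g. take or release an inhibitor lock while domains are running */
    (void)inhibit;
    (void)opaque;
}

static int exampleDaemonStateLifecycle(void)
{
    /* privileged, mandatory, root and monolithic are deployment choices */
    if (virStateInitialize(true, false, NULL, false,
                           exampleInhibit, NULL) < 0)
        return -1;

    /* ... serve requests; virStateReload()/virStateStop() as needed ... */

    if (virStateShutdownPrepare() < 0 ||
        virStateShutdownWait() < 0) {
        virStateCleanup();
        return -1;
    }

    return virStateCleanup();
}
#endif
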
/* Feature detection. This is a libvirt-private interface for determining
 * what features are supported by the driver.
 *
 * The remote driver passes features through to the real driver at the
 * remote end unmodified, except if you query a VIR_DRV_FEATURE_REMOTE*
 * feature. Queries for VIR_DRV_FEATURE_PROGRAM* features are answered
 * directly by the RPC layer and not by the real driver.
 */
typedef enum {
    /* Driver supports V1-style virDomainMigrate, ie. domainMigratePrepare/
     * domainMigratePerform/domainMigrateFinish.
     */
    VIR_DRV_FEATURE_MIGRATION_V1 = 1,

    /* Driver is not local. */
    VIR_DRV_FEATURE_REMOTE = 2,

    /* Driver supports V2-style virDomainMigrate, ie. domainMigratePrepare2/
     * domainMigratePerform/domainMigrateFinish2.
     */
    VIR_DRV_FEATURE_MIGRATION_V2 = 3,
    /* Driver supports peer-2-peer virDomainMigrate, i.e. the source host
     * does all the prepare/perform/finish steps directly.
     */
    VIR_DRV_FEATURE_MIGRATION_P2P = 4,

    /* Driver supports migration with only the source host involved,
     * no libvirtd connections on the destination at all, only the
     * perform step is used.
     */
    VIR_DRV_FEATURE_MIGRATION_DIRECT = 5,
    /*
     * Driver supports V3-style virDomainMigrate, ie domainMigrateBegin3/
     * domainMigratePrepare3/domainMigratePerform3/domainMigrateFinish3/
     * domainMigrateConfirm3.
     */
    VIR_DRV_FEATURE_MIGRATION_V3 = 6,

    /*
     * Driver supports protecting the whole V3-style migration against changes
     * to domain configuration, i.e., starting from Begin3 and not Perform3.
     */
    VIR_DRV_FEATURE_MIGRATE_CHANGE_PROTECTION = 7,

    /*
     * Support for file descriptor passing
     */
    VIR_DRV_FEATURE_FD_PASSING = 8,

    /*
     * Support for VIR_TYPED_PARAM_STRING
     */
    VIR_DRV_FEATURE_TYPED_PARAM_STRING = 9,

    /*
     * Remote party supports keepalive program (i.e., sending keepalive
     * messages).
     */
    VIR_DRV_FEATURE_PROGRAM_KEEPALIVE = 10,

    /*
     * Support for VIR_DOMAIN_XML_MIGRATABLE flag in domainGetXMLDesc
     */
    VIR_DRV_FEATURE_XML_MIGRATABLE = 11,

    /*
     * Support for offline migration.
     */
    VIR_DRV_FEATURE_MIGRATION_OFFLINE = 12,

    /*
     * Support for migration parameters.
     */
    VIR_DRV_FEATURE_MIGRATION_PARAMS = 13,
    /*
     * Support for server-side event filtering via callback ids in events.
     */
    VIR_DRV_FEATURE_REMOTE_EVENT_CALLBACK = 14,

    /*
     * Support for driver close callback rpc
     */
    VIR_DRV_FEATURE_REMOTE_CLOSE_CALLBACK = 15,
    /*
     * Whether the virNetworkUpdate() API implementation passes arguments to
     * the driver's callback in the correct order.
     */
    VIR_DRV_FEATURE_NETWORK_UPDATE_HAS_CORRECT_ORDER = 16,
} virDrvFeature;

int virConnectSupportsFeature(virConnectPtr conn, int feature);
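
/*
 * Illustrative only: how code inside libvirt might probe one of the
 * virDrvFeature values before relying on it.  That a positive return means
 * "supported" and zero or a negative value means "not supported / error" is
 * an assumption about virConnectSupportsFeature(), not something this header
 * guarantees; exampleCanMigrateP2P is a hypothetical helper.
 */
#if 0
static bool exampleCanMigrateP2P(virConnectPtr conn)
{
    int rc = virConnectSupportsFeature(conn, VIR_DRV_FEATURE_MIGRATION_P2P);

    /* Treat an error the same as "feature not available". */
    return rc > 0;
}
#endif
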
int virDomainMigrateCheckNotLocal(const char *dconnuri);
int virDomainMigratePrepare (virConnectPtr dconn,
                             char **cookie,
                             int *cookielen,
                             const char *uri_in,
                             char **uri_out,
                             unsigned long flags,
                             const char *dname,
                             unsigned long resource);
int virDomainMigratePerform (virDomainPtr domain,
                             const char *cookie,
                             int cookielen,
                             const char *uri,
                             unsigned long flags,
                             const char *dname,
                             unsigned long resource);
virDomainPtr virDomainMigrateFinish (virConnectPtr dconn,
                                     const char *dname,
                                     const char *cookie,
                                     int cookielen,
                                     const char *uri,
                                     unsigned long flags);

int virDomainMigratePrepare2 (virConnectPtr dconn,
                              char **cookie,
                              int *cookielen,
                              const char *uri_in,
                              char **uri_out,
                              unsigned long flags,
                              const char *dname,
                              unsigned long resource,
                              const char *dom_xml);

virDomainPtr virDomainMigrateFinish2 (virConnectPtr dconn,
                                      const char *dname,
                                      const char *cookie,
                                      int cookielen,
                                      const char *uri,
                                      unsigned long flags,
                                      int retcode);

int virDomainMigratePrepareTunnel(virConnectPtr dconn,
                                  virStreamPtr st,
                                  unsigned long flags,
                                  const char *dname,
                                  unsigned long resource,
                                  const char *dom_xml);

char *virDomainMigrateBegin3(virDomainPtr domain,
                             const char *xmlin,
                             char **cookieout,
                             int *cookieoutlen,
                             unsigned long flags,
                             const char *dname,
                             unsigned long resource);

int virDomainMigratePrepare3(virConnectPtr dconn,
                             const char *cookiein,
                             int cookieinlen,
                             char **cookieout,
                             int *cookieoutlen,
                             const char *uri_in,
                             char **uri_out,
                             unsigned long flags,
                             const char *dname,
                             unsigned long resource,
                             const char *dom_xml);

int virDomainMigratePrepareTunnel3(virConnectPtr dconn,
                                   virStreamPtr st,
                                   const char *cookiein,
                                   int cookieinlen,
                                   char **cookieout,
                                   int *cookieoutlen,
                                   unsigned long flags,
                                   const char *dname,
                                   unsigned long resource,
                                   const char *dom_xml);

int virDomainMigratePerform3(virDomainPtr dom,
                             const char *xmlin,
                             const char *cookiein,
                             int cookieinlen,
                             char **cookieout,
                             int *cookieoutlen,
                             const char *dconnuri, /* libvirtd URI if Peer2Peer, NULL otherwise */
                             const char *uri, /* VM Migration URI */
                             unsigned long flags,
                             const char *dname,
                             unsigned long resource);

virDomainPtr virDomainMigrateFinish3(virConnectPtr dconn,
                                     const char *dname,
                                     const char *cookiein,
                                     int cookieinlen,
                                     char **cookieout,
                                     int *cookieoutlen,
                                     const char *dconnuri, /* libvirtd URI if Peer2Peer, NULL otherwise */
                                     const char *uri, /* VM Migration URI, NULL in tunnelled case */
                                     unsigned long flags,
                                     int cancelled); /* Kill the dst VM */

int virDomainMigrateConfirm3(virDomainPtr domain,
                             const char *cookiein,
                             int cookieinlen,
                             unsigned long flags,
                             int restart); /* Restart the src VM */
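
/*
 * Illustrative only: the control flow that ties the V3 calls above together,
 * matching the description of VIR_DRV_FEATURE_MIGRATION_V3 (the source runs
 * Begin3/Perform3/Confirm3, the destination runs Prepare3/Finish3, with each
 * step's output cookie feeding the next step).  This is a happy-path sketch:
 * error recovery (Finish3 with cancelled != 0, Confirm3 with restart != 0),
 * cookie/memory cleanup and the p2p/tunnelled variants are omitted, and
 * exampleMigrateV3 is not libvirt's real orchestration code.
 */
#if 0
static virDomainPtr exampleMigrateV3(virDomainPtr dom, virConnectPtr dconn,
                                     unsigned long flags, const char *dname,
                                     unsigned long resource)
{
    char *xml = NULL, *uri_out = NULL;
    char *c1 = NULL, *c2 = NULL, *c3 = NULL, *c4 = NULL;
    int l1 = 0, l2 = 0, l3 = 0, l4 = 0;
    virDomainPtr ddom;

    /* Src: Begin3 - produce the domain XML and the first cookie */
    if (!(xml = virDomainMigrateBegin3(dom, NULL, &c1, &l1,
                                       flags, dname, resource)))
        return NULL;

    /* Dst: Prepare3 - get ready to receive, possibly proposing a URI */
    if (virDomainMigratePrepare3(dconn, c1, l1, &c2, &l2,
                                 NULL, &uri_out, flags,
                                 dname, resource, xml) < 0)
        return NULL;

    /* Src: Perform3 - move the data to the URI chosen by Prepare3 */
    if (virDomainMigratePerform3(dom, NULL, c2, l2, &c3, &l3,
                                 NULL, uri_out, flags, dname, resource) < 0)
        return NULL;

    /* Dst: Finish3 - cancelled == 0, so resume the incoming domain */
    ddom = virDomainMigrateFinish3(dconn, dname, c3, l3, &c4, &l4,
                                   NULL, uri_out, flags, 0);

    /* Src: Confirm3 - restart == 0, so the source domain is not resumed */
    virDomainMigrateConfirm3(dom, c4, l4, flags, 0);
    return ddom;
}
#endif
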
char *virDomainMigrateBegin3Params(virDomainPtr domain,
                                   virTypedParameterPtr params,
                                   int nparams,
                                   char **cookieout,
                                   int *cookieoutlen,
                                   unsigned int flags);

int virDomainMigratePrepare3Params(virConnectPtr dconn,
                                   virTypedParameterPtr params,
                                   int nparams,
                                   const char *cookiein,
                                   int cookieinlen,
                                   char **cookieout,
                                   int *cookieoutlen,
                                   char **uri_out,
                                   unsigned int flags);

int virDomainMigratePrepareTunnel3Params(virConnectPtr conn,
                                         virStreamPtr st,
                                         virTypedParameterPtr params,
                                         int nparams,
                                         const char *cookiein,
                                         int cookieinlen,
                                         char **cookieout,
                                         int *cookieoutlen,
                                         unsigned int flags);

int virDomainMigratePerform3Params(virDomainPtr domain,
                                   const char *dconnuri,
                                   virTypedParameterPtr params,
                                   int nparams,
                                   const char *cookiein,
                                   int cookieinlen,
                                   char **cookieout,
                                   int *cookieoutlen,
                                   unsigned int flags);

virDomainPtr virDomainMigrateFinish3Params(virConnectPtr dconn,
                                           virTypedParameterPtr params,
                                           int nparams,
                                           const char *cookiein,
                                           int cookieinlen,
                                           char **cookieout,
                                           int *cookieoutlen,
                                           unsigned int flags,
                                           int cancelled);

int virDomainMigrateConfirm3Params(virDomainPtr domain,
                                   virTypedParameterPtr params,
                                   int nparams,
                                   const char *cookiein,
                                   int cookieinlen,
                                   unsigned int flags,
                                   int cancelled);

int
virTypedParameterValidateSet(virConnectPtr conn,
                             virTypedParameterPtr params,
                             int nparams);
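
/*
 * Illustrative only: the kind of place virTypedParameterValidateSet() might
 * be called, i.e. before a driver acts on caller-supplied typed parameters,
 * so that parameters unacceptable for this connection (for instance string
 * values when the other side lacks VIR_DRV_FEATURE_TYPED_PARAM_STRING) are
 * rejected up front.  exampleSetParameters is a hypothetical entry point and
 * the "< 0 on rejection" convention is assumed from the int return type.
 */
#if 0
static int exampleSetParameters(virConnectPtr conn,
                                virTypedParameterPtr params, int nparams)
{
    if (virTypedParameterValidateSet(conn, params, nparams) < 0)
        return -1;

    /* ... apply the now-validated parameters ... */
    return 0;
}
#endif
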
int virStreamInData(virStreamPtr stream,
                    int *data,
                    long long *length);
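
/*
 * Illustrative only: a guess at how a sparse-stream copy loop might use
 * virStreamInData().  The interpretation that *data reports whether the
 * current stream position is in a data section (non-zero) or in a hole
 * (zero), and *length how many bytes remain in that section, is an
 * assumption drawn from the parameter names, not from documentation in
 * this header; exampleHandleSection is a hypothetical helper.
 */
#if 0
static int exampleHandleSection(virStreamPtr st)
{
    int inData = 0;
    long long sectionLen = 0;

    if (virStreamInData(st, &inData, &sectionLen) < 0)
        return -1;

    if (inData) {
        /* transfer sectionLen bytes of real data */
    } else {
        /* create a hole of sectionLen bytes on the receiving side */
    }
    return 0;
}
#endif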