In case prlsdkLoadDomains fails, vzOpenDefault should clear the
connection privateData pointer whenever its memory is actually freed.
It is also unnecessary to call vzConnectClose if a call to
vzOpenDefault fails, because both clean up the connection privateData.
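A minimal sketch of the intended error path (the cleanup helper and
labels are illustrative, not the exact driver code):

    conn->privateData = privconn;
    if (prlsdkLoadDomains(privconn) < 0)
        goto error;
    return VIR_DRV_OPEN_SUCCESS;

 error:
    vzConnectCloseInternal(privconn);  /* hypothetical helper; frees privconn */
    conn->privateData = NULL;          /* cleared whenever the memory is freed */
    return VIR_DRV_OPEN_ERROR;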
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
By default, QEMU truncates the serial file on open. Sometimes this is
undesirable - for example, when trying to investigate an event that
occurred several restarts ago. This patch adds the ability to preserve
the previous content.
Signed-off-by: Dmitry Mishin <dim@virtuozzo.com>
Currently, there is no way for the user to specify the desired
behaviour of output to a file - truncate or append. This patch adds the
ability to explicitly specify that the user wants to preserve the
file's content on reopen.
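A hedged sketch of the formatter side, assuming an 'append' tristate on
the file chardev source (identifiers follow libvirt's tristate helpers
but are illustrative here):

    /* Emit the attribute only when the user made an explicit choice,
     * so existing configs keep the default truncating behaviour. */
    if (def->source.data.file.append != VIR_TRISTATE_SWITCH_ABSENT)
        virBufferAsprintf(&buf, " append='%s'",
                          virTristateSwitchTypeToString(def->source.data.file.append));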
Signed-off-by: Dmitry Mishin <dim@virtuozzo.com>
The prlsdkCleanupBridgedNet call should be made strictly after the
actual domain deletion occurs. This avoids potential problems with a
second undefine call made after the first one fails for some reason,
where we would detect that the network is already deleted.
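Sketch of the intended ordering (both call signatures here are
illustrative, not the exact driver code):

    /* Delete the domain first; only then tear down its bridged
     * network, so a failed undefine can be retried cleanly. */
    if (prlsdkDeleteDomain(privconn, dom) < 0)
        return -1;
    prlsdkCleanupBridgedNet(privconn, sdkdom);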
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
Currently the vz driver unregisters domains when undefine is called,
which is wrong because it contradicts the expected behavior.
All vz domains are persistent, which means that when one is
defined a new bundle directory containing metadata is created.
Undefining domains the way we do now leaves those directories
undeleted, which prevents a subsequent define call for the same
domain XML. I.e. the sequence define->undefine->define
doesn't work now.
The patch fixes the problem by calling PrlVm_Delete instead of
PrlVm_Unreg, detaching all disks beforehand to prevent image
deletion.
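Roughly, the undefine path becomes (a sketch; the detach and job-wait
helpers are hypothetical and the SDK call's arguments are abbreviated):

    PRL_HANDLE job;

    /* Detach the disks first so PrlVm_Delete removes only the VM
     * bundle, leaving the disk images untouched. */
    if (prlsdkDetachAllDisks(sdkdom) < 0)
        return -1;

    job = PrlVm_Delete(sdkdom, PRL_INVALID_HANDLE);
    if (waitJob(job) < 0)  /* illustrative job-wait helper */
        return -1;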
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
We want to eventually factor out the code dealing with device detaching
and reattaching, so that we can share it and make sure it's called e.g.
when 'virsh nodedev-detach' is used.
For that to happen, it's important that the lists of active and inactive
PCI devices are updated every time a device changes its state.
Instead of passing NULL as the last argument of virPCIDeviceDetach() and
virPCIDeviceReattach(), pass the proper list so that it can be updated.
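The call sites then look roughly like this (a sketch of the pattern,
assuming the hostdev manager's shared lists):

    /* Keep the shared lists in sync with the device's actual state. */
    if (virPCIDeviceDetach(pci, mgr->activePCIHostdevs,
                           mgr->inactivePCIHostdevs) < 0)
        goto cleanup;

    if (virPCIDeviceReattach(pci, mgr->activePCIHostdevs,
                             mgr->inactivePCIHostdevs) < 0)
        goto cleanup;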
This replaces the virPCIKnownStubs string array that was used
internally for stub driver validation.
Advantages:
* possible values are well-defined
* typos in driver names will be detected at compile time
* avoids having several copies of the same string around
* no error checking required when setting / getting value
The names used mirror those in the
virDomainHostdevSubsysPCIBackendType enumeration.
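The enumeration takes roughly this shape (a sketch; the exact order and
comments may differ):

    typedef enum {
        VIR_PCI_STUB_DRIVER_NONE = 0,
        VIR_PCI_STUB_DRIVER_XEN,   /* pciback */
        VIR_PCI_STUB_DRIVER_KVM,   /* pci-stub */
        VIR_PCI_STUB_DRIVER_VFIO,  /* vfio-pci */
        VIR_PCI_STUB_DRIVER_LAST
    } virPCIStubDriver;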
This internal function supports, in theory, binding to a different
stub driver than the one the PCI device has been configured to use.
In practice, it is only ever called like
virPCIDeviceBindToStub(dev, dev->stubDriver);
which makes its second parameter redundant. Get rid of it, along
with the extra string copy required to support it.
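In other words (illustrative prototypes):

    /* before: the second argument was always dev->stubDriver */
    static int virPCIDeviceBindToStub(virPCIDevicePtr dev,
                                      const char *stubDriverName);

    /* after */
    static int virPCIDeviceBindToStub(virPCIDevicePtr dev);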
Commit df8192aa introduced an admin-related rename along with some
minor issues (caused by the automated approach, aka sed) and some more
severe ones. The first reason to revert is the inconsistency with the
libvirt library. Although we deal with the daemon directly rather than
with a specific hypervisor, we still do have a connection. That being
said, contributors might get the impression that AdmDaemonNew would
spawn/start a new daemon (since it's the admin API, why not...), that
AdmDaemonClose would do the exact opposite, or they might expect
DaemonIsAlive to report the overall status of the daemon, which
definitely isn't the case.
The second reason to revert this patch is the renaming of the
virt-admin client. The client tool does not necessarily have to reflect
the names of the APIs it uses in its internals. An example would be
's/vshAdmConnect/vshAdmDaemon' where no one can be certain of what the
latter function really does. The former is quite expressive about the
connection magic it performs, but the latter does not say anything,
especially when vshAdmReconnect and vshAdmDisconnect were left
untouched.
From: Ian Campbell <ian.campbell@citrix.com>
xend prior to 4.0 understands vcpus as maxvcpus and vcpu_avail
as a bit map of which cpus are online (default is all).
xend from 4.0 onwards understands maxvcpus as maxvcpus and
vcpus as the number which are online (from 0..N-1). The
upstream commit (68a94cf528e6 "xm: Add maxvcpus support")
claims that if maxvcpus is omitted then the old behaviour
(i.e. obeying vcpu_avail) is retained, but AFAICT it was not;
in this case vcpus==maxvcpus==online cpus. This is good for us
because handling anything else would be fiddly.
This patch changes parsing of the virDomainDef maxvcpus and vcpus
entries to use the corresponding 'maxvcpus' and 'vcpus' settings
from xm and xl config. It also drops use of the old Xen 3.x
'vcpu_avail' setting.
The change also removes the maxvcpus limit of MAX_VIRT_CPUS (since
maxvcpus is simply a count, not a bit mask), which is particularly
crucial on ARM where MAX_VIRT_CPUS == 1 (since all guests are
expected to support vcpu placement, and therefore only the boot
vcpu's info lives in the shared info page).
Existing tests adjusted accordingly, and new tests added for the
'maxvcpus' setting.
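A sketch of the new parsing, assuming the existing xenConfigGetULong
helper (the default values shown are illustrative):

    unsigned long count = 0;

    /* 'maxvcpus' bounds the guest; fall back to 1 when absent */
    if (xenConfigGetULong(conf, "maxvcpus", &count, 1) < 0)
        return -1;
    def->maxvcpus = count;

    /* 'vcpus' is the number online; default to maxvcpus when absent */
    if (xenConfigGetULong(conf, "vcpus", &count, def->maxvcpus) < 0)
        return -1;
    def->vcpus = count;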
https://bugzilla.redhat.com/show_bug.cgi?id=1281710
Commit id '3c7590e0a' added the flag to the rbd backend, but provided
no means via virsh to use the flag. This patch adds a '--delete-snapshots'
option to both the "undefine" and "vol-delete" commands.
For "undefine", the flag is combined with the "--remove-all-storage" flag
in order to add the appropriate flag for the virStorageVolDelete call;
whereas, for the "vol-delete" command, just the flag is sufficient since
it's only operating on one volume.
Currently only supported for rbd backends.
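Through the API, the new option simply maps onto the existing delete
flag; a minimal usage sketch (error handling elided; pool and volume
names are placeholders):

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpen(NULL);
        virStoragePoolPtr pool = virStoragePoolLookupByName(conn, "rbd-pool");
        virStorageVolPtr vol = virStorageVolLookupByName(pool, "vol1");

        if (virStorageVolDelete(vol, VIR_STORAGE_VOL_DELETE_WITH_SNAPSHOTS) < 0)
            fprintf(stderr, "volume deletion (with snapshots) failed\n");

        virStorageVolFree(vol);
        virStoragePoolFree(pool);
        virConnectClose(conn);
        return 0;
    }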
Although they've been present for quite a while, they weren't added
to the API definition, so add them there to make it clearer.
Currently only the RBD backend even checks for any flags.
The initial commit '74951eade' did not include the proper check for
whether any flags are supported by the driver. Even though the driver
doesn't support VIR_STORAGE_VOL_DELETE_ZEROED, it still checks and
allows the processing to continue.
Also add the new VIR_STORAGE_VOL_DELETE_WITH_SNAPSHOTS flag since it is
handled as of commit id '3c7590e0a'.
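The check amounts to something like this at the top of the volume
delete path (a sketch using libvirt's virCheckFlags convention):

    /* Reject anything except the snapshots flag; ZEROED is unsupported. */
    virCheckFlags(VIR_STORAGE_VOL_DELETE_WITH_SNAPSHOTS, -1);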
Commit id '71ce4759' altered the cgroup processing, moving the call to
virCgroupAddTask out from the lower layers into the calling layers,
especially for qemu processing of emulator and vcpu threads. The
movement affected lxc insofar as it is possible for a code path to
return a NULL cgroup *and* a 0 return status, via a
virCgroupNewPartition failure when virCgroupNewIgnoreError succeeds
inside virCgroupNewMachineManual. Coverity pointed out that this would
cause virCgroupAddTask to core dump.
This patch checks for a NULL cgroup as well as the negative return,
and just returns the NULL cgroup to the caller (as it would have
previously).
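The guard looks roughly like this (variable names are illustrative):

    /* rc == 0 with a NULL cgroup is treated like the old behaviour:
     * hand the NULL back instead of letting virCgroupAddTask crash. */
    if (rc < 0 || cgroup == NULL)
        goto cleanup;

    if (virCgroupAddTask(cgroup, initpid) < 0)
        goto error;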
Remove use of xendConfigVersion in the s-expression config formatter/parser
in src/xenconfig/. Adjust callers in the xen and libxl drivers accordingly.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Change all xml2sexpr tests to use the latest XEND_CONFIG_VERSION
(XEND_CONFIG_VERSION_3_1_0 = 4). Fix tests that do not conform to
the latest version.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
It has been quite some time since xend required specifying cdroms
and fds in '(image (hvm ...))'. Remove the code from the parsing
and formatting functions and fix up the associated tests.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Change all sexpr2xml tests to use the latest XEND_CONFIG_VERSION
(XEND_CONFIG_VERSION_3_1_0 = 4). Fix tests that do not conform to
the latest version.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Remove use of xendConfigVersion in the xm and xl config formatter/parsers
in src/xenconfig/. Adjust callers in the xen and libxl drivers accordingly.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Change all tests to use the latest XEND_CONFIG_VERSION
(XEND_CONFIG_VERSION_3_1_0 = 4). Fix tests that do not conform to
the latest version.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Remove the fullvirt-net-ioemu test since explicitly specifying
'type=ioemu' has not been needed in xm/xend for a long time. It is
not used at all in xl/libxl.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
https://bugzilla.redhat.com/show_bug.cgi?id=830056
Utilize the recently added VIR_STORAGE_POOL_CREATE_WITH_BUILD* flags in
order to pass the flags along to the virStoragePoolCreateXML and
virStoragePoolCreate APIs.
This affects the 'virsh pool-create', 'virsh pool-create-as', and
'virsh pool-start' commands. While it could be argued that pool-start
doesn't need the flags, they could prove useful for someone trying to
combine a build with --overwrite and a start in a single command,
essentially starting with a clean slate.
NB:
This patch is loosely based upon code originally authored by Osier
Yang that was never reviewed and pushed, see:
https://www.redhat.com/archives/libvir-list/2012-July/msg00497.html
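Inside virsh, the options map onto the new flags roughly as follows (a
sketch; option parsing boilerplate omitted):

    unsigned int flags = 0;

    if (build)
        flags |= VIR_STORAGE_POOL_CREATE_WITH_BUILD;
    if (overwrite)
        flags |= VIR_STORAGE_POOL_CREATE_WITH_BUILD_OVERWRITE;
    if (no_overwrite)
        flags |= VIR_STORAGE_POOL_CREATE_WITH_BUILD_NO_OVERWRITE;

    pool = virStoragePoolCreateXML(priv->conn, buffer, flags);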
Although both structures are the same now, a future patch will add new
options to pool-create-as. So create a common macro to capture the
commonality, then use it in the command-specific structures.
Although not currently used in more than one command, it soon will be,
so create a common macro to be used in the new command location.
Additionally, add the ".flags = 0," for both to match the expectations
of the structure being predefined.
Rather than continually cut/paste the "file" option for pool command
option structures, generate a macro which will commonly define it for
any command. Then of course use that macro.
Rather than continually cut/paste the "pool" option for pool command
option structures, generate a macro which will commonly define it for
any command. Then of course use that macro.
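Such a shared definition takes roughly this shape (the macro name and
help string are illustrative):

    #define VIRSH_COMMON_OPT_POOL                  \
        {.name = "pool",                           \
         .type = VSH_OT_DATA,                      \
         .flags = VSH_OFLAG_REQ,                   \
         .help = N_("pool name or uuid")           \
        }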
https://bugzilla.redhat.com/show_bug.cgi?id=830056
Add flags handling to the virStoragePoolCreate and virStoragePoolCreateXML
APIs, allowing the caller to request that the storage pool create APIs
also perform a pool build during creation rather than requiring a
separate buildPool step. This allows transient pools to be defined,
built, and started.
The new flags are:
* VIR_STORAGE_POOL_CREATE_WITH_BUILD
Perform buildPool without any flags passed.
* VIR_STORAGE_POOL_CREATE_WITH_BUILD_OVERWRITE
Perform buildPool using VIR_STORAGE_POOL_BUILD_OVERWRITE flag.
* VIR_STORAGE_POOL_CREATE_WITH_BUILD_NO_OVERWRITE
Perform buildPool using VIR_STORAGE_POOL_BUILD_NO_OVERWRITE flag.
It is up to the backend to handle the processing of build flags. The
overwrite and no-overwrite flags are mutually exclusive.
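Callers can expect the exclusive combination to be rejected up front,
along these lines (a sketch):

    if ((flags & VIR_STORAGE_POOL_CREATE_WITH_BUILD_OVERWRITE) &&
        (flags & VIR_STORAGE_POOL_CREATE_WITH_BUILD_NO_OVERWRITE)) {
        virReportError(VIR_ERR_INVALID_ARG, "%s",
                       _("overwrite and no-overwrite flags are mutually "
                         "exclusive"));
        return NULL;
    }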
NB:
This patch is loosely based upon code originally authored by Osier
Yang that was never reviewed and pushed, see:
https://www.redhat.com/archives/libvir-list/2012-July/msg01328.html
We only support hotplugging SCSI controllers.
The USB and virtio-serial related code was never reachable because
this function was only called for VIR_DOMAIN_CONTROLLER_TYPE_SCSI
controllers.
This reverts commit ee0d97a and parts of commits 16db8d2
and d6d54cd1.
This function calls qemuDomainAttachControllerDevice for SCSI
controllers and reports an error for all other controllers.
Move the error inside qemuDomainAttachControllerDevice and delete this
wrapper.
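After the move, the check sits at the top of
qemuDomainAttachControllerDevice, roughly (a sketch):

    if (controller->type != VIR_DOMAIN_CONTROLLER_TYPE_SCSI) {
        virReportError(VIR_ERR_OPERATION_UNSUPPORTED,
                       _("'%s' controller cannot be hot plugged"),
                       virDomainControllerTypeToString(controller->type));
        return -1;
    }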
Commit id '71b803ac' assumed that the storage pool source device path
was required for a 'logical' pool. This resulted in a failure to start
a pool without any device path defined.
So, adjust the virStorageBackendLogicalMatchPoolSource logic to
return success if at least the pool name matches the vgs output
when no pool source device path is provided.
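Conceptually (a sketch of the adjusted logic; the name-match variable
is hypothetical and field access is abbreviated):

    /* With no source device paths configured, a matching VG name in
     * the vgs output is enough to consider the pool source matched. */
    if (matchedName && pool->def->source.ndevice == 0) {
        ret = 0;
        goto cleanup;
    }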
A closer review of the code shows that for the transition from paused to
running, which was supposed to emit VIR_DOMAIN_EVENT_RESUMED, no event
would be generated. Rather, the event was generated when going from
running to running.
Following the 'was_running' boolean shows it is set when the domain obj
is active and the domain obj state is VIR_DOMAIN_RUNNING. So rather than
using was_running to generate the RESUMED event, use !was_running.
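I.e. the event emission becomes (a sketch):

    /* Emit RESUMED only when the domain was actually paused before. */
    if (!was_running)
        event = virDomainEventLifecycleNewFromObj(vm,
                                                  VIR_DOMAIN_EVENT_RESUMED,
                                                  VIR_DOMAIN_EVENT_RESUMED_UNPAUSED);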
https://bugzilla.redhat.com/show_bug.cgi?id=1270709
When a volume wipe is successful, perform a volume refresh afterwards to
update any volume data that may be used in future volume commands, such as
volume resize. For a raw file volume, a wipe could truncate the file, and
a follow-up volume resize of the capacity may fail because the volume
target allocation isn't updated to reflect the wipe activity.
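Sketch of the idea (the refresh call shown is illustrative of the
backend's refresh entry point, not the exact code):

    /* After a successful wipe, re-read target allocation/capacity so
     * a follow-up vol-resize sees the truncated file. */
    if (ret == 0 && backend->refreshVol)
        ret = backend->refreshVol(conn, pool, vol);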
The only caller always passes 0 for the extent start.
Drop the 'extent_start' parameter, as well as the mention of extents
from the function name.
Change off_t extent_length to unsigned long long wipe_len, as well as the
'remain' variable.
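The net effect on the prototype (illustrative; unrelated parameters
elided):

    /* before: callers always passed extent_start == 0 */
    static int virStorageBackendWipeExtentLocal(virStorageVolDefPtr vol,
                                                int fd,
                                                off_t extent_start,
                                                off_t extent_length);

    /* after */
    static int virStorageBackendWipeLocal(virStorageVolDefPtr vol,
                                          int fd,
                                          unsigned long long wipe_len);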