This replaces the virPCIKnownStubs string array that was used
internally for stub driver validation.
Advantages:
* possible values are well-defined
* typos in driver names will be detected at compile time
* avoids having several copies of the same string around
* no error checking required when setting / getting value
The names used mirror those in the
virDomainHostdevSubsysPCIBackendType enumeration.
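A sketch of the shape of the new enumeration (member names are
assumptions, mirroring the backend types mentioned above):

    typedef enum {
        VIR_PCI_STUB_DRIVER_NONE,
        VIR_PCI_STUB_DRIVER_XEN,
        VIR_PCI_STUB_DRIVER_KVM,
        VIR_PCI_STUB_DRIVER_VFIO,
        VIR_PCI_STUB_DRIVER_LAST
    } virPCIStubDriver;

    VIR_ENUM_DECL(virPCIStubDriver);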
This internal function supports, in theory, binding to a different
stub driver than the one the PCI device has been configured to use.
In practice, it is only ever called like
virPCIDeviceBindToStub(dev, dev->stubDriver);
which makes its second parameter redundant. Get rid of it, along
with the extra string copy required to support it.
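The resulting signature change, roughly (parameter names assumed):

    /* before */
    int virPCIDeviceBindToStub(virPCIDevicePtr dev, const char *driver);

    /* after: the stub driver is taken from dev->stubDriver */
    int virPCIDeviceBindToStub(virPCIDevicePtr dev);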
Commit df8192aa introduced an admin-related rename and, along with it,
some minor issues (caused by the automated approach, aka sed) and some
more severe ones. The first reason to revert is the inconsistency with
the libvirt library. Although we deal with the daemon directly rather
than with a specific hypervisor, we still do have a connection. That
being said, contributors might be under the impression that AdmDaemonNew
would spawn/start a new daemon (since it's the admin API, why not...),
that AdmDaemonClose would do the exact opposite, or they might expect
DaemonIsAlive to report the overall status of the daemon, which
definitely isn't the case.
The second reason to revert this patch is the renaming of the virt-admin
client. The client tool does not necessarily have to reflect the names of
the APIs it uses internally. An example would be 's/vshAdmConnect/vshAdmDaemon',
where no one can be certain what the latter function really does. The former
is quite expressive about the connection magic it performs, but the latter does
not say anything, especially when vshAdmReconnect and vshAdmDisconnect were
left untouched.
From: Ian Campbell <ian.campbell@citrix.com>
xend prior to 4.0 understands vcpus as maxvcpus and vcpu_avail
as a bit map of which cpus are online (default is all).
xend from 4.0 onwards understands maxvcpus as maxvcpus and
vcpus as the number which are online (from 0..N-1). The
upstream commit (68a94cf528e6 "xm: Add maxvcpus support")
claims that if maxvcpus is omitted then the old behaviour
(i.e. obeying vcpu_avail) is retained, but AFAICT it was not;
in this case vcpus==maxvcpus==online vcpus. This is good for us
because handling anything else would be fiddly.
This patch changes parsing of the virDomainDef maxvcpus and vcpus
entries to use the corresponding 'maxvcpus' and 'vcpus' settings
from xm and xl config. It also drops use of the old Xen 3.x
'vcpu_avail' setting.
The change also removes the maxvcpus limit of MAX_VIRT_CPUS (since
maxvcpus is simply a count, not a bit mask), which is particularly
crucial on ARM where MAX_VIRT_CPUS == 1 (since all guests are
expected to support vcpu placement, and therefore only the boot
vcpu's info lives in the shared info page).
Existing tests adjusted accordingly, and new tests added for the
'maxvcpus' setting.
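For illustration, a config fragment like the following (hypothetical
values) now maps straight onto the virDomainDef maxvcpus and vcpus
entries:

    maxvcpus = 4
    vcpus = 2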
Although they've been present for quite a while, they weren't added
to the API definition, so add them there to make it clearer.
Currently only the RBD backend even checks for any flags.
The initial commit '74951eade' did not include the proper check for whether
any flags are supported by the driver.
Even though the driver doesn't support VIR_STORAGE_VOL_DELETE_ZEROED,
it still checks for it and allows processing to continue.
Also add the new VIR_STORAGE_VOL_DELETE_WITH_SNAPSHOTS since it is handled
as of commit id '3c7590e0a'.
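A sketch of the kind of gating involved, using libvirt's virCheckFlags
idiom (exact placement in the RBD backend is an assumption):

    virCheckFlags(VIR_STORAGE_VOL_DELETE_WITH_SNAPSHOTS, -1);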
Commit id '71ce4759' altered the cgroup processing by moving the call to
virCgroupAddTask out of the lower layers into the calling layers,
especially for qemu processing of emulator and vcpu threads. The movement
affected lxc inasmuch as it is now possible for a code path to return a
NULL cgroup *and* a 0 return status: virCgroupNewPartition can fail while
virCgroupNewIgnoreError succeeds when virCgroupNewMachineManual returns.
Coverity pointed out that this would cause virCgroupAddTask to core dump.
This patch checks for a NULL cgroup as well as the negative return and
just returns the NULL cgroup to the caller (as it would have previously).
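A sketch of the added guard (variable names are assumptions):

    /* virCgroupNewMachineManual can return 0 with a NULL cgroup when
     * virCgroupNewIgnoreError swallows a virCgroupNewPartition failure */
    if (rc < 0 || !cgroup)
        return cgroup;  /* caller must not call virCgroupAddTask on NULL */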
Remove use of xendConfigVersion in the s-expression config formatter/parser
in src/xenconfig/. Adjust callers in the xen and libxl drivers accordingly.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
It has been quite some time since xend required specifying cdroms
and fds in '(image (hvm ...))'. Remove the code from the parsing
and formatting functions and fix up the associated tests.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Remove use of xendConfigVersion in the xm and xl config formatter/parsers
in src/xenconfig/. Adjust callers in the xen and libxl drivers accordingly.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
https://bugzilla.redhat.com/show_bug.cgi?id=830056
Add flags handling to the virStoragePoolCreate and virStoragePoolCreateXML
APIs, allowing the caller to request that the storage pool create APIs
also perform a pool build during creation rather than requiring the
additional buildPool step. This will allow transient pools to be
defined, built, and started.
The new flags are:
* VIR_STORAGE_POOL_CREATE_WITH_BUILD
Perform buildPool without any flags passed.
* VIR_STORAGE_POOL_CREATE_WITH_BUILD_OVERWRITE
Perform buildPool using VIR_STORAGE_POOL_BUILD_OVERWRITE flag.
* VIR_STORAGE_POOL_CREATE_WITH_BUILD_NO_OVERWRITE
Perform buildPool using VIR_STORAGE_POOL_BUILD_NO_OVERWRITE flag.
It is up to the backend to handle the processing of build flags. The
overwrite and no-overwrite flags are mutually exclusive.
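For example, a caller could define, build, and start a transient pool in
one step (conn and xml are assumed to be set up elsewhere):

    virStoragePoolPtr pool;

    pool = virStoragePoolCreateXML(conn, xml,
                                   VIR_STORAGE_POOL_CREATE_WITH_BUILD);
    if (!pool)
        goto error;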
NB:
This patch is loosely based upon code originally authored by Osier
Yang that was never reviewed and pushed, see:
https://www.redhat.com/archives/libvir-list/2012-July/msg01328.html
We only support hotplugging SCSI controllers.
The USB and virtio-serial related code was never reachable because
this function was only called for VIR_DOMAIN_CONTROLLER_TYPE_SCSI
controllers.
This reverts commit ee0d97a and parts of commits 16db8d2
and d6d54cd1.
This function calls qemuDomainAttachControllerDevice for SCSI
controllers and reports an error for all other controllers.
Move the error inside qemuDomainAttachControllerDevice and delete this
wrapper.
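A sketch of the check as it now lives inside
qemuDomainAttachControllerDevice (message wording assumed):

    if (controller->type != VIR_DOMAIN_CONTROLLER_TYPE_SCSI) {
        virReportError(VIR_ERR_OPERATION_UNSUPPORTED,
                       _("'%s' controller cannot be hot plugged"),
                       virDomainControllerTypeToString(controller->type));
        return -1;
    }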
Commit id '71b803ac' assumed that the storage pool source device path
was required for a 'logical' pool. This resulted in a failure to start
a pool without any device path defined.
So, adjust the virStorageBackendLogicalMatchPoolSource logic to
return success if at least the pool name matches the vgs output
when no pool source device paths are provided.
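A sketch of the adjusted short-circuit (field and variable names are
assumptions):

    /* with no source devices defined, a matching vg name is enough */
    if (def->source.ndevice == 0 && vgNameMatched)
        return 0;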
A closer review of the code shows that for the transition from paused to
running, which was supposed to emit VIR_DOMAIN_EVENT_RESUMED, no event
would be generated. Rather, the event was generated when going from
running to running.
Following the 'was_running' boolean shows it is set when the domain obj
is active and the domain obj state is VIR_DOMAIN_RUNNING. So rather than
using was_running to generate the RESUMED event, use !was_running.
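A sketch of the corrected condition (event helpers from libvirt's domain
event API; surrounding variables assumed):

    if (!was_running &&
        virDomainObjGetState(vm, NULL) == VIR_DOMAIN_RUNNING)
        event = virDomainEventLifecycleNewFromObj(vm,
                    VIR_DOMAIN_EVENT_RESUMED,
                    VIR_DOMAIN_EVENT_RESUMED_UNPAUSED);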
https://bugzilla.redhat.com/show_bug.cgi?id=1270709
When a volume wipe is successful, perform a volume refresh afterwards to
update any volume data that may be used in future volume commands, such as
volume resize. For a raw file volume, a wipe could truncate the file,
and a follow-up volume resize of the capacity may fail because the
volume target allocation isn't updated to reflect the wipe activity.
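A sketch of the flow, using the storage backend callbacks (signatures
are assumed; error handling elided):

    if (backend->wipeVol(conn, pool, vol, algorithm, flags) < 0)
        goto cleanup;

    /* refresh so the target allocation reflects any truncation */
    if (backend->refreshVol(conn, pool, vol) < 0)
        goto cleanup;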
The only caller always passes 0 for the extent start.
Drop the 'extent_start' parameter, as well as the mention of extents
from the function name.
Change off_t extent_length to unsigned long long wipe_len, as well as the
'remain' variable.
Return -1:
* on all failures of fdatasync. Instead of propagating -errno
all the way up to the virStorageVolWipe API, which is documented
to return 0 or -1.
* after a partial wipe. If safewrite failed, we would re-use the
non-negative return value of lseek (which should be 0 in this case,
because that's the only offset we seek to).
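Roughly, the resulting signature (names assumed) becomes:

    static int
    virStorageBackendWipeLocal(virStorageVolDefPtr vol,
                               int fd,
                               unsigned long long wipe_len,
                               size_t writebuf_length);

replacing the extent-based variant that also took an off_t extent_start.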
When the function changes the memory lock limit for the first time,
it will retrieve the current value and store it inside the
virDomainObj for the domain.
When the function is called again, if memory locking is no longer
needed, it will be able to restore the memory locking limit to its
original value.
We increase the limit before plugging in a PCI hostdev or a memory
module because some memory might need to be locked due to e.g. VFIO.
Of course we should do the opposite after unplugging a device: this
was already the case for memory modules, but not for PCI hostdevs.
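A sketch of the save-once / restore pattern (the originalMemlock field
name is an assumption; the getter is introduced in the following patch):

    /* first adjustment: remember the original limit */
    if (vm->originalMemlock == 0 &&
        virProcessGetMaxMemLock(vm->pid, &vm->originalMemlock) < 0)
        return -1;

    if (bytes > 0)
        return virProcessSetMaxMemLock(vm->pid, bytes);

    /* locking no longer needed: restore the saved value */
    return virProcessSetMaxMemLock(vm->pid, vm->originalMemlock);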
This function can be used to retrieve the current locked memory
limit for a process, so that the setting can be later restored.
Add a configure check for getrlimit(), which we now use.
The prlimit() function allows both getting and setting limits for
a process; expose the same functionality in our wrapper.
Add the const modifier for new_limit, in accordance with the
prototype for prlimit().
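A sketch of the wrapper (the libvirt-internal helper name is an
assumption; prlimit() is Linux-specific):

    #include <sys/resource.h>

    static int
    virProcessPrLimit(pid_t pid, int resource,
                      const struct rlimit *new_limit,
                      struct rlimit *old_limit)
    {
        return prlimit(pid, resource, new_limit, old_limit);
    }

Passing NULL for new_limit turns it into a pure getter, which is what
retrieving the current limit requires.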
In commit 686eb7a24f, the break was not considered part of the
condition, hence breaking after the first node when searching.
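An illustration of the bug class (not the actual code):

    /* before: the break was unconditional, so the search
     * always stopped at the first node */
    if (match)
        found = node;
        break;

    /* after */
    if (match) {
        found = node;
        break;
    }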
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Instead of replicating the information (domain, bus, slot, function)
inside the virPCIDevice structure, use the already-existing
virPCIDeviceAddress structure.
For users of the module, this means that the object returned by
virPCIDeviceGetAddress() can no longer be NULL and must no longer
be freed by the caller.
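A sketch of the resulting layout and getter (struct contents
abbreviated):

    struct _virPCIDevice {
        virPCIDeviceAddress address;  /* replaces separate domain, bus,
                                         slot and function members */
        /* ... */
    };

    virPCIDeviceAddressPtr
    virPCIDeviceGetAddress(virPCIDevicePtr dev)
    {
        return &dev->address;  /* never NULL; owned by dev, not the caller */
    }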
Introduces support for domainGetJobStats which has the same
info as domainGetJobInfo but in a slightly different format.
Another difference is that virDomainGetJobStats can also
retrieve info on the most recently completed job, though so far
this is only used on the source node to know whether the migration
has completed. Because we don't support completed jobs, we will
deliver an error in that case.
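A sketch of the format difference: domainGetJobStats reports the same
data as typed parameters, e.g. (variable names assumed):

    if (virTypedParamsAddULLong(&params, &nparams, &maxparams,
                                VIR_DOMAIN_JOB_TIME_ELAPSED,
                                jobInfo.timeElapsed) < 0)
        goto cleanup;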
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Introduce support for domainGetJobInfo to get info about the
ongoing job. If the job is active, it will update timeElapsed,
which is computed from the "started" field added to
struct libxlDomainJobObj. For now we support just the very basic
info and all jobs have VIR_DOMAIN_JOB_UNBOUNDED (i.e. no completion
time estimation) plus timeElapsed computed.
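A sketch of the timeElapsed computation ('started' is the new field in
struct libxlDomainJobObj; the other names are assumptions):

    unsigned long long now;

    if (virTimeMillisNow(&now) < 0)
        return -1;

    jobInfo->timeElapsed = now - priv->job.started;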
OpenStack Kilo uses the Job API to monitor live-migration
progress, which is currently nonexistent in the libxl driver and
therefore leads to a crash in the nova compute node. Right
now, migration doesn't use jobs on the source node and will
return VIR_DOMAIN_JOB_NONE; nova handles this case and
will migrate properly instead of crashing.
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1025230
Add a new helper virStorageBackendLogicalMatchPoolSource to compare the
pool's source name against the output from a 'pvs' command to list all
volume group physical volume data on the host. In addition, compare the
pool's source device list against the particular volume group's device
list to ensure the source device(s) listed for the pool match what
was listed for the volume group.
Then for the pool startup or check APIs we need to call this new helper
in order to ensure that the pool we're about to start or declare active
during checkPool has a valid definition vs. the running host.
Rework virStorageBackendLogicalFindPoolSources a bit to create a
helper virStorageBackendLogicalGetPoolSources that will make the
pvs call in order to generate a list of associated pv_name and vg_name
pairs.
A future patch will make use of this for start/check processing to
ensure the storage pool source definition matches expectations.
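The underlying listing resembles the following (exact pvs options are an
assumption):

    pvs --noheadings -o pv_name,vg_name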
https://bugzilla.redhat.com/show_bug.cgi?id=1025230
When determining whether a FS pool is mounted, rather than assuming that
the FS pool is mounted just because the target.path is in the mount list,
let's make sure that the FS pool source matches what is mounted.
Refactor the code that builds the pool source string during the FS
storage pool mount to be a separate helper.
A future patch will use the helper in order to validate the mounted
FS matches the pool's expectation during poolCheck processing, when
appropriate, of course.
If the config for a domain specifies boot
order with <boot dev='blah'/> elements, e.g.:
<os>
...
<boot dev='hd'/>
<boot dev='network'/>
</os>
Then the first disk device in the config will have ",bootindex=1"
appended to its qemu commandline -device options, and the first (and
*only* the first) network interface device will get ",bootindex=2".
However, if the first network interface device is a "hostdev" device
(an SRIOV Virtual Function (VF) being assigned to the domain with
vfio), then the bootindex option will *not* be appended. This happens
because the bootindex=n option corresponding to the order of "<boot
dev='network'/>" is added to the -device for the first network device
when network device commandline args are constructed, but if it's a
hostdev network device, its commandline arg is instead constructed in
the loop for hostdevs.
This patch fixes that omission by noticing (in bootHostdevNet) if the
first network device was a hostdev, and if so passing on the proper
bootindex to the commandline generator for hostdev devices - the
result is that ",bootindex=2" will be properly appended to the first
"network" device in the config even if it is really a hostdev
(including if it is assigned from a libvirt network pool). (note that
this is only the case if there is no <bootmenu enabled='yes'/> element
in the config ("-boot menu=on" in qemu), since the two are mutually
exclusive - when the bootmenu is enabled, the individual per-device
bootindex options can't be used by qemu, and we revert to using "-boot
order=xyz" instead).
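The net effect on the generated command line looks roughly like this
(device details are hypothetical):

    -device virtio-blk-pci,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
    -device vfio-pci,host=03:10.2,id=hostdev0,bootindex=2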
If a greater level of control over boot order is desired (e.g., more
than one network device should be tried, or a network device other
than the first one encountered in the config), then <boot
dev='network'/> in the <os> element should not be used; instead, the
individual device elements in the config should be given a "<boot
order='n'/>" element.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1278421
Many of the functions follow the pattern:
virSecurity.*Security.*Label
Remove the second 'Security' from the names, it should be
obvious that the virSecurity* functions deal with security
labels even without it.
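A representative instance of the rename (assumed for illustration):

    virSecurityDACSetSecurityAllLabel -> virSecurityDACSetAllLabel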
Commit 256496e1 introduced detection of an "is locked" error reply
from the qemu monitor. Commit c4073657 fixed a memory leak, but it was
pointed out by Peter that this could be done more cleanly without
stringifying the reply.
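A sketch of the cleaner check (names assumed):

    /* inspect the error object's description directly instead of
     * serializing the whole reply and searching the string */
    virJSONValuePtr error = virJSONValueObjectGet(reply, "error");
    if (error) {
        const char *desc = virJSONValueObjectGetString(error, "desc");
        if (desc && strstr(desc, "is locked"))
            isLocked = true;
    }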
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>