Since the name of a vHBA (like scsi_host10) is not stable (it can change
after the vHBA is recreated or the system is rebooted), the current
virNodeDeviceLookupByName API is awkward for a management app to use in
this case. E.g. to destroy a vHBA whose name has changed after a system
reboot, one first has to find out its current name.
Later patches will support persistent vHBAs via the storage pool, with
which one can identify a vHBA stably by its wwnn/wwpn pair.
Hence this new API.
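A rough sketch of how a management application might then use it,
assuming the new API is virNodeDeviceLookupSCSIHostByWWN and using
placeholder wwnn/wwpn values:

  virNodeDevicePtr dev;

  /* look the vHBA up by its stable wwnn/wwpn pair rather than by the
   * unstable scsi_hostN name */
  dev = virNodeDeviceLookupSCSIHostByWWN(conn,
                                         "20000000c9831b4b",  /* wwnn */
                                         "10000000c9831b4b",  /* wwpn */
                                         0);
  if (dev) {
      virNodeDeviceDestroy(dev);
      virNodeDeviceFree(dev);
  }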
In commit 3ac26e2645, parameter "path" was renamed to "disk", but this
change was not reflected in the documentation. Additionally,
documentation for the "opaque" parameter was missing.
Working with virTypedParameters in clients written in C is ugly and
forces all clients to duplicate the same code. This set of APIs makes
the code for manipulating virTypedParameters an integral part of
libvirt so that all clients may benefit from it.
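A brief sketch of the intended usage (assuming the helpers added here
follow the virTypedParams{Add,Get,Free} naming):

  virTypedParameterPtr params = NULL;
  int nparams = 0, maxparams = 0;
  unsigned long long limit = 0;

  /* build a parameter list without open-coding virTypedParameter handling */
  virTypedParamsAddULLong(&params, &nparams, &maxparams,
                          VIR_DOMAIN_MEMORY_HARD_LIMIT, 2097152);
  /* read a value back by name */
  virTypedParamsGetULLong(params, nparams,
                          VIR_DOMAIN_MEMORY_HARD_LIMIT, &limit);
  virTypedParamsFree(params, nparams);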
The API builder always associates comments with the last member it read,
not with the current member, even if there was already a comment for the
previous member and a comma had already been seen.
This has the effect that the comment for the previous member gets
overwritten and the current member ends up with no comment at all.
This patch introduces support for LXC specific public APIs. In
common with what was done for QEMU, this creates a libvirt_lxc.so
library and libvirt/libvirt-lxc.h header file.
The actual APIs are

  int virDomainLxcOpenNamespace(virDomainPtr domain,
                                int **fdlist,
                                unsigned int flags);

  int virDomainLxcEnterNamespace(virDomainPtr domain,
                                 unsigned int nfdlist,
                                 int *fdlist,
                                 unsigned int *noldfdlist,
                                 int **oldfdlist,
                                 unsigned int flags);
which provide a way to use the setns() system call to move the
calling process into the container's namespace. This is not practical
to expose in a generically applicable manner; the nearest we could get
to such an API would be one that allows a command + argv to be passed
for execution inside a container. Even if we had such a generic API,
this LXC-specific API is still useful, because it allows the caller to
keep its current process context, in particular any I/O streams it has
open.

NB the virDomainLxcEnterNamespace() API is special in that it
runs client-side, so it does not involve the internal driver API.
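A rough usage sketch based on the signatures above (error handling and
cleanup of the fd arrays omitted; the return value of
virDomainLxcOpenNamespace is assumed to be the number of fds):

  int *fds = NULL, *oldfds = NULL;
  unsigned int noldfds = 0;
  int nfds;

  /* open the container's namespace file descriptors */
  nfds = virDomainLxcOpenNamespace(dom, &fds, 0);
  if (nfds > 0 &&
      virDomainLxcEnterNamespace(dom, nfds, fds, &noldfds, &oldfds, 0) == 0) {
      /* now executing inside the container's namespaces, with the
       * caller's original I/O streams still open */
  }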
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
With the most recent patch from Claudio, I realized how many
indentation flaws we have in the libvirt.h.in file. Even though
they are harmless, it's still worth fixing them.
This patch adds a new API, virDomainOpenChannel, that uses streams to
connect to a virtio channel on a guest. This creates a secure
communication channel between a guest and a libvirt client.
This behaves the same as virDomainOpenConsole, except on channels
instead of console/serial/parallel devices.
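A minimal sketch of how a client might use it (the channel name and
buffer handling are illustrative only):

  virStreamPtr st = virStreamNew(conn, 0);
  char buf[1024];
  int got;

  if (virDomainOpenChannel(dom, "org.example.agent.0", st, 0) == 0) {
      /* read whatever the guest writes to the channel */
      while ((got = virStreamRecv(st, buf, sizeof(buf))) > 0)
          ;
      virStreamFinish(st);
  }
  virStreamFree(st);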
Offline migration transfers the inactive definition of a domain (which
may or may not be active). After successful completion, the domain
remains in its current state on the source host and is defined but
inactive on the destination host. It's a bit more clever than
virDomainGetXMLDesc() on the source host followed by virDomainDefineXML()
on the destination host, as offline migration runs the pre-migration
hook to update the domain XML on the destination host. Currently,
copying non-shared storage is not supported during offline migration.
Offline migration can be requested with a new migration flag called
VIR_MIGRATE_OFFLINE (which has to be combined with
VIR_MIGRATE_PERSIST_DEST flag).
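A minimal sketch, assuming dom is a domain on the source connection and
dconn is an open connection to the destination host:

  virDomainPtr ddom = virDomainMigrate(dom, dconn,
                                       VIR_MIGRATE_OFFLINE |
                                       VIR_MIGRATE_PERSIST_DEST,
                                       NULL,   /* keep the domain name */
                                       NULL,   /* default migration URI */
                                       0);     /* no bandwidth limit */
  if (ddom)
      virDomainFree(ddom);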
Add a VIR_STORAGE_VOL_CREATE_PREALLOC_METADATA flag to virStorageVolCreateXML
and virStorageVolCreateXMLFrom. This flag requests metadata
preallocation when creating/cloning qcow2 images, resulting in a sparse
file with qcow2 metadata. Disk usage is only slightly larger than that
of a new image with no allocation, but performance is higher.
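A sketch of requesting the new behavior (the volume XML is an
illustrative minimal definition):

  const char *volxml =
      "<volume>"
      "  <name>guest.qcow2</name>"
      "  <capacity unit='G'>10</capacity>"
      "  <target><format type='qcow2'/></target>"
      "</volume>";

  /* create a sparse qcow2 volume with its metadata preallocated */
  virStorageVolPtr vol =
      virStorageVolCreateXML(pool, volxml,
                             VIR_STORAGE_VOL_CREATE_PREALLOC_METADATA);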
The virDomainShutdownFlags and virDomainReboot APIs allow the caller
to request that the operation be implemented via either an ACPI button
press or a guest agent. For containers, a couple of other methods make
sense: a message to /dev/initctl, and a direct kill(SIGTERM|SIGHUP) of
the container's init process.
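A sketch of how a caller might request the container-oriented methods
(flag names assumed to be those added for these modes):

  /* prefer a clean shutdown via /dev/initctl, falling back to signalling
   * the container's init process directly */
  if (virDomainShutdownFlags(dom, VIR_DOMAIN_SHUTDOWN_INITCTL) < 0)
      virDomainShutdownFlags(dom, VIR_DOMAIN_SHUTDOWN_SIGNAL);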
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
To be able to do controlled shutdown/reboot of containers, an API to
talk to init via /dev/initctl is required. Fortunately this is quite
straightforward to implement, and it is supported by both sysvinit and
systemd. Upstart support for /dev/initctl is unclear.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Add an API for sending signals to arbitrary processes in the
guest OS. This is primarily useful for container-based virt,
but can be used for machine virt too, if there is a suitable
guest agent.
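For example (a hedged sketch; the pid is interpreted in the
container's/guest's namespace):

  /* ask init (pid 1) inside the container to terminate */
  virDomainSendProcessSignal(dom, 1, VIR_DOMAIN_PROCESS_SIGNAL_TERM, 0);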
* include/libvirt/libvirt.h.in: Add virDomainSendProcessSignal
and virDomainProcessSignal enum
* src/driver.h: Driver entry point
* src/libvirt.c, src/libvirt_public.syms: Impl for new API
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
This will call FITRIM within the guest. The API has 4 arguments, but
only 2 will be used for now (@dom and @minimum). The other two are
there in case the qemu guest agent learns to use them in the future.
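A minimal sketch, assuming the API in question is virDomainFSTrim:

  /* trim every mounted filesystem in the guest, discarding all free
   * ranges (@mountPoint NULL, @minimum 0) */
  virDomainFSTrim(dom, NULL, 0, 0);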
As we enable more modes of snapshot creation, it becomes more important
to be able to quickly filter based on snapshot properties. This patch
introduces new filter flags; subsequent patches will introduce virsh
back-compat filtering, as well as actual libvirt filtering.
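As a sketch of the intended use (assuming the new flag names include an
external/disk-only pair):

  virDomainSnapshotPtr *snaps = NULL;
  int n, i;

  /* list only external, disk-only snapshots of the domain */
  n = virDomainListAllSnapshots(dom, &snaps,
                                VIR_DOMAIN_SNAPSHOT_LIST_EXTERNAL |
                                VIR_DOMAIN_SNAPSHOT_LIST_DISK_ONLY);
  for (i = 0; i < n; i++)
      virDomainSnapshotFree(snaps[i]);
  free(snaps);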
* include/libvirt/libvirt.h.in (virDomainSnapshotListFlags): Add
five new flags in two new groups.
* src/libvirt.c (virDomainSnapshotNum, virDomainSnapshotListNames)
(virDomainListAllSnapshots, virDomainSnapshotNumChildren)
(virDomainSnapshotListChildrenNames)
(virDomainSnapshotListAllChildren): Document them.
* src/conf/snapshot_conf.h (VIR_DOMAIN_SNAPSHOT_FILTERS_STATUS)
(VIR_DOMAIN_SNAPSHOT_FILTERS_LOCATION): Add new convenience filter
collection macros.
* tools/virsh-snapshot.c (cmdSnapshotList): Add 5 new flags.
* tools/virsh.pod (snapshot-list): Document them.
Lately there have been a few reports of the output of the virsh nodeinfo
command being inaccurate. This patch tries to avoid that by checking
whether the topology actually makes sense. If it doesn't, we report a
synthetic topology that indicates to the user that the host capabilities
should be checked for the actual topology.
This reason is supposed to be used every time we need to pause a domain
because of API execution (e.g. qemuDomainSaveInternal) but then fail to
resume it afterwards. In that case the domain remains paused, but none
of the existing reasons fits this scenario.
The default behavior while creating external checkpoints is to pause the
guest while the memory state is captured. We want to let users trade the
space savings for a memory save image created while the guest is live,
to minimize downtime.
This patch adds a flag that causes the guest not to be paused before
taking the snapshot.
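A sketch of requesting a live external checkpoint (snapxml is assumed to
be a snapshot XML describing external memory/disk destinations):

  virDomainSnapshotPtr snap =
      virDomainSnapshotCreateXML(dom, snapxml,
                                 VIR_DOMAIN_SNAPSHOT_CREATE_LIVE);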
* include/libvirt/libvirt.h.in:
  - add new paused reason: VIR_DOMAIN_PAUSED_SNAPSHOT
  - add new flag for taking snapshot: VIR_DOMAIN_SNAPSHOT_CREATE_LIVE
* tools/virsh-domain-monitor.c:
  - add string representation for VIR_DOMAIN_PAUSED_SNAPSHOT
* tools/virsh-snapshot.c:
  - add support for VIR_DOMAIN_SNAPSHOT_CREATE_LIVE
* tools/virsh.pod:
  - add docs for --live option added to use
    VIR_DOMAIN_SNAPSHOT_CREATE_LIVE flag
Handle the new type of block copy event and info. Of course,
this patch does nothing until a later patch actually allows the
creation/abort of a block copy job.
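A rough sketch of a client reacting to the new status in a block-job
event callback (prototype per virConnectDomainEventBlockJobCallback):

  static void
  blockJobCallback(virConnectPtr conn, virDomainPtr dom,
                   const char *disk, int type, int status, void *opaque)
  {
      if (status == VIR_DOMAIN_BLOCK_JOB_READY) {
          /* the copy is now in sync with the source; the caller may
           * decide to pivot to or abandon the copy */
      }
  }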
* include/libvirt/libvirt.h.in (VIR_DOMAIN_BLOCK_JOB_READY): New
block job status.
* src/libvirt.c (virDomainBlockRebase): Document the event.
* src/qemu/qemu_monitor_json.c (eventHandlers): New event.
(qemuMonitorJSONHandleBlockJobReady): New function.
(qemuMonitorJSONGetBlockJobInfoOne): Translate new job type.
(qemuMonitorJSONHandleBlockJobImpl): Handle new event and job type.
* src/qemu/qemu_process.c (qemuProcessHandleBlockJob): Recognize
the event to minimize snooping.
* src/qemu/qemu_driver.c (qemuDomainBlockJobImpl): Snoop a successful
info query to save effort on a pivot request.
A new macro, VIR_CPU_USED, has been added to facilitate the
interpretation of cpu maps.
Further, the other cpumap macros have been hardened against invocations
like VIR_CPU_USE(cpumap + 1, cpu).
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Adding a new API to obtain information about the
host node's present, online and offline CPUs.

  int virNodeGetCPUMap(virConnectPtr conn,
                       unsigned char **cpumap,
                       unsigned int *online,
                       unsigned int flags);
The function will return the number of CPUs present on the host,
or -1 on failure.
If cpumap is non-NULL, virNodeGetCPUMap will allocate an array
containing a bit map representation of the online CPUs. It's
the caller's responsibility to deallocate cpumap using free().
If online is non-NULL, the variable pointed to will contain
the number of online host node CPUs.
The flags argument has been added to support future extensions
and must be set to 0.
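A usage sketch combining this with the VIR_CPU_USED macro from the
previous patch:

  unsigned char *cpumap = NULL;
  unsigned int online = 0;
  int ncpus, cpu;

  ncpus = virNodeGetCPUMap(conn, &cpumap, &online, 0);
  for (cpu = 0; cpu < ncpus; cpu++) {
      if (VIR_CPU_USED(cpumap, cpu)) {
          /* CPU 'cpu' is online */
      }
  }
  free(cpumap);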
Extend the driver structure by nodeGetCPUMap entry in support of the
new API virNodeGetCPUMap.
Added implementation of virNodeGetCPUMap to libvirt.c
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Commit 12ad7435 added new functions (virNodeGetMemoryParameters,
virNodeSetMemoryParameters) into the section of the file reserved
for deprecated names. Fix this by moving things earlier; split
into two patches to make git diff easier to read.
* include/libvirt/libvirt.h.in: Move virNodeGetMemoryParameters
and friends earlier, add a note to prevent relapse.
Commit 12ad7435 added new functions (virNodeGetMemoryParameters,
virNodeSetMemoryParameters) into the section of the file reserved
for deprecated names. Fix this by moving things earlier; split
into two patches to make git diff easier to read.
* include/libvirt/libvirt.h.in: Move virTypedParameter earlier.
The previously introduced virFile{Lock,Unlock} APIs provide a
way to acquire/release fcntl() locks on individual files. For
some unknown reason, though, the POSIX spec says that fcntl() locks
are released when *any* file descriptor referring to the same file
is closed by the process. In the following sequence

  threadA: fd1 = open("foo")
  threadB: fd2 = open("foo")
  threadA: virFileLock(fd1)
  threadB: virFileLock(fd2)
  threadB: close(fd2)

you'd expect threadA to come out holding a lock on 'foo', and
indeed it does hold a lock for a very short time. Unfortunately,
when threadB does close(fd2) this releases the lock associated
with fd1. For the current libvirt use case for virFileLock -
pidfiles - this doesn't matter, since the lock is acquired
at startup while single-threaded and never released until
exit.
To provide a more generally useful API though, it is necessary
to introduce a slightly higher level abstraction, which is to
be referred to as a "lockspace". This is to be provided by
a virLockSpacePtr object in src/util/virlockspace.{c,h}. The
core idea is that the lockspace keeps track of what files are
already open+locked. This means that when a 2nd thread comes
along and tries to acquire a lock, it doesn't end up opening
and closing a new FD. The lockspace just checks the current
list of held locks and immediately returns VIR_ERR_RESOURCE_BUSY.
NB, the API as it stands is designed on the basis that the
files being locked are not being otherwise opened and used
by the application code. One approach to using this API is to
acquire locks based on a hash of the filepath.
eg to lock /var/lib/libvirt/images/foo.img the application
might do

  virLockSpacePtr lockspace = virLockSpaceNew("/var/lib/libvirt/imagelocks");
  lockname = md5sum("/var/lib/libvirt/images/foo.img");
  virLockSpaceAcquireLock(lockspace, lockname);
NB, in this example, the caller should ensure that the path
is canonicalized before calculating the checksum.
It is also possible to do locks directly on resources by
using a NULL lockspace directory and then using the file
path as the lock name, eg

  virLockSpacePtr lockspace = virLockSpaceNew(NULL);
  virLockSpaceAcquireLock(lockspace, "/var/lib/libvirt/images/foo.img");
This is only safe to do though if no other part of the process
will be opening the files. This will be the case when this
code is used inside the soon-to-be-reposted virlockd daemon.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
This patch adds support for the SUSPEND_DISK event, both as a lifecycle
event and as a separate event. Support is added for QEMU: machines are
changed to PMSUSPENDED, but as QEMU sends SHUTDOWN afterwards, the state
changes to shut-off. This and much more needs to be done for libvirt to
work with transient devices, wake-ups, etc.; this patch is not aiming
for that functionality.
The upstream kernel introduced a new sysfs knob "merge_across_nodes" to
specify whether pages from different NUMA nodes can be merged. When set
to 0, only pages which physically reside in the memory area of the
same NUMA node can be merged. When set to 1, pages from all nodes
can be merged.
This patch supports the tuning by adding a new param field
"shm_merge_across_nodes".
Using the VIR_DOMAIN_XML_MIGRATABLE flag, one can request a domain's XML
configuration that is suitable for migration or save/restore. Such XML
may contain extra run-time data internal to libvirt, and some default
configuration may be removed for better compatibility of the XML with
older libvirt releases.
This flag may serve as an easy way to get the XML that can be passed
(after desired modifications) to APIs that accept custom XMLs, such as
virDomainMigrate{,ToURI}2 or virDomainSaveFlags.
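For example (a sketch; dconnuri names the destination libvirtd):

  char *xml = virDomainGetXMLDesc(dom, VIR_DOMAIN_XML_MIGRATABLE);

  /* ... apply any desired modifications to xml ... */
  virDomainMigrateToURI2(dom, dconnuri, NULL, xml,
                         VIR_MIGRATE_LIVE, NULL, 0);
  free(xml);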
https://www.gnu.org/licenses/gpl-howto.html recommends that
the 'If not, see <url>.' phrase be a separate sentence.
* tests/securityselinuxhelper.c: Remove doubled line.
* tests/securityselinuxtest.c: Likewise.
* globally: s/; If/. If/
These enums were originally part of the flags for virNetworkUpdate, and
when they were moved into their own enum, the numbers weren't adjusted
accordingly, causing the commands to start at value 2 instead of 1.
This causes problems for things like ENUM_IMPL, which wants a string
for every value in the requested range, including those not used in
the enum.
This patch adds a new public API virNetworkUpdate that will permit
updating an existing network configuration without requiring that the
network be destroyed/restarted for the changes to take effect.
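A sketch of the intended use, with illustrative command/section values
(adding a DHCP host entry to a running network):

  virNetworkUpdate(net,
                   VIR_NETWORK_UPDATE_COMMAND_ADD_LAST,
                   VIR_NETWORK_SECTION_IP_DHCP_HOST,
                   -1,   /* parentIndex: don't care, use first match */
                   "<host mac='52:54:00:00:00:01' ip='192.168.122.45'/>",
                   VIR_NETWORK_UPDATE_AFFECT_LIVE |
                   VIR_NETWORK_UPDATE_AFFECT_CONFIG);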
A block commit moves data in the opposite direction of block pull.
Block pull reduces the chain length by dropping backing files after
data has been pulled into the top overlay, and is always safe; block
commit reduces the chain length by dropping overlays after data has
been committed into the backing file, and any files that depended
on base but not on top are invalidated at any point where they have
unallocated data that is now pointing to changed contents in base.
Both directions are useful, however: a qcow2 layer that is more than
50% allocated will typically be faster with a pull operation, while
a qcow2 layer with less than 50% allocation will be faster as a
commit operation. Committing across multiple layers can be more
efficient than repeatedly committing one layer at a time, but
requires extra support from the hypervisor.
This API matches Jeff Cody's proposed qemu command 'block-commit':
https://lists.gnu.org/archive/html/qemu-devel/2012-09/msg02226.html
Jeff's command is still in the works for qemu 1.3, and may gain
further enhancements, such as the ability to control on-error
handling (it will be comparable to the error handling Paolo is
adding to 'drive-mirror', so a similar solution will be needed
when I finally propose virDomainBlockCopy with more functionality
than the basics supported by virDomainBlockRebase). However, even
without qemu support, this API will be useful for _offline_ block
commits, by wrapping qemu-img calls and turning them into a block
job, so this API is worth committing now.
For some examples of how this will be implemented, all starting
with the chain: base <- snap1 <- snap2 <- active
+ These are equivalent:
    virDomainBlockCommit(dom, disk, NULL, NULL, 0, 0)
    virDomainBlockCommit(dom, disk, NULL, "active", 0, 0)
    virDomainBlockCommit(dom, disk, "base", NULL, 0, 0)
    virDomainBlockCommit(dom, disk, "base", "active", 0, 0)
but cannot be implemented for online qemu with round 1 of
Jeff's patches; and for offline images, it would require
three back-to-back qemu-img invocations unless qemu-img
is patched to allow more efficient multi-layer commits;
the end result would be 'base' as the active disk with
contents from all three other files, where 'snap1' and
'snap2' are invalid right away, and 'active' is invalid
once any further changes to 'base' are made.
+ These are equivalent:
    virDomainBlockCommit(dom, disk, "snap2", NULL, 0, 0)
    virDomainBlockCommit(dom, disk, NULL, NULL, 0, _SHALLOW)
they cannot be implemented for online qemu, but for offline,
it is a matter of 'qemu-img commit active', so that 'snap2'
is now the active disk with contents formerly in 'active'.
+ Similarly:
    virDomainBlockCommit(dom, disk, "snap2", NULL, 0, _DELETE)
for an offline domain will merge 'active' into 'snap2', then
delete 'active' to avoid leaving a potentially invalid file
around.
+ This version:
    virDomainBlockCommit(dom, disk, NULL, "snap2", 0, _SHALLOW)
can be implemented online with 'block-commit' passing a base of
snap1 and a top of snap2; and can be implemented offline by
'qemu-img commit snap2' followed by 'qemu-img rebase -u
-b snap1 active'
* include/libvirt/libvirt.h.in (virDomainBlockCommit): New API.
* src/libvirt.c (virDomainBlockCommit): Implement it.
* src/libvirt_public.syms (LIBVIRT_0.10.2): Export it.
* src/driver.h (virDrvDomainBlockCommit): New driver callback.
* docs/apibuild.py (CParser.parseSignature): Add exception.
* include/libvirt/libvirt.h.in: Add macros for the param fields,
  declare the APIs.
* src/driver.h: New methods for the driver struct.
* src/libvirt.c: Implement the public APIs.
* src/libvirt_public.syms: Export the public symbols.
This is to list the secret objects. It supports filtering the secrets
by their storage location and by whether they are private.
include/libvirt/libvirt.h.in: Declare enum virConnectListAllSecretFlags
and virConnectListAllSecrets.
python/generator.py: Skip auto-generating
src/driver.h: (virDrvConnectListAllSecrets)
src/libvirt.c: Implement the public API
src/libvirt_public.syms: Export the symbol to public
This is to list the network filter objects. No flags are supported.
include/libvirt/libvirt.h.in: Declare enum virConnectListAllNWFilterFlags
and virConnectListAllNWFilters.
python/generator.py: Skip auto-generating
src/driver.h: (virDrvConnectListAllNWFilters)
src/libvirt.c: Implement the public API
src/libvirt_public.syms: Export the symbol to public
This is to list the node device objects. It supports filtering the
results by capability type.
include/libvirt/libvirt.h.in: Declare enum virConnectListAllNodeDeviceFlags
and virConnectListAllNodeDevices.
python/generator.py: Skip auto-generating
src/driver.h: (virDrvConnectListAllNodeDevices)
src/libvirt.c: Implement the public API
src/libvirt_public.syms: Export the symbol to public
This is to list the interface objects; supported filtering flags
are: active|inactive.
include/libvirt/libvirt.h.in: Declare enum virConnectListAllInterfaceFlags
and virConnectListAllInterfaces.
python/generator.py: Skip auto-generating
src/driver.h: (virDrvConnectListAllInterfaces)
src/libvirt.c: Implement the public API
src/libvirt_public.syms: Export the symbol to public
This is to list the network objects; supported filtering flags
are: active|inactive, persistent|transient, autostart|no-autostart.
include/libvirt/libvirt.h.in: Declare enum virConnectListAllNetworkFlags
and virConnectListAllNetworks.
python/generator.py: Skip auto-generating
src/driver.h: (virDrvConnectListAllNetworks)
src/libvirt.c: Implement the public API
src/libvirt_public.syms: Export the symbol to public
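A sketch of the common calling pattern shared by these virConnectListAll*
APIs, here listing only active, persistent networks:

  virNetworkPtr *nets = NULL;
  int n, i;

  n = virConnectListAllNetworks(conn, &nets,
                                VIR_CONNECT_LIST_NETWORKS_ACTIVE |
                                VIR_CONNECT_LIST_NETWORKS_PERSISTENT);
  for (i = 0; i < n; i++)
      virNetworkFree(nets[i]);
  free(nets);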
Simply returns the storage volume objects. No filter flags are
supported.
include/libvirt/libvirt.h.in: Declare the API
python/generator.py: Skip the function for generating. virStoragePool.py
will be added in a later patch.
src/driver.h: virDrvStoragePoolListVolumesFlags
src/libvirt.c: Implementation for the API.
src/libvirt_public.syms: Export the symbol to public