This patch moves all src/parallels/parallels* files to vz/vz*
and fixes the build accordingly.
No functional changes.
Signed-off-by: Maxim Nestratov <mnestratov@parallels.com>
This is just one of the simplest functions: it returns the string
"Clients: X", where X is the number of clients connected to the daemon's
first subserver (the original one), so it can be tested using virsh,
ipython, etc.
The subserver is obtained by incrementing its reference
counter (similarly to how qemu capabilities are obtained), so there is no
deadlock with the admin subserver in this API.
Here you can see how functions should be named in the client (virAdm*)
and in the server (adm*).
There is also a @flags parameter that must be 0, which helps test
proper error propagation back to the client.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
For this to be properly separated from other protocols used by the
server, a second server is added which gives its clients access to the
whole virNetDaemon.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Initial scratch of the admin library. It has its own virAdmConnectPtr
that inherits from virAbstractConnectPtr and thus trivially supports
error reporting.
A pkg-config file is added and the spec file adjusted as well.
Since the library should be "minimalistic" and not depend on any other
library, the list of files is especially crafted for it. Most of them
could have been split into their own sub-libraries that would be LIBADD'd
to libvirt_util, libvirt_net_rpc and libvirt_setuid_rpc_client to
minimize the number of object files being built, but that's a
refactoring that isn't the original aim of this commit.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
qemuBlockJobSyncBegin and qemuBlockJobSyncEnd delimit a region of code
where block job events are processed "synchronously".
qemuBlockJobSyncWait and qemuBlockJobSyncWaitWithTimeout wait for an
event generated by a block job.
The Wait* functions may be called multiple times while the synchronous
block job is active. Any pending block job event will be processed
only when Wait* or End is called. disk->blockJobStatus is reset by
these functions, so if it is needed, a pointer to a
virConnectDomainEventBlockJobStatus variable should be passed as the
last argument. It is safe to pass NULL if you do not care about the
block job status.
All functions assume the VM object is locked. The Wait* functions will
unlock the object for as long as they are waiting. They will return -1
and report an error if the domain exits before an event is received.
Typical use is as follows:
virQEMUDriverPtr driver;
virDomainObjPtr vm; /* locked */
virDomainDiskDefPtr disk;
virConnectDomainEventBlockJobStatus status;
qemuBlockJobSyncBegin(disk);
... start block job ...
if (qemuBlockJobSyncWait(driver, vm, disk, &status) < 0) {
    /* domain died while waiting for event */
    ret = -1;
    goto error;
}
... possibly start other block jobs or wait for further events ...
qemuBlockJobSyncEnd(driver, vm, disk, NULL);
To perform other tasks periodically while waiting for an event:
virQEMUDriverPtr driver;
virDomainObjPtr vm; /* locked */
virDomainDiskDefPtr disk;
virConnectDomainEventBlockJobStatus status;
unsigned long long timeout = 500 * 1000ull; /* milliseconds */
qemuBlockJobSyncBegin(disk);
... start block job ...
do {
    ... do other task ...
    if (qemuBlockJobSyncWaitWithTimeout(driver, vm, disk,
                                        timeout, &status) < 0) {
        /* domain died while waiting for event */
        ret = -1;
        goto error;
    }
} while (status == -1);
qemuBlockJobSyncEnd(driver, vm, disk, NULL);
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
Each thread can use a thread-local variable to keep the name of the job
which is currently running in the thread.
The virThreadJobSetWorker API is supposed to be called once by any
thread which is used as a worker, i.e., it is waiting in a pool, woken
up to do a job, and returned back to the pool.
The virThreadJobSet/virThreadJobClear APIs are to be called at the
beginning/end of each job.
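A minimal sketch of how a worker thread would use these APIs
(illustrative only; the loop body is hypothetical and the exact
signatures may differ):

    static void workerMain(void)
    {
        int ret;

        /* name this thread as a worker once, when it starts */
        virThreadJobSetWorker("my-worker");

        for (;;) {
            /* ... wait in the pool until a job arrives ... */

            /* record the name of the job this thread is about to run */
            virThreadJobSet("myJob");
            ret = 0;
            /* ... do the actual work, setting ret accordingly ... */

            /* clear the job name once the job is finished */
            virThreadJobClear(ret);
        }
    }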
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Adds the port type definitions and methods that will be used to bind
interfaces to the Midonet virtual ports.
virtnetdevmidonet.c adds support for binding and unbinding the ports by
calling into the Midonet Host Agent control command line (installed
with the midolman package).
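The binding itself boils down to spawning that command-line tool via
virCommand, roughly like this (a sketch only; the mm-ctl arguments shown
here are an assumption, not taken from this patch):

    /* bind a host interface to a Midonet virtual port (illustrative) */
    virCommandPtr cmd = virCommandNew(MMCTL); /* MMCTL: assumed mm-ctl path */

    virCommandAddArgList(cmd, "--bind-port", portUuidStr, ifname, NULL);
    if (virCommandRun(cmd, NULL) < 0)
        goto cleanup;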
Signed-off-by: Antoni Segura Puimedon <toni+libvirt@midokura.com>
Delete .po files which contain zero translated strings. Refresh
the .pot file and pull down latest translations from Zanata.
When refreshing the libvirt.pot, it can be pushed to zanata
and .po files resynchronized using
# cd po
# rm libvirt.pot
# make libvirt.pot
# zanata-cli push
# zanata-cli pull
Note there is no need for 'make update-po', as long as you do
a zanata push, immediately followed by zanata pull, as the
Zanata server will ensure the .po files downloaded match the
just pushed .pot file.
Note at time of writing, it is strongly recommended to only
use the zanata Java client binary (zanata-cli), and not the
python client binary (zanata). This is because the moderately
large size of the libvirt pot file is causing errors when the
python client tries to push, which have been known to result
in the loss of all translations on the server, as well as
preventing the upload of .po files themselves :-(
For a while now there have been two places that gather information about
NUMA-related guest configuration. While the XML can't be changed, we can
at least store the data in one place in the definition.
Rename the numatune_conf.[ch] files to numa_conf as later patches will
move the rest of the definitions from the cpu definition to this one.
The virDBusMethodCall method has a DBusError as one of its
parameters. If the caller wants to pass a non-NULL value
for this, it immediately makes the calling code require
DBus at build time. This has led to breakage of non-DBus
builds several times. It is desirable that only the virdbus.c
file should need WITH_DBUS conditionals, so we must ideally
remove the DBusError parameter from the method.
We can't simply raise a libvirt error, since the whole point
of this parameter is to give the callers a way to check if
the error is one they want to ignore, without having the logs
polluted with an error message. So, we add a virErrorPtr
parameter which the caller can then either ignore or raise
using the new virReportErrorObject method.
This new method is distinct from virSetError in that it
ensures the logging hooks are run.
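The intended calling pattern looks roughly like this (a sketch under the
assumption that the DBus call helper takes a virErrorPtr; the
ignorable-error check is a hypothetical placeholder):

    virError dbusError;
    memset(&dbusError, 0, sizeof(dbusError));

    if (virDBusMethodCall(..., &dbusError) < 0) {
        if (errorCanBeIgnored(&dbusError)) {
            /* silently ignore; nothing is logged */
        } else {
            /* raise the saved error so the logging hooks run */
            virReportErrorObject(&dbusError);
        }
    }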
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Moving the code for parsing and formatting network routes to
networkcommon_conf makes it possible to reuse those routes for domains.
The route definition has been hidden to help reduce the number of
unnecessary checks in the format function.
systemd-machined introduced a new method CreateMachineWithNetwork
that obsoletes CreateMachine. It expects to be given a list of
VETH/TAP device indexes for the host side device(s) associated
with a container/machine.
This falls back to the old CreateMachine method when the new
one is not supported.
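The fallback amounts to something like this (illustrative sketch only;
the helper names are hypothetical):

    /* try the newer method, passing the VETH/TAP ifindex list */
    if (callCreateMachineWithNetwork(nicindexes, nnicindexes) < 0) {
        if (!machinedKnowsMethod("CreateMachineWithNetwork")) {
            /* older systemd-machined: fall back to the original method,
             * which takes no network device indexes */
            if (callCreateMachine() < 0)
                return -1;
        } else {
            return -1;
        }
    }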
Introduce a parser/formatter for the xl config format. Since the
deprecation of xm/xend, the VM config file format has diverged as
new features are added to libxl. This patch adds support for parsing
and formatting the xl config format. It supports the existing xm config
format, plus adds support for spice graphics and xl disk config syntax.
Disk config is specified a bit differently in xl as compared to xm. In
xl, disk config consists of comma-separated positional parameters and
keyword/value pairs. Positional parameters are
specified as follows
target, format, vdev, access
Supported keys for key=value options are
devtype, backendtype
The positional parameters can also be specified in key/value form. For
example, the following xl disk configs are equivalent
/dev/vg/guest-volume,,hda
/dev/vg/guest-volume,raw,hda,rw
format=raw, vdev=hda, access=rw, target=/dev/vg/guest-volume
See $xen_sources/docs/misc/xl-disk-configuration.txt for more details.
xl disk config is parsed with the help of xlu_disk_parse() from
libxlutil, libxl's utility library. Although the library exists
in all Xen versions supported by the libxl virt driver, only
recently has the corresponding header file been included. A check
for the header is done in configure.ac. If not found, xlu_disk_parse()
is declared externally.
Signed-off-by: Kiarie Kahurani <davidkiarie4@gmail.com>
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Introduce a Xen xl parser
This parser allows users to convert the new xl disk format and
spice graphics config to libvirt XML format and vice versa. Regarding
the spice graphics config, the code is pretty much straightforward.
For the disk {formatting, parsing}, this parser takes care of the new
xl format, which includes positional parameters and key/value parameters.
In xl format disk config a <diskspec> consists of parameters separated by
commas. If the parameters do not contain an '=' they are automatically
assigned to certain options following the order below
target, format, vdev, access
The above are the only mandatory parameters in the <diskspec>, but there
are many more disk config options. These options can be specified as
key=value pairs. This takes care of the rest of the options, such as
devtype, backend, backendtype, script, direct-io-safe, and so on.
The positional parameters can also be specified in key/value form,
for example
/dev/vg/guest-volume,,hda
/dev/vg/guest-volume,raw,hda,rw
format=raw, vdev=hda, access=rw, target=/dev/vg/guest-volume
are all interpreted as the same config.
In xm format, the above diskspec would be written as
phy:/dev/vg/guest-volume,hda,w
The disk parser is based on the same parser used successfully by
the Xen project for several years now. Ian Jackson authored the
scanner, which is used by this commit with minimal changes. Only
the PREFIX option is changed, to produce function and file names
more consistent with libvirt's convention.
Signed-off-by: Kiarie Kahurani <davidkiarie4@gmail.com>
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Reboot requires more sophistication and is left as a future work item --
but at least part of the plumbing is in place.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If detect_scsi_host_caps reports errors but keeps libvirtd going on
startup, the user is misled by the error messages. Transforming them
into warnings still shows the problems, but indicates they are not fatal.
We use 'typedef IMedium IHardDisk' to make IHardDisk inherit from
IMedium (this is actually how the hierarchy looked in the vbox 2.2 and
3.0 C++ API).
So when calling
VBOX_MEDIUM_FUNC_ARG*(IHardDisk, func, args)
we can directly replace it with
gVBoxAPI.UIMedium.func(IHardDisk, args)
When dealing with these two types, we get some rules from their
hierarchy relationship.
When using IHardDisk and IMedium as input, we can't pass an
IMedium where an IHardDisk is expected. For example:
gVBoxAPI.UIHardDisk.func(IHardDisk *hardDisk, args)
Here, we can't pass an IMedium * as the argument.
When using IHardDisk and IMedium as output, we can't pass an
IHardDisk where an IMedium is expected. For example:
gVBoxAPI.UIMachine.GetMedium(IMedium **out)
Here, we can't pass an IHardDisk ** as the argument. If this case
does happen, we either change the API to GetHardDisk or write a
new one.
This makes it possible to implement libvirt functions that use streams,
such as virDomainScreenshot, without the need to store the downloaded
data in a temporary file first. The stream driver directly interacts
with libcurl to send and receive data.
The driver uses the libcurl multi interface, which allows a transfer to
be done in multiple curl_multi_perform() calls. The easy interface would
do the whole transfer in a single curl_easy_perform() call. This doesn't
work with the libvirt stream API, which is driven by multiple calls to
the virStreamSend() and virStreamRecv() functions.
The curl_multi_wait() function is used to do blocking operations, but it
was only added in libcurl 7.28.0. For older versions it is emulated using
the socket callback of the multi interface.
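A single blocking step with the multi interface looks roughly like this
(a minimal sketch, assuming 'multi' is a CURLM * with the easy handle
already attached; this is not the driver's actual code):

    int running = 0;
    int numfds = 0;

    /* block until the transfer can make progress (or 1s passes) */
    if (curl_multi_wait(multi, NULL, 0, 1000, &numfds) != CURLM_OK)
        return -1;

    /* let libcurl do as much work as it can right now */
    if (curl_multi_perform(multi, &running) != CURLM_OK)
        return -1;

    /* 'running' drops to 0 once the whole transfer has finished */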
The current driver only supports blocking operations. There is already
some code in place for non-blocking mode but it is not complete.
This patch rewrites two public APIs: vboxNetworkUndefine
and vboxNetworkDestroy. They share the same core function,
vboxNetworkUndefineDestroy, so I merged them into one patch.
Add files parallels_sdk.c and parallels_sdk.h for code
which works with the SDK, so that libvirt's code is not mixed
with code dealing with the Parallels SDK.
To use the Parallels SDK you must first call the PrlApi_InitEx function,
and then you will be able to connect to a server with the
PrlSrv_LoginLocalEx function. When you are done, you must call
PrlApi_Deinit. So let's call PrlApi_InitEx on the first .connectOpen,
count the number of connections, and deinitialize when this counter
drops to zero.
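The counting pattern is roughly this (a sketch; locking is omitted and
the function names are illustrative):

    static int prlsdkRefCount;

    static int prlsdkInit(void)
    {
        if (prlsdkRefCount == 0) {
            /* ... call PrlApi_InitEx() here ... */
        }
        prlsdkRefCount++;
        return 0;
    }

    static void prlsdkDeinit(void)
    {
        if (--prlsdkRefCount == 0) {
            /* ... call PrlApi_Deinit() here ... */
        }
    }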
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
Spawning the pkcheck program every time a permission check is
required is hugely expensive on CPU. The pkcheck program is just
a dumb wrapper for the DBus API, so rewrite the code to use the
DBus API directly. This also simplifies error handling a bit.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
There are now two places in libvirt which use polkit. Currently
they use pkexec, which is set to be replaced by direct DBus API
calls. Add a common API which they will both be able to use for
this purpose.
No tests are added at this time, since the impl will be gutted
in favour of a DBus API call shortly.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
XM and XL config are very similar. Disks are specified differently
in XL, but the old XM disk config is still supported by XL. XL also
supports new config like spice that was never supported by XM.
This patch moves all the common parsing and formatting functions to
the new file xen_common.c and adapts the XM parser/formatter accordingly.
This restructuring paves the way for introducing an XL parser/formatter
in the future.
While moving the code, fixup whitespace, comments, and style issues.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
src/xenxs contains parsing/formatting functions for the various xen
config formats, and is better named src/xenconfig.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Introduce vbox_uniformed_api to deal with version conflicts. Use
vbox_install_api to register the correct vboxUniformedAPI for the
vbox version in use.
vboxConnectOpen has been rewritten.
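Conceptually the registration is a per-version dispatch table, roughly
like this (a sketch; the struct contents and installer names are
illustrative, not the actual layout):

    typedef struct {
        uint32_t APIVersion;
        /* ... one function-pointer group per API family, e.g. UIMedium,
         *     UIMachine, UISession ... */
    } vboxUniformedAPI;

    static vboxUniformedAPI gVBoxAPI;

    void vbox_install_api(uint32_t uVersion)
    {
        if (uVersion >= 4002000)
            vbox42InstallUniformedAPI(&gVBoxAPI); /* hypothetical */
        else
            vbox40InstallUniformedAPI(&gVBoxAPI); /* hypothetical */
    }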
Implement a ZFS storage backend driver. It is currently supported
only on FreeBSD because of ZFS limitations on Linux.
Features supported:
- pool-start, pool-stop
- pool-info
- vol-list
- vol-create / vol-delete
A pool definition looks like this:
<pool type='zfs'>
  <name>myzfspool</name>
  <source>
    <name>actualpoolname</name>
  </source>
</pool>
The 'actualpoolname' value is the name of the pool on the system,
as shown by the 'zpool list' command. A target makes no sense
here because volume paths are always /dev/zvol/$poolname/$volname.
The user has to create the pool on their own; this driver doesn't
currently support pool creation.
A volume could be used with Qemu by adding an entry like this:
<disk type='volume' device='disk'>
  <driver name='qemu' type='raw'/>
  <source pool='myzfspool' volume='vol5'/>
  <target dev='hdc' bus='ide'/>
</disk>
There were numerous places where the numatune configuration (and thus
the domain config as well) was changed in different ways. In some
places this even resulted in the persistent domain definition not being
stable (it would change across daemon restarts).
In order to make handling of the numatune config uniform, all the
internals are now directly accessible only in numatune_conf.c, and
accessors must be used outside this file.
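In practice the struct itself stays private to numatune_conf.c and other
files go through accessor functions, roughly like this (a sketch; the
exact accessor names and signatures are illustrative):

    /* numatune_conf.c -- the only place that sees the internals */
    struct _virDomainNumatune {
        virDomainNumatuneMemMode memMode;
        /* ... placement, per-node settings, ... */
    };

    /* numatune_conf.h -- everyone else uses accessors */
    virDomainNumatuneMemMode virDomainNumatuneGetMode(virDomainNumatunePtr n);
    int virDomainNumatuneSetMemMode(virDomainNumatunePtr n,
                                    virDomainNumatuneMemMode mode);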
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>