This patch enables the "none" USB controller for qemu guests and adds
validation of hot-plugged devices if the guest has USB disabled.
This patch also adds a set of tests to check parsing of domain XMLs that
use the "none" controller and some forbidden configurations involving it.
This patch adds helpers that validate a domain's device configuration.
This will be needed later on to verify devices being hot-plugged into
guests. If the guest has no USB bus, then it's not valid to plug a USB
device into that guest.
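A minimal sketch of what such a helper can look like, assuming the usual
virDomainDef controller fields and the new "none" model value (the
function name is illustrative, not the actual implementation):

    static bool
    exampleDomainSupportsUSB(virDomainDefPtr def)
    {
        int i;

        for (i = 0; i < def->ncontrollers; i++) {
            virDomainControllerDefPtr cont = def->controllers[i];

            /* the guest has a usable USB bus only if it has a USB
             * controller whose model is not "none" */
            if (cont->type == VIR_DOMAIN_CONTROLLER_TYPE_USB &&
                cont->model != VIR_DOMAIN_CONTROLLER_MODEL_USB_NONE)
                return true;
        }
        return false;
    }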
Libvirt adds a USB controller to the guest even if the user does not
specify any in the XML, for backwards-compatibility reasons.
To allow disabling USB for a guest, this patch adds a new USB controller
type "none" that disables USB support for the guest.
The 'srcSpec' option to the virsh command find-storage-pool-sources
is optional for the logical storage pool type, but mandatory for the
netfs and iscsi types.
When the option is missing for netfs or iscsi, libvirt reports an XML
parsing error caused by the null srcSpec string.
before:
error: Failed to find any netfs pool sources
error: (storage_source_specification):1: Document is empty
(null)
after:
error: pool type 'iscsi' requires option --srcSpec for source discovery
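The added check amounts to something like this sketch in virsh (variable
names are illustrative, not the exact code):

    /* srcSpec is the value of --srcSpec, or NULL when it was omitted */
    if (!srcSpec &&
        (STREQ(type, "netfs") || STREQ(type, "iscsi"))) {
        vshError(ctl, _("pool type '%s' requires option --srcSpec "
                        "for source discovery"), type);
        return false;
    }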
One of our latest patches added some files to .gitignore. However,
they were not added in the right place, leaving the file unsorted. Since
my git setup keeps the contents of these files sorted, fix this issue as
it keeps showing up in git status.
'make check' was rebuilding the binaries that were just overridden,
so for extra safety also override the C program.
Also, daemon-conf isn't built anymore, so remove it from the list.
To create a new VM in Parallels Cloud Server we should issue the
"prlctl create" command and give the path to the directory where
the VM should be created. The VM's storage will live in that
directory later. So in this first version, find out the location
of the VM's first hard disk and create the VM there.
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
Parallels Cloud Server has one serious discrepancy with libvirt:
libvirt stores domain configuration files in one place, and storage
files in other places (with the API of storage pools and storage volumes).
Parallels Cloud Server stores all domain data in a single directory;
for example, you may have a domain named fedora-15, which will be
located in '/var/parallels/fedora-15.pvm', and its hard disk image will be
in '/var/parallels/fedora-15.pvm/harddisk1.hdd'.
I've decided to create a storage driver which produces pseudo-volumes
(XML files with a volume description) that are 'converted' to
real disk images after being attached to a VM.
So if someone creates a VM with one hard disk using virt-manager,
virt-manager first creates a new volume and then defines a
domain. We can look up the volume by path in the domain XML definition
and find out the location of the new domain and the size of its hard disk.
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
Add the parallelsDomainDefineXML function; for now it works only for
existing domains.
It's too hard to convert libvirt's XML domain configuration into
the Parallels one, so I've decided to compare virDomainDef structures:
the current domain definition and the one created from the XML given to
the function, and change only the parameters that differ.
Currently only the name, description, number of CPUs, memory amount
and video memory can be changed.
A video device and a console are added, because libvirt assumes that
a VM must always have one video device if it has any graphics, and
one console.
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
Add support for collecting information about serial
ports. This change is needed mostly as an example;
support for other devices will be added later.
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
The Parallels driver is 'stateless', like the vmware or openvz drivers.
It collects information about domains during startup using the
command-line utility prlctl. VMs in Parallels are identified by UUIDs
or unique names, which can be used as the respective fields in the
virDomainDef structure. Currently only basic info, like the
description, number of virtual CPUs and memory amount, is implemented.
Querying device information will be added in the next patches.
Parallels doesn't support non-persistent domains - you can't run
a domain from just a disk image; it must always be registered
in the system.
Functions for querying domain info have simply been copied from the
test driver with some changes - they extract the needed data from the
previously created list of virDomainObj objects.
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
Add a virCommandNewVAList function, which is equivalent to
virCommandNewArgList but takes a va_list instead of a variable number
of arguments.
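For example, a caller that has its own varargs can now forward them
(the wrapper name here is hypothetical):

    #include <stdarg.h>

    static virCommandPtr
    exampleCommandNewArgList(const char *binary, ...)
    {
        virCommandPtr cmd;
        va_list list;

        va_start(list, binary);
        /* build the command from the forwarded argument list */
        cmd = virCommandNewVAList(binary, list);
        va_end(list);

        return cmd;
    }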
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
Parallels Cloud Server is a cloud-ready virtualization
solution that allows users to simultaneously run multiple virtual
machines and containers on the same physical server.
More information can be found here: http://www.parallels.com/products/pcs/
A beta version of Parallels Cloud Server can also be downloaded there.
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
The 'check-symfile' test case was checking the contents of
libvirt.syms against libvirt.so + all of libvirt_driver_XXX.so
This was in fact bogus - libvirt.syms should only refer to
stuff in libvirt.so, but it had some symbols from the various
driver modules in it too. Now that libvirt.syms has been
fixed, the check-symfile test can be simplified to only
consider libvirt.so
The nwfilter and secrets drivers are both stateful and are already
linked directly to libvirtd. Linking them to libvirt.so is thus
wrong, likewise exporting their symbols in libvirt.so is wrong
The network driver is stateful, so it is linked directly to libvirtd,
rather than libvirt.so. Thus there are no network symbols to be exported
in libvirt.so, and libvirt_network.syms can be deleted
Otherwise, a build may fail with:
lxc/lxc_container.c: In function 'lxcContainerDropCapabilities':
lxc/lxc_container.c:1662:46: error: unused parameter 'keepReboot' [-Werror=unused-parameter]
* src/lxc/lxc_container.c (lxcContainerDropCapabilities): Mark
parameter unused.
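For reference, the fix is roughly the following (the warning text is
paraphrased; the ATTRIBUTE_UNUSED annotation is the point):

    static int lxcContainerDropCapabilities(bool keepReboot ATTRIBUTE_UNUSED)
    {
        /* stub used when libcap-ng support is not compiled in */
        VIR_WARN("libcap-ng support not compiled in, "
                 "unable to clear capabilities");
        return 0;
    }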
The daemon uses the following pattern when dispatching APIs with typed
parameters:
VIR_ALLOC_N(params, nparams);
virDomain*(dom, params, &nparams, flags);
virTypedParameterArrayClear(params, nparams);
In case nparams was originally set to 0, virDomain* API would fill it
with the number of typed parameters it can provide and we would use this
number (rather than zero) to clear params. Because VIR_ALLOC* returns a
non-NULL pointer even if the size is 0, the code would end up walking
through random memory. If we were lucky enough and the memory contained
7 (VIR_TYPED_PARAM_STRING) at the right place, we would try to free a
random pointer and crash.
Let's make sure params stays NULL when nparams is 0.
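The fix boils down to the following pattern (a simplified, self-contained
sketch; the real change is in the daemon's dispatch code and the API call
here is just an example):

    static int
    exampleGetParams(virDomainPtr dom, int requested, unsigned int flags)
    {
        virTypedParameterPtr params = NULL;
        int nparams = requested;
        int rv = -1;

        /* keep params NULL when the caller asked for 0 parameters, so
         * a later clear with an updated nparams is a harmless no-op */
        if (nparams > 0 &&
            VIR_ALLOC_N(params, nparams) < 0)
            goto cleanup;

        if (virDomainGetSchedulerParametersFlags(dom, params,
                                                 &nparams, flags) < 0)
            goto cleanup;

        rv = 0;

    cleanup:
        virTypedParameterArrayClear(params, nparams);
        VIR_FREE(params);
        return rv;
    }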
Commit 6ed5a1b9bd adds close callback
functions to the public API but doesn't add the Python implementation.
This patch marks the functions to be written manually (to fix the
build), but doesn't implement them yet.
If an LXC container is using a virtual network and that network
is not active, currently the user gets a rather unhelpful
error message about tap device setup failure. Add an explicit
check for whether the network is active, in exactly the same
way as the QEMU driver
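A minimal sketch of the added check, with a hypothetical helper name:

    static int
    exampleCheckNetworkActive(virConnectPtr conn, const char *netname)
    {
        virNetworkPtr network;
        int active;

        if (!(network = virNetworkLookupByName(conn, netname)))
            return -1;

        active = virNetworkIsActive(network);
        virNetworkFree(network);

        if (active != 1) {
            virReportError(VIR_ERR_INTERNAL_ERROR,
                           _("Network '%s' is not active."), netname);
            return -1;
        }

        return 0;
    }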
The cfg.mk file rule to check for tab characters was not
applied to perl files. Much of our Perl code is full of
tabs as a result. Kill them, kill them all!
The reboot() syscall is allowed by new kernels for LXC containers.
The LXC controller can detect whether a reboot was requested
(instead of a normal shutdown) by looking at the "init" process
exit status. If a reboot was triggered, the exit status will
record SIGHUP as the kill reason.
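In code, the test amounts to something like this self-contained sketch:

    #include <stdbool.h>
    #include <signal.h>
    #include <sys/wait.h>

    /* Was the container init killed by SIGHUP, i.e. did the guest
     * request a reboot rather than shutting down normally? */
    static bool
    wasRebootRequested(int wstatus)
    {
        return WIFSIGNALED(wstatus) && WTERMSIG(wstatus) == SIGHUP;
    }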
The LXC controller has cleared all its capabilities, and the
veth network devices will no longer exist at this time. Thus
it cannot restart the container init process itself. Instead
it emits an event which is picked up by the LXC driver in
libvirtd. This will then re-create the container, using the
same configuration as it was previously running with (i.e. it
will not activate 'newDef').
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Check whether the reboot() system call is virtualized, and if
it is, then allow the container to keep CAP_SYS_REBOOT.
Based on an original patch by Serge Hallyn
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
This defines a new RPC protocol to be used between the LXC
controller and the libvirtd LXC driver. There is only a
single RPC message defined thus far, an asynchronous "EXIT"
event that is emitted just before the LXC controller process
exits. This provides the LXC driver with details about how
the container shutdown - normally, or abnormally (crashed),
thus allowing the driver to emit better libvirt events.
Emitting the event in the LXC controller requires a few
little tricks with the RPC service. Simply calling
virNetServerClientSendMessage does not work, since this
merely queues the message for asynchronous processing.
In addition the main event loop is no longer running at
the point the event is emitted, so no I/O is processed.
Thus after invoking virNetServerClientSendMessage it is
necessary to mark the client as being in "delayed close"
mode. Then the event loop is run again, until the client
completes its close - this happens only after the queued
message has been fully transmitted. The final complexity
is that it is not safe to run virNetServerQuit() from the
client close callback, since that is invoked from a
context where the server is locked. Thus a zero-second
timer is used to trigger shutdown of the event loop,
causing the controller to finally exit.
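Sketched out, that last trick looks roughly like this (callback names
are illustrative, not the real controller code):

    /* runs from the event loop, so the server lock is not held */
    static void
    exampleQuitTimer(int timer ATTRIBUTE_UNUSED, void *opaque)
    {
        virNetServerQuit(opaque);
    }

    /* invoked when the client finally closes; it is not safe to call
     * virNetServerQuit() directly here because the server is locked,
     * so schedule an immediate (zero-second) timer instead */
    static void
    exampleClientClosed(void *opaque)
    {
        virNetServerPtr srv = opaque;

        if (virEventAddTimeout(0, exampleQuitTimer, srv, NULL) < 0)
            VIR_WARN("Unable to schedule shutdown timer");
    }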
* src/Makefile.am: Add rules for generating RPC protocol
files and dispatch methods
* src/lxc/lxc_controller.c: Emit an RPC event immediately
before exiting
* src/lxc/lxc_domain.h: Record the shutdown reason
given by the controller
* src/lxc/lxc_monitor.c, src/lxc/lxc_monitor.h: Register
RPC program and event handler. Add callback to let
driver receive EXIT event.
* src/lxc/lxc_process.c: Use monitor exit event to decide
what kind of domain event to emit
* src/lxc/lxc_protocol.x: Define wire protocol for LXC
controller monitor.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Update the gendispatch.pl script to get a little closer to
being able to generate code for the LXC monitor, by passing
in the struct prefix separately from the procedure prefix.
Also allow method names using virCapitalLetters instead
of vir_underscore_separator
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Move the code that handles the LXC monitor out of the
lxc_process.c file and into lxc_monitor.{c,h}
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Update the LXC driver to use the virNetClient APIs for
connecting to the libvirt_lxc monitor, instead of the
low-level socket APIs. This is a step towards running
a full RPC protocol with libvirt_lxc
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Rename the lxc_driver_t struct typedef to virLXCDriver to more
closely follow normal libvirt naming conventions
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
For consistency all the APIs in the lxc_domain.c file should
have a virLXCDomain prefix in their name
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
For consistency all the APIs in the lxc_process.c file should
have a virLXCProcess prefix in their name
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
In the socket event handler for the RPC client we must deal
with read/write events before checking for EOF; otherwise
we might close the socket before we've read & acted upon the
last RPC messages
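Illustrated as a sketch of the desired ordering (the real handler lives
in virnetclient.c; the bodies are elided):

    static void
    exampleIncomingEvent(int watch ATTRIBUTE_UNUSED,
                         int fd ATTRIBUTE_UNUSED,
                         int events,
                         void *opaque ATTRIBUTE_UNUSED)
    {
        /* 1. first service any pending read/write condition, so the
         *    last RPC messages sitting in the socket get processed */
        if (events & (VIR_EVENT_HANDLE_READABLE |
                      VIR_EVENT_HANDLE_WRITABLE)) {
            /* ... read/write queued messages here ... */
        }

        /* 2. only then react to EOF/error by closing the socket */
        if (events & (VIR_EVENT_HANDLE_HANGUP |
                      VIR_EVENT_HANDLE_ERROR)) {
            /* ... mark the socket for close here ... */
        }
    }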
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Update the remote driver to use the virNetClient close callback
to trigger the virConnectPtr close callbacks
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Allow detection of socket close in virNetClient via a callback
function, triggered on any condition that causes the socket to
be closed.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Currently if the keepalive timer triggers, the 'markClose'
flag is set on the virNetClient. A controlled shutdown will
then be performed. If an I/O error occurs during read or
write of the connection an error is raised back to the
caller, but the connection isn't marked for close. This
patch ensures that all I/O error scenarios always result
in the connection being marked for close.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Define new virConnect{Register,Unregister}CloseCallback() public APIs
which allow registering/unregistering a callback to be invoked when
the connection to a hypervisor is closed. The callback is provided
with the reason for the close, which may be 'error', 'eof', 'client'
or 'keepalive'.
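A usage sketch from an application's point of view (error handling kept
minimal):

    #include <stdio.h>
    #include <libvirt/libvirt.h>

    static void
    connClosed(virConnectPtr conn, int reason, void *opaque)
    {
        (void)conn; (void)opaque;
        fprintf(stderr, "connection closed, reason=%d\n", reason);
    }

    int main(void)
    {
        virConnectPtr conn = virConnectOpen(NULL);

        if (!conn)
            return 1;

        /* get notified when the connection goes away */
        if (virConnectRegisterCloseCallback(conn, connClosed, NULL, NULL) < 0)
            fprintf(stderr, "failed to register close callback\n");

        /* ... use the connection ... */

        virConnectUnregisterCloseCallback(conn, connClosed);
        virConnectClose(conn);
        return 0;
    }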
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>