Continue consolidation of process functions by moving some
helpers out of command.{c,h} into virprocess.{c,h}
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
(cherry picked from commit 9467ab6074)
Based exclusively on work by Eric Blake in a patch posted with the same
subject. However, I made some modifications related to comments and to my
plans to add another backend.
Added WITH_INTERFACE as the only automake variable deciding whether to
build the driver, and used WITH_NETCF to indicate that we want to use
the netcf library as the backend.
* configure.ac: Added with_interface
* src/interface/netcf_driver.c: Renamed..
* src/interface/interface_backend_netcf.c: ..to this to match storage.
* src/interface/netcf_driver.h: Renamed..
* src/interface/interface_driver.h: ..to this.
* daemon/Makefile.am: Respect WITH_INTERFACE and WITH_NETCF.
* libvirt.spec.in: Add RPM support for --with-interface
This has several benefits:
1. Future snapshot-related code has a definite place to go (and I
_will_ be adding some)
2. Snapshot errors now use the VIR_FROM_DOMAIN_SNAPSHOT error
classification, which has been underutilized (previously only in
libvirt.c)
* src/conf/domain_conf.h, domain_conf.c: Split...
* src/conf/snapshot_conf.h, snapshot_conf.c: ...into new files.
* src/Makefile.am (DOMAIN_CONF_SOURCES): Build new files.
* po/POTFILES.in: Mark new file for translation.
* src/vbox/vbox_tmpl.c: Update caller.
* src/esx/esx_driver.c: Likewise.
* src/qemu/qemu_command.c: Likewise.
* src/qemu/qemu_domain.h: Likewise.
This patch adds helper functions that enable us to use libssh2 in
conjunction with libvirt's virNetSockets for ssh transport instead of
spawning an "ssh" client process.
This implementation supports tunneled plaintext, keyboard-interactive,
private key, ssh agent based and null authentication. Libvirt's auth
callback is used for interaction with the user (keyboard-interactive
authentication, adding of host keys, private key passphrases). This
enables seamless integration into the application using libvirt. No
helpers such as "ssh-askpass" are needed.
Reading and writing of OpenSSH style "known_hosts" files is supported.
Communication is done using an SSH exec channel: the user may specify an
arbitrary command to be executed on the remote side, and reads and writes
to/from stdin/stdout are sent through the ssh channel. Usage of stderr is
not (yet) supported.
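
For orientation, a minimal sketch of this exec-channel flow using the plain
libssh2 API (password authentication only) is shown below; libvirt's helpers
wrap the same steps around virNetSocket and the auth callback, and the
hostname, credentials and remote command here are placeholders.

#include <libssh2.h>
#include <netdb.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    /* Plain TCP connection to the remote ssh port (placeholder host). */
    struct addrinfo hints = { .ai_socktype = SOCK_STREAM }, *res;
    if (getaddrinfo("remote.example.com", "22", &hints, &res) != 0)
        return 1;
    int sock = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (sock < 0 || connect(sock, res->ai_addr, res->ai_addrlen) < 0)
        return 1;

    libssh2_init(0);
    LIBSSH2_SESSION *session = libssh2_session_init();
    if (!session || libssh2_session_handshake(session, sock) != 0)
        return 1;
    /* libvirt's helpers would drive the auth callback here and also try
     * agent, private key and keyboard-interactive methods. */
    if (libssh2_userauth_password(session, "user", "secret") != 0)
        return 1;

    /* Run an arbitrary remote command and read its stdout. */
    LIBSSH2_CHANNEL *channel = libssh2_channel_open_session(session);
    if (!channel || libssh2_channel_exec(channel, "cat /etc/hostname") != 0)
        return 1;

    char buf[1024];
    ssize_t n;
    while ((n = libssh2_channel_read(channel, buf, sizeof(buf))) > 0)
        fwrite(buf, 1, n, stdout);

    libssh2_channel_free(channel);
    libssh2_session_disconnect(session, "done");
    libssh2_session_free(session);
    libssh2_exit();
    close(sock);
    freeaddrinfo(res);
    return 0;
}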
Move the functions that parse/format and validate PCI addresses to
their own file so they can be conveniently used in other places
besides device_conf.c.
Refactoring existing code without causing any functional changes to
prepare for new code.
This patch makes the code reusable.
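
Purely as an illustration of the text form these helpers deal with (this is
not the libvirt code itself), parsing and validating a dddd:bb:ss.f PCI
address could look like:

#include <stdio.h>

struct pci_addr {
    unsigned int domain, bus, slot, function;
};

/* Parse "dddd:bb:ss.f" (hex fields) and apply basic range validation. */
static int
pci_addr_parse(const char *str, struct pci_addr *addr)
{
    if (sscanf(str, "%x:%x:%x.%x",
               &addr->domain, &addr->bus, &addr->slot, &addr->function) != 4)
        return -1;
    if (addr->domain > 0xffff || addr->bus > 0xff ||
        addr->slot > 0x1f || addr->function > 0x7)
        return -1;
    return 0;
}

int main(void)
{
    struct pci_addr addr;
    if (pci_addr_parse("0000:00:03.0", &addr) == 0)
        printf("%04x:%02x:%02x.%x\n",
               addr.domain, addr.bus, addr.slot, addr.function);
    return 0;
}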
Signed-off-by: Shradha Shah <sshah@solarflare.com>
The following config elements now support a <vlan> subelement:
within a domain: <interface>, and the <actual> subelement of <interface>
within a network: the toplevel <network> element, as well as any <portgroup>
Each vlan element must have one or more <tag id='n'/> subelements. If
there is more than one tag, it is assumed that vlan trunking is being
requested. If trunking is required with only a single tag, the
attribute "trunk='yes'" should be added to the toplevel <vlan>
element.
Some examples:
<interface type='hostdev'>
  <vlan>
    <tag id='42'/>
  </vlan>
  <mac address='52:54:00:12:34:56'/>
  ...
</interface>

<network>
  <name>vlan-net</name>
  <vlan trunk='yes'>
    <tag id='30'/>
  </vlan>
  <virtualport type='openvswitch'/>
</network>

<interface type='network'>
  <source network='vlan-net'/>
  ...
</interface>

<network>
  <name>trunk-vlan</name>
  <vlan>
    <tag id='42'/>
    <tag id='43'/>
  </vlan>
  ...
</network>

<network>
  <name>multi</name>
  ...
  <portgroup name='production'>
    <vlan>
      <tag id='42'/>
    </vlan>
  </portgroup>
  <portgroup name='test'>
    <vlan>
      <tag id='666'/>
    </vlan>
  </portgroup>
</network>

<interface type='network'>
  <source network='multi' portgroup='test'/>
  ...
</interface>
IMPORTANT NOTE: As of this patch there is no backend support for the
vlan element for *any* network device type. When support is added in
later patches, it will only be for those select network types that
support setting up a vlan on the host side, without the guest's
involvement. (For example, it will be possible to configure a vlan for
a guest connected to an openvswitch bridge, but it won't be possible
to do that for one that is connected to a standard Linux host bridge.)
An ESX server has one or more PhysicalNics that represent the actual
hardware NICs. Those can be listed via the interface driver.
A libvirt virtual network is mapped to a HostVirtualSwitch. On the
physical side a HostVirtualSwitch can be connected to PhysicalNics.
On the virtual side a HostVirtualSwitch has HostPortGroups that are
mapped to a libvirt virtual network's portgroups. Typically there is
a HostPortGroup named 'VM Network' that is used to connect virtual
machines to a HostVirtualSwitch. A second HostPortGroup typically
named 'Management Network' is used to connect the hypervisor itself
to the HostVirtualSwitch. This one is not mapped to a libvirt virtual
network's portgroup. There can be more HostPortGroups than those
typical two on a HostVirtualSwitch.
      +---------------+-------------------+
...---|               |                   |   +-------------+
      | HostPortGroup |                   |---| PhysicalNic |
      |  VM Network   |                   |   |   vmnic0    |
...---|               |                   |   +-------------+
      +---------------+ HostVirtualSwitch |
      |                    vSwitch0       |
      +---------------+                   |
      | HostPortGroup |                   |
...---|  Management   |                   |
      |    Network    |                   |
      +---------------+-------------------+
The virtual counterparts of the PhysicalNic are the HostVirtualNic for
the hypervisor and the VirtualEthernetCards for the virtual machines;
they are grouped into HostPortGroups.
+---------------------+   +---------------+---...
| VirtualEthernetCard |---|               |
+---------------------+   | HostPortGroup |
+---------------------+   |  VM Network   |
| VirtualEthernetCard |---|               |
+---------------------+   +---------------+
                                          |
                          +---------------+
+---------------------+   | HostPortGroup |
|   HostVirtualNic    |---|  Management   |
+---------------------+   |    Network    |
                          +---------------+---...
The currently implemented network driver can list, define and undefine
HostVirtualSwitches including HostPortGroups for virtual machines.
Existing HostVirtualSwitches cannot be edited yet. This will be added
in a followup patch.
Parallels Cloud Server has one serious discrepancy with libvirt:
libvirt stores domain configuration files in one place, and storage
files in other places (with the API of storage pools and storage volumes).
Parallels Cloud Server stores all domain data in a single directory;
for example, you may have a domain named fedora-15, which will be
located in '/var/parallels/fedora-15.pvm', and its hard disk image will be
in '/var/parallels/fedora-15.pvm/harddisk1.hdd'.
I've decided to create a storage driver which produces pseudo-volumes
(XML files with the volume description) that are 'converted' to
real disk images after being attached to a VM.
So if someone creates a VM with one hard disk using virt-manager,
virt-manager first creates a new volume and then defines a
domain. We can look up a volume by path in the domain XML definition
and find out the location of the new domain and the size of its hard disk.
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
The Parallels driver is 'stateless', like the vmware or openvz drivers.
It collects information about domains during startup using the
command-line utility prlctl. VMs in Parallels are identified by UUIDs
or unique names, which can be used for the respective fields in the
virDomainDef structure. Currently only basic info, like the
description, the number of virtual CPUs and the amount of memory, is
implemented. Querying device information will be added in the next patches.
Parallels doesn't support non-persistent domains - you can't run
a domain having only a disk image; it must always be registered
in the system.
Functions for querying domain info have simply been copied from the
test driver with some changes - they extract the needed data from the
previously created list of virDomainObj objects.
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
Parallels Cloud Server is a cloud-ready virtualization
solution that allows users to simultaneously run multiple virtual
machines and containers on the same physical server.
More information can be found here: http://www.parallels.com/products/pcs/
A beta version of Parallels Cloud Server can also be downloaded there.
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
Move the code that handles the LXC monitor out of the
lxc_process.c file and into lxc_monitor.{c,h}
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Commands in the node device group are moved from virsh.c to virsh-nodedev.c.
* virsh.c: Remove commands in node device group.
* virsh-nodedev.c: New file, filled with commands in node device group
* po/POTFILES.in: Add virsh-nodedev.c
* cfg.mk: Skip the config.h inclusion check for virsh-nodedev.c
Commands in the host group are moved from virsh.c to virsh-host.c.
* virsh.c: Remove commands in host group.
* virsh-host.c: New file, filled with commands in host group
* po/POTFILES.in: Add virsh-host.c
* cfg.mk: Skip the config.h inclusion check for virsh-host.c
Commands to manage domain snapshots are moved from virsh.c to
virsh-snapshot.c.
* virsh.c: Remove domain snapshot commands.
* virsh-snapshot.c: New file, filled with domain snapshot commands.
* po/POTFILES.in: Add virsh-snapshot.c
* cfg.mk: Skip the strcase and config.h inclusion checks for
virsh-snapshot.c
Commands to manage secrets are moved from virsh.c to virsh-secret.c,
with a few helpers for secret command use.
* virsh.c: Remove secret commands and a few helpers.
(vshCommandOptSecret, and vshCommandOptSecretBy)
* virsh-secret.c: New file, filled with secret commands and their helpers.
* po/POTFILES.in: Add virsh-secret.c
* cfg.mk: Skip the config.h inclusion check for virsh-secret.c
Commands to manage network filters are moved from virsh.c to virsh-nwfilter.c,
with a few helpers for network filter command use.
* virsh.c: Remove network filter commands and a few helpers.
(vshCommandOptNWFilter, and vshCommandOptNWFilterBy)
* virsh-nwfilter.c: New file, filled with network filter commands and their helpers.
* po/POTFILES.in: Add virsh-nwfilter.c
* cfg.mk: Skip the config.h inclusion check for virsh-nwfilter.c
Commands to manage host interfaces are moved from virsh.c to
virsh-interface.c, with a few helpers for interface command use.
* virsh.c: Remove interface commands and a few helpers.
(vshCommandOptInterface, vshCommandOptInterfaceBy)
* virsh-interface.c: New file, filled with interface commands and
their helpers.
* cfg.mk: Skip the config.h inclusion check for virsh-interface.c
* po/POTFILES.in: Add virsh-interface.c
Commands to manage networks are moved from virsh.c to virsh-network.c,
with a few helpers for network command use.
* virsh.c: Remove network commands and a few helpers.
* virsh-network.c: New file, filled with network commands and their
helpers.
* po/POTFILES.in: Add virsh-network.c
* cfg.mk: Skip the config.h inclusion check for virsh-network.c
This splits the storage pool group of commands into virsh-pool.c.
The helpers not meant for common use are moved too. A standard
copyright header is added to the new file.
* tools/virsh.c:
Remove commands for the storage pool group and a few helpers.
(vshCommandOptPool, vshCommandOptPoolBy).
* tools/virsh-pool.c:
New file, filled with commands of storage pool group and its
helpers.
* po/POTFILES.in:
Add virsh-pool.c
* cfg.mk:
Skip the config.h inclusion check for virsh-pool.c
This splits the storage volume group of commands into virsh-volume.c.
The helpers not meant for common use are moved too. A standard
copyright header is added to the new file.
* tools/virsh.c:
Remove commands for the storage volume group and a few helpers.
(vshCommandOptVol, vshCommandOptVolBy).
* tools/virsh-volume.c:
New file, filled with commands of storage volume group and its
helpers.
* po/POTFILES.in:
Add virsh-volume.c
* cfg.mk:
Skip the config.h inclusion check for virsh-volume.c
This splits the commands to manage domains into virsh-domain.c. The
helpers not meant for common use are moved into it too. A standard
copyright header is added to the new file.
* tools/virsh.c:
- Remove commands for the domain group, and one helper
(vshDomainVcpuStateToString)
- vshStreamSink is moved before the commands' definitions because it is
also used by commands not in the domain group, such as volUpload.
* tools/virsh-domain.c:
- New file, commands for domain group and the one helper are
moved into it.
* po/POTFILES.in:
- Add virsh-domain.c
* cfg.mk:
- Skip the config.h inclusion check for virsh-domain.c
This splits the commands to monitor domain status into
virsh-domain-monitor.c. The helpers not meant for common use are moved
too. A standard copyright header is added.
* tools/virsh.c:
- Remove commands for the domain monitoring group and a few helpers
not for common use (vshDomainIOErrorToString, vshGetDomainDescription,
vshDomainControlStateToString, vshDomainStateToString).
- Remove the "intprops.h" include.
* tools/virsh-domain-monitor.c:
- New file, filled with commands of domain monitor group.
- Add "intprops.h".
* cfg.mk:
- Skip strcase checking for virsh-domain-monitor.c
- Skip the config.h inclusion check for virsh-domain-monitor.c
* po/POTFILES.in
- Add virsh-domain-monitor.c
Move all the code that manages stop/start of LXC processes
into a separate lxc_process.{c,h} file to make the lxc_driver.c
file smaller
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Move the cgroup setup code out of the lxc_controller.c file
and into lxc_cgroup.{c,h}. This reduces the size of the
lxc_controller.c file and paves the way to invoke cgroup
setup from lxc_driver.c instead of lxc_controller.c in the
future
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
This patch brings support for managing sheepdog pools and volumes to libvirt.
It uses the "collie" command-line utility that comes with sheepdog for that.
A sheepdog pool in libvirt maps to a sheepdog cluster.
It needs a host and port to connect to, which in most cases
is just going to be the default of localhost on port 7000.
A sheepdog volume in libvirt maps to a sheepdog vdi.
To create one specify the pool, a name and the capacity.
Volumes can also be resized later.
In the volume XML the vdi name has to be put into the <target><path>.
To use the volume as a disk source for virtual machines specify
the vdi name as "name" attribute of the <source>.
The host and port information from the pool are specified inside the host tag.
<disk type='network'>
  ...
  <source protocol="sheepdog" name="vdi_name">
    <host name="localhost" port="7000"/>
  </source>
</disk>
To work right, this patch parses the output of collie, so it relies on
the raw output option. There was recently a bug which caused size
information to be reported wrongly. This is already fixed upstream and
will be in the next release.
Signed-off-by: Sebastian Wiedenroth <wiedi@frubar.net>
The virnetdevtap.c and viruri.c files had two error report
messages which were not annotated with _(...)
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Currently, we either generate some cmd*Edit commands (cmdPoolEdit
and cmdNetworkEdit) via a sed script or copy the body of cmdEdit
(e.g. cmdInterfaceEdit, cmdNWFilterEdit, etc.). This makes it harder
to implement any new feature in our editing system.
Therefore switch to a new implementation - define macros to:
- dump XML (EDIT_GET_XML)
- take an action if XML wasn't changed,
usually just vshPrint() (EDIT_NOT_CHANGED)
- define new object (EDIT_DEFINE) - the edited XML is in @doc_edited
- free object defined by EDIT_DEFINE (EDIT_FREE)
and #include "virsh-edit.c"
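
Roughly, the resulting pattern looks like the sketch below; the hook names
follow this commit, but the bodies and the RUN_EDIT stand-in for the
included virsh-edit.c template are invented for illustration.

#include <stdio.h>
#include <string.h>

/* Stand-in for the shared template body that virsh pulls in with
 * #include "virsh-edit.c"; shown here as a macro for brevity. */
#define RUN_EDIT(doc, doc_edited)                   \
    do {                                            \
        EDIT_GET_XML(doc);                          \
        snprintf(doc_edited, sizeof(doc_edited),    \
                 "%s<!-- edited -->", doc);         \
        if (strcmp(doc, doc_edited) == 0)           \
            EDIT_NOT_CHANGED();                     \
        else                                        \
            EDIT_DEFINE(doc_edited);                \
        EDIT_FREE();                                \
    } while (0)

/* Hooks that a hypothetical cmdDemoEdit command would provide. */
#define EDIT_GET_XML(doc)   snprintf(doc, sizeof(doc), "<demo/>")
#define EDIT_NOT_CHANGED()  puts("XML not changed")
#define EDIT_DEFINE(xml)    printf("defining object from: %s\n", xml)
#define EDIT_FREE()         /* nothing allocated in this sketch */

int main(void)
{
    char doc[128];
    char doc_edited[160];
    RUN_EDIT(doc, doc_edited);
    return 0;
}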
This patch adds DHCP snooping support to libvirt. The learning method for
IP addresses is specified by setting the "CTRL_IP_LEARNING" variable to one of
"any" [default] (existing IP learning code), "none" (static only addresses)
or "dhcp" (DHCP snooping).
Active leases are saved in a lease file and reloaded on restart or HUP.
The following interface XML activates and uses the DHCP snooping:
<interface type='bridge'>
  <source bridge='virbr0'/>
  <filterref filter='clean-traffic'>
    <parameter name='CTRL_IP_LEARNING' value='dhcp'/>
  </filterref>
</interface>
All filters containing the variable 'IP' are automatically adjusted when
the VM receives an IP address via DHCP. However, multiple IP addresses per
interface are silently ignored in this patch, thus only supporting one IP
address per interface. Multiple IP address support is added in a later
patch in this series.
Signed-off-by: David L Stevens <dlstevens@us.ibm.com>
Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
To ensure consistent error reporting of invalid arguments,
provide a number of predefined helper methods & macros.
- An arg which must not be NULL:
virCheckNonNullArgReturn(argname, retvalue)
virCheckNonNullArgGoto(argname, label)
- An arg which must be NULL
virCheckNullArgGoto(argname, label)
- An arg which must be positive (ie 1 or greater)
virCheckPositiveArgGoto(argname, label)
- An arg which must not be 0
virCheckNonZeroArgGoto(argname, label)
- An arg which must be zero
virCheckZeroArgGoto(argname, label)
- An arg which must not be negative (ie 0 or greater)
virCheckNonNegativeArgGoto(argname, label)
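
The calling pattern in an API entry point then looks roughly like the
sketch below; the macro bodies are simplified stand-ins (the real ones
report through libvirt's error infrastructure) and demoSetMemory is a
made-up function.

#include <stdio.h>

/* Simplified stand-ins for the new helpers, for illustration only. */
#define virCheckNonNullArgGoto(argname, label)                   \
    do {                                                         \
        if ((argname) == NULL) {                                 \
            fprintf(stderr, "%s must not be NULL\n", #argname);  \
            goto label;                                          \
        }                                                        \
    } while (0)

#define virCheckPositiveArgGoto(argname, label)                  \
    do {                                                         \
        if ((argname) <= 0) {                                    \
            fprintf(stderr, "%s must be positive\n", #argname);  \
            goto label;                                          \
        }                                                        \
    } while (0)

static int
demoSetMemory(const char *domname, long long memory)
{
    virCheckNonNullArgGoto(domname, error);
    virCheckPositiveArgGoto(memory, error);

    printf("setting %s memory to %lld KiB\n", domname, memory);
    return 0;

 error:
    return -1;
}

int main(void)
{
    return demoSetMemory(NULL, 1024) == -1 ? 0 : 1;
}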
* src/libvirt.c, src/libvirt-qemu.c,
src/nodeinfo.c, src/datatypes.c: Update to use
virCheckXXXX macros
* po/POTFILES.in: Add libvirt-qemu.c and virterror_internal.h
* src/internal.h: Define macros for checking invalid args
* src/util/virterror_internal.h: Define macros for reporting
invalid args
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
This patch adds support for a new storage backend with RBD support.
RBD is the RADOS Block Device and is part of the Ceph distributed storage
system.
It comes in two flavours: Qemu-RBD and Kernel RBD. This storage backend only
supports Qemu-RBD, thus limiting the use of this storage driver to Qemu.
To function, this backend relies on librbd and librados being present on the
local system.
The backend also supports Cephx authentication for safe authentication with
the Ceph cluster.
For storing credentials it uses the built-in secret mechanism of libvirt.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
The HAL driver requires a DBus connection. The HAL device code further requires that
the DBus connection is integrated with the event loop and
provides such glue logic itself.
The forthcoming FirewallD integration also requires a
dbus connection with event loop integration. Thus we need
to pull the current event loop glue out of the HAL driver.
Thus we create the src/util/virdbus.{c,h} files. This contains
just one method, virDBusGetSystemBus(), which obtains a handle
to the single shared system bus instance, with event glue
automagically set up.
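
As a rough sketch of what the helper centralizes, obtaining the shared
system bus with the plain libdbus API looks like this (the event loop glue
that virDBusGetSystemBus() also sets up is not shown):

#include <dbus/dbus.h>
#include <stdio.h>

int main(void)
{
    DBusError err;
    dbus_error_init(&err);

    /* Obtain the shared connection to the system bus. */
    DBusConnection *sysbus = dbus_bus_get(DBUS_BUS_SYSTEM, &err);
    if (!sysbus) {
        fprintf(stderr, "cannot get system bus: %s\n", err.message);
        dbus_error_free(&err);
        return 1;
    }

    /* Do not terminate the process if the bus connection is lost. */
    dbus_connection_set_exit_on_disconnect(sysbus, 0);
    printf("connected to the system bus\n");
    return 0;
}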
* configure.ac docs/news.html.in libvirt.spec.in: update for the release
* po/*.po*: updated a number of language translations, including new
Indian languages, and regenerated
To follow latest naming conventions, rename src/util/authhelper.[ch]
to src/util/virauth.[ch].
* src/util/authhelper.[ch]: Rename to src/util/virauth.[ch]
* src/esx/esx_driver.c, src/hyperv/hyperv_driver.c,
src/phyp/phyp_driver.c, src/xenapi/xenapi_driver.c: Update
for renamed include files
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The '.ini' file format is a useful alternative to the existing
config file style, when you need to have config files which
are hashes of hashes. The 'virKeyFilePtr' object provides a
way to parse these file types.
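
To make the format concrete, the throwaway program below walks a small
'[group] key = value' document of the kind the new parser handles; it is
only an illustration, not the virKeyFile implementation.

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *ini =
        "[server]\n"
        "listen = 1\n"
        "[auth]\n"
        "mechanism = sasl\n";

    char group[64] = "";
    char line[128];
    const char *p = ini;

    /* Walk the document line by line: '[...]' opens a group, everything
     * else is treated as a 'key = value' entry of the current group. */
    while (sscanf(p, "%127[^\n]", line) == 1) {
        if (line[0] == '[') {
            sscanf(line, "[%63[^]]", group);
        } else {
            char key[64], value[64];
            if (sscanf(line, " %63[^= ] = %63s", key, value) == 2)
                printf("%s.%s = %s\n", group, key, value);
        }
        p += strlen(line);
        while (*p == '\n')
            p++;
        if (*p == '\0')
            break;
    }
    return 0;
}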
* src/Makefile.am, src/util/virkeyfile.c,
src/util/virkeyfile.h: Add .ini file parser
* tests/Makefile.am, tests/virkeyfiletest.c: Test
basic parsing capabilities
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
This patch adds a set of functions used in creating console streams for
domains using PTYs and ensures mutually exclusive access to the PTYs.
If mutually exclusive access is not used, two clients may open the same
console, which results in corruption on both clients as both of them
race to read data from the PTY.
Two approaches are used to ensure this:
1) Internal data structure holding open PTYs.
This is used internally and enables the user to forcibly
terminate another console connection, e.g. when somebody leaves
the console open on another host.
2) UUCP style lock files:
This uses UUCP lock files according to the FHS
( http://www.pathname.com/fhs/pub/fhs-2.3.html#VARLOCKLOCKFILES )
to check that other programs (like minicom) are not using the pty
device of the console.
This feature is disabled by default and may be enabled using the
configure parameter
--with-console-lock-files=/path/to/lock/file/directory
or --with-console-lock-files=auto (which tries to infer the
location from the OS used; currently only Linux).
On usual Linux systems, normal users may not write to the
/var/lock directory containing the locks. This poses problems
in session mode. If the current user has no access to the
lockfile directory, the check for the presence of the file is still
done, but no lock file is created. This does NOT result in an
error.
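
A sketch of the UUCP-style locking described above, following the HDB
convention of a ten-character ASCII PID stored in /var/lock/LCK..<device>;
the path and device name here are illustrative only.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Try to take the UUCP lock for a pty device, e.g. devname = "pts_5". */
static int
lock_console_pty(const char *devname)
{
    char path[128];
    char buf[16];

    snprintf(path, sizeof(path), "/var/lock/LCK..%s", devname);

    /* O_EXCL makes creation fail if another program already holds it. */
    int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0644);
    if (fd < 0)
        return -1;

    /* HDB format: PID as a ten byte ASCII decimal number plus newline. */
    snprintf(buf, sizeof(buf), "%10d\n", (int)getpid());
    if (write(fd, buf, strlen(buf)) < 0) {
        close(fd);
        unlink(path);
        return -1;
    }
    close(fd);
    return 0;
}

int main(void)
{
    return lock_console_pty("pts_5") == 0 ? 0 : 1;
}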
This patch allows libvirt to add interfaces to already
existing Open vSwitch bridges. The following syntax in
the domain XML file can be used:
<interface type='bridge'>
  <mac address='52:54:00:d0:3f:f2'/>
  <source bridge='ovsbr'/>
  <virtualport type='openvswitch'>
    <parameters interfaceid='921a80cd-e6de-5a2e-db9c-ab27f15a6e1d'/>
  </virtualport>
  <address type='pci' domain='0x0000' bus='0x00'
           slot='0x03' function='0x0'/>
</interface>
or, if libvirt should auto-generate the interfaceid, use the
following syntax:
<interface type='bridge'>
  <mac address='52:54:00:d0:3f:f2'/>
  <source bridge='ovsbr'/>
  <virtualport type='openvswitch'>
  </virtualport>
  <address type='pci' domain='0x0000' bus='0x00'
           slot='0x03' function='0x0'/>
</interface>
It is also possible to pass an optional profileid. To do that
use the following syntax:
<interface type='bridge'>
  <source bridge='ovsbr'/>
  <mac address='00:55:1a:65:a2:8d'/>
  <virtualport type='openvswitch'>
    <parameters interfaceid='921a80cd-e6de-5a2e-db9c-ab27f15a6e1d'
                profileid='test-profile'/>
  </virtualport>
</interface>
To create an Open vSwitch bridge, install Open vSwitch and
run the following command:

  ovs-vsctl add-br ovsbr
The auto-generated WWNs comply with the new WWN addressing schema:
<quote>
the first nibble is either hex 5 or 6 followed by a 3-byte vendor
identifier and 36 bits for a vendor-specified serial number.
</quote>
We choose hex 5 for the first nibble. For the 3-byte vendor ID,
we use the OUI according to the underlying hypervisor type (invoking
virConnectGetType to get the virt type), e.g. if virConnectGetType
returns "QEMU", we use Qumranet's OUI (00:1A:4A); if it returns
ESX|VMWARE, we use VMware's OUI (00:05:69). Currently only the
qemu|xen|libxl|xenapi|hyperv|esx|vmware drivers are supported.
The last 36 bits are auto-generated.
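
Numerically, that layout packs 4 + 24 + 36 bits into a 64-bit value; a
sketch using Qumranet's OUI and a random placeholder serial number:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    srandom(time(NULL));

    uint64_t naa = 0x5;                     /* first nibble: hex 5 */
    uint64_t oui = 0x001A4A;                /* 3-byte vendor ID (Qumranet) */
    uint64_t serial = (uint64_t)random() & 0xFFFFFFFFFULL;  /* 36 bits */

    uint64_t wwn = (naa << 60) | (oui << 36) | serial;
    printf("naa.%016" PRIx64 "\n", wwn);
    return 0;
}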
Rename the src/util/netlink files to src/util/virnetlink to
better fit the naming scheme. Also rename nlComm to virNetlinkCommand.
Signed-off-by: D. Herrendoerfer <d.herrendoerfer@herrendoerfer.name>