Commit Graph

232 Commits

John Eckersberg
4c85421c6c conf: Rename virconsole.* to virchrdev.*
This is just code motion, in preparation for renaming identifiers to be
less console-specific.
2013-01-04 17:26:30 -07:00
Daniel P. Berrange
f24404a324 Rename virterror.c virterror_internal.h to virerror.{c,h} 2012-12-21 11:19:50 +00:00
Daniel P. Berrange
556cf5f617 Rename xml.{c,h} to virxml.{c,h} 2012-12-21 11:19:50 +00:00
Daniel P. Berrange
44f6ae27fe Rename util.{c,h} to virutil.{c,h} 2012-12-21 11:19:49 +00:00
Daniel P. Berrange
88ba722c12 Rename sysinfo.{c,h} to virsysinfo.{c,h} 2012-12-21 11:19:48 +00:00
Daniel P. Berrange
05dc8398dd Rename storage_file.{c,h} to virstoragefile.{c,h} 2012-12-21 11:19:48 +00:00
Daniel P. Berrange
fde9df8dcc Rename stats_linux.{c,h} to virstatslinux.{c,h} 2012-12-21 11:19:48 +00:00
Daniel P. Berrange
226ad9815a Rename sexpr.{c,h} to virsexpr.{c,h} 2012-12-21 11:19:48 +00:00
Daniel P. Berrange
f56c773bf8 Merge processinfo.{c,h} into virprocess.{c,h} 2012-12-21 11:19:45 +00:00
Daniel P. Berrange
3ddddd98c3 Rename pci.{c,h} to virpci.{c,h} 2012-12-21 11:17:14 +00:00
Daniel P. Berrange
6a095d0851 Rename json.{c,h} to virjson.{c,h} 2012-12-21 11:17:13 +00:00
Daniel P. Berrange
47cdbac47d Rename iptables.{c,h} to viriptables.{c,h} 2012-12-21 11:17:13 +00:00
Daniel P. Berrange
ebc8db5189 Rename hostusb.{c,h} to virusb.{c,h} 2012-12-21 11:17:13 +00:00
Daniel P. Berrange
30f3a005ff Rename hooks.{c,h} to virhook.{c,h} 2012-12-21 11:17:13 +00:00
Daniel P. Berrange
4d6050a8eb Rename event_poll.{c,h} to vireventpoll.{c,h} 2012-12-21 11:17:13 +00:00
Daniel P. Berrange
4af71715be Rename dnsmasq.{c,h} to virdnsmasq.{c,h} 2012-12-21 11:17:13 +00:00
Daniel P. Berrange
0f8454101d Rename conf.{c,h} to virconf.{c,h} 2012-12-21 11:17:13 +00:00
Daniel P. Berrange
04d9510f50 Rename command.{c,h} to vircommand.{c,h} 2012-12-21 11:17:13 +00:00
Daniel P. Berrange
f9c7020c1f Rename cgroup.{h,c} to vircgroup.{h,c}
To bring it in line with the new naming practice, rename the
src/util/cgroup.{h,c} files to vircgroup.{h,c}

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
2012-12-21 11:17:12 +00:00
Daniel P. Berrange
95fef5f407 Add support for USB host device passthrough with LXC
This adds support for host device passthrough with the
LXC driver. Since there is only a single kernel image,
it doesn't make sense to pass through PCI devices, but
USB devices are fine. For the latter we merely need to
make the /dev/bus/usb/NNN/MMM character device exist
in the container's /dev

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
2012-12-17 17:50:51 +00:00
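
As an illustration of the device-node handling described above, here is a
minimal sketch (not libvirt's actual implementation) of creating such a
character device inside a container root; the container path is hypothetical
and the major/minor derivation follows the usual Linux usb_device convention
(major 189).

    #include <errno.h>
    #include <limits.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/sysmacros.h>

    /* Create <root>/dev/bus/usb/NNN/MMM for the given bus/device numbers.
     * Assumes <root>/dev/bus/usb already exists in the container. */
    static int make_usb_node(const char *root, unsigned int bus, unsigned int dev)
    {
        char path[PATH_MAX];

        snprintf(path, sizeof(path), "%s/dev/bus/usb/%03u", root, bus);
        if (mkdir(path, 0755) < 0 && errno != EEXIST)
            return -1;

        snprintf(path, sizeof(path), "%s/dev/bus/usb/%03u/%03u", root, bus, dev);
        /* major 189, minor (bus-1)*128 + (dev-1) is the usb_device scheme */
        return mknod(path, S_IFCHR | 0644,
                     makedev(189, (bus - 1) * 128 + (dev - 1)));
    }

    int main(void)
    {
        /* hypothetical container rootfs; expose bus 001, device 004 */
        return make_usb_node("/var/lib/lxc/demo/rootfs", 1, 4) < 0 ? 1 : 0;
    }
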
Daniel P. Berrange
eb8268a4f6 Add a virtlockd client as a lock driver impl
This adds a 'lockd' lock driver, which is just a client that
talks to the virtlockd daemon to perform all locking. This will
be the default lock driver for any hypervisor which needs one.

* src/Makefile.am: Add lockd.so plugin
* src/locking/lock_driver_lockd.c: Lockd driver impl

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
2012-12-13 15:26:57 +00:00
Daniel P. Berrange
0e49b83912 Implement dispatch functions for lock protocol in virtlockd
Introduce a lock_daemon_dispatch.c file which implements the
server-side dispatcher for the RPC APIs previously defined in the
lock protocol.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
2012-12-13 15:26:57 +00:00
Daniel P. Berrange
c57e3d8994 Introduce basic infrastructure for virtlockd daemon
The virtlockd daemon will maintain locks on behalf of libvirtd.
There are two reasons for it to be separate

 - Avoid risk of other libvirtd threads accidentally
   releasing fcntl() locks by opening + closing a file
   that is locked
 - Ensure locks can be preserved across libvirtd restarts.
   virtlockd will need to be able to re-exec itself while
   maintaining locks. This is simpler to achieve if its
   sole job is maintaining locks

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
2012-12-13 15:26:57 +00:00
Cole Robinson
d13155c20c tools: Only install guests init script if --with-init-script=redhat
Most of this deals with moving the libvirt-guests.sh script, which
does all the work, to /usr/libexec so it can be shared by both
systemd and traditional init. Previously systemd depended on
the script being in /etc/init.d.

Required to fix https://bugzilla.redhat.com/show_bug.cgi?id=789747
2012-12-11 19:54:37 -05:00
Michal Privoznik
7cdbacb472 bandwidth: Create (un)plug functions
These set the bridge part of QoS when bringing a domain's interface up.
Long story short, if a 'floor' is set, a new QoS class is created.
The classID MUST be unique within the bridge and should be kept for
the unplug phase.
2012-12-11 18:36:55 +01:00
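
To illustrate the bridge side of this, a rough sketch of the kind of tc
invocation involved, assuming a root HTB qdisc with handle 1: already on the
bridge; the bridge name, class ID and rate are illustrative (libvirt drives
tc through its own command helpers and remembers the class ID so it can tear
the class down again at unplug time).

    #include <stdio.h>
    #include <stdlib.h>

    /* Add an HTB class under the bridge's root qdisc (assumed handle 1:).
     * tc's "kbps" unit means kilobytes per second. */
    static int add_floor_class(const char *bridge, unsigned int classID,
                               unsigned long long rate)
    {
        char cmd[256];

        snprintf(cmd, sizeof(cmd),
                 "tc class add dev %s parent 1: classid 1:%x htb rate %llukbps",
                 bridge, classID, rate);
        return system(cmd) == 0 ? 0 : -1;
    }

    int main(void)
    {
        /* hypothetical: reserve ~1000 kbps for the class backing one guest NIC */
        return add_floor_class("virbr0", 3, 1000) == 0 ? 0 : 1;
    }
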
Dmitry Guryanov
6034ce3130 parallels: add network driver
Parallels Cloud Server uses a virtual networks model for network
configuration, with its own tools for virtual network management.
So add a network driver, which will be responsible for listing
virtual networks and performing different operations on them
(in subsequent patches).

This patch only allows listing virtual network names, without
any parameters like DHCP server settings.

Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
2012-12-11 22:46:16 +08:00
Dmitry Guryanov
68c6d3dc31 parallels: move parallelsParseError to parallels_utils.h
This macro will be used in another file in the next
patch, so move it to a common header file.

Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
2012-12-11 22:46:16 +08:00
Ata E Husain Bohra
60f0f55ee4 Add iSCSI backend storage driver for ESX
The patch adds the backend driver to support iSCSI-format storage pools
and volumes for an ESX host. The mapping of ESX iSCSI specifics to Libvirt
is as follows:

1. ESX static iSCSI target <------> Libvirt Storage Pools
2. ESX iSCSI LUNs          <------> Libvirt Storage Volumes.

The above understanding is based on http://libvirt.org/storage.html.

The operations supported on iSCSI pools include:

1. List storage pools & volumes.
2. Get XML descriptor operation on pools & volumes.
3. Lookup operation on pools & volumes by name, UUID and path (if applicable).

iSCSI pools do not support operations such as creating / removing
pools and volumes.
2012-12-03 21:12:23 +01:00
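
From the client side these mappings surface through libvirt's general
storage API; a minimal client-side sketch of listing the pools on an ESX
host follows (the URI is a placeholder and authentication handling is
omitted; link with $(pkg-config --cflags --libs libvirt)).

    #include <stdio.h>
    #include <stdlib.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        /* placeholder URI */
        virConnectPtr conn = virConnectOpen("esx://esx.example.com/?no_verify=1");
        virStoragePoolPtr *pools = NULL;
        int npools, i;

        if (!conn)
            return EXIT_FAILURE;

        npools = virConnectListAllStoragePools(conn, &pools, 0);
        for (i = 0; i < npools; i++) {
            /* a static iSCSI target would show up as one of these pools;
               its LUNs would be listed with virStoragePoolListAllVolumes() */
            printf("pool: %s\n", virStoragePoolGetName(pools[i]));
            virStoragePoolFree(pools[i]);
        }
        free(pools);
        virConnectClose(conn);
        return npools < 0 ? EXIT_FAILURE : EXIT_SUCCESS;
    }
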
Daniel P. Berrange
c4ef575c97 Add APIs for talking to init via /dev/initctl
To be able to do a controlled shutdown/reboot of containers, an
API to talk to init via /dev/initctl is required. Fortunately
this is quite straightforward to implement and is supported
by both sysvinit and systemd. Upstart support for /dev/initctl
is unclear.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
2012-11-30 19:17:30 +00:00
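
For reference, a minimal sketch of the request such an API has to write,
mirroring the sysvinit initreq layout (0x03091969 magic, 384-byte record);
the padded struct here approximates that layout rather than copying it, and
the device path in main() is just an example.

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    #define INIT_MAGIC      0x03091969
    #define INIT_CMD_RUNLVL 1

    /* Same overall size as sysvinit's struct init_request (384 bytes),
     * with only the fields needed for a runlevel change spelled out. */
    struct init_request {
        int magic;
        int cmd;
        int runlevel;       /* ASCII runlevel, e.g. '0' to halt, '6' to reboot */
        int sleeptime;
        char pad[384 - 4 * sizeof(int)];
    };

    static int initctl_runlevel(const char *path, int level)
    {
        struct init_request req;
        int fd, ret = -1;

        memset(&req, 0, sizeof(req));
        req.magic = INIT_MAGIC;
        req.cmd = INIT_CMD_RUNLVL;
        req.runlevel = level;

        if ((fd = open(path, O_WRONLY)) < 0)
            return -1;
        if (write(fd, &req, sizeof(req)) == (ssize_t)sizeof(req))
            ret = 0;
        close(fd);
        return ret;
    }

    int main(void)
    {
        /* for a container this would be <container root>/dev/initctl */
        return initctl_runlevel("/dev/initctl", '6') == 0 ? 0 : 1;
    }
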
Gao feng
2a596dac5e add fuse support for libvirt lxc
This patch adds FUSE support for libvirt LXC.
We can use a FUSE filesystem to generate sysinfo dynamically,
so we can isolate /proc/meminfo, cpuinfo and so on through the
FUSE filesystem.

We mount a FUSE filesystem for every container.
The mount name is libvirt, and the mount point is
localstatedir/run/libvirt/lxc/containername.

Signed-off-by: Gao feng <gaofeng@cn.fujitsu.com>
2012-11-28 10:28:49 +00:00
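
A minimal FUSE 2.x sketch of the idea, independent of libvirt's code: serve
a synthetic meminfo file whose hard-coded contents stand in for values that
would really be derived from the container's limits. Build with
gcc meminfo-fuse.c $(pkg-config --cflags --libs fuse).

    #define FUSE_USE_VERSION 26
    #include <fuse.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <string.h>
    #include <sys/stat.h>

    /* Static stand-in; a real implementation would compute these numbers
     * from the container's memory limits. */
    static const char *meminfo =
        "MemTotal:       524288 kB\n"
        "MemFree:        262144 kB\n";

    static int mi_getattr(const char *path, struct stat *st)
    {
        memset(st, 0, sizeof(*st));
        if (strcmp(path, "/") == 0) {
            st->st_mode = S_IFDIR | 0755;
            st->st_nlink = 2;
        } else if (strcmp(path, "/meminfo") == 0) {
            st->st_mode = S_IFREG | 0444;
            st->st_nlink = 1;
            st->st_size = strlen(meminfo);
        } else {
            return -ENOENT;
        }
        return 0;
    }

    static int mi_readdir(const char *path, void *buf, fuse_fill_dir_t fill,
                          off_t off, struct fuse_file_info *fi)
    {
        (void)off; (void)fi;
        if (strcmp(path, "/") != 0)
            return -ENOENT;
        fill(buf, ".", NULL, 0);
        fill(buf, "..", NULL, 0);
        fill(buf, "meminfo", NULL, 0);
        return 0;
    }

    static int mi_open(const char *path, struct fuse_file_info *fi)
    {
        if (strcmp(path, "/meminfo") != 0)
            return -ENOENT;
        return (fi->flags & O_ACCMODE) == O_RDONLY ? 0 : -EACCES;
    }

    static int mi_read(const char *path, char *buf, size_t size, off_t off,
                       struct fuse_file_info *fi)
    {
        size_t len = strlen(meminfo);
        (void)path; (void)fi;
        if ((size_t)off >= len)
            return 0;
        if (off + size > len)
            size = len - off;
        memcpy(buf, meminfo + off, size);
        return size;
    }

    static struct fuse_operations mi_ops = {
        .getattr = mi_getattr,
        .readdir = mi_readdir,
        .open    = mi_open,
        .read    = mi_read,
    };

    /* e.g. ./meminfo-fuse /var/run/libvirt/lxc/<name>, then bind-mount
     * <mountpoint>/meminfo over the container's /proc/meminfo */
    int main(int argc, char **argv)
    {
        return fuse_main(argc, argv, &mi_ops, NULL);
    }
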
Ata E Husain Bohra
067e83ebee Refactor ESX storage driver to implement facade pattern
The patch refactors the current ESX storage driver for the following reasons:

1. Given that most of the public APIs exposed by the storage driver in Libvirt
remain the same, the ESX storage driver should not implement logic specific
to only one supported format (the current implementation only supports VMFS).
2. Decoupling the interface from a specific storage implementation gives us an
extensible design to hook in implementations for other supported storage
formats.

This patch refactors the current driver to implement it as a facade pattern,
i.e. the driver exposes all the public libvirt APIs but uses backend drivers
to get the required task done. The backend drivers provide implementations
specific to the type of storage device.

File changes:
------------------
esx_storage_driver.c ----> esx_storage_driver.c (base storage driver)
                     |
                     |---> esx_storage_backend_vmfs.c (VMFS backend)
2012-11-26 22:46:13 +01:00
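
The shape of the facade, reduced to a toy sketch with hypothetical names (the
real driver dispatches libvirt's storage callbacks to
esx_storage_backend_vmfs.c and, later, an iSCSI backend):

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical backend interface: each storage format supplies its
     * own table of operations. */
    typedef struct {
        const char *name;
        int (*refreshPool)(const char *poolName);
    } StorageBackend;

    static int vmfsRefresh(const char *pool)
    {
        printf("refreshing VMFS datastore %s\n", pool);
        return 0;
    }

    static const StorageBackend vmfsBackend  = { "vmfs",  vmfsRefresh };
    static const StorageBackend iscsiBackend = { "iscsi", NULL };  /* hooked up later */

    static const StorageBackend *backends[] = { &vmfsBackend, &iscsiBackend };

    /* The facade: one public entry point that picks the backend owning
     * the pool and delegates the real work to it. */
    static int storageRefreshPool(const char *backendName, const char *poolName)
    {
        size_t i;

        for (i = 0; i < sizeof(backends) / sizeof(backends[0]); i++) {
            if (strcmp(backends[i]->name, backendName) == 0 &&
                backends[i]->refreshPool)
                return backends[i]->refreshPool(poolName);
        }
        return -1;  /* no backend (yet) for this storage format */
    }

    int main(void)
    {
        return storageRefreshPool("vmfs", "datastore1") == 0 ? 0 : 1;
    }
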
Li Zhang
9943a7341c Implement CPU model driver for PowerPC
Currently, the CPU model driver is not implemented for PowerPC.
The host's CPU information sometimes needs to be exposed in the
guest's XML file.

This patch is to implement the callback functions of CPU model driver.

Signed-off-by: Li Zhang <zhlcindy@linux.vnet.ibm.com>
Acked-by: Michal Privoznik <mprivozn@redhat.com>
2012-10-17 10:03:34 +02:00
Daniel P. Berrange
0cc7925520 Add JSON serialization of virNetServerServicePtr objects for process re-exec()
Add two new APIs virNetServerServiceNewPostExecRestart and
virNetServerServicePreExecRestart which allow a virNetServerServicePtr
object to be created from a JSON object and saved to a
JSON object, for the purpose of re-exec'ing a process.

This includes serialization of the listening sockets associated
with the service

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
2012-10-16 15:45:55 +01:00
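
The underlying trick, sketched here with plain POSIX calls and a hand-written
JSON fragment (the real code uses libvirt's virJSON and virNetServer types):
record the listening socket's fd number in the saved state and keep that fd
open across exec, so the re-exec'ed process can carry on accepting
connections. The state file path is just an example.

    #include <fcntl.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Write a tiny JSON state blob naming the listener's fd, and make sure
     * the fd is not closed on exec. */
    static int save_service_state(const char *statePath, int listenFd)
    {
        FILE *f;
        int flags = fcntl(listenFd, F_GETFD);

        if (flags < 0 || fcntl(listenFd, F_SETFD, flags & ~FD_CLOEXEC) < 0)
            return -1;
        if (!(f = fopen(statePath, "w")))
            return -1;
        fprintf(f, "{ \"services\": [ { \"fd\": %d } ] }\n", listenFd);
        fclose(f);
        return 0;
    }

    int main(int argc, char **argv)
    {
        struct sockaddr_in addr = { .sin_family = AF_INET };  /* any port */
        int fd;

        if (argc > 1) {
            /* the re-exec'ed instance: parse the state file, recover the fd
               and resume accept()ing on it (parsing omitted here) */
            printf("restarted with state %s\n", argv[1]);
            return 0;
        }

        fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0 ||
            bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
            listen(fd, 5) < 0)
            return 1;

        if (save_service_state("/tmp/service-state.json", fd) < 0)
            return 1;
        execl(argv[0], argv[0], "/tmp/service-state.json", (char *)NULL);
        return 1;   /* execl only returns on failure */
    }
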
Daniel P. Berrange
eca72d4759 Introduce an internal API for handling file based lockspaces
The previously introduced virFile{Lock,Unlock} APIs provide a
way to acquire/release fcntl() locks on individual files. For
unknown reasons though, the POSIX spec says that fcntl() locks
are released when *any* file handle referring to the same path
is closed. In the following sequence

  threadA: fd1 = open("foo")
  threadB: fd2 = open("foo")
  threadA: virFileLock(fd1)
  threadB: virFileLock(fd2)
  threadB: close(fd2)

you'd expect threadA to come out holding a lock on 'foo', and
indeed it does hold a lock for a very short time. Unfortunately
when threadB does close(fd2) this releases the lock associated
with fd1. For the current libvirt use case for virFileLock -
pidfiles - this doesn't matter, since the lock is acquired
at startup while single-threaded and never released until
exit.

To provide a more generally useful API though, it is necessary
to introduce a slightly higher level abstraction, which is to
be referred to as a "lockspace".  This is to be provided by
a virLockSpacePtr object in src/util/virlockspace.{c,h}. The
core idea is that the lockspace keeps track of what files are
already open+locked. This means that when a 2nd thread comes
along and tries to acquire a lock, it doesn't end up opening
and closing a new FD. The lockspace just checks the current
list of held locks and immediately returns VIR_ERR_RESOURCE_BUSY.

NB, the API as it stands is designed on the basis that the
files being locked are not being otherwise opened and used
by the application code. One approach to using this API is to
acquire locks based on a hash of the filepath.

eg to lock /var/lib/libvirt/images/foo.img the application
might do

   virLockSpacePtr lockspace = virLockSpaceNew("/var/lib/libvirt/imagelocks");
   lockname = md5sum("/var/lib/libvirt/images/foo.img");
   virLockSpaceAcquireLock(lockspace, lockname);

NB, in this example, the caller should ensure that the path
is canonicalized before calculating the checksum.

It is also possible to do locks directly on resources by
using a NULL lockspace directory and then using the file
path as the lock name eg

   virLockSpacePtr lockspace = virLockSpaceNew(NULL);
   virLockSpaceAcquireLock(lockspace, "/var/lib/libvirt/images/foo.img");

This is only safe to do though if no other part of the process
will be opening the files. This will be the case when this
code is used inside the soon-to-be-reposted virtlockd daemon.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
2012-10-16 15:45:55 +01:00
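
The surprising close() behaviour described above can be reproduced with a few
lines of POSIX C; this standalone demo (independent of libvirt, scratch file
path is arbitrary) locks a file through one descriptor, then shows the lock
vanish when a second, unrelated descriptor to the same file is closed.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Probe the lock from a child: within a single process fcntl() locks
     * never conflict, so the check has to come from outside. */
    static void report(const char *path)
    {
        if (fork() == 0) {
            struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };
            int fd = open(path, O_RDWR);

            fcntl(fd, F_GETLK, &fl);
            if (fl.l_type == F_UNLCK)
                printf("no lock held\n");
            else
                printf("lock held by pid %d\n", (int)fl.l_pid);
            _exit(0);
        }
        wait(NULL);
    }

    int main(void)
    {
        const char *path = "/tmp/lockdemo";
        struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };
        int fd1 = open(path, O_RDWR | O_CREAT, 0600);
        int fd2 = open(path, O_RDWR);

        if (fd1 < 0 || fd2 < 0)
            return 1;

        fcntl(fd1, F_SETLK, &fl);   /* "threadA" locks the whole file via fd1 */
        report(path);               /* lock held by <our pid> */

        close(fd2);                 /* "threadB" merely closes its own fd ... */
        report(path);               /* ... no lock held: fd1's lock is gone   */

        close(fd1);
        unlink(path);
        return 0;
    }
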
Jiri Denemark
893647671b locking: Implement lock failure action in sanlock driver
While the changes to the sanlock driver should be stable, the actual
implementation of sanlock_helper is supposed to be replaced in the
future. However, before we can implement a better sanlock_helper, we
need an administrative interface to libvirtd so that the helper can just
pass a "leases lost" event to the particular libvirt driver and
everything else will be taken care of internally. This approach will
also allow libvirt to pass such event to applications and use
appropriate reasons when changing domain states.

The temporary implementation handles all actions directly by calling
appropriate libvirt APIs (which among other things means that it needs
to know the credentials required to connect to libvirtd).
2012-10-11 14:41:42 +02:00
Doug Goldstein
5a33366f5c interface: add udev based backend for virInterface
Add a read-only udev-based backend for virInterface. This is useful for
distros that do not have netcf support yet. Multiple libvirt-based utilities
use a HAL-based fallback when virInterface is not available, which is less
than ideal. This implements:
* virConnectNumOfInterfaces()
* virConnectListInterfaces()
* virConnectNumOfDefinedInterfaces()
* virConnectListDefinedInterfaces()
* virConnectListAllInterfaces()
* virConnectInterfaceLookupByName()
* virConnectInterfaceLookupByMACString()
2012-10-09 09:39:43 -06:00
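
Client-side usage is the same regardless of which backend (netcf or udev) is
compiled in; a minimal listing example against the public API, connecting
read-only to the local default hypervisor (link with
$(pkg-config --cflags --libs libvirt)):

    #include <stdio.h>
    #include <stdlib.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpenReadOnly(NULL);  /* local default URI */
        virInterfacePtr *ifaces = NULL;
        int n, i;

        if (!conn)
            return EXIT_FAILURE;

        n = virConnectListAllInterfaces(conn, &ifaces, 0);
        for (i = 0; i < n; i++) {
            printf("%-12s %s\n", virInterfaceGetName(ifaces[i]),
                   virInterfaceGetMACString(ifaces[i]));
            virInterfaceFree(ifaces[i]);
        }
        free(ifaces);
        virConnectClose(conn);
        return n < 0 ? EXIT_FAILURE : EXIT_SUCCESS;
    }
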
Daniel P. Berrange
9467ab6074 Move virProcess{Kill,Abort,TranslateStatus} into virprocess.{c,h}
Continue consolidation of process functions by moving some
helpers out of command.{c,h} into virprocess.{c,h}

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
2012-09-26 10:09:57 +01:00
Doug Goldstein
b95ad92e05 build: define WITH_INTERFACE for the driver
Based exclusively on work by Eric Blake in a patch posted with the same
subject, with some modifications related to comments and my plans to
add another backend.

WITH_INTERFACE is now the only automake variable deciding whether to
build the driver, while WITH_NETCF identifies that we want to use the
netcf library as the backend.

* configure.ac: Added with_interface
* src/interface/netcf_driver.c: Renamed..
* src/interface/interface_backend_netcf.c: ..to this to match storage.
* src/interface/netcf_driver.h: Renamed..
* src/interface/interface_driver.h: ..to this.
* daemon/Makefile.am: Respect WITH_INTERFACE and WITH_NETCF.
* libvirt.spec.in: Add RPM support for --with-interface
2012-09-19 08:27:01 -06:00
Eric Blake
6478ec1673 snapshot: split snapshot conf code into own file
This has several benefits:
1. Future snapshot-related code has a definite place to go (and I
_will_ be adding some)
2. Snapshot errors now use the VIR_FROM_DOMAIN_SNAPSHOT error
classification, which has been underutilized (previously only in
libvirt.c)

* src/conf/domain_conf.h, domain_conf.c: Split...
* src/conf/snapshot_conf.h, snapshot_conf.c: ...into new files.
* src/Makefile.am (DOMAIN_CONF_SOURCES): Build new files.
* po/POTFILES.in: Mark new file for translation.
* src/vbox/vbox_tmpl.c: Update caller.
* src/esx/esx_driver.c: Likewise.
* src/qemu/qemu_command.c: Likewise.
* src/qemu/qemu_domain.h: Likewise.
2012-08-24 09:51:08 -06:00
Peter Krempa
1193fc5f44 libssh2_transport: add main libssh2 transport implementation
This patch adds helper functions that enable us to use libssh2 in
conjunction with libvirt's virNetSockets for ssh transport instead of
spawning an "ssh" client process.

This implementation supports tunneled plaintext, keyboard-interactive,
private key, ssh agent based and null authentication. Libvirt's auth
callback is used for interaction with the user (keyboard-interactive
authentication, adding of host keys, private key passphrases). This
enables seamless integration into the application using libvirt. No
helpers such as "ssh-askpass" are needed.

Reading and writing of OpenSSH style "known_hosts" files is supported.

Communication is done using an SSH exec channel, where the user may
specify an arbitrary command to be executed on the remote side; reads
and writes to/from stdin/stdout are sent through the ssh channel.
Usage of stderr is not (yet) supported.
2012-08-21 14:47:09 +02:00
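
For orientation, a bare-bones libssh2 exec-channel round trip, separate from
the libvirt transport itself; the host, credentials and command are
placeholders, and only password authentication and stdout reading are shown.

    #include <libssh2.h>
    #include <netdb.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct addrinfo hints = { .ai_socktype = SOCK_STREAM }, *res;
        LIBSSH2_SESSION *session;
        LIBSSH2_CHANNEL *channel;
        char buf[4096];
        ssize_t n;
        int sock;

        /* placeholder host/port; a libvirt client would connect to the
           hypervisor host named in the connection URI */
        if (getaddrinfo("host.example.org", "22", &hints, &res))
            return 1;
        sock = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (sock < 0 || connect(sock, res->ai_addr, res->ai_addrlen) < 0)
            return 1;

        libssh2_init(0);
        session = libssh2_session_init();
        if (!session || libssh2_session_handshake(session, sock))
            return 1;
        /* password auth only, for brevity; the transport also offers agent,
           private key, keyboard-interactive and "none" authentication */
        if (libssh2_userauth_password(session, "user", "secret"))
            return 1;

        channel = libssh2_channel_open_session(session);
        if (!channel || libssh2_channel_exec(channel, "uname -a"))
            return 1;
        while ((n = libssh2_channel_read(channel, buf, sizeof(buf))) > 0)
            fwrite(buf, 1, n, stdout);

        libssh2_channel_close(channel);
        libssh2_channel_free(channel);
        libssh2_session_disconnect(session, "done");
        libssh2_session_free(session);
        libssh2_exit();
        close(sock);
        freeaddrinfo(res);
        return 0;
    }
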
Shradha Shah
f9150c8158 conf: move DevicePCIAddress functions to separate file
Move the functions that parse/format and validate PCI addresses to
their own file so they can be conveniently used in other places
besides device_conf.c.

Refactoring existing code without causing any functional changes to
prepare for new code.

This patch makes the code reusable.

Signed-off-by: Shradha Shah <sshah@solarflare.com>
2012-08-17 15:43:25 -04:00
Laine Stump
3f9274a524 conf: add <vlan> element to network and domain interface elements
The following config elements now support a <vlan> subelement:

within a domain: <interface>, and the <actual> subelement of <interface>
within a network: the toplevel, as well as any <portgroup>

Each vlan element must have one or more <tag id='n'/> subelements.  If
there is more than one tag, it is assumed that vlan trunking is being
requested. If trunking is required with only a single tag, the
attribute "trunk='yes'" should be added to the toplevel <vlan>
element.

Some examples:

  <interface type='hostdev'>
    <vlan>
      <tag id='42'/>
    </vlan>
    <mac address='52:54:00:12:34:56'/>
    ...
  </interface>

  <network>
    <name>vlan-net</name>
    <vlan trunk='yes'>
      <tag id='30'/>
    </vlan>
    <virtualport type='openvswitch'/>
  </network>

  <interface type='network'>
    <source network='vlan-net'/>
    ...
  </interface>

  <network>
    <name>trunk-vlan</name>
    <vlan>
      <tag id='42'/>
      <tag id='43'/>
    </vlan>
    ...
  </network>

  <network>
    <name>multi</name>
    ...
    <portgroup name='production'>
      <vlan>
        <tag id='42'/>
      </vlan>
    </portgroup>
    <portgroup name='test'>
      <vlan>
        <tag id='666'/>
      </vlan>
    </portgroup>
  </network>

  <interface type='network'>
    <source network='multi' portgroup='test'/>
    ...
  </interface>

IMPORTANT NOTE: As of this patch there is no backend support for the
vlan element for *any* network device type. When support is added in
later patches, it will only be for those select network types that
support setting up a vlan on the host side, without the guest's
involvement. (For example, it will be possible to configure a vlan for
a guest connected to an openvswitch bridge, but it won't be possible
to do that for one that is connected to a standard Linux host bridge.)
2012-08-15 13:10:57 -04:00
Matthias Bolte
b8fa5fd071 esx: Implement network driver
An ESX server has one or more PhysicalNics that represent the actual
hardware NICs. Those can be listed via the interface driver.

A libvirt virtual network is mapped to a HostVirtualSwitch. On the
physical side a HostVirtualSwitch can be connected to PhysicalNics.
On the virtual side a HostVirtualSwitch has HostPortGroups that are
mapped to libvirt virtual network's portgroups. Typically there is a
HostPortGroup named 'VM Network' that is used to connect virtual
machines to a HostVirtualSwitch. A second HostPortGroup typically
named 'Management Network' is used to connect the hypervisor itself
to the HostVirtualSwitch. This one is not mapped to a libvirt virtual
network's portgroup. There can be more HostPortGroups than those
typical two on a HostVirtualSwitch.

         +---------------+-------------------+
   ...---|               |                   |   +-------------+
         | HostPortGroup |                   |---| PhysicalNic |
         |   VM Network  |                   |   |    vmnic0   |
   ...---|               |                   |   +-------------+
         +---------------+ HostVirtualSwitch |
                         |     vSwitch0      |
         +---------------+                   |
         | HostPortGroup |                   |
   ...---|   Management  |                   |
         |    Network    |                   |
         +---------------+-------------------+

The virtual counterparts of the PhysicalNic are the HostVirtualNic for
the hypervisor and the VirtualEthernetCard for the virtual machines
that are grouped into HostPortGroups.

   +---------------------+   +---------------+---...
   | VirtualEthernetCard |---|               |
   +---------------------+   | HostPortGroup |
   +---------------------+   |   VM Network  |
   | VirtualEthernetCard |---|               |
   +---------------------+   +---------------+
                                             |
                             +---------------+
   +---------------------+   | HostPortGroup |
   |    HostVirtualNic   |---|   Management  |
   +---------------------+   |    Network    |
                             +---------------+---...

The currently implemented network driver can list, define and undefine
HostVirtualSwitches including HostPortGroups for virtual machines.
Existing HostVirtualSwitches cannot be edited yet. This will be added
in a followup patch.
2012-08-09 22:31:47 +02:00
Dmitry Guryanov
aa296e6c29 parallels: add storage driver
Parallels Cloud Server has one serious discrepancy with libvirt:
libvirt stores domain configuration files in one place, and storage
files in other places (via the API of storage pools and storage volumes).
Parallels Cloud Server stores all domain data in a single directory;
for example, you may have a domain with the name fedora-15, which will be
located in '/var/parallels/fedora-15.pvm', and its hard disk image will be
in '/var/parallels/fedora-15.pvm/harddisk1.hdd'.

I've decided to create a storage driver which produces pseudo-volumes
(xml files with volume descriptions), and they will be 'converted' to
real disk images after being attached to a VM.

So if someone creates a VM with one hard disk using virt-manager,
virt-manager first creates a new volume and then defines a
domain. We can look up a volume by path in the XML domain definition
and find out the location of the new domain and the size of its hard disk.

Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
2012-08-01 11:48:01 +08:00
Dmitry Guryanov
e93c33a987 parallels: add functions to list domains and get info
The Parallels driver is 'stateless', like the vmware or openvz drivers.
It collects information about domains during startup using the
command-line utility prlctl. VMs in Parallels are identified by UUIDs
or unique names, which can be used as the respective fields in the
virDomainDef structure. Currently only basic info, like the description,
number of virtual CPUs and amount of memory, is implemented.
Querying device information will be added in the next patches.

Parallels doesn't support non-persistent domains - you can't run
a domain having only a disk image; it must always be registered
in the system.

Functions for querying domain info have just been copied from the
test driver with some changes - they extract the needed data from a
previously created list of virDomainObj objects.

Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
2012-08-01 11:44:36 +08:00
Dmitry Guryanov
cafc26ff5f parallels: add driver skeleton
Parallels Cloud Server is a cloud-ready virtualization
solution that allows users to simultaneously run multiple virtual
machines and containers on the same physical server.

More information can be found here: http://www.parallels.com/products/pcs/
A beta version of Parallels Cloud Server can also be downloaded there.

Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
2012-08-01 11:44:26 +08:00
Daniel P. Berrange
de4b32e4bf Move LXC monitor code out into separate file
Move the code that handles the LXC monitor out of the
lxc_process.c file and into lxc_monitor.{c,h}

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
2012-07-30 12:50:22 +01:00
Osier Yang
49989d7025 virsh: Split cmds in node device group from virsh.c
Commands in the node device group are moved from virsh.c to virsh-nodedev.c.

* virsh.c: Remove commands in the node device group.
* virsh-nodedev.c: New file, filled with the commands in the node device group
* po/POTFILES.in: Add virsh-nodedev.c
* cfg.mk: Skip the config.h inclusion check for virsh-nodedev.c
2012-07-26 12:00:43 +08:00
Osier Yang
290eb0d9f2 virsh: Split cmds in host group from virsh.c
Commands in the host group are moved from virsh.c to virsh-host.c.

* virsh.c: Remove commands in the host group.
* virsh-host.c: New file, filled with the commands in the host group
* po/POTFILES.in: Add virsh-host.c
* cfg.mk: Skip the config.h inclusion check for virsh-host.c
2012-07-26 12:00:43 +08:00
Osier Yang
648ad2471b virsh: Split cmds to manage domain snapshot from virsh.c
Commands to manage domain snapshots are moved from virsh.c to
virsh-snapshot.c.

* virsh.c: Remove domain snapshot commands.
* virsh-snapshot.c: New file, filled with domain snapshot commands.
* po/POTFILES.in: Add virsh-snapshot.c
* cfg.mk: Skip the strcase and config.h inclusion checks for
          virsh-snapshot.c
2012-07-26 12:00:43 +08:00