* proxy/libvirt_proxy.c and docs/* typo fixing

Atsushi
Atsushi SAKAI 2008-04-24 09:17:29 +00:00
parent bc4dacb286
commit 2ef22ecee3
13 changed files with 21 additions and 17 deletions


@@ -1,3 +1,7 @@
+Thu Apr 24 18:00:21 JST 2008 Atsushi SAKAI <sakaia@jp.fujitsu.com>
+
+* proxy/libvirt_proxy.c docs/* fixing typos
+
 Thu Apr 24 09:54:19 CEST 2008 Daniel Veillard <veillard@redhat.com>
 
 * AUTHORS: indicate that the Logo is by Diana Fong


@@ -27,7 +27,7 @@
 <h1>Bindings for other languages</h1>
 <p>Libvirt comes with bindings to support other languages than
 pure C. First the headers embeds the necessary declarations to
-allow direct acces from C++ code, but also we have bindings for
+allow direct access from C++ code, but also we have bindings for
 higher level kind of languages:</p>
 <ul><li>Python: Libvirt comes with direct support for the Python language
 (just make sure you installed the libvirt-python package if not


@@ -45,7 +45,7 @@
 </pre>
 <h2>Built from CVS / GIT</h2>
 <p>
-When building from CVS it is neccessary to generate the autotools
+When building from CVS it is necessary to generate the autotools
 support files. This requires having <code>autoconf</code>,
 <code>automake</code>, <code>libtool</code> and <code>intltool</code>
 installed. The process can be automated with the <code>autogen.sh</code>


@@ -33,14 +33,14 @@
 <h2>Hourly development snapshots</h2>
 <p>
 Once an hour, an automated snapshot is made from the latest CVS server
-source tree. These snapshots should be usable, but we make no guarentees
+source tree. These snapshots should be usable, but we make no guarantees
 about their stability:
 </p>
 <ul><li><a href="ftp://libvirt.org/libvirt/libvirt-cvs-snapshot.tar.gz">libvirt.org FTP server</a></li><li><a href="http://libvirt.org/sources/libvirt-cvs-snapshot.tar.gz">libvirt.org HTTP server</a></li></ul>
 <h2>CVS repository access</h2>
 <p>
 The master source repository uses <a href="http://ximbiot.com/cvs/cvshome/docs/">CVS</a>
-and anonymous access is provided. Prior to accessing the server is it neccessary
+and anonymous access is provided. Prior to accessing the server is it necessary
 to authenticate using the password <code>anoncvs</code>. This can be accomplished with the
 <code>cvs login</code> command:
 </p>
@@ -57,7 +57,7 @@
 </pre>
 <p>
 The libvirt build process uses GNU autotools, so after obtaining a checkout
-it is neccessary to generate the configure script and Makefile.in templates
+it is necessary to generate the configure script and Makefile.in templates
 using the <code>autogen.sh</code> command. As an example, to do a complete
 build and install it into your home directory run:
 </p>


@@ -18,7 +18,7 @@
 <p>
 Once an hour, an automated snapshot is made from the latest CVS server
-source tree. These snapshots should be usable, but we make no guarentees
+source tree. These snapshots should be usable, but we make no guarantees
 about their stability:
 </p>
@@ -31,7 +31,7 @@
 <p>
 The master source repository uses <a href="http://ximbiot.com/cvs/cvshome/docs/">CVS</a>
-and anonymous access is provided. Prior to accessing the server is it neccessary
+and anonymous access is provided. Prior to accessing the server is it necessary
 to authenticate using the password <code>anoncvs</code>. This can be accomplished with the
 <code>cvs login</code> command:
 </p>
@@ -52,7 +52,7 @@
 <p>
 The libvirt build process uses GNU autotools, so after obtaining a checkout
-it is neccessary to generate the configure script and Makefile.in templates
+it is necessary to generate the configure script and Makefile.in templates
 using the <code>autogen.sh</code> command. As an example, to do a complete
 build and install it into your home directory run:
 </p>


@@ -34,7 +34,7 @@
 </p>
 <h2>Hypervisor drivers</h2>
 <p>
-The hypervisor drivers currently supported by livirt are:
+The hypervisor drivers currently supported by libvirt are:
 </p>
 <ul><li><strong><a href="drvxen.html">Xen</a></strong></li><li><strong><a href="drvqemu.html">QEMU</a></strong></li><li><strong><a href="drvlxc.html">LXC</a></strong></li><li><strong><a href="drvtest.html">Test</a></strong></li><li><strong><a href="drvopenvz.html">OpenVZ</a></strong></li></ul>
 </div>


@@ -13,7 +13,7 @@
 <h2>Hypervisor drivers</h2>
 <p>
-The hypervisor drivers currently supported by livirt are:
+The hypervisor drivers currently supported by libvirt are:
 </p>
 <ul>


@@ -173,7 +173,7 @@ be created. For device based pools it will be the name of the directory in which
 devices nodes exist. For the latter <code>/dev/</code> may seem
 like the logical choice, however, devices nodes there are not
 guaranteed stable across reboots, since they are allocated on
-demand. It is preferrable to use a stable location such as one
+demand. It is preferable to use a stable location such as one
 of the <code>/dev/disk/by-{path,id,uuid,label</code> locations.
 </dd><dt>format</dt><dd>Provides information about the pool specific volume format.
 For disk pools it will provide the partition type. For filesystem


@@ -209,7 +209,7 @@ be created. For device based pools it will be the name of the directory in which
 devices nodes exist. For the latter <code>/dev/</code> may seem
 like the logical choice, however, devices nodes there are not
 guaranteed stable across reboots, since they are allocated on
-demand. It is preferrable to use a stable location such as one
+demand. It is preferable to use a stable location such as one
 of the <code>/dev/disk/by-{path,id,uuid,label</code> locations.
 </dd>
 <dt>format</dt>
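As background for the recommendation in the two hunks above: the /dev/disk/by-{path,id,uuid,label} entries are persistent symlinks maintained by udev, while the bare /dev node they point at can change from boot to boot. A small illustrative C program, not part of this commit and with a made-up device name, that resolves such a symlink to the node it currently targets:

/* Illustration only: resolve a stable /dev/disk/by-* symlink to the
 * device node it currently points at. The example path is fictitious. */
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
    const char *stable = "/dev/disk/by-id/scsi-SATA_EXAMPLE_DISK";
    char resolved[PATH_MAX];

    if (realpath(stable, resolved) != NULL)
        printf("%s -> %s\n", stable, resolved);  /* e.g. /dev/sdb */
    else
        perror("realpath");
    return 0;
}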


@@ -56,7 +56,7 @@
 Storage on IDE/SCSI/USB disks, FibreChannel, LVM, iSCSI, NFS and filesystems
 </li></ul>
 <h2>libvirt provides:</h2>
-<ul><li>Remote management using TLS encryption and x509 certificates</li><li>Remote management authenticating with Kerberos and SASL</li><li>Local access control using PolicyKit</li><li>Zero-conf discovery using Avahi mulicast-DNS</li><li>Management of virtual machines, virtual networks and storage</li></ul>
+<ul><li>Remote management using TLS encryption and x509 certificates</li><li>Remote management authenticating with Kerberos and SASL</li><li>Local access control using PolicyKit</li><li>Zero-conf discovery using Avahi multicast-DNS</li><li>Management of virtual machines, virtual networks and storage</li></ul>
 <p class="image">
 <img src="libvirtLogo.png" alt="libvirt Logo" /></p>
 </div>


@@ -57,7 +57,7 @@
 <li>Remote management using TLS encryption and x509 certificates</li>
 <li>Remote management authenticating with Kerberos and SASL</li>
 <li>Local access control using PolicyKit</li>
-<li>Zero-conf discovery using Avahi mulicast-DNS</li>
+<li>Zero-conf discovery using Avahi multicast-DNS</li>
 <li>Management of virtual machines, virtual networks and storage</li>
 </ul>


@@ -103,7 +103,7 @@
 <span>Driver for the Linux native container API</span>
 </li><li>
 <a href="drvtest.html">Test</a>
-<span>Psuedo-driver simulating APIs in memory for test suites</span>
+<span>Pseudo-driver simulating APIs in memory for test suites</span>
 </li><li>
 <a href="drvremote.html">Remote</a>
 <span>Driver providing secure remote to the libvirt APIs</span>


@@ -151,7 +151,7 @@ proxyCloseUnixSocket(void) {
 /**
  * proxyListenUnixSocket:
- * @path: the fileame for the socket
+ * @path: the filename for the socket
  *
  * create a new abstract socket based on that path and listen on it
  *
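The doc comment above refers to an abstract AF_UNIX socket: the name goes into sun_path with a leading NUL byte, so no filesystem entry is created and the name disappears when the socket is closed. A minimal standalone sketch of that idea, for illustration only and not the code of proxyListenUnixSocket:

/* Sketch: create and listen on an abstract AF_UNIX socket.
 * Illustration only; not taken from proxy/libvirt_proxy.c. */
#include <stddef.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static int
listen_abstract_socket(const char *path)
{
    struct sockaddr_un addr;
    socklen_t len;
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    if (fd < 0)
        return -1;

    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    addr.sun_path[0] = '\0';            /* leading NUL => abstract name */
    strncpy(&addr.sun_path[1], path, sizeof(addr.sun_path) - 2);
    len = offsetof(struct sockaddr_un, sun_path) + 1 + strlen(path);

    if (bind(fd, (struct sockaddr *) &addr, len) < 0 ||
        listen(fd, 30) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}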
@@ -700,7 +700,7 @@ proxyProcessRequests(void) {
             if (exit_timeout == 0) {
                 done = 1;
                 if (debug > 0) {
-                    fprintf(stderr, "Exitting after 30s without clients\n");
+                    fprintf(stderr, "Exiting after 30s without clients\n");
                 }
             }
         } else
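The hunk above is the tail of an idle-exit countdown: once exit_timeout reaches zero the proxy stops, printing the now correctly spelled message after 30 seconds without clients. A self-contained sketch of how such a countdown typically works; the loop, the poll() setup and the decrement are assumptions, only the innermost check mirrors the hunk:

/* Sketch of an idle-exit countdown like the one visible in the hunk
 * above; simplified assumptions, not the real proxy loop. */
#include <poll.h>
#include <stdio.h>

#define IDLE_EXIT_SECONDS 30

static void
serve_until_idle(struct pollfd *fds, nfds_t nfds, int nb_clients, int debug)
{
    int exit_timeout = IDLE_EXIT_SECONDS;
    int done = 0;

    while (!done) {
        int ret = poll(fds, nfds, 1000);    /* wake up once per second */

        if (ret == 0 && nb_clients == 0) {
            /* idle tick with no clients: count down toward exit */
            exit_timeout--;
            if (exit_timeout == 0) {
                done = 1;
                if (debug > 0)
                    fprintf(stderr, "Exiting after 30s without clients\n");
            }
        } else if (ret > 0) {
            /* a real server would accept and dispatch requests here,
             * resetting exit_timeout while clients remain connected */
            exit_timeout = IDLE_EXIT_SECONDS;
        }
    }
}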