docs: revamp api_extension example, using vcpu patch series

* docs/api_extension/*: Replace example files.
* docs/api_extension.html.in: Rewrite to match new example files.
Eric Blake 2010-10-20 10:04:19 -06:00
parent 8efebd1761
commit 66a0409067
24 changed files with 4451 additions and 1775 deletions

View File: docs/api_extension.html.in

@ -10,8 +10,13 @@
<p>
This document walks you through the process of implementing a new
API in libvirt. It uses as an example the addition of the node device
create and destroy APIs.
API in libvirt. It uses as an example the addition of an API for
separating maximum from current vcpu usage of a domain.
Remember that new API consists of any new public functions, as
well as the addition of flags or extensions of XML used by
existing functions. The example in this document adds both new
functions and an XML extension. Not all libvirt API additions
require quite as many patches.
</p>
<p>
@ -23,7 +28,12 @@
added to libvirt. Someone may already be working on the feature you
want. Also, recognize that everything you write is likely to undergo
significant rework as you discuss it with the other developers, so
don't wait too long before getting feedback.
don't wait too long before getting feedback. In the vcpu example
below, list feedback was first requested
<a href="https://www.redhat.com/archives/libvir-list/2010-September/msg00423.html">here</a>
and resulted in several rounds of improvements before coding
began. In turn, this example is slightly rearranged from the actual
order of the commits.
</p>
<p>
@ -46,11 +56,22 @@
<li>define the public API</li>
<li>define the internal driver API</li>
<li>implement the public API</li>
<li>define the wire protocol format</li>
<li>implement the RPC client</li>
<li>implement the server side dispatcher</li>
<li>implement the driver methods</li>
<li>implement the remote protocol:
<ol>
<li>define the wire protocol format</li>
<li>implement the RPC client</li>
<li>implement the server side dispatcher</li>
</ol>
</li>
<li>use new API where appropriate in drivers</li>
<li>add virsh support</li>
<li>add common handling for new API</li>
<li>for each driver that can support the new API:
<ol>
<li>add prerequisite support</li>
<li>fully implement new API</li>
</ol>
</li>
</ol>
<p>
@ -66,11 +87,10 @@
functionality--get the whole thing working and make sure you're happy
with it. Then use git or some other version control system that lets
you rewrite your commit history and break patches into pieces so you
don't drop a big blob of code on the mailing list at one go. For
example, I didn't follow my own advice when I originally submitted the
example code to the libvirt list but rather submitted it in several
large chunks. I've used git's ability to rewrite my commit history to
break the code apart into the example patches shown.
don't drop a big blob of code on the mailing list in one go.
Also, you should follow the upstream tree, and rebase your
series to adapt your patches to work with any other changes
that were accepted upstream during your development.
</p>
<p>
@ -86,9 +106,24 @@
<h2><a name='publicapi'>Defining the public API</a></h2>
<p>The first task is to define the public API and add it to:</p>
<p>The first task is to define the public API. If the new API
involves an XML extension, you have to enhance the RelaxNG
schema and document the new elements or attributes:</p>
<p><code>include/libvirt/libvirt.h.in</code></p>
<p><code>
docs/schemas/domain.rng<br/>
docs/formatdomain.html.in
</code></p>
<p>If the API extension involves a new function, you have to add a
declaration in the public header, and arrange to export the
function name (symbol) so other programs can link against the
libvirt library and call the new function:</p>
<p><code>
include/libvirt/libvirt.h.in
src/libvirt_public.syms
</code></p>
<p>
This task is in many ways the most important to get right, since once
@ -99,12 +134,9 @@
rework it as you go through the process of implementing it.
</p>
<p>Once you have defined the API, you have to add the symbol names to:</p>
<p><code>src/libvirt_public.syms</code></p>
<p class="example">See <a href="api_extension/0001-Step-1-of-8-Define-the-public-API.patch">0001-Step-1-of-8-Define-the-public-API.patch</a> for example code.</p>
<p class="example">See <a href="api_extension/0001-Step-1-of-15-add-to-xml.patch">0001-Step-1-of-15-add-to-xml.patch</a>
and <a href="api_extension/0002-Step-2-of-15-add-new-public-API.patch">0002-Step-2-of-15-add-new-public-API.patch</a>
for example code.</p>
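<p>
To make the rest of this walk-through concrete, here is the shape of the
additions made by those two example patches: the <code>vcpu</code> element
gains an optional <code>current</code> attribute in the schema, and the
public header gains a flags enum plus two new functions (condensed from
patch 0002; the enum values are written out instead of as bit shifts):
</p>
<pre>
/* include/libvirt/libvirt.h.in, condensed from the example patches */
typedef enum {
    VIR_DOMAIN_VCPU_LIVE    = 1, /* Affect active domain */
    VIR_DOMAIN_VCPU_CONFIG  = 2, /* Affect next boot */
    VIR_DOMAIN_VCPU_MAXIMUM = 4, /* Max rather than current count */
} virDomainVcpuFlags;

int virDomainSetVcpusFlags (virDomainPtr domain,
                            unsigned int nvcpus,
                            unsigned int flags);
int virDomainGetVcpusFlags (virDomainPtr domain,
                            unsigned int flags);
</pre>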
<h2><a name='internalapi'>Defining the internal API</a></h2>
@ -118,7 +150,7 @@
<p>
Of course, it's possible that the new API will involve the creation of
an entire new driver type, in which case the changes will include the
an entirely new driver type, in which case the changes will include the
creation of a new struct type to represent the new driver type.
</p>
@ -129,10 +161,11 @@
<p>
To define the internal API, first typedef the driver function
prototype and then add a new field for it to the relevant driver
struct.
struct. Then, update all existing instances of the driver to
provide a <code>NULL</code> stub for the new function.
</p>
<p class="example">See <a href="api_extension/0002-Step-2-of-8-Define-the-internal-driver-API.patch">0002-Step-2-of-8-Define-the-internal-driver-API.patch</a></p>
<p class="example">See <a href="api_extension/0003-Step-3-of-15-define-internal-driver-API.patch">0003-Step-3-of-15-define-internal-driver-API.patch</a></p>
<h2><a name='implpublic'>Implementing the public API</a></h2>
@ -166,16 +199,24 @@
<p><code>src/libvirt.c</code></p>
<p class="example">See <a href="api_extension/0003-Step-3-of-8-Implement-the-public-API.patch">0003-Step-3-of-8-Implement-the-public-API.patch</a></p>
<p class="example">See <a href="api_extension/0004-Step-4-of-15-implement-the-public-APIs.patch">0004-Step-4-of-15-implement-the-public-APIs.patch</a></p>
<h2><a name='wireproto'>Defining the wire protocol format</a></h2>
<h2><a name='remoteproto'>Implementing the remote protocol</a></h2>
<p>
Defining the wire protocol is essentially a straightforward exercise
which is probably most easily understood by referring to the existing
remote protocol wire format definitions and the example patch. It
involves making two additions to:
Implementing the remote protocol is essentially a
straightforward exercise which is probably most easily
understood by referring to the existing code and the example
patch. It involves several related changes, including the
regeneration of derived files, with further details below.
</p>
<p class="example">See <a href="api_extension/0005-Step-5-of-15-implement-the-remote-protocol.patch">0005-Step-5-of-15-implement-the-remote-protocol.patch</a></p>
<h3><a name='wireproto'>Defining the wire protocol format</a></h3>
<p>
Defining the wire protocol involves making additions to:
</p>
<p><code>src/remote/remote_protocol.x</code></p>
@ -185,7 +226,7 @@
to the API. One struct describes the parameters to be passed to the
remote function, and a second struct describes the value returned by
the remote function. The one exception to this rule is that functions
that return only integer status do not require a struct for returned
that return only 0 or -1 for status do not require a struct for returned
data.
</p>
@ -194,23 +235,28 @@
added to the API.
</p>
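<p>
For the vcpu example, the wire format additions are just the args/ret
structures and two new procedure numbers (copied from the example patch).
Note that the 'set' call needs no ret structure, since it only returns
0 or -1:
</p>
<pre>
/* src/remote/remote_protocol.x additions from the example patch */
struct remote_domain_set_vcpus_flags_args {
    remote_nonnull_domain dom;
    unsigned int nvcpus;
    unsigned int flags;
};

struct remote_domain_get_vcpus_flags_args {
    remote_nonnull_domain dom;
    unsigned int flags;
};

struct remote_domain_get_vcpus_flags_ret {
    int num;
};

/* added at the end of enum remote_procedure */
    REMOTE_PROC_DOMAIN_SET_VCPUS_FLAGS = 199,
    REMOTE_PROC_DOMAIN_GET_VCPUS_FLAGS = 200
</pre>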
<p class="example">See <a href="api_extension/0004-Step-4-of-8-Define-the-wire-protocol-format.patch">0004-Step-4-of-8-Define-the-wire-protocol-format.patch</a></p>
<p>
Once these changes are in place, it's necessary to run 'make rpcgen'
in the src directory to create the .c and .h files required by the
remote protocol code. This must be done on a Linux host using the
GLibC rpcgen program. Other rpcgen versions may generate code which
results in bogus compile time warnings
results in bogus compile time warnings. This regenerates the
following files:
</p>
<p><code>
daemon/remote_dispatch_args.h
daemon/remote_dispatch_prototypes.h
daemon/remote_dispatch_table.h
src/remote/remote_protocol.c
src/remote/remote_protocol.h
</code></p>
<h2><a name='rpcclient'>Implement the RPC client</a></h2>
<h3><a name='rpcclient'>Implement the RPC client</a></h3>
<p>
Implementing the RPC client is also relatively mechanical, so refer to
the exising code and example patch for guidance. The RPC client uses
the rpcgen generated .h files. The remote method calls go in:
Implementing the RPC client uses the rpcgen generated .h files. The remote
method calls go in:
</p>
<p><code>src/remote/remote_internal.c</code></p>
@ -227,17 +273,10 @@
<li>unlocks the remote driver.</li>
</ol>
<p>
Once you have created the remote method calls, you have to add fields
for them to the driver structs for the appropriate remote driver.
</p>
<p class="example">See <a href="api_extension/0005-Step-5-of-8-Implement-the-RPC-client.patch">0005-Step-5-of-8-Implement-the-RPC-client.patch</a></p>
<h2><a name="serverdispatch">Implement the server side dispatcher</a></h2>
<h3><a name="serverdispatch">Implement the server side dispatcher</a></h3>
<p>
Implementing the server side of the remote function calls is simply a
Implementing the server side of the remote function call is simply a
matter of deserializing the parameters passed in from the remote
caller and passing them to the corresponding internal API function.
The server side dispatchers are implemented in:
@ -247,8 +286,64 @@
<p>Again, this step uses the .h files generated by make rpcgen.</p>
<p class="example">See <a href="api_extension/0006-Step-6-of-8-Implement-the-server-side-dispatcher.patch">0006-Step-6-of-8-Implement-the-server-side-dispatcher.patch</a></p>
<p>
After all three pieces of the remote protocol are complete, and
the generated files have been updated, it will be necessary to
update the file:</p>
<p><code>src/remote_protocol-structs</code></p>
<p>
This file should only have new lines added; modifications to
existing lines probably imply a backwards-incompatible API change.
</p>
<p class="example">See <a href="api_extension/0005-Step-5-of-15-implement-the-remote-protocol.patch">0005-Step-5-of-15-implement-the-remote-protocol.patch</a></p>
<h2><a name="internaluseapi">Use the new API internally</a></h2>
<p>
Sometimes, a new API serves as a superset of existing API, by
adding more granularity in what can be managed. When this is
the case, it makes sense to share a common implementation by
making the older API become a trivial wrapper around the new
API, rather than duplicating the common code. This step should
not introduce any semantic differences for the old API, and is
not necessary if the new API has no relation to existing API.
</p>
<p class="example">See <a href="api_extension/0006-Step-6-of-15-make-old-API-trivially-wrap-to-new-API.patch">0006-Step-6-of-15-make-old-API-trivially-wrap-to-new-API.patch</a></p>
<h2><a name="virshuseapi">Expose the new API in virsh</a></h2>
<p>
All new API should be manageable from the virsh command line
shell. This proves that the API is sufficient for the intended
purpose, and helps to identify whether the proposed API needs
slight changes for easier usage. However, remember that virsh
is used to connect to hosts running older versions of libvirtd,
so new commands should have fallbacks to an older API if
possible; implementing the virsh hooks at this point makes it
very easy to test these fallbacks. Also remember to document
virsh additions.
</p>
<p>
A virsh command is composed of a few pieces of code. You need to
define an array of vshCmdInfo structs for each new command that
contain the help text and the command description text. You also need
an array of vshCmdOptDef structs to describe the command options.
Once you have those pieces in place you can write the function
implementing the virsh command. Finally, you need to add the new
command to the commands[] array. The following files need changes:
</p>
<p><code>
tools/virsh.c<br/>
tools/virsh.pod
</code></p>
<p class="example">See <a href="api_extension/0007-Step-7-of-15-add-virsh-support.patch">0007-Step-7-of-15-add-virsh-support.patch</a></p>
<h2><a name="driverimpl">Implement the driver methods</a></h2>
@ -261,42 +356,77 @@
adding.
</p>
<h3><a name="commonimpl">Implement common handling</a></h3>
<p>
In the example code, the extension is only an additional two function
calls in the node device API, so most of the new code is additions to
existing files. The only new files are there for multi-platform
implementation convenience, as some of the new code is Linux specific.
If the new API is applicable to more than one driver, it may
make sense to provide some utility routines, or to factor some
of the work into the dispatcher, to avoid reimplementing the
same code in every driver. In the example code, this involved
adding a member to the virDomainDefPtr struct for mapping
between the XML API addition and the in-memory representation of
a domain, along with updating all clients to use the new member.
Up to this point, there have been no changes to existing
semantics, and the new APIs will fail unless they are used in
the same way as the older API wrappers.
</p>
<p class="example">See <a href="api_extension/0008-Step-8-of-15-support-new-xml.patch">0008-Step-8-of-15-support-new-xml.patch</a></p>
<h3><a name="drivercode">Implement driver handling</a></h3>
<p>
The remaining patches should only touch one driver at a time.
It is possible to implement all changes for a driver in one
patch, but for review purposes it may still make sense to break
things into simpler steps. Here is where the new APIs finally
start working.
</p>
<p>
The example code is probably uninteresting unless you're concerned
with libvirt storage, but I've included it here to show how new files
are added to the build environment.
In the example patches, three separate drivers are supported:
test, qemu, and xen. It is always a good idea to patch the test
driver in addition to the target driver, to prove that the API
can be used for more than one driver. The example updates the
test driver in one patch:
</p>
<p class="example">See <a href="api_extension/0007-Step-7-of-8-Implement-the-driver-methods.patch">0007-Step-7-of-8-Implement-the-driver-methods.patch</a></p>
<h2><a name="virsh">Implement virsh commands</a></h2>
<p class="example">See <a href="api_extension/0009-Step-9-of-15-support-all-flags-in-test-driver.patch">0009-Step-9-of-15-support-all-flags-in-test-driver.patch</a></p>
<p>
Once you have the new functionality in place, the easiest way to test
it and also to provide it to end users is to implement support for it
in virsh.
The qemu changes were easier to split into two phases, one for
updating the mapping between the new XML and the hypervisor
command line arguments, and one for supporting all possible
flags of the new API:
</p>
<p class="example">See <a href="api_extension/0010-Step-10-of-15-improve-vcpu-support-in-qemu-command-line.patch">0010-Step-10-of-15-improve-vcpu-support-in-qemu-command-line.patch</a>
and <a href="api_extension/0011-Step-11-of-15-complete-vcpu-support-in-qemu-driver.patch">0011-Step-11-of-15-complete-vcpu-support-in-qemu-driver.patch</a></p>
<p>
Finally, the example breaks the xen driver changes across four
patches. One maps the XML changes to the hypervisor command,
the next two are independently implementing the getter and
setter APIs, and the last one provides cleanup of code that was
rendered dead by the new API.
</p>
<p class="example">See <a href="api_extension/0012-Step-12-of-15-improve-vcpu-support-in-xen-command-line.patch">0012-Step-12-of-15-improve-vcpu-support-in-xen-command-line.patch</a>,
<a href="api_extension/0013-Step-13-of-15-improve-support-for-getting-xen-vcpu-counts.patch">0013-Step-13-of-15-improve-support-for-getting-xen-vcpu-counts.patch</a>,
<a href="api_extension/0014-Step-14-of-15-improve-support-for-setting-xen-vcpu-counts.patch">0014-Step-14-of-15-improve-support-for-setting-xen-vcpu-counts.patch</a>,
and <a href="api_extension/0015-Step-15-of-15-remove-dead-xen-code.patch">0015-Step-15-of-15-remove-dead-xen-code.patch</a></p>
<p>
The exact details of the example code are probably uninteresting
unless you're concerned with virtual cpu management.
</p>
<p>
A virsh command is composed of a few pieces of code. You need to
define an array of vshCmdInfo structs for each new command that
contain the help text and the command description text. You also need
an array of vshCmdOptDef structs to describe the command options.
Once you have those pieces of data in place you can write the function
implementing the virsh command. Finally, you need to add the new
command to the commands[] array.
Once you have working functionality, run make check and make
syntax-check on each patch of the series before submitting
patches. It may also be worth writing tests for the libvirt-TCK
testsuite to exercise your new API, although those patches are
not kept in the libvirt repository.
</p>
<p class="example">See <a href="api_extension/0008-Step-8-of-8-Add-virsh-support.patch">0008-Step-8-of-8-Add-virsh-support.patch</a></p>
<p>Once you have working functionality, run make check and make
syntax-check before generating patches.</p>
</body>
</html>

View File: docs/api_extension/0001-Step-1-of-15-add-to-xml.patch

@ -0,0 +1,145 @@
From a74f4e44649906dcd82151f7ef837f66d7fa2ab1 Mon Sep 17 00:00:00 2001
From: Eric Blake <eblake@redhat.com>
Date: Mon, 27 Sep 2010 17:36:06 -0600
Subject: [PATCH 01/15] vcpu: add current attribute to <vcpu> element
Syntax agreed on in
https://www.redhat.com/archives/libvir-list/2010-September/msg00476.html
<domain ...>
<vcpu current='x'>y</vcpu>
...
can now be used to specify 1 <= x <= y current vcpus, in relation
to the boot-time max of y vcpus. If current is omitted, then
current and max are assumed to be the same value.
* docs/schemas/domain.rng: Add new attribute.
* docs/formatdomain.html.in: Document it.
* tests/qemuxml2argvdata/qemuxml2argv-smp.xml: Add to
domainschematest.
* tests/xml2sexprdata/xml2sexpr-pv-vcpus.xml: Likewise.
---
docs/formatdomain.html.in | 9 +++++--
docs/schemas/domain.rng | 5 ++++
tests/qemuxml2argvdata/qemuxml2argv-smp.xml | 28 +++++++++++++++++++++++++++
tests/xml2sexprdata/xml2sexpr-pv-vcpus.xml | 22 +++++++++++++++++++++
4 files changed, 61 insertions(+), 3 deletions(-)
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-smp.xml
create mode 100644 tests/xml2sexprdata/xml2sexpr-pv-vcpus.xml
diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index a8a1fac..96de121 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -200,7 +200,7 @@
&lt;swap_hard_limit&gt;2097152&lt;/swap_hard_limit&gt;
&lt;min_guarantee&gt;65536&lt;/min_guarantee&gt;
&lt;/memtune&gt;
- &lt;vcpu cpuset="1-4,^3,6"&gt;2&lt;/vcpu&gt;
+ &lt;vcpu cpuset="1-4,^3,6" current="1"&gt;2&lt;/vcpu&gt;
...</pre>
<dl>
@@ -238,7 +238,7 @@
minimum memory allocation for the guest. The units for this value are
kilobytes (i.e. blocks of 1024 bytes)</dd>
<dt><code>vcpu</code></dt>
- <dd>The content of this element defines the number of virtual
+ <dd>The content of this element defines the maximum number of virtual
CPUs allocated for the guest OS, which must be between 1 and
the maximum supported by the hypervisor. <span class="since">Since
0.4.4</span>, this element can contain an optional
@@ -246,7 +246,10 @@
list of physical CPU numbers that virtual CPUs can be pinned
to. Each element in that list is either a single CPU number,
a range of CPU numbers, or a caret followed by a CPU number to
- be excluded from a previous range.
+ be excluded from a previous range. <span class="since">Since
+ 0.8.5</span>, the optional attribute <code>current</code> can
+ be used to specify whether fewer than the maximum number of
+ virtual CPUs should be enabled.
</dd>
</dl>
diff --git a/docs/schemas/domain.rng b/docs/schemas/domain.rng
index f230263..a934a77 100644
--- a/docs/schemas/domain.rng
+++ b/docs/schemas/domain.rng
@@ -337,6 +337,11 @@
<ref name="cpuset"/>
</attribute>
</optional>
+ <optional>
+ <attribute name="current">
+ <ref name="countCPU"/>
+ </attribute>
+ </optional>
<ref name="countCPU"/>
</element>
</optional>
diff --git a/tests/qemuxml2argvdata/qemuxml2argv-smp.xml b/tests/qemuxml2argvdata/qemuxml2argv-smp.xml
new file mode 100644
index 0000000..975f873
--- /dev/null
+++ b/tests/qemuxml2argvdata/qemuxml2argv-smp.xml
@@ -0,0 +1,28 @@
+<domain type='qemu'>
+ <name>QEMUGuest1</name>
+ <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid>
+ <memory>219200</memory>
+ <currentMemory>219200</currentMemory>
+ <vcpu current='1'>2</vcpu>
+ <os>
+ <type arch='i686' machine='pc'>hvm</type>
+ <boot dev='hd'/>
+ </os>
+ <cpu>
+ <topology sockets='2' cores='1' threads='1'/>
+ </cpu>
+ <clock offset='utc'/>
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>restart</on_reboot>
+ <on_crash>destroy</on_crash>
+ <devices>
+ <emulator>/usr/bin/qemu</emulator>
+ <disk type='block' device='disk'>
+ <source dev='/dev/HostVG/QEMUGuest1'/>
+ <target dev='hda' bus='ide'/>
+ <address type='drive' controller='0' bus='0' unit='0'/>
+ </disk>
+ <controller type='ide' index='0'/>
+ <memballoon model='virtio'/>
+ </devices>
+</domain>
diff --git a/tests/xml2sexprdata/xml2sexpr-pv-vcpus.xml b/tests/xml2sexprdata/xml2sexpr-pv-vcpus.xml
new file mode 100644
index 0000000..d061e11
--- /dev/null
+++ b/tests/xml2sexprdata/xml2sexpr-pv-vcpus.xml
@@ -0,0 +1,22 @@
+<domain type='xen' id='15'>
+ <name>pvtest</name>
+ <uuid>596a5d2171f48fb2e068e2386a5c413e</uuid>
+ <os>
+ <type>linux</type>
+ <kernel>/var/lib/xen/vmlinuz.2Dn2YT</kernel>
+ <initrd>/var/lib/xen/initrd.img.0u-Vhq</initrd>
+ <cmdline> method=http://download.fedora.devel.redhat.com/pub/fedora/linux/core/test/5.91/x86_64/os </cmdline>
+ </os>
+ <memory>430080</memory>
+ <vcpu current='2'>4</vcpu>
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>destroy</on_reboot>
+ <on_crash>destroy</on_crash>
+ <devices>
+ <disk type='file' device='disk'>
+ <source file='/root/some.img'/>
+ <target dev='xvda'/>
+ </disk>
+ <console tty='/dev/pts/4'/>
+ </devices>
+</domain>
--
1.7.2.3

View File: docs/api_extension/0001-Step-1-of-8-Define-the-public-API.patch

@ -1,44 +0,0 @@
From 2ae8fd62a1e5e085b7902da9bc207b806d84fd91 Mon Sep 17 00:00:00 2001
From: David Allan <dallan@redhat.com>
Date: Tue, 19 May 2009 16:16:11 -0400
Subject: [PATCH] Step 1 of 8 Define the public API
---
include/libvirt/libvirt.h.in | 6 ++++++
src/libvirt_public.syms | 6 ++++++
2 files changed, 12 insertions(+), 0 deletions(-)
diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index a028b21..2f7076f 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -1124,6 +1124,12 @@ int virNodeDeviceDettach (virNodeDevicePtr dev);
int virNodeDeviceReAttach (virNodeDevicePtr dev);
int virNodeDeviceReset (virNodeDevicePtr dev);
+virNodeDevicePtr virNodeDeviceCreateXML (virConnectPtr conn,
+ const char *xmlDesc,
+ unsigned int flags);
+
+int virNodeDeviceDestroy (virNodeDevicePtr dev);
+
/*
* Domain Event Notification
*/
diff --git a/src/libvirt_public.syms b/src/libvirt_public.syms
index f7ebbc3..b8f9128 100644
--- a/src/libvirt_public.syms
+++ b/src/libvirt_public.syms
@@ -258,4 +258,10 @@ LIBVIRT_0.6.1 {
virNodeGetSecurityModel;
} LIBVIRT_0.6.0;
+LIBVIRT_0.6.3 {
+ global:
+ virNodeDeviceCreateXML;
+ virNodeDeviceDestroy;
+} LIBVIRT_0.6.1;
+
# .... define new API here using predicted next version number ....
--
1.6.0.6

View File: docs/api_extension/0002-Step-2-of-15-add-new-public-API.patch

@ -0,0 +1,62 @@
From ea3f5c68093429c6ad507b45689cdf209c2c257b Mon Sep 17 00:00:00 2001
From: Eric Blake <eblake@redhat.com>
Date: Fri, 24 Sep 2010 16:48:45 -0600
Subject: [PATCH 02/15] vcpu: add new public API
API agreed on in
https://www.redhat.com/archives/libvir-list/2010-September/msg00456.html,
but modified for enum names to be consistent with virDomainDeviceModifyFlags.
* include/libvirt/libvirt.h.in (virDomainVcpuFlags)
(virDomainSetVcpusFlags, virDomainGetVcpusFlags): New
declarations.
* src/libvirt_public.syms: Export new symbols.
---
include/libvirt/libvirt.h.in | 15 +++++++++++++++
src/libvirt_public.syms | 2 ++
2 files changed, 17 insertions(+), 0 deletions(-)
diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 2eba61e..d0cc4c0 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -915,8 +915,23 @@ struct _virVcpuInfo {
};
typedef virVcpuInfo *virVcpuInfoPtr;
+/* Flags for controlling virtual CPU hot-plugging. */
+typedef enum {
+ /* Must choose at least one of these two bits; SetVcpus can choose both */
+ VIR_DOMAIN_VCPU_LIVE = (1 << 0), /* Affect active domain */
+ VIR_DOMAIN_VCPU_CONFIG = (1 << 1), /* Affect next boot */
+
+ /* Additional flags to be bit-wise OR'd in */
+ VIR_DOMAIN_VCPU_MAXIMUM = (1 << 2), /* Max rather than current count */
+} virDomainVcpuFlags;
+
int virDomainSetVcpus (virDomainPtr domain,
unsigned int nvcpus);
+int virDomainSetVcpusFlags (virDomainPtr domain,
+ unsigned int nvcpus,
+ unsigned int flags);
+int virDomainGetVcpusFlags (virDomainPtr domain,
+ unsigned int flags);
int virDomainPinVcpu (virDomainPtr domain,
unsigned int vcpu,
diff --git a/src/libvirt_public.syms b/src/libvirt_public.syms
index fceb516..a8091b1 100644
--- a/src/libvirt_public.syms
+++ b/src/libvirt_public.syms
@@ -409,6 +409,8 @@ LIBVIRT_0.8.5 {
global:
virDomainSetMemoryParameters;
virDomainGetMemoryParameters;
+ virDomainGetVcpusFlags;
+ virDomainSetVcpusFlags;
} LIBVIRT_0.8.2;
# .... define new API here using predicted next version number ....
--
1.7.2.3

View File: docs/api_extension/0002-Step-2-of-8-Define-the-internal-driver-API.patch

@ -1,36 +0,0 @@
From b26d7fc2d64e7e6e4d3ea2b43361015d3620d7a6 Mon Sep 17 00:00:00 2001
From: David Allan <dallan@redhat.com>
Date: Tue, 19 May 2009 16:19:14 -0400
Subject: [PATCH] Step 2 of 8 Define the internal driver API
---
src/driver.h | 7 +++++++
1 files changed, 7 insertions(+), 0 deletions(-)
diff --git a/src/driver.h b/src/driver.h
index 39dc413..c357b76 100644
--- a/src/driver.h
+++ b/src/driver.h
@@ -684,6 +684,11 @@ typedef int (*virDevMonDeviceListCaps)(virNodeDevicePtr dev,
char **const names,
int maxnames);
+typedef virNodeDevicePtr (*virDrvNodeDeviceCreateXML)(virConnectPtr conn,
+ const char *xmlDesc,
+ unsigned int flags);
+typedef int (*virDrvNodeDeviceDestroy)(virNodeDevicePtr dev);
+
/**
* _virDeviceMonitor:
*
@@ -702,6 +707,8 @@ struct _virDeviceMonitor {
virDevMonDeviceGetParent deviceGetParent;
virDevMonDeviceNumOfCaps deviceNumOfCaps;
virDevMonDeviceListCaps deviceListCaps;
+ virDrvNodeDeviceCreateXML deviceCreateXML;
+ virDrvNodeDeviceDestroy deviceDestroy;
};
/*
--
1.6.0.6

View File: docs/api_extension/0003-Step-3-of-15-define-internal-driver-API.patch

@ -0,0 +1,222 @@
From dd255d64053e9960cd375994ce8f056522e12acc Mon Sep 17 00:00:00 2001
From: Eric Blake <eblake@redhat.com>
Date: Mon, 27 Sep 2010 09:18:22 -0600
Subject: [PATCH 03/15] vcpu: define internal driver API
* src/driver.h (virDrvDomainSetVcpusFlags)
(virDrvDomainGetVcpusFlags): New typedefs.
(_virDriver): New callback members.
* src/esx/esx_driver.c (esxDriver): Add stub for driver.
* src/lxc/lxc_driver.c (lxcDriver): Likewise.
* src/opennebula/one_driver.c (oneDriver): Likewise.
* src/openvz/openvz_driver.c (openvzDriver): Likewise.
* src/phyp/phyp_driver.c (phypDriver): Likewise.
* src/qemu/qemu_driver.c (qemuDriver): Likewise.
* src/remote/remote_driver.c (remote_driver): Likewise.
* src/test/test_driver.c (testDriver): Likewise.
* src/uml/uml_driver.c (umlDriver): Likewise.
* src/vbox/vbox_tmpl.c (Driver): Likewise.
* src/xen/xen_driver.c (xenUnifiedDriver): Likewise.
* src/xenapi/xenapi_driver.c (xenapiDriver): Likewise.
---
src/driver.h | 9 +++++++++
src/esx/esx_driver.c | 2 ++
src/lxc/lxc_driver.c | 2 ++
src/opennebula/one_driver.c | 2 ++
src/openvz/openvz_driver.c | 2 ++
src/phyp/phyp_driver.c | 2 ++
src/qemu/qemu_driver.c | 2 ++
src/remote/remote_driver.c | 2 ++
src/test/test_driver.c | 2 ++
src/uml/uml_driver.c | 2 ++
src/vbox/vbox_tmpl.c | 2 ++
src/xen/xen_driver.c | 2 ++
src/xenapi/xenapi_driver.c | 2 ++
13 files changed, 33 insertions(+), 0 deletions(-)
diff --git a/src/driver.h b/src/driver.h
index 32aeb04..79a96c1 100644
--- a/src/driver.h
+++ b/src/driver.h
@@ -185,6 +185,13 @@ typedef int
(*virDrvDomainSetVcpus) (virDomainPtr domain,
unsigned int nvcpus);
typedef int
+ (*virDrvDomainSetVcpusFlags) (virDomainPtr domain,
+ unsigned int nvcpus,
+ unsigned int flags);
+typedef int
+ (*virDrvDomainGetVcpusFlags) (virDomainPtr domain,
+ unsigned int flags);
+typedef int
(*virDrvDomainPinVcpu) (virDomainPtr domain,
unsigned int vcpu,
unsigned char *cpumap,
@@ -520,6 +527,8 @@ struct _virDriver {
virDrvDomainRestore domainRestore;
virDrvDomainCoreDump domainCoreDump;
virDrvDomainSetVcpus domainSetVcpus;
+ virDrvDomainSetVcpusFlags domainSetVcpusFlags;
+ virDrvDomainGetVcpusFlags domainGetVcpusFlags;
virDrvDomainPinVcpu domainPinVcpu;
virDrvDomainGetVcpus domainGetVcpus;
virDrvDomainGetMaxVcpus domainGetMaxVcpus;
diff --git a/src/esx/esx_driver.c b/src/esx/esx_driver.c
index 1b4ee29..2a32374 100644
--- a/src/esx/esx_driver.c
+++ b/src/esx/esx_driver.c
@@ -4160,6 +4160,8 @@ static virDriver esxDriver = {
NULL, /* domainRestore */
NULL, /* domainCoreDump */
esxDomainSetVcpus, /* domainSetVcpus */
+ NULL, /* domainSetVcpusFlags */
+ NULL, /* domainGetVcpusFlags */
NULL, /* domainPinVcpu */
NULL, /* domainGetVcpus */
esxDomainGetMaxVcpus, /* domainGetMaxVcpus */
diff --git a/src/lxc/lxc_driver.c b/src/lxc/lxc_driver.c
index df814da..7563a8c 100644
--- a/src/lxc/lxc_driver.c
+++ b/src/lxc/lxc_driver.c
@@ -2768,6 +2768,8 @@ static virDriver lxcDriver = {
NULL, /* domainRestore */
NULL, /* domainCoreDump */
NULL, /* domainSetVcpus */
+ NULL, /* domainSetVcpusFlags */
+ NULL, /* domainGetVcpusFlags */
NULL, /* domainPinVcpu */
NULL, /* domainGetVcpus */
NULL, /* domainGetMaxVcpus */
diff --git a/src/opennebula/one_driver.c b/src/opennebula/one_driver.c
index ced9a38..199fca3 100644
--- a/src/opennebula/one_driver.c
+++ b/src/opennebula/one_driver.c
@@ -751,6 +751,8 @@ static virDriver oneDriver = {
NULL, /* domainRestore */
NULL, /* domainCoreDump */
NULL, /* domainSetVcpus */
+ NULL, /* domainSetVcpusFlags */
+ NULL, /* domainGetVcpusFlags */
NULL, /* domainPinVcpu */
NULL, /* domainGetVcpus */
NULL, /* domainGetMaxVcpus */
diff --git a/src/openvz/openvz_driver.c b/src/openvz/openvz_driver.c
index 92cf4a1..9d19aeb 100644
--- a/src/openvz/openvz_driver.c
+++ b/src/openvz/openvz_driver.c
@@ -1590,6 +1590,8 @@ static virDriver openvzDriver = {
NULL, /* domainRestore */
NULL, /* domainCoreDump */
openvzDomainSetVcpus, /* domainSetVcpus */
+ NULL, /* domainSetVcpusFlags */
+ NULL, /* domainGetVcpusFlags */
NULL, /* domainPinVcpu */
NULL, /* domainGetVcpus */
openvzDomainGetMaxVcpus, /* domainGetMaxVcpus */
diff --git a/src/phyp/phyp_driver.c b/src/phyp/phyp_driver.c
index e63d8d9..6e0a5e9 100644
--- a/src/phyp/phyp_driver.c
+++ b/src/phyp/phyp_driver.c
@@ -3941,6 +3941,8 @@ static virDriver phypDriver = {
NULL, /* domainRestore */
NULL, /* domainCoreDump */
phypDomainSetCPU, /* domainSetVcpus */
+ NULL, /* domainSetVcpusFlags */
+ NULL, /* domainGetVcpusFlags */
NULL, /* domainPinVcpu */
NULL, /* domainGetVcpus */
phypGetLparCPUMAX, /* domainGetMaxVcpus */
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index abd8e9d..3d17e04 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -12938,6 +12938,8 @@ static virDriver qemuDriver = {
qemudDomainRestore, /* domainRestore */
qemudDomainCoreDump, /* domainCoreDump */
qemudDomainSetVcpus, /* domainSetVcpus */
+ NULL, /* domainSetVcpusFlags */
+ NULL, /* domainGetVcpusFlags */
qemudDomainPinVcpu, /* domainPinVcpu */
qemudDomainGetVcpus, /* domainGetVcpus */
qemudDomainGetMaxVcpus, /* domainGetMaxVcpus */
diff --git a/src/remote/remote_driver.c b/src/remote/remote_driver.c
index 0b10406..1a687ad 100644
--- a/src/remote/remote_driver.c
+++ b/src/remote/remote_driver.c
@@ -10468,6 +10468,8 @@ static virDriver remote_driver = {
remoteDomainRestore, /* domainRestore */
remoteDomainCoreDump, /* domainCoreDump */
remoteDomainSetVcpus, /* domainSetVcpus */
+ NULL, /* domainSetVcpusFlags */
+ NULL, /* domainGetVcpusFlags */
remoteDomainPinVcpu, /* domainPinVcpu */
remoteDomainGetVcpus, /* domainGetVcpus */
remoteDomainGetMaxVcpus, /* domainGetMaxVcpus */
diff --git a/src/test/test_driver.c b/src/test/test_driver.c
index 7d4d119..6a00558 100644
--- a/src/test/test_driver.c
+++ b/src/test/test_driver.c
@@ -5260,6 +5260,8 @@ static virDriver testDriver = {
testDomainRestore, /* domainRestore */
testDomainCoreDump, /* domainCoreDump */
testSetVcpus, /* domainSetVcpus */
+ NULL, /* domainSetVcpusFlags */
+ NULL, /* domainGetVcpusFlags */
testDomainPinVcpu, /* domainPinVcpu */
testDomainGetVcpus, /* domainGetVcpus */
testDomainGetMaxVcpus, /* domainGetMaxVcpus */
diff --git a/src/uml/uml_driver.c b/src/uml/uml_driver.c
index 3dcd321..5161012 100644
--- a/src/uml/uml_driver.c
+++ b/src/uml/uml_driver.c
@@ -2129,6 +2129,8 @@ static virDriver umlDriver = {
NULL, /* domainRestore */
NULL, /* domainCoreDump */
NULL, /* domainSetVcpus */
+ NULL, /* domainSetVcpusFlags */
+ NULL, /* domainGetVcpusFlags */
NULL, /* domainPinVcpu */
NULL, /* domainGetVcpus */
NULL, /* domainGetMaxVcpus */
diff --git a/src/vbox/vbox_tmpl.c b/src/vbox/vbox_tmpl.c
index 7e7d8e4..cb9193a 100644
--- a/src/vbox/vbox_tmpl.c
+++ b/src/vbox/vbox_tmpl.c
@@ -8267,6 +8267,8 @@ virDriver NAME(Driver) = {
NULL, /* domainRestore */
NULL, /* domainCoreDump */
vboxDomainSetVcpus, /* domainSetVcpus */
+ NULL, /* domainSetVcpusFlags */
+ NULL, /* domainGetVcpusFlags */
NULL, /* domainPinVcpu */
NULL, /* domainGetVcpus */
vboxDomainGetMaxVcpus, /* domainGetMaxVcpus */
diff --git a/src/xen/xen_driver.c b/src/xen/xen_driver.c
index c2a4de3..7d67ced 100644
--- a/src/xen/xen_driver.c
+++ b/src/xen/xen_driver.c
@@ -1951,6 +1951,8 @@ static virDriver xenUnifiedDriver = {
xenUnifiedDomainRestore, /* domainRestore */
xenUnifiedDomainCoreDump, /* domainCoreDump */
xenUnifiedDomainSetVcpus, /* domainSetVcpus */
+ NULL, /* domainSetVcpusFlags */
+ NULL, /* domainGetVcpusFlags */
xenUnifiedDomainPinVcpu, /* domainPinVcpu */
xenUnifiedDomainGetVcpus, /* domainGetVcpus */
xenUnifiedDomainGetMaxVcpus, /* domainGetMaxVcpus */
diff --git a/src/xenapi/xenapi_driver.c b/src/xenapi/xenapi_driver.c
index e62a139..753169c 100644
--- a/src/xenapi/xenapi_driver.c
+++ b/src/xenapi/xenapi_driver.c
@@ -1754,6 +1754,8 @@ static virDriver xenapiDriver = {
NULL, /* domainRestore */
NULL, /* domainCoreDump */
xenapiDomainSetVcpus, /* domainSetVcpus */
+ NULL, /* domainSetVcpusFlags */
+ NULL, /* domainGetVcpusFlags */
xenapiDomainPinVcpu, /* domainPinVcpu */
xenapiDomainGetVcpus, /* domainGetVcpus */
xenapiDomainGetMaxVcpus, /* domainGetMaxVcpus */
--
1.7.2.3

View File: docs/api_extension/0003-Step-3-of-8-Implement-the-public-API.patch

@ -1,119 +0,0 @@
From fc585594a207dfb9149e7d3d01c9eb1c79b6d52d Mon Sep 17 00:00:00 2001
From: David Allan <dallan@redhat.com>
Date: Tue, 19 May 2009 16:22:23 -0400
Subject: [PATCH] Step 3 of 8 Implement the public API
---
src/libvirt.c | 97 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 97 insertions(+), 0 deletions(-)
diff --git a/src/libvirt.c b/src/libvirt.c
index f3d4484..ded18a7 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -7509,6 +7509,103 @@ error:
}
+/**
+ * virNodeDeviceCreateXML:
+ * @conn: pointer to the hypervisor connection
+ * @xmlDesc: string containing an XML description of the device to be created
+ * @flags: callers should always pass 0
+ *
+ * Create a new device on the VM host machine, for example, virtual
+ * HBAs created using vport_create.
+ *
+ * Returns a node device object if successful, NULL in case of failure
+ */
+virNodeDevicePtr
+virNodeDeviceCreateXML(virConnectPtr conn,
+ const char *xmlDesc,
+ unsigned int flags)
+{
+ VIR_DEBUG("conn=%p, xmlDesc=%s, flags=%d", conn, xmlDesc, flags);
+
+ virResetLastError();
+
+ if (!VIR_IS_CONNECT(conn)) {
+ virLibConnError(NULL, VIR_ERR_INVALID_CONN, __FUNCTION__);
+ return NULL;
+ }
+
+ if (conn->flags & VIR_CONNECT_RO) {
+ virLibConnError(conn, VIR_ERR_OPERATION_DENIED, __FUNCTION__);
+ goto error;
+ }
+
+ if (xmlDesc == NULL) {
+ virLibConnError(conn, VIR_ERR_INVALID_ARG, __FUNCTION__);
+ goto error;
+ }
+
+ if (conn->deviceMonitor &&
+ conn->deviceMonitor->deviceCreateXML) {
+ virNodeDevicePtr dev = conn->deviceMonitor->deviceCreateXML(conn, xmlDesc, flags);
+ if (dev == NULL)
+ goto error;
+ return dev;
+ }
+
+ virLibConnError (conn, VIR_ERR_NO_SUPPORT, __FUNCTION__);
+
+error:
+ /* Copy to connection error object for back compatability */
+ virSetConnError(conn);
+ return NULL;
+}
+
+
+/**
+ * virNodeDeviceDestroy:
+ * @dev: a device object
+ *
+ * Destroy the device object. The virtual device is removed from the host operating system.
+ * This function may require privileged access
+ *
+ * Returns 0 in case of success and -1 in case of failure.
+ */
+int
+virNodeDeviceDestroy(virNodeDevicePtr dev)
+{
+ DEBUG("dev=%p", dev);
+
+ virResetLastError();
+
+ if (!VIR_IS_CONNECTED_NODE_DEVICE(dev)) {
+ virLibNodeDeviceError(NULL, VIR_ERR_INVALID_NODE_DEVICE, __FUNCTION__);
+ return (-1);
+ }
+
+ if (dev->conn->flags & VIR_CONNECT_RO) {
+ virLibConnError(dev->conn, VIR_ERR_OPERATION_DENIED, __FUNCTION__);
+ goto error;
+ }
+
+ if (dev->conn->deviceMonitor &&
+ dev->conn->deviceMonitor->deviceDestroy) {
+ int retval = dev->conn->deviceMonitor->deviceDestroy(dev);
+ if (retval < 0) {
+ goto error;
+ }
+
+ return 0;
+ }
+
+ virLibConnError (dev->conn, VIR_ERR_NO_SUPPORT, __FUNCTION__);
+
+error:
+ /* Copy to connection error object for back compatability */
+ virSetConnError(dev->conn);
+ return -1;
+}
+
+
/*
* Domain Event Notification
*/
--
1.6.0.6

View File: docs/api_extension/0004-Step-4-of-15-implement-the-public-APIs.patch

@ -0,0 +1,188 @@
From 9d2c60799271d605f82dfd4bfa6ed7d14ad87e26 Mon Sep 17 00:00:00 2001
From: Eric Blake <eblake@redhat.com>
Date: Mon, 27 Sep 2010 09:37:22 -0600
Subject: [PATCH 04/15] vcpu: implement the public APIs
Factors common checks (such as nonzero vcpu count) up front, but
drivers will still need to do additional flag checks.
* src/libvirt.c (virDomainSetVcpusFlags, virDomainGetVcpusFlags):
New functions.
(virDomainSetVcpus, virDomainGetMaxVcpus): Refer to new API.
---
src/libvirt.c | 140 ++++++++++++++++++++++++++++++++++++++++++++++++++++++---
1 files changed, 134 insertions(+), 6 deletions(-)
diff --git a/src/libvirt.c b/src/libvirt.c
index 629d97b..1b39210 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -5192,7 +5192,9 @@ error:
* This function requires privileged access to the hypervisor.
*
* This command only changes the runtime configuration of the domain,
- * so can only be called on an active domain.
+ * so can only be called on an active domain. It is hypervisor-dependent
+ * whether it also affects persistent configuration; for more control,
+ * use virDomainSetVcpusFlags().
*
* Returns 0 in case of success, -1 in case of failure.
*/
@@ -5237,13 +5239,139 @@ error:
}
/**
+ * virDomainSetVcpusFlags:
+ * @domain: pointer to domain object, or NULL for Domain0
+ * @nvcpus: the new number of virtual CPUs for this domain, must be at least 1
+ * @flags: an OR'ed set of virDomainVcpuFlags
+ *
+ * Dynamically change the number of virtual CPUs used by the domain.
+ * Note that this call may fail if the underlying virtualization hypervisor
+ * does not support it or if growing the number is arbitrary limited.
+ * This function requires privileged access to the hypervisor.
+ *
+ * @flags must include VIR_DOMAIN_VCPU_LIVE to affect a running
+ * domain (which may fail if domain is not active), or
+ * VIR_DOMAIN_VCPU_CONFIG to affect the next boot via the XML
+ * description of the domain. Both flags may be set.
+ *
+ * If @flags includes VIR_DOMAIN_VCPU_MAXIMUM, then
+ * VIR_DOMAIN_VCPU_LIVE must be clear, and only the maximum virtual
+ * CPU limit is altered; generally, this value must be less than or
+ * equal to virConnectGetMaxVcpus(). Otherwise, this call affects the
+ * current virtual CPU limit, which must be less than or equal to the
+ * maximum limit.
+ *
+ * Returns 0 in case of success, -1 in case of failure.
+ */
+
+int
+virDomainSetVcpusFlags(virDomainPtr domain, unsigned int nvcpus,
+ unsigned int flags)
+{
+ virConnectPtr conn;
+ DEBUG("domain=%p, nvcpus=%u, flags=%u", domain, nvcpus, flags);
+
+ virResetLastError();
+
+ if (!VIR_IS_CONNECTED_DOMAIN(domain)) {
+ virLibDomainError(NULL, VIR_ERR_INVALID_DOMAIN, __FUNCTION__);
+ virDispatchError(NULL);
+ return (-1);
+ }
+ if (domain->conn->flags & VIR_CONNECT_RO) {
+ virLibDomainError(domain, VIR_ERR_OPERATION_DENIED, __FUNCTION__);
+ goto error;
+ }
+
+ /* Perform some argument validation common to all implementations. */
+ if (nvcpus < 1 || (unsigned short) nvcpus != nvcpus ||
+ (flags & (VIR_DOMAIN_VCPU_LIVE | VIR_DOMAIN_VCPU_CONFIG)) == 0) {
+ virLibDomainError(domain, VIR_ERR_INVALID_ARG, __FUNCTION__);
+ goto error;
+ }
+ conn = domain->conn;
+
+ if (conn->driver->domainSetVcpusFlags) {
+ int ret;
+ ret = conn->driver->domainSetVcpusFlags (domain, nvcpus, flags);
+ if (ret < 0)
+ goto error;
+ return ret;
+ }
+
+ virLibConnError (conn, VIR_ERR_NO_SUPPORT, __FUNCTION__);
+
+error:
+ virDispatchError(domain->conn);
+ return -1;
+}
+
+/**
+ * virDomainGetVcpusFlags:
+ * @domain: pointer to domain object, or NULL for Domain0
+ * @flags: an OR'ed set of virDomainVcpuFlags
+ *
+ * Query the number of virtual CPUs used by the domain. Note that
+ * this call may fail if the underlying virtualization hypervisor does
+ * not support it. This function requires privileged access to the
+ * hypervisor.
+ *
+ * @flags must include either VIR_DOMAIN_VCPU_ACTIVE to query a
+ * running domain (which will fail if domain is not active), or
+ * VIR_DOMAIN_VCPU_PERSISTENT to query the XML description of the
+ * domain. It is an error to set both flags.
+ *
+ * If @flags includes VIR_DOMAIN_VCPU_MAXIMUM, then the maximum
+ * virtual CPU limit is queried. Otherwise, this call queries the
+ * current virtual CPU limit.
+ *
+ * Returns 0 in case of success, -1 in case of failure.
+ */
+
+int
+virDomainGetVcpusFlags(virDomainPtr domain, unsigned int flags)
+{
+ virConnectPtr conn;
+ DEBUG("domain=%p, flags=%u", domain, flags);
+
+ virResetLastError();
+
+ if (!VIR_IS_CONNECTED_DOMAIN(domain)) {
+ virLibDomainError(NULL, VIR_ERR_INVALID_DOMAIN, __FUNCTION__);
+ virDispatchError(NULL);
+ return (-1);
+ }
+
+ /* Exactly one of these two flags should be set. */
+ if (!(flags & VIR_DOMAIN_VCPU_LIVE) == !(flags & VIR_DOMAIN_VCPU_CONFIG)) {
+ virLibDomainError(domain, VIR_ERR_INVALID_ARG, __FUNCTION__);
+ goto error;
+ }
+ conn = domain->conn;
+
+ if (conn->driver->domainGetVcpusFlags) {
+ int ret;
+ ret = conn->driver->domainGetVcpusFlags (domain, flags);
+ if (ret < 0)
+ goto error;
+ return ret;
+ }
+
+ virLibConnError (conn, VIR_ERR_NO_SUPPORT, __FUNCTION__);
+
+error:
+ virDispatchError(domain->conn);
+ return -1;
+}
+
+/**
* virDomainPinVcpu:
* @domain: pointer to domain object, or NULL for Domain0
* @vcpu: virtual CPU number
* @cpumap: pointer to a bit map of real CPUs (in 8-bit bytes) (IN)
- * Each bit set to 1 means that corresponding CPU is usable.
- * Bytes are stored in little-endian order: CPU0-7, 8-15...
- * In each byte, lowest CPU number is least significant bit.
+ * Each bit set to 1 means that corresponding CPU is usable.
+ * Bytes are stored in little-endian order: CPU0-7, 8-15...
+ * In each byte, lowest CPU number is least significant bit.
* @maplen: number of bytes in cpumap, from 1 up to size of CPU map in
* underlying virtualization system (Xen...).
* If maplen < size, missing bytes are set to zero.
@@ -5371,9 +5499,9 @@ error:
*
* Provides the maximum number of virtual CPUs supported for
* the guest VM. If the guest is inactive, this is basically
- * the same as virConnectGetMaxVcpus. If the guest is running
+ * the same as virConnectGetMaxVcpus(). If the guest is running
* this will reflect the maximum number of virtual CPUs the
- * guest was booted with.
+ * guest was booted with. For more details, see virDomainGetVcpusFlags().
*
* Returns the maximum of virtual CPU or -1 in case of error.
*/
--
1.7.2.3

View File: docs/api_extension/0004-Step-4-of-8-Define-the-wire-protocol-format.patch

@ -1,47 +0,0 @@
From bce8f1243b0454c0d70e3db832a039d22faab09a Mon Sep 17 00:00:00 2001
From: David Allan <dallan@redhat.com>
Date: Wed, 20 May 2009 13:58:58 -0400
Subject: [PATCH] Step 4 of 8 Define the wire protocol format
---
qemud/remote_protocol.x | 18 +++++++++++++++++-
1 files changed, 17 insertions(+), 1 deletions(-)
diff --git a/qemud/remote_protocol.x b/qemud/remote_protocol.x
index 2d8e6a2..2c79949 100644
--- a/qemud/remote_protocol.x
+++ b/qemud/remote_protocol.x
@@ -1109,6 +1109,19 @@ struct remote_node_device_reset_args {
remote_nonnull_string name;
};
+struct remote_node_device_create_xml_args {
+ remote_nonnull_string xml_desc;
+ int flags;
+};
+
+struct remote_node_device_create_xml_ret {
+ remote_nonnull_node_device dev;
+};
+
+struct remote_node_device_destroy_args {
+ remote_nonnull_string name;
+};
+
/**
* Events Register/Deregister:
@@ -1270,7 +1283,10 @@ enum remote_procedure {
REMOTE_PROC_NODE_DEVICE_RESET = 120,
REMOTE_PROC_DOMAIN_GET_SECURITY_LABEL = 121,
- REMOTE_PROC_NODE_GET_SECURITY_MODEL = 122
+ REMOTE_PROC_NODE_GET_SECURITY_MODEL = 122,
+
+ REMOTE_PROC_NODE_DEVICE_CREATE_XML = 123,
+ REMOTE_PROC_NODE_DEVICE_DESTROY = 124
};
/* Custom RPC structure. */
--
1.6.0.6

View File: docs/api_extension/0005-Step-5-of-15-implement-the-remote-protocol.patch

@ -0,0 +1,421 @@
From eb826444f90c2563dadf148630b0cd6a9b41ba1e Mon Sep 17 00:00:00 2001
From: Eric Blake <eblake@redhat.com>
Date: Mon, 27 Sep 2010 10:10:06 -0600
Subject: [PATCH 05/15] vcpu: implement the remote protocol
Done by editing the first three files, then running
'make -C src rpcgen', then editing src/remote_protocol-structs
to match.
* daemon/remote.c (remoteDispatchDomainSetVcpusFlags)
(remoteDispatchDomainGetVcpusFlags): New functions.
* src/remote/remote_driver.c (remoteDomainSetVcpusFlags)
(remoteDomainGetVcpusFlags, remote_driver): Client side
serialization.
* src/remote/remote_protocol.x
(remote_domain_set_vcpus_flags_args)
(remote_domain_get_vcpus_flags_args)
(remote_domain_get_vcpus_flags_ret)
(REMOTE_PROC_DOMAIN_SET_VCPUS_FLAGS)
(REMOTE_PROC_DOMAIN_GET_VCPUS_FLAGS): Define wire format.
* daemon/remote_dispatch_args.h: Regenerate.
* daemon/remote_dispatch_prototypes.h: Likewise.
* daemon/remote_dispatch_table.h: Likewise.
* src/remote/remote_protocol.c: Likewise.
* src/remote/remote_protocol.h: Likewise.
* src/remote_protocol-structs: Likewise.
---
daemon/remote.c | 53 ++++++++++++++++++++++++++++++++
daemon/remote_dispatch_args.h | 2 +
daemon/remote_dispatch_prototypes.h | 16 ++++++++++
daemon/remote_dispatch_ret.h | 1 +
daemon/remote_dispatch_table.h | 10 ++++++
src/remote/remote_driver.c | 57 +++++++++++++++++++++++++++++++++-
src/remote/remote_protocol.c | 33 ++++++++++++++++++++
src/remote/remote_protocol.h | 26 ++++++++++++++++
src/remote/remote_protocol.x | 19 +++++++++++-
src/remote_protocol-structs | 12 +++++++
10 files changed, 226 insertions(+), 3 deletions(-)
diff --git a/daemon/remote.c b/daemon/remote.c
index 7a96e29..323f00c 100644
--- a/daemon/remote.c
+++ b/daemon/remote.c
@@ -1751,6 +1751,33 @@ oom:
}
static int
+remoteDispatchDomainGetVcpusFlags (struct qemud_server *server ATTRIBUTE_UNUSED,
+ struct qemud_client *client ATTRIBUTE_UNUSED,
+ virConnectPtr conn,
+ remote_message_header *hdr ATTRIBUTE_UNUSED,
+ remote_error *rerr,
+ remote_domain_get_vcpus_flags_args *args,
+ remote_domain_get_vcpus_flags_ret *ret)
+{
+ virDomainPtr dom;
+
+ dom = get_nonnull_domain (conn, args->dom);
+ if (dom == NULL) {
+ remoteDispatchConnError(rerr, conn);
+ return -1;
+ }
+
+ ret->num = virDomainGetVcpusFlags (dom, args->flags);
+ if (ret->num == -1) {
+ virDomainFree(dom);
+ remoteDispatchConnError(rerr, conn);
+ return -1;
+ }
+ virDomainFree(dom);
+ return 0;
+}
+
+static int
remoteDispatchDomainMigratePrepare (struct qemud_server *server ATTRIBUTE_UNUSED,
struct qemud_client *client ATTRIBUTE_UNUSED,
virConnectPtr conn,
@@ -2568,6 +2595,32 @@ remoteDispatchDomainSetVcpus (struct qemud_server *server ATTRIBUTE_UNUSED,
}
static int
+remoteDispatchDomainSetVcpusFlags (struct qemud_server *server ATTRIBUTE_UNUSED,
+ struct qemud_client *client ATTRIBUTE_UNUSED,
+ virConnectPtr conn,
+ remote_message_header *hdr ATTRIBUTE_UNUSED,
+ remote_error *rerr,
+ remote_domain_set_vcpus_flags_args *args,
+ void *ret ATTRIBUTE_UNUSED)
+{
+ virDomainPtr dom;
+
+ dom = get_nonnull_domain (conn, args->dom);
+ if (dom == NULL) {
+ remoteDispatchConnError(rerr, conn);
+ return -1;
+ }
+
+ if (virDomainSetVcpusFlags (dom, args->nvcpus, args->flags) == -1) {
+ virDomainFree(dom);
+ remoteDispatchConnError(rerr, conn);
+ return -1;
+ }
+ virDomainFree(dom);
+ return 0;
+}
+
+static int
remoteDispatchDomainShutdown (struct qemud_server *server ATTRIBUTE_UNUSED,
struct qemud_client *client ATTRIBUTE_UNUSED,
virConnectPtr conn,
diff --git a/daemon/remote_dispatch_args.h b/daemon/remote_dispatch_args.h
index d8528b6..9583e9c 100644
--- a/daemon/remote_dispatch_args.h
+++ b/daemon/remote_dispatch_args.h
@@ -167,3 +167,5 @@
remote_domain_create_with_flags_args val_remote_domain_create_with_flags_args;
remote_domain_set_memory_parameters_args val_remote_domain_set_memory_parameters_args;
remote_domain_get_memory_parameters_args val_remote_domain_get_memory_parameters_args;
+ remote_domain_set_vcpus_flags_args val_remote_domain_set_vcpus_flags_args;
+ remote_domain_get_vcpus_flags_args val_remote_domain_get_vcpus_flags_args;
diff --git a/daemon/remote_dispatch_prototypes.h b/daemon/remote_dispatch_prototypes.h
index b674bb4..6b35851 100644
--- a/daemon/remote_dispatch_prototypes.h
+++ b/daemon/remote_dispatch_prototypes.h
@@ -306,6 +306,14 @@ static int remoteDispatchDomainGetVcpus(
remote_error *err,
remote_domain_get_vcpus_args *args,
remote_domain_get_vcpus_ret *ret);
+static int remoteDispatchDomainGetVcpusFlags(
+ struct qemud_server *server,
+ struct qemud_client *client,
+ virConnectPtr conn,
+ remote_message_header *hdr,
+ remote_error *err,
+ remote_domain_get_vcpus_flags_args *args,
+ remote_domain_get_vcpus_flags_ret *ret);
static int remoteDispatchDomainHasCurrentSnapshot(
struct qemud_server *server,
struct qemud_client *client,
@@ -554,6 +562,14 @@ static int remoteDispatchDomainSetVcpus(
remote_error *err,
remote_domain_set_vcpus_args *args,
void *ret);
+static int remoteDispatchDomainSetVcpusFlags(
+ struct qemud_server *server,
+ struct qemud_client *client,
+ virConnectPtr conn,
+ remote_message_header *hdr,
+ remote_error *err,
+ remote_domain_set_vcpus_flags_args *args,
+ void *ret);
static int remoteDispatchDomainShutdown(
struct qemud_server *server,
struct qemud_client *client,
diff --git a/daemon/remote_dispatch_ret.h b/daemon/remote_dispatch_ret.h
index 17c9bca..3723b00 100644
--- a/daemon/remote_dispatch_ret.h
+++ b/daemon/remote_dispatch_ret.h
@@ -136,3 +136,4 @@
remote_domain_get_block_info_ret val_remote_domain_get_block_info_ret;
remote_domain_create_with_flags_ret val_remote_domain_create_with_flags_ret;
remote_domain_get_memory_parameters_ret val_remote_domain_get_memory_parameters_ret;
+ remote_domain_get_vcpus_flags_ret val_remote_domain_get_vcpus_flags_ret;
diff --git a/daemon/remote_dispatch_table.h b/daemon/remote_dispatch_table.h
index 47d95eb..dd2adc7 100644
--- a/daemon/remote_dispatch_table.h
+++ b/daemon/remote_dispatch_table.h
@@ -997,3 +997,13 @@
.args_filter = (xdrproc_t) xdr_remote_domain_get_memory_parameters_args,
.ret_filter = (xdrproc_t) xdr_remote_domain_get_memory_parameters_ret,
},
+{ /* DomainSetVcpusFlags => 199 */
+ .fn = (dispatch_fn) remoteDispatchDomainSetVcpusFlags,
+ .args_filter = (xdrproc_t) xdr_remote_domain_set_vcpus_flags_args,
+ .ret_filter = (xdrproc_t) xdr_void,
+},
+{ /* DomainGetVcpusFlags => 200 */
+ .fn = (dispatch_fn) remoteDispatchDomainGetVcpusFlags,
+ .args_filter = (xdrproc_t) xdr_remote_domain_get_vcpus_flags_args,
+ .ret_filter = (xdrproc_t) xdr_remote_domain_get_vcpus_flags_ret,
+},
diff --git a/src/remote/remote_driver.c b/src/remote/remote_driver.c
index 1a687ad..37c37ef 100644
--- a/src/remote/remote_driver.c
+++ b/src/remote/remote_driver.c
@@ -2580,6 +2580,59 @@ done:
}
static int
+remoteDomainSetVcpusFlags (virDomainPtr domain, unsigned int nvcpus,
+ unsigned int flags)
+{
+ int rv = -1;
+ remote_domain_set_vcpus_flags_args args;
+ struct private_data *priv = domain->conn->privateData;
+
+ remoteDriverLock(priv);
+
+ make_nonnull_domain (&args.dom, domain);
+ args.nvcpus = nvcpus;
+ args.flags = flags;
+
+ if (call (domain->conn, priv, 0, REMOTE_PROC_DOMAIN_SET_VCPUS_FLAGS,
+ (xdrproc_t) xdr_remote_domain_set_vcpus_flags_args,
+ (char *) &args,
+ (xdrproc_t) xdr_void, (char *) NULL) == -1)
+ goto done;
+
+ rv = 0;
+
+done:
+ remoteDriverUnlock(priv);
+ return rv;
+}
+
+static int
+remoteDomainGetVcpusFlags (virDomainPtr domain, unsigned int flags)
+{
+ int rv = -1;
+ remote_domain_get_vcpus_flags_args args;
+ remote_domain_get_vcpus_flags_ret ret;
+ struct private_data *priv = domain->conn->privateData;
+
+ remoteDriverLock(priv);
+
+ make_nonnull_domain (&args.dom, domain);
+ args.flags = flags;
+
+ memset (&ret, 0, sizeof ret);
+ if (call (domain->conn, priv, 0, REMOTE_PROC_DOMAIN_GET_VCPUS_FLAGS,
+ (xdrproc_t) xdr_remote_domain_get_vcpus_flags_args, (char *) &args,
+ (xdrproc_t) xdr_remote_domain_get_vcpus_flags_ret, (char *) &ret) == -1)
+ goto done;
+
+ rv = ret.num;
+
+done:
+ remoteDriverUnlock(priv);
+ return rv;
+}
+
+static int
remoteDomainPinVcpu (virDomainPtr domain,
unsigned int vcpu,
unsigned char *cpumap,
@@ -10468,8 +10521,8 @@ static virDriver remote_driver = {
remoteDomainRestore, /* domainRestore */
remoteDomainCoreDump, /* domainCoreDump */
remoteDomainSetVcpus, /* domainSetVcpus */
- NULL, /* domainSetVcpusFlags */
- NULL, /* domainGetVcpusFlags */
+ remoteDomainSetVcpusFlags, /* domainSetVcpusFlags */
+ remoteDomainGetVcpusFlags, /* domainGetVcpusFlags */
remoteDomainPinVcpu, /* domainPinVcpu */
remoteDomainGetVcpus, /* domainGetVcpus */
remoteDomainGetMaxVcpus, /* domainGetMaxVcpus */
diff --git a/src/remote/remote_protocol.c b/src/remote/remote_protocol.c
index 5c55713..38ea050 100644
--- a/src/remote/remote_protocol.c
+++ b/src/remote/remote_protocol.c
@@ -1355,6 +1355,39 @@ xdr_remote_domain_set_vcpus_args (XDR *xdrs, remote_domain_set_vcpus_args *objp)
}
bool_t
+xdr_remote_domain_set_vcpus_flags_args (XDR *xdrs, remote_domain_set_vcpus_flags_args *objp)
+{
+
+ if (!xdr_remote_nonnull_domain (xdrs, &objp->dom))
+ return FALSE;
+ if (!xdr_u_int (xdrs, &objp->nvcpus))
+ return FALSE;
+ if (!xdr_u_int (xdrs, &objp->flags))
+ return FALSE;
+ return TRUE;
+}
+
+bool_t
+xdr_remote_domain_get_vcpus_flags_args (XDR *xdrs, remote_domain_get_vcpus_flags_args *objp)
+{
+
+ if (!xdr_remote_nonnull_domain (xdrs, &objp->dom))
+ return FALSE;
+ if (!xdr_u_int (xdrs, &objp->flags))
+ return FALSE;
+ return TRUE;
+}
+
+bool_t
+xdr_remote_domain_get_vcpus_flags_ret (XDR *xdrs, remote_domain_get_vcpus_flags_ret *objp)
+{
+
+ if (!xdr_int (xdrs, &objp->num))
+ return FALSE;
+ return TRUE;
+}
+
+bool_t
xdr_remote_domain_pin_vcpu_args (XDR *xdrs, remote_domain_pin_vcpu_args *objp)
{
char **objp_cpp0 = (char **) (void *) &objp->cpumap.cpumap_val;
diff --git a/src/remote/remote_protocol.h b/src/remote/remote_protocol.h
index 756da11..d75e76c 100644
--- a/src/remote/remote_protocol.h
+++ b/src/remote/remote_protocol.h
@@ -750,6 +750,24 @@ struct remote_domain_set_vcpus_args {
};
typedef struct remote_domain_set_vcpus_args remote_domain_set_vcpus_args;
+struct remote_domain_set_vcpus_flags_args {
+ remote_nonnull_domain dom;
+ u_int nvcpus;
+ u_int flags;
+};
+typedef struct remote_domain_set_vcpus_flags_args remote_domain_set_vcpus_flags_args;
+
+struct remote_domain_get_vcpus_flags_args {
+ remote_nonnull_domain dom;
+ u_int flags;
+};
+typedef struct remote_domain_get_vcpus_flags_args remote_domain_get_vcpus_flags_args;
+
+struct remote_domain_get_vcpus_flags_ret {
+ int num;
+};
+typedef struct remote_domain_get_vcpus_flags_ret remote_domain_get_vcpus_flags_ret;
+
struct remote_domain_pin_vcpu_args {
remote_nonnull_domain dom;
int vcpu;
@@ -2281,6 +2299,8 @@ enum remote_procedure {
REMOTE_PROC_DOMAIN_CREATE_WITH_FLAGS = 196,
REMOTE_PROC_DOMAIN_SET_MEMORY_PARAMETERS = 197,
REMOTE_PROC_DOMAIN_GET_MEMORY_PARAMETERS = 198,
+ REMOTE_PROC_DOMAIN_SET_VCPUS_FLAGS = 199,
+ REMOTE_PROC_DOMAIN_GET_VCPUS_FLAGS = 200,
};
typedef enum remote_procedure remote_procedure;
@@ -2422,6 +2442,9 @@ extern bool_t xdr_remote_domain_define_xml_args (XDR *, remote_domain_define_xm
extern bool_t xdr_remote_domain_define_xml_ret (XDR *, remote_domain_define_xml_ret*);
extern bool_t xdr_remote_domain_undefine_args (XDR *, remote_domain_undefine_args*);
extern bool_t xdr_remote_domain_set_vcpus_args (XDR *, remote_domain_set_vcpus_args*);
+extern bool_t xdr_remote_domain_set_vcpus_flags_args (XDR *, remote_domain_set_vcpus_flags_args*);
+extern bool_t xdr_remote_domain_get_vcpus_flags_args (XDR *, remote_domain_get_vcpus_flags_args*);
+extern bool_t xdr_remote_domain_get_vcpus_flags_ret (XDR *, remote_domain_get_vcpus_flags_ret*);
extern bool_t xdr_remote_domain_pin_vcpu_args (XDR *, remote_domain_pin_vcpu_args*);
extern bool_t xdr_remote_domain_get_vcpus_args (XDR *, remote_domain_get_vcpus_args*);
extern bool_t xdr_remote_domain_get_vcpus_ret (XDR *, remote_domain_get_vcpus_ret*);
@@ -2762,6 +2785,9 @@ extern bool_t xdr_remote_domain_define_xml_args ();
extern bool_t xdr_remote_domain_define_xml_ret ();
extern bool_t xdr_remote_domain_undefine_args ();
extern bool_t xdr_remote_domain_set_vcpus_args ();
+extern bool_t xdr_remote_domain_set_vcpus_flags_args ();
+extern bool_t xdr_remote_domain_get_vcpus_flags_args ();
+extern bool_t xdr_remote_domain_get_vcpus_flags_ret ();
extern bool_t xdr_remote_domain_pin_vcpu_args ();
extern bool_t xdr_remote_domain_get_vcpus_args ();
extern bool_t xdr_remote_domain_get_vcpus_ret ();
diff --git a/src/remote/remote_protocol.x b/src/remote/remote_protocol.x
index e80fb5f..d57e6d0 100644
--- a/src/remote/remote_protocol.x
+++ b/src/remote/remote_protocol.x
@@ -768,6 +768,21 @@ struct remote_domain_set_vcpus_args {
int nvcpus;
};
+struct remote_domain_set_vcpus_flags_args {
+ remote_nonnull_domain dom;
+ unsigned int nvcpus;
+ unsigned int flags;
+};
+
+struct remote_domain_get_vcpus_flags_args {
+ remote_nonnull_domain dom;
+ unsigned int flags;
+};
+
+struct remote_domain_get_vcpus_flags_ret {
+ int num;
+};
+
struct remote_domain_pin_vcpu_args {
remote_nonnull_domain dom;
int vcpu;
@@ -2062,7 +2077,9 @@ enum remote_procedure {
REMOTE_PROC_DOMAIN_EVENT_IO_ERROR_REASON = 195,
REMOTE_PROC_DOMAIN_CREATE_WITH_FLAGS = 196,
REMOTE_PROC_DOMAIN_SET_MEMORY_PARAMETERS = 197,
- REMOTE_PROC_DOMAIN_GET_MEMORY_PARAMETERS = 198
+ REMOTE_PROC_DOMAIN_GET_MEMORY_PARAMETERS = 198,
+ REMOTE_PROC_DOMAIN_SET_VCPUS_FLAGS = 199,
+ REMOTE_PROC_DOMAIN_GET_VCPUS_FLAGS = 200
/*
* Notice how the entries are grouped in sets of 10 ?
diff --git a/src/remote_protocol-structs b/src/remote_protocol-structs
index 838423e..d505886 100644
--- a/src/remote_protocol-structs
+++ b/src/remote_protocol-structs
@@ -461,6 +461,18 @@ struct remote_domain_set_vcpus_args {
remote_nonnull_domain dom;
int nvcpus;
};
+struct remote_domain_set_vcpus_flags_args {
+ remote_nonnull_domain dom;
+ u_int nvcpus;
+ u_int flags;
+};
+struct remote_domain_get_vcpus_flags_args {
+ remote_nonnull_domain dom;
+ u_int flags;
+};
+struct remote_domain_get_vcpus_flags_ret {
+ int num;
+};
struct remote_domain_pin_vcpu_args {
remote_nonnull_domain dom;
int vcpu;
--
1.7.2.3

View File

@ -1,84 +0,0 @@
From ff272552c297966ace3492aefe91fc830152251a Mon Sep 17 00:00:00 2001
From: David Allan <dallan@redhat.com>
Date: Tue, 19 May 2009 16:26:12 -0400
Subject: [PATCH] Step 5 of 8 Implement the RPC client
---
src/remote_internal.c | 55 +++++++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 55 insertions(+), 0 deletions(-)
diff --git a/src/remote_internal.c b/src/remote_internal.c
index 4b3afb0..e665ef8 100644
--- a/src/remote_internal.c
+++ b/src/remote_internal.c
@@ -4978,6 +4978,59 @@ done:
}
+static virNodeDevicePtr
+remoteNodeDeviceCreateXML(virConnectPtr conn,
+ const char *xmlDesc,
+ unsigned int flags)
+{
+ remote_node_device_create_xml_args args;
+ remote_node_device_create_xml_ret ret;
+ virNodeDevicePtr dev = NULL;
+ struct private_data *priv = conn->privateData;
+
+ remoteDriverLock(priv);
+
+ memset(&ret, 0, sizeof ret);
+ args.xml_desc = (char *)xmlDesc;
+ args.flags = flags;
+
+ if (call(conn, priv, 0, REMOTE_PROC_NODE_DEVICE_CREATE_XML,
+ (xdrproc_t) xdr_remote_node_device_create_xml_args, (char *) &args,
+ (xdrproc_t) xdr_remote_node_device_create_xml_ret, (char *) &ret) == -1)
+ goto done;
+
+ dev = get_nonnull_node_device(conn, ret.dev);
+ xdr_free ((xdrproc_t) xdr_remote_node_device_create_xml_ret, (char *) &ret);
+
+done:
+ remoteDriverUnlock(priv);
+ return dev;
+}
+
+static int
+remoteNodeDeviceDestroy(virNodeDevicePtr dev)
+{
+ int rv = -1;
+ remote_node_device_destroy_args args;
+ struct private_data *priv = dev->conn->privateData;
+
+ remoteDriverLock(priv);
+
+ args.name = dev->name;
+
+ if (call(dev->conn, priv, 0, REMOTE_PROC_NODE_DEVICE_DESTROY,
+ (xdrproc_t) xdr_remote_node_device_destroy_args, (char *) &args,
+ (xdrproc_t) xdr_void, (char *) NULL) == -1)
+ goto done;
+
+ rv = 0;
+
+done:
+ remoteDriverUnlock(priv);
+ return rv;
+}
+
+
/*----------------------------------------------------------------------*/
static int
@@ -6982,6 +7035,8 @@ static virDeviceMonitor dev_monitor = {
.deviceGetParent = remoteNodeDeviceGetParent,
.deviceNumOfCaps = remoteNodeDeviceNumOfCaps,
.deviceListCaps = remoteNodeDeviceListCaps,
+ .deviceCreateXML = remoteNodeDeviceCreateXML,
+ .deviceDestroy = remoteNodeDeviceDestroy
};
--
1.6.0.6

View File

@ -0,0 +1,735 @@
From 50c51f13e2af04afac46e181c4ed62581545a488 Mon Sep 17 00:00:00 2001
From: Eric Blake <eblake@redhat.com>
Date: Mon, 27 Sep 2010 16:37:53 -0600
Subject: [PATCH 06/15] vcpu: make old API trivially wrap to new API
Note - this wrapping is completely mechanical; the old API will
function identically, since the new API validates that the exact
same flags are provided by the old API. On a per-driver basis,
it may make sense to have the old API pass a different set of flags,
but that should be done in the per-driver patch that implements
the full range of flag support in the new API.
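The per-driver change is therefore the same few lines everywhere; as a
condensed sketch, using a hypothetical foo driver as a stand-in for the
real drivers touched below:

    static int
    fooDomainSetVcpus(virDomainPtr dom, unsigned int nvcpus)
    {
        /* the old entry point delegates to the new one, passing the
         * only flag combination it ever supported */
        return fooDomainSetVcpusFlags(dom, nvcpus, VIR_DOMAIN_VCPU_LIVE);
    }

    static int
    fooDomainGetMaxVcpus(virDomainPtr dom)
    {
        return fooDomainGetVcpusFlags(dom, (VIR_DOMAIN_VCPU_LIVE |
                                            VIR_DOMAIN_VCPU_MAXIMUM));
    }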
* src/esx/esx_driver.c (esxDomainSetVcpus, esxDomainGetMaxVcpus):
Move guts...
(esxDomainSetVcpusFlags, esxDomainGetVcpusFlags): ...to new
functions.
(esxDriver): Trivially support the new API.
* src/openvz/openvz_driver.c (openvzDomainSetVcpus)
(openvzDomainSetVcpusFlags, openvzDomainGetMaxVcpus)
(openvzDomainGetVcpusFlags, openvzDriver): Likewise.
* src/phyp/phyp_driver.c (phypDomainSetCPU)
(phypDomainSetVcpusFlags, phypGetLparCPUMAX)
(phypDomainGetVcpusFlags, phypDriver): Likewise.
* src/qemu/qemu_driver.c (qemudDomainSetVcpus)
(qemudDomainSetVcpusFlags, qemudDomainGetMaxVcpus)
(qemudDomainGetVcpusFlags, qemuDriver): Likewise.
* src/test/test_driver.c (testSetVcpus, testDomainSetVcpusFlags)
(testDomainGetMaxVcpus, testDomainGetVcpusFlags, testDriver):
Likewise.
* src/vbox/vbox_tmpl.c (vboxDomainSetVcpus)
(vboxDomainSetVcpusFlags, vboxDomainGetMaxVcpus)
(vboxDomainGetVcpusFlags, virDriver): Likewise.
* src/xen/xen_driver.c (xenUnifiedDomainSetVcpus)
(xenUnifiedDomainSetVcpusFlags, xenUnifiedDomainGetMaxVcpus)
(xenUnifiedDomainGetVcpusFlags, xenUnifiedDriver): Likewise.
* src/xenapi/xenapi_driver.c (xenapiDomainSetVcpus)
(xenapiDomainSetVcpusFlags, xenapiDomainGetMaxVcpus)
(xenapiDomainGetVcpusFlags, xenapiDriver): Likewise.
(xenapiError): New helper macro.
---
src/esx/esx_driver.c | 32 +++++++++++++++++++---
src/openvz/openvz_driver.c | 34 +++++++++++++++++++++---
src/phyp/phyp_driver.c | 32 ++++++++++++++++++++---
src/qemu/qemu_driver.c | 38 +++++++++++++++++++++++++---
src/test/test_driver.c | 36 ++++++++++++++++++++++---
src/vbox/vbox_tmpl.c | 36 +++++++++++++++++++++++---
src/xen/xen_driver.c | 34 ++++++++++++++++++++++---
src/xenapi/xenapi_driver.c | 60 ++++++++++++++++++++++++++++++++++++++------
8 files changed, 263 insertions(+), 39 deletions(-)
diff --git a/src/esx/esx_driver.c b/src/esx/esx_driver.c
index 2a32374..b3e1284 100644
--- a/src/esx/esx_driver.c
+++ b/src/esx/esx_driver.c
@@ -2384,7 +2384,8 @@ esxDomainGetInfo(virDomainPtr domain, virDomainInfoPtr info)
static int
-esxDomainSetVcpus(virDomainPtr domain, unsigned int nvcpus)
+esxDomainSetVcpusFlags(virDomainPtr domain, unsigned int nvcpus,
+ unsigned int flags)
{
int result = -1;
esxPrivate *priv = domain->conn->privateData;
@@ -2394,6 +2395,11 @@ esxDomainSetVcpus(virDomainPtr domain, unsigned int nvcpus)
esxVI_ManagedObjectReference *task = NULL;
esxVI_TaskInfoState taskInfoState;
+ if (flags != VIR_DOMAIN_VCPU_LIVE) {
+ ESX_ERROR(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"), flags);
+ return -1;
+ }
+
if (nvcpus < 1) {
ESX_ERROR(VIR_ERR_INVALID_ARG, "%s",
_("Requested number of virtual CPUs must at least be 1"));
@@ -2453,15 +2459,26 @@ esxDomainSetVcpus(virDomainPtr domain, unsigned int nvcpus)
}
+static int
+esxDomainSetVcpus(virDomainPtr domain, unsigned int nvcpus)
+{
+ return esxDomainSetVcpusFlags(domain, nvcpus, VIR_DOMAIN_VCPU_LIVE);
+}
+
static int
-esxDomainGetMaxVcpus(virDomainPtr domain)
+esxDomainGetVcpusFlags(virDomainPtr domain, unsigned int flags)
{
esxPrivate *priv = domain->conn->privateData;
esxVI_String *propertyNameList = NULL;
esxVI_ObjectContent *hostSystem = NULL;
esxVI_DynamicProperty *dynamicProperty = NULL;
+ if (flags != (VIR_DOMAIN_VCPU_LIVE | VIR_DOMAIN_VCPU_MAXIMUM)) {
+ ESX_ERROR(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"), flags);
+ return -1;
+ }
+
if (priv->maxVcpus > 0) {
return priv->maxVcpus;
}
@@ -2507,7 +2524,12 @@ esxDomainGetMaxVcpus(virDomainPtr domain)
return priv->maxVcpus;
}
-
+static int
+esxDomainGetMaxVcpus(virDomainPtr domain)
+{
+ return esxDomainGetVcpusFlags(domain, (VIR_DOMAIN_VCPU_LIVE |
+ VIR_DOMAIN_VCPU_MAXIMUM));
+}
static char *
esxDomainDumpXML(virDomainPtr domain, int flags)
@@ -4160,8 +4182,8 @@ static virDriver esxDriver = {
NULL, /* domainRestore */
NULL, /* domainCoreDump */
esxDomainSetVcpus, /* domainSetVcpus */
- NULL, /* domainSetVcpusFlags */
- NULL, /* domainGetVcpusFlags */
+ esxDomainSetVcpusFlags, /* domainSetVcpusFlags */
+ esxDomainGetVcpusFlags, /* domainGetVcpusFlags */
NULL, /* domainPinVcpu */
NULL, /* domainGetVcpus */
esxDomainGetMaxVcpus, /* domainGetMaxVcpus */
diff --git a/src/openvz/openvz_driver.c b/src/openvz/openvz_driver.c
index 9d19aeb..0f3cfdf 100644
--- a/src/openvz/openvz_driver.c
+++ b/src/openvz/openvz_driver.c
@@ -67,7 +67,6 @@
static int openvzGetProcessInfo(unsigned long long *cpuTime, int vpsid);
static int openvzGetMaxVCPUs(virConnectPtr conn, const char *type);
static int openvzDomainGetMaxVcpus(virDomainPtr dom);
-static int openvzDomainSetVcpus(virDomainPtr dom, unsigned int nvcpus);
static int openvzDomainSetVcpusInternal(virDomainObjPtr vm,
unsigned int nvcpus);
static int openvzDomainSetMemoryInternal(virDomainObjPtr vm,
@@ -1211,11 +1210,24 @@ static int openvzGetMaxVCPUs(virConnectPtr conn ATTRIBUTE_UNUSED,
return -1;
}
+static int
+openvzDomainGetVcpusFlags(virDomainPtr dom ATTRIBUTE_UNUSED,
+ unsigned int flags)
+{
+ if (flags != (VIR_DOMAIN_VCPU_LIVE | VIR_DOMAIN_VCPU_MAXIMUM)) {
+ openvzError(VIR_ERR_INVALID_ARG, _("unsupported flags (0x%x)"), flags);
+ return -1;
+ }
-static int openvzDomainGetMaxVcpus(virDomainPtr dom ATTRIBUTE_UNUSED) {
return openvzGetMaxVCPUs(NULL, "openvz");
}
+static int openvzDomainGetMaxVcpus(virDomainPtr dom)
+{
+ return openvzDomainGetVcpusFlags(dom, (VIR_DOMAIN_VCPU_LIVE |
+ VIR_DOMAIN_VCPU_MAXIMUM));
+}
+
static int openvzDomainSetVcpusInternal(virDomainObjPtr vm,
unsigned int nvcpus)
{
@@ -1241,12 +1253,18 @@ static int openvzDomainSetVcpusInternal(virDomainObjPtr vm,
return 0;
}
-static int openvzDomainSetVcpus(virDomainPtr dom, unsigned int nvcpus)
+static int openvzDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus,
+ unsigned int flags)
{
virDomainObjPtr vm;
struct openvz_driver *driver = dom->conn->privateData;
int ret = -1;
+ if (flags != VIR_DOMAIN_VCPU_LIVE) {
+ openvzError(VIR_ERR_INVALID_ARG, _("unsupported flags (0x%x)"), flags);
+ return -1;
+ }
+
openvzDriverLock(driver);
vm = virDomainFindByUUID(&driver->domains, dom->uuid);
openvzDriverUnlock(driver);
@@ -1272,6 +1290,12 @@ cleanup:
return ret;
}
+static int
+openvzDomainSetVcpus(virDomainPtr dom, unsigned int nvcpus)
+{
+ return openvzDomainSetVcpusFlags(dom, nvcpus, VIR_DOMAIN_VCPU_LIVE);
+}
+
static virDrvOpenStatus openvzOpen(virConnectPtr conn,
virConnectAuthPtr auth ATTRIBUTE_UNUSED,
int flags ATTRIBUTE_UNUSED)
@@ -1590,8 +1614,8 @@ static virDriver openvzDriver = {
NULL, /* domainRestore */
NULL, /* domainCoreDump */
openvzDomainSetVcpus, /* domainSetVcpus */
- NULL, /* domainSetVcpusFlags */
- NULL, /* domainGetVcpusFlags */
+ openvzDomainSetVcpusFlags, /* domainSetVcpusFlags */
+ openvzDomainGetVcpusFlags, /* domainGetVcpusFlags */
NULL, /* domainPinVcpu */
NULL, /* domainGetVcpus */
openvzDomainGetMaxVcpus, /* domainGetMaxVcpus */
diff --git a/src/phyp/phyp_driver.c b/src/phyp/phyp_driver.c
index 6e0a5e9..e284ae0 100644
--- a/src/phyp/phyp_driver.c
+++ b/src/phyp/phyp_driver.c
@@ -1497,15 +1497,27 @@ phypGetLparCPU(virConnectPtr conn, const char *managed_system, int lpar_id)
}
static int
-phypGetLparCPUMAX(virDomainPtr dom)
+phypDomainGetVcpusFlags(virDomainPtr dom, unsigned int flags)
{
phyp_driverPtr phyp_driver = dom->conn->privateData;
char *managed_system = phyp_driver->managed_system;
+ if (flags != (VIR_DOMAIN_VCPU_LIVE | VIR_DOMAIN_VCPU_MAXIMUM)) {
+ PHYP_ERROR(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"), flags);
+ return -1;
+ }
+
return phypGetLparCPUGeneric(dom->conn, managed_system, dom->id, 1);
}
static int
+phypGetLparCPUMAX(virDomainPtr dom)
+{
+ return phypDomainGetVcpusFlags(dom, (VIR_DOMAIN_VCPU_LIVE |
+ VIR_DOMAIN_VCPU_MAXIMUM));
+}
+
+static int
phypGetRemoteSlot(virConnectPtr conn, const char *managed_system,
const char *lpar_name)
{
@@ -3831,7 +3843,8 @@ phypConnectGetCapabilities(virConnectPtr conn)
}
static int
-phypDomainSetCPU(virDomainPtr dom, unsigned int nvcpus)
+phypDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus,
+ unsigned int flags)
{
ConnectionData *connection_data = dom->conn->networkPrivateData;
phyp_driverPtr phyp_driver = dom->conn->privateData;
@@ -3846,6 +3859,11 @@ phypDomainSetCPU(virDomainPtr dom, unsigned int nvcpus)
unsigned int amount = 0;
virBuffer buf = VIR_BUFFER_INITIALIZER;
+ if (flags != VIR_DOMAIN_VCPU_LIVE) {
+ PHYP_ERROR(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"), flags);
+ return -1;
+ }
+
if ((ncpus = phypGetLparCPU(dom->conn, managed_system, dom->id)) == 0)
return 0;
@@ -3891,6 +3909,12 @@ phypDomainSetCPU(virDomainPtr dom, unsigned int nvcpus)
}
+static int
+phypDomainSetCPU(virDomainPtr dom, unsigned int nvcpus)
+{
+ return phypDomainSetVcpusFlags(dom, nvcpus, VIR_DOMAIN_VCPU_LIVE);
+}
+
static virDrvOpenStatus
phypVIOSDriverOpen(virConnectPtr conn,
virConnectAuthPtr auth ATTRIBUTE_UNUSED,
@@ -3941,8 +3965,8 @@ static virDriver phypDriver = {
NULL, /* domainRestore */
NULL, /* domainCoreDump */
phypDomainSetCPU, /* domainSetVcpus */
- NULL, /* domainSetVcpusFlags */
- NULL, /* domainGetVcpusFlags */
+ phypDomainSetVcpusFlags, /* domainSetVcpusFlags */
+ phypDomainGetVcpusFlags, /* domainGetVcpusFlags */
NULL, /* domainPinVcpu */
NULL, /* domainGetVcpus */
phypGetLparCPUMAX, /* domainGetMaxVcpus */
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 3d17e04..7a2ea8f 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -5934,13 +5934,22 @@ unsupported:
}
-static int qemudDomainSetVcpus(virDomainPtr dom, unsigned int nvcpus) {
+static int
+qemudDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus,
+ unsigned int flags)
+{
struct qemud_driver *driver = dom->conn->privateData;
virDomainObjPtr vm;
const char * type;
int max;
int ret = -1;
+ if (flags != VIR_DOMAIN_VCPU_LIVE) {
+ qemuReportError(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"),
+ flags);
+ return -1;
+ }
+
qemuDriverLock(driver);
vm = virDomainFindByUUID(&driver->domains, dom->uuid);
qemuDriverUnlock(driver);
@@ -5994,6 +6003,12 @@ cleanup:
return ret;
}
+static int
+qemudDomainSetVcpus(virDomainPtr dom, unsigned int nvcpus)
+{
+ return qemudDomainSetVcpusFlags(dom, nvcpus, VIR_DOMAIN_VCPU_LIVE);
+}
+
static int
qemudDomainPinVcpu(virDomainPtr dom,
@@ -6150,12 +6165,20 @@ cleanup:
}
-static int qemudDomainGetMaxVcpus(virDomainPtr dom) {
+static int
+qemudDomainGetVcpusFlags(virDomainPtr dom, unsigned int flags)
+{
struct qemud_driver *driver = dom->conn->privateData;
virDomainObjPtr vm;
const char *type;
int ret = -1;
+ if (flags != (VIR_DOMAIN_VCPU_LIVE | VIR_DOMAIN_VCPU_MAXIMUM)) {
+ qemuReportError(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"),
+ flags);
+ return -1;
+ }
+
qemuDriverLock(driver);
vm = virDomainFindByUUID(&driver->domains, dom->uuid);
qemuDriverUnlock(driver);
@@ -6183,6 +6206,13 @@ cleanup:
return ret;
}
+static int
+qemudDomainGetMaxVcpus(virDomainPtr dom)
+{
+ return qemudDomainGetVcpusFlags(dom, (VIR_DOMAIN_VCPU_LIVE |
+ VIR_DOMAIN_VCPU_MAXIMUM));
+}
+
static int qemudDomainGetSecurityLabel(virDomainPtr dom, virSecurityLabelPtr seclabel)
{
struct qemud_driver *driver = (struct qemud_driver *)dom->conn->privateData;
@@ -12938,8 +12968,8 @@ static virDriver qemuDriver = {
qemudDomainRestore, /* domainRestore */
qemudDomainCoreDump, /* domainCoreDump */
qemudDomainSetVcpus, /* domainSetVcpus */
- NULL, /* domainSetVcpusFlags */
- NULL, /* domainGetVcpusFlags */
+ qemudDomainSetVcpusFlags, /* domainSetVcpusFlags */
+ qemudDomainGetVcpusFlags, /* domainGetVcpusFlags */
qemudDomainPinVcpu, /* domainPinVcpu */
qemudDomainGetVcpus, /* domainGetVcpus */
qemudDomainGetMaxVcpus, /* domainGetMaxVcpus */
diff --git a/src/test/test_driver.c b/src/test/test_driver.c
index 6a00558..b70c80d 100644
--- a/src/test/test_driver.c
+++ b/src/test/test_driver.c
@@ -2029,17 +2029,37 @@ cleanup:
return ret;
}
-static int testDomainGetMaxVcpus(virDomainPtr domain)
+static int
+testDomainGetVcpusFlags(virDomainPtr domain, unsigned int flags)
{
+ if (flags != (VIR_DOMAIN_VCPU_LIVE | VIR_DOMAIN_VCPU_MAXIMUM)) {
+ testError(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"), flags);
+ return -1;
+ }
+
return testGetMaxVCPUs(domain->conn, "test");
}
-static int testSetVcpus(virDomainPtr domain,
- unsigned int nrCpus) {
+static int
+testDomainGetMaxVcpus(virDomainPtr domain)
+{
+ return testDomainGetVcpusFlags(domain, (VIR_DOMAIN_VCPU_LIVE |
+ VIR_DOMAIN_VCPU_MAXIMUM));
+}
+
+static int
+testDomainSetVcpusFlags(virDomainPtr domain, unsigned int nrCpus,
+ unsigned int flags)
+{
testConnPtr privconn = domain->conn->privateData;
virDomainObjPtr privdom = NULL;
int ret = -1, maxvcpus;
+ if (flags != VIR_DOMAIN_VCPU_LIVE) {
+ testError(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"), flags);
+ return -1;
+ }
+
/* Do this first before locking */
maxvcpus = testDomainGetMaxVcpus(domain);
if (maxvcpus < 0)
@@ -2082,6 +2102,12 @@ cleanup:
return ret;
}
+static int
+testSetVcpus(virDomainPtr domain, unsigned int nrCpus)
+{
+ return testDomainSetVcpusFlags(domain, nrCpus, VIR_DOMAIN_VCPU_LIVE);
+}
+
static int testDomainGetVcpus(virDomainPtr domain,
virVcpuInfoPtr info,
int maxinfo,
@@ -5260,8 +5286,8 @@ static virDriver testDriver = {
testDomainRestore, /* domainRestore */
testDomainCoreDump, /* domainCoreDump */
testSetVcpus, /* domainSetVcpus */
- NULL, /* domainSetVcpusFlags */
- NULL, /* domainGetVcpusFlags */
+ testDomainSetVcpusFlags, /* domainSetVcpusFlags */
+ testDomainGetVcpusFlags, /* domainGetVcpusFlags */
testDomainPinVcpu, /* domainPinVcpu */
testDomainGetVcpus, /* domainGetVcpus */
testDomainGetMaxVcpus, /* domainGetMaxVcpus */
diff --git a/src/vbox/vbox_tmpl.c b/src/vbox/vbox_tmpl.c
index cb9193a..0cbe8b3 100644
--- a/src/vbox/vbox_tmpl.c
+++ b/src/vbox/vbox_tmpl.c
@@ -1839,13 +1839,21 @@ cleanup:
return ret;
}
-static int vboxDomainSetVcpus(virDomainPtr dom, unsigned int nvcpus) {
+static int
+vboxDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus,
+ unsigned int flags)
+{
VBOX_OBJECT_CHECK(dom->conn, int, -1);
IMachine *machine = NULL;
vboxIID *iid = NULL;
PRUint32 CPUCount = nvcpus;
nsresult rc;
+ if (flags != VIR_DOMAIN_VCPU_LIVE) {
+ vboxError(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"), flags);
+ return -1;
+ }
+
#if VBOX_API_VERSION == 2002
if (VIR_ALLOC(iid) < 0) {
virReportOOMError();
@@ -1887,11 +1895,24 @@ cleanup:
return ret;
}
-static int vboxDomainGetMaxVcpus(virDomainPtr dom) {
+static int
+vboxDomainSetVcpus(virDomainPtr dom, unsigned int nvcpus)
+{
+ return vboxDomainSetVcpusFlags(dom, nvcpus, VIR_DOMAIN_VCPU_LIVE);
+}
+
+static int
+vboxDomainGetVcpusFlags(virDomainPtr dom, unsigned int flags)
+{
VBOX_OBJECT_CHECK(dom->conn, int, -1);
ISystemProperties *systemProperties = NULL;
PRUint32 maxCPUCount = 0;
+ if (flags != (VIR_DOMAIN_VCPU_LIVE | VIR_DOMAIN_VCPU_MAXIMUM)) {
+ vboxError(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"), flags);
+ return -1;
+ }
+
/* Currently every domain supports the same number of max cpus
* as that supported by vbox and thus take it directly from
* the systemproperties.
@@ -1909,6 +1930,13 @@ static int vboxDomainGetMaxVcpus(virDomainPtr dom) {
return ret;
}
+static int
+vboxDomainGetMaxVcpus(virDomainPtr dom)
+{
+ return vboxDomainGetVcpusFlags(dom, (VIR_DOMAIN_VCPU_LIVE |
+ VIR_DOMAIN_VCPU_MAXIMUM));
+}
+
static char *vboxDomainDumpXML(virDomainPtr dom, int flags) {
VBOX_OBJECT_CHECK(dom->conn, char *, NULL);
virDomainDefPtr def = NULL;
@@ -8267,8 +8295,8 @@ virDriver NAME(Driver) = {
NULL, /* domainRestore */
NULL, /* domainCoreDump */
vboxDomainSetVcpus, /* domainSetVcpus */
- NULL, /* domainSetVcpusFlags */
- NULL, /* domainGetVcpusFlags */
+ vboxDomainSetVcpusFlags, /* domainSetVcpusFlags */
+ vboxDomainGetVcpusFlags, /* domainGetVcpusFlags */
NULL, /* domainPinVcpu */
NULL, /* domainGetVcpus */
vboxDomainGetMaxVcpus, /* domainGetMaxVcpus */
diff --git a/src/xen/xen_driver.c b/src/xen/xen_driver.c
index 7d67ced..d6c9c57 100644
--- a/src/xen/xen_driver.c
+++ b/src/xen/xen_driver.c
@@ -1069,11 +1069,18 @@ xenUnifiedDomainCoreDump (virDomainPtr dom, const char *to, int flags)
}
static int
-xenUnifiedDomainSetVcpus (virDomainPtr dom, unsigned int nvcpus)
+xenUnifiedDomainSetVcpusFlags (virDomainPtr dom, unsigned int nvcpus,
+ unsigned int flags)
{
GET_PRIVATE(dom->conn);
int i;
+ if (flags != VIR_DOMAIN_VCPU_LIVE) {
+ xenUnifiedError(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"),
+ flags);
+ return -1;
+ }
+
/* Try non-hypervisor methods first, then hypervisor direct method
* as a last resort.
*/
@@ -1093,6 +1100,12 @@ xenUnifiedDomainSetVcpus (virDomainPtr dom, unsigned int nvcpus)
}
static int
+xenUnifiedDomainSetVcpus (virDomainPtr dom, unsigned int nvcpus)
+{
+ return xenUnifiedDomainSetVcpusFlags(dom, nvcpus, VIR_DOMAIN_VCPU_LIVE);
+}
+
+static int
xenUnifiedDomainPinVcpu (virDomainPtr dom, unsigned int vcpu,
unsigned char *cpumap, int maplen)
{
@@ -1126,11 +1139,17 @@ xenUnifiedDomainGetVcpus (virDomainPtr dom,
}
static int
-xenUnifiedDomainGetMaxVcpus (virDomainPtr dom)
+xenUnifiedDomainGetVcpusFlags (virDomainPtr dom, unsigned int flags)
{
GET_PRIVATE(dom->conn);
int i, ret;
+ if (flags != (VIR_DOMAIN_VCPU_LIVE | VIR_DOMAIN_VCPU_MAXIMUM)) {
+ xenUnifiedError(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"),
+ flags);
+ return -1;
+ }
+
for (i = 0; i < XEN_UNIFIED_NR_DRIVERS; ++i)
if (priv->opened[i] && drivers[i]->domainGetMaxVcpus) {
ret = drivers[i]->domainGetMaxVcpus (dom);
@@ -1140,6 +1159,13 @@ xenUnifiedDomainGetMaxVcpus (virDomainPtr dom)
return -1;
}
+static int
+xenUnifiedDomainGetMaxVcpus (virDomainPtr dom)
+{
+ return xenUnifiedDomainGetVcpusFlags(dom, (VIR_DOMAIN_VCPU_LIVE |
+ VIR_DOMAIN_VCPU_MAXIMUM));
+}
+
static char *
xenUnifiedDomainDumpXML (virDomainPtr dom, int flags)
{
@@ -1951,8 +1977,8 @@ static virDriver xenUnifiedDriver = {
xenUnifiedDomainRestore, /* domainRestore */
xenUnifiedDomainCoreDump, /* domainCoreDump */
xenUnifiedDomainSetVcpus, /* domainSetVcpus */
- NULL, /* domainSetVcpusFlags */
- NULL, /* domainGetVcpusFlags */
+ xenUnifiedDomainSetVcpusFlags, /* domainSetVcpusFlags */
+ xenUnifiedDomainGetVcpusFlags, /* domainGetVcpusFlags */
xenUnifiedDomainPinVcpu, /* domainPinVcpu */
xenUnifiedDomainGetVcpus, /* domainGetVcpus */
xenUnifiedDomainGetMaxVcpus, /* domainGetMaxVcpus */
diff --git a/src/xenapi/xenapi_driver.c b/src/xenapi/xenapi_driver.c
index 753169c..7d4ab8d 100644
--- a/src/xenapi/xenapi_driver.c
+++ b/src/xenapi/xenapi_driver.c
@@ -40,6 +40,11 @@
#include "xenapi_driver_private.h"
#include "xenapi_utils.h"
+#define VIR_FROM_THIS VIR_FROM_XENAPI
+
+#define xenapiError(code, ...) \
+ virReportErrorHelper(NULL, VIR_FROM_THIS, code, __FILE__, \
+ __FUNCTION__, __LINE__, __VA_ARGS__)
/*
* getCapsObject
@@ -987,19 +992,26 @@ xenapiDomainGetInfo (virDomainPtr dom, virDomainInfoPtr info)
/*
- * xenapiDomainSetVcpus
+ * xenapiDomainSetVcpusFlags
*
* Sets the VCPUs on the domain
* Return 0 on success or -1 in case of error
*/
static int
-xenapiDomainSetVcpus (virDomainPtr dom, unsigned int nvcpus)
+xenapiDomainSetVcpusFlags (virDomainPtr dom, unsigned int nvcpus,
+ unsigned int flags)
{
-
/* vm.set_vcpus_max */
xen_vm vm;
xen_vm_set *vms;
xen_session *session = ((struct _xenapiPrivate *)(dom->conn->privateData))->session;
+
+ if (flags != VIR_DOMAIN_VCPU_LIVE) {
+ xenapiError(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"),
+ flags);
+ return -1;
+ }
+
if (xen_vm_get_by_name_label(session, &vms, dom->name) && vms->size > 0) {
if (vms->size != 1) {
xenapiSessionErrorHandler(dom->conn, VIR_ERR_INTERNAL_ERROR,
@@ -1019,6 +1031,18 @@ xenapiDomainSetVcpus (virDomainPtr dom, unsigned int nvcpus)
}
/*
+ * xenapiDomainSetVcpus
+ *
+ * Sets the VCPUs on the domain
+ * Return 0 on success or -1 in case of error
+ */
+static int
+xenapiDomainSetVcpus (virDomainPtr dom, unsigned int nvcpus)
+{
+ return xenapiDomainSetVcpusFlags(dom, nvcpus, VIR_DOMAIN_VCPU_LIVE);
+}
+
+/*
* xenapiDomainPinVcpu
*
* Dynamically change the real CPUs which can be allocated to a virtual CPU
@@ -1140,19 +1164,26 @@ xenapiDomainGetVcpus (virDomainPtr dom,
}
/*
- * xenapiDomainGetMaxVcpus
+ * xenapiDomainGetVcpusFlags
*
*
- * Returns maximum number of Vcpus on success or -1 in case of error
+ * Returns Vcpus count on success or -1 in case of error
*/
static int
-xenapiDomainGetMaxVcpus (virDomainPtr dom)
+xenapiDomainGetVcpusFlags (virDomainPtr dom, unsigned int flags)
{
xen_vm vm;
xen_vm_set *vms;
int64_t maxvcpu = 0;
enum xen_vm_power_state state;
xen_session *session = ((struct _xenapiPrivate *)(dom->conn->privateData))->session;
+
+ if (flags != (VIR_DOMAIN_VCPU_LIVE | VIR_DOMAIN_VCPU_MAXIMUM)) {
+ xenapiError(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"),
+ flags);
+ return -1;
+ }
+
if (xen_vm_get_by_name_label(session, &vms, dom->name) && vms->size > 0) {
if (vms->size != 1) {
xenapiSessionErrorHandler(dom->conn, VIR_ERR_INTERNAL_ERROR,
@@ -1176,6 +1207,19 @@ xenapiDomainGetMaxVcpus (virDomainPtr dom)
}
/*
+ * xenapiDomainGetMaxVcpus
+ *
+ *
+ * Returns maximum number of Vcpus on success or -1 in case of error
+ */
+static int
+xenapiDomainGetMaxVcpus (virDomainPtr dom)
+{
+ return xenapiDomainGetVcpusFlags(dom, (VIR_DOMAIN_VCPU_LIVE |
+ VIR_DOMAIN_VCPU_MAXIMUM));
+}
+
+/*
* xenapiDomainDumpXML
*
*
@@ -1754,8 +1798,8 @@ static virDriver xenapiDriver = {
NULL, /* domainRestore */
NULL, /* domainCoreDump */
xenapiDomainSetVcpus, /* domainSetVcpus */
- NULL, /* domainSetVcpusFlags */
- NULL, /* domainGetVcpusFlags */
+ xenapiDomainSetVcpusFlags, /* domainSetVcpusFlags */
+ xenapiDomainGetVcpusFlags, /* domainGetVcpusFlags */
xenapiDomainPinVcpu, /* domainPinVcpu */
xenapiDomainGetVcpus, /* domainGetVcpus */
xenapiDomainGetMaxVcpus, /* domainGetMaxVcpus */
--
1.7.2.3

View File

@ -1,70 +0,0 @@
From 4c5166df583459574526841234d61d6ae5be19a0 Mon Sep 17 00:00:00 2001
From: David Allan <dallan@redhat.com>
Date: Tue, 19 May 2009 16:26:55 -0400
Subject: [PATCH] Step 6 of 8 Implement the server side dispatcher
---
qemud/remote.c | 48 ++++++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 48 insertions(+), 0 deletions(-)
diff --git a/qemud/remote.c b/qemud/remote.c
index e27820f..8d24a3a 100644
--- a/qemud/remote.c
+++ b/qemud/remote.c
@@ -4323,6 +4323,54 @@ remoteDispatchNodeDeviceReset (struct qemud_server *server ATTRIBUTE_UNUSED,
}
+static int
+remoteDispatchNodeDeviceCreateXml(struct qemud_server *server ATTRIBUTE_UNUSED,
+ struct qemud_client *client ATTRIBUTE_UNUSED,
+ virConnectPtr conn,
+ remote_error *rerr,
+ remote_node_device_create_xml_args *args,
+ remote_node_device_create_xml_ret *ret)
+{
+ virNodeDevicePtr dev;
+
+ dev = virNodeDeviceCreateXML (conn, args->xml_desc, args->flags);
+ if (dev == NULL) {
+ remoteDispatchConnError(rerr, conn);
+ return -1;
+ }
+
+ make_nonnull_node_device (&ret->dev, dev);
+ virNodeDeviceFree(dev);
+
+ return 0;
+}
+
+
+static int
+remoteDispatchNodeDeviceDestroy(struct qemud_server *server ATTRIBUTE_UNUSED,
+ struct qemud_client *client ATTRIBUTE_UNUSED,
+ virConnectPtr conn,
+ remote_error *rerr,
+ remote_node_device_destroy_args *args,
+ void *ret ATTRIBUTE_UNUSED)
+{
+ virNodeDevicePtr dev;
+
+ dev = virNodeDeviceLookupByName(conn, args->name);
+ if (dev == NULL) {
+ remoteDispatchFormatError(rerr, "%s", _("node_device not found"));
+ return -1;
+ }
+
+ if (virNodeDeviceDestroy(dev) == -1) {
+ remoteDispatchConnError(rerr, conn);
+ return -1;
+ }
+
+ return 0;
+}
+
+
/**************************
* Async Events
**************************/
--
1.6.0.6

View File

@ -0,0 +1,388 @@
From bf945ee97b72d3b0c4fc2da04530f5294f529d66 Mon Sep 17 00:00:00 2001
From: Eric Blake <eblake@redhat.com>
Date: Wed, 29 Sep 2010 15:20:23 -0600
Subject: [PATCH 08/15] vcpu: add virsh support
* tools/virsh.c (cmdSetvcpus): Add new flags. Let invalid
commands through to driver, to ease testing of hypervisor argument
validation.
(cmdMaxvcpus, cmdVcpucount): New commands.
(commands): Add new commands.
* tools/virsh.pod (setvcpus, vcpucount, maxvcpus): Document new
behavior.
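As a usage sketch (the domain name, type and resulting counts below are
only placeholders), the new commands are driven like this:

    virsh setvcpus example-dom 4 --maximum --config  # raise the boot-time cap
    virsh setvcpus example-dom 2 --config            # 2 vcpus on next boot
    virsh setvcpus example-dom 2 --live              # hot-(un)plug to 2 now
    virsh vcpucount example-dom                      # table of all four values
    virsh vcpucount example-dom --current --live     # just the live count
    virsh maxvcpus qemu                              # limit for a domain type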
---
tools/virsh.c | 247 ++++++++++++++++++++++++++++++++++++++++++++++++++-----
tools/virsh.pod | 38 ++++++++-
2 files changed, 262 insertions(+), 23 deletions(-)
diff --git a/tools/virsh.c b/tools/virsh.c
index 4f8c495..7fb7fbd 100644
--- a/tools/virsh.c
+++ b/tools/virsh.c
@@ -2281,10 +2281,216 @@ cmdFreecell(vshControl *ctl, const vshCmd *cmd)
}
/*
+ * "maxvcpus" command
+ */
+static const vshCmdInfo info_maxvcpus[] = {
+ {"help", N_("connection vcpu maximum")},
+ {"desc", N_("Show maximum number of virtual CPUs for guests on this connection.")},
+ {NULL, NULL}
+};
+
+static const vshCmdOptDef opts_maxvcpus[] = {
+ {"type", VSH_OT_STRING, 0, N_("domain type")},
+ {NULL, 0, 0, NULL}
+};
+
+static int
+cmdMaxvcpus(vshControl *ctl, const vshCmd *cmd)
+{
+ char *type;
+ int vcpus;
+
+ type = vshCommandOptString(cmd, "type", NULL);
+
+ if (!vshConnectionUsability(ctl, ctl->conn))
+ return FALSE;
+
+ vcpus = virConnectGetMaxVcpus(ctl->conn, type);
+ if (vcpus < 0)
+ return FALSE;
+ vshPrint(ctl, "%d\n", vcpus);
+
+ return TRUE;
+}
+
+/*
+ * "vcpucount" command
+ */
+static const vshCmdInfo info_vcpucount[] = {
+ {"help", N_("domain vcpu counts")},
+ {"desc", N_("Returns the number of virtual CPUs used by the domain.")},
+ {NULL, NULL}
+};
+
+static const vshCmdOptDef opts_vcpucount[] = {
+ {"domain", VSH_OT_DATA, VSH_OFLAG_REQ, N_("domain name, id or uuid")},
+ {"maximum", VSH_OT_BOOL, 0, N_("get maximum cap on vcpus")},
+ {"current", VSH_OT_BOOL, 0, N_("get current vcpu usage")},
+ {"config", VSH_OT_BOOL, 0, N_("get value to be used on next boot")},
+ {"live", VSH_OT_BOOL, 0, N_("get value from running domain")},
+ {NULL, 0, 0, NULL}
+};
+
+static int
+cmdVcpucount(vshControl *ctl, const vshCmd *cmd)
+{
+ virDomainPtr dom;
+ int ret = TRUE;
+ int maximum = vshCommandOptBool(cmd, "maximum");
+ int current = vshCommandOptBool(cmd, "current");
+ int config = vshCommandOptBool(cmd, "config");
+ int live = vshCommandOptBool(cmd, "live");
+ bool all = maximum + current + config + live == 0;
+ int count;
+
+ if (maximum && current) {
+ vshError(ctl, "%s",
+ _("--maximum and --current cannot both be specified"));
+ return FALSE;
+ }
+ if (config && live) {
+ vshError(ctl, "%s",
+ _("--config and --live cannot both be specified"));
+ return FALSE;
+ }
+ /* We want one of each pair of mutually exclusive options; that
+ * is, use of flags requires exactly two options. */
+ if (maximum + current + config + live == 1) {
+ vshError(ctl,
+ _("when using --%s, either --%s or --%s must be specified"),
+ (maximum ? "maximum" : current ? "current"
+ : config ? "config" : "live"),
+ maximum + current ? "config" : "maximum",
+ maximum + current ? "live" : "current");
+ return FALSE;
+ }
+
+ if (!vshConnectionUsability(ctl, ctl->conn))
+ return FALSE;
+
+ if (!(dom = vshCommandOptDomain(ctl, cmd, NULL)))
+ return FALSE;
+
+ /* In all cases, try the new API first; if it fails because we are
+ * talking to an older client, try a fallback API before giving
+ * up. */
+ if (all || (maximum && config)) {
+ count = virDomainGetVcpusFlags(dom, (VIR_DOMAIN_VCPU_MAXIMUM |
+ VIR_DOMAIN_VCPU_CONFIG));
+ if (count < 0 && (last_error->code == VIR_ERR_NO_SUPPORT
+ || last_error->code == VIR_ERR_INVALID_ARG)) {
+ char *tmp;
+ char *xml = virDomainGetXMLDesc(dom, VIR_DOMAIN_XML_INACTIVE);
+ if (xml && (tmp = strstr(xml, "<vcpu"))) {
+ tmp = strchr(tmp, '>');
+ if (!tmp || virStrToLong_i(tmp + 1, &tmp, 10, &count) < 0)
+ count = -1;
+ }
+ VIR_FREE(xml);
+ }
+
+ if (count < 0) {
+ virshReportError(ctl);
+ ret = FALSE;
+ } else if (all) {
+ vshPrint(ctl, "%-12s %-12s %3d\n", _("maximum"), _("config"),
+ count);
+ } else {
+ vshPrint(ctl, "%d\n", count);
+ }
+ virFreeError(last_error);
+ last_error = NULL;
+ }
+
+ if (all || (maximum && live)) {
+ count = virDomainGetVcpusFlags(dom, (VIR_DOMAIN_VCPU_MAXIMUM |
+ VIR_DOMAIN_VCPU_LIVE));
+ if (count < 0 && (last_error->code == VIR_ERR_NO_SUPPORT
+ || last_error->code == VIR_ERR_INVALID_ARG)) {
+ count = virDomainGetMaxVcpus(dom);
+ }
+
+ if (count < 0) {
+ virshReportError(ctl);
+ ret = FALSE;
+ } else if (all) {
+ vshPrint(ctl, "%-12s %-12s %3d\n", _("maximum"), _("live"),
+ count);
+ } else {
+ vshPrint(ctl, "%d\n", count);
+ }
+ virFreeError(last_error);
+ last_error = NULL;
+ }
+
+ if (all || (current && config)) {
+ count = virDomainGetVcpusFlags(dom, VIR_DOMAIN_VCPU_CONFIG);
+ if (count < 0 && (last_error->code == VIR_ERR_NO_SUPPORT
+ || last_error->code == VIR_ERR_INVALID_ARG)) {
+ char *tmp, *end;
+ char *xml = virDomainGetXMLDesc(dom, VIR_DOMAIN_XML_INACTIVE);
+ if (xml && (tmp = strstr(xml, "<vcpu"))) {
+ end = strchr(tmp, '>');
+ if (end) {
+ *end = '\0';
+ tmp = strstr(tmp, "current=");
+ if (!tmp)
+ tmp = end + 1;
+ else {
+ tmp += strlen("current=");
+ tmp += *tmp == '\'' || *tmp == '"';
+ }
+ }
+ if (!tmp || virStrToLong_i(tmp, &tmp, 10, &count) < 0)
+ count = -1;
+ }
+ VIR_FREE(xml);
+ }
+
+ if (count < 0) {
+ virshReportError(ctl);
+ ret = FALSE;
+ } else if (all) {
+ vshPrint(ctl, "%-12s %-12s %3d\n", _("current"), _("config"),
+ count);
+ } else {
+ vshPrint(ctl, "%d\n", count);
+ }
+ virFreeError(last_error);
+ last_error = NULL;
+ }
+
+ if (all || (current && live)) {
+ count = virDomainGetVcpusFlags(dom, VIR_DOMAIN_VCPU_LIVE);
+ if (count < 0 && (last_error->code == VIR_ERR_NO_SUPPORT
+ || last_error->code == VIR_ERR_INVALID_ARG)) {
+ virDomainInfo info;
+ if (virDomainGetInfo(dom, &info) == 0)
+ count = info.nrVirtCpu;
+ }
+
+ if (count < 0) {
+ virshReportError(ctl);
+ ret = FALSE;
+ } else if (all) {
+ vshPrint(ctl, "%-12s %-12s %3d\n", _("current"), _("live"),
+ count);
+ } else {
+ vshPrint(ctl, "%d\n", count);
+ }
+ virFreeError(last_error);
+ last_error = NULL;
+ }
+
+ virDomainFree(dom);
+ return ret;
+}
+
+/*
* "vcpuinfo" command
*/
static const vshCmdInfo info_vcpuinfo[] = {
- {"help", N_("domain vcpu information")},
+ {"help", N_("detailed domain vcpu information")},
{"desc", N_("Returns basic information about the domain virtual CPUs.")},
{NULL, NULL}
};
@@ -2514,6 +2720,9 @@ static const vshCmdInfo info_setvcpus[] = {
static const vshCmdOptDef opts_setvcpus[] = {
{"domain", VSH_OT_DATA, VSH_OFLAG_REQ, N_("domain name, id or uuid")},
{"count", VSH_OT_DATA, VSH_OFLAG_REQ, N_("number of virtual CPUs")},
+ {"maximum", VSH_OT_BOOL, 0, N_("set maximum limit on next boot")},
+ {"config", VSH_OT_BOOL, 0, N_("affect next boot")},
+ {"live", VSH_OT_BOOL, 0, N_("affect running domain")},
{NULL, 0, 0, NULL}
};
@@ -2522,8 +2731,13 @@ cmdSetvcpus(vshControl *ctl, const vshCmd *cmd)
{
virDomainPtr dom;
int count;
- int maxcpu;
int ret = TRUE;
+ int maximum = vshCommandOptBool(cmd, "maximum");
+ int config = vshCommandOptBool(cmd, "config");
+ int live = vshCommandOptBool(cmd, "live");
+ int flags = ((maximum ? VIR_DOMAIN_VCPU_MAXIMUM : 0) |
+ (config ? VIR_DOMAIN_VCPU_CONFIG : 0) |
+ (live ? VIR_DOMAIN_VCPU_LIVE : 0));
if (!vshConnectionUsability(ctl, ctl->conn))
return FALSE;
@@ -2532,26 +2746,15 @@ cmdSetvcpus(vshControl *ctl, const vshCmd *cmd)
return FALSE;
count = vshCommandOptInt(cmd, "count", &count);
- if (count <= 0) {
- vshError(ctl, "%s", _("Invalid number of virtual CPUs."));
- virDomainFree(dom);
- return FALSE;
- }
-
- maxcpu = virDomainGetMaxVcpus(dom);
- if (maxcpu <= 0) {
- virDomainFree(dom);
- return FALSE;
- }
-
- if (count > maxcpu) {
- vshError(ctl, "%s", _("Too many virtual CPUs."));
- virDomainFree(dom);
- return FALSE;
- }
- if (virDomainSetVcpus(dom, count) != 0) {
- ret = FALSE;
+ if (!flags) {
+ if (virDomainSetVcpus(dom, count) != 0) {
+ ret = FALSE;
+ }
+ } else {
+ if (virDomainSetVcpusFlags(dom, count, flags) < 0) {
+ ret = FALSE;
+ }
}
virDomainFree(dom);
@@ -9642,6 +9845,7 @@ static const vshCmdDef commands[] = {
{"freecell", cmdFreecell, opts_freecell, info_freecell},
{"hostname", cmdHostname, NULL, info_hostname},
{"list", cmdList, opts_list, info_list},
+ {"maxvcpus", cmdMaxvcpus, opts_maxvcpus, info_maxvcpus},
{"migrate", cmdMigrate, opts_migrate, info_migrate},
{"migrate-setmaxdowntime", cmdMigrateSetMaxDowntime, opts_migrate_setmaxdowntime, info_migrate_setmaxdowntime},
@@ -9748,6 +9952,7 @@ static const vshCmdDef commands[] = {
{"vol-name", cmdVolName, opts_vol_name, info_vol_name},
{"vol-key", cmdVolKey, opts_vol_key, info_vol_key},
+ {"vcpucount", cmdVcpucount, opts_vcpucount, info_vcpucount},
{"vcpuinfo", cmdVcpuinfo, opts_vcpuinfo, info_vcpuinfo},
{"vcpupin", cmdVcpupin, opts_vcpupin, info_vcpupin},
{"version", cmdVersion, NULL, info_version},
diff --git a/tools/virsh.pod b/tools/virsh.pod
index 943a563..dbcc680 100644
--- a/tools/virsh.pod
+++ b/tools/virsh.pod
@@ -443,7 +443,14 @@ Remove the managed save file for a domain if it exists. The next time the
domain is started it will not restore to its previous state but instead will
do a full boot.
-=item B<migrate> optional I<--live> I<--suspend> I<domain-id> I<desturi> I<migrateuri>
+=item B<maxvcpus> optional I<type>
+
+Provide the maximum number of virtual CPUs supported for a guest VM on
+this connection. If provided, the I<type> parameter must be a valid
+type attribute for the <domain> element of XML.
+
+=item B<migrate> optional I<--live> I<--suspend> I<domain-id> I<desturi>
+I<migrateuri>
Migrate domain to another host. Add --live for live migration; --suspend
leaves the domain paused on the destination host. The I<desturi> is the
@@ -521,7 +528,8 @@ Displays the domain memory parameters.
Allows you to set the domain memory parameters. LXC and QEMU/KVM supports these parameters.
-=item B<setvcpus> I<domain-id> I<count>
+=item B<setvcpus> I<domain-id> I<count> optional I<--maximum> I<--config>
+I<--live>
Change the number of virtual CPUs active in the guest domain. Note that
I<count> may be limited by host, hypervisor or limit coming from the
@@ -530,6 +538,17 @@ original description of domain.
For Xen, you can only adjust the virtual CPUs of a running domain if
the domain is paravirtualized.
+If I<--config> is specified, the change will only affect the next
+boot of a domain. If I<--live> is specified, the domain must be
+running, and the change takes place immediately. Both flags may be
+specified, if supported by the hypervisor. If neither flag is given,
+then I<--live> is implied and it is up to the hypervisor whether
+I<--config> is also implied.
+
+If I<--maximum> is specified, then you must use I<--config> and
+avoid I<--live>; this flag controls the maximum limit of vcpus that
+can be hot-plugged the next time the domain is booted.
+
=item B<shutdown> I<domain-id>
Gracefully shuts down a domain. This coordinates with the domain OS
@@ -568,6 +587,21 @@ is not available the processes will provide an exit code of 1.
Undefine the configuration for an inactive domain. Since it's not running
the domain name or UUID must be used as the I<domain-id>.
+=item B<vcpucount> I<domain-id> optional I<--maximum> I<--current>
+I<--config> I<--live>
+
+Print information about the virtual cpu counts of the given
+I<domain-id>. If no flags are specified, all possible counts are
+listed in a table; otherwise, the output is limited to just the
+numeric value requested.
+
+I<--maximum> requests information on the maximum cap of vcpus that a
+domain can add via B<setvcpus>, while I<--current> shows the current
+usage; these two flags cannot both be specified. I<--config>
+requests information regarding the next time the domain will be
+booted, while I<--live> requires a running domain and lists current
+values; these two flags cannot both be specified.
+
=item B<vcpuinfo> I<domain-id>
Returns basic information about the domain virtual CPUs, like the number of
--
1.7.2.3

View File

@ -0,0 +1,519 @@
From 4617eedfaeee2b187a1f14691d25746ba3ff31b6 Mon Sep 17 00:00:00 2001
From: Eric Blake <eblake@redhat.com>
Date: Wed, 29 Sep 2010 10:20:07 -0600
Subject: [PATCH 07/15] vcpu: support maxvcpu in domain_conf
Although this patch adds a distinction between maximum vcpus and
current vcpus in the XML, the values should be identical for all
drivers at this point. Only in subsequent per-driver patches will
a distinction be made.
In general, virDomainGetInfo should prefer the current vcpus.
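The XML shape this introduces looks like the following fragment (the
numbers are arbitrary); omitting the current attribute means the current
count equals the maximum:

    <vcpu>4</vcpu>                           <!-- maximum and current both 4 -->
    <vcpu current='2'>4</vcpu>               <!-- maximum 4, currently 2 -->
    <vcpu cpuset='0-3' current='2'>4</vcpu>  <!-- vcpus restricted to host cpus 0-3 -->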
* src/conf/domain_conf.h (_virDomainDef): Adjust vcpus to unsigned
short, to match virDomainGetInfo limit. Add maxvcpus member.
* src/conf/domain_conf.c (virDomainDefParseXML)
(virDomainDefFormat): parse and print out vcpu details.
* src/xen/xend_internal.c (xenDaemonParseSxpr)
(xenDaemonFormatSxpr): Manage both vcpu numbers, and require them
to be equal for now.
* src/xen/xm_internal.c (xenXMDomainConfigParse)
(xenXMDomainConfigFormat): Likewise.
* src/phyp/phyp_driver.c (phypDomainDumpXML): Likewise.
* src/openvz/openvz_conf.c (openvzLoadDomains): Likewise.
* src/openvz/openvz_driver.c (openvzDomainDefineXML)
(openvzDomainCreateXML, openvzDomainSetVcpusInternal): Likewise.
* src/vbox/vbox_tmpl.c (vboxDomainDumpXML, vboxDomainDefineXML):
Likewise.
* src/xenapi/xenapi_driver.c (xenapiDomainDumpXML): Likewise.
* src/xenapi/xenapi_utils.c (createVMRecordFromXml): Likewise.
* src/esx/esx_vmx.c (esxVMX_ParseConfig, esxVMX_FormatConfig):
Likewise.
* src/qemu/qemu_conf.c (qemuBuildSmpArgStr)
(qemuParseCommandLineSmp, qemuParseCommandLine): Likewise.
* src/qemu/qemu_driver.c (qemudDomainHotplugVcpus): Likewise.
* src/opennebula/one_conf.c (xmlOneTemplate): Likewise.
---
src/conf/domain_conf.c | 45 +++++++++++++++++++++++++++++++++++++------
src/conf/domain_conf.h | 3 +-
src/esx/esx_vmx.c | 24 ++++++++++++++--------
src/opennebula/one_conf.c | 9 +++++--
src/openvz/openvz_conf.c | 7 +++--
src/openvz/openvz_driver.c | 15 +++++++++----
src/phyp/phyp_driver.c | 2 +-
src/qemu/qemu_conf.c | 14 +++++++++++-
src/qemu/qemu_driver.c | 5 ++-
src/vbox/vbox_tmpl.c | 12 +++++++---
src/xen/xend_internal.c | 9 ++++---
src/xen/xm_internal.c | 11 ++++++---
src/xenapi/xenapi_driver.c | 2 +-
src/xenapi/xenapi_utils.c | 4 +-
14 files changed, 114 insertions(+), 48 deletions(-)
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 78d7a6a..a997e06 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -4203,6 +4203,7 @@ static virDomainDefPtr virDomainDefParseXML(virCapsPtr caps,
int i, n;
long id = -1;
virDomainDefPtr def;
+ unsigned long count;
if (VIR_ALLOC(def) < 0) {
virReportOOMError();
@@ -4287,8 +4288,37 @@ static virDomainDefPtr virDomainDefParseXML(virCapsPtr caps,
&def->mem.swap_hard_limit) < 0)
def->mem.swap_hard_limit = 0;
- if (virXPathULong("string(./vcpu[1])", ctxt, &def->vcpus) < 0)
- def->vcpus = 1;
+ n = virXPathULong("string(./vcpu[1])", ctxt, &count);
+ if (n == -2) {
+ virDomainReportError(VIR_ERR_XML_ERROR, "%s",
+ _("maximum vcpus must be an integer"));
+ goto error;
+ } else if (n < 0) {
+ def->maxvcpus = 1;
+ } else {
+ def->maxvcpus = count;
+ if (def->maxvcpus != count || count == 0) {
+ virDomainReportError(VIR_ERR_XML_ERROR,
+ _("invalid maxvcpus %lu"), count);
+ goto error;
+ }
+ }
+
+ n = virXPathULong("string(./vcpu[1]/@current)", ctxt, &count);
+ if (n == -2) {
+ virDomainReportError(VIR_ERR_XML_ERROR, "%s",
+ _("current vcpus must be an integer"));
+ goto error;
+ } else if (n < 0) {
+ def->vcpus = def->maxvcpus;
+ } else {
+ def->vcpus = count;
+ if (def->vcpus != count || count == 0 || def->maxvcpus < count) {
+ virDomainReportError(VIR_ERR_XML_ERROR,
+ _("invalid current vcpus %lu"), count);
+ goto error;
+ }
+ }
tmp = virXPathString("string(./vcpu[1]/@cpuset)", ctxt);
if (tmp) {
@@ -6462,17 +6492,18 @@ char *virDomainDefFormat(virDomainDefPtr def,
if (def->cpumask[n] != 1)
allones = 0;
- if (allones) {
- virBufferVSprintf(&buf, " <vcpu>%lu</vcpu>\n", def->vcpus);
- } else {
+ virBufferAddLit(&buf, " <vcpu");
+ if (!allones) {
char *cpumask = NULL;
if ((cpumask =
virDomainCpuSetFormat(def->cpumask, def->cpumasklen)) == NULL)
goto cleanup;
- virBufferVSprintf(&buf, " <vcpu cpuset='%s'>%lu</vcpu>\n",
- cpumask, def->vcpus);
+ virBufferVSprintf(&buf, " cpuset='%s'", cpumask);
VIR_FREE(cpumask);
}
+ if (def->vcpus != def->maxvcpus)
+ virBufferVSprintf(&buf, " current='%u'", def->vcpus);
+ virBufferVSprintf(&buf, ">%u</vcpu>\n", def->maxvcpus);
if (def->os.bootloader) {
virBufferEscapeString(&buf, " <bootloader>%s</bootloader>\n",
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index db09c23..5499f28 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -885,7 +885,8 @@ struct _virDomainDef {
unsigned long min_guarantee;
unsigned long swap_hard_limit;
} mem;
- unsigned long vcpus;
+ unsigned short vcpus;
+ unsigned short maxvcpus;
int cpumasklen;
char *cpumask;
diff --git a/src/esx/esx_vmx.c b/src/esx/esx_vmx.c
index 7ec8c0e..0a26614 100644
--- a/src/esx/esx_vmx.c
+++ b/src/esx/esx_vmx.c
@@ -50,7 +50,7 @@ def->uuid = <value> <=> uuid.bios = "<value>"
def->name = <value> <=> displayName = "<value>"
def->mem.max_balloon = <value kilobyte> <=> memsize = "<value megabyte>" # must be a multiple of 4, defaults to 32
def->mem.cur_balloon = <value kilobyte> <=> sched.mem.max = "<value megabyte>" # defaults to "unlimited" -> def->mem.cur_balloon = def->mem.max_balloon
-def->vcpus = <value> <=> numvcpus = "<value>" # must be 1 or a multiple of 2, defaults to 1
+def->maxvcpus = <value> <=> numvcpus = "<value>" # must be 1 or a multiple of 2, defaults to 1
def->cpumask = <uint list> <=> sched.cpu.affinity = "<uint list>"
@@ -1075,7 +1075,7 @@ esxVMX_ParseConfig(esxVMX_Context *ctx, virCapsPtr caps, const char *vmx,
goto cleanup;
}
- def->vcpus = numvcpus;
+ def->maxvcpus = def->vcpus = numvcpus;
/* vmx:sched.cpu.affinity -> def:cpumask */
// VirtualMachine:config.cpuAffinity.affinitySet
@@ -2609,16 +2609,22 @@ esxVMX_FormatConfig(esxVMX_Context *ctx, virCapsPtr caps, virDomainDefPtr def,
(int)(def->mem.cur_balloon / 1024));
}
- /* def:vcpus -> vmx:numvcpus */
- if (def->vcpus <= 0 || (def->vcpus % 2 != 0 && def->vcpus != 1)) {
+ /* def:maxvcpus -> vmx:numvcpus */
+ if (def->vcpus != def->maxvcpus) {
+ ESX_ERROR(VIR_ERR_CONFIG_UNSUPPORTED,
+ _("No support for domain XML entry 'vcpu' attribute "
+ "'current'"));
+ goto cleanup;
+ }
+ if (def->maxvcpus <= 0 || (def->maxvcpus % 2 != 0 && def->maxvcpus != 1)) {
ESX_ERROR(VIR_ERR_INTERNAL_ERROR,
_("Expecting domain XML entry 'vcpu' to be an unsigned "
"integer (1 or a multiple of 2) but found %d"),
- (int)def->vcpus);
+ def->maxvcpus);
goto cleanup;
}
- virBufferVSprintf(&buffer, "numvcpus = \"%d\"\n", (int)def->vcpus);
+ virBufferVSprintf(&buffer, "numvcpus = \"%d\"\n", def->maxvcpus);
/* def:cpumask -> vmx:sched.cpu.affinity */
if (def->cpumasklen > 0) {
@@ -2632,11 +2638,11 @@ esxVMX_FormatConfig(esxVMX_Context *ctx, virCapsPtr caps, virDomainDefPtr def,
}
}
- if (sched_cpu_affinity_length < def->vcpus) {
+ if (sched_cpu_affinity_length < def->maxvcpus) {
ESX_ERROR(VIR_ERR_INTERNAL_ERROR,
_("Expecting domain XML attribute 'cpuset' of entry "
- "'vcpu' to contains at least %d CPU(s)"),
- (int)def->vcpus);
+ "'vcpu' to contain at least %d CPU(s)"),
+ def->maxvcpus);
goto cleanup;
}
diff --git a/src/opennebula/one_conf.c b/src/opennebula/one_conf.c
index 44e28dc..2079c51 100644
--- a/src/opennebula/one_conf.c
+++ b/src/opennebula/one_conf.c
@@ -1,5 +1,7 @@
/*----------------------------------------------------------------------------------*/
-/* Copyright 2002-2009, Distributed Systems Architecture Group, Universidad
+/*
+ * Copyright (C) 2010 Red Hat, Inc.
+ * Copyright 2002-2009, Distributed Systems Architecture Group, Universidad
* Complutense de Madrid (dsa-research.org)
*
* This library is free software; you can redistribute it and/or
@@ -169,9 +171,10 @@ char* xmlOneTemplate(virDomainDefPtr def)
{
int i;
virBuffer buf= VIR_BUFFER_INITIALIZER;
- virBufferVSprintf(&buf,"#OpenNebula Template automatically generated by libvirt\nNAME = %s\nCPU = %ld\nMEMORY = %ld\n",
+ virBufferVSprintf(&buf,"#OpenNebula Template automatically generated "
+ "by libvirt\nNAME = %s\nCPU = %d\nMEMORY = %ld\n",
def->name,
- def->vcpus,
+ def->maxvcpus,
(def->mem.max_balloon)/1024);
/*Optional Booting OpenNebula Information:*/
diff --git a/src/openvz/openvz_conf.c b/src/openvz/openvz_conf.c
index ec11bbc..c84a6f3 100644
--- a/src/openvz/openvz_conf.c
+++ b/src/openvz/openvz_conf.c
@@ -507,11 +507,12 @@ int openvzLoadDomains(struct openvz_driver *driver) {
veid);
goto cleanup;
} else if (ret > 0) {
- dom->def->vcpus = strtoI(temp);
+ dom->def->maxvcpus = strtoI(temp);
}
- if (ret == 0 || dom->def->vcpus == 0)
- dom->def->vcpus = openvzGetNodeCPUs();
+ if (ret == 0 || dom->def->maxvcpus == 0)
+ dom->def->maxvcpus = openvzGetNodeCPUs();
+ dom->def->vcpus = dom->def->maxvcpus;
/* XXX load rest of VM config data .... */
diff --git a/src/openvz/openvz_driver.c b/src/openvz/openvz_driver.c
index 0f3cfdf..b7c2754 100644
--- a/src/openvz/openvz_driver.c
+++ b/src/openvz/openvz_driver.c
@@ -925,8 +925,13 @@ openvzDomainDefineXML(virConnectPtr conn, const char *xml)
if (openvzDomainSetNetworkConfig(conn, vm->def) < 0)
goto cleanup;
- if (vm->def->vcpus > 0) {
- if (openvzDomainSetVcpusInternal(vm, vm->def->vcpus) < 0) {
+ if (vm->def->vcpus != vm->def->maxvcpus) {
+ openvzError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+ _("current vcpu count must equal maximum"));
+ goto cleanup;
+ }
+ if (vm->def->maxvcpus > 0) {
+ if (openvzDomainSetVcpusInternal(vm, vm->def->maxvcpus) < 0) {
openvzError(VIR_ERR_INTERNAL_ERROR, "%s",
_("Could not set number of virtual cpu"));
goto cleanup;
@@ -1019,8 +1024,8 @@ openvzDomainCreateXML(virConnectPtr conn, const char *xml,
vm->def->id = vm->pid;
vm->state = VIR_DOMAIN_RUNNING;
- if (vm->def->vcpus > 0) {
- if (openvzDomainSetVcpusInternal(vm, vm->def->vcpus) < 0) {
+ if (vm->def->maxvcpus > 0) {
+ if (openvzDomainSetVcpusInternal(vm, vm->def->maxvcpus) < 0) {
openvzError(VIR_ERR_INTERNAL_ERROR, "%s",
_("Could not set number of virtual cpu"));
goto cleanup;
@@ -1249,7 +1254,7 @@ static int openvzDomainSetVcpusInternal(virDomainObjPtr vm,
return -1;
}
- vm->def->vcpus = nvcpus;
+ vm->def->maxvcpus = vm->def->vcpus = nvcpus;
return 0;
}
diff --git a/src/phyp/phyp_driver.c b/src/phyp/phyp_driver.c
index e284ae0..3d0ed11 100644
--- a/src/phyp/phyp_driver.c
+++ b/src/phyp/phyp_driver.c
@@ -3540,7 +3540,7 @@ phypDomainDumpXML(virDomainPtr dom, int flags)
goto err;
}
- if ((def.vcpus =
+ if ((def.maxvcpus = def.vcpus =
phypGetLparCPU(dom->conn, managed_system, dom->id)) == 0) {
VIR_ERROR0(_("Unable to determine domain's CPU."));
goto err;
diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c
index 83c0f83..38c8351 100644
--- a/src/qemu/qemu_conf.c
+++ b/src/qemu/qemu_conf.c
@@ -3711,7 +3711,7 @@ qemuBuildSmpArgStr(const virDomainDefPtr def,
{
virBuffer buf = VIR_BUFFER_INITIALIZER;
- virBufferVSprintf(&buf, "%lu", def->vcpus);
+ virBufferVSprintf(&buf, "%u", def->vcpus);
if ((qemuCmdFlags & QEMUD_CMD_FLAG_SMP_TOPOLOGY)) {
/* sockets, cores, and threads are either all zero
@@ -3722,11 +3722,18 @@ qemuBuildSmpArgStr(const virDomainDefPtr def,
virBufferVSprintf(&buf, ",threads=%u", def->cpu->threads);
}
else {
- virBufferVSprintf(&buf, ",sockets=%lu", def->vcpus);
+ virBufferVSprintf(&buf, ",sockets=%u", def->maxvcpus);
virBufferVSprintf(&buf, ",cores=%u", 1);
virBufferVSprintf(&buf, ",threads=%u", 1);
}
}
+ if (def->vcpus != def->maxvcpus) {
+ virBufferFreeAndReset(&buf);
+ qemuReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+ _("setting current vcpu count less than maximum is "
+ "not supported yet"));
+ return NULL;
+ }
if (virBufferError(&buf)) {
virBufferFreeAndReset(&buf);
@@ -6178,6 +6185,8 @@ qemuParseCommandLineSmp(virDomainDefPtr dom,
}
}
+ dom->maxvcpus = dom->vcpus;
+
if (sockets && cores && threads) {
virCPUDefPtr cpu;
@@ -6247,6 +6256,7 @@ virDomainDefPtr qemuParseCommandLine(virCapsPtr caps,
def->id = -1;
def->mem.cur_balloon = def->mem.max_balloon = 64 * 1024;
+ def->maxvcpus = 1;
def->vcpus = 1;
def->clock.offset = VIR_DOMAIN_CLOCK_OFFSET_UTC;
def->features = (1 << VIR_DOMAIN_FEATURE_ACPI)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 7a2ea8f..c66dc04 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -2425,8 +2425,9 @@ qemuDetectVcpuPIDs(struct qemud_driver *driver,
if (ncpupids != vm->def->vcpus) {
qemuReportError(VIR_ERR_INTERNAL_ERROR,
- _("got wrong number of vCPU pids from QEMU monitor. got %d, wanted %d"),
- ncpupids, (int)vm->def->vcpus);
+ _("got wrong number of vCPU pids from QEMU monitor. "
+ "got %d, wanted %d"),
+ ncpupids, vm->def->vcpus);
VIR_FREE(cpupids);
return -1;
}
diff --git a/src/vbox/vbox_tmpl.c b/src/vbox/vbox_tmpl.c
index 0cbe8b3..5a859a4 100644
--- a/src/vbox/vbox_tmpl.c
+++ b/src/vbox/vbox_tmpl.c
@@ -2028,7 +2028,7 @@ static char *vboxDomainDumpXML(virDomainPtr dom, int flags) {
def->mem.max_balloon = memorySize * 1024;
machine->vtbl->GetCPUCount(machine, &CPUCount);
- def->vcpus = CPUCount;
+ def->maxvcpus = def->vcpus = CPUCount;
/* Skip cpumasklen, cpumask, onReboot, onPoweroff, onCrash */
@@ -4598,11 +4598,15 @@ static virDomainPtr vboxDomainDefineXML(virConnectPtr conn, const char *xml) {
def->mem.cur_balloon, (unsigned)rc);
}
- rc = machine->vtbl->SetCPUCount(machine, def->vcpus);
+ if (def->vcpus != def->maxvcpus) {
+ vboxError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+ _("current vcpu count must equal maximum"));
+ }
+ rc = machine->vtbl->SetCPUCount(machine, def->maxvcpus);
if (NS_FAILED(rc)) {
vboxError(VIR_ERR_INTERNAL_ERROR,
- _("could not set the number of virtual CPUs to: %lu, rc=%08x"),
- def->vcpus, (unsigned)rc);
+ _("could not set the number of virtual CPUs to: %u, rc=%08x"),
+ def->maxvcpus, (unsigned)rc);
}
#if VBOX_API_VERSION < 3001
diff --git a/src/xen/xend_internal.c b/src/xen/xend_internal.c
index 5ffc3c8..456b477 100644
--- a/src/xen/xend_internal.c
+++ b/src/xen/xend_internal.c
@@ -2190,7 +2190,8 @@ xenDaemonParseSxpr(virConnectPtr conn,
}
}
- def->vcpus = sexpr_int(root, "domain/vcpus");
+ def->maxvcpus = sexpr_int(root, "domain/vcpus");
+ def->vcpus = def->maxvcpus;
tmp = sexpr_node(root, "domain/on_poweroff");
if (tmp != NULL) {
@@ -5649,7 +5650,7 @@ xenDaemonFormatSxprInput(virDomainInputDefPtr input,
*
* Generate an SEXPR representing the domain configuration.
*
- * Returns the 0 terminatedi S-Expr string or NULL in case of error.
+ * Returns the 0 terminated S-Expr string or NULL in case of error.
* the caller must free() the returned value.
*/
char *
@@ -5666,7 +5667,7 @@ xenDaemonFormatSxpr(virConnectPtr conn,
virBufferVSprintf(&buf, "(name '%s')", def->name);
virBufferVSprintf(&buf, "(memory %lu)(maxmem %lu)",
def->mem.cur_balloon/1024, def->mem.max_balloon/1024);
- virBufferVSprintf(&buf, "(vcpus %lu)", def->vcpus);
+ virBufferVSprintf(&buf, "(vcpus %u)", def->maxvcpus);
if (def->cpumask) {
char *ranges = virDomainCpuSetFormat(def->cpumask, def->cpumasklen);
@@ -5761,7 +5762,7 @@ xenDaemonFormatSxpr(virConnectPtr conn,
else
virBufferVSprintf(&buf, "(kernel '%s')", def->os.loader);
- virBufferVSprintf(&buf, "(vcpus %lu)", def->vcpus);
+ virBufferVSprintf(&buf, "(vcpus %u)", def->maxvcpus);
for (i = 0 ; i < def->os.nBootDevs ; i++) {
switch (def->os.bootDevs[i]) {
diff --git a/src/xen/xm_internal.c b/src/xen/xm_internal.c
index 8e42a1c..bf20a64 100644
--- a/src/xen/xm_internal.c
+++ b/src/xen/xm_internal.c
@@ -678,6 +678,7 @@ xenXMDomainConfigParse(virConnectPtr conn, virConfPtr conf) {
int i;
const char *defaultArch, *defaultMachine;
int vmlocaltime = 0;
+ unsigned long count;
if (VIR_ALLOC(def) < 0) {
virReportOOMError();
@@ -770,9 +771,11 @@ xenXMDomainConfigParse(virConnectPtr conn, virConfPtr conf) {
def->mem.cur_balloon *= 1024;
def->mem.max_balloon *= 1024;
-
- if (xenXMConfigGetULong(conf, "vcpus", &def->vcpus, 1) < 0)
+ if (xenXMConfigGetULong(conf, "vcpus", &count, 1) < 0 ||
+ (unsigned short) count != count)
goto cleanup;
+ def->maxvcpus = count;
+ def->vcpus = def->maxvcpus;
if (xenXMConfigGetString(conf, "cpus", &str, NULL) < 0)
goto cleanup;
@@ -1650,7 +1653,7 @@ int xenXMDomainSetVcpus(virDomainPtr domain, unsigned int vcpus) {
if (!(entry = virHashLookup(priv->configCache, filename)))
goto cleanup;
- entry->def->vcpus = vcpus;
+ entry->def->maxvcpus = entry->def->vcpus = vcpus;
/* If this fails, should we try to undo our changes to the
* in-memory representation of the config file. I say not!
@@ -2241,7 +2244,7 @@ virConfPtr xenXMDomainConfigFormat(virConnectPtr conn,
if (xenXMConfigSetInt(conf, "memory", def->mem.cur_balloon / 1024) < 0)
goto no_memory;
- if (xenXMConfigSetInt(conf, "vcpus", def->vcpus) < 0)
+ if (xenXMConfigSetInt(conf, "vcpus", def->maxvcpus) < 0)
goto no_memory;
if ((def->cpumask != NULL) &&
diff --git a/src/xenapi/xenapi_driver.c b/src/xenapi/xenapi_driver.c
index 7d4ab8d..5ccdede 100644
--- a/src/xenapi/xenapi_driver.c
+++ b/src/xenapi/xenapi_driver.c
@@ -1335,7 +1335,7 @@ xenapiDomainDumpXML (virDomainPtr dom, int flags ATTRIBUTE_UNUSED)
} else {
defPtr->mem.cur_balloon = memory;
}
- defPtr->vcpus = xenapiDomainGetMaxVcpus(dom);
+ defPtr->maxvcpus = defPtr->vcpus = xenapiDomainGetMaxVcpus(dom);
enum xen_on_normal_exit action;
if (xen_vm_get_actions_after_shutdown(session, &action, vm)) {
defPtr->onPoweroff = xenapiNormalExitEnum2virDomainLifecycle(action);
diff --git a/src/xenapi/xenapi_utils.c b/src/xenapi/xenapi_utils.c
index be55491..a7e2a4b 100644
--- a/src/xenapi/xenapi_utils.c
+++ b/src/xenapi/xenapi_utils.c
@@ -510,8 +510,8 @@ createVMRecordFromXml (virConnectPtr conn, virDomainDefPtr def,
else
(*record)->memory_dynamic_max = (*record)->memory_static_max;
- if (def->vcpus) {
- (*record)->vcpus_max = (int64_t) def->vcpus;
+ if (def->maxvcpus) {
+ (*record)->vcpus_max = (int64_t) def->maxvcpus;
(*record)->vcpus_at_startup = (int64_t) def->vcpus;
}
if (def->onPoweroff)
--
1.7.2.3
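
A minimal standalone sketch (not libvirt code; the struct and function names are invented for illustration) of the pattern the driver hunks above repeat: a configuration format that stores only one CPU count is treated as the maximum, checked against the width of the struct field, and the current count is initialized to match.

#include <stdio.h>

/* stand-in for the vcpus/maxvcpus fields of virDomainDef */
struct fake_def {
    unsigned short vcpus;       /* current vcpu count */
    unsigned short maxvcpus;    /* boot-time maximum */
};

static int
set_vcpus_from_config(struct fake_def *def, unsigned long count)
{
    /* reject zero and anything that would overflow the field */
    if (count == 0 || (unsigned short) count != count)
        return -1;
    def->maxvcpus = count;
    def->vcpus = def->maxvcpus; /* the format has no separate current count */
    return 0;
}

int main(void)
{
    struct fake_def def = { 0, 0 };
    if (set_vcpus_from_config(&def, 4) == 0)
        printf("vcpus=%u maxvcpus=%u\n", def.vcpus, def.maxvcpus);
    return 0;
}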


@ -1,131 +0,0 @@
From 193cc4abbb6c2fc5557d3699f86ff0103d5a21ef Mon Sep 17 00:00:00 2001
From: David Allan <dallan@redhat.com>
Date: Tue, 19 May 2009 16:47:31 -0400
Subject: [PATCH 8/8] Step 8 of 8 Add virsh support
---
src/virsh.c | 103 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 103 insertions(+), 0 deletions(-)
diff --git a/src/virsh.c b/src/virsh.c
index cb32ede..ab2a2b7 100644
--- a/src/virsh.c
+++ b/src/virsh.c
@@ -2962,6 +2962,106 @@ cmdPoolCreate(vshControl *ctl, const vshCmd *cmd)
/*
+ * "nodedev-create" command
+ */
+static const vshCmdInfo info_node_device_create[] = {
+ {"help", N_("create a device defined by an XML file on the node")},
+ {"desc", N_("Create a device on the node. Note that this "
+ "command creates devices on the physical host "
+ "that can then be assigned to a virtual machine.")},
+ {NULL, NULL}
+};
+
+static const vshCmdOptDef opts_node_device_create[] = {
+ {"file", VSH_OT_DATA, VSH_OFLAG_REQ,
+ N_("file containing an XML description of the device")},
+ {NULL, 0, 0, NULL}
+};
+
+static int
+cmdNodeDeviceCreate(vshControl *ctl, const vshCmd *cmd)
+{
+ virNodeDevicePtr dev = NULL;
+ char *from;
+ int found = 0;
+ int ret = TRUE;
+ char *buffer;
+
+ if (!vshConnectionUsability(ctl, ctl->conn, TRUE))
+ return FALSE;
+
+ from = vshCommandOptString(cmd, "file", &found);
+ if (!found) {
+ return FALSE;
+ }
+
+ if (virFileReadAll(from, VIRSH_MAX_XML_FILE, &buffer) < 0) {
+ return FALSE;
+ }
+
+ dev = virNodeDeviceCreateXML(ctl->conn, buffer, 0);
+ free (buffer);
+
+ if (dev != NULL) {
+ vshPrint(ctl, _("Node device %s created from %s\n"),
+ virNodeDeviceGetName(dev), from);
+ } else {
+ vshError(ctl, FALSE, _("Failed to create node device from %s"), from);
+ ret = FALSE;
+ }
+
+ return ret;
+}
+
+
+/*
+ * "nodedev-destroy" command
+ */
+static const vshCmdInfo info_node_device_destroy[] = {
+ {"help", N_("destroy a device on the node")},
+ {"desc", N_("Destroy a device on the node. Note that this "
+ "command destroys devices on the physical host")},
+ {NULL, NULL}
+};
+
+static const vshCmdOptDef opts_node_device_destroy[] = {
+ {"name", VSH_OT_DATA, VSH_OFLAG_REQ,
+ N_("name of the device to be destroyed")},
+ {NULL, 0, 0, NULL}
+};
+
+static int
+cmdNodeDeviceDestroy(vshControl *ctl, const vshCmd *cmd)
+{
+ virNodeDevicePtr dev = NULL;
+ int ret = TRUE;
+ int found = 0;
+ char *name;
+
+ if (!vshConnectionUsability(ctl, ctl->conn, TRUE)) {
+ return FALSE;
+ }
+
+ name = vshCommandOptString(cmd, "name", &found);
+ if (!found) {
+ return FALSE;
+ }
+
+ dev = virNodeDeviceLookupByName(ctl->conn, name);
+
+ if (virNodeDeviceDestroy(dev) == 0) {
+ vshPrint(ctl, _("Destroyed node device '%s'\n"), name);
+ } else {
+ vshError(ctl, FALSE, _("Failed to destroy node device '%s'"), name);
+ ret = FALSE;
+ }
+
+ virNodeDeviceFree(dev);
+ return ret;
+}
+
+
+/*
* XML Building helper for pool-define-as and pool-create-as
*/
static const vshCmdOptDef opts_pool_X_as[] = {
@@ -5895,6 +5996,8 @@ static const vshCmdDef commands[] = {
{"nodedev-dettach", cmdNodeDeviceDettach, opts_node_device_dettach, info_node_device_dettach},
{"nodedev-reattach", cmdNodeDeviceReAttach, opts_node_device_reattach, info_node_device_reattach},
{"nodedev-reset", cmdNodeDeviceReset, opts_node_device_reset, info_node_device_reset},
+ {"nodedev-create", cmdNodeDeviceCreate, opts_node_device_create, info_node_device_create},
+ {"nodedev-destroy", cmdNodeDeviceDestroy, opts_node_device_destroy, info_node_device_destroy},
{"pool-autostart", cmdPoolAutostart, opts_pool_autostart, info_pool_autostart},
{"pool-build", cmdPoolBuild, opts_pool_build, info_pool_build},
--
1.6.0.6


@ -0,0 +1,197 @@
From 6c9e6b956453d0f0c4ff542ef8a184d663a39266 Mon Sep 17 00:00:00 2001
From: Eric Blake <eblake@redhat.com>
Date: Mon, 4 Oct 2010 17:01:12 -0600
Subject: [PATCH 09/15] vcpu: support all flags in test driver
* src/test/test_driver.c (testDomainGetVcpusFlags)
(testDomainSetVcpusFlags): Support all flags.
(testDomainUpdateVCPUs): Update cpu count here.
---
src/test/test_driver.c | 128 ++++++++++++++++++++++++++++++++++++++++-------
1 files changed, 109 insertions(+), 19 deletions(-)
diff --git a/src/test/test_driver.c b/src/test/test_driver.c
index b70c80d..a9d3d89 100644
--- a/src/test/test_driver.c
+++ b/src/test/test_driver.c
@@ -450,6 +450,7 @@ testDomainUpdateVCPUs(virConnectPtr conn,
goto cleanup;
}
+ dom->def->vcpus = nvcpus;
ret = 0;
cleanup:
return ret;
@@ -2032,12 +2033,51 @@ cleanup:
static int
testDomainGetVcpusFlags(virDomainPtr domain, unsigned int flags)
{
- if (flags != (VIR_DOMAIN_VCPU_LIVE | VIR_DOMAIN_VCPU_MAXIMUM)) {
- testError(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"), flags);
+ testConnPtr privconn = domain->conn->privateData;
+ virDomainObjPtr vm;
+ virDomainDefPtr def;
+ int ret = -1;
+
+ virCheckFlags(VIR_DOMAIN_VCPU_LIVE |
+ VIR_DOMAIN_VCPU_CONFIG |
+ VIR_DOMAIN_VCPU_MAXIMUM, -1);
+
+ /* Exactly one of LIVE or CONFIG must be set. */
+ if (!(flags & VIR_DOMAIN_VCPU_LIVE) == !(flags & VIR_DOMAIN_VCPU_CONFIG)) {
+ testError(VIR_ERR_INVALID_ARG,
+ _("invalid flag combination: (0x%x)"), flags);
return -1;
}
- return testGetMaxVCPUs(domain->conn, "test");
+ testDriverLock(privconn);
+ vm = virDomainFindByUUID(&privconn->domains, domain->uuid);
+ testDriverUnlock(privconn);
+
+ if (!vm) {
+ char uuidstr[VIR_UUID_STRING_BUFLEN];
+ virUUIDFormat(domain->uuid, uuidstr);
+ testError(VIR_ERR_NO_DOMAIN,
+ _("no domain with matching uuid '%s'"), uuidstr);
+ goto cleanup;
+ }
+
+ if (flags & VIR_DOMAIN_VCPU_LIVE) {
+ if (!virDomainObjIsActive(vm)) {
+ testError(VIR_ERR_OPERATION_INVALID, "%s",
+ _("domain not active"));
+ goto cleanup;
+ }
+ def = vm->def;
+ } else {
+ def = vm->newDef ? vm->newDef : vm->def;
+ }
+
+ ret = (flags & VIR_DOMAIN_VCPU_MAXIMUM) ? def->maxvcpus : def->vcpus;
+
+cleanup:
+ if (vm)
+ virDomainObjUnlock(vm);
+ return ret;
}
static int
@@ -2053,21 +2093,30 @@ testDomainSetVcpusFlags(virDomainPtr domain, unsigned int nrCpus,
{
testConnPtr privconn = domain->conn->privateData;
virDomainObjPtr privdom = NULL;
+ virDomainDefPtr def;
int ret = -1, maxvcpus;
- if (flags != VIR_DOMAIN_VCPU_LIVE) {
- testError(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"), flags);
+ virCheckFlags(VIR_DOMAIN_VCPU_LIVE |
+ VIR_DOMAIN_VCPU_CONFIG |
+ VIR_DOMAIN_VCPU_MAXIMUM, -1);
+
+ /* At least one of LIVE or CONFIG must be set. MAXIMUM cannot be
+ * mixed with LIVE. */
+ if ((flags & (VIR_DOMAIN_VCPU_LIVE | VIR_DOMAIN_VCPU_CONFIG)) == 0 ||
+ (flags & (VIR_DOMAIN_VCPU_MAXIMUM | VIR_DOMAIN_VCPU_LIVE)) ==
+ (VIR_DOMAIN_VCPU_MAXIMUM | VIR_DOMAIN_VCPU_LIVE)) {
+ testError(VIR_ERR_INVALID_ARG,
+ _("invalid flag combination: (0x%x)"), flags);
+ return -1;
+ }
+ if (!nrCpus || (maxvcpus = testGetMaxVCPUs(domain->conn, NULL)) < nrCpus) {
+ testError(VIR_ERR_INVALID_ARG,
+ _("argument out of range: %d"), nrCpus);
return -1;
}
-
- /* Do this first before locking */
- maxvcpus = testDomainGetMaxVcpus(domain);
- if (maxvcpus < 0)
- goto cleanup;
testDriverLock(privconn);
- privdom = virDomainFindByName(&privconn->domains,
- domain->name);
+ privdom = virDomainFindByUUID(&privconn->domains, domain->uuid);
testDriverUnlock(privconn);
if (privdom == NULL) {
@@ -2075,13 +2124,17 @@ testDomainSetVcpusFlags(virDomainPtr domain, unsigned int nrCpus,
goto cleanup;
}
- if (!virDomainObjIsActive(privdom)) {
+ if (!virDomainObjIsActive(privdom) && (flags & VIR_DOMAIN_VCPU_LIVE)) {
testError(VIR_ERR_OPERATION_INVALID,
"%s", _("cannot hotplug vcpus for an inactive domain"));
goto cleanup;
}
- /* We allow more cpus in guest than host */
+ /* We allow more cpus in guest than host, but not more than the
+ * domain's starting limit. */
+ if ((flags & (VIR_DOMAIN_VCPU_MAXIMUM | VIR_DOMAIN_VCPU_LIVE)) ==
+ VIR_DOMAIN_VCPU_LIVE && privdom->def->maxvcpus < maxvcpus)
+ maxvcpus = privdom->def->maxvcpus;
if (nrCpus > maxvcpus) {
testError(VIR_ERR_INVALID_ARG,
"requested cpu amount exceeds maximum (%d > %d)",
@@ -2089,12 +2142,49 @@ testDomainSetVcpusFlags(virDomainPtr domain, unsigned int nrCpus,
goto cleanup;
}
- /* Update VCPU state for the running domain */
- if (testDomainUpdateVCPUs(domain->conn, privdom, nrCpus, 0) < 0)
- goto cleanup;
+ switch (flags) {
+ case VIR_DOMAIN_VCPU_MAXIMUM | VIR_DOMAIN_VCPU_CONFIG:
+ def = privdom->def;
+ if (virDomainObjIsActive(privdom)) {
+ if (privdom->newDef)
+ def = privdom->newDef;
+ else {
+ testError(VIR_ERR_OPERATION_INVALID, "%s",
+ _("no persistent state"));
+ goto cleanup;
+ }
+ }
+ def->maxvcpus = nrCpus;
+ if (nrCpus < def->vcpus)
+ def->vcpus = nrCpus;
+ ret = 0;
+ break;
- privdom->def->vcpus = nrCpus;
- ret = 0;
+ case VIR_DOMAIN_VCPU_CONFIG:
+ def = privdom->def;
+ if (virDomainObjIsActive(privdom)) {
+ if (privdom->newDef)
+ def = privdom->newDef;
+ else {
+ testError(VIR_ERR_OPERATION_INVALID, "%s",
+ _("no persistent state"));
+ goto cleanup;
+ }
+ }
+ def->vcpus = nrCpus;
+ ret = 0;
+ break;
+
+ case VIR_DOMAIN_VCPU_LIVE:
+ ret = testDomainUpdateVCPUs(domain->conn, privdom, nrCpus, 0);
+ break;
+
+ case VIR_DOMAIN_VCPU_LIVE | VIR_DOMAIN_VCPU_CONFIG:
+ ret = testDomainUpdateVCPUs(domain->conn, privdom, nrCpus, 0);
+ if (ret == 0 && privdom->newDef)
+ privdom->newDef->vcpus = nrCpus;
+ break;
+ }
cleanup:
if (privdom)
--
1.7.2.3
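
A minimal standalone sketch of the flag validation used by the getter and setter above. The numeric values mirror the public enum (VIR_DOMAIN_VCPU_LIVE, _CONFIG, _MAXIMUM) but are assumed here rather than taken from the libvirt headers:

#include <stdbool.h>
#include <stdio.h>

enum { VCPU_LIVE = 1, VCPU_CONFIG = 2, VCPU_MAXIMUM = 4 };

/* getter rule: exactly one of LIVE or CONFIG */
static bool
valid_get_flags(unsigned int flags)
{
    return !(flags & VCPU_LIVE) != !(flags & VCPU_CONFIG);
}

/* setter rule: at least one of LIVE or CONFIG, and MAXIMUM never with LIVE */
static bool
valid_set_flags(unsigned int flags)
{
    if ((flags & (VCPU_LIVE | VCPU_CONFIG)) == 0)
        return false;
    return (flags & (VCPU_MAXIMUM | VCPU_LIVE)) !=
           (VCPU_MAXIMUM | VCPU_LIVE);
}

int main(void)
{
    printf("%d %d\n", valid_get_flags(VCPU_LIVE),                /* 1 */
           valid_get_flags(VCPU_LIVE | VCPU_CONFIG));            /* 0 */
    printf("%d\n", valid_set_flags(VCPU_MAXIMUM | VCPU_CONFIG)); /* 1 */
    return 0;
}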


@ -0,0 +1,122 @@
From d67c189e80e6aef7adf13e5763365555cfc1a02a Mon Sep 17 00:00:00 2001
From: Eric Blake <eblake@redhat.com>
Date: Wed, 29 Sep 2010 15:58:47 -0600
Subject: [PATCH 10/15] vcpu: improve vcpu support in qemu command line
* src/qemu/qemu_conf.c (qemuParseCommandLineSmp): Distinguish
between vcpus and maxvcpus, for new enough qemu.
* tests/qemuargv2xmltest.c (mymain): Add new test.
* tests/qemuxml2argvtest.c (mymain): Likewise.
* tests/qemuxml2xmltest.c (mymain): Likewise.
* tests/qemuxml2argvdata/qemuxml2argv-smp.args: New file.
---
src/qemu/qemu_conf.c | 13 +++++++++----
tests/qemuargv2xmltest.c | 2 ++
tests/qemuxml2argvdata/qemuxml2argv-smp.args | 1 +
tests/qemuxml2argvtest.c | 2 ++
tests/qemuxml2xmltest.c | 2 ++
5 files changed, 16 insertions(+), 4 deletions(-)
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-smp.args
diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c
index 38c8351..ffe184b 100644
--- a/src/qemu/qemu_conf.c
+++ b/src/qemu/qemu_conf.c
@@ -3714,6 +3714,8 @@ qemuBuildSmpArgStr(const virDomainDefPtr def,
virBufferVSprintf(&buf, "%u", def->vcpus);
if ((qemuCmdFlags & QEMUD_CMD_FLAG_SMP_TOPOLOGY)) {
+ if (def->vcpus != def->maxvcpus)
+ virBufferVSprintf(&buf, ",maxcpus=%u", def->maxvcpus);
/* sockets, cores, and threads are either all zero
* or all non-zero, thus checking one of them is enough */
if (def->cpu && def->cpu->sockets) {
@@ -3726,12 +3728,12 @@ qemuBuildSmpArgStr(const virDomainDefPtr def,
virBufferVSprintf(&buf, ",cores=%u", 1);
virBufferVSprintf(&buf, ",threads=%u", 1);
}
- }
- if (def->vcpus != def->maxvcpus) {
+ } else if (def->vcpus != def->maxvcpus) {
virBufferFreeAndReset(&buf);
+ /* FIXME - consider hot-unplugging cpus after boot for older qemu */
qemuReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
_("setting current vcpu count less than maximum is "
- "not supported yet"));
+ "not supported with this QEMU binary"));
return NULL;
}
@@ -6153,6 +6155,7 @@ qemuParseCommandLineSmp(virDomainDefPtr dom,
unsigned int sockets = 0;
unsigned int cores = 0;
unsigned int threads = 0;
+ unsigned int maxcpus = 0;
int i;
int nkws;
char **kws;
@@ -6180,12 +6183,14 @@ qemuParseCommandLineSmp(virDomainDefPtr dom,
cores = n;
else if (STREQ(kws[i], "threads"))
threads = n;
+ else if (STREQ(kws[i], "maxcpus"))
+ maxcpus = n;
else
goto syntax;
}
}
- dom->maxvcpus = dom->vcpus;
+ dom->maxvcpus = maxcpus ? maxcpus : dom->vcpus;
if (sockets && cores && threads) {
virCPUDefPtr cpu;
diff --git a/tests/qemuargv2xmltest.c b/tests/qemuargv2xmltest.c
index 4f9ec84..d941b0b 100644
--- a/tests/qemuargv2xmltest.c
+++ b/tests/qemuargv2xmltest.c
@@ -221,6 +221,8 @@ mymain(int argc, char **argv)
DO_TEST("hostdev-pci-address");
+ DO_TEST("smp");
+
DO_TEST_FULL("restore-v1", 0, "stdio");
DO_TEST_FULL("restore-v2", 0, "stdio");
DO_TEST_FULL("restore-v2", 0, "exec:cat");
diff --git a/tests/qemuxml2argvdata/qemuxml2argv-smp.args b/tests/qemuxml2argvdata/qemuxml2argv-smp.args
new file mode 100644
index 0000000..3ec8f15
--- /dev/null
+++ b/tests/qemuxml2argvdata/qemuxml2argv-smp.args
@@ -0,0 +1 @@
+LC_ALL=C PATH=/bin HOME=/home/test USER=test LOGNAME=test /usr/bin/qemu -S -M pc -m 214 -smp 1,maxcpus=2,sockets=2,cores=1,threads=1 -nographic -monitor unix:/tmp/test-monitor,server,nowait -no-acpi -boot c -hda /dev/HostVG/QEMUGuest1 -net none -serial none -parallel none -usb
diff --git a/tests/qemuxml2argvtest.c b/tests/qemuxml2argvtest.c
index 92d5b18..551d6c4 100644
--- a/tests/qemuxml2argvtest.c
+++ b/tests/qemuxml2argvtest.c
@@ -385,6 +385,8 @@ mymain(int argc, char **argv)
DO_TEST("qemu-ns", 0);
+ DO_TEST("smp", QEMUD_CMD_FLAG_SMP_TOPOLOGY);
+
free(driver.stateDir);
virCapabilitiesFree(driver.caps);
diff --git a/tests/qemuxml2xmltest.c b/tests/qemuxml2xmltest.c
index a33d435..cdc4390 100644
--- a/tests/qemuxml2xmltest.c
+++ b/tests/qemuxml2xmltest.c
@@ -180,6 +180,8 @@ mymain(int argc, char **argv)
DO_TEST("encrypted-disk");
DO_TEST("memtune");
+ DO_TEST("smp");
+
/* These tests generate different XML */
DO_TEST_DIFFERENT("balloon-device-auto");
DO_TEST_DIFFERENT("channel-virtio-auto");
--
1.7.2.3
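
A rough standalone sketch of how the hunk above composes the -smp value once the QEMUD_CMD_FLAG_SMP_TOPOLOGY capability is present; the real code uses libvirt's virBuffer API, while this sketch uses snprintf and omits the sockets/cores/threads part:

#include <stdio.h>

static void
format_smp(char *buf, size_t len,
           unsigned int vcpus, unsigned int maxvcpus, int has_topology)
{
    int n = snprintf(buf, len, "%u", vcpus);
    if (n < 0 || (size_t) n >= len)
        return;
    if (has_topology && vcpus != maxvcpus)
        snprintf(buf + n, len - n, ",maxcpus=%u", maxvcpus);
    /* without the capability, vcpus != maxvcpus is a configuration error */
}

int main(void)
{
    char buf[64];
    format_smp(buf, sizeof(buf), 1, 2, 1);
    printf("-smp %s\n", buf);   /* prints: -smp 1,maxcpus=2 */
    return 0;
}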


@ -0,0 +1,169 @@
From 28a3605906385cba43df77051dc26e865f237b09 Mon Sep 17 00:00:00 2001
From: Eric Blake <eblake@redhat.com>
Date: Wed, 29 Sep 2010 17:40:45 -0600
Subject: [PATCH 11/15] vcpu: complete vcpu support in qemu driver
* src/qemu/qemu_driver.c (qemudDomainSetVcpusFlags)
(qemudDomainGetVcpusFlags): Support all feasible flag
combinations.
---
src/qemu/qemu_driver.c | 100 ++++++++++++++++++++++++++++++++++++++++-------
1 files changed, 85 insertions(+), 15 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index c66dc04..a9e057f 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -5941,13 +5941,27 @@ qemudDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus,
{
struct qemud_driver *driver = dom->conn->privateData;
virDomainObjPtr vm;
+ virDomainDefPtr def;
const char * type;
int max;
int ret = -1;
- if (flags != VIR_DOMAIN_VCPU_LIVE) {
- qemuReportError(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"),
- flags);
+ virCheckFlags(VIR_DOMAIN_VCPU_LIVE |
+ VIR_DOMAIN_VCPU_CONFIG |
+ VIR_DOMAIN_VCPU_MAXIMUM, -1);
+
+ /* At least one of LIVE or CONFIG must be set. MAXIMUM cannot be
+ * mixed with LIVE. */
+ if ((flags & (VIR_DOMAIN_VCPU_LIVE | VIR_DOMAIN_VCPU_CONFIG)) == 0 ||
+ (flags & (VIR_DOMAIN_VCPU_MAXIMUM | VIR_DOMAIN_VCPU_LIVE)) ==
+ (VIR_DOMAIN_VCPU_MAXIMUM | VIR_DOMAIN_VCPU_LIVE)) {
+ qemuReportError(VIR_ERR_INVALID_ARG,
+ _("invalid flag combination: (0x%x)"), flags);
+ return -1;
+ }
+ if (!nvcpus || (unsigned short) nvcpus != nvcpus) {
+ qemuReportError(VIR_ERR_INVALID_ARG,
+ _("argument out of range: %d"), nvcpus);
return -1;
}
@@ -5966,7 +5980,7 @@ qemudDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus,
if (qemuDomainObjBeginJob(vm) < 0)
goto cleanup;
- if (!virDomainObjIsActive(vm)) {
+ if (!virDomainObjIsActive(vm) && (flags & VIR_DOMAIN_VCPU_LIVE)) {
qemuReportError(VIR_ERR_OPERATION_INVALID,
"%s", _("domain is not running"));
goto endjob;
@@ -5985,6 +5999,11 @@ qemudDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus,
goto endjob;
}
+ if ((flags & (VIR_DOMAIN_VCPU_MAXIMUM | VIR_DOMAIN_VCPU_LIVE)) ==
+ VIR_DOMAIN_VCPU_LIVE && vm->def->maxvcpus < max) {
+ max = vm->def->maxvcpus;
+ }
+
if (nvcpus > max) {
qemuReportError(VIR_ERR_INVALID_ARG,
_("requested vcpus is greater than max allowable"
@@ -5992,7 +6011,49 @@ qemudDomainSetVcpusFlags(virDomainPtr dom, unsigned int nvcpus,
goto endjob;
}
- ret = qemudDomainHotplugVcpus(vm, nvcpus);
+ switch (flags) {
+ case VIR_DOMAIN_VCPU_MAXIMUM | VIR_DOMAIN_VCPU_CONFIG:
+ def = vm->def;
+ if (virDomainObjIsActive(vm)) {
+ if (vm->newDef)
+ def = vm->newDef;
+ else{
+ qemuReportError(VIR_ERR_OPERATION_INVALID, "%s",
+ _("no persistent state"));
+ goto endjob;
+ }
+ }
+ def->maxvcpus = nvcpus;
+ if (nvcpus < vm->newDef->vcpus)
+ def->vcpus = nvcpus;
+ ret = 0;
+ break;
+
+ case VIR_DOMAIN_VCPU_CONFIG:
+ def = vm->def;
+ if (virDomainObjIsActive(vm)) {
+ if (vm->newDef)
+ def = vm->newDef;
+ else {
+ qemuReportError(VIR_ERR_OPERATION_INVALID, "%s",
+ _("no persistent state"));
+ goto endjob;
+ }
+ }
+ def->vcpus = nvcpus;
+ ret = 0;
+ break;
+
+ case VIR_DOMAIN_VCPU_LIVE:
+ ret = qemudDomainHotplugVcpus(vm, nvcpus);
+ break;
+
+ case VIR_DOMAIN_VCPU_LIVE | VIR_DOMAIN_VCPU_CONFIG:
+ ret = qemudDomainHotplugVcpus(vm, nvcpus);
+ if (ret == 0 && vm->newDef)
+ vm->newDef->vcpus = nvcpus;
+ break;
+ }
endjob:
if (qemuDomainObjEndJob(vm) == 0)
@@ -6171,12 +6232,17 @@ qemudDomainGetVcpusFlags(virDomainPtr dom, unsigned int flags)
{
struct qemud_driver *driver = dom->conn->privateData;
virDomainObjPtr vm;
- const char *type;
+ virDomainDefPtr def;
int ret = -1;
- if (flags != (VIR_DOMAIN_VCPU_LIVE | VIR_DOMAIN_VCPU_MAXIMUM)) {
- qemuReportError(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"),
- flags);
+ virCheckFlags(VIR_DOMAIN_VCPU_LIVE |
+ VIR_DOMAIN_VCPU_CONFIG |
+ VIR_DOMAIN_VCPU_MAXIMUM, -1);
+
+ /* Exactly one of LIVE or CONFIG must be set. */
+ if (!(flags & VIR_DOMAIN_VCPU_LIVE) == !(flags & VIR_DOMAIN_VCPU_CONFIG)) {
+ qemuReportError(VIR_ERR_INVALID_ARG,
+ _("invalid flag combination: (0x%x)"), flags);
return -1;
}
@@ -6192,14 +6258,18 @@ qemudDomainGetVcpusFlags(virDomainPtr dom, unsigned int flags)
goto cleanup;
}
- if (!(type = virDomainVirtTypeToString(vm->def->virtType))) {
- qemuReportError(VIR_ERR_INTERNAL_ERROR,
- _("unknown virt type in domain definition '%d'"),
- vm->def->virtType);
- goto cleanup;
+ if (flags & VIR_DOMAIN_VCPU_LIVE) {
+ if (!virDomainObjIsActive(vm)) {
+ qemuReportError(VIR_ERR_OPERATION_INVALID, "%s",
+ _("domain not active"));
+ goto cleanup;
+ }
+ def = vm->def;
+ } else {
+ def = vm->newDef ? vm->newDef : vm->def;
}
- ret = qemudGetMaxVCPUs(NULL, type);
+ ret = (flags & VIR_DOMAIN_VCPU_MAXIMUM) ? def->maxvcpus : def->vcpus;
cleanup:
if (vm)
--
1.7.2.3


@ -0,0 +1,294 @@
From 0fab10e5ed971ab4f960a53e9640b0672f4b8ac3 Mon Sep 17 00:00:00 2001
From: Eric Blake <eblake@redhat.com>
Date: Tue, 5 Oct 2010 08:18:52 -0600
Subject: [PATCH 12/15] vcpu: improve vcpu support in xen command line
This patch series focuses on xendConfigVersion 2 (xm_internal) and 3
(xend_internal), but leaves out changes for xenapi drivers.
See this link for more details about vcpu_avail for xm usage.
http://lists.xensource.com/archives/html/xen-devel/2009-11/msg01061.html
This relies on the fact that def->maxvcpus can be at most 32 with xen.
* src/xen/xend_internal.c (xenDaemonParseSxpr)
(sexpr_to_xend_domain_info, xenDaemonFormatSxpr): Use vcpu_avail
when current vcpus is less than maximum.
* src/xen/xm_internal.c (xenXMDomainConfigParse)
(xenXMDomainConfigFormat): Likewise.
* tests/xml2sexprdata/xml2sexpr-pv-vcpus.sexpr: New file.
* tests/sexpr2xmldata/sexpr2xml-pv-vcpus.sexpr: Likewise.
* tests/sexpr2xmldata/sexpr2xml-pv-vcpus.xml: Likewise.
* tests/xmconfigdata/test-paravirt-vcpu.cfg: Likewise.
* tests/xmconfigdata/test-paravirt-vcpu.xml: Likewise.
* tests/xml2sexprtest.c (mymain): New test.
* tests/sexpr2xmltest.c (mymain): Likewise.
* tests/xmconfigtest.c (mymain): Likewise.
---
src/xen/xend_internal.c | 19 +++++++++++++--
src/xen/xm_internal.c | 10 ++++++-
tests/sexpr2xmldata/sexpr2xml-pv-vcpus.sexpr | 1 +
tests/sexpr2xmldata/sexpr2xml-pv-vcpus.xml | 27 +++++++++++++++++++++
tests/sexpr2xmltest.c | 1 +
tests/xmconfigdata/test-paravirt-vcpu.cfg | 17 +++++++++++++
tests/xmconfigdata/test-paravirt-vcpu.xml | 32 ++++++++++++++++++++++++++
tests/xmconfigtest.c | 1 +
tests/xml2sexprdata/xml2sexpr-pv-vcpus.sexpr | 1 +
tests/xml2sexprtest.c | 1 +
10 files changed, 105 insertions(+), 5 deletions(-)
create mode 100644 tests/sexpr2xmldata/sexpr2xml-pv-vcpus.sexpr
create mode 100644 tests/sexpr2xmldata/sexpr2xml-pv-vcpus.xml
create mode 100644 tests/xmconfigdata/test-paravirt-vcpu.cfg
create mode 100644 tests/xmconfigdata/test-paravirt-vcpu.xml
create mode 100644 tests/xml2sexprdata/xml2sexpr-pv-vcpus.sexpr
diff --git a/src/xen/xend_internal.c b/src/xen/xend_internal.c
index 456b477..dfc6415 100644
--- a/src/xen/xend_internal.c
+++ b/src/xen/xend_internal.c
@@ -44,6 +44,7 @@
#include "xen_hypervisor.h"
#include "xs_internal.h" /* To extract VNC port & Serial console TTY */
#include "memory.h"
+#include "count-one-bits.h"
/* required for cpumap_t */
#include <xen/dom0_ops.h>
@@ -2191,7 +2192,9 @@ xenDaemonParseSxpr(virConnectPtr conn,
}
def->maxvcpus = sexpr_int(root, "domain/vcpus");
- def->vcpus = def->maxvcpus;
+ def->vcpus = count_one_bits(sexpr_int(root, "domain/vcpu_avail"));
+ if (!def->vcpus || def->maxvcpus < def->vcpus)
+ def->vcpus = def->maxvcpus;
tmp = sexpr_node(root, "domain/on_poweroff");
if (tmp != NULL) {
@@ -2433,7 +2436,7 @@ sexpr_to_xend_domain_info(virDomainPtr domain, const struct sexpr *root,
virDomainInfoPtr info)
{
const char *flags;
-
+ int vcpus;
if ((root == NULL) || (info == NULL))
return (-1);
@@ -2464,7 +2467,11 @@ sexpr_to_xend_domain_info(virDomainPtr domain, const struct sexpr *root,
info->state = VIR_DOMAIN_NOSTATE;
}
info->cpuTime = sexpr_float(root, "domain/cpu_time") * 1000000000;
- info->nrVirtCpu = sexpr_int(root, "domain/vcpus");
+ vcpus = sexpr_int(root, "domain/vcpus");
+ info->nrVirtCpu = count_one_bits(sexpr_int(root, "domain/vcpu_avail"));
+ if (!info->nrVirtCpu || vcpus < info->nrVirtCpu)
+ info->nrVirtCpu = vcpus;
+
return (0);
}
@@ -5668,6 +5675,9 @@ xenDaemonFormatSxpr(virConnectPtr conn,
virBufferVSprintf(&buf, "(memory %lu)(maxmem %lu)",
def->mem.cur_balloon/1024, def->mem.max_balloon/1024);
virBufferVSprintf(&buf, "(vcpus %u)", def->maxvcpus);
+ /* Computing the vcpu_avail bitmask works because MAX_VIRT_CPUS is 32. */
+ if (def->vcpus < def->maxvcpus)
+ virBufferVSprintf(&buf, "(vcpu_avail %u)", (1U << def->vcpus) - 1);
if (def->cpumask) {
char *ranges = virDomainCpuSetFormat(def->cpumask, def->cpumasklen);
@@ -5763,6 +5773,9 @@ xenDaemonFormatSxpr(virConnectPtr conn,
virBufferVSprintf(&buf, "(kernel '%s')", def->os.loader);
virBufferVSprintf(&buf, "(vcpus %u)", def->maxvcpus);
+ if (def->vcpus < def->maxvcpus)
+ virBufferVSprintf(&buf, "(vcpu_avail %u)",
+ (1U << def->vcpus) - 1);
for (i = 0 ; i < def->os.nBootDevs ; i++) {
switch (def->os.bootDevs[i]) {
diff --git a/src/xen/xm_internal.c b/src/xen/xm_internal.c
index bf20a64..f7121ab 100644
--- a/src/xen/xm_internal.c
+++ b/src/xen/xm_internal.c
@@ -46,6 +46,7 @@
#include "util.h"
#include "memory.h"
#include "logging.h"
+#include "count-one-bits.h"
#define VIR_FROM_THIS VIR_FROM_XENXM
@@ -772,10 +773,12 @@ xenXMDomainConfigParse(virConnectPtr conn, virConfPtr conf) {
def->mem.max_balloon *= 1024;
if (xenXMConfigGetULong(conf, "vcpus", &count, 1) < 0 ||
- (unsigned short) count != count)
+ MAX_VIRT_CPUS < count)
goto cleanup;
def->maxvcpus = count;
- def->vcpus = def->maxvcpus;
+ if (xenXMConfigGetULong(conf, "vcpu_avail", &count, -1) < 0)
+ goto cleanup;
+ def->vcpus = MIN(count_one_bits(count), def->maxvcpus);
if (xenXMConfigGetString(conf, "cpus", &str, NULL) < 0)
goto cleanup;
@@ -2246,6 +2249,9 @@ virConfPtr xenXMDomainConfigFormat(virConnectPtr conn,
if (xenXMConfigSetInt(conf, "vcpus", def->maxvcpus) < 0)
goto no_memory;
+ if (def->vcpus < def->maxvcpus &&
+ xenXMConfigSetInt(conf, "vcpu_avail", (1U << def->vcpus) - 1) < 0)
+ goto no_memory;
if ((def->cpumask != NULL) &&
((cpus = virDomainCpuSetFormat(def->cpumask,
diff --git a/tests/sexpr2xmldata/sexpr2xml-pv-vcpus.sexpr b/tests/sexpr2xmldata/sexpr2xml-pv-vcpus.sexpr
new file mode 100644
index 0000000..2be6822
--- /dev/null
+++ b/tests/sexpr2xmldata/sexpr2xml-pv-vcpus.sexpr
@@ -0,0 +1 @@
+(domain (domid 6)(name 'pvtest')(memory 420)(maxmem 420)(vcpus 4)(vcpu_avail 3)(uuid '596a5d2171f48fb2e068e2386a5c413e')(on_poweroff 'destroy')(on_reboot 'destroy')(on_crash 'destroy')(image (linux (kernel '/var/lib/xen/vmlinuz.2Dn2YT')(ramdisk '/var/lib/xen/initrd.img.0u-Vhq')(args ' method=http://download.fedora.devel.redhat.com/pub/fedora/linux/core/test/5.91/x86_64/os ')))(device (vbd (dev 'xvda')(uname 'file:/root/some.img')(mode 'w'))))
diff --git a/tests/sexpr2xmldata/sexpr2xml-pv-vcpus.xml b/tests/sexpr2xmldata/sexpr2xml-pv-vcpus.xml
new file mode 100644
index 0000000..0d6bf11
--- /dev/null
+++ b/tests/sexpr2xmldata/sexpr2xml-pv-vcpus.xml
@@ -0,0 +1,27 @@
+<domain type='xen' id='6'>
+ <name>pvtest</name>
+ <uuid>596a5d21-71f4-8fb2-e068-e2386a5c413e</uuid>
+ <memory>430080</memory>
+ <currentMemory>430080</currentMemory>
+ <vcpu current='2'>4</vcpu>
+ <os>
+ <type>linux</type>
+ <kernel>/var/lib/xen/vmlinuz.2Dn2YT</kernel>
+ <initrd>/var/lib/xen/initrd.img.0u-Vhq</initrd>
+ <cmdline> method=http://download.fedora.devel.redhat.com/pub/fedora/linux/core/test/5.91/x86_64/os </cmdline>
+ </os>
+ <clock offset='utc'/>
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>destroy</on_reboot>
+ <on_crash>destroy</on_crash>
+ <devices>
+ <disk type='file' device='disk'>
+ <driver name='file'/>
+ <source file='/root/some.img'/>
+ <target dev='xvda' bus='xen'/>
+ </disk>
+ <console type='pty'>
+ <target type='xen' port='0'/>
+ </console>
+ </devices>
+</domain>
diff --git a/tests/sexpr2xmltest.c b/tests/sexpr2xmltest.c
index d62b44f..f100dd8 100644
--- a/tests/sexpr2xmltest.c
+++ b/tests/sexpr2xmltest.c
@@ -132,6 +132,7 @@ mymain(int argc, char **argv)
DO_TEST("pv-vfb-type-crash", "pv-vfb-type-crash", 3);
DO_TEST("fv-autoport", "fv-autoport", 3);
DO_TEST("pv-bootloader", "pv-bootloader", 1);
+ DO_TEST("pv-vcpus", "pv-vcpus", 1);
DO_TEST("disk-file", "disk-file", 2);
DO_TEST("disk-block", "disk-block", 2);
diff --git a/tests/xmconfigdata/test-paravirt-vcpu.cfg b/tests/xmconfigdata/test-paravirt-vcpu.cfg
new file mode 100644
index 0000000..24c78f4
--- /dev/null
+++ b/tests/xmconfigdata/test-paravirt-vcpu.cfg
@@ -0,0 +1,17 @@
+name = "XenGuest1"
+uuid = "c7a5fdb0-cdaf-9455-926a-d65c16db1809"
+maxmem = 579
+memory = 394
+vcpus = 4
+vcpu_avail = 3
+bootloader = "/usr/bin/pygrub"
+on_poweroff = "destroy"
+on_reboot = "restart"
+on_crash = "restart"
+sdl = 0
+vnc = 1
+vncunused = 1
+vnclisten = "127.0.0.1"
+vncpasswd = "123poi"
+disk = [ "phy:/dev/HostVG/XenGuest1,xvda,w" ]
+vif = [ "mac=00:16:3e:66:94:9c,bridge=br0,script=vif-bridge" ]
diff --git a/tests/xmconfigdata/test-paravirt-vcpu.xml b/tests/xmconfigdata/test-paravirt-vcpu.xml
new file mode 100644
index 0000000..0be9456
--- /dev/null
+++ b/tests/xmconfigdata/test-paravirt-vcpu.xml
@@ -0,0 +1,32 @@
+<domain type='xen'>
+ <name>XenGuest1</name>
+ <uuid>c7a5fdb0-cdaf-9455-926a-d65c16db1809</uuid>
+ <memory>592896</memory>
+ <currentMemory>403456</currentMemory>
+ <vcpu current='2'>4</vcpu>
+ <bootloader>/usr/bin/pygrub</bootloader>
+ <os>
+ <type arch='i686' machine='xenpv'>linux</type>
+ </os>
+ <clock offset='utc'/>
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>restart</on_reboot>
+ <on_crash>restart</on_crash>
+ <devices>
+ <disk type='block' device='disk'>
+ <driver name='phy'/>
+ <source dev='/dev/HostVG/XenGuest1'/>
+ <target dev='xvda' bus='xen'/>
+ </disk>
+ <interface type='bridge'>
+ <mac address='00:16:3e:66:94:9c'/>
+ <source bridge='br0'/>
+ <script path='vif-bridge'/>
+ </interface>
+ <console type='pty'>
+ <target type='xen' port='0'/>
+ </console>
+ <input type='mouse' bus='xen'/>
+ <graphics type='vnc' port='-1' autoport='yes' listen='127.0.0.1' passwd='123poi'/>
+ </devices>
+</domain>
diff --git a/tests/xmconfigtest.c b/tests/xmconfigtest.c
index 221b322..ea00747 100644
--- a/tests/xmconfigtest.c
+++ b/tests/xmconfigtest.c
@@ -210,6 +210,7 @@ mymain(int argc, char **argv)
DO_TEST("paravirt-new-pvfb-vncdisplay", 3);
DO_TEST("paravirt-net-e1000", 3);
DO_TEST("paravirt-net-vifname", 3);
+ DO_TEST("paravirt-vcpu", 2);
DO_TEST("fullvirt-old-cdrom", 1);
DO_TEST("fullvirt-new-cdrom", 2);
DO_TEST("fullvirt-utc", 2);
diff --git a/tests/xml2sexprdata/xml2sexpr-pv-vcpus.sexpr b/tests/xml2sexprdata/xml2sexpr-pv-vcpus.sexpr
new file mode 100644
index 0000000..e886545
--- /dev/null
+++ b/tests/xml2sexprdata/xml2sexpr-pv-vcpus.sexpr
@@ -0,0 +1 @@
+(vm (name 'pvtest')(memory 420)(maxmem 420)(vcpus 4)(vcpu_avail 3)(uuid '596a5d21-71f4-8fb2-e068-e2386a5c413e')(on_poweroff 'destroy')(on_reboot 'destroy')(on_crash 'destroy')(image (linux (kernel '/var/lib/xen/vmlinuz.2Dn2YT')(ramdisk '/var/lib/xen/initrd.img.0u-Vhq')(args ' method=http://download.fedora.devel.redhat.com/pub/fedora/linux/core/test/5.91/x86_64/os ')))(device (vbd (dev 'xvda')(uname 'file:/root/some.img')(mode 'w'))))
\ No newline at end of file
diff --git a/tests/xml2sexprtest.c b/tests/xml2sexprtest.c
index 77cf760..9cf8d39 100644
--- a/tests/xml2sexprtest.c
+++ b/tests/xml2sexprtest.c
@@ -118,6 +118,7 @@ mymain(int argc, char **argv)
DO_TEST("pv-vfb-new", "pv-vfb-new", "pvtest", 3);
DO_TEST("pv-vfb-new-auto", "pv-vfb-new-auto", "pvtest", 3);
DO_TEST("pv-bootloader", "pv-bootloader", "pvtest", 1);
+ DO_TEST("pv-vcpus", "pv-vcpus", "pvtest", 1);
DO_TEST("disk-file", "disk-file", "pvtest", 2);
DO_TEST("disk-block", "disk-block", "pvtest", 2);
--
1.7.2.3
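
A standalone sketch of the vcpu_avail encoding the xen hunks above rely on: when the current count is below the maximum (and therefore below 32), it is written as a bitmask with the low N bits set, and parsed back by counting the set bits and clamping to the maximum:

#include <stdio.h>

/* format direction: only used when vcpus < maxvcpus <= 32, so the
 * shift never reaches the width of unsigned int */
static unsigned int
vcpus_to_avail(unsigned int vcpus)
{
    return (1U << vcpus) - 1;            /* e.g. 2 -> 0x3 */
}

/* parse direction: portable stand-in for gnulib's count_one_bits() */
static unsigned int
avail_to_vcpus(unsigned int avail, unsigned int maxvcpus)
{
    unsigned int count = 0;
    while (avail) {
        count += avail & 1U;
        avail >>= 1;
    }
    if (count == 0 || count > maxvcpus)
        count = maxvcpus;
    return count;
}

int main(void)
{
    printf("avail=0x%x\n", vcpus_to_avail(2));      /* avail=0x3 */
    printf("vcpus=%u\n", avail_to_vcpus(0x3, 4));   /* vcpus=2   */
    return 0;
}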


@ -0,0 +1,216 @@
From 290ea33111be7bdf1f1381b90de33eb0e67c1a15 Mon Sep 17 00:00:00 2001
From: Eric Blake <eblake@redhat.com>
Date: Wed, 6 Oct 2010 17:54:41 -0600
Subject: [PATCH 13/15] vcpu: improve support for getting xen vcpu counts
* src/xen/xen_driver.c (xenUnifiedDomainGetVcpusFlags): Support
more flags.
* src/xen/xend_internal.h (xenDaemonDomainGetVcpusFlags): New
prototype.
* src/xen/xm_internal.h (xenXMDomainGetVcpusFlags): Likewise.
* src/xen/xend_internal.c (virDomainGetVcpusFlags): New function.
* src/xen/xm_internal.c (xenXMDomainGetVcpusFlags): Likewise.
---
src/xen/xen_driver.c | 31 +++++++++++++++++++--------
src/xen/xend_internal.c | 52 +++++++++++++++++++++++++++++++++++++++++++++++
src/xen/xend_internal.h | 2 +
src/xen/xm_internal.c | 47 ++++++++++++++++++++++++++++++++++++++++++
src/xen/xm_internal.h | 1 +
5 files changed, 124 insertions(+), 9 deletions(-)
diff --git a/src/xen/xen_driver.c b/src/xen/xen_driver.c
index d6c9c57..fe2ff86 100644
--- a/src/xen/xen_driver.c
+++ b/src/xen/xen_driver.c
@@ -1142,20 +1142,33 @@ static int
xenUnifiedDomainGetVcpusFlags (virDomainPtr dom, unsigned int flags)
{
GET_PRIVATE(dom->conn);
- int i, ret;
+ int ret;
- if (flags != (VIR_DOMAIN_VCPU_LIVE | VIR_DOMAIN_VCPU_MAXIMUM)) {
- xenUnifiedError(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"),
- flags);
+ virCheckFlags(VIR_DOMAIN_VCPU_LIVE |
+ VIR_DOMAIN_VCPU_CONFIG |
+ VIR_DOMAIN_VCPU_MAXIMUM, -1);
+
+ /* Exactly one of LIVE or CONFIG must be set. */
+ if (!(flags & VIR_DOMAIN_VCPU_LIVE) == !(flags & VIR_DOMAIN_VCPU_CONFIG)) {
+ xenUnifiedError(VIR_ERR_INVALID_ARG,
+ _("invalid flag combination: (0x%x)"), flags);
return -1;
}
- for (i = 0; i < XEN_UNIFIED_NR_DRIVERS; ++i)
- if (priv->opened[i] && drivers[i]->domainGetMaxVcpus) {
- ret = drivers[i]->domainGetMaxVcpus (dom);
- if (ret != 0) return ret;
- }
+ if (priv->opened[XEN_UNIFIED_XEND_OFFSET]) {
+ ret = xenDaemonDomainGetVcpusFlags(dom, flags);
+ if (ret != -2)
+ return ret;
+ }
+ if (priv->opened[XEN_UNIFIED_XM_OFFSET]) {
+ ret = xenXMDomainGetVcpusFlags(dom, flags);
+ if (ret != -2)
+ return ret;
+ }
+ if (flags == (VIR_DOMAIN_VCPU_CONFIG | VIR_DOMAIN_VCPU_MAXIMUM))
+ return xenHypervisorGetVcpuMax(dom);
+ xenUnifiedError(VIR_ERR_NO_SUPPORT, __FUNCTION__);
return -1;
}
diff --git a/src/xen/xend_internal.c b/src/xen/xend_internal.c
index dfc6415..3642296 100644
--- a/src/xen/xend_internal.c
+++ b/src/xen/xend_internal.c
@@ -3620,6 +3620,58 @@ xenDaemonDomainPinVcpu(virDomainPtr domain, unsigned int vcpu,
}
/**
+ * xenDaemonDomainGetVcpusFlags:
+ * @domain: pointer to domain object
+ * @flags: bitwise-ORd from virDomainVcpuFlags
+ *
+ * Extract information about virtual CPUs of domain according to flags.
+ *
+ * Returns the number of vcpus on success, -1 if an error message was
+ * issued, and -2 if the unified driver should keep trying.
+
+ */
+int
+xenDaemonDomainGetVcpusFlags(virDomainPtr domain, unsigned int flags)
+{
+ struct sexpr *root;
+ int ret;
+ xenUnifiedPrivatePtr priv;
+
+ if (domain == NULL || domain->conn == NULL || domain->name == NULL) {
+ virXendError(VIR_ERR_INVALID_ARG, __FUNCTION__);
+ return -1;
+ }
+
+ priv = (xenUnifiedPrivatePtr) domain->conn->privateData;
+
+ /* If xendConfigVersion is 2, then we can only report _LIVE (and
+ * xm_internal reports _CONFIG). If it is 3, then _LIVE and
+ * _CONFIG are always in sync for a running system. */
+ if (domain->id < 0 && priv->xendConfigVersion < 3)
+ return -2;
+ if (domain->id < 0 && (flags & VIR_DOMAIN_VCPU_LIVE)) {
+ virXendError(VIR_ERR_OPERATION_INVALID, "%s",
+ _("domain not active"));
+ return -1;
+ }
+
+ root = sexpr_get(domain->conn, "/xend/domain/%s?detail=1", domain->name);
+ if (root == NULL)
+ return -1;
+
+ ret = sexpr_int(root, "domain/vcpus");
+ if (!(flags & VIR_DOMAIN_VCPU_MAXIMUM)) {
+ int vcpus = count_one_bits(sexpr_int(root, "domain/vcpu_avail"));
+ if (vcpus)
+ ret = MIN(vcpus, ret);
+ }
+ if (!ret)
+ ret = -2;
+ sexpr_free(root);
+ return ret;
+}
+
+/**
* virDomainGetVcpus:
* @domain: pointer to domain object, or NULL for Domain0
* @info: pointer to an array of virVcpuInfo structures (OUT)
diff --git a/src/xen/xend_internal.h b/src/xen/xend_internal.h
index c757716..923cebd 100644
--- a/src/xen/xend_internal.h
+++ b/src/xen/xend_internal.h
@@ -155,6 +155,8 @@ int xenDaemonDomainPinVcpu (virDomainPtr domain,
unsigned int vcpu,
unsigned char *cpumap,
int maplen);
+int xenDaemonDomainGetVcpusFlags (virDomainPtr domain,
+ unsigned int flags);
int xenDaemonDomainGetVcpus (virDomainPtr domain,
virVcpuInfoPtr info,
int maxinfo,
diff --git a/src/xen/xm_internal.c b/src/xen/xm_internal.c
index f7121ab..4ea4245 100644
--- a/src/xen/xm_internal.c
+++ b/src/xen/xm_internal.c
@@ -1671,6 +1671,53 @@ cleanup:
}
/**
+ * xenXMDomainGetVcpusFlags:
+ * @domain: pointer to domain object
+ * @flags: bitwise-ORd from virDomainVcpuFlags
+ *
+ * Extract information about virtual CPUs of domain according to flags.
+ *
+ * Returns the number of vcpus on success, -1 if an error message was
+ * issued, and -2 if the unified driver should keep trying.
+ */
+int
+xenXMDomainGetVcpusFlags(virDomainPtr domain, unsigned int flags)
+{
+ xenUnifiedPrivatePtr priv;
+ const char *filename;
+ xenXMConfCachePtr entry;
+ int ret = -2;
+
+ if ((domain == NULL) || (domain->conn == NULL) || (domain->name == NULL)) {
+ xenXMError(VIR_ERR_INVALID_ARG, __FUNCTION__);
+ return -1;
+ }
+
+ if (domain->id != -1)
+ return -2;
+ if (flags & VIR_DOMAIN_VCPU_LIVE) {
+ xenXMError(VIR_ERR_OPERATION_FAILED, "%s", _("domain not active"));
+ return -1;
+ }
+
+ priv = domain->conn->privateData;
+ xenUnifiedLock(priv);
+
+ if (!(filename = virHashLookup(priv->nameConfigMap, domain->name)))
+ goto cleanup;
+
+ if (!(entry = virHashLookup(priv->configCache, filename)))
+ goto cleanup;
+
+ ret = ((flags & VIR_DOMAIN_VCPU_MAXIMUM) ? entry->def->maxvcpus
+ : entry->def->vcpus);
+
+cleanup:
+ xenUnifiedUnlock(priv);
+ return ret;
+}
+
+/**
* xenXMDomainPinVcpu:
* @domain: pointer to domain object
* @vcpu: virtual CPU number (reserved)
diff --git a/src/xen/xm_internal.h b/src/xen/xm_internal.h
index 3ad3456..3295fbd 100644
--- a/src/xen/xm_internal.h
+++ b/src/xen/xm_internal.h
@@ -45,6 +45,7 @@ int xenXMDomainSetMemory(virDomainPtr domain, unsigned long memory);
int xenXMDomainSetMaxMemory(virDomainPtr domain, unsigned long memory);
unsigned long xenXMDomainGetMaxMemory(virDomainPtr domain);
int xenXMDomainSetVcpus(virDomainPtr domain, unsigned int vcpus);
+int xenXMDomainGetVcpusFlags(virDomainPtr domain, unsigned int flags);
int xenXMDomainPinVcpu(virDomainPtr domain, unsigned int vcpu,
unsigned char *cpumap, int maplen);
virDomainPtr xenXMDomainLookupByName(virConnectPtr conn, const char *domname);
--
1.7.2.3
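
A standalone sketch of the dispatch convention the unified xen driver adopts above: each sub-driver returns -2 for "not handled here, try the next layer", -1 after reporting an error, and a non-negative count on success. The function names are placeholders, not the real libvirt symbols:

#include <stdio.h>

static int
sub_driver_a(unsigned int flags)    /* e.g. the xend-based lookup */
{
    (void) flags;
    return -2;                      /* cannot answer, defer */
}

static int
sub_driver_b(unsigned int flags)    /* e.g. the xm config lookup */
{
    (void) flags;
    return 4;                       /* found an answer */
}

static int
unified_get_vcpus(unsigned int flags, int a_open, int b_open)
{
    int ret;

    if (a_open && (ret = sub_driver_a(flags)) != -2)
        return ret;                 /* success or hard error: stop here */
    if (b_open && (ret = sub_driver_b(flags)) != -2)
        return ret;
    return -1;                      /* nobody could answer */
}

int main(void)
{
    printf("vcpus=%d\n", unified_get_vcpus(0, 1, 1));   /* vcpus=4 */
    return 0;
}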


@ -0,0 +1,342 @@
From e443a003129a172a7332f3cb6e40b3c39363ed5e Mon Sep 17 00:00:00 2001
From: Eric Blake <eblake@redhat.com>
Date: Thu, 14 Oct 2010 16:17:18 -0600
Subject: [PATCH 14/15] vcpu: improve support for setting xen vcpu counts
Tested with RHEL 5.6 (xendConfigVersion 2, where xend_internal
controls live domains and xm_internal controls inactive domains).
Hopefully this works with xendConfigVersion 3 (where xend_internal
controls everything).
* src/xen/xen_driver.c (xenUnifiedDomainSetVcpusFlags): Support
more flags.
(xenUnifiedGetMaxVcpus): Export.
* src/xen/xm_internal.h (xenXMDomainSetVcpusFlags): New prototype.
* src/xen/xend_internal.h (xenDaemonDomainSetVcpusFlags): Likewise.
* src/xen/xen_driver.h (xenUnifiedGetMaxVcpus): Likewise.
* src/xen/xm_internal.c (xenXMDomainSetVcpusFlags): New function.
* src/xen/xend_internal.c (xenDaemonDomainSetVcpusFlags): Likewise.
---
src/xen/xen_driver.c | 60 ++++++++++++++++++++++++---------
src/xen/xen_driver.h | 1 +
src/xen/xend_internal.c | 76 +++++++++++++++++++++++++++++++++++++++++++
src/xen/xend_internal.h | 3 ++
src/xen/xm_internal.c | 83 +++++++++++++++++++++++++++++++++++++++++++++++
src/xen/xm_internal.h | 2 +
6 files changed, 208 insertions(+), 17 deletions(-)
diff --git a/src/xen/xen_driver.c b/src/xen/xen_driver.c
index fe2ff86..66e8518 100644
--- a/src/xen/xen_driver.c
+++ b/src/xen/xen_driver.c
@@ -508,7 +508,7 @@ xenUnifiedIsSecure(virConnectPtr conn)
return ret;
}
-static int
+int
xenUnifiedGetMaxVcpus (virConnectPtr conn, const char *type)
{
GET_PRIVATE(conn);
@@ -1073,36 +1073,62 @@ xenUnifiedDomainSetVcpusFlags (virDomainPtr dom, unsigned int nvcpus,
unsigned int flags)
{
GET_PRIVATE(dom->conn);
- int i;
+ int ret;
+
+ virCheckFlags(VIR_DOMAIN_VCPU_LIVE |
+ VIR_DOMAIN_VCPU_CONFIG |
+ VIR_DOMAIN_VCPU_MAXIMUM, -1);
- if (flags != VIR_DOMAIN_VCPU_LIVE) {
- xenUnifiedError(VIR_ERR_INVALID_ARG, _("unsupported flags: (0x%x)"),
- flags);
+ /* At least one of LIVE or CONFIG must be set. MAXIMUM cannot be
+ * mixed with LIVE. */
+ if ((flags & (VIR_DOMAIN_VCPU_LIVE | VIR_DOMAIN_VCPU_CONFIG)) == 0 ||
+ (flags & (VIR_DOMAIN_VCPU_MAXIMUM | VIR_DOMAIN_VCPU_LIVE)) ==
+ (VIR_DOMAIN_VCPU_MAXIMUM | VIR_DOMAIN_VCPU_LIVE)) {
+ xenUnifiedError(VIR_ERR_INVALID_ARG,
+ _("invalid flag combination: (0x%x)"), flags);
+ return -1;
+ }
+ if (!nvcpus || (unsigned short) nvcpus != nvcpus) {
+ xenUnifiedError(VIR_ERR_INVALID_ARG,
+ _("argument out of range: %d"), nvcpus);
return -1;
}
/* Try non-hypervisor methods first, then hypervisor direct method
* as a last resort.
*/
- for (i = 0; i < XEN_UNIFIED_NR_DRIVERS; ++i)
- if (i != XEN_UNIFIED_HYPERVISOR_OFFSET &&
- priv->opened[i] &&
- drivers[i]->domainSetVcpus &&
- drivers[i]->domainSetVcpus (dom, nvcpus) == 0)
- return 0;
-
- if (priv->opened[XEN_UNIFIED_HYPERVISOR_OFFSET] &&
- drivers[XEN_UNIFIED_HYPERVISOR_OFFSET]->domainSetVcpus &&
- drivers[XEN_UNIFIED_HYPERVISOR_OFFSET]->domainSetVcpus (dom, nvcpus) == 0)
- return 0;
+ if (priv->opened[XEN_UNIFIED_XEND_OFFSET]) {
+ ret = xenDaemonDomainSetVcpusFlags(dom, nvcpus, flags);
+ if (ret != -2)
+ return ret;
+ }
+ if (priv->opened[XEN_UNIFIED_XM_OFFSET]) {
+ ret = xenXMDomainSetVcpusFlags(dom, nvcpus, flags);
+ if (ret != -2)
+ return ret;
+ }
+ if (flags == VIR_DOMAIN_VCPU_LIVE)
+ return xenHypervisorSetVcpus(dom, nvcpus);
+ xenUnifiedError(VIR_ERR_NO_SUPPORT, __FUNCTION__);
return -1;
}
static int
xenUnifiedDomainSetVcpus (virDomainPtr dom, unsigned int nvcpus)
{
- return xenUnifiedDomainSetVcpusFlags(dom, nvcpus, VIR_DOMAIN_VCPU_LIVE);
+ unsigned int flags = VIR_DOMAIN_VCPU_LIVE;
+ xenUnifiedPrivatePtr priv;
+
+ /* Per the documented API, it is hypervisor-dependent whether this
+ * affects just _LIVE or _LIVE|_CONFIG; in xen's case, that
+ * depends on xendConfigVersion. */
+ if (dom) {
+ priv = dom->conn->privateData;
+ if (priv->xendConfigVersion >= 3)
+ flags |= VIR_DOMAIN_VCPU_CONFIG;
+ }
+ return xenUnifiedDomainSetVcpusFlags(dom, nvcpus, flags);
}
static int
diff --git a/src/xen/xen_driver.h b/src/xen/xen_driver.h
index 3e7c1d0..115a26a 100644
--- a/src/xen/xen_driver.h
+++ b/src/xen/xen_driver.h
@@ -220,6 +220,7 @@ int xenUnifiedRemoveDomainInfo(xenUnifiedDomainInfoListPtr info,
void xenUnifiedDomainEventDispatch (xenUnifiedPrivatePtr priv,
virDomainEventPtr event);
unsigned long xenUnifiedVersion(void);
+int xenUnifiedGetMaxVcpus(virConnectPtr conn, const char *type);
# ifndef PROXY
void xenUnifiedLock(xenUnifiedPrivatePtr priv);
diff --git a/src/xen/xend_internal.c b/src/xen/xend_internal.c
index 3642296..55c2cc4 100644
--- a/src/xen/xend_internal.c
+++ b/src/xen/xend_internal.c
@@ -3535,6 +3535,82 @@ xenDaemonLookupByID(virConnectPtr conn, int id) {
}
/**
+ * xenDaemonDomainSetVcpusFlags:
+ * @domain: pointer to domain object
+ * @nvcpus: the new number of virtual CPUs for this domain
+ * @flags: bitwise-ORd from virDomainVcpuFlags
+ *
+ * Change virtual CPUs allocation of domain according to flags.
+ *
+ * Returns 0 on success, -1 if an error message was issued, and -2 if
+ * the unified driver should keep trying.
+ */
+int
+xenDaemonDomainSetVcpusFlags(virDomainPtr domain, unsigned int vcpus,
+ unsigned int flags)
+{
+ char buf[VIR_UUID_BUFLEN];
+ xenUnifiedPrivatePtr priv;
+ int max;
+
+ if ((domain == NULL) || (domain->conn == NULL) || (domain->name == NULL)
+ || (vcpus < 1)) {
+ virXendError(VIR_ERR_INVALID_ARG, __FUNCTION__);
+ return (-1);
+ }
+
+ priv = (xenUnifiedPrivatePtr) domain->conn->privateData;
+
+ if ((domain->id < 0 && priv->xendConfigVersion < 3) ||
+ (flags & VIR_DOMAIN_VCPU_MAXIMUM))
+ return -2;
+
+ /* With xendConfigVersion 2, only _LIVE is supported. With
+ * xendConfigVersion 3, only _LIVE|_CONFIG is supported for
+ * running domains, or _CONFIG for inactive domains. */
+ if (priv->xendConfigVersion < 3) {
+ if (flags & VIR_DOMAIN_VCPU_CONFIG) {
+ virXendError(VIR_ERR_OPERATION_INVALID, "%s",
+ _("Xend version does not support modifying "
+ "persistent config"));
+ return -1;
+ }
+ } else if (domain->id < 0) {
+ if (flags & VIR_DOMAIN_VCPU_LIVE) {
+ virXendError(VIR_ERR_OPERATION_INVALID, "%s",
+ _("domain not running"));
+ return -1;
+ }
+ } else {
+ if ((flags & (VIR_DOMAIN_VCPU_LIVE | VIR_DOMAIN_VCPU_CONFIG)) !=
+ (VIR_DOMAIN_VCPU_LIVE | VIR_DOMAIN_VCPU_CONFIG)) {
+ virXendError(VIR_ERR_OPERATION_INVALID, "%s",
+ _("Xend only supports modifying both live and "
+ "persistent config"));
+ }
+ }
+
+ /* Unfortunately, xend_op does not validate whether this exceeds
+ * the maximum. */
+ flags |= VIR_DOMAIN_VCPU_MAXIMUM;
+ if ((max = xenDaemonDomainGetVcpusFlags(domain, flags)) < 0) {
+ virXendError(VIR_ERR_OPERATION_INVALID, "%s",
+ _("could not determin max vcpus for the domain"));
+ return -1;
+ }
+ if (vcpus > max) {
+ virXendError(VIR_ERR_INVALID_ARG,
+ _("requested vcpus is greater than max allowable"
+ " vcpus for the domain: %d > %d"), vcpus, max);
+ return -1;
+ }
+
+ snprintf(buf, sizeof(buf), "%d", vcpus);
+ return xend_op(domain->conn, domain->name, "op", "set_vcpus", "vcpus",
+ buf, NULL);
+}
+
+/**
* xenDaemonDomainSetVcpus:
* @domain: pointer to domain object
* @nvcpus: the new number of virtual CPUs for this domain
diff --git a/src/xen/xend_internal.h b/src/xen/xend_internal.h
index 923cebd..53f5d2c 100644
--- a/src/xen/xend_internal.h
+++ b/src/xen/xend_internal.h
@@ -151,6 +151,9 @@ int xenDaemonDomainUndefine(virDomainPtr domain);
int xenDaemonDomainSetVcpus (virDomainPtr domain,
unsigned int vcpus);
+int xenDaemonDomainSetVcpusFlags (virDomainPtr domain,
+ unsigned int vcpus,
+ unsigned int flags);
int xenDaemonDomainPinVcpu (virDomainPtr domain,
unsigned int vcpu,
unsigned char *cpumap,
diff --git a/src/xen/xm_internal.c b/src/xen/xm_internal.c
index 4ea4245..2b8e51e 100644
--- a/src/xen/xm_internal.c
+++ b/src/xen/xm_internal.c
@@ -1670,6 +1670,89 @@ cleanup:
return ret;
}
+/*
+ * xenXMDomainSetVcpusFlags:
+ * @domain: pointer to domain object
+ * @nvcpus: number of vcpus
+ * @flags: bitwise-ORd from virDomainVcpuFlags
+ *
+ * Change virtual CPUs allocation of domain according to flags.
+ *
+ * Returns 0 on success, -1 if an error message was issued, and -2 if
+ * the unified driver should keep trying.
+ */
+int
+xenXMDomainSetVcpusFlags(virDomainPtr domain, unsigned int vcpus,
+ unsigned int flags)
+{
+ xenUnifiedPrivatePtr priv;
+ const char *filename;
+ xenXMConfCachePtr entry;
+ int ret = -1;
+ int max;
+
+ if ((domain == NULL) || (domain->conn == NULL) || (domain->name == NULL)) {
+ xenXMError(VIR_ERR_INVALID_ARG, __FUNCTION__);
+ return -1;
+ }
+ if (domain->conn->flags & VIR_CONNECT_RO) {
+ xenXMError(VIR_ERR_OPERATION_DENIED, __FUNCTION__);
+ return -1;
+ }
+ if (domain->id != -1)
+ return -2;
+ if (flags & VIR_DOMAIN_VCPU_LIVE) {
+ xenXMError(VIR_ERR_OPERATION_INVALID, "%s",
+ _("domain is not running"));
+ return -1;
+ }
+
+ priv = domain->conn->privateData;
+ xenUnifiedLock(priv);
+
+ if (!(filename = virHashLookup(priv->nameConfigMap, domain->name)))
+ goto cleanup;
+
+ if (!(entry = virHashLookup(priv->configCache, filename)))
+ goto cleanup;
+
+ /* Hypervisor maximum. */
+ if ((max = xenUnifiedGetMaxVcpus(domain->conn, NULL)) < 0) {
+ xenXMError(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("could not determin max vcpus for the domain"));
+ goto cleanup;
+ }
+ /* Can't specify a current larger than stored maximum; but
+ * reducing maximum can silently reduce current. */
+ if (!(flags & VIR_DOMAIN_VCPU_MAXIMUM))
+ max = entry->def->maxvcpus;
+ if (vcpus > max) {
+ xenXMError(VIR_ERR_INVALID_ARG,
+ _("requested vcpus is greater than max allowable"
+ " vcpus for the domain: %d > %d"), vcpus, max);
+ goto cleanup;
+ }
+
+ if (flags & VIR_DOMAIN_VCPU_MAXIMUM) {
+ entry->def->maxvcpus = vcpus;
+ if (entry->def->vcpus > vcpus)
+ entry->def->vcpus = vcpus;
+ } else {
+ entry->def->vcpus = vcpus;
+ }
+
+ /* If this fails, should we try to undo our changes to the
+ * in-memory representation of the config file. I say not!
+ */
+ if (xenXMConfigSaveFile(domain->conn, entry->filename, entry->def) < 0)
+ goto cleanup;
+ ret = 0;
+
+cleanup:
+ xenUnifiedUnlock(priv);
+ return ret;
+}
+
/**
* xenXMDomainGetVcpusFlags:
* @domain: pointer to domain object
diff --git a/src/xen/xm_internal.h b/src/xen/xm_internal.h
index 3295fbd..a46e1a2 100644
--- a/src/xen/xm_internal.h
+++ b/src/xen/xm_internal.h
@@ -45,6 +45,8 @@ int xenXMDomainSetMemory(virDomainPtr domain, unsigned long memory);
int xenXMDomainSetMaxMemory(virDomainPtr domain, unsigned long memory);
unsigned long xenXMDomainGetMaxMemory(virDomainPtr domain);
int xenXMDomainSetVcpus(virDomainPtr domain, unsigned int vcpus);
+int xenXMDomainSetVcpusFlags(virDomainPtr domain, unsigned int vcpus,
+ unsigned int flags);
int xenXMDomainGetVcpusFlags(virDomainPtr domain, unsigned int flags);
int xenXMDomainPinVcpu(virDomainPtr domain, unsigned int vcpu,
unsigned char *cpumap, int maplen);
--
1.7.2.3


@ -0,0 +1,228 @@
From b013788742183afec9aa5068d3cfd185a3b5c62e Mon Sep 17 00:00:00 2001
From: Eric Blake <eblake@redhat.com>
Date: Thu, 7 Oct 2010 08:59:27 -0600
Subject: [PATCH 15/15] vcpu: remove dead xen code
* src/xen/xen_driver.h (xenUnifiedDriver): Remove now-unused
domainGetMaxVcpus, domainSetVcpus.
* src/xen/proxy_internal.c (xenProxyDriver): Likewise.
* src/xen/xen_hypervisor.c (xenHypervisorDriver): Likewise.
* src/xen/xen_inotify.c (xenInotifyDriver): Likewise.
* src/xen/xend_internal.c (xenDaemonDriver)
(xenDaemonDomainSetVcpus): Likewise.
* src/xen/xm_internal.c (xenXMDriver, xenXMDomainSetVcpus):
Likewise.
* src/xen/xs_internal.c (xenStoreDriver): Likewise.
---
src/xen/proxy_internal.c | 2 --
src/xen/xen_driver.h | 4 +---
src/xen/xen_hypervisor.c | 2 --
src/xen/xen_inotify.c | 2 --
src/xen/xend_internal.c | 33 ---------------------------------
src/xen/xm_internal.c | 43 -------------------------------------------
src/xen/xs_internal.c | 2 --
7 files changed, 1 insertions(+), 87 deletions(-)
diff --git a/src/xen/proxy_internal.c b/src/xen/proxy_internal.c
index 335dfc4..4033727 100644
--- a/src/xen/proxy_internal.c
+++ b/src/xen/proxy_internal.c
@@ -67,10 +67,8 @@ struct xenUnifiedDriver xenProxyDriver = {
NULL, /* domainSave */
NULL, /* domainRestore */
NULL, /* domainCoreDump */
- NULL, /* domainSetVcpus */
NULL, /* domainPinVcpu */
NULL, /* domainGetVcpus */
- NULL, /* domainGetMaxVcpus */
NULL, /* listDefinedDomains */
NULL, /* numOfDefinedDomains */
NULL, /* domainCreate */
diff --git a/src/xen/xen_driver.h b/src/xen/xen_driver.h
index 115a26a..53f97d4 100644
--- a/src/xen/xen_driver.h
+++ b/src/xen/xen_driver.h
@@ -1,7 +1,7 @@
/*
* xen_unified.c: Unified Xen driver.
*
- * Copyright (C) 2007 Red Hat, Inc.
+ * Copyright (C) 2007, 2010 Red Hat, Inc.
*
* See COPYING.LIB for the License of this software
*
@@ -84,10 +84,8 @@ struct xenUnifiedDriver {
virDrvDomainSave domainSave;
virDrvDomainRestore domainRestore;
virDrvDomainCoreDump domainCoreDump;
- virDrvDomainSetVcpus domainSetVcpus;
virDrvDomainPinVcpu domainPinVcpu;
virDrvDomainGetVcpus domainGetVcpus;
- virDrvDomainGetMaxVcpus domainGetMaxVcpus;
virDrvListDefinedDomains listDefinedDomains;
virDrvNumOfDefinedDomains numOfDefinedDomains;
virDrvDomainCreate domainCreate;
diff --git a/src/xen/xen_hypervisor.c b/src/xen/xen_hypervisor.c
index 6246513..3797865 100644
--- a/src/xen/xen_hypervisor.c
+++ b/src/xen/xen_hypervisor.c
@@ -784,10 +784,8 @@ struct xenUnifiedDriver xenHypervisorDriver = {
NULL, /* domainSave */
NULL, /* domainRestore */
NULL, /* domainCoreDump */
- xenHypervisorSetVcpus, /* domainSetVcpus */
xenHypervisorPinVcpu, /* domainPinVcpu */
xenHypervisorGetVcpus, /* domainGetVcpus */
- xenHypervisorGetVcpuMax, /* domainGetMaxVcpus */
NULL, /* listDefinedDomains */
NULL, /* numOfDefinedDomains */
NULL, /* domainCreate */
diff --git a/src/xen/xen_inotify.c b/src/xen/xen_inotify.c
index d24b20f..9507061 100644
--- a/src/xen/xen_inotify.c
+++ b/src/xen/xen_inotify.c
@@ -71,10 +71,8 @@ struct xenUnifiedDriver xenInotifyDriver = {
NULL, /* domainSave */
NULL, /* domainRestore */
NULL, /* domainCoreDump */
- NULL, /* domainSetVcpus */
NULL, /* domainPinVcpu */
NULL, /* domainGetVcpus */
- NULL, /* domainGetMaxVcpus */
NULL, /* listDefinedDomains */
NULL, /* numOfDefinedDomains */
NULL, /* domainCreate */
diff --git a/src/xen/xend_internal.c b/src/xen/xend_internal.c
index 55c2cc4..b90c331 100644
--- a/src/xen/xend_internal.c
+++ b/src/xen/xend_internal.c
@@ -3611,37 +3611,6 @@ xenDaemonDomainSetVcpusFlags(virDomainPtr domain, unsigned int vcpus,
}
/**
- * xenDaemonDomainSetVcpus:
- * @domain: pointer to domain object
- * @nvcpus: the new number of virtual CPUs for this domain
- *
- * Dynamically change the number of virtual CPUs used by the domain.
- *
- * Returns 0 for success; -1 (with errno) on error
- */
-int
-xenDaemonDomainSetVcpus(virDomainPtr domain, unsigned int vcpus)
-{
- char buf[VIR_UUID_BUFLEN];
- xenUnifiedPrivatePtr priv;
-
- if ((domain == NULL) || (domain->conn == NULL) || (domain->name == NULL)
- || (vcpus < 1)) {
- virXendError(VIR_ERR_INVALID_ARG, __FUNCTION__);
- return (-1);
- }
-
- priv = (xenUnifiedPrivatePtr) domain->conn->privateData;
-
- if (domain->id < 0 && priv->xendConfigVersion < 3)
- return(-1);
-
- snprintf(buf, sizeof(buf), "%d", vcpus);
- return(xend_op(domain->conn, domain->name, "op", "set_vcpus", "vcpus",
- buf, NULL));
-}
-
-/**
* xenDaemonDomainPinCpu:
* @domain: pointer to domain object
* @vcpu: virtual CPU number
@@ -5213,10 +5182,8 @@ struct xenUnifiedDriver xenDaemonDriver = {
xenDaemonDomainSave, /* domainSave */
xenDaemonDomainRestore, /* domainRestore */
xenDaemonDomainCoreDump, /* domainCoreDump */
- xenDaemonDomainSetVcpus, /* domainSetVcpus */
xenDaemonDomainPinVcpu, /* domainPinVcpu */
xenDaemonDomainGetVcpus, /* domainGetVcpus */
- NULL, /* domainGetMaxVcpus */
xenDaemonListDefinedDomains, /* listDefinedDomains */
xenDaemonNumOfDefinedDomains,/* numOfDefinedDomains */
xenDaemonDomainCreate, /* domainCreate */
diff --git a/src/xen/xm_internal.c b/src/xen/xm_internal.c
index 2b8e51e..430d40b 100644
--- a/src/xen/xm_internal.c
+++ b/src/xen/xm_internal.c
@@ -103,10 +103,8 @@ struct xenUnifiedDriver xenXMDriver = {
NULL, /* domainSave */
NULL, /* domainRestore */
NULL, /* domainCoreDump */
- xenXMDomainSetVcpus, /* domainSetVcpus */
xenXMDomainPinVcpu, /* domainPinVcpu */
NULL, /* domainGetVcpus */
- NULL, /* domainGetMaxVcpus */
xenXMListDefinedDomains, /* listDefinedDomains */
xenXMNumOfDefinedDomains, /* numOfDefinedDomains */
xenXMDomainCreate, /* domainCreate */
@@ -1630,47 +1628,6 @@ cleanup:
}
/*
- * Set the VCPU count in config
- */
-int xenXMDomainSetVcpus(virDomainPtr domain, unsigned int vcpus) {
- xenUnifiedPrivatePtr priv;
- const char *filename;
- xenXMConfCachePtr entry;
- int ret = -1;
-
- if ((domain == NULL) || (domain->conn == NULL) || (domain->name == NULL)) {
- xenXMError(VIR_ERR_INVALID_ARG, __FUNCTION__);
- return (-1);
- }
- if (domain->conn->flags & VIR_CONNECT_RO)
- return (-1);
- if (domain->id != -1)
- return (-1);
-
- priv = domain->conn->privateData;
- xenUnifiedLock(priv);
-
- if (!(filename = virHashLookup(priv->nameConfigMap, domain->name)))
- goto cleanup;
-
- if (!(entry = virHashLookup(priv->configCache, filename)))
- goto cleanup;
-
- entry->def->maxvcpus = entry->def->vcpus = vcpus;
-
- /* If this fails, should we try to undo our changes to the
- * in-memory representation of the config file. I say not!
- */
- if (xenXMConfigSaveFile(domain->conn, entry->filename, entry->def) < 0)
- goto cleanup;
- ret = 0;
-
-cleanup:
- xenUnifiedUnlock(priv);
- return ret;
-}
-
-/*
* xenXMDomainSetVcpusFlags:
* @domain: pointer to domain object
* @nvcpus: number of vcpus
diff --git a/src/xen/xs_internal.c b/src/xen/xs_internal.c
index 9296f25..a9817b1 100644
--- a/src/xen/xs_internal.c
+++ b/src/xen/xs_internal.c
@@ -67,10 +67,8 @@ struct xenUnifiedDriver xenStoreDriver = {
NULL, /* domainSave */
NULL, /* domainRestore */
NULL, /* domainCoreDump */
- NULL, /* domainSetVcpus */
NULL, /* domainPinVcpu */
NULL, /* domainGetVcpus */
- NULL, /* domainGetMaxVcpus */
NULL, /* listDefinedDomains */
NULL, /* numOfDefinedDomains */
NULL, /* domainCreate */
--
1.7.2.3