libvirt/tests/xmconfigdata/test-paravirt-vcpu.cfg
Jim Fehlig 5b74103b0b Xen: support maxvcpus in xm and xl config
From: Ian Campbell <ian.campbell@citrix.com>

xend prior to 4.0 understands vcpus as maxvcpus and vcpu_avail
as a bitmap of which vcpus are online (default is all).
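
For illustration (a hypothetical guest, not taken from any test file), a
domain meant to boot with only 2 of its 4 vcpus online would be written in
that older style as:

    vcpus = 4          # treated as the maximum number of vcpus
    vcpu_avail = 3     # bitmap 0b0011: vcpus 0 and 1 start online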

xend from 4.0 onwards understands maxvcpus as maxvcpus and
vcpus as the number of vcpus which are online (numbered 0..N-1). The
upstream commit (68a94cf528e6 "xm: Add maxvcpus support")
claims that if maxvcpus is omitted then the old behaviour
(i.e. obeying vcpu_avail) is retained, but AFAICT it was not;
in this case vcpus == maxvcpus == online vcpus. This is good for us
because handling anything else would be fiddly.
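
As a rough sketch of the two post-4.0 cases (illustrative values, not
test data):

    # explicit maximum: 4 possible vcpus, 2 online at boot
    maxvcpus = 4
    vcpus = 2

    # maxvcpus omitted: vcpus acts as both, so all 4 vcpus are online
    vcpus = 4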

This patch changes parsing of the virDomainDef maxvcpus and vcpus
entries to use the corresponding 'maxvcpus' and 'vcpus' settings
from xm and xl config. It also drops use of the old Xen 3.x
'vcpu_avail' setting.
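
In other words (a sketch of the intended mapping, not code from the patch),
the two settings in the test file below now feed virDomainDef directly:

    maxvcpus = 4    # -> virDomainDef maxvcpus (the 4 in <vcpu current='2'>4</vcpu>)
    vcpus = 2       # -> virDomainDef vcpus (the current='2' attribute)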

The change also removes the maxvcpus limit of MAX_VIRT_CPUS (since
maxvcpus is simply a count, not a bit mask), which is particularly
crucial on ARM where MAX_VIRT_CPUS == 1 (since all guests are
expected to support vcpu placement, and therefore only the boot
vcpu's info lives in the shared info page).

Existing tests adjusted accordingly, and new tests added for the
'maxvcpus' setting.
2015-12-18 17:52:00 -07:00

name = "XenGuest1"
uuid = "c7a5fdb0-cdaf-9455-926a-d65c16db1809"
maxmem = 579
memory = 394
maxvcpus = 4
vcpus = 2
localtime = 0
on_poweroff = "destroy"
on_reboot = "restart"
on_crash = "restart"
vif = [ "mac=00:16:3e:66:94:9c,bridge=br0,script=vif-bridge" ]
bootloader = "/usr/bin/pygrub"
disk = [ "phy:/dev/HostVG/XenGuest1,xvda,w" ]