Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1113474
When we set the MAC address of a network device as a part of setting
up macvtap "passthrough" mode (where the domain has an emulated netdev
connected to a host macvtap device that has exclusive use of the
physical device, and sets the device MAC address to match its own,
i.e. "<interface type='direct'> <source mode='passthrough' .../>"), we
use ioctl(SIOCSIFHWADDR) giving it the name of that device. This is
true even if it is an SRIOV Virtual Function (VF).
But when we are setting the MAC address / vlan ID of a VF in
preparation for "hostdev network" passthrough (this is where we set
the MAC address and vlan ID of the VF after detaching the host net
driver and before assigning the device to the domain with PCI
passthrough, i.e. "<interface type='hostdev'>"), we do the setting via
a netlink RTM_SETLINK message for that VF's Physical Function (PF),
telling it the VF# we want to change. This sets an "administratively
changed MAC" flag for that VF in the PF's driver, and from that point
on (until the PF driver is reloaded, *not* merely the VF driver) that
VF's MAC address can't be changed using ioctl(SIOCSIFHWADDR) - the
only way to change it is via the PF with RTM_SETLINK.
This means that if a VF is used for hostdev passthrough, it will have
the admin flag set, and future attempts to use that VF for macvtap
passthrough will fail.
The solution to this problem is to check if the device being used for
macvtap passthrough is actually a VF; if so, we use the netlink
RTM_SETLINK message to the PF to set the VF's mac address instead of
ioctl(SIOCSIFHWADDR) directly to the VF; if not, the behavior is
unchanged.
There are three pieces to making this work:
1) virNetDevMacVLan(Create|Delete)WithVPortProfile() now call
virNetDev(Replace|Restore)NetConfig() rather than
virNetDev(Replace|Restore)MacAddress() (simply passing -1 for VF#
and vlanid).
2) virNetDev(Replace|Restore)NetConfig() check to see if the device is
a VF. If so, they find the PF's name and VF#, allowing them to call
virNetDev(Replace|Restore)VfConfig().
3) To prevent mixups when detaching a macvtap passthrough device that
had been attached while running an older version of libvirt,
virNetDevRestoreVfConfig() is potentially given the preserved name
of the VF, and if the proper statefile for a VF can't be found in
the stateDir (${stateDir}/${pfname}_vf${vfid}),
virNetDevRestoreMacAddress() is called instead (which will look in
the file named ${stateDir}/${vfname}).
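For context, a minimal standalone sketch of the direct ioctl(SIOCSIFHWADDR) path described above (an illustration only, not libvirt's actual helper); this is the call that keeps failing on a VF once the PF driver has recorded an administratively changed MAC:
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <net/if_arp.h>
#include <unistd.h>
/* Illustration only: set a MAC directly on the named device. This is the
 * path that stops working on a VF once the PF driver has flagged an
 * "administratively changed MAC" for it. */
static int
set_mac_ioctl(const char *ifname, const unsigned char mac[6])
{
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    if (fd < 0)
        return -1;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    ifr.ifr_hwaddr.sa_family = ARPHRD_ETHER;
    memcpy(ifr.ifr_hwaddr.sa_data, mac, 6);

    if (ioctl(fd, SIOCSIFHWADDR, &ifr) < 0) {
        close(fd);
        return -1;
    }

    close(fd);
    return 0;
}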
This problem has existed in every version of libvirt that has both
macvtap passthrough and interface type='hostdev'. Fortunately people
seem to use one or the other though, so it hasn't caused any real
world problem reports.
- Remove all qemu emulators
- Restart libvirtd
- Install qemu emulators
- Call 'virsh version' -> errors
The only thing that will force the qemu driver to refresh its cached
capabilities info is an explicit API call to GetCapabilities.
However, in the case where the initial caps lookup at driver connect didn't
find a single qemu emulator to poll, the driver is effectively useless
and really can't do anything until it has populated some qemu capabilities
info.
With the above steps, the user would have to either know about the
magic refresh capabilities call, or restart libvirtd to pick up the
changes.
Instead, this patch changes things so that every time a part of the
driver requests access to capabilities info, we check whether
we've previously seen any emulators. If not, force a refresh.
In the case of 'still no emulators found', this is still very quick, so
I can't think of a downside.
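A rough sketch of the idea (the struct and helper names below are illustrative, not the real qemu driver types):
#include <stdbool.h>
#include <stddef.h>
/* Illustrative only: a capability cache that forces a re-probe whenever
 * the last probe found zero emulators. */
struct caps_cache {
    size_t nemulators;  /* number of qemu binaries found by the last probe */
    /* ... cached capability data would live here ... */
};

static struct caps_cache *
get_caps(struct caps_cache *cache, bool refresh,
         struct caps_cache *(*probe)(void))
{
    /* Nothing usable is cached, so re-probing costs almost nothing and may
     * pick up emulators installed since the daemon started. */
    if (cache == NULL || cache->nemulators == 0)
        refresh = true;

    if (refresh)
        return probe();

    return cache;
}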
https://bugzilla.redhat.com/show_bug.cgi?id=1000116
https://bugzilla.redhat.com/show_bug.cgi?id=1171933
Adjust the processLU error returns to be a bit more logical. Currently,
the calling code cannot tell the difference between a non-disk/lun
volume and a processed/found disk/lun. Nor can it differentiate
between a real/fatal error and one that won't necessarily stop
the code from finding other volumes.
After this patch virStorageBackendSCSIFindLUsInternal will stop processing
as soon as a "fatal" error occurs rather than continuing processing
for no apparent reason. It will also only set the *found value when
at least one of the processLU calls was successful.
With the failed return, if the reason for the stop was that the pool
target path did not exist, was /dev, was /dev/, or did not start with
/dev, then iSCSI pool startup and refresh will fail.
Rather than passing/returning a pointer to a boolean to indicate that
perhaps we should try again, adjust the call to return the count of
LUs found during processing, and let the caller decide what to do
with that value.
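The resulting control flow looks roughly like this (a hypothetical sketch, not the actual backend code; the processing callback and its return codes are assumptions based on the description above):
/* Hypothetical sketch: process_lu() returns 0 when a LU was turned into a
 * volume, -1 on a fatal error, and a different negative value (e.g. -2)
 * for a non-fatal miss that should simply be skipped. */
static int
find_lus(int (*process_lu)(unsigned int lu), unsigned int nlus)
{
    int found = 0;
    unsigned int i;

    for (i = 0; i < nlus; i++) {
        int rc = process_lu(i);

        if (rc == -1)
            return -1;      /* fatal: stop scanning immediately */
        if (rc == 0)
            found++;        /* non-fatal misses are just skipped */
    }

    return found;           /* the caller decides what the count means */
}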
Use virStorageBackendPoolUseDevPath API to determine whether creation of
stable target path is possible for the volume.
This will differentiate a failed virStorageBackendStablePath call, which won't
need to be fatal. Thus, we'll add a -2 return value to indicate that
the failure was a result of either the inability to find the symlink for
the device or a failure to open the target path directory.
For virStorageBackendStablePath, in order to make decisions in other code,
split out the checks for whether the pool's target is empty, is /dev,
is /dev/, or doesn't start with /dev.
Commit 70f446631f (from 2008) introduced
some functions for testing whether xend was returning correct sound
models. Those functions are long gone, but the function prototypes
remain. This commit removes the unused prototypes.
Signed-off-by: Richard W.M. Jones <rjones@redhat.com>
This needs to be specified in way too many places for a simple validation
check. The ostype/arch/virttype validation checks later in
DomainDefParseXML should catch most of the cases that this was covering.
This revealed that GuestDefaultEmulator was a bit buggy, capable
of returning an emulator that didn't match the passed domain type. Fix
up the test suite input to continue to pass.
This is a helper function to look up all capabilities data for all
the OS bits that are relevant to <domain>. This is
- os type
- arch
- domain type
- emulator
- machine type
This will be used to replace several functions in later commits.
But the internal API stays the same, and we just convert the value as
needed. Not useful yet, but this is the beginning step of using an enum
for ostype throughout the code.
When parsing XML, we validate the passed ostype + arch combo against
the detected hypervisor capabilities. This has led to the following
problem:
- Define x86 qemu guest
- qemu is inadvertently removed from the host
- libvirtd is restarted; it fails to parse the VM config since the arch is gone
- 'virsh list --all' is now empty, and the user wonders where their VMs went
Add a new internal flag VIR_DOMAIN_DEF_PARSE_SKIP_OSTYPE_CHECKS. Use
it when loading VM and snapshot configs from disk.
https://bugzilla.redhat.com/show_bug.cgi?id=1043572
If no <os><type> was specified:
before: unknown OS type no OS type
after : xml error: an os <type> must be specified
If an <os><type> is specified that's not in our capabiliities data:
before: unknown OS type: $type
after : unsupported configuration: no support found for os <type> '$type'
VIR_ERR_OS_TYPE is now unused (as it should be frankly) so drop its strings
as well to save our translators some effort.
In Parallels we do not support device name hints
(aka the <target dev=../> option) or full-fledged
disk device addressing through
<address type=.. controller=.. bus=.. target=.. unit=../>;
we have only one index instead.
In this situation, to be consistent, we can only take
a one-to-one mapping from some reasonable subset
of the full address. Values outside this subset are
invalid for creating Parallels VMs.
A reasonable mapping is the default one defined in virDomainDiskDefAssignAddress.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@parallels.com>
We should return VIR_DRV_OPEN_ERROR in case
we handle the scheme in the query but some
error occurs. Previously we sometimes
returned VIR_DRV_OPEN_DECLINED.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@parallels.com>
# virsh -c lxc:/// start helloworld
error: Failed to start domain helloworld
error: internal error: guest failed to start: Unknown
failure in libvirt_lxc startup
Return success when there are no cpuset.mems to be set,
instead of failing without setting an error.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
# virsh -c lxc:/// start helloworld
error: Failed to start domain helloworld
error: internal error: guest failed to start: Invalid value '1-3'
for 'cpuset.mems': Invalid argument
Free the cpu mask to avoid reusing it as a mem mask
in virCgroupSetCpusetMems
if virDomainNumatuneMaybeFormatNodeset does not set a mask.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1209948
So we have this bug. The virConnectGetDomainCapabilities() API
performs a couple of checks before it produces any result. One of
the checks is whether the architecture requested by the user can be run
by the binary (again user provided). However, the check is pretty
dumb. It merely compares whether the default architecture of the binary
matches the one provided by the user. But a qemu binary can run
multiple architectures. For instance, qemu-system-ppc64 can run:
ppc, ppcle, ppc64, ppc64le and ppcemb. The default is ppc64, so
if the user requested something else, like ppc64le, the check would
fail for no obvious reason.
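In sketch form, the corrected check looks something like this (a standalone illustration with hard-coded data for qemu-system-ppc64, not the real capabilities lookup):
#include <stdbool.h>
#include <string.h>
/* Illustration only: compare the requested architecture against every arch
 * the binary can emulate, not just its default one. */
static bool
binary_supports_arch(const char *requested)
{
    static const char *archs[] = { "ppc", "ppcle", "ppc64", "ppc64le", "ppcemb" };
    size_t i;

    for (i = 0; i < sizeof(archs) / sizeof(archs[0]); i++) {
        if (strcmp(archs[i], requested) == 0)
            return true;
    }
    return false;
}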
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When a qemu domain is to be rebooted, from the outside it looks at the
libvirt level like a regular shutdown. To really restart the
domain, libvirt needs to issue a reset command on the monitor once the
SHUTDOWN event appears. So, in order to differentiate a bare
shutdown from a reboot, libvirt uses a variable within the domain private
data. It's called fakeReboot. When the reboot API is called, the
variable is set, but when the shutdown API is called it must be
cleared out. But it was not for every possible case. So if the user
called virDomainReboot(), and there was no ACPI daemon running
inside the guest (so the guest didn't initiate the shutdown sequence),
and then virDomainShutdown(mode=agent) was called, a bad thing
happened: we remembered the fakeReboot and instead of shutting
the domain down, we just rebooted it.
Signed-off-by: Zhang Bo <oscar.zhangbo@huawei.com>
Signed-off-by: Wang Yufei <james.wangyufei@huawei.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This is a simple wrapper around virNetDevBandwidthManipulateFilter() that
will update the desired filter on an interface (usually a network bridge)
with a new MAC address, although the MAC address in question usually
refers to some other interface - the one that the filter is constructed
for. Yeah, hard to parse. The thing is, our NATed network has a bridge where
some part of QoS takes place, and vNICs from guests are plugged into
the bridge. If a guest decides to change the MAC of its vNIC,
the corresponding qemu process emits an event which we can use to
update the QoS configuration based on the new MAC address. However,
our QoS hierarchy is currently not notified, therefore it falls apart.
This function (when called in response to the aforementioned event)
will update our QoS hierarchy and duct tape it together again.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Not only does this simplify the code a bit, it prepares the
environment for upcoming patches. The new
virNetDevBandwidthManipulateFilter() function is capable of both
removing a filter and adding a new one. At the same time! Yeah,
this is not currently used anywhere but look at the next commit
where you'll see it.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Currently, when constructing traffic shaping rules, the ingress
filter is created without any priority specified on the command
line. This makes the kernel make one up. While this works, it
simplifies things a bit if we provide the filter priority. In
this case, since it's the root filter, let's give it the highest
priority, number 1.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
After a360912179 the formatting of virDomainActualNetDefPtr was
changed a bit. However, during the function rewrite, the iface's class_id
is not formatted as often as it should be. In fact, after the rewrite
it's formatted only for an iface of type VIR_DOMAIN_NET_TYPE_DIRECT, where
it makes no sense and is unused, while where it is needed (_TYPE_NETWORK) it is
not formatted at all. This makes the daemon forget it upon daemon
restart, resulting in bad behaviour.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1211436
This reverts commit b7829f959b.
The previous fix was not correct. Like everywhere else, a driver is a
global variable allocated in the stateInitialize function (or something
similar for stateless drivers). Later, when a driver API is called,
it's possible that the global variable is accessed and dereferenced.
Now, some drivers require root privileges because they undertake some
actions reserved only for the system admin (e.g. manipulating the host
firewall). And here's the trouble: the NWFilter state initializer
exited too early when finding out it was running unprivileged, leaving
the global NWFilter driver variable uninitialized. Any subsequent
API call that tried to lock the driver dereferenced the uninitialized
driver and thus crashed.
On the other hand, in order not to resurrect the bug the original
commit was fixing, let's forbid the nwfilter define in session mode.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Conflicts:
src/nwfilter/nwfilter_driver.c: Context. Code changed a bit
since 2013.
There is a possibility that we jump onto the error label with @lockpath
still initialized to NULL. There, @lockpath should be unlink()-ed,
but passing a NULL to unlink() is not a good idea. Don't do that. In fact,
we should call unlink() only if we created the lock file successfully.
Reported-by: John Ferlan <jferlan@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
802.11 interfaces cannot be moved by themselves; their Phy has to move too.
If there are other interfaces, they have to move too -- hopefully it's not too
confusing. This is a less-invasive alternative to defining a new hostdev type
for PHYs.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
A destroy operation can take considerable time on large memory
domains due to scrubbing the domain's memory. Unlock the
virDomainObj while libxl_domain_destroy is executing.
Implement libxlDomainDestroyInternal wrapper to handle unlocking,
calling destroy, and locking. Change all callers of
libxl_domain_destroy to use the wrapper.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
A job should be acquired at the beginning of a domain destroy operation,
not at the end when cleaning up the domain. Fix two occurrences of this
late job acquisition in the libxl driver. Doing so renders
libxlDomainCleanupJob unused, so it is removed.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Let callers of libxlDomainStart decide when it is appropriate to
acquire a job on the associated virDomainObj.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Add support for HVM direct kernel boot in libxl. Also add a
test to verify domXML <-> native conversions.
Signed-off-by: Chunyan Liu <cyliu@suse.com>
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
In xl config, hvmloader is implied for hvm guests. It is not
specified with the "kernel" option like xm config. The "kernel"
option, along with "ramdisk" and "extra", is used for HVM direct
kernel boot. Instead of using "kernel" option to populate
virDomainDef object's os.loader->path, use hvmloader discovered
when gathering capabilities.
This change required fixing initialization of capabilities in
the test utils and removing 'kernel = "/usr/lib/xen/boot/hvmloader"'
from the test config files.
xl and xm differ a bit in how <os> configuration is represented.
E.g. xl config supports <os><nvram .../></os> via its "bios"
setting.
Move the xenParseOS and xenFormatOS functions out of xen_common.c
and copy them into xen_xl.c and xen_xm.c so they can be customized for
xm vs xl config. An unfortunate fallout is reordering of entries
in the test config files.
device_model is parsed in xenParseOS(), then later in
xenParseConfigCommon(). <emulator> is not part of <os>,
so it makes sense to remove the parsing from xenParseOS().
On RHEL-6 there is a broken gcc that reports this warning:
util/virbuffer.c:500: error: logical '&&' with non-zero constant will
always evaluate as true [-Wlogical-op]
Move the pragma directive before the virBufferEscapeString function because,
since commit aeb5262e, this function uses 'strchr' too.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
ts.tv_nsec was off by a factor of 1000, making timeouts less than a
second in the future often expire immediately.
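For illustration, here is what a correct millisecond-to-absolute-timespec conversion looks like (a standalone sketch, not the exact libvirt code):
#include <time.h>
/* Turn "ms milliseconds from now" into an absolute timespec. tv_nsec is in
 * nanoseconds, so the sub-second part must be multiplied by 1,000,000; the
 * bug effectively dropped that factor of 1000. */
static void
deadline_from_now_ms(struct timespec *ts, unsigned long long ms)
{
    clock_gettime(CLOCK_REALTIME, ts);
    ts->tv_sec += ms / 1000;
    ts->tv_nsec += (ms % 1000) * 1000000ULL;
    if (ts->tv_nsec >= 1000000000L) {
        ts->tv_sec++;
        ts->tv_nsec -= 1000000000L;
    }
}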
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
Among all the monitor APIs, some were checking whether mon is NULL and some
were not. Since it's possible for mon to be NULL in case a second
call is attempted once the monitor has been entered, every
single API needs to check for the monitor.
This patch adds a macro that helps checking the state of the monitor and
either refactors existing checking code to use the macro or adds it in
case it was missing.
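A sketch of the kind of macro this describes (simplified; it assumes libvirt's internal virReportError()/_() helpers, and the real macro's name, error code and extra checks may differ):
#define CHECK_MONITOR(mon)                                          \
    do {                                                            \
        if (!(mon)) {                                               \
            /* report an error and bail out before touching mon */  \
            virReportError(VIR_ERR_INVALID_ARG, "%s",               \
                           _("monitor must not be NULL"));          \
            return -1;                                              \
        }                                                           \
    } while (0)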
Rather than erroring out, make a best-effort attempt to retrieve the other data if
disks are inaccessible or missing. The failure will still be logged
though.
Since the bulk stats API is called on multiple domains an error like
this makes the API unusable. This regression was introduced by commit
596a137134
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1209394
The comment is describing arguments passed to the function.
However, there's no @ifmac argument. In 955af4d4 it was replaced
with @ifmac_ptr. Unfortunately, the comment wasn't updated.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Add virStringHasControlChars that checks if the string has
any control characters other than \t\r\n,
and virStringStripControlChars that removes them in-place.
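Minimal sketches of what the two helpers do (illustrative only; the real implementations live in src/util/virstring.c and may also treat DEL and similar characters as control characters):
#include <stdbool.h>
/* "Control character" here means anything below 0x20 except \t, \r and \n. */
static bool
stringHasControlChars(const char *str)
{
    for (; *str; str++) {
        unsigned char c = *str;
        if (c < 0x20 && c != '\t' && c != '\r' && c != '\n')
            return true;
    }
    return false;
}

/* Remove the offending characters in place. */
static void
stringStripControlChars(char *str)
{
    char *dst = str;
    for (; *str; str++) {
        unsigned char c = *str;
        if (c < 0x20 && c != '\t' && c != '\r' && c != '\n')
            continue;
        *dst++ = *str;
    }
    *dst = '\0';
}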
Throughout the code, we have several places that need to construct a path
somewhere in /sys/class/net/... They are not consistent and nearly
every code piece invents its own way of doing it. So unify this by:
1) using virNetDevSysfsFile() wherever possible
2) at least using the common macro SYSFS_NET_DIR declared in virnetdev.h in
the remaining places which can't go with 1)
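The helper's job is essentially this (a minimal sketch of building /sys/class/net/<ifname>/<file>; the real virNetDevSysfsFile() goes through libvirt's own allocation helpers):
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>

#define SYSFS_NET_DIR "/sys/class/net/"

/* Build "/sys/class/net/<ifname>/<file>"; the caller frees the result. */
static char *
netdev_sysfs_file(const char *ifname, const char *file)
{
    char *path = NULL;

    if (asprintf(&path, SYSFS_NET_DIR "%s/%s", ifname, file) < 0)
        return NULL;
    return path;
}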
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If a virAsprintf() within the function fails, we call VIR_FREE()
on the @rundir variable and jump onto the cleanup label, where it is
freed again. It doesn't hurt, but it doesn't make much sense either.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Commit f6563bc3 introduced an HMP impl of the function (so that a different,
uglier function could be removed). Before the HMP code is called there's
a leftover check that the monitor is JSON, which inhibits the code from
working.
https://bugzilla.redhat.com/show_bug.cgi?id=1200149
Even though we have a mutex mechanism so that two clients don't spawn
two daemons, it's not strong enough. It can happen that while one
client is spawning the daemon, the other one fails to connect.
Basically two possible errors can happen:
error: Failed to connect socket to '/home/mprivozn/.cache/libvirt/libvirt-sock': Connection refused
or:
error: Failed to connect socket to '/home/mprivozn/.cache/libvirt/libvirt-sock': No such file or directory
The problem in both cases is that the daemon is still starting up while we
are trying to connect (and failing). We should postpone the connecting
phase until the daemon is started (by the other thread that is
spawning it). In order to do that, create a file lock 'libvirt-lock'
in the directory where the session daemon would create its socket. Even
when called from multiple processes, spawning a daemon will then serialize
on the file lock, so only the first one to arrive will spawn the daemon.
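The serialization boils down to something like this (a rough standalone sketch using flock(); the real code goes through libvirt's file-locking helpers and connects or spawns while holding the lock):
#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>
/* Take an exclusive lock on <rundir>/libvirt-lock; blocks until whichever
 * client got there first has finished spawning the daemon. Returns the
 * lock fd, or -1 on error; the caller unlocks/closes it after connecting. */
static int
acquire_spawn_lock(const char *rundir)
{
    char path[4096];
    int fd;

    snprintf(path, sizeof(path), "%s/libvirt-lock", rundir);

    if ((fd = open(path, O_RDWR | O_CREAT, 0600)) < 0)
        return -1;

    if (flock(fd, LOCK_EX) < 0) {
        close(fd);
        return -1;
    }

    return fd;
}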
Tested-by: Richard W. M. Jones <rjones@redhat.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Two non-static functions in virjson.c were missing their export info in
libvirt_private.syms, so they couldn't be used anywhere in the code (and
that's about to get changed).
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Luckily we are allocating structs as clean memory and
PTHREAD_MUTEX_INITIALIZER is "{ 0 }", so nothing happened, but it should
still be created as a lockable object.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Check the proposed pool source host XML definition against existing gluster
pools to ensure the incoming definition doesn't use the same source dir and
source host XML definition as an existing pool.
Check the proposed pool source host XML definition against existing sheepdog
pools to ensure the incoming definition doesn't use the same source host XML
definition as an existing pool.
Rather than have duplicate code doing the same check, have the netfs
matching processing code use the new virStoragePoolSourceMatchSingleHost.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Create a separate iSCSI Source matching subroutine. Makes the calling
code a bit cleaner as well as sets up for future patches which need to
do better source hosts[0].name processing/checking.
As part of the effort the logic will be inverted from a multi-level
if statement to a series of single-level checks for better readability
and further separation.
Signed-off-by: John Ferlan <jferlan@redhat.com>
When acquiring resource via sanlock fails, we would report it as
VIR_ERR_INTERNAL_ERROR, which is not very friendly to applications using
libvirt. Moreover, the lockd driver would report the same failure as
VIR_ERR_RESOURCE_BUSY, which looks better.
Unfortunately, in sanlock driver we don't really know if acquiring the
resource failed because it was already locked or there was another
reason behind it. But the end result is the same, and I think using
VIR_ERR_RESOURCE_BUSY reason for all acquire failures is still better
than what we have now.
https://bugzilla.redhat.com/show_bug.cgi?id=1165119
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Commit 49ed6cff is broken on mingw and other non-linux platforms:
CCLD libvirt.la
Cannot export virNetDevSysfsFile: symbol not defined
collect2: error: ld returned 1 exit status
* src/util/virnetdev.c: Provide virNetDevSysfsFile fallback.
Signed-off-by: Eric Blake <eblake@redhat.com>
Found by ./autobuild.sh during a mingw cross-compile:
Commit 8a96e87 was not innocuous - glibc happens to leak the
definition of time() through other headers, so that even without
<sys/select.h>, virrandom.c compiled just fine. But on mingw,
we were not so lucky; <sys/select.h> was important for its side
effect of dragging in <time.h>, and we now have nothing providing
the declaration of time():
../../src/util/virrandom.c: In function 'virRandomOnceInit':
../../src/util/virrandom.c:65:5: error: implicit declaration of function 'time' [-Werror=implicit-function-declaration]
unsigned int seed = time(NULL) ^ getpid();
^
../../src/util/virrandom.c:65:5: error: nested extern declaration of 'time' [-Werror=nested-externs]
Signed-off-by: Eric Blake <eblake@redhat.com>
Changing the prototype to not have "int *index" since we'll soon be
disallowing index as a name. Curiously the original commit (a4504ac)
for the function used 'int idx' in the function - so they didn't match.
Now they do.
It is there even with -nodefaults and -no-user-config, so account for
that so we can start sparc domains.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
The variable 'last_processed_hostdev_vf' indicates the index of the last
successfully configured VF. When we reach resetvfnetconfig because of a failure,
hostdevs[last_processed_hostdev_vf] should also be reset.
Signed-off-by: Huanle Han <hanxueluo@gmail.com>
1. 'last_good_net' indicates the index of the last successfully configured
net, so def->nets[last_good_net] should also be cleaned up if an error occurs.
2. If an error occurs in 'virNetDevMacVLanVPortProfileRegisterCallback'
(the second 'goto err_exit' in the loop), we should also do the
'virNetDevVPortProfileDisassociate' cleanup for the
'virNetDevVPortProfileAssociate' (the first code block in the loop). So we should
consider the net successfully configured once the first code block in the
loop finishes.
Signed-off-by: Huanle Han <hanxueluo@gmail.com>
After setting memory parameters for a running domain, the change needs to be
saved to the live XML, otherwise it will disappear after restarting libvirtd.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1211548
Signed-off-by: Shanzhi Yu <shyu@redhat.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Apparently for Xen-devel 'index' is a global and causes a build failure,
so just use the shortened 'idx' instead to avoid the conflict.
Signed-off-by: John Ferlan <jferlan@redhat.com>
QEMU does not abandon the mirror. The job carries on in the synchronised
phase and it might be either pivoted again or cancelled. The commit
hints that the described behavior was happening in a downstream version.
If the command returns false there are two possible options:
1) qemu did not reach the point where it would ask the block job to
pivot
2) pivoting failed in the actual qemu coroutine
If either of those happens, we return failure and reset the
condition that waits for the block job to complete. This makes the API
fail, but in the case where qemu actually abandons the mirror, the fact
is notified via the event and handled asynchronously.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1202704
qemuDomainBlockJobImpl became an unmaintainable mess over the years of
adding new stuff to it. This patch starts splitting individual
functions out of it until it can be killed entirely.
In bulk this will add lines of code rather than delete them, but that is
traded for maintainability.
My intention is to split qemuMonitorJSONBlockJob() into simpler separate
functions for every block job type. Since the error handling code is the
same for all block jobs, this patch extracts the code into a separate
function that will later be reused in more places.
With the new helper qemuMonitorJSONErrorIsClass we can save a few
function calls as we can extract the error object once.
Split out the function that checks the actual error class string into a
separate helper as it will be useful later and refactor
qemuMonitorJSONHasError to return a bool type and remove a few useless
checks.
Basically the virJSONValueObjectHasKey calls are useless here since the next call
to virJSONValueObjectGet checks the return value again (which can't
fail at that point). By removing the first check we save a function
call.
Previously we checked that the vcpu we are trying to set is in range of
the number of threads presented by qemu. The problem is that if the VM
is offline the count is 0. Since the condition subtracted 1 from the
count the number would overflow and the check would never trigger.
Change the condition to more sensible ones with specific error
messages.
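The shape of the problem in miniature (a standalone illustration of the overflow, not the driver code itself):
#include <stdbool.h>
/* ncpus is the number of vCPU threads reported by qemu (0 when the domain
 * is offline). The old check was written as "vcpu > ncpus - 1": with an
 * unsigned ncpus of 0 the subtraction wraps around, so the check never
 * fires. Handling the empty case separately avoids the overflow and lets
 * a specific error be reported for each case. */
static bool
vcpu_in_range(unsigned int vcpu, unsigned int ncpus)
{
    if (ncpus == 0)
        return false;    /* offline domain: report "no live vCPUs" */

    return vcpu < ncpus; /* otherwise a plain range check */
}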
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1208434
Refactor the code that parses vcpupin to be structured similarly to how the
iothreadpin code now is. This allows getting rid of some very strange
conditions and error messages.
Additionally, since an existing bug
( https://bugzilla.redhat.com/show_bug.cgi?id=1208434 ) allows adding
vcpupin definitions for vcpus that don't exist, this patch makes the
parser ignore all vcpupins that don't have a matching vCPU in the
definition rather than just offlined ones.
Defining a domain with the following config:
<domain ...>
...
<iothreads>1</iothreads>
<cputune>
<iothreadpin cpuset='1'/>
will result in the following config formatted back:
<domain type='kvm'>
...
<iothreads>1</iothreads>
<cputune>
<iothreadpin iothread='0' cpuset='1'/>
After a restart the VM would vanish. Since our schema requires the
@iothread field to be present in <iothreadpin>, make it required by the
code too.