As virDomainNumatuneSet no longer allocates the virDomainNuma object,
it's not necessary to pass a pointer to a pointer to store the object,
since the pointer will never change.
While touching the parameter definitions I've also renamed the
parameter to "numa".
Since our formatter now handles well the case where the config is
allocated but not filled, we can safely always allocate the NUMA config
and remove the ad-hoc allocation code.
This will help in later patches as the parser will be refactored to just
fill the data.
Rename the existing virDomainDefNew to virDomainDefNewFull, as it sets
a few things in the conf, and re-introduce virDomainDefNew as a function
without parameters for common use.
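A rough sketch of the resulting split (the arguments of the Full variant
are paraphrased from memory):

  /* parameter-less allocator for common use */
  virDomainDefPtr virDomainDefNew(void);

  /* the old behaviour, setting a few fields up front */
  virDomainDefPtr virDomainDefNewFull(const char *name,
                                      const unsigned char *uuid,
                                      int id);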
Do a content-aware check of whether formatting the <numatune> element is
necessary. Later on the def->numa structure will always be present, so we
cannot decide solely on the basis of whether it's allocated.
Shuffling the logic around will allow the code to be simplified quite a bit.
As an additional bonus, the changed logic now reports an error if
automatic placement is selected while individual placement is configured.
Previously the code would exit without reporting an error, as
virBitmapParse reports one only if it fails to parse the bitmap, whereas
the code was jumping to the error label even when 0 cpus were
correctly parsed in the map.
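Illustrative shape of the new check (error code and message text are
only an approximation of the real code):

  if (placement == VIR_DOMAIN_NUMATUNE_PLACEMENT_AUTO && nodeset) {
      virReportError(VIR_ERR_XML_ERROR, "%s",
                     _("nodeset cannot be specified together with "
                       "automatic placement"));
      goto cleanup;
  }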
It's easier to recalculate the number in the one place it's used than to
keep a separate variable tracking it. It will also help with moving
the NUMA code to a separate module.
Name it virNumaMemAccess and add it to conf/numa_conf.[ch].
Note that to avoid a circular dependency the type of the NUMA cell
memAccess variable was changed to int. It will be turned back later,
once the circular dependency no longer exists.
The mask was stored both as a bitmap and as a string. The string is used
for XML output only. Remove the string, as it can be reconstructed from
the bitmap.
The test change is necessary as the bitmap formatter doesn't "optimize"
using the '^' operator.
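The string can instead be produced on demand for the XML output, roughly
like this (variable and attribute names are illustrative):

  char *cpustr = virBitmapFormat(cell->cpumask);   /* "1,3-5" style, no '^' negation */
  if (!cpustr)
      return -1;
  virBufferAsprintf(&buf, " cpus='%s'", cpustr);
  VIR_FREE(cpustr);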
Rewrite the function to save a few local variables and reorder the code
to make more sense.
Additionally, the ncells_max member of the virCPUDef structure is used
only for tracking allocation when parsing the NUMA definition; this can
be avoided by switching to VIR_ALLOC_N, as the array is not resized
after the initial allocation.
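The allocation then boils down to something like this (simplified):

  /* one allocation up front; the array is never resized, so ncells_max
     and the VIR_RESIZE_N bookkeeping are unnecessary */
  if (VIR_ALLOC_N(def->cells, n) < 0)
      return -1;
  def->ncells = n;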
For weird historical reasons NUMA cells are added as a subelement of
<cpu> while the actual configuration is done in <numatune>.
This patch splits the cell parser code out of the cpu config into the
NUMA config. Note that the changes to the code are minimal, just enough
to make it work; the function will be refactored in the next patch.
For a while now there have been two places that gather information about
NUMA-related guest configuration. While the XML can't be changed, we can
at least store the data in one place in the definition.
Rename the numatune_conf.[ch] files to numa_conf as later patches will
move the rest of the definitions from the cpu definition to this one.
The "virDomainGetInfo" will get for running domain only live info and for
offline domain only config info. There was no way how to get config info
for running domain. We will use "vshCPUCountCollect" instead to get the
correct cpu count that we need to pass to "virDomainGetVcpuPinInfo".
Also cleanup some unnecessary variables and checks that are done by
drivers.
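A rough sketch of the virsh side (the argument list of vshCPUCountCollect
is abbreviated and assumed here):

  /* pick the cpu count that matches the requested --live/--config flags */
  if ((ncpus = vshCPUCountCollect(ctl, dom, flags, true)) < 0)
      goto cleanup;

  if (virDomainGetVcpuPinInfo(dom, ncpus, cpumaps, cpumaplen, flags) < 0)
      goto cleanup;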
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1160559
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Not all machine types support all devices, device properties, backends,
etc. So until we create a matrix of [machineType, qemuCaps], let's just
filter out some capabilities before we return them to the consumer
(which is going to make decisions based on them straight away).
Currently, since qemu is unable to tell which capabilities are (not)
enabled for given machine types, it's us who have to hardcode the matrix.
One day maybe the hardcoding will go away and we will be able to create
the matrix dynamically, on the fly, based on a few monitor calls.
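The shape of such a filter could look like this (the capability and
machine type shown here are purely illustrative):

  static void
  virQEMUCapsFilterByMachineType(virQEMUCapsPtr qemuCaps,
                                 const char *machineType)
  {
      /* hardcoded knowledge: this machine type cannot use this capability */
      if (!STRPREFIX(machineType, "pseries"))
          virQEMUCapsClear(qemuCaps, QEMU_CAPS_PCI_MULTIBUS);
  }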
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
When editing a domain with 'virsh edit' and failing validation, the
usual message pops up:
Failed. Try again? [y,n,f,?]:
Turning off validation can be useful, mainly for testing (but for other
purposes too), so this patch adds support for relaxing the definition
validation in virsh-edit and makes 'virsh edit <domain>' more usable.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
1. Delete all boot devices for the VM instance
2. Find the first HDD in the XML and set it as bootable
For now we support only one boot device, and it should be an HDD.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Not all files we want to find using virFileFindResource{,Full} are
generated when libvirt is built; some of them (such as RNG schemas) are
distributed with the sources. The current API was not able to find source
files if libvirt was built in a VPATH.
Both the RNG schemas and cpu_map.xml are distributed in the source tarball.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1179678
When migrating with storage, libvirt iterates over domain disks and
instructs qemu to migrate the ones we are interested in (shared, RO and
source-less disks are skipped). The disks are migrated in series. No
new disk is transferred until the previous one has been quiesced.
This is checked on the qemu monitor via the 'query-block-jobs' command.
If the disk has been quiesced, it has practically gone from copying its
content to the mirroring state, where all disk writes are mirrored to
the other side of the migration too. Having said that, there's one
inherent error in the design. The monitor command we use reports only
active jobs. So if the job fails for whatever reason, we will no longer
see it in the command output. And this can happen fairly easily: just
try to migrate a domain with storage. If the storage migration fails
(e.g. due to ENOSPC on the destination) we resume the guest on the
destination and let it run on partly copied disks.
The proper fix is what even the comment in the code says: listen for
qemu events instead of polling. If the storage migration changes state,
an event is emitted and we can act accordingly: either consider the disk
copied and continue the process, or consider the disk mangled and abort
the migration.
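Conceptually the loop then becomes something like this (the wait helper
is a hypothetical stand-in for whatever primitive the code has available;
the mirror-state fields are real):

  /* wait until the event handler records a state for this disk's mirror job */
  while (disk->mirrorState == VIR_DOMAIN_DISK_MIRROR_STATE_NONE) {
      if (qemuBlockJobWait(vm, disk) < 0)   /* hypothetical helper */
          goto error;
  }
  if (disk->mirrorState != VIR_DOMAIN_DISK_MIRROR_STATE_READY)
      goto error;                           /* job failed or was cancelled */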
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Upon BLOCK_JOB_COMPLETED event delivery, we check if the job has
completed (in qemuMonitorJSONHandleBlockJobImpl()). To give a better
picture, the event looks something like this:
"timestamp": {"seconds": 1423582694, "microseconds": 372666}, "event":
"BLOCK_JOB_COMPLETED", "data": {"device": "drive-virtio-disk0", "len":
8412790784, "offset": 409993216, "speed": 8796093022207, "type":
"mirror", "error": "No space left on device"}}
If "len" does not equal "offset" it's considered an error, and we can
clearly see "error" field filled in. However, later in the event
processing this case was handled no differently to case of job being
aborted via separate API. It's time that we start differentiate these
two because of the future work.
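The gist of the distinction (variable names follow the event fields, the
rest is simplified):

  /* a job that stopped short of copying everything has failed;
     it was not cancelled through our API */
  if (offset != len)
      status = VIR_DOMAIN_BLOCK_JOB_FAILED;     /* not the same as CANCELED */
  else
      status = VIR_DOMAIN_BLOCK_JOB_COMPLETED;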
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Currently, upon a BLOCK_JOB_* event, disk->mirrorState is not updated
every time. The callback code handling the events checks whether a
blockjob was started via our public APIs prior to setting the
mirrorState. However, some block jobs may be started internally (e.g.
during storage migration), in which case we don't bother with setting
disk->mirror (there's nothing we could set it to anyway) or the other
fields. But it will come in handy if we update the mirrorState in these
cases too. The event wasn't delivered just for fun - we've started the
job after all.
So, in this commit, the mirrorState is set to whatever job status
we've obtained. Of course, there are some actions on some statuses
that we want to perform. But instead of if {} else if {} else {} ...
enumeration, let's move to switch().
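Roughly (simplified; the per-status cleanup of disk->mirror is omitted):

  switch (status) {
  case VIR_DOMAIN_BLOCK_JOB_READY:
      disk->mirrorState = VIR_DOMAIN_DISK_MIRROR_STATE_READY;
      break;
  case VIR_DOMAIN_BLOCK_JOB_COMPLETED:
  case VIR_DOMAIN_BLOCK_JOB_FAILED:
  case VIR_DOMAIN_BLOCK_JOB_CANCELED:
      disk->mirrorState = VIR_DOMAIN_DISK_MIRROR_STATE_NONE;
      break;
  }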
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Libvirt could crash with a segfault if a user issues "service reload"
right after "service start". One possible way to crash libvirt is to run
a reload during initialization of the QEMU driver.
It can happen when the qemu driver has initialized qemu_driver_lock but
hasn't yet had time to set its "config" and the SIGHUP arrives. The
reload handler then tries to get qemu_drv->config during
"virStorageAutostart" and dereferences it, which ends with a segfault.
Let's ignore all reload requests until all drivers are initialized. In
addition, reset driversInitialized before we enter virStateCleanup to
ignore reload requests while we are shutting down.
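The guard itself is tiny, conceptually (message text is approximate):

  /* in the SIGHUP/reload handler */
  if (!driversInitialized) {
      VIR_WARN("Drivers are not initialized, reload ignored");
      return;
  }
  if (virStateReload() < 0)
      VIR_WARN("Error while reloading drivers");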
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1179981
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
In preparation for adding docs about virtlockd, split out
the sanlock setup docs into a separate page.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
If 'virNumaGetHostNodeset()' fails, the error path will try to free the
uninitialized pointer mem_mask. Introduced by commit af2a1f058.
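The fix amounts to making sure the pointer starts out NULL, e.g.:

  char *mem_mask = NULL;   /* so the error path can safely VIR_FREE() it */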
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
PowerPC: Forbid NULL CPU model with 'host-model' mode in the qemu command line.
This ensures that an XML such as the following:
...
<cpu mode='host-model'>
<model fallback='allow'/>
</cpu>
...
will not generate a '-cpu host,compat=(null)' command line with qemu-system-ppc64.
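A sketch of the guarded command-line generation (simplified; buffer and
variable names are illustrative):

  if (cpu->mode == VIR_CPU_MODE_HOST_MODEL && cpu->model) {
      /* only emit compat= when a model name is actually present */
      virBufferAsprintf(&buf, "host,compat=%s", cpu->model);
  } else {
      virBufferAddLit(&buf, "host");
  }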
Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com>
PowerPC: Explicitly associate 'qemu-system-ppc64' as the default
emulator for all 64-bit PowerPC guests (both Big & Little Endian).
Signed-off-by: Prerna Saxena <prerna@linux.vnet.ibm.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1126762
Commit 43b67f introduced a deadlock issue when we use numatune
to change the NUMA settings of a vm in session mode.
Jump to endjob instead of jumping to cleanup.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
So, when building the '-numa' command line, the
qemuBuildMemoryBackendStr() function does quite a lot of checks to
choose the best backend, or to check whether one is in fact needed.
However, it returned that a backend is needed even for this little fella:
<numatune>
<memory mode="strict" nodeset="0,2"/>
</numatune>
This can be guaranteed entirely via CGroups; there's no need to use
memory-backend-ram to let qemu know where to get memory from. Well, as
long as there's no <memnode/> element, which explicitly requires the
backend. Long story short, we wouldn't have to care, as qemu works
either way. However, the problem is migration (as always). Previously,
libvirt would have started qemu with:
-numa node,memory=X
in this case and restricted memory placement in CGroups. Today, libvirt
creates more complicated command line:
-object memory-backend-ram,id=ram-node0,size=X
-numa node,memdev=ram-node0
Again, one wouldn't find anything wrong with these two approaches.
Both work just fine. Unless you try to migrate from the older libvirt
to the newer one. These two approaches are, unfortunately, not
compatible. My suggestion is, in order to allow users to migrate, let's
use the older approach for as long as the newer one is not needed.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Well, we can pretend that we've asked numad for its suggestion and let
the qemu command line be built with respect to that. Again, this alone
has no big value, but see later commits which build on top of this.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Periodically my Coverity scan returns a checked_return failure
for the libxlDomainShutdownThread call to libxlDomainStart. Follow the
libxlAutostartDomain example in order to check the status, emit a
message, and continue on.
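The pattern, following libxlAutostartDomain (the argument list of
libxlDomainStart is abbreviated from memory):

  if (libxlDomainStart(driver, vm, false, -1) < 0) {
      virErrorPtr err = virGetLastError();
      VIR_ERROR(_("Failed to restart VM '%s': %s"),
                vm->def->name, err ? err->message : _("unknown error"));
  }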
Jumping to the cleanup label prior to starting the container failed to
properly clean up everything that virLXCProcessCleanup handles, which is
called via virLXCProcessStop when a failure occurs after the container
has properly started. Most importantly, prior to this patch none of the
stop/release hooks, host device reattachment, or network cleanup (the
reverse of virLXCProcessSetupInterfaces) was performed.
Signed-off-by: Luyao Huang <lhuang@redhat.com>
Modify the VIR_DEBUG message in virLXCProcessCleanup to be clearer about
the path taken. Also add some more VIR_DEBUG messages in
virLXCProcessStart in order to help debug the error flow.
https://bugzilla.redhat.com/show_bug.cgi?id=1176503
Move the two console checks - one for zero nconsoles present and the
other for an invalid console type - earlier in the processing, rather
than failing after performing some setup that has to be undone for what
amounts to an invalid configuration.
This resolves the above bug since it's no longer possible to have changed
the security labels when we cause the configuration check failure.