This test data was gathered on an AMD MagnyCours machine that reports it
has only one NUMA node although the hardware actually consists of 4. As
duplicate core IDs are ignored, the reported topology was bogus. This
should be fixed by the previous patch.
Reported and data provided by George-Cristian Bîrzan.
Lately there have been a few reports of the output of the virsh nodeinfo
command being inaccurate. This patch tries to avoid that by checking
whether the topology actually makes sense. If it doesn't, we report a
synthetic topology that indicates to the user that the host capabilities
should be checked for the actual topology.
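As a rough illustration of the kind of sanity check involved (names and
the exact fallback are hypothetical, not the actual nodeinfo code), the
reported topology is only trusted if it accounts for all online CPUs:

    #include <stdio.h>

    /* Hypothetical sketch: if nodes*sockets*cores*threads cannot account
     * for the online CPUs, fall back to a flat synthetic topology. */
    static void reportTopology(unsigned cpus, unsigned nodes, unsigned sockets,
                               unsigned cores, unsigned threads)
    {
        if (nodes * sockets * cores * threads != cpus) {
            nodes = 1;
            sockets = 1;
            cores = cpus;   /* every CPU shown as its own core */
            threads = 1;
        }
        printf("cpus=%u nodes=%u sockets=%u cores=%u threads=%u\n",
               cpus, nodes, sockets, cores, threads);
    }

    int main(void)
    {
        /* bogus input similar to the reported case: 48 CPUs, but a
         * topology that only accounts for 12 of them */
        reportTopology(48, 1, 1, 12, 1);
        return 0;
    }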
Currently, if the user calls virDomainAbortJob we just issue
'migrate_cancel' and hope for the best. However, if the user calls
the API in the wrong phase, before migration has actually been started
(in the perform phase), the cancel request is just ignored. With this
patch, the request is remembered and, as soon as the perform phase
starts, the migration is cancelled.
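Sketched in plain C with made-up names (the real qemu driver keeps this
state in its job-tracking structure), the idea is simply to latch the
request so the perform phase can act on it:

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical job state shared between the abort API and migration. */
    struct job {
        bool abortRequested;
    };

    /* Abort API: remember the request even if migration has not reached
     * the perform phase yet. */
    void jobAbort(struct job *job)
    {
        job->abortRequested = true;
    }

    /* Start of the perform phase: honour an earlier abort request
     * instead of silently ignoring it. */
    int performPhaseStart(struct job *job)
    {
        if (job->abortRequested) {
            fprintf(stderr, "migration cancelled before it started\n");
            return -1;
        }
        return 0;   /* proceed with the migration */
    }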
For S390, the default console target type cannot be 'serial'.
It is necessary to at least interpret the 'arch' attribute
value of the os/type element to produce the correct default type.
Therefore we need to extend the signature of defaultConsoleTargetType
to account for the architecture. As a consequence, all drivers
implementing this capability callback must be updated.
Despite the number of changed files, the only change in behavior is
that for S390 the default console target type will be 'virtio'.
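A simplified sketch of what the extended callback could look like (the
real libvirt signature and type names may differ; this only illustrates
passing the architecture through):

    #include <string.h>

    enum { CONSOLE_TARGET_SERIAL, CONSOLE_TARGET_VIRTIO };

    /* Before: the callback only saw the OS type.
     * After (sketch): it also receives the architecture, so per-arch
     * defaults such as S390 -> virtio become possible. */
    int defaultConsoleTargetType(const char *ostype, const char *arch)
    {
        (void)ostype;
        if (arch && (strcmp(arch, "s390") == 0 || strcmp(arch, "s390x") == 0))
            return CONSOLE_TARGET_VIRTIO;
        return CONSOLE_TARGET_SERIAL;
    }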
N.B.: A more future-proof approach could be to use hypervisor-specific
capabilities to determine the best possible console type.
For instance one could add an opaque private data pointer to the
virCaps structure (in case of QEMU to hold capsCache) which could
then be passed to the defaultConsoleTargetType callback to determine
the console target type.
This seems, however, a bit over-engineered for the use case...
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
When the libvirt daemon is restarted it tries to reconnect to running
qemu domains. Since commit d38897a5d4 the
re-connection code runs in separate threads. In the original
implementation, the maximum of the domain IDs (which is used as the
initializer for numbering guests created later) was determined while
libvirt was reconnecting to the guests.
With the threaded implementation this opens a window for race
conditions with the thread that is autostarting guests. When there's a
guest running with ID 1 and the daemon is restarted, the autostart code
may be reached first and spawn the first guest that should be autostarted
with ID 1. This results in the following unwanted situation:
# virsh list
 Id    Name                           State
----------------------------------------------------
 1     guest1                         running
 1     guest2                         running
This patch moves the ID detection code to before the re-connection
threads are started, so that the maximum ID of the guests being
reconnected to is known in advance.
The only semantic change introduced by this is that if the guest with the
greatest ID quits before we are able to reconnect to it, its ID is still
used as the greatest one; without this patch, the greatest ID of a guest
we could successfully reconnect to would have been used instead.
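In outline (illustrative code, not the driver's actual data structures),
the highest ID is now determined in a single pass before any thread is
spawned:

    #include <stddef.h>

    /* Illustrative: scan the IDs of the still-running guests up front;
     * the next-ID counter is initialized from this before the reconnect
     * and autostart threads are started. */
    unsigned int highestDomainID(const unsigned int *ids, size_t nids)
    {
        unsigned int max = 0;

        for (size_t i = 0; i < nids; i++) {
            if (ids[i] > max)
                max = ids[i];
        }
        return max;
    }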
Commit 82507838 refactored the code to keep both the raw and canonicalized
form of the backingStore, which breaks badly when the storage pool contains
a storage volume that is missing its backing store file:
# ./daemon/libvirtd -l
2012-11-07 12:43:33.279+0000: 22175: info : libvirt version: 1.0.0
2012-11-07 12:43:33.279+0000: 22175: error : absolutePathFromBaseFile:542 : Can't canonicalize path '/var/lib/libvirt/images/base.qcow2': No such file or directory
2012-11-07 12:43:33.280+0000: 22175: error : storageDriverAutostart:115 : Failed to autostart storage pool 'default': Can't canonicalize path '/var/lib/libvirt/images/base.qcow2': No such file or directory
This is because virStorageFileGetMetadataFromBuf() aborts with -1 if the
filename of the backingStore cannot be canonicalized:
#0 absolutePathFromBaseFile () at util/storage_file.c:541
#1 virStorageFileGetMetadataFromBuf () at util/storage_file.c:728
#2 virStorageFileGetMetadataFromFD () at util/storage_file.c:932
#3 virStorageBackendProbeTarget () at storage/storage_backend_fs.c:94
#4 virStorageBackendFileSystemRefresh () at storage/storage_backend_fs.c:849
#5 storagePoolStart () at storage/storage_driver.c:700
#6 virStoragePoolCreate () at libvirt.c:12471
...
Treat files whose backing file is missing as standalone files.
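The essence of the fix, as a hedged sketch (the real code works on
virStorageFileMetadata and resolves the path relative to the base file):
a missing backing file is no longer fatal, the volume simply keeps only
the raw backing name:

    #include <errno.h>
    #include <limits.h>
    #include <stdlib.h>
    #include <string.h>

    /* Sketch: canonicalize the backing file if it exists; if it is
     * missing, return success with *canonical left NULL so the volume is
     * treated as standalone instead of failing the pool refresh. */
    int canonicalizeBackingFile(const char *raw, char **canonical)
    {
        char buf[PATH_MAX];

        *canonical = NULL;
        if (realpath(raw, buf) == NULL) {
            if (errno == ENOENT)
                return 0;          /* missing backing file: not fatal */
            return -1;             /* other errors still fail */
        }
        *canonical = strdup(buf);
        return *canonical ? 0 : -1;
    }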
Signed-off-by: Philipp Hahn <hahn@univention.de>
This patch adds support for external disk snapshots of inactive domains.
The snapshot is created using qemu-img by calling:
qemu-img create -f format_of_snapshot -o
backing_file=/path/to/src,backing_fmt=format_of_backing_image
/path/to/snapshot
in case the backing image format is known or probing is allowed and
otherwise:
qemu-img create -f format_of_snapshot -o backing_file=/path/to/src
/path/to/snapshot
on each of the disks selected for snapshotting. This patch also modifies
the snapshot preparation function to support creating external snapshots
and to sanitize its arguments. For now, the user isn't able to mix external
and internal snapshots, but this restriction might be lifted in the
future.
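For illustration only, here is how the first form of the command could be
assembled for one disk (plain C with placeholder paths and formats; the
driver uses its own command-building helpers):

    #include <stdio.h>

    int main(void)
    {
        /* Placeholders standing in for the per-disk data. */
        const char *src = "/path/to/src";
        const char *backing_fmt = "qcow2";
        const char *snapshot_fmt = "qcow2";
        const char *dst = "/path/to/snapshot";
        char opts[512];
        char cmd[1024];

        snprintf(opts, sizeof(opts), "backing_file=%s,backing_fmt=%s",
                 src, backing_fmt);
        snprintf(cmd, sizeof(cmd), "qemu-img create -f %s -o %s %s",
                 snapshot_fmt, opts, dst);
        puts(cmd);   /* only prints the command that would be executed */
        return 0;
    }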
External checkpoints could be created with snapshot-create, but
without libvirt supplying a default name for the memory file,
it is essential to add a new argument to snapshot-create-as to
allow the user to choose the memory file name. This adds the
option --memspec [file=]name[,snapshot=type], where type can
be none, internal, or external. For example,
virsh snapshot-create-as $dom --memspec /path/to/file
is the shortest possible command line for creating an external
checkpoint, named after the current timestamp.
* tools/virsh-snapshot.c (vshParseSnapshotMemspec): New function.
(cmdSnapshotCreateAs): Use it.
* tests/virsh-optparse (test_url): Test it.
* tools/virsh.pod (snapshot-create-as): Document it.
So far, none of the existing callers of vshStringToArray expected
the user to ever pass a literal comma; meanwhile, snapshot parsing
had rolled its own array parser. Moving the comma escaping into
the common function won't affect any existing callers, and will make
this function reusable for adding memory handling to snapshot parsing.
As a bonus, the testsuite was already testing snapshot parsing, so
the fact that the test still passes means that we are now giving
testsuite exposure to vshStringToArray.
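A hedged sketch of the escaping convention itself (not the actual virsh
code): a doubled comma stands for a literal comma inside a field, a
single comma separates fields:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Split 'str' on single commas; ",," yields a literal comma inside a
     * field. Returns the number of fields stored in 'fields'. */
    size_t splitCommaEscaped(const char *str, char **fields, size_t max)
    {
        size_t n = 0, len = 0;
        char *cur = calloc(strlen(str) + 1, 1);

        if (!cur)
            return 0;
        for (const char *p = str; ; p++) {
            if (*p == ',' && p[1] == ',') {
                cur[len++] = ',';    /* escaped comma */
                p++;
            } else if (*p == ',' || *p == '\0') {
                cur[len] = '\0';
                if (n < max)
                    fields[n++] = strdup(cur);
                len = 0;
                if (*p == '\0')
                    break;
            } else {
                cur[len++] = *p;
            }
        }
        free(cur);
        return n;
    }

    int main(void)
    {
        char *fields[8];
        size_t n = splitCommaEscaped("vda,file=/tmp/a,,b,snapshot=external",
                                     fields, 8);

        for (size_t i = 0; i < n; i++) {
            printf("%zu: %s\n", i, fields[i]);
            free(fields[i]);
        }
        return 0;
    }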
* tools/virsh-snapshot.c (vshParseSnapshotDiskspec): Move ,,
parsing...
* tools/virsh.c (vshStringToArray): ...into common function.
Also, vshStrdup can't fail.
Some operations and APIs need the domain to be paused before they can be
performed, e.g. a (managed) save of a domain. The processors should be
resumed at the end. However, if 'cont' fails for some reason, we log a
message, but this is not sufficient: an event should be emitted as
well, so that the management application can decide what to do.
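The intended control flow, with every name below being a hypothetical
stand-in for the driver's real helpers:

    #include <stdio.h>

    /* Hypothetical stand-ins; only the control flow matters here. */
    static int resumeDomain(void)
    {
        return -1;   /* pretend 'cont' failed */
    }

    static void emitSuspendedErrorEvent(void)
    {
        puts("event: domain left paused after failed resume");
    }

    int main(void)
    {
        if (resumeDomain() < 0) {
            /* logging alone is not enough; emit an event so the
             * management application can decide what to do */
            fprintf(stderr, "failed to resume domain after save\n");
            emitSuspendedErrorEvent();
            return 1;
        }
        return 0;
    }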
This is supposed to be emitted every time we need to pause a domain
because of API execution (e.g. qemuDomainSaveInternal) but fail
to resume it afterwards. In this case the domain remains paused,
but none of the existing reasons fits this scenario.
Make it clear that the alternate terms differ only in how long
they have been supported.
* tools/virsh.pod (start, shutdown, reboot): More documentation.
https://bugzilla.redhat.com/show_bug.cgi?id=873344 suggested that
the grouping 'boot', 'shutdown', 'reboot', as well as the grouping
'start', 'stop', 'restart', might be easier to remember than the
current mix of 'start', 'shutdown', 'reboot'.
Also, touch up the wording of 'reboot' to be more accurate.
* tools/virsh-domain.c (domManagementCmds): Add other command names.
* tools/virsh.pod (start, shutdown, reboot): Document the aliases.
The code that was split out into qemuDomainSaveMemory expands the
buffer containing the XML description of the domain that it gets from
higher layers. If the pointer changes, the old one becomes invalid and the
upper-layer function tries to free it, causing an abort.
This patch changes the code to allocate a new buffer and copy the
contents instead of expanding the original string.
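Reduced to plain C, the fixed pattern looks roughly like this (simplified;
the real code operates on the domain XML buffer):

    #include <stdlib.h>
    #include <string.h>

    /* Buggy pattern: growing (realloc'ing) the caller's buffer can move
     * it, leaving the caller with a dangling pointer it later frees.
     *
     * Fixed pattern (sketch): build the expanded content in a newly
     * allocated buffer and leave the caller's pointer untouched. */
    char *expandIntoCopy(const char *orig, const char *extra)
    {
        size_t len = strlen(orig) + strlen(extra) + 1;
        char *copy = malloc(len);

        if (!copy)
            return NULL;
        strcpy(copy, orig);
        strcat(copy, extra);
        return copy;   /* caller frees the copy; 'orig' stays valid */
    }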
After the connection to ESX 5.1 had been broken since g1e7cd39, the fix
in bab7752c helped a bit but still missed a spot, so the connection
now succeeds but some APIs (for example defineXML) don't work.
This patch adds the two missing cases to avoid that.
qemu is sensitive to the order of the arguments it is passed. Hence, if a
device requires a controller, the controller's command-line string must
precede the device's string. The same applies to controllers themselves:
for instance, a ccid controller requires a usb controller. So
controllers form a partial ordering in which they must be added
to the qemu command line.
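A toy example of resolving that partial ordering (types and names are
purely illustrative): each controller is emitted only after the
controller it depends on:

    #include <stdio.h>

    /* Illustrative only: 'needs' is the index of a prerequisite
     * controller, or -1 if there is none. */
    struct ctrl {
        const char *name;
        int needs;
    };

    static void emitCtrl(const struct ctrl *ctrls, int i, int *done)
    {
        if (done[i])
            return;
        if (ctrls[i].needs >= 0)
            emitCtrl(ctrls, ctrls[i].needs, done);   /* prerequisite first */
        printf("-device %s\n", ctrls[i].name);
        done[i] = 1;
    }

    int main(void)
    {
        /* placeholder names, not exact qemu device strings */
        struct ctrl ctrls[] = {
            { "ccid-controller", 1 },   /* needs the usb controller */
            { "usb-controller", -1 },
        };
        int done[2] = { 0, 0 };

        for (int i = 0; i < 2; i++)
            emitCtrl(ctrls, i, done);
        return 0;
    }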
In Fedora 16, we quit enabling cgconfig because systemd set up
default cgroups that were good enough for our use. But in F17,
when we switched to systemd, we reverted and started up cgconfig
again. See also the tail of this thread:
https://www.redhat.com/archives/libvir-list/2012-October/msg01657.html
* libvirt.spec.in (with_systemd): Rely on systemd for cgroups.
Some FDs may not implement fdatasync() functionality,
e.g. pipes. In that case EINVAL or EROFS is returned.
We don't want to fail or report any error in that case.
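The change amounts to a wrapper along these lines (a sketch of the
intent, not the exact libvirt helper):

    #include <errno.h>
    #include <unistd.h>

    /* Sync the FD, but treat "not supported for this FD type" answers as
     * success: pipes and similar FDs return EINVAL (or EROFS). */
    int fdatasyncIgnoreUnsupported(int fd)
    {
        if (fdatasync(fd) < 0) {
            if (errno == EINVAL || errno == EROFS)
                return 0;
            return -1;
        }
        return 0;
    }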
Reported-by: Christophe Fergeau <cfergeau@redhat.com>
Some of the pre-snapshot checks have restrictions wired in regarding
configuration options that influence taking external checkpoints.
This patch removes the restrictions that would inhibit taking such a
snapshot.
This patch adds support for taking external system checkpoints.
The functionality is layered on top of the previous disk-only snapshot
code. When a checkpoint is requested, the domain memory is saved to the
memory image file using migration to file. (The user may request that
the memory image be taken while the guest is live by using the
VIR_DOMAIN_SNAPSHOT_CREATE_LIVE flag.)
The memory save image shares its format with the image created by the
virDomainSave() API.
Before now, libvirt supported only internal snapshots of active guests.
This patch renames the function to qemuDomainSnapshotCreateActiveInternal
to prepare the ground for external active snapshots.
The new external system checkpoints will require an async job while the
snapshot is taken. This patch adds QEMU_ASYNC_JOB_SNAPSHOT to track this
job type.
The default behavior when creating external checkpoints is to pause the
guest while the memory state is captured. We want to let users sacrifice
the space savings in exchange for creating the memory save image while the
guest is live, to minimize downtime.
This patch adds a flag that causes the guest not to be paused before
taking the snapshot.
*include/libvirt/libvirt.h.in:
- add new paused reason: VIR_DOMAIN_PAUSED_SNAPSHOT
- add new flag for taking snapshot: VIR_DOMAIN_SNAPSHOT_CREATE_LIVE
*tools/virsh-domain-monitor.c:
- add string representation for VIR_DOMAIN_PAUSED_SNAPSHOT
*tools/virsh-snapshot.c:
- add support for VIR_DOMAIN_SNAPSHOT_CREATE_LIVE
*tools/virsh.pod:
- add docs for --live option added to use
VIR_DOMAIN_SNAPSHOT_CREATE_LIVE flag
The code that saves domain memory by migration to file can be reused
while doing external checkpoints of a machine. This patch extracts the
common code and places it in a separate function.
When pausing the guest while migration is running (to speed up
convergence), the virDomainSuspend API checks if the migration job is
active before entering the job. This opens a possible race if
virDomainSuspend is called while the job is active but the job ends before
the Suspend API enters it (this would require the migration to be
aborted). This would cause an incorrect event to be emitted.
Both system checkpoint snapshots and disk snapshots were iterating
over all disks, doing a final sanity check before doing any work.
But since future patches will allow offline snapshots to be either
external or internal, it makes sense to share the pass over all
disks, and then relax restrictions in that pass as new modes are
implemented. Future patches can then handle external disks when
the domain is offline, then handle offline --disk-snapshot, and
finally, combine with migration to file to gain a complete external
system checkpoint snapshot of an active domain without using 'savevm'.
* src/qemu/qemu_driver.c (qemuDomainSnapshotDiskPrepare)
(qemuDomainSnapshotIsAllowed): Merge...
(qemuDomainSnapshotPrepare): ...into one function.
(qemuDomainSnapshotCreateXML): Update caller.
Now that the XML supports listing internal snapshots, it is worth
always populating the <memory> and <disks> elements to match.
* src/qemu/qemu_driver.c (qemuDomainSnapshotCreateXML): Always
parse disk info and set memory info.
There were no previous callers with require_match set to true.
I originally implemented this bool with the intent of supporting
ESX snapshot semantics, where the choice of internal vs. external
vs. non-checkpointable must be made at domain start, but as ESX
has not been wired up to use it yet, we might as well fix it to
work with our next qemu patch for now, and worry about any further
improvements (changing the bool to a flags argument) if the ESX
driver decides to use this function in the future.
* src/conf/snapshot_conf.c (virDomainSnapshotAlignDisks): Alter
logic when require_match is true to deal with new XML.
Each <domainsnapshot> can now contain an optional <memory>
element that describes how the VM state was handled, similar
to disk snapshots. The new element will always appear in
output; for back-compat, an input that lacks the element will
assume 'no' or 'internal' according to the domain state.
Along with this change, it is now possible to pass <disks> in
the XML for an offline snapshot; this also needs to be wired up
in a future patch, to make it possible to choose internal vs.
external on a per-disk basis for each disk in an offline domain.
At that point, using the --disk-only flag for an offline domain
will be able to work.
For some examples below, remember that qemu supports the
following snapshot actions:
qemu-img: offline external and internal disk
savevm: online internal VM and disk
migrate: online external VM
transaction: online external disk
=====
<domainsnapshot>
<memory snapshot='no'/>
...
</domainsnapshot>
implies that there is no VM state saved (mandatory for
offline and disk-only snapshots, not possible otherwise);
using qemu-img for offline domains and transaction for online.
=====
<domainsnapshot>
<memory snapshot='internal'/>
...
</domainsnapshot>
state is saved inside one of the disks (as in qemu's 'savevm'
system checkpoint implementation). If needed in the future,
we can also add an attribute pointing out _which_ disk saved
the internal state; maybe disk='vda'.
=====
<domainsnapshot>
<memory snapshot='external' file='/path/to/state'/>
...
</domainsnapshot>
This is not wired up yet, but future patches will allow this to
control a combination of 'virsh save /path/to/state' plus disk
snapshots from the same point in time.
=====
So for 1.0.1 (and later, as needed), I plan to implement this table
of combinations, with '*' designating new code and '+' designating
existing code reached through new combinations of xml and/or the
existing DISK_ONLY flag:
domain memory disk disk-only | result
-----------------------------------------
offline omit omit any | memory=no disk=int, via qemu-img
offline no omit any |+memory=no disk=int, via qemu-img
offline omit/no no any | invalid combination (nothing to snapshot)
offline omit/no int any |+memory=no disk=int, via qemu-img
offline omit/no ext any |*memory=no disk=ext, via qemu-img
offline int/ext any any | invalid combination (no memory to save)
online omit omit off | memory=int disk=int, via savevm
online omit omit on | memory=no disk=default, via transaction
online omit no/ext off | unsupported for now
online omit no on | invalid combination (nothing to snapshot)
online omit ext on | memory=no disk=ext, via transaction
online omit int off |+memory=int disk=int, via savevm
online omit int on | unsupported for now
online no omit any |+memory=no disk=default, via transaction
online no no any | invalid combination (nothing to snapshot)
online no int any | unsupported for now
online no ext any |+memory=no disk=ext, via transaction
online int/ext any on | invalid combination (disk-only vs. memory)
online int omit off |+memory=int disk=int, via savevm
online int no/ext off | unsupported for now
online int int off |+memory=int disk=int, via savevm
online ext omit off |*memory=ext disk=default, via migrate+trans
online ext no off |+memory=ext disk=no, via migrate
online ext int off | unsupported for now
online ext ext off |*memory=ext disk=ext, via migrate+transaction
* docs/schemas/domainsnapshot.rng (memory): New RNG element.
* docs/formatsnapshot.html.in: Document it.
* src/conf/snapshot_conf.h (virDomainSnapshotDef): New fields.
* src/conf/domain_conf.c (virDomainSnapshotDefFree)
(virDomainSnapshotDefParseString, virDomainSnapshotDefFormat):
Manage new fields.
* tests/domainsnapshotxml2xmltest.c: New test.
* tests/domainsnapshotxml2xmlin/*.xml: Update existing tests.
* tests/domainsnapshotxml2xmlout/*.xml: Likewise.
This documents the following whitespace rules
if(foo) // Bad
if (foo) // Good
int foo (int wizz) // Bad
int foo(int wizz) // Good
bar = foo (wizz); // Bad
bar = foo(wizz); // Good
typedef int (*foo) (int wizz); // Bad
typedef int (*foo)(int wizz); // Good
int foo( int wizz ); // Bad
int foo(int wizz); // Good
There is a syntax-check rule extension to validate all these rules.
Checking for 'function (...args...)' is quite difficult since it
needs to ignore valid usage with keywords like 'if (...test...)'
and while/for/switch. It must also ignore source comments and
quoted strings.
It is not possible to do this with a simple regex in the normal
syntax-check style, so a short Perl script is created instead
to analyse the source. In practice this works well enough. The
only thing it can't cope with is multi-line quoted strings of
the form
"start of string\
more lines\
more line\
the end"
but this can and should be written as
"start of string"
"more lines"
"more line"
"the end"
With this simple change, the bracket-checking script does not
have any false positives across the libvirt source, provided it
is only run against .c files. It is not practical to run it
against .h files, since those use whitespace extensively to
get alignment (though this is somewhat inconsistent and could
arguably be fixed).
The only limitation is that it cannot detect a violation where
the first argument starts with a '*', e.g.
foo(*wizz);
since this generates too many false positives on function
typedefs which can't be suppressed efficiently.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The libvirt coding standard is to use 'function(...args...)'
instead of 'function (...args...)'. A non-trivial number of
places did not follow this rule and are fixed in this patch.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
When assigning the new persistent definition to a transient network
(thus making it persistent), the network needs to be marked persistent
before actually attempting to assign the definition.
Until now, the network undefine API was able to undefine only inactive
networks. That restriction doesn't make sense any more, so this patch
allows undefining active networks, turning them into transient networks.
When a transient network was created, some of the checks weren't run on
the definition, allowing invalid networks to be started.
This patch splits the checks out into a network validation function and
re-uses that code when creating transient networks.
The network driver didn't clean up config files when a network was
destroyed, only when it was undefined, leaving stale files behind for
transient networks.
This patch splits the cleanup code out into a helper function that handles
the cleanup when an inactive network object is being removed, and re-uses
this code when getting rid of inactive networks.
The hosts file was created in the network definition function. This
patch moves the creation of the file to the point where dnsmasq is
started.