To allow using the array manipulation macros on the arrays returned by
virStringSplit we need to know the count of the elements in the array.
Modify virStringSplit to return this value, rename it and add a helper
with the old name so that we don't need to update all the code.
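A minimal sketch of the intended shape, using invented names rather than libvirt's real identifiers: the renamed splitter optionally reports the element count, and a wrapper with the old name keeps existing callers unchanged.

#include <stdlib.h>
#include <string.h>

/* Split 'string' on single-character 'delim'; return a NULL-terminated
 * array and optionally report the number of elements in *tokcount. */
static char **
demo_string_split_count(const char *string, char delim, size_t *tokcount)
{
    size_t ntok = 0;
    size_t cap = 8;
    char **tokens = calloc(cap + 1, sizeof(*tokens));
    const char *start = string;

    if (!tokens)
        return NULL;

    for (;;) {
        const char *end = strchr(start, delim);
        size_t len = end ? (size_t)(end - start) : strlen(start);

        if (ntok == cap) {
            char **tmp = realloc(tokens, (cap * 2 + 1) * sizeof(*tokens));
            if (!tmp)
                goto error;
            tokens = tmp;
            cap *= 2;
        }
        if (!(tokens[ntok] = strndup(start, len)))
            goto error;
        tokens[++ntok] = NULL;
        if (!end)
            break;
        start = end + 1;
    }

    if (tokcount)
        *tokcount = ntok;   /* the count the array manipulation macros need */
    return tokens;

 error:
    while (ntok)
        free(tokens[--ntok]);
    free(tokens);
    return NULL;
}

/* Helper with the old name and shape: callers that don't care about the
 * count keep working unchanged. */
static char **
demo_string_split(const char *string, char delim)
{
    return demo_string_split_count(string, delim, NULL);
}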
Use the new backing store parser in the backing chain crawler. This
change requires one test adjustment, since the information about the NBD
image is now parsed differently.
Add parsers for relative and absolute backing file names for local and
remote storage files.
Relative paths are resolved against their parent image, while absolute
paths are interpreted according to their protocol, or as plain local
paths.
For remote storage volumes, all URI-based backing file names are
supported; of the qemu colon syntax, only the NBD protocol is supported.
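Roughly, the classification works as sketched below (hypothetical helper, not the actual parser):

#include <string.h>

enum demo_backing_kind {
    DEMO_BACKING_ABSOLUTE,      /* /images/base.qcow2 */
    DEMO_BACKING_RELATIVE,      /* base.qcow2, resolved against the parent */
    DEMO_BACKING_URI,           /* e.g. nbd://host:port/export */
    DEMO_BACKING_QEMU_COLON,    /* e.g. nbd:host:port:exportname=foo */
};

static enum demo_backing_kind
demo_classify_backing_name(const char *name)
{
    if (name[0] == '/')
        return DEMO_BACKING_ABSOLUTE;       /* local absolute path */
    if (strstr(name, "://"))
        return DEMO_BACKING_URI;            /* URI-based remote name */
    if (strncmp(name, "nbd:", 4) == 0)
        return DEMO_BACKING_QEMU_COLON;     /* qemu colon syntax (NBD only) */
    return DEMO_BACKING_RELATIVE;           /* relative to the parent image */
}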
Use virStorageFileReadHeader() to read the headers of storage files,
possibly on remote storage, to retrieve the image metadata.
The backend information is now parsed by virStorageFileGetMetadataInternal,
which is exported from the util source, while
virStorageFileGetMetadataFromFDInternal no longer needs to be exported.
Use the virStorageFileGetUniqueIdentifier() function to get a unique
identifier regardless of the target storage type instead of relying on
canonicalize_path().
A new function that checks whether a given image is supported is
introduced to avoid errors for unimplemented backends.
Add a new function wrapper and tweak the storage file backend lookup
function so that it can be used without reporting an error. This will be
useful in the metadata crawler code, where we need to silently bail out
if metadata retrieval is not supported for the current storage type.
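The pattern looks roughly like this (all names invented for illustration):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Invented stand-in for a storage file backend. */
struct demo_file_backend {
    int storage_type;
    const char *name;
};

static const struct demo_file_backend demo_backends[] = {
    { 1, "file" },
    { 2, "gluster" },
};

/* Internal lookup: the 'report' flag decides whether a missing backend
 * is an error or just "not supported, skip it". */
static const struct demo_file_backend *
demo_backend_lookup_internal(int storage_type, bool report)
{
    size_t i;

    for (i = 0; i < sizeof(demo_backends) / sizeof(demo_backends[0]); i++) {
        if (demo_backends[i].storage_type == storage_type)
            return &demo_backends[i];
    }
    if (report)
        fprintf(stderr, "no storage file backend for type %d\n", storage_type);
    return NULL;    /* the metadata crawler can silently bail out here */
}

/* Wrapper keeping the old, always-reporting behaviour. */
static const struct demo_file_backend *
demo_backend_lookup(int storage_type)
{
    return demo_backend_lookup_internal(storage_type, true);
}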
When walking the backing chain we previously set the storage type to
_FILE and let the virStorageFileGetMetadataFromFDInternal update it to
the correct type later on.
This patch moves the actual storage type determination to the place
where we parse the backing store name so that the code can later be
switched to use virStorageFileReadHeader() directly.
My future work will modify the metadata crawler function to use the
storage driver file APIs to access the files instead of accessing them
directly so that we will be able to request the metadata for remote
files too. To avoid linking the storage driver into every helper program
that uses the utils code, the backing chain traversal function needs to
be moved to the storage driver source.
Additionally the virt-aa-helper and virstoragetest programs need to be
linked with the storage driver as a result of this change.
Different protocols have different means to uniquely identify a storage
file. This patch implements a storage driver API to retrieve a unique
string describing a volume. The current implementation works for local
storage only and returns the canonical path of the volume.
To add caching support, the local filesystem driver now has a private
structure holding the cached string, which is created only on first
access.
For a start, this patch provides the implementation for local files only.
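A minimal sketch of the lazy caching, assuming invented names and realpath() as the canonicalisation step:

#include <stdlib.h>

/* Invented names; the real code hangs this off the storage file backend. */
struct demo_fs_priv {
    char *canonpath;    /* cached unique identifier, NULL until first use */
};

static const char *
demo_fs_get_unique_identifier(struct demo_fs_priv *priv, const char *path)
{
    if (!priv->canonpath)
        priv->canonpath = realpath(path, NULL);  /* created on first access */
    return priv->canonpath;   /* NULL on failure, cached string otherwise */
}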
Just very recently we converted from:
enum virSomeEnumName {
...
};
to:
typedef enum {
...
} virSomeEnumName;
This change requires some code adaptation, which wasn't done for the
xenapi driver. With this fix we are able to build again.
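For illustration, the kind of adaptation involved (made-up enum, not the actual xenapi code):

/* After the conversion the plain enum tag no longer exists, so consumers
 * drop the 'enum' keyword from their declarations. */
typedef enum {
    DEMO_STATE_ON,
    DEMO_STATE_OFF,
} demoState;

static demoState cur_state = DEMO_STATE_ON;  /* was: enum demoState cur_state; */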
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
In 9dd02965 virNumaGetNodeMemory was introduced; however, the comment
describing the function refers to virNumaGetNodeMemorySize.
There is also one typo in the virNumaIsAvailable() description.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
In "src/conf/domain_conf.h" there are many enum declarations. The
cleanup in this header filer was started, but it wasn't enough and
there are many other files that has enum variables declared. So, the
commit was starting to be big. This commit finish the cleanup in this
header file and in other files that has enum variables, parameters,
or functions declared.
Signed-off-by: Julio Faracco <jcfaracco@gmail.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
In "src/conf/domain_conf.h" there are many enumerations (enum)
declarations to be converted as a typedef too. As mentioned before,
it's better to use a typedef for variable types, function types and
other usages. I think this file has most of those enum declarations
at "src/conf/". So, me and Eric Blake plan to keep the cleanups all
over the source code. This time, most of the files changed in this
commit are related to part of one file: "src/conf/domain_conf.h".
Signed-off-by: Julio Faracco <jcfaracco@gmail.com>
In "src/cpu/" there are some enumerations (enum) declarations.
Similar to the recent cleanup to "src/util", "src/conf" and other
directories, it's better to use a typedef for variable types,
function types and other usages. Other enumeration and folders will
be changed to typedef's in the future. Specially, in files that are
in different places of "src/util" and "src/conf". Most of the files
changed in this commit are related to CPU (cpu_map.h) enums.
Signed-off-by: Julio Faracco <jcfaracco@gmail.com>
Currently, we do not acquire any job when removing a device after the
DEVICE_DELETED event is received from QEMU. This means that if there is
another API running at the time DEVICE_DELETED is delivered and that API
acquired a job, we may happily change the definition of the domain the
API is working with whenever it unlocks the domain object (e.g., to talk
with its monitor). Thus, we have to acquire a job before finishing
device removal to make things safe. However, doing so in the main event
loop would cause a deadlock, so we need to move most of the event
handler into a separate thread.
Another good reason for both acquiring a job and handling the event in a
separate thread is that we currently remove a device backend immediately
after removing its frontend, while we should only remove the backend
once we have received the DEVICE_DELETED event. That is, we will have to
talk to the QEMU monitor from the event handler.
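As a generic sketch of that split, using plain pthreads and invented names (the real code uses libvirt's job and event machinery): the event callback only records what needs removing and wakes a worker, and the worker is the one that may block on job acquisition and monitor calls.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t removal_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t removal_cond = PTHREAD_COND_INITIALIZER;
static char pending_alias[64];      /* single-slot queue, for brevity */
static int removal_pending;

/* Runs in the event loop thread: must never block on job acquisition. */
static void
demo_device_deleted_cb(const char *alias)
{
    pthread_mutex_lock(&removal_lock);
    snprintf(pending_alias, sizeof(pending_alias), "%s", alias);
    removal_pending = 1;
    pthread_cond_signal(&removal_cond);
    pthread_mutex_unlock(&removal_lock);
}

/* Worker thread: free to wait for the domain job and talk to the
 * monitor before removing frontend and backend. */
static void *
demo_removal_worker(void *opaque)
{
    (void)opaque;
    for (;;) {
        char alias[64];

        pthread_mutex_lock(&removal_lock);
        while (!removal_pending)
            pthread_cond_wait(&removal_cond, &removal_lock);
        snprintf(alias, sizeof(alias), "%s", pending_alias);
        removal_pending = 0;
        pthread_mutex_unlock(&removal_lock);

        /* acquire job, remove frontend, then backend, end job ... */
        printf("finishing removal of device %s\n", alias);
    }
    return NULL;
}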
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
If QEMU supports the DEVICE_DELETED event, we always call
qemuDomainRemoveDevice from the event handler. However, since we will
need to push this call out of the main event loop and begin a job for it
(see the following commit), we need to make sure the device is fully
removed by the original thread (and within its existing job) in case the
DEVICE_DELETED event arrives before qemuDomainWaitForDeviceRemoval times
out.
Without this patch, device removals would be guaranteed to never finish
before the timeout, because the code would be blocked by the original
job still being active.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Introduce a helper program to catch events from dnsmasq and maintain a
custom lease file per network. It supports DHCPv4 and DHCPv6. The file is saved as
"<interface-name>.status".
Each lease contains the following info:
<expiry-time (epoch time)> <mac> <iaid> <ip-address> <hostname> <clientid>
Example of custom leases file content:
[
{
"iaid": "1221229",
"ip-address": "2001:db8:ca2:2:1::95",
"mac-address": "52:54:00:12:a2:6d",
"hostname": "Fedora20",
"client-id": "00:04:1a:c1:d9:6b:5a:0a:e2:bc:f8:4b:1e:37:2e:38:22:55",
"expiry-time": 1393244216
},
{
"ip-address": "192.168.150.208",
"mac-address": "52:54:00:11:56:b3",
"hostname": "Wani-PC",
"client-id": "01:52:54:00:11:56:b3",
"expiry-time": 1393244248
}
]
src/Makefile.am:
* Add options to compile the helper program
src/network/bridge_driver.c:
* Introduce networkDnsmasqLeaseFileNameCustom()
* Invoke helper program along with dnsmasq
* Delete the .status file when the corresponding network is destroyed.
src/network/leaseshelper.c
* Helper program to create the custom lease file
Currently we don't support mixed (external + internal) snapshots. The
code detecting the snapshot type didn't make sure that the memory image
was consistent with the snapshot type, leading to a strange error
message:
$ virsh snapshot-create-as --domain VM --diskspec vda,snapshot=internal --memspec snapshot=external,file=/tmp/blah
error: internal error: unexpected code path
Fix the mixed detection code to detect this kind of mistake:
$ virsh snapshot-create-as --domain VM --diskspec vda,snapshot=internal --memspec snapshot=external,file=/tmp/blah
error: unsupported configuration: mixing internal and external targets for a snapshot is not yet supported
An internal snapshot of an active VM with the memory snapshot explicitly
disabled would actually still take the memory snapshot. Reject such a
request explicitly.
Before:
$ virsh snapshot-create-as --domain VM --diskspec vda,snapshot=internal --memspec snapshot=no
Domain snapshot 1401353155 created
After:
$ virsh snapshot-create-as --domain VM --diskspec vda,snapshot=internal --memspec snapshot=no
error: Operation not supported: internal snapshot of a running VM must include the memory state
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1083345
For guests backed by gluster volumes (or other network storage) we don't
fill the backing chain (see qemuDomainDetermineDiskChain). This leaves
the "relPath" field of the top image NULL. This causes a crash in
virStorageFileChainLookup() when looking up a backing element for such a
disk.
Since I'm working on adding support for network storage and one of the
steps will make the "relPath" field optional let's use STREQ_NULLABLE
instead of STREQ in virStorageFileChainLookup() to avoid the problem.
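For reference, the semantic difference, shown with stand-in macro definitions (approximations, not copied from the libvirt headers):

#include <string.h>

#define DEMO_STREQ(a, b)          (strcmp((a), (b)) == 0)
#define DEMO_STREQ_NULLABLE(a, b) \
    ((a) ? ((b) && DEMO_STREQ((a), (b))) : !(b))

/* DEMO_STREQ(NULL, "base.qcow2") passes NULL into strcmp() and crashes;
 * DEMO_STREQ_NULLABLE(NULL, "base.qcow2") is simply false, and
 * DEMO_STREQ_NULLABLE(NULL, NULL) is true, which is what the chain
 * lookup needs when relPath is unset. */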
The original version of virTimeLocalOffsetFromUTC() would fail for
certain times of the day if daylight savings time was active. This
could most easily be seen by uncommenting the TEST_LOCALOFFSET() cases
that include a DST setting.
After a lot of experimenting, I found that the way to solve it in
almost all test cases is to set tm_isdst = -1 in the struct tm prior
to calling mktime(). Once this is done, the correct offset is returned
for all test cases at all times except the two hours just after
00:00:00 Jan 1 UTC - during that time, any timezone that is *behind*
UTC, and that is supposed to always be in DST will not have DST
accounted for in its offset.
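Roughly, the computation with the fix applied looks like this (self-contained approximation, not the libvirt source):

#include <time.h>

/* Offset of localtime from UTC in seconds at time 'when'
 * (positive east of UTC, negative west of UTC). */
static long
demo_local_offset_from_utc(time_t when)
{
    struct tm tmbuf;

    gmtime_r(&when, &tmbuf);    /* broken-down UTC time */
    tmbuf.tm_isdst = -1;        /* the fix: let mktime() decide about DST */
    return (long)(when - mktime(&tmbuf)); /* mktime() treats tmbuf as local */
}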
I believe that the code of virTimeLocalOffsetFromUTC() actually is
correct for all cases, but the problem still encountered is due to our
inability to come up with a TZ string that properly forces DST to
*always* be active. Since a modification of the (currently fixed)
expected result data to account for this would necessarily use the
same functions that we're trying to test, I've instead just made the
test program conditionally bypass the problematic cases if the current
date is either December 31 or January 1. This way we get maximum
testing during 363 days of the year, but don't get false failures on
Dec 31 and Jan 1.
Even a successful start of a VM from a managed save image would spam the
logs with the following message:
Unable to restore from managed state [path]. Maybe the file is
corrupted?
Re-arrange the logic to output the warning only when the image is
corrupted.
The flaw was introduced in commit cfc28c66.
Use virStorageFileGetMetadataFromFD instead in
virStorageBackendProbeTarget, as it now returns all required data and
the storage file is already open as a file descriptor.
Also fix an improper error code being returned when virFileReadHeaderFD
fails, as virStorageBackendUpdateVolTargetInfoFD would set the return
code to 0.
Add an argument to return the backing file format of a file probed by
virStorageFileGetMetadataFromFD so that it can be used in place of
virStorageFileGetMetadataFromBuf.
qemu 2.0 added the ability to commit the active layer, but slightly
differently than what libvirt had been anticipating in its
implementation of the virDomainBlockCommit call. As a result, if
you attempt to do a 'virsh blockcommit $dom vda', qemu gets into a
state where it is waiting on libvirt to end the job, while libvirt
is waiting on qemu to end the job, and the guest is effectively
hung with regards to further commands for that block device.
I have patches coming down the pipeline that will add full support
for blockcommit of the active layer when coupled with qemu 2.0 or
later; but they depend on Peter's improvements to block job handling
and form enough of a new feature that they are not ready for
inclusion in the 1.2.5 release. So for now, just reject the
attempt, rather than letting the user get stuck. This is no worse
than the behavior of qemu 1.7 rejecting the job.
* src/qemu/qemu_driver.c (qemuDomainBlockCommit): Reject active
commit.
Signed-off-by: Eric Blake <eblake@redhat.com>
The QEMU ppce500 board uses the legacy -serial option.
Other PPC boards don't provide any way to explicitly wire in a -chardev,
except for pseries, which uses -device spapr-vty with -chardev.
Add a test case for the -serial option on ppce500.
Signed-off-by: Olivia Yin <Hong-Hua.Yin@freescale.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1088787
Clean up unix socket files for chardevs using mode='bind',
like we clean up the monitor socket.
They are created by QEMU on startup and not really useful
after shutting it down.
For a clock element as above, libvirt simply converts current system
time with localtime_r(), then starts qemu with a time string that
doesn't contain any timezone information. So, from qemu's point of
view, the -rtc string it gets for:
<clock offset='variable' basis='utc' adjustment='10800'/>
is identical to the -rtc string it gets for:
<clock offset='variable' basis='localtime' adjustment='0'/>
(assuming the host is in a timezone that is 10800 seconds ahead of
UTC, as is the case on the machine where this message is being
written).
Since the commandlines are identical, qemu will behave identically
after this point in either case.
There are two problems in the case of basis='localtime' though:
Problem 1) If the guest modifies its RTC, for example to add 20
seconds, the RTC_CHANGE event from qemu will then contain offset:20 in
both cases. But libvirt will have saved the original adjustment into
adjustment0, and will add that value onto the offset in the
event. This means that in the case of basis='utc', it will properly
emit an event with offset:10820, but in the case of basis='localtime'
the event will contain offset:20, which is *not* the new offset of the
RTC from UTC (as the event is documented to provide).
Problem 2) If the guest is migrated to another host that is in a
different timezone, or if it is migrated or saved/restored after the
DST status has changed from what it was when the guest was originally
started, the newly restarted guest will have a different RTC (since it
will be based on the new localtime, which could have shifted by
several hours).
The solution to both of these problems is simple - rather than
maintaining the original adjustment value along with
"basis='localtime'" in the domain status, when the domain is started
we convert the adjustment offset to one relative to UTC, and set the
status to "basis='utc'". Thus, whatever the RTC offset was from UTC
when it was initially started, that offset will be maintained when
migrating across timezones and DST settings, and the RTC_CHANGE events
will automatically contain the proper offset (which should by
definition always be relative to UTC).
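In other words, the startup conversion boils down to the following arithmetic (helper name invented; numbers taken from the example above):

/* basis='localtime', adjustment=0 on a host 10800s ahead of UTC becomes
 * basis='utc', adjustment=10800 at domain startup. */
static long long
demo_localtime_adjustment_to_utc(long long adjustment, long long local_offset)
{
    return adjustment + local_offset;   /* 0 + 10800 = 10800 */
}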
This fixes a problem that was implied but not openly stated in:
https://bugzilla.redhat.com/show_bug.cgi?id=964177
commit e31b5cf393 attempted to fix libvirt's
VIR_DOMAIN_EVENT_ID_RTC_CHANGE, which is documented to always
provide the new offset of the domain's real time clock from UTC. The
problem was that, in the case that qemu is provided with an "-rtc
base=x" where x is an absolute time (rather than "utc" or
"localtime"), the offset sent by qemu's RTC_CHANGE event is *not* the
new offset from UTC, but rather is the sum of all changes to the
domain's RTC since it was started with base=x.
So, despite what was said in commit e31b5cf393, if we assume that
the original value stored in "adjustment" was the offset from UTC at
the time the domain was started, we can always determine the current
offset from UTC by simply adding the most recent (i.e. current) offset
from qemu to that original adjustment.
This patch accomplishes that by storing the initial adjustment in the
domain's status as "adjustment0". Each time a new RTC_CHANGE event is
received from qemu, we simply add adjustment0 to the value sent by
qemu, store that as the new adjustment, and forward that value on to
any event handler.
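A sketch of that bookkeeping (variable and function names are assumptions, not the actual fields):

/* Recorded when the domain starts: its RTC offset from UTC. */
static long long adjustment0;
/* Kept up to date in the domain status and forwarded to event handlers. */
static long long adjustment;

static long long
demo_handle_rtc_change(long long qemu_offset)
{
    /* qemu reports the cumulative change since -rtc base=..., so the
     * absolute offset from UTC is simply adjustment0 + qemu_offset. */
    adjustment = adjustment0 + qemu_offset;
    return adjustment;
}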
This patch (*not* e31b5cf393, which should be reverted prior to
applying this patch) fixes:
https://bugzilla.redhat.com/show_bug.cgi?id=964177
(for the case where basis='utc'. It does not fix basis='localtime')
This reverts commit e31b5cf393.
This commit attempted to work around a bug in the offset value
reported by qemu's RTC_CHANGE event in the case that a variable base
date was given on the qemu commandline. The patch mixed up the math
involved in arriving at the corrected offset to report, and in the
process added an unnecessary private attribute to the clock
element. Since that attribute is private/internal and not used by anyone
else, it makes sense to simplify things by removing it.
Since there isn't a single libc API to get this value, this patch
supplies one which gets the value by grabbing current time, then
converting that into a struct tm with gmtime_r(), then back to a
time_t using mktime.
The returned value is the difference between UTC and localtime in
seconds. If localtime is ahead of UTC (east) the offset will be a
positive number, and if localtime is behind UTC (west) the offset will
be negative.
This function should be POSIX-compliant, and is threadsafe, but not
async signal safe. If it was ever necessary to know this value in a
child process, we could cache it with a one-time init function when
libvirtd starts, then just supply the cached value, but that
complexity isn't needed for current usage; that would also have the
problem that it might not be accurate after a local daylight savings
boundary.
(If it weren't for DST, we could simply replace this entire function
with "-timezone"; timezone contains the offset of the current timezone
(negated from what we want) but doesn't account for DST. And in spite
of being guaranteed by POSIX, it isn't available on older versions of
mingw.)
Signed-off-by: Eric Blake <eblake@redhat.com>
Add storage driver based functions to access headers of storage files
for metadata extraction. Along with this patch, implementations for
local filesystems and for gluster via libgfapi are provided. The gluster
implementation is based on the code of the saferead_lim function.
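As a rough illustration of the gluster side (simplified: volume setup, EINTR handling and glfs teardown are omitted, and the helper name is invented):

#include <fcntl.h>
#include <stdlib.h>
#include <sys/types.h>
#include <glusterfs/api/glfs.h>

/* Read up to 'max_len' header bytes of 'path' on an already initialised
 * glfs_t handle into a freshly allocated buffer; returns the number of
 * bytes read or -1 on error. */
static ssize_t
demo_gluster_read_header(glfs_t *fs, const char *path,
                         size_t max_len, char **buf)
{
    glfs_fd_t *fd = NULL;
    ssize_t got = 0;

    *buf = malloc(max_len);
    if (!*buf)
        return -1;

    if (!(fd = glfs_open(fs, path, O_RDONLY)))
        goto error;

    while ((size_t)got < max_len) {
        ssize_t r = glfs_read(fd, *buf + got, max_len - got, 0);

        if (r < 0)
            goto error;
        if (r == 0)
            break;              /* EOF before max_len is fine */
        got += r;               /* keep going after a short read */
    }

    glfs_close(fd);
    return got;

 error:
    if (fd)
        glfs_close(fd);
    free(*buf);
    *buf = NULL;
    return -1;
}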
To allow using the storage driver APIs to access files on various
storage sources in a universal fashion, possibly on storage such as NFS
with root squash, we'll need to store the desired uid/gid in the
metadata.
Add a new initialisation API that stores the desired uid/gid, and a
wrapper for the current use. Additionally, add docs for the two APIs.
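The shape of the two entry points, sketched with invented names (the real API and struct layout may differ):

#include <sys/types.h>

struct demo_storage_source {
    uid_t drv_uid;      /* credentials used when opening the image */
    gid_t drv_gid;
};

static int
demo_storage_file_init_as(struct demo_storage_source *src,
                          uid_t uid, gid_t gid)
{
    src->drv_uid = uid;     /* stored for later backend access */
    src->drv_gid = gid;
    return 0;
}

/* Wrapper preserving the existing behaviour: -1 means "current user". */
static int
demo_storage_file_init(struct demo_storage_source *src)
{
    return demo_storage_file_init_as(src, (uid_t)-1, (gid_t)-1);
}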
Currently the protocol type with index 0 is NBD, which makes it hard to
distinguish whether the protocol type was actually assigned. Add a new
protocol type with index 0 so that an unset protocol is distinguished
explicitly.
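Illustration of the idea with made-up names:

typedef enum {
    DEMO_NET_PROTOCOL_NONE = 0, /* new entry: "no network protocol set" */
    DEMO_NET_PROTOCOL_NBD,      /* previously this implicitly sat at 0 */
    DEMO_NET_PROTOCOL_GLUSTER,

    DEMO_NET_PROTOCOL_LAST
} demoNetProtocol;

/* A zero-initialised source now reads as "none" instead of silently
 * claiming to be NBD. */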
Print the debug statements of individual file access functions from the
main API functions instead of the individual backend functions.
Also enhance initialization debug messages on a per-backend basis.
The gluster volume name was previously stored as part of the source path
string. This is unfortunate when we want to do operations on the path,
as the volume name is used separately.
Parse and store the volume name separately for gluster storage volumes
and use the newly stored variable appropriately.
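A rough sketch of the split (hypothetical helper): the URI path "/volume/image/path" is separated into the volume name and the image path within that volume.

#include <stdlib.h>
#include <string.h>

static int
demo_split_gluster_path(const char *uri_path, char **volume, char **image)
{
    const char *p = uri_path + strspn(uri_path, "/");  /* skip leading '/' */
    const char *slash = strchr(p, '/');

    if (!slash || slash[1] == '\0')
        return -1;              /* need both a volume and an image path */

    *volume = strndup(p, (size_t)(slash - p));
    *image = strdup(slash + 1);
    if (!*volume || !*image) {
        free(*volume);
        free(*image);
        return -1;
    }
    return 0;
}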
Refactor the function to accept a virStorageSourcePtr instead of just
the path, add a check so that it runs only on local storage, and fix the
callers (possibly by using a newly introduced wrapper that wraps a path
in a virStorageSource struct for legacy code).
Refresh the disk backing chains when reconnecting to a qemu process
after daemon restart. There are a few internal fields that don't get
refreshed from the XML. Until we are able to do that, let's reload all
the metadata by the backing chain crawler.
This is similar to the previous commit in that we need to explicitly
send migrate_cancel when libvirt detects an error other than those
reported by query-migrate. However, the chance of hitting such an error
is pretty small.
When QEMU reports failed or cancelled migration, we don't need to send
it the migrate_cancel QMP command. But in all other error paths, such as
when we detect a broken connection to the destination daemon or
something else goes wrong inside libvirt, we need to explicitly send the
migrate_cancel command instead of relying on the migration being
implicitly cancelled when the destination QEMU is killed.
Because we were not doing so, one could end up with a paused domain
after a failed migration.
https://bugzilla.redhat.com/show_bug.cgi?id=1098833