The following XML is the recommended default clock configuration for
qemu:
<clock offset='utc'>
  <timer name='rtc' tickpolicy='catchup'/>
  <timer name='pit' tickpolicy='delay'/>
  <timer name='hpet' present='no'/>
</clock>
However we weren't testing any of those timer elements.
Commit 2d74822a9e renamed
"freebsdNodeGetCPUCount" to "appleFreebsdNodeGetCPUCount", leaving one
call to "freebsdNodeGetCPUCount". Fix this other case.
Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
The test case average timing code has not been used by any test
case ever. Delete it to remove complexity.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The current OOM test impl is too inefficient to be used on the
large test suites. It loops running 'main' multiple times, once
for each alloc in the 'main' method. This has complexity
'n * (n + 1) / 2' in terms of total alloc count. It will be
replaced by a more efficient impl which runs individual test
cases instead. This will have the same complexity, but the values
of 'n' will be much smaller, so it is a net win.
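For context, a self-contained sketch of the scheme being replaced: run
'main' once to count allocations, then once per allocation with that
allocation forced to fail. The counters and testMalloc below are
illustrative stand-ins, not libvirt's real OOM hooks.

#include <stdio.h>
#include <stdlib.h>

static int alloc_counter;   /* allocations seen so far in this run */
static int fail_at;         /* fail the nth allocation (0 = never fail) */

/* Illustrative allocation wrapper; libvirt's real hooks live in its
 * allocation helpers. */
static void *
testMalloc(size_t size)
{
    alloc_counter++;
    if (fail_at && alloc_counter == fail_at)
        return NULL;                        /* injected OOM */
    return malloc(size);
}

/* Stand-in for the test's 'main' body: three allocations. */
static int
testMain(void)
{
    void *a = testMalloc(16), *b = testMalloc(32), *c = testMalloc(64);
    int ok = a && b && c;
    free(a); free(b); free(c);
    return ok ? 0 : -1;
}

int
main(void)
{
    int nallocs, i;

    alloc_counter = 0;
    fail_at = 0;
    if (testMain() < 0)                     /* clean run, just to count allocs */
        return EXIT_FAILURE;
    nallocs = alloc_counter;

    /* Re-running the whole body once per allocation is what yields the
     * n * (n + 1) / 2 total-allocation cost mentioned above. */
    for (i = 1; i <= nallocs; i++) {
        alloc_counter = 0;
        fail_at = i;
        if (testMain() == 0)
            fprintf(stderr, "alloc %d: OOM not detected\n", i);
    }
    return EXIT_SUCCESS;
}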
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The qemuxml2xmltest.c function testCompareXMLToXMLHelper would
clobber the 'ret' variable causing it to mis-diagnose OOM
errors.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
We currently have other error codes in singular form, e.g.
VIR_ERR_NETWORK_EXIST. Clean up the previous patch to match that form.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
I created a storage volume (e.g. test) from a storage pool (e.g. vg10) using
the following command: "virsh vol-create-as --pool vg10 --name test --capacity 300M".
When I re-executed the above command, the output was the following:
"error: Failed to create vol test
error: Storage volume not found: storage vol 'test' already exists"
I think the output "Storage volume not found" is not appropriate, because the
storage vol 'test' has in fact been found at this point. virErrorNumber should
therefore include VIR_ERR_STORAGE_EXIST, which can also be used elsewhere. This
patch adds it. The result is as follows:
"error: Failed to create vol test
error: storage volume 'test' exists already"
Make it much easier to test a configuration built without readline
support, by reusing our existing library probe machinery. It gets
a bit tricky with readline, which does not provide a pkg-config
snippet, and which on some platforms requires one of several
terminal libraries as a prerequisite, but the end result should be
the same default behavior, now with the option to disable things.
* m4/virt-readline.m4 (LIBVIRT_CHECK_READLINE): Simplify by using
LIBVIRT_CHECK_LIB.
* tools/virsh.c: Convert USE_READLINE to WITH_READLINE.
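As an illustration of the kind of conditional the WITH_READLINE symbol
ends up guarding in virsh (a sketch only, not the actual virsh.c code):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#ifdef WITH_READLINE
# include <readline/readline.h>
#endif

/* Read one interactive line; with readline disabled at configure time,
 * fall back to a plain fgets() loop. */
static char *
shellReadLine(const char *prompt)
{
#ifdef WITH_READLINE
    return readline(prompt);
#else
    char buf[1024];

    fputs(prompt, stdout);
    fflush(stdout);
    if (!fgets(buf, sizeof(buf), stdin))
        return NULL;
    buf[strcspn(buf, "\n")] = '\0';
    return strdup(buf);
#endif
}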
Signed-off-by: Eric Blake <eblake@redhat.com>
A future patch will allow disabling readline; doing this in an
isolated file instead of configure.ac will make the task easier.
* configure.ac: Move readline code...
* m4/virt-readline.m4: ...here.
Signed-off-by: Eric Blake <eblake@redhat.com>
The automake manual recommends against disabling
maintainer mode by default:
https://www.gnu.org/software/automake/manual/automake.html#maintainer_002dmode
because when it is disabled, the user gets no indication if they
touch a file that would normally require a rebuild. Automake
1.11 changed things so that AM_MAINTAINER_MODE([enable]) will set
the mode to enabled by default; but RHEL 5 still uses automake 1.9,
where AM_MAINTAINER_MODE did not recognize an argument, and
therefore disables maintainer mode by default. Having the default
be different according to which version of automake built the
project is annoying, and I _have_ been bitten on RHEL 5 rebuilds
where the default disabled mode led to silently incorrect builds.
The automake manual admits that being able to disable maintainer
mode still makes sense for projects that still store generated
files from the autotools in version control; but we have dropped
that for several years now. As such, it's finally time to just
ditch the whole idea of maintainer mode, and unconditionally
rebuild autotools files if a dependency changes, without offering
a configure option to disable that mode.
* configure.ac (AM_MAINTAINER_MODE): Drop.
Signed-off-by: Eric Blake <eblake@redhat.com>
The virConnectPtr is passed around loads of nwfilter code in
order to provide it as a parameter to the callback registered
by the virt drivers. None of the virt drivers use this param
though, so it serves no purpose.
Avoiding the need to pass a virConnectPtr means that the
nwfilterStateReload method no longer needs to open a bogus
QEMU driver connection. This addresses a race condition that
can lead to a crash on startup.
The nwfilter driver starts before the QEMU driver and registers
some callbacks with DBus to detect firewalld reload. If the
firewalld reload happens while the QEMU driver is still starting
up though, the nwfilterStateReload method will open a connection
to the partially initialized QEMU driver and cause a crash.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The nwfilter driver only needs a reference to its private
state object, not a full virConnectPtr. Update the domUpdateCBStruct
struct to have a 'void *opaque' field instead of a virConnectPtr.
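A minimal sketch of the shape of that change; apart from the 'void
*opaque' field, the member names here are illustrative rather than
copied from the libvirt struct:

typedef struct _virHashTable virHashTable;   /* illustrative forward decl */

struct domUpdateCBStruct {
    /* Previously: virConnectPtr conn;  -- dragged a full connection
     * around solely so it could be handed to the virt drivers'
     * callbacks, which never used it. */
    void *opaque;                  /* nwfilter driver private state */
    int step;                      /* illustrative */
    virHashTable *skipInterfaces;  /* illustrative */
};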
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Commit 27e81517a8 set the payload size to 256 KB, which is
actually the max packet size, including the size of the header.
Reduce this by VIR_NET_MESSAGE_HEADER_MAX (24) and set
VIR_NET_MESSAGE_LEGACY_PAYLOAD_MAX to 262120, which was the original
value before increasing the limit in commit eb635de1fe.
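Expressed as constants, the arithmetic is simply 262144 - 24 = 262120
(VIR_NET_MESSAGE_HEADER_MAX and VIR_NET_MESSAGE_LEGACY_PAYLOAD_MAX as
named above; LEGACY_PACKET_MAX is an illustrative name for the 256 KB
packet limit, not an exact copy of the RPC headers):

#define VIR_NET_MESSAGE_HEADER_MAX 24        /* size of the message header */
#define LEGACY_PACKET_MAX 262144             /* 256 KB, total packet size */

/* The payload limit must leave room for the header inside the packet: */
#define VIR_NET_MESSAGE_LEGACY_PAYLOAD_MAX \
    (LEGACY_PACKET_MAX - VIR_NET_MESSAGE_HEADER_MAX)   /* = 262120 */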
This fixes the following error:
error : nodeGetInfo:933 : this function is not supported
by the connection driver: node info not implemented on this platform
freebsdNodeGetCPUCount was renamed to appleFreebsdNodeGetCPUCount
to make it more visible that it works on Mac OS X too.
Mac OS X can use sysctlbyname just like FreeBSD to get the CPU
frequency. However, the MIB-style name is different from FreeBSD's,
and the unit of the returned frequency is also different.
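A sketch of the difference, assuming the usual MIB names,
"hw.cpufrequency" (reported in Hz) on Mac OS X and "dev.cpu.0.freq"
(reported in MHz) on FreeBSD; this is illustrative, not the libvirt
implementation:

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

/* Return the CPU frequency in MHz, or -1 on error. */
static long
getCPUFrequencyMHz(void)
{
#ifdef __APPLE__
    unsigned long long hz;
    size_t len = sizeof(hz);

    /* Mac OS X reports the frequency in Hz under its own MIB name. */
    if (sysctlbyname("hw.cpufrequency", &hz, &len, NULL, 0) < 0)
        return -1;
    return (long)(hz / 1000000);
#else
    unsigned int mhz;
    size_t len = sizeof(mhz);

    /* FreeBSD reports the frequency in MHz. */
    if (sysctlbyname("dev.cpu.0.freq", &mhz, &len, NULL, 0) < 0)
        return -1;
    return (long)mhz;
#endif
}

int
main(void)
{
    printf("CPU frequency: %ld MHz\n", getCPUFrequencyMHz());
    return 0;
}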
Signed-off-by: Ryota Ozaki <ozaki.ryota@gmail.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This fixes the following error:
error : virGetUserEnt:703 : Failed to find user record for uid '32654'
'32654' (it's random and varies) comes from getsockopt with the
LOCAL_PEERCRED option. getsockopt returns without error but does not
seem to set any value in the buffer for the uid.
On Mac OS X, LOCAL_PEERCRED has to be used with the SOL_LOCAL level.
With SOL_LOCAL, getsockopt returns the correct uid.
Note that SOL_LOCAL can be found in
/System/Library/Frameworks/Kernel.framework/Versions/A/Headers/sys/un.h.
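A sketch of the working call on Mac OS X; struct xucred and
XUCRED_VERSION come from <sys/ucred.h>, but treat the helper itself as
illustrative rather than the libvirt code:

#include <sys/types.h>
#include <sys/socket.h>
#include <sys/un.h>        /* SOL_LOCAL, LOCAL_PEERCRED */
#include <sys/ucred.h>     /* struct xucred */

/* Fetch the uid of the peer on a UNIX domain socket.  Passing SOL_LOCAL
 * as the level is what makes LOCAL_PEERCRED fill in real data here. */
static int
getPeerUID(int fd, uid_t *uid)
{
    struct xucred cr;
    socklen_t len = sizeof(cr);

    if (getsockopt(fd, SOL_LOCAL, LOCAL_PEERCRED, &cr, &len) < 0)
        return -1;
    if (cr.cr_version != XUCRED_VERSION)
        return -1;
    *uid = cr.cr_uid;
    return 0;
}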
Signed-off-by: Ryota Ozaki <ozaki.ryota@gmail.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
qemumonitorjsontest.c: In function 'testQemuMonitorJSONqemuMonitorJSONGetBalloonInfo':
qemumonitorjsontest.c:1134: warning: integer constant is too large for 'long' type
* tests/qemumonitorjsontest.c
(testQemuMonitorJSONqemuMonitorJSONGetBalloonInfo)
(testQemuMonitorJSONqemuMonitorJSONGetBlockStatsInfo): Use correct
type.
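The usual shape of that fix, sketched with a made-up value rather than
the test's actual numbers; older compilers in C89/gnu89 mode on a
32-bit host warn about the first form:

/* Unsuffixed, this constant is wider than 'long' on a 32-bit host, which
 * is what provokes "integer constant is too large for 'long' type". */
unsigned long long balloon_bytes_bad = 1073741824000;

/* Spelling out the constant's type with ULL keeps it well-formed
 * everywhere, matching the "use correct type" change above. */
unsigned long long balloon_bytes_ok = 1073741824000ULL;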
Signed-off-by: Eric Blake <eblake@redhat.com>
On RHEL 5, compilation fails with:
storage/storage_backend.c: In function 'createRawFile':
storage/storage_backend.c:339: warning: implicit declaration of function 'fallocate'
storage/storage_backend.c:339: warning: nested extern declaration of 'fallocate' [-Wnested-externs]
But:
$ grep HAVE_FALLOCATE config.h
/* #undef HAVE_FALLOCATE */
Huh? It turns out that in kernels that old, fallocate() is not
implemented (config.h is correct), but <linux/fs.h> defines
HAVE_FALLOCATE as an empty witness macro for a completely
different purpose. Since storage_backend.c is including
<linux/fs.h> on RHEL 5, we are hosed by the kernel definition.
Newer kernels no longer pollute the namespace, and it's fairly
easy to convert to an expression that works with both the old
kernel witness and the new-style config.h (undefined or 1).
Problem introduced in commit 532fef3.
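One way to express such a value check (a sketch of the technique, not
necessarily the exact expression used in the patch): an empty
kernel-style definition and a completely undefined macro both evaluate
to 0, while config.h's 1 stays 1.

/* Simulate RHEL 5: config.h left HAVE_FALLOCATE undefined, but
 * <linux/fs.h> defined it as an empty witness macro. */
#define HAVE_FALLOCATE

/* "HAVE_FALLOCATE + 0" is 1 only when config.h defined the macro to 1:
 * the empty definition expands to "+ 0" (i.e. 0), and an undefined
 * identifier is likewise treated as 0 by the preprocessor. */
#if HAVE_FALLOCATE + 0
int use_fallocate = 1;
#else
int use_fallocate = 0;   /* the RHEL 5 case simulated here */
#endif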
* src/storage/storage_backend.c (createRawFile): Avoid namespace
pollution from kernel, by checking HAVE_FALLOCATE for a value.
Signed-off-by: Eric Blake <eblake@redhat.com>
I tried to test ./configure --without-lxc --without-remote.
First, the build failed with some odd errors, such as an
inability to build xen, or link failures for virNetTLSInit.
But when you think about it, once there is no remote code,
all of libvirtd is useless, any stateful driver that depends
on libvirtd is also not worth compiling, and any libraries
used only by RPC code are not needed. So I patched
configure.ac to make for some saner defaults when an
explicit disable is attempted. Similarly, since we have
migrated virnetdevbridge into generic code, the workaround
for Linux kernel stupidity must not depend on stateful
drivers being in use.
Then there's 'make check' that needs segregation.
Wow - quite a bit of cleanup to make --without-remote useful :)
* configure.ac: Let --without-remote toggle defaults on stateful
drivers and other libraries. Pick up Linux kernel workarounds
even when qemu and lxc are not being compiled.
* tests/Makefile.am (test_programs): Factor out programs that
require remote.
* src/libvirt_private.syms (rpc/virnet*.h): Move...
* src/libvirt_remote.syms: ...into new file.
* src/Makefile.am (SYM_FILES): Ship new syms file.
Signed-off-by: Eric Blake <eblake@redhat.com>
Fixed the safezero call for allocating the rest of the file after cloning
an existing volume; it used to always use a zero offset, causing it to
only allocate the beginning of the file.
Also modified file creation to try to use fallocate(2) to pre-allocate
disk space before copying any data, to make sure it fails early if the
disk is full and to make sure we can skip zero blocks when copying file
contents. If fallocate isn't available, we will zero out the rest of the
file after cloning, and only use sparse cloning if the client requested
a lower allocation than the input volume's capacity.
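A sketch of the pre-allocation step, assuming the Linux fallocate(2)
syscall; the helper and its error handling are illustrative, not the
libvirt code:

#define _GNU_SOURCE
#include <fcntl.h>         /* fallocate */
#include <errno.h>

/* Reserve 'capacity' bytes up front so a full disk is reported here,
 * before any data is copied.  Returns 1 when fallocate is unsupported,
 * in which case the caller falls back to zero-filling after the clone. */
static int
preallocate(int fd, off_t capacity)
{
    if (fallocate(fd, 0, 0, capacity) == 0)
        return 0;
    if (errno == ENOSYS || errno == EOPNOTSUPP)
        return 1;
    return -1;
}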
Signed-off-by: Oskari Saarenmaa <os@ohmu.fi>
My previous commit 7dc1d4ab was supposed to change safezero to allocate
1 megabyte at maximum, but had the logic reversed and allocates 1
megabyte at minimum (and a lot more at maximum).
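In other words (a sketch with a hypothetical helper, just to show the
reversed comparison):

#include <stddef.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* 'remaining' stands for the number of bytes still to be zeroed. */
static size_t
chunkSize(size_t remaining)
{
    /* Intended behaviour: cap the zero buffer at 1 MiB ... */
    return MIN(remaining, 1024 * 1024);
    /* ... whereas the reversed logic, MAX(remaining, 1024 * 1024),
     * allocated at least 1 MiB and possibly far more. */
}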
Signed-off-by: Oskari Saarenmaa <os@ohmu.fi>
Commit id 'c4a4603de' added an output <path> to the nodedev xml, but
did not update the schema.
This resulted in the failure of the 'virt-xml-validate' on a file
generated by 'virsh nodedev-dumpxml pci_0000_00_00_0' (for example).
This was found/seen by running autotest on my host.
mmap can fail on 32-bit systems if we're trying to zero out a lot of data.
Fall back to using block-by-block writing in that case. While we could map
smaller blocks, it's unlikely that this code is used a lot and it's
easier to just fall back to one of the existing methods.
Also modified the block-by-block zeroing to not allocate a megabyte of
zeroes if we're writing less than that.
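A sketch of that fallback, with a hypothetical safezero-style helper
rather than the actual libvirt function:

#include <sys/types.h>
#include <sys/mman.h>
#include <string.h>
#include <unistd.h>
#include <stdlib.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* Zero 'len' bytes at 'offset': try mmap() first, and if the mapping
 * fails (likely on 32-bit hosts for large ranges), fall back to writing
 * a zero buffer no larger than 1 MiB, block by block. */
static int
zeroRange(int fd, off_t offset, off_t len)
{
    char *buf;
    size_t bufsize;
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, offset);

    if (p != MAP_FAILED) {
        memset(p, 0, len);
        return munmap(p, len);
    }

    bufsize = MIN((size_t)len, 1024 * 1024);   /* don't over-allocate zeroes */
    if (!(buf = calloc(1, bufsize)))
        return -1;

    if (lseek(fd, offset, SEEK_SET) < 0)
        goto error;
    while (len > 0) {
        ssize_t done = write(fd, buf, MIN((size_t)len, bufsize));
        if (done < 0)
            goto error;
        len -= done;
    }
    free(buf);
    return 0;

 error:
    free(buf);
    return -1;
}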
Signed-off-by: Oskari Saarenmaa <os@ohmu.fi>
Again stolen from qemu_driver.c, but dropping all the unneeded bits.
This aims to copy all the current qemu validation checks since that's
the most commonly used real driver, but some of the checks are
completely artificial in the test driver.
This only supports creation of internal snapshots for initial
simplicity.
This resolves:
https://bugzilla.redhat.com/show_bug.cgi?id=1012824
https://bugzilla.redhat.com/show_bug.cgi?id=1012834
Note that a similar problem was reported in:
https://bugzilla.redhat.com/show_bug.cgi?id=827519
but the fix only worked for <interface type='hostdev'>, *not* for
<interface type='network'> where the network itself was a pool of
hostdevs.
The symptom in both cases was this error message:
internal error: Unable to determine device index for network device
In both cases the cause was lack of proper handling for netdevs
(<interface>) of type='hostdev' when scanning the netdev list looking
for alias names in qemuAssignDeviceNetAlias() - those that aren't
type='hostdev' have an alias of the form "net%d", while those that are
hostdev use "hostdev%d". This special handling was completely lacking
prior to the fix for Bug 827519 which was:
When searching for the highest alias index, libvirt looks at the alias
for each netdev and if it is type='hostdev' it ignores the entry. If
the type is not hostdev, then it expects the "net%d" form; if it
doesn't find that, it fails and logs the above error message.
That fix works except in the case of <interface type='network'> where
the network uses hostdev (i.e. the network is a pool of VFs to be
assigned to the guests via PCI passthrough). In this case, the check
for type='hostdev' would fail because it was done as:
def->net[i]->type == VIR_DOMAIN_NET_TYPE_HOSTDEV
(which compares what was written in the config) when it actually
should have been:
virDomainNetGetActualType(def->net[i]) == VIR_DOMAIN_NET_TYPE_HOSTDEV
(which compares the type of netdev that was actually allocated from
the network at runtime).
Of course the latter wouldn't be of any use if the netdevs of
type='network' hadn't already acquired their actual network connection
yet, but manual examination of the code showed that this is never the
case.
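The shape of the corrected check in the alias-scanning loop, sketched
with simplified types; virDomainNetGetActualType and
VIR_DOMAIN_NET_TYPE_HOSTDEV are as quoted above, everything else is
illustrative:

#include <stdio.h>

/* Simplified stand-ins for the real libvirt types. */
typedef enum {
    VIR_DOMAIN_NET_TYPE_NETWORK,
    VIR_DOMAIN_NET_TYPE_HOSTDEV,
} virDomainNetType;

typedef struct {
    virDomainNetType type;        /* what the config said */
    virDomainNetType actualType;  /* what the network actually handed out */
    const char *alias;            /* "net%d" or "hostdev%d" */
} virDomainNetDef;

static virDomainNetType
virDomainNetGetActualType(const virDomainNetDef *net)
{
    return net->actualType;
}

static int
highestNetAliasIndex(virDomainNetDef **nets, int nnets)
{
    int idx = -1;
    int i;

    for (i = 0; i < nnets; i++) {
        int thisidx;

        /* Checking nets[i]->type here would miss a type='network' device
         * backed by a hostdev pool; its "hostdev%d" alias would then hit
         * the sscanf below and trigger "Unable to determine device index". */
        if (virDomainNetGetActualType(nets[i]) == VIR_DOMAIN_NET_TYPE_HOSTDEV)
            continue;                    /* hostdev aliases use "hostdev%d" */

        if (sscanf(nets[i]->alias, "net%d", &thisidx) != 1)
            return -1;
        if (thisidx > idx)
            idx = thisidx;
    }
    return idx;
}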
While looking through qemu_command.c, two other places were found to
directly compare the net[i]->type field rather than getting actualType:
* qemuAssignDeviceAliases() - in this case, the incorrect comparison
would cause us to create a "net%d" alias for a netdev with
type='network' but actualType='hostdev'. This alias would be
subsequently overwritten by the proper "hostdev%d" form, so
everything would operate properly, but a string would be
leaked. This patch also fixes this problem.
* qemuAssignDevicePCISlots() - would defer assigning a PCI address to
a netdev if it was type='hostdev', but not for type='network' +
actualType='hostdev'. In this case, the actual device usually hasn't
been acquired yet anyway, and even in the case that it has, there is
no practical difference between assigning a PCI address while
traversing the netdev list or while traversing the hostdev
list. Because changing it would be an effective NOP (but potentially
cause some unexpected regression), this usage was left unchanged.
Join the worker thread regardless of whether it is still running or is
already a zombie. With the current implementation the thread is joined
iff @running is true. However, when the worker executes its last line,
@running is set to false. Hence qemuMonitorTestFree() won't join it
(and free its resources) even though we can clearly see the worker has
run (nobody else sets @running = false).
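A pthread-level sketch of the race; the struct and flags are
hypothetical, and libvirt itself uses its virThread wrappers rather
than raw pthreads:

#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>

typedef struct {
    pthread_t thread;
    bool started;
    bool running;            /* cleared as the worker's very last statement */
} monitorTest;

static void *
worker(void *opaque)
{
    monitorTest *test = opaque;
    /* ... serve monitor commands ... */
    test->running = false;   /* last line of the worker */
    return NULL;
}

static void
monitorTestFree(monitorTest *test)
{
    /* Joining only "if (test->running)" loses whenever the worker has
     * already flipped the flag; join whenever the thread was started. */
    if (test->started)
        pthread_join(test->thread, NULL);
    free(test);
}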
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Right now, we are testing qemuMonitorSystemPowerdown instead of
qemuMonitorJSONSystemPowerdown. It does no harm, as both functions have
the same signature and the former is just a wrapper over the latter. But
we should be consistent, as we're only testing the JSON functions here.
The XML parser reserves 'vnet' as a prefix for automatically
generated NIC device names. Switch the veth device creation
to use this prefix, so it does not have to worry about clashes
with user specified names in the XML.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The veth device creation code runs in two steps: first it looks
for two free veth device names, then it runs 'ip link' to create
the veth pair. There is an obvious race between finding free
names and creating them, when guests are started in parallel.
Rewrite the code to loop and re-try creation if it fails, to
deal with the race condition.
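A sketch of the retry approach; the helper names are hypothetical, and
the real creation code shells out to 'ip link':

#include <stdio.h>

#define VETH_CREATE_RETRIES 10

/* Hypothetical helpers: pick the next candidate "vethN"/"vethM" pair,
 * and ask the kernel to create it (which fails if someone raced us). */
extern int vethFindFreePair(char **veth1, char **veth2);
extern int vethCreatePair(const char *veth1, const char *veth2);

static int
vethCreateWithRetry(char **veth1, char **veth2)
{
    int i;

    for (i = 0; i < VETH_CREATE_RETRIES; i++) {
        if (vethFindFreePair(veth1, veth2) < 0)
            return -1;
        /* A guest starting in parallel may have grabbed the same names
         * between the lookup and the 'ip link add'; just try again. */
        if (vethCreatePair(*veth1, *veth2) == 0)
            return 0;
    }
    fprintf(stderr, "no free veth pair after %d attempts\n",
            VETH_CREATE_RETRIES);
    return -1;
}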
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
If veth device allocation has a fatal error, the veths
array may contain NULL device names. Avoid calling the
virNetDevVethDelete function on such names.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The kernel automatically destroys veth devices when cleaning
up the container network namespace. During normal shutdown, it
is thus likely that the attempt to run 'ip link del vethN'
will fail. If it fails, check if the device exists, and avoid
reporting an error if it has gone. This also switches to using the
virCommand APIs instead of virRun.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>