The nwfilter driver only needs a reference to its private
state object, not a full virConnectPtr. Update the domUpdateCBStruct
struct to have a 'void *opaque' field instead of a virConnectPtr.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
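A rough sketch of the struct change being described (field list
abbreviated and partly assumed, not the exact libvirt definition):

  struct domUpdateCBStruct {
      void *opaque;            /* was: virConnectPtr conn */
      /* ... remaining fields unchanged ... */
  };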
Commit 27e81517a8 set the payload size to 256 KB, which is
actually the max packet size, including the size of the header.
Reduce this by VIR_NET_MESSAGE_HEADER_MAX (24) and set
VIR_NET_MESSAGE_LEGACY_PAYLOAD_MAX to 262120, which was the original
value before increasing the limit in commit eb635de1fe.
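For reference, the arithmetic behind the new constant (written here as
C macros purely for illustration):

  /* 256 KiB total packet size minus the 24 byte header = 262120 */
  #define VIR_NET_MESSAGE_HEADER_MAX          24
  #define VIR_NET_MESSAGE_LEGACY_PAYLOAD_MAX  (256 * 1024 - VIR_NET_MESSAGE_HEADER_MAX)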
This fixes the following error:
error : nodeGetInfo:933 : this function is not supported
by the connection driver: node info not implemented on this platform
freebsdNodeGetCPUCount was renamed to appleFreebsdNodeGetCPUCount
to make it more obvious that it also works on Mac OS X.
Mac OS X can use sysctlbyname just like FreeBSD to get the CPU
frequency. However, the MIB-style name is different from FreeBSD's,
and so is the unit of the returned frequency.
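A minimal sketch of the platform difference, assuming FreeBSD's
"dev.cpu.0.freq" (MHz) versus Mac OS X's "hw.cpufrequency" (Hz); the
MIB names and units here are my reading of the platform interfaces,
not quoted from the patch:

  #include <sys/types.h>
  #include <sys/sysctl.h>

  static int
  getCPUFreqMHz(unsigned int *mhz)
  {
  #ifdef __APPLE__
      unsigned long long hz;
      size_t len = sizeof(hz);
      if (sysctlbyname("hw.cpufrequency", &hz, &len, NULL, 0) < 0)
          return -1;
      *mhz = hz / 1000000;        /* Mac OS X reports Hz */
  #else
      unsigned int freq;
      size_t len = sizeof(freq);
      if (sysctlbyname("dev.cpu.0.freq", &freq, &len, NULL, 0) < 0)
          return -1;
      *mhz = freq;                /* FreeBSD reports MHz */
  #endif
      return 0;
  }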
Signed-off-by: Ryota Ozaki <ozaki.ryota@gmail.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
This fixes the following error:
error : virGetUserEnt:703 : Failed to find user record for uid '32654'
'32654' (the value is random and varies) comes from getsockopt with
the LOCAL_PEERCRED option. getsockopt returns without error but does
not appear to store any value in the buffer meant for the uid.
On Mac OS X, LOCAL_PEERCRED has to be used with the SOL_LOCAL level.
With SOL_LOCAL, getsockopt returns the correct uid.
Note that SOL_LOCAL can be found in
/System/Library/Frameworks/Kernel.framework/Versions/A/Headers/sys/un.h.
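A hedged sketch of the corrected call, based on my reading of the
BSD/OS X <sys/ucred.h> and <sys/un.h> headers (not the literal libvirt
code):

  #include <sys/types.h>
  #include <sys/socket.h>
  #include <sys/un.h>
  #include <sys/ucred.h>

  static int
  getPeerUID(int fd, uid_t *uid)
  {
      struct xucred cr;
      socklen_t len = sizeof(cr);

      /* The level must be SOL_LOCAL on Mac OS X; with level 0 the call
       * succeeds but leaves the buffer untouched. */
      if (getsockopt(fd, SOL_LOCAL, LOCAL_PEERCRED, &cr, &len) < 0 ||
          cr.cr_version != XUCRED_VERSION)
          return -1;

      *uid = cr.cr_uid;
      return 0;
  }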
Signed-off-by: Ryota Ozaki <ozaki.ryota@gmail.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
qemumonitorjsontest.c: In function 'testQemuMonitorJSONqemuMonitorJSONGetBalloonInfo':
qemumonitorjsontest.c:1134: warning: integer constant is too large for 'long' type
* tests/qemumonitorjsontest.c
(testQemuMonitorJSONqemuMonitorJSONGetBalloonInfo)
(testQemuMonitorJSONqemuMonitorJSONGetBlockStatsInfo): Use correct
type.
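An illustration of the class of warning involved (variable name and
value invented, not the test's actual constants): on a platform with a
32-bit long, a 64-bit literal needs an explicit suffix and a 64-bit
variable:

  unsigned long long currmem = 4294967296ULL;  /* 4 GiB: too large for a
                                                   32-bit long without ULL */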
Signed-off-by: Eric Blake <eblake@redhat.com>
On RHEL 5, compilation fails with:
storage/storage_backend.c: In function 'createRawFile':
storage/storage_backend.c:339: warning: implicit declaration of function 'fallocate'
storage/storage_backend.c:339: warning: nested extern declaration of 'fallocate' [-Wnested-externs]
But:
$ grep HAVE_FALLOCATE config.h
/* #undef HAVE_FALLOCATE */
Huh? It turns out that in kernels that old, fallocate() is not
implemented (config.h is correct), but <linux/fs.h> defines
HAVE_FALLOCATE as an empty witness macro for a completely
different purpose. Since storage_backend.c is including
<linux/fs.h> on RHEL 5, we are hosed by the kernel definition.
Newer kernels no longer pollute the namespace, and it's fairly
easy to convert to an expression that works with both the old
kernel witness and the new-style config.h (undefined or 1).
Problem introduced in commit 532fef3.
* src/storage/storage_backend.c (createRawFile): Avoid namespace
pollution from kernel, by checking HAVE_FALLOCATE for a value.
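One way to express such a value check (shown as a sketch; possibly not
the exact expression used in the patch):

  /* false when HAVE_FALLOCATE is undefined or expands to nothing (the
   * RHEL 5 kernel witness), true when config.h defines it to 1 */
  #if HAVE_FALLOCATE - 0
      if (fallocate(fd, 0, 0, capacity) == 0)   /* fd/capacity from caller */
          return 0;
  #endif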
Signed-off-by: Eric Blake <eblake@redhat.com>
I tried to test ./configure --without-lxc --without-remote.
First, the build failed with some odd errors, such as an
inability to build xen, or link failures for virNetTLSInit.
But when you think about it, once there is no remote code,
all of libvirtd is useless, any stateful driver that depends
on libvirtd is also not worth compiling, and any libraries
used only by RPC code are not needed. So I patched
configure.ac to make for some saner defaults when an
explicit disable is attempted. Similarly, since we have
migrated virnetdevbridge into generic code, the workaround
for Linux kernel stupidity must not depend on stateful
drivers being in use.
Then there's 'make check' that needs segregation.
Wow - quite a bit of cleanup to make --without-remote useful :)
* configure.ac: Let --without-remote toggle defaults on stateful
drivers and other libraries. Pick up Linux kernel workarounds
even when qemu and lxc are not being compiled.
* tests/Makefile.am (test_programs): Factor out programs that
require remote.
* src/libvirt_private.syms (rpc/virnet*.h): Move...
* src/libvirt_remote.syms: ...into new file.
* src/Makefile.am (SYM_FILES): Ship new syms file.
Signed-off-by: Eric Blake <eblake@redhat.com>
Fixed the safezero call for allocating the rest of the file after cloning
an existing volume; it used to always use a zero offset, causing it to
only allocate the beginning of the file.
Also modified file creation to try to use fallocate(2) to pre-allocate
disk space before copying any data, so that it fails early if the disk
is full and so that zero blocks can be skipped when copying file
contents. If fallocate isn't available we will zero out the rest of the
file after cloning, and only use sparse cloning if the client requested
a lower allocation than the input volume's capacity.
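A minimal sketch of the pre-allocation step described above, assuming
Linux fallocate(2); the function and variable names are illustrative,
not libvirt's:

  #define _GNU_SOURCE
  #include <sys/types.h>
  #include <errno.h>
  #include <fcntl.h>

  /* Returns 0 if space was reserved, 1 if fallocate is unsupported (the
   * caller falls back to zeroing the tail after cloning), -1 on real
   * errors such as ENOSPC, which thus surface before any data is copied. */
  static int
  preallocate(int fd, off_t capacity)
  {
      if (fallocate(fd, 0, 0, capacity) == 0)
          return 0;
      if (errno == ENOSYS || errno == EOPNOTSUPP)
          return 1;
      return -1;
  }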
Signed-off-by: Oskari Saarenmaa <os@ohmu.fi>
My previous commit 7dc1d4ab was supposed to change safezero to allocate
1 megabyte at maximum, but had the logic reversed and will allocate 1
megabyte at minimum (and a lot more at maximum.)
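In other words, something like this (simplified; the MIN/MAX helpers
and names are illustrative, not the actual safezero code):

  #include <stddef.h>

  #define MIN(a, b) ((a) < (b) ? (a) : (b))
  #define MAX(a, b) ((a) > (b) ? (a) : (b))

  /* intended: write at most 1 MiB per iteration */
  size_t chunk_intended(size_t remaining) { return MIN(remaining, 1024 * 1024); }
  /* buggy (reversed): writes at least 1 MiB, possibly far more */
  size_t chunk_buggy(size_t remaining)    { return MAX(remaining, 1024 * 1024); }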
Signed-off-by: Oskari Saarenmaa <os@ohmu.fi>
Commit id 'c4a4603de' added an output <path> to the nodedev xml, but
did not update the schema.
This resulted in 'virt-xml-validate' failing on a file generated by,
for example, 'virsh nodedev-dumpxml pci_0000_00_00_0'.
This was found by running autotest on my host.
mmap can fail on 32-bit systems if we're trying to zero out a lot of data.
Fall back to using block-by-block writing in that case. While we could
map smaller blocks, it's unlikely that this code is used a lot and it's
easier to just fall back to one of the existing methods.
Also modified the block-by-block zeroing to not allocate a megabyte of
zeroes if we're writing less than that.
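A hedged sketch of the fallback described above (not libvirt's exact
code; assumes a page-aligned offset):

  #include <string.h>
  #include <sys/mman.h>
  #include <sys/types.h>
  #include <unistd.h>

  /* Zero 'len' bytes at 'offset' of 'fd'. */
  static int
  zero_range(int fd, off_t offset, size_t len)
  {
      void *map = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, offset);
      if (map != MAP_FAILED) {
          memset(map, 0, len);
          return munmap(map, len);
      }

      /* Mapping a large range can exhaust the 32-bit address space; fall
       * back to block writes, never allocating more zeroes than needed. */
      char buf[64 * 1024] = { 0 };
      if (lseek(fd, offset, SEEK_SET) == (off_t)-1)
          return -1;
      while (len > 0) {
          size_t chunk = len < sizeof(buf) ? len : sizeof(buf);
          ssize_t done = write(fd, buf, chunk);
          if (done < 0)
              return -1;
          len -= done;
      }
      return 0;
  }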
Signed-off-by: Oskari Saarenmaa <os@ohmu.fi>
Again stolen from qemu_driver.c, but dropping all the unneeded bits.
This aims to copy all the current qemu validation checks since that's
the most commonly used real driver, but some of the checks are
completely artificial in the test driver.
This only supports creation of internal snapshots for initial
simplicity.
This resolves:
https://bugzilla.redhat.com/show_bug.cgi?id=1012824
https://bugzilla.redhat.com/show_bug.cgi?id=1012834
Note that a similar problem was reported in:
https://bugzilla.redhat.com/show_bug.cgi?id=827519
but the fix only worked for <interface type='hostdev'>, *not* for
<interface type='network'> where the network itself was a pool of
hostdevs.
The symptom in both cases was this error message:
internal error: Unable to determine device index for network device
In both cases the cause was lack of proper handling for netdevs
(<interface>) of type='hostdev' when scanning the netdev list looking
for alias names in qemuAssignDeviceNetAlias() - those that aren't
type='hostdev' have an alias of the form "net%d", while those that are
hostdev use "hostdev%d". This special handling was completely lacking
prior to the fix for Bug 827519 which was:
When searching for the highest alias index, libvirt looks at the alias
for each netdev and if it is type='hostdev' it ignores the entry. If
the type is not hostdev, then it expects the "net%d" form; if it
doesn't find that, it fails and logs the above error message.
That fix works except in the case of <interface type='network'> where
the network uses hostdev (i.e. the network is a pool of VFs to be
assigned to the guests via PCI passthrough). In this case, the check
for type='hostdev' would fail because it was done as:
def->net[i]->type == VIR_DOMAIN_NET_TYPE_HOSTDEV
(which compares what was written in the config) when it actually
should have been:
virDomainNetGetActualType(def->net[i]) == VIR_DOMAIN_NET_TYPE_HOSTDEV
(which compares the type of netdev that was actually allocated from
the network at runtime).
Of course the latter wouldn't be of any use if the netdevs of
type='network' hadn't already acquired their actual network connection
yet, but manual examination of the code showed that this is never the
case.
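A hedged sketch of the corrected scan (heavily simplified; the loop and
the alias parsing follow the notation used above and are illustrative,
not the literal qemuAssignDeviceNetAlias() code):

  size_t i;
  int maxidx = -1;

  for (i = 0; i < def->nnets; i++) {
      int idx;

      /* Skip netdevs whose *actual* (runtime) type is hostdev -- their
       * alias is "hostdev%d", not "net%d".  Checking def->net[i]->type
       * alone misses <interface type='network'> backed by a hostdev pool. */
      if (virDomainNetGetActualType(def->net[i]) == VIR_DOMAIN_NET_TYPE_HOSTDEV)
          continue;

      if (!def->net[i]->info.alias ||
          sscanf(def->net[i]->info.alias, "net%d", &idx) != 1)
          continue;   /* the real code reports the error quoted above here */
      if (idx > maxidx)
          maxidx = idx;
  }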
While looking through qemu_command.c, two other places were found to
directly compare the net[i]->type field rather than getting actualType:
* qemuAssignDeviceAliases() - in this case, the incorrect comparison
would cause us to create a "net%d" alias for a netdev with
type='network' but actualType='hostdev'. This alias would be
subsequently overwritten by the proper "hostdev%d" form, so
everything would operate properly, but a string would be
leaked. This patch also fixes this problem.
* qemuAssignDevicePCISlots() - would defer assigning a PCI address to
a netdev if it was type='hostdev', but not for type='network +
actualType='hostdev'. In this case, the actual device usually hasn't
been acquired yet anyway, and even in the case that it has, there is
no practical difference between assigning a PCI address while
traversing the netdev list or while traversing the hostdev
list. Because changing it would be an effective NOP (but potentially
cause some unexpected regression), this usage was left unchanged.
Join the worker thread regardless of whether it is still running or
already a zombie. With the current implementation the thread is joined
iff @running is true. However, when the worker executes its last line,
@running is set to false. Hence qemuMonitorTestFree() won't join it
(and free its resources) even though we can clearly see the worker has
run (nobody else sets @running = false).
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Right now, we are testing qemuMonitorSystemPowerdown instead of
qemuMonitorJSONSystemPowerdown. It does no harm, as both functions have
the same prototype and the former is just a wrapper over the latter.
But we should be consistent, as we're only testing the JSON functions
here.
The XML parser reserves 'vnet' as a prefix for automatically
generated NIC device names. Switch the veth device creation
to use this prefix, so it does not have to worry about clashes
with user specified names in the XML.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The veth device creation code runs in two steps: first it looks
for two free veth device names, then it runs 'ip link' to create
the veth pair. There is an obvious race between finding free
names and creating them, when guests are started in parallel.
Rewrite the code to loop and re-try creation if it fails, to
deal with the race condition.
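A sketch of the retry scheme, with findFreeVethNames() and
runIpLinkAddVethPair() as hypothetical helpers standing in for the real
name lookup and the 'ip link add' invocation:

  #define VETH_CREATE_RETRIES 10

  static int
  createVethPair(char **veth1, char **veth2)
  {
      size_t i;

      for (i = 0; i < VETH_CREATE_RETRIES; i++) {
          if (findFreeVethNames(veth1, veth2) < 0)
              return -1;
          /* Another guest starting in parallel may have taken the same
           * names between the lookup and the creation; if so, the create
           * fails and we simply pick new names and try again. */
          if (runIpLinkAddVethPair(*veth1, *veth2) == 0)
              return 0;
      }
      return -1;
  }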
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
If veth device allocation has a fatal error, the veths
array may contain NULL device names. Avoid calling the
virNetDevVethDelete function on such names.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The kernel automatically destroys veth devices when cleaning
up the container network namespace. During normal shutdown, it
is thus likely that the attempt to run 'ip link del vethN'
will fail. If it fails, check if the device exists, and avoid
reporting an error if it has gone. This switches to use the
virCommand APIs instead of virRun too.
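A sketch of the resulting cleanup logic; runIpLinkDel() and
netdevExists() are hypothetical stand-ins for the virCommand-based
'ip link del' and the device-existence check:

  static int
  deleteVeth(const char *veth)
  {
      if (runIpLinkDel(veth) == 0)
          return 0;

      /* The kernel may already have removed the device along with the
       * container's network namespace; only report an error if it is
       * still present. */
      if (!netdevExists(veth))
          return 0;

      return -1;
  }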
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
During container cleanup there is a race where the kernel may
have destroyed the veth device before we try to set it offline.
This causes log error messages. Given that we're about to
delete the device entirely, setting it offline is pointless.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
When querying for kvm, we try to find the 'enabled' field. Hence the
error message should report that we haven't found 'enabled', not
'running' (which is not even in the reply). Probably a typo or
copy-paste error.
The qemuDomainChangeNet() is called when 'virsh update-device' is
invoked on a NIC. Currently, we fail to update the QoS even though
we have routines for that.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
So far virNetDevBandwidthEqual() expected both the ->in and ->out items
to be allocated for both @a and @b being compared. This is not
necessarily true everywhere in our code. For instance, running
'update-device' twice on a NIC with the very same XML results in a
SIGSEGV in this function.
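The NULL handling boils down to something like this per-direction
comparison (simplified; the field names follow virNetDevBandwidthRate,
but this is not the exact patch):

  static bool
  rateEqual(virNetDevBandwidthRatePtr a, virNetDevBandwidthRatePtr b)
  {
      if (!a && !b)
          return true;      /* both missing: equal */
      if (!a || !b)
          return false;     /* only one side set: different */
      return a->average == b->average &&
             a->peak == b->peak &&
             a->burst == b->burst;
  }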
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>