This way qemuDomainLogContextRef() and qemuDomainLogContextFree() are
no longer needed. The name qemuDomainLogContextFree() was also
somewhat misleading. Additionally, this makes it easier to turn
qemuDomainLogContext into a self-locking object.
Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Reviewed-by: Bjoern Walk <bwalk@linux.vnet.ibm.com>
There were multiple race conditions that could lead to segmentation
faults. The first precondition is that qemuProcessLaunch must fail
some time shortly after starting the new QEMU process. The second
precondition is that the new QEMU process dies - or, to be more
precise, that the QEMU monitor is closed irregularly. If both happen
during qemuProcessStart (starting a domain), there are race windows
between the thread running the event loop (T1) and the thread that is
starting the domain (T2).
First segmentation fault scenario:
If qemuProcessLaunch fails during qemuProcessStart, the code branches
to the 'stop' path where 'qemuMonitorSetDomainLog(priv->mon, NULL,
NULL, NULL)' sets the log function of the monitor to NULL (done in
T2). In the meantime, the event loop of T1 wakes up with an EOF event
for the QEMU monitor because the QEMU process has died. The crash
occurs if T1 has checked 'mon->logFunc != NULL' in qemuMonitorIO just
before the logFunc was set to NULL by T2. In that case T1 will try to
call mon->logFunc, which leads to the segmentation fault.
Solution:
Require the monitor lock for setting the log function.
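For illustration, a minimal sketch of the locking pattern (the exact
signature and member names are assumptions based on this message, not
a quote of the actual patch):

void
qemuMonitorSetDomainLog(qemuMonitorPtr mon,
                        qemuMonitorReportDomainLogError func,
                        void *opaque,
                        virFreeCallback destroy)
{
    virObjectLock(mon);
    /* release any previous log context while holding the lock */
    if (mon->logDestroy && mon->logOpaque)
        mon->logDestroy(mon->logOpaque);
    mon->logFunc = func;
    mon->logOpaque = opaque;
    mon->logDestroy = destroy;
    virObjectUnlock(mon);
}

With this, qemuMonitorIO can check and call mon->logFunc under the
same lock, closing the race window described above.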
Backtrace:
0 0x0000000000000000 in ?? ()
1 0x000003ffe9e45316 in qemuMonitorIO (watch=<optimized out>,
fd=<optimized out>, events=<optimized out>, opaque=0x3ffe08aa860) at
../../src/qemu/qemu_monitor.c:727
2 0x000003fffda2e1a4 in virEventPollDispatchHandles (nfds=<optimized
out>, fds=0x2aa000fd980) at ../../src/util/vireventpoll.c:508
3 0x000003fffda2e398 in virEventPollRunOnce () at
../../src/util/vireventpoll.c:657
4 0x000003fffda2ca10 in virEventRunDefaultImpl () at
../../src/util/virevent.c:314
5 0x000003fffdba9366 in virNetDaemonRun (dmn=0x2aa000cc550) at
../../src/rpc/virnetdaemon.c:818
6 0x000002aa00024668 in main (argc=<optimized out>, argv=<optimized
out>) at ../../daemon/libvirtd.c:1541
Second segmentation fault scenario:
If qemuProcessLaunch fails, it unrefs the log context, and the call to
qemuMonitorSetDomainLog(priv->mon, NULL, NULL, NULL) ends up invoking
qemuDomainLogContextFree(). qemuDomainLogContextFree() invokes
virNetClientClose() to close the client and cleans everything up
(including the unref of _virLogManager.client) when virNetClientClose()
returns. When T1 then tries to report 'qemu unexpectedly closed the
monitor', libvirtd crashes because the client has already been freed.
Solution:
As the critical section in qemuMonitorIO is protected with the monitor
lock, we can use the same solution as proposed for the first
segmentation fault.
Backtrace:
0 virClassIsDerivedFrom (klass=0x3100979797979797,
parent=0x2aa000d92f0) at ../../src/util/virobject.c:169
1 0x000003fffda659e6 in virObjectIsClass (anyobj=<optimized out>,
klass=<optimized out>) at ../../src/util/virobject.c:365
2 0x000003fffda65a24 in virObjectLock (anyobj=0x3ffe08c1db0) at
../../src/util/virobject.c:317
3 0x000003fffdba4688 in
virNetClientIOEventLoop (client=client@entry=0x3ffe08c1db0,
thiscall=thiscall@entry=0x2aa000fbfa0) at
../../src/rpc/virnetclient.c:1668
4 0x000003fffdba4b4c in
virNetClientIO (client=client@entry=0x3ffe08c1db0,
thiscall=0x2aa000fbfa0) at ../../src/rpc/virnetclient.c:1944
5 0x000003fffdba4d42 in
virNetClientSendInternal (client=client@entry=0x3ffe08c1db0,
msg=msg@entry=0x2aa000cc710, expectReply=expectReply@entry=true,
nonBlock=nonBlock@entry=false) at ../../src/rpc/virnetclient.c:2116
6 0x000003fffdba6268 in
virNetClientSendWithReply (client=0x3ffe08c1db0, msg=0x2aa000cc710) at
../../src/rpc/virnetclient.c:2144
7 0x000003fffdba6e8e in virNetClientProgramCall (prog=0x3ffe08c1120,
client=<optimized out>, serial=<optimized out>, proc=<optimized out>,
noutfds=<optimized out>, outfds=0x0, ninfds=0x0, infds=0x0,
args_filter=0x3fffdb64440
<xdr_virLogManagerProtocolDomainReadLogFileArgs>, args=0x3ffffffe010,
ret_filter=0x3fffdb644c0
<xdr_virLogManagerProtocolDomainReadLogFileRet>, ret=0x3ffffffe008) at
../../src/rpc/virnetclientprogram.c:329
8 0x000003fffdb64042 in
virLogManagerDomainReadLogFile (mgr=<optimized out>, path=<optimized
out>, inode=<optimized out>, offset=<optimized out>, maxlen=<optimized
out>, flags=0) at ../../src/logging/log_manager.c:272
9 0x000003ffe9e0315c in qemuDomainLogContextRead (ctxt=0x3ffe08c2980,
msg=0x3ffffffe1c0) at ../../src/qemu/qemu_domain.c:4422
10 0x000003ffe9e280a8 in qemuProcessReadLog (logCtxt=<optimized out>,
msg=msg@entry=0x3ffffffe288) at ../../src/qemu/qemu_process.c:1800
11 0x000003ffe9e28206 in qemuProcessReportLogError (logCtxt=<optimized
out>, msgprefix=0x3ffe9ec276a "qemu unexpectedly closed the monitor")
at ../../src/qemu/qemu_process.c:1836
12 0x000003ffe9e28306 in
qemuProcessMonitorReportLogError (mon=mon@entry=0x3ffe085cf10,
msg=<optimized out>, opaque=<optimized out>) at
../../src/qemu/qemu_process.c:1856
13 0x000003ffe9e452b6 in qemuMonitorIO (watch=<optimized out>,
fd=<optimized out>, events=<optimized out>, opaque=0x3ffe085cf10) at
../../src/qemu/qemu_monitor.c:726
14 0x000003fffda2e1a4 in virEventPollDispatchHandles (nfds=<optimized
out>, fds=0x2aa000fd980) at ../../src/util/vireventpoll.c:508
15 0x000003fffda2e398 in virEventPollRunOnce () at
../../src/util/vireventpoll.c:657
16 0x000003fffda2ca10 in virEventRunDefaultImpl () at
../../src/util/virevent.c:314
17 0x000003fffdba9366 in virNetDaemonRun (dmn=0x2aa000cc550) at
../../src/rpc/virnetdaemon.c:818
18 0x000002aa00024668 in main (argc=<optimized out>, argv=<optimized
out>) at ../../daemon/libvirtd.c:1541
Other code paths where the same problem could occur are fixed as
well (qemuMigrationFinish, qemuProcessStart, and
qemuDomainSaveImageStartVM).
Signed-off-by: Marc Hartmayer <mhartmay@linux.vnet.ibm.com>
Reported-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Unify the *ListDevice API into virnodedeviceobj.c from node_device_driver
and test_driver. The only real difference between the two is that the test
driver doesn't call the aclfilter API. The name of the new API follows that
of other drivers, using "GetNames".
NB: Changed some variable names to match what they really are, for
consistency with other drivers. Also added a clear of the input names.
This also allows virNodeDeviceObjHasCap to be static to virnodedeviceobj.
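For reference, a hedged sketch of the unified helper's shape (the
exact name and signature are assumptions patterned on the "GetNames"
convention mentioned above):

int
virNodeDeviceObjGetNames(virNodeDeviceObjListPtr devs,
                         virConnectPtr conn,
                         virNodeDeviceObjListFilter aclfilter, /* NULL in the test driver */
                         const char *cap,
                         char **const names,
                         int maxnames);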
Signed-off-by: John Ferlan <jferlan@redhat.com>
Unify the NumOfDevices API into virnodedeviceobj.c from node_device_driver
and test_driver. The only real difference between the two is that the test
driver doesn't call the aclfilter API.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Clean up the code to adhere to more of the standard two spaces between
functions, separate lines for type and function name, one argument per line.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Unlike other drivers, this is a test driver only API. Still combining
the logic of testConnectListInterfaces and testConnectListDefinedInterfaces
makes things a bit easier in the long run.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Unlike other drivers, this is a test driver only API. Still combining
the logic of testConnectNumOfInterfaces and testConnectNumOfDefinedInterfaces
makes things a bit easier in the long run.
Signed-off-by: John Ferlan <jferlan@redhat.com>
This patch reworks the Hyper-V driver structs and the code generator
to provide seamless support for both Hyper-V 2008 and 2012 or newer.
This does not implement any new libvirt APIs, it just adapts the
existing 2008-only driver to also handle 2012 and newer by sharing as
much driver code as possible (currently it's all of it :-)). This is
needed to set the foundation before we can move forward with
implementing the rest of the driver APIs.
With the 2012 release, Microsoft introduced "v2" versions of the Msvm_*
WMI classes. Those are largely the same as "v1" (used in 2008) but have
some new properties and need different wsman request URIs. To
accommodate those differences, most of the work went into the code
generator so that it's "aware" of the possibility of multiple versions
of the same WMI class and produces C code accordingly.
To accomplish this, the following changes were made:
* the abstract hypervObject struct's data member was changed to a union
  that has "common", "v1" and "v2" members. Those are structs that
  represent WMI classes that we get back from the wsman response. The
  "common" struct has members that are present in both "v1" and "v2",
  which the driver API callbacks can use to read the data in a
  version-independent manner (if a version-specific member needs to be
  accessed, the driver can check priv->wmiVersion and read from "v1" or
  "v2" as needed). Those structs are guaranteed to be memory aligned by
  the code generator (see the align_property_members implementation
  that takes care of that)
* the generator produces a *_WmiInfo for each WMI class "family" that
  holds an array of hypervWmiClassInfoPtr, each providing information
  as to which request URI to use for each "version" of the given WMI
  class as well as the XmlSerializerInfo struct needed to unserialize
  WS-MAN responses into the data structs. The driver uses those to make
  the proper WS-MAN request depending on which version it's connected
  to.
* the generator no longer produces "helper" functions such as
  hypervGetMsvmComputerSystemList as those were originally just simple
  wrappers around hypervEnumAndPull; instead those are now hand-written
  (to keep driver changes minimal). The reason is that we'll have more
  code coming implementing the missing libvirt APIs and surely code
  patterns will emerge that warrant more useful "utility" functions
  like that.
* a hypervInitConnection was added to the driver which "detects" the
  Hyper-V version by testing a simple wsman request using v2 and then
  falling back to v1; obviously, if both fail, we error out.
To express how the above translates in code:
void
hypervImplementSomeLibvirtApi(virConnectPtr conn, ...)
{
    hypervPrivate *priv = conn->privateData;
    virBuffer query = VIR_BUFFER_INITIALIZER;
    hypervWqlQuery wqlQuery = HYPERV_WQL_QUERY_INITIALIZER;
    Msvm_ComputerSystem *list = NULL; /* typed hypervObject instance */

    /* the WmiInfo struct has the data needed for wsman request and
     * response handling for both v1 and v2 */
    wqlQuery.info = Msvm_ComputerSystem_WmiInfo;
    wqlQuery.query = &query;

    virBufferAddLit(&query, "select * from Msvm_ComputerSystem");

    if (hypervEnumAndPull(priv, &wqlQuery, (hypervObject **) &list) < 0) {
        goto cleanup;
    }

    if (list == NULL) {
        /* none found */
        goto cleanup;
    }

    /* works with v1 and v2 */
    char *vmName = list->data.common->Name;

    /* access a property that is in v2 only */
    char *foo;
    if (priv->wmiVersion == HYPERV_WMI_VERSION_V2)
        foo = list->data.v2->V2Property;
    else
        foo = list->data.v1->V1Property;

 cleanup:
    hypervFreeObject(priv, (hypervObject *) list);
}
Currently named hypervObjectUnified to keep the code
compilable/functional until all bits are in place.
This struct is the result of unserializing a WMI request response.
Therefore, it needs to be able to deal with different "versions" of the
same WMI class. To accomplish this, the "data" member was turned into
a union which:
* has a "common" member that contains only the WMI class fields that
  are safe to access and are present in all "versions". This is ensured
  by the code generator, which takes care of proper struct memory
  alignment between the "common", "v1", "v2" etc. members. This member
  is to be used by the driver code wherever the API implementation can
  be shared for all supported Hyper-V versions.
* the "v1" and "v2" members can be used by the driver code to handle
  version-specific cases.
Example:
Msvm_ComputerSystem *vm = NULL;
...
hypervGetVirtualMachineList(priv, wqlQuery, &vm);
...
/* safe for "v1" and "v2" */
char *vmName = vm->data.common->Name;

/* or if one really needs special handling for "v2" */
if (priv->wmiVersion == HYPERV_WMI_VERSION_V2) {
    char *foo = vm->data.v2->SomeV2OnlyField;
}
In other words, the driver should not concern itself with the
existence of "v1" or "v2" of a WMI class unless absolutely necessary.
This struct is to be used to carry all the information necessary to
issue wsman requests for a given WMI class. Those will be defined by
the generator code (as lists) so that they are handy for the driver
code to "extract" the needed info depending on which Hyper-V host
we're connected to.
For example:
hypervWmiClassInfoListPtr Msvm_ComputerSystem_WmiInfo = {
    .count = 2,
    {
        {
            .name = "Msvm_ComputerSystem",
            .version = "v1",
            .rootUri = "http://asdf.com",
            ...
        },
        {
            .name = "Msvm_ComputerSystem",
            .version = "v2",
            .rootUri = "http://asdf.com/v2",
            ...
        },
    }
};
Then the driver code will grab either "v1" or "v2" to pass the info to
the wsman API, depending on the hypervPrivate->wmiVersion value.
Hyper-V 2012+ uses a new "v2" version of the Msvm_* WMI classes, so we
will store that info in hypervPrivate so that it is easily accessible
in the driver API callbacks and handled accordingly.
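A minimal sketch of how that can look (the enum and member names are
assumptions):

typedef enum {
    HYPERV_WMI_VERSION_V1,
    HYPERV_WMI_VERSION_V2,
} hypervWmiVersion;

typedef struct _hypervPrivate {
    hypervWmiVersion wmiVersion;  /* set once by hypervInitConnection() */
    /* ... existing members (wsman client handle, parsed URI, ...) */
} hypervPrivate;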
https://bugzilla.redhat.com/show_bug.cgi?id=1439132
Add "bsd" to the list of format types to not checked during blkid
processing even though it supposedly knows the format - for some
(now unknown) reason it's returning partition table not found. So
let's just let PARTED handle "bsd" too.
Signed-off-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1439132
Commit id 'a48c674fb' added a check for format types "dvh" and "pc98"
to use the parted print processing instead of blkid processing in
order to validate that the label on the disk is what is expected for
disk pool startup. However, commit id 'a4cb4a74f' really messed things
up by missing an else condition, causing PARTEDFindLabel to always
return DIFFERENT.
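Purely illustrative sketch of the control-flow bug (the real code
differs; identifiers other than DIFFERENT are assumptions):

    /* before: the second assignment ran unconditionally, so the
     * function reported DIFFERENT even when the label matched */
    if (STREQ(format, expected))
        ret = VIR_STORAGE_PARTED_MATCH;
    ret = VIR_STORAGE_PARTED_DIFFERENT;

    /* after: restoring the else keeps a successful match intact */
    if (STREQ(format, expected))
        ret = VIR_STORAGE_PARTED_MATCH;
    else
        ret = VIR_STORAGE_PARTED_DIFFERENT;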
Signed-off-by: John Ferlan <jferlan@redhat.com>
So far only QEMU_MONITOR_MIGRATION_CAPS_POSTCOPY was reset, and only
in a single code path, leaving post-copy enabled in quite a few cases.
https://bugzilla.redhat.com/show_bug.cgi?id=1425003
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
It's only called from qemuMigrationReset now so it doesn't need to be
exported and {tls,sec}Alias are always NULL.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This new API is supposed to reset all migration parameters to make sure
future migrations won't accidentally use them. This patch makes the
first step and moves the qemuMigrationResetTLS call inside
qemuMigrationReset.
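A minimal sketch of this first step (the signatures are assumptions
based on this message):

void
qemuMigrationReset(virQEMUDriverPtr driver,
                   virDomainObjPtr vm,
                   qemuDomainAsyncJob job)
{
    /* for now only TLS parameters are reset here; later patches are
     * expected to reset the remaining migration parameters as well */
    qemuMigrationResetTLS(driver, vm, job);
}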
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Migration parameters are either reset by the main migration code path or
from qemuProcessRecoverMigration* in case libvirtd is restarted during
migration.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
A finished qemuMigrationRun does not mean the migration itself is
finished (it might have just switched to post-copy mode). While
resetting TLS parameters is probably OK at this point even if migration
is still running, we want to consolidate the code which resets various
migration parameters. Thus qemuMigrationResetTLS will be called from
the Confirm phase (or at the end of the Perform phase in case of the
v2 protocol), when migration is either canceled or finished.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
qemuProcessRecoverMigrationOut doesn't explicitly call
qemuMigrationResetTLS relying on two things:
- qemuMigrationCancel resets TLS parameters
- our migration code resets TLS before entering
QEMU_MIGRATION_PHASE_PERFORM3_DONE phase
But this is not obvious and the assumptions will be broken soon. Let's
explicitly reset TLS parameters on all paths which do not kill the
domain.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
There is no async job running when a freshly started libvirtd is trying
to recover from an interrupted incoming migration. While at it, let's
call qemuMigrationResetTLS every time we don't kill the domain. This is
not strictly necessary since TLS is not supported when v2 migration
protocol is used, but doing so makes more sense.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Because of the changes done in the previous commit, @host is already a
migratable CPU and there's no need to do any additional filtering.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
We will need to store two more host CPU models and nested structs look
better than separate items with long complicated names.
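Illustrative sketch of the nested layout (field names are
assumptions):

/* instead of hostCPUReported, hostCPUMigratable, hostCPUFull, ... */
struct {
    virCPUDefPtr reported;    /* CPU model as reported by QEMU */
    virCPUDefPtr migratable;  /* reported model, migratable features only */
    virCPUDefPtr full;        /* reported model, all features */
} hostCPU;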
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This new internal API makes a copy of virCPUDef while removing all
features which would block migration. It uses cpu_map.xml as a database
of such features, which should only be used as a fallback when we cannot
get the data from a hypervisor. The main goal of this API is to decouple
this filtering from virCPUUpdate so that the hypervisor driver can
filter the features according to the hypervisor.
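A hedged sketch of the API's shape (the name and signature follow the
description above and are assumptions):

/* Returns a copy of @cpu with all features that would block migration
 * removed, or NULL on error. cpu_map.xml is only the fallback source
 * of the feature data. */
virCPUDefPtr
virCPUCopyMigratable(virArch arch,
                     virCPUDefPtr cpu);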
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
If formatting the NUMA topology fails, the function returns
immediately, but the buffer structure allocated on the stack references
a lot of heap-allocated memory, which would be leaked in that case.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
qemuProcessVerifyHypervFeatures is supposed to check whether all
requested hyperv features were actually honored by QEMU/KVM. This is
done by checking the corresponding CPUID bits reported by the virtual
CPU. In other words, it doesn't work for string properties, such as
VIR_DOMAIN_HYPERV_VENDOR_ID (there is no CPUID bit we could check). We
could theoretically check all 96 bits corresponding to the vendor
string, but luckily we don't have to check the feature at all. If QEMU
is too old to support hyperv features, the domain won't even start.
Otherwise, it is always supported.
Without this patch, libvirt refuses to start a domain which contains
  <features>
    <hyperv>
      <vendor_id state='on' value='...'/>
    </hyperv>
  </features>
reporting an internal error: "unknown CPU feature __kvm_hv_vendor_id".
This regression was introduced by commit v3.1.0-186-ge9dbe7011, which
(by fixing the virCPUDataCheckFeature condition in
qemuProcessVerifyHypervFeatures) revealed an old bug in the feature
verification code. It's been there ever since the verification was
implemented by commit v1.3.3-rc1-5-g95bbe4bf5, which effectively did not
check VIR_DOMAIN_HYPERV_VENDOR_ID at all.
https://bugzilla.redhat.com/show_bug.cgi?id=1439424
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Introduce STRICT_FRAME_LIMIT_CFLAGS that will be used for
production code and RELAXED_FRAME_LIMIT_CFLAGS for tests.
Raising the limit for tests allows building them with clang
with optimizations disabled.
This header file has been created so that we can expose
internal functions to the test suite without making them
public: those in qemu_capabilities.h bearing the comment
/* Only for use by test suite */
are obvious candidates for being moved over.
This function runs an iscsi command and parses its output. However,
due to the nature of things, the virISCSIExtractSession() callback can
be called multiple times. Each run would allocate new memory and
overwrite the variable holding the pointer to it, thus leaking the
previous allocations.
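A hedged sketch of the fix pattern (the callback's real signature and
the group index are assumptions):

static int
virISCSIExtractSession(char **const groups,
                       void *opaque)
{
    char **session = opaque;

    /* free the result of any previous invocation before overwriting */
    VIR_FREE(*session);
    return VIR_STRDUP(*session, groups[0]) < 0 ? -1 : 0;
}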
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Even though the virMacMap object is not necessarily created at
the same time as the network object, the former makes no sense
without the latter and thus should be unref'd in the network
object dispose function.
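For illustration, a sketch of the dispose hookup (the member name is
an assumption):

static void
virNetworkObjDispose(void *obj)
{
    virNetworkObjPtr net = obj;

    /* ... free the remaining members ... */
    virObjectUnref(net->macmap);  /* virObjectUnref is NULL-safe */
}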
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Imagine that this function is called twice over the same disk source.
While in the first run all allocated memory is freed, not all pointers
are set to NULL (e.g. def->srcpool). So when called again, these
pointers are freed again, resulting in a double free.
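The fix pattern, sketched (the function name is hypothetical; only
def->srcpool comes from this message):

static void
virDomainDiskSourceDefClear(virDomainDiskDefPtr def)
{
    /* ... other frees ... */
    virStorageSourcePoolDefFree(def->srcpool);
    def->srcpool = NULL;  /* reset so a repeated call is a no-op */
}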
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
After a restart of libvirtd the 'checkPool' method is supposed to
validate that the pool is online. Since libvirt then refreshes the pool
contents anyway, just return whether the pool was supposed to be online
so that the refresh code can be reached. This is necessary since a pool
that does not implement the method is automatically considered
inactive.
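A minimal sketch of such a checkPool implementation (the backend name
and flag are assumptions):

static int
virStorageBackendFooCheckPool(virStoragePoolObjPtr pool,
                              bool *isActive)
{
    /* report the stored state; the subsequent refresh performs the
     * real validation of the pool contents */
    *isActive = pool->active;
    return 0;
}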
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1436065