Commit v3.8.0-95-gfd885a06a dropped the nmodels parameter from several APIs
in src/cpu/cpu.h but failed to update all callers.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
If the 'add' uevent is fired before the sysfs tree for a device has been
created, we should take a best-effort approach and give the kernel a
predetermined amount of time, waiting for the attributes to become ready
rather than discarding the device from our device list forever. If they
don't appear within that time frame, we have to move on, since libvirt
can't wait indefinitely.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1463285
Signed-off-by: Erik Skultety <eskultet@redhat.com>
We have a number of places where we work around timing issues with device
attributes (files in general) not being available at the time we process
them, by calling usleep in a loop for a fixed number of tries. We might as
well have a utility function that does that, so we don't have to duplicate
this ugly workaround even more.
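A minimal sketch of what such a helper could look like (the name, signature,
and error handling here are illustrative only, not necessarily what the
final utility function uses):

#include <errno.h>
#include <stddef.h>
#include <unistd.h>

/* Illustrative helper: poll for @path to appear, sleeping @ms
 * milliseconds between attempts and giving up after @tries attempts.
 * Returns 0 once the path exists, -1 otherwise. */
static int
waitForFileSketch(const char *path, size_t ms, size_t tries)
{
    size_t i;

    for (i = 0; i < tries; i++) {
        if (access(path, F_OK) == 0)
            return 0;
        if (errno != ENOENT)
            return -1;
        usleep(ms * 1000);
    }

    return -1;
}

Callers would then replace their open-coded usleep loops with a single call,
e.g. waitForFileSketch(attrpath, 100, 100) for roughly ten seconds of waiting.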
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Adjust udevEventHandleThread to be a proper thread routine running in an
infinite loop handling devices. The handler thread pulls all available
data from the udev monitor and only then waits until a wakeup signal for
new incoming data has been emitted by udevEventHandleCallback.
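Stripped down to plain pthreads and libudev for illustration, the resulting
pattern looks roughly like this (all names are made up for the sketch; the
real code uses libvirt's threading primitives and driver private data):

#include <pthread.h>
#include <stdbool.h>
#include <libudev.h>

struct event_data {
    pthread_mutex_t lock;
    pthread_cond_t cond;
    bool data_ready;               /* set by the event-loop callback */
    struct udev_monitor *monitor;
};

/* Worker loop: drain the monitor completely, then sleep on the
 * condition until the callback signals that new data has arrived. */
static void *
event_handle_thread(void *opaque)
{
    struct event_data *priv = opaque;

    for (;;) {
        struct udev_device *device;

        pthread_mutex_lock(&priv->lock);
        while (!priv->data_ready)
            pthread_cond_wait(&priv->cond, &priv->lock);

        device = udev_monitor_receive_device(priv->monitor);
        if (!device)               /* monitor drained, wait for a wakeup */
            priv->data_ready = false;
        pthread_mutex_unlock(&priv->lock);

        if (device) {
            /* process the device outside of the lock ... */
            udev_device_unref(device);
        }
    }

    return NULL;
}

udevEventHandleCallback is then reduced to setting data_ready and signalling
the condition whenever the monitor's file descriptor becomes readable.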
Signed-off-by: Erik Skultety <eskultet@redhat.com>
This patch splits udevEventHandleCallback in two, introducing
udevEventHandleThread, so that the latter can later be refactored into a
proper thread which waits some time for the kernel to create the whole
sysfs tree for a device, since we cannot do that directly in the event loop.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
udevSetupSystemDev only needs the udev data lock because it calls
udevGetDMIData, which accesses some protected structure members.
udevGetDMIData can take the lock on its own just fine; there is no need to
hold it the whole time.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
The driver locks are unnecessary here: the cleanup is currently only called
from the main daemon thread, so we can't race. Moreover, @devs and
@privateData are self-lockable objects, so there is no problem there either.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Since there's going to be a worker thread that needs some data protected by
a lock, the code would get unnecessarily complex: two sets of locks would be
needed, the driver lock (for the udev monitor and the event handle) and a
mutex protecting the thread-local data. Given that the future thread will
need to access the udev monitor socket as well, we can protect everything
with a single lock instead. Even better, by converting the driver's private
data to a lockable object, we get automatic object disposal for free.
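Roughly, the conversion follows libvirt's usual virObjectLockable pattern;
here is a sketch with illustrative member names (the exact fields in the
real driver differ):

#include <libudev.h>
#include "virobject.h"

typedef struct _udevEventData udevEventData;
typedef udevEventData *udevEventDataPtr;

struct _udevEventData {
    virObjectLockable parent;          /* single lock protecting everything */

    struct udev_monitor *udev_monitor;
    int watch;                         /* event handle */
};

static virClassPtr udevEventDataClass;

static void
udevEventDataDispose(void *obj)
{
    udevEventDataPtr priv = obj;

    if (priv->udev_monitor)
        udev_monitor_unref(priv->udev_monitor);
}

static udevEventDataPtr
udevEventDataNew(void)
{
    if (!udevEventDataClass &&
        !(udevEventDataClass = virClassNew(virClassForObjectLockable(),
                                           "udevEventData",
                                           sizeof(udevEventData),
                                           udevEventDataDispose)))
        return NULL;

    return virObjectLockableNew(udevEventDataClass);
}

Callers take the lock with virObjectLock(priv), drop it with
virObjectUnlock(priv), and the dispose callback runs automatically once the
last virObjectUnref drops the reference count to zero.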
Signed-off-by: Erik Skultety <eskultet@redhat.com>
We need to perform a sanity check on the udev monitor before every use so
that we know nothing has changed in the meantime. The code is moved to a
separate helper to enhance readability and keep the focus on the important
parts of the udevEventHandleCallback handler.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Even though hal doesn't make use of it, the privileged flag is related
to the daemon/driver rather than the backend actually used.
While at it, get rid of some tab indentation in the driver state struct.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
Option --full will always display the name and MAC address of the
interface. Neither virsh help nor the virsh man page mentioned that.
Signed-off-by: Chen Hanxiao <chenhanxiao@gmail.com>
There were a bunch of comment blocks that were useless for describing what
the code following them does: most of them either documented the obvious or
didn't help at all.
Signed-off-by: Erik Skultety <eskultet@redhat.com>
We have a syntax-check rule to catch tab indentation, but it naturally can't
catch tabs used as spacing, i.e. as a delimiter. This patch is the result of
running 'vim -en +retab +wq'
(using tabstop=8 softtabstop=4 shiftwidth=4 expandtab) on each file from
a list generated by the following:
find . -regextype gnu-awk \
-regex ".*\.(rng|syms|html|s?[ch]|py|pl|php(\.code)?)(\.in)?" \
| xargs git grep -lP "\t"
Signed-off-by: Erik Skultety <eskultet@redhat.com>
When formatting an inactive or migratable XML we will need to suppress
backing chain members which were detected from the disk, to keep the
semantics straight. This means we need to record whether a virStorageSource
originates from autodetection.
The 'file' object is needed when formatting the command line, but it makes
nesting of the objects harder for use with blockdev. Separate the wrapping
into the 'file' object into a helper used specifically for disk sources in
the old code path.
Move qemuFreeKeywords into qemu_parse_command.c as qemuParseKeywordsFree
and call it rather than open-coding the same cleanup in multiple places.
Signed-off-by: Kothapally Madhu Pavan <kmp@linux.vnet.ibm.com>
Without the fix in the previous patch, the JSON data from QEMU would be
interpreted as Haswell-noTSX because x86DataFilterTSX would filter out the
rtm and hle features as a result of the
family == 6 && model == 63 && stepping < 4
test even though this CPU has stepping == 4.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Even though only family and model are used for matching CPUID data with
CPU models from cpu_map.xml, stepping is used by x86DataFilterTSX, which
is supposed to disable TSX on CPU models with broken TSX support. Thus
we need to start parsing stepping from QEMU to make sure we don't
disable TSX on CPUs which provide a working TSX implementation. See the
following patch for a real-world example of such a CPU.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
If the same boot order is specified twice (or more) in the domain XML,
we call free() on an uninitialized loadparm during cleanup in
virDomainDeviceBootParseXML and get SIGABRT (or similar) as a result.
When libvirt older than 3.9.0 reconnected to a running domain started by
old libvirt it could have messed up the expansion of host-model by
adding features QEMU does not support (such as cmt). Thus whenever we
reconnect to a running domain, revert to an active snapshot, or restore
a saved domain we need to check the guest CPU model and remove the
CPU features unknown to QEMU. We can do this because we know the domain
was successfully started, which means the CPU did not contain the
features when libvirt started the domain.
https://bugzilla.redhat.com/show_bug.cgi?id=1495171
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
When reconnecting to a domain with a host-model CPU that was started by an
old libvirt which did not replace host-model with the real CPU definition,
libvirt replaces the host-model CPU with the CPU from capabilities (because
this is what the old libvirt did when it started the domain). Without this
patch libvirt could use features unknown to QEMU in the CPU definition which
replaced the original host-model CPU. Such a domain would keep running just
fine, but any attempt to migrate it would fail, and once the domain was
saved or snapshotted, restoring it would fail too.
In other words whenever we want to use the CPU definition from host
capabilities as a guest CPU definition, we have to filter the unknown
features.
https://bugzilla.redhat.com/show_bug.cgi?id=1495171
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
When migration fails, QEMU may provide a description of the error in
the reply to the query-migrate QMP command. We can fetch this error and use
it instead of the generic "unexpectedly failed" message.
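QEMU exposes the details in the error-desc member of the query-migrate
reply; conceptually the change boils down to something like this sketch
(the function name is illustrative, and error handling is omitted):

#include "virjson.h"
#include "virerror.h"

/* Sketch: prefer QEMU's own description of the migration failure over
 * the generic message when reporting the error. */
static void
reportMigrationError(virJSONValuePtr ret)
{
    const char *desc = virJSONValueObjectGetString(ret, "error-desc");

    virReportError(VIR_ERR_OPERATION_FAILED, "%s",
                   desc ? desc : "migration job unexpectedly failed");
}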
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Pavel Hrdina <phrdina@redhat.com>
Commit e371b3b changed all the links to libvirt.org to use https.
Remove the leftover 'http' links from the downloads page, since they
point to https anyway.
Express a properly terminated backing chain by putting a
virStorageSource of type VIR_STORAGE_TYPE_NONE in the chain. The newly
used helpers simplify this greatly.
The change fixes a bug: formatting an incomplete backing chain and parsing
it back would end up expressing a terminated chain, since
src->backingStoreRaw was not populated. By relying on the terminator
object this can now be processed appropriately.
Add helpers that will simplify checking if a backing file is valid or
whether it has backing store. The helper virStorageSourceIsBacking
returns true if the given virStorageSource is a valid backing store
member. virStorageSourceHasBacking returns true if the virStorageSource
has a backing store child.
Adding these functions creates a central point for further refactoring.
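In rough terms, based purely on the semantics described above, the two
helpers boil down to the following sketch (the VIR_STORAGE_TYPE_NONE
terminator is introduced elsewhere in this series):

#include <stdbool.h>
#include "virstoragefile.h"

/* A virStorageSource is a proper backing chain member only if it has a
 * type; VIR_STORAGE_TYPE_NONE is used as the chain terminator. */
bool
virStorageSourceIsBacking(const virStorageSource *src)
{
    return src && src->type != VIR_STORAGE_TYPE_NONE;
}

/* True if @src has a backing store child which is itself a valid
 * chain member. */
bool
virStorageSourceHasBacking(const virStorageSource *src)
{
    return virStorageSourceIsBacking(src) &&
           src->backingStore &&
           src->backingStore->type != VIR_STORAGE_TYPE_NONE;
}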
The storage driver uses virStorageSource only partially to store its
configuration, but fully when parsing backing files of storage volumes.
This patch sets the 'type' field to a value other than
VIR_STORAGE_TYPE_NONE so that further patches can add a terminator
element to backing chains without breaking iteration.
The backing store indexes were not bound to the storage sources in any
way. To allow us to bind a given alias to a given storage source we need
to save the index in virStorageSource. The backing store ids are now
generated when detecting the backing chain.
Since we don't re-detect the backing chain after snapshots, the
numbering needs to be fixed there.
The index will remain an internal property even if we allow backing store
parsing from the XML, so we need to allow a backing store without it in
the schema.
The existing qemuParseCommandLineMem() parses the "-m 4G" format string.
This patch allows it to also parse the "-m size=8126464k,slots=32,maxmem=33554432k"
format along with the existing one, and adds a test case to validate the changes.
Signed-off-by: Kothapally Madhu Pavan <kmp@linux.vnet.ibm.com>
Hyper-V uses its own specific memory management so no mapping is going to
be perfect. However, it is more correct to map Limit to max_memory (it
really is the upper limit of what the VM may potentially use) and keep
cur_balloon equal to total_memory.
The typical value returned from Hyper-V in Limit is 1 TiB, which is not
really going to work if interpreted as "startup memory" to be ballooned
away later.
Signed-off-by: Ladi Prosek <lprosek@redhat.com>
The code was vulnerable to SQL injection. Likely not a security issue due to
WMI SQL and other constraints but still lame. For example:
virsh # dominfo \"
error: failed to get domain '"'
error: internal error: SOAP fault during enumeration: code 's:Sender', subcode
'n:CannotProcessFilter', reason 'The data source could not process the filter.
The filter might be missing or it might be invalid. Change the filter and try
the request again. ', detail 'The WS-Management service cannot process the
request. The WQL query is invalid. '
This commit fixes the Hyper-V driver by escaping all WMI SQL string parameters.
The same command with the fix:
virsh # dominfo \"
error: failed to get domain '"'
error: Domain not found: No domain with name "
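The idea of the escaping, as a standalone sketch (the helper name and the
exact set of escaped characters are assumptions for illustration; the
driver's real escaping may differ):

#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: return a newly allocated copy of @value with
 * backslashes and quotes escaped so it can be embedded safely inside
 * a WMI SQL (WQL) string literal. The caller frees the result. */
static char *
wmiSqlEscapeSketch(const char *value)
{
    size_t len = strlen(value);
    char *escaped = malloc(2 * len + 1);   /* worst case: everything escaped */
    char *out = escaped;

    if (!escaped)
        return NULL;

    for (; *value; value++) {
        if (*value == '\\' || *value == '"' || *value == '\'')
            *out++ = '\\';
        *out++ = *value;
    }
    *out = '\0';

    return escaped;
}

Every string parameter is run through such an escaping routine before being
interpolated into the WQL query, so a crafted domain name can no longer
change the structure of the query.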
Signed-off-by: Ladi Prosek <lprosek@redhat.com>
"%s is not a Hyper-V server" is not a correct generalization of all possible
error conditions of hypervEnumAndPull. For example:
$ virsh --connect hyperv://localhost/?transport=http
Enter username for localhost [administrator]:
Enter administrator's password for localhost: <enters incorrect password>
error: failed to connect to the hypervisor
error: internal error: localhost is not a Hyper-V server
This commit removes the general virReportError from hypervInitConnection and
also the "Invalid query" virReportError from hypervSerializeEprParam, which
does not correctly describe the error either (virBufferCheckError has
already set a meaningful error message at that point).
The same scenario with the fix:
$ virsh --connect hyperv://localhost/?transport=http
Enter username for localhost [administrator]:
Enter administrator's password for localhost: <enters incorrect password>
error: failed to connect to the hypervisor
error: internal error: Transport error during enumeration: User, password or
similar was not accepted (26)
Signed-off-by: Ladi Prosek <lprosek@redhat.com>
The default_tls_x509_verify (and related) parameters in qemu.conf
control whether the QEMU TLS servers request & verify certificates
from clients. This works as a simple access control system for
servers by requiring the CA to issue certs to permitted clients.
This use of client certificates is disabled by default, since it
requires extra work to issue client certificates.
Unfortunately the code was using this configuration parameter when
setting up both TLS clients and servers in QEMU. The result was that
TLS clients for character devices and disk devices had verification
turned off, meaning they would ignore errors while validating the
server certificate.
This allows for trivial MITM attacks between client and server,
as any certificate returned by the attacker will be accepted by
the client.
This is assigned CVE-2017-1000256 / LSN-2017-0002
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Somewhere around commit 9ff9d9f reserving entire PCI slots was
eliminated, as demonstrated by commit 6cc2014.
Reserve the functions required by the implicit devices:
00:01.0 ISA Bridge
00:01.1 IDE Controller
00:01.2 USB Controller (unless USB is disabled)
00:01.3 Bridge
https://bugzilla.redhat.com/show_bug.cgi?id=1460143
xsaveopt is artificially removed from the host to test a disabled feature
which is only included in QEMU's version of the CPU model.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
arat is now enabled even if the hardware does not support it.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>