The new helper is a convenient wrapper around qemuMonitorTestNew; it
feeds the test monitor with QMP replies read from a specified file.
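A minimal usage sketch (the helper's exact name and signature here are
assumptions based on this description):

    /* Hypothetical example: build a test monitor whose QMP
     * replies come from a data file instead of inline strings. */
    qemuMonitorTestPtr test;

    if (!(test = qemuMonitorTestNewFromFile("cpu-data.replies", xmlopt)))
        return -1;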
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
This patch splits qemuMonitorJSONGetCPUx86Data into three functions:
- qemuMonitorJSONCheckCPUx86 checks if QEMU supports reporting CPUID
features for a guest CPU
- qemuMonitorJSONParseCPUx86Features parses CPUID features from a JSON
array
- qemuMonitorJSONGetCPUx86Data gets the requested guest CPU property
from QOM and uses qemuMonitorJSONParseCPUx86Features to parse it
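Roughly (the prototypes below are a sketch, not the exact signatures):

    /* Does QEMU support reporting CPUID features for a guest CPU? */
    static int
    qemuMonitorJSONCheckCPUx86(qemuMonitorPtr mon);

    /* Parse CPUID features out of a JSON array. */
    static int
    qemuMonitorJSONParseCPUx86Features(virJSONValuePtr data,
                                       virCPUDataPtr *cpudata);

    /* Fetch the requested QOM property and parse it. */
    int
    qemuMonitorJSONGetCPUx86Data(qemuMonitorPtr mon,
                                 const char *property,
                                 virCPUDataPtr *cpudata);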
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The CPUID instruction normally takes its parameter from EAX, but sometimes
ECX is used as an additional parameter. Let's rename 'function' to
'eax_in' in preparation for adding 'ecx_in'.
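For example, CPUID leaf 4 (cache topology) takes a subleaf index in
ECX. A sketch of what the raw query looks like:

    #include <stdint.h>

    /* Sketch: eax_in selects the CPUID leaf, ecx_in the subleaf. */
    static void
    cpuid(uint32_t eax_in, uint32_t ecx_in,
          uint32_t *eax, uint32_t *ebx, uint32_t *ecx, uint32_t *edx)
    {
        asm("cpuid"
            : "=a" (*eax), "=b" (*ebx), "=c" (*ecx), "=d" (*edx)
            : "a" (eax_in), "c" (ecx_in));
    }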
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
A CPU data XML file already contains the architecture; let the parser
use it to detect which CPU driver should be used to parse the rest of
the file.
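For illustration, the layout this relies on looks something like the
following (the attribute names are my assumption of the x86 CPU data
format):

    <!-- the arch attribute selects the CPU driver for the rest -->
    <cpudata arch='x86'>
      <cpuid eax_in='0x00000001' eax='0x000306a9' ebx='0x00020800'
             ecx='0xfffa3203' edx='0x1f8bfbff'/>
    </cpudata>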
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
When computing CPU data for a given guest CPU, we should set the CPUID
vendor bits appropriately so that we don't lose the vendor when
transforming the CPU data back to an XML description.
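For reference, the vendor string lives in CPUID leaf 0, with its twelve
ASCII bytes split across EBX, EDX, ECX in that order ("GenuineIntel" is
EBX="Genu", EDX="ineI", ECX="ntel"). A sketch (the helper is
illustrative, not the actual patch):

    #include <string.h>

    static void
    setVendorBits(virCPUx86CPUID *leaf0, const char vendor[12])
    {
        memcpy(&leaf0->ebx, vendor, 4);      /* bytes 0-3  */
        memcpy(&leaf0->edx, vendor + 4, 4);  /* bytes 4-7  */
        memcpy(&leaf0->ecx, vendor + 8, 4);  /* bytes 8-11 */
    }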
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The problem is, localtime_r() returns a pointer to the converted time
or NULL in case of an error. But checking the glibc sources, an error
occurs iff NULL is passed as either of the arguments the function
takes. GCC, however, fails to see that:
../../tools/virsh-network.c: In function 'cmdNetworkDHCPLeases':
../../tools/virsh-network.c:1370:12: error: potential null pointer dereference [-Werror=null-dereference]
ts = *localtime_r(&expirytime_tmp, &ts);
~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors
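The obvious way to keep GCC quiet is to check the (practically
impossible) NULL return explicitly; a sketch of that workaround:

    struct tm ts;
    /* localtime_r can only fail if it gets a NULL argument, but
     * checking the result satisfies -Wnull-dereference. */
    if (!localtime_r(&expirytime_tmp, &ts))
        continue;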
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
On LXC domain startup we have already called virDomainObjSetDefTransient
to fill vm->newDef.
There is no need to call virDomainLiveConfigHelperMethod, which has the
ability to fill newDef if it's NULL.
A few functions using virDomainLiveConfigHelperMethod use the generic
name 'vmdef' to point to the persistent definition.
Use persistentDef and/or persistentDefCopy to make its purpose obvious.
In Fedora >= 21, there is a new crypto priority framework
that sets TLS policies globally for all apps. To activate
this with GNUTLS we must request "@SYSTEM" instead of
the traditional "NORMAL" string. The '@' causes gnutls to do
a lookup in its config file for the 'SYSTEM' keyword entry.
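In gnutls API terms the difference is just the priority string passed
in; a sketch:

    /* "@SYSTEM" defers to the system-wide crypto policy file;
     * "NORMAL" keeps the traditional built-in default. */
    const char *err = NULL;
    if (gnutls_priority_set_direct(session, "@SYSTEM", &err) < 0)
        return -1;  /* err points at the offending token */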
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Support reading the TLS priority from the client configuration
file via the "tls_priority" config option, e.g.
$ cat $HOME/.config/libvirt/libvirt.conf
tls_priority="NORMAL:-VERS-SSL3.0"
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The virConnectOpenInternal method opens the libvirt client
config file and uses it to resolve things like URI aliases.
There may be driver-specific things that are useful to
store in the config file too, so rather than have them
re-parse the same file, pass the virConfPtr down to the
drivers.
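That lets a driver read its own settings from the already-parsed
handle; a sketch using the virConf accessors (the exact call site is an
assumption):

    /* Sketch: pull a driver-specific setting out of the shared
     * virConfPtr handed down by virConnectOpenInternal. */
    char *tls_priority = NULL;
    if (conf &&
        virConfGetValueString(conf, "tls_priority", &tls_priority) < 0)
        return VIR_DRV_OPEN_ERROR;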
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Add support for a "tls_priority" URI parameter in remote
driver URIs, e.g.
qemu+tls://localhost/session?tls_priority=NORMAL:-VERS-SSL3.0
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Add a "tls_priority" config option to /etc/libvirt/libvirtd.conf
to allow the administrator to override the built-in default
setting. This only affects the server side configuration.
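For example:

    $ cat /etc/libvirt/libvirtd.conf
    # Override the compile time default TLS priority string.
    # This only affects the server side of connections.
    tls_priority = "NORMAL:-VERS-SSL3.0"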
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Extend the virNetTLSContextNew* constructors to allow
the TLS priority string to be passed in, overriding the
compile time default.
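Conceptually the new argument just ends up in gnutls; a simplified
sketch (not the real constructor signature, with TLS_PRIORITY standing
in for the compile-time default):

    /* Simplified: a NULL priority falls back to the built-in default. */
    const char *err = NULL;
    if (gnutls_priority_set_direct(session,
                                   priority ? priority : TLS_PRIORITY,
                                   &err) < 0)
        return -1;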
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Currently libvirt calls gnutls_set_default_priority()
which on old systems resolves to "NORMAL", while on new
systems it resolves to "@SYSTEM". Either way, this
is a global default that is identical across all apps.
We want to allow distros the flexibility to define a
custom default priority string for libvirt, so add
a --tls-priority=STRING flag to configure to enable
this to be set.
It is expected that distros would use this when creating
RPM/Deb/etc packages, according to their preferred crypto
handling policies.
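For example:

    $ ./configure --tls-priority="@SYSTEM"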
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Currently we set the gnutls log function when creating a
TLS context, however, the setting is in fact global, not
per context. So we should be setting it when we first call
gnutls_global_init() instead.
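i.e. roughly (the callback name is illustrative; the gnutls entry
points are real):

    static void
    virNetTLSLog(int level, const char *str)
    {
        VIR_DEBUG("%d %s", level, str);
    }

    /* Global, not per-context: register once at init time. */
    gnutls_global_init();
    gnutls_global_set_log_function(virNetTLSLog);
    gnutls_global_set_log_level(10);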
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
We need to use the gnutls_priority_set_direct method, which was not
introduced until 2.1.7, so bump the minimum version to 2.2.0, which is
the first stable release that includes it. This release dates from Dec
2007, so it is reasonable to ditch support for the gnutls 1.x.x series
entirely.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Currently if a guest has listen address 0.0.0.0 or [::] and you run
"virsh domdisplay $domain" you always get "spice://localhost:$port".
We want to print a better address when someone is connected from a
different computer using "virsh -c qemu+ssh://some.host/system". This
patch fixes the behavior of virsh to print "spice://some.host:$port"
in this case.
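A sketch of the idea, using the public API plus libvirt's URI helper
(the actual virsh logic is more involved):

    /* Prefer the host we are connected to over a wildcard
     * 0.0.0.0 / [::] listen address. */
    char *uristr = virConnectGetURI(conn);  /* e.g. qemu+ssh://some.host/system */
    virURIPtr uri = uristr ? virURIParse(uristr) : NULL;
    const char *host = (uri && uri->server) ? uri->server : "localhost";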
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1332446
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>