This file contains only a single function, detect_scsi_host_caps(),
which is declared in node_device_driver.h and called from both the hal
and udev backends. Other things common to the hal and udev drivers
can be placed in that file though. As a prelude to adding further
functions, this patch renames the existing function to something
closer in line with other internal libvirt function names
(nodeDeviceSysfsGetSCSIHostCaps()), and puts the declarations into a
separate .h file.
For some reason a union (_virNodeDevCapData) that had only been
declared inside the toplevel struct virNodeDevCapsDef was being used
as an argument to functions all over the place. Since it was only a
union, the "type" attribute wasn't necessarily sent with it. While
this works, it just seems wrong.
This patch creates a toplevel typedef for virNodeDevCapData and
virNodeDevCapDataPtr, making it a struct that has the type attribute
as a member, along with an anonymous union of everything that used to
be in union _virNodeDevCapData. This way we only have to change the
following:
s/union _virNodeDevCapData */virNodeDevCapDataPtr /
and
s/caps->type/caps->data.type/
This will make me feel less guilty when adding functions that need a
pointer to one of these.
Introduces two new -machine option parameters to the QEMU command to
enable/disable the CPACF protected key management operations for a guest:
aes-key-wrap='on|off'
dea-key-wrap='on|off'
The QEMU driver maps the corresponding domain configuration elements to
the QEMU -machine option parameters when building the QEMU command line:
<cipher name='aes' state='on'> --> aes-key-wrap=on
<cipher name='aes' state='off'> --> aes-key-wrap=off
<cipher name='dea' state='on'> --> dea-key-wrap=on
<cipher name='dea' state='off'> --> dea-key-wrap=off
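For illustration, with both elements configured the generated command
line would then contain something like (the machine type shown here is
just an example):

    -machine s390-ccw-virtio,aes-key-wrap=on,dea-key-wrap=off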
Signed-off-by: Tony Krowiak <akrowiak@linux.vnet.ibm.com>
Signed-off-by: Daniel Hansel <daniel.hansel@linux.vnet.ibm.com>
Signed-off-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Two new domain configuration XML elements are added to enable/disable
the protected key management operations for a guest:
<domain>
...
<keywrap>
<cipher name='aes|dea' state='on|off'/>
</keywrap>
...
</domain>
Signed-off-by: Tony Krowiak <akrowiak@linux.vnet.ibm.com>
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
Signed-off-by: Daniel Hansel <daniel.hansel@linux.vnet.ibm.com>
Reviewed-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Currently, the libxl driver does not support any security drivers.
When the qemu driver has no security driver configured,
nodeGetSecurityModel succeeds but returns an empty virSecurityModel
object. Do the same in the libxl driver instead of reporting the error
"this function is not supported by the connection driver:
virNodeGetSecurityModel".
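A minimal sketch of the libxl change, assuming it mirrors the qemu
driver's behaviour (ACL checks and locking omitted):

    static int
    libxlNodeGetSecurityModel(virConnectPtr conn ATTRIBUTE_UNUSED,
                              virSecurityModelPtr secmodel)
    {
        /* no security driver configured: return an empty model instead of
         * reporting VIR_ERR_NO_SUPPORT */
        memset(secmodel, 0, sizeof(*secmodel));
        return 0;
    }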
We have previously effectively ignored all <controller type='ide'>
elements in a domain definition.
On the i440fx-based machinetypes there is an IDE controller that is
included in the chipset and can't be removed (which is the ide
controller with index='0'), so it makes sense to ignore that one
controller. However, if an i440fx domain definition has a 2nd
controller, nothing catches this error (unless you also have a disk
attached to it, in which case qemu will complain that you're trying to
use the ide controller named "ide1", which doesn't exist), and if any
other type of domain has even a single controller defined, it will be
incorrectly ignored.
Ignoring a bogus controller definition isn't such a big problem, as
long as an error is logged when any disk is attached to that
non-existent controller. But in the case of q35-based machinetypes,
the hardcoded id ("alias" in libvirt terms) of its builtin SATA
controller is "ide", which happens to be the same id as the builtin
IDE controller on i440fx machinetypes. So libvirt creates a
commandline believing that it is connecting the disk to the builtin
(but actually nonexistent) IDE controller, qemu thinks that libvirt
wanted that disk connected to the builtin SATA controller, and
everybody is happy.
Until you try to connect a 2nd disk to the IDE controller. Then qemu
will complain that you're trying to set unit=1 on a controller that
requires unit=0 (SATA controllers are organized differently than IDE
controllers).
After this patch, if a domain has an IDE controller defined for a
machinetype that has no IDE controllers, libvirt will log an error
about the controller itself as it is building the qemu commandline
(rather than a (possible) error from qemu about disks attached to that
controller). This is done by adding IDE to the list of controller
types that are handled in the loop that creates controller command
strings in qemuBuildCommandline() (previously it would *always* skip
IDE controllers). Then qemuBuildControllerDevStr() is modified to log
an appropriate error in the case of IDE controllers.
In the future, if we add support for extra IDE controllers (piix3-ide
and/or piix4-ide) we can just add it into the IDE case in
qemuBuildControllerDevStr(). For now, nobody seems anxious to add
extra support for an aging and very slow controller, when there are so
many better options available.
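A sketch of the new IDE case in qemuBuildControllerDevStr() (the
condition and error text are simplified here):

    case VIR_DOMAIN_CONTROLLER_TYPE_IDE:
        /* Only the IDE controller built into the i440fx chipset (index 0)
         * exists; any other IDE controller is reported as an error while
         * building the commandline instead of leaving it to qemu. */
        virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                       _("Only a single IDE controller is supported "
                         "for this machine type"));
        goto error;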
Resolves:
https://bugzilla.redhat.com/show_bug.cgi?id=1176071 (Fedora)
This makes sure that the commandlines generated for devices and
controller devices are all using the alias that has been set in the
controller's object as the id of the controller, rather than
hardcoding a printf (or worse, encoding exceptions to the standard
${controller}${index} into the logic).
Since this "fixes" the controller name used for the sata controller,
the commandline arg for the sata controller in the sata test case had
to be adjusted to be "sata0" instead of "ahci0". All other tests
remain unchanged, verifying that the patch causes no other functional
change.
Because the function that finds a controller alias based on a device
def requires a pointer to the full domainDef in order to get the list
of controllers, the arglist of a few functions had to have this added.
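A rough sketch of that lookup (simplified, with illustrative names; the
real function also handles the exceptions described in the next patch):

    static const char *
    findControllerAlias(virDomainDefPtr def, int type, int idx)
    {
        size_t i;

        /* use the alias stored in the matching controller's object instead
         * of printf'ing "${controller}${index}" */
        for (i = 0; i < def->ncontrollers; i++) {
            if (def->controllers[i]->type == type &&
                def->controllers[i]->idx == idx)
                return def->controllers[i]->info.alias;
        }
        return NULL;
    }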
There are a few extra exceptions that weren't being accounted for when
creating the alias for a controller. This resulted in 1) incorrect
status XML, and 2) exceptions/printfs of what *should* have been
directly available in the controller alias when constructing device
commandline arguments:
1) The primary (and only) IDE controller on a 440FX machinetype is
hardcoded to be "ide" in qemu.
2) The primary SATA controller on a 440FX machinetype is also
hardcoded to be "ide" in qemu.
3) On machinetypes that don't support multiple PCI buses, the PCI bus
is hardcoded in qemu to have the name "pci".
4) The first usb master controller is "usb", all others are the normal
"usb%d". (note that usb controllers that are not a "master" will have
the same index, and thus alias, as the master).
We needed to pass in the full domainDef and qemuCaps in order to
properly make the decisions about these exceptions.
Because there are multiple potential reasons for an error, this
function logs any errors before returning NULL (since the caller won't
have the information needed to determine which was the reason for
failure).
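A simplified sketch of how those exceptions might be handled when
assigning the alias (names and checks are illustrative, error handling
omitted):

    static char *
    controllerAliasSketch(virDomainDefPtr def,
                          virDomainControllerDefPtr ctrl)
    {
        char *alias = NULL;

        if (ctrl->idx == 0 && qemuDomainMachineIsI440FX(def) &&
            (ctrl->type == VIR_DOMAIN_CONTROLLER_TYPE_IDE ||
             ctrl->type == VIR_DOMAIN_CONTROLLER_TYPE_SATA)) {
            /* 1) and 2): qemu hardcodes the id "ide" for these */
            ignore_value(VIR_STRDUP(alias, "ide"));
        } else if (ctrl->type == VIR_DOMAIN_CONTROLLER_TYPE_PCI &&
                   ctrl->idx == 0) {
            /* 3): with no multi-PCI-bus support, the bus is named "pci" */
            ignore_value(VIR_STRDUP(alias, "pci"));
        } else {
            /* 4) (the usb master special case) is omitted here; the normal
             * case is ${controller}${index}, e.g. "usb0", "scsi1" */
            ignore_value(virAsprintf(&alias, "%s%d",
                                     virDomainControllerTypeToString(ctrl->type),
                                     ctrl->idx));
        }
        return alias;
    }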
When cancelling drive mirror, always try to do that for all disks even
if it fails for some of them. Report the first error we saw.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
Instead of redoing the same filtering over and over every time we need to
walk through all disks which are being migrated.
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
The APIs take the memory value in KiB and we store it in KiB
internally, but we cannot parse the whole ULONG_MAX range
on 64-bit systems, because virDomainParseScaledValue
needs to fit the value in bytes in an unsigned long long.
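A quick illustration of the resulting limit (just the arithmetic, not
the actual libvirt check):

    #include <limits.h>
    #include <stdbool.h>

    /* A KiB value is converted to bytes inside an unsigned long long, so the
     * largest value that can be parsed is ULLONG_MAX / 1024 KiB (roughly
     * 2^54 KiB), not the full ULONG_MAX range. */
    static bool
    memValueKiBFitsInBytes(unsigned long long kib)
    {
        return kib <= ULLONG_MAX / 1024;
    }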
https://bugzilla.redhat.com/show_bug.cgi?id=1176739
We don't allow it in normal code, so why would it need to be in the
generated one? It also splits the line in the perl code so it's readable.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Since we don't have syntax-check for this, it has to be checked
manually. Let's hope this is the only place it happened.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
This only affected the servers that re-exec themselves, which currently
means only virtlockd, and it didn't cause any problems, so this is
mostly a cleanup.
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
Since 'autofill'd iothreadid entries are not written out during XML
formatting, it is possible that an iothreadid in the middle of an
autofilled list could change its id on a subsequent restart.
Thus during iothreadid deletion, if we determine the deleted thread is
not the "last" one, clear the autofill bit for all iothreadids
following the one being deleted (whether it is the first or one in the
middle). This way, the iothreadids will be printed/saved.
We have a lot of code that passes arguments around just to pass the
connection object, because it holds jobTimeout. Taking into account
that right now this value is defined at compile time, let's just get
rid of it and make the argument lists clearer in many places.
In case we later need some runtime-configurable timeout value, we can
provide it through arguments the functions already operate on, such as
a parallels domain object, as these timeouts are operation- (and thus
object-) specific in practice.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy@parallels.com>
As of eeb008dbfc the variable is not used anymore. Drop it.
Signed-off-by: Wang Yufei <james.wangyufei@huawei.com>
Signed-off-by: Zhang Bo <oscar.zhangbo@huawei.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
In order not to bring in any link dependencies, bridge driver doesn't
use the usual stubs as other conditionally-built code does. However,
having the function as a macro imposes a problem with possibly unused
variables if just defined as "0". This was worked around by using
(dom=dom, iface=iface, 0) which should act like a 0 if used in a
condition. However, gcc still complains about that, so I came up with
another way to fix it.
Using static inline functions in the header won't collide with anything,
it fixes the bug and does one thing that the macro didn't do. It checks
whether the passed variables are pointers of a compatible type. It has only
one downside, and that is that we need to either a) define it with
ATTRIBUTE_UNUSED, which needs an exception in cfg.mk or b) do something
like ignore_value(variable); in the function body. I went with the
first variant.
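A sketch of one of the resulting stubs (the guard and prototype here
are illustrative):

    #if WITH_NETWORK
    int networkAllocateActualDevice(virDomainDefPtr dom,
                                    virDomainNetDefPtr iface);
    #else
    static inline int
    networkAllocateActualDevice(virDomainDefPtr dom ATTRIBUTE_UNUSED,
                                virDomainNetDefPtr iface ATTRIBUTE_UNUSED)
    {
        /* unlike the old (dom=dom, iface=iface, 0) macro this type-checks
         * its arguments and keeps gcc quiet about unused variables */
        return 0;
    }
    #endif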
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
In the XML we have the VNC port number, but on the command line QEMU
takes a VNC screen number, which is port - 5900. We should fail with an
error message that only ports in the range [5900,65535] are valid.
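A small illustration of the translation and the range check (not the
exact libvirt code):

    /* map the port from the XML to the screen number QEMU expects; ports
     * below 5900 cannot be expressed as a screen number */
    static int
    vncPortToScreen(int port)
    {
        if (port < 5900 || port > 65535)
            return -1;          /* caller reports: valid range is [5900,65535] */
        return port - 5900;     /* e.g. port 5901 -> screen 1 */
    }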
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1164966
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
While implementing support for SPICE, I noticed VNC passwd was
never copied to libxl_device_vfb's vnc.passwd field.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1171984
https://bugzilla.redhat.com/show_bug.cgi?id=1188463
Remove the source host name check from iSCSI source XML processing;
instead, declare duplicate sources when the source device path and, if
present, the initiator of a proposed storage pool match an existing
storage pool.
The backend iSCSI storage driver uses 'iscsiadm --mode session' to query
available iscsid target sessions. The output displayed is the IP address
and the IQN (target path) of known targets. The displayed IP address
is a resolved address based on the session --login. Additionally, iscsid
keeps track of the various ways to define the host name (IPv4 Address,
IPv6 Address, /etc/hosts, etc.) for that IQN (see output of an 'iscsiadm
--mode node'). If an incoming IQN matches and the host name provided by
libvirt is resolved to the existing IQN, then iscsid will "reuse" the
session. Although libvirt could do the same name resolution, if there
is a difference, iscsid could still declare two seemingly different sources
to be the same and not create a new session, which means libvirt now has
two storage pools looking at the same source. Thus to avoid any strange
host name resolution issues, just rely on iscsid for that and do not
allow multiple pools on the same host to use the same device path (IQN).
Only perform the port number check if the incoming definition actually
provides it. Since the port number is optional, we could erroneously
pass a duplicate source host check, because some storage pool backends
(e.g., iSCSI and sheepdog) fill in the default port number for the
started pool.
https://bugzilla.redhat.com/show_bug.cgi?id=1220809
When cold-plugging an RNG device but something fails in
qemuDomainAssignAddresses, we will double free the RNG device.
Once a device is plugged into the domain, we should set the
device pointer to NULL to fix this issue.
...
5 0x00007fb7d180ac8a in virFree at util/viralloc.c:582
6 0x00007fb7d1895cdd in virDomainRNGDefFree at conf/domain_conf.c:19786
7 0x00007fb7d1895d99 in virDomainDeviceDefFree at conf/domain_conf.c:2022
8 0x00007fb7b92b8baf in qemuDomainAttachDeviceFlags at qemu/qemu_driver.c:8785
9 0x00007fb7d190c5d7 in virDomainAttachDeviceFlags at libvirt-domain.c:8488
10 0x00007fb7d23af9d2 in remoteDispatchDomainAttachDeviceFlags at remote_dispatch.h:2842
...
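A sketch of the fix in the cold-plug path (the helper name is
hypothetical; the point is clearing the pointer once ownership moves to
the domain definition):

    case VIR_DOMAIN_DEVICE_RNG:
        if (vmDefAddRNGDevice(vmdef, dev->data.rng) < 0)  /* hypothetical helper */
            return -1;
        /* the domain definition now owns the RNG def; clear our pointer so a
         * later failure (e.g. in qemuDomainAssignAddresses) cannot free it a
         * second time via virDomainDeviceDefFree() */
        dev->data.rng = NULL;
        break;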
Signed-off-by: Luyao Huang <lhuang@redhat.com>
There are a lot of places where it's pretty easy for a user to enter
characters that we need to escape to create a valid XML description.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1197580
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
The code to add device type to the commandline was identical for lsi
and other models of SCSI controllers, but was duplicated (with the
exception of a minor ordering difference of the if-else clauses) for
the two cases. This patch replaces those two with a single instance of
the code just before the if().
This patch makes qemuValidateDevicePCISlotsChipsets() more consistent in
appearance by replacing several clauses of an if with the equivalent
call to qemuDomainMachineIsI440FX. The if was checking exactly the
same items, just in a slightly different order.
https://bugzilla.redhat.com/show_bug.cgi?id=1220265
Assigning the return value directly to an enum is not safe. Fix this by
comparing the true integer result of virTristateSwitchTypeFromString().
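A minimal example of the safer pattern (the surrounding parsing code is
illustrative):

    int value;

    /* store the result in a plain int first, so the negative "unknown value"
     * return can actually be detected instead of being assigned straight
     * into an (unsigned) enum */
    if ((value = virTristateSwitchTypeFromString(str)) < 0) {
        virReportError(VIR_ERR_CONFIG_UNSUPPORTED,
                       _("unknown state value '%s'"), str);
        return -1;
    }
    def->state = value;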
Signed-off-by: Luyao Huang <lhuang@redhat.com>
For some reason, we allow a bridge name with %d in it, which we replace
with an unsigned integer to form a bridge name that does not yet exist
on the host.
Do not blindly pass it to virAsprintf if it's not the only conversion,
to prevent crashing on input like:
<network>
<name>test</name>
<forward mode='none'/>
<bridge name='virbr%d%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s'/>
</network>
Ignore any template strings that do not have exactly one %d conversion,
like we do in various drivers before calling virNetDevTapCreateInBridgePort.
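A rough illustration of that validation (not the exact libvirt code):

    #include <stdbool.h>
    #include <string.h>

    /* accept the template only if it contains exactly one conversion and
     * that conversion is %d; anything else is treated as unsafe */
    static bool
    bridgeNameTemplateIsSafe(const char *tmpl)
    {
        const char *p = strchr(tmpl, '%');

        return p != NULL && p[1] == 'd' && strchr(p + 2, '%') == NULL;
    }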
Since libvirt doesn't make a call to qemu to update the new balloon
size, add code that will handle tweaking of the current balloon size
statistic until qemu reports the new size using the event.
Specifying a balloon size larger than the memory size of a guest isn't
something that should be rejected when parsing the XML. Truncate the
size to the maximum memory size.
Use the new domain list collection helpers to avoid going through
virDomainPtrs.
This additionally implements filtering capability when called through
the API that accepts domain list filters.
Until now the virDomainListAllDomains API would lock the domain list and
then every single domain object to access and filter it. This would
potentially allow an unresponsive VM to block the whole daemon if a
*listAllDomains call would get stuck.
To avoid this problem this patch collects a list of referenced domain
objects first from the list and then unlocks it right away. The
expensive operation requiring locking of the domain object is executed
after the list lock is dropped. While a single blocked domain will still
lock up a listAllDomains call, the domain list won't be held locked and
thus other APIs won't be blocked.
Additionally this patch also fixes the lookup code, where we'd ignore
the vm->removing flag and thus potentially return domain objects that
would be deleted very soon so calling any API wouldn't make sense.
As other clients also could benefit from operating on a list of domain
objects rather than the public domain descriptors a new intermediate
API - virDomainObjListCollect - is introduced by this patch.
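A rough sketch of the collect-then-filter pattern (names are
illustrative and error handling is omitted):

    /* phase 1: hold the list lock only long enough to grab references */
    virObjectLock(domlist);
    for (i = 0; i < ndomains; i++) {
        virObjectRef(doms[i]);
        collected[ncollected++] = doms[i];
    }
    virObjectUnlock(domlist);

    /* phase 2: the expensive part runs with only per-domain locks held,
     * so a stuck domain can no longer block the whole list */
    for (i = 0; i < ncollected; i++) {
        virObjectLock(collected[i]);
        if (!collected[i]->removing && matchesFilters(collected[i], flags))
            resultAppend(result, collected[i]);   /* hypothetical helper */
        virObjectUnlock(collected[i]);
    }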
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1181074
Extend it to a universal helper used for clearing lists of any objects.
Note that the argument type is specifically void * to allow implicit
typecasting.
Additionally add a helper that works on non-NULL terminated arrays once
we know the length.
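A sketch of what such helpers might look like (simplified):

    /* clear a NULL-terminated list of virObjects; taking void * lets any
     * "array of pointers to objects" be passed without an explicit cast */
    void
    virObjectListFree(void *list)
    {
        void **next;

        if (!list)
            return;

        for (next = (void **) list; *next; next++)
            virObjectUnref(*next);
        VIR_FREE(list);
    }

    /* same idea for arrays that are not NULL-terminated */
    void
    virObjectListFreeCount(void *list, size_t count)
    {
        size_t i;

        if (!list)
            return;

        for (i = 0; i < count; i++)
            virObjectUnref(((void **) list)[i]);
        VIR_FREE(list);
    }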
My commit 747761a79 (v1.2.15 only) dropped this bit of logic when filling
in a default arch in the XML:
- /* First try to find one matching host arch */
- for (i = 0; i < caps->nguests; i++) {
- if (caps->guests[i]->ostype == ostype) {
- for (j = 0; j < caps->guests[i]->arch.ndomains; j++) {
- if (caps->guests[i]->arch.domains[j]->type == domain &&
- caps->guests[i]->arch.id == caps->host.arch)
- return caps->guests[i]->arch.id;
- }
- }
- }
That attempt to match host.arch is important, otherwise we end up
defaulting to i686 on an x86_64 host for KVM, which is not intended.
Duplicate it in the centralized CapsLookup function.
Additionally add some testcases that would have caught this.
https://bugzilla.redhat.com/show_bug.cgi?id=1219191
https://bugzilla.redhat.com/show_bug.cgi?id=890648
So, imagine you've issued an API that involves guest agent. For
instance, you want to query guest's IP addresses. So the API acquires
QUERY_JOB, locks the guest agent and issues the agent command.
However, for some reason, guest agent replies to initial ping
correctly, but then crashes tragically while executing real command
(in this case guest-network-get-interfaces). Since initial ping went
well, libvirt thinks guest agent is accessible and awaits reply to the
real command. But it will never come. What will come is a monitor event.
Our handler (processSerialChangedEvent) will try to acquire
MODIFY_JOB, which will fail obviously because the other thread that's
executing the API already holds a job. So the event handler exits
early, and the QUERY_JOB is never released nor ended.
The way to solve this is to put a flag somewhere in the monitor
internals. The flag is called @running and agent commands are issued
iff the flag is set. The flag itself is set when we connect to the
agent socket, and unset whenever we see a DISCONNECT event from the
agent. Moreover, we must wake up all the threads waiting for the
agent. This is done by signalling the condition they're waiting on.
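A rough sketch of the flag handling (member and condition variable
names are illustrative):

    /* before issuing any agent command */
    if (!mon->running) {
        virReportError(VIR_ERR_AGENT_UNRESPONSIVE, "%s",
                       _("QEMU guest agent is not connected"));
        return -1;
    }

    /* when the DISCONNECT event for the agent channel arrives */
    mon->running = false;
    virCondBroadcast(&mon->notify);   /* wake up everyone waiting on the agent */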
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>