In preparation for privatizing the object, use the accessor to fetch
the obj->def instead of the direct reference.
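For illustration, a typical call site changes along these lines (a
sketch; exact call sites vary):

  /* before: direct member access */
  virStoragePoolDefPtr def = obj->def;

  /* after: accessor keeps the struct opaque to callers */
  virStoragePoolDefPtr def = virStoragePoolObjGetDef(obj);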
Signed-off-by: John Ferlan <jferlan@redhat.com>
Modify virStoragePoolObjGetAutostartLink and
virStoragePoolObjGetConfigFile to return "const char *"
since that's how both are used and to ensure no one
tries to VIR_FREE the result.
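The resulting prototypes would look like (sketch):

  const char *virStoragePoolObjGetConfigFile(virStoragePoolObjPtr obj);
  const char *virStoragePoolObjGetAutostartLink(virStoragePoolObjPtr obj);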
To avoid any issues later on if paths ever change (unlikely but
possible), and to match the style of other generated rules, the paths
of the static rules have to be quoted as well.
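For example, a static rule is now emitted quoted (illustrative path):

  "/dev/net/tun" rw,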
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
libvirt allows spaces in VM names; there were issues with that in the
past, but support was apparently never removed, so the assumption has
to be that spaces continue to be allowed.
Therefore virt-aa-helper should no longer reject spaces in VM names; if
they are to be refused because they cause issues, then the parser or
XML schema should do so.
AppArmor rules are quoted, so a space in a path based on the name works.
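So a path derived from a name containing a space stays valid, e.g.
(illustrative):

  "/var/log/libvirt/qemu/my vm.log" w,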
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
If users only specified vendor & product (the common case), then parsing
the XML via virDomainHostdevSubsysUSBDefParseXML would set only these.
Bus and device would be filled in much later, when the devices are
prepared for being added.
Because of that, a hot-add of a USB hostdev works, as the device is
prepared and virt-aa-helper processes the new internal XML. But on an
initial guest start, at the time virt-aa-helper renders the AppArmor
rules, the bus/device IDs are not set yet:
p ctl->def->hostdevs[0]->source.subsys.u.usb
$12 = {autoAddress = false, bus = 0, device = 0, vendor = 1921, product = 21888}
That causes rules to be wrong:
"/dev/bus/usb/000/000" rw,
The fix calls virHostdevFindUSBDevice after virt-aa-helper reads the
XML, so that AppArmor rules are added only for devices that could be
found and are now fully known, which allows writing the rule correctly.
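With the device resolved, the rule gets real bus/device numbers
(illustrative values):

  "/dev/bus/usb/002/003" rw,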
It unconditionally sets virHostdevFindUSBDevice's @mandatory attribute,
as adding an AppArmor rule for a device that was not found makes no
sense no matter what startup policy is set.
Signed-off-by: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Skip purging the backing chain and redetecting it when it could not
have changed while we were not running.
The decision is based on a new flag, recorded in the status XML, which
tracks whether any block jobs were running.
https://bugzilla.redhat.com/show_bug.cgi?id=1447169
Since a domain can have at most one watchdog, this simplifies things a
bit. However, since we must also be able to set the watchdog action, a
new monitor command needs to be used.
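In QEMU terms that is presumably the watchdog-set-action QMP command,
e.g.:

  { "execute": "watchdog-set-action", "arguments": { "action": "poweroff" } }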
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Currently we don't do any validation here. Therefore we accept
senseless combinations of watchdog models and the buses they are
attached to. Moreover, the diag288 watchdog is exclusive to s390(x).
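A sketch of such a check (exact placement in the validation code may
differ):

  /* diag288 is only valid on s390/s390x */
  if (watchdog->model == VIR_DOMAIN_WATCHDOG_MODEL_DIAG288 &&
      !ARCH_IS_S390(def->os.arch)) {
      virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                     _("diag288 watchdog is only supported on s390(x)"));
      return -1;
  }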
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
For VMs with a persistent config, the config may change upon successful
completion of a job. Always save it when a persistent VM finishes a
block job. This will simplify further additions.
Previously the status XML would be saved only for the copy job (in case
of success), or on failure even for other jobs. As the status contains
the backing chain data, which changes after success, we should always
save it on block job completion.
Checking disk presence accesses storage on the host, so it should be
done from the host setup function. Move the code to a new function
called qemuProcessPrepareHostStorage and remove qemuDomainCheckDiskPresence.
Introduce a new function to prepare domain disks, which will also
perform the translation of volume sources into the actual disk source.
The 'pretend' condition is not transferred to the new location since it
does not help in writing tests and also no tests abuse it.
Pass flags to the function rather than just whether we have incoming
migration. This also enforces correct startup policy for USB devices
when reverting from a snapshot.
qemuMigrationPrepareAny called several of the functions that start the
qemu process for incoming migration, adding the flags explicitly.
Extract them to a variable so that they can easily be used for other
calls or changed in the future.
Interestingly enough, we don't document the point of view of the
interface statistics. Therefore users can't tell whether, for instance,
rx_packets is the number of packets received by the domain or received
by the host (from the domain). Document this explicitly.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Similarly to the previous patch, for some types of interface the domain
and the host are on the same side of the RX/TX barrier. In that case,
we need to set up the QoS differently. Well, swapped.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1497410
The comment in the virNetDevTapInterfaceStats() implementation for
Linux states that packets transmitted by the domain are received by the
host and vice versa. Well, this is true, but not for all types of
interfaces. For instance, for macvtaps, when the TAP device is hooked
right onto a physical device, any packet that the domain sends also
looks like a packet sent to the host. Therefore, we should allow the
caller to choose whether the stats returned should be a straight copy
or swapped.
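The signature then grows a flag to request the swap (sketch):

  int virNetDevTapInterfaceStats(const char *ifname,
                                 virDomainInterfaceStatsPtr stats,
                                 bool swapped);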
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Small wrapper to look up an interface in the domain definition by its
name.
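A minimal sketch of the wrapper, assuming the usual def->nets layout:

  virDomainNetDefPtr
  virDomainNetFindByName(virDomainDefPtr def, const char *ifname)
  {
      size_t i;

      for (i = 0; i < def->nnets; i++) {
          if (STREQ_NULLABLE(ifname, def->nets[i]->ifname))
              return def->nets[i];
      }

      return NULL;
  }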
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
Users might have configured an interface with type 'network', but the
corresponding network plugs interfaces into an OVS bridge. Therefore,
we have to check the actual type of the interface instead of the
configured one.
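A sketch of such a check, using the actual-state helpers:

  virNetDevVPortProfilePtr vport = virDomainNetGetActualVirtPortProfile(net);
  bool isOVS = vport &&
               vport->virtPortType == VIR_NETDEV_VPORT_PROFILE_OPENVSWITCH;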
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
This code compiles only on Linux. Therefore the condition we
check is always true.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: John Ferlan <jferlan@redhat.com>
qemu 2.7.0 introduces multiqueue virtio-blk (commit 2f27059).
This patch introduces a new attribute "queues". An example of
the XML:
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' queues='4'/>
The corresponding QEMU command line:
-device virtio-blk-pci,scsi=off,num-queues=4,id=virtio-disk0
Signed-off-by: Lin Ma <lma@suse.com>
Signed-off-by: Ján Tomko <jtomko@redhat.com>
When detaching an <interface/> from a domain, the MAC address is
parsed and if not present one is generated. If no corresponding
interface is found in the domain, the following error is
reported:
error: operation failed: no device matching mac address 52:54:00:75:32:5b found
where the MAC address is the auto-generated one. This might be very
confusing. The solution is to ignore the auto-generated MAC address
when looking up the device.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
It will come in handy to know whether the MAC address was generated
(e.g. during XML parsing) or parsed because it was provided by the user
in the XML.
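A sketch of recording this at parse time (the macSeen and mac_generated
names are illustrative):

  /* no <mac> element was supplied by the user: generate one and remember it */
  if (!macSeen) {
      virDomainNetGenerateMAC(xmlopt, &def->mac);
      def->mac_generated = true;
  }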
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Found by Coverity. If virNWFilterHashTablePut fails, then the 3rd arg
@val must be freed since otherwise it would be leaked.
This also fixes a potential problem on the error path where the caller
could assume the virNWFilterHashTablePut was successful when in fact
it failed, leading to other issues.
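The fixed pattern is roughly (a sketch; argument details elided):

  if (virNWFilterHashTablePut(table, varname, val) < 0) {
      virNWFilterVarValueFree(val);  /* would otherwise leak */
      return -1;
  }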
Rather than using loop break statements in order to force a return of
rc = -1, let's just return -1 immediately on the various error paths
and then return 0 on the success path.
virDomainDiskSourceParse has become an ugly spaghetti mess as more and
more stuff was added into it. Split out the parsing of network disk
information into a separate function so that it stays contained.
On pure success paths, virNWFilterIPAddrMapAddIPAddr was validly
consuming the input @addr; however, on failure paths it was possible
that virNWFilterVarValueCreateSimple succeeded but virNWFilterHashTablePut
failed, resulting in virNWFilterVarValueFree being called to clean
up @val, which also cleaned up the input @addr. Thus the caller had
no way to determine on failure whether it too should clean up the
passed parameter.
Instead, let's create a copy of the input @addr, then handle that
properly in the API, allowing/forcing the caller to free its own
copy of the input parameter.
Alter qemu command line generation in order to possibly add TLS for
a suitably configured domain.
Sample TLS args generated by libvirt -
-object tls-creds-x509,id=objvirtio-disk0_tls0,dir=/etc/pki/qemu,\
endpoint=client,verify-peer=yes \
-drive file.driver=vxhs,file.tls-creds=objvirtio-disk0_tls0,\
file.vdisk-id=eb90327c-8302-4725-9e1b-4e85ed4dc251,\
file.server.type=tcp,file.server.host=192.168.0.1,\
file.server.port=9999,format=raw,if=none,\
id=drive-virtio-disk0,cache=none \
-device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,\
id=virtio-disk0
Update the qemuxml2argvtest with a couple of examples: one for a
simple case and the other a bit more complex, where multiple VxHS disks
are added and at least one uses a VxHS server that doesn't require TLS
credentials and thus sets the domain disk source attribute tls='no'.
Update the hotplug code to be able to handle processing the tlsAlias,
whether adding the TLS object when hotplugging a disk or removing the
TLS object when hot-unplugging a disk. The hot plug/unplug code is
largely generic, but the addition code makes VxHS-specific checks only
because it needs to grab the correct config directory and generate the
object as the command line code would.
Signed-off-by: Ashish Mittal <Ashish.Mittal@veritas.com>
Signed-off-by: John Ferlan <jferlan@redhat.com>
Introduce a function to setup any TLS needs for a disk source.
If there's a configuration or other error setting up the disk source
for TLS, then cause the domain startup to fail.
For VxHS, follow the chardev TLS model: if src->haveTLS hasn't been
configured, then take the system/global cfg->haveTLS setting for the
storage source *and* mark that we've done so via the tlsFromConfig
setting in the storage source.
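A sketch of that defaulting logic (the cfg field name is an assumption):

  if (src->haveTLS == VIR_TRISTATE_BOOL_ABSENT) {
      src->haveTLS = cfg->vxhsTLS ? VIR_TRISTATE_BOOL_YES
                                  : VIR_TRISTATE_BOOL_NO;
      src->tlsFromConfig = true;
  }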
Next, if we are using TLS, generate an alias into a virStorageSource
'tlsAlias' field that will be used to create the TLS object and will be
added to the disk object in order to link the two together for QEMU.
Signed-off-by: John Ferlan <jferlan@redhat.com>
Add an optional virTristateBool haveTLS to virStorageSource to
manage whether a storage source will be using TLS.
Sample XML for a VxHS disk:
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source protocol='vxhs' name='eb90327c-8302-4725-9e1b-4e85ed4dc251' tls='yes'>
    <host name='192.168.0.1' port='9999'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
Additionally add a tlsFromConfig boolean to control whether the TLS
setting came from the domain configuration or the qemu.conf global
setting, in order to decide whether to format the haveTLS setting for
either a live or saved domain configuration file.
Update the qemuxml2xmltest in order to add a test to show the proper
parsing.
Also update the docs to describe the tls attribute.
Signed-off-by: Ashish Mittal <Ashish.Mittal@veritas.com>
Signed-off-by: John Ferlan <jferlan@redhat.com>
Add a new TLS X.509 certificate type - "vxhs". This will handle the
creation of a TLS certificate capability for properly configured
VxHS network block device clients.
The following describes the behavior of TLS for VxHS block devices:
(1) Two new options have been added in /etc/libvirt/qemu.conf
to control TLS behavior with VxHS block devices
"vxhs_tls" and "vxhs_tls_x509_cert_dir".
(2) Setting "vxhs_tls=1" in /etc/libvirt/qemu.conf will enable
TLS for VxHS block devices.
(3) "vxhs_tls_x509_cert_dir" can be set to the full path where the
TLS CA certificate and the client certificate and keys are saved.
If this value is missing, the "default_tls_x509_cert_dir" will be
used instead. If the environment is not configured properly the
authentication to the VxHS server will fail.
Signed-off-by: Ashish Mittal <Ashish.Mittal@veritas.com>
Signed-off-by: John Ferlan <jferlan@redhat.com>
The virNWFilterIPAddrMapAddIPAddr code can consume the @addr parameter
on success when @ifname is found in the ipAddressMap->hashTable
hash table in the call to virNWFilterVarValueAddValue; however, if it
is not found in the hash table, then @addr is formatted into a @val
which is stored in the table, and on return the caller would be
expected to free @addr.
Thus, the caller has no way to determine on success whether @addr was
consumed, so in order to fix this create a @tmp variable which will
be stored/consumed when virNWFilterVarValueAddValue succeeds. That way
the caller can free @addr whether the function returns success or failure.
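A sketch of the resulting pattern (error handling abbreviated):

  char *tmp = NULL;

  if (VIR_STRDUP(tmp, addr) < 0)
      return -1;

  /* on success @tmp is consumed; on failure we free our copy,
   * so the caller always keeps ownership of @addr */
  if (virNWFilterVarValueAddValue(val, tmp) < 0) {
      VIR_FREE(tmp);
      return -1;
  }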
The packet with passed FD has the following format:
--------------------------
| len | header | payload |
--------------------------
where "payload" has an additional count of FDs before the actual data:
------------------
| nfds | payload |
------------------
When the packet is received we parse the "header", which as a side
effect updates msg->bufferOffset to point to the beginning of "payload".
If the message call contains FDs, we need to also parse the count of
FDs, which also updates the msg->bufferOffset.
The issue here is that when we attempt to read the FDs' data from the
socket and receive EAGAIN, we finish the reading and call poll()
to wait for the data that we need. When the data arrives we already
have the packet in our buffer, so we read the "header" again, but this
time we don't read the count of FDs because we already have it stored.
That means that msg->bufferOffset is not updated to point to the
actual beginning of the payload data; it points to the count of
FDs instead. After all FDs are processed we dispatch the message to
process it and decode the payload. Since msg->bufferOffset points to
the wrong data, we decode the wrong payload and the API call fails with
error messages:
Domain not found: no domain with matching uuid '67656e65-7269-6300-0c87-5003ca6941f2' ()
Broken by commit 133c511b527, which fixed an FD and memory leak.
Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
VM private data is cleared when the VM is turned off and also when the
VM object is being freed. Some of the clearing code was duplicated.
Extract it to a separate function.
This also removes the now unnecessary function
qemuDomainClearPrivatePaths.